Aug 13 01:24:39.867612 kernel: Linux version 6.12.40-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 21:42:48 -00 2025
Aug 13 01:24:39.867633 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21
Aug 13 01:24:39.867641 kernel: BIOS-provided physical RAM map:
Aug 13 01:24:39.867650 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Aug 13 01:24:39.867655 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Aug 13 01:24:39.867661 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Aug 13 01:24:39.867667 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Aug 13 01:24:39.867673 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Aug 13 01:24:39.867679 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Aug 13 01:24:39.867684 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Aug 13 01:24:39.867690 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Aug 13 01:24:39.867696 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Aug 13 01:24:39.867703 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Aug 13 01:24:39.867709 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Aug 13 01:24:39.867716 kernel: NX (Execute Disable) protection: active
Aug 13 01:24:39.867722 kernel: APIC: Static calls initialized
Aug 13 01:24:39.867728 kernel: SMBIOS 2.8 present.
Aug 13 01:24:39.867736 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Aug 13 01:24:39.867742 kernel: DMI: Memory slots populated: 1/1
Aug 13 01:24:39.867748 kernel: Hypervisor detected: KVM
Aug 13 01:24:39.867754 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Aug 13 01:24:39.867759 kernel: kvm-clock: using sched offset of 5594870878 cycles
Aug 13 01:24:39.867766 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Aug 13 01:24:39.867772 kernel: tsc: Detected 1999.996 MHz processor
Aug 13 01:24:39.867779 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 13 01:24:39.867785 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 13 01:24:39.867791 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Aug 13 01:24:39.867799 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Aug 13 01:24:39.867806 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 13 01:24:39.871033 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Aug 13 01:24:39.871042 kernel: Using GB pages for direct mapping
Aug 13 01:24:39.871048 kernel: ACPI: Early table checksum verification disabled
Aug 13 01:24:39.871055 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Aug 13 01:24:39.871061 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:24:39.871067 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:24:39.871074 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:24:39.871084 kernel: ACPI: FACS 0x000000007FFE0000 000040
Aug 13 01:24:39.871090 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:24:39.871096 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:24:39.871102 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:24:39.871112 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:24:39.871118 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Aug 13 01:24:39.871126 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Aug 13 01:24:39.871133 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Aug 13 01:24:39.871139 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Aug 13 01:24:39.871146 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Aug 13 01:24:39.871152 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Aug 13 01:24:39.871158 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Aug 13 01:24:39.871165 kernel: No NUMA configuration found
Aug 13 01:24:39.871171 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Aug 13 01:24:39.871179 kernel: NODE_DATA(0) allocated [mem 0x17fff6dc0-0x17fffdfff]
Aug 13 01:24:39.871186 kernel: Zone ranges:
Aug 13 01:24:39.871192 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 13 01:24:39.871199 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Aug 13 01:24:39.871205 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Aug 13 01:24:39.871212 kernel: Device empty
Aug 13 01:24:39.871218 kernel: Movable zone start for each node
Aug 13 01:24:39.871224 kernel: Early memory node ranges
Aug 13 01:24:39.871230 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Aug 13 01:24:39.871237 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Aug 13 01:24:39.871245 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Aug 13 01:24:39.871251 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Aug 13 01:24:39.871257 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 01:24:39.871264 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Aug 13 01:24:39.871270 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Aug 13 01:24:39.871276 kernel: ACPI: PM-Timer IO Port: 0x608
Aug 13 01:24:39.871283 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Aug 13 01:24:39.871289 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Aug 13 01:24:39.871295 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Aug 13 01:24:39.871304 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Aug 13 01:24:39.871310 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 13 01:24:39.871317 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Aug 13 01:24:39.871323 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Aug 13 01:24:39.871329 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 13 01:24:39.871335 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Aug 13 01:24:39.871342 kernel: TSC deadline timer available
Aug 13 01:24:39.871348 kernel: CPU topo: Max. logical packages: 1
Aug 13 01:24:39.871354 kernel: CPU topo: Max. logical dies: 1
Aug 13 01:24:39.871362 kernel: CPU topo: Max. dies per package: 1
Aug 13 01:24:39.871369 kernel: CPU topo: Max. threads per core: 1
Aug 13 01:24:39.871375 kernel: CPU topo: Num. cores per package: 2
Aug 13 01:24:39.871381 kernel: CPU topo: Num. threads per package: 2
Aug 13 01:24:39.871388 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Aug 13 01:24:39.871394 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Aug 13 01:24:39.871400 kernel: kvm-guest: KVM setup pv remote TLB flush
Aug 13 01:24:39.871407 kernel: kvm-guest: setup PV sched yield
Aug 13 01:24:39.871413 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Aug 13 01:24:39.871421 kernel: Booting paravirtualized kernel on KVM
Aug 13 01:24:39.871428 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 13 01:24:39.871434 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Aug 13 01:24:39.871441 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Aug 13 01:24:39.871447 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Aug 13 01:24:39.871453 kernel: pcpu-alloc: [0] 0 1
Aug 13 01:24:39.871459 kernel: kvm-guest: PV spinlocks enabled
Aug 13 01:24:39.871466 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Aug 13 01:24:39.871473 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21
Aug 13 01:24:39.871482 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 01:24:39.871488 kernel: random: crng init done
Aug 13 01:24:39.871495 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 13 01:24:39.871501 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 13 01:24:39.871507 kernel: Fallback order for Node 0: 0
Aug 13 01:24:39.871514 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Aug 13 01:24:39.871520 kernel: Policy zone: Normal
Aug 13 01:24:39.871527 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 01:24:39.871535 kernel: software IO TLB: area num 2.
Aug 13 01:24:39.871541 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Aug 13 01:24:39.871547 kernel: ftrace: allocating 40098 entries in 157 pages
Aug 13 01:24:39.871554 kernel: ftrace: allocated 157 pages with 5 groups
Aug 13 01:24:39.871560 kernel: Dynamic Preempt: voluntary
Aug 13 01:24:39.871566 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 01:24:39.871573 kernel: rcu: RCU event tracing is enabled.
Aug 13 01:24:39.871580 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Aug 13 01:24:39.871586 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 01:24:39.871593 kernel: Rude variant of Tasks RCU enabled.
Aug 13 01:24:39.871601 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 01:24:39.871608 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 01:24:39.871614 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Aug 13 01:24:39.871621 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 01:24:39.871633 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 01:24:39.871642 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 01:24:39.871649 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Aug 13 01:24:39.871655 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 13 01:24:39.871662 kernel: Console: colour VGA+ 80x25
Aug 13 01:24:39.871668 kernel: printk: legacy console [tty0] enabled
Aug 13 01:24:39.871675 kernel: printk: legacy console [ttyS0] enabled
Aug 13 01:24:39.871683 kernel: ACPI: Core revision 20240827
Aug 13 01:24:39.871690 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Aug 13 01:24:39.871697 kernel: APIC: Switch to symmetric I/O mode setup
Aug 13 01:24:39.871704 kernel: x2apic enabled
Aug 13 01:24:39.871710 kernel: APIC: Switched APIC routing to: physical x2apic
Aug 13 01:24:39.871764 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Aug 13 01:24:39.871771 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Aug 13 01:24:39.871778 kernel: kvm-guest: setup PV IPIs
Aug 13 01:24:39.871785 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Aug 13 01:24:39.871791 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a8554e05d, max_idle_ns: 881590540420 ns
Aug 13 01:24:39.871798 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999996)
Aug 13 01:24:39.871805 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Aug 13 01:24:39.871852 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Aug 13 01:24:39.871859 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Aug 13 01:24:39.871868 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 13 01:24:39.871875 kernel: Spectre V2 : Mitigation: Retpolines
Aug 13 01:24:39.871882 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Aug 13 01:24:39.871889 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Aug 13 01:24:39.871895 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Aug 13 01:24:39.871902 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Aug 13 01:24:39.871909 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Aug 13 01:24:39.871916 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Aug 13 01:24:39.871925 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Aug 13 01:24:39.871931 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Aug 13 01:24:39.871938 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Aug 13 01:24:39.871945 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 13 01:24:39.871951 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 13 01:24:39.871958 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 13 01:24:39.871965 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Aug 13 01:24:39.871971 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 13 01:24:39.871978 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Aug 13 01:24:39.871987 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Aug 13 01:24:39.871994 kernel: Freeing SMP alternatives memory: 32K
Aug 13 01:24:39.872000 kernel: pid_max: default: 32768 minimum: 301
Aug 13 01:24:39.872007 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Aug 13 01:24:39.872013 kernel: landlock: Up and running.
Aug 13 01:24:39.872020 kernel: SELinux: Initializing.
Aug 13 01:24:39.872027 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 01:24:39.872033 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 01:24:39.872040 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Aug 13 01:24:39.872049 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Aug 13 01:24:39.872055 kernel: ... version:                0
Aug 13 01:24:39.872062 kernel: ... bit width:              48
Aug 13 01:24:39.872068 kernel: ... generic registers:      6
Aug 13 01:24:39.872075 kernel: ... value mask:             0000ffffffffffff
Aug 13 01:24:39.872082 kernel: ... max period:             00007fffffffffff
Aug 13 01:24:39.872088 kernel: ... fixed-purpose events:   0
Aug 13 01:24:39.872095 kernel: ... event mask:             000000000000003f
Aug 13 01:24:39.872101 kernel: signal: max sigframe size: 3376
Aug 13 01:24:39.872110 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 01:24:39.872117 kernel: rcu: Max phase no-delay instances is 400.
Aug 13 01:24:39.872123 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Aug 13 01:24:39.872130 kernel: smp: Bringing up secondary CPUs ...
Aug 13 01:24:39.872167 kernel: smpboot: x86: Booting SMP configuration:
Aug 13 01:24:39.872174 kernel: .... node #0, CPUs: #1
Aug 13 01:24:39.872181 kernel: smp: Brought up 1 node, 2 CPUs
Aug 13 01:24:39.872188 kernel: smpboot: Total of 2 processors activated (7999.98 BogoMIPS)
Aug 13 01:24:39.872195 kernel: Memory: 3961048K/4193772K available (14336K kernel code, 2430K rwdata, 9960K rodata, 54444K init, 2524K bss, 227296K reserved, 0K cma-reserved)
Aug 13 01:24:39.872204 kernel: devtmpfs: initialized
Aug 13 01:24:39.872211 kernel: x86/mm: Memory block size: 128MB
Aug 13 01:24:39.872218 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 01:24:39.872225 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Aug 13 01:24:39.872231 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 01:24:39.872238 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 01:24:39.872245 kernel: audit: initializing netlink subsys (disabled)
Aug 13 01:24:39.872251 kernel: audit: type=2000 audit(1755048277.499:1): state=initialized audit_enabled=0 res=1
Aug 13 01:24:39.872259 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 01:24:39.872267 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 13 01:24:39.872274 kernel: cpuidle: using governor menu
Aug 13 01:24:39.872281 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 01:24:39.872287 kernel: dca service started, version 1.12.1
Aug 13 01:24:39.872294 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Aug 13 01:24:39.872301 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Aug 13 01:24:39.872308 kernel: PCI: Using configuration type 1 for base access
Aug 13 01:24:39.872314 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 13 01:24:39.872321 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 01:24:39.872330 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Aug 13 01:24:39.872336 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 01:24:39.872343 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Aug 13 01:24:39.872350 kernel: ACPI: Added _OSI(Module Device)
Aug 13 01:24:39.872356 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 01:24:39.872363 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 01:24:39.872369 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 13 01:24:39.872376 kernel: ACPI: Interpreter enabled
Aug 13 01:24:39.872383 kernel: ACPI: PM: (supports S0 S3 S5)
Aug 13 01:24:39.872391 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 13 01:24:39.872398 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 13 01:24:39.872404 kernel: PCI: Using E820 reservations for host bridge windows
Aug 13 01:24:39.872411 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Aug 13 01:24:39.872418 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 13 01:24:39.872584 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 01:24:39.872699 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Aug 13 01:24:39.873886 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Aug 13 01:24:39.873902 kernel: PCI host bridge to bus 0000:00
Aug 13 01:24:39.874030 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 13 01:24:39.874133 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 13 01:24:39.874231 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 13 01:24:39.874326 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Aug 13 01:24:39.874421 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Aug 13 01:24:39.874516 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Aug 13 01:24:39.874618 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 13 01:24:39.874747 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Aug 13 01:24:39.874894 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Aug 13 01:24:39.875007 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Aug 13 01:24:39.875113 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Aug 13 01:24:39.875219 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Aug 13 01:24:39.875327 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Aug 13 01:24:39.875442 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Aug 13 01:24:39.875549 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f]
Aug 13 01:24:39.875654 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Aug 13 01:24:39.875760 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Aug 13 01:24:39.879758 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Aug 13 01:24:39.879900 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Aug 13 01:24:39.880018 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Aug 13 01:24:39.880125 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Aug 13 01:24:39.880230 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Aug 13 01:24:39.880363 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Aug 13 01:24:39.880471 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Aug 13 01:24:39.880584 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Aug 13 01:24:39.880693 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df]
Aug 13 01:24:39.880798 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff]
Aug 13 01:24:39.880953 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Aug 13 01:24:39.881062 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Aug 13 01:24:39.881072 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 13 01:24:39.881080 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 13 01:24:39.881087 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 13 01:24:39.881094 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 13 01:24:39.881104 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Aug 13 01:24:39.881111 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Aug 13 01:24:39.881118 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Aug 13 01:24:39.881125 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Aug 13 01:24:39.881132 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Aug 13 01:24:39.881138 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Aug 13 01:24:39.881145 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Aug 13 01:24:39.881152 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Aug 13 01:24:39.881159 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Aug 13 01:24:39.881167 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Aug 13 01:24:39.881174 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Aug 13 01:24:39.881181 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Aug 13 01:24:39.881187 kernel: iommu: Default domain type: Translated
Aug 13 01:24:39.881194 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 13 01:24:39.881201 kernel: PCI: Using ACPI for IRQ routing
Aug 13 01:24:39.881208 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 13 01:24:39.881215 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Aug 13 01:24:39.881222 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Aug 13 01:24:39.881328 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Aug 13 01:24:39.881433 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Aug 13 01:24:39.881536 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Aug 13 01:24:39.881545 kernel: vgaarb: loaded
Aug 13 01:24:39.881553 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Aug 13 01:24:39.881560 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Aug 13 01:24:39.881566 kernel: clocksource: Switched to clocksource kvm-clock
Aug 13 01:24:39.881573 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 01:24:39.881583 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 01:24:39.881590 kernel: pnp: PnP ACPI init
Aug 13 01:24:39.881706 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Aug 13 01:24:39.881716 kernel: pnp: PnP ACPI: found 5 devices
Aug 13 01:24:39.881724 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 13 01:24:39.881731 kernel: NET: Registered PF_INET protocol family
Aug 13 01:24:39.881738 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 01:24:39.881745 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 13 01:24:39.881754 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 01:24:39.881761 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 13 01:24:39.881768 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 13 01:24:39.881775 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 13 01:24:39.881782 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 01:24:39.881788 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 01:24:39.881795 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 01:24:39.881802 kernel: NET: Registered PF_XDP protocol family
Aug 13 01:24:39.903672 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 13 01:24:39.903972 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 13 01:24:39.904859 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 13 01:24:39.904989 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Aug 13 01:24:39.905100 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Aug 13 01:24:39.905205 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Aug 13 01:24:39.905214 kernel: PCI: CLS 0 bytes, default 64
Aug 13 01:24:39.905222 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Aug 13 01:24:39.905229 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Aug 13 01:24:39.905241 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a8554e05d, max_idle_ns: 881590540420 ns
Aug 13 01:24:39.905248 kernel: Initialise system trusted keyrings
Aug 13 01:24:39.905255 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 13 01:24:39.905261 kernel: Key type asymmetric registered
Aug 13 01:24:39.905268 kernel: Asymmetric key parser 'x509' registered
Aug 13 01:24:39.905275 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Aug 13 01:24:39.905282 kernel: io scheduler mq-deadline registered
Aug 13 01:24:39.905288 kernel: io scheduler kyber registered
Aug 13 01:24:39.905295 kernel: io scheduler bfq registered
Aug 13 01:24:39.905304 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 13 01:24:39.905312 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Aug 13 01:24:39.905319 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Aug 13 01:24:39.905325 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 01:24:39.905332 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 13 01:24:39.905339 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 13 01:24:39.905346 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 13 01:24:39.905353 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 13 01:24:39.905474 kernel: rtc_cmos 00:03: RTC can wake from S4
Aug 13 01:24:39.905488 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Aug 13 01:24:39.905596 kernel: rtc_cmos 00:03: registered as rtc0
Aug 13 01:24:39.905703 kernel: rtc_cmos 00:03: setting system clock to 2025-08-13T01:24:39 UTC (1755048279)
Aug 13 01:24:39.905830 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Aug 13 01:24:39.905841 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Aug 13 01:24:39.905848 kernel: NET: Registered PF_INET6 protocol family
Aug 13 01:24:39.905855 kernel: Segment Routing with IPv6
Aug 13 01:24:39.905861 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 01:24:39.905871 kernel: NET: Registered PF_PACKET protocol family
Aug 13 01:24:39.905878 kernel: Key type dns_resolver registered
Aug 13 01:24:39.905884 kernel: IPI shorthand broadcast: enabled
Aug 13 01:24:39.905891 kernel: sched_clock: Marking stable (2713004501, 214375799)->(2967347073, -39966773)
Aug 13 01:24:39.905898 kernel: registered taskstats version 1
Aug 13 01:24:39.905905 kernel: Loading compiled-in X.509 certificates
Aug 13 01:24:39.905912 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.40-flatcar: dee0b464d3f7f8d09744a2392f69dde258bc95c0'
Aug 13 01:24:39.905918 kernel: Demotion targets for Node 0: null
Aug 13 01:24:39.905925 kernel: Key type .fscrypt registered
Aug 13 01:24:39.905934 kernel: Key type fscrypt-provisioning registered
Aug 13 01:24:39.905940 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 13 01:24:39.905947 kernel: ima: Allocated hash algorithm: sha1
Aug 13 01:24:39.905954 kernel: ima: No architecture policies found
Aug 13 01:24:39.905960 kernel: clk: Disabling unused clocks
Aug 13 01:24:39.905967 kernel: Warning: unable to open an initial console.
Aug 13 01:24:39.905974 kernel: Freeing unused kernel image (initmem) memory: 54444K
Aug 13 01:24:39.905981 kernel: Write protecting the kernel read-only data: 24576k
Aug 13 01:24:39.905987 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K
Aug 13 01:24:39.905996 kernel: Run /init as init process
Aug 13 01:24:39.906003 kernel: with arguments:
Aug 13 01:24:39.906010 kernel: /init
Aug 13 01:24:39.906016 kernel: with environment:
Aug 13 01:24:39.906023 kernel: HOME=/
Aug 13 01:24:39.906231 kernel: TERM=linux
Aug 13 01:24:39.906240 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 01:24:39.906248 systemd[1]: Successfully made /usr/ read-only.
Aug 13 01:24:39.906259 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 13 01:24:39.906268 systemd[1]: Detected virtualization kvm.
Aug 13 01:24:39.906275 systemd[1]: Detected architecture x86-64.
Aug 13 01:24:39.906282 systemd[1]: Running in initrd.
Aug 13 01:24:39.906289 systemd[1]: No hostname configured, using default hostname.
Aug 13 01:24:39.906299 systemd[1]: Hostname set to .
Aug 13 01:24:39.906306 systemd[1]: Initializing machine ID from random generator.
Aug 13 01:24:39.906314 systemd[1]: Queued start job for default target initrd.target.
Aug 13 01:24:39.906323 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 01:24:39.906330 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 01:24:39.906338 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 13 01:24:39.906346 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 01:24:39.906354 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 13 01:24:39.906362 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 13 01:24:39.906370 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 13 01:24:39.906379 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 13 01:24:39.906387 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 01:24:39.906394 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 01:24:39.906401 systemd[1]: Reached target paths.target - Path Units.
Aug 13 01:24:39.906409 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 01:24:39.906416 systemd[1]: Reached target swap.target - Swaps.
Aug 13 01:24:39.906423 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 01:24:39.906431 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 01:24:39.906440 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 01:24:39.906448 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 13 01:24:39.906455 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Aug 13 01:24:39.906462 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 01:24:39.906470 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 01:24:39.906477 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 01:24:39.906485 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 01:24:39.906494 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 13 01:24:39.906501 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 01:24:39.906509 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 13 01:24:39.906516 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Aug 13 01:24:39.906524 systemd[1]: Starting systemd-fsck-usr.service... Aug 13 01:24:39.906531 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 01:24:39.906539 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 01:24:39.906548 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 01:24:39.906555 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 13 01:24:39.906563 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 01:24:39.906571 systemd[1]: Finished systemd-fsck-usr.service. Aug 13 01:24:39.906580 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Aug 13 01:24:39.906607 systemd-journald[206]: Collecting audit messages is disabled. Aug 13 01:24:39.906625 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 01:24:39.906634 systemd-journald[206]: Journal started Aug 13 01:24:39.906652 systemd-journald[206]: Runtime Journal (/run/log/journal/ea60b2e5427841fb922a9d88677da747) is 8M, max 78.5M, 70.5M free. Aug 13 01:24:39.863514 systemd-modules-load[207]: Inserted module 'overlay' Aug 13 01:24:39.959942 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 13 01:24:39.959959 kernel: Bridge firewalling registered Aug 13 01:24:39.959969 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 01:24:39.959981 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 01:24:39.909386 systemd-modules-load[207]: Inserted module 'br_netfilter' Aug 13 01:24:39.965915 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 01:24:39.967768 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 01:24:39.972951 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 01:24:39.975861 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 01:24:39.980910 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 01:24:39.983863 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 01:24:39.990068 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 01:24:39.993592 systemd-tmpfiles[228]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Aug 13 01:24:39.995905 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Aug 13 01:24:39.997929 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 13 01:24:40.001055 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 01:24:40.007904 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 01:24:40.017008 dracut-cmdline[244]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21 Aug 13 01:24:40.052215 systemd-resolved[246]: Positive Trust Anchors: Aug 13 01:24:40.052929 systemd-resolved[246]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 01:24:40.052959 systemd-resolved[246]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 01:24:40.055862 systemd-resolved[246]: Defaulting to hostname 'linux'. Aug 13 01:24:40.056896 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 01:24:40.059783 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 01:24:40.093840 kernel: SCSI subsystem initialized Aug 13 01:24:40.102869 kernel: Loading iSCSI transport class v2.0-870. 
Aug 13 01:24:40.111845 kernel: iscsi: registered transport (tcp) Aug 13 01:24:40.130951 kernel: iscsi: registered transport (qla4xxx) Aug 13 01:24:40.130990 kernel: QLogic iSCSI HBA Driver Aug 13 01:24:40.148659 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 13 01:24:40.162435 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 01:24:40.165179 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 01:24:40.203988 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 13 01:24:40.205709 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 13 01:24:40.261028 kernel: raid6: avx2x4 gen() 32115 MB/s Aug 13 01:24:40.278832 kernel: raid6: avx2x2 gen() 30420 MB/s Aug 13 01:24:40.297380 kernel: raid6: avx2x1 gen() 21788 MB/s Aug 13 01:24:40.297401 kernel: raid6: using algorithm avx2x4 gen() 32115 MB/s Aug 13 01:24:40.316453 kernel: raid6: .... xor() 4215 MB/s, rmw enabled Aug 13 01:24:40.316485 kernel: raid6: using avx2x2 recovery algorithm Aug 13 01:24:40.335839 kernel: xor: automatically using best checksumming function avx Aug 13 01:24:40.463843 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 13 01:24:40.470346 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 13 01:24:40.472283 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 01:24:40.499400 systemd-udevd[455]: Using default interface naming scheme 'v255'. Aug 13 01:24:40.504294 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 01:24:40.507093 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 13 01:24:40.531893 dracut-pre-trigger[461]: rd.md=0: removing MD RAID activation Aug 13 01:24:40.553970 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Aug 13 01:24:40.555593 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 01:24:40.615405 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 01:24:40.617936 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 13 01:24:40.666830 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues Aug 13 01:24:40.679839 kernel: scsi host0: Virtio SCSI HBA Aug 13 01:24:40.686857 kernel: libata version 3.00 loaded. Aug 13 01:24:40.692833 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Aug 13 01:24:40.797881 kernel: cryptd: max_cpu_qlen set to 1000 Aug 13 01:24:40.829039 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 01:24:40.829201 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 01:24:40.831556 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 01:24:40.834519 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 01:24:40.844590 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Aug 13 01:24:40.854487 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Aug 13 01:24:40.854524 kernel: AES CTR mode by8 optimization enabled Aug 13 01:24:40.854536 kernel: sd 0:0:0:0: Power-on or device reset occurred Aug 13 01:24:40.858143 kernel: sd 0:0:0:0: [sda] 9297920 512-byte logical blocks: (4.76 GB/4.43 GiB) Aug 13 01:24:40.858312 kernel: sd 0:0:0:0: [sda] Write Protect is off Aug 13 01:24:40.858449 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Aug 13 01:24:40.867096 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Aug 13 01:24:40.880862 kernel: ahci 0000:00:1f.2: version 3.0 Aug 13 01:24:40.887367 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Aug 13 01:24:40.887380 kernel: GPT:9289727 != 9297919 Aug 13 01:24:40.887390 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 13 01:24:40.887405 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Aug 13 01:24:40.887416 kernel: GPT:9289727 != 9297919 Aug 13 01:24:40.887425 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 13 01:24:40.887434 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 01:24:40.887444 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Aug 13 01:24:40.889909 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Aug 13 01:24:40.890069 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Aug 13 01:24:40.895459 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Aug 13 01:24:40.907848 kernel: scsi host1: ahci Aug 13 01:24:40.908027 kernel: scsi host2: ahci Aug 13 01:24:40.908162 kernel: scsi host3: ahci Aug 13 01:24:40.908878 kernel: scsi host4: ahci Aug 13 01:24:40.909268 kernel: scsi host5: ahci Aug 13 01:24:40.910833 kernel: scsi host6: ahci Aug 13 01:24:40.910996 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 lpm-pol 0 Aug 13 01:24:40.911013 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 lpm-pol 0 Aug 13 01:24:40.911022 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 lpm-pol 0 Aug 13 01:24:40.911031 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 lpm-pol 0 Aug 13 01:24:40.911039 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 lpm-pol 0 Aug 13 01:24:40.911048 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 lpm-pol 0 Aug 13 01:24:40.962979 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Aug 13 01:24:41.002907 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Aug 13 01:24:41.022841 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Aug 13 01:24:41.031072 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Aug 13 01:24:41.037865 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Aug 13 01:24:41.038482 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Aug 13 01:24:41.041695 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 13 01:24:41.063273 disk-uuid[625]: Primary Header is updated. Aug 13 01:24:41.063273 disk-uuid[625]: Secondary Entries is updated. Aug 13 01:24:41.063273 disk-uuid[625]: Secondary Header is updated. Aug 13 01:24:41.075853 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 01:24:41.096845 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 01:24:41.218929 kernel: ata2: SATA link down (SStatus 0 SControl 300) Aug 13 01:24:41.218991 kernel: ata1: SATA link down (SStatus 0 SControl 300) Aug 13 01:24:41.223834 kernel: ata6: SATA link down (SStatus 0 SControl 300) Aug 13 01:24:41.223859 kernel: ata5: SATA link down (SStatus 0 SControl 300) Aug 13 01:24:41.223871 kernel: ata3: SATA link down (SStatus 0 SControl 300) Aug 13 01:24:41.226869 kernel: ata4: SATA link down (SStatus 0 SControl 300) Aug 13 01:24:41.257787 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 13 01:24:41.280566 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 01:24:41.281214 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 01:24:41.282480 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 01:24:41.284999 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 13 01:24:41.306966 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Aug 13 01:24:42.089865 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 01:24:42.090456 disk-uuid[626]: The operation has completed successfully. Aug 13 01:24:42.132641 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 01:24:42.132775 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 13 01:24:42.160961 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 13 01:24:42.173000 sh[653]: Success Aug 13 01:24:42.191906 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 13 01:24:42.191968 kernel: device-mapper: uevent: version 1.0.3 Aug 13 01:24:42.195890 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Aug 13 01:24:42.205849 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Aug 13 01:24:42.253558 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 13 01:24:42.257905 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 13 01:24:42.270984 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Aug 13 01:24:42.282394 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Aug 13 01:24:42.282424 kernel: BTRFS: device fsid 0c0338fb-9434-41c1-99a2-737cbe2351c4 devid 1 transid 44 /dev/mapper/usr (254:0) scanned by mount (665) Aug 13 01:24:42.288030 kernel: BTRFS info (device dm-0): first mount of filesystem 0c0338fb-9434-41c1-99a2-737cbe2351c4 Aug 13 01:24:42.288097 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Aug 13 01:24:42.289861 kernel: BTRFS info (device dm-0): using free-space-tree Aug 13 01:24:42.299098 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 13 01:24:42.300157 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. 
Aug 13 01:24:42.301074 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 13 01:24:42.301762 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 13 01:24:42.305925 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 13 01:24:42.347910 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (707) Aug 13 01:24:42.351091 kernel: BTRFS info (device sda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 01:24:42.351119 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 01:24:42.352910 kernel: BTRFS info (device sda6): using free-space-tree Aug 13 01:24:42.365859 kernel: BTRFS info (device sda6): last unmount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 01:24:42.367058 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 13 01:24:42.369945 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 13 01:24:42.460831 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 01:24:42.466493 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 01:24:42.484054 ignition[770]: Ignition 2.21.0 Aug 13 01:24:42.484071 ignition[770]: Stage: fetch-offline Aug 13 01:24:42.484101 ignition[770]: no configs at "/usr/lib/ignition/base.d" Aug 13 01:24:42.486303 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Aug 13 01:24:42.484110 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 01:24:42.484193 ignition[770]: parsed url from cmdline: "" Aug 13 01:24:42.484197 ignition[770]: no config URL provided Aug 13 01:24:42.484202 ignition[770]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 01:24:42.484211 ignition[770]: no config at "/usr/lib/ignition/user.ign" Aug 13 01:24:42.484216 ignition[770]: failed to fetch config: resource requires networking Aug 13 01:24:42.484343 ignition[770]: Ignition finished successfully Aug 13 01:24:42.508148 systemd-networkd[840]: lo: Link UP Aug 13 01:24:42.508161 systemd-networkd[840]: lo: Gained carrier Aug 13 01:24:42.509694 systemd-networkd[840]: Enumeration completed Aug 13 01:24:42.509974 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 01:24:42.510304 systemd-networkd[840]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 01:24:42.510308 systemd-networkd[840]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 01:24:42.512574 systemd-networkd[840]: eth0: Link UP Aug 13 01:24:42.512717 systemd-networkd[840]: eth0: Gained carrier Aug 13 01:24:42.512726 systemd-networkd[840]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 01:24:42.513267 systemd[1]: Reached target network.target - Network. Aug 13 01:24:42.516933 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Aug 13 01:24:42.548260 ignition[844]: Ignition 2.21.0 Aug 13 01:24:42.548272 ignition[844]: Stage: fetch Aug 13 01:24:42.548413 ignition[844]: no configs at "/usr/lib/ignition/base.d" Aug 13 01:24:42.548423 ignition[844]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 01:24:42.548521 ignition[844]: parsed url from cmdline: "" Aug 13 01:24:42.548525 ignition[844]: no config URL provided Aug 13 01:24:42.548530 ignition[844]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 01:24:42.548538 ignition[844]: no config at "/usr/lib/ignition/user.ign" Aug 13 01:24:42.548580 ignition[844]: PUT http://169.254.169.254/v1/token: attempt #1 Aug 13 01:24:42.548824 ignition[844]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Aug 13 01:24:42.748897 ignition[844]: PUT http://169.254.169.254/v1/token: attempt #2 Aug 13 01:24:42.749047 ignition[844]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Aug 13 01:24:43.091929 systemd-networkd[840]: eth0: DHCPv4 address 172.233.222.13/24, gateway 172.233.222.1 acquired from 23.40.197.103 Aug 13 01:24:43.149234 ignition[844]: PUT http://169.254.169.254/v1/token: attempt #3 Aug 13 01:24:43.254022 ignition[844]: PUT result: OK Aug 13 01:24:43.254090 ignition[844]: GET http://169.254.169.254/v1/user-data: attempt #1 Aug 13 01:24:43.430009 ignition[844]: GET result: OK Aug 13 01:24:43.430381 ignition[844]: parsing config with SHA512: 68efe1ed758a0c1e250196caaacca6e5257e44d3c624133ef0b656a625eee451f86469dd6804182f2a85a8a4774b0d9d3d603bcc35be9e94bbc7ee3a80e2a1c6 Aug 13 01:24:43.436805 unknown[844]: fetched base config from "system" Aug 13 01:24:43.436834 unknown[844]: fetched base config from "system" Aug 13 01:24:43.437131 ignition[844]: fetch: fetch complete Aug 13 01:24:43.436840 unknown[844]: fetched user config from "akamai" Aug 13 01:24:43.437136 ignition[844]: fetch: fetch passed 
Aug 13 01:24:43.437181 ignition[844]: Ignition finished successfully Aug 13 01:24:43.440425 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Aug 13 01:24:43.463285 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Aug 13 01:24:43.489633 ignition[851]: Ignition 2.21.0 Aug 13 01:24:43.489646 ignition[851]: Stage: kargs Aug 13 01:24:43.489755 ignition[851]: no configs at "/usr/lib/ignition/base.d" Aug 13 01:24:43.489764 ignition[851]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 01:24:43.490390 ignition[851]: kargs: kargs passed Aug 13 01:24:43.493032 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 13 01:24:43.490420 ignition[851]: Ignition finished successfully Aug 13 01:24:43.495282 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 13 01:24:43.514598 ignition[858]: Ignition 2.21.0 Aug 13 01:24:43.514609 ignition[858]: Stage: disks Aug 13 01:24:43.514707 ignition[858]: no configs at "/usr/lib/ignition/base.d" Aug 13 01:24:43.514716 ignition[858]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 01:24:43.515652 ignition[858]: disks: disks passed Aug 13 01:24:43.516959 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 13 01:24:43.515689 ignition[858]: Ignition finished successfully Aug 13 01:24:43.518204 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 13 01:24:43.519146 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 13 01:24:43.520304 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 01:24:43.521594 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 01:24:43.523091 systemd[1]: Reached target basic.target - Basic System. Aug 13 01:24:43.525326 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Aug 13 01:24:43.550401 systemd-fsck[866]: ROOT: clean, 15/553520 files, 52789/553472 blocks Aug 13 01:24:43.552970 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 13 01:24:43.555256 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 13 01:24:43.664861 kernel: EXT4-fs (sda9): mounted filesystem 069caac6-7833-4acd-8940-01a7ff7d1281 r/w with ordered data mode. Quota mode: none. Aug 13 01:24:43.664829 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 13 01:24:43.665852 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 13 01:24:43.667598 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 01:24:43.670881 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 13 01:24:43.672193 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Aug 13 01:24:43.673068 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 13 01:24:43.673091 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 01:24:43.678157 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 13 01:24:43.680436 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Aug 13 01:24:43.688875 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (874) Aug 13 01:24:43.688909 kernel: BTRFS info (device sda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 01:24:43.691086 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 01:24:43.693599 kernel: BTRFS info (device sda6): using free-space-tree Aug 13 01:24:43.698171 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Aug 13 01:24:43.729121 systemd-networkd[840]: eth0: Gained IPv6LL Aug 13 01:24:43.731988 initrd-setup-root[898]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 01:24:43.735122 initrd-setup-root[905]: cut: /sysroot/etc/group: No such file or directory Aug 13 01:24:43.739556 initrd-setup-root[912]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 01:24:43.743029 initrd-setup-root[919]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 01:24:43.820232 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 13 01:24:43.822069 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 13 01:24:43.823790 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 13 01:24:43.836884 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 13 01:24:43.839828 kernel: BTRFS info (device sda6): last unmount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 01:24:43.853792 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Aug 13 01:24:43.864806 ignition[986]: INFO : Ignition 2.21.0 Aug 13 01:24:43.864806 ignition[986]: INFO : Stage: mount Aug 13 01:24:43.867256 ignition[986]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 01:24:43.867256 ignition[986]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 01:24:43.867256 ignition[986]: INFO : mount: mount passed Aug 13 01:24:43.867256 ignition[986]: INFO : Ignition finished successfully Aug 13 01:24:43.867429 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 13 01:24:43.869387 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 13 01:24:44.666683 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Aug 13 01:24:44.694856 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (998) Aug 13 01:24:44.699117 kernel: BTRFS info (device sda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 01:24:44.699160 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 01:24:44.699172 kernel: BTRFS info (device sda6): using free-space-tree Aug 13 01:24:44.704762 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 13 01:24:44.735415 ignition[1014]: INFO : Ignition 2.21.0 Aug 13 01:24:44.735415 ignition[1014]: INFO : Stage: files Aug 13 01:24:44.736772 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 01:24:44.736772 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 01:24:44.736772 ignition[1014]: DEBUG : files: compiled without relabeling support, skipping Aug 13 01:24:44.738971 ignition[1014]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 01:24:44.738971 ignition[1014]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 01:24:44.740546 ignition[1014]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 01:24:44.740546 ignition[1014]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 01:24:44.740546 ignition[1014]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 01:24:44.739559 unknown[1014]: wrote ssh authorized keys file for user: core Aug 13 01:24:44.743628 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Aug 13 01:24:44.743628 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Aug 13 01:24:44.767602 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: 
op(3): GET result: OK Aug 13 01:24:45.522527 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Aug 13 01:24:45.522527 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 13 01:24:45.524919 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Aug 13 01:24:45.660173 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Aug 13 01:24:45.789824 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 13 01:24:45.791017 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Aug 13 01:24:45.791017 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Aug 13 01:24:45.791017 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 13 01:24:45.791017 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 13 01:24:45.791017 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 01:24:45.791017 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 01:24:45.791017 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 01:24:45.791017 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] 
writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 01:24:45.798236 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 01:24:45.798236 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 01:24:45.798236 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 01:24:45.798236 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 01:24:45.798236 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 01:24:45.798236 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Aug 13 01:24:46.184448 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Aug 13 01:24:46.578908 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 01:24:46.578908 ignition[1014]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Aug 13 01:24:46.581273 ignition[1014]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 01:24:46.583064 ignition[1014]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 01:24:46.583064 ignition[1014]: INFO : files: op(c): [finished] 
processing unit "prepare-helm.service" Aug 13 01:24:46.583064 ignition[1014]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Aug 13 01:24:46.587292 ignition[1014]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Aug 13 01:24:46.587292 ignition[1014]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Aug 13 01:24:46.587292 ignition[1014]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Aug 13 01:24:46.587292 ignition[1014]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Aug 13 01:24:46.587292 ignition[1014]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Aug 13 01:24:46.587292 ignition[1014]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 13 01:24:46.587292 ignition[1014]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 13 01:24:46.587292 ignition[1014]: INFO : files: files passed Aug 13 01:24:46.587292 ignition[1014]: INFO : Ignition finished successfully Aug 13 01:24:46.587465 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 13 01:24:46.591971 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 13 01:24:46.596988 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 13 01:24:46.602805 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 01:24:46.602957 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Aug 13 01:24:46.612617 initrd-setup-root-after-ignition[1044]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 01:24:46.613848 initrd-setup-root-after-ignition[1044]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 13 01:24:46.614627 initrd-setup-root-after-ignition[1048]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 01:24:46.615799 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 01:24:46.616631 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 13 01:24:46.618533 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 13 01:24:46.664655 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 01:24:46.664799 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 13 01:24:46.666412 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 13 01:24:46.667289 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 13 01:24:46.668559 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 13 01:24:46.669293 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 13 01:24:46.691424 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 01:24:46.693593 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 13 01:24:46.712956 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 13 01:24:46.714293 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 01:24:46.714877 systemd[1]: Stopped target timers.target - Timer Units. Aug 13 01:24:46.715395 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Aug 13 01:24:46.715482 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 01:24:46.716792 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 13 01:24:46.717456 systemd[1]: Stopped target basic.target - Basic System. Aug 13 01:24:46.718404 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 13 01:24:46.719429 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 01:24:46.720439 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 13 01:24:46.721512 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Aug 13 01:24:46.722782 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 13 01:24:46.723839 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 01:24:46.724962 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 13 01:24:46.725976 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 13 01:24:46.727032 systemd[1]: Stopped target swap.target - Swaps. Aug 13 01:24:46.727996 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 01:24:46.728080 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 13 01:24:46.729596 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 13 01:24:46.730499 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 01:24:46.731512 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 13 01:24:46.731735 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 01:24:46.732706 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 01:24:46.732829 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 13 01:24:46.734162 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Aug 13 01:24:46.734254 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 01:24:46.734915 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 01:24:46.734995 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 13 01:24:46.737887 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 13 01:24:46.739785 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 13 01:24:46.742757 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 01:24:46.742993 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 01:24:46.743965 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 01:24:46.744101 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 01:24:46.754139 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 01:24:46.754692 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 13 01:24:46.776861 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 01:24:46.778706 ignition[1068]: INFO : Ignition 2.21.0 Aug 13 01:24:46.778706 ignition[1068]: INFO : Stage: umount Aug 13 01:24:46.778706 ignition[1068]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 01:24:46.778706 ignition[1068]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 01:24:46.778706 ignition[1068]: INFO : umount: umount passed Aug 13 01:24:46.778706 ignition[1068]: INFO : Ignition finished successfully Aug 13 01:24:46.776971 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 13 01:24:46.778490 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 01:24:46.779443 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 01:24:46.779518 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 13 01:24:46.780328 systemd[1]: ignition-kargs.service: Deactivated successfully. 
Aug 13 01:24:46.780367 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 13 01:24:46.782705 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 13 01:24:46.782764 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Aug 13 01:24:46.783300 systemd[1]: Stopped target network.target - Network. Aug 13 01:24:46.784284 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 01:24:46.784329 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 01:24:46.785375 systemd[1]: Stopped target paths.target - Path Units. Aug 13 01:24:46.786217 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 01:24:46.787882 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 01:24:46.788570 systemd[1]: Stopped target slices.target - Slice Units. Aug 13 01:24:46.789429 systemd[1]: Stopped target sockets.target - Socket Units. Aug 13 01:24:46.790358 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 01:24:46.790396 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 01:24:46.791393 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 01:24:46.791436 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 01:24:46.792472 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 01:24:46.792522 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 13 01:24:46.793541 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 13 01:24:46.793588 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 13 01:24:46.794761 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 13 01:24:46.796031 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 13 01:24:46.797571 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Aug 13 01:24:46.797662 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 13 01:24:46.799020 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 01:24:46.799110 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 13 01:24:46.802285 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 01:24:46.802402 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 13 01:24:46.807905 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Aug 13 01:24:46.808120 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 01:24:46.808258 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 13 01:24:46.810266 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Aug 13 01:24:46.811366 systemd[1]: Stopped target network-pre.target - Preparation for Network. Aug 13 01:24:46.812524 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 01:24:46.812566 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 13 01:24:46.814324 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 13 01:24:46.816867 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 01:24:46.816934 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 01:24:46.817800 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 01:24:46.817870 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 01:24:46.820092 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 01:24:46.820143 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 13 01:24:46.820978 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 13 01:24:46.821027 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Aug 13 01:24:46.822677 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 01:24:46.827408 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 13 01:24:46.827459 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Aug 13 01:24:46.838212 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 01:24:46.840155 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 01:24:46.841799 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 01:24:46.841988 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 13 01:24:46.843426 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 01:24:46.843487 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 13 01:24:46.844095 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 01:24:46.844126 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 01:24:46.845098 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 01:24:46.845143 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 13 01:24:46.846575 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 01:24:46.846615 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 13 01:24:46.847635 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 01:24:46.847675 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 01:24:46.849918 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 13 01:24:46.850876 systemd[1]: systemd-network-generator.service: Deactivated successfully. Aug 13 01:24:46.850929 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. 
Aug 13 01:24:46.853962 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 13 01:24:46.854008 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 01:24:46.854895 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Aug 13 01:24:46.854933 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 01:24:46.855930 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 01:24:46.855969 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 01:24:46.856787 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 01:24:46.856845 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 01:24:46.859155 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Aug 13 01:24:46.859201 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Aug 13 01:24:46.859236 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Aug 13 01:24:46.859270 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Aug 13 01:24:46.865420 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 01:24:46.865512 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 13 01:24:46.866388 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 13 01:24:46.867951 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 13 01:24:46.900202 systemd[1]: Switching root. Aug 13 01:24:46.937573 systemd-journald[206]: Journal stopped Aug 13 01:24:47.893670 systemd-journald[206]: Received SIGTERM from PID 1 (systemd). 
Aug 13 01:24:47.893698 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 01:24:47.893710 kernel: SELinux: policy capability open_perms=1 Aug 13 01:24:47.893721 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 01:24:47.893730 kernel: SELinux: policy capability always_check_network=0 Aug 13 01:24:47.893738 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 01:24:47.893747 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 01:24:47.893756 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 01:24:47.893764 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 01:24:47.893772 kernel: SELinux: policy capability userspace_initial_context=0 Aug 13 01:24:47.893783 kernel: audit: type=1403 audit(1755048287.083:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 13 01:24:47.893792 systemd[1]: Successfully loaded SELinux policy in 51.910ms. Aug 13 01:24:47.893803 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.712ms. Aug 13 01:24:47.895602 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Aug 13 01:24:47.895619 systemd[1]: Detected virtualization kvm. Aug 13 01:24:47.895633 systemd[1]: Detected architecture x86-64. Aug 13 01:24:47.895644 systemd[1]: Detected first boot. Aug 13 01:24:47.895653 systemd[1]: Initializing machine ID from random generator. Aug 13 01:24:47.895663 zram_generator::config[1113]: No configuration found. 
Aug 13 01:24:47.895673 kernel: Guest personality initialized and is inactive Aug 13 01:24:47.895682 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Aug 13 01:24:47.895691 kernel: Initialized host personality Aug 13 01:24:47.895702 kernel: NET: Registered PF_VSOCK protocol family Aug 13 01:24:47.895711 systemd[1]: Populated /etc with preset unit settings. Aug 13 01:24:47.895722 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Aug 13 01:24:47.895731 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 13 01:24:47.895741 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 13 01:24:47.895752 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 13 01:24:47.895762 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 13 01:24:47.895773 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 13 01:24:47.895783 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 13 01:24:47.895793 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 13 01:24:47.895803 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 13 01:24:47.895834 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 13 01:24:47.895845 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 13 01:24:47.895854 systemd[1]: Created slice user.slice - User and Session Slice. Aug 13 01:24:47.895867 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 01:24:47.895876 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 01:24:47.895885 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Aug 13 01:24:47.895895 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 13 01:24:47.895908 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 13 01:24:47.895917 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 01:24:47.895927 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Aug 13 01:24:47.895936 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 01:24:47.895948 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 01:24:47.895958 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 13 01:24:47.895967 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 13 01:24:47.895977 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 13 01:24:47.895989 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 13 01:24:47.895998 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 01:24:47.896008 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 01:24:47.896017 systemd[1]: Reached target slices.target - Slice Units. Aug 13 01:24:47.896029 systemd[1]: Reached target swap.target - Swaps. Aug 13 01:24:47.896038 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 13 01:24:47.896048 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 13 01:24:47.896057 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Aug 13 01:24:47.896067 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 01:24:47.896079 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 01:24:47.896088 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Aug 13 01:24:47.896098 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 13 01:24:47.896108 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 13 01:24:47.896117 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 13 01:24:47.896127 systemd[1]: Mounting media.mount - External Media Directory... Aug 13 01:24:47.896136 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:24:47.896146 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 13 01:24:47.896157 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 13 01:24:47.896167 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 13 01:24:47.896177 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 01:24:47.896187 systemd[1]: Reached target machines.target - Containers. Aug 13 01:24:47.896197 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 13 01:24:47.896207 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 01:24:47.896216 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 01:24:47.896226 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 13 01:24:47.896238 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 01:24:47.896331 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 01:24:47.896343 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 01:24:47.896353 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Aug 13 01:24:47.896363 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 01:24:47.896373 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 01:24:47.896383 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 13 01:24:47.896393 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 13 01:24:47.896402 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 13 01:24:47.896414 systemd[1]: Stopped systemd-fsck-usr.service. Aug 13 01:24:47.896425 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 01:24:47.896435 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 01:24:47.896444 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 01:24:47.896454 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 13 01:24:47.896463 kernel: fuse: init (API version 7.41) Aug 13 01:24:47.896472 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 13 01:24:47.896482 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Aug 13 01:24:47.896998 kernel: ACPI: bus type drm_connector registered Aug 13 01:24:47.897011 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 01:24:47.897021 kernel: loop: module loaded Aug 13 01:24:47.897030 systemd[1]: verity-setup.service: Deactivated successfully. Aug 13 01:24:47.897040 systemd[1]: Stopped verity-setup.service. 
Aug 13 01:24:47.897050 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:24:47.897060 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 13 01:24:47.897092 systemd-journald[1203]: Collecting audit messages is disabled. Aug 13 01:24:47.897116 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 13 01:24:47.897126 systemd[1]: Mounted media.mount - External Media Directory. Aug 13 01:24:47.897136 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 13 01:24:47.897146 systemd-journald[1203]: Journal started Aug 13 01:24:47.897167 systemd-journald[1203]: Runtime Journal (/run/log/journal/1fe509d614854edf8da6487671c6ff03) is 8M, max 78.5M, 70.5M free. Aug 13 01:24:47.593678 systemd[1]: Queued start job for default target multi-user.target. Aug 13 01:24:47.602533 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Aug 13 01:24:47.603105 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 13 01:24:47.902778 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 01:24:47.902290 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 13 01:24:47.903022 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 13 01:24:47.904909 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 13 01:24:47.905771 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 01:24:47.906550 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 01:24:47.906780 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 13 01:24:47.907684 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 01:24:47.907986 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Aug 13 01:24:47.909182 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 01:24:47.909465 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 01:24:47.910442 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 01:24:47.910720 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 01:24:47.911732 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 01:24:47.912021 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 13 01:24:47.912790 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 01:24:47.913018 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 01:24:47.913785 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 01:24:47.914709 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 01:24:47.916069 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 13 01:24:47.919432 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Aug 13 01:24:47.931021 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 01:24:47.935900 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 13 01:24:47.938328 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 13 01:24:47.939036 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 01:24:47.939066 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 01:24:47.940520 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Aug 13 01:24:47.943906 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Aug 13 01:24:47.945909 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 01:24:47.947151 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 13 01:24:47.949926 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 13 01:24:47.951895 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 01:24:47.953025 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 13 01:24:47.953628 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 01:24:47.958924 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 01:24:47.960646 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 13 01:24:47.965894 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 01:24:47.969399 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 13 01:24:47.970204 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 13 01:24:47.992894 kernel: loop0: detected capacity change from 0 to 113872 Aug 13 01:24:47.995479 systemd-journald[1203]: Time spent on flushing to /var/log/journal/1fe509d614854edf8da6487671c6ff03 is 38.154ms for 1006 entries. Aug 13 01:24:47.995479 systemd-journald[1203]: System Journal (/var/log/journal/1fe509d614854edf8da6487671c6ff03) is 8M, max 195.6M, 187.6M free. Aug 13 01:24:48.043996 systemd-journald[1203]: Received client request to flush runtime journal. 
Aug 13 01:24:48.044031 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 01:24:48.003580 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 13 01:24:48.005627 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 13 01:24:48.008578 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Aug 13 01:24:48.049668 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 13 01:24:48.071878 kernel: loop1: detected capacity change from 0 to 224512 Aug 13 01:24:48.069234 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Aug 13 01:24:48.072776 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 01:24:48.080435 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 01:24:48.086204 systemd-tmpfiles[1239]: ACLs are not supported, ignoring. Aug 13 01:24:48.086419 systemd-tmpfiles[1239]: ACLs are not supported, ignoring. Aug 13 01:24:48.095970 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 01:24:48.099713 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 13 01:24:48.114553 kernel: loop2: detected capacity change from 0 to 8 Aug 13 01:24:48.140840 kernel: loop3: detected capacity change from 0 to 146240 Aug 13 01:24:48.166332 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 13 01:24:48.170356 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 01:24:48.181851 kernel: loop4: detected capacity change from 0 to 113872 Aug 13 01:24:48.196203 systemd-tmpfiles[1262]: ACLs are not supported, ignoring. Aug 13 01:24:48.196438 systemd-tmpfiles[1262]: ACLs are not supported, ignoring. 
Aug 13 01:24:48.199844 kernel: loop5: detected capacity change from 0 to 224512
Aug 13 01:24:48.201535 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 01:24:48.225916 kernel: loop6: detected capacity change from 0 to 8
Aug 13 01:24:48.229835 kernel: loop7: detected capacity change from 0 to 146240
Aug 13 01:24:48.248795 (sd-merge)[1263]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'.
Aug 13 01:24:48.249383 (sd-merge)[1263]: Merged extensions into '/usr'.
Aug 13 01:24:48.256451 systemd[1]: Reload requested from client PID 1238 ('systemd-sysext') (unit systemd-sysext.service)...
Aug 13 01:24:48.256527 systemd[1]: Reloading...
Aug 13 01:24:48.365839 zram_generator::config[1300]: No configuration found.
Aug 13 01:24:48.455709 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 01:24:48.488635 ldconfig[1233]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 13 01:24:48.534092 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 13 01:24:48.534397 systemd[1]: Reloading finished in 277 ms.
Aug 13 01:24:48.548480 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Aug 13 01:24:48.550271 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Aug 13 01:24:48.561932 systemd[1]: Starting ensure-sysext.service...
Aug 13 01:24:48.565910 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 01:24:48.587298 systemd[1]: Reload requested from client PID 1334 ('systemctl') (unit ensure-sysext.service)...
Aug 13 01:24:48.587367 systemd[1]: Reloading...
Aug 13 01:24:48.603113 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Aug 13 01:24:48.603528 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Aug 13 01:24:48.603837 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 13 01:24:48.604076 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Aug 13 01:24:48.604880 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 13 01:24:48.605220 systemd-tmpfiles[1335]: ACLs are not supported, ignoring.
Aug 13 01:24:48.605419 systemd-tmpfiles[1335]: ACLs are not supported, ignoring.
Aug 13 01:24:48.608627 systemd-tmpfiles[1335]: Detected autofs mount point /boot during canonicalization of boot.
Aug 13 01:24:48.608695 systemd-tmpfiles[1335]: Skipping /boot
Aug 13 01:24:48.621053 systemd-tmpfiles[1335]: Detected autofs mount point /boot during canonicalization of boot.
Aug 13 01:24:48.621114 systemd-tmpfiles[1335]: Skipping /boot
Aug 13 01:24:48.648844 zram_generator::config[1365]: No configuration found.
Aug 13 01:24:48.743590 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 01:24:48.804990 systemd[1]: Reloading finished in 217 ms.
Aug 13 01:24:48.826642 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Aug 13 01:24:48.836428 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 01:24:48.844282 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Aug 13 01:24:48.847316 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Aug 13 01:24:48.857980 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Aug 13 01:24:48.861826 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 01:24:48.865448 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 01:24:48.870039 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Aug 13 01:24:48.873886 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:24:48.874023 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 01:24:48.876042 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 01:24:48.883800 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 01:24:48.891087 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 01:24:48.892207 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 01:24:48.892300 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 13 01:24:48.899955 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Aug 13 01:24:48.900452 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:24:48.905804 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:24:48.906773 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 01:24:48.906971 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 01:24:48.907071 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 13 01:24:48.907164 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:24:48.914744 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Aug 13 01:24:48.916290 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 01:24:48.916473 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 01:24:48.918382 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 01:24:48.919051 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 01:24:48.921398 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 01:24:48.924163 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 01:24:48.928487 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Aug 13 01:24:48.932941 systemd[1]: Finished ensure-sysext.service.
Aug 13 01:24:48.936470 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:24:48.936603 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 01:24:48.937735 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 01:24:48.939224 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 01:24:48.939256 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 13 01:24:48.939294 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 01:24:48.939339 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 01:24:48.941257 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Aug 13 01:24:48.944598 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Aug 13 01:24:48.945874 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:24:48.962346 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 01:24:48.962570 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 01:24:48.964477 systemd-udevd[1411]: Using default interface naming scheme 'v255'.
Aug 13 01:24:48.970423 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Aug 13 01:24:48.971934 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 13 01:24:48.974755 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Aug 13 01:24:48.976091 augenrules[1447]: No rules
Aug 13 01:24:48.977295 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 13 01:24:48.977514 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Aug 13 01:24:48.993293 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Aug 13 01:24:48.994270 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 01:24:48.998417 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 01:24:49.115189 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Aug 13 01:24:49.209101 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Aug 13 01:24:49.210621 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Aug 13 01:24:49.222843 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Aug 13 01:24:49.231898 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Aug 13 01:24:49.235853 kernel: mousedev: PS/2 mouse device common for all mice
Aug 13 01:24:49.239835 kernel: ACPI: button: Power Button [PWRF]
Aug 13 01:24:49.263969 systemd-networkd[1464]: lo: Link UP
Aug 13 01:24:49.263981 systemd-networkd[1464]: lo: Gained carrier
Aug 13 01:24:49.265379 systemd-networkd[1464]: Enumeration completed
Aug 13 01:24:49.265454 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 01:24:49.266232 systemd-networkd[1464]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 01:24:49.266245 systemd-networkd[1464]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 01:24:49.266594 systemd-networkd[1464]: eth0: Link UP
Aug 13 01:24:49.266747 systemd-networkd[1464]: eth0: Gained carrier
Aug 13 01:24:49.266767 systemd-networkd[1464]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 01:24:49.268131 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Aug 13 01:24:49.269962 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 13 01:24:49.303840 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Aug 13 01:24:49.316486 systemd-resolved[1410]: Positive Trust Anchors:
Aug 13 01:24:49.316713 systemd-resolved[1410]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 01:24:49.316744 systemd-resolved[1410]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 01:24:49.320344 systemd-resolved[1410]: Defaulting to hostname 'linux'.
Aug 13 01:24:49.322498 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 01:24:49.323962 systemd[1]: Reached target network.target - Network.
Aug 13 01:24:49.324472 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 01:24:49.325548 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Aug 13 01:24:49.327146 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 01:24:49.327755 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Aug 13 01:24:49.328552 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Aug 13 01:24:49.329877 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Aug 13 01:24:49.330451 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Aug 13 01:24:49.331224 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug 13 01:24:49.331260 systemd[1]: Reached target paths.target - Path Units.
Aug 13 01:24:49.331983 systemd[1]: Reached target time-set.target - System Time Set.
Aug 13 01:24:49.333016 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Aug 13 01:24:49.333981 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Aug 13 01:24:49.335401 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 01:24:49.337584 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Aug 13 01:24:49.340767 systemd[1]: Starting docker.socket - Docker Socket for the API...
Aug 13 01:24:49.343843 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Aug 13 01:24:49.346852 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Aug 13 01:24:49.349490 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Aug 13 01:24:49.351142 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Aug 13 01:24:49.352197 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Aug 13 01:24:49.361681 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Aug 13 01:24:49.364015 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Aug 13 01:24:49.393462 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Aug 13 01:24:49.396461 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 01:24:49.396996 systemd[1]: Reached target basic.target - Basic System.
Aug 13 01:24:49.398616 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Aug 13 01:24:49.398645 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Aug 13 01:24:49.399682 systemd[1]: Starting containerd.service - containerd container runtime...
Aug 13 01:24:49.403922 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Aug 13 01:24:49.407194 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Aug 13 01:24:49.413834 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Aug 13 01:24:49.418382 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Aug 13 01:24:49.420805 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Aug 13 01:24:49.421395 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Aug 13 01:24:49.422309 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Aug 13 01:24:49.429988 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Aug 13 01:24:49.445219 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Aug 13 01:24:49.456985 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Aug 13 01:24:49.461605 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Aug 13 01:24:49.475121 jq[1523]: false
Aug 13 01:24:49.476707 systemd[1]: Starting systemd-logind.service - User Login Management...
Aug 13 01:24:49.478590 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Aug 13 01:24:49.479487 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Aug 13 01:24:49.484343 systemd[1]: Starting update-engine.service - Update Engine...
Aug 13 01:24:49.488357 google_oslogin_nss_cache[1525]: oslogin_cache_refresh[1525]: Refreshing passwd entry cache
Aug 13 01:24:49.488364 oslogin_cache_refresh[1525]: Refreshing passwd entry cache
Aug 13 01:24:49.490646 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Aug 13 01:24:49.494488 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Aug 13 01:24:49.495477 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Aug 13 01:24:49.496109 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Aug 13 01:24:49.507595 google_oslogin_nss_cache[1525]: oslogin_cache_refresh[1525]: Failure getting users, quitting
Aug 13 01:24:49.507595 google_oslogin_nss_cache[1525]: oslogin_cache_refresh[1525]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Aug 13 01:24:49.507595 google_oslogin_nss_cache[1525]: oslogin_cache_refresh[1525]: Refreshing group entry cache
Aug 13 01:24:49.507410 oslogin_cache_refresh[1525]: Failure getting users, quitting
Aug 13 01:24:49.507425 oslogin_cache_refresh[1525]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Aug 13 01:24:49.507461 oslogin_cache_refresh[1525]: Refreshing group entry cache
Aug 13 01:24:49.508281 google_oslogin_nss_cache[1525]: oslogin_cache_refresh[1525]: Failure getting groups, quitting
Aug 13 01:24:49.508281 google_oslogin_nss_cache[1525]: oslogin_cache_refresh[1525]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Aug 13 01:24:49.507978 oslogin_cache_refresh[1525]: Failure getting groups, quitting
Aug 13 01:24:49.507987 oslogin_cache_refresh[1525]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Aug 13 01:24:49.509747 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Aug 13 01:24:49.512098 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Aug 13 01:24:49.513291 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Aug 13 01:24:49.513523 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Aug 13 01:24:49.520192 extend-filesystems[1524]: Found /dev/sda6
Aug 13 01:24:49.533336 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 01:24:49.535887 extend-filesystems[1524]: Found /dev/sda9
Aug 13 01:24:49.550245 extend-filesystems[1524]: Checking size of /dev/sda9
Aug 13 01:24:49.550807 jq[1536]: true
Aug 13 01:24:49.557337 (ntainerd)[1555]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Aug 13 01:24:49.597367 jq[1559]: true
Aug 13 01:24:49.604405 dbus-daemon[1520]: [system] SELinux support is enabled
Aug 13 01:24:49.604583 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Aug 13 01:24:49.609449 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Aug 13 01:24:49.614455 extend-filesystems[1524]: Resized partition /dev/sda9
Aug 13 01:24:49.609479 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Aug 13 01:24:49.610560 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Aug 13 01:24:49.610576 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Aug 13 01:24:49.618241 extend-filesystems[1572]: resize2fs 1.47.2 (1-Jan-2025)
Aug 13 01:24:49.619144 update_engine[1534]: I20250813 01:24:49.617940 1534 main.cc:92] Flatcar Update Engine starting
Aug 13 01:24:49.619323 tar[1552]: linux-amd64/LICENSE
Aug 13 01:24:49.619323 tar[1552]: linux-amd64/helm
Aug 13 01:24:49.636209 systemd[1]: Started update-engine.service - Update Engine.
Aug 13 01:24:49.644670 update_engine[1534]: I20250813 01:24:49.641366 1534 update_check_scheduler.cc:74] Next update check in 2m22s
Aug 13 01:24:49.649958 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 555003 blocks
Aug 13 01:24:49.651285 kernel: EXT4-fs (sda9): resized filesystem to 555003
Aug 13 01:24:49.661977 kernel: EDAC MC: Ver: 3.0.0
Aug 13 01:24:49.659742 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Aug 13 01:24:49.662722 extend-filesystems[1572]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Aug 13 01:24:49.662722 extend-filesystems[1572]: old_desc_blocks = 1, new_desc_blocks = 1
Aug 13 01:24:49.662722 extend-filesystems[1572]: The filesystem on /dev/sda9 is now 555003 (4k) blocks long.
Aug 13 01:24:49.671197 extend-filesystems[1524]: Resized filesystem in /dev/sda9
Aug 13 01:24:49.671739 coreos-metadata[1519]: Aug 13 01:24:49.668 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Aug 13 01:24:49.663417 systemd[1]: extend-filesystems.service: Deactivated successfully.
Aug 13 01:24:49.664288 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Aug 13 01:24:49.667506 systemd[1]: motdgen.service: Deactivated successfully.
Aug 13 01:24:49.667735 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Aug 13 01:24:49.710827 bash[1595]: Updated "/home/core/.ssh/authorized_keys"
Aug 13 01:24:49.713492 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Aug 13 01:24:49.716512 systemd[1]: Starting sshkeys.service...
Aug 13 01:24:49.749280 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Aug 13 01:24:49.754041 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Aug 13 01:24:49.810163 systemd-networkd[1464]: eth0: DHCPv4 address 172.233.222.13/24, gateway 172.233.222.1 acquired from 23.40.197.103
Aug 13 01:24:49.810244 dbus-daemon[1520]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1464 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Aug 13 01:24:49.811416 systemd-timesyncd[1440]: Network configuration changed, trying to establish connection.
Aug 13 01:24:49.820014 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Aug 13 01:24:49.866878 containerd[1555]: time="2025-08-13T01:24:49Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Aug 13 01:24:49.894391 containerd[1555]: time="2025-08-13T01:24:49.893641402Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Aug 13 01:24:49.920376 systemd-logind[1532]: Watching system buttons on /dev/input/event2 (Power Button)
Aug 13 01:24:49.921379 coreos-metadata[1599]: Aug 13 01:24:49.921 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Aug 13 01:24:49.925095 containerd[1555]: time="2025-08-13T01:24:49.924708064Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="13.9µs"
Aug 13 01:24:49.925095 containerd[1555]: time="2025-08-13T01:24:49.924753444Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Aug 13 01:24:49.925095 containerd[1555]: time="2025-08-13T01:24:49.924779204Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Aug 13 01:24:49.925095 containerd[1555]: time="2025-08-13T01:24:49.925014075Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Aug 13 01:24:49.925095 containerd[1555]: time="2025-08-13T01:24:49.925029765Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Aug 13 01:24:49.925095 containerd[1555]: time="2025-08-13T01:24:49.925052305Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Aug 13 01:24:49.925042 systemd-logind[1532]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Aug 13 01:24:49.925234 containerd[1555]: time="2025-08-13T01:24:49.925107225Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Aug 13 01:24:49.925234 containerd[1555]: time="2025-08-13T01:24:49.925118355Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Aug 13 01:24:49.943847 containerd[1555]: time="2025-08-13T01:24:49.941855009Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Aug 13 01:24:49.943847 containerd[1555]: time="2025-08-13T01:24:49.941882539Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Aug 13 01:24:49.943847 containerd[1555]: time="2025-08-13T01:24:49.941940349Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Aug 13 01:24:49.943847 containerd[1555]: time="2025-08-13T01:24:49.941954799Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Aug 13 01:24:49.943847 containerd[1555]: time="2025-08-13T01:24:49.942087199Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Aug 13 01:24:49.943847 containerd[1555]: time="2025-08-13T01:24:49.942304429Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Aug 13 01:24:49.943847 containerd[1555]: time="2025-08-13T01:24:49.942336259Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Aug 13 01:24:49.943847 containerd[1555]: time="2025-08-13T01:24:49.942346570Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Aug 13 01:24:49.943847 containerd[1555]: time="2025-08-13T01:24:49.942382260Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Aug 13 01:24:49.943847 containerd[1555]: time="2025-08-13T01:24:49.942560090Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Aug 13 01:24:49.943847 containerd[1555]: time="2025-08-13T01:24:49.942611050Z" level=info msg="metadata content store policy set" policy=shared
Aug 13 01:24:49.950355 containerd[1555]: time="2025-08-13T01:24:49.950305565Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Aug 13 01:24:49.950730 containerd[1555]: time="2025-08-13T01:24:49.950424676Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Aug 13 01:24:49.950730 containerd[1555]: time="2025-08-13T01:24:49.950444156Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Aug 13 01:24:49.950730 containerd[1555]: time="2025-08-13T01:24:49.950497016Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Aug 13 01:24:49.950730 containerd[1555]: time="2025-08-13T01:24:49.950512076Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Aug 13 01:24:49.950730 containerd[1555]: time="2025-08-13T01:24:49.950523986Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Aug 13 01:24:49.950730 containerd[1555]: time="2025-08-13T01:24:49.950546476Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Aug 13 01:24:49.950730 containerd[1555]: time="2025-08-13T01:24:49.950580446Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Aug 13 01:24:49.950730 containerd[1555]: time="2025-08-13T01:24:49.950593116Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Aug 13 01:24:49.950730 containerd[1555]: time="2025-08-13T01:24:49.950602676Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Aug 13 01:24:49.950730 containerd[1555]: time="2025-08-13T01:24:49.950612546Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Aug 13 01:24:49.950730 containerd[1555]: time="2025-08-13T01:24:49.950625206Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Aug 13 01:24:49.951016 containerd[1555]: time="2025-08-13T01:24:49.950999227Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Aug 13 01:24:49.956552 containerd[1555]: time="2025-08-13T01:24:49.954855855Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Aug 13 01:24:49.956552 containerd[1555]: time="2025-08-13T01:24:49.954879045Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Aug 13 01:24:49.956552 containerd[1555]: time="2025-08-13T01:24:49.954890445Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Aug 13 01:24:49.956552 containerd[1555]: time="2025-08-13T01:24:49.954917155Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Aug 13 01:24:49.956552 containerd[1555]: time="2025-08-13T01:24:49.954927915Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Aug 13 01:24:49.956552 containerd[1555]: time="2025-08-13T01:24:49.954937115Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Aug 13 01:24:49.956552 containerd[1555]: time="2025-08-13T01:24:49.954947045Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Aug 13 01:24:49.956552 containerd[1555]: time="2025-08-13T01:24:49.954958805Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Aug 13 01:24:49.956552 containerd[1555]: time="2025-08-13T01:24:49.954968805Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Aug 13 01:24:49.956552 containerd[1555]: time="2025-08-13T01:24:49.954993695Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Aug 13 01:24:49.956552 containerd[1555]: time="2025-08-13T01:24:49.955071855Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Aug 13 01:24:49.956552 containerd[1555]: time="2025-08-13T01:24:49.955086415Z" level=info msg="Start snapshots syncer"
Aug 13 01:24:49.956552 containerd[1555]: time="2025-08-13T01:24:49.955111755Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Aug 13 01:24:49.956774 containerd[1555]: time="2025-08-13T01:24:49.955414056Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Aug 13 01:24:49.956774 containerd[1555]: time="2025-08-13T01:24:49.955453186Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Aug 13 01:24:49.958577 containerd[1555]: time="2025-08-13T01:24:49.957909901Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Aug 13 01:24:49.958577 containerd[1555]: time="2025-08-13T01:24:49.958052641Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Aug 13 01:24:49.958577 containerd[1555]: time="2025-08-13T01:24:49.958075021Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Aug 13 01:24:49.958577 containerd[1555]: time="2025-08-13T01:24:49.958086151Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Aug 13 01:24:49.958577 containerd[1555]: time="2025-08-13T01:24:49.958096541Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Aug 13 01:24:49.958577 containerd[1555]: time="2025-08-13T01:24:49.958107241Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Aug 13 01:24:49.958577 containerd[1555]: time="2025-08-13T01:24:49.958117061Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Aug 13 01:24:49.958577 containerd[1555]: time="2025-08-13T01:24:49.958127301Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Aug 13 01:24:49.958577 containerd[1555]: time="2025-08-13T01:24:49.958149251Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Aug 13 01:24:49.958577 containerd[1555]: time="2025-08-13T01:24:49.958158721Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Aug 13 01:24:49.958577 containerd[1555]:
time="2025-08-13T01:24:49.958168561Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Aug 13 01:24:49.967833 containerd[1555]: time="2025-08-13T01:24:49.962905631Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 13 01:24:49.967833 containerd[1555]: time="2025-08-13T01:24:49.962930181Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 13 01:24:49.967833 containerd[1555]: time="2025-08-13T01:24:49.962939971Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 13 01:24:49.967833 containerd[1555]: time="2025-08-13T01:24:49.962971591Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 13 01:24:49.967833 containerd[1555]: time="2025-08-13T01:24:49.962981091Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Aug 13 01:24:49.967833 containerd[1555]: time="2025-08-13T01:24:49.962991761Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Aug 13 01:24:49.967833 containerd[1555]: time="2025-08-13T01:24:49.963002151Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Aug 13 01:24:49.967833 containerd[1555]: time="2025-08-13T01:24:49.963019641Z" level=info msg="runtime interface created" Aug 13 01:24:49.967833 containerd[1555]: time="2025-08-13T01:24:49.963024861Z" level=info msg="created NRI interface" Aug 13 01:24:49.967833 containerd[1555]: time="2025-08-13T01:24:49.963044381Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Aug 13 01:24:49.967833 containerd[1555]: 
time="2025-08-13T01:24:49.963057851Z" level=info msg="Connect containerd service" Aug 13 01:24:49.967833 containerd[1555]: time="2025-08-13T01:24:49.963084181Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 01:24:49.972092 containerd[1555]: time="2025-08-13T01:24:49.971692208Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 01:24:50.020177 sshd_keygen[1564]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 01:24:50.874923 systemd-resolved[1410]: Clock change detected. Flushing caches. Aug 13 01:24:50.877989 systemd-timesyncd[1440]: Contacted time server 192.189.65.187:123 (0.flatcar.pool.ntp.org). Aug 13 01:24:50.878040 systemd-timesyncd[1440]: Initial clock synchronization to Wed 2025-08-13 01:24:50.874701 UTC. Aug 13 01:24:50.890778 coreos-metadata[1599]: Aug 13 01:24:50.890 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Aug 13 01:24:50.896017 systemd-logind[1532]: New seat seat0. Aug 13 01:24:50.897026 locksmithd[1573]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 01:24:50.910465 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 13 01:24:50.924029 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 13 01:24:50.925785 systemd[1]: Started systemd-logind.service - User Login Management. Aug 13 01:24:50.933012 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 01:24:50.983008 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Aug 13 01:24:50.984370 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 01:24:50.984617 systemd[1]: Finished issuegen.service - Generate /run/issue. 
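[Editor's note: the `level=error` line from containerd above is expected on a first boot: the CRI plugin initializes before any CNI network config exists in /etc/cni/net.d and recovers once one appears. For illustration only (the file name, network name, and subnet below are hypothetical, and this assumes the standard `bridge` and `host-local` plugins are present under the /opt/cni/bin binDir shown in the cri config), a minimal conflist that would satisfy the loader looks roughly like:]

```json
{
  "cniVersion": "1.0.0",
  "name": "mynet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.88.0.0/16"
      }
    }
  ]
}
```

[Saved as e.g. /etc/cni/net.d/10-mynet.conflist, the "Start cni network conf syncer" loop seen later in this log would pick it up without a containerd restart. On a kubeadm-managed node this file is normally installed by the chosen CNI provider, not written by hand.]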
Aug 13 01:24:50.987898 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Aug 13 01:24:50.988307 dbus-daemon[1520]: [system] Successfully activated service 'org.freedesktop.hostname1'
Aug 13 01:24:50.991135 dbus-daemon[1520]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1604 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Aug 13 01:24:50.998702 systemd[1]: Starting polkit.service - Authorization Manager...
Aug 13 01:24:51.001717 containerd[1555]: time="2025-08-13T01:24:51.001680925Z" level=info msg="Start subscribing containerd event"
Aug 13 01:24:51.003940 containerd[1555]: time="2025-08-13T01:24:51.001729945Z" level=info msg="Start recovering state"
Aug 13 01:24:51.003940 containerd[1555]: time="2025-08-13T01:24:51.002878828Z" level=info msg="Start event monitor"
Aug 13 01:24:51.003940 containerd[1555]: time="2025-08-13T01:24:51.003005248Z" level=info msg="Start cni network conf syncer for default"
Aug 13 01:24:51.003940 containerd[1555]: time="2025-08-13T01:24:51.003021198Z" level=info msg="Start streaming server"
Aug 13 01:24:51.003940 containerd[1555]: time="2025-08-13T01:24:51.003031118Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Aug 13 01:24:51.003940 containerd[1555]: time="2025-08-13T01:24:51.003038488Z" level=info msg="runtime interface starting up..."
Aug 13 01:24:51.003940 containerd[1555]: time="2025-08-13T01:24:51.003044228Z" level=info msg="starting plugins..."
Aug 13 01:24:51.003940 containerd[1555]: time="2025-08-13T01:24:51.003060438Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Aug 13 01:24:51.003940 containerd[1555]: time="2025-08-13T01:24:51.003266959Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Aug 13 01:24:51.003940 containerd[1555]: time="2025-08-13T01:24:51.003319409Z" level=info msg=serving... address=/run/containerd/containerd.sock
Aug 13 01:24:51.003940 containerd[1555]: time="2025-08-13T01:24:51.003365819Z" level=info msg="containerd successfully booted in 0.331122s"
Aug 13 01:24:51.003821 systemd[1]: Started containerd.service - containerd container runtime.
Aug 13 01:24:51.016955 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Aug 13 01:24:51.020592 systemd[1]: Started getty@tty1.service - Getty on tty1.
Aug 13 01:24:51.023437 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Aug 13 01:24:51.024088 systemd[1]: Reached target getty.target - Login Prompts.
Aug 13 01:24:51.047436 coreos-metadata[1599]: Aug 13 01:24:51.047 INFO Fetch successful
Aug 13 01:24:51.075811 update-ssh-keys[1650]: Updated "/home/core/.ssh/authorized_keys"
Aug 13 01:24:51.077446 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Aug 13 01:24:51.081449 systemd[1]: Finished sshkeys.service.
Aug 13 01:24:51.084837 polkitd[1645]: Started polkitd version 126
Aug 13 01:24:51.088818 polkitd[1645]: Loading rules from directory /etc/polkit-1/rules.d
Aug 13 01:24:51.089045 polkitd[1645]: Loading rules from directory /run/polkit-1/rules.d
Aug 13 01:24:51.089088 polkitd[1645]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Aug 13 01:24:51.089257 polkitd[1645]: Loading rules from directory /usr/local/share/polkit-1/rules.d
Aug 13 01:24:51.089282 polkitd[1645]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Aug 13 01:24:51.089310 polkitd[1645]: Loading rules from directory /usr/share/polkit-1/rules.d
Aug 13 01:24:51.089850 polkitd[1645]: Finished loading, compiling and executing 2 rules
Aug 13 01:24:51.090549 systemd[1]: Started polkit.service - Authorization Manager.
Aug 13 01:24:51.091855 dbus-daemon[1520]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Aug 13 01:24:51.092190 polkitd[1645]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Aug 13 01:24:51.103761 systemd-resolved[1410]: System hostname changed to '172-233-222-13'.
Aug 13 01:24:51.103860 systemd-hostnamed[1604]: Hostname set to <172-233-222-13> (transient)
Aug 13 01:24:51.190848 systemd-networkd[1464]: eth0: Gained IPv6LL
Aug 13 01:24:51.195668 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Aug 13 01:24:51.198459 systemd[1]: Reached target network-online.target - Network is Online.
Aug 13 01:24:51.202768 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 01:24:51.204204 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Aug 13 01:24:51.231087 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Aug 13 01:24:51.331974 tar[1552]: linux-amd64/README.md
Aug 13 01:24:51.351330 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Aug 13 01:24:51.484890 coreos-metadata[1519]: Aug 13 01:24:51.484 INFO Putting http://169.254.169.254/v1/token: Attempt #2
Aug 13 01:24:51.589106 coreos-metadata[1519]: Aug 13 01:24:51.589 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1
Aug 13 01:24:51.802615 coreos-metadata[1519]: Aug 13 01:24:51.802 INFO Fetch successful
Aug 13 01:24:51.802615 coreos-metadata[1519]: Aug 13 01:24:51.802 INFO Fetching http://169.254.169.254/v1/network: Attempt #1
Aug 13 01:24:52.017071 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 01:24:52.023930 (kubelet)[1683]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 01:24:52.204203 coreos-metadata[1519]: Aug 13 01:24:52.204 INFO Fetch successful
Aug 13 01:24:52.325731 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Aug 13 01:24:52.327550 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Aug 13 01:24:52.327929 systemd[1]: Reached target multi-user.target - Multi-User System.
Aug 13 01:24:52.361625 systemd[1]: Startup finished in 2.801s (kernel) + 7.429s (initrd) + 4.523s (userspace) = 14.753s.
Aug 13 01:24:52.494881 kubelet[1683]: E0813 01:24:52.494795 1683 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 01:24:52.498704 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 01:24:52.498889 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 01:24:52.499274 systemd[1]: kubelet.service: Consumed 765ms CPU time, 265M memory peak.
Aug 13 01:24:55.142514 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Aug 13 01:24:55.143668 systemd[1]: Started sshd@0-172.233.222.13:22-147.75.109.163:42396.service - OpenSSH per-connection server daemon (147.75.109.163:42396).
Aug 13 01:24:55.506973 sshd[1714]: Accepted publickey for core from 147.75.109.163 port 42396 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:24:55.508998 sshd-session[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:24:55.516379 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
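[Editor's note: the kubelet exit above is benign at this point in boot: the unit starts before /var/lib/kubelet/config.yaml exists, and systemd keeps restarting it until `kubeadm init` or `kubeadm join` writes that file (the same failure recurs below as "restart counter is at 1"). Purely as an illustrative sketch, not what kubeadm generates verbatim, the file at that path is a KubeletConfiguration along these lines:]

```yaml
# /var/lib/kubelet/config.yaml -- illustrative sketch; on a real node this
# file is generated by `kubeadm init` / `kubeadm join`, not written by hand.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd  # consistent with SystemdCgroup=true in the containerd cri config above
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
staticPodPath: /etc/kubernetes/manifests
```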
Aug 13 01:24:55.517682 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Aug 13 01:24:55.527533 systemd-logind[1532]: New session 1 of user core.
Aug 13 01:24:55.544344 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Aug 13 01:24:55.549906 systemd[1]: Starting user@500.service - User Manager for UID 500...
Aug 13 01:24:55.562873 (systemd)[1718]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Aug 13 01:24:55.565582 systemd-logind[1532]: New session c1 of user core.
Aug 13 01:24:55.692287 systemd[1718]: Queued start job for default target default.target.
Aug 13 01:24:55.698825 systemd[1718]: Created slice app.slice - User Application Slice.
Aug 13 01:24:55.698854 systemd[1718]: Reached target paths.target - Paths.
Aug 13 01:24:55.698979 systemd[1718]: Reached target timers.target - Timers.
Aug 13 01:24:55.700518 systemd[1718]: Starting dbus.socket - D-Bus User Message Bus Socket...
Aug 13 01:24:55.722907 systemd[1718]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Aug 13 01:24:55.723041 systemd[1718]: Reached target sockets.target - Sockets.
Aug 13 01:24:55.723077 systemd[1718]: Reached target basic.target - Basic System.
Aug 13 01:24:55.723118 systemd[1718]: Reached target default.target - Main User Target.
Aug 13 01:24:55.723148 systemd[1718]: Startup finished in 149ms.
Aug 13 01:24:55.723584 systemd[1]: Started user@500.service - User Manager for UID 500.
Aug 13 01:24:55.731793 systemd[1]: Started session-1.scope - Session 1 of User core.
Aug 13 01:24:56.002994 systemd[1]: Started sshd@1-172.233.222.13:22-147.75.109.163:42412.service - OpenSSH per-connection server daemon (147.75.109.163:42412).
Aug 13 01:24:56.349985 sshd[1729]: Accepted publickey for core from 147.75.109.163 port 42412 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:24:56.351954 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:24:56.358121 systemd-logind[1532]: New session 2 of user core.
Aug 13 01:24:56.365775 systemd[1]: Started session-2.scope - Session 2 of User core.
Aug 13 01:24:56.603897 sshd[1731]: Connection closed by 147.75.109.163 port 42412
Aug 13 01:24:56.604921 sshd-session[1729]: pam_unix(sshd:session): session closed for user core
Aug 13 01:24:56.610373 systemd[1]: sshd@1-172.233.222.13:22-147.75.109.163:42412.service: Deactivated successfully.
Aug 13 01:24:56.612963 systemd[1]: session-2.scope: Deactivated successfully.
Aug 13 01:24:56.614401 systemd-logind[1532]: Session 2 logged out. Waiting for processes to exit.
Aug 13 01:24:56.616284 systemd-logind[1532]: Removed session 2.
Aug 13 01:24:56.664455 systemd[1]: Started sshd@2-172.233.222.13:22-147.75.109.163:42422.service - OpenSSH per-connection server daemon (147.75.109.163:42422).
Aug 13 01:24:57.008461 sshd[1737]: Accepted publickey for core from 147.75.109.163 port 42422 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:24:57.010169 sshd-session[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:24:57.016226 systemd-logind[1532]: New session 3 of user core.
Aug 13 01:24:57.021790 systemd[1]: Started session-3.scope - Session 3 of User core.
Aug 13 01:24:57.253668 sshd[1739]: Connection closed by 147.75.109.163 port 42422
Aug 13 01:24:57.254255 sshd-session[1737]: pam_unix(sshd:session): session closed for user core
Aug 13 01:24:57.258785 systemd[1]: sshd@2-172.233.222.13:22-147.75.109.163:42422.service: Deactivated successfully.
Aug 13 01:24:57.261371 systemd[1]: session-3.scope: Deactivated successfully.
Aug 13 01:24:57.262167 systemd-logind[1532]: Session 3 logged out. Waiting for processes to exit.
Aug 13 01:24:57.263844 systemd-logind[1532]: Removed session 3.
Aug 13 01:24:57.315126 systemd[1]: Started sshd@3-172.233.222.13:22-147.75.109.163:42432.service - OpenSSH per-connection server daemon (147.75.109.163:42432).
Aug 13 01:24:57.676698 sshd[1745]: Accepted publickey for core from 147.75.109.163 port 42432 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:24:57.678868 sshd-session[1745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:24:57.684520 systemd-logind[1532]: New session 4 of user core.
Aug 13 01:24:57.690778 systemd[1]: Started session-4.scope - Session 4 of User core.
Aug 13 01:24:57.926180 sshd[1747]: Connection closed by 147.75.109.163 port 42432
Aug 13 01:24:57.930367 sshd-session[1745]: pam_unix(sshd:session): session closed for user core
Aug 13 01:24:57.935880 systemd-logind[1532]: Session 4 logged out. Waiting for processes to exit.
Aug 13 01:24:57.937030 systemd[1]: sshd@3-172.233.222.13:22-147.75.109.163:42432.service: Deactivated successfully.
Aug 13 01:24:57.939134 systemd[1]: session-4.scope: Deactivated successfully.
Aug 13 01:24:57.940751 systemd-logind[1532]: Removed session 4.
Aug 13 01:24:57.987495 systemd[1]: Started sshd@4-172.233.222.13:22-147.75.109.163:42440.service - OpenSSH per-connection server daemon (147.75.109.163:42440).
Aug 13 01:24:58.328666 sshd[1753]: Accepted publickey for core from 147.75.109.163 port 42440 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:24:58.330320 sshd-session[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:24:58.336095 systemd-logind[1532]: New session 5 of user core.
Aug 13 01:24:58.341771 systemd[1]: Started session-5.scope - Session 5 of User core.
Aug 13 01:24:58.534806 sudo[1756]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Aug 13 01:24:58.535167 sudo[1756]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 01:24:58.559350 sudo[1756]: pam_unix(sudo:session): session closed for user root
Aug 13 01:24:58.611426 sshd[1755]: Connection closed by 147.75.109.163 port 42440
Aug 13 01:24:58.612370 sshd-session[1753]: pam_unix(sshd:session): session closed for user core
Aug 13 01:24:58.618020 systemd-logind[1532]: Session 5 logged out. Waiting for processes to exit.
Aug 13 01:24:58.618946 systemd[1]: sshd@4-172.233.222.13:22-147.75.109.163:42440.service: Deactivated successfully.
Aug 13 01:24:58.621588 systemd[1]: session-5.scope: Deactivated successfully.
Aug 13 01:24:58.623626 systemd-logind[1532]: Removed session 5.
Aug 13 01:24:58.672595 systemd[1]: Started sshd@5-172.233.222.13:22-147.75.109.163:37780.service - OpenSSH per-connection server daemon (147.75.109.163:37780).
Aug 13 01:24:59.013602 sshd[1762]: Accepted publickey for core from 147.75.109.163 port 37780 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:24:59.015290 sshd-session[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:24:59.020397 systemd-logind[1532]: New session 6 of user core.
Aug 13 01:24:59.023779 systemd[1]: Started session-6.scope - Session 6 of User core.
Aug 13 01:24:59.214717 sudo[1766]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Aug 13 01:24:59.215041 sudo[1766]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 01:24:59.220582 sudo[1766]: pam_unix(sudo:session): session closed for user root
Aug 13 01:24:59.226906 sudo[1765]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Aug 13 01:24:59.227215 sudo[1765]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 01:24:59.238192 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Aug 13 01:24:59.288971 augenrules[1788]: No rules
Aug 13 01:24:59.290911 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 13 01:24:59.291212 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Aug 13 01:24:59.292676 sudo[1765]: pam_unix(sudo:session): session closed for user root
Aug 13 01:24:59.344384 sshd[1764]: Connection closed by 147.75.109.163 port 37780
Aug 13 01:24:59.345087 sshd-session[1762]: pam_unix(sshd:session): session closed for user core
Aug 13 01:24:59.350387 systemd-logind[1532]: Session 6 logged out. Waiting for processes to exit.
Aug 13 01:24:59.351137 systemd[1]: sshd@5-172.233.222.13:22-147.75.109.163:37780.service: Deactivated successfully.
Aug 13 01:24:59.353531 systemd[1]: session-6.scope: Deactivated successfully.
Aug 13 01:24:59.355535 systemd-logind[1532]: Removed session 6.
Aug 13 01:24:59.407436 systemd[1]: Started sshd@6-172.233.222.13:22-147.75.109.163:37792.service - OpenSSH per-connection server daemon (147.75.109.163:37792).
Aug 13 01:24:59.742876 sshd[1797]: Accepted publickey for core from 147.75.109.163 port 37792 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:24:59.744606 sshd-session[1797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:24:59.750854 systemd-logind[1532]: New session 7 of user core.
Aug 13 01:24:59.763793 systemd[1]: Started session-7.scope - Session 7 of User core.
Aug 13 01:24:59.940469 sudo[1800]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Aug 13 01:24:59.940828 sudo[1800]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 01:25:00.234064 systemd[1]: Starting docker.service - Docker Application Container Engine...
Aug 13 01:25:00.254103 (dockerd)[1819]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Aug 13 01:25:00.461912 dockerd[1819]: time="2025-08-13T01:25:00.461842241Z" level=info msg="Starting up"
Aug 13 01:25:00.464749 dockerd[1819]: time="2025-08-13T01:25:00.464671667Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Aug 13 01:25:00.531773 dockerd[1819]: time="2025-08-13T01:25:00.530615479Z" level=info msg="Loading containers: start."
Aug 13 01:25:00.541177 kernel: Initializing XFRM netlink socket
Aug 13 01:25:00.804632 systemd-networkd[1464]: docker0: Link UP
Aug 13 01:25:00.808019 dockerd[1819]: time="2025-08-13T01:25:00.807954103Z" level=info msg="Loading containers: done."
Aug 13 01:25:00.824232 dockerd[1819]: time="2025-08-13T01:25:00.824171246Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Aug 13 01:25:00.824432 dockerd[1819]: time="2025-08-13T01:25:00.824275666Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Aug 13 01:25:00.824432 dockerd[1819]: time="2025-08-13T01:25:00.824390056Z" level=info msg="Initializing buildkit"
Aug 13 01:25:00.846727 dockerd[1819]: time="2025-08-13T01:25:00.846632661Z" level=info msg="Completed buildkit initialization"
Aug 13 01:25:00.855629 dockerd[1819]: time="2025-08-13T01:25:00.855579519Z" level=info msg="Daemon has completed initialization"
Aug 13 01:25:00.855833 dockerd[1819]: time="2025-08-13T01:25:00.855787329Z" level=info msg="API listen on /run/docker.sock"
Aug 13 01:25:00.855828 systemd[1]: Started docker.service - Docker Application Container Engine.
Aug 13 01:25:01.468570 containerd[1555]: time="2025-08-13T01:25:01.468512704Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\""
Aug 13 01:25:02.145498 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3776942582.mount: Deactivated successfully.
Aug 13 01:25:02.688371 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Aug 13 01:25:02.691744 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 01:25:02.928546 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 01:25:02.936914 (kubelet)[2082]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 01:25:02.976122 kubelet[2082]: E0813 01:25:02.975884 2082 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 01:25:02.982044 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 01:25:02.982435 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 01:25:02.983327 systemd[1]: kubelet.service: Consumed 214ms CPU time, 110.4M memory peak.
Aug 13 01:25:03.173928 containerd[1555]: time="2025-08-13T01:25:03.173852524Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:25:03.176963 containerd[1555]: time="2025-08-13T01:25:03.176916690Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.7: active requests=0, bytes read=28799994"
Aug 13 01:25:03.178390 containerd[1555]: time="2025-08-13T01:25:03.178354853Z" level=info msg="ImageCreate event name:\"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:25:03.182069 containerd[1555]: time="2025-08-13T01:25:03.182023491Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:25:03.182878 containerd[1555]: time="2025-08-13T01:25:03.182852142Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.7\" with image id \"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\", size \"28796794\" in 1.714288478s"
Aug 13 01:25:03.182969 containerd[1555]: time="2025-08-13T01:25:03.182954242Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\" returns image reference \"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\""
Aug 13 01:25:03.183831 containerd[1555]: time="2025-08-13T01:25:03.183747864Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\""
Aug 13 01:25:04.543774 containerd[1555]: time="2025-08-13T01:25:04.543711433Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:25:04.544719 containerd[1555]: time="2025-08-13T01:25:04.544677095Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.7: active requests=0, bytes read=24783636"
Aug 13 01:25:04.545872 containerd[1555]: time="2025-08-13T01:25:04.545451937Z" level=info msg="ImageCreate event name:\"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:25:04.547620 containerd[1555]: time="2025-08-13T01:25:04.547581311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:25:04.548491 containerd[1555]: time="2025-08-13T01:25:04.548467593Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.7\" with image id \"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\", size \"26385470\" in 1.364682899s"
Aug 13 01:25:04.548570 containerd[1555]: time="2025-08-13T01:25:04.548556263Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\" returns image reference \"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\""
Aug 13 01:25:04.549495 containerd[1555]: time="2025-08-13T01:25:04.549479515Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\""
Aug 13 01:25:05.738029 containerd[1555]: time="2025-08-13T01:25:05.737951411Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:25:05.739106 containerd[1555]: time="2025-08-13T01:25:05.738852683Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.7: active requests=0, bytes read=19176921"
Aug 13 01:25:05.739762 containerd[1555]: time="2025-08-13T01:25:05.739723085Z" level=info msg="ImageCreate event name:\"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:25:05.742707 containerd[1555]: time="2025-08-13T01:25:05.742676351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:25:05.743604 containerd[1555]: time="2025-08-13T01:25:05.743562292Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.7\" with image id \"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\", size \"20778773\" in 1.193934207s"
Aug 13 01:25:05.743670 containerd[1555]: time="2025-08-13T01:25:05.743604403Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\" returns image reference \"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\""
Aug 13 01:25:05.746631 containerd[1555]: time="2025-08-13T01:25:05.746584829Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\""
Aug 13 01:25:06.911003 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1962303907.mount: Deactivated successfully.
Aug 13 01:25:07.267122 containerd[1555]: time="2025-08-13T01:25:07.267065509Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:25:07.268067 containerd[1555]: time="2025-08-13T01:25:07.268038071Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.7: active requests=0, bytes read=30895380"
Aug 13 01:25:07.270667 containerd[1555]: time="2025-08-13T01:25:07.268607852Z" level=info msg="ImageCreate event name:\"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:25:07.274696 containerd[1555]: time="2025-08-13T01:25:07.274670904Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:25:07.275231 containerd[1555]: time="2025-08-13T01:25:07.275187295Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.7\" with image id \"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\", repo tag \"registry.k8s.io/kube-proxy:v1.32.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\", size \"30894399\" in 1.528570996s"
Aug 13 01:25:07.275231 containerd[1555]: time="2025-08-13T01:25:07.275230325Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\" returns image reference \"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\""
Aug 13 01:25:07.275982 containerd[1555]: time="2025-08-13T01:25:07.275863146Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Aug 13 01:25:08.012163 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount904291644.mount: Deactivated successfully.
Aug 13 01:25:08.661998 containerd[1555]: time="2025-08-13T01:25:08.661935828Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:25:08.665654 containerd[1555]: time="2025-08-13T01:25:08.664172502Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Aug 13 01:25:08.665654 containerd[1555]: time="2025-08-13T01:25:08.664430533Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:25:08.668896 containerd[1555]: time="2025-08-13T01:25:08.668868742Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:25:08.669375 containerd[1555]: time="2025-08-13T01:25:08.669349113Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.393454497s"
Aug 13 01:25:08.669416 containerd[1555]: time="2025-08-13T01:25:08.669380883Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference
\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 01:25:08.670379 containerd[1555]: time="2025-08-13T01:25:08.670337075Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 01:25:09.362955 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount228768985.mount: Deactivated successfully. Aug 13 01:25:09.366375 containerd[1555]: time="2025-08-13T01:25:09.366337266Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 01:25:09.367481 containerd[1555]: time="2025-08-13T01:25:09.367273818Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Aug 13 01:25:09.368101 containerd[1555]: time="2025-08-13T01:25:09.368076230Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 01:25:09.369716 containerd[1555]: time="2025-08-13T01:25:09.369689723Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 01:25:09.370462 containerd[1555]: time="2025-08-13T01:25:09.370440315Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 700.07773ms" Aug 13 01:25:09.370527 containerd[1555]: time="2025-08-13T01:25:09.370513935Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image 
reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 01:25:09.371435 containerd[1555]: time="2025-08-13T01:25:09.371397836Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Aug 13 01:25:10.101793 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2192903348.mount: Deactivated successfully. Aug 13 01:25:11.434084 containerd[1555]: time="2025-08-13T01:25:11.434005141Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:25:11.434992 containerd[1555]: time="2025-08-13T01:25:11.434905973Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" Aug 13 01:25:11.435478 containerd[1555]: time="2025-08-13T01:25:11.435438524Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:25:11.437763 containerd[1555]: time="2025-08-13T01:25:11.437726908Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:25:11.438707 containerd[1555]: time="2025-08-13T01:25:11.438685400Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.067225583s" Aug 13 01:25:11.438784 containerd[1555]: time="2025-08-13T01:25:11.438768260Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Aug 13 01:25:13.188480 systemd[1]: kubelet.service: 
Scheduled restart job, restart counter is at 2. Aug 13 01:25:13.191297 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:25:13.211037 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 13 01:25:13.211116 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 13 01:25:13.211377 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:25:13.221144 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:25:13.238031 systemd[1]: Reload requested from client PID 2247 ('systemctl') (unit session-7.scope)... Aug 13 01:25:13.238119 systemd[1]: Reloading... Aug 13 01:25:13.342356 zram_generator::config[2291]: No configuration found. Aug 13 01:25:13.435711 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:25:13.523087 systemd[1]: Reloading finished in 284 ms. Aug 13 01:25:13.564811 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 13 01:25:13.564900 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 13 01:25:13.565194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:25:13.565234 systemd[1]: kubelet.service: Consumed 129ms CPU time, 98.3M memory peak. Aug 13 01:25:13.566610 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:25:13.721680 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:25:13.724457 (kubelet)[2346]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 01:25:13.754758 kubelet[2346]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:25:13.754758 kubelet[2346]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 01:25:13.754758 kubelet[2346]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:25:13.755004 kubelet[2346]: I0813 01:25:13.754787 2346 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 01:25:13.871573 kubelet[2346]: I0813 01:25:13.871507 2346 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 13 01:25:13.871573 kubelet[2346]: I0813 01:25:13.871524 2346 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 01:25:13.871903 kubelet[2346]: I0813 01:25:13.871883 2346 server.go:954] "Client rotation is on, will bootstrap in background" Aug 13 01:25:13.898714 kubelet[2346]: E0813 01:25:13.898689 2346 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.233.222.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.233.222.13:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:25:13.902343 kubelet[2346]: I0813 01:25:13.902258 2346 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 01:25:13.911855 kubelet[2346]: I0813 01:25:13.911836 2346 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 13 01:25:13.916909 kubelet[2346]: I0813 
01:25:13.916711 2346 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 13 01:25:13.917978 kubelet[2346]: I0813 01:25:13.917846 2346 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 01:25:13.918186 kubelet[2346]: I0813 01:25:13.918035 2346 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-233-222-13","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 01:25:13.918313 
kubelet[2346]: I0813 01:25:13.918194 2346 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 01:25:13.918313 kubelet[2346]: I0813 01:25:13.918203 2346 container_manager_linux.go:304] "Creating device plugin manager" Aug 13 01:25:13.918359 kubelet[2346]: I0813 01:25:13.918318 2346 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:25:13.921137 kubelet[2346]: I0813 01:25:13.921044 2346 kubelet.go:446] "Attempting to sync node with API server" Aug 13 01:25:13.921137 kubelet[2346]: I0813 01:25:13.921071 2346 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 01:25:13.921137 kubelet[2346]: I0813 01:25:13.921088 2346 kubelet.go:352] "Adding apiserver pod source" Aug 13 01:25:13.921137 kubelet[2346]: I0813 01:25:13.921096 2346 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 01:25:13.925183 kubelet[2346]: W0813 01:25:13.925132 2346 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.233.222.13:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-233-222-13&limit=500&resourceVersion=0": dial tcp 172.233.222.13:6443: connect: connection refused Aug 13 01:25:13.925269 kubelet[2346]: E0813 01:25:13.925234 2346 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.233.222.13:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-233-222-13&limit=500&resourceVersion=0\": dial tcp 172.233.222.13:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:25:13.925320 kubelet[2346]: I0813 01:25:13.925307 2346 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Aug 13 01:25:13.926017 kubelet[2346]: I0813 01:25:13.925586 2346 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 01:25:13.926135 kubelet[2346]: 
W0813 01:25:13.926113 2346 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 01:25:13.927764 kubelet[2346]: I0813 01:25:13.927688 2346 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 01:25:13.927764 kubelet[2346]: I0813 01:25:13.927711 2346 server.go:1287] "Started kubelet" Aug 13 01:25:13.929358 kubelet[2346]: W0813 01:25:13.929309 2346 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.233.222.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.233.222.13:6443: connect: connection refused Aug 13 01:25:13.929416 kubelet[2346]: E0813 01:25:13.929403 2346 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.233.222.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.233.222.13:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:25:13.929528 kubelet[2346]: I0813 01:25:13.929496 2346 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 01:25:13.930244 kubelet[2346]: I0813 01:25:13.930100 2346 server.go:479] "Adding debug handlers to kubelet server" Aug 13 01:25:13.931639 kubelet[2346]: I0813 01:25:13.931619 2346 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 01:25:13.931735 kubelet[2346]: I0813 01:25:13.931705 2346 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 01:25:13.931913 kubelet[2346]: I0813 01:25:13.931901 2346 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 01:25:13.934707 kubelet[2346]: E0813 01:25:13.933783 2346 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://172.233.222.13:6443/api/v1/namespaces/default/events\": dial tcp 172.233.222.13:6443: connect: connection refused" event="&Event{ObjectMeta:{172-233-222-13.185b2f24934f052b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-233-222-13,UID:172-233-222-13,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-233-222-13,},FirstTimestamp:2025-08-13 01:25:13.927697707 +0000 UTC m=+0.200248201,LastTimestamp:2025-08-13 01:25:13.927697707 +0000 UTC m=+0.200248201,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-233-222-13,}" Aug 13 01:25:13.935966 kubelet[2346]: E0813 01:25:13.935914 2346 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 01:25:13.936053 kubelet[2346]: I0813 01:25:13.936045 2346 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 01:25:13.936179 kubelet[2346]: I0813 01:25:13.936168 2346 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 01:25:13.936961 kubelet[2346]: E0813 01:25:13.936936 2346 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-233-222-13\" not found" Aug 13 01:25:13.937495 kubelet[2346]: I0813 01:25:13.937482 2346 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 01:25:13.937572 kubelet[2346]: I0813 01:25:13.937562 2346 reconciler.go:26] "Reconciler: start to sync state" Aug 13 01:25:13.939331 kubelet[2346]: W0813 01:25:13.939161 2346 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://172.233.222.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.233.222.13:6443: connect: connection refused Aug 13 01:25:13.939331 kubelet[2346]: E0813 01:25:13.939187 2346 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.233.222.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.233.222.13:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:25:13.939331 kubelet[2346]: E0813 01:25:13.939228 2346 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.233.222.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-233-222-13?timeout=10s\": dial tcp 172.233.222.13:6443: connect: connection refused" interval="200ms" Aug 13 01:25:13.940511 kubelet[2346]: I0813 01:25:13.940499 2346 factory.go:221] Registration of the containerd container factory successfully Aug 13 01:25:13.940561 kubelet[2346]: I0813 01:25:13.940554 2346 factory.go:221] Registration of the systemd container factory successfully Aug 13 01:25:13.940661 kubelet[2346]: I0813 01:25:13.940635 2346 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 01:25:13.947363 kubelet[2346]: I0813 01:25:13.947323 2346 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 01:25:13.948657 kubelet[2346]: I0813 01:25:13.948373 2346 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 01:25:13.948657 kubelet[2346]: I0813 01:25:13.948390 2346 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 13 01:25:13.948657 kubelet[2346]: I0813 01:25:13.948407 2346 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Aug 13 01:25:13.948657 kubelet[2346]: I0813 01:25:13.948414 2346 kubelet.go:2382] "Starting kubelet main sync loop" Aug 13 01:25:13.948657 kubelet[2346]: E0813 01:25:13.948472 2346 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 01:25:13.953932 kubelet[2346]: W0813 01:25:13.953912 2346 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.233.222.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.233.222.13:6443: connect: connection refused Aug 13 01:25:13.954124 kubelet[2346]: E0813 01:25:13.954110 2346 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.233.222.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.233.222.13:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:25:13.966554 kubelet[2346]: I0813 01:25:13.966473 2346 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 01:25:13.966620 kubelet[2346]: I0813 01:25:13.966612 2346 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 01:25:13.966746 kubelet[2346]: I0813 01:25:13.966736 2346 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:25:13.968606 kubelet[2346]: I0813 01:25:13.968595 2346 policy_none.go:49] "None policy: Start" Aug 13 01:25:13.968759 kubelet[2346]: I0813 01:25:13.968695 2346 memory_manager.go:186] "Starting memorymanager" policy="None" 
Aug 13 01:25:13.968759 kubelet[2346]: I0813 01:25:13.968708 2346 state_mem.go:35] "Initializing new in-memory state store" Aug 13 01:25:13.973515 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 13 01:25:13.983747 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 13 01:25:13.986972 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 13 01:25:13.997681 kubelet[2346]: I0813 01:25:13.997555 2346 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 01:25:13.997782 kubelet[2346]: I0813 01:25:13.997770 2346 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 01:25:13.997840 kubelet[2346]: I0813 01:25:13.997818 2346 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 01:25:13.998007 kubelet[2346]: I0813 01:25:13.997995 2346 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 01:25:13.999237 kubelet[2346]: E0813 01:25:13.999219 2346 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 13 01:25:13.999285 kubelet[2346]: E0813 01:25:13.999247 2346 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-233-222-13\" not found" Aug 13 01:25:14.056088 systemd[1]: Created slice kubepods-burstable-pod661421ce6db7d363431c62f048929318.slice - libcontainer container kubepods-burstable-pod661421ce6db7d363431c62f048929318.slice. 
Aug 13 01:25:14.074110 kubelet[2346]: E0813 01:25:14.074096 2346 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-233-222-13\" not found" node="172-233-222-13" Aug 13 01:25:14.076220 systemd[1]: Created slice kubepods-burstable-pod16c9391e19f5934ca0747bfabe2465f8.slice - libcontainer container kubepods-burstable-pod16c9391e19f5934ca0747bfabe2465f8.slice. Aug 13 01:25:14.090515 kubelet[2346]: E0813 01:25:14.090491 2346 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-233-222-13\" not found" node="172-233-222-13" Aug 13 01:25:14.092691 systemd[1]: Created slice kubepods-burstable-podc31a2ab30429e4db40c722cce1eb3a07.slice - libcontainer container kubepods-burstable-podc31a2ab30429e4db40c722cce1eb3a07.slice. Aug 13 01:25:14.094138 kubelet[2346]: E0813 01:25:14.094120 2346 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-233-222-13\" not found" node="172-233-222-13" Aug 13 01:25:14.099113 kubelet[2346]: I0813 01:25:14.099091 2346 kubelet_node_status.go:75] "Attempting to register node" node="172-233-222-13" Aug 13 01:25:14.099311 kubelet[2346]: E0813 01:25:14.099287 2346 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.233.222.13:6443/api/v1/nodes\": dial tcp 172.233.222.13:6443: connect: connection refused" node="172-233-222-13" Aug 13 01:25:14.138726 kubelet[2346]: I0813 01:25:14.138660 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/16c9391e19f5934ca0747bfabe2465f8-usr-share-ca-certificates\") pod \"kube-apiserver-172-233-222-13\" (UID: \"16c9391e19f5934ca0747bfabe2465f8\") " pod="kube-system/kube-apiserver-172-233-222-13" Aug 13 01:25:14.138726 kubelet[2346]: I0813 01:25:14.138678 2346 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c31a2ab30429e4db40c722cce1eb3a07-ca-certs\") pod \"kube-controller-manager-172-233-222-13\" (UID: \"c31a2ab30429e4db40c722cce1eb3a07\") " pod="kube-system/kube-controller-manager-172-233-222-13" Aug 13 01:25:14.138726 kubelet[2346]: I0813 01:25:14.138690 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c31a2ab30429e4db40c722cce1eb3a07-flexvolume-dir\") pod \"kube-controller-manager-172-233-222-13\" (UID: \"c31a2ab30429e4db40c722cce1eb3a07\") " pod="kube-system/kube-controller-manager-172-233-222-13" Aug 13 01:25:14.139287 kubelet[2346]: I0813 01:25:14.138703 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c31a2ab30429e4db40c722cce1eb3a07-usr-share-ca-certificates\") pod \"kube-controller-manager-172-233-222-13\" (UID: \"c31a2ab30429e4db40c722cce1eb3a07\") " pod="kube-system/kube-controller-manager-172-233-222-13" Aug 13 01:25:14.139315 kubelet[2346]: I0813 01:25:14.139299 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/661421ce6db7d363431c62f048929318-kubeconfig\") pod \"kube-scheduler-172-233-222-13\" (UID: \"661421ce6db7d363431c62f048929318\") " pod="kube-system/kube-scheduler-172-233-222-13" Aug 13 01:25:14.139315 kubelet[2346]: I0813 01:25:14.139312 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/16c9391e19f5934ca0747bfabe2465f8-ca-certs\") pod \"kube-apiserver-172-233-222-13\" (UID: \"16c9391e19f5934ca0747bfabe2465f8\") " pod="kube-system/kube-apiserver-172-233-222-13" Aug 13 01:25:14.139365 kubelet[2346]: 
I0813 01:25:14.139322 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/16c9391e19f5934ca0747bfabe2465f8-k8s-certs\") pod \"kube-apiserver-172-233-222-13\" (UID: \"16c9391e19f5934ca0747bfabe2465f8\") " pod="kube-system/kube-apiserver-172-233-222-13" Aug 13 01:25:14.139365 kubelet[2346]: I0813 01:25:14.139333 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c31a2ab30429e4db40c722cce1eb3a07-k8s-certs\") pod \"kube-controller-manager-172-233-222-13\" (UID: \"c31a2ab30429e4db40c722cce1eb3a07\") " pod="kube-system/kube-controller-manager-172-233-222-13" Aug 13 01:25:14.139365 kubelet[2346]: I0813 01:25:14.139343 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c31a2ab30429e4db40c722cce1eb3a07-kubeconfig\") pod \"kube-controller-manager-172-233-222-13\" (UID: \"c31a2ab30429e4db40c722cce1eb3a07\") " pod="kube-system/kube-controller-manager-172-233-222-13" Aug 13 01:25:14.139540 kubelet[2346]: E0813 01:25:14.139524 2346 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.233.222.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-233-222-13?timeout=10s\": dial tcp 172.233.222.13:6443: connect: connection refused" interval="400ms" Aug 13 01:25:14.301081 kubelet[2346]: I0813 01:25:14.301059 2346 kubelet_node_status.go:75] "Attempting to register node" node="172-233-222-13" Aug 13 01:25:14.301290 kubelet[2346]: E0813 01:25:14.301263 2346 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.233.222.13:6443/api/v1/nodes\": dial tcp 172.233.222.13:6443: connect: connection refused" node="172-233-222-13" Aug 13 01:25:14.374871 kubelet[2346]: E0813 01:25:14.374846 2346 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:25:14.375492 containerd[1555]: time="2025-08-13T01:25:14.375257252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-233-222-13,Uid:661421ce6db7d363431c62f048929318,Namespace:kube-system,Attempt:0,}" Aug 13 01:25:14.391930 kubelet[2346]: E0813 01:25:14.391672 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:25:14.392739 containerd[1555]: time="2025-08-13T01:25:14.392534506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-233-222-13,Uid:16c9391e19f5934ca0747bfabe2465f8,Namespace:kube-system,Attempt:0,}" Aug 13 01:25:14.394953 kubelet[2346]: E0813 01:25:14.394938 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:25:14.397841 containerd[1555]: time="2025-08-13T01:25:14.397812487Z" level=info msg="connecting to shim f83aab70fdee48af20c7800227a791d633f1365494515f7fe323c8c5b2f98aaa" address="unix:///run/containerd/s/1ac31a83c633173f23c491871b1954b79d4145d2fb3d485656ddbb8020a10c9b" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:25:14.398451 containerd[1555]: time="2025-08-13T01:25:14.398403658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-233-222-13,Uid:c31a2ab30429e4db40c722cce1eb3a07,Namespace:kube-system,Attempt:0,}" Aug 13 01:25:14.428579 containerd[1555]: time="2025-08-13T01:25:14.428540398Z" level=info msg="connecting to shim 4122a78d479902ec810137130d18f73f75f30fb7103e244fcff421e53d74ecc6" 
address="unix:///run/containerd/s/6edd718cb05bdee7f0c97fdf26ab499dad9ca920603431a467bf37426a45604a" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:25:14.435760 systemd[1]: Started cri-containerd-f83aab70fdee48af20c7800227a791d633f1365494515f7fe323c8c5b2f98aaa.scope - libcontainer container f83aab70fdee48af20c7800227a791d633f1365494515f7fe323c8c5b2f98aaa. Aug 13 01:25:14.440233 containerd[1555]: time="2025-08-13T01:25:14.440198762Z" level=info msg="connecting to shim 061149fd87cfcbc48b98ecc4d4d2b3615b77f1b580cf99869d5e4f585dea30d2" address="unix:///run/containerd/s/fed845add4200f9b9fab5dfb2765a83c60f69b73ea8aaa99d5498116239e6639" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:25:14.468721 systemd[1]: Started cri-containerd-4122a78d479902ec810137130d18f73f75f30fb7103e244fcff421e53d74ecc6.scope - libcontainer container 4122a78d479902ec810137130d18f73f75f30fb7103e244fcff421e53d74ecc6. Aug 13 01:25:14.473159 systemd[1]: Started cri-containerd-061149fd87cfcbc48b98ecc4d4d2b3615b77f1b580cf99869d5e4f585dea30d2.scope - libcontainer container 061149fd87cfcbc48b98ecc4d4d2b3615b77f1b580cf99869d5e4f585dea30d2. 
Aug 13 01:25:14.529565 containerd[1555]: time="2025-08-13T01:25:14.529506420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-233-222-13,Uid:661421ce6db7d363431c62f048929318,Namespace:kube-system,Attempt:0,} returns sandbox id \"f83aab70fdee48af20c7800227a791d633f1365494515f7fe323c8c5b2f98aaa\"" Aug 13 01:25:14.529713 kubelet[2346]: E0813 01:25:14.529461 2346 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.233.222.13:6443/api/v1/namespaces/default/events\": dial tcp 172.233.222.13:6443: connect: connection refused" event="&Event{ObjectMeta:{172-233-222-13.185b2f24934f052b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-233-222-13,UID:172-233-222-13,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-233-222-13,},FirstTimestamp:2025-08-13 01:25:13.927697707 +0000 UTC m=+0.200248201,LastTimestamp:2025-08-13 01:25:13.927697707 +0000 UTC m=+0.200248201,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-233-222-13,}" Aug 13 01:25:14.533557 kubelet[2346]: E0813 01:25:14.533395 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:25:14.537517 containerd[1555]: time="2025-08-13T01:25:14.537493336Z" level=info msg="CreateContainer within sandbox \"f83aab70fdee48af20c7800227a791d633f1365494515f7fe323c8c5b2f98aaa\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 01:25:14.539235 containerd[1555]: time="2025-08-13T01:25:14.539209730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-233-222-13,Uid:16c9391e19f5934ca0747bfabe2465f8,Namespace:kube-system,Attempt:0,} returns 
sandbox id \"4122a78d479902ec810137130d18f73f75f30fb7103e244fcff421e53d74ecc6\"" Aug 13 01:25:14.539833 kubelet[2346]: E0813 01:25:14.539814 2346 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.233.222.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-233-222-13?timeout=10s\": dial tcp 172.233.222.13:6443: connect: connection refused" interval="800ms" Aug 13 01:25:14.541003 kubelet[2346]: E0813 01:25:14.540989 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:25:14.542087 containerd[1555]: time="2025-08-13T01:25:14.542066165Z" level=info msg="CreateContainer within sandbox \"4122a78d479902ec810137130d18f73f75f30fb7103e244fcff421e53d74ecc6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 01:25:14.548773 containerd[1555]: time="2025-08-13T01:25:14.548457828Z" level=info msg="Container 5c4a694ca24e61ef011c4a38a02ebe5544d3ebbf5ecb339570e8cbaa26f4303d: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:25:14.549010 containerd[1555]: time="2025-08-13T01:25:14.548989229Z" level=info msg="Container 1db407edecf6aac5354d5f5d3e452b844657436010c47c75a0488397fd2ae0d1: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:25:14.553541 containerd[1555]: time="2025-08-13T01:25:14.553521668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-233-222-13,Uid:c31a2ab30429e4db40c722cce1eb3a07,Namespace:kube-system,Attempt:0,} returns sandbox id \"061149fd87cfcbc48b98ecc4d4d2b3615b77f1b580cf99869d5e4f585dea30d2\"" Aug 13 01:25:14.554314 kubelet[2346]: E0813 01:25:14.554287 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:25:14.556527 containerd[1555]: 
time="2025-08-13T01:25:14.556498604Z" level=info msg="CreateContainer within sandbox \"061149fd87cfcbc48b98ecc4d4d2b3615b77f1b580cf99869d5e4f585dea30d2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 01:25:14.556615 containerd[1555]: time="2025-08-13T01:25:14.556592915Z" level=info msg="CreateContainer within sandbox \"f83aab70fdee48af20c7800227a791d633f1365494515f7fe323c8c5b2f98aaa\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5c4a694ca24e61ef011c4a38a02ebe5544d3ebbf5ecb339570e8cbaa26f4303d\"" Aug 13 01:25:14.557329 containerd[1555]: time="2025-08-13T01:25:14.557277406Z" level=info msg="CreateContainer within sandbox \"4122a78d479902ec810137130d18f73f75f30fb7103e244fcff421e53d74ecc6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1db407edecf6aac5354d5f5d3e452b844657436010c47c75a0488397fd2ae0d1\"" Aug 13 01:25:14.557829 containerd[1555]: time="2025-08-13T01:25:14.557742067Z" level=info msg="StartContainer for \"5c4a694ca24e61ef011c4a38a02ebe5544d3ebbf5ecb339570e8cbaa26f4303d\"" Aug 13 01:25:14.559691 containerd[1555]: time="2025-08-13T01:25:14.559672311Z" level=info msg="StartContainer for \"1db407edecf6aac5354d5f5d3e452b844657436010c47c75a0488397fd2ae0d1\"" Aug 13 01:25:14.560286 containerd[1555]: time="2025-08-13T01:25:14.560258352Z" level=info msg="connecting to shim 5c4a694ca24e61ef011c4a38a02ebe5544d3ebbf5ecb339570e8cbaa26f4303d" address="unix:///run/containerd/s/1ac31a83c633173f23c491871b1954b79d4145d2fb3d485656ddbb8020a10c9b" protocol=ttrpc version=3 Aug 13 01:25:14.561774 containerd[1555]: time="2025-08-13T01:25:14.561718655Z" level=info msg="connecting to shim 1db407edecf6aac5354d5f5d3e452b844657436010c47c75a0488397fd2ae0d1" address="unix:///run/containerd/s/6edd718cb05bdee7f0c97fdf26ab499dad9ca920603431a467bf37426a45604a" protocol=ttrpc version=3 Aug 13 01:25:14.571510 containerd[1555]: time="2025-08-13T01:25:14.571480144Z" level=info msg="Container 
28519b58a50f2ef750f694907fed149d5641a30a70c2ee35ba449d3c1013b7df: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:25:14.579833 systemd[1]: Started cri-containerd-5c4a694ca24e61ef011c4a38a02ebe5544d3ebbf5ecb339570e8cbaa26f4303d.scope - libcontainer container 5c4a694ca24e61ef011c4a38a02ebe5544d3ebbf5ecb339570e8cbaa26f4303d. Aug 13 01:25:14.583913 containerd[1555]: time="2025-08-13T01:25:14.583845169Z" level=info msg="CreateContainer within sandbox \"061149fd87cfcbc48b98ecc4d4d2b3615b77f1b580cf99869d5e4f585dea30d2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"28519b58a50f2ef750f694907fed149d5641a30a70c2ee35ba449d3c1013b7df\"" Aug 13 01:25:14.585025 containerd[1555]: time="2025-08-13T01:25:14.584967901Z" level=info msg="StartContainer for \"28519b58a50f2ef750f694907fed149d5641a30a70c2ee35ba449d3c1013b7df\"" Aug 13 01:25:14.586483 containerd[1555]: time="2025-08-13T01:25:14.586408314Z" level=info msg="connecting to shim 28519b58a50f2ef750f694907fed149d5641a30a70c2ee35ba449d3c1013b7df" address="unix:///run/containerd/s/fed845add4200f9b9fab5dfb2765a83c60f69b73ea8aaa99d5498116239e6639" protocol=ttrpc version=3 Aug 13 01:25:14.587800 systemd[1]: Started cri-containerd-1db407edecf6aac5354d5f5d3e452b844657436010c47c75a0488397fd2ae0d1.scope - libcontainer container 1db407edecf6aac5354d5f5d3e452b844657436010c47c75a0488397fd2ae0d1. Aug 13 01:25:14.611855 systemd[1]: Started cri-containerd-28519b58a50f2ef750f694907fed149d5641a30a70c2ee35ba449d3c1013b7df.scope - libcontainer container 28519b58a50f2ef750f694907fed149d5641a30a70c2ee35ba449d3c1013b7df. 
Aug 13 01:25:14.672554 containerd[1555]: time="2025-08-13T01:25:14.670811723Z" level=info msg="StartContainer for \"1db407edecf6aac5354d5f5d3e452b844657436010c47c75a0488397fd2ae0d1\" returns successfully" Aug 13 01:25:14.681207 containerd[1555]: time="2025-08-13T01:25:14.681165314Z" level=info msg="StartContainer for \"28519b58a50f2ef750f694907fed149d5641a30a70c2ee35ba449d3c1013b7df\" returns successfully" Aug 13 01:25:14.701501 containerd[1555]: time="2025-08-13T01:25:14.701470464Z" level=info msg="StartContainer for \"5c4a694ca24e61ef011c4a38a02ebe5544d3ebbf5ecb339570e8cbaa26f4303d\" returns successfully" Aug 13 01:25:14.704965 kubelet[2346]: I0813 01:25:14.704885 2346 kubelet_node_status.go:75] "Attempting to register node" node="172-233-222-13" Aug 13 01:25:14.705531 kubelet[2346]: E0813 01:25:14.705509 2346 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.233.222.13:6443/api/v1/nodes\": dial tcp 172.233.222.13:6443: connect: connection refused" node="172-233-222-13" Aug 13 01:25:14.968196 kubelet[2346]: E0813 01:25:14.967960 2346 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-233-222-13\" not found" node="172-233-222-13" Aug 13 01:25:14.968196 kubelet[2346]: E0813 01:25:14.968071 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:25:14.970196 kubelet[2346]: E0813 01:25:14.970073 2346 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-233-222-13\" not found" node="172-233-222-13" Aug 13 01:25:14.970196 kubelet[2346]: E0813 01:25:14.970138 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 
01:25:14.973750 kubelet[2346]: E0813 01:25:14.973730 2346 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-233-222-13\" not found" node="172-233-222-13" Aug 13 01:25:14.973875 kubelet[2346]: E0813 01:25:14.973865 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:25:15.509128 kubelet[2346]: I0813 01:25:15.508882 2346 kubelet_node_status.go:75] "Attempting to register node" node="172-233-222-13" Aug 13 01:25:15.859606 kubelet[2346]: E0813 01:25:15.859057 2346 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-233-222-13\" not found" node="172-233-222-13" Aug 13 01:25:15.913780 kubelet[2346]: I0813 01:25:15.913694 2346 kubelet_node_status.go:78] "Successfully registered node" node="172-233-222-13" Aug 13 01:25:15.913780 kubelet[2346]: E0813 01:25:15.913715 2346 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"172-233-222-13\": node \"172-233-222-13\" not found" Aug 13 01:25:15.930841 kubelet[2346]: I0813 01:25:15.930811 2346 apiserver.go:52] "Watching apiserver" Aug 13 01:25:15.937476 kubelet[2346]: I0813 01:25:15.937460 2346 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-233-222-13" Aug 13 01:25:15.937757 kubelet[2346]: I0813 01:25:15.937743 2346 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 01:25:15.952846 kubelet[2346]: E0813 01:25:15.952832 2346 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-233-222-13\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-233-222-13" Aug 13 01:25:15.952928 kubelet[2346]: I0813 01:25:15.952919 2346 kubelet.go:3194] "Creating a mirror pod 
for static pod" pod="kube-system/kube-apiserver-172-233-222-13" Aug 13 01:25:15.955317 kubelet[2346]: E0813 01:25:15.955214 2346 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-233-222-13\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-233-222-13" Aug 13 01:25:15.955317 kubelet[2346]: I0813 01:25:15.955226 2346 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-233-222-13" Aug 13 01:25:15.959112 kubelet[2346]: E0813 01:25:15.959099 2346 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-233-222-13\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-233-222-13" Aug 13 01:25:15.974488 kubelet[2346]: I0813 01:25:15.974256 2346 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-233-222-13" Aug 13 01:25:15.974488 kubelet[2346]: I0813 01:25:15.974353 2346 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-233-222-13" Aug 13 01:25:15.976110 kubelet[2346]: E0813 01:25:15.975948 2346 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-233-222-13\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-233-222-13" Aug 13 01:25:15.976110 kubelet[2346]: E0813 01:25:15.976063 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:25:15.976775 kubelet[2346]: E0813 01:25:15.976730 2346 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-233-222-13\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-233-222-13" Aug 13 01:25:15.976903 kubelet[2346]: 
E0813 01:25:15.976879 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:25:17.672769 systemd[1]: Reload requested from client PID 2614 ('systemctl') (unit session-7.scope)... Aug 13 01:25:17.672794 systemd[1]: Reloading... Aug 13 01:25:17.760694 zram_generator::config[2658]: No configuration found. Aug 13 01:25:17.836741 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:25:17.937952 systemd[1]: Reloading finished in 264 ms. Aug 13 01:25:17.959998 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:25:17.973839 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 01:25:17.974121 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:25:17.974173 systemd[1]: kubelet.service: Consumed 512ms CPU time, 131.6M memory peak. Aug 13 01:25:17.975750 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:25:18.153191 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:25:18.161499 (kubelet)[2708]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 01:25:18.207671 kubelet[2708]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:25:18.209674 kubelet[2708]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Aug 13 01:25:18.209674 kubelet[2708]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:25:18.209674 kubelet[2708]: I0813 01:25:18.208000 2708 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 01:25:18.213039 kubelet[2708]: I0813 01:25:18.213017 2708 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 13 01:25:18.213039 kubelet[2708]: I0813 01:25:18.213035 2708 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 01:25:18.213198 kubelet[2708]: I0813 01:25:18.213181 2708 server.go:954] "Client rotation is on, will bootstrap in background" Aug 13 01:25:18.216428 kubelet[2708]: I0813 01:25:18.216407 2708 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 13 01:25:18.218603 kubelet[2708]: I0813 01:25:18.218579 2708 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 01:25:18.221167 kubelet[2708]: I0813 01:25:18.221142 2708 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 13 01:25:18.225212 kubelet[2708]: I0813 01:25:18.224141 2708 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 01:25:18.225212 kubelet[2708]: I0813 01:25:18.224342 2708 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 01:25:18.225212 kubelet[2708]: I0813 01:25:18.224366 2708 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-233-222-13","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 01:25:18.225212 kubelet[2708]: I0813 01:25:18.224511 2708 topology_manager.go:138] "Creating topology manager with none 
policy" Aug 13 01:25:18.225386 kubelet[2708]: I0813 01:25:18.224519 2708 container_manager_linux.go:304] "Creating device plugin manager" Aug 13 01:25:18.225386 kubelet[2708]: I0813 01:25:18.224565 2708 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:25:18.225386 kubelet[2708]: I0813 01:25:18.224716 2708 kubelet.go:446] "Attempting to sync node with API server" Aug 13 01:25:18.225386 kubelet[2708]: I0813 01:25:18.224736 2708 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 01:25:18.225386 kubelet[2708]: I0813 01:25:18.224754 2708 kubelet.go:352] "Adding apiserver pod source" Aug 13 01:25:18.225386 kubelet[2708]: I0813 01:25:18.224762 2708 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 01:25:18.229881 kubelet[2708]: I0813 01:25:18.229868 2708 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Aug 13 01:25:18.230212 kubelet[2708]: I0813 01:25:18.230191 2708 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 01:25:18.231154 kubelet[2708]: I0813 01:25:18.231133 2708 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 01:25:18.231193 kubelet[2708]: I0813 01:25:18.231164 2708 server.go:1287] "Started kubelet" Aug 13 01:25:18.234161 kubelet[2708]: I0813 01:25:18.234145 2708 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 01:25:18.235857 kubelet[2708]: E0813 01:25:18.235845 2708 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 01:25:18.238018 kubelet[2708]: I0813 01:25:18.237994 2708 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 01:25:18.238869 kubelet[2708]: I0813 01:25:18.238859 2708 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 01:25:18.240929 kubelet[2708]: I0813 01:25:18.240916 2708 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 01:25:18.240929 kubelet[2708]: I0813 01:25:18.239252 2708 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 01:25:18.241695 kubelet[2708]: I0813 01:25:18.241675 2708 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 01:25:18.241731 kubelet[2708]: I0813 01:25:18.239433 2708 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 01:25:18.241761 kubelet[2708]: I0813 01:25:18.239224 2708 server.go:479] "Adding debug handlers to kubelet server" Aug 13 01:25:18.242346 kubelet[2708]: I0813 01:25:18.242328 2708 reconciler.go:26] "Reconciler: start to sync state" Aug 13 01:25:18.244853 kubelet[2708]: I0813 01:25:18.244754 2708 factory.go:221] Registration of the systemd container factory successfully Aug 13 01:25:18.244853 kubelet[2708]: I0813 01:25:18.244809 2708 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 01:25:18.249891 kubelet[2708]: I0813 01:25:18.249846 2708 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Aug 13 01:25:18.251943 kubelet[2708]: I0813 01:25:18.251224 2708 factory.go:221] Registration of the containerd container factory successfully Aug 13 01:25:18.252028 kubelet[2708]: I0813 01:25:18.252016 2708 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 01:25:18.252077 kubelet[2708]: I0813 01:25:18.252070 2708 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 13 01:25:18.252129 kubelet[2708]: I0813 01:25:18.252122 2708 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Aug 13 01:25:18.252170 kubelet[2708]: I0813 01:25:18.252163 2708 kubelet.go:2382] "Starting kubelet main sync loop" Aug 13 01:25:18.252257 kubelet[2708]: E0813 01:25:18.252234 2708 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 01:25:18.299168 kubelet[2708]: I0813 01:25:18.299142 2708 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 01:25:18.299168 kubelet[2708]: I0813 01:25:18.299158 2708 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 01:25:18.299168 kubelet[2708]: I0813 01:25:18.299174 2708 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:25:18.299305 kubelet[2708]: I0813 01:25:18.299288 2708 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 01:25:18.299335 kubelet[2708]: I0813 01:25:18.299298 2708 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 01:25:18.299335 kubelet[2708]: I0813 01:25:18.299312 2708 policy_none.go:49] "None policy: Start" Aug 13 01:25:18.299335 kubelet[2708]: I0813 01:25:18.299320 2708 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 01:25:18.299335 kubelet[2708]: I0813 01:25:18.299328 2708 state_mem.go:35] "Initializing new in-memory state store" Aug 13 01:25:18.299403 kubelet[2708]: I0813 01:25:18.299396 2708 
state_mem.go:75] "Updated machine memory state" Aug 13 01:25:18.304074 kubelet[2708]: I0813 01:25:18.303394 2708 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 01:25:18.304074 kubelet[2708]: I0813 01:25:18.303547 2708 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 01:25:18.304074 kubelet[2708]: I0813 01:25:18.303557 2708 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 01:25:18.304074 kubelet[2708]: I0813 01:25:18.303835 2708 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 01:25:18.304543 kubelet[2708]: E0813 01:25:18.304530 2708 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 13 01:25:18.352996 kubelet[2708]: I0813 01:25:18.352976 2708 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-233-222-13" Aug 13 01:25:18.353091 kubelet[2708]: I0813 01:25:18.353080 2708 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-233-222-13" Aug 13 01:25:18.353604 kubelet[2708]: I0813 01:25:18.353590 2708 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-233-222-13" Aug 13 01:25:18.413200 kubelet[2708]: I0813 01:25:18.412954 2708 kubelet_node_status.go:75] "Attempting to register node" node="172-233-222-13" Aug 13 01:25:18.421484 kubelet[2708]: I0813 01:25:18.421451 2708 kubelet_node_status.go:124] "Node was previously registered" node="172-233-222-13" Aug 13 01:25:18.421565 kubelet[2708]: I0813 01:25:18.421512 2708 kubelet_node_status.go:78] "Successfully registered node" node="172-233-222-13" Aug 13 01:25:18.443305 kubelet[2708]: I0813 01:25:18.443282 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/16c9391e19f5934ca0747bfabe2465f8-ca-certs\") pod \"kube-apiserver-172-233-222-13\" (UID: \"16c9391e19f5934ca0747bfabe2465f8\") " pod="kube-system/kube-apiserver-172-233-222-13" Aug 13 01:25:18.443355 kubelet[2708]: I0813 01:25:18.443309 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/16c9391e19f5934ca0747bfabe2465f8-k8s-certs\") pod \"kube-apiserver-172-233-222-13\" (UID: \"16c9391e19f5934ca0747bfabe2465f8\") " pod="kube-system/kube-apiserver-172-233-222-13" Aug 13 01:25:18.443355 kubelet[2708]: I0813 01:25:18.443325 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c31a2ab30429e4db40c722cce1eb3a07-ca-certs\") pod \"kube-controller-manager-172-233-222-13\" (UID: \"c31a2ab30429e4db40c722cce1eb3a07\") " pod="kube-system/kube-controller-manager-172-233-222-13" Aug 13 01:25:18.443355 kubelet[2708]: I0813 01:25:18.443339 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c31a2ab30429e4db40c722cce1eb3a07-flexvolume-dir\") pod \"kube-controller-manager-172-233-222-13\" (UID: \"c31a2ab30429e4db40c722cce1eb3a07\") " pod="kube-system/kube-controller-manager-172-233-222-13" Aug 13 01:25:18.443355 kubelet[2708]: I0813 01:25:18.443354 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c31a2ab30429e4db40c722cce1eb3a07-k8s-certs\") pod \"kube-controller-manager-172-233-222-13\" (UID: \"c31a2ab30429e4db40c722cce1eb3a07\") " pod="kube-system/kube-controller-manager-172-233-222-13" Aug 13 01:25:18.443448 kubelet[2708]: I0813 01:25:18.443368 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/16c9391e19f5934ca0747bfabe2465f8-usr-share-ca-certificates\") pod \"kube-apiserver-172-233-222-13\" (UID: \"16c9391e19f5934ca0747bfabe2465f8\") " pod="kube-system/kube-apiserver-172-233-222-13" Aug 13 01:25:18.443448 kubelet[2708]: I0813 01:25:18.443382 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c31a2ab30429e4db40c722cce1eb3a07-kubeconfig\") pod \"kube-controller-manager-172-233-222-13\" (UID: \"c31a2ab30429e4db40c722cce1eb3a07\") " pod="kube-system/kube-controller-manager-172-233-222-13" Aug 13 01:25:18.443448 kubelet[2708]: I0813 01:25:18.443400 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c31a2ab30429e4db40c722cce1eb3a07-usr-share-ca-certificates\") pod \"kube-controller-manager-172-233-222-13\" (UID: \"c31a2ab30429e4db40c722cce1eb3a07\") " pod="kube-system/kube-controller-manager-172-233-222-13" Aug 13 01:25:18.443448 kubelet[2708]: I0813 01:25:18.443415 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/661421ce6db7d363431c62f048929318-kubeconfig\") pod \"kube-scheduler-172-233-222-13\" (UID: \"661421ce6db7d363431c62f048929318\") " pod="kube-system/kube-scheduler-172-233-222-13" Aug 13 01:25:18.658504 kubelet[2708]: E0813 01:25:18.658356 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:25:18.659547 kubelet[2708]: E0813 01:25:18.659529 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:25:18.660824 kubelet[2708]: E0813 01:25:18.660136 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:25:18.669169 sudo[2740]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Aug 13 01:25:18.669427 sudo[2740]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Aug 13 01:25:19.086446 sudo[2740]: pam_unix(sudo:session): session closed for user root Aug 13 01:25:19.230424 kubelet[2708]: I0813 01:25:19.230395 2708 apiserver.go:52] "Watching apiserver" Aug 13 01:25:19.241656 kubelet[2708]: I0813 01:25:19.241405 2708 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 01:25:19.282600 kubelet[2708]: E0813 01:25:19.282581 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:25:19.282991 kubelet[2708]: I0813 01:25:19.282977 2708 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-233-222-13" Aug 13 01:25:19.283175 kubelet[2708]: I0813 01:25:19.283161 2708 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-233-222-13" Aug 13 01:25:19.292717 kubelet[2708]: E0813 01:25:19.292699 2708 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-233-222-13\" already exists" pod="kube-system/kube-scheduler-172-233-222-13" Aug 13 01:25:19.292794 kubelet[2708]: E0813 01:25:19.292780 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:25:19.293285 kubelet[2708]: E0813 
01:25:19.293266 2708 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-233-222-13\" already exists" pod="kube-system/kube-apiserver-172-233-222-13" Aug 13 01:25:19.293500 kubelet[2708]: E0813 01:25:19.293485 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:25:19.312283 kubelet[2708]: I0813 01:25:19.312251 2708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-233-222-13" podStartSLOduration=1.312243654 podStartE2EDuration="1.312243654s" podCreationTimestamp="2025-08-13 01:25:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:25:19.306179512 +0000 UTC m=+1.138990009" watchObservedRunningTime="2025-08-13 01:25:19.312243654 +0000 UTC m=+1.145054151" Aug 13 01:25:19.317971 kubelet[2708]: I0813 01:25:19.317942 2708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-233-222-13" podStartSLOduration=1.317935265 podStartE2EDuration="1.317935265s" podCreationTimestamp="2025-08-13 01:25:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:25:19.312666865 +0000 UTC m=+1.145477362" watchObservedRunningTime="2025-08-13 01:25:19.317935265 +0000 UTC m=+1.150745762" Aug 13 01:25:19.318016 kubelet[2708]: I0813 01:25:19.317993 2708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-233-222-13" podStartSLOduration=1.317990265 podStartE2EDuration="1.317990265s" podCreationTimestamp="2025-08-13 01:25:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-08-13 01:25:19.317660574 +0000 UTC m=+1.150471071" watchObservedRunningTime="2025-08-13 01:25:19.317990265 +0000 UTC m=+1.150800762" Aug 13 01:25:20.242120 sudo[1800]: pam_unix(sudo:session): session closed for user root Aug 13 01:25:20.283754 kubelet[2708]: E0813 01:25:20.283696 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:25:20.283754 kubelet[2708]: E0813 01:25:20.283704 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:25:20.292399 sshd[1799]: Connection closed by 147.75.109.163 port 37792 Aug 13 01:25:20.292805 sshd-session[1797]: pam_unix(sshd:session): session closed for user core Aug 13 01:25:20.297057 systemd[1]: sshd@6-172.233.222.13:22-147.75.109.163:37792.service: Deactivated successfully. Aug 13 01:25:20.299666 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 01:25:20.299842 systemd[1]: session-7.scope: Consumed 3.325s CPU time, 271.1M memory peak. Aug 13 01:25:20.301588 systemd-logind[1532]: Session 7 logged out. Waiting for processes to exit. Aug 13 01:25:20.302528 systemd-logind[1532]: Removed session 7. Aug 13 01:25:21.116857 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Aug 13 01:25:21.491783 kubelet[2708]: E0813 01:25:21.491729 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:25:22.961663 kubelet[2708]: I0813 01:25:22.961625 2708 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 01:25:22.962070 containerd[1555]: time="2025-08-13T01:25:22.961920921Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 01:25:22.962359 kubelet[2708]: I0813 01:25:22.962342 2708 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 01:25:23.751681 systemd[1]: Created slice kubepods-besteffort-pod6d190833_f725_4f75_9505_0d7f84e1125c.slice - libcontainer container kubepods-besteffort-pod6d190833_f725_4f75_9505_0d7f84e1125c.slice. Aug 13 01:25:23.773683 systemd[1]: Created slice kubepods-burstable-pod89c96383_cf88_46bf_a4a6_13402be041b3.slice - libcontainer container kubepods-burstable-pod89c96383_cf88_46bf_a4a6_13402be041b3.slice. 
Aug 13 01:25:23.778691 kubelet[2708]: I0813 01:25:23.777831 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/89c96383-cf88-46bf-a4a6-13402be041b3-clustermesh-secrets\") pod \"cilium-bj2vr\" (UID: \"89c96383-cf88-46bf-a4a6-13402be041b3\") " pod="kube-system/cilium-bj2vr" Aug 13 01:25:23.778794 kubelet[2708]: I0813 01:25:23.778703 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/89c96383-cf88-46bf-a4a6-13402be041b3-host-proc-sys-kernel\") pod \"cilium-bj2vr\" (UID: \"89c96383-cf88-46bf-a4a6-13402be041b3\") " pod="kube-system/cilium-bj2vr" Aug 13 01:25:23.778794 kubelet[2708]: I0813 01:25:23.778724 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/89c96383-cf88-46bf-a4a6-13402be041b3-hubble-tls\") pod \"cilium-bj2vr\" (UID: \"89c96383-cf88-46bf-a4a6-13402be041b3\") " pod="kube-system/cilium-bj2vr" Aug 13 01:25:23.778794 kubelet[2708]: I0813 01:25:23.778740 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d190833-f725-4f75-9505-0d7f84e1125c-lib-modules\") pod \"kube-proxy-rhk77\" (UID: \"6d190833-f725-4f75-9505-0d7f84e1125c\") " pod="kube-system/kube-proxy-rhk77" Aug 13 01:25:23.778794 kubelet[2708]: I0813 01:25:23.778753 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/89c96383-cf88-46bf-a4a6-13402be041b3-cilium-cgroup\") pod \"cilium-bj2vr\" (UID: \"89c96383-cf88-46bf-a4a6-13402be041b3\") " pod="kube-system/cilium-bj2vr" Aug 13 01:25:23.778794 kubelet[2708]: I0813 01:25:23.778764 2708 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6d190833-f725-4f75-9505-0d7f84e1125c-kube-proxy\") pod \"kube-proxy-rhk77\" (UID: \"6d190833-f725-4f75-9505-0d7f84e1125c\") " pod="kube-system/kube-proxy-rhk77" Aug 13 01:25:23.778928 kubelet[2708]: I0813 01:25:23.778775 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6kk2\" (UniqueName: \"kubernetes.io/projected/6d190833-f725-4f75-9505-0d7f84e1125c-kube-api-access-n6kk2\") pod \"kube-proxy-rhk77\" (UID: \"6d190833-f725-4f75-9505-0d7f84e1125c\") " pod="kube-system/kube-proxy-rhk77" Aug 13 01:25:23.778928 kubelet[2708]: I0813 01:25:23.778789 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpj8p\" (UniqueName: \"kubernetes.io/projected/89c96383-cf88-46bf-a4a6-13402be041b3-kube-api-access-tpj8p\") pod \"cilium-bj2vr\" (UID: \"89c96383-cf88-46bf-a4a6-13402be041b3\") " pod="kube-system/cilium-bj2vr" Aug 13 01:25:23.778928 kubelet[2708]: I0813 01:25:23.778800 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/89c96383-cf88-46bf-a4a6-13402be041b3-lib-modules\") pod \"cilium-bj2vr\" (UID: \"89c96383-cf88-46bf-a4a6-13402be041b3\") " pod="kube-system/cilium-bj2vr" Aug 13 01:25:23.778928 kubelet[2708]: I0813 01:25:23.778814 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/89c96383-cf88-46bf-a4a6-13402be041b3-xtables-lock\") pod \"cilium-bj2vr\" (UID: \"89c96383-cf88-46bf-a4a6-13402be041b3\") " pod="kube-system/cilium-bj2vr" Aug 13 01:25:23.778928 kubelet[2708]: I0813 01:25:23.778825 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/89c96383-cf88-46bf-a4a6-13402be041b3-etc-cni-netd\") pod \"cilium-bj2vr\" (UID: \"89c96383-cf88-46bf-a4a6-13402be041b3\") " pod="kube-system/cilium-bj2vr" Aug 13 01:25:23.779014 kubelet[2708]: I0813 01:25:23.778836 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/89c96383-cf88-46bf-a4a6-13402be041b3-cilium-config-path\") pod \"cilium-bj2vr\" (UID: \"89c96383-cf88-46bf-a4a6-13402be041b3\") " pod="kube-system/cilium-bj2vr" Aug 13 01:25:23.779014 kubelet[2708]: I0813 01:25:23.778847 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d190833-f725-4f75-9505-0d7f84e1125c-xtables-lock\") pod \"kube-proxy-rhk77\" (UID: \"6d190833-f725-4f75-9505-0d7f84e1125c\") " pod="kube-system/kube-proxy-rhk77" Aug 13 01:25:23.779014 kubelet[2708]: I0813 01:25:23.778859 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/89c96383-cf88-46bf-a4a6-13402be041b3-cilium-run\") pod \"cilium-bj2vr\" (UID: \"89c96383-cf88-46bf-a4a6-13402be041b3\") " pod="kube-system/cilium-bj2vr" Aug 13 01:25:23.779014 kubelet[2708]: I0813 01:25:23.778869 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/89c96383-cf88-46bf-a4a6-13402be041b3-cni-path\") pod \"cilium-bj2vr\" (UID: \"89c96383-cf88-46bf-a4a6-13402be041b3\") " pod="kube-system/cilium-bj2vr" Aug 13 01:25:23.779014 kubelet[2708]: I0813 01:25:23.778880 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/89c96383-cf88-46bf-a4a6-13402be041b3-bpf-maps\") pod \"cilium-bj2vr\" (UID: 
\"89c96383-cf88-46bf-a4a6-13402be041b3\") " pod="kube-system/cilium-bj2vr" Aug 13 01:25:23.779014 kubelet[2708]: I0813 01:25:23.778890 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/89c96383-cf88-46bf-a4a6-13402be041b3-hostproc\") pod \"cilium-bj2vr\" (UID: \"89c96383-cf88-46bf-a4a6-13402be041b3\") " pod="kube-system/cilium-bj2vr" Aug 13 01:25:23.779109 kubelet[2708]: I0813 01:25:23.778902 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/89c96383-cf88-46bf-a4a6-13402be041b3-host-proc-sys-net\") pod \"cilium-bj2vr\" (UID: \"89c96383-cf88-46bf-a4a6-13402be041b3\") " pod="kube-system/cilium-bj2vr" Aug 13 01:25:24.070966 kubelet[2708]: E0813 01:25:24.070684 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:25:24.071769 containerd[1555]: time="2025-08-13T01:25:24.071638754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rhk77,Uid:6d190833-f725-4f75-9505-0d7f84e1125c,Namespace:kube-system,Attempt:0,}" Aug 13 01:25:24.078319 kubelet[2708]: E0813 01:25:24.078260 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:25:24.079413 containerd[1555]: time="2025-08-13T01:25:24.079339738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bj2vr,Uid:89c96383-cf88-46bf-a4a6-13402be041b3,Namespace:kube-system,Attempt:0,}" Aug 13 01:25:24.100670 containerd[1555]: time="2025-08-13T01:25:24.100581673Z" level=info msg="connecting to shim 0e66dbae091e6c2176373ea621055edd39ede598888e4d649a8d1b7895a4253e" 
address="unix:///run/containerd/s/aea5fbdbe4eb395f4c26de2f00312ba58b450587dfe33023f06752cc6afd3079" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:25:24.114989 systemd[1]: Created slice kubepods-besteffort-pod192b6a6f_9b7f_4883_9bfc_133f6967ebfa.slice - libcontainer container kubepods-besteffort-pod192b6a6f_9b7f_4883_9bfc_133f6967ebfa.slice. Aug 13 01:25:24.131966 containerd[1555]: time="2025-08-13T01:25:24.131932831Z" level=info msg="connecting to shim 75ffb84e4072c18a0bcf3019717096c7836f2f794d227653d6aa9b6d416392b4" address="unix:///run/containerd/s/0c470f55763453da52f5dfe392099dac2261e97d3872621dce8c5bad265c691a" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:25:24.142265 systemd[1]: Started cri-containerd-0e66dbae091e6c2176373ea621055edd39ede598888e4d649a8d1b7895a4253e.scope - libcontainer container 0e66dbae091e6c2176373ea621055edd39ede598888e4d649a8d1b7895a4253e. Aug 13 01:25:24.161744 systemd[1]: Started cri-containerd-75ffb84e4072c18a0bcf3019717096c7836f2f794d227653d6aa9b6d416392b4.scope - libcontainer container 75ffb84e4072c18a0bcf3019717096c7836f2f794d227653d6aa9b6d416392b4. 
Aug 13 01:25:24.182117 kubelet[2708]: I0813 01:25:24.182092 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhbvh\" (UniqueName: \"kubernetes.io/projected/192b6a6f-9b7f-4883-9bfc-133f6967ebfa-kube-api-access-bhbvh\") pod \"cilium-operator-6c4d7847fc-gn8p8\" (UID: \"192b6a6f-9b7f-4883-9bfc-133f6967ebfa\") " pod="kube-system/cilium-operator-6c4d7847fc-gn8p8" Aug 13 01:25:24.182117 kubelet[2708]: I0813 01:25:24.182120 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/192b6a6f-9b7f-4883-9bfc-133f6967ebfa-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-gn8p8\" (UID: \"192b6a6f-9b7f-4883-9bfc-133f6967ebfa\") " pod="kube-system/cilium-operator-6c4d7847fc-gn8p8" Aug 13 01:25:24.183878 containerd[1555]: time="2025-08-13T01:25:24.183853700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rhk77,Uid:6d190833-f725-4f75-9505-0d7f84e1125c,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e66dbae091e6c2176373ea621055edd39ede598888e4d649a8d1b7895a4253e\"" Aug 13 01:25:24.184621 kubelet[2708]: E0813 01:25:24.184599 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:25:24.190672 containerd[1555]: time="2025-08-13T01:25:24.190474232Z" level=info msg="CreateContainer within sandbox \"0e66dbae091e6c2176373ea621055edd39ede598888e4d649a8d1b7895a4253e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 01:25:24.196082 containerd[1555]: time="2025-08-13T01:25:24.196028224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bj2vr,Uid:89c96383-cf88-46bf-a4a6-13402be041b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"75ffb84e4072c18a0bcf3019717096c7836f2f794d227653d6aa9b6d416392b4\"" Aug 
13 01:25:24.196487 kubelet[2708]: E0813 01:25:24.196468 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:25:24.198292 containerd[1555]: time="2025-08-13T01:25:24.198267535Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Aug 13 01:25:24.200374 containerd[1555]: time="2025-08-13T01:25:24.200359457Z" level=info msg="Container 49a3fff38846f97b8a34b169b4669646fe384d2559123f30ff9ba4bef8c50d24: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:25:24.205290 containerd[1555]: time="2025-08-13T01:25:24.205272834Z" level=info msg="CreateContainer within sandbox \"0e66dbae091e6c2176373ea621055edd39ede598888e4d649a8d1b7895a4253e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"49a3fff38846f97b8a34b169b4669646fe384d2559123f30ff9ba4bef8c50d24\"" Aug 13 01:25:24.205673 containerd[1555]: time="2025-08-13T01:25:24.205657720Z" level=info msg="StartContainer for \"49a3fff38846f97b8a34b169b4669646fe384d2559123f30ff9ba4bef8c50d24\"" Aug 13 01:25:24.207938 containerd[1555]: time="2025-08-13T01:25:24.207921351Z" level=info msg="connecting to shim 49a3fff38846f97b8a34b169b4669646fe384d2559123f30ff9ba4bef8c50d24" address="unix:///run/containerd/s/aea5fbdbe4eb395f4c26de2f00312ba58b450587dfe33023f06752cc6afd3079" protocol=ttrpc version=3 Aug 13 01:25:24.225752 systemd[1]: Started cri-containerd-49a3fff38846f97b8a34b169b4669646fe384d2559123f30ff9ba4bef8c50d24.scope - libcontainer container 49a3fff38846f97b8a34b169b4669646fe384d2559123f30ff9ba4bef8c50d24. 
Aug 13 01:25:24.261067 containerd[1555]: time="2025-08-13T01:25:24.261038839Z" level=info msg="StartContainer for \"49a3fff38846f97b8a34b169b4669646fe384d2559123f30ff9ba4bef8c50d24\" returns successfully" Aug 13 01:25:24.293368 kubelet[2708]: E0813 01:25:24.293347 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:25:24.422518 kubelet[2708]: E0813 01:25:24.422215 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:25:24.422784 containerd[1555]: time="2025-08-13T01:25:24.422752544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-gn8p8,Uid:192b6a6f-9b7f-4883-9bfc-133f6967ebfa,Namespace:kube-system,Attempt:0,}" Aug 13 01:25:24.434017 containerd[1555]: time="2025-08-13T01:25:24.433755578Z" level=info msg="connecting to shim 0c814b6acfae37fa420eab1cd926b5fb2c5c1944c51ecb2fc12f8f804734a6ec" address="unix:///run/containerd/s/e0193ea49342181d880c2bffacb0b2e005c4f7a7769aa8f26acfc8ec16b4d03a" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:25:24.456776 systemd[1]: Started cri-containerd-0c814b6acfae37fa420eab1cd926b5fb2c5c1944c51ecb2fc12f8f804734a6ec.scope - libcontainer container 0c814b6acfae37fa420eab1cd926b5fb2c5c1944c51ecb2fc12f8f804734a6ec. 
Aug 13 01:25:24.495936 containerd[1555]: time="2025-08-13T01:25:24.495864880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-gn8p8,Uid:192b6a6f-9b7f-4883-9bfc-133f6967ebfa,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c814b6acfae37fa420eab1cd926b5fb2c5c1944c51ecb2fc12f8f804734a6ec\"" Aug 13 01:25:24.496566 kubelet[2708]: E0813 01:25:24.496518 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:25:27.256169 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount255374562.mount: Deactivated successfully. Aug 13 01:25:27.987327 kubelet[2708]: E0813 01:25:27.987218 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:25:28.005434 kubelet[2708]: I0813 01:25:28.005273 2708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rhk77" podStartSLOduration=5.005264484 podStartE2EDuration="5.005264484s" podCreationTimestamp="2025-08-13 01:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:25:24.303805408 +0000 UTC m=+6.136615905" watchObservedRunningTime="2025-08-13 01:25:28.005264484 +0000 UTC m=+9.838074981" Aug 13 01:25:28.302913 kubelet[2708]: E0813 01:25:28.302540 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:25:28.342525 kubelet[2708]: I0813 01:25:28.342478 2708 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:25:28.343674 kubelet[2708]: I0813 01:25:28.342727 2708 
container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:25:28.345786 kubelet[2708]: I0813 01:25:28.345727 2708 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:25:28.356737 kubelet[2708]: I0813 01:25:28.356492 2708 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:25:28.356890 kubelet[2708]: I0813 01:25:28.356873 2708 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-gn8p8","kube-system/cilium-bj2vr","kube-system/kube-controller-manager-172-233-222-13","kube-system/kube-apiserver-172-233-222-13","kube-system/kube-scheduler-172-233-222-13","kube-system/kube-proxy-rhk77"] Aug 13 01:25:28.356964 kubelet[2708]: E0813 01:25:28.356953 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-gn8p8" Aug 13 01:25:28.357012 kubelet[2708]: E0813 01:25:28.357004 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-bj2vr" Aug 13 01:25:28.357059 kubelet[2708]: E0813 01:25:28.357051 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-222-13" Aug 13 01:25:28.357106 kubelet[2708]: E0813 01:25:28.357098 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-222-13" Aug 13 01:25:28.357159 kubelet[2708]: E0813 01:25:28.357150 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-222-13" Aug 13 01:25:28.357220 kubelet[2708]: E0813 01:25:28.357198 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-rhk77" Aug 13 01:25:28.357220 kubelet[2708]: I0813 01:25:28.357211 2708 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 
13 01:25:28.639878 containerd[1555]: time="2025-08-13T01:25:28.639766285Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:25:28.640946 containerd[1555]: time="2025-08-13T01:25:28.640917887Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Aug 13 01:25:28.641367 containerd[1555]: time="2025-08-13T01:25:28.641336555Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:25:28.642630 containerd[1555]: time="2025-08-13T01:25:28.642537527Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 4.444074844s" Aug 13 01:25:28.642630 containerd[1555]: time="2025-08-13T01:25:28.642572096Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Aug 13 01:25:28.645137 containerd[1555]: time="2025-08-13T01:25:28.645069311Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Aug 13 01:25:28.646696 containerd[1555]: time="2025-08-13T01:25:28.646608591Z" level=info msg="CreateContainer within sandbox \"75ffb84e4072c18a0bcf3019717096c7836f2f794d227653d6aa9b6d416392b4\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 01:25:28.658195 containerd[1555]: time="2025-08-13T01:25:28.657638112Z" level=info msg="Container 4772d7d7d60bef3a0b4077163504b3a970adaa77c83afb5247ce34e6739feaf7: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:25:28.658855 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2551665875.mount: Deactivated successfully. Aug 13 01:25:28.664974 containerd[1555]: time="2025-08-13T01:25:28.664942517Z" level=info msg="CreateContainer within sandbox \"75ffb84e4072c18a0bcf3019717096c7836f2f794d227653d6aa9b6d416392b4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4772d7d7d60bef3a0b4077163504b3a970adaa77c83afb5247ce34e6739feaf7\"" Aug 13 01:25:28.665601 containerd[1555]: time="2025-08-13T01:25:28.665525683Z" level=info msg="StartContainer for \"4772d7d7d60bef3a0b4077163504b3a970adaa77c83afb5247ce34e6739feaf7\"" Aug 13 01:25:28.666478 containerd[1555]: time="2025-08-13T01:25:28.666448067Z" level=info msg="connecting to shim 4772d7d7d60bef3a0b4077163504b3a970adaa77c83afb5247ce34e6739feaf7" address="unix:///run/containerd/s/0c470f55763453da52f5dfe392099dac2261e97d3872621dce8c5bad265c691a" protocol=ttrpc version=3 Aug 13 01:25:28.685757 systemd[1]: Started cri-containerd-4772d7d7d60bef3a0b4077163504b3a970adaa77c83afb5247ce34e6739feaf7.scope - libcontainer container 4772d7d7d60bef3a0b4077163504b3a970adaa77c83afb5247ce34e6739feaf7. Aug 13 01:25:28.711514 containerd[1555]: time="2025-08-13T01:25:28.711471556Z" level=info msg="StartContainer for \"4772d7d7d60bef3a0b4077163504b3a970adaa77c83afb5247ce34e6739feaf7\" returns successfully" Aug 13 01:25:28.722383 systemd[1]: cri-containerd-4772d7d7d60bef3a0b4077163504b3a970adaa77c83afb5247ce34e6739feaf7.scope: Deactivated successfully. 
Aug 13 01:25:28.725321 containerd[1555]: time="2025-08-13T01:25:28.725272030Z" level=info msg="received exit event container_id:\"4772d7d7d60bef3a0b4077163504b3a970adaa77c83afb5247ce34e6739feaf7\" id:\"4772d7d7d60bef3a0b4077163504b3a970adaa77c83afb5247ce34e6739feaf7\" pid:3125 exited_at:{seconds:1755048328 nanos:724780903}" Aug 13 01:25:28.725371 containerd[1555]: time="2025-08-13T01:25:28.725303789Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4772d7d7d60bef3a0b4077163504b3a970adaa77c83afb5247ce34e6739feaf7\" id:\"4772d7d7d60bef3a0b4077163504b3a970adaa77c83afb5247ce34e6739feaf7\" pid:3125 exited_at:{seconds:1755048328 nanos:724780903}" Aug 13 01:25:29.194212 kubelet[2708]: E0813 01:25:29.194163 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:25:29.304091 kubelet[2708]: E0813 01:25:29.304051 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:25:29.304690 kubelet[2708]: E0813 01:25:29.304549 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:25:29.311668 containerd[1555]: time="2025-08-13T01:25:29.310841106Z" level=info msg="CreateContainer within sandbox \"75ffb84e4072c18a0bcf3019717096c7836f2f794d227653d6aa9b6d416392b4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 01:25:29.322795 containerd[1555]: time="2025-08-13T01:25:29.322763947Z" level=info msg="Container 922bdebe8520c980cac13f159cf10ef090726d6ba8bc8ad1e72979fe14f54c58: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:25:29.328468 containerd[1555]: time="2025-08-13T01:25:29.328441445Z" level=info 
msg="CreateContainer within sandbox \"75ffb84e4072c18a0bcf3019717096c7836f2f794d227653d6aa9b6d416392b4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"922bdebe8520c980cac13f159cf10ef090726d6ba8bc8ad1e72979fe14f54c58\"" Aug 13 01:25:29.328956 containerd[1555]: time="2025-08-13T01:25:29.328933773Z" level=info msg="StartContainer for \"922bdebe8520c980cac13f159cf10ef090726d6ba8bc8ad1e72979fe14f54c58\"" Aug 13 01:25:29.332076 containerd[1555]: time="2025-08-13T01:25:29.332050524Z" level=info msg="connecting to shim 922bdebe8520c980cac13f159cf10ef090726d6ba8bc8ad1e72979fe14f54c58" address="unix:///run/containerd/s/0c470f55763453da52f5dfe392099dac2261e97d3872621dce8c5bad265c691a" protocol=ttrpc version=3 Aug 13 01:25:29.349776 systemd[1]: Started cri-containerd-922bdebe8520c980cac13f159cf10ef090726d6ba8bc8ad1e72979fe14f54c58.scope - libcontainer container 922bdebe8520c980cac13f159cf10ef090726d6ba8bc8ad1e72979fe14f54c58. Aug 13 01:25:29.381291 containerd[1555]: time="2025-08-13T01:25:29.381244132Z" level=info msg="StartContainer for \"922bdebe8520c980cac13f159cf10ef090726d6ba8bc8ad1e72979fe14f54c58\" returns successfully" Aug 13 01:25:29.396388 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 01:25:29.396579 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 01:25:29.397182 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Aug 13 01:25:29.398862 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 01:25:29.401473 systemd[1]: cri-containerd-922bdebe8520c980cac13f159cf10ef090726d6ba8bc8ad1e72979fe14f54c58.scope: Deactivated successfully. 
Aug 13 01:25:29.401731 containerd[1555]: time="2025-08-13T01:25:29.401484426Z" level=info msg="received exit event container_id:\"922bdebe8520c980cac13f159cf10ef090726d6ba8bc8ad1e72979fe14f54c58\" id:\"922bdebe8520c980cac13f159cf10ef090726d6ba8bc8ad1e72979fe14f54c58\" pid:3176 exited_at:{seconds:1755048329 nanos:401324606}" Aug 13 01:25:29.403751 containerd[1555]: time="2025-08-13T01:25:29.403717303Z" level=info msg="TaskExit event in podsandbox handler container_id:\"922bdebe8520c980cac13f159cf10ef090726d6ba8bc8ad1e72979fe14f54c58\" id:\"922bdebe8520c980cac13f159cf10ef090726d6ba8bc8ad1e72979fe14f54c58\" pid:3176 exited_at:{seconds:1755048329 nanos:401324606}" Aug 13 01:25:29.423590 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 01:25:29.655830 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4772d7d7d60bef3a0b4077163504b3a970adaa77c83afb5247ce34e6739feaf7-rootfs.mount: Deactivated successfully. Aug 13 01:25:29.725877 containerd[1555]: time="2025-08-13T01:25:29.725825294Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:25:29.726789 containerd[1555]: time="2025-08-13T01:25:29.726519840Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Aug 13 01:25:29.727274 containerd[1555]: time="2025-08-13T01:25:29.727245146Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:25:29.728478 containerd[1555]: time="2025-08-13T01:25:29.728451058Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id 
\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.083219729s" Aug 13 01:25:29.728620 containerd[1555]: time="2025-08-13T01:25:29.728558299Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Aug 13 01:25:29.731345 containerd[1555]: time="2025-08-13T01:25:29.731318182Z" level=info msg="CreateContainer within sandbox \"0c814b6acfae37fa420eab1cd926b5fb2c5c1944c51ecb2fc12f8f804734a6ec\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Aug 13 01:25:29.741529 containerd[1555]: time="2025-08-13T01:25:29.739885613Z" level=info msg="Container bb5d137435e0267451d548d48f72b01ce6726231d8ee5ce9d950090e87a6e322: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:25:29.745418 containerd[1555]: time="2025-08-13T01:25:29.745386411Z" level=info msg="CreateContainer within sandbox \"0c814b6acfae37fa420eab1cd926b5fb2c5c1944c51ecb2fc12f8f804734a6ec\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"bb5d137435e0267451d548d48f72b01ce6726231d8ee5ce9d950090e87a6e322\"" Aug 13 01:25:29.746050 containerd[1555]: time="2025-08-13T01:25:29.745840389Z" level=info msg="StartContainer for \"bb5d137435e0267451d548d48f72b01ce6726231d8ee5ce9d950090e87a6e322\"" Aug 13 01:25:29.746942 containerd[1555]: time="2025-08-13T01:25:29.746812603Z" level=info msg="connecting to shim bb5d137435e0267451d548d48f72b01ce6726231d8ee5ce9d950090e87a6e322" address="unix:///run/containerd/s/e0193ea49342181d880c2bffacb0b2e005c4f7a7769aa8f26acfc8ec16b4d03a" protocol=ttrpc version=3 Aug 13 01:25:29.766753 systemd[1]: Started 
cri-containerd-bb5d137435e0267451d548d48f72b01ce6726231d8ee5ce9d950090e87a6e322.scope - libcontainer container bb5d137435e0267451d548d48f72b01ce6726231d8ee5ce9d950090e87a6e322. Aug 13 01:25:29.800946 containerd[1555]: time="2025-08-13T01:25:29.800911623Z" level=info msg="StartContainer for \"bb5d137435e0267451d548d48f72b01ce6726231d8ee5ce9d950090e87a6e322\" returns successfully" Aug 13 01:25:30.309668 kubelet[2708]: E0813 01:25:30.309618 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:25:30.311405 kubelet[2708]: E0813 01:25:30.311363 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:25:30.312297 containerd[1555]: time="2025-08-13T01:25:30.312245799Z" level=info msg="CreateContainer within sandbox \"75ffb84e4072c18a0bcf3019717096c7836f2f794d227653d6aa9b6d416392b4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 01:25:30.328518 containerd[1555]: time="2025-08-13T01:25:30.326887452Z" level=info msg="Container 9b6a4e5c10f3e3265656b1e934792d71eee3345985ac6884afcd919eff7d79ec: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:25:30.340521 containerd[1555]: time="2025-08-13T01:25:30.340370321Z" level=info msg="CreateContainer within sandbox \"75ffb84e4072c18a0bcf3019717096c7836f2f794d227653d6aa9b6d416392b4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9b6a4e5c10f3e3265656b1e934792d71eee3345985ac6884afcd919eff7d79ec\"" Aug 13 01:25:30.343660 containerd[1555]: time="2025-08-13T01:25:30.340853358Z" level=info msg="StartContainer for \"9b6a4e5c10f3e3265656b1e934792d71eee3345985ac6884afcd919eff7d79ec\"" Aug 13 01:25:30.344824 containerd[1555]: time="2025-08-13T01:25:30.344788388Z" level=info msg="connecting to shim 
9b6a4e5c10f3e3265656b1e934792d71eee3345985ac6884afcd919eff7d79ec" address="unix:///run/containerd/s/0c470f55763453da52f5dfe392099dac2261e97d3872621dce8c5bad265c691a" protocol=ttrpc version=3 Aug 13 01:25:30.374432 systemd[1]: Started cri-containerd-9b6a4e5c10f3e3265656b1e934792d71eee3345985ac6884afcd919eff7d79ec.scope - libcontainer container 9b6a4e5c10f3e3265656b1e934792d71eee3345985ac6884afcd919eff7d79ec. Aug 13 01:25:30.460298 containerd[1555]: time="2025-08-13T01:25:30.460258231Z" level=info msg="StartContainer for \"9b6a4e5c10f3e3265656b1e934792d71eee3345985ac6884afcd919eff7d79ec\" returns successfully" Aug 13 01:25:30.460861 systemd[1]: cri-containerd-9b6a4e5c10f3e3265656b1e934792d71eee3345985ac6884afcd919eff7d79ec.scope: Deactivated successfully. Aug 13 01:25:30.462420 containerd[1555]: time="2025-08-13T01:25:30.462389680Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9b6a4e5c10f3e3265656b1e934792d71eee3345985ac6884afcd919eff7d79ec\" id:\"9b6a4e5c10f3e3265656b1e934792d71eee3345985ac6884afcd919eff7d79ec\" pid:3265 exited_at:{seconds:1755048330 nanos:462250681}" Aug 13 01:25:30.462460 containerd[1555]: time="2025-08-13T01:25:30.462452419Z" level=info msg="received exit event container_id:\"9b6a4e5c10f3e3265656b1e934792d71eee3345985ac6884afcd919eff7d79ec\" id:\"9b6a4e5c10f3e3265656b1e934792d71eee3345985ac6884afcd919eff7d79ec\" pid:3265 exited_at:{seconds:1755048330 nanos:462250681}" Aug 13 01:25:31.320091 kubelet[2708]: E0813 01:25:31.320045 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:25:31.322273 kubelet[2708]: E0813 01:25:31.322166 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:25:31.324100 containerd[1555]: 
time="2025-08-13T01:25:31.323976787Z" level=info msg="CreateContainer within sandbox \"75ffb84e4072c18a0bcf3019717096c7836f2f794d227653d6aa9b6d416392b4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 13 01:25:31.335759 containerd[1555]: time="2025-08-13T01:25:31.335708770Z" level=info msg="Container 1995f07118f6be887e69fa07568bcce0b8e40cfd016ae967ddb38f9caa2c9fdf: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:25:31.345297 kubelet[2708]: I0813 01:25:31.345240 2708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-gn8p8" podStartSLOduration=2.1131056089999998 podStartE2EDuration="7.345215625s" podCreationTimestamp="2025-08-13 01:25:24 +0000 UTC" firstStartedPulling="2025-08-13 01:25:24.497061599 +0000 UTC m=+6.329872096" lastFinishedPulling="2025-08-13 01:25:29.729171615 +0000 UTC m=+11.561982112" observedRunningTime="2025-08-13 01:25:30.381316616 +0000 UTC m=+12.214127103" watchObservedRunningTime="2025-08-13 01:25:31.345215625 +0000 UTC m=+13.178026122" Aug 13 01:25:31.347098 containerd[1555]: time="2025-08-13T01:25:31.347065616Z" level=info msg="CreateContainer within sandbox \"75ffb84e4072c18a0bcf3019717096c7836f2f794d227653d6aa9b6d416392b4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1995f07118f6be887e69fa07568bcce0b8e40cfd016ae967ddb38f9caa2c9fdf\"" Aug 13 01:25:31.348739 containerd[1555]: time="2025-08-13T01:25:31.348719138Z" level=info msg="StartContainer for \"1995f07118f6be887e69fa07568bcce0b8e40cfd016ae967ddb38f9caa2c9fdf\"" Aug 13 01:25:31.349561 containerd[1555]: time="2025-08-13T01:25:31.349534424Z" level=info msg="connecting to shim 1995f07118f6be887e69fa07568bcce0b8e40cfd016ae967ddb38f9caa2c9fdf" address="unix:///run/containerd/s/0c470f55763453da52f5dfe392099dac2261e97d3872621dce8c5bad265c691a" protocol=ttrpc version=3 Aug 13 01:25:31.373777 systemd[1]: Started 
cri-containerd-1995f07118f6be887e69fa07568bcce0b8e40cfd016ae967ddb38f9caa2c9fdf.scope - libcontainer container 1995f07118f6be887e69fa07568bcce0b8e40cfd016ae967ddb38f9caa2c9fdf. Aug 13 01:25:31.405754 systemd[1]: cri-containerd-1995f07118f6be887e69fa07568bcce0b8e40cfd016ae967ddb38f9caa2c9fdf.scope: Deactivated successfully. Aug 13 01:25:31.412847 containerd[1555]: time="2025-08-13T01:25:31.412726141Z" level=info msg="received exit event container_id:\"1995f07118f6be887e69fa07568bcce0b8e40cfd016ae967ddb38f9caa2c9fdf\" id:\"1995f07118f6be887e69fa07568bcce0b8e40cfd016ae967ddb38f9caa2c9fdf\" pid:3303 exited_at:{seconds:1755048331 nanos:409036899}" Aug 13 01:25:31.413152 containerd[1555]: time="2025-08-13T01:25:31.413058969Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1995f07118f6be887e69fa07568bcce0b8e40cfd016ae967ddb38f9caa2c9fdf\" id:\"1995f07118f6be887e69fa07568bcce0b8e40cfd016ae967ddb38f9caa2c9fdf\" pid:3303 exited_at:{seconds:1755048331 nanos:409036899}" Aug 13 01:25:31.414701 containerd[1555]: time="2025-08-13T01:25:31.414103544Z" level=info msg="StartContainer for \"1995f07118f6be887e69fa07568bcce0b8e40cfd016ae967ddb38f9caa2c9fdf\" returns successfully" Aug 13 01:25:31.436553 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1995f07118f6be887e69fa07568bcce0b8e40cfd016ae967ddb38f9caa2c9fdf-rootfs.mount: Deactivated successfully. 
Aug 13 01:25:31.496630 kubelet[2708]: E0813 01:25:31.496553 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:25:32.323525 kubelet[2708]: E0813 01:25:32.323479 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:25:32.338905 containerd[1555]: time="2025-08-13T01:25:32.338806347Z" level=info msg="CreateContainer within sandbox \"75ffb84e4072c18a0bcf3019717096c7836f2f794d227653d6aa9b6d416392b4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 13 01:25:32.369221 containerd[1555]: time="2025-08-13T01:25:32.368910965Z" level=info msg="Container f8866336b7dcfe742a9ad6092f861521738853f3e9587926a4a1ca4814ed3509: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:25:32.378114 containerd[1555]: time="2025-08-13T01:25:32.378073815Z" level=info msg="CreateContainer within sandbox \"75ffb84e4072c18a0bcf3019717096c7836f2f794d227653d6aa9b6d416392b4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f8866336b7dcfe742a9ad6092f861521738853f3e9587926a4a1ca4814ed3509\"" Aug 13 01:25:32.380256 containerd[1555]: time="2025-08-13T01:25:32.380061467Z" level=info msg="StartContainer for \"f8866336b7dcfe742a9ad6092f861521738853f3e9587926a4a1ca4814ed3509\"" Aug 13 01:25:32.383618 containerd[1555]: time="2025-08-13T01:25:32.383598131Z" level=info msg="connecting to shim f8866336b7dcfe742a9ad6092f861521738853f3e9587926a4a1ca4814ed3509" address="unix:///run/containerd/s/0c470f55763453da52f5dfe392099dac2261e97d3872621dce8c5bad265c691a" protocol=ttrpc version=3 Aug 13 01:25:32.421907 systemd[1]: Started cri-containerd-f8866336b7dcfe742a9ad6092f861521738853f3e9587926a4a1ca4814ed3509.scope - libcontainer container 
f8866336b7dcfe742a9ad6092f861521738853f3e9587926a4a1ca4814ed3509. Aug 13 01:25:32.465360 containerd[1555]: time="2025-08-13T01:25:32.465315793Z" level=info msg="StartContainer for \"f8866336b7dcfe742a9ad6092f861521738853f3e9587926a4a1ca4814ed3509\" returns successfully" Aug 13 01:25:32.557461 containerd[1555]: time="2025-08-13T01:25:32.557352220Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f8866336b7dcfe742a9ad6092f861521738853f3e9587926a4a1ca4814ed3509\" id:\"768f86df2dcc34fbf9b38a5050d8238137082683e20b7f391f5cac118b8733d0\" pid:3370 exited_at:{seconds:1755048332 nanos:555574377}" Aug 13 01:25:32.596919 kubelet[2708]: I0813 01:25:32.596782 2708 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Aug 13 01:25:33.331098 kubelet[2708]: E0813 01:25:33.331032 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:25:33.348189 kubelet[2708]: I0813 01:25:33.348102 2708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bj2vr" podStartSLOduration=5.901755989 podStartE2EDuration="10.348081417s" podCreationTimestamp="2025-08-13 01:25:23 +0000 UTC" firstStartedPulling="2025-08-13 01:25:24.197442502 +0000 UTC m=+6.030252999" lastFinishedPulling="2025-08-13 01:25:28.64376793 +0000 UTC m=+10.476578427" observedRunningTime="2025-08-13 01:25:33.346884102 +0000 UTC m=+15.179694599" watchObservedRunningTime="2025-08-13 01:25:33.348081417 +0000 UTC m=+15.180891914" Aug 13 01:25:34.332896 kubelet[2708]: E0813 01:25:34.332826 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:25:34.696984 systemd-networkd[1464]: cilium_host: Link UP Aug 13 01:25:34.697151 systemd-networkd[1464]: cilium_net: Link UP Aug 
13 01:25:34.697328 systemd-networkd[1464]: cilium_net: Gained carrier Aug 13 01:25:34.697495 systemd-networkd[1464]: cilium_host: Gained carrier Aug 13 01:25:34.722752 systemd-networkd[1464]: cilium_host: Gained IPv6LL Aug 13 01:25:34.800779 systemd-networkd[1464]: cilium_vxlan: Link UP Aug 13 01:25:34.800792 systemd-networkd[1464]: cilium_vxlan: Gained carrier Aug 13 01:25:35.006707 kernel: NET: Registered PF_ALG protocol family Aug 13 01:25:35.287587 systemd-networkd[1464]: cilium_net: Gained IPv6LL Aug 13 01:25:35.335399 kubelet[2708]: E0813 01:25:35.335356 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:25:35.611171 update_engine[1534]: I20250813 01:25:35.610723 1534 update_attempter.cc:509] Updating boot flags... Aug 13 01:25:35.613417 systemd-networkd[1464]: lxc_health: Link UP Aug 13 01:25:35.616820 systemd-networkd[1464]: lxc_health: Gained carrier Aug 13 01:25:36.336488 kubelet[2708]: E0813 01:25:36.336444 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:25:36.824588 systemd-networkd[1464]: cilium_vxlan: Gained IPv6LL Aug 13 01:25:37.529114 systemd-networkd[1464]: lxc_health: Gained IPv6LL Aug 13 01:25:38.388559 kubelet[2708]: I0813 01:25:38.388504 2708 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:25:38.388559 kubelet[2708]: I0813 01:25:38.388557 2708 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:25:38.389960 kubelet[2708]: I0813 01:25:38.389928 2708 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:25:38.409960 kubelet[2708]: I0813 01:25:38.409922 2708 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" 
resourceName="ephemeral-storage" Aug 13 01:25:38.411678 kubelet[2708]: I0813 01:25:38.410078 2708 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-gn8p8","kube-system/kube-controller-manager-172-233-222-13","kube-system/kube-proxy-rhk77","kube-system/kube-apiserver-172-233-222-13","kube-system/cilium-bj2vr","kube-system/kube-scheduler-172-233-222-13"] Aug 13 01:25:38.411678 kubelet[2708]: E0813 01:25:38.410115 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-gn8p8" Aug 13 01:25:38.411678 kubelet[2708]: E0813 01:25:38.410128 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-222-13" Aug 13 01:25:38.411678 kubelet[2708]: E0813 01:25:38.410153 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-rhk77" Aug 13 01:25:38.411678 kubelet[2708]: E0813 01:25:38.410165 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-222-13" Aug 13 01:25:38.411678 kubelet[2708]: E0813 01:25:38.410176 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-bj2vr" Aug 13 01:25:38.411678 kubelet[2708]: E0813 01:25:38.410185 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-222-13" Aug 13 01:25:38.411678 kubelet[2708]: I0813 01:25:38.410195 2708 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:25:45.181967 kubelet[2708]: I0813 01:25:45.181909 2708 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 01:25:45.186192 kubelet[2708]: E0813 01:25:45.183169 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:25:45.351345 kubelet[2708]: E0813 01:25:45.351300 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:25:48.422060 kubelet[2708]: I0813 01:25:48.422028 2708 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:25:48.422060 kubelet[2708]: I0813 01:25:48.422062 2708 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:25:48.424952 kubelet[2708]: I0813 01:25:48.424913 2708 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:25:48.432972 kubelet[2708]: I0813 01:25:48.432960 2708 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:25:48.433066 kubelet[2708]: I0813 01:25:48.433045 2708 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-gn8p8","kube-system/cilium-bj2vr","kube-system/kube-controller-manager-172-233-222-13","kube-system/kube-proxy-rhk77","kube-system/kube-apiserver-172-233-222-13","kube-system/kube-scheduler-172-233-222-13"] Aug 13 01:25:48.433109 kubelet[2708]: E0813 01:25:48.433081 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-gn8p8" Aug 13 01:25:48.433109 kubelet[2708]: E0813 01:25:48.433090 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-bj2vr" Aug 13 01:25:48.433109 kubelet[2708]: E0813 01:25:48.433097 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-222-13" Aug 13 01:25:48.433109 kubelet[2708]: E0813 01:25:48.433103 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" 
pod="kube-system/kube-proxy-rhk77" Aug 13 01:25:48.433109 kubelet[2708]: E0813 01:25:48.433110 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-222-13" Aug 13 01:25:48.433200 kubelet[2708]: E0813 01:25:48.433116 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-222-13" Aug 13 01:25:48.433200 kubelet[2708]: I0813 01:25:48.433124 2708 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:25:58.449169 kubelet[2708]: I0813 01:25:58.449109 2708 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:25:58.449169 kubelet[2708]: I0813 01:25:58.449163 2708 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:25:58.451706 kubelet[2708]: I0813 01:25:58.451686 2708 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:25:58.463907 kubelet[2708]: I0813 01:25:58.463858 2708 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:25:58.463964 kubelet[2708]: I0813 01:25:58.463945 2708 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-gn8p8","kube-system/cilium-bj2vr","kube-system/kube-controller-manager-172-233-222-13","kube-system/kube-proxy-rhk77","kube-system/kube-apiserver-172-233-222-13","kube-system/kube-scheduler-172-233-222-13"] Aug 13 01:25:58.464003 kubelet[2708]: E0813 01:25:58.463988 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-gn8p8" Aug 13 01:25:58.464003 kubelet[2708]: E0813 01:25:58.464002 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-bj2vr" Aug 13 01:25:58.464053 kubelet[2708]: E0813 01:25:58.464016 2708 eviction_manager.go:609] "Eviction 
manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-222-13" Aug 13 01:25:58.464053 kubelet[2708]: E0813 01:25:58.464025 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-rhk77" Aug 13 01:25:58.464053 kubelet[2708]: E0813 01:25:58.464033 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-222-13" Aug 13 01:25:58.464053 kubelet[2708]: E0813 01:25:58.464040 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-222-13" Aug 13 01:25:58.464053 kubelet[2708]: I0813 01:25:58.464049 2708 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:26:08.483666 kubelet[2708]: I0813 01:26:08.483581 2708 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:26:08.483666 kubelet[2708]: I0813 01:26:08.483634 2708 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:26:08.485450 kubelet[2708]: I0813 01:26:08.485413 2708 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:26:08.498574 kubelet[2708]: I0813 01:26:08.498523 2708 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:26:08.498763 kubelet[2708]: I0813 01:26:08.498620 2708 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-gn8p8","kube-system/cilium-bj2vr","kube-system/kube-controller-manager-172-233-222-13","kube-system/kube-proxy-rhk77","kube-system/kube-apiserver-172-233-222-13","kube-system/kube-scheduler-172-233-222-13"] Aug 13 01:26:08.498763 kubelet[2708]: E0813 01:26:08.498689 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-gn8p8" Aug 13 01:26:08.498763 kubelet[2708]: E0813 
01:26:08.498703 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-bj2vr" Aug 13 01:26:08.498763 kubelet[2708]: E0813 01:26:08.498716 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-222-13" Aug 13 01:26:08.498763 kubelet[2708]: E0813 01:26:08.498732 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-rhk77" Aug 13 01:26:08.498763 kubelet[2708]: E0813 01:26:08.498740 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-222-13" Aug 13 01:26:08.498763 kubelet[2708]: E0813 01:26:08.498747 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-222-13" Aug 13 01:26:08.498763 kubelet[2708]: I0813 01:26:08.498757 2708 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:26:18.514101 kubelet[2708]: I0813 01:26:18.514050 2708 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:26:18.514487 kubelet[2708]: I0813 01:26:18.514119 2708 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:26:18.517378 kubelet[2708]: I0813 01:26:18.517197 2708 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:26:18.531220 kubelet[2708]: I0813 01:26:18.531185 2708 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:26:18.531396 kubelet[2708]: I0813 01:26:18.531361 2708 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-gn8p8","kube-system/cilium-bj2vr","kube-system/kube-proxy-rhk77","kube-system/kube-controller-manager-172-233-222-13","kube-system/kube-apiserver-172-233-222-13","kube-system/kube-scheduler-172-233-222-13"] Aug 13 
01:26:18.531481 kubelet[2708]: E0813 01:26:18.531432 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-gn8p8" Aug 13 01:26:18.531481 kubelet[2708]: E0813 01:26:18.531454 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-bj2vr" Aug 13 01:26:18.531481 kubelet[2708]: E0813 01:26:18.531463 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-rhk77" Aug 13 01:26:18.531566 kubelet[2708]: E0813 01:26:18.531495 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-222-13" Aug 13 01:26:18.531566 kubelet[2708]: E0813 01:26:18.531506 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-222-13" Aug 13 01:26:18.531566 kubelet[2708]: E0813 01:26:18.531514 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-222-13" Aug 13 01:26:18.531566 kubelet[2708]: I0813 01:26:18.531525 2708 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:26:28.548182 kubelet[2708]: I0813 01:26:28.548138 2708 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:26:28.548182 kubelet[2708]: I0813 01:26:28.548199 2708 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:26:28.549962 kubelet[2708]: I0813 01:26:28.549942 2708 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:26:28.561890 kubelet[2708]: I0813 01:26:28.561851 2708 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:26:28.562116 kubelet[2708]: I0813 01:26:28.561985 2708 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" 
pods=["kube-system/cilium-operator-6c4d7847fc-gn8p8","kube-system/cilium-bj2vr","kube-system/kube-controller-manager-172-233-222-13","kube-system/kube-proxy-rhk77","kube-system/kube-apiserver-172-233-222-13","kube-system/kube-scheduler-172-233-222-13"] Aug 13 01:26:28.562116 kubelet[2708]: E0813 01:26:28.562023 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-gn8p8" Aug 13 01:26:28.562116 kubelet[2708]: E0813 01:26:28.562037 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-bj2vr" Aug 13 01:26:28.562116 kubelet[2708]: E0813 01:26:28.562046 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-222-13" Aug 13 01:26:28.562116 kubelet[2708]: E0813 01:26:28.562055 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-rhk77" Aug 13 01:26:28.562116 kubelet[2708]: E0813 01:26:28.562064 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-222-13" Aug 13 01:26:28.562116 kubelet[2708]: E0813 01:26:28.562072 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-222-13" Aug 13 01:26:28.562116 kubelet[2708]: I0813 01:26:28.562079 2708 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:26:29.253437 kubelet[2708]: E0813 01:26:29.253342 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:26:38.584984 kubelet[2708]: I0813 01:26:38.584912 2708 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:26:38.584984 kubelet[2708]: I0813 01:26:38.584981 2708 
container_gc.go:86] "Attempting to delete unused containers"
Aug 13 01:26:38.587556 kubelet[2708]: I0813 01:26:38.587476 2708 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:26:38.599371 kubelet[2708]: I0813 01:26:38.599335 2708 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:26:38.599489 kubelet[2708]: I0813 01:26:38.599454 2708 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-gn8p8","kube-system/cilium-bj2vr","kube-system/kube-controller-manager-172-233-222-13","kube-system/kube-proxy-rhk77","kube-system/kube-apiserver-172-233-222-13","kube-system/kube-scheduler-172-233-222-13"]
Aug 13 01:26:38.599530 kubelet[2708]: E0813 01:26:38.599499 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-gn8p8"
Aug 13 01:26:38.599530 kubelet[2708]: E0813 01:26:38.599514 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-bj2vr"
Aug 13 01:26:38.599530 kubelet[2708]: E0813 01:26:38.599524 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-222-13"
Aug 13 01:26:38.599530 kubelet[2708]: E0813 01:26:38.599534 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-rhk77"
Aug 13 01:26:38.599626 kubelet[2708]: E0813 01:26:38.599545 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-222-13"
Aug 13 01:26:38.599626 kubelet[2708]: E0813 01:26:38.599555 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-222-13"
Aug 13 01:26:38.599626 kubelet[2708]: I0813 01:26:38.599564 2708 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:26:47.253404 kubelet[2708]: E0813 01:26:47.253317 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:26:48.612283 kubelet[2708]: I0813 01:26:48.612244 2708 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:26:48.612283 kubelet[2708]: I0813 01:26:48.612286 2708 container_gc.go:86] "Attempting to delete unused containers"
Aug 13 01:26:48.613903 kubelet[2708]: I0813 01:26:48.613887 2708 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:26:48.623233 kubelet[2708]: I0813 01:26:48.623206 2708 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:26:48.623282 kubelet[2708]: I0813 01:26:48.623269 2708 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-gn8p8","kube-system/cilium-bj2vr","kube-system/kube-proxy-rhk77","kube-system/kube-controller-manager-172-233-222-13","kube-system/kube-apiserver-172-233-222-13","kube-system/kube-scheduler-172-233-222-13"]
Aug 13 01:26:48.623310 kubelet[2708]: E0813 01:26:48.623300 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-gn8p8"
Aug 13 01:26:48.623332 kubelet[2708]: E0813 01:26:48.623311 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-bj2vr"
Aug 13 01:26:48.623332 kubelet[2708]: E0813 01:26:48.623319 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-rhk77"
Aug 13 01:26:48.623332 kubelet[2708]: E0813 01:26:48.623327 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-222-13"
Aug 13 01:26:48.623396 kubelet[2708]: E0813 01:26:48.623335 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-222-13"
Aug 13 01:26:48.623396 kubelet[2708]: E0813 01:26:48.623344 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-222-13"
Aug 13 01:26:48.623396 kubelet[2708]: I0813 01:26:48.623353 2708 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:26:50.253673 kubelet[2708]: E0813 01:26:50.253500 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:26:56.254551 kubelet[2708]: E0813 01:26:56.253777 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:26:57.253480 kubelet[2708]: E0813 01:26:57.253432 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:26:58.253695 kubelet[2708]: E0813 01:26:58.253250 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:26:58.641940 kubelet[2708]: I0813 01:26:58.641626 2708 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:26:58.641940 kubelet[2708]: I0813 01:26:58.641694 2708 container_gc.go:86] "Attempting to delete unused containers"
Aug 13 01:26:58.644599 kubelet[2708]: I0813 01:26:58.644566 2708 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:26:58.656355 kubelet[2708]: I0813 01:26:58.656321 2708 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:26:58.656571 kubelet[2708]: I0813 01:26:58.656440 2708 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-gn8p8","kube-system/cilium-bj2vr","kube-system/kube-controller-manager-172-233-222-13","kube-system/kube-proxy-rhk77","kube-system/kube-apiserver-172-233-222-13","kube-system/kube-scheduler-172-233-222-13"]
Aug 13 01:26:58.656571 kubelet[2708]: E0813 01:26:58.656479 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-gn8p8"
Aug 13 01:26:58.656571 kubelet[2708]: E0813 01:26:58.656493 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-bj2vr"
Aug 13 01:26:58.656571 kubelet[2708]: E0813 01:26:58.656504 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-222-13"
Aug 13 01:26:58.656571 kubelet[2708]: E0813 01:26:58.656516 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-rhk77"
Aug 13 01:26:58.656571 kubelet[2708]: E0813 01:26:58.656528 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-222-13"
Aug 13 01:26:58.656571 kubelet[2708]: E0813 01:26:58.656541 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-222-13"
Aug 13 01:26:58.656571 kubelet[2708]: I0813 01:26:58.656557 2708 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:27:04.173584 systemd[1]: Started sshd@7-172.233.222.13:22-147.75.109.163:49314.service - OpenSSH per-connection server daemon (147.75.109.163:49314).
Aug 13 01:27:04.509436 sshd[3824]: Accepted publickey for core from 147.75.109.163 port 49314 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:27:04.511131 sshd-session[3824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:27:04.522680 systemd-logind[1532]: New session 8 of user core.
Aug 13 01:27:04.528795 systemd[1]: Started session-8.scope - Session 8 of User core.
Aug 13 01:27:04.830891 sshd[3826]: Connection closed by 147.75.109.163 port 49314
Aug 13 01:27:04.832885 sshd-session[3824]: pam_unix(sshd:session): session closed for user core
Aug 13 01:27:04.838530 systemd[1]: sshd@7-172.233.222.13:22-147.75.109.163:49314.service: Deactivated successfully.
Aug 13 01:27:04.841151 systemd[1]: session-8.scope: Deactivated successfully.
Aug 13 01:27:04.842183 systemd-logind[1532]: Session 8 logged out. Waiting for processes to exit.
Aug 13 01:27:04.844212 systemd-logind[1532]: Removed session 8.
Aug 13 01:27:08.672764 kubelet[2708]: I0813 01:27:08.672279 2708 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:27:08.672764 kubelet[2708]: I0813 01:27:08.672328 2708 container_gc.go:86] "Attempting to delete unused containers"
Aug 13 01:27:08.673883 kubelet[2708]: I0813 01:27:08.673635 2708 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:27:08.686097 kubelet[2708]: I0813 01:27:08.686048 2708 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:27:08.686303 kubelet[2708]: I0813 01:27:08.686126 2708 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-gn8p8","kube-system/cilium-bj2vr","kube-system/kube-controller-manager-172-233-222-13","kube-system/kube-proxy-rhk77","kube-system/kube-apiserver-172-233-222-13","kube-system/kube-scheduler-172-233-222-13"]
Aug 13 01:27:08.686303 kubelet[2708]: E0813 01:27:08.686174 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-gn8p8"
Aug 13 01:27:08.686303 kubelet[2708]: E0813 01:27:08.686188 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-bj2vr"
Aug 13 01:27:08.686303 kubelet[2708]: E0813 01:27:08.686196 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-222-13"
Aug 13 01:27:08.686303 kubelet[2708]: E0813 01:27:08.686204 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-rhk77"
Aug 13 01:27:08.686303 kubelet[2708]: E0813 01:27:08.686213 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-222-13"
Aug 13 01:27:08.686303 kubelet[2708]: E0813 01:27:08.686221 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-222-13"
Aug 13 01:27:08.686303 kubelet[2708]: I0813 01:27:08.686230 2708 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:27:09.892315 systemd[1]: Started sshd@8-172.233.222.13:22-147.75.109.163:44102.service - OpenSSH per-connection server daemon (147.75.109.163:44102).
Aug 13 01:27:10.225271 sshd[3839]: Accepted publickey for core from 147.75.109.163 port 44102 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:27:10.227426 sshd-session[3839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:27:10.235527 systemd-logind[1532]: New session 9 of user core.
Aug 13 01:27:10.243777 systemd[1]: Started session-9.scope - Session 9 of User core.
Aug 13 01:27:10.529819 sshd[3841]: Connection closed by 147.75.109.163 port 44102
Aug 13 01:27:10.530779 sshd-session[3839]: pam_unix(sshd:session): session closed for user core
Aug 13 01:27:10.536066 systemd-logind[1532]: Session 9 logged out. Waiting for processes to exit.
Aug 13 01:27:10.536936 systemd[1]: sshd@8-172.233.222.13:22-147.75.109.163:44102.service: Deactivated successfully.
Aug 13 01:27:10.539756 systemd[1]: session-9.scope: Deactivated successfully.
Aug 13 01:27:10.541618 systemd-logind[1532]: Removed session 9.
Aug 13 01:27:12.609139 update_engine[1534]: I20250813 01:27:12.609056 1534 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Aug 13 01:27:12.609139 update_engine[1534]: I20250813 01:27:12.609113 1534 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Aug 13 01:27:12.609812 update_engine[1534]: I20250813 01:27:12.609325 1534 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Aug 13 01:27:12.609905 update_engine[1534]: I20250813 01:27:12.609864 1534 omaha_request_params.cc:62] Current group set to beta
Aug 13 01:27:12.610215 update_engine[1534]: I20250813 01:27:12.610196 1534 update_attempter.cc:499] Already updated boot flags. Skipping.
Aug 13 01:27:12.610270 update_engine[1534]: I20250813 01:27:12.610256 1534 update_attempter.cc:643] Scheduling an action processor start.
Aug 13 01:27:12.610329 update_engine[1534]: I20250813 01:27:12.610314 1534 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Aug 13 01:27:12.610396 update_engine[1534]: I20250813 01:27:12.610383 1534 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Aug 13 01:27:12.610499 update_engine[1534]: I20250813 01:27:12.610482 1534 omaha_request_action.cc:271] Posting an Omaha request to disabled
Aug 13 01:27:12.610545 update_engine[1534]: I20250813 01:27:12.610532 1534 omaha_request_action.cc:272] Request:
Aug 13 01:27:12.610545 update_engine[1534]:
Aug 13 01:27:12.610545 update_engine[1534]:
Aug 13 01:27:12.610545 update_engine[1534]:
Aug 13 01:27:12.610545 update_engine[1534]:
Aug 13 01:27:12.610545 update_engine[1534]:
Aug 13 01:27:12.610545 update_engine[1534]:
Aug 13 01:27:12.610545 update_engine[1534]:
Aug 13 01:27:12.610545 update_engine[1534]:
Aug 13 01:27:12.611678 update_engine[1534]: I20250813 01:27:12.610760 1534 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Aug 13 01:27:12.611738 locksmithd[1573]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Aug 13 01:27:12.611989 update_engine[1534]: I20250813 01:27:12.611953 1534 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Aug 13 01:27:12.612371 update_engine[1534]: I20250813 01:27:12.612336 1534 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Aug 13 01:27:12.613125 update_engine[1534]: E20250813 01:27:12.613075 1534 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Aug 13 01:27:12.613183 update_engine[1534]: I20250813 01:27:12.613159 1534 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Aug 13 01:27:15.594300 systemd[1]: Started sshd@9-172.233.222.13:22-147.75.109.163:44104.service - OpenSSH per-connection server daemon (147.75.109.163:44104).
Aug 13 01:27:15.930694 sshd[3856]: Accepted publickey for core from 147.75.109.163 port 44104 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:27:15.932618 sshd-session[3856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:27:15.938831 systemd-logind[1532]: New session 10 of user core.
Aug 13 01:27:15.942828 systemd[1]: Started session-10.scope - Session 10 of User core.
Aug 13 01:27:16.237006 sshd[3858]: Connection closed by 147.75.109.163 port 44104
Aug 13 01:27:16.237766 sshd-session[3856]: pam_unix(sshd:session): session closed for user core
Aug 13 01:27:16.243184 systemd-logind[1532]: Session 10 logged out. Waiting for processes to exit.
Aug 13 01:27:16.244603 systemd[1]: sshd@9-172.233.222.13:22-147.75.109.163:44104.service: Deactivated successfully.
Aug 13 01:27:16.247387 systemd[1]: session-10.scope: Deactivated successfully.
Aug 13 01:27:16.250109 systemd-logind[1532]: Removed session 10.
Aug 13 01:27:16.304018 systemd[1]: Started sshd@10-172.233.222.13:22-147.75.109.163:44120.service - OpenSSH per-connection server daemon (147.75.109.163:44120).
Aug 13 01:27:16.655712 sshd[3871]: Accepted publickey for core from 147.75.109.163 port 44120 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:27:16.657427 sshd-session[3871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:27:16.663482 systemd-logind[1532]: New session 11 of user core.
Aug 13 01:27:16.670781 systemd[1]: Started session-11.scope - Session 11 of User core.
Aug 13 01:27:16.996722 sshd[3873]: Connection closed by 147.75.109.163 port 44120
Aug 13 01:27:16.997029 sshd-session[3871]: pam_unix(sshd:session): session closed for user core
Aug 13 01:27:17.002239 systemd-logind[1532]: Session 11 logged out. Waiting for processes to exit.
Aug 13 01:27:17.002808 systemd[1]: sshd@10-172.233.222.13:22-147.75.109.163:44120.service: Deactivated successfully.
Aug 13 01:27:17.005508 systemd[1]: session-11.scope: Deactivated successfully.
Aug 13 01:27:17.009563 systemd-logind[1532]: Removed session 11.
Aug 13 01:27:17.054840 systemd[1]: Started sshd@11-172.233.222.13:22-147.75.109.163:44132.service - OpenSSH per-connection server daemon (147.75.109.163:44132).
Aug 13 01:27:17.390498 sshd[3883]: Accepted publickey for core from 147.75.109.163 port 44132 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:27:17.392364 sshd-session[3883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:27:17.397380 systemd-logind[1532]: New session 12 of user core.
Aug 13 01:27:17.402772 systemd[1]: Started session-12.scope - Session 12 of User core.
Aug 13 01:27:17.689808 sshd[3885]: Connection closed by 147.75.109.163 port 44132
Aug 13 01:27:17.690524 sshd-session[3883]: pam_unix(sshd:session): session closed for user core
Aug 13 01:27:17.695981 systemd-logind[1532]: Session 12 logged out. Waiting for processes to exit.
Aug 13 01:27:17.696979 systemd[1]: sshd@11-172.233.222.13:22-147.75.109.163:44132.service: Deactivated successfully.
Aug 13 01:27:17.700134 systemd[1]: session-12.scope: Deactivated successfully.
Aug 13 01:27:17.702307 systemd-logind[1532]: Removed session 12.
Aug 13 01:27:18.702673 kubelet[2708]: I0813 01:27:18.702618 2708 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:27:18.703083 kubelet[2708]: I0813 01:27:18.702696 2708 container_gc.go:86] "Attempting to delete unused containers"
Aug 13 01:27:18.705951 kubelet[2708]: I0813 01:27:18.705829 2708 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:27:18.707853 kubelet[2708]: I0813 01:27:18.707798 2708 image_gc_manager.go:487] "Removing image to free bytes" imageID="sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc" size=57680541 runtimeHandler=""
Aug 13 01:27:18.708468 containerd[1555]: time="2025-08-13T01:27:18.708433353Z" level=info msg="RemoveImage \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Aug 13 01:27:18.710159 containerd[1555]: time="2025-08-13T01:27:18.710087604Z" level=info msg="ImageDelete event name:\"registry.k8s.io/etcd:3.5.16-0\""
Aug 13 01:27:18.710779 containerd[1555]: time="2025-08-13T01:27:18.710749120Z" level=info msg="ImageDelete event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\""
Aug 13 01:27:18.711206 containerd[1555]: time="2025-08-13T01:27:18.711140008Z" level=info msg="RemoveImage \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" returns successfully"
Aug 13 01:27:18.711206 containerd[1555]: time="2025-08-13T01:27:18.711194758Z" level=info msg="ImageDelete event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Aug 13 01:27:18.711383 kubelet[2708]: I0813 01:27:18.711344 2708 image_gc_manager.go:487] "Removing image to free bytes" imageID="sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" size=18562039 runtimeHandler=""
Aug 13 01:27:18.711770 containerd[1555]: time="2025-08-13T01:27:18.711737385Z" level=info msg="RemoveImage \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Aug 13 01:27:18.712829 containerd[1555]: time="2025-08-13T01:27:18.712801049Z" level=info msg="ImageDelete event name:\"registry.k8s.io/coredns/coredns:v1.11.3\""
Aug 13 01:27:18.713579 containerd[1555]: time="2025-08-13T01:27:18.713549945Z" level=info msg="ImageDelete event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\""
Aug 13 01:27:18.714069 containerd[1555]: time="2025-08-13T01:27:18.714038943Z" level=info msg="RemoveImage \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" returns successfully"
Aug 13 01:27:18.714135 containerd[1555]: time="2025-08-13T01:27:18.714126572Z" level=info msg="ImageDelete event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Aug 13 01:27:18.729355 kubelet[2708]: I0813 01:27:18.729330 2708 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:27:18.729451 kubelet[2708]: I0813 01:27:18.729427 2708 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-gn8p8","kube-system/cilium-bj2vr","kube-system/kube-controller-manager-172-233-222-13","kube-system/kube-proxy-rhk77","kube-system/kube-apiserver-172-233-222-13","kube-system/kube-scheduler-172-233-222-13"]
Aug 13 01:27:18.729511 kubelet[2708]: E0813 01:27:18.729471 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-gn8p8"
Aug 13 01:27:18.729511 kubelet[2708]: E0813 01:27:18.729485 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-bj2vr"
Aug 13 01:27:18.729511 kubelet[2708]: E0813 01:27:18.729495 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-222-13"
Aug 13 01:27:18.729511 kubelet[2708]: E0813 01:27:18.729504 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-rhk77"
Aug 13 01:27:18.729511 kubelet[2708]: E0813 01:27:18.729516 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-222-13"
Aug 13 01:27:18.729836 kubelet[2708]: E0813 01:27:18.729525 2708 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-222-13"
Aug 13 01:27:18.729836 kubelet[2708]: I0813 01:27:18.729535 2708 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:27:22.609200 update_engine[1534]: I20250813 01:27:22.609090 1534 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Aug 13 01:27:22.609717 update_engine[1534]: I20250813 01:27:22.609489 1534 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Aug 13 01:27:22.609828 update_engine[1534]: I20250813 01:27:22.609784 1534 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Aug 13 01:27:22.610661 update_engine[1534]: E20250813 01:27:22.610603 1534 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Aug 13 01:27:22.610696 update_engine[1534]: I20250813 01:27:22.610674 1534 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Aug 13 01:27:22.762780 systemd[1]: Started sshd@12-172.233.222.13:22-147.75.109.163:41424.service - OpenSSH per-connection server daemon (147.75.109.163:41424).
Aug 13 01:27:23.112531 sshd[3901]: Accepted publickey for core from 147.75.109.163 port 41424 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:27:23.114425 sshd-session[3901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:27:23.120861 systemd-logind[1532]: New session 13 of user core.
Aug 13 01:27:23.125840 systemd[1]: Started session-13.scope - Session 13 of User core.
Aug 13 01:27:23.422001 sshd[3907]: Connection closed by 147.75.109.163 port 41424 Aug 13 01:27:23.422762 sshd-session[3901]: pam_unix(sshd:session): session closed for user core Aug 13 01:27:23.428133 systemd[1]: sshd@12-172.233.222.13:22-147.75.109.163:41424.service: Deactivated successfully. Aug 13 01:27:23.431204 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 01:27:23.432127 systemd-logind[1532]: Session 13 logged out. Waiting for processes to exit. Aug 13 01:27:23.433636 systemd-logind[1532]: Removed session 13. Aug 13 01:27:28.488757 systemd[1]: Started sshd@13-172.233.222.13:22-147.75.109.163:59872.service - OpenSSH per-connection server daemon (147.75.109.163:59872). Aug 13 01:27:28.833412 sshd[3926]: Accepted publickey for core from 147.75.109.163 port 59872 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:27:28.834995 sshd-session[3926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:27:28.840504 systemd-logind[1532]: New session 14 of user core. Aug 13 01:27:28.847803 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 13 01:27:29.138457 sshd[3928]: Connection closed by 147.75.109.163 port 59872 Aug 13 01:27:29.139436 sshd-session[3926]: pam_unix(sshd:session): session closed for user core Aug 13 01:27:29.143346 systemd[1]: sshd@13-172.233.222.13:22-147.75.109.163:59872.service: Deactivated successfully. Aug 13 01:27:29.145581 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 01:27:29.147858 systemd-logind[1532]: Session 14 logged out. Waiting for processes to exit. Aug 13 01:27:29.150163 systemd-logind[1532]: Removed session 14. Aug 13 01:27:29.207363 systemd[1]: Started sshd@14-172.233.222.13:22-147.75.109.163:59886.service - OpenSSH per-connection server daemon (147.75.109.163:59886). 
Aug 13 01:27:29.549339 sshd[3940]: Accepted publickey for core from 147.75.109.163 port 59886 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:27:29.550857 sshd-session[3940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:27:29.555711 systemd-logind[1532]: New session 15 of user core.
Aug 13 01:27:29.565938 systemd[1]: Started session-15.scope - Session 15 of User core.
Aug 13 01:27:30.041169 sshd[3942]: Connection closed by 147.75.109.163 port 59886
Aug 13 01:27:30.041880 sshd-session[3940]: pam_unix(sshd:session): session closed for user core
Aug 13 01:27:30.046823 systemd[1]: sshd@14-172.233.222.13:22-147.75.109.163:59886.service: Deactivated successfully.
Aug 13 01:27:30.049434 systemd[1]: session-15.scope: Deactivated successfully.
Aug 13 01:27:30.050402 systemd-logind[1532]: Session 15 logged out. Waiting for processes to exit.
Aug 13 01:27:30.052443 systemd-logind[1532]: Removed session 15.
Aug 13 01:27:30.099514 systemd[1]: Started sshd@15-172.233.222.13:22-147.75.109.163:59896.service - OpenSSH per-connection server daemon (147.75.109.163:59896).
Aug 13 01:27:30.430730 sshd[3952]: Accepted publickey for core from 147.75.109.163 port 59896 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:27:30.432253 sshd-session[3952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:27:30.448804 systemd-logind[1532]: New session 16 of user core.
Aug 13 01:27:30.456784 systemd[1]: Started session-16.scope - Session 16 of User core.
Aug 13 01:27:31.142694 sshd[3954]: Connection closed by 147.75.109.163 port 59896
Aug 13 01:27:31.144858 sshd-session[3952]: pam_unix(sshd:session): session closed for user core
Aug 13 01:27:31.152305 systemd-logind[1532]: Session 16 logged out. Waiting for processes to exit.
Aug 13 01:27:31.154336 systemd[1]: sshd@15-172.233.222.13:22-147.75.109.163:59896.service: Deactivated successfully.
Aug 13 01:27:31.158815 systemd[1]: session-16.scope: Deactivated successfully.
Aug 13 01:27:31.163833 systemd-logind[1532]: Removed session 16.
Aug 13 01:27:31.217835 systemd[1]: Started sshd@16-172.233.222.13:22-147.75.109.163:59898.service - OpenSSH per-connection server daemon (147.75.109.163:59898).
Aug 13 01:27:31.568264 sshd[3971]: Accepted publickey for core from 147.75.109.163 port 59898 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:27:31.570046 sshd-session[3971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:27:31.576260 systemd-logind[1532]: New session 17 of user core.
Aug 13 01:27:31.586795 systemd[1]: Started session-17.scope - Session 17 of User core.
Aug 13 01:27:31.983907 sshd[3973]: Connection closed by 147.75.109.163 port 59898
Aug 13 01:27:31.984603 sshd-session[3971]: pam_unix(sshd:session): session closed for user core
Aug 13 01:27:31.990098 systemd[1]: sshd@16-172.233.222.13:22-147.75.109.163:59898.service: Deactivated successfully.
Aug 13 01:27:31.993518 systemd[1]: session-17.scope: Deactivated successfully.
Aug 13 01:27:31.994580 systemd-logind[1532]: Session 17 logged out. Waiting for processes to exit.
Aug 13 01:27:31.996997 systemd-logind[1532]: Removed session 17.
Aug 13 01:27:32.044010 systemd[1]: Started sshd@17-172.233.222.13:22-147.75.109.163:59910.service - OpenSSH per-connection server daemon (147.75.109.163:59910).
Aug 13 01:27:32.391204 sshd[3983]: Accepted publickey for core from 147.75.109.163 port 59910 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:27:32.393284 sshd-session[3983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:27:32.399394 systemd-logind[1532]: New session 18 of user core.
Aug 13 01:27:32.408771 systemd[1]: Started session-18.scope - Session 18 of User core.
Aug 13 01:27:32.608316 update_engine[1534]: I20250813 01:27:32.607693 1534 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Aug 13 01:27:32.608316 update_engine[1534]: I20250813 01:27:32.607959 1534 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Aug 13 01:27:32.608316 update_engine[1534]: I20250813 01:27:32.608176 1534 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Aug 13 01:27:32.609854 update_engine[1534]: E20250813 01:27:32.609736 1534 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Aug 13 01:27:32.609854 update_engine[1534]: I20250813 01:27:32.609829 1534 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Aug 13 01:27:32.699407 sshd[3985]: Connection closed by 147.75.109.163 port 59910
Aug 13 01:27:32.700123 sshd-session[3983]: pam_unix(sshd:session): session closed for user core
Aug 13 01:27:32.705172 systemd-logind[1532]: Session 18 logged out. Waiting for processes to exit.
Aug 13 01:27:32.705832 systemd[1]: sshd@17-172.233.222.13:22-147.75.109.163:59910.service: Deactivated successfully.
Aug 13 01:27:32.708382 systemd[1]: session-18.scope: Deactivated successfully.
Aug 13 01:27:32.710848 systemd-logind[1532]: Removed session 18.
Aug 13 01:27:37.764628 systemd[1]: Started sshd@18-172.233.222.13:22-147.75.109.163:59926.service - OpenSSH per-connection server daemon (147.75.109.163:59926).
Aug 13 01:27:38.118456 sshd[3999]: Accepted publickey for core from 147.75.109.163 port 59926 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:27:38.119782 sshd-session[3999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:27:38.124022 systemd-logind[1532]: New session 19 of user core.
Aug 13 01:27:38.130756 systemd[1]: Started session-19.scope - Session 19 of User core.
Aug 13 01:27:38.423379 sshd[4001]: Connection closed by 147.75.109.163 port 59926
Aug 13 01:27:38.423913 sshd-session[3999]: pam_unix(sshd:session): session closed for user core
Aug 13 01:27:38.428690 systemd-logind[1532]: Session 19 logged out. Waiting for processes to exit.
Aug 13 01:27:38.429362 systemd[1]: sshd@18-172.233.222.13:22-147.75.109.163:59926.service: Deactivated successfully.
Aug 13 01:27:38.430942 systemd[1]: session-19.scope: Deactivated successfully.
Aug 13 01:27:38.432780 systemd-logind[1532]: Removed session 19.
Aug 13 01:27:42.611981 update_engine[1534]: I20250813 01:27:42.611882 1534 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Aug 13 01:27:42.612537 update_engine[1534]: I20250813 01:27:42.612236 1534 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Aug 13 01:27:42.612599 update_engine[1534]: I20250813 01:27:42.612560 1534 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Aug 13 01:27:42.613161 update_engine[1534]: E20250813 01:27:42.613122 1534 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Aug 13 01:27:42.613228 update_engine[1534]: I20250813 01:27:42.613170 1534 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Aug 13 01:27:42.613228 update_engine[1534]: I20250813 01:27:42.613181 1534 omaha_request_action.cc:617] Omaha request response:
Aug 13 01:27:42.613315 update_engine[1534]: E20250813 01:27:42.613284 1534 omaha_request_action.cc:636] Omaha request network transfer failed.
Aug 13 01:27:42.613315 update_engine[1534]: I20250813 01:27:42.613312 1534 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Aug 13 01:27:42.613371 update_engine[1534]: I20250813 01:27:42.613318 1534 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Aug 13 01:27:42.613371 update_engine[1534]: I20250813 01:27:42.613325 1534 update_attempter.cc:306] Processing Done.
Aug 13 01:27:42.613371 update_engine[1534]: E20250813 01:27:42.613341 1534 update_attempter.cc:619] Update failed.
Aug 13 01:27:42.613371 update_engine[1534]: I20250813 01:27:42.613348 1534 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Aug 13 01:27:42.613371 update_engine[1534]: I20250813 01:27:42.613353 1534 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Aug 13 01:27:42.613371 update_engine[1534]: I20250813 01:27:42.613359 1534 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Aug 13 01:27:42.613487 update_engine[1534]: I20250813 01:27:42.613440 1534 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Aug 13 01:27:42.613487 update_engine[1534]: I20250813 01:27:42.613461 1534 omaha_request_action.cc:271] Posting an Omaha request to disabled
Aug 13 01:27:42.613487 update_engine[1534]: I20250813 01:27:42.613466 1534 omaha_request_action.cc:272] Request:
Aug 13 01:27:42.613487 update_engine[1534]:
Aug 13 01:27:42.613487 update_engine[1534]:
Aug 13 01:27:42.613487 update_engine[1534]:
Aug 13 01:27:42.613487 update_engine[1534]:
Aug 13 01:27:42.613487 update_engine[1534]:
Aug 13 01:27:42.613487 update_engine[1534]:
Aug 13 01:27:42.613487 update_engine[1534]: I20250813 01:27:42.613473 1534 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Aug 13 01:27:42.613691 update_engine[1534]: I20250813 01:27:42.613619 1534 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Aug 13 01:27:42.613918 update_engine[1534]: I20250813 01:27:42.613806 1534 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Aug 13 01:27:42.615124 update_engine[1534]: E20250813 01:27:42.615001 1534 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Aug 13 01:27:42.615124 update_engine[1534]: I20250813 01:27:42.615094 1534 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Aug 13 01:27:42.615124 update_engine[1534]: I20250813 01:27:42.615103 1534 omaha_request_action.cc:617] Omaha request response:
Aug 13 01:27:42.615124 update_engine[1534]: I20250813 01:27:42.615113 1534 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Aug 13 01:27:42.615124 update_engine[1534]: I20250813 01:27:42.615118 1534 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Aug 13 01:27:42.615124 update_engine[1534]: I20250813 01:27:42.615123 1534 update_attempter.cc:306] Processing Done.
Aug 13 01:27:42.615124 update_engine[1534]: I20250813 01:27:42.615130 1534 update_attempter.cc:310] Error event sent.
Aug 13 01:27:42.615124 update_engine[1534]: I20250813 01:27:42.615142 1534 update_check_scheduler.cc:74] Next update check in 42m27s
Aug 13 01:27:42.615431 locksmithd[1573]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Aug 13 01:27:42.615787 locksmithd[1573]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Aug 13 01:27:43.488345 systemd[1]: Started sshd@19-172.233.222.13:22-147.75.109.163:41822.service - OpenSSH per-connection server daemon (147.75.109.163:41822).
Aug 13 01:27:43.829848 sshd[4013]: Accepted publickey for core from 147.75.109.163 port 41822 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:27:43.831443 sshd-session[4013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:27:43.836695 systemd-logind[1532]: New session 20 of user core.
Aug 13 01:27:43.841774 systemd[1]: Started session-20.scope - Session 20 of User core.
Aug 13 01:27:44.158936 sshd[4015]: Connection closed by 147.75.109.163 port 41822
Aug 13 01:27:44.160128 sshd-session[4013]: pam_unix(sshd:session): session closed for user core
Aug 13 01:27:44.164135 systemd-logind[1532]: Session 20 logged out. Waiting for processes to exit.
Aug 13 01:27:44.171116 systemd[1]: sshd@19-172.233.222.13:22-147.75.109.163:41822.service: Deactivated successfully.
Aug 13 01:27:44.175353 systemd[1]: session-20.scope: Deactivated successfully.
Aug 13 01:27:44.179497 systemd-logind[1532]: Removed session 20.
Aug 13 01:27:49.224094 systemd[1]: Started sshd@20-172.233.222.13:22-147.75.109.163:52286.service - OpenSSH per-connection server daemon (147.75.109.163:52286).
Aug 13 01:27:49.566559 sshd[4029]: Accepted publickey for core from 147.75.109.163 port 52286 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:27:49.568150 sshd-session[4029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:27:49.573638 systemd-logind[1532]: New session 21 of user core.
Aug 13 01:27:49.578758 systemd[1]: Started session-21.scope - Session 21 of User core.
Aug 13 01:27:49.869044 sshd[4031]: Connection closed by 147.75.109.163 port 52286
Aug 13 01:27:49.869795 sshd-session[4029]: pam_unix(sshd:session): session closed for user core
Aug 13 01:27:49.873581 systemd[1]: sshd@20-172.233.222.13:22-147.75.109.163:52286.service: Deactivated successfully.
Aug 13 01:27:49.876348 systemd[1]: session-21.scope: Deactivated successfully.
Aug 13 01:27:49.879102 systemd-logind[1532]: Session 21 logged out. Waiting for processes to exit.
Aug 13 01:27:49.880883 systemd-logind[1532]: Removed session 21.
Aug 13 01:27:52.253307 kubelet[2708]: E0813 01:27:52.253258 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:27:52.254887 kubelet[2708]: E0813 01:27:52.254639 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:27:53.253387 kubelet[2708]: E0813 01:27:53.253322 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:27:54.939088 systemd[1]: Started sshd@21-172.233.222.13:22-147.75.109.163:52302.service - OpenSSH per-connection server daemon (147.75.109.163:52302).
Aug 13 01:27:55.279248 sshd[4045]: Accepted publickey for core from 147.75.109.163 port 52302 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:27:55.280825 sshd-session[4045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:27:55.286403 systemd-logind[1532]: New session 22 of user core.
Aug 13 01:27:55.291784 systemd[1]: Started session-22.scope - Session 22 of User core.
Aug 13 01:27:55.580124 sshd[4047]: Connection closed by 147.75.109.163 port 52302
Aug 13 01:27:55.580863 sshd-session[4045]: pam_unix(sshd:session): session closed for user core
Aug 13 01:27:55.586136 systemd[1]: sshd@21-172.233.222.13:22-147.75.109.163:52302.service: Deactivated successfully.
Aug 13 01:27:55.588571 systemd[1]: session-22.scope: Deactivated successfully.
Aug 13 01:27:55.589399 systemd-logind[1532]: Session 22 logged out. Waiting for processes to exit.
Aug 13 01:27:55.590783 systemd-logind[1532]: Removed session 22.
Aug 13 01:28:00.645584 systemd[1]: Started sshd@22-172.233.222.13:22-147.75.109.163:49518.service - OpenSSH per-connection server daemon (147.75.109.163:49518).
Aug 13 01:28:00.986518 sshd[4059]: Accepted publickey for core from 147.75.109.163 port 49518 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:28:00.988086 sshd-session[4059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:28:00.993796 systemd-logind[1532]: New session 23 of user core.
Aug 13 01:28:00.999753 systemd[1]: Started session-23.scope - Session 23 of User core.
Aug 13 01:28:01.286394 sshd[4061]: Connection closed by 147.75.109.163 port 49518
Aug 13 01:28:01.287183 sshd-session[4059]: pam_unix(sshd:session): session closed for user core
Aug 13 01:28:01.291801 systemd-logind[1532]: Session 23 logged out. Waiting for processes to exit.
Aug 13 01:28:01.292812 systemd[1]: sshd@22-172.233.222.13:22-147.75.109.163:49518.service: Deactivated successfully.
Aug 13 01:28:01.295108 systemd[1]: session-23.scope: Deactivated successfully.
Aug 13 01:28:01.297053 systemd-logind[1532]: Removed session 23.
Aug 13 01:28:04.253681 kubelet[2708]: E0813 01:28:04.253338 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:28:05.253317 kubelet[2708]: E0813 01:28:05.253295 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:28:06.346223 systemd[1]: Started sshd@23-172.233.222.13:22-147.75.109.163:49534.service - OpenSSH per-connection server daemon (147.75.109.163:49534).
Aug 13 01:28:06.680854 sshd[4072]: Accepted publickey for core from 147.75.109.163 port 49534 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:28:06.682124 sshd-session[4072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:28:06.686695 systemd-logind[1532]: New session 24 of user core.
Aug 13 01:28:06.693762 systemd[1]: Started session-24.scope - Session 24 of User core.
Aug 13 01:28:06.975700 sshd[4074]: Connection closed by 147.75.109.163 port 49534
Aug 13 01:28:06.976282 sshd-session[4072]: pam_unix(sshd:session): session closed for user core
Aug 13 01:28:06.980852 systemd-logind[1532]: Session 24 logged out. Waiting for processes to exit.
Aug 13 01:28:06.981464 systemd[1]: sshd@23-172.233.222.13:22-147.75.109.163:49534.service: Deactivated successfully.
Aug 13 01:28:06.984223 systemd[1]: session-24.scope: Deactivated successfully.
Aug 13 01:28:06.986413 systemd-logind[1532]: Removed session 24.
Aug 13 01:28:12.037572 systemd[1]: Started sshd@24-172.233.222.13:22-147.75.109.163:58080.service - OpenSSH per-connection server daemon (147.75.109.163:58080).
Aug 13 01:28:12.374044 sshd[4086]: Accepted publickey for core from 147.75.109.163 port 58080 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:28:12.375720 sshd-session[4086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:28:12.380852 systemd-logind[1532]: New session 25 of user core.
Aug 13 01:28:12.391786 systemd[1]: Started session-25.scope - Session 25 of User core.
Aug 13 01:28:12.669736 sshd[4088]: Connection closed by 147.75.109.163 port 58080
Aug 13 01:28:12.670581 sshd-session[4086]: pam_unix(sshd:session): session closed for user core
Aug 13 01:28:12.674660 systemd-logind[1532]: Session 25 logged out. Waiting for processes to exit.
Aug 13 01:28:12.675393 systemd[1]: sshd@24-172.233.222.13:22-147.75.109.163:58080.service: Deactivated successfully.
Aug 13 01:28:12.678040 systemd[1]: session-25.scope: Deactivated successfully.
Aug 13 01:28:12.680027 systemd-logind[1532]: Removed session 25.
Aug 13 01:28:17.730294 systemd[1]: Started sshd@25-172.233.222.13:22-147.75.109.163:58088.service - OpenSSH per-connection server daemon (147.75.109.163:58088).
Aug 13 01:28:18.064779 sshd[4100]: Accepted publickey for core from 147.75.109.163 port 58088 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:28:18.066326 sshd-session[4100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:28:18.071220 systemd-logind[1532]: New session 26 of user core.
Aug 13 01:28:18.073797 systemd[1]: Started session-26.scope - Session 26 of User core.
Aug 13 01:28:18.360029 sshd[4102]: Connection closed by 147.75.109.163 port 58088
Aug 13 01:28:18.360852 sshd-session[4100]: pam_unix(sshd:session): session closed for user core
Aug 13 01:28:18.365330 systemd[1]: sshd@25-172.233.222.13:22-147.75.109.163:58088.service: Deactivated successfully.
Aug 13 01:28:18.367542 systemd[1]: session-26.scope: Deactivated successfully.
Aug 13 01:28:18.368668 systemd-logind[1532]: Session 26 logged out. Waiting for processes to exit.
Aug 13 01:28:18.370554 systemd-logind[1532]: Removed session 26.
Aug 13 01:28:22.254887 kubelet[2708]: E0813 01:28:22.254858 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:28:23.429904 systemd[1]: Started sshd@26-172.233.222.13:22-147.75.109.163:56264.service - OpenSSH per-connection server daemon (147.75.109.163:56264).
Aug 13 01:28:23.778700 sshd[4115]: Accepted publickey for core from 147.75.109.163 port 56264 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:28:23.779602 sshd-session[4115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:28:23.785044 systemd-logind[1532]: New session 27 of user core.
Aug 13 01:28:23.790798 systemd[1]: Started session-27.scope - Session 27 of User core.
Aug 13 01:28:24.083242 sshd[4117]: Connection closed by 147.75.109.163 port 56264
Aug 13 01:28:24.083489 sshd-session[4115]: pam_unix(sshd:session): session closed for user core
Aug 13 01:28:24.088908 systemd[1]: sshd@26-172.233.222.13:22-147.75.109.163:56264.service: Deactivated successfully.
Aug 13 01:28:24.091293 systemd[1]: session-27.scope: Deactivated successfully.
Aug 13 01:28:24.092918 systemd-logind[1532]: Session 27 logged out. Waiting for processes to exit.
Aug 13 01:28:24.094510 systemd-logind[1532]: Removed session 27.
Aug 13 01:28:29.150789 systemd[1]: Started sshd@27-172.233.222.13:22-147.75.109.163:58546.service - OpenSSH per-connection server daemon (147.75.109.163:58546).
Aug 13 01:28:29.492290 sshd[4131]: Accepted publickey for core from 147.75.109.163 port 58546 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:28:29.493921 sshd-session[4131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:28:29.499133 systemd-logind[1532]: New session 28 of user core.
Aug 13 01:28:29.501760 systemd[1]: Started session-28.scope - Session 28 of User core.
Aug 13 01:28:29.796560 sshd[4133]: Connection closed by 147.75.109.163 port 58546
Aug 13 01:28:29.797330 sshd-session[4131]: pam_unix(sshd:session): session closed for user core
Aug 13 01:28:29.801720 systemd-logind[1532]: Session 28 logged out. Waiting for processes to exit.
Aug 13 01:28:29.802276 systemd[1]: sshd@27-172.233.222.13:22-147.75.109.163:58546.service: Deactivated successfully.
Aug 13 01:28:29.804961 systemd[1]: session-28.scope: Deactivated successfully.
Aug 13 01:28:29.806932 systemd-logind[1532]: Removed session 28.
Aug 13 01:28:34.861507 systemd[1]: Started sshd@28-172.233.222.13:22-147.75.109.163:58560.service - OpenSSH per-connection server daemon (147.75.109.163:58560).
Aug 13 01:28:35.203378 sshd[4145]: Accepted publickey for core from 147.75.109.163 port 58560 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:28:35.204807 sshd-session[4145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:28:35.209323 systemd-logind[1532]: New session 29 of user core.
Aug 13 01:28:35.213752 systemd[1]: Started session-29.scope - Session 29 of User core.
Aug 13 01:28:35.496199 sshd[4147]: Connection closed by 147.75.109.163 port 58560
Aug 13 01:28:35.497077 sshd-session[4145]: pam_unix(sshd:session): session closed for user core
Aug 13 01:28:35.500464 systemd-logind[1532]: Session 29 logged out. Waiting for processes to exit.
Aug 13 01:28:35.501195 systemd[1]: sshd@28-172.233.222.13:22-147.75.109.163:58560.service: Deactivated successfully.
Aug 13 01:28:35.503118 systemd[1]: session-29.scope: Deactivated successfully.
Aug 13 01:28:35.504790 systemd-logind[1532]: Removed session 29.
Aug 13 01:28:40.561754 systemd[1]: Started sshd@29-172.233.222.13:22-147.75.109.163:43730.service - OpenSSH per-connection server daemon (147.75.109.163:43730).
Aug 13 01:28:40.905705 sshd[4160]: Accepted publickey for core from 147.75.109.163 port 43730 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:28:40.907468 sshd-session[4160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:28:40.913038 systemd-logind[1532]: New session 30 of user core.
Aug 13 01:28:40.919757 systemd[1]: Started session-30.scope - Session 30 of User core.
Aug 13 01:28:41.210093 sshd[4163]: Connection closed by 147.75.109.163 port 43730
Aug 13 01:28:41.210753 sshd-session[4160]: pam_unix(sshd:session): session closed for user core
Aug 13 01:28:41.216110 systemd-logind[1532]: Session 30 logged out. Waiting for processes to exit.
Aug 13 01:28:41.216551 systemd[1]: sshd@29-172.233.222.13:22-147.75.109.163:43730.service: Deactivated successfully.
Aug 13 01:28:41.219168 systemd[1]: session-30.scope: Deactivated successfully.
Aug 13 01:28:41.220939 systemd-logind[1532]: Removed session 30.
Aug 13 01:28:46.283928 systemd[1]: Started sshd@30-172.233.222.13:22-147.75.109.163:43742.service - OpenSSH per-connection server daemon (147.75.109.163:43742).
Aug 13 01:28:46.627583 sshd[4176]: Accepted publickey for core from 147.75.109.163 port 43742 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:28:46.629038 sshd-session[4176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:28:46.633955 systemd-logind[1532]: New session 31 of user core.
Aug 13 01:28:46.644769 systemd[1]: Started session-31.scope - Session 31 of User core.
Aug 13 01:28:46.932819 sshd[4178]: Connection closed by 147.75.109.163 port 43742
Aug 13 01:28:46.933488 sshd-session[4176]: pam_unix(sshd:session): session closed for user core
Aug 13 01:28:46.938351 systemd[1]: sshd@30-172.233.222.13:22-147.75.109.163:43742.service: Deactivated successfully.
Aug 13 01:28:46.941013 systemd[1]: session-31.scope: Deactivated successfully.
Aug 13 01:28:46.942929 systemd-logind[1532]: Session 31 logged out. Waiting for processes to exit.
Aug 13 01:28:46.944093 systemd-logind[1532]: Removed session 31.
Aug 13 01:28:51.995716 systemd[1]: Started sshd@31-172.233.222.13:22-147.75.109.163:57220.service - OpenSSH per-connection server daemon (147.75.109.163:57220).
Aug 13 01:28:52.332761 sshd[4189]: Accepted publickey for core from 147.75.109.163 port 57220 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:28:52.333887 sshd-session[4189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:28:52.339005 systemd-logind[1532]: New session 32 of user core.
Aug 13 01:28:52.341770 systemd[1]: Started session-32.scope - Session 32 of User core.
Aug 13 01:28:52.634335 sshd[4191]: Connection closed by 147.75.109.163 port 57220
Aug 13 01:28:52.635232 sshd-session[4189]: pam_unix(sshd:session): session closed for user core
Aug 13 01:28:52.640363 systemd-logind[1532]: Session 32 logged out. Waiting for processes to exit.
Aug 13 01:28:52.641116 systemd[1]: sshd@31-172.233.222.13:22-147.75.109.163:57220.service: Deactivated successfully.
Aug 13 01:28:52.643328 systemd[1]: session-32.scope: Deactivated successfully.
Aug 13 01:28:52.645328 systemd-logind[1532]: Removed session 32.
Aug 13 01:28:57.696621 systemd[1]: Started sshd@32-172.233.222.13:22-147.75.109.163:57230.service - OpenSSH per-connection server daemon (147.75.109.163:57230).
Aug 13 01:28:58.031759 sshd[4205]: Accepted publickey for core from 147.75.109.163 port 57230 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:28:58.032874 sshd-session[4205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:28:58.038106 systemd-logind[1532]: New session 33 of user core.
Aug 13 01:28:58.046773 systemd[1]: Started session-33.scope - Session 33 of User core.
Aug 13 01:28:58.254669 kubelet[2708]: E0813 01:28:58.254522 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:28:58.322447 sshd[4207]: Connection closed by 147.75.109.163 port 57230
Aug 13 01:28:58.323140 sshd-session[4205]: pam_unix(sshd:session): session closed for user core
Aug 13 01:28:58.327804 systemd[1]: sshd@32-172.233.222.13:22-147.75.109.163:57230.service: Deactivated successfully.
Aug 13 01:28:58.329878 systemd[1]: session-33.scope: Deactivated successfully.
Aug 13 01:28:58.331066 systemd-logind[1532]: Session 33 logged out. Waiting for processes to exit.
Aug 13 01:28:58.333100 systemd-logind[1532]: Removed session 33.
Aug 13 01:29:03.253352 kubelet[2708]: E0813 01:29:03.253319 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:29:03.390283 systemd[1]: Started sshd@33-172.233.222.13:22-147.75.109.163:51160.service - OpenSSH per-connection server daemon (147.75.109.163:51160).
Aug 13 01:29:03.731692 sshd[4219]: Accepted publickey for core from 147.75.109.163 port 51160 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:29:03.733193 sshd-session[4219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:29:03.737953 systemd-logind[1532]: New session 34 of user core.
Aug 13 01:29:03.744824 systemd[1]: Started session-34.scope - Session 34 of User core.
Aug 13 01:29:04.033851 sshd[4221]: Connection closed by 147.75.109.163 port 51160
Aug 13 01:29:04.035525 sshd-session[4219]: pam_unix(sshd:session): session closed for user core
Aug 13 01:29:04.040084 systemd[1]: sshd@33-172.233.222.13:22-147.75.109.163:51160.service: Deactivated successfully.
Aug 13 01:29:04.042421 systemd[1]: session-34.scope: Deactivated successfully.
Aug 13 01:29:04.043470 systemd-logind[1532]: Session 34 logged out. Waiting for processes to exit.
Aug 13 01:29:04.045400 systemd-logind[1532]: Removed session 34.
Aug 13 01:29:08.253685 kubelet[2708]: E0813 01:29:08.253036 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:29:09.102293 systemd[1]: Started sshd@34-172.233.222.13:22-147.75.109.163:36094.service - OpenSSH per-connection server daemon (147.75.109.163:36094).
Aug 13 01:29:09.445202 sshd[4233]: Accepted publickey for core from 147.75.109.163 port 36094 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:29:09.446518 sshd-session[4233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:29:09.451538 systemd-logind[1532]: New session 35 of user core.
Aug 13 01:29:09.460763 systemd[1]: Started session-35.scope - Session 35 of User core.
Aug 13 01:29:09.743277 sshd[4235]: Connection closed by 147.75.109.163 port 36094
Aug 13 01:29:09.743846 sshd-session[4233]: pam_unix(sshd:session): session closed for user core
Aug 13 01:29:09.748689 systemd[1]: sshd@34-172.233.222.13:22-147.75.109.163:36094.service: Deactivated successfully.
Aug 13 01:29:09.749018 systemd-logind[1532]: Session 35 logged out. Waiting for processes to exit.
Aug 13 01:29:09.751288 systemd[1]: session-35.scope: Deactivated successfully.
Aug 13 01:29:09.754382 systemd-logind[1532]: Removed session 35.
Aug 13 01:29:13.253300 kubelet[2708]: E0813 01:29:13.253248 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:29:14.808733 systemd[1]: Started sshd@35-172.233.222.13:22-147.75.109.163:36110.service - OpenSSH per-connection server daemon (147.75.109.163:36110).
Aug 13 01:29:15.145761 sshd[4248]: Accepted publickey for core from 147.75.109.163 port 36110 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:29:15.147270 sshd-session[4248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:29:15.152264 systemd-logind[1532]: New session 36 of user core.
Aug 13 01:29:15.156769 systemd[1]: Started session-36.scope - Session 36 of User core.
Aug 13 01:29:15.443220 sshd[4250]: Connection closed by 147.75.109.163 port 36110
Aug 13 01:29:15.443868 sshd-session[4248]: pam_unix(sshd:session): session closed for user core
Aug 13 01:29:15.448349 systemd[1]: sshd@35-172.233.222.13:22-147.75.109.163:36110.service: Deactivated successfully.
Aug 13 01:29:15.450578 systemd[1]: session-36.scope: Deactivated successfully.
Aug 13 01:29:15.451711 systemd-logind[1532]: Session 36 logged out. Waiting for processes to exit.
Aug 13 01:29:15.453302 systemd-logind[1532]: Removed session 36.
Aug 13 01:29:20.516269 systemd[1]: Started sshd@36-172.233.222.13:22-147.75.109.163:52908.service - OpenSSH per-connection server daemon (147.75.109.163:52908).
Aug 13 01:29:20.855379 sshd[4264]: Accepted publickey for core from 147.75.109.163 port 52908 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:29:20.856850 sshd-session[4264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:29:20.861692 systemd-logind[1532]: New session 37 of user core.
Aug 13 01:29:20.868946 systemd[1]: Started session-37.scope - Session 37 of User core.
Aug 13 01:29:21.149659 sshd[4266]: Connection closed by 147.75.109.163 port 52908
Aug 13 01:29:21.150972 sshd-session[4264]: pam_unix(sshd:session): session closed for user core
Aug 13 01:29:21.155005 systemd-logind[1532]: Session 37 logged out. Waiting for processes to exit.
Aug 13 01:29:21.155917 systemd[1]: sshd@36-172.233.222.13:22-147.75.109.163:52908.service: Deactivated successfully.
Aug 13 01:29:21.158216 systemd[1]: session-37.scope: Deactivated successfully.
Aug 13 01:29:21.159813 systemd-logind[1532]: Removed session 37.
Aug 13 01:29:24.253719 kubelet[2708]: E0813 01:29:24.253103 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:29:26.218081 systemd[1]: Started sshd@37-172.233.222.13:22-147.75.109.163:52924.service - OpenSSH per-connection server daemon (147.75.109.163:52924).
Aug 13 01:29:26.556998 sshd[4280]: Accepted publickey for core from 147.75.109.163 port 52924 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:29:26.558402 sshd-session[4280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:29:26.563432 systemd-logind[1532]: New session 38 of user core.
Aug 13 01:29:26.569781 systemd[1]: Started session-38.scope - Session 38 of User core.
Aug 13 01:29:26.856147 sshd[4282]: Connection closed by 147.75.109.163 port 52924
Aug 13 01:29:26.857023 sshd-session[4280]: pam_unix(sshd:session): session closed for user core
Aug 13 01:29:26.861144 systemd-logind[1532]: Session 38 logged out. Waiting for processes to exit.
Aug 13 01:29:26.861812 systemd[1]: sshd@37-172.233.222.13:22-147.75.109.163:52924.service: Deactivated successfully.
Aug 13 01:29:26.864069 systemd[1]: session-38.scope: Deactivated successfully.
Aug 13 01:29:26.865681 systemd-logind[1532]: Removed session 38.
Aug 13 01:29:31.917745 systemd[1]: Started sshd@38-172.233.222.13:22-147.75.109.163:60994.service - OpenSSH per-connection server daemon (147.75.109.163:60994).
Aug 13 01:29:32.252230 sshd[4294]: Accepted publickey for core from 147.75.109.163 port 60994 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:29:32.254001 sshd-session[4294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:29:32.259147 systemd-logind[1532]: New session 39 of user core.
Aug 13 01:29:32.264752 systemd[1]: Started session-39.scope - Session 39 of User core.
Aug 13 01:29:32.551250 sshd[4296]: Connection closed by 147.75.109.163 port 60994
Aug 13 01:29:32.552822 sshd-session[4294]: pam_unix(sshd:session): session closed for user core
Aug 13 01:29:32.557083 systemd[1]: sshd@38-172.233.222.13:22-147.75.109.163:60994.service: Deactivated successfully.
Aug 13 01:29:32.557456 systemd-logind[1532]: Session 39 logged out. Waiting for processes to exit.
Aug 13 01:29:32.560953 systemd[1]: session-39.scope: Deactivated successfully.
Aug 13 01:29:32.564033 systemd-logind[1532]: Removed session 39.
Aug 13 01:29:37.616720 systemd[1]: Started sshd@39-172.233.222.13:22-147.75.109.163:32776.service - OpenSSH per-connection server daemon (147.75.109.163:32776).
Aug 13 01:29:37.952915 sshd[4308]: Accepted publickey for core from 147.75.109.163 port 32776 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:29:37.954320 sshd-session[4308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:29:37.958700 systemd-logind[1532]: New session 40 of user core.
Aug 13 01:29:37.967774 systemd[1]: Started session-40.scope - Session 40 of User core.
Aug 13 01:29:38.253856 sshd[4310]: Connection closed by 147.75.109.163 port 32776
Aug 13 01:29:38.255174 sshd-session[4308]: pam_unix(sshd:session): session closed for user core
Aug 13 01:29:38.260027 systemd-logind[1532]: Session 40 logged out. Waiting for processes to exit.
Aug 13 01:29:38.260750 systemd[1]: sshd@39-172.233.222.13:22-147.75.109.163:32776.service: Deactivated successfully.
Aug 13 01:29:38.262621 systemd[1]: session-40.scope: Deactivated successfully.
Aug 13 01:29:38.264849 systemd-logind[1532]: Removed session 40.
Aug 13 01:29:43.316953 systemd[1]: Started sshd@40-172.233.222.13:22-147.75.109.163:60488.service - OpenSSH per-connection server daemon (147.75.109.163:60488).
Aug 13 01:29:43.645912 sshd[4322]: Accepted publickey for core from 147.75.109.163 port 60488 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:29:43.647197 sshd-session[4322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:29:43.651194 systemd-logind[1532]: New session 41 of user core.
Aug 13 01:29:43.659764 systemd[1]: Started session-41.scope - Session 41 of User core.
Aug 13 01:29:43.932287 sshd[4324]: Connection closed by 147.75.109.163 port 60488
Aug 13 01:29:43.933064 sshd-session[4322]: pam_unix(sshd:session): session closed for user core
Aug 13 01:29:43.939704 systemd[1]: sshd@40-172.233.222.13:22-147.75.109.163:60488.service: Deactivated successfully.
Aug 13 01:29:43.941959 systemd[1]: session-41.scope: Deactivated successfully.
Aug 13 01:29:43.942862 systemd-logind[1532]: Session 41 logged out. Waiting for processes to exit.
Aug 13 01:29:43.944208 systemd-logind[1532]: Removed session 41.
Aug 13 01:29:48.998287 systemd[1]: Started sshd@41-172.233.222.13:22-147.75.109.163:55440.service - OpenSSH per-connection server daemon (147.75.109.163:55440).
Aug 13 01:29:49.336280 sshd[4337]: Accepted publickey for core from 147.75.109.163 port 55440 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:29:49.337606 sshd-session[4337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:29:49.342446 systemd-logind[1532]: New session 42 of user core.
Aug 13 01:29:49.349760 systemd[1]: Started session-42.scope - Session 42 of User core.
Aug 13 01:29:49.623514 sshd[4339]: Connection closed by 147.75.109.163 port 55440
Aug 13 01:29:49.624237 sshd-session[4337]: pam_unix(sshd:session): session closed for user core
Aug 13 01:29:49.627944 systemd-logind[1532]: Session 42 logged out. Waiting for processes to exit.
Aug 13 01:29:49.628661 systemd[1]: sshd@41-172.233.222.13:22-147.75.109.163:55440.service: Deactivated successfully.
Aug 13 01:29:49.630384 systemd[1]: session-42.scope: Deactivated successfully.
Aug 13 01:29:49.632014 systemd-logind[1532]: Removed session 42.
Aug 13 01:29:52.255016 kubelet[2708]: E0813 01:29:52.254881 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:29:54.687429 systemd[1]: Started sshd@42-172.233.222.13:22-147.75.109.163:55450.service - OpenSSH per-connection server daemon (147.75.109.163:55450).
Aug 13 01:29:55.033697 sshd[4353]: Accepted publickey for core from 147.75.109.163 port 55450 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:29:55.035319 sshd-session[4353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:29:55.040202 systemd-logind[1532]: New session 43 of user core.
Aug 13 01:29:55.054797 systemd[1]: Started session-43.scope - Session 43 of User core.
Aug 13 01:29:55.339355 sshd[4355]: Connection closed by 147.75.109.163 port 55450
Aug 13 01:29:55.339867 sshd-session[4353]: pam_unix(sshd:session): session closed for user core
Aug 13 01:29:55.344979 systemd[1]: sshd@42-172.233.222.13:22-147.75.109.163:55450.service: Deactivated successfully.
Aug 13 01:29:55.347868 systemd[1]: session-43.scope: Deactivated successfully.
Aug 13 01:29:55.348996 systemd-logind[1532]: Session 43 logged out. Waiting for processes to exit.
Aug 13 01:29:55.351047 systemd-logind[1532]: Removed session 43.
Aug 13 01:30:00.406143 systemd[1]: Started sshd@43-172.233.222.13:22-147.75.109.163:34372.service - OpenSSH per-connection server daemon (147.75.109.163:34372).
Aug 13 01:30:00.746735 sshd[4367]: Accepted publickey for core from 147.75.109.163 port 34372 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:30:00.748152 sshd-session[4367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:30:00.755240 systemd-logind[1532]: New session 44 of user core.
Aug 13 01:30:00.764782 systemd[1]: Started session-44.scope - Session 44 of User core.
Aug 13 01:30:01.049030 sshd[4369]: Connection closed by 147.75.109.163 port 34372
Aug 13 01:30:01.049889 sshd-session[4367]: pam_unix(sshd:session): session closed for user core
Aug 13 01:30:01.055248 systemd[1]: sshd@43-172.233.222.13:22-147.75.109.163:34372.service: Deactivated successfully.
Aug 13 01:30:01.058462 systemd[1]: session-44.scope: Deactivated successfully.
Aug 13 01:30:01.059784 systemd-logind[1532]: Session 44 logged out. Waiting for processes to exit.
Aug 13 01:30:01.061864 systemd-logind[1532]: Removed session 44.
Aug 13 01:30:06.109122 systemd[1]: Started sshd@44-172.233.222.13:22-147.75.109.163:34376.service - OpenSSH per-connection server daemon (147.75.109.163:34376).
Aug 13 01:30:06.443504 sshd[4381]: Accepted publickey for core from 147.75.109.163 port 34376 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:30:06.445067 sshd-session[4381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:30:06.450632 systemd-logind[1532]: New session 45 of user core.
Aug 13 01:30:06.459803 systemd[1]: Started session-45.scope - Session 45 of User core.
Aug 13 01:30:06.741553 sshd[4383]: Connection closed by 147.75.109.163 port 34376
Aug 13 01:30:06.742828 sshd-session[4381]: pam_unix(sshd:session): session closed for user core
Aug 13 01:30:06.746668 systemd-logind[1532]: Session 45 logged out. Waiting for processes to exit.
Aug 13 01:30:06.747864 systemd[1]: sshd@44-172.233.222.13:22-147.75.109.163:34376.service: Deactivated successfully.
Aug 13 01:30:06.751010 systemd[1]: session-45.scope: Deactivated successfully.
Aug 13 01:30:06.752419 systemd-logind[1532]: Removed session 45.
Aug 13 01:30:11.253794 kubelet[2708]: E0813 01:30:11.253722 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:30:11.806687 systemd[1]: Started sshd@45-172.233.222.13:22-147.75.109.163:33744.service - OpenSSH per-connection server daemon (147.75.109.163:33744).
Aug 13 01:30:12.142722 sshd[4395]: Accepted publickey for core from 147.75.109.163 port 33744 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:30:12.144451 sshd-session[4395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:30:12.149640 systemd-logind[1532]: New session 46 of user core.
Aug 13 01:30:12.156775 systemd[1]: Started session-46.scope - Session 46 of User core.
Aug 13 01:30:12.439801 sshd[4397]: Connection closed by 147.75.109.163 port 33744
Aug 13 01:30:12.440419 sshd-session[4395]: pam_unix(sshd:session): session closed for user core
Aug 13 01:30:12.444565 systemd-logind[1532]: Session 46 logged out. Waiting for processes to exit.
Aug 13 01:30:12.445293 systemd[1]: sshd@45-172.233.222.13:22-147.75.109.163:33744.service: Deactivated successfully.
Aug 13 01:30:12.447582 systemd[1]: session-46.scope: Deactivated successfully.
Aug 13 01:30:12.449359 systemd-logind[1532]: Removed session 46.
Aug 13 01:30:14.528769 containerd[1555]: time="2025-08-13T01:30:14.528632270Z" level=warning msg="container event discarded" container=f83aab70fdee48af20c7800227a791d633f1365494515f7fe323c8c5b2f98aaa type=CONTAINER_CREATED_EVENT
Aug 13 01:30:14.539956 containerd[1555]: time="2025-08-13T01:30:14.539924121Z" level=warning msg="container event discarded" container=f83aab70fdee48af20c7800227a791d633f1365494515f7fe323c8c5b2f98aaa type=CONTAINER_STARTED_EVENT
Aug 13 01:30:14.551230 containerd[1555]: time="2025-08-13T01:30:14.551180161Z" level=warning msg="container event discarded" container=4122a78d479902ec810137130d18f73f75f30fb7103e244fcff421e53d74ecc6 type=CONTAINER_CREATED_EVENT
Aug 13 01:30:14.551230 containerd[1555]: time="2025-08-13T01:30:14.551216620Z" level=warning msg="container event discarded" container=4122a78d479902ec810137130d18f73f75f30fb7103e244fcff421e53d74ecc6 type=CONTAINER_STARTED_EVENT
Aug 13 01:30:14.563614 containerd[1555]: time="2025-08-13T01:30:14.563536288Z" level=warning msg="container event discarded" container=061149fd87cfcbc48b98ecc4d4d2b3615b77f1b580cf99869d5e4f585dea30d2 type=CONTAINER_CREATED_EVENT
Aug 13 01:30:14.563614 containerd[1555]: time="2025-08-13T01:30:14.563579098Z" level=warning msg="container event discarded" container=061149fd87cfcbc48b98ecc4d4d2b3615b77f1b580cf99869d5e4f585dea30d2 type=CONTAINER_STARTED_EVENT
Aug 13 01:30:14.563727 containerd[1555]: time="2025-08-13T01:30:14.563623558Z" level=warning msg="container event discarded" container=5c4a694ca24e61ef011c4a38a02ebe5544d3ebbf5ecb339570e8cbaa26f4303d type=CONTAINER_CREATED_EVENT
Aug 13 01:30:14.563727 containerd[1555]: time="2025-08-13T01:30:14.563636798Z" level=warning msg="container event discarded" container=1db407edecf6aac5354d5f5d3e452b844657436010c47c75a0488397fd2ae0d1 type=CONTAINER_CREATED_EVENT
Aug 13 01:30:14.593883 containerd[1555]: time="2025-08-13T01:30:14.593844316Z" level=warning msg="container event discarded" container=28519b58a50f2ef750f694907fed149d5641a30a70c2ee35ba449d3c1013b7df type=CONTAINER_CREATED_EVENT
Aug 13 01:30:14.676272 containerd[1555]: time="2025-08-13T01:30:14.676230448Z" level=warning msg="container event discarded" container=1db407edecf6aac5354d5f5d3e452b844657436010c47c75a0488397fd2ae0d1 type=CONTAINER_STARTED_EVENT
Aug 13 01:30:14.690472 containerd[1555]: time="2025-08-13T01:30:14.690430191Z" level=warning msg="container event discarded" container=28519b58a50f2ef750f694907fed149d5641a30a70c2ee35ba449d3c1013b7df type=CONTAINER_STARTED_EVENT
Aug 13 01:30:14.710674 containerd[1555]: time="2025-08-13T01:30:14.710617447Z" level=warning msg="container event discarded" container=5c4a694ca24e61ef011c4a38a02ebe5544d3ebbf5ecb339570e8cbaa26f4303d type=CONTAINER_STARTED_EVENT
Aug 13 01:30:17.253280 kubelet[2708]: E0813 01:30:17.253194 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:30:17.504538 systemd[1]: Started sshd@46-172.233.222.13:22-147.75.109.163:33752.service - OpenSSH per-connection server daemon (147.75.109.163:33752).
Aug 13 01:30:17.850339 sshd[4408]: Accepted publickey for core from 147.75.109.163 port 33752 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:30:17.851706 sshd-session[4408]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:30:17.856530 systemd-logind[1532]: New session 47 of user core.
Aug 13 01:30:17.869844 systemd[1]: Started session-47.scope - Session 47 of User core.
Aug 13 01:30:18.149851 sshd[4410]: Connection closed by 147.75.109.163 port 33752
Aug 13 01:30:18.150463 sshd-session[4408]: pam_unix(sshd:session): session closed for user core
Aug 13 01:30:18.154365 systemd[1]: sshd@46-172.233.222.13:22-147.75.109.163:33752.service: Deactivated successfully.
Aug 13 01:30:18.156590 systemd[1]: session-47.scope: Deactivated successfully.
Aug 13 01:30:18.158407 systemd-logind[1532]: Session 47 logged out. Waiting for processes to exit.
Aug 13 01:30:18.160530 systemd-logind[1532]: Removed session 47.
Aug 13 01:30:23.211599 systemd[1]: Started sshd@47-172.233.222.13:22-147.75.109.163:53458.service - OpenSSH per-connection server daemon (147.75.109.163:53458).
Aug 13 01:30:23.253981 kubelet[2708]: E0813 01:30:23.253676 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:30:23.551769 sshd[4424]: Accepted publickey for core from 147.75.109.163 port 53458 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:30:23.553133 sshd-session[4424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:30:23.557929 systemd-logind[1532]: New session 48 of user core.
Aug 13 01:30:23.564772 systemd[1]: Started session-48.scope - Session 48 of User core.
Aug 13 01:30:23.852722 sshd[4426]: Connection closed by 147.75.109.163 port 53458
Aug 13 01:30:23.853550 sshd-session[4424]: pam_unix(sshd:session): session closed for user core
Aug 13 01:30:23.857773 systemd-logind[1532]: Session 48 logged out. Waiting for processes to exit.
Aug 13 01:30:23.858297 systemd[1]: sshd@47-172.233.222.13:22-147.75.109.163:53458.service: Deactivated successfully.
Aug 13 01:30:23.860853 systemd[1]: session-48.scope: Deactivated successfully.
Aug 13 01:30:23.862608 systemd-logind[1532]: Removed session 48.
Aug 13 01:30:24.193935 containerd[1555]: time="2025-08-13T01:30:24.193876221Z" level=warning msg="container event discarded" container=0e66dbae091e6c2176373ea621055edd39ede598888e4d649a8d1b7895a4253e type=CONTAINER_CREATED_EVENT
Aug 13 01:30:24.193935 containerd[1555]: time="2025-08-13T01:30:24.193915151Z" level=warning msg="container event discarded" container=0e66dbae091e6c2176373ea621055edd39ede598888e4d649a8d1b7895a4253e type=CONTAINER_STARTED_EVENT
Aug 13 01:30:24.206072 containerd[1555]: time="2025-08-13T01:30:24.206025891Z" level=warning msg="container event discarded" container=75ffb84e4072c18a0bcf3019717096c7836f2f794d227653d6aa9b6d416392b4 type=CONTAINER_CREATED_EVENT
Aug 13 01:30:24.206072 containerd[1555]: time="2025-08-13T01:30:24.206059631Z" level=warning msg="container event discarded" container=75ffb84e4072c18a0bcf3019717096c7836f2f794d227653d6aa9b6d416392b4 type=CONTAINER_STARTED_EVENT
Aug 13 01:30:24.206072 containerd[1555]: time="2025-08-13T01:30:24.206067601Z" level=warning msg="container event discarded" container=49a3fff38846f97b8a34b169b4669646fe384d2559123f30ff9ba4bef8c50d24 type=CONTAINER_CREATED_EVENT
Aug 13 01:30:24.253431 kubelet[2708]: E0813 01:30:24.253382 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:30:24.270246 containerd[1555]: time="2025-08-13T01:30:24.270163172Z" level=warning msg="container event discarded" container=49a3fff38846f97b8a34b169b4669646fe384d2559123f30ff9ba4bef8c50d24 type=CONTAINER_STARTED_EVENT
Aug 13 01:30:24.506869 containerd[1555]: time="2025-08-13T01:30:24.506704604Z" level=warning msg="container event discarded" container=0c814b6acfae37fa420eab1cd926b5fb2c5c1944c51ecb2fc12f8f804734a6ec type=CONTAINER_CREATED_EVENT
Aug 13 01:30:24.506869 containerd[1555]: time="2025-08-13T01:30:24.506752893Z" level=warning msg="container event discarded" container=0c814b6acfae37fa420eab1cd926b5fb2c5c1944c51ecb2fc12f8f804734a6ec type=CONTAINER_STARTED_EVENT
Aug 13 01:30:28.674870 containerd[1555]: time="2025-08-13T01:30:28.674812225Z" level=warning msg="container event discarded" container=4772d7d7d60bef3a0b4077163504b3a970adaa77c83afb5247ce34e6739feaf7 type=CONTAINER_CREATED_EVENT
Aug 13 01:30:28.722094 containerd[1555]: time="2025-08-13T01:30:28.722039080Z" level=warning msg="container event discarded" container=4772d7d7d60bef3a0b4077163504b3a970adaa77c83afb5247ce34e6739feaf7 type=CONTAINER_STARTED_EVENT
Aug 13 01:30:28.823407 containerd[1555]: time="2025-08-13T01:30:28.823368605Z" level=warning msg="container event discarded" container=4772d7d7d60bef3a0b4077163504b3a970adaa77c83afb5247ce34e6739feaf7 type=CONTAINER_STOPPED_EVENT
Aug 13 01:30:28.918022 systemd[1]: Started sshd@48-172.233.222.13:22-147.75.109.163:56982.service - OpenSSH per-connection server daemon (147.75.109.163:56982).
Aug 13 01:30:29.250896 sshd[4439]: Accepted publickey for core from 147.75.109.163 port 56982 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:30:29.252428 sshd-session[4439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:30:29.257941 systemd-logind[1532]: New session 49 of user core.
Aug 13 01:30:29.268785 systemd[1]: Started session-49.scope - Session 49 of User core.
Aug 13 01:30:29.338384 containerd[1555]: time="2025-08-13T01:30:29.338337492Z" level=warning msg="container event discarded" container=922bdebe8520c980cac13f159cf10ef090726d6ba8bc8ad1e72979fe14f54c58 type=CONTAINER_CREATED_EVENT
Aug 13 01:30:29.391731 containerd[1555]: time="2025-08-13T01:30:29.391671603Z" level=warning msg="container event discarded" container=922bdebe8520c980cac13f159cf10ef090726d6ba8bc8ad1e72979fe14f54c58 type=CONTAINER_STARTED_EVENT
Aug 13 01:30:29.454938 containerd[1555]: time="2025-08-13T01:30:29.454887341Z" level=warning msg="container event discarded" container=922bdebe8520c980cac13f159cf10ef090726d6ba8bc8ad1e72979fe14f54c58 type=CONTAINER_STOPPED_EVENT
Aug 13 01:30:29.553896 sshd[4441]: Connection closed by 147.75.109.163 port 56982
Aug 13 01:30:29.554956 sshd-session[4439]: pam_unix(sshd:session): session closed for user core
Aug 13 01:30:29.560270 systemd[1]: sshd@48-172.233.222.13:22-147.75.109.163:56982.service: Deactivated successfully.
Aug 13 01:30:29.563227 systemd[1]: session-49.scope: Deactivated successfully.
Aug 13 01:30:29.564829 systemd-logind[1532]: Session 49 logged out. Waiting for processes to exit.
Aug 13 01:30:29.566315 systemd-logind[1532]: Removed session 49.
Aug 13 01:30:29.755499 containerd[1555]: time="2025-08-13T01:30:29.755430577Z" level=warning msg="container event discarded" container=bb5d137435e0267451d548d48f72b01ce6726231d8ee5ce9d950090e87a6e322 type=CONTAINER_CREATED_EVENT
Aug 13 01:30:29.809940 containerd[1555]: time="2025-08-13T01:30:29.809783966Z" level=warning msg="container event discarded" container=bb5d137435e0267451d548d48f72b01ce6726231d8ee5ce9d950090e87a6e322 type=CONTAINER_STARTED_EVENT
Aug 13 01:30:30.349399 containerd[1555]: time="2025-08-13T01:30:30.349293221Z" level=warning msg="container event discarded" container=9b6a4e5c10f3e3265656b1e934792d71eee3345985ac6884afcd919eff7d79ec type=CONTAINER_CREATED_EVENT
Aug 13 01:30:30.469707 containerd[1555]: time="2025-08-13T01:30:30.469632984Z" level=warning msg="container event discarded" container=9b6a4e5c10f3e3265656b1e934792d71eee3345985ac6884afcd919eff7d79ec type=CONTAINER_STARTED_EVENT
Aug 13 01:30:30.526963 containerd[1555]: time="2025-08-13T01:30:30.526909405Z" level=warning msg="container event discarded" container=9b6a4e5c10f3e3265656b1e934792d71eee3345985ac6884afcd919eff7d79ec type=CONTAINER_STOPPED_EVENT
Aug 13 01:30:31.357048 containerd[1555]: time="2025-08-13T01:30:31.356979313Z" level=warning msg="container event discarded" container=1995f07118f6be887e69fa07568bcce0b8e40cfd016ae967ddb38f9caa2c9fdf type=CONTAINER_CREATED_EVENT
Aug 13 01:30:31.425202 containerd[1555]: time="2025-08-13T01:30:31.425153201Z" level=warning msg="container event discarded" container=1995f07118f6be887e69fa07568bcce0b8e40cfd016ae967ddb38f9caa2c9fdf type=CONTAINER_STARTED_EVENT
Aug 13 01:30:31.453464 containerd[1555]: time="2025-08-13T01:30:31.453397794Z" level=warning msg="container event discarded" container=1995f07118f6be887e69fa07568bcce0b8e40cfd016ae967ddb38f9caa2c9fdf type=CONTAINER_STOPPED_EVENT
Aug 13 01:30:32.387531 containerd[1555]: time="2025-08-13T01:30:32.387449356Z" level=warning msg="container event discarded" container=f8866336b7dcfe742a9ad6092f861521738853f3e9587926a4a1ca4814ed3509 type=CONTAINER_CREATED_EVENT
Aug 13 01:30:32.474742 containerd[1555]: time="2025-08-13T01:30:32.474679100Z" level=warning msg="container event discarded" container=f8866336b7dcfe742a9ad6092f861521738853f3e9587926a4a1ca4814ed3509 type=CONTAINER_STARTED_EVENT
Aug 13 01:30:34.618925 systemd[1]: Started sshd@49-172.233.222.13:22-147.75.109.163:56986.service - OpenSSH per-connection server daemon (147.75.109.163:56986).
Aug 13 01:30:34.970123 sshd[4453]: Accepted publickey for core from 147.75.109.163 port 56986 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:30:34.972301 sshd-session[4453]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:30:34.979136 systemd-logind[1532]: New session 50 of user core.
Aug 13 01:30:34.986838 systemd[1]: Started session-50.scope - Session 50 of User core.
Aug 13 01:30:35.267145 sshd[4455]: Connection closed by 147.75.109.163 port 56986
Aug 13 01:30:35.268049 sshd-session[4453]: pam_unix(sshd:session): session closed for user core
Aug 13 01:30:35.273371 systemd[1]: sshd@49-172.233.222.13:22-147.75.109.163:56986.service: Deactivated successfully.
Aug 13 01:30:35.276024 systemd[1]: session-50.scope: Deactivated successfully.
Aug 13 01:30:35.277430 systemd-logind[1532]: Session 50 logged out. Waiting for processes to exit.
Aug 13 01:30:35.279427 systemd-logind[1532]: Removed session 50.
Aug 13 01:30:40.339096 systemd[1]: Started sshd@50-172.233.222.13:22-147.75.109.163:42540.service - OpenSSH per-connection server daemon (147.75.109.163:42540).
Aug 13 01:30:40.675277 sshd[4467]: Accepted publickey for core from 147.75.109.163 port 42540 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:30:40.676889 sshd-session[4467]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:30:40.682595 systemd-logind[1532]: New session 51 of user core.
Aug 13 01:30:40.695785 systemd[1]: Started session-51.scope - Session 51 of User core.
Aug 13 01:30:40.971659 sshd[4469]: Connection closed by 147.75.109.163 port 42540
Aug 13 01:30:40.972292 sshd-session[4467]: pam_unix(sshd:session): session closed for user core
Aug 13 01:30:40.976732 systemd-logind[1532]: Session 51 logged out. Waiting for processes to exit.
Aug 13 01:30:40.977405 systemd[1]: sshd@50-172.233.222.13:22-147.75.109.163:42540.service: Deactivated successfully.
Aug 13 01:30:40.979600 systemd[1]: session-51.scope: Deactivated successfully.
Aug 13 01:30:40.982085 systemd-logind[1532]: Removed session 51.
Aug 13 01:30:42.254066 kubelet[2708]: E0813 01:30:42.253552 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:30:46.040525 systemd[1]: Started sshd@51-172.233.222.13:22-147.75.109.163:42556.service - OpenSSH per-connection server daemon (147.75.109.163:42556).
Aug 13 01:30:46.380847 sshd[4481]: Accepted publickey for core from 147.75.109.163 port 42556 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:30:46.382299 sshd-session[4481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:30:46.387297 systemd-logind[1532]: New session 52 of user core.
Aug 13 01:30:46.392764 systemd[1]: Started session-52.scope - Session 52 of User core.
Aug 13 01:30:46.683249 sshd[4483]: Connection closed by 147.75.109.163 port 42556
Aug 13 01:30:46.683982 sshd-session[4481]: pam_unix(sshd:session): session closed for user core
Aug 13 01:30:46.688768 systemd[1]: sshd@51-172.233.222.13:22-147.75.109.163:42556.service: Deactivated successfully.
Aug 13 01:30:46.692337 systemd[1]: session-52.scope: Deactivated successfully.
Aug 13 01:30:46.693723 systemd-logind[1532]: Session 52 logged out. Waiting for processes to exit.
Aug 13 01:30:46.696219 systemd-logind[1532]: Removed session 52.
Aug 13 01:30:51.745228 systemd[1]: Started sshd@52-172.233.222.13:22-147.75.109.163:42454.service - OpenSSH per-connection server daemon (147.75.109.163:42454).
Aug 13 01:30:52.094534 sshd[4495]: Accepted publickey for core from 147.75.109.163 port 42454 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:30:52.095947 sshd-session[4495]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:30:52.100836 systemd-logind[1532]: New session 53 of user core.
Aug 13 01:30:52.110780 systemd[1]: Started session-53.scope - Session 53 of User core.
Aug 13 01:30:52.393484 sshd[4497]: Connection closed by 147.75.109.163 port 42454
Aug 13 01:30:52.394280 sshd-session[4495]: pam_unix(sshd:session): session closed for user core
Aug 13 01:30:52.398627 systemd-logind[1532]: Session 53 logged out. Waiting for processes to exit.
Aug 13 01:30:52.399392 systemd[1]: sshd@52-172.233.222.13:22-147.75.109.163:42454.service: Deactivated successfully.
Aug 13 01:30:52.401536 systemd[1]: session-53.scope: Deactivated successfully.
Aug 13 01:30:52.403599 systemd-logind[1532]: Removed session 53.
Aug 13 01:30:52.456842 systemd[1]: Started sshd@53-172.233.222.13:22-147.75.109.163:42470.service - OpenSSH per-connection server daemon (147.75.109.163:42470).
Aug 13 01:30:52.814150 sshd[4509]: Accepted publickey for core from 147.75.109.163 port 42470 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:30:52.815933 sshd-session[4509]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:30:52.820291 systemd-logind[1532]: New session 54 of user core.
Aug 13 01:30:52.825758 systemd[1]: Started session-54.scope - Session 54 of User core.
Aug 13 01:30:54.301155 containerd[1555]: time="2025-08-13T01:30:54.301092690Z" level=info msg="StopContainer for \"bb5d137435e0267451d548d48f72b01ce6726231d8ee5ce9d950090e87a6e322\" with timeout 30 (s)"
Aug 13 01:30:54.302772 containerd[1555]: time="2025-08-13T01:30:54.302170917Z" level=info msg="Stop container \"bb5d137435e0267451d548d48f72b01ce6726231d8ee5ce9d950090e87a6e322\" with signal terminated"
Aug 13 01:30:54.314011 systemd[1]: cri-containerd-bb5d137435e0267451d548d48f72b01ce6726231d8ee5ce9d950090e87a6e322.scope: Deactivated successfully.
Aug 13 01:30:54.318545 containerd[1555]: time="2025-08-13T01:30:54.318360054Z" level=info msg="received exit event container_id:\"bb5d137435e0267451d548d48f72b01ce6726231d8ee5ce9d950090e87a6e322\" id:\"bb5d137435e0267451d548d48f72b01ce6726231d8ee5ce9d950090e87a6e322\" pid:3229 exited_at:{seconds:1755048654 nanos:317892715}"
Aug 13 01:30:54.319831 containerd[1555]: time="2025-08-13T01:30:54.319784951Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bb5d137435e0267451d548d48f72b01ce6726231d8ee5ce9d950090e87a6e322\" id:\"bb5d137435e0267451d548d48f72b01ce6726231d8ee5ce9d950090e87a6e322\" pid:3229 exited_at:{seconds:1755048654 nanos:317892715}"
Aug 13 01:30:54.323005 containerd[1555]: time="2025-08-13T01:30:54.322976764Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 13 01:30:54.330248 containerd[1555]: time="2025-08-13T01:30:54.330199208Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f8866336b7dcfe742a9ad6092f861521738853f3e9587926a4a1ca4814ed3509\" id:\"167ddc949251e414ad08e47411213973609cf78437f2ac48fe1b9597a8466e36\" pid:4536 exited_at:{seconds:1755048654 nanos:329720749}"
Aug 13 01:30:54.333085 containerd[1555]: time="2025-08-13T01:30:54.332937893Z" level=info msg="StopContainer for \"f8866336b7dcfe742a9ad6092f861521738853f3e9587926a4a1ca4814ed3509\" with timeout 2 (s)"
Aug 13 01:30:54.333304 containerd[1555]: time="2025-08-13T01:30:54.333288023Z" level=info msg="Stop container \"f8866336b7dcfe742a9ad6092f861521738853f3e9587926a4a1ca4814ed3509\" with signal terminated"
Aug 13 01:30:54.344991 systemd-networkd[1464]: lxc_health: Link DOWN
Aug 13 01:30:54.345110 systemd-networkd[1464]: lxc_health: Lost carrier
Aug 13 01:30:54.354972 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb5d137435e0267451d548d48f72b01ce6726231d8ee5ce9d950090e87a6e322-rootfs.mount: Deactivated successfully.
Aug 13 01:30:54.366960 containerd[1555]: time="2025-08-13T01:30:54.366920732Z" level=info msg="StopContainer for \"bb5d137435e0267451d548d48f72b01ce6726231d8ee5ce9d950090e87a6e322\" returns successfully"
Aug 13 01:30:54.367449 containerd[1555]: time="2025-08-13T01:30:54.367423501Z" level=info msg="StopPodSandbox for \"0c814b6acfae37fa420eab1cd926b5fb2c5c1944c51ecb2fc12f8f804734a6ec\""
Aug 13 01:30:54.367487 containerd[1555]: time="2025-08-13T01:30:54.367475250Z" level=info msg="Container to stop \"bb5d137435e0267451d548d48f72b01ce6726231d8ee5ce9d950090e87a6e322\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 01:30:54.369078 systemd[1]: cri-containerd-f8866336b7dcfe742a9ad6092f861521738853f3e9587926a4a1ca4814ed3509.scope: Deactivated successfully.
Aug 13 01:30:54.369703 systemd[1]: cri-containerd-f8866336b7dcfe742a9ad6092f861521738853f3e9587926a4a1ca4814ed3509.scope: Consumed 5.693s CPU time, 121.4M memory peak, 144K read from disk, 13.3M written to disk.
Aug 13 01:30:54.373014 containerd[1555]: time="2025-08-13T01:30:54.372192010Z" level=info msg="received exit event container_id:\"f8866336b7dcfe742a9ad6092f861521738853f3e9587926a4a1ca4814ed3509\" id:\"f8866336b7dcfe742a9ad6092f861521738853f3e9587926a4a1ca4814ed3509\" pid:3341 exited_at:{seconds:1755048654 nanos:372024081}"
Aug 13 01:30:54.373488 containerd[1555]: time="2025-08-13T01:30:54.373368528Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f8866336b7dcfe742a9ad6092f861521738853f3e9587926a4a1ca4814ed3509\" id:\"f8866336b7dcfe742a9ad6092f861521738853f3e9587926a4a1ca4814ed3509\" pid:3341 exited_at:{seconds:1755048654 nanos:372024081}"
Aug 13 01:30:54.378521 systemd[1]: cri-containerd-0c814b6acfae37fa420eab1cd926b5fb2c5c1944c51ecb2fc12f8f804734a6ec.scope: Deactivated successfully.
Aug 13 01:30:54.388403 containerd[1555]: time="2025-08-13T01:30:54.388156106Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0c814b6acfae37fa420eab1cd926b5fb2c5c1944c51ecb2fc12f8f804734a6ec\" id:\"0c814b6acfae37fa420eab1cd926b5fb2c5c1944c51ecb2fc12f8f804734a6ec\" pid:2978 exit_status:137 exited_at:{seconds:1755048654 nanos:387637948}"
Aug 13 01:30:54.407929 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8866336b7dcfe742a9ad6092f861521738853f3e9587926a4a1ca4814ed3509-rootfs.mount: Deactivated successfully.
Aug 13 01:30:54.419431 containerd[1555]: time="2025-08-13T01:30:54.419409441Z" level=info msg="StopContainer for \"f8866336b7dcfe742a9ad6092f861521738853f3e9587926a4a1ca4814ed3509\" returns successfully"
Aug 13 01:30:54.421208 containerd[1555]: time="2025-08-13T01:30:54.421164667Z" level=info msg="StopPodSandbox for \"75ffb84e4072c18a0bcf3019717096c7836f2f794d227653d6aa9b6d416392b4\""
Aug 13 01:30:54.421298 containerd[1555]: time="2025-08-13T01:30:54.421269317Z" level=info msg="Container to stop \"4772d7d7d60bef3a0b4077163504b3a970adaa77c83afb5247ce34e6739feaf7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 01:30:54.421298 containerd[1555]: time="2025-08-13T01:30:54.421291857Z" level=info msg="Container to stop \"922bdebe8520c980cac13f159cf10ef090726d6ba8bc8ad1e72979fe14f54c58\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 01:30:54.421353 containerd[1555]: time="2025-08-13T01:30:54.421308287Z" level=info msg="Container to stop \"9b6a4e5c10f3e3265656b1e934792d71eee3345985ac6884afcd919eff7d79ec\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 01:30:54.421353 containerd[1555]: time="2025-08-13T01:30:54.421320707Z" level=info msg="Container to stop \"1995f07118f6be887e69fa07568bcce0b8e40cfd016ae967ddb38f9caa2c9fdf\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 01:30:54.421353 containerd[1555]: time="2025-08-13T01:30:54.421330277Z" level=info msg="Container to stop \"f8866336b7dcfe742a9ad6092f861521738853f3e9587926a4a1ca4814ed3509\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 01:30:54.435398 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c814b6acfae37fa420eab1cd926b5fb2c5c1944c51ecb2fc12f8f804734a6ec-rootfs.mount: Deactivated successfully.
Aug 13 01:30:54.438839 containerd[1555]: time="2025-08-13T01:30:54.438796721Z" level=info msg="shim disconnected" id=0c814b6acfae37fa420eab1cd926b5fb2c5c1944c51ecb2fc12f8f804734a6ec namespace=k8s.io
Aug 13 01:30:54.439877 containerd[1555]: time="2025-08-13T01:30:54.439715008Z" level=warning msg="cleaning up after shim disconnected" id=0c814b6acfae37fa420eab1cd926b5fb2c5c1944c51ecb2fc12f8f804734a6ec namespace=k8s.io
Aug 13 01:30:54.439877 containerd[1555]: time="2025-08-13T01:30:54.439731068Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 01:30:54.439772 systemd[1]: cri-containerd-75ffb84e4072c18a0bcf3019717096c7836f2f794d227653d6aa9b6d416392b4.scope: Deactivated successfully.
Aug 13 01:30:54.470851 containerd[1555]: time="2025-08-13T01:30:54.470814263Z" level=info msg="TaskExit event in podsandbox handler container_id:\"75ffb84e4072c18a0bcf3019717096c7836f2f794d227653d6aa9b6d416392b4\" id:\"75ffb84e4072c18a0bcf3019717096c7836f2f794d227653d6aa9b6d416392b4\" pid:2856 exit_status:137 exited_at:{seconds:1755048654 nanos:444965717}"
Aug 13 01:30:54.473240 containerd[1555]: time="2025-08-13T01:30:54.471610861Z" level=info msg="TearDown network for sandbox \"0c814b6acfae37fa420eab1cd926b5fb2c5c1944c51ecb2fc12f8f804734a6ec\" successfully"
Aug 13 01:30:54.473302 containerd[1555]: time="2025-08-13T01:30:54.473289298Z" level=info msg="StopPodSandbox for \"0c814b6acfae37fa420eab1cd926b5fb2c5c1944c51ecb2fc12f8f804734a6ec\" returns successfully"
Aug 13 01:30:54.473748 containerd[1555]: time="2025-08-13T01:30:54.471860041Z" level=info msg="received exit event sandbox_id:\"0c814b6acfae37fa420eab1cd926b5fb2c5c1944c51ecb2fc12f8f804734a6ec\" exit_status:137 exited_at:{seconds:1755048654 nanos:387637948}"
Aug 13 01:30:54.476397 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0c814b6acfae37fa420eab1cd926b5fb2c5c1944c51ecb2fc12f8f804734a6ec-shm.mount: Deactivated successfully.
Aug 13 01:30:54.476539 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75ffb84e4072c18a0bcf3019717096c7836f2f794d227653d6aa9b6d416392b4-rootfs.mount: Deactivated successfully.
Aug 13 01:30:54.480086 containerd[1555]: time="2025-08-13T01:30:54.479899314Z" level=info msg="shim disconnected" id=75ffb84e4072c18a0bcf3019717096c7836f2f794d227653d6aa9b6d416392b4 namespace=k8s.io
Aug 13 01:30:54.480086 containerd[1555]: time="2025-08-13T01:30:54.480022693Z" level=warning msg="cleaning up after shim disconnected" id=75ffb84e4072c18a0bcf3019717096c7836f2f794d227653d6aa9b6d416392b4 namespace=k8s.io
Aug 13 01:30:54.480086 containerd[1555]: time="2025-08-13T01:30:54.480034613Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 01:30:54.480379 containerd[1555]: time="2025-08-13T01:30:54.480335063Z" level=info msg="received exit event sandbox_id:\"75ffb84e4072c18a0bcf3019717096c7836f2f794d227653d6aa9b6d416392b4\" exit_status:137 exited_at:{seconds:1755048654 nanos:444965717}"
Aug 13 01:30:54.482102 containerd[1555]: time="2025-08-13T01:30:54.481996369Z" level=info msg="TearDown network for sandbox \"75ffb84e4072c18a0bcf3019717096c7836f2f794d227653d6aa9b6d416392b4\" successfully"
Aug 13 01:30:54.482102 containerd[1555]: time="2025-08-13T01:30:54.482018399Z" level=info msg="StopPodSandbox for \"75ffb84e4072c18a0bcf3019717096c7836f2f794d227653d6aa9b6d416392b4\" returns successfully"
Aug 13 01:30:54.637920 kubelet[2708]: I0813 01:30:54.636719 2708 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/89c96383-cf88-46bf-a4a6-13402be041b3-xtables-lock\") pod \"89c96383-cf88-46bf-a4a6-13402be041b3\" (UID: \"89c96383-cf88-46bf-a4a6-13402be041b3\") "
Aug 13 01:30:54.637920 kubelet[2708]: I0813 01:30:54.636763 2708 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/89c96383-cf88-46bf-a4a6-13402be041b3-cilium-config-path\") pod \"89c96383-cf88-46bf-a4a6-13402be041b3\" (UID: \"89c96383-cf88-46bf-a4a6-13402be041b3\") "
Aug 13 01:30:54.637920 kubelet[2708]: I0813 01:30:54.636782 2708 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/89c96383-cf88-46bf-a4a6-13402be041b3-cilium-run\") pod \"89c96383-cf88-46bf-a4a6-13402be041b3\" (UID: \"89c96383-cf88-46bf-a4a6-13402be041b3\") "
Aug 13 01:30:54.637920 kubelet[2708]: I0813 01:30:54.636805 2708 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpj8p\" (UniqueName: \"kubernetes.io/projected/89c96383-cf88-46bf-a4a6-13402be041b3-kube-api-access-tpj8p\") pod \"89c96383-cf88-46bf-a4a6-13402be041b3\" (UID: \"89c96383-cf88-46bf-a4a6-13402be041b3\") "
Aug 13 01:30:54.637920 kubelet[2708]: I0813 01:30:54.636819 2708 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/89c96383-cf88-46bf-a4a6-13402be041b3-bpf-maps\") pod \"89c96383-cf88-46bf-a4a6-13402be041b3\" (UID: \"89c96383-cf88-46bf-a4a6-13402be041b3\") "
Aug 13 01:30:54.637920 kubelet[2708]: I0813 01:30:54.636832 2708 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/89c96383-cf88-46bf-a4a6-13402be041b3-host-proc-sys-net\") pod \"89c96383-cf88-46bf-a4a6-13402be041b3\" (UID: \"89c96383-cf88-46bf-a4a6-13402be041b3\") "
Aug 13 01:30:54.638474 kubelet[2708]: I0813 01:30:54.636872 2708 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/89c96383-cf88-46bf-a4a6-13402be041b3-cilium-cgroup\") pod \"89c96383-cf88-46bf-a4a6-13402be041b3\" (UID: \"89c96383-cf88-46bf-a4a6-13402be041b3\") "
Aug 13 01:30:54.638474 kubelet[2708]: I0813 01:30:54.636891 2708 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/89c96383-cf88-46bf-a4a6-13402be041b3-hubble-tls\") pod \"89c96383-cf88-46bf-a4a6-13402be041b3\" (UID: \"89c96383-cf88-46bf-a4a6-13402be041b3\") "
Aug 13 01:30:54.638474 kubelet[2708]: I0813 01:30:54.636907 2708 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/192b6a6f-9b7f-4883-9bfc-133f6967ebfa-cilium-config-path\") pod \"192b6a6f-9b7f-4883-9bfc-133f6967ebfa\" (UID: \"192b6a6f-9b7f-4883-9bfc-133f6967ebfa\") "
Aug 13 01:30:54.638474 kubelet[2708]: I0813 01:30:54.636924 2708 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhbvh\" (UniqueName: \"kubernetes.io/projected/192b6a6f-9b7f-4883-9bfc-133f6967ebfa-kube-api-access-bhbvh\") pod \"192b6a6f-9b7f-4883-9bfc-133f6967ebfa\" (UID: \"192b6a6f-9b7f-4883-9bfc-133f6967ebfa\") "
Aug 13 01:30:54.638474 kubelet[2708]: I0813 01:30:54.636938 2708 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/89c96383-cf88-46bf-a4a6-13402be041b3-host-proc-sys-kernel\") pod \"89c96383-cf88-46bf-a4a6-13402be041b3\" (UID: \"89c96383-cf88-46bf-a4a6-13402be041b3\") "
Aug 13 01:30:54.638474 kubelet[2708]: I0813 01:30:54.636952 2708 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/89c96383-cf88-46bf-a4a6-13402be041b3-etc-cni-netd\") pod \"89c96383-cf88-46bf-a4a6-13402be041b3\" (UID: \"89c96383-cf88-46bf-a4a6-13402be041b3\") "
Aug 13 01:30:54.638705 kubelet[2708]: I0813 01:30:54.636966 2708 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/89c96383-cf88-46bf-a4a6-13402be041b3-hostproc\") pod \"89c96383-cf88-46bf-a4a6-13402be041b3\" (UID: \"89c96383-cf88-46bf-a4a6-13402be041b3\") "
Aug 13 01:30:54.638705 kubelet[2708]: I0813 01:30:54.636981 2708 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/89c96383-cf88-46bf-a4a6-13402be041b3-cni-path\") pod \"89c96383-cf88-46bf-a4a6-13402be041b3\" (UID: \"89c96383-cf88-46bf-a4a6-13402be041b3\") "
Aug 13 01:30:54.638705 kubelet[2708]: I0813 01:30:54.637000 2708 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/89c96383-cf88-46bf-a4a6-13402be041b3-clustermesh-secrets\") pod \"89c96383-cf88-46bf-a4a6-13402be041b3\" (UID: \"89c96383-cf88-46bf-a4a6-13402be041b3\") "
Aug 13 01:30:54.638705 kubelet[2708]: I0813 01:30:54.637012 2708 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/89c96383-cf88-46bf-a4a6-13402be041b3-lib-modules\") pod \"89c96383-cf88-46bf-a4a6-13402be041b3\" (UID: \"89c96383-cf88-46bf-a4a6-13402be041b3\") "
Aug 13 01:30:54.638705 kubelet[2708]: I0813 01:30:54.637061 2708 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89c96383-cf88-46bf-a4a6-13402be041b3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "89c96383-cf88-46bf-a4a6-13402be041b3" (UID: "89c96383-cf88-46bf-a4a6-13402be041b3"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 01:30:54.639982 kubelet[2708]: I0813 01:30:54.639897 2708 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/192b6a6f-9b7f-4883-9bfc-133f6967ebfa-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "192b6a6f-9b7f-4883-9bfc-133f6967ebfa" (UID: "192b6a6f-9b7f-4883-9bfc-133f6967ebfa"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Aug 13 01:30:54.640308 kubelet[2708]: I0813 01:30:54.640268 2708 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89c96383-cf88-46bf-a4a6-13402be041b3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "89c96383-cf88-46bf-a4a6-13402be041b3" (UID: "89c96383-cf88-46bf-a4a6-13402be041b3"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 01:30:54.642936 kubelet[2708]: I0813 01:30:54.642898 2708 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/192b6a6f-9b7f-4883-9bfc-133f6967ebfa-kube-api-access-bhbvh" (OuterVolumeSpecName: "kube-api-access-bhbvh") pod "192b6a6f-9b7f-4883-9bfc-133f6967ebfa" (UID: "192b6a6f-9b7f-4883-9bfc-133f6967ebfa"). InnerVolumeSpecName "kube-api-access-bhbvh". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Aug 13 01:30:54.642992 kubelet[2708]: I0813 01:30:54.642947 2708 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89c96383-cf88-46bf-a4a6-13402be041b3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "89c96383-cf88-46bf-a4a6-13402be041b3" (UID: "89c96383-cf88-46bf-a4a6-13402be041b3"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 01:30:54.642992 kubelet[2708]: I0813 01:30:54.642966 2708 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89c96383-cf88-46bf-a4a6-13402be041b3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "89c96383-cf88-46bf-a4a6-13402be041b3" (UID: "89c96383-cf88-46bf-a4a6-13402be041b3"). InnerVolumeSpecName "etc-cni-netd".
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:30:54.642992 kubelet[2708]: I0813 01:30:54.642981 2708 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89c96383-cf88-46bf-a4a6-13402be041b3-hostproc" (OuterVolumeSpecName: "hostproc") pod "89c96383-cf88-46bf-a4a6-13402be041b3" (UID: "89c96383-cf88-46bf-a4a6-13402be041b3"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:30:54.643069 kubelet[2708]: I0813 01:30:54.642995 2708 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89c96383-cf88-46bf-a4a6-13402be041b3-cni-path" (OuterVolumeSpecName: "cni-path") pod "89c96383-cf88-46bf-a4a6-13402be041b3" (UID: "89c96383-cf88-46bf-a4a6-13402be041b3"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:30:54.643244 kubelet[2708]: I0813 01:30:54.643143 2708 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89c96383-cf88-46bf-a4a6-13402be041b3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "89c96383-cf88-46bf-a4a6-13402be041b3" (UID: "89c96383-cf88-46bf-a4a6-13402be041b3"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:30:54.643620 kubelet[2708]: I0813 01:30:54.643443 2708 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89c96383-cf88-46bf-a4a6-13402be041b3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "89c96383-cf88-46bf-a4a6-13402be041b3" (UID: "89c96383-cf88-46bf-a4a6-13402be041b3"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:30:54.643620 kubelet[2708]: I0813 01:30:54.643470 2708 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89c96383-cf88-46bf-a4a6-13402be041b3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "89c96383-cf88-46bf-a4a6-13402be041b3" (UID: "89c96383-cf88-46bf-a4a6-13402be041b3"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:30:54.643620 kubelet[2708]: I0813 01:30:54.643486 2708 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89c96383-cf88-46bf-a4a6-13402be041b3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "89c96383-cf88-46bf-a4a6-13402be041b3" (UID: "89c96383-cf88-46bf-a4a6-13402be041b3"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:30:54.647526 kubelet[2708]: I0813 01:30:54.647505 2708 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89c96383-cf88-46bf-a4a6-13402be041b3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "89c96383-cf88-46bf-a4a6-13402be041b3" (UID: "89c96383-cf88-46bf-a4a6-13402be041b3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 01:30:54.648711 kubelet[2708]: I0813 01:30:54.647992 2708 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89c96383-cf88-46bf-a4a6-13402be041b3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "89c96383-cf88-46bf-a4a6-13402be041b3" (UID: "89c96383-cf88-46bf-a4a6-13402be041b3"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 01:30:54.650019 kubelet[2708]: I0813 01:30:54.649971 2708 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89c96383-cf88-46bf-a4a6-13402be041b3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "89c96383-cf88-46bf-a4a6-13402be041b3" (UID: "89c96383-cf88-46bf-a4a6-13402be041b3"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 01:30:54.650352 kubelet[2708]: I0813 01:30:54.650317 2708 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89c96383-cf88-46bf-a4a6-13402be041b3-kube-api-access-tpj8p" (OuterVolumeSpecName: "kube-api-access-tpj8p") pod "89c96383-cf88-46bf-a4a6-13402be041b3" (UID: "89c96383-cf88-46bf-a4a6-13402be041b3"). InnerVolumeSpecName "kube-api-access-tpj8p". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 01:30:54.738116 kubelet[2708]: I0813 01:30:54.738004 2708 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/89c96383-cf88-46bf-a4a6-13402be041b3-host-proc-sys-net\") on node \"172-233-222-13\" DevicePath \"\"" Aug 13 01:30:54.738116 kubelet[2708]: I0813 01:30:54.738028 2708 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/89c96383-cf88-46bf-a4a6-13402be041b3-cilium-cgroup\") on node \"172-233-222-13\" DevicePath \"\"" Aug 13 01:30:54.738116 kubelet[2708]: I0813 01:30:54.738040 2708 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/89c96383-cf88-46bf-a4a6-13402be041b3-hubble-tls\") on node \"172-233-222-13\" DevicePath \"\"" Aug 13 01:30:54.738116 kubelet[2708]: I0813 01:30:54.738049 2708 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/192b6a6f-9b7f-4883-9bfc-133f6967ebfa-cilium-config-path\") on node \"172-233-222-13\" DevicePath \"\"" Aug 13 01:30:54.738116 kubelet[2708]: I0813 01:30:54.738057 2708 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bhbvh\" (UniqueName: \"kubernetes.io/projected/192b6a6f-9b7f-4883-9bfc-133f6967ebfa-kube-api-access-bhbvh\") on node \"172-233-222-13\" DevicePath \"\"" Aug 13 01:30:54.738116 kubelet[2708]: I0813 01:30:54.738067 2708 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/89c96383-cf88-46bf-a4a6-13402be041b3-host-proc-sys-kernel\") on node \"172-233-222-13\" DevicePath \"\"" Aug 13 01:30:54.738116 kubelet[2708]: I0813 01:30:54.738074 2708 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/89c96383-cf88-46bf-a4a6-13402be041b3-etc-cni-netd\") on node \"172-233-222-13\" DevicePath \"\"" Aug 13 01:30:54.738116 kubelet[2708]: I0813 01:30:54.738082 2708 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/89c96383-cf88-46bf-a4a6-13402be041b3-hostproc\") on node \"172-233-222-13\" DevicePath \"\"" Aug 13 01:30:54.738366 kubelet[2708]: I0813 01:30:54.738090 2708 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/89c96383-cf88-46bf-a4a6-13402be041b3-cni-path\") on node \"172-233-222-13\" DevicePath \"\"" Aug 13 01:30:54.738366 kubelet[2708]: I0813 01:30:54.738099 2708 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/89c96383-cf88-46bf-a4a6-13402be041b3-clustermesh-secrets\") on node \"172-233-222-13\" DevicePath \"\"" Aug 13 01:30:54.738366 kubelet[2708]: I0813 01:30:54.738109 2708 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/89c96383-cf88-46bf-a4a6-13402be041b3-lib-modules\") on node \"172-233-222-13\" DevicePath \"\"" Aug 13 01:30:54.738366 kubelet[2708]: I0813 01:30:54.738116 2708 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/89c96383-cf88-46bf-a4a6-13402be041b3-xtables-lock\") on node \"172-233-222-13\" DevicePath \"\"" Aug 13 01:30:54.738366 kubelet[2708]: I0813 01:30:54.738125 2708 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/89c96383-cf88-46bf-a4a6-13402be041b3-cilium-config-path\") on node \"172-233-222-13\" DevicePath \"\"" Aug 13 01:30:54.738366 kubelet[2708]: I0813 01:30:54.738133 2708 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/89c96383-cf88-46bf-a4a6-13402be041b3-cilium-run\") on node \"172-233-222-13\" DevicePath \"\"" Aug 13 01:30:54.738366 kubelet[2708]: I0813 01:30:54.738140 2708 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tpj8p\" (UniqueName: \"kubernetes.io/projected/89c96383-cf88-46bf-a4a6-13402be041b3-kube-api-access-tpj8p\") on node \"172-233-222-13\" DevicePath \"\"" Aug 13 01:30:54.738366 kubelet[2708]: I0813 01:30:54.738149 2708 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/89c96383-cf88-46bf-a4a6-13402be041b3-bpf-maps\") on node \"172-233-222-13\" DevicePath \"\"" Aug 13 01:30:54.948829 kubelet[2708]: I0813 01:30:54.948724 2708 scope.go:117] "RemoveContainer" containerID="f8866336b7dcfe742a9ad6092f861521738853f3e9587926a4a1ca4814ed3509" Aug 13 01:30:54.955457 containerd[1555]: time="2025-08-13T01:30:54.952978579Z" level=info msg="RemoveContainer for \"f8866336b7dcfe742a9ad6092f861521738853f3e9587926a4a1ca4814ed3509\"" Aug 13 01:30:54.961505 systemd[1]: Removed slice kubepods-burstable-pod89c96383_cf88_46bf_a4a6_13402be041b3.slice - libcontainer container 
kubepods-burstable-pod89c96383_cf88_46bf_a4a6_13402be041b3.slice. Aug 13 01:30:54.961594 systemd[1]: kubepods-burstable-pod89c96383_cf88_46bf_a4a6_13402be041b3.slice: Consumed 5.782s CPU time, 121.8M memory peak, 144K read from disk, 13.3M written to disk. Aug 13 01:30:54.963575 containerd[1555]: time="2025-08-13T01:30:54.963549617Z" level=info msg="RemoveContainer for \"f8866336b7dcfe742a9ad6092f861521738853f3e9587926a4a1ca4814ed3509\" returns successfully" Aug 13 01:30:54.963943 kubelet[2708]: I0813 01:30:54.963914 2708 scope.go:117] "RemoveContainer" containerID="1995f07118f6be887e69fa07568bcce0b8e40cfd016ae967ddb38f9caa2c9fdf" Aug 13 01:30:54.965302 systemd[1]: Removed slice kubepods-besteffort-pod192b6a6f_9b7f_4883_9bfc_133f6967ebfa.slice - libcontainer container kubepods-besteffort-pod192b6a6f_9b7f_4883_9bfc_133f6967ebfa.slice. Aug 13 01:30:54.970264 containerd[1555]: time="2025-08-13T01:30:54.969886513Z" level=info msg="RemoveContainer for \"1995f07118f6be887e69fa07568bcce0b8e40cfd016ae967ddb38f9caa2c9fdf\"" Aug 13 01:30:54.973274 containerd[1555]: time="2025-08-13T01:30:54.973240866Z" level=info msg="RemoveContainer for \"1995f07118f6be887e69fa07568bcce0b8e40cfd016ae967ddb38f9caa2c9fdf\" returns successfully" Aug 13 01:30:54.973541 kubelet[2708]: I0813 01:30:54.973400 2708 scope.go:117] "RemoveContainer" containerID="9b6a4e5c10f3e3265656b1e934792d71eee3345985ac6884afcd919eff7d79ec" Aug 13 01:30:54.976662 containerd[1555]: time="2025-08-13T01:30:54.976495510Z" level=info msg="RemoveContainer for \"9b6a4e5c10f3e3265656b1e934792d71eee3345985ac6884afcd919eff7d79ec\"" Aug 13 01:30:54.980777 containerd[1555]: time="2025-08-13T01:30:54.980664000Z" level=info msg="RemoveContainer for \"9b6a4e5c10f3e3265656b1e934792d71eee3345985ac6884afcd919eff7d79ec\" returns successfully" Aug 13 01:30:54.981000 kubelet[2708]: I0813 01:30:54.980975 2708 scope.go:117] "RemoveContainer" containerID="922bdebe8520c980cac13f159cf10ef090726d6ba8bc8ad1e72979fe14f54c58" Aug 13 
01:30:54.985345 containerd[1555]: time="2025-08-13T01:30:54.985249631Z" level=info msg="RemoveContainer for \"922bdebe8520c980cac13f159cf10ef090726d6ba8bc8ad1e72979fe14f54c58\"" Aug 13 01:30:54.988922 containerd[1555]: time="2025-08-13T01:30:54.988849683Z" level=info msg="RemoveContainer for \"922bdebe8520c980cac13f159cf10ef090726d6ba8bc8ad1e72979fe14f54c58\" returns successfully" Aug 13 01:30:54.989165 kubelet[2708]: I0813 01:30:54.989131 2708 scope.go:117] "RemoveContainer" containerID="4772d7d7d60bef3a0b4077163504b3a970adaa77c83afb5247ce34e6739feaf7" Aug 13 01:30:54.991141 containerd[1555]: time="2025-08-13T01:30:54.990905219Z" level=info msg="RemoveContainer for \"4772d7d7d60bef3a0b4077163504b3a970adaa77c83afb5247ce34e6739feaf7\"" Aug 13 01:30:54.993087 containerd[1555]: time="2025-08-13T01:30:54.993061515Z" level=info msg="RemoveContainer for \"4772d7d7d60bef3a0b4077163504b3a970adaa77c83afb5247ce34e6739feaf7\" returns successfully" Aug 13 01:30:54.993632 kubelet[2708]: I0813 01:30:54.993354 2708 scope.go:117] "RemoveContainer" containerID="f8866336b7dcfe742a9ad6092f861521738853f3e9587926a4a1ca4814ed3509" Aug 13 01:30:54.994369 containerd[1555]: time="2025-08-13T01:30:54.994309171Z" level=error msg="ContainerStatus for \"f8866336b7dcfe742a9ad6092f861521738853f3e9587926a4a1ca4814ed3509\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f8866336b7dcfe742a9ad6092f861521738853f3e9587926a4a1ca4814ed3509\": not found" Aug 13 01:30:54.994763 kubelet[2708]: E0813 01:30:54.994724 2708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f8866336b7dcfe742a9ad6092f861521738853f3e9587926a4a1ca4814ed3509\": not found" containerID="f8866336b7dcfe742a9ad6092f861521738853f3e9587926a4a1ca4814ed3509" Aug 13 01:30:54.994849 kubelet[2708]: I0813 01:30:54.994757 2708 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"f8866336b7dcfe742a9ad6092f861521738853f3e9587926a4a1ca4814ed3509"} err="failed to get container status \"f8866336b7dcfe742a9ad6092f861521738853f3e9587926a4a1ca4814ed3509\": rpc error: code = NotFound desc = an error occurred when try to find container \"f8866336b7dcfe742a9ad6092f861521738853f3e9587926a4a1ca4814ed3509\": not found" Aug 13 01:30:54.994849 kubelet[2708]: I0813 01:30:54.994844 2708 scope.go:117] "RemoveContainer" containerID="1995f07118f6be887e69fa07568bcce0b8e40cfd016ae967ddb38f9caa2c9fdf" Aug 13 01:30:54.995582 containerd[1555]: time="2025-08-13T01:30:54.995540599Z" level=error msg="ContainerStatus for \"1995f07118f6be887e69fa07568bcce0b8e40cfd016ae967ddb38f9caa2c9fdf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1995f07118f6be887e69fa07568bcce0b8e40cfd016ae967ddb38f9caa2c9fdf\": not found" Aug 13 01:30:54.996146 kubelet[2708]: E0813 01:30:54.996112 2708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1995f07118f6be887e69fa07568bcce0b8e40cfd016ae967ddb38f9caa2c9fdf\": not found" containerID="1995f07118f6be887e69fa07568bcce0b8e40cfd016ae967ddb38f9caa2c9fdf" Aug 13 01:30:54.996146 kubelet[2708]: I0813 01:30:54.996140 2708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1995f07118f6be887e69fa07568bcce0b8e40cfd016ae967ddb38f9caa2c9fdf"} err="failed to get container status \"1995f07118f6be887e69fa07568bcce0b8e40cfd016ae967ddb38f9caa2c9fdf\": rpc error: code = NotFound desc = an error occurred when try to find container \"1995f07118f6be887e69fa07568bcce0b8e40cfd016ae967ddb38f9caa2c9fdf\": not found" Aug 13 01:30:54.996235 kubelet[2708]: I0813 01:30:54.996155 2708 scope.go:117] "RemoveContainer" containerID="9b6a4e5c10f3e3265656b1e934792d71eee3345985ac6884afcd919eff7d79ec" Aug 13 01:30:54.996886 containerd[1555]: 
time="2025-08-13T01:30:54.996839836Z" level=error msg="ContainerStatus for \"9b6a4e5c10f3e3265656b1e934792d71eee3345985ac6884afcd919eff7d79ec\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9b6a4e5c10f3e3265656b1e934792d71eee3345985ac6884afcd919eff7d79ec\": not found" Aug 13 01:30:54.997104 kubelet[2708]: E0813 01:30:54.997071 2708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9b6a4e5c10f3e3265656b1e934792d71eee3345985ac6884afcd919eff7d79ec\": not found" containerID="9b6a4e5c10f3e3265656b1e934792d71eee3345985ac6884afcd919eff7d79ec" Aug 13 01:30:54.997104 kubelet[2708]: I0813 01:30:54.997096 2708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9b6a4e5c10f3e3265656b1e934792d71eee3345985ac6884afcd919eff7d79ec"} err="failed to get container status \"9b6a4e5c10f3e3265656b1e934792d71eee3345985ac6884afcd919eff7d79ec\": rpc error: code = NotFound desc = an error occurred when try to find container \"9b6a4e5c10f3e3265656b1e934792d71eee3345985ac6884afcd919eff7d79ec\": not found" Aug 13 01:30:54.997104 kubelet[2708]: I0813 01:30:54.997112 2708 scope.go:117] "RemoveContainer" containerID="922bdebe8520c980cac13f159cf10ef090726d6ba8bc8ad1e72979fe14f54c58" Aug 13 01:30:54.997344 containerd[1555]: time="2025-08-13T01:30:54.997308745Z" level=error msg="ContainerStatus for \"922bdebe8520c980cac13f159cf10ef090726d6ba8bc8ad1e72979fe14f54c58\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"922bdebe8520c980cac13f159cf10ef090726d6ba8bc8ad1e72979fe14f54c58\": not found" Aug 13 01:30:54.997491 kubelet[2708]: E0813 01:30:54.997466 2708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"922bdebe8520c980cac13f159cf10ef090726d6ba8bc8ad1e72979fe14f54c58\": not 
found" containerID="922bdebe8520c980cac13f159cf10ef090726d6ba8bc8ad1e72979fe14f54c58" Aug 13 01:30:54.997541 kubelet[2708]: I0813 01:30:54.997492 2708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"922bdebe8520c980cac13f159cf10ef090726d6ba8bc8ad1e72979fe14f54c58"} err="failed to get container status \"922bdebe8520c980cac13f159cf10ef090726d6ba8bc8ad1e72979fe14f54c58\": rpc error: code = NotFound desc = an error occurred when try to find container \"922bdebe8520c980cac13f159cf10ef090726d6ba8bc8ad1e72979fe14f54c58\": not found" Aug 13 01:30:54.997541 kubelet[2708]: I0813 01:30:54.997506 2708 scope.go:117] "RemoveContainer" containerID="4772d7d7d60bef3a0b4077163504b3a970adaa77c83afb5247ce34e6739feaf7" Aug 13 01:30:54.997676 containerd[1555]: time="2025-08-13T01:30:54.997627665Z" level=error msg="ContainerStatus for \"4772d7d7d60bef3a0b4077163504b3a970adaa77c83afb5247ce34e6739feaf7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4772d7d7d60bef3a0b4077163504b3a970adaa77c83afb5247ce34e6739feaf7\": not found" Aug 13 01:30:54.997824 kubelet[2708]: E0813 01:30:54.997764 2708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4772d7d7d60bef3a0b4077163504b3a970adaa77c83afb5247ce34e6739feaf7\": not found" containerID="4772d7d7d60bef3a0b4077163504b3a970adaa77c83afb5247ce34e6739feaf7" Aug 13 01:30:54.997824 kubelet[2708]: I0813 01:30:54.997796 2708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4772d7d7d60bef3a0b4077163504b3a970adaa77c83afb5247ce34e6739feaf7"} err="failed to get container status \"4772d7d7d60bef3a0b4077163504b3a970adaa77c83afb5247ce34e6739feaf7\": rpc error: code = NotFound desc = an error occurred when try to find container \"4772d7d7d60bef3a0b4077163504b3a970adaa77c83afb5247ce34e6739feaf7\": not found" Aug 13 
01:30:54.997824 kubelet[2708]: I0813 01:30:54.997808 2708 scope.go:117] "RemoveContainer" containerID="bb5d137435e0267451d548d48f72b01ce6726231d8ee5ce9d950090e87a6e322" Aug 13 01:30:54.999048 containerd[1555]: time="2025-08-13T01:30:54.999013422Z" level=info msg="RemoveContainer for \"bb5d137435e0267451d548d48f72b01ce6726231d8ee5ce9d950090e87a6e322\"" Aug 13 01:30:55.001408 containerd[1555]: time="2025-08-13T01:30:55.001384966Z" level=info msg="RemoveContainer for \"bb5d137435e0267451d548d48f72b01ce6726231d8ee5ce9d950090e87a6e322\" returns successfully" Aug 13 01:30:55.001541 kubelet[2708]: I0813 01:30:55.001521 2708 scope.go:117] "RemoveContainer" containerID="bb5d137435e0267451d548d48f72b01ce6726231d8ee5ce9d950090e87a6e322" Aug 13 01:30:55.001814 containerd[1555]: time="2025-08-13T01:30:55.001791586Z" level=error msg="ContainerStatus for \"bb5d137435e0267451d548d48f72b01ce6726231d8ee5ce9d950090e87a6e322\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bb5d137435e0267451d548d48f72b01ce6726231d8ee5ce9d950090e87a6e322\": not found" Aug 13 01:30:55.002049 kubelet[2708]: E0813 01:30:55.002018 2708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bb5d137435e0267451d548d48f72b01ce6726231d8ee5ce9d950090e87a6e322\": not found" containerID="bb5d137435e0267451d548d48f72b01ce6726231d8ee5ce9d950090e87a6e322" Aug 13 01:30:55.002144 kubelet[2708]: I0813 01:30:55.002046 2708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bb5d137435e0267451d548d48f72b01ce6726231d8ee5ce9d950090e87a6e322"} err="failed to get container status \"bb5d137435e0267451d548d48f72b01ce6726231d8ee5ce9d950090e87a6e322\": rpc error: code = NotFound desc = an error occurred when try to find container \"bb5d137435e0267451d548d48f72b01ce6726231d8ee5ce9d950090e87a6e322\": not found" Aug 13 01:30:55.351854 systemd[1]: 
var-lib-kubelet-pods-192b6a6f\x2d9b7f\x2d4883\x2d9bfc\x2d133f6967ebfa-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbhbvh.mount: Deactivated successfully. Aug 13 01:30:55.352014 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-75ffb84e4072c18a0bcf3019717096c7836f2f794d227653d6aa9b6d416392b4-shm.mount: Deactivated successfully. Aug 13 01:30:55.352102 systemd[1]: var-lib-kubelet-pods-89c96383\x2dcf88\x2d46bf\x2da4a6\x2d13402be041b3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtpj8p.mount: Deactivated successfully. Aug 13 01:30:55.352193 systemd[1]: var-lib-kubelet-pods-89c96383\x2dcf88\x2d46bf\x2da4a6\x2d13402be041b3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 13 01:30:55.352265 systemd[1]: var-lib-kubelet-pods-89c96383\x2dcf88\x2d46bf\x2da4a6\x2d13402be041b3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 13 01:30:56.255338 kubelet[2708]: I0813 01:30:56.255268 2708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="192b6a6f-9b7f-4883-9bfc-133f6967ebfa" path="/var/lib/kubelet/pods/192b6a6f-9b7f-4883-9bfc-133f6967ebfa/volumes" Aug 13 01:30:56.255884 kubelet[2708]: I0813 01:30:56.255854 2708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89c96383-cf88-46bf-a4a6-13402be041b3" path="/var/lib/kubelet/pods/89c96383-cf88-46bf-a4a6-13402be041b3/volumes" Aug 13 01:30:56.300583 sshd[4511]: Connection closed by 147.75.109.163 port 42470 Aug 13 01:30:56.301948 sshd-session[4509]: pam_unix(sshd:session): session closed for user core Aug 13 01:30:56.307209 systemd[1]: sshd@53-172.233.222.13:22-147.75.109.163:42470.service: Deactivated successfully. Aug 13 01:30:56.309921 systemd[1]: session-54.scope: Deactivated successfully. Aug 13 01:30:56.311813 systemd-logind[1532]: Session 54 logged out. Waiting for processes to exit. Aug 13 01:30:56.313372 systemd-logind[1532]: Removed session 54. 
Aug 13 01:30:56.358814 systemd[1]: Started sshd@54-172.233.222.13:22-147.75.109.163:42480.service - OpenSSH per-connection server daemon (147.75.109.163:42480). Aug 13 01:30:56.687020 sshd[4664]: Accepted publickey for core from 147.75.109.163 port 42480 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:30:56.688454 sshd-session[4664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:30:56.693979 systemd-logind[1532]: New session 55 of user core. Aug 13 01:30:56.704786 systemd[1]: Started session-55.scope - Session 55 of User core. Aug 13 01:30:57.243730 kubelet[2708]: I0813 01:30:57.243061 2708 memory_manager.go:355] "RemoveStaleState removing state" podUID="192b6a6f-9b7f-4883-9bfc-133f6967ebfa" containerName="cilium-operator" Aug 13 01:30:57.243730 kubelet[2708]: I0813 01:30:57.243093 2708 memory_manager.go:355] "RemoveStaleState removing state" podUID="89c96383-cf88-46bf-a4a6-13402be041b3" containerName="cilium-agent" Aug 13 01:30:57.254136 systemd[1]: Created slice kubepods-burstable-pod66194ae4_75b1_47e5_b091_f79f7ddb1021.slice - libcontainer container kubepods-burstable-pod66194ae4_75b1_47e5_b091_f79f7ddb1021.slice. Aug 13 01:30:57.274711 sshd[4666]: Connection closed by 147.75.109.163 port 42480 Aug 13 01:30:57.275552 sshd-session[4664]: pam_unix(sshd:session): session closed for user core Aug 13 01:30:57.281190 systemd-logind[1532]: Session 55 logged out. Waiting for processes to exit. Aug 13 01:30:57.282203 systemd[1]: sshd@54-172.233.222.13:22-147.75.109.163:42480.service: Deactivated successfully. Aug 13 01:30:57.285344 systemd[1]: session-55.scope: Deactivated successfully. Aug 13 01:30:57.289581 systemd-logind[1532]: Removed session 55. Aug 13 01:30:57.338873 systemd[1]: Started sshd@55-172.233.222.13:22-147.75.109.163:42482.service - OpenSSH per-connection server daemon (147.75.109.163:42482). 
Aug 13 01:30:57.351418 kubelet[2708]: I0813 01:30:57.351130 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/66194ae4-75b1-47e5-b091-f79f7ddb1021-cilium-ipsec-secrets\") pod \"cilium-qg9k8\" (UID: \"66194ae4-75b1-47e5-b091-f79f7ddb1021\") " pod="kube-system/cilium-qg9k8" Aug 13 01:30:57.351418 kubelet[2708]: I0813 01:30:57.351160 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/66194ae4-75b1-47e5-b091-f79f7ddb1021-host-proc-sys-kernel\") pod \"cilium-qg9k8\" (UID: \"66194ae4-75b1-47e5-b091-f79f7ddb1021\") " pod="kube-system/cilium-qg9k8" Aug 13 01:30:57.351418 kubelet[2708]: I0813 01:30:57.351178 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/66194ae4-75b1-47e5-b091-f79f7ddb1021-cni-path\") pod \"cilium-qg9k8\" (UID: \"66194ae4-75b1-47e5-b091-f79f7ddb1021\") " pod="kube-system/cilium-qg9k8" Aug 13 01:30:57.351418 kubelet[2708]: I0813 01:30:57.351190 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/66194ae4-75b1-47e5-b091-f79f7ddb1021-xtables-lock\") pod \"cilium-qg9k8\" (UID: \"66194ae4-75b1-47e5-b091-f79f7ddb1021\") " pod="kube-system/cilium-qg9k8" Aug 13 01:30:57.351418 kubelet[2708]: I0813 01:30:57.351207 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/66194ae4-75b1-47e5-b091-f79f7ddb1021-bpf-maps\") pod \"cilium-qg9k8\" (UID: \"66194ae4-75b1-47e5-b091-f79f7ddb1021\") " pod="kube-system/cilium-qg9k8" Aug 13 01:30:57.351418 kubelet[2708]: I0813 01:30:57.351219 2708 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/66194ae4-75b1-47e5-b091-f79f7ddb1021-clustermesh-secrets\") pod \"cilium-qg9k8\" (UID: \"66194ae4-75b1-47e5-b091-f79f7ddb1021\") " pod="kube-system/cilium-qg9k8" Aug 13 01:30:57.351856 kubelet[2708]: I0813 01:30:57.351232 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/66194ae4-75b1-47e5-b091-f79f7ddb1021-hubble-tls\") pod \"cilium-qg9k8\" (UID: \"66194ae4-75b1-47e5-b091-f79f7ddb1021\") " pod="kube-system/cilium-qg9k8" Aug 13 01:30:57.351856 kubelet[2708]: I0813 01:30:57.351245 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvpg6\" (UniqueName: \"kubernetes.io/projected/66194ae4-75b1-47e5-b091-f79f7ddb1021-kube-api-access-fvpg6\") pod \"cilium-qg9k8\" (UID: \"66194ae4-75b1-47e5-b091-f79f7ddb1021\") " pod="kube-system/cilium-qg9k8" Aug 13 01:30:57.351856 kubelet[2708]: I0813 01:30:57.351259 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/66194ae4-75b1-47e5-b091-f79f7ddb1021-etc-cni-netd\") pod \"cilium-qg9k8\" (UID: \"66194ae4-75b1-47e5-b091-f79f7ddb1021\") " pod="kube-system/cilium-qg9k8" Aug 13 01:30:57.351856 kubelet[2708]: I0813 01:30:57.351274 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/66194ae4-75b1-47e5-b091-f79f7ddb1021-cilium-config-path\") pod \"cilium-qg9k8\" (UID: \"66194ae4-75b1-47e5-b091-f79f7ddb1021\") " pod="kube-system/cilium-qg9k8" Aug 13 01:30:57.351856 kubelet[2708]: I0813 01:30:57.351288 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/66194ae4-75b1-47e5-b091-f79f7ddb1021-host-proc-sys-net\") pod \"cilium-qg9k8\" (UID: \"66194ae4-75b1-47e5-b091-f79f7ddb1021\") " pod="kube-system/cilium-qg9k8" Aug 13 01:30:57.351856 kubelet[2708]: I0813 01:30:57.351301 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/66194ae4-75b1-47e5-b091-f79f7ddb1021-cilium-run\") pod \"cilium-qg9k8\" (UID: \"66194ae4-75b1-47e5-b091-f79f7ddb1021\") " pod="kube-system/cilium-qg9k8" Aug 13 01:30:57.351983 kubelet[2708]: I0813 01:30:57.351315 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/66194ae4-75b1-47e5-b091-f79f7ddb1021-hostproc\") pod \"cilium-qg9k8\" (UID: \"66194ae4-75b1-47e5-b091-f79f7ddb1021\") " pod="kube-system/cilium-qg9k8" Aug 13 01:30:57.351983 kubelet[2708]: I0813 01:30:57.351327 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/66194ae4-75b1-47e5-b091-f79f7ddb1021-cilium-cgroup\") pod \"cilium-qg9k8\" (UID: \"66194ae4-75b1-47e5-b091-f79f7ddb1021\") " pod="kube-system/cilium-qg9k8" Aug 13 01:30:57.351983 kubelet[2708]: I0813 01:30:57.351339 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/66194ae4-75b1-47e5-b091-f79f7ddb1021-lib-modules\") pod \"cilium-qg9k8\" (UID: \"66194ae4-75b1-47e5-b091-f79f7ddb1021\") " pod="kube-system/cilium-qg9k8" Aug 13 01:30:57.560963 kubelet[2708]: E0813 01:30:57.560596 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:30:57.562619 containerd[1555]: time="2025-08-13T01:30:57.562572836Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qg9k8,Uid:66194ae4-75b1-47e5-b091-f79f7ddb1021,Namespace:kube-system,Attempt:0,}" Aug 13 01:30:57.580327 containerd[1555]: time="2025-08-13T01:30:57.580273289Z" level=info msg="connecting to shim 0b9479dad962c196e0de9febc44751d427d254c4b3d7f5e5acf947106697683b" address="unix:///run/containerd/s/abf8ef07c161141cdd57a9f13a6cd2e6127251d6d4b1e4002b6a31f13a3a09e9" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:30:57.610784 systemd[1]: Started cri-containerd-0b9479dad962c196e0de9febc44751d427d254c4b3d7f5e5acf947106697683b.scope - libcontainer container 0b9479dad962c196e0de9febc44751d427d254c4b3d7f5e5acf947106697683b. Aug 13 01:30:57.644698 containerd[1555]: time="2025-08-13T01:30:57.644629446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qg9k8,Uid:66194ae4-75b1-47e5-b091-f79f7ddb1021,Namespace:kube-system,Attempt:0,} returns sandbox id \"0b9479dad962c196e0de9febc44751d427d254c4b3d7f5e5acf947106697683b\"" Aug 13 01:30:57.645942 kubelet[2708]: E0813 01:30:57.645907 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:30:57.648278 containerd[1555]: time="2025-08-13T01:30:57.648236598Z" level=info msg="CreateContainer within sandbox \"0b9479dad962c196e0de9febc44751d427d254c4b3d7f5e5acf947106697683b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 01:30:57.655024 containerd[1555]: time="2025-08-13T01:30:57.654978604Z" level=info msg="Container cab3500727e9cab798fe329016fdd64afcaa0818fa9ad3f6e608b2c38af950c0: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:30:57.659937 containerd[1555]: time="2025-08-13T01:30:57.659861974Z" level=info msg="CreateContainer within sandbox \"0b9479dad962c196e0de9febc44751d427d254c4b3d7f5e5acf947106697683b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id 
\"cab3500727e9cab798fe329016fdd64afcaa0818fa9ad3f6e608b2c38af950c0\"" Aug 13 01:30:57.660541 containerd[1555]: time="2025-08-13T01:30:57.660509323Z" level=info msg="StartContainer for \"cab3500727e9cab798fe329016fdd64afcaa0818fa9ad3f6e608b2c38af950c0\"" Aug 13 01:30:57.662047 containerd[1555]: time="2025-08-13T01:30:57.661999720Z" level=info msg="connecting to shim cab3500727e9cab798fe329016fdd64afcaa0818fa9ad3f6e608b2c38af950c0" address="unix:///run/containerd/s/abf8ef07c161141cdd57a9f13a6cd2e6127251d6d4b1e4002b6a31f13a3a09e9" protocol=ttrpc version=3 Aug 13 01:30:57.681792 systemd[1]: Started cri-containerd-cab3500727e9cab798fe329016fdd64afcaa0818fa9ad3f6e608b2c38af950c0.scope - libcontainer container cab3500727e9cab798fe329016fdd64afcaa0818fa9ad3f6e608b2c38af950c0. Aug 13 01:30:57.687198 sshd[4676]: Accepted publickey for core from 147.75.109.163 port 42482 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:30:57.689123 sshd-session[4676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:30:57.695569 systemd-logind[1532]: New session 56 of user core. Aug 13 01:30:57.706779 systemd[1]: Started session-56.scope - Session 56 of User core. Aug 13 01:30:57.734719 containerd[1555]: time="2025-08-13T01:30:57.734630749Z" level=info msg="StartContainer for \"cab3500727e9cab798fe329016fdd64afcaa0818fa9ad3f6e608b2c38af950c0\" returns successfully" Aug 13 01:30:57.742265 systemd[1]: cri-containerd-cab3500727e9cab798fe329016fdd64afcaa0818fa9ad3f6e608b2c38af950c0.scope: Deactivated successfully. 
Aug 13 01:30:57.743363 containerd[1555]: time="2025-08-13T01:30:57.743324751Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cab3500727e9cab798fe329016fdd64afcaa0818fa9ad3f6e608b2c38af950c0\" id:\"cab3500727e9cab798fe329016fdd64afcaa0818fa9ad3f6e608b2c38af950c0\" pid:4740 exited_at:{seconds:1755048657 nanos:742958971}" Aug 13 01:30:57.744277 containerd[1555]: time="2025-08-13T01:30:57.744248589Z" level=info msg="received exit event container_id:\"cab3500727e9cab798fe329016fdd64afcaa0818fa9ad3f6e608b2c38af950c0\" id:\"cab3500727e9cab798fe329016fdd64afcaa0818fa9ad3f6e608b2c38af950c0\" pid:4740 exited_at:{seconds:1755048657 nanos:742958971}" Aug 13 01:30:57.942177 sshd[4746]: Connection closed by 147.75.109.163 port 42482 Aug 13 01:30:57.943060 sshd-session[4676]: pam_unix(sshd:session): session closed for user core Aug 13 01:30:57.947408 systemd[1]: sshd@55-172.233.222.13:22-147.75.109.163:42482.service: Deactivated successfully. Aug 13 01:30:57.950372 systemd[1]: session-56.scope: Deactivated successfully. Aug 13 01:30:57.953461 systemd-logind[1532]: Session 56 logged out. Waiting for processes to exit. Aug 13 01:30:57.954938 systemd-logind[1532]: Removed session 56. 
Aug 13 01:30:57.964172 kubelet[2708]: E0813 01:30:57.964133 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:30:57.968870 containerd[1555]: time="2025-08-13T01:30:57.968828543Z" level=info msg="CreateContainer within sandbox \"0b9479dad962c196e0de9febc44751d427d254c4b3d7f5e5acf947106697683b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 01:30:57.976339 containerd[1555]: time="2025-08-13T01:30:57.976311517Z" level=info msg="Container 021161fe47c86290ed3208f54b1fa22595f7324db64d0329505bd7abefd1ed59: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:30:57.981785 containerd[1555]: time="2025-08-13T01:30:57.981753256Z" level=info msg="CreateContainer within sandbox \"0b9479dad962c196e0de9febc44751d427d254c4b3d7f5e5acf947106697683b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"021161fe47c86290ed3208f54b1fa22595f7324db64d0329505bd7abefd1ed59\"" Aug 13 01:30:57.982719 containerd[1555]: time="2025-08-13T01:30:57.982485274Z" level=info msg="StartContainer for \"021161fe47c86290ed3208f54b1fa22595f7324db64d0329505bd7abefd1ed59\"" Aug 13 01:30:57.983903 containerd[1555]: time="2025-08-13T01:30:57.983876771Z" level=info msg="connecting to shim 021161fe47c86290ed3208f54b1fa22595f7324db64d0329505bd7abefd1ed59" address="unix:///run/containerd/s/abf8ef07c161141cdd57a9f13a6cd2e6127251d6d4b1e4002b6a31f13a3a09e9" protocol=ttrpc version=3 Aug 13 01:30:58.010286 systemd[1]: Started sshd@56-172.233.222.13:22-147.75.109.163:42496.service - OpenSSH per-connection server daemon (147.75.109.163:42496). Aug 13 01:30:58.040790 systemd[1]: Started cri-containerd-021161fe47c86290ed3208f54b1fa22595f7324db64d0329505bd7abefd1ed59.scope - libcontainer container 021161fe47c86290ed3208f54b1fa22595f7324db64d0329505bd7abefd1ed59. 
Aug 13 01:30:58.119461 containerd[1555]: time="2025-08-13T01:30:58.119414611Z" level=info msg="StartContainer for \"021161fe47c86290ed3208f54b1fa22595f7324db64d0329505bd7abefd1ed59\" returns successfully" Aug 13 01:30:58.130305 systemd[1]: cri-containerd-021161fe47c86290ed3208f54b1fa22595f7324db64d0329505bd7abefd1ed59.scope: Deactivated successfully. Aug 13 01:30:58.131509 containerd[1555]: time="2025-08-13T01:30:58.131490456Z" level=info msg="received exit event container_id:\"021161fe47c86290ed3208f54b1fa22595f7324db64d0329505bd7abefd1ed59\" id:\"021161fe47c86290ed3208f54b1fa22595f7324db64d0329505bd7abefd1ed59\" pid:4796 exited_at:{seconds:1755048658 nanos:130332259}" Aug 13 01:30:58.132616 containerd[1555]: time="2025-08-13T01:30:58.132577614Z" level=info msg="TaskExit event in podsandbox handler container_id:\"021161fe47c86290ed3208f54b1fa22595f7324db64d0329505bd7abefd1ed59\" id:\"021161fe47c86290ed3208f54b1fa22595f7324db64d0329505bd7abefd1ed59\" pid:4796 exited_at:{seconds:1755048658 nanos:130332259}" Aug 13 01:30:58.371742 sshd[4792]: Accepted publickey for core from 147.75.109.163 port 42496 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:30:58.373457 sshd-session[4792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:30:58.379327 systemd-logind[1532]: New session 57 of user core. Aug 13 01:30:58.383475 kubelet[2708]: E0813 01:30:58.383380 2708 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 13 01:30:58.384849 systemd[1]: Started session-57.scope - Session 57 of User core. 
Aug 13 01:30:58.968109 kubelet[2708]: E0813 01:30:58.968060 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:30:58.973241 containerd[1555]: time="2025-08-13T01:30:58.973186487Z" level=info msg="CreateContainer within sandbox \"0b9479dad962c196e0de9febc44751d427d254c4b3d7f5e5acf947106697683b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 01:30:58.987723 containerd[1555]: time="2025-08-13T01:30:58.985183113Z" level=info msg="Container e2acb49fad51eb692c54d387ea74a7438fabcd982a6279683eb76e92dc7f222e: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:30:58.991158 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2190747108.mount: Deactivated successfully. Aug 13 01:30:58.995849 containerd[1555]: time="2025-08-13T01:30:58.995798051Z" level=info msg="CreateContainer within sandbox \"0b9479dad962c196e0de9febc44751d427d254c4b3d7f5e5acf947106697683b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e2acb49fad51eb692c54d387ea74a7438fabcd982a6279683eb76e92dc7f222e\"" Aug 13 01:30:58.996500 containerd[1555]: time="2025-08-13T01:30:58.996431690Z" level=info msg="StartContainer for \"e2acb49fad51eb692c54d387ea74a7438fabcd982a6279683eb76e92dc7f222e\"" Aug 13 01:30:58.998335 containerd[1555]: time="2025-08-13T01:30:58.998185796Z" level=info msg="connecting to shim e2acb49fad51eb692c54d387ea74a7438fabcd982a6279683eb76e92dc7f222e" address="unix:///run/containerd/s/abf8ef07c161141cdd57a9f13a6cd2e6127251d6d4b1e4002b6a31f13a3a09e9" protocol=ttrpc version=3 Aug 13 01:30:59.023775 systemd[1]: Started cri-containerd-e2acb49fad51eb692c54d387ea74a7438fabcd982a6279683eb76e92dc7f222e.scope - libcontainer container e2acb49fad51eb692c54d387ea74a7438fabcd982a6279683eb76e92dc7f222e. 
Aug 13 01:30:59.066779 containerd[1555]: time="2025-08-13T01:30:59.066336816Z" level=info msg="StartContainer for \"e2acb49fad51eb692c54d387ea74a7438fabcd982a6279683eb76e92dc7f222e\" returns successfully" Aug 13 01:30:59.068392 systemd[1]: cri-containerd-e2acb49fad51eb692c54d387ea74a7438fabcd982a6279683eb76e92dc7f222e.scope: Deactivated successfully. Aug 13 01:30:59.074806 containerd[1555]: time="2025-08-13T01:30:59.074769988Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e2acb49fad51eb692c54d387ea74a7438fabcd982a6279683eb76e92dc7f222e\" id:\"e2acb49fad51eb692c54d387ea74a7438fabcd982a6279683eb76e92dc7f222e\" pid:4850 exited_at:{seconds:1755048659 nanos:74238709}" Aug 13 01:30:59.075253 containerd[1555]: time="2025-08-13T01:30:59.075143658Z" level=info msg="received exit event container_id:\"e2acb49fad51eb692c54d387ea74a7438fabcd982a6279683eb76e92dc7f222e\" id:\"e2acb49fad51eb692c54d387ea74a7438fabcd982a6279683eb76e92dc7f222e\" pid:4850 exited_at:{seconds:1755048659 nanos:74238709}" Aug 13 01:30:59.110265 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e2acb49fad51eb692c54d387ea74a7438fabcd982a6279683eb76e92dc7f222e-rootfs.mount: Deactivated successfully. 
Aug 13 01:30:59.972473 kubelet[2708]: E0813 01:30:59.972439 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:30:59.975321 containerd[1555]: time="2025-08-13T01:30:59.975291687Z" level=info msg="CreateContainer within sandbox \"0b9479dad962c196e0de9febc44751d427d254c4b3d7f5e5acf947106697683b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 13 01:30:59.994892 containerd[1555]: time="2025-08-13T01:30:59.994853206Z" level=info msg="Container e6c2c9509889295412c1a40c48992cb36d79a9d12a8839fa6ccdcdeedfc34a0e: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:31:00.001246 containerd[1555]: time="2025-08-13T01:31:00.001213034Z" level=info msg="CreateContainer within sandbox \"0b9479dad962c196e0de9febc44751d427d254c4b3d7f5e5acf947106697683b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e6c2c9509889295412c1a40c48992cb36d79a9d12a8839fa6ccdcdeedfc34a0e\"" Aug 13 01:31:00.001775 containerd[1555]: time="2025-08-13T01:31:00.001732852Z" level=info msg="StartContainer for \"e6c2c9509889295412c1a40c48992cb36d79a9d12a8839fa6ccdcdeedfc34a0e\"" Aug 13 01:31:00.002502 containerd[1555]: time="2025-08-13T01:31:00.002481581Z" level=info msg="connecting to shim e6c2c9509889295412c1a40c48992cb36d79a9d12a8839fa6ccdcdeedfc34a0e" address="unix:///run/containerd/s/abf8ef07c161141cdd57a9f13a6cd2e6127251d6d4b1e4002b6a31f13a3a09e9" protocol=ttrpc version=3 Aug 13 01:31:00.027774 systemd[1]: Started cri-containerd-e6c2c9509889295412c1a40c48992cb36d79a9d12a8839fa6ccdcdeedfc34a0e.scope - libcontainer container e6c2c9509889295412c1a40c48992cb36d79a9d12a8839fa6ccdcdeedfc34a0e. Aug 13 01:31:00.053413 systemd[1]: cri-containerd-e6c2c9509889295412c1a40c48992cb36d79a9d12a8839fa6ccdcdeedfc34a0e.scope: Deactivated successfully. 
Aug 13 01:31:00.055250 containerd[1555]: time="2025-08-13T01:31:00.055194584Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e6c2c9509889295412c1a40c48992cb36d79a9d12a8839fa6ccdcdeedfc34a0e\" id:\"e6c2c9509889295412c1a40c48992cb36d79a9d12a8839fa6ccdcdeedfc34a0e\" pid:4892 exited_at:{seconds:1755048660 nanos:54908394}" Aug 13 01:31:00.055608 containerd[1555]: time="2025-08-13T01:31:00.055497582Z" level=info msg="received exit event container_id:\"e6c2c9509889295412c1a40c48992cb36d79a9d12a8839fa6ccdcdeedfc34a0e\" id:\"e6c2c9509889295412c1a40c48992cb36d79a9d12a8839fa6ccdcdeedfc34a0e\" pid:4892 exited_at:{seconds:1755048660 nanos:54908394}" Aug 13 01:31:00.056201 containerd[1555]: time="2025-08-13T01:31:00.056171432Z" level=info msg="StartContainer for \"e6c2c9509889295412c1a40c48992cb36d79a9d12a8839fa6ccdcdeedfc34a0e\" returns successfully" Aug 13 01:31:00.076519 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e6c2c9509889295412c1a40c48992cb36d79a9d12a8839fa6ccdcdeedfc34a0e-rootfs.mount: Deactivated successfully. 
Aug 13 01:31:00.254776 kubelet[2708]: E0813 01:31:00.253880 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:31:00.977357 kubelet[2708]: E0813 01:31:00.976947 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:31:00.980359 containerd[1555]: time="2025-08-13T01:31:00.980284720Z" level=info msg="CreateContainer within sandbox \"0b9479dad962c196e0de9febc44751d427d254c4b3d7f5e5acf947106697683b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 13 01:31:00.995368 containerd[1555]: time="2025-08-13T01:31:00.993706672Z" level=info msg="Container 0e4d2f5f19a51c44608428199f142022074748745ad21f06dafec6b5d66ebe8c: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:31:00.999522 containerd[1555]: time="2025-08-13T01:31:00.999488460Z" level=info msg="CreateContainer within sandbox \"0b9479dad962c196e0de9febc44751d427d254c4b3d7f5e5acf947106697683b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0e4d2f5f19a51c44608428199f142022074748745ad21f06dafec6b5d66ebe8c\"" Aug 13 01:31:01.000030 containerd[1555]: time="2025-08-13T01:31:00.999920119Z" level=info msg="StartContainer for \"0e4d2f5f19a51c44608428199f142022074748745ad21f06dafec6b5d66ebe8c\"" Aug 13 01:31:01.000828 containerd[1555]: time="2025-08-13T01:31:01.000792478Z" level=info msg="connecting to shim 0e4d2f5f19a51c44608428199f142022074748745ad21f06dafec6b5d66ebe8c" address="unix:///run/containerd/s/abf8ef07c161141cdd57a9f13a6cd2e6127251d6d4b1e4002b6a31f13a3a09e9" protocol=ttrpc version=3 Aug 13 01:31:01.027753 systemd[1]: Started cri-containerd-0e4d2f5f19a51c44608428199f142022074748745ad21f06dafec6b5d66ebe8c.scope - libcontainer container 
0e4d2f5f19a51c44608428199f142022074748745ad21f06dafec6b5d66ebe8c. Aug 13 01:31:01.058156 containerd[1555]: time="2025-08-13T01:31:01.058110321Z" level=info msg="StartContainer for \"0e4d2f5f19a51c44608428199f142022074748745ad21f06dafec6b5d66ebe8c\" returns successfully" Aug 13 01:31:01.144066 containerd[1555]: time="2025-08-13T01:31:01.144015536Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0e4d2f5f19a51c44608428199f142022074748745ad21f06dafec6b5d66ebe8c\" id:\"f56e6ff6d2839d4126e0f51a9b7fb07434791f8c13b4de893e31eece158b4457\" pid:4958 exited_at:{seconds:1755048661 nanos:143574147}" Aug 13 01:31:01.491675 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Aug 13 01:31:01.982172 kubelet[2708]: E0813 01:31:01.982032 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:31:02.732821 containerd[1555]: time="2025-08-13T01:31:02.732776254Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0e4d2f5f19a51c44608428199f142022074748745ad21f06dafec6b5d66ebe8c\" id:\"3a54d5f807e4a8f4621f24aff4b32d668b670da5f58b6619c1022dc8c4a3289e\" pid:5036 exit_status:1 exited_at:{seconds:1755048662 nanos:731707167}" Aug 13 01:31:03.562553 kubelet[2708]: E0813 01:31:03.562166 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:31:04.134573 systemd-networkd[1464]: lxc_health: Link UP Aug 13 01:31:04.150245 systemd-networkd[1464]: lxc_health: Gained carrier Aug 13 01:31:04.853067 containerd[1555]: time="2025-08-13T01:31:04.853008455Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0e4d2f5f19a51c44608428199f142022074748745ad21f06dafec6b5d66ebe8c\" 
id:\"9a9e33404870289179d63a34e166898f3c20731acaf468c03c861217ef8d33ff\" pid:5460 exited_at:{seconds:1755048664 nanos:851776248}" Aug 13 01:31:05.563243 kubelet[2708]: E0813 01:31:05.562355 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:31:05.582201 kubelet[2708]: I0813 01:31:05.581795 2708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qg9k8" podStartSLOduration=8.581784184 podStartE2EDuration="8.581784184s" podCreationTimestamp="2025-08-13 01:30:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:31:01.99600585 +0000 UTC m=+343.828816367" watchObservedRunningTime="2025-08-13 01:31:05.581784184 +0000 UTC m=+347.414594681" Aug 13 01:31:05.846823 systemd-networkd[1464]: lxc_health: Gained IPv6LL Aug 13 01:31:05.990902 kubelet[2708]: E0813 01:31:05.989815 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:31:06.965375 containerd[1555]: time="2025-08-13T01:31:06.965281270Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0e4d2f5f19a51c44608428199f142022074748745ad21f06dafec6b5d66ebe8c\" id:\"8d7db779fc101800bbe12fcc31e14c35bbe68bdd380359345c1bb9cdb6107f49\" pid:5493 exited_at:{seconds:1755048666 nanos:964764351}" Aug 13 01:31:06.991234 kubelet[2708]: E0813 01:31:06.991199 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:31:09.071269 containerd[1555]: time="2025-08-13T01:31:09.071168645Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"0e4d2f5f19a51c44608428199f142022074748745ad21f06dafec6b5d66ebe8c\" id:\"85e1dd8a045caf0b7b9099a1e581c8d70f2625adb15ac407f563943f11210a9c\" pid:5522 exited_at:{seconds:1755048669 nanos:70418936}" Aug 13 01:31:11.167613 containerd[1555]: time="2025-08-13T01:31:11.167567275Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0e4d2f5f19a51c44608428199f142022074748745ad21f06dafec6b5d66ebe8c\" id:\"d801b55421c7c2eacd1d795698638959d11f5b3ca3e349b11b3b5e156cf315af\" pid:5546 exited_at:{seconds:1755048671 nanos:166790436}" Aug 13 01:31:13.272435 containerd[1555]: time="2025-08-13T01:31:13.272227162Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0e4d2f5f19a51c44608428199f142022074748745ad21f06dafec6b5d66ebe8c\" id:\"dec489d1d7a81ed1defae4bc22a82b85f0bf09fcc60f991a807660acc4cd1fa9\" pid:5569 exited_at:{seconds:1755048673 nanos:271125444}" Aug 13 01:31:13.328265 sshd[4831]: Connection closed by 147.75.109.163 port 42496 Aug 13 01:31:13.328931 sshd-session[4792]: pam_unix(sshd:session): session closed for user core Aug 13 01:31:13.333820 systemd[1]: sshd@56-172.233.222.13:22-147.75.109.163:42496.service: Deactivated successfully. Aug 13 01:31:13.336502 systemd[1]: session-57.scope: Deactivated successfully. Aug 13 01:31:13.337926 systemd-logind[1532]: Session 57 logged out. Waiting for processes to exit. Aug 13 01:31:13.339592 systemd-logind[1532]: Removed session 57. Aug 13 01:31:15.330549 systemd[1]: Started sshd@57-172.233.222.13:22-102.210.80.6:46147.service - OpenSSH per-connection server daemon (102.210.80.6:46147). Aug 13 01:31:17.281697 sshd[5582]: Received disconnect from 102.210.80.6 port 46147:11: Bye Bye [preauth] Aug 13 01:31:17.281697 sshd[5582]: Disconnected from authenticating user root 102.210.80.6 port 46147 [preauth] Aug 13 01:31:17.284557 systemd[1]: sshd@57-172.233.222.13:22-102.210.80.6:46147.service: Deactivated successfully. 
Aug 13 01:31:18.248986 containerd[1555]: time="2025-08-13T01:31:18.248618247Z" level=info msg="StopPodSandbox for \"75ffb84e4072c18a0bcf3019717096c7836f2f794d227653d6aa9b6d416392b4\"" Aug 13 01:31:18.248986 containerd[1555]: time="2025-08-13T01:31:18.248793186Z" level=info msg="TearDown network for sandbox \"75ffb84e4072c18a0bcf3019717096c7836f2f794d227653d6aa9b6d416392b4\" successfully" Aug 13 01:31:18.248986 containerd[1555]: time="2025-08-13T01:31:18.248805446Z" level=info msg="StopPodSandbox for \"75ffb84e4072c18a0bcf3019717096c7836f2f794d227653d6aa9b6d416392b4\" returns successfully" Aug 13 01:31:18.249632 containerd[1555]: time="2025-08-13T01:31:18.249606145Z" level=info msg="RemovePodSandbox for \"75ffb84e4072c18a0bcf3019717096c7836f2f794d227653d6aa9b6d416392b4\"" Aug 13 01:31:18.249632 containerd[1555]: time="2025-08-13T01:31:18.249629685Z" level=info msg="Forcibly stopping sandbox \"75ffb84e4072c18a0bcf3019717096c7836f2f794d227653d6aa9b6d416392b4\"" Aug 13 01:31:18.249739 containerd[1555]: time="2025-08-13T01:31:18.249710794Z" level=info msg="TearDown network for sandbox \"75ffb84e4072c18a0bcf3019717096c7836f2f794d227653d6aa9b6d416392b4\" successfully" Aug 13 01:31:18.250966 containerd[1555]: time="2025-08-13T01:31:18.250947602Z" level=info msg="Ensure that sandbox 75ffb84e4072c18a0bcf3019717096c7836f2f794d227653d6aa9b6d416392b4 in task-service has been cleanup successfully" Aug 13 01:31:18.252943 containerd[1555]: time="2025-08-13T01:31:18.252921718Z" level=info msg="RemovePodSandbox \"75ffb84e4072c18a0bcf3019717096c7836f2f794d227653d6aa9b6d416392b4\" returns successfully" Aug 13 01:31:18.253671 containerd[1555]: time="2025-08-13T01:31:18.253343937Z" level=info msg="StopPodSandbox for \"0c814b6acfae37fa420eab1cd926b5fb2c5c1944c51ecb2fc12f8f804734a6ec\"" Aug 13 01:31:18.253671 containerd[1555]: time="2025-08-13T01:31:18.253406867Z" level=info msg="TearDown network for sandbox \"0c814b6acfae37fa420eab1cd926b5fb2c5c1944c51ecb2fc12f8f804734a6ec\" 
successfully" Aug 13 01:31:18.253671 containerd[1555]: time="2025-08-13T01:31:18.253417067Z" level=info msg="StopPodSandbox for \"0c814b6acfae37fa420eab1cd926b5fb2c5c1944c51ecb2fc12f8f804734a6ec\" returns successfully" Aug 13 01:31:18.254589 containerd[1555]: time="2025-08-13T01:31:18.254490765Z" level=info msg="RemovePodSandbox for \"0c814b6acfae37fa420eab1cd926b5fb2c5c1944c51ecb2fc12f8f804734a6ec\"" Aug 13 01:31:18.254589 containerd[1555]: time="2025-08-13T01:31:18.254511795Z" level=info msg="Forcibly stopping sandbox \"0c814b6acfae37fa420eab1cd926b5fb2c5c1944c51ecb2fc12f8f804734a6ec\"" Aug 13 01:31:18.254589 containerd[1555]: time="2025-08-13T01:31:18.254563554Z" level=info msg="TearDown network for sandbox \"0c814b6acfae37fa420eab1cd926b5fb2c5c1944c51ecb2fc12f8f804734a6ec\" successfully" Aug 13 01:31:18.255930 containerd[1555]: time="2025-08-13T01:31:18.255906932Z" level=info msg="Ensure that sandbox 0c814b6acfae37fa420eab1cd926b5fb2c5c1944c51ecb2fc12f8f804734a6ec in task-service has been cleanup successfully" Aug 13 01:31:18.257582 containerd[1555]: time="2025-08-13T01:31:18.257561359Z" level=info msg="RemovePodSandbox \"0c814b6acfae37fa420eab1cd926b5fb2c5c1944c51ecb2fc12f8f804734a6ec\" returns successfully" Aug 13 01:31:20.254144 kubelet[2708]: E0813 01:31:20.253062 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"