Aug 13 00:35:04.190211 kernel: Linux version 6.12.40-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 21:42:48 -00 2025
Aug 13 00:35:04.190267 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21
Aug 13 00:35:04.190282 kernel: BIOS-provided physical RAM map:
Aug 13 00:35:04.190298 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Aug 13 00:35:04.190307 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Aug 13 00:35:04.190317 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Aug 13 00:35:04.190328 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Aug 13 00:35:04.190349 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Aug 13 00:35:04.190359 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Aug 13 00:35:04.190369 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Aug 13 00:35:04.190380 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Aug 13 00:35:04.190389 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Aug 13 00:35:04.190403 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Aug 13 00:35:04.190413 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Aug 13 00:35:04.190425 kernel: NX (Execute Disable) protection: active
Aug 13 00:35:04.190436 kernel: APIC: Static calls initialized
Aug 13 00:35:04.190454 kernel: SMBIOS 2.8 present.
Aug 13 00:35:04.190470 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Aug 13 00:35:04.190481 kernel: DMI: Memory slots populated: 1/1
Aug 13 00:35:04.190491 kernel: Hypervisor detected: KVM
Aug 13 00:35:04.190502 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Aug 13 00:35:04.190513 kernel: kvm-clock: using sched offset of 8078127384 cycles
Aug 13 00:35:04.190525 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Aug 13 00:35:04.190535 kernel: tsc: Detected 2000.000 MHz processor
Aug 13 00:35:04.190547 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 13 00:35:04.190559 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 13 00:35:04.190570 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Aug 13 00:35:04.190585 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Aug 13 00:35:04.190597 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 13 00:35:04.190608 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Aug 13 00:35:04.190619 kernel: Using GB pages for direct mapping
Aug 13 00:35:04.190631 kernel: ACPI: Early table checksum verification disabled
Aug 13 00:35:04.190641 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Aug 13 00:35:04.190669 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:35:04.190681 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:35:04.190693 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:35:04.190709 kernel: ACPI: FACS 0x000000007FFE0000 000040
Aug 13 00:35:04.190720 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:35:04.190731 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:35:04.190743 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:35:04.190759 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:35:04.190771 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Aug 13 00:35:04.190785 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Aug 13 00:35:04.190797 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Aug 13 00:35:04.190808 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Aug 13 00:35:04.190819 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Aug 13 00:35:04.190830 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Aug 13 00:35:04.190841 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Aug 13 00:35:04.190853 kernel: No NUMA configuration found
Aug 13 00:35:04.190867 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Aug 13 00:35:04.190879 kernel: NODE_DATA(0) allocated [mem 0x17fff6dc0-0x17fffdfff]
Aug 13 00:35:04.190891 kernel: Zone ranges:
Aug 13 00:35:04.190902 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 13 00:35:04.190914 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Aug 13 00:35:04.190926 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Aug 13 00:35:04.190938 kernel: Device empty
Aug 13 00:35:04.190949 kernel: Movable zone start for each node
Aug 13 00:35:04.190960 kernel: Early memory node ranges
Aug 13 00:35:04.190972 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Aug 13 00:35:04.190988 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Aug 13 00:35:04.190999 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Aug 13 00:35:04.191019 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Aug 13 00:35:04.191031 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 00:35:04.191048 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Aug 13 00:35:04.191060 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Aug 13 00:35:04.191072 kernel: ACPI: PM-Timer IO Port: 0x608
Aug 13 00:35:04.191091 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Aug 13 00:35:04.191103 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Aug 13 00:35:04.191121 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Aug 13 00:35:04.191132 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Aug 13 00:35:04.191144 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 13 00:35:04.191155 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Aug 13 00:35:04.191166 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Aug 13 00:35:04.191178 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 13 00:35:04.191189 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Aug 13 00:35:04.191198 kernel: TSC deadline timer available
Aug 13 00:35:04.191240 kernel: CPU topo: Max. logical packages: 1
Aug 13 00:35:04.191254 kernel: CPU topo: Max. logical dies: 1
Aug 13 00:35:04.191264 kernel: CPU topo: Max. dies per package: 1
Aug 13 00:35:04.191274 kernel: CPU topo: Max. threads per core: 1
Aug 13 00:35:04.191284 kernel: CPU topo: Num. cores per package: 2
Aug 13 00:35:04.191294 kernel: CPU topo: Num. threads per package: 2
Aug 13 00:35:04.191304 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Aug 13 00:35:04.191314 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Aug 13 00:35:04.191325 kernel: kvm-guest: KVM setup pv remote TLB flush
Aug 13 00:35:04.191337 kernel: kvm-guest: setup PV sched yield
Aug 13 00:35:04.191352 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Aug 13 00:35:04.191363 kernel: Booting paravirtualized kernel on KVM
Aug 13 00:35:04.191375 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 13 00:35:04.191387 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Aug 13 00:35:04.191399 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Aug 13 00:35:04.191410 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Aug 13 00:35:04.191422 kernel: pcpu-alloc: [0] 0 1
Aug 13 00:35:04.191433 kernel: kvm-guest: PV spinlocks enabled
Aug 13 00:35:04.191445 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Aug 13 00:35:04.191462 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21
Aug 13 00:35:04.191474 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 00:35:04.191484 kernel: random: crng init done
Aug 13 00:35:04.191495 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 13 00:35:04.191506 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 13 00:35:04.191516 kernel: Fallback order for Node 0: 0
Aug 13 00:35:04.191526 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Aug 13 00:35:04.191536 kernel: Policy zone: Normal
Aug 13 00:35:04.191550 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 00:35:04.191561 kernel: software IO TLB: area num 2.
Aug 13 00:35:04.191572 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Aug 13 00:35:04.191583 kernel: ftrace: allocating 40098 entries in 157 pages
Aug 13 00:35:04.191595 kernel: ftrace: allocated 157 pages with 5 groups
Aug 13 00:35:04.191607 kernel: Dynamic Preempt: voluntary
Aug 13 00:35:04.191628 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 00:35:04.191642 kernel: rcu: RCU event tracing is enabled.
Aug 13 00:35:04.191675 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Aug 13 00:35:04.191694 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 00:35:04.191706 kernel: Rude variant of Tasks RCU enabled.
Aug 13 00:35:04.191718 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 00:35:04.191729 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 00:35:04.191739 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Aug 13 00:35:04.191750 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 00:35:04.191776 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 00:35:04.191792 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 00:35:04.191804 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Aug 13 00:35:04.191817 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 13 00:35:04.191839 kernel: Console: colour VGA+ 80x25
Aug 13 00:35:04.191852 kernel: printk: legacy console [tty0] enabled
Aug 13 00:35:04.191868 kernel: printk: legacy console [ttyS0] enabled
Aug 13 00:35:04.191880 kernel: ACPI: Core revision 20240827
Aug 13 00:35:04.191893 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Aug 13 00:35:04.191905 kernel: APIC: Switch to symmetric I/O mode setup
Aug 13 00:35:04.191917 kernel: x2apic enabled
Aug 13 00:35:04.191934 kernel: APIC: Switched APIC routing to: physical x2apic
Aug 13 00:35:04.191946 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Aug 13 00:35:04.191958 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Aug 13 00:35:04.191970 kernel: kvm-guest: setup PV IPIs
Aug 13 00:35:04.191982 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Aug 13 00:35:04.191995 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Aug 13 00:35:04.192008 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000)
Aug 13 00:35:04.192020 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Aug 13 00:35:04.192033 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Aug 13 00:35:04.192052 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Aug 13 00:35:04.192064 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 13 00:35:04.192077 kernel: Spectre V2 : Mitigation: Retpolines
Aug 13 00:35:04.192090 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Aug 13 00:35:04.192101 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Aug 13 00:35:04.192113 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Aug 13 00:35:04.192125 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Aug 13 00:35:04.192138 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Aug 13 00:35:04.192157 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Aug 13 00:35:04.192170 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Aug 13 00:35:04.192183 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Aug 13 00:35:04.192195 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Aug 13 00:35:04.192208 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 13 00:35:04.192220 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 13 00:35:04.192233 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 13 00:35:04.192246 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Aug 13 00:35:04.192260 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 13 00:35:04.192277 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Aug 13 00:35:04.192298 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Aug 13 00:35:04.192310 kernel: Freeing SMP alternatives memory: 32K
Aug 13 00:35:04.192323 kernel: pid_max: default: 32768 minimum: 301
Aug 13 00:35:04.192335 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Aug 13 00:35:04.192347 kernel: landlock: Up and running.
Aug 13 00:35:04.192359 kernel: SELinux: Initializing.
Aug 13 00:35:04.192371 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 00:35:04.192383 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 00:35:04.192401 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Aug 13 00:35:04.192413 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Aug 13 00:35:04.192425 kernel: ... version: 0
Aug 13 00:35:04.192437 kernel: ... bit width: 48
Aug 13 00:35:04.192449 kernel: ... generic registers: 6
Aug 13 00:35:04.192461 kernel: ... value mask: 0000ffffffffffff
Aug 13 00:35:04.192474 kernel: ... max period: 00007fffffffffff
Aug 13 00:35:04.192486 kernel: ... fixed-purpose events: 0
Aug 13 00:35:04.192498 kernel: ... event mask: 000000000000003f
Aug 13 00:35:04.192516 kernel: signal: max sigframe size: 3376
Aug 13 00:35:04.192529 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 00:35:04.192541 kernel: rcu: Max phase no-delay instances is 400.
Aug 13 00:35:04.192554 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Aug 13 00:35:04.192567 kernel: smp: Bringing up secondary CPUs ...
Aug 13 00:35:04.192580 kernel: smpboot: x86: Booting SMP configuration:
Aug 13 00:35:04.192592 kernel: .... node #0, CPUs: #1
Aug 13 00:35:04.192614 kernel: smp: Brought up 1 node, 2 CPUs
Aug 13 00:35:04.192627 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS)
Aug 13 00:35:04.192646 kernel: Memory: 3961048K/4193772K available (14336K kernel code, 2430K rwdata, 9960K rodata, 54444K init, 2524K bss, 227296K reserved, 0K cma-reserved)
Aug 13 00:35:04.192682 kernel: devtmpfs: initialized
Aug 13 00:35:04.192695 kernel: x86/mm: Memory block size: 128MB
Aug 13 00:35:04.192709 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 00:35:04.192721 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Aug 13 00:35:04.192733 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 00:35:04.192745 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 00:35:04.192758 kernel: audit: initializing netlink subsys (disabled)
Aug 13 00:35:04.192771 kernel: audit: type=2000 audit(1755045299.885:1): state=initialized audit_enabled=0 res=1
Aug 13 00:35:04.192790 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 00:35:04.192804 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 13 00:35:04.192817 kernel: cpuidle: using governor menu
Aug 13 00:35:04.192830 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 00:35:04.192841 kernel: dca service started, version 1.12.1
Aug 13 00:35:04.192854 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Aug 13 00:35:04.192867 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Aug 13 00:35:04.192879 kernel: PCI: Using configuration type 1 for base access
Aug 13 00:35:04.192892 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 13 00:35:04.192911 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 00:35:04.192923 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Aug 13 00:35:04.192946 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 00:35:04.192960 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Aug 13 00:35:04.192972 kernel: ACPI: Added _OSI(Module Device)
Aug 13 00:35:04.192985 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 00:35:04.192997 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 00:35:04.193010 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 13 00:35:04.193022 kernel: ACPI: Interpreter enabled
Aug 13 00:35:04.193040 kernel: ACPI: PM: (supports S0 S3 S5)
Aug 13 00:35:04.193053 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 13 00:35:04.193066 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 13 00:35:04.193079 kernel: PCI: Using E820 reservations for host bridge windows
Aug 13 00:35:04.193091 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Aug 13 00:35:04.193104 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 13 00:35:04.193537 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 00:35:04.194801 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Aug 13 00:35:04.195014 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Aug 13 00:35:04.195033 kernel: PCI host bridge to bus 0000:00
Aug 13 00:35:04.195274 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 13 00:35:04.195445 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 13 00:35:04.195616 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 13 00:35:04.195808 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Aug 13 00:35:04.195984 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Aug 13 00:35:04.196154 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Aug 13 00:35:04.196325 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 13 00:35:04.196617 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Aug 13 00:35:04.199086 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Aug 13 00:35:04.199287 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Aug 13 00:35:04.199470 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Aug 13 00:35:04.199679 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Aug 13 00:35:04.199886 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Aug 13 00:35:04.200101 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Aug 13 00:35:04.200301 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f]
Aug 13 00:35:04.200503 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Aug 13 00:35:04.203751 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Aug 13 00:35:04.203976 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Aug 13 00:35:04.204187 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Aug 13 00:35:04.204379 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Aug 13 00:35:04.204574 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Aug 13 00:35:04.204773 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Aug 13 00:35:04.204996 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Aug 13 00:35:04.205188 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Aug 13 00:35:04.205443 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Aug 13 00:35:04.205643 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df]
Aug 13 00:35:04.207121 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff]
Aug 13 00:35:04.207316 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Aug 13 00:35:04.207492 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Aug 13 00:35:04.207509 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 13 00:35:04.207531 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 13 00:35:04.207550 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 13 00:35:04.207562 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 13 00:35:04.207573 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Aug 13 00:35:04.207585 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Aug 13 00:35:04.207597 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Aug 13 00:35:04.207610 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Aug 13 00:35:04.207622 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Aug 13 00:35:04.207634 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Aug 13 00:35:04.207646 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Aug 13 00:35:04.207698 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Aug 13 00:35:04.207711 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Aug 13 00:35:04.207724 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Aug 13 00:35:04.207736 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Aug 13 00:35:04.207749 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Aug 13 00:35:04.207761 kernel: iommu: Default domain type: Translated
Aug 13 00:35:04.207774 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 13 00:35:04.207787 kernel: PCI: Using ACPI for IRQ routing
Aug 13 00:35:04.207799 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 13 00:35:04.207817 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Aug 13 00:35:04.207829 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Aug 13 00:35:04.208033 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Aug 13 00:35:04.208233 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Aug 13 00:35:04.208428 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Aug 13 00:35:04.208449 kernel: vgaarb: loaded
Aug 13 00:35:04.208462 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Aug 13 00:35:04.208474 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Aug 13 00:35:04.208486 kernel: clocksource: Switched to clocksource kvm-clock
Aug 13 00:35:04.208504 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 00:35:04.208516 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 00:35:04.208528 kernel: pnp: PnP ACPI init
Aug 13 00:35:04.210891 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Aug 13 00:35:04.210918 kernel: pnp: PnP ACPI: found 5 devices
Aug 13 00:35:04.210934 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 13 00:35:04.210948 kernel: NET: Registered PF_INET protocol family
Aug 13 00:35:04.210959 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 00:35:04.210979 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 13 00:35:04.210991 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 00:35:04.211003 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 13 00:35:04.211015 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 13 00:35:04.211027 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 13 00:35:04.211039 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 00:35:04.211051 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 00:35:04.211063 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 00:35:04.211074 kernel: NET: Registered PF_XDP protocol family
Aug 13 00:35:04.211255 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 13 00:35:04.211411 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 13 00:35:04.211571 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 13 00:35:04.211748 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Aug 13 00:35:04.211919 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Aug 13 00:35:04.212093 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Aug 13 00:35:04.212112 kernel: PCI: CLS 0 bytes, default 64
Aug 13 00:35:04.212125 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Aug 13 00:35:04.212143 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Aug 13 00:35:04.212155 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Aug 13 00:35:04.212167 kernel: Initialise system trusted keyrings
Aug 13 00:35:04.212179 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 13 00:35:04.212191 kernel: Key type asymmetric registered
Aug 13 00:35:04.212202 kernel: Asymmetric key parser 'x509' registered
Aug 13 00:35:04.212214 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Aug 13 00:35:04.212226 kernel: io scheduler mq-deadline registered
Aug 13 00:35:04.212238 kernel: io scheduler kyber registered
Aug 13 00:35:04.212254 kernel: io scheduler bfq registered
Aug 13 00:35:04.212266 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 13 00:35:04.212279 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Aug 13 00:35:04.212292 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Aug 13 00:35:04.212304 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 00:35:04.212316 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 13 00:35:04.212328 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 13 00:35:04.212340 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 13 00:35:04.212353 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 13 00:35:04.212592 kernel: rtc_cmos 00:03: RTC can wake from S4
Aug 13 00:35:04.212614 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Aug 13 00:35:04.215358 kernel: rtc_cmos 00:03: registered as rtc0
Aug 13 00:35:04.215526 kernel: rtc_cmos 00:03: setting system clock to 2025-08-13T00:35:03 UTC (1755045303)
Aug 13 00:35:04.215723 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Aug 13 00:35:04.215741 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Aug 13 00:35:04.215753 kernel: NET: Registered PF_INET6 protocol family
Aug 13 00:35:04.215766 kernel: Segment Routing with IPv6
Aug 13 00:35:04.215783 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 00:35:04.215796 kernel: NET: Registered PF_PACKET protocol family
Aug 13 00:35:04.215807 kernel: Key type dns_resolver registered
Aug 13 00:35:04.215819 kernel: IPI shorthand broadcast: enabled
Aug 13 00:35:04.215832 kernel: sched_clock: Marking stable (5021004236, 224652022)->(5355164651, -109508393)
Aug 13 00:35:04.215844 kernel: registered taskstats version 1
Aug 13 00:35:04.215856 kernel: Loading compiled-in X.509 certificates
Aug 13 00:35:04.215868 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.40-flatcar: dee0b464d3f7f8d09744a2392f69dde258bc95c0'
Aug 13 00:35:04.215880 kernel: Demotion targets for Node 0: null
Aug 13 00:35:04.215898 kernel: Key type .fscrypt registered
Aug 13 00:35:04.215910 kernel: Key type fscrypt-provisioning registered
Aug 13 00:35:04.215921 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 13 00:35:04.215933 kernel: ima: Allocated hash algorithm: sha1
Aug 13 00:35:04.215945 kernel: ima: No architecture policies found
Aug 13 00:35:04.215957 kernel: clk: Disabling unused clocks
Aug 13 00:35:04.215969 kernel: Warning: unable to open an initial console.
Aug 13 00:35:04.215981 kernel: Freeing unused kernel image (initmem) memory: 54444K
Aug 13 00:35:04.215997 kernel: Write protecting the kernel read-only data: 24576k
Aug 13 00:35:04.216009 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K
Aug 13 00:35:04.216022 kernel: Run /init as init process
Aug 13 00:35:04.216033 kernel: with arguments:
Aug 13 00:35:04.216045 kernel: /init
Aug 13 00:35:04.216056 kernel: with environment:
Aug 13 00:35:04.216068 kernel: HOME=/
Aug 13 00:35:04.216101 kernel: TERM=linux
Aug 13 00:35:04.216116 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 00:35:04.216130 systemd[1]: Successfully made /usr/ read-only.
Aug 13 00:35:04.216150 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 13 00:35:04.216164 systemd[1]: Detected virtualization kvm.
Aug 13 00:35:04.216176 systemd[1]: Detected architecture x86-64.
Aug 13 00:35:04.216189 systemd[1]: Running in initrd.
Aug 13 00:35:04.216201 systemd[1]: No hostname configured, using default hostname.
Aug 13 00:35:04.216214 systemd[1]: Hostname set to .
Aug 13 00:35:04.216229 systemd[1]: Initializing machine ID from random generator.
Aug 13 00:35:04.216242 systemd[1]: Queued start job for default target initrd.target.
Aug 13 00:35:04.216255 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 00:35:04.216268 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 00:35:04.216281 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 13 00:35:04.216294 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 00:35:04.216308 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 13 00:35:04.216321 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 13 00:35:04.216339 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 13 00:35:04.216351 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 13 00:35:04.216364 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 00:35:04.216377 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 00:35:04.216390 systemd[1]: Reached target paths.target - Path Units.
Aug 13 00:35:04.216403 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 00:35:04.216416 systemd[1]: Reached target swap.target - Swaps.
Aug 13 00:35:04.216428 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 00:35:04.216445 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 00:35:04.216457 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 00:35:04.216470 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 13 00:35:04.216482 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Aug 13 00:35:04.216495 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 00:35:04.216508 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 00:35:04.216521 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 00:35:04.216534 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 00:35:04.216550 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 13 00:35:04.216562 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 00:35:04.216575 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 13 00:35:04.216588 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Aug 13 00:35:04.216601 systemd[1]: Starting systemd-fsck-usr.service... Aug 13 00:35:04.216619 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 00:35:04.216633 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 00:35:04.216647 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:35:04.218716 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 13 00:35:04.218733 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 00:35:04.218753 systemd[1]: Finished systemd-fsck-usr.service. Aug 13 00:35:04.218808 systemd-journald[206]: Collecting audit messages is disabled. 
Aug 13 00:35:04.218849 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 00:35:04.218864 systemd-journald[206]: Journal started Aug 13 00:35:04.218898 systemd-journald[206]: Runtime Journal (/run/log/journal/fa10715ddeba4670b56d30cb2dd863c5) is 8M, max 78.5M, 70.5M free. Aug 13 00:35:04.209303 systemd-modules-load[207]: Inserted module 'overlay' Aug 13 00:35:04.279114 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 00:35:04.279146 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 13 00:35:04.279163 kernel: Bridge firewalling registered Aug 13 00:35:04.266258 systemd-modules-load[207]: Inserted module 'br_netfilter' Aug 13 00:35:04.279918 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 00:35:04.280912 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:35:04.281942 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 00:35:04.286229 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 00:35:04.289799 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 00:35:04.292816 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 00:35:04.302851 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 00:35:04.315218 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:35:04.332968 systemd-tmpfiles[222]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Aug 13 00:35:04.333195 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Aug 13 00:35:04.336026 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 00:35:04.340593 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 13 00:35:04.342554 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:35:04.347807 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 00:35:04.368441 dracut-cmdline[242]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21 Aug 13 00:35:04.401287 systemd-resolved[244]: Positive Trust Anchors: Aug 13 00:35:04.402048 systemd-resolved[244]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:35:04.402077 systemd-resolved[244]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 00:35:04.407718 systemd-resolved[244]: Defaulting to hostname 'linux'. Aug 13 00:35:04.409465 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 00:35:04.410943 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Aug 13 00:35:04.488721 kernel: SCSI subsystem initialized Aug 13 00:35:04.497742 kernel: Loading iSCSI transport class v2.0-870. Aug 13 00:35:04.509689 kernel: iscsi: registered transport (tcp) Aug 13 00:35:04.530718 kernel: iscsi: registered transport (qla4xxx) Aug 13 00:35:04.530830 kernel: QLogic iSCSI HBA Driver Aug 13 00:35:04.587864 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 13 00:35:04.624569 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 00:35:04.629132 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 00:35:04.711478 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 13 00:35:04.714567 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 13 00:35:04.780871 kernel: raid6: avx2x4 gen() 32364 MB/s Aug 13 00:35:04.798695 kernel: raid6: avx2x2 gen() 30626 MB/s Aug 13 00:35:04.817192 kernel: raid6: avx2x1 gen() 20165 MB/s Aug 13 00:35:04.817218 kernel: raid6: using algorithm avx2x4 gen() 32364 MB/s Aug 13 00:35:04.836313 kernel: raid6: .... xor() 4437 MB/s, rmw enabled Aug 13 00:35:04.836385 kernel: raid6: using avx2x2 recovery algorithm Aug 13 00:35:04.871704 kernel: xor: automatically using best checksumming function avx Aug 13 00:35:05.080735 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 13 00:35:05.093686 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 13 00:35:05.097268 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 00:35:05.137070 systemd-udevd[453]: Using default interface naming scheme 'v255'. Aug 13 00:35:05.148592 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:35:05.153206 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Aug 13 00:35:05.192199 dracut-pre-trigger[459]: rd.md=0: removing MD RAID activation Aug 13 00:35:05.239307 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 00:35:05.243109 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 00:35:05.335724 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 00:35:05.342412 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 13 00:35:05.446780 kernel: cryptd: max_cpu_qlen set to 1000 Aug 13 00:35:05.449998 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues Aug 13 00:35:05.467724 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Aug 13 00:35:05.473716 kernel: scsi host0: Virtio SCSI HBA Aug 13 00:35:05.475765 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:35:05.476021 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:35:05.500255 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:35:05.506143 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:35:05.507384 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Aug 13 00:35:05.516706 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Aug 13 00:35:05.519732 kernel: libata version 3.00 loaded. 
Aug 13 00:35:05.525678 kernel: AES CTR mode by8 optimization enabled Aug 13 00:35:05.972723 kernel: sd 0:0:0:0: Power-on or device reset occurred Aug 13 00:35:05.973550 kernel: sd 0:0:0:0: [sda] 9297920 512-byte logical blocks: (4.76 GB/4.43 GiB) Aug 13 00:35:05.973855 kernel: sd 0:0:0:0: [sda] Write Protect is off Aug 13 00:35:05.974116 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Aug 13 00:35:05.974365 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Aug 13 00:35:05.979737 kernel: ahci 0000:00:1f.2: version 3.0 Aug 13 00:35:05.981185 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Aug 13 00:35:05.983244 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 13 00:35:05.983278 kernel: GPT:9289727 != 9297919 Aug 13 00:35:05.983296 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 13 00:35:05.983342 kernel: GPT:9289727 != 9297919 Aug 13 00:35:05.983359 kernel: GPT: Use GNU Parted to correct GPT errors. 
Aug 13 00:35:05.983391 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:35:05.983407 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Aug 13 00:35:05.983683 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Aug 13 00:35:05.983935 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Aug 13 00:35:05.985736 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Aug 13 00:35:05.996784 kernel: scsi host1: ahci Aug 13 00:35:06.000778 kernel: scsi host2: ahci Aug 13 00:35:06.001672 kernel: scsi host3: ahci Aug 13 00:35:06.002678 kernel: scsi host4: ahci Aug 13 00:35:06.035478 kernel: scsi host5: ahci Aug 13 00:35:06.036726 kernel: scsi host6: ahci Aug 13 00:35:06.037069 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 lpm-pol 0 Aug 13 00:35:06.037090 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 lpm-pol 0 Aug 13 00:35:06.037107 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 lpm-pol 0 Aug 13 00:35:06.037140 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 lpm-pol 0 Aug 13 00:35:06.037166 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 lpm-pol 0 Aug 13 00:35:06.037183 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 lpm-pol 0 Aug 13 00:35:06.130103 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Aug 13 00:35:06.211117 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Aug 13 00:35:06.212218 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Aug 13 00:35:06.214197 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:35:06.230220 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. 
Aug 13 00:35:06.253974 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Aug 13 00:35:06.266750 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 13 00:35:06.307818 disk-uuid[625]: Primary Header is updated. Aug 13 00:35:06.307818 disk-uuid[625]: Secondary Entries is updated. Aug 13 00:35:06.307818 disk-uuid[625]: Secondary Header is updated. Aug 13 00:35:06.319700 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:35:06.334714 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:35:06.343737 kernel: ata1: SATA link down (SStatus 0 SControl 300) Aug 13 00:35:06.356685 kernel: ata2: SATA link down (SStatus 0 SControl 300) Aug 13 00:35:06.356987 kernel: ata3: SATA link down (SStatus 0 SControl 300) Aug 13 00:35:06.357058 kernel: ata5: SATA link down (SStatus 0 SControl 300) Aug 13 00:35:06.357130 kernel: ata6: SATA link down (SStatus 0 SControl 300) Aug 13 00:35:06.359761 kernel: ata4: SATA link down (SStatus 0 SControl 300) Aug 13 00:35:06.569417 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 13 00:35:06.597443 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 00:35:06.598235 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 00:35:06.599694 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 00:35:06.602548 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 13 00:35:06.637130 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 13 00:35:07.339752 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:35:07.340695 disk-uuid[626]: The operation has completed successfully. Aug 13 00:35:07.426508 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 00:35:07.426759 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Aug 13 00:35:07.478521 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 13 00:35:07.513136 sh[654]: Success Aug 13 00:35:07.542745 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 13 00:35:07.542837 kernel: device-mapper: uevent: version 1.0.3 Aug 13 00:35:07.548712 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Aug 13 00:35:07.561728 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Aug 13 00:35:07.624205 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 13 00:35:07.629773 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 13 00:35:07.641334 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Aug 13 00:35:07.655164 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Aug 13 00:35:07.655204 kernel: BTRFS: device fsid 0c0338fb-9434-41c1-99a2-737cbe2351c4 devid 1 transid 44 /dev/mapper/usr (254:0) scanned by mount (666) Aug 13 00:35:07.660912 kernel: BTRFS info (device dm-0): first mount of filesystem 0c0338fb-9434-41c1-99a2-737cbe2351c4 Aug 13 00:35:07.660960 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:35:07.662692 kernel: BTRFS info (device dm-0): using free-space-tree Aug 13 00:35:07.672986 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 13 00:35:07.674129 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Aug 13 00:35:07.674961 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 13 00:35:07.676039 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 13 00:35:07.679798 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Aug 13 00:35:07.719695 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (699) Aug 13 00:35:07.722676 kernel: BTRFS info (device sda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 00:35:07.726026 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:35:07.726047 kernel: BTRFS info (device sda6): using free-space-tree Aug 13 00:35:07.737741 kernel: BTRFS info (device sda6): last unmount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 00:35:07.739202 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 13 00:35:07.742861 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 13 00:35:07.949351 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 00:35:07.954796 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 00:35:08.236309 systemd-networkd[835]: lo: Link UP Aug 13 00:35:08.236324 systemd-networkd[835]: lo: Gained carrier Aug 13 00:35:08.239030 systemd-networkd[835]: Enumeration completed Aug 13 00:35:08.239206 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 00:35:08.242090 systemd[1]: Reached target network.target - Network. Aug 13 00:35:08.242543 systemd-networkd[835]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:35:08.242550 systemd-networkd[835]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:35:08.260992 systemd-networkd[835]: eth0: Link UP Aug 13 00:35:08.261385 systemd-networkd[835]: eth0: Gained carrier Aug 13 00:35:08.261402 systemd-networkd[835]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Aug 13 00:35:08.299831 ignition[757]: Ignition 2.21.0 Aug 13 00:35:08.299908 ignition[757]: Stage: fetch-offline Aug 13 00:35:08.299995 ignition[757]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:35:08.300015 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 00:35:08.300278 ignition[757]: parsed url from cmdline: "" Aug 13 00:35:08.300285 ignition[757]: no config URL provided Aug 13 00:35:08.300294 ignition[757]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 00:35:08.300308 ignition[757]: no config at "/usr/lib/ignition/user.ign" Aug 13 00:35:08.300317 ignition[757]: failed to fetch config: resource requires networking Aug 13 00:35:08.307156 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 00:35:08.301305 ignition[757]: Ignition finished successfully Aug 13 00:35:08.311713 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Aug 13 00:35:08.467321 ignition[843]: Ignition 2.21.0 Aug 13 00:35:08.467355 ignition[843]: Stage: fetch Aug 13 00:35:08.467573 ignition[843]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:35:08.467587 ignition[843]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 00:35:08.467762 ignition[843]: parsed url from cmdline: "" Aug 13 00:35:08.467766 ignition[843]: no config URL provided Aug 13 00:35:08.467773 ignition[843]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 00:35:08.467785 ignition[843]: no config at "/usr/lib/ignition/user.ign" Aug 13 00:35:08.467838 ignition[843]: PUT http://169.254.169.254/v1/token: attempt #1 Aug 13 00:35:08.468281 ignition[843]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Aug 13 00:35:08.669387 ignition[843]: PUT http://169.254.169.254/v1/token: attempt #2 Aug 13 00:35:08.669711 ignition[843]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Aug 13 00:35:09.070862 ignition[843]: PUT http://169.254.169.254/v1/token: attempt #3 Aug 13 00:35:09.071083 ignition[843]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Aug 13 00:35:09.464301 systemd-networkd[835]: eth0: Gained IPv6LL Aug 13 00:35:09.645840 systemd-networkd[835]: eth0: DHCPv4 address 172.237.133.249/24, gateway 172.237.133.1 acquired from 23.192.120.216 Aug 13 00:35:09.872060 ignition[843]: PUT http://169.254.169.254/v1/token: attempt #4 Aug 13 00:35:09.983499 ignition[843]: PUT result: OK Aug 13 00:35:09.983603 ignition[843]: GET http://169.254.169.254/v1/user-data: attempt #1 Aug 13 00:35:10.113086 ignition[843]: GET result: OK Aug 13 00:35:10.113315 ignition[843]: parsing config with SHA512: ad029ee6c7ffa7b06538071e81dceb86d7f9d1bf602f7c9f155d08358ada83732edc17e40c3e6a571f801ebc565960730250e577f3873e6946e3d72ebb67db87 Aug 13 00:35:10.120301 unknown[843]: fetched base config from "system" Aug 13 00:35:10.121609 unknown[843]: fetched base config from "system" Aug 13 00:35:10.122175 ignition[843]: fetch: fetch complete Aug 13 00:35:10.121626 unknown[843]: fetched user config from "akamai" Aug 13 00:35:10.122184 ignition[843]: fetch: fetch passed Aug 13 00:35:10.127764 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Aug 13 00:35:10.122259 ignition[843]: Ignition finished successfully Aug 13 00:35:10.131039 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Aug 13 00:35:10.174959 ignition[850]: Ignition 2.21.0 Aug 13 00:35:10.174980 ignition[850]: Stage: kargs Aug 13 00:35:10.175166 ignition[850]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:35:10.175184 ignition[850]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 00:35:10.176874 ignition[850]: kargs: kargs passed Aug 13 00:35:10.176943 ignition[850]: Ignition finished successfully Aug 13 00:35:10.179894 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 13 00:35:10.182798 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 13 00:35:10.286117 ignition[857]: Ignition 2.21.0 Aug 13 00:35:10.286142 ignition[857]: Stage: disks Aug 13 00:35:10.286377 ignition[857]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:35:10.286397 ignition[857]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 00:35:10.291050 ignition[857]: disks: disks passed Aug 13 00:35:10.291138 ignition[857]: Ignition finished successfully Aug 13 00:35:10.294483 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 13 00:35:10.295895 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 13 00:35:10.297121 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 13 00:35:10.299273 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 00:35:10.303587 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 00:35:10.305070 systemd[1]: Reached target basic.target - Basic System. Aug 13 00:35:10.308425 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 13 00:35:10.355386 systemd-fsck[866]: ROOT: clean, 15/553520 files, 52789/553472 blocks Aug 13 00:35:10.358482 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 13 00:35:10.362310 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 13 00:35:10.547726 kernel: EXT4-fs (sda9): mounted filesystem 069caac6-7833-4acd-8940-01a7ff7d1281 r/w with ordered data mode. Quota mode: none. Aug 13 00:35:10.548915 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 13 00:35:10.550192 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 13 00:35:10.553017 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 00:35:10.556732 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Aug 13 00:35:10.559766 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Aug 13 00:35:10.575597 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 13 00:35:10.575790 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 00:35:10.587530 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 13 00:35:10.592836 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Aug 13 00:35:10.600694 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (874) Aug 13 00:35:10.604693 kernel: BTRFS info (device sda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 00:35:10.604750 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:35:10.608638 kernel: BTRFS info (device sda6): using free-space-tree Aug 13 00:35:10.621711 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 13 00:35:10.717469 initrd-setup-root[898]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 00:35:10.736712 initrd-setup-root[905]: cut: /sysroot/etc/group: No such file or directory Aug 13 00:35:10.758813 initrd-setup-root[912]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 00:35:10.772028 initrd-setup-root[919]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 00:35:11.027412 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 13 00:35:11.041015 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 13 00:35:11.048155 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 13 00:35:11.083837 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Aug 13 00:35:11.091821 kernel: BTRFS info (device sda6): last unmount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 00:35:11.347591 ignition[987]: INFO : Ignition 2.21.0 Aug 13 00:35:11.351462 ignition[987]: INFO : Stage: mount Aug 13 00:35:11.351462 ignition[987]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:35:11.351462 ignition[987]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 00:35:11.356687 ignition[987]: INFO : mount: mount passed Aug 13 00:35:11.356687 ignition[987]: INFO : Ignition finished successfully Aug 13 00:35:11.358435 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 13 00:35:11.382773 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 13 00:35:11.434875 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 00:35:11.484756 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (997) Aug 13 00:35:11.510025 kernel: BTRFS info (device sda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 00:35:11.510143 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:35:11.510161 kernel: BTRFS info (device sda6): using free-space-tree Aug 13 00:35:11.545487 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 13 00:35:11.567142 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Aug 13 00:35:11.696072 ignition[1015]: INFO : Ignition 2.21.0 Aug 13 00:35:11.696072 ignition[1015]: INFO : Stage: files Aug 13 00:35:11.696072 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:35:11.696072 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 00:35:11.696072 ignition[1015]: DEBUG : files: compiled without relabeling support, skipping Aug 13 00:35:11.721260 ignition[1015]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 00:35:11.721260 ignition[1015]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 00:35:11.728695 ignition[1015]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 00:35:11.730773 ignition[1015]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 00:35:11.733085 unknown[1015]: wrote ssh authorized keys file for user: core Aug 13 00:35:11.736280 ignition[1015]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 00:35:11.745615 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Aug 13 00:35:11.745615 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Aug 13 00:35:12.189013 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 13 00:35:13.011713 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Aug 13 00:35:13.011713 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Aug 13 00:35:13.011713 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Aug 13 00:35:13.198717 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Aug 13 00:35:13.749144 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 13 00:35:13.749144 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Aug 13 00:35:13.752051 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Aug 13 00:35:13.752051 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 13 00:35:13.754109 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 13 00:35:13.754109 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 00:35:13.754109 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 00:35:13.754109 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 00:35:13.762560 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 00:35:13.762560 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 00:35:13.762560 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 00:35:13.762560 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 00:35:13.762560 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 00:35:13.762560 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 00:35:13.762560 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Aug 13 00:35:14.007490 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Aug 13 00:35:15.381429 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 00:35:15.381429 ignition[1015]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Aug 13 00:35:15.386860 ignition[1015]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 00:35:15.386860 ignition[1015]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 00:35:15.386860 ignition[1015]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Aug 13 00:35:15.386860 ignition[1015]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Aug 13 00:35:15.386860 ignition[1015]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Aug 13 00:35:15.386860 ignition[1015]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Aug 13 00:35:15.386860 ignition[1015]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Aug 13 00:35:15.386860 ignition[1015]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Aug 13 00:35:15.386860 ignition[1015]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Aug 13 00:35:15.386860 ignition[1015]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 13 00:35:15.399525 ignition[1015]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 13 00:35:15.399525 ignition[1015]: INFO : files: files passed Aug 13 00:35:15.399525 ignition[1015]: INFO : Ignition finished successfully Aug 13 00:35:15.395831 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 13 00:35:15.402107 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 13 00:35:15.407826 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 13 00:35:15.429170 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 00:35:15.430148 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 13 00:35:15.440077 initrd-setup-root-after-ignition[1047]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:35:15.440077 initrd-setup-root-after-ignition[1047]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:35:15.442528 initrd-setup-root-after-ignition[1051]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:35:15.442282 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 00:35:15.443640 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 13 00:35:15.445831 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 13 00:35:15.491981 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 13 00:35:15.492159 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 13 00:35:15.493936 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 13 00:35:15.495841 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 13 00:35:15.496637 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 13 00:35:15.498329 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 13 00:35:15.526395 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 00:35:15.528634 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 13 00:35:15.566577 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 13 00:35:15.568142 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 00:35:15.569146 systemd[1]: Stopped target timers.target - Timer Units.
Aug 13 00:35:15.570400 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 13 00:35:15.570776 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 00:35:15.572777 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 13 00:35:15.574301 systemd[1]: Stopped target basic.target - Basic System.
Aug 13 00:35:15.575450 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 13 00:35:15.576679 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 00:35:15.578193 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 13 00:35:15.579785 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Aug 13 00:35:15.581235 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 13 00:35:15.582834 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 00:35:15.584229 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 13 00:35:15.585919 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 13 00:35:15.587543 systemd[1]: Stopped target swap.target - Swaps.
Aug 13 00:35:15.588572 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 13 00:35:15.588769 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 00:35:15.590251 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 13 00:35:15.591228 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 00:35:15.592720 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 13 00:35:15.592872 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 00:35:15.594035 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 13 00:35:15.594248 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 13 00:35:15.596097 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 13 00:35:15.596328 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 00:35:15.598034 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 13 00:35:15.598241 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 13 00:35:15.601883 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 13 00:35:15.604857 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 13 00:35:15.605580 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 13 00:35:15.606047 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 00:35:15.608590 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 13 00:35:15.609887 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 00:35:15.621677 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 13 00:35:15.629807 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 13 00:35:15.652463 ignition[1071]: INFO : Ignition 2.21.0
Aug 13 00:35:15.652463 ignition[1071]: INFO : Stage: umount
Aug 13 00:35:15.654043 ignition[1071]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 00:35:15.654043 ignition[1071]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 00:35:15.657457 ignition[1071]: INFO : umount: umount passed
Aug 13 00:35:15.657457 ignition[1071]: INFO : Ignition finished successfully
Aug 13 00:35:15.662195 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 13 00:35:15.662373 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 13 00:35:15.667944 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 13 00:35:15.668780 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 13 00:35:15.668847 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 13 00:35:15.669572 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 13 00:35:15.669633 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 13 00:35:15.671025 systemd[1]: ignition-fetch.service: Deactivated successfully.
Aug 13 00:35:15.671085 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Aug 13 00:35:15.671755 systemd[1]: Stopped target network.target - Network.
Aug 13 00:35:15.672323 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 13 00:35:15.672391 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 00:35:15.673120 systemd[1]: Stopped target paths.target - Path Units.
Aug 13 00:35:15.674273 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 13 00:35:15.675821 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 00:35:15.676943 systemd[1]: Stopped target slices.target - Slice Units.
Aug 13 00:35:15.678189 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 13 00:35:15.679694 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 13 00:35:15.679756 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 00:35:15.681071 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 13 00:35:15.681124 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 00:35:15.682763 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 13 00:35:15.682866 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 13 00:35:15.684140 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 13 00:35:15.684221 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 13 00:35:15.685829 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 13 00:35:15.687725 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 13 00:35:15.718372 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 13 00:35:15.720884 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 13 00:35:15.726403 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 13 00:35:15.727981 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 13 00:35:15.733773 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Aug 13 00:35:15.734244 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 13 00:35:15.734469 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 13 00:35:15.740181 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Aug 13 00:35:15.742765 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Aug 13 00:35:15.744330 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 13 00:35:15.744415 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 00:35:15.747406 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 13 00:35:15.747685 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 13 00:35:15.750934 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 13 00:35:15.751518 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 13 00:35:15.751591 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 00:35:15.752290 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 00:35:15.752344 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 13 00:35:15.754588 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 13 00:35:15.754673 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 13 00:35:15.755771 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 13 00:35:15.755825 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 00:35:15.760115 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 00:35:15.766213 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Aug 13 00:35:15.766311 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Aug 13 00:35:15.768387 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 13 00:35:15.770320 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 00:35:15.777871 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 13 00:35:15.777969 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 13 00:35:15.778801 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 13 00:35:15.778856 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 00:35:15.780158 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 13 00:35:15.780233 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 00:35:15.783892 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 13 00:35:15.783966 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 13 00:35:15.785346 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 00:35:15.785418 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:35:15.787813 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 13 00:35:15.789482 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Aug 13 00:35:15.789606 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Aug 13 00:35:15.796138 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 13 00:35:15.796259 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 00:35:15.797578 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Aug 13 00:35:15.797677 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 00:35:15.801013 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 13 00:35:15.801126 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 00:35:15.803050 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 00:35:15.803187 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:35:15.809236 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Aug 13 00:35:15.809314 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Aug 13 00:35:15.809446 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Aug 13 00:35:15.809516 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Aug 13 00:35:15.810149 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 13 00:35:15.810306 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 13 00:35:15.816062 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 13 00:35:15.816259 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 13 00:35:15.818076 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 13 00:35:15.821985 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 13 00:35:15.850366 systemd[1]: Switching root.
Aug 13 00:35:15.888715 systemd-journald[206]: Received SIGTERM from PID 1 (systemd).
Aug 13 00:35:15.888977 systemd-journald[206]: Journal stopped
Aug 13 00:35:17.764529 kernel: SELinux: policy capability network_peer_controls=1
Aug 13 00:35:17.764614 kernel: SELinux: policy capability open_perms=1
Aug 13 00:35:17.764634 kernel: SELinux: policy capability extended_socket_class=1
Aug 13 00:35:17.765684 kernel: SELinux: policy capability always_check_network=0
Aug 13 00:35:17.765729 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 13 00:35:17.765747 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 13 00:35:17.765764 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 13 00:35:17.765785 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 13 00:35:17.765800 kernel: SELinux: policy capability userspace_initial_context=0
Aug 13 00:35:17.765815 kernel: audit: type=1403 audit(1755045316.153:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 13 00:35:17.765832 systemd[1]: Successfully loaded SELinux policy in 74.097ms.
Aug 13 00:35:17.765857 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 22.684ms.
Aug 13 00:35:17.765886 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 13 00:35:17.765908 systemd[1]: Detected virtualization kvm.
Aug 13 00:35:17.765932 systemd[1]: Detected architecture x86-64.
Aug 13 00:35:17.765954 systemd[1]: Detected first boot.
Aug 13 00:35:17.765980 systemd[1]: Initializing machine ID from random generator.
Aug 13 00:35:17.766000 zram_generator::config[1116]: No configuration found.
Aug 13 00:35:17.766020 kernel: Guest personality initialized and is inactive
Aug 13 00:35:17.766036 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Aug 13 00:35:17.766051 kernel: Initialized host personality
Aug 13 00:35:17.766066 kernel: NET: Registered PF_VSOCK protocol family
Aug 13 00:35:17.766093 systemd[1]: Populated /etc with preset unit settings.
Aug 13 00:35:17.766119 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Aug 13 00:35:17.766141 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Aug 13 00:35:17.766970 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Aug 13 00:35:17.766999 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug 13 00:35:17.767018 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 13 00:35:17.767035 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 13 00:35:17.767058 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 13 00:35:17.767075 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 13 00:35:17.767092 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 13 00:35:17.767109 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 13 00:35:17.767126 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 13 00:35:17.767143 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 13 00:35:17.767160 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 00:35:17.767177 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 00:35:17.767200 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 13 00:35:17.767218 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 13 00:35:17.767234 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 13 00:35:17.767257 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 00:35:17.767274 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Aug 13 00:35:17.767291 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 00:35:17.767490 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 00:35:17.767510 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Aug 13 00:35:17.767533 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Aug 13 00:35:17.767550 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Aug 13 00:35:17.767571 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 13 00:35:17.767588 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 00:35:17.767605 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 00:35:17.767621 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 00:35:17.767638 systemd[1]: Reached target swap.target - Swaps.
Aug 13 00:35:17.767672 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 13 00:35:17.767694 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 13 00:35:17.767711 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Aug 13 00:35:17.767729 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 00:35:17.767745 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 00:35:17.767766 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 00:35:17.767784 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 13 00:35:17.767802 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 13 00:35:17.767819 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 13 00:35:17.767835 systemd[1]: Mounting media.mount - External Media Directory...
Aug 13 00:35:17.767858 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 00:35:17.767875 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 13 00:35:17.767892 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 13 00:35:17.767909 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 13 00:35:17.767932 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 13 00:35:17.767950 systemd[1]: Reached target machines.target - Containers.
Aug 13 00:35:17.767976 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 13 00:35:17.767993 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 00:35:17.768010 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 00:35:17.768028 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 13 00:35:17.768051 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 00:35:17.768073 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 00:35:17.768094 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 00:35:17.768111 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 13 00:35:17.768128 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 00:35:17.768145 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 13 00:35:17.768162 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Aug 13 00:35:17.768179 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Aug 13 00:35:17.768195 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Aug 13 00:35:17.768217 systemd[1]: Stopped systemd-fsck-usr.service.
Aug 13 00:35:17.768239 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 13 00:35:17.768256 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 00:35:17.768272 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 00:35:17.768294 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 13 00:35:17.768310 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 13 00:35:17.768327 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Aug 13 00:35:17.768344 kernel: loop: module loaded
Aug 13 00:35:17.768548 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 00:35:17.768579 systemd[1]: verity-setup.service: Deactivated successfully.
Aug 13 00:35:17.768597 systemd[1]: Stopped verity-setup.service.
Aug 13 00:35:17.768615 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 00:35:17.768631 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 13 00:35:17.771574 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 13 00:35:17.771608 systemd[1]: Mounted media.mount - External Media Directory.
Aug 13 00:35:17.771626 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 13 00:35:17.771643 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 13 00:35:17.771678 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 13 00:35:17.771707 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 13 00:35:17.771723 kernel: fuse: init (API version 7.41)
Aug 13 00:35:17.771739 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 00:35:17.771755 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 13 00:35:17.771771 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 13 00:35:17.771787 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 00:35:17.771813 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 00:35:17.771830 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 00:35:17.771848 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 00:35:17.771871 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 13 00:35:17.771889 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 13 00:35:17.771906 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 00:35:17.771924 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 00:35:17.771940 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 00:35:17.771958 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 13 00:35:17.771975 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 13 00:35:17.771992 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 13 00:35:17.772013 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 13 00:35:17.772031 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 13 00:35:17.772056 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 13 00:35:17.772078 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 00:35:17.772097 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Aug 13 00:35:17.772157 systemd-journald[1204]: Collecting audit messages is disabled.
Aug 13 00:35:17.772209 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 13 00:35:17.772229 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 00:35:17.772248 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 13 00:35:17.772266 kernel: ACPI: bus type drm_connector registered
Aug 13 00:35:17.772288 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 00:35:17.772306 systemd-journald[1204]: Journal started
Aug 13 00:35:17.772339 systemd-journald[1204]: Runtime Journal (/run/log/journal/5ce63cff0a3144cc9bf04512a892a2cd) is 8M, max 78.5M, 70.5M free.
Aug 13 00:35:17.035079 systemd[1]: Queued start job for default target multi-user.target.
Aug 13 00:35:17.062642 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Aug 13 00:35:17.063517 systemd[1]: systemd-journald.service: Deactivated successfully.
Aug 13 00:35:17.780829 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 13 00:35:17.789853 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 00:35:17.797335 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 00:35:17.819943 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug 13 00:35:17.819992 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 00:35:17.823000 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 00:35:17.827691 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 00:35:17.829023 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 00:35:17.830441 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Aug 13 00:35:17.831955 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 13 00:35:17.834518 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 13 00:35:17.924303 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 13 00:35:17.929231 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 13 00:35:17.952935 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 13 00:35:17.989335 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Aug 13 00:35:17.991227 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 00:35:18.012614 systemd-journald[1204]: Time spent on flushing to /var/log/journal/5ce63cff0a3144cc9bf04512a892a2cd is 96.013ms for 1011 entries.
Aug 13 00:35:18.012614 systemd-journald[1204]: System Journal (/var/log/journal/5ce63cff0a3144cc9bf04512a892a2cd) is 8M, max 195.6M, 187.6M free.
Aug 13 00:35:18.127855 systemd-journald[1204]: Received client request to flush runtime journal.
Aug 13 00:35:18.127951 kernel: loop0: detected capacity change from 0 to 224512
Aug 13 00:35:18.127975 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 13 00:35:18.041894 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 00:35:18.068582 systemd-tmpfiles[1223]: ACLs are not supported, ignoring.
Aug 13 00:35:18.068721 systemd-tmpfiles[1223]: ACLs are not supported, ignoring.
Aug 13 00:35:18.075259 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 13 00:35:18.080067 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Aug 13 00:35:18.100955 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 00:35:18.110835 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 13 00:35:18.135559 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 13 00:35:18.152684 kernel: loop1: detected capacity change from 0 to 113872
Aug 13 00:35:18.211695 kernel: loop2: detected capacity change from 0 to 8
Aug 13 00:35:18.247703 kernel: loop3: detected capacity change from 0 to 146240
Aug 13 00:35:18.258066 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 13 00:35:18.266146 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 00:35:18.397736 kernel: loop4: detected capacity change from 0 to 224512
Aug 13 00:35:18.444666 systemd-tmpfiles[1263]: ACLs are not supported, ignoring.
Aug 13 00:35:18.445454 systemd-tmpfiles[1263]: ACLs are not supported, ignoring.
Aug 13 00:35:18.456640 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 00:35:18.480696 kernel: loop5: detected capacity change from 0 to 113872 Aug 13 00:35:18.508969 kernel: loop6: detected capacity change from 0 to 8 Aug 13 00:35:18.513684 kernel: loop7: detected capacity change from 0 to 146240 Aug 13 00:35:18.582429 (sd-merge)[1265]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'. Aug 13 00:35:18.584862 (sd-merge)[1265]: Merged extensions into '/usr'. Aug 13 00:35:18.639025 systemd[1]: Reload requested from client PID 1222 ('systemd-sysext') (unit systemd-sysext.service)... Aug 13 00:35:18.639097 systemd[1]: Reloading... Aug 13 00:35:18.965951 zram_generator::config[1289]: No configuration found. Aug 13 00:35:19.229772 ldconfig[1217]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 00:35:19.235615 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:35:19.353669 systemd[1]: Reloading finished in 713 ms. Aug 13 00:35:19.392026 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 13 00:35:19.394412 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 13 00:35:19.411868 systemd[1]: Starting ensure-sysext.service... Aug 13 00:35:19.414385 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 00:35:19.461810 systemd[1]: Reload requested from client PID 1336 ('systemctl') (unit ensure-sysext.service)... Aug 13 00:35:19.461842 systemd[1]: Reloading... Aug 13 00:35:19.513667 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. 
Aug 13 00:35:19.513764 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Aug 13 00:35:19.514256 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 00:35:19.514634 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 13 00:35:19.517237 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 00:35:19.517636 systemd-tmpfiles[1337]: ACLs are not supported, ignoring. Aug 13 00:35:19.517766 systemd-tmpfiles[1337]: ACLs are not supported, ignoring. Aug 13 00:35:19.534676 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 00:35:19.534692 systemd-tmpfiles[1337]: Skipping /boot Aug 13 00:35:19.663915 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 00:35:19.664279 systemd-tmpfiles[1337]: Skipping /boot Aug 13 00:35:19.762719 zram_generator::config[1370]: No configuration found. Aug 13 00:35:19.963854 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:35:20.077396 systemd[1]: Reloading finished in 614 ms. Aug 13 00:35:20.104449 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 13 00:35:20.130210 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:35:20.145758 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 13 00:35:20.152335 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 13 00:35:20.162483 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Aug 13 00:35:20.172418 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 00:35:20.184113 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 00:35:20.193048 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 13 00:35:20.199452 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:35:20.199669 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:35:20.202496 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:35:20.216030 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:35:20.222461 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:35:20.223970 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:35:20.224092 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 00:35:20.229900 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 13 00:35:20.230634 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:35:20.238019 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:35:20.238215 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Aug 13 00:35:20.238421 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:35:20.238517 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 00:35:20.238607 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:35:20.245278 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:35:20.246120 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:35:20.252584 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 00:35:20.253624 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:35:20.253758 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 00:35:20.253892 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:35:20.265299 systemd[1]: Finished ensure-sysext.service. Aug 13 00:35:20.271343 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Aug 13 00:35:20.306897 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 13 00:35:20.311346 systemd-udevd[1414]: Using default interface naming scheme 'v255'. 
Aug 13 00:35:20.322001 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 13 00:35:20.325117 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:35:20.326046 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:35:20.336045 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 13 00:35:20.350866 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:35:20.351150 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:35:20.352235 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 00:35:20.354640 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:35:20.360056 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:35:20.362148 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:35:20.371417 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:35:20.371750 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 00:35:20.393961 augenrules[1449]: No rules Aug 13 00:35:20.398438 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 00:35:20.399201 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 13 00:35:20.404616 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 13 00:35:20.420396 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 13 00:35:20.423526 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Aug 13 00:35:20.425305 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 13 00:35:20.429568 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:35:20.436461 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 00:35:20.642456 systemd-networkd[1464]: lo: Link UP Aug 13 00:35:20.642913 systemd-networkd[1464]: lo: Gained carrier Aug 13 00:35:20.644098 systemd-networkd[1464]: Enumeration completed Aug 13 00:35:20.644282 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 00:35:20.648027 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Aug 13 00:35:20.650898 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 13 00:35:20.690342 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Aug 13 00:35:20.691332 systemd[1]: Reached target time-set.target - System Time Set. Aug 13 00:35:20.723735 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Aug 13 00:35:20.777177 systemd-resolved[1412]: Positive Trust Anchors: Aug 13 00:35:20.777211 systemd-resolved[1412]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:35:20.777259 systemd-resolved[1412]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 00:35:20.790065 systemd-resolved[1412]: Defaulting to hostname 'linux'. Aug 13 00:35:20.796088 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 00:35:20.797305 systemd[1]: Reached target network.target - Network. Aug 13 00:35:20.798153 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 00:35:20.799849 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 00:35:20.800523 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 13 00:35:20.801186 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 13 00:35:20.802838 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Aug 13 00:35:20.803671 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 13 00:35:20.804903 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 13 00:35:20.805570 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 13 00:35:20.806534 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
Aug 13 00:35:20.806569 systemd[1]: Reached target paths.target - Path Units. Aug 13 00:35:20.807292 systemd[1]: Reached target timers.target - Timer Units. Aug 13 00:35:20.811399 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 13 00:35:20.816386 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 13 00:35:20.846934 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Aug 13 00:35:20.847990 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Aug 13 00:35:20.848596 systemd[1]: Reached target ssh-access.target - SSH Access Available. Aug 13 00:35:20.862397 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 13 00:35:20.865051 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Aug 13 00:35:20.867213 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 13 00:35:20.877767 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Aug 13 00:35:20.877978 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 00:35:20.879891 systemd[1]: Reached target basic.target - Basic System. Aug 13 00:35:20.880866 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 13 00:35:20.880923 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 13 00:35:20.885004 systemd[1]: Starting containerd.service - containerd container runtime... Aug 13 00:35:20.891401 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Aug 13 00:35:20.895176 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 13 00:35:20.901338 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 13 00:35:20.967616 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Aug 13 00:35:20.972067 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 13 00:35:20.972889 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 13 00:35:20.975487 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Aug 13 00:35:20.978929 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 13 00:35:20.996134 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 13 00:35:21.012994 jq[1505]: false Aug 13 00:35:21.014187 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 13 00:35:21.022960 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 13 00:35:21.032932 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 13 00:35:21.035961 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 00:35:21.036760 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 13 00:35:21.047955 google_oslogin_nss_cache[1507]: oslogin_cache_refresh[1507]: Refreshing passwd entry cache Aug 13 00:35:21.050635 oslogin_cache_refresh[1507]: Refreshing passwd entry cache Aug 13 00:35:21.058417 google_oslogin_nss_cache[1507]: oslogin_cache_refresh[1507]: Failure getting users, quitting Aug 13 00:35:21.061700 oslogin_cache_refresh[1507]: Failure getting users, quitting Aug 13 00:35:21.063419 google_oslogin_nss_cache[1507]: oslogin_cache_refresh[1507]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Aug 13 00:35:21.063419 google_oslogin_nss_cache[1507]: oslogin_cache_refresh[1507]: Refreshing group entry cache Aug 13 00:35:21.063419 google_oslogin_nss_cache[1507]: oslogin_cache_refresh[1507]: Failure getting groups, quitting Aug 13 00:35:21.063419 google_oslogin_nss_cache[1507]: oslogin_cache_refresh[1507]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Aug 13 00:35:21.061782 oslogin_cache_refresh[1507]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Aug 13 00:35:21.061848 oslogin_cache_refresh[1507]: Refreshing group entry cache Aug 13 00:35:21.063137 oslogin_cache_refresh[1507]: Failure getting groups, quitting Aug 13 00:35:21.063152 oslogin_cache_refresh[1507]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Aug 13 00:35:21.071553 extend-filesystems[1506]: Found /dev/sda6 Aug 13 00:35:21.078263 extend-filesystems[1506]: Found /dev/sda9 Aug 13 00:35:21.199812 systemd[1]: Starting update-engine.service - Update Engine... Aug 13 00:35:21.217443 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 13 00:35:21.255426 extend-filesystems[1506]: Checking size of /dev/sda9 Aug 13 00:35:21.249753 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 13 00:35:21.252814 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 00:35:21.253899 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 13 00:35:21.254464 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Aug 13 00:35:21.262401 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Aug 13 00:35:21.268515 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Aug 13 00:35:21.286089 update_engine[1514]: I20250813 00:35:21.272156 1514 main.cc:92] Flatcar Update Engine starting Aug 13 00:35:21.287461 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 13 00:35:21.373710 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 00:35:21.374391 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 13 00:35:21.406910 jq[1522]: true Aug 13 00:35:21.418478 dbus-daemon[1503]: [system] SELinux support is enabled Aug 13 00:35:21.427547 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 13 00:35:21.431513 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 00:35:21.431556 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 13 00:35:21.451213 extend-filesystems[1506]: Resized partition /dev/sda9 Aug 13 00:35:21.433088 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 00:35:21.452713 tar[1527]: linux-amd64/LICENSE Aug 13 00:35:21.433114 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 13 00:35:21.433258 (ntainerd)[1545]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 13 00:35:21.456726 tar[1527]: linux-amd64/helm Aug 13 00:35:21.463245 extend-filesystems[1552]: resize2fs 1.47.2 (1-Jan-2025) Aug 13 00:35:21.488481 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 555003 blocks Aug 13 00:35:21.489104 systemd[1]: Started update-engine.service - Update Engine. 
Aug 13 00:35:21.490686 update_engine[1514]: I20250813 00:35:21.490130 1514 update_check_scheduler.cc:74] Next update check in 8m38s Aug 13 00:35:21.493962 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 13 00:35:21.499678 kernel: EXT4-fs (sda9): resized filesystem to 555003 Aug 13 00:35:21.556880 sshd_keygen[1539]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 00:35:21.561707 extend-filesystems[1552]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Aug 13 00:35:21.561707 extend-filesystems[1552]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 13 00:35:21.561707 extend-filesystems[1552]: The filesystem on /dev/sda9 is now 555003 (4k) blocks long. Aug 13 00:35:21.571497 extend-filesystems[1506]: Resized filesystem in /dev/sda9 Aug 13 00:35:21.567454 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 00:35:21.568155 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 13 00:35:21.661531 jq[1549]: true Aug 13 00:35:21.666978 coreos-metadata[1502]: Aug 13 00:35:21.665 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Aug 13 00:35:21.767598 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 13 00:35:21.782480 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 13 00:35:21.840323 systemd-networkd[1464]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:35:21.840753 systemd-networkd[1464]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:35:21.843342 systemd-networkd[1464]: eth0: Link UP Aug 13 00:35:21.844943 systemd-networkd[1464]: eth0: Gained carrier Aug 13 00:35:21.845053 systemd-networkd[1464]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Aug 13 00:35:21.873686 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 00:35:21.900817 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 00:35:21.901932 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 13 00:35:21.934114 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 13 00:35:21.942716 bash[1584]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:35:21.992780 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 13 00:35:21.995807 systemd-logind[1513]: New seat seat0. Aug 13 00:35:22.031400 systemd[1]: Starting sshkeys.service... Aug 13 00:35:22.044572 systemd[1]: Started systemd-logind.service - User Login Management. Aug 13 00:35:22.199748 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Aug 13 00:35:22.226205 kernel: ACPI: button: Power Button [PWRF] Aug 13 00:35:22.240111 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Aug 13 00:35:22.244024 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Aug 13 00:35:22.271295 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 13 00:35:22.296023 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 13 00:35:22.307108 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 13 00:35:22.308975 systemd[1]: Reached target getty.target - Login Prompts. 
Aug 13 00:35:22.409950 locksmithd[1553]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 00:35:22.425474 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Aug 13 00:35:22.432273 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Aug 13 00:35:22.516301 systemd-networkd[1464]: eth0: DHCPv4 address 172.237.133.249/24, gateway 172.237.133.1 acquired from 23.192.120.216 Aug 13 00:35:22.527916 dbus-daemon[1503]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1464 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Aug 13 00:35:22.541970 systemd-timesyncd[1431]: Network configuration changed, trying to establish connection. Aug 13 00:35:22.547868 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Aug 13 00:35:22.737147 coreos-metadata[1602]: Aug 13 00:35:22.735 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Aug 13 00:35:22.769137 coreos-metadata[1502]: Aug 13 00:35:22.768 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Aug 13 00:35:22.783853 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:35:23.719559 systemd-timesyncd[1431]: Contacted time server 66.59.198.94:123 (0.flatcar.pool.ntp.org). Aug 13 00:35:23.720044 systemd-timesyncd[1431]: Initial clock synchronization to Wed 2025-08-13 00:35:23.717608 UTC. Aug 13 00:35:23.720234 systemd-resolved[1412]: Clock change detected. Flushing caches. 
Aug 13 00:35:23.938376 coreos-metadata[1602]: Aug 13 00:35:23.739 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Aug 13 00:35:23.943815 coreos-metadata[1602]: Aug 13 00:35:23.943 INFO Fetch successful Aug 13 00:35:23.963197 coreos-metadata[1502]: Aug 13 00:35:23.962 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Aug 13 00:35:24.007585 update-ssh-keys[1629]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:35:24.013054 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Aug 13 00:35:24.022048 systemd[1]: Finished sshkeys.service. Aug 13 00:35:24.182448 containerd[1545]: time="2025-08-13T00:35:24Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Aug 13 00:35:24.195072 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 13 00:35:24.200597 kernel: EDAC MC: Ver: 3.0.0 Aug 13 00:35:24.201755 systemd[1]: Started sshd@0-172.237.133.249:22-147.75.109.163:40622.service - OpenSSH per-connection server daemon (147.75.109.163:40622). 
Aug 13 00:35:24.204083 coreos-metadata[1502]: Aug 13 00:35:24.202 INFO Fetch successful Aug 13 00:35:24.204083 coreos-metadata[1502]: Aug 13 00:35:24.202 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Aug 13 00:35:24.215068 containerd[1545]: time="2025-08-13T00:35:24.214712544Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Aug 13 00:35:24.294670 containerd[1545]: time="2025-08-13T00:35:24.294204945Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="19.44µs" Aug 13 00:35:24.294670 containerd[1545]: time="2025-08-13T00:35:24.294282585Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Aug 13 00:35:24.294670 containerd[1545]: time="2025-08-13T00:35:24.294332034Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Aug 13 00:35:24.294963 containerd[1545]: time="2025-08-13T00:35:24.294941354Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Aug 13 00:35:24.295030 containerd[1545]: time="2025-08-13T00:35:24.295015244Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Aug 13 00:35:24.295170 containerd[1545]: time="2025-08-13T00:35:24.295152324Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 13 00:35:24.295333 containerd[1545]: time="2025-08-13T00:35:24.295312264Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 13 00:35:24.295391 containerd[1545]: time="2025-08-13T00:35:24.295377134Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 13 00:35:24.295828 containerd[1545]: 
time="2025-08-13T00:35:24.295794324Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 13 00:35:24.295889 containerd[1545]: time="2025-08-13T00:35:24.295875084Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 13 00:35:24.295938 containerd[1545]: time="2025-08-13T00:35:24.295924984Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 13 00:35:24.295980 containerd[1545]: time="2025-08-13T00:35:24.295968854Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Aug 13 00:35:24.296164 containerd[1545]: time="2025-08-13T00:35:24.296144424Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Aug 13 00:35:24.298226 containerd[1545]: time="2025-08-13T00:35:24.298203403Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 13 00:35:24.298607 containerd[1545]: time="2025-08-13T00:35:24.298584952Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 13 00:35:24.298671 containerd[1545]: time="2025-08-13T00:35:24.298657192Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Aug 13 00:35:24.298768 containerd[1545]: time="2025-08-13T00:35:24.298751432Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Aug 13 00:35:24.376145 containerd[1545]: 
time="2025-08-13T00:35:24.376073224Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Aug 13 00:35:24.379187 containerd[1545]: time="2025-08-13T00:35:24.378693092Z" level=info msg="metadata content store policy set" policy=shared Aug 13 00:35:24.383215 containerd[1545]: time="2025-08-13T00:35:24.383138640Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Aug 13 00:35:24.383318 containerd[1545]: time="2025-08-13T00:35:24.383280710Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Aug 13 00:35:24.383394 containerd[1545]: time="2025-08-13T00:35:24.383362170Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Aug 13 00:35:24.383421 containerd[1545]: time="2025-08-13T00:35:24.383397360Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Aug 13 00:35:24.383462 containerd[1545]: time="2025-08-13T00:35:24.383426660Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Aug 13 00:35:24.383531 containerd[1545]: time="2025-08-13T00:35:24.383457100Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Aug 13 00:35:24.383531 containerd[1545]: time="2025-08-13T00:35:24.383484740Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Aug 13 00:35:24.383531 containerd[1545]: time="2025-08-13T00:35:24.383509170Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Aug 13 00:35:24.383632 containerd[1545]: time="2025-08-13T00:35:24.383562780Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Aug 13 00:35:24.383632 containerd[1545]: time="2025-08-13T00:35:24.383580870Z" 
level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Aug 13 00:35:24.383632 containerd[1545]: time="2025-08-13T00:35:24.383593630Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Aug 13 00:35:24.383632 containerd[1545]: time="2025-08-13T00:35:24.383611760Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Aug 13 00:35:24.383920 containerd[1545]: time="2025-08-13T00:35:24.383818370Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Aug 13 00:35:24.383920 containerd[1545]: time="2025-08-13T00:35:24.383864630Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Aug 13 00:35:24.383920 containerd[1545]: time="2025-08-13T00:35:24.383896420Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Aug 13 00:35:24.384209 containerd[1545]: time="2025-08-13T00:35:24.383932520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Aug 13 00:35:24.384209 containerd[1545]: time="2025-08-13T00:35:24.383950930Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Aug 13 00:35:24.384209 containerd[1545]: time="2025-08-13T00:35:24.383964840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Aug 13 00:35:24.384209 containerd[1545]: time="2025-08-13T00:35:24.383979970Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Aug 13 00:35:24.384209 containerd[1545]: time="2025-08-13T00:35:24.384004190Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Aug 13 00:35:24.384209 containerd[1545]: time="2025-08-13T00:35:24.384024350Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces 
type=io.containerd.grpc.v1 Aug 13 00:35:24.384209 containerd[1545]: time="2025-08-13T00:35:24.384046250Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Aug 13 00:35:24.384209 containerd[1545]: time="2025-08-13T00:35:24.384069850Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Aug 13 00:35:24.385367 containerd[1545]: time="2025-08-13T00:35:24.384232370Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Aug 13 00:35:24.385367 containerd[1545]: time="2025-08-13T00:35:24.384273120Z" level=info msg="Start snapshots syncer" Aug 13 00:35:24.385367 containerd[1545]: time="2025-08-13T00:35:24.384316699Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Aug 13 00:35:24.385641 containerd[1545]: time="2025-08-13T00:35:24.384780819Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMSco
reAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Aug 13 00:35:24.385641 containerd[1545]: time="2025-08-13T00:35:24.384875279Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Aug 13 00:35:24.396564 containerd[1545]: time="2025-08-13T00:35:24.395450524Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Aug 13 00:35:24.396564 containerd[1545]: time="2025-08-13T00:35:24.395676804Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Aug 13 00:35:24.396564 containerd[1545]: time="2025-08-13T00:35:24.395716264Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Aug 13 00:35:24.396564 containerd[1545]: time="2025-08-13T00:35:24.395731234Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Aug 13 00:35:24.396564 containerd[1545]: time="2025-08-13T00:35:24.395757244Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Aug 13 00:35:24.396564 containerd[1545]: time="2025-08-13T00:35:24.395775064Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Aug 13 
00:35:24.396564 containerd[1545]: time="2025-08-13T00:35:24.395789704Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Aug 13 00:35:24.396564 containerd[1545]: time="2025-08-13T00:35:24.395803744Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Aug 13 00:35:24.396564 containerd[1545]: time="2025-08-13T00:35:24.395857644Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Aug 13 00:35:24.396564 containerd[1545]: time="2025-08-13T00:35:24.395898684Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Aug 13 00:35:24.396564 containerd[1545]: time="2025-08-13T00:35:24.395927054Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Aug 13 00:35:24.396564 containerd[1545]: time="2025-08-13T00:35:24.396019904Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 13 00:35:24.396564 containerd[1545]: time="2025-08-13T00:35:24.396113654Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 13 00:35:24.396564 containerd[1545]: time="2025-08-13T00:35:24.396129784Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 13 00:35:24.396564 containerd[1545]: time="2025-08-13T00:35:24.396144074Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 13 00:35:24.396564 containerd[1545]: time="2025-08-13T00:35:24.396156264Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Aug 13 00:35:24.396564 containerd[1545]: time="2025-08-13T00:35:24.396169264Z" 
level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Aug 13 00:35:24.396564 containerd[1545]: time="2025-08-13T00:35:24.396190084Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Aug 13 00:35:24.396564 containerd[1545]: time="2025-08-13T00:35:24.396214924Z" level=info msg="runtime interface created" Aug 13 00:35:24.396564 containerd[1545]: time="2025-08-13T00:35:24.396222334Z" level=info msg="created NRI interface" Aug 13 00:35:24.396564 containerd[1545]: time="2025-08-13T00:35:24.396233664Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Aug 13 00:35:24.396564 containerd[1545]: time="2025-08-13T00:35:24.396247994Z" level=info msg="Connect containerd service" Aug 13 00:35:24.396564 containerd[1545]: time="2025-08-13T00:35:24.396277204Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 00:35:24.402167 containerd[1545]: time="2025-08-13T00:35:24.402091111Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:35:24.450753 systemd-logind[1513]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 13 00:35:24.498249 coreos-metadata[1502]: Aug 13 00:35:24.498 INFO Fetch successful Aug 13 00:35:24.566946 systemd-networkd[1464]: eth0: Gained IPv6LL Aug 13 00:35:24.585085 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 13 00:35:24.698738 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:35:24.716292 systemd[1]: Reached target network-online.target - Network is Online. Aug 13 00:35:24.722827 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Aug 13 00:35:24.761058 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 13 00:35:24.812946 systemd-logind[1513]: Watching system buttons on /dev/input/event2 (Power Button) Aug 13 00:35:24.815838 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Aug 13 00:35:24.831165 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 13 00:35:24.836068 sshd[1634]: Accepted publickey for core from 147.75.109.163 port 40622 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:35:24.841374 sshd-session[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:35:24.971917 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 13 00:35:24.983460 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 13 00:35:25.168976 systemd-logind[1513]: New session 1 of user core. Aug 13 00:35:25.177611 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 13 00:35:25.189794 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 13 00:35:25.207515 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 13 00:35:25.214882 containerd[1545]: time="2025-08-13T00:35:25.204348149Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Aug 13 00:35:25.214882 containerd[1545]: time="2025-08-13T00:35:25.206800298Z" level=info msg="Start subscribing containerd event" Aug 13 00:35:25.214882 containerd[1545]: time="2025-08-13T00:35:25.206851498Z" level=info msg="Start recovering state" Aug 13 00:35:25.214882 containerd[1545]: time="2025-08-13T00:35:25.207091308Z" level=info msg="Start event monitor" Aug 13 00:35:25.214882 containerd[1545]: time="2025-08-13T00:35:25.207117698Z" level=info msg="Start cni network conf syncer for default" Aug 13 00:35:25.214882 containerd[1545]: time="2025-08-13T00:35:25.207147388Z" level=info msg="Start streaming server" Aug 13 00:35:25.214882 containerd[1545]: time="2025-08-13T00:35:25.207160378Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Aug 13 00:35:25.214882 containerd[1545]: time="2025-08-13T00:35:25.207169408Z" level=info msg="runtime interface starting up..." Aug 13 00:35:25.214882 containerd[1545]: time="2025-08-13T00:35:25.207182878Z" level=info msg="starting plugins..." Aug 13 00:35:25.214882 containerd[1545]: time="2025-08-13T00:35:25.207223178Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Aug 13 00:35:25.214882 containerd[1545]: time="2025-08-13T00:35:25.208151157Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 00:35:25.214882 containerd[1545]: time="2025-08-13T00:35:25.211560356Z" level=info msg="containerd successfully booted in 1.038651s" Aug 13 00:35:25.216075 systemd[1]: Started containerd.service - containerd container runtime. Aug 13 00:35:25.223598 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 13 00:35:25.255920 (systemd)[1685]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:35:25.271680 systemd-logind[1513]: New session c1 of user core. Aug 13 00:35:25.431661 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
Aug 13 00:35:25.448023 dbus-daemon[1503]: [system] Successfully activated service 'org.freedesktop.hostname1' Aug 13 00:35:25.451680 dbus-daemon[1503]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.12' (uid=0 pid=1621 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Aug 13 00:35:25.468842 systemd[1]: Starting polkit.service - Authorization Manager... Aug 13 00:35:25.527765 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Aug 13 00:35:25.530253 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 13 00:35:25.692970 systemd[1685]: Queued start job for default target default.target. Aug 13 00:35:25.700544 systemd[1685]: Created slice app.slice - User Application Slice. Aug 13 00:35:25.700727 systemd[1685]: Reached target paths.target - Paths. Aug 13 00:35:25.700828 systemd[1685]: Reached target timers.target - Timers. Aug 13 00:35:25.706260 systemd[1685]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 13 00:35:25.884066 systemd[1685]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 13 00:35:25.884318 systemd[1685]: Reached target sockets.target - Sockets. Aug 13 00:35:25.884397 systemd[1685]: Reached target basic.target - Basic System. Aug 13 00:35:25.884472 systemd[1685]: Reached target default.target - Main User Target. Aug 13 00:35:25.885629 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 13 00:35:25.886549 systemd[1685]: Startup finished in 597ms. Aug 13 00:35:25.895995 systemd[1]: Started session-1.scope - Session 1 of User core. 
Aug 13 00:35:25.898608 tar[1527]: linux-amd64/README.md Aug 13 00:35:25.902129 polkitd[1697]: Started polkitd version 126 Aug 13 00:35:25.924411 polkitd[1697]: Loading rules from directory /etc/polkit-1/rules.d Aug 13 00:35:25.924807 polkitd[1697]: Loading rules from directory /run/polkit-1/rules.d Aug 13 00:35:25.924878 polkitd[1697]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Aug 13 00:35:25.925125 polkitd[1697]: Loading rules from directory /usr/local/share/polkit-1/rules.d Aug 13 00:35:25.925156 polkitd[1697]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Aug 13 00:35:25.925216 polkitd[1697]: Loading rules from directory /usr/share/polkit-1/rules.d Aug 13 00:35:25.926011 polkitd[1697]: Finished loading, compiling and executing 2 rules Aug 13 00:35:25.926374 systemd[1]: Started polkit.service - Authorization Manager. Aug 13 00:35:25.928387 dbus-daemon[1503]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Aug 13 00:35:25.928996 polkitd[1697]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Aug 13 00:35:25.940434 systemd-hostnamed[1621]: Hostname set to <172-237-133-249> (transient) Aug 13 00:35:25.940620 systemd-resolved[1412]: System hostname changed to '172-237-133-249'. Aug 13 00:35:25.946348 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 13 00:35:26.208846 systemd[1]: Started sshd@1-172.237.133.249:22-147.75.109.163:40638.service - OpenSSH per-connection server daemon (147.75.109.163:40638). 
Aug 13 00:35:26.637050 sshd[1715]: Accepted publickey for core from 147.75.109.163 port 40638 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:35:26.637861 sshd-session[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:35:26.646079 systemd-logind[1513]: New session 2 of user core. Aug 13 00:35:26.654702 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 13 00:35:26.896139 sshd[1717]: Connection closed by 147.75.109.163 port 40638 Aug 13 00:35:26.894183 sshd-session[1715]: pam_unix(sshd:session): session closed for user core Aug 13 00:35:26.905757 systemd[1]: sshd@1-172.237.133.249:22-147.75.109.163:40638.service: Deactivated successfully. Aug 13 00:35:26.909095 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 00:35:26.911972 systemd-logind[1513]: Session 2 logged out. Waiting for processes to exit. Aug 13 00:35:26.915753 systemd-logind[1513]: Removed session 2. Aug 13 00:35:26.998000 systemd[1]: Started sshd@2-172.237.133.249:22-147.75.109.163:40644.service - OpenSSH per-connection server daemon (147.75.109.163:40644). Aug 13 00:35:27.558424 sshd[1723]: Accepted publickey for core from 147.75.109.163 port 40644 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:35:27.561537 sshd-session[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:35:27.579636 systemd-logind[1513]: New session 3 of user core. Aug 13 00:35:27.604975 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 13 00:35:27.900575 sshd[1725]: Connection closed by 147.75.109.163 port 40644 Aug 13 00:35:27.901839 sshd-session[1723]: pam_unix(sshd:session): session closed for user core Aug 13 00:35:27.911982 systemd-logind[1513]: Session 3 logged out. Waiting for processes to exit. Aug 13 00:35:27.912384 systemd[1]: sshd@2-172.237.133.249:22-147.75.109.163:40644.service: Deactivated successfully. 
Aug 13 00:35:27.916129 systemd[1]: session-3.scope: Deactivated successfully. Aug 13 00:35:27.920218 systemd-logind[1513]: Removed session 3. Aug 13 00:35:28.248886 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:35:28.250456 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 13 00:35:28.259609 systemd[1]: Startup finished in 5.207s (kernel) + 12.372s (initrd) + 11.347s (userspace) = 28.927s. Aug 13 00:35:28.295262 (kubelet)[1735]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:35:29.275934 kubelet[1735]: E0813 00:35:29.275463 1735 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:35:29.279326 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:35:29.279694 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:35:29.280475 systemd[1]: kubelet.service: Consumed 2.833s CPU time, 266.5M memory peak. Aug 13 00:35:37.973680 systemd[1]: Started sshd@3-172.237.133.249:22-147.75.109.163:35616.service - OpenSSH per-connection server daemon (147.75.109.163:35616). Aug 13 00:35:38.320918 sshd[1747]: Accepted publickey for core from 147.75.109.163 port 35616 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:35:38.323115 sshd-session[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:35:38.331808 systemd-logind[1513]: New session 4 of user core. Aug 13 00:35:38.338681 systemd[1]: Started session-4.scope - Session 4 of User core. 
Aug 13 00:35:38.569750 sshd[1749]: Connection closed by 147.75.109.163 port 35616 Aug 13 00:35:38.570719 sshd-session[1747]: pam_unix(sshd:session): session closed for user core Aug 13 00:35:38.576475 systemd[1]: sshd@3-172.237.133.249:22-147.75.109.163:35616.service: Deactivated successfully. Aug 13 00:35:38.579459 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 00:35:38.580658 systemd-logind[1513]: Session 4 logged out. Waiting for processes to exit. Aug 13 00:35:38.583184 systemd-logind[1513]: Removed session 4. Aug 13 00:35:38.630284 systemd[1]: Started sshd@4-172.237.133.249:22-147.75.109.163:35598.service - OpenSSH per-connection server daemon (147.75.109.163:35598). Aug 13 00:35:38.989747 sshd[1755]: Accepted publickey for core from 147.75.109.163 port 35598 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:35:38.991794 sshd-session[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:35:38.998236 systemd-logind[1513]: New session 5 of user core. Aug 13 00:35:39.008934 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 13 00:35:39.235146 sshd[1757]: Connection closed by 147.75.109.163 port 35598 Aug 13 00:35:39.236287 sshd-session[1755]: pam_unix(sshd:session): session closed for user core Aug 13 00:35:39.241793 systemd-logind[1513]: Session 5 logged out. Waiting for processes to exit. Aug 13 00:35:39.242615 systemd[1]: sshd@4-172.237.133.249:22-147.75.109.163:35598.service: Deactivated successfully. Aug 13 00:35:39.245211 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 00:35:39.247439 systemd-logind[1513]: Removed session 5. Aug 13 00:35:39.301188 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 00:35:39.303419 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Aug 13 00:35:39.306744 systemd[1]: Started sshd@5-172.237.133.249:22-147.75.109.163:35610.service - OpenSSH per-connection server daemon (147.75.109.163:35610). Aug 13 00:35:39.613427 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:35:39.632199 (kubelet)[1773]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:35:39.653682 sshd[1764]: Accepted publickey for core from 147.75.109.163 port 35610 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:35:39.656729 sshd-session[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:35:39.665601 systemd-logind[1513]: New session 6 of user core. Aug 13 00:35:39.673745 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 13 00:35:39.685323 kubelet[1773]: E0813 00:35:39.685219 1773 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:35:39.694150 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:35:39.694406 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:35:39.695249 systemd[1]: kubelet.service: Consumed 323ms CPU time, 110.7M memory peak. Aug 13 00:35:39.902836 sshd[1779]: Connection closed by 147.75.109.163 port 35610 Aug 13 00:35:39.903742 sshd-session[1764]: pam_unix(sshd:session): session closed for user core Aug 13 00:35:39.908626 systemd-logind[1513]: Session 6 logged out. Waiting for processes to exit. Aug 13 00:35:39.909390 systemd[1]: sshd@5-172.237.133.249:22-147.75.109.163:35610.service: Deactivated successfully. 
Aug 13 00:35:39.912113 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 00:35:39.914983 systemd-logind[1513]: Removed session 6. Aug 13 00:35:39.973655 systemd[1]: Started sshd@6-172.237.133.249:22-147.75.109.163:35626.service - OpenSSH per-connection server daemon (147.75.109.163:35626). Aug 13 00:35:40.325145 sshd[1786]: Accepted publickey for core from 147.75.109.163 port 35626 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:35:40.328162 sshd-session[1786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:35:40.335507 systemd-logind[1513]: New session 7 of user core. Aug 13 00:35:40.344763 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 13 00:35:40.535466 sudo[1789]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 00:35:40.536016 sudo[1789]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:35:40.560850 sudo[1789]: pam_unix(sudo:session): session closed for user root Aug 13 00:35:40.613270 sshd[1788]: Connection closed by 147.75.109.163 port 35626 Aug 13 00:35:40.615259 sshd-session[1786]: pam_unix(sshd:session): session closed for user core Aug 13 00:35:40.622363 systemd[1]: sshd@6-172.237.133.249:22-147.75.109.163:35626.service: Deactivated successfully. Aug 13 00:35:40.626034 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 00:35:40.627399 systemd-logind[1513]: Session 7 logged out. Waiting for processes to exit. Aug 13 00:35:40.630797 systemd-logind[1513]: Removed session 7. Aug 13 00:35:40.683587 systemd[1]: Started sshd@7-172.237.133.249:22-147.75.109.163:35632.service - OpenSSH per-connection server daemon (147.75.109.163:35632). 
Aug 13 00:35:41.044939 sshd[1795]: Accepted publickey for core from 147.75.109.163 port 35632 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:35:41.047343 sshd-session[1795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:35:41.055202 systemd-logind[1513]: New session 8 of user core. Aug 13 00:35:41.060737 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 13 00:35:41.250736 sudo[1799]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 00:35:41.251129 sudo[1799]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:35:41.257562 sudo[1799]: pam_unix(sudo:session): session closed for user root Aug 13 00:35:41.266284 sudo[1798]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Aug 13 00:35:41.266708 sudo[1798]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:35:41.280862 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 13 00:35:41.341603 augenrules[1821]: No rules Aug 13 00:35:41.343343 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 00:35:41.343753 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 13 00:35:41.344945 sudo[1798]: pam_unix(sudo:session): session closed for user root Aug 13 00:35:41.396805 sshd[1797]: Connection closed by 147.75.109.163 port 35632 Aug 13 00:35:41.397697 sshd-session[1795]: pam_unix(sshd:session): session closed for user core Aug 13 00:35:41.403046 systemd[1]: sshd@7-172.237.133.249:22-147.75.109.163:35632.service: Deactivated successfully. Aug 13 00:35:41.405674 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 00:35:41.407133 systemd-logind[1513]: Session 8 logged out. Waiting for processes to exit. Aug 13 00:35:41.408718 systemd-logind[1513]: Removed session 8. 
Aug 13 00:35:41.473252 systemd[1]: Started sshd@8-172.237.133.249:22-147.75.109.163:35646.service - OpenSSH per-connection server daemon (147.75.109.163:35646). Aug 13 00:35:41.834492 sshd[1830]: Accepted publickey for core from 147.75.109.163 port 35646 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:35:41.836806 sshd-session[1830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:35:41.855765 systemd-logind[1513]: New session 9 of user core. Aug 13 00:35:41.868712 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 13 00:35:42.037708 sudo[1833]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 00:35:42.038115 sudo[1833]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:35:43.154303 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 13 00:35:43.174066 (dockerd)[1851]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 13 00:35:44.076442 dockerd[1851]: time="2025-08-13T00:35:44.076296081Z" level=info msg="Starting up" Aug 13 00:35:44.079166 dockerd[1851]: time="2025-08-13T00:35:44.079098400Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Aug 13 00:35:44.296344 dockerd[1851]: time="2025-08-13T00:35:44.295855231Z" level=info msg="Loading containers: start." Aug 13 00:35:44.312590 kernel: Initializing XFRM netlink socket Aug 13 00:35:44.636687 systemd-networkd[1464]: docker0: Link UP Aug 13 00:35:44.641116 dockerd[1851]: time="2025-08-13T00:35:44.641058769Z" level=info msg="Loading containers: done." 
Aug 13 00:35:44.663466 dockerd[1851]: time="2025-08-13T00:35:44.662575188Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 00:35:44.663466 dockerd[1851]: time="2025-08-13T00:35:44.662858448Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Aug 13 00:35:44.663466 dockerd[1851]: time="2025-08-13T00:35:44.663000258Z" level=info msg="Initializing buildkit" Aug 13 00:35:44.664163 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1889831853-merged.mount: Deactivated successfully. Aug 13 00:35:44.692527 dockerd[1851]: time="2025-08-13T00:35:44.692445533Z" level=info msg="Completed buildkit initialization" Aug 13 00:35:44.696473 dockerd[1851]: time="2025-08-13T00:35:44.696439981Z" level=info msg="Daemon has completed initialization" Aug 13 00:35:44.696751 dockerd[1851]: time="2025-08-13T00:35:44.696670671Z" level=info msg="API listen on /run/docker.sock" Aug 13 00:35:44.696849 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 13 00:35:45.790566 containerd[1545]: time="2025-08-13T00:35:45.789845314Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\"" Aug 13 00:35:46.707179 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3645599020.mount: Deactivated successfully. 
Aug 13 00:35:48.393917 containerd[1545]: time="2025-08-13T00:35:48.393750262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:35:48.396808 containerd[1545]: time="2025-08-13T00:35:48.395759261Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.7: active requests=0, bytes read=28799994" Aug 13 00:35:48.397056 containerd[1545]: time="2025-08-13T00:35:48.397016730Z" level=info msg="ImageCreate event name:\"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:35:48.399883 containerd[1545]: time="2025-08-13T00:35:48.399648499Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:35:48.401581 containerd[1545]: time="2025-08-13T00:35:48.400858088Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.7\" with image id \"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\", size \"28796794\" in 2.610664074s" Aug 13 00:35:48.401581 containerd[1545]: time="2025-08-13T00:35:48.400934138Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\" returns image reference \"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\"" Aug 13 00:35:48.402235 containerd[1545]: time="2025-08-13T00:35:48.402212958Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\"" Aug 13 00:35:49.915480 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Aug 13 00:35:49.925991 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:35:50.564720 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:35:50.579926 (kubelet)[2116]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:35:50.839410 kubelet[2116]: E0813 00:35:50.839161 2116 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:35:50.847683 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:35:50.848248 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:35:50.849807 systemd[1]: kubelet.service: Consumed 814ms CPU time, 110.4M memory peak. 
Aug 13 00:35:50.890563 containerd[1545]: time="2025-08-13T00:35:50.889472394Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:35:50.890563 containerd[1545]: time="2025-08-13T00:35:50.889948063Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.7: active requests=0, bytes read=24783636" Aug 13 00:35:50.892136 containerd[1545]: time="2025-08-13T00:35:50.892100062Z" level=info msg="ImageCreate event name:\"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:35:50.895418 containerd[1545]: time="2025-08-13T00:35:50.895374411Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:35:50.896663 containerd[1545]: time="2025-08-13T00:35:50.896631510Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.7\" with image id \"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\", size \"26385470\" in 2.494247052s" Aug 13 00:35:50.897081 containerd[1545]: time="2025-08-13T00:35:50.897049670Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\" returns image reference \"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\"" Aug 13 00:35:50.898998 containerd[1545]: time="2025-08-13T00:35:50.898950339Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\""
Aug 13 00:35:52.683157 containerd[1545]: time="2025-08-13T00:35:52.681751927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:35:52.683157 containerd[1545]: time="2025-08-13T00:35:52.682970167Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.7: active requests=0, bytes read=19176921" Aug 13 00:35:52.683157 containerd[1545]: time="2025-08-13T00:35:52.683100297Z" level=info msg="ImageCreate event name:\"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:35:52.685925 containerd[1545]: time="2025-08-13T00:35:52.685895005Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:35:52.687027 containerd[1545]: time="2025-08-13T00:35:52.686994615Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.7\" with image id \"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\", size \"20778773\" in 1.787991426s" Aug 13 00:35:52.687129 containerd[1545]: time="2025-08-13T00:35:52.687112435Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\" returns image reference \"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\"" Aug 13 00:35:52.689046 containerd[1545]: time="2025-08-13T00:35:52.689020654Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\"" Aug 13 00:35:54.351997 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount795921637.mount: Deactivated successfully. Aug 13 00:35:56.043198 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Aug 13 00:35:56.080024 containerd[1545]: time="2025-08-13T00:35:56.079968083Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:35:56.081343 containerd[1545]: time="2025-08-13T00:35:56.080640622Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.7: active requests=0, bytes read=30895380" Aug 13 00:35:56.082542 containerd[1545]: time="2025-08-13T00:35:56.081624375Z" level=info msg="ImageCreate event name:\"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:35:56.083425 containerd[1545]: time="2025-08-13T00:35:56.083389819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:35:56.084086 containerd[1545]: time="2025-08-13T00:35:56.084045919Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.7\" with image id \"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\", repo tag \"registry.k8s.io/kube-proxy:v1.32.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\", size \"30894399\" in 3.394995595s" Aug 13 00:35:56.084236 containerd[1545]: time="2025-08-13T00:35:56.084215841Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\" returns image reference \"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\"" Aug 13 00:35:56.085071 containerd[1545]: time="2025-08-13T00:35:56.085025492Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 00:35:56.799183 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1910149237.mount: Deactivated successfully. 
Aug 13 00:35:58.107845 containerd[1545]: time="2025-08-13T00:35:58.107775950Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:35:58.110607 containerd[1545]: time="2025-08-13T00:35:58.110551022Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Aug 13 00:35:58.112314 containerd[1545]: time="2025-08-13T00:35:58.112284553Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:35:58.117567 containerd[1545]: time="2025-08-13T00:35:58.117154312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:35:58.118230 containerd[1545]: time="2025-08-13T00:35:58.118045613Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.032985851s" Aug 13 00:35:58.118230 containerd[1545]: time="2025-08-13T00:35:58.118083744Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 00:35:58.118583 containerd[1545]: time="2025-08-13T00:35:58.118554758Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 00:35:58.820440 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount362036469.mount: Deactivated successfully. 
Aug 13 00:35:58.825166 containerd[1545]: time="2025-08-13T00:35:58.825117887Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:35:58.825860 containerd[1545]: time="2025-08-13T00:35:58.825825396Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Aug 13 00:35:58.827476 containerd[1545]: time="2025-08-13T00:35:58.826543414Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:35:58.827956 containerd[1545]: time="2025-08-13T00:35:58.827924981Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:35:58.828687 containerd[1545]: time="2025-08-13T00:35:58.828663509Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 710.07871ms" Aug 13 00:35:58.828758 containerd[1545]: time="2025-08-13T00:35:58.828742110Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 00:35:58.829388 containerd[1545]: time="2025-08-13T00:35:58.829357618Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Aug 13 00:35:59.582488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount283736800.mount: Deactivated successfully.
Aug 13 00:36:00.887582 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Aug 13 00:36:00.894710 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:36:01.203690 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:36:01.213063 (kubelet)[2255]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:36:01.314770 kubelet[2255]: E0813 00:36:01.314692 2255 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:36:01.318665 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:36:01.318875 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:36:01.319399 systemd[1]: kubelet.service: Consumed 326ms CPU time, 110M memory peak.
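[Annotation] The kubelet restart failures logged above (restart counters 2 and 3) all exit for the same reason: /var/lib/kubelet/config.yaml does not exist yet. That file is normally generated by `kubeadm init` or `kubeadm join`, so repeated exit-code-1 restarts before kubeadm runs are expected. As a rough sketch (not taken from this host), the file kubeadm writes is a KubeletConfiguration of this shape; the field values below are illustrative defaults, not values recovered from this log:

```yaml
# Illustrative sketch of /var/lib/kubelet/config.yaml as written by kubeadm.
# Values are assumptions for this example, not read from the log above.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd              # consistent with the cgroup driver logged later
staticPodPath: /etc/kubernetes/manifests
```

Once kubeadm writes this file, the restart loop in the log stops and kubelet proceeds to client-certificate bootstrap.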
Aug 13 00:36:01.752174 containerd[1545]: time="2025-08-13T00:36:01.752119406Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:36:01.753797 containerd[1545]: time="2025-08-13T00:36:01.753455648Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" Aug 13 00:36:01.754312 containerd[1545]: time="2025-08-13T00:36:01.754284117Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:36:01.757654 containerd[1545]: time="2025-08-13T00:36:01.757614609Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:36:01.758551 containerd[1545]: time="2025-08-13T00:36:01.758114154Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.928638615s" Aug 13 00:36:01.758551 containerd[1545]: time="2025-08-13T00:36:01.758142165Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Aug 13 00:36:04.473322 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:36:04.473498 systemd[1]: kubelet.service: Consumed 326ms CPU time, 110M memory peak. Aug 13 00:36:04.477088 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:36:04.524132 systemd[1]: Reload requested from client PID 2289 ('systemctl') (unit session-9.scope)... 
Aug 13 00:36:04.524398 systemd[1]: Reloading... Aug 13 00:36:04.731504 zram_generator::config[2333]: No configuration found. Aug 13 00:36:04.868420 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:36:04.997938 systemd[1]: Reloading finished in 472 ms. Aug 13 00:36:05.076301 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 13 00:36:05.076416 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 13 00:36:05.076740 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:36:05.076815 systemd[1]: kubelet.service: Consumed 237ms CPU time, 98.3M memory peak. Aug 13 00:36:05.078897 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:36:05.408223 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:36:05.420061 (kubelet)[2387]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:36:05.495556 kubelet[2387]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:36:05.495556 kubelet[2387]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 00:36:05.495556 kubelet[2387]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
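[Annotation] The `ListenStream= references a path below legacy directory /var/run/` warning during the reload above is systemd asking for the unit file to be updated; systemd rewrites the path at runtime, so this is cosmetic. A sketch of the corrected socket section (only the `ListenStream=` line is what the warning concerns; the remaining directives are the usual upstream docker.socket settings, shown here for context):

```ini
# Sketch of a docker.socket [Socket] section using /run instead of /var/run.
[Socket]
ListenStream=/run/docker.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker
```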
Aug 13 00:36:05.495556 kubelet[2387]: I0813 00:36:05.494785 2387 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:36:05.700683 kubelet[2387]: I0813 00:36:05.700559 2387 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 13 00:36:05.700683 kubelet[2387]: I0813 00:36:05.700606 2387 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:36:05.701291 kubelet[2387]: I0813 00:36:05.701035 2387 server.go:954] "Client rotation is on, will bootstrap in background" Aug 13 00:36:05.757080 kubelet[2387]: E0813 00:36:05.756549 2387 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.237.133.249:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.237.133.249:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:36:05.757080 kubelet[2387]: I0813 00:36:05.756709 2387 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:36:05.769619 kubelet[2387]: I0813 00:36:05.769597 2387 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 13 00:36:05.778254 kubelet[2387]: I0813 00:36:05.778221 2387 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 13 00:36:05.778715 kubelet[2387]: I0813 00:36:05.778677 2387 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:36:05.779155 kubelet[2387]: I0813 00:36:05.778707 2387 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-237-133-249","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 00:36:05.779433 kubelet[2387]: I0813 00:36:05.779231 2387 topology_manager.go:138] "Creating topology manager with none policy"
Aug 13 00:36:05.779433 kubelet[2387]: I0813 00:36:05.779243 2387 container_manager_linux.go:304] "Creating device plugin manager" Aug 13 00:36:05.779644 kubelet[2387]: I0813 00:36:05.779621 2387 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:36:05.796901 kubelet[2387]: I0813 00:36:05.796836 2387 kubelet.go:446] "Attempting to sync node with API server" Aug 13 00:36:05.797058 kubelet[2387]: I0813 00:36:05.796924 2387 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:36:05.797058 kubelet[2387]: I0813 00:36:05.796981 2387 kubelet.go:352] "Adding apiserver pod source" Aug 13 00:36:05.797058 kubelet[2387]: I0813 00:36:05.797040 2387 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:36:05.801857 kubelet[2387]: W0813 00:36:05.801685 2387 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.237.133.249:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-237-133-249&limit=500&resourceVersion=0": dial tcp 172.237.133.249:6443: connect: connection refused Aug 13 00:36:05.801857 kubelet[2387]: E0813 00:36:05.801800 2387 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.237.133.249:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-237-133-249&limit=500&resourceVersion=0\": dial tcp 172.237.133.249:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:36:05.802181 kubelet[2387]: I0813 00:36:05.802162 2387 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Aug 13 00:36:05.802860 kubelet[2387]: I0813 00:36:05.802841 2387 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:36:05.803099 kubelet[2387]: W0813 00:36:05.803083 2387 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Aug 13 00:36:05.803916 kubelet[2387]: W0813 00:36:05.803881 2387 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.237.133.249:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.237.133.249:6443: connect: connection refused Aug 13 00:36:05.804030 kubelet[2387]: E0813 00:36:05.804012 2387 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.237.133.249:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.237.133.249:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:36:05.807895 kubelet[2387]: I0813 00:36:05.807326 2387 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 00:36:05.807895 kubelet[2387]: I0813 00:36:05.807403 2387 server.go:1287] "Started kubelet" Aug 13 00:36:05.811157 kubelet[2387]: I0813 00:36:05.811089 2387 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:36:05.812527 kubelet[2387]: I0813 00:36:05.812462 2387 server.go:479] "Adding debug handlers to kubelet server" Aug 13 00:36:05.815610 kubelet[2387]: I0813 00:36:05.814629 2387 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:36:05.815610 kubelet[2387]: I0813 00:36:05.815069 2387 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:36:05.817594 kubelet[2387]: I0813 00:36:05.817464 2387 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:36:05.817781 kubelet[2387]: E0813 00:36:05.815365 2387 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.237.133.249:6443/api/v1/namespaces/default/events\": dial tcp 172.237.133.249:6443: connect: connection refused" event="&Event{ObjectMeta:{172-237-133-249.185b2c7629a4afce default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-237-133-249,UID:172-237-133-249,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-237-133-249,},FirstTimestamp:2025-08-13 00:36:05.807361998 +0000 UTC m=+0.378533772,LastTimestamp:2025-08-13 00:36:05.807361998 +0000 UTC m=+0.378533772,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-237-133-249,}"
Aug 13 00:36:05.819593 kubelet[2387]: I0813 00:36:05.819573 2387 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:36:05.831300 kubelet[2387]: I0813 00:36:05.831277 2387 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 00:36:05.832606 kubelet[2387]: E0813 00:36:05.832537 2387 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-237-133-249\" not found" Aug 13 00:36:05.834143 kubelet[2387]: I0813 00:36:05.834105 2387 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:36:05.835587 kubelet[2387]: I0813 00:36:05.835152 2387 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:36:05.835686 kubelet[2387]: I0813 00:36:05.834457 2387 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:36:05.836590 kubelet[2387]: I0813 00:36:05.834373 2387 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 00:36:05.836590 kubelet[2387]: W0813 00:36:05.835989 2387 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.237.133.249:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.237.133.249:6443: connect: connection refused
Aug 13 00:36:05.836590 kubelet[2387]: E0813 00:36:05.836221 2387 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.237.133.249:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.237.133.249:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:36:05.836590 kubelet[2387]: E0813 00:36:05.836250 2387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.237.133.249:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-237-133-249?timeout=10s\": dial tcp 172.237.133.249:6443: connect: connection refused" interval="200ms" Aug 13 00:36:05.837791 kubelet[2387]: E0813 00:36:05.837757 2387 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 13 00:36:05.838654 kubelet[2387]: I0813 00:36:05.838621 2387 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:36:05.874838 kubelet[2387]: I0813 00:36:05.874811 2387 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 00:36:05.874980 kubelet[2387]: I0813 00:36:05.874968 2387 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 00:36:05.875165 kubelet[2387]: I0813 00:36:05.875145 2387 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:36:05.879819 kubelet[2387]: I0813 00:36:05.879247 2387 policy_none.go:49] "None policy: Start" Aug 13 00:36:05.879819 kubelet[2387]: I0813 00:36:05.879344 2387 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 00:36:05.879819 kubelet[2387]: I0813 00:36:05.879417 2387 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:36:05.891737 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 13 00:36:05.901042 kubelet[2387]: I0813 00:36:05.900983 2387 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:36:05.905027 kubelet[2387]: I0813 00:36:05.905002 2387 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 00:36:05.905808 kubelet[2387]: I0813 00:36:05.905749 2387 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 13 00:36:05.905898 kubelet[2387]: I0813 00:36:05.905884 2387 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Aug 13 00:36:05.906213 kubelet[2387]: I0813 00:36:05.906197 2387 kubelet.go:2382] "Starting kubelet main sync loop" Aug 13 00:36:05.906413 kubelet[2387]: E0813 00:36:05.906384 2387 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:36:05.908073 kubelet[2387]: W0813 00:36:05.908019 2387 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.237.133.249:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.237.133.249:6443: connect: connection refused Aug 13 00:36:05.908318 kubelet[2387]: E0813 00:36:05.908276 2387 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.237.133.249:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.237.133.249:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:36:05.908587 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 13 00:36:05.915744 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Aug 13 00:36:05.922818 kubelet[2387]: I0813 00:36:05.922772 2387 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:36:05.923507 kubelet[2387]: I0813 00:36:05.923133 2387 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:36:05.923507 kubelet[2387]: I0813 00:36:05.923155 2387 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:36:05.923507 kubelet[2387]: I0813 00:36:05.923506 2387 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:36:05.926738 kubelet[2387]: E0813 00:36:05.926698 2387 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 13 00:36:05.926813 kubelet[2387]: E0813 00:36:05.926783 2387 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-237-133-249\" not found" Aug 13 00:36:06.025661 kubelet[2387]: I0813 00:36:06.025493 2387 kubelet_node_status.go:75] "Attempting to register node" node="172-237-133-249" Aug 13 00:36:06.026900 kubelet[2387]: E0813 00:36:06.025971 2387 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.237.133.249:6443/api/v1/nodes\": dial tcp 172.237.133.249:6443: connect: connection refused" node="172-237-133-249" Aug 13 00:36:06.026015 systemd[1]: Created slice kubepods-burstable-podc7846664812ab68f292b21f4dfb64951.slice - libcontainer container kubepods-burstable-podc7846664812ab68f292b21f4dfb64951.slice. 
Aug 13 00:36:06.037164 kubelet[2387]: E0813 00:36:06.037105 2387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.237.133.249:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-237-133-249?timeout=10s\": dial tcp 172.237.133.249:6443: connect: connection refused" interval="400ms" Aug 13 00:36:06.039168 kubelet[2387]: E0813 00:36:06.038848 2387 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-133-249\" not found" node="172-237-133-249" Aug 13 00:36:06.044470 systemd[1]: Created slice kubepods-burstable-podb396d8b630101200fac26c6c5d8b6e2c.slice - libcontainer container kubepods-burstable-podb396d8b630101200fac26c6c5d8b6e2c.slice. Aug 13 00:36:06.047255 kubelet[2387]: E0813 00:36:06.047218 2387 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-133-249\" not found" node="172-237-133-249" Aug 13 00:36:06.051108 systemd[1]: Created slice kubepods-burstable-pod262169405304b3a4dcf6b2dd26622368.slice - libcontainer container kubepods-burstable-pod262169405304b3a4dcf6b2dd26622368.slice. 
Aug 13 00:36:06.053669 kubelet[2387]: E0813 00:36:06.053631 2387 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-133-249\" not found" node="172-237-133-249"
Aug 13 00:36:06.138392 kubelet[2387]: I0813 00:36:06.138112 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b396d8b630101200fac26c6c5d8b6e2c-kubeconfig\") pod \"kube-controller-manager-172-237-133-249\" (UID: \"b396d8b630101200fac26c6c5d8b6e2c\") " pod="kube-system/kube-controller-manager-172-237-133-249"
Aug 13 00:36:06.138392 kubelet[2387]: I0813 00:36:06.138165 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c7846664812ab68f292b21f4dfb64951-usr-share-ca-certificates\") pod \"kube-apiserver-172-237-133-249\" (UID: \"c7846664812ab68f292b21f4dfb64951\") " pod="kube-system/kube-apiserver-172-237-133-249"
Aug 13 00:36:06.138392 kubelet[2387]: I0813 00:36:06.138200 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b396d8b630101200fac26c6c5d8b6e2c-ca-certs\") pod \"kube-controller-manager-172-237-133-249\" (UID: \"b396d8b630101200fac26c6c5d8b6e2c\") " pod="kube-system/kube-controller-manager-172-237-133-249"
Aug 13 00:36:06.138392 kubelet[2387]: I0813 00:36:06.138219 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b396d8b630101200fac26c6c5d8b6e2c-k8s-certs\") pod \"kube-controller-manager-172-237-133-249\" (UID: \"b396d8b630101200fac26c6c5d8b6e2c\") " pod="kube-system/kube-controller-manager-172-237-133-249"
Aug 13 00:36:06.138392 kubelet[2387]: I0813 00:36:06.138281 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b396d8b630101200fac26c6c5d8b6e2c-usr-share-ca-certificates\") pod \"kube-controller-manager-172-237-133-249\" (UID: \"b396d8b630101200fac26c6c5d8b6e2c\") " pod="kube-system/kube-controller-manager-172-237-133-249"
Aug 13 00:36:06.138910 kubelet[2387]: I0813 00:36:06.138304 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/262169405304b3a4dcf6b2dd26622368-kubeconfig\") pod \"kube-scheduler-172-237-133-249\" (UID: \"262169405304b3a4dcf6b2dd26622368\") " pod="kube-system/kube-scheduler-172-237-133-249"
Aug 13 00:36:06.138910 kubelet[2387]: I0813 00:36:06.138320 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c7846664812ab68f292b21f4dfb64951-ca-certs\") pod \"kube-apiserver-172-237-133-249\" (UID: \"c7846664812ab68f292b21f4dfb64951\") " pod="kube-system/kube-apiserver-172-237-133-249"
Aug 13 00:36:06.138910 kubelet[2387]: I0813 00:36:06.138336 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c7846664812ab68f292b21f4dfb64951-k8s-certs\") pod \"kube-apiserver-172-237-133-249\" (UID: \"c7846664812ab68f292b21f4dfb64951\") " pod="kube-system/kube-apiserver-172-237-133-249"
Aug 13 00:36:06.138910 kubelet[2387]: I0813 00:36:06.138350 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b396d8b630101200fac26c6c5d8b6e2c-flexvolume-dir\") pod \"kube-controller-manager-172-237-133-249\" (UID: \"b396d8b630101200fac26c6c5d8b6e2c\") " pod="kube-system/kube-controller-manager-172-237-133-249"
Aug 13 00:36:06.228741 kubelet[2387]: I0813 00:36:06.228707 2387 kubelet_node_status.go:75] "Attempting to register node" node="172-237-133-249"
Aug 13 00:36:06.229183 kubelet[2387]: E0813 00:36:06.229153 2387 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.237.133.249:6443/api/v1/nodes\": dial tcp 172.237.133.249:6443: connect: connection refused" node="172-237-133-249"
Aug 13 00:36:06.340172 kubelet[2387]: E0813 00:36:06.340037 2387 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:36:06.341153 containerd[1545]: time="2025-08-13T00:36:06.341037999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-237-133-249,Uid:c7846664812ab68f292b21f4dfb64951,Namespace:kube-system,Attempt:0,}"
Aug 13 00:36:06.348232 kubelet[2387]: E0813 00:36:06.348210 2387 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:36:06.348813 containerd[1545]: time="2025-08-13T00:36:06.348700952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-237-133-249,Uid:b396d8b630101200fac26c6c5d8b6e2c,Namespace:kube-system,Attempt:0,}"
Aug 13 00:36:06.355296 kubelet[2387]: E0813 00:36:06.355142 2387 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:36:06.355688 containerd[1545]: time="2025-08-13T00:36:06.355656661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-237-133-249,Uid:262169405304b3a4dcf6b2dd26622368,Namespace:kube-system,Attempt:0,}"
Aug 13 00:36:06.422861 containerd[1545]: time="2025-08-13T00:36:06.416975647Z" level=info msg="connecting to shim b68c2bb981b651b669884c66c782f2b282c0b9ffcdda61ca61e4ee37de5ccaec" address="unix:///run/containerd/s/29923521fe66fe8ddf5813bfc89965609b56945d24abf68b3bb08e1dbf04a695" namespace=k8s.io protocol=ttrpc version=3
Aug 13 00:36:06.437775 containerd[1545]: time="2025-08-13T00:36:06.437731972Z" level=info msg="connecting to shim 15a0aa8046751ebacf56086a507d05b37155b0ace35b8b85de0b5f8ab3893306" address="unix:///run/containerd/s/724976a00f8c0f80ee99f3140bc8467fc9fc0ce576f48108c85a38dee520f0ff" namespace=k8s.io protocol=ttrpc version=3
Aug 13 00:36:06.438167 kubelet[2387]: E0813 00:36:06.438121 2387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.237.133.249:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-237-133-249?timeout=10s\": dial tcp 172.237.133.249:6443: connect: connection refused" interval="800ms"
Aug 13 00:36:06.501763 containerd[1545]: time="2025-08-13T00:36:06.459912466Z" level=info msg="connecting to shim 0289e3451d175567104d519fae9f8a26e9414710c9aa99f125517813bb612a57" address="unix:///run/containerd/s/b1a74311e9aafb938c0daf375d01ac3ac94b1c7804843cbce3573f51ad90b5d5" namespace=k8s.io protocol=ttrpc version=3
Aug 13 00:36:06.672293 kubelet[2387]: I0813 00:36:06.672248 2387 kubelet_node_status.go:75] "Attempting to register node" node="172-237-133-249"
Aug 13 00:36:06.673810 kubelet[2387]: E0813 00:36:06.673769 2387 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.237.133.249:6443/api/v1/nodes\": dial tcp 172.237.133.249:6443: connect: connection refused" node="172-237-133-249"
Aug 13 00:36:06.675422 systemd[1]: Started cri-containerd-0289e3451d175567104d519fae9f8a26e9414710c9aa99f125517813bb612a57.scope - libcontainer container 0289e3451d175567104d519fae9f8a26e9414710c9aa99f125517813bb612a57.
Aug 13 00:36:06.723651 kubelet[2387]: W0813 00:36:06.723580 2387 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.237.133.249:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.237.133.249:6443: connect: connection refused
Aug 13 00:36:06.723651 kubelet[2387]: E0813 00:36:06.723652 2387 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.237.133.249:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.237.133.249:6443: connect: connection refused" logger="UnhandledError"
Aug 13 00:36:06.726892 systemd[1]: Started cri-containerd-b68c2bb981b651b669884c66c782f2b282c0b9ffcdda61ca61e4ee37de5ccaec.scope - libcontainer container b68c2bb981b651b669884c66c782f2b282c0b9ffcdda61ca61e4ee37de5ccaec.
Aug 13 00:36:06.776340 systemd[1]: Started cri-containerd-15a0aa8046751ebacf56086a507d05b37155b0ace35b8b85de0b5f8ab3893306.scope - libcontainer container 15a0aa8046751ebacf56086a507d05b37155b0ace35b8b85de0b5f8ab3893306.
Aug 13 00:36:06.952700 containerd[1545]: time="2025-08-13T00:36:06.945298362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-237-133-249,Uid:c7846664812ab68f292b21f4dfb64951,Namespace:kube-system,Attempt:0,} returns sandbox id \"0289e3451d175567104d519fae9f8a26e9414710c9aa99f125517813bb612a57\""
Aug 13 00:36:06.954891 kubelet[2387]: E0813 00:36:06.954830 2387 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:36:06.960652 containerd[1545]: time="2025-08-13T00:36:06.960146286Z" level=info msg="CreateContainer within sandbox \"0289e3451d175567104d519fae9f8a26e9414710c9aa99f125517813bb612a57\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Aug 13 00:36:07.008009 containerd[1545]: time="2025-08-13T00:36:07.007671423Z" level=info msg="Container a6c27462bcdb1832f1f105abf2810d63b04e3d6ac29dcdb5f6cb36d7290ede20: CDI devices from CRI Config.CDIDevices: []"
Aug 13 00:36:07.050476 containerd[1545]: time="2025-08-13T00:36:07.050401730Z" level=info msg="CreateContainer within sandbox \"0289e3451d175567104d519fae9f8a26e9414710c9aa99f125517813bb612a57\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a6c27462bcdb1832f1f105abf2810d63b04e3d6ac29dcdb5f6cb36d7290ede20\""
Aug 13 00:36:07.053478 containerd[1545]: time="2025-08-13T00:36:07.053224439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-237-133-249,Uid:b396d8b630101200fac26c6c5d8b6e2c,Namespace:kube-system,Attempt:0,} returns sandbox id \"b68c2bb981b651b669884c66c782f2b282c0b9ffcdda61ca61e4ee37de5ccaec\""
Aug 13 00:36:07.054913 containerd[1545]: time="2025-08-13T00:36:07.054726488Z" level=info msg="StartContainer for \"a6c27462bcdb1832f1f105abf2810d63b04e3d6ac29dcdb5f6cb36d7290ede20\""
Aug 13 00:36:07.057856 containerd[1545]: time="2025-08-13T00:36:07.057784389Z" level=info msg="connecting to shim a6c27462bcdb1832f1f105abf2810d63b04e3d6ac29dcdb5f6cb36d7290ede20" address="unix:///run/containerd/s/b1a74311e9aafb938c0daf375d01ac3ac94b1c7804843cbce3573f51ad90b5d5" protocol=ttrpc version=3
Aug 13 00:36:07.060226 containerd[1545]: time="2025-08-13T00:36:07.059760652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-237-133-249,Uid:262169405304b3a4dcf6b2dd26622368,Namespace:kube-system,Attempt:0,} returns sandbox id \"15a0aa8046751ebacf56086a507d05b37155b0ace35b8b85de0b5f8ab3893306\""
Aug 13 00:36:07.060675 kubelet[2387]: E0813 00:36:07.060443 2387 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:36:07.062422 kubelet[2387]: E0813 00:36:07.061922 2387 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:36:07.066777 containerd[1545]: time="2025-08-13T00:36:07.066712866Z" level=info msg="CreateContainer within sandbox \"15a0aa8046751ebacf56086a507d05b37155b0ace35b8b85de0b5f8ab3893306\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Aug 13 00:36:07.068967 containerd[1545]: time="2025-08-13T00:36:07.068899600Z" level=info msg="CreateContainer within sandbox \"b68c2bb981b651b669884c66c782f2b282c0b9ffcdda61ca61e4ee37de5ccaec\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Aug 13 00:36:07.099263 kubelet[2387]: W0813 00:36:07.099168 2387 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.237.133.249:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.237.133.249:6443: connect: connection refused
Aug 13 00:36:07.099488 kubelet[2387]: E0813 00:36:07.099279 2387 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.237.133.249:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.237.133.249:6443: connect: connection refused" logger="UnhandledError"
Aug 13 00:36:07.099602 systemd[1]: Started cri-containerd-a6c27462bcdb1832f1f105abf2810d63b04e3d6ac29dcdb5f6cb36d7290ede20.scope - libcontainer container a6c27462bcdb1832f1f105abf2810d63b04e3d6ac29dcdb5f6cb36d7290ede20.
Aug 13 00:36:07.115911 containerd[1545]: time="2025-08-13T00:36:07.115782395Z" level=info msg="Container 02f5446b2435f202f23d75735dba1b61e0e440835c076ee9f6b3963ef0e31d68: CDI devices from CRI Config.CDIDevices: []"
Aug 13 00:36:07.139565 containerd[1545]: time="2025-08-13T00:36:07.139309027Z" level=info msg="CreateContainer within sandbox \"15a0aa8046751ebacf56086a507d05b37155b0ace35b8b85de0b5f8ab3893306\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"02f5446b2435f202f23d75735dba1b61e0e440835c076ee9f6b3963ef0e31d68\""
Aug 13 00:36:07.141733 containerd[1545]: time="2025-08-13T00:36:07.140447554Z" level=info msg="StartContainer for \"02f5446b2435f202f23d75735dba1b61e0e440835c076ee9f6b3963ef0e31d68\""
Aug 13 00:36:07.142212 containerd[1545]: time="2025-08-13T00:36:07.142170195Z" level=info msg="Container 3f7145224f0be30a5fac00947d0cf9f4b96efe3dc827c625867457cffec89283: CDI devices from CRI Config.CDIDevices: []"
Aug 13 00:36:07.146223 containerd[1545]: time="2025-08-13T00:36:07.146173781Z" level=info msg="connecting to shim 02f5446b2435f202f23d75735dba1b61e0e440835c076ee9f6b3963ef0e31d68" address="unix:///run/containerd/s/724976a00f8c0f80ee99f3140bc8467fc9fc0ce576f48108c85a38dee520f0ff" protocol=ttrpc version=3
Aug 13 00:36:07.166415 containerd[1545]: time="2025-08-13T00:36:07.166286252Z" level=info msg="CreateContainer within sandbox \"b68c2bb981b651b669884c66c782f2b282c0b9ffcdda61ca61e4ee37de5ccaec\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3f7145224f0be30a5fac00947d0cf9f4b96efe3dc827c625867457cffec89283\""
Aug 13 00:36:07.167809 containerd[1545]: time="2025-08-13T00:36:07.167774332Z" level=info msg="StartContainer for \"3f7145224f0be30a5fac00947d0cf9f4b96efe3dc827c625867457cffec89283\""
Aug 13 00:36:07.170545 containerd[1545]: time="2025-08-13T00:36:07.170475619Z" level=info msg="connecting to shim 3f7145224f0be30a5fac00947d0cf9f4b96efe3dc827c625867457cffec89283" address="unix:///run/containerd/s/29923521fe66fe8ddf5813bfc89965609b56945d24abf68b3bb08e1dbf04a695" protocol=ttrpc version=3
Aug 13 00:36:07.211056 systemd[1]: Started cri-containerd-02f5446b2435f202f23d75735dba1b61e0e440835c076ee9f6b3963ef0e31d68.scope - libcontainer container 02f5446b2435f202f23d75735dba1b61e0e440835c076ee9f6b3963ef0e31d68.
Aug 13 00:36:07.239579 kubelet[2387]: E0813 00:36:07.239460 2387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.237.133.249:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-237-133-249?timeout=10s\": dial tcp 172.237.133.249:6443: connect: connection refused" interval="1.6s"
Aug 13 00:36:07.278620 systemd[1]: Started cri-containerd-3f7145224f0be30a5fac00947d0cf9f4b96efe3dc827c625867457cffec89283.scope - libcontainer container 3f7145224f0be30a5fac00947d0cf9f4b96efe3dc827c625867457cffec89283.
Aug 13 00:36:07.502396 kubelet[2387]: W0813 00:36:07.285210 2387 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.237.133.249:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.237.133.249:6443: connect: connection refused
Aug 13 00:36:07.502396 kubelet[2387]: E0813 00:36:07.286721 2387 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.237.133.249:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.237.133.249:6443: connect: connection refused" logger="UnhandledError"
Aug 13 00:36:07.508369 update_engine[1514]: I20250813 00:36:07.508005  1514 update_attempter.cc:509] Updating boot flags...
Aug 13 00:36:07.523562 kubelet[2387]: I0813 00:36:07.521867 2387 kubelet_node_status.go:75] "Attempting to register node" node="172-237-133-249"
Aug 13 00:36:07.523562 kubelet[2387]: E0813 00:36:07.522428 2387 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.237.133.249:6443/api/v1/nodes\": dial tcp 172.237.133.249:6443: connect: connection refused" node="172-237-133-249"
Aug 13 00:36:07.534198 kubelet[2387]: W0813 00:36:07.533967 2387 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.237.133.249:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-237-133-249&limit=500&resourceVersion=0": dial tcp 172.237.133.249:6443: connect: connection refused
Aug 13 00:36:07.535786 kubelet[2387]: E0813 00:36:07.534754 2387 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.237.133.249:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-237-133-249&limit=500&resourceVersion=0\": dial tcp 172.237.133.249:6443: connect: connection refused" logger="UnhandledError"
Aug 13 00:36:07.566995 containerd[1545]: time="2025-08-13T00:36:07.566941293Z" level=info msg="StartContainer for \"a6c27462bcdb1832f1f105abf2810d63b04e3d6ac29dcdb5f6cb36d7290ede20\" returns successfully"
Aug 13 00:36:07.917485 kubelet[2387]: E0813 00:36:07.915160 2387 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.237.133.249:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.237.133.249:6443: connect: connection refused" logger="UnhandledError"
Aug 13 00:36:07.942579 kubelet[2387]: E0813 00:36:07.941279 2387 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-133-249\" not found" node="172-237-133-249"
Aug 13 00:36:07.953301 kubelet[2387]: E0813 00:36:07.953216 2387 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:36:08.568779 containerd[1545]: time="2025-08-13T00:36:08.568641985Z" level=info msg="StartContainer for \"3f7145224f0be30a5fac00947d0cf9f4b96efe3dc827c625867457cffec89283\" returns successfully"
Aug 13 00:36:08.587698 containerd[1545]: time="2025-08-13T00:36:08.586243801Z" level=info msg="StartContainer for \"02f5446b2435f202f23d75735dba1b61e0e440835c076ee9f6b3963ef0e31d68\" returns successfully"
Aug 13 00:36:09.049627 kubelet[2387]: E0813 00:36:09.049457 2387 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-133-249\" not found" node="172-237-133-249"
Aug 13 00:36:09.055305 kubelet[2387]: E0813 00:36:09.054731 2387 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:36:09.076603 kubelet[2387]: E0813 00:36:09.074091 2387 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-133-249\" not found" node="172-237-133-249"
Aug 13 00:36:09.080245 kubelet[2387]: E0813 00:36:09.079446 2387 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:36:09.092052 kubelet[2387]: E0813 00:36:09.091881 2387 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-133-249\" not found" node="172-237-133-249"
Aug 13 00:36:09.094300 kubelet[2387]: E0813 00:36:09.094115 2387 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:36:09.140956 kubelet[2387]: I0813 00:36:09.140910 2387 kubelet_node_status.go:75] "Attempting to register node" node="172-237-133-249"
Aug 13 00:36:10.062918 kubelet[2387]: E0813 00:36:10.062877 2387 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-133-249\" not found" node="172-237-133-249"
Aug 13 00:36:10.063443 kubelet[2387]: E0813 00:36:10.063005 2387 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:36:10.064563 kubelet[2387]: E0813 00:36:10.063978 2387 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-133-249\" not found" node="172-237-133-249"
Aug 13 00:36:10.064563 kubelet[2387]: E0813 00:36:10.064070 2387 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:36:10.064822 kubelet[2387]: E0813 00:36:10.064798 2387 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-133-249\" not found" node="172-237-133-249"
Aug 13 00:36:10.064921 kubelet[2387]: E0813 00:36:10.064899 2387 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:36:11.066575 kubelet[2387]: E0813 00:36:11.066509 2387 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-133-249\" not found" node="172-237-133-249"
Aug 13 00:36:11.070543 kubelet[2387]: E0813 00:36:11.068567 2387 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:36:11.484721 kubelet[2387]: E0813 00:36:11.484667 2387 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-237-133-249\" not found" node="172-237-133-249"
Aug 13 00:36:11.617571 kubelet[2387]: I0813 00:36:11.617475 2387 kubelet_node_status.go:78] "Successfully registered node" node="172-237-133-249"
Aug 13 00:36:11.617571 kubelet[2387]: E0813 00:36:11.617538 2387 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"172-237-133-249\": node \"172-237-133-249\" not found"
Aug 13 00:36:11.711362 kubelet[2387]: I0813 00:36:11.634952 2387 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-237-133-249"
Aug 13 00:36:11.720842 kubelet[2387]: E0813 00:36:11.720258 2387 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-237-133-249\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-237-133-249"
Aug 13 00:36:11.720995 kubelet[2387]: I0813 00:36:11.720853 2387 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-237-133-249"
Aug 13 00:36:11.724709 kubelet[2387]: E0813 00:36:11.724687 2387 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-237-133-249\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-237-133-249"
Aug 13 00:36:11.724781 kubelet[2387]: I0813 00:36:11.724769 2387 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-237-133-249"
Aug 13 00:36:11.726550 kubelet[2387]: E0813 00:36:11.726505 2387 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-237-133-249\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-237-133-249"
Aug 13 00:36:11.910163 kubelet[2387]: I0813 00:36:11.910094 2387 apiserver.go:52] "Watching apiserver"
Aug 13 00:36:11.936945 kubelet[2387]: I0813 00:36:11.936908 2387 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Aug 13 00:36:13.388129 systemd[1]: Reload requested from client PID 2684 ('systemctl') (unit session-9.scope)...
Aug 13 00:36:13.388197 systemd[1]: Reloading...
Aug 13 00:36:13.520564 zram_generator::config[2730]: No configuration found.
Aug 13 00:36:13.637013 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 00:36:13.772355 systemd[1]: Reloading finished in 383 ms.
Aug 13 00:36:13.820841 kubelet[2387]: I0813 00:36:13.820696 2387 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 13 00:36:13.821423 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 00:36:13.834020 systemd[1]: kubelet.service: Deactivated successfully.
Aug 13 00:36:13.834575 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 00:36:13.834663 systemd[1]: kubelet.service: Consumed 1.377s CPU time, 131.1M memory peak.
Aug 13 00:36:13.838317 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 00:36:14.153650 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 00:36:14.162852 (kubelet)[2778]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 13 00:36:14.225540 kubelet[2778]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 00:36:14.225540 kubelet[2778]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Aug 13 00:36:14.225540 kubelet[2778]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 00:36:14.226543 kubelet[2778]: I0813 00:36:14.226064 2778 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 13 00:36:14.236327 kubelet[2778]: I0813 00:36:14.236304 2778 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Aug 13 00:36:14.237551 kubelet[2778]: I0813 00:36:14.237537 2778 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 13 00:36:14.237926 kubelet[2778]: I0813 00:36:14.237909 2778 server.go:954] "Client rotation is on, will bootstrap in background"
Aug 13 00:36:14.240505 kubelet[2778]: I0813 00:36:14.240484 2778 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Aug 13 00:36:14.243756 kubelet[2778]: I0813 00:36:14.243729 2778 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 13 00:36:14.249379 kubelet[2778]: I0813 00:36:14.248181 2778 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Aug 13 00:36:14.253092 kubelet[2778]: I0813 00:36:14.253056 2778 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 13 00:36:14.253376 kubelet[2778]: I0813 00:36:14.253329 2778 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 13 00:36:14.253618 kubelet[2778]: I0813 00:36:14.253362 2778 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-237-133-249","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Aug 13 00:36:14.253767 kubelet[2778]: I0813 00:36:14.253633 2778 topology_manager.go:138] "Creating topology manager with none policy"
Aug 13 00:36:14.253767 kubelet[2778]: I0813 00:36:14.253646 2778 container_manager_linux.go:304] "Creating device plugin manager"
Aug 13 00:36:14.253767 kubelet[2778]: I0813 00:36:14.253707 2778 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 00:36:14.254845 kubelet[2778]: I0813 00:36:14.254014 2778 kubelet.go:446] "Attempting to sync node with API server"
Aug 13 00:36:14.254845 kubelet[2778]: I0813 00:36:14.254066 2778 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 13 00:36:14.254845 kubelet[2778]: I0813 00:36:14.254095 2778 kubelet.go:352] "Adding apiserver pod source"
Aug 13 00:36:14.254845 kubelet[2778]: I0813 00:36:14.254175 2778 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 13 00:36:14.273797 kubelet[2778]: I0813 00:36:14.273753 2778 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Aug 13 00:36:14.274323 kubelet[2778]: I0813 00:36:14.274305 2778 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 13 00:36:14.274901 kubelet[2778]: I0813 00:36:14.274883 2778 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Aug 13 00:36:14.275026 kubelet[2778]: I0813 00:36:14.275014 2778 server.go:1287] "Started kubelet"
Aug 13 00:36:14.280073 kubelet[2778]: I0813 00:36:14.280053 2778 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 13 00:36:14.282262 kubelet[2778]: I0813 00:36:14.282215 2778 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Aug 13 00:36:14.286010 kubelet[2778]: I0813 00:36:14.285989 2778 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Aug 13 00:36:14.289058 kubelet[2778]: I0813 00:36:14.289040 2778 volume_manager.go:297] "Starting Kubelet Volume Manager"
Aug 13 00:36:14.291340 kubelet[2778]: I0813 00:36:14.291142 2778 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 13 00:36:14.291483 kubelet[2778]: I0813 00:36:14.291464 2778 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Aug 13 00:36:14.291783 kubelet[2778]: I0813 00:36:14.291764 2778 reconciler.go:26] "Reconciler: start to sync state"
Aug 13 00:36:14.296953 kubelet[2778]: I0813 00:36:14.296916 2778 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 13 00:36:14.299076 kubelet[2778]: I0813 00:36:14.297945 2778 server.go:479] "Adding debug handlers to kubelet server"
Aug 13 00:36:14.300483 kubelet[2778]: I0813 00:36:14.300451 2778 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 13 00:36:14.305450 kubelet[2778]: E0813 00:36:14.304355 2778 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 13 00:36:14.314585 kubelet[2778]: I0813 00:36:14.305767 2778 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 13 00:36:14.314585 kubelet[2778]: I0813 00:36:14.307547 2778 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 13 00:36:14.314585 kubelet[2778]: I0813 00:36:14.307577 2778 status_manager.go:227] "Starting to sync pod status with apiserver"
Aug 13 00:36:14.314585 kubelet[2778]: I0813 00:36:14.307627 2778 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Aug 13 00:36:14.314585 kubelet[2778]: I0813 00:36:14.307643 2778 kubelet.go:2382] "Starting kubelet main sync loop"
Aug 13 00:36:14.314585 kubelet[2778]: E0813 00:36:14.307706 2778 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 13 00:36:14.315452 kubelet[2778]: I0813 00:36:14.315430 2778 factory.go:221] Registration of the containerd container factory successfully
Aug 13 00:36:14.315536 kubelet[2778]: I0813 00:36:14.315505 2778 factory.go:221] Registration of the systemd container factory successfully
Aug 13 00:36:14.399010 sudo[2809]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Aug 13 00:36:14.399563 sudo[2809]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Aug 13 00:36:14.407259 kubelet[2778]: I0813 00:36:14.406410 2778 cpu_manager.go:221] "Starting CPU manager" policy="none"
Aug 13 00:36:14.407365 kubelet[2778]: I0813 00:36:14.407349 2778 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Aug 13 00:36:14.407456 kubelet[2778]: I0813 00:36:14.407445 2778 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 00:36:14.408090 kubelet[2778]: I0813 00:36:14.408052 2778 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Aug 13 00:36:14.408202 kubelet[2778]: I0813 00:36:14.408174 2778 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Aug 13 00:36:14.408268 kubelet[2778]: I0813 00:36:14.408258 2778 policy_none.go:49] "None policy: Start"
Aug 13 00:36:14.408328 kubelet[2778]: I0813 00:36:14.408306 2778 memory_manager.go:186] "Starting memorymanager" policy="None"
Aug 13 00:36:14.408400 kubelet[2778]: I0813 00:36:14.408390 2778 state_mem.go:35] "Initializing new in-memory state store"
Aug 13 00:36:14.408485 kubelet[2778]: E0813 00:36:14.408472 2778 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Aug 13 00:36:14.408713 kubelet[2778]: I0813 00:36:14.408688 2778 state_mem.go:75] "Updated machine memory state"
Aug 13 00:36:14.414778 kubelet[2778]: I0813 00:36:14.414706 2778 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 13 00:36:14.415958 kubelet[2778]: I0813 00:36:14.415913 2778 eviction_manager.go:189] "Eviction manager: starting control loop"
Aug 13 00:36:14.416077 kubelet[2778]: I0813 00:36:14.416034 2778 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Aug 13 00:36:14.417975 kubelet[2778]: I0813 00:36:14.417707 2778 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 13 00:36:14.421667 kubelet[2778]: E0813 00:36:14.421636 2778 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Aug 13 00:36:14.541959 kubelet[2778]: I0813 00:36:14.541927 2778 kubelet_node_status.go:75] "Attempting to register node" node="172-237-133-249"
Aug 13 00:36:14.553970 kubelet[2778]: I0813 00:36:14.553887 2778 kubelet_node_status.go:124] "Node was previously registered" node="172-237-133-249"
Aug 13 00:36:14.554453 kubelet[2778]: I0813 00:36:14.554434 2778 kubelet_node_status.go:78] "Successfully registered node" node="172-237-133-249"
Aug 13 00:36:14.611316 kubelet[2778]: I0813 00:36:14.609686 2778 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-237-133-249"
Aug 13 00:36:14.611316 kubelet[2778]: I0813 00:36:14.610273 2778 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-237-133-249"
Aug 13 00:36:14.611713 kubelet[2778]: I0813 00:36:14.611683 2778 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-237-133-249"
Aug 13 00:36:14.696834 kubelet[2778]: I0813 00:36:14.696692 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b396d8b630101200fac26c6c5d8b6e2c-k8s-certs\") pod \"kube-controller-manager-172-237-133-249\" (UID: \"b396d8b630101200fac26c6c5d8b6e2c\") " pod="kube-system/kube-controller-manager-172-237-133-249"
Aug 13 00:36:14.697055 kubelet[2778]: I0813 00:36:14.696976 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b396d8b630101200fac26c6c5d8b6e2c-usr-share-ca-certificates\") pod \"kube-controller-manager-172-237-133-249\" (UID: \"b396d8b630101200fac26c6c5d8b6e2c\") " pod="kube-system/kube-controller-manager-172-237-133-249"
Aug 13 00:36:14.697118 kubelet[2778]: I0813 00:36:14.697077 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c7846664812ab68f292b21f4dfb64951-ca-certs\") pod \"kube-apiserver-172-237-133-249\" (UID: \"c7846664812ab68f292b21f4dfb64951\") " pod="kube-system/kube-apiserver-172-237-133-249"
Aug 13 00:36:14.697169 kubelet[2778]: I0813 00:36:14.697147 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c7846664812ab68f292b21f4dfb64951-usr-share-ca-certificates\") pod \"kube-apiserver-172-237-133-249\" (UID: \"c7846664812ab68f292b21f4dfb64951\") " pod="kube-system/kube-apiserver-172-237-133-249"
Aug 13 00:36:14.697252 kubelet[2778]: I0813 00:36:14.697227 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c7846664812ab68f292b21f4dfb64951-k8s-certs\") pod \"kube-apiserver-172-237-133-249\" (UID: \"c7846664812ab68f292b21f4dfb64951\") " pod="kube-system/kube-apiserver-172-237-133-249"
Aug 13 00:36:14.697323 kubelet[2778]: I0813 00:36:14.697301 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b396d8b630101200fac26c6c5d8b6e2c-ca-certs\") pod \"kube-controller-manager-172-237-133-249\" (UID: \"b396d8b630101200fac26c6c5d8b6e2c\") " pod="kube-system/kube-controller-manager-172-237-133-249"
Aug 13 00:36:14.697391 kubelet[2778]: I0813 00:36:14.697333 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b396d8b630101200fac26c6c5d8b6e2c-flexvolume-dir\") pod \"kube-controller-manager-172-237-133-249\" (UID: \"b396d8b630101200fac26c6c5d8b6e2c\") " pod="kube-system/kube-controller-manager-172-237-133-249"
Aug 13 00:36:14.697431 kubelet[2778]: I0813 00:36:14.697404 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b396d8b630101200fac26c6c5d8b6e2c-kubeconfig\") pod \"kube-controller-manager-172-237-133-249\" (UID: \"b396d8b630101200fac26c6c5d8b6e2c\") " pod="kube-system/kube-controller-manager-172-237-133-249"
Aug 13 00:36:14.697494 kubelet[2778]: I0813 00:36:14.697471 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/262169405304b3a4dcf6b2dd26622368-kubeconfig\") pod \"kube-scheduler-172-237-133-249\" (UID: \"262169405304b3a4dcf6b2dd26622368\") " pod="kube-system/kube-scheduler-172-237-133-249"
Aug 13 00:36:14.936482 kubelet[2778]: E0813 00:36:14.936432 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:36:14.937756 kubelet[2778]: E0813 00:36:14.937730 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:36:14.941963 kubelet[2778]: E0813 00:36:14.941880 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:36:15.257730 kubelet[2778]: I0813 00:36:15.257675 2778 apiserver.go:52] "Watching apiserver"
Aug 13 00:36:15.291872 kubelet[2778]: I0813 00:36:15.291800 2778 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Aug 13 00:36:15.343223 kubelet[2778]: E0813 00:36:15.342174 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:36:15.343606 kubelet[2778]: E0813 00:36:15.343588 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:36:15.343957 kubelet[2778]: E0813 00:36:15.343939 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:36:15.684541 kubelet[2778]: I0813 00:36:15.684402 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-237-133-249" podStartSLOduration=1.6842971389999999 podStartE2EDuration="1.684297139s" podCreationTimestamp="2025-08-13 00:36:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:36:15.543624183 +0000 UTC m=+1.373419255" watchObservedRunningTime="2025-08-13 00:36:15.684297139 +0000 UTC m=+1.514092211"
Aug 13 00:36:15.684585 sudo[2809]: pam_unix(sudo:session): session closed for user root
Aug 13 00:36:15.740267 kubelet[2778]: I0813 00:36:15.739986 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-237-133-249" podStartSLOduration=1.739960334 podStartE2EDuration="1.739960334s" podCreationTimestamp="2025-08-13 00:36:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:36:15.725929153 +0000 UTC m=+1.555724245" watchObservedRunningTime="2025-08-13 00:36:15.739960334 +0000 UTC m=+1.569755426"
Aug 13 00:36:15.740695 kubelet[2778]: I0813 00:36:15.740569 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-237-133-249" podStartSLOduration=1.7402930250000002 podStartE2EDuration="1.740293025s" podCreationTimestamp="2025-08-13 00:36:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:36:15.736985183 +0000 UTC m=+1.566780255" watchObservedRunningTime="2025-08-13 00:36:15.740293025 +0000 UTC m=+1.570088127"
Aug 13 00:36:16.344650 kubelet[2778]: E0813 00:36:16.344584 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:36:16.346511 kubelet[2778]: E0813 00:36:16.345750 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:36:17.401384 kubelet[2778]: E0813 00:36:17.400738 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:36:17.661823 sudo[1833]: pam_unix(sudo:session): session closed for user root
Aug 13 00:36:17.716083 sshd[1832]: Connection closed by 147.75.109.163 port 35646
Aug 13 00:36:17.718395 sshd-session[1830]: pam_unix(sshd:session): session closed for user core
Aug 13 00:36:17.730291 systemd[1]: sshd@8-172.237.133.249:22-147.75.109.163:35646.service: Deactivated successfully.
Aug 13 00:36:17.735027 systemd[1]: session-9.scope: Deactivated successfully.
Aug 13 00:36:17.735732 systemd[1]: session-9.scope: Consumed 7.140s CPU time, 269.5M memory peak.
Aug 13 00:36:17.739567 systemd-logind[1513]: Session 9 logged out. Waiting for processes to exit.
Aug 13 00:36:17.742165 systemd-logind[1513]: Removed session 9.
Aug 13 00:36:19.355415 kubelet[2778]: E0813 00:36:19.355285 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:36:20.095787 kubelet[2778]: E0813 00:36:20.095511 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:36:20.357758 kubelet[2778]: E0813 00:36:20.357628 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:36:20.358223 kubelet[2778]: E0813 00:36:20.358139 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:36:20.401800 kubelet[2778]: I0813 00:36:20.401762 2778 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Aug 13 00:36:20.403066 containerd[1545]: time="2025-08-13T00:36:20.402841019Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Aug 13 00:36:20.403811 kubelet[2778]: I0813 00:36:20.403629 2778 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Aug 13 00:36:21.063349 systemd[1]: Created slice kubepods-besteffort-pod11194b2a_f798_44ba_ab10_f156f8b9fb60.slice - libcontainer container kubepods-besteffort-pod11194b2a_f798_44ba_ab10_f156f8b9fb60.slice.
Aug 13 00:36:21.078513 systemd[1]: Created slice kubepods-burstable-pod44a121c6_5869_4359_934f_20fd0b863ad3.slice - libcontainer container kubepods-burstable-pod44a121c6_5869_4359_934f_20fd0b863ad3.slice.
Aug 13 00:36:21.175265 kubelet[2778]: I0813 00:36:21.175219 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/44a121c6-5869-4359-934f-20fd0b863ad3-cilium-cgroup\") pod \"cilium-vnzl2\" (UID: \"44a121c6-5869-4359-934f-20fd0b863ad3\") " pod="kube-system/cilium-vnzl2"
Aug 13 00:36:21.175462 kubelet[2778]: I0813 00:36:21.175442 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/11194b2a-f798-44ba-ab10-f156f8b9fb60-kube-proxy\") pod \"kube-proxy-z2lvf\" (UID: \"11194b2a-f798-44ba-ab10-f156f8b9fb60\") " pod="kube-system/kube-proxy-z2lvf"
Aug 13 00:36:21.175680 kubelet[2778]: I0813 00:36:21.175658 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7zb9\" (UniqueName: \"kubernetes.io/projected/11194b2a-f798-44ba-ab10-f156f8b9fb60-kube-api-access-z7zb9\") pod \"kube-proxy-z2lvf\" (UID: \"11194b2a-f798-44ba-ab10-f156f8b9fb60\") " pod="kube-system/kube-proxy-z2lvf"
Aug 13 00:36:21.175846 kubelet[2778]: I0813 00:36:21.175794 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/44a121c6-5869-4359-934f-20fd0b863ad3-cni-path\") pod \"cilium-vnzl2\" (UID: \"44a121c6-5869-4359-934f-20fd0b863ad3\") " pod="kube-system/cilium-vnzl2"
Aug 13 00:36:21.175985 kubelet[2778]: I0813 00:36:21.175844 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/44a121c6-5869-4359-934f-20fd0b863ad3-host-proc-sys-kernel\") pod \"cilium-vnzl2\" (UID: \"44a121c6-5869-4359-934f-20fd0b863ad3\") " pod="kube-system/cilium-vnzl2"
Aug 13 00:36:21.175985 kubelet[2778]: I0813 00:36:21.175894 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/44a121c6-5869-4359-934f-20fd0b863ad3-hostproc\") pod \"cilium-vnzl2\" (UID: \"44a121c6-5869-4359-934f-20fd0b863ad3\") " pod="kube-system/cilium-vnzl2"
Aug 13 00:36:21.175985 kubelet[2778]: I0813 00:36:21.175910 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/44a121c6-5869-4359-934f-20fd0b863ad3-host-proc-sys-net\") pod \"cilium-vnzl2\" (UID: \"44a121c6-5869-4359-934f-20fd0b863ad3\") " pod="kube-system/cilium-vnzl2"
Aug 13 00:36:21.175985 kubelet[2778]: I0813 00:36:21.175927 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/44a121c6-5869-4359-934f-20fd0b863ad3-hubble-tls\") pod \"cilium-vnzl2\" (UID: \"44a121c6-5869-4359-934f-20fd0b863ad3\") " pod="kube-system/cilium-vnzl2"
Aug 13 00:36:21.175985 kubelet[2778]: I0813 00:36:21.175943 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/44a121c6-5869-4359-934f-20fd0b863ad3-cilium-run\") pod \"cilium-vnzl2\" (UID: \"44a121c6-5869-4359-934f-20fd0b863ad3\") " pod="kube-system/cilium-vnzl2"
Aug 13 00:36:21.175985 kubelet[2778]: I0813 00:36:21.175961 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/44a121c6-5869-4359-934f-20fd0b863ad3-cilium-config-path\") pod \"cilium-vnzl2\" (UID: \"44a121c6-5869-4359-934f-20fd0b863ad3\") " pod="kube-system/cilium-vnzl2"
Aug 13 00:36:21.176262 kubelet[2778]: I0813 00:36:21.175980 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/11194b2a-f798-44ba-ab10-f156f8b9fb60-xtables-lock\") pod \"kube-proxy-z2lvf\" (UID: \"11194b2a-f798-44ba-ab10-f156f8b9fb60\") " pod="kube-system/kube-proxy-z2lvf"
Aug 13 00:36:21.176262 kubelet[2778]: I0813 00:36:21.176005 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/44a121c6-5869-4359-934f-20fd0b863ad3-bpf-maps\") pod \"cilium-vnzl2\" (UID: \"44a121c6-5869-4359-934f-20fd0b863ad3\") " pod="kube-system/cilium-vnzl2"
Aug 13 00:36:21.176262 kubelet[2778]: I0813 00:36:21.176050 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/44a121c6-5869-4359-934f-20fd0b863ad3-lib-modules\") pod \"cilium-vnzl2\" (UID: \"44a121c6-5869-4359-934f-20fd0b863ad3\") " pod="kube-system/cilium-vnzl2"
Aug 13 00:36:21.176262 kubelet[2778]: I0813 00:36:21.176068 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/44a121c6-5869-4359-934f-20fd0b863ad3-clustermesh-secrets\") pod \"cilium-vnzl2\" (UID: \"44a121c6-5869-4359-934f-20fd0b863ad3\") " pod="kube-system/cilium-vnzl2"
Aug 13 00:36:21.176262 kubelet[2778]: I0813 00:36:21.176083 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/11194b2a-f798-44ba-ab10-f156f8b9fb60-lib-modules\") pod \"kube-proxy-z2lvf\" (UID: \"11194b2a-f798-44ba-ab10-f156f8b9fb60\") " pod="kube-system/kube-proxy-z2lvf"
Aug 13 00:36:21.176262 kubelet[2778]: I0813 00:36:21.176100 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/44a121c6-5869-4359-934f-20fd0b863ad3-xtables-lock\") pod \"cilium-vnzl2\" (UID: \"44a121c6-5869-4359-934f-20fd0b863ad3\") " pod="kube-system/cilium-vnzl2"
Aug 13 00:36:21.176453 kubelet[2778]: I0813 00:36:21.176116 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/44a121c6-5869-4359-934f-20fd0b863ad3-etc-cni-netd\") pod \"cilium-vnzl2\" (UID: \"44a121c6-5869-4359-934f-20fd0b863ad3\") " pod="kube-system/cilium-vnzl2"
Aug 13 00:36:21.176453 kubelet[2778]: I0813 00:36:21.176139 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n75q\" (UniqueName: \"kubernetes.io/projected/44a121c6-5869-4359-934f-20fd0b863ad3-kube-api-access-7n75q\") pod \"cilium-vnzl2\" (UID: \"44a121c6-5869-4359-934f-20fd0b863ad3\") " pod="kube-system/cilium-vnzl2"
Aug 13 00:36:21.453245 kubelet[2778]: E0813 00:36:21.452187 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:36:21.453245 kubelet[2778]: E0813 00:36:21.452667 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:36:21.454829 containerd[1545]: time="2025-08-13T00:36:21.454350585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vnzl2,Uid:44a121c6-5869-4359-934f-20fd0b863ad3,Namespace:kube-system,Attempt:0,}"
Aug 13 00:36:21.457590 containerd[1545]: time="2025-08-13T00:36:21.457016230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z2lvf,Uid:11194b2a-f798-44ba-ab10-f156f8b9fb60,Namespace:kube-system,Attempt:0,}"
Aug 13 00:36:21.462715 kubelet[2778]: E0813 00:36:21.462423 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:36:21.465634 kubelet[2778]: E0813 00:36:21.464841 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:36:21.612609 containerd[1545]: time="2025-08-13T00:36:21.611582861Z" level=info msg="connecting to shim ac52f87e4d0813202bd2fe8db8bdf469a7028a263e13e4462af1037209babf29" address="unix:///run/containerd/s/6dfa12b6820ee7976b9a5aad0868dc5eefda93db47b630713ddb0aa7eba8bda1" namespace=k8s.io protocol=ttrpc version=3
Aug 13 00:36:21.654859 systemd[1]: Created slice kubepods-besteffort-podc3f90c0b_aecb_47e6_a95e_d3afb284eda4.slice - libcontainer container kubepods-besteffort-podc3f90c0b_aecb_47e6_a95e_d3afb284eda4.slice.
Aug 13 00:36:21.657036 containerd[1545]: time="2025-08-13T00:36:21.656992016Z" level=info msg="connecting to shim 4d1fe52aafb83eff4adb098703a89a3a4e649f89a0796c1543cb73c5a36af42a" address="unix:///run/containerd/s/a1bf5766bdef4f7c2fbe520d9069a458d11723951fdbb8a17c245d5729b57f56" namespace=k8s.io protocol=ttrpc version=3
Aug 13 00:36:21.662636 kubelet[2778]: I0813 00:36:21.660511 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx5d6\" (UniqueName: \"kubernetes.io/projected/c3f90c0b-aecb-47e6-a95e-d3afb284eda4-kube-api-access-lx5d6\") pod \"cilium-operator-6c4d7847fc-xlfct\" (UID: \"c3f90c0b-aecb-47e6-a95e-d3afb284eda4\") " pod="kube-system/cilium-operator-6c4d7847fc-xlfct"
Aug 13 00:36:21.662768 kubelet[2778]: I0813 00:36:21.662646 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c3f90c0b-aecb-47e6-a95e-d3afb284eda4-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-xlfct\" (UID: \"c3f90c0b-aecb-47e6-a95e-d3afb284eda4\") " pod="kube-system/cilium-operator-6c4d7847fc-xlfct"
Aug 13 00:36:21.770715 systemd[1]: Started cri-containerd-4d1fe52aafb83eff4adb098703a89a3a4e649f89a0796c1543cb73c5a36af42a.scope - libcontainer container 4d1fe52aafb83eff4adb098703a89a3a4e649f89a0796c1543cb73c5a36af42a.
Aug 13 00:36:21.773190 systemd[1]: Started cri-containerd-ac52f87e4d0813202bd2fe8db8bdf469a7028a263e13e4462af1037209babf29.scope - libcontainer container ac52f87e4d0813202bd2fe8db8bdf469a7028a263e13e4462af1037209babf29.
Aug 13 00:36:21.865307 containerd[1545]: time="2025-08-13T00:36:21.865259382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vnzl2,Uid:44a121c6-5869-4359-934f-20fd0b863ad3,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac52f87e4d0813202bd2fe8db8bdf469a7028a263e13e4462af1037209babf29\""
Aug 13 00:36:21.866801 containerd[1545]: time="2025-08-13T00:36:21.866770965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z2lvf,Uid:11194b2a-f798-44ba-ab10-f156f8b9fb60,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d1fe52aafb83eff4adb098703a89a3a4e649f89a0796c1543cb73c5a36af42a\""
Aug 13 00:36:21.867068 kubelet[2778]: E0813 00:36:21.867033 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:36:21.869283 kubelet[2778]: E0813 00:36:21.869060 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:36:21.879055 containerd[1545]: time="2025-08-13T00:36:21.878534844Z" level=info msg="CreateContainer within sandbox \"4d1fe52aafb83eff4adb098703a89a3a4e649f89a0796c1543cb73c5a36af42a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Aug 13 00:36:21.879778 containerd[1545]: time="2025-08-13T00:36:21.879710506Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Aug 13 00:36:21.899603 containerd[1545]: time="2025-08-13T00:36:21.899560372Z" level=info msg="Container 29b412aa8fc63828af21435bceaf920bd1e857b0b50af936f4d1c6533ae01a0f: CDI devices from CRI Config.CDIDevices: []"
Aug 13 00:36:21.908023 containerd[1545]: time="2025-08-13T00:36:21.907977622Z" level=info msg="CreateContainer within sandbox \"4d1fe52aafb83eff4adb098703a89a3a4e649f89a0796c1543cb73c5a36af42a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"29b412aa8fc63828af21435bceaf920bd1e857b0b50af936f4d1c6533ae01a0f\""
Aug 13 00:36:21.908633 containerd[1545]: time="2025-08-13T00:36:21.908584384Z" level=info msg="StartContainer for \"29b412aa8fc63828af21435bceaf920bd1e857b0b50af936f4d1c6533ae01a0f\""
Aug 13 00:36:21.910550 containerd[1545]: time="2025-08-13T00:36:21.910468568Z" level=info msg="connecting to shim 29b412aa8fc63828af21435bceaf920bd1e857b0b50af936f4d1c6533ae01a0f" address="unix:///run/containerd/s/a1bf5766bdef4f7c2fbe520d9069a458d11723951fdbb8a17c245d5729b57f56" protocol=ttrpc version=3
Aug 13 00:36:21.939679 systemd[1]: Started cri-containerd-29b412aa8fc63828af21435bceaf920bd1e857b0b50af936f4d1c6533ae01a0f.scope - libcontainer container 29b412aa8fc63828af21435bceaf920bd1e857b0b50af936f4d1c6533ae01a0f.
Aug 13 00:36:21.964564 kubelet[2778]: E0813 00:36:21.964496 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:36:21.966084 containerd[1545]: time="2025-08-13T00:36:21.966010057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-xlfct,Uid:c3f90c0b-aecb-47e6-a95e-d3afb284eda4,Namespace:kube-system,Attempt:0,}"
Aug 13 00:36:21.991553 containerd[1545]: time="2025-08-13T00:36:21.991388896Z" level=info msg="connecting to shim cb11ea2cd087cabfe6658d57f7ecc8bc4a526e928c57af3a5371ed13e24e2109" address="unix:///run/containerd/s/0b3b73e058e963ca10a3b00bf76d597349809b59de0efa43acc65e5036658001" namespace=k8s.io protocol=ttrpc version=3
Aug 13 00:36:22.040736 containerd[1545]: time="2025-08-13T00:36:22.020388161Z" level=info msg="StartContainer for \"29b412aa8fc63828af21435bceaf920bd1e857b0b50af936f4d1c6533ae01a0f\" returns successfully"
Aug 13 00:36:22.055142 systemd[1]: Started cri-containerd-cb11ea2cd087cabfe6658d57f7ecc8bc4a526e928c57af3a5371ed13e24e2109.scope - libcontainer container cb11ea2cd087cabfe6658d57f7ecc8bc4a526e928c57af3a5371ed13e24e2109.
Aug 13 00:36:22.142934 containerd[1545]: time="2025-08-13T00:36:22.142850844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-xlfct,Uid:c3f90c0b-aecb-47e6-a95e-d3afb284eda4,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb11ea2cd087cabfe6658d57f7ecc8bc4a526e928c57af3a5371ed13e24e2109\""
Aug 13 00:36:22.144542 kubelet[2778]: E0813 00:36:22.144406 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:36:22.471224 kubelet[2778]: E0813 00:36:22.471185 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:36:24.334380 kubelet[2778]: I0813 00:36:24.333895 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-z2lvf" podStartSLOduration=3.333829303 podStartE2EDuration="3.333829303s" podCreationTimestamp="2025-08-13 00:36:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:36:22.487424057 +0000 UTC m=+8.317219129" watchObservedRunningTime="2025-08-13 00:36:24.333829303 +0000 UTC m=+10.163624375"
Aug 13 00:36:27.422745 kubelet[2778]: E0813 00:36:27.422629 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:36:31.454248 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount451322821.mount: Deactivated successfully.
Aug 13 00:36:34.584646 kubelet[2778]: I0813 00:36:34.584064 2778 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:36:34.584646 kubelet[2778]: I0813 00:36:34.584226 2778 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:36:34.589157 kubelet[2778]: I0813 00:36:34.589118 2778 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:36:34.627018 kubelet[2778]: I0813 00:36:34.626956 2778 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:36:34.627456 kubelet[2778]: I0813 00:36:34.627239 2778 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-xlfct","kube-system/cilium-vnzl2","kube-system/kube-controller-manager-172-237-133-249","kube-system/kube-proxy-z2lvf","kube-system/kube-apiserver-172-237-133-249","kube-system/kube-scheduler-172-237-133-249"] Aug 13 00:36:34.627456 kubelet[2778]: E0813 00:36:34.627343 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-xlfct" Aug 13 00:36:34.627456 kubelet[2778]: E0813 00:36:34.627357 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-vnzl2" Aug 13 00:36:34.627456 kubelet[2778]: E0813 00:36:34.627373 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-237-133-249" Aug 13 00:36:34.627456 kubelet[2778]: E0813 00:36:34.627383 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-z2lvf" Aug 13 00:36:34.627456 kubelet[2778]: E0813 00:36:34.627391 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-237-133-249" Aug 13 00:36:34.627456 kubelet[2778]: E0813 00:36:34.627399 2778 eviction_manager.go:609] "Eviction 
manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-237-133-249" Aug 13 00:36:34.627456 kubelet[2778]: I0813 00:36:34.627417 2778 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:36:35.683480 containerd[1545]: time="2025-08-13T00:36:35.683331270Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:36:35.686184 containerd[1545]: time="2025-08-13T00:36:35.686027082Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Aug 13 00:36:35.687198 containerd[1545]: time="2025-08-13T00:36:35.686676412Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:36:35.688852 containerd[1545]: time="2025-08-13T00:36:35.688811004Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 13.809049868s" Aug 13 00:36:35.689003 containerd[1545]: time="2025-08-13T00:36:35.688977094Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Aug 13 00:36:35.691877 containerd[1545]: time="2025-08-13T00:36:35.691838285Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Aug 13 00:36:35.693911 containerd[1545]: time="2025-08-13T00:36:35.693859927Z" level=info msg="CreateContainer within sandbox \"ac52f87e4d0813202bd2fe8db8bdf469a7028a263e13e4462af1037209babf29\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 00:36:35.709392 containerd[1545]: time="2025-08-13T00:36:35.709343016Z" level=info msg="Container 6884cf5513721e2a4877d687342a22f4c96833d556dfc855888a4c2160d6c7f6: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:36:35.718962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount558592381.mount: Deactivated successfully. Aug 13 00:36:35.721835 containerd[1545]: time="2025-08-13T00:36:35.721794715Z" level=info msg="CreateContainer within sandbox \"ac52f87e4d0813202bd2fe8db8bdf469a7028a263e13e4462af1037209babf29\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6884cf5513721e2a4877d687342a22f4c96833d556dfc855888a4c2160d6c7f6\"" Aug 13 00:36:35.723831 containerd[1545]: time="2025-08-13T00:36:35.723763136Z" level=info msg="StartContainer for \"6884cf5513721e2a4877d687342a22f4c96833d556dfc855888a4c2160d6c7f6\"" Aug 13 00:36:35.725657 containerd[1545]: time="2025-08-13T00:36:35.725607817Z" level=info msg="connecting to shim 6884cf5513721e2a4877d687342a22f4c96833d556dfc855888a4c2160d6c7f6" address="unix:///run/containerd/s/6dfa12b6820ee7976b9a5aad0868dc5eefda93db47b630713ddb0aa7eba8bda1" protocol=ttrpc version=3 Aug 13 00:36:35.797975 systemd[1]: Started cri-containerd-6884cf5513721e2a4877d687342a22f4c96833d556dfc855888a4c2160d6c7f6.scope - libcontainer container 6884cf5513721e2a4877d687342a22f4c96833d556dfc855888a4c2160d6c7f6. 
Aug 13 00:36:35.877246 containerd[1545]: time="2025-08-13T00:36:35.877186436Z" level=info msg="StartContainer for \"6884cf5513721e2a4877d687342a22f4c96833d556dfc855888a4c2160d6c7f6\" returns successfully" Aug 13 00:36:35.934853 systemd[1]: cri-containerd-6884cf5513721e2a4877d687342a22f4c96833d556dfc855888a4c2160d6c7f6.scope: Deactivated successfully. Aug 13 00:36:35.942918 containerd[1545]: time="2025-08-13T00:36:35.942489758Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6884cf5513721e2a4877d687342a22f4c96833d556dfc855888a4c2160d6c7f6\" id:\"6884cf5513721e2a4877d687342a22f4c96833d556dfc855888a4c2160d6c7f6\" pid:3193 exited_at:{seconds:1755045395 nanos:941554757}" Aug 13 00:36:35.942918 containerd[1545]: time="2025-08-13T00:36:35.942534938Z" level=info msg="received exit event container_id:\"6884cf5513721e2a4877d687342a22f4c96833d556dfc855888a4c2160d6c7f6\" id:\"6884cf5513721e2a4877d687342a22f4c96833d556dfc855888a4c2160d6c7f6\" pid:3193 exited_at:{seconds:1755045395 nanos:941554757}" Aug 13 00:36:35.985430 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6884cf5513721e2a4877d687342a22f4c96833d556dfc855888a4c2160d6c7f6-rootfs.mount: Deactivated successfully. 
Aug 13 00:36:36.539392 kubelet[2778]: E0813 00:36:36.539083 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:36:36.556330 containerd[1545]: time="2025-08-13T00:36:36.556280105Z" level=info msg="CreateContainer within sandbox \"ac52f87e4d0813202bd2fe8db8bdf469a7028a263e13e4462af1037209babf29\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 00:36:36.574787 containerd[1545]: time="2025-08-13T00:36:36.574728916Z" level=info msg="Container d7b1ad6bb62f776850c0767252fdb330501b773ad7cde93aabe2c17ca2bb9c4a: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:36:36.599048 containerd[1545]: time="2025-08-13T00:36:36.599000230Z" level=info msg="CreateContainer within sandbox \"ac52f87e4d0813202bd2fe8db8bdf469a7028a263e13e4462af1037209babf29\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d7b1ad6bb62f776850c0767252fdb330501b773ad7cde93aabe2c17ca2bb9c4a\"" Aug 13 00:36:36.599920 containerd[1545]: time="2025-08-13T00:36:36.599882900Z" level=info msg="StartContainer for \"d7b1ad6bb62f776850c0767252fdb330501b773ad7cde93aabe2c17ca2bb9c4a\"" Aug 13 00:36:36.603891 containerd[1545]: time="2025-08-13T00:36:36.603826193Z" level=info msg="connecting to shim d7b1ad6bb62f776850c0767252fdb330501b773ad7cde93aabe2c17ca2bb9c4a" address="unix:///run/containerd/s/6dfa12b6820ee7976b9a5aad0868dc5eefda93db47b630713ddb0aa7eba8bda1" protocol=ttrpc version=3 Aug 13 00:36:36.785727 systemd[1]: Started cri-containerd-d7b1ad6bb62f776850c0767252fdb330501b773ad7cde93aabe2c17ca2bb9c4a.scope - libcontainer container d7b1ad6bb62f776850c0767252fdb330501b773ad7cde93aabe2c17ca2bb9c4a. 
Aug 13 00:36:36.881807 containerd[1545]: time="2025-08-13T00:36:36.881139632Z" level=info msg="StartContainer for \"d7b1ad6bb62f776850c0767252fdb330501b773ad7cde93aabe2c17ca2bb9c4a\" returns successfully" Aug 13 00:36:36.920646 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 00:36:36.921114 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:36:36.928736 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Aug 13 00:36:36.932005 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 00:36:36.938893 containerd[1545]: time="2025-08-13T00:36:36.932496442Z" level=info msg="received exit event container_id:\"d7b1ad6bb62f776850c0767252fdb330501b773ad7cde93aabe2c17ca2bb9c4a\" id:\"d7b1ad6bb62f776850c0767252fdb330501b773ad7cde93aabe2c17ca2bb9c4a\" pid:3247 exited_at:{seconds:1755045396 nanos:931564691}" Aug 13 00:36:36.938893 containerd[1545]: time="2025-08-13T00:36:36.932767402Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d7b1ad6bb62f776850c0767252fdb330501b773ad7cde93aabe2c17ca2bb9c4a\" id:\"d7b1ad6bb62f776850c0767252fdb330501b773ad7cde93aabe2c17ca2bb9c4a\" pid:3247 exited_at:{seconds:1755045396 nanos:931564691}" Aug 13 00:36:36.936675 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 13 00:36:36.937140 systemd[1]: cri-containerd-d7b1ad6bb62f776850c0767252fdb330501b773ad7cde93aabe2c17ca2bb9c4a.scope: Deactivated successfully. Aug 13 00:36:37.036387 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:36:37.051359 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7b1ad6bb62f776850c0767252fdb330501b773ad7cde93aabe2c17ca2bb9c4a-rootfs.mount: Deactivated successfully. 
Aug 13 00:36:37.547149 kubelet[2778]: E0813 00:36:37.545612 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:36:37.551655 containerd[1545]: time="2025-08-13T00:36:37.551300351Z" level=info msg="CreateContainer within sandbox \"ac52f87e4d0813202bd2fe8db8bdf469a7028a263e13e4462af1037209babf29\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 00:36:37.606562 containerd[1545]: time="2025-08-13T00:36:37.605644649Z" level=info msg="Container 6df3cbee06e3fadd8fff80e7a1c0811dcf873a48e25f6cd952cf4fbf7f3913f5: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:36:37.625443 containerd[1545]: time="2025-08-13T00:36:37.625394249Z" level=info msg="CreateContainer within sandbox \"ac52f87e4d0813202bd2fe8db8bdf469a7028a263e13e4462af1037209babf29\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6df3cbee06e3fadd8fff80e7a1c0811dcf873a48e25f6cd952cf4fbf7f3913f5\"" Aug 13 00:36:37.627833 containerd[1545]: time="2025-08-13T00:36:37.627797790Z" level=info msg="StartContainer for \"6df3cbee06e3fadd8fff80e7a1c0811dcf873a48e25f6cd952cf4fbf7f3913f5\"" Aug 13 00:36:37.632390 containerd[1545]: time="2025-08-13T00:36:37.632351163Z" level=info msg="connecting to shim 6df3cbee06e3fadd8fff80e7a1c0811dcf873a48e25f6cd952cf4fbf7f3913f5" address="unix:///run/containerd/s/6dfa12b6820ee7976b9a5aad0868dc5eefda93db47b630713ddb0aa7eba8bda1" protocol=ttrpc version=3 Aug 13 00:36:37.743861 systemd[1]: Started cri-containerd-6df3cbee06e3fadd8fff80e7a1c0811dcf873a48e25f6cd952cf4fbf7f3913f5.scope - libcontainer container 6df3cbee06e3fadd8fff80e7a1c0811dcf873a48e25f6cd952cf4fbf7f3913f5. Aug 13 00:36:37.748717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2639441659.mount: Deactivated successfully. 
Aug 13 00:36:38.083327 containerd[1545]: time="2025-08-13T00:36:38.082949926Z" level=info msg="StartContainer for \"6df3cbee06e3fadd8fff80e7a1c0811dcf873a48e25f6cd952cf4fbf7f3913f5\" returns successfully" Aug 13 00:36:38.089275 systemd[1]: cri-containerd-6df3cbee06e3fadd8fff80e7a1c0811dcf873a48e25f6cd952cf4fbf7f3913f5.scope: Deactivated successfully. Aug 13 00:36:38.091252 containerd[1545]: time="2025-08-13T00:36:38.091078180Z" level=info msg="received exit event container_id:\"6df3cbee06e3fadd8fff80e7a1c0811dcf873a48e25f6cd952cf4fbf7f3913f5\" id:\"6df3cbee06e3fadd8fff80e7a1c0811dcf873a48e25f6cd952cf4fbf7f3913f5\" pid:3298 exited_at:{seconds:1755045398 nanos:90486230}" Aug 13 00:36:38.095215 containerd[1545]: time="2025-08-13T00:36:38.095126142Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6df3cbee06e3fadd8fff80e7a1c0811dcf873a48e25f6cd952cf4fbf7f3913f5\" id:\"6df3cbee06e3fadd8fff80e7a1c0811dcf873a48e25f6cd952cf4fbf7f3913f5\" pid:3298 exited_at:{seconds:1755045398 nanos:90486230}" Aug 13 00:36:38.146378 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6df3cbee06e3fadd8fff80e7a1c0811dcf873a48e25f6cd952cf4fbf7f3913f5-rootfs.mount: Deactivated successfully. 
Aug 13 00:36:38.176901 containerd[1545]: time="2025-08-13T00:36:38.176836918Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:36:38.177655 containerd[1545]: time="2025-08-13T00:36:38.177622638Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Aug 13 00:36:38.178635 containerd[1545]: time="2025-08-13T00:36:38.178274169Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:36:38.179620 containerd[1545]: time="2025-08-13T00:36:38.179583709Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.487696094s" Aug 13 00:36:38.179721 containerd[1545]: time="2025-08-13T00:36:38.179687879Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Aug 13 00:36:38.183484 containerd[1545]: time="2025-08-13T00:36:38.183442671Z" level=info msg="CreateContainer within sandbox \"cb11ea2cd087cabfe6658d57f7ecc8bc4a526e928c57af3a5371ed13e24e2109\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Aug 13 00:36:38.191549 containerd[1545]: time="2025-08-13T00:36:38.191465175Z" level=info msg="Container 
7fae0ac28f3ed5685ff286c721cadd9df2bb0270a3227f57feb331a286f95e9c: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:36:38.211586 containerd[1545]: time="2025-08-13T00:36:38.211543743Z" level=info msg="CreateContainer within sandbox \"cb11ea2cd087cabfe6658d57f7ecc8bc4a526e928c57af3a5371ed13e24e2109\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"7fae0ac28f3ed5685ff286c721cadd9df2bb0270a3227f57feb331a286f95e9c\"" Aug 13 00:36:38.212752 containerd[1545]: time="2025-08-13T00:36:38.212729205Z" level=info msg="StartContainer for \"7fae0ac28f3ed5685ff286c721cadd9df2bb0270a3227f57feb331a286f95e9c\"" Aug 13 00:36:38.215123 containerd[1545]: time="2025-08-13T00:36:38.215052095Z" level=info msg="connecting to shim 7fae0ac28f3ed5685ff286c721cadd9df2bb0270a3227f57feb331a286f95e9c" address="unix:///run/containerd/s/0b3b73e058e963ca10a3b00bf76d597349809b59de0efa43acc65e5036658001" protocol=ttrpc version=3 Aug 13 00:36:38.273386 systemd[1]: Started cri-containerd-7fae0ac28f3ed5685ff286c721cadd9df2bb0270a3227f57feb331a286f95e9c.scope - libcontainer container 7fae0ac28f3ed5685ff286c721cadd9df2bb0270a3227f57feb331a286f95e9c. 
Aug 13 00:36:38.327154 containerd[1545]: time="2025-08-13T00:36:38.327044195Z" level=info msg="StartContainer for \"7fae0ac28f3ed5685ff286c721cadd9df2bb0270a3227f57feb331a286f95e9c\" returns successfully" Aug 13 00:36:38.566197 kubelet[2778]: E0813 00:36:38.566068 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:36:38.567442 kubelet[2778]: E0813 00:36:38.566749 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:36:38.580641 containerd[1545]: time="2025-08-13T00:36:38.580450678Z" level=info msg="CreateContainer within sandbox \"ac52f87e4d0813202bd2fe8db8bdf469a7028a263e13e4462af1037209babf29\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 13 00:36:38.603240 containerd[1545]: time="2025-08-13T00:36:38.603143218Z" level=info msg="Container b7ff2f2be46d4bf202790029c8aee6482605d2cdc69f9e279d9311d4e8b59121: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:36:38.616455 containerd[1545]: time="2025-08-13T00:36:38.616374354Z" level=info msg="CreateContainer within sandbox \"ac52f87e4d0813202bd2fe8db8bdf469a7028a263e13e4462af1037209babf29\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b7ff2f2be46d4bf202790029c8aee6482605d2cdc69f9e279d9311d4e8b59121\"" Aug 13 00:36:38.617540 containerd[1545]: time="2025-08-13T00:36:38.617222904Z" level=info msg="StartContainer for \"b7ff2f2be46d4bf202790029c8aee6482605d2cdc69f9e279d9311d4e8b59121\"" Aug 13 00:36:38.620177 containerd[1545]: time="2025-08-13T00:36:38.620133206Z" level=info msg="connecting to shim b7ff2f2be46d4bf202790029c8aee6482605d2cdc69f9e279d9311d4e8b59121" address="unix:///run/containerd/s/6dfa12b6820ee7976b9a5aad0868dc5eefda93db47b630713ddb0aa7eba8bda1" 
protocol=ttrpc version=3 Aug 13 00:36:38.730163 systemd[1]: Started cri-containerd-b7ff2f2be46d4bf202790029c8aee6482605d2cdc69f9e279d9311d4e8b59121.scope - libcontainer container b7ff2f2be46d4bf202790029c8aee6482605d2cdc69f9e279d9311d4e8b59121. Aug 13 00:36:38.925485 kubelet[2778]: I0813 00:36:38.925019 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-xlfct" podStartSLOduration=1.889086801 podStartE2EDuration="17.924871661s" podCreationTimestamp="2025-08-13 00:36:21 +0000 UTC" firstStartedPulling="2025-08-13 00:36:22.14549928 +0000 UTC m=+7.975294352" lastFinishedPulling="2025-08-13 00:36:38.18128413 +0000 UTC m=+24.011079212" observedRunningTime="2025-08-13 00:36:38.924672321 +0000 UTC m=+24.754467393" watchObservedRunningTime="2025-08-13 00:36:38.924871661 +0000 UTC m=+24.754666743" Aug 13 00:36:38.967335 systemd[1]: cri-containerd-b7ff2f2be46d4bf202790029c8aee6482605d2cdc69f9e279d9311d4e8b59121.scope: Deactivated successfully. 
Aug 13 00:36:38.967966 containerd[1545]: time="2025-08-13T00:36:38.967894201Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b7ff2f2be46d4bf202790029c8aee6482605d2cdc69f9e279d9311d4e8b59121\" id:\"b7ff2f2be46d4bf202790029c8aee6482605d2cdc69f9e279d9311d4e8b59121\" pid:3374 exited_at:{seconds:1755045398 nanos:966972480}" Aug 13 00:36:38.968252 containerd[1545]: time="2025-08-13T00:36:38.968109851Z" level=info msg="received exit event container_id:\"b7ff2f2be46d4bf202790029c8aee6482605d2cdc69f9e279d9311d4e8b59121\" id:\"b7ff2f2be46d4bf202790029c8aee6482605d2cdc69f9e279d9311d4e8b59121\" pid:3374 exited_at:{seconds:1755045398 nanos:966972480}" Aug 13 00:36:38.969805 containerd[1545]: time="2025-08-13T00:36:38.969759501Z" level=info msg="StartContainer for \"b7ff2f2be46d4bf202790029c8aee6482605d2cdc69f9e279d9311d4e8b59121\" returns successfully" Aug 13 00:36:39.054814 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b7ff2f2be46d4bf202790029c8aee6482605d2cdc69f9e279d9311d4e8b59121-rootfs.mount: Deactivated successfully. Aug 13 00:36:39.582264 kubelet[2778]: E0813 00:36:39.580508 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:36:39.582264 kubelet[2778]: E0813 00:36:39.582029 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:36:39.588300 containerd[1545]: time="2025-08-13T00:36:39.588227372Z" level=info msg="CreateContainer within sandbox \"ac52f87e4d0813202bd2fe8db8bdf469a7028a263e13e4462af1037209babf29\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 13 00:36:39.615045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1667525368.mount: Deactivated successfully. 
Aug 13 00:36:39.618615 containerd[1545]: time="2025-08-13T00:36:39.616633673Z" level=info msg="Container aebad42a9b0d6f4199fe749efc961e77724ab71e808995ff121a511cc5ec84a0: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:36:39.632560 containerd[1545]: time="2025-08-13T00:36:39.632495329Z" level=info msg="CreateContainer within sandbox \"ac52f87e4d0813202bd2fe8db8bdf469a7028a263e13e4462af1037209babf29\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"aebad42a9b0d6f4199fe749efc961e77724ab71e808995ff121a511cc5ec84a0\"" Aug 13 00:36:39.633461 containerd[1545]: time="2025-08-13T00:36:39.633440330Z" level=info msg="StartContainer for \"aebad42a9b0d6f4199fe749efc961e77724ab71e808995ff121a511cc5ec84a0\"" Aug 13 00:36:39.637249 containerd[1545]: time="2025-08-13T00:36:39.637224861Z" level=info msg="connecting to shim aebad42a9b0d6f4199fe749efc961e77724ab71e808995ff121a511cc5ec84a0" address="unix:///run/containerd/s/6dfa12b6820ee7976b9a5aad0868dc5eefda93db47b630713ddb0aa7eba8bda1" protocol=ttrpc version=3 Aug 13 00:36:39.747722 systemd[1]: Started cri-containerd-aebad42a9b0d6f4199fe749efc961e77724ab71e808995ff121a511cc5ec84a0.scope - libcontainer container aebad42a9b0d6f4199fe749efc961e77724ab71e808995ff121a511cc5ec84a0. 
Aug 13 00:36:39.905230 containerd[1545]: time="2025-08-13T00:36:39.905191584Z" level=info msg="StartContainer for \"aebad42a9b0d6f4199fe749efc961e77724ab71e808995ff121a511cc5ec84a0\" returns successfully" Aug 13 00:36:40.375834 containerd[1545]: time="2025-08-13T00:36:40.375448845Z" level=info msg="TaskExit event in podsandbox handler container_id:\"aebad42a9b0d6f4199fe749efc961e77724ab71e808995ff121a511cc5ec84a0\" id:\"9b2a7215cb8fe076283038e681713e348a3f6d8c3f8acf667073854b622f6677\" pid:3443 exited_at:{seconds:1755045400 nanos:374724385}" Aug 13 00:36:40.464882 kubelet[2778]: I0813 00:36:40.464601 2778 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Aug 13 00:36:40.598315 kubelet[2778]: E0813 00:36:40.598252 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:36:40.642273 kubelet[2778]: I0813 00:36:40.642180 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vnzl2" podStartSLOduration=5.8206419 podStartE2EDuration="19.642156983s" podCreationTimestamp="2025-08-13 00:36:21 +0000 UTC" firstStartedPulling="2025-08-13 00:36:21.869377952 +0000 UTC m=+7.699173024" lastFinishedPulling="2025-08-13 00:36:35.690893035 +0000 UTC m=+21.520688107" observedRunningTime="2025-08-13 00:36:40.638922522 +0000 UTC m=+26.468717604" watchObservedRunningTime="2025-08-13 00:36:40.642156983 +0000 UTC m=+26.471952065" Aug 13 00:36:41.600361 kubelet[2778]: E0813 00:36:41.600253 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:36:42.602607 kubelet[2778]: E0813 00:36:42.602549 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:36:42.920397 systemd-networkd[1464]: cilium_host: Link UP Aug 13 00:36:42.920706 systemd-networkd[1464]: cilium_net: Link UP Aug 13 00:36:42.921118 systemd-networkd[1464]: cilium_host: Gained carrier Aug 13 00:36:42.921346 systemd-networkd[1464]: cilium_net: Gained carrier Aug 13 00:36:43.059293 systemd-networkd[1464]: cilium_vxlan: Link UP Aug 13 00:36:43.059303 systemd-networkd[1464]: cilium_vxlan: Gained carrier Aug 13 00:36:43.238800 systemd-networkd[1464]: cilium_net: Gained IPv6LL Aug 13 00:36:43.484606 kernel: NET: Registered PF_ALG protocol family Aug 13 00:36:43.543139 systemd-networkd[1464]: cilium_host: Gained IPv6LL Aug 13 00:36:44.253798 systemd-networkd[1464]: cilium_vxlan: Gained IPv6LL Aug 13 00:36:44.702176 systemd-networkd[1464]: lxc_health: Link UP Aug 13 00:36:44.704433 systemd-networkd[1464]: lxc_health: Gained carrier Aug 13 00:36:44.728859 kubelet[2778]: I0813 00:36:44.728740 2778 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:36:44.730063 kubelet[2778]: I0813 00:36:44.729353 2778 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:36:44.755970 kubelet[2778]: I0813 00:36:44.754925 2778 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:36:44.827626 kubelet[2778]: I0813 00:36:44.825790 2778 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:36:44.827626 kubelet[2778]: I0813 00:36:44.826576 2778 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-xlfct","kube-system/cilium-vnzl2","kube-system/kube-controller-manager-172-237-133-249","kube-system/kube-proxy-z2lvf","kube-system/kube-apiserver-172-237-133-249","kube-system/kube-scheduler-172-237-133-249"] Aug 13 00:36:44.827626 kubelet[2778]: E0813 00:36:44.826892 2778 eviction_manager.go:609] "Eviction 
manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-xlfct" Aug 13 00:36:44.827626 kubelet[2778]: E0813 00:36:44.826956 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-vnzl2" Aug 13 00:36:44.827626 kubelet[2778]: E0813 00:36:44.826966 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-237-133-249" Aug 13 00:36:44.827626 kubelet[2778]: E0813 00:36:44.826986 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-z2lvf" Aug 13 00:36:44.827626 kubelet[2778]: E0813 00:36:44.826994 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-237-133-249" Aug 13 00:36:44.827626 kubelet[2778]: E0813 00:36:44.827114 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-237-133-249" Aug 13 00:36:44.827626 kubelet[2778]: I0813 00:36:44.827128 2778 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:36:45.462201 kubelet[2778]: E0813 00:36:45.460661 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:36:45.612460 kubelet[2778]: E0813 00:36:45.612382 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:36:45.974914 systemd-networkd[1464]: lxc_health: Gained IPv6LL Aug 13 00:36:54.857142 kubelet[2778]: I0813 00:36:54.856795 2778 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:36:54.857142 kubelet[2778]: I0813 00:36:54.856971 2778 container_gc.go:86] "Attempting to delete 
unused containers" Aug 13 00:36:54.860165 kubelet[2778]: I0813 00:36:54.859884 2778 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:36:54.881441 kubelet[2778]: I0813 00:36:54.881406 2778 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:36:54.881623 kubelet[2778]: I0813 00:36:54.881568 2778 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-xlfct","kube-system/cilium-vnzl2","kube-system/kube-controller-manager-172-237-133-249","kube-system/kube-proxy-z2lvf","kube-system/kube-apiserver-172-237-133-249","kube-system/kube-scheduler-172-237-133-249"] Aug 13 00:36:54.881660 kubelet[2778]: E0813 00:36:54.881654 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-xlfct" Aug 13 00:36:54.881700 kubelet[2778]: E0813 00:36:54.881668 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-vnzl2" Aug 13 00:36:54.881700 kubelet[2778]: E0813 00:36:54.881678 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-237-133-249" Aug 13 00:36:54.881700 kubelet[2778]: E0813 00:36:54.881686 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-z2lvf" Aug 13 00:36:54.881700 kubelet[2778]: E0813 00:36:54.881694 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-237-133-249" Aug 13 00:36:54.881700 kubelet[2778]: E0813 00:36:54.881702 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-237-133-249" Aug 13 00:36:54.881848 kubelet[2778]: I0813 00:36:54.881712 2778 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:37:04.914235 kubelet[2778]: 
I0813 00:37:04.914156 2778 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:37:04.914235 kubelet[2778]: I0813 00:37:04.914251 2778 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:37:04.920728 kubelet[2778]: I0813 00:37:04.920682 2778 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:37:04.947721 kubelet[2778]: I0813 00:37:04.947675 2778 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:37:04.947929 kubelet[2778]: I0813 00:37:04.947810 2778 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-xlfct","kube-system/cilium-vnzl2","kube-system/kube-controller-manager-172-237-133-249","kube-system/kube-proxy-z2lvf","kube-system/kube-apiserver-172-237-133-249","kube-system/kube-scheduler-172-237-133-249"] Aug 13 00:37:04.947929 kubelet[2778]: E0813 00:37:04.947863 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-xlfct" Aug 13 00:37:04.947929 kubelet[2778]: E0813 00:37:04.947877 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-vnzl2" Aug 13 00:37:04.947929 kubelet[2778]: E0813 00:37:04.947888 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-237-133-249" Aug 13 00:37:04.947929 kubelet[2778]: E0813 00:37:04.947897 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-z2lvf" Aug 13 00:37:04.947929 kubelet[2778]: E0813 00:37:04.947906 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-237-133-249" Aug 13 00:37:04.947929 kubelet[2778]: E0813 00:37:04.947915 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" 
pod="kube-system/kube-scheduler-172-237-133-249" Aug 13 00:37:04.947929 kubelet[2778]: I0813 00:37:04.947925 2778 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:37:14.969800 kubelet[2778]: I0813 00:37:14.969748 2778 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:37:14.969800 kubelet[2778]: I0813 00:37:14.969798 2778 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:37:14.973429 kubelet[2778]: I0813 00:37:14.973379 2778 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:37:14.989045 kubelet[2778]: I0813 00:37:14.988995 2778 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:37:14.989196 kubelet[2778]: I0813 00:37:14.989072 2778 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-xlfct","kube-system/cilium-vnzl2","kube-system/kube-proxy-z2lvf","kube-system/kube-controller-manager-172-237-133-249","kube-system/kube-apiserver-172-237-133-249","kube-system/kube-scheduler-172-237-133-249"] Aug 13 00:37:14.989196 kubelet[2778]: E0813 00:37:14.989105 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-xlfct" Aug 13 00:37:14.989196 kubelet[2778]: E0813 00:37:14.989117 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-vnzl2" Aug 13 00:37:14.989196 kubelet[2778]: E0813 00:37:14.989127 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-z2lvf" Aug 13 00:37:14.989196 kubelet[2778]: E0813 00:37:14.989137 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-237-133-249" Aug 13 00:37:14.989196 kubelet[2778]: E0813 00:37:14.989149 2778 eviction_manager.go:609] 
"Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-237-133-249" Aug 13 00:37:14.989196 kubelet[2778]: E0813 00:37:14.989156 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-237-133-249" Aug 13 00:37:14.989196 kubelet[2778]: I0813 00:37:14.989167 2778 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:37:25.007038 kubelet[2778]: I0813 00:37:25.006986 2778 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:37:25.007038 kubelet[2778]: I0813 00:37:25.007046 2778 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:37:25.008641 kubelet[2778]: I0813 00:37:25.008616 2778 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:37:25.022785 kubelet[2778]: I0813 00:37:25.022761 2778 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:37:25.022900 kubelet[2778]: I0813 00:37:25.022880 2778 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-xlfct","kube-system/cilium-vnzl2","kube-system/kube-proxy-z2lvf","kube-system/kube-controller-manager-172-237-133-249","kube-system/kube-apiserver-172-237-133-249","kube-system/kube-scheduler-172-237-133-249"] Aug 13 00:37:25.022958 kubelet[2778]: E0813 00:37:25.022926 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-xlfct" Aug 13 00:37:25.022958 kubelet[2778]: E0813 00:37:25.022944 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-vnzl2" Aug 13 00:37:25.022958 kubelet[2778]: E0813 00:37:25.022952 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-z2lvf" Aug 13 00:37:25.023053 kubelet[2778]: E0813 
00:37:25.022961 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-237-133-249" Aug 13 00:37:25.023053 kubelet[2778]: E0813 00:37:25.022968 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-237-133-249" Aug 13 00:37:25.023053 kubelet[2778]: E0813 00:37:25.022977 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-237-133-249" Aug 13 00:37:25.023053 kubelet[2778]: I0813 00:37:25.022987 2778 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:37:27.309120 kubelet[2778]: E0813 00:37:27.309022 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:37:35.045720 kubelet[2778]: I0813 00:37:35.045542 2778 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:37:35.045720 kubelet[2778]: I0813 00:37:35.045702 2778 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:37:35.048734 kubelet[2778]: I0813 00:37:35.048241 2778 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:37:35.062547 kubelet[2778]: I0813 00:37:35.062485 2778 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:37:35.062710 kubelet[2778]: I0813 00:37:35.062643 2778 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-xlfct","kube-system/cilium-vnzl2","kube-system/kube-proxy-z2lvf","kube-system/kube-controller-manager-172-237-133-249","kube-system/kube-apiserver-172-237-133-249","kube-system/kube-scheduler-172-237-133-249"] Aug 13 00:37:35.062782 kubelet[2778]: E0813 00:37:35.062726 2778 eviction_manager.go:609] 
"Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-xlfct" Aug 13 00:37:35.062782 kubelet[2778]: E0813 00:37:35.062746 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-vnzl2" Aug 13 00:37:35.062782 kubelet[2778]: E0813 00:37:35.062754 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-z2lvf" Aug 13 00:37:35.062782 kubelet[2778]: E0813 00:37:35.062764 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-237-133-249" Aug 13 00:37:35.062782 kubelet[2778]: E0813 00:37:35.062773 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-237-133-249" Aug 13 00:37:35.062782 kubelet[2778]: E0813 00:37:35.062781 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-237-133-249" Aug 13 00:37:35.063114 kubelet[2778]: I0813 00:37:35.062792 2778 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:37:36.310004 kubelet[2778]: E0813 00:37:36.309513 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:37:44.309570 kubelet[2778]: E0813 00:37:44.308853 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:37:45.081763 kubelet[2778]: I0813 00:37:45.081730 2778 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:37:45.081763 kubelet[2778]: I0813 00:37:45.081770 2778 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:37:45.084965 kubelet[2778]: I0813 
00:37:45.084888 2778 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:37:45.100637 kubelet[2778]: I0813 00:37:45.100608 2778 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:37:45.100800 kubelet[2778]: I0813 00:37:45.100773 2778 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-xlfct","kube-system/cilium-vnzl2","kube-system/kube-controller-manager-172-237-133-249","kube-system/kube-proxy-z2lvf","kube-system/kube-apiserver-172-237-133-249","kube-system/kube-scheduler-172-237-133-249"] Aug 13 00:37:45.100908 kubelet[2778]: E0813 00:37:45.100840 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-xlfct" Aug 13 00:37:45.100908 kubelet[2778]: E0813 00:37:45.100859 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-vnzl2" Aug 13 00:37:45.100908 kubelet[2778]: E0813 00:37:45.100871 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-237-133-249" Aug 13 00:37:45.100908 kubelet[2778]: E0813 00:37:45.100880 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-z2lvf" Aug 13 00:37:45.100908 kubelet[2778]: E0813 00:37:45.100889 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-237-133-249" Aug 13 00:37:45.100908 kubelet[2778]: E0813 00:37:45.100896 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-237-133-249" Aug 13 00:37:45.100908 kubelet[2778]: I0813 00:37:45.100906 2778 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:37:48.309312 kubelet[2778]: E0813 00:37:48.308437 2778 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:37:49.309047 kubelet[2778]: E0813 00:37:49.309008 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:37:55.120153 kubelet[2778]: I0813 00:37:55.120103 2778 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:37:55.120153 kubelet[2778]: I0813 00:37:55.120203 2778 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:37:55.123038 kubelet[2778]: I0813 00:37:55.122962 2778 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:37:55.138507 kubelet[2778]: I0813 00:37:55.138468 2778 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:37:55.138671 kubelet[2778]: I0813 00:37:55.138605 2778 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-xlfct","kube-system/cilium-vnzl2","kube-system/kube-controller-manager-172-237-133-249","kube-system/kube-proxy-z2lvf","kube-system/kube-apiserver-172-237-133-249","kube-system/kube-scheduler-172-237-133-249"] Aug 13 00:37:55.138671 kubelet[2778]: E0813 00:37:55.138649 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-xlfct" Aug 13 00:37:55.138738 kubelet[2778]: E0813 00:37:55.138676 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-vnzl2" Aug 13 00:37:55.138738 kubelet[2778]: E0813 00:37:55.138686 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-237-133-249" Aug 13 00:37:55.138738 kubelet[2778]: E0813 
00:37:55.138694 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-z2lvf" Aug 13 00:37:55.138738 kubelet[2778]: E0813 00:37:55.138702 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-237-133-249" Aug 13 00:37:55.138738 kubelet[2778]: E0813 00:37:55.138709 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-237-133-249" Aug 13 00:37:55.138738 kubelet[2778]: I0813 00:37:55.138718 2778 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:38:05.239850 kubelet[2778]: I0813 00:38:05.239804 2778 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:38:05.239850 kubelet[2778]: I0813 00:38:05.239854 2778 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:38:05.244680 kubelet[2778]: I0813 00:38:05.243611 2778 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:38:05.260722 kubelet[2778]: I0813 00:38:05.260682 2778 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:38:05.260972 kubelet[2778]: I0813 00:38:05.260942 2778 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-xlfct","kube-system/cilium-vnzl2","kube-system/kube-proxy-z2lvf","kube-system/kube-controller-manager-172-237-133-249","kube-system/kube-apiserver-172-237-133-249","kube-system/kube-scheduler-172-237-133-249"] Aug 13 00:38:05.261029 kubelet[2778]: E0813 00:38:05.260984 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-xlfct" Aug 13 00:38:05.261029 kubelet[2778]: E0813 00:38:05.260997 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-vnzl2" Aug 13 
00:38:05.261029 kubelet[2778]: E0813 00:38:05.261005 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-z2lvf" Aug 13 00:38:05.261029 kubelet[2778]: E0813 00:38:05.261015 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-237-133-249" Aug 13 00:38:05.261029 kubelet[2778]: E0813 00:38:05.261023 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-237-133-249" Aug 13 00:38:05.261029 kubelet[2778]: E0813 00:38:05.261030 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-237-133-249" Aug 13 00:38:05.261360 kubelet[2778]: I0813 00:38:05.261040 2778 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:38:09.308683 kubelet[2778]: E0813 00:38:09.308612 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:38:15.283902 kubelet[2778]: I0813 00:38:15.283834 2778 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:38:15.283902 kubelet[2778]: I0813 00:38:15.283885 2778 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:38:15.291366 kubelet[2778]: I0813 00:38:15.291314 2778 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:38:15.294107 kubelet[2778]: I0813 00:38:15.294022 2778 image_gc_manager.go:487] "Removing image to free bytes" imageID="sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc" size=57680541 runtimeHandler="" Aug 13 00:38:15.296278 containerd[1545]: time="2025-08-13T00:38:15.296009385Z" level=info msg="RemoveImage \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Aug 13 00:38:15.298794 
containerd[1545]: time="2025-08-13T00:38:15.298749074Z" level=info msg="ImageDelete event name:\"registry.k8s.io/etcd:3.5.16-0\"" Aug 13 00:38:15.299433 containerd[1545]: time="2025-08-13T00:38:15.299382797Z" level=info msg="ImageDelete event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\"" Aug 13 00:38:15.299972 containerd[1545]: time="2025-08-13T00:38:15.299942939Z" level=info msg="RemoveImage \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" returns successfully" Aug 13 00:38:15.300140 containerd[1545]: time="2025-08-13T00:38:15.300029378Z" level=info msg="ImageDelete event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Aug 13 00:38:15.300345 kubelet[2778]: I0813 00:38:15.300268 2778 image_gc_manager.go:487] "Removing image to free bytes" imageID="sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" size=18562039 runtimeHandler="" Aug 13 00:38:15.300527 containerd[1545]: time="2025-08-13T00:38:15.300497200Z" level=info msg="RemoveImage \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 00:38:15.301312 containerd[1545]: time="2025-08-13T00:38:15.301277892Z" level=info msg="ImageDelete event name:\"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 00:38:15.301823 containerd[1545]: time="2025-08-13T00:38:15.301780154Z" level=info msg="ImageDelete event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\"" Aug 13 00:38:15.302283 containerd[1545]: time="2025-08-13T00:38:15.302253035Z" level=info msg="RemoveImage \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" returns successfully" Aug 13 00:38:15.302481 containerd[1545]: time="2025-08-13T00:38:15.302348805Z" level=info msg="ImageDelete event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 00:38:15.323751 kubelet[2778]: I0813 
00:38:15.323630 2778 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:38:15.323910 kubelet[2778]: I0813 00:38:15.323891 2778 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-xlfct","kube-system/cilium-vnzl2","kube-system/kube-controller-manager-172-237-133-249","kube-system/kube-proxy-z2lvf","kube-system/kube-apiserver-172-237-133-249","kube-system/kube-scheduler-172-237-133-249"] Aug 13 00:38:15.324580 kubelet[2778]: E0813 00:38:15.323962 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-xlfct" Aug 13 00:38:15.324580 kubelet[2778]: E0813 00:38:15.324025 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-vnzl2" Aug 13 00:38:15.324580 kubelet[2778]: E0813 00:38:15.324051 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-237-133-249" Aug 13 00:38:15.324580 kubelet[2778]: E0813 00:38:15.324063 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-z2lvf" Aug 13 00:38:15.324580 kubelet[2778]: E0813 00:38:15.324072 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-237-133-249" Aug 13 00:38:15.324580 kubelet[2778]: E0813 00:38:15.324109 2778 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-237-133-249" Aug 13 00:38:15.324580 kubelet[2778]: I0813 00:38:15.324121 2778 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:38:25.654123 systemd[1]: Started sshd@9-172.237.133.249:22-147.75.109.163:42964.service - OpenSSH per-connection server daemon (147.75.109.163:42964). 
Aug 13 00:38:26.022088 sshd[3885]: Accepted publickey for core from 147.75.109.163 port 42964 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:38:26.024477 sshd-session[3885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:38:26.034265 systemd-logind[1513]: New session 10 of user core. Aug 13 00:38:26.039670 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 13 00:38:26.386439 sshd[3887]: Connection closed by 147.75.109.163 port 42964 Aug 13 00:38:26.387313 sshd-session[3885]: pam_unix(sshd:session): session closed for user core Aug 13 00:38:26.393025 systemd-logind[1513]: Session 10 logged out. Waiting for processes to exit. Aug 13 00:38:26.393759 systemd[1]: sshd@9-172.237.133.249:22-147.75.109.163:42964.service: Deactivated successfully. Aug 13 00:38:26.397754 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 00:38:26.400321 systemd-logind[1513]: Removed session 10. Aug 13 00:38:31.455472 systemd[1]: Started sshd@10-172.237.133.249:22-147.75.109.163:40012.service - OpenSSH per-connection server daemon (147.75.109.163:40012). Aug 13 00:38:31.807310 sshd[3900]: Accepted publickey for core from 147.75.109.163 port 40012 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:38:31.808910 sshd-session[3900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:38:31.814616 systemd-logind[1513]: New session 11 of user core. Aug 13 00:38:31.825713 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 13 00:38:32.123277 sshd[3902]: Connection closed by 147.75.109.163 port 40012 Aug 13 00:38:32.124470 sshd-session[3900]: pam_unix(sshd:session): session closed for user core Aug 13 00:38:32.129788 systemd-logind[1513]: Session 11 logged out. Waiting for processes to exit. Aug 13 00:38:32.130934 systemd[1]: sshd@10-172.237.133.249:22-147.75.109.163:40012.service: Deactivated successfully. 
Aug 13 00:38:32.134960 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 00:38:32.136994 systemd-logind[1513]: Removed session 11. Aug 13 00:38:32.309347 kubelet[2778]: E0813 00:38:32.309268 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:38:37.186433 systemd[1]: Started sshd@11-172.237.133.249:22-147.75.109.163:40028.service - OpenSSH per-connection server daemon (147.75.109.163:40028). Aug 13 00:38:37.529309 sshd[3915]: Accepted publickey for core from 147.75.109.163 port 40028 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:38:37.531177 sshd-session[3915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:38:37.537403 systemd-logind[1513]: New session 12 of user core. Aug 13 00:38:37.548680 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 13 00:38:37.842617 sshd[3917]: Connection closed by 147.75.109.163 port 40028 Aug 13 00:38:37.843745 sshd-session[3915]: pam_unix(sshd:session): session closed for user core Aug 13 00:38:37.851188 systemd[1]: sshd@11-172.237.133.249:22-147.75.109.163:40028.service: Deactivated successfully. Aug 13 00:38:37.854045 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 00:38:37.856050 systemd-logind[1513]: Session 12 logged out. Waiting for processes to exit. Aug 13 00:38:37.857967 systemd-logind[1513]: Removed session 12. Aug 13 00:38:42.912792 systemd[1]: Started sshd@12-172.237.133.249:22-147.75.109.163:54890.service - OpenSSH per-connection server daemon (147.75.109.163:54890). 
Aug 13 00:38:43.265048 sshd[3930]: Accepted publickey for core from 147.75.109.163 port 54890 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:38:43.266986 sshd-session[3930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:38:43.273580 systemd-logind[1513]: New session 13 of user core. Aug 13 00:38:43.276659 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 13 00:38:43.578472 sshd[3932]: Connection closed by 147.75.109.163 port 54890 Aug 13 00:38:43.579628 sshd-session[3930]: pam_unix(sshd:session): session closed for user core Aug 13 00:38:43.584639 systemd[1]: sshd@12-172.237.133.249:22-147.75.109.163:54890.service: Deactivated successfully. Aug 13 00:38:43.587702 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 00:38:43.588896 systemd-logind[1513]: Session 13 logged out. Waiting for processes to exit. Aug 13 00:38:43.590995 systemd-logind[1513]: Removed session 13. Aug 13 00:38:43.644074 systemd[1]: Started sshd@13-172.237.133.249:22-147.75.109.163:54902.service - OpenSSH per-connection server daemon (147.75.109.163:54902). Aug 13 00:38:44.001546 sshd[3945]: Accepted publickey for core from 147.75.109.163 port 54902 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:38:44.003584 sshd-session[3945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:38:44.009600 systemd-logind[1513]: New session 14 of user core. Aug 13 00:38:44.015680 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 13 00:38:44.365833 sshd[3947]: Connection closed by 147.75.109.163 port 54902 Aug 13 00:38:44.366754 sshd-session[3945]: pam_unix(sshd:session): session closed for user core Aug 13 00:38:44.370728 systemd[1]: sshd@13-172.237.133.249:22-147.75.109.163:54902.service: Deactivated successfully. Aug 13 00:38:44.373633 systemd[1]: session-14.scope: Deactivated successfully. 
Aug 13 00:38:44.375213 systemd-logind[1513]: Session 14 logged out. Waiting for processes to exit. Aug 13 00:38:44.376917 systemd-logind[1513]: Removed session 14. Aug 13 00:38:44.428708 systemd[1]: Started sshd@14-172.237.133.249:22-147.75.109.163:54910.service - OpenSSH per-connection server daemon (147.75.109.163:54910). Aug 13 00:38:44.776756 sshd[3957]: Accepted publickey for core from 147.75.109.163 port 54910 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:38:44.778795 sshd-session[3957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:38:44.783711 systemd-logind[1513]: New session 15 of user core. Aug 13 00:38:44.791816 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 13 00:38:45.092912 sshd[3959]: Connection closed by 147.75.109.163 port 54910 Aug 13 00:38:45.093950 sshd-session[3957]: pam_unix(sshd:session): session closed for user core Aug 13 00:38:45.098190 systemd-logind[1513]: Session 15 logged out. Waiting for processes to exit. Aug 13 00:38:45.099075 systemd[1]: sshd@14-172.237.133.249:22-147.75.109.163:54910.service: Deactivated successfully. Aug 13 00:38:45.102066 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 00:38:45.104456 systemd-logind[1513]: Removed session 15. Aug 13 00:38:50.159794 systemd[1]: Started sshd@15-172.237.133.249:22-147.75.109.163:49422.service - OpenSSH per-connection server daemon (147.75.109.163:49422). Aug 13 00:38:50.508314 sshd[3971]: Accepted publickey for core from 147.75.109.163 port 49422 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:38:50.509982 sshd-session[3971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:38:50.516490 systemd-logind[1513]: New session 16 of user core. Aug 13 00:38:50.522655 systemd[1]: Started session-16.scope - Session 16 of User core. 
Aug 13 00:38:50.836462 sshd[3973]: Connection closed by 147.75.109.163 port 49422 Aug 13 00:38:50.837553 sshd-session[3971]: pam_unix(sshd:session): session closed for user core Aug 13 00:38:50.842161 systemd-logind[1513]: Session 16 logged out. Waiting for processes to exit. Aug 13 00:38:50.843088 systemd[1]: sshd@15-172.237.133.249:22-147.75.109.163:49422.service: Deactivated successfully. Aug 13 00:38:50.846637 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 00:38:50.849368 systemd-logind[1513]: Removed session 16. Aug 13 00:38:55.899122 systemd[1]: Started sshd@16-172.237.133.249:22-147.75.109.163:49434.service - OpenSSH per-connection server daemon (147.75.109.163:49434). Aug 13 00:38:56.246676 sshd[3995]: Accepted publickey for core from 147.75.109.163 port 49434 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:38:56.249034 sshd-session[3995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:38:56.255010 systemd-logind[1513]: New session 17 of user core. Aug 13 00:38:56.264832 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 13 00:38:56.564405 sshd[4000]: Connection closed by 147.75.109.163 port 49434 Aug 13 00:38:56.565721 sshd-session[3995]: pam_unix(sshd:session): session closed for user core Aug 13 00:38:56.570164 systemd[1]: sshd@16-172.237.133.249:22-147.75.109.163:49434.service: Deactivated successfully. Aug 13 00:38:56.573619 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 00:38:56.574818 systemd-logind[1513]: Session 17 logged out. Waiting for processes to exit. Aug 13 00:38:56.576449 systemd-logind[1513]: Removed session 17. 
Aug 13 00:38:57.309096 kubelet[2778]: E0813 00:38:57.309053 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:39:01.628004 systemd[1]: Started sshd@17-172.237.133.249:22-147.75.109.163:42490.service - OpenSSH per-connection server daemon (147.75.109.163:42490). Aug 13 00:39:01.976778 sshd[4012]: Accepted publickey for core from 147.75.109.163 port 42490 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:39:01.978798 sshd-session[4012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:39:01.985834 systemd-logind[1513]: New session 18 of user core. Aug 13 00:39:01.993715 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 13 00:39:02.287860 sshd[4014]: Connection closed by 147.75.109.163 port 42490 Aug 13 00:39:02.288758 sshd-session[4012]: pam_unix(sshd:session): session closed for user core Aug 13 00:39:02.293904 systemd-logind[1513]: Session 18 logged out. Waiting for processes to exit. Aug 13 00:39:02.294855 systemd[1]: sshd@17-172.237.133.249:22-147.75.109.163:42490.service: Deactivated successfully. Aug 13 00:39:02.297958 systemd[1]: session-18.scope: Deactivated successfully. Aug 13 00:39:02.300484 systemd-logind[1513]: Removed session 18. Aug 13 00:39:02.354559 systemd[1]: Started sshd@18-172.237.133.249:22-147.75.109.163:42504.service - OpenSSH per-connection server daemon (147.75.109.163:42504). Aug 13 00:39:02.700924 sshd[4026]: Accepted publickey for core from 147.75.109.163 port 42504 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:39:02.702680 sshd-session[4026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:39:02.707297 systemd-logind[1513]: New session 19 of user core. Aug 13 00:39:02.713695 systemd[1]: Started session-19.scope - Session 19 of User core. 
Aug 13 00:39:03.084775 sshd[4028]: Connection closed by 147.75.109.163 port 42504 Aug 13 00:39:03.085787 sshd-session[4026]: pam_unix(sshd:session): session closed for user core Aug 13 00:39:03.090819 systemd-logind[1513]: Session 19 logged out. Waiting for processes to exit. Aug 13 00:39:03.091397 systemd[1]: sshd@18-172.237.133.249:22-147.75.109.163:42504.service: Deactivated successfully. Aug 13 00:39:03.094459 systemd[1]: session-19.scope: Deactivated successfully. Aug 13 00:39:03.097068 systemd-logind[1513]: Removed session 19. Aug 13 00:39:03.143359 systemd[1]: Started sshd@19-172.237.133.249:22-147.75.109.163:42506.service - OpenSSH per-connection server daemon (147.75.109.163:42506). Aug 13 00:39:03.477548 sshd[4038]: Accepted publickey for core from 147.75.109.163 port 42506 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:39:03.479309 sshd-session[4038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:39:03.484234 systemd-logind[1513]: New session 20 of user core. Aug 13 00:39:03.489704 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 13 00:39:04.318218 sshd[4040]: Connection closed by 147.75.109.163 port 42506 Aug 13 00:39:04.319008 sshd-session[4038]: pam_unix(sshd:session): session closed for user core Aug 13 00:39:04.325324 systemd[1]: sshd@19-172.237.133.249:22-147.75.109.163:42506.service: Deactivated successfully. Aug 13 00:39:04.329165 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 00:39:04.332869 systemd-logind[1513]: Session 20 logged out. Waiting for processes to exit. Aug 13 00:39:04.337971 systemd-logind[1513]: Removed session 20. Aug 13 00:39:04.379603 systemd[1]: Started sshd@20-172.237.133.249:22-147.75.109.163:42522.service - OpenSSH per-connection server daemon (147.75.109.163:42522). 
Aug 13 00:39:04.728586 sshd[4058]: Accepted publickey for core from 147.75.109.163 port 42522 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:39:04.730362 sshd-session[4058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:39:04.736992 systemd-logind[1513]: New session 21 of user core.
Aug 13 00:39:04.740647 systemd[1]: Started session-21.scope - Session 21 of User core.
Aug 13 00:39:05.135006 sshd[4060]: Connection closed by 147.75.109.163 port 42522
Aug 13 00:39:05.135686 sshd-session[4058]: pam_unix(sshd:session): session closed for user core
Aug 13 00:39:05.139878 systemd[1]: sshd@20-172.237.133.249:22-147.75.109.163:42522.service: Deactivated successfully.
Aug 13 00:39:05.142072 systemd[1]: session-21.scope: Deactivated successfully.
Aug 13 00:39:05.143704 systemd-logind[1513]: Session 21 logged out. Waiting for processes to exit.
Aug 13 00:39:05.145286 systemd-logind[1513]: Removed session 21.
Aug 13 00:39:05.196826 systemd[1]: Started sshd@21-172.237.133.249:22-147.75.109.163:42526.service - OpenSSH per-connection server daemon (147.75.109.163:42526).
Aug 13 00:39:05.309310 kubelet[2778]: E0813 00:39:05.309270 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:39:05.536646 sshd[4070]: Accepted publickey for core from 147.75.109.163 port 42526 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:39:05.538246 sshd-session[4070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:39:05.544591 systemd-logind[1513]: New session 22 of user core.
Aug 13 00:39:05.550732 systemd[1]: Started session-22.scope - Session 22 of User core.
Aug 13 00:39:05.840354 sshd[4072]: Connection closed by 147.75.109.163 port 42526
Aug 13 00:39:05.841347 sshd-session[4070]: pam_unix(sshd:session): session closed for user core
Aug 13 00:39:05.846009 systemd[1]: sshd@21-172.237.133.249:22-147.75.109.163:42526.service: Deactivated successfully.
Aug 13 00:39:05.849944 systemd[1]: session-22.scope: Deactivated successfully.
Aug 13 00:39:05.851109 systemd-logind[1513]: Session 22 logged out. Waiting for processes to exit.
Aug 13 00:39:05.853654 systemd-logind[1513]: Removed session 22.
Aug 13 00:39:06.312463 kubelet[2778]: E0813 00:39:06.312090 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:39:08.313792 kubelet[2778]: E0813 00:39:08.313732 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:39:10.905390 systemd[1]: Started sshd@22-172.237.133.249:22-147.75.109.163:54064.service - OpenSSH per-connection server daemon (147.75.109.163:54064).
Aug 13 00:39:11.257565 sshd[4084]: Accepted publickey for core from 147.75.109.163 port 54064 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:39:11.259622 sshd-session[4084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:39:11.266747 systemd-logind[1513]: New session 23 of user core.
Aug 13 00:39:11.274702 systemd[1]: Started session-23.scope - Session 23 of User core.
Aug 13 00:39:11.568250 sshd[4086]: Connection closed by 147.75.109.163 port 54064
Aug 13 00:39:11.569188 sshd-session[4084]: pam_unix(sshd:session): session closed for user core
Aug 13 00:39:11.574273 systemd[1]: sshd@22-172.237.133.249:22-147.75.109.163:54064.service: Deactivated successfully.
Aug 13 00:39:11.576915 systemd[1]: session-23.scope: Deactivated successfully.
Aug 13 00:39:11.578591 systemd-logind[1513]: Session 23 logged out. Waiting for processes to exit.
Aug 13 00:39:11.580900 systemd-logind[1513]: Removed session 23.
Aug 13 00:39:16.634424 systemd[1]: Started sshd@23-172.237.133.249:22-147.75.109.163:54080.service - OpenSSH per-connection server daemon (147.75.109.163:54080).
Aug 13 00:39:16.979664 sshd[4102]: Accepted publickey for core from 147.75.109.163 port 54080 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:39:16.981454 sshd-session[4102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:39:16.989252 systemd-logind[1513]: New session 24 of user core.
Aug 13 00:39:16.991705 systemd[1]: Started session-24.scope - Session 24 of User core.
Aug 13 00:39:17.280234 sshd[4104]: Connection closed by 147.75.109.163 port 54080
Aug 13 00:39:17.281117 sshd-session[4102]: pam_unix(sshd:session): session closed for user core
Aug 13 00:39:17.287273 systemd[1]: sshd@23-172.237.133.249:22-147.75.109.163:54080.service: Deactivated successfully.
Aug 13 00:39:17.291177 systemd[1]: session-24.scope: Deactivated successfully.
Aug 13 00:39:17.293406 systemd-logind[1513]: Session 24 logged out. Waiting for processes to exit.
Aug 13 00:39:17.295098 systemd-logind[1513]: Removed session 24.
Aug 13 00:39:17.309650 kubelet[2778]: E0813 00:39:17.309602 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:39:22.355056 systemd[1]: Started sshd@24-172.237.133.249:22-147.75.109.163:50900.service - OpenSSH per-connection server daemon (147.75.109.163:50900).
Aug 13 00:39:22.693276 sshd[4120]: Accepted publickey for core from 147.75.109.163 port 50900 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:39:22.695168 sshd-session[4120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:39:22.702440 systemd-logind[1513]: New session 25 of user core.
Aug 13 00:39:22.706673 systemd[1]: Started session-25.scope - Session 25 of User core.
Aug 13 00:39:23.004594 sshd[4122]: Connection closed by 147.75.109.163 port 50900
Aug 13 00:39:23.006449 sshd-session[4120]: pam_unix(sshd:session): session closed for user core
Aug 13 00:39:23.010888 systemd-logind[1513]: Session 25 logged out. Waiting for processes to exit.
Aug 13 00:39:23.011256 systemd[1]: sshd@24-172.237.133.249:22-147.75.109.163:50900.service: Deactivated successfully.
Aug 13 00:39:23.013899 systemd[1]: session-25.scope: Deactivated successfully.
Aug 13 00:39:23.015774 systemd-logind[1513]: Removed session 25.
Aug 13 00:39:28.070730 systemd[1]: Started sshd@25-172.237.133.249:22-147.75.109.163:48954.service - OpenSSH per-connection server daemon (147.75.109.163:48954).
Aug 13 00:39:28.404238 sshd[4133]: Accepted publickey for core from 147.75.109.163 port 48954 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:39:28.406089 sshd-session[4133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:39:28.411591 systemd-logind[1513]: New session 26 of user core.
Aug 13 00:39:28.422692 systemd[1]: Started session-26.scope - Session 26 of User core.
Aug 13 00:39:28.703159 sshd[4135]: Connection closed by 147.75.109.163 port 48954
Aug 13 00:39:28.704194 sshd-session[4133]: pam_unix(sshd:session): session closed for user core
Aug 13 00:39:28.709363 systemd[1]: sshd@25-172.237.133.249:22-147.75.109.163:48954.service: Deactivated successfully.
Aug 13 00:39:28.712484 systemd[1]: session-26.scope: Deactivated successfully.
Aug 13 00:39:28.713628 systemd-logind[1513]: Session 26 logged out. Waiting for processes to exit.
Aug 13 00:39:28.716022 systemd-logind[1513]: Removed session 26.
Aug 13 00:39:33.772231 systemd[1]: Started sshd@26-172.237.133.249:22-147.75.109.163:48956.service - OpenSSH per-connection server daemon (147.75.109.163:48956).
Aug 13 00:39:34.120102 sshd[4147]: Accepted publickey for core from 147.75.109.163 port 48956 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:39:34.121783 sshd-session[4147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:39:34.127993 systemd-logind[1513]: New session 27 of user core.
Aug 13 00:39:34.132688 systemd[1]: Started session-27.scope - Session 27 of User core.
Aug 13 00:39:34.310087 kubelet[2778]: E0813 00:39:34.310016 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:39:34.436861 sshd[4149]: Connection closed by 147.75.109.163 port 48956
Aug 13 00:39:34.437843 sshd-session[4147]: pam_unix(sshd:session): session closed for user core
Aug 13 00:39:34.442390 systemd[1]: sshd@26-172.237.133.249:22-147.75.109.163:48956.service: Deactivated successfully.
Aug 13 00:39:34.445119 systemd[1]: session-27.scope: Deactivated successfully.
Aug 13 00:39:34.447109 systemd-logind[1513]: Session 27 logged out. Waiting for processes to exit.
Aug 13 00:39:34.448626 systemd-logind[1513]: Removed session 27.
Aug 13 00:39:39.501689 systemd[1]: Started sshd@27-172.237.133.249:22-147.75.109.163:37916.service - OpenSSH per-connection server daemon (147.75.109.163:37916).
Aug 13 00:39:39.855832 sshd[4161]: Accepted publickey for core from 147.75.109.163 port 37916 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:39:39.857534 sshd-session[4161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:39:39.863046 systemd-logind[1513]: New session 28 of user core.
Aug 13 00:39:39.869722 systemd[1]: Started session-28.scope - Session 28 of User core.
Aug 13 00:39:40.161496 sshd[4163]: Connection closed by 147.75.109.163 port 37916
Aug 13 00:39:40.162153 sshd-session[4161]: pam_unix(sshd:session): session closed for user core
Aug 13 00:39:40.166206 systemd-logind[1513]: Session 28 logged out. Waiting for processes to exit.
Aug 13 00:39:40.167108 systemd[1]: sshd@27-172.237.133.249:22-147.75.109.163:37916.service: Deactivated successfully.
Aug 13 00:39:40.169783 systemd[1]: session-28.scope: Deactivated successfully.
Aug 13 00:39:40.171624 systemd-logind[1513]: Removed session 28.
Aug 13 00:39:45.223089 systemd[1]: Started sshd@28-172.237.133.249:22-147.75.109.163:37930.service - OpenSSH per-connection server daemon (147.75.109.163:37930).
Aug 13 00:39:45.575071 sshd[4176]: Accepted publickey for core from 147.75.109.163 port 37930 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:39:45.576851 sshd-session[4176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:39:45.582686 systemd-logind[1513]: New session 29 of user core.
Aug 13 00:39:45.588670 systemd[1]: Started session-29.scope - Session 29 of User core.
Aug 13 00:39:45.888798 sshd[4179]: Connection closed by 147.75.109.163 port 37930
Aug 13 00:39:45.889457 sshd-session[4176]: pam_unix(sshd:session): session closed for user core
Aug 13 00:39:45.894032 systemd[1]: sshd@28-172.237.133.249:22-147.75.109.163:37930.service: Deactivated successfully.
Aug 13 00:39:45.896343 systemd[1]: session-29.scope: Deactivated successfully.
Aug 13 00:39:45.897237 systemd-logind[1513]: Session 29 logged out. Waiting for processes to exit.
Aug 13 00:39:45.899399 systemd-logind[1513]: Removed session 29.
Aug 13 00:39:50.962830 systemd[1]: Started sshd@29-172.237.133.249:22-147.75.109.163:42154.service - OpenSSH per-connection server daemon (147.75.109.163:42154).
Aug 13 00:39:51.349072 sshd[4191]: Accepted publickey for core from 147.75.109.163 port 42154 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:39:51.353995 sshd-session[4191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:39:51.372022 systemd-logind[1513]: New session 30 of user core.
Aug 13 00:39:51.378799 systemd[1]: Started session-30.scope - Session 30 of User core.
Aug 13 00:39:51.772416 sshd[4194]: Connection closed by 147.75.109.163 port 42154
Aug 13 00:39:51.773932 sshd-session[4191]: pam_unix(sshd:session): session closed for user core
Aug 13 00:39:51.787069 systemd[1]: sshd@29-172.237.133.249:22-147.75.109.163:42154.service: Deactivated successfully.
Aug 13 00:39:51.793214 systemd[1]: session-30.scope: Deactivated successfully.
Aug 13 00:39:51.796936 systemd-logind[1513]: Session 30 logged out. Waiting for processes to exit.
Aug 13 00:39:51.801753 systemd-logind[1513]: Removed session 30.
Aug 13 00:39:56.852142 systemd[1]: Started sshd@30-172.237.133.249:22-147.75.109.163:42166.service - OpenSSH per-connection server daemon (147.75.109.163:42166).
Aug 13 00:39:57.214311 sshd[4208]: Accepted publickey for core from 147.75.109.163 port 42166 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:39:57.216453 sshd-session[4208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:39:57.223718 systemd-logind[1513]: New session 31 of user core.
Aug 13 00:39:57.227891 systemd[1]: Started session-31.scope - Session 31 of User core.
Aug 13 00:39:57.534801 sshd[4210]: Connection closed by 147.75.109.163 port 42166
Aug 13 00:39:57.535832 sshd-session[4208]: pam_unix(sshd:session): session closed for user core
Aug 13 00:39:57.540605 systemd[1]: sshd@30-172.237.133.249:22-147.75.109.163:42166.service: Deactivated successfully.
Aug 13 00:39:57.543402 systemd[1]: session-31.scope: Deactivated successfully.
Aug 13 00:39:57.544778 systemd-logind[1513]: Session 31 logged out. Waiting for processes to exit.
Aug 13 00:39:57.546278 systemd-logind[1513]: Removed session 31.
Aug 13 00:40:02.596013 systemd[1]: Started sshd@31-172.237.133.249:22-147.75.109.163:49146.service - OpenSSH per-connection server daemon (147.75.109.163:49146).
Aug 13 00:40:02.960353 sshd[4222]: Accepted publickey for core from 147.75.109.163 port 49146 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:40:02.962260 sshd-session[4222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:40:02.969409 systemd-logind[1513]: New session 32 of user core.
Aug 13 00:40:02.974691 systemd[1]: Started session-32.scope - Session 32 of User core.
Aug 13 00:40:03.272792 sshd[4224]: Connection closed by 147.75.109.163 port 49146
Aug 13 00:40:03.273795 sshd-session[4222]: pam_unix(sshd:session): session closed for user core
Aug 13 00:40:03.280027 systemd[1]: sshd@31-172.237.133.249:22-147.75.109.163:49146.service: Deactivated successfully.
Aug 13 00:40:03.283114 systemd[1]: session-32.scope: Deactivated successfully.
Aug 13 00:40:03.284653 systemd-logind[1513]: Session 32 logged out. Waiting for processes to exit.
Aug 13 00:40:03.287609 systemd-logind[1513]: Removed session 32.
Aug 13 00:40:08.309569 kubelet[2778]: E0813 00:40:08.309155 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:40:08.333908 systemd[1]: Started sshd@32-172.237.133.249:22-147.75.109.163:54608.service - OpenSSH per-connection server daemon (147.75.109.163:54608).
Aug 13 00:40:08.675950 sshd[4236]: Accepted publickey for core from 147.75.109.163 port 54608 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:40:08.677836 sshd-session[4236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:40:08.683270 systemd-logind[1513]: New session 33 of user core.
Aug 13 00:40:08.689661 systemd[1]: Started session-33.scope - Session 33 of User core.
Aug 13 00:40:08.975760 sshd[4238]: Connection closed by 147.75.109.163 port 54608
Aug 13 00:40:08.976850 sshd-session[4236]: pam_unix(sshd:session): session closed for user core
Aug 13 00:40:08.981215 systemd-logind[1513]: Session 33 logged out. Waiting for processes to exit.
Aug 13 00:40:08.982112 systemd[1]: sshd@32-172.237.133.249:22-147.75.109.163:54608.service: Deactivated successfully.
Aug 13 00:40:08.984817 systemd[1]: session-33.scope: Deactivated successfully.
Aug 13 00:40:08.986410 systemd-logind[1513]: Removed session 33.
Aug 13 00:40:14.041685 systemd[1]: Started sshd@33-172.237.133.249:22-147.75.109.163:54618.service - OpenSSH per-connection server daemon (147.75.109.163:54618).
Aug 13 00:40:14.393244 sshd[4250]: Accepted publickey for core from 147.75.109.163 port 54618 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:40:14.394989 sshd-session[4250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:40:14.401255 systemd-logind[1513]: New session 34 of user core.
Aug 13 00:40:14.407682 systemd[1]: Started session-34.scope - Session 34 of User core.
Aug 13 00:40:14.701935 sshd[4254]: Connection closed by 147.75.109.163 port 54618
Aug 13 00:40:14.702922 sshd-session[4250]: pam_unix(sshd:session): session closed for user core
Aug 13 00:40:14.708324 systemd[1]: sshd@33-172.237.133.249:22-147.75.109.163:54618.service: Deactivated successfully.
Aug 13 00:40:14.711356 systemd[1]: session-34.scope: Deactivated successfully.
Aug 13 00:40:14.713394 systemd-logind[1513]: Session 34 logged out. Waiting for processes to exit.
Aug 13 00:40:14.715448 systemd-logind[1513]: Removed session 34.
Aug 13 00:40:19.765073 systemd[1]: Started sshd@34-172.237.133.249:22-147.75.109.163:36120.service - OpenSSH per-connection server daemon (147.75.109.163:36120).
Aug 13 00:40:20.104704 sshd[4267]: Accepted publickey for core from 147.75.109.163 port 36120 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:40:20.106768 sshd-session[4267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:40:20.113576 systemd-logind[1513]: New session 35 of user core.
Aug 13 00:40:20.119684 systemd[1]: Started session-35.scope - Session 35 of User core.
Aug 13 00:40:20.412680 sshd[4269]: Connection closed by 147.75.109.163 port 36120
Aug 13 00:40:20.413444 sshd-session[4267]: pam_unix(sshd:session): session closed for user core
Aug 13 00:40:20.418945 systemd-logind[1513]: Session 35 logged out. Waiting for processes to exit.
Aug 13 00:40:20.419316 systemd[1]: sshd@34-172.237.133.249:22-147.75.109.163:36120.service: Deactivated successfully.
Aug 13 00:40:20.422083 systemd[1]: session-35.scope: Deactivated successfully.
Aug 13 00:40:20.424893 systemd-logind[1513]: Removed session 35.
Aug 13 00:40:21.308672 kubelet[2778]: E0813 00:40:21.308630 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:40:23.309415 kubelet[2778]: E0813 00:40:23.309350 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:40:25.481242 systemd[1]: Started sshd@35-172.237.133.249:22-147.75.109.163:36128.service - OpenSSH per-connection server daemon (147.75.109.163:36128).
Aug 13 00:40:25.831144 sshd[4282]: Accepted publickey for core from 147.75.109.163 port 36128 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:40:25.833002 sshd-session[4282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:40:25.839286 systemd-logind[1513]: New session 36 of user core.
Aug 13 00:40:25.847664 systemd[1]: Started session-36.scope - Session 36 of User core.
Aug 13 00:40:26.153692 sshd[4284]: Connection closed by 147.75.109.163 port 36128
Aug 13 00:40:26.154675 sshd-session[4282]: pam_unix(sshd:session): session closed for user core
Aug 13 00:40:26.160176 systemd[1]: sshd@35-172.237.133.249:22-147.75.109.163:36128.service: Deactivated successfully.
Aug 13 00:40:26.163130 systemd[1]: session-36.scope: Deactivated successfully.
Aug 13 00:40:26.164459 systemd-logind[1513]: Session 36 logged out. Waiting for processes to exit.
Aug 13 00:40:26.166785 systemd-logind[1513]: Removed session 36.
Aug 13 00:40:31.226628 systemd[1]: Started sshd@36-172.237.133.249:22-147.75.109.163:52088.service - OpenSSH per-connection server daemon (147.75.109.163:52088).
Aug 13 00:40:31.582209 sshd[4296]: Accepted publickey for core from 147.75.109.163 port 52088 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:40:31.584105 sshd-session[4296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:40:31.589765 systemd-logind[1513]: New session 37 of user core.
Aug 13 00:40:31.596656 systemd[1]: Started session-37.scope - Session 37 of User core.
Aug 13 00:40:31.905113 sshd[4298]: Connection closed by 147.75.109.163 port 52088
Aug 13 00:40:31.906683 sshd-session[4296]: pam_unix(sshd:session): session closed for user core
Aug 13 00:40:31.913281 systemd[1]: sshd@36-172.237.133.249:22-147.75.109.163:52088.service: Deactivated successfully.
Aug 13 00:40:31.917254 systemd[1]: session-37.scope: Deactivated successfully.
Aug 13 00:40:31.920303 systemd-logind[1513]: Session 37 logged out. Waiting for processes to exit.
Aug 13 00:40:31.922483 systemd-logind[1513]: Removed session 37.
Aug 13 00:40:33.308592 kubelet[2778]: E0813 00:40:33.308480 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:40:35.310487 kubelet[2778]: E0813 00:40:35.309254 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:40:36.967410 systemd[1]: Started sshd@37-172.237.133.249:22-147.75.109.163:52098.service - OpenSSH per-connection server daemon (147.75.109.163:52098).
Aug 13 00:40:37.305100 sshd[4310]: Accepted publickey for core from 147.75.109.163 port 52098 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:40:37.307143 sshd-session[4310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:40:37.314489 systemd-logind[1513]: New session 38 of user core.
Aug 13 00:40:37.317653 systemd[1]: Started session-38.scope - Session 38 of User core.
Aug 13 00:40:37.616557 sshd[4312]: Connection closed by 147.75.109.163 port 52098
Aug 13 00:40:37.617416 sshd-session[4310]: pam_unix(sshd:session): session closed for user core
Aug 13 00:40:37.621265 systemd[1]: sshd@37-172.237.133.249:22-147.75.109.163:52098.service: Deactivated successfully.
Aug 13 00:40:37.623669 systemd[1]: session-38.scope: Deactivated successfully.
Aug 13 00:40:37.626330 systemd-logind[1513]: Session 38 logged out. Waiting for processes to exit.
Aug 13 00:40:37.627856 systemd-logind[1513]: Removed session 38.
Aug 13 00:40:42.686506 systemd[1]: Started sshd@38-172.237.133.249:22-147.75.109.163:36026.service - OpenSSH per-connection server daemon (147.75.109.163:36026).
Aug 13 00:40:43.040731 sshd[4324]: Accepted publickey for core from 147.75.109.163 port 36026 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:40:43.042424 sshd-session[4324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:40:43.048657 systemd-logind[1513]: New session 39 of user core.
Aug 13 00:40:43.053683 systemd[1]: Started session-39.scope - Session 39 of User core.
Aug 13 00:40:43.367738 sshd[4326]: Connection closed by 147.75.109.163 port 36026
Aug 13 00:40:43.369410 sshd-session[4324]: pam_unix(sshd:session): session closed for user core
Aug 13 00:40:43.377336 systemd[1]: sshd@38-172.237.133.249:22-147.75.109.163:36026.service: Deactivated successfully.
Aug 13 00:40:43.380738 systemd[1]: session-39.scope: Deactivated successfully.
Aug 13 00:40:43.382063 systemd-logind[1513]: Session 39 logged out. Waiting for processes to exit.
Aug 13 00:40:43.384342 systemd-logind[1513]: Removed session 39.
Aug 13 00:40:44.311553 kubelet[2778]: E0813 00:40:44.310169 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:40:48.434349 systemd[1]: Started sshd@39-172.237.133.249:22-147.75.109.163:52356.service - OpenSSH per-connection server daemon (147.75.109.163:52356).
Aug 13 00:40:48.773342 sshd[4339]: Accepted publickey for core from 147.75.109.163 port 52356 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:40:48.775003 sshd-session[4339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:40:48.780728 systemd-logind[1513]: New session 40 of user core.
Aug 13 00:40:48.787711 systemd[1]: Started session-40.scope - Session 40 of User core.
Aug 13 00:40:49.082284 sshd[4341]: Connection closed by 147.75.109.163 port 52356
Aug 13 00:40:49.083541 sshd-session[4339]: pam_unix(sshd:session): session closed for user core
Aug 13 00:40:49.088822 systemd-logind[1513]: Session 40 logged out. Waiting for processes to exit.
Aug 13 00:40:49.089660 systemd[1]: sshd@39-172.237.133.249:22-147.75.109.163:52356.service: Deactivated successfully.
Aug 13 00:40:49.092174 systemd[1]: session-40.scope: Deactivated successfully.
Aug 13 00:40:49.094486 systemd-logind[1513]: Removed session 40.
Aug 13 00:40:54.151433 systemd[1]: Started sshd@40-172.237.133.249:22-147.75.109.163:52364.service - OpenSSH per-connection server daemon (147.75.109.163:52364).
Aug 13 00:40:54.498278 sshd[4355]: Accepted publickey for core from 147.75.109.163 port 52364 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:40:54.499956 sshd-session[4355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:40:54.505579 systemd-logind[1513]: New session 41 of user core.
Aug 13 00:40:54.510681 systemd[1]: Started session-41.scope - Session 41 of User core.
Aug 13 00:40:54.826012 sshd[4357]: Connection closed by 147.75.109.163 port 52364
Aug 13 00:40:54.826854 sshd-session[4355]: pam_unix(sshd:session): session closed for user core
Aug 13 00:40:54.831874 systemd[1]: sshd@40-172.237.133.249:22-147.75.109.163:52364.service: Deactivated successfully.
Aug 13 00:40:54.834573 systemd[1]: session-41.scope: Deactivated successfully.
Aug 13 00:40:54.835729 systemd-logind[1513]: Session 41 logged out. Waiting for processes to exit.
Aug 13 00:40:54.837423 systemd-logind[1513]: Removed session 41.
Aug 13 00:40:59.891850 systemd[1]: Started sshd@41-172.237.133.249:22-147.75.109.163:43176.service - OpenSSH per-connection server daemon (147.75.109.163:43176).
Aug 13 00:41:00.239640 sshd[4370]: Accepted publickey for core from 147.75.109.163 port 43176 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:41:00.241780 sshd-session[4370]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:41:00.247812 systemd-logind[1513]: New session 42 of user core.
Aug 13 00:41:00.252760 systemd[1]: Started session-42.scope - Session 42 of User core.
Aug 13 00:41:00.555501 sshd[4372]: Connection closed by 147.75.109.163 port 43176
Aug 13 00:41:00.556415 sshd-session[4370]: pam_unix(sshd:session): session closed for user core
Aug 13 00:41:00.560864 systemd-logind[1513]: Session 42 logged out. Waiting for processes to exit.
Aug 13 00:41:00.561790 systemd[1]: sshd@41-172.237.133.249:22-147.75.109.163:43176.service: Deactivated successfully.
Aug 13 00:41:00.564276 systemd[1]: session-42.scope: Deactivated successfully.
Aug 13 00:41:00.567088 systemd-logind[1513]: Removed session 42.
Aug 13 00:41:05.631667 systemd[1]: Started sshd@42-172.237.133.249:22-147.75.109.163:43188.service - OpenSSH per-connection server daemon (147.75.109.163:43188).
Aug 13 00:41:05.978838 sshd[4384]: Accepted publickey for core from 147.75.109.163 port 43188 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:41:05.980875 sshd-session[4384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:41:05.986260 systemd-logind[1513]: New session 43 of user core.
Aug 13 00:41:05.994907 systemd[1]: Started session-43.scope - Session 43 of User core.
Aug 13 00:41:06.297447 sshd[4386]: Connection closed by 147.75.109.163 port 43188
Aug 13 00:41:06.298428 sshd-session[4384]: pam_unix(sshd:session): session closed for user core
Aug 13 00:41:06.303374 systemd[1]: sshd@42-172.237.133.249:22-147.75.109.163:43188.service: Deactivated successfully.
Aug 13 00:41:06.306065 systemd[1]: session-43.scope: Deactivated successfully.
Aug 13 00:41:06.307124 systemd-logind[1513]: Session 43 logged out. Waiting for processes to exit.
Aug 13 00:41:06.309667 systemd-logind[1513]: Removed session 43.
Aug 13 00:41:06.946169 containerd[1545]: time="2025-08-13T00:41:06.945818477Z" level=warning msg="container event discarded" container=0289e3451d175567104d519fae9f8a26e9414710c9aa99f125517813bb612a57 type=CONTAINER_CREATED_EVENT
Aug 13 00:41:06.957895 containerd[1545]: time="2025-08-13T00:41:06.957828959Z" level=warning msg="container event discarded" container=0289e3451d175567104d519fae9f8a26e9414710c9aa99f125517813bb612a57 type=CONTAINER_STARTED_EVENT
Aug 13 00:41:07.056209 containerd[1545]: time="2025-08-13T00:41:07.056138752Z" level=warning msg="container event discarded" container=a6c27462bcdb1832f1f105abf2810d63b04e3d6ac29dcdb5f6cb36d7290ede20 type=CONTAINER_CREATED_EVENT
Aug 13 00:41:07.056209 containerd[1545]: time="2025-08-13T00:41:07.056191773Z" level=warning msg="container event discarded" container=b68c2bb981b651b669884c66c782f2b282c0b9ffcdda61ca61e4ee37de5ccaec type=CONTAINER_CREATED_EVENT
Aug 13 00:41:07.056209 containerd[1545]: time="2025-08-13T00:41:07.056202163Z" level=warning msg="container event discarded" container=b68c2bb981b651b669884c66c782f2b282c0b9ffcdda61ca61e4ee37de5ccaec type=CONTAINER_STARTED_EVENT
Aug 13 00:41:07.070849 containerd[1545]: time="2025-08-13T00:41:07.070777266Z" level=warning msg="container event discarded" container=15a0aa8046751ebacf56086a507d05b37155b0ace35b8b85de0b5f8ab3893306 type=CONTAINER_CREATED_EVENT
Aug 13 00:41:07.070849 containerd[1545]: time="2025-08-13T00:41:07.070826566Z" level=warning msg="container event discarded" container=15a0aa8046751ebacf56086a507d05b37155b0ace35b8b85de0b5f8ab3893306 type=CONTAINER_STARTED_EVENT
Aug 13 00:41:07.146214 containerd[1545]: time="2025-08-13T00:41:07.146130129Z" level=warning msg="container event discarded" container=02f5446b2435f202f23d75735dba1b61e0e440835c076ee9f6b3963ef0e31d68 type=CONTAINER_CREATED_EVENT
Aug 13 00:41:07.173633 containerd[1545]: time="2025-08-13T00:41:07.173485477Z" level=warning msg="container event discarded" container=3f7145224f0be30a5fac00947d0cf9f4b96efe3dc827c625867457cffec89283 type=CONTAINER_CREATED_EVENT
Aug 13 00:41:07.574000 containerd[1545]: time="2025-08-13T00:41:07.573909635Z" level=warning msg="container event discarded" container=a6c27462bcdb1832f1f105abf2810d63b04e3d6ac29dcdb5f6cb36d7290ede20 type=CONTAINER_STARTED_EVENT
Aug 13 00:41:08.571570 containerd[1545]: time="2025-08-13T00:41:08.571435422Z" level=warning msg="container event discarded" container=3f7145224f0be30a5fac00947d0cf9f4b96efe3dc827c625867457cffec89283 type=CONTAINER_STARTED_EVENT
Aug 13 00:41:08.590958 containerd[1545]: time="2025-08-13T00:41:08.590869456Z" level=warning msg="container event discarded" container=02f5446b2435f202f23d75735dba1b61e0e440835c076ee9f6b3963ef0e31d68 type=CONTAINER_STARTED_EVENT
Aug 13 00:41:11.358196 systemd[1]: Started sshd@43-172.237.133.249:22-147.75.109.163:38240.service - OpenSSH per-connection server daemon (147.75.109.163:38240).
Aug 13 00:41:11.701337 sshd[4398]: Accepted publickey for core from 147.75.109.163 port 38240 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:41:11.703056 sshd-session[4398]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:41:11.709383 systemd-logind[1513]: New session 44 of user core.
Aug 13 00:41:11.716920 systemd[1]: Started session-44.scope - Session 44 of User core.
Aug 13 00:41:12.026985 sshd[4400]: Connection closed by 147.75.109.163 port 38240
Aug 13 00:41:12.027996 sshd-session[4398]: pam_unix(sshd:session): session closed for user core
Aug 13 00:41:12.032940 systemd[1]: sshd@43-172.237.133.249:22-147.75.109.163:38240.service: Deactivated successfully.
Aug 13 00:41:12.035934 systemd[1]: session-44.scope: Deactivated successfully.
Aug 13 00:41:12.037162 systemd-logind[1513]: Session 44 logged out. Waiting for processes to exit.
Aug 13 00:41:12.039173 systemd-logind[1513]: Removed session 44.
Aug 13 00:41:17.091265 systemd[1]: Started sshd@44-172.237.133.249:22-147.75.109.163:38252.service - OpenSSH per-connection server daemon (147.75.109.163:38252).
Aug 13 00:41:17.428973 sshd[4414]: Accepted publickey for core from 147.75.109.163 port 38252 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:41:17.430841 sshd-session[4414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:41:17.436001 systemd-logind[1513]: New session 45 of user core.
Aug 13 00:41:17.451785 systemd[1]: Started session-45.scope - Session 45 of User core.
Aug 13 00:41:17.735956 sshd[4416]: Connection closed by 147.75.109.163 port 38252
Aug 13 00:41:17.736585 sshd-session[4414]: pam_unix(sshd:session): session closed for user core
Aug 13 00:41:17.740682 systemd-logind[1513]: Session 45 logged out. Waiting for processes to exit.
Aug 13 00:41:17.742004 systemd[1]: sshd@44-172.237.133.249:22-147.75.109.163:38252.service: Deactivated successfully.
Aug 13 00:41:17.744382 systemd[1]: session-45.scope: Deactivated successfully.
Aug 13 00:41:17.746626 systemd-logind[1513]: Removed session 45.
Aug 13 00:41:21.308934 kubelet[2778]: E0813 00:41:21.308804 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:41:21.876289 containerd[1545]: time="2025-08-13T00:41:21.876129997Z" level=warning msg="container event discarded" container=ac52f87e4d0813202bd2fe8db8bdf469a7028a263e13e4462af1037209babf29 type=CONTAINER_CREATED_EVENT
Aug 13 00:41:21.876289 containerd[1545]: time="2025-08-13T00:41:21.876253257Z" level=warning msg="container event discarded" container=ac52f87e4d0813202bd2fe8db8bdf469a7028a263e13e4462af1037209babf29 type=CONTAINER_STARTED_EVENT
Aug 13 00:41:21.876289 containerd[1545]: time="2025-08-13T00:41:21.876263867Z" level=warning msg="container event discarded" container=4d1fe52aafb83eff4adb098703a89a3a4e649f89a0796c1543cb73c5a36af42a type=CONTAINER_CREATED_EVENT
Aug 13 00:41:21.876289 containerd[1545]: time="2025-08-13T00:41:21.876271087Z" level=warning msg="container event discarded" container=4d1fe52aafb83eff4adb098703a89a3a4e649f89a0796c1543cb73c5a36af42a type=CONTAINER_STARTED_EVENT
Aug 13 00:41:21.917615 containerd[1545]: time="2025-08-13T00:41:21.917503062Z" level=warning msg="container event discarded" container=29b412aa8fc63828af21435bceaf920bd1e857b0b50af936f4d1c6533ae01a0f type=CONTAINER_CREATED_EVENT
Aug 13 00:41:22.028025 containerd[1545]: time="2025-08-13T00:41:22.027936502Z" level=warning msg="container event discarded" container=29b412aa8fc63828af21435bceaf920bd1e857b0b50af936f4d1c6533ae01a0f type=CONTAINER_STARTED_EVENT
Aug 13 00:41:22.153351 containerd[1545]: time="2025-08-13T00:41:22.153261860Z" level=warning msg="container event discarded" container=cb11ea2cd087cabfe6658d57f7ecc8bc4a526e928c57af3a5371ed13e24e2109 type=CONTAINER_CREATED_EVENT
Aug 13 00:41:22.153351 containerd[1545]: time="2025-08-13T00:41:22.153318270Z" level=warning msg="container event discarded" container=cb11ea2cd087cabfe6658d57f7ecc8bc4a526e928c57af3a5371ed13e24e2109 type=CONTAINER_STARTED_EVENT
Aug 13 00:41:22.799750 systemd[1]: Started sshd@45-172.237.133.249:22-147.75.109.163:34466.service - OpenSSH per-connection server daemon (147.75.109.163:34466).
Aug 13 00:41:23.144397 sshd[4429]: Accepted publickey for core from 147.75.109.163 port 34466 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:41:23.146178 sshd-session[4429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:41:23.152829 systemd-logind[1513]: New session 46 of user core.
Aug 13 00:41:23.159711 systemd[1]: Started session-46.scope - Session 46 of User core.
Aug 13 00:41:23.456785 sshd[4431]: Connection closed by 147.75.109.163 port 34466
Aug 13 00:41:23.457728 sshd-session[4429]: pam_unix(sshd:session): session closed for user core
Aug 13 00:41:23.461829 systemd-logind[1513]: Session 46 logged out. Waiting for processes to exit.
Aug 13 00:41:23.462315 systemd[1]: sshd@45-172.237.133.249:22-147.75.109.163:34466.service: Deactivated successfully.
Aug 13 00:41:23.464404 systemd[1]: session-46.scope: Deactivated successfully.
Aug 13 00:41:23.466254 systemd-logind[1513]: Removed session 46.
Aug 13 00:41:28.522704 systemd[1]: Started sshd@46-172.237.133.249:22-147.75.109.163:51330.service - OpenSSH per-connection server daemon (147.75.109.163:51330).
Aug 13 00:41:28.867101 sshd[4443]: Accepted publickey for core from 147.75.109.163 port 51330 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:41:28.868872 sshd-session[4443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:41:28.874590 systemd-logind[1513]: New session 47 of user core.
Aug 13 00:41:28.879722 systemd[1]: Started session-47.scope - Session 47 of User core.
Aug 13 00:41:29.170664 sshd[4445]: Connection closed by 147.75.109.163 port 51330
Aug 13 00:41:29.171462 sshd-session[4443]: pam_unix(sshd:session): session closed for user core
Aug 13 00:41:29.176847 systemd-logind[1513]: Session 47 logged out. Waiting for processes to exit.
Aug 13 00:41:29.177051 systemd[1]: sshd@46-172.237.133.249:22-147.75.109.163:51330.service: Deactivated successfully.
Aug 13 00:41:29.179127 systemd[1]: session-47.scope: Deactivated successfully.
Aug 13 00:41:29.181428 systemd-logind[1513]: Removed session 47.
Aug 13 00:41:34.231779 systemd[1]: Started sshd@47-172.237.133.249:22-147.75.109.163:51344.service - OpenSSH per-connection server daemon (147.75.109.163:51344).
Aug 13 00:41:34.575692 sshd[4457]: Accepted publickey for core from 147.75.109.163 port 51344 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:41:34.577500 sshd-session[4457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:41:34.583641 systemd-logind[1513]: New session 48 of user core.
Aug 13 00:41:34.588693 systemd[1]: Started session-48.scope - Session 48 of User core.
Aug 13 00:41:34.884690 sshd[4459]: Connection closed by 147.75.109.163 port 51344
Aug 13 00:41:34.885285 sshd-session[4457]: pam_unix(sshd:session): session closed for user core
Aug 13 00:41:34.890422 systemd[1]: sshd@47-172.237.133.249:22-147.75.109.163:51344.service: Deactivated successfully.
Aug 13 00:41:34.892919 systemd[1]: session-48.scope: Deactivated successfully.
Aug 13 00:41:34.894155 systemd-logind[1513]: Session 48 logged out. Waiting for processes to exit.
Aug 13 00:41:34.895726 systemd-logind[1513]: Removed session 48.
Aug 13 00:41:35.731396 containerd[1545]: time="2025-08-13T00:41:35.731323455Z" level=warning msg="container event discarded" container=6884cf5513721e2a4877d687342a22f4c96833d556dfc855888a4c2160d6c7f6 type=CONTAINER_CREATED_EVENT
Aug 13 00:41:35.885166 containerd[1545]: time="2025-08-13T00:41:35.885095722Z" level=warning msg="container event discarded" container=6884cf5513721e2a4877d687342a22f4c96833d556dfc855888a4c2160d6c7f6 type=CONTAINER_STARTED_EVENT
Aug 13 00:41:36.020643 containerd[1545]: time="2025-08-13T00:41:36.020469450Z" level=warning msg="container event discarded" container=6884cf5513721e2a4877d687342a22f4c96833d556dfc855888a4c2160d6c7f6 type=CONTAINER_STOPPED_EVENT
Aug 13 00:41:36.607739 containerd[1545]: time="2025-08-13T00:41:36.607667953Z" level=warning msg="container event discarded" container=d7b1ad6bb62f776850c0767252fdb330501b773ad7cde93aabe2c17ca2bb9c4a type=CONTAINER_CREATED_EVENT
Aug 13 00:41:36.890223 containerd[1545]: time="2025-08-13T00:41:36.890139989Z" level=warning msg="container event discarded" container=d7b1ad6bb62f776850c0767252fdb330501b773ad7cde93aabe2c17ca2bb9c4a type=CONTAINER_STARTED_EVENT
Aug 13 00:41:37.070648 containerd[1545]: time="2025-08-13T00:41:37.070558991Z" level=warning msg="container event discarded" container=d7b1ad6bb62f776850c0767252fdb330501b773ad7cde93aabe2c17ca2bb9c4a type=CONTAINER_STOPPED_EVENT
Aug 13 00:41:37.634592 containerd[1545]: time="2025-08-13T00:41:37.634511549Z" level=warning msg="container event discarded" container=6df3cbee06e3fadd8fff80e7a1c0811dcf873a48e25f6cd952cf4fbf7f3913f5 type=CONTAINER_CREATED_EVENT
Aug 13 00:41:38.089506 containerd[1545]: time="2025-08-13T00:41:38.089370179Z" level=warning msg="container event discarded" container=6df3cbee06e3fadd8fff80e7a1c0811dcf873a48e25f6cd952cf4fbf7f3913f5 type=CONTAINER_STARTED_EVENT
Aug 13 00:41:38.186366 containerd[1545]: time="2025-08-13T00:41:38.186283449Z" level=warning msg="container event discarded" container=6df3cbee06e3fadd8fff80e7a1c0811dcf873a48e25f6cd952cf4fbf7f3913f5 type=CONTAINER_STOPPED_EVENT
Aug 13 00:41:38.221758 containerd[1545]: time="2025-08-13T00:41:38.221687239Z" level=warning msg="container event discarded" container=7fae0ac28f3ed5685ff286c721cadd9df2bb0270a3227f57feb331a286f95e9c type=CONTAINER_CREATED_EVENT
Aug 13 00:41:38.337669 containerd[1545]: time="2025-08-13T00:41:38.337607369Z" level=warning msg="container event discarded" container=7fae0ac28f3ed5685ff286c721cadd9df2bb0270a3227f57feb331a286f95e9c type=CONTAINER_STARTED_EVENT
Aug 13 00:41:38.625613 containerd[1545]: time="2025-08-13T00:41:38.625485865Z" level=warning msg="container event discarded" container=b7ff2f2be46d4bf202790029c8aee6482605d2cdc69f9e279d9311d4e8b59121 type=CONTAINER_CREATED_EVENT
Aug 13 00:41:38.976197 containerd[1545]: time="2025-08-13T00:41:38.976102175Z" level=warning msg="container event discarded" container=b7ff2f2be46d4bf202790029c8aee6482605d2cdc69f9e279d9311d4e8b59121 type=CONTAINER_STARTED_EVENT
Aug 13 00:41:39.083585 containerd[1545]: time="2025-08-13T00:41:39.083488862Z" level=warning msg="container event discarded" container=b7ff2f2be46d4bf202790029c8aee6482605d2cdc69f9e279d9311d4e8b59121 type=CONTAINER_STOPPED_EVENT
Aug 13 00:41:39.308727 kubelet[2778]: E0813 00:41:39.308590 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:41:39.640275 containerd[1545]: time="2025-08-13T00:41:39.640198646Z" level=warning msg="container event discarded" container=aebad42a9b0d6f4199fe749efc961e77724ab71e808995ff121a511cc5ec84a0 type=CONTAINER_CREATED_EVENT
Aug 13 00:41:39.914289 containerd[1545]: time="2025-08-13T00:41:39.913851807Z" level=warning msg="container event discarded" container=aebad42a9b0d6f4199fe749efc961e77724ab71e808995ff121a511cc5ec84a0 type=CONTAINER_STARTED_EVENT
Aug 13 00:41:39.954325 systemd[1]: Started sshd@48-172.237.133.249:22-147.75.109.163:52894.service - OpenSSH per-connection server daemon (147.75.109.163:52894).
Aug 13 00:41:40.316982 sshd[4471]: Accepted publickey for core from 147.75.109.163 port 52894 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:41:40.319820 sshd-session[4471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:41:40.328898 systemd-logind[1513]: New session 49 of user core.
Aug 13 00:41:40.340002 systemd[1]: Started session-49.scope - Session 49 of User core.
Aug 13 00:41:40.719084 sshd[4473]: Connection closed by 147.75.109.163 port 52894
Aug 13 00:41:40.720090 sshd-session[4471]: pam_unix(sshd:session): session closed for user core
Aug 13 00:41:40.729203 systemd[1]: sshd@48-172.237.133.249:22-147.75.109.163:52894.service: Deactivated successfully.
Aug 13 00:41:40.734201 systemd[1]: session-49.scope: Deactivated successfully.
Aug 13 00:41:40.736901 systemd-logind[1513]: Session 49 logged out. Waiting for processes to exit.
Aug 13 00:41:40.742436 systemd-logind[1513]: Removed session 49.
Aug 13 00:41:45.783021 systemd[1]: Started sshd@49-172.237.133.249:22-147.75.109.163:52900.service - OpenSSH per-connection server daemon (147.75.109.163:52900).
Aug 13 00:41:46.132754 sshd[4484]: Accepted publickey for core from 147.75.109.163 port 52900 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:41:46.134427 sshd-session[4484]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:41:46.141421 systemd-logind[1513]: New session 50 of user core.
Aug 13 00:41:46.148670 systemd[1]: Started session-50.scope - Session 50 of User core.
Aug 13 00:41:46.442891 sshd[4486]: Connection closed by 147.75.109.163 port 52900
Aug 13 00:41:46.443905 sshd-session[4484]: pam_unix(sshd:session): session closed for user core
Aug 13 00:41:46.448754 systemd[1]: sshd@49-172.237.133.249:22-147.75.109.163:52900.service: Deactivated successfully.
Aug 13 00:41:46.452146 systemd[1]: session-50.scope: Deactivated successfully.
Aug 13 00:41:46.453093 systemd-logind[1513]: Session 50 logged out. Waiting for processes to exit.
Aug 13 00:41:46.455436 systemd-logind[1513]: Removed session 50.
Aug 13 00:41:47.308758 kubelet[2778]: E0813 00:41:47.308707 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:41:51.513938 systemd[1]: Started sshd@50-172.237.133.249:22-147.75.109.163:51698.service - OpenSSH per-connection server daemon (147.75.109.163:51698).
Aug 13 00:41:51.871021 sshd[4498]: Accepted publickey for core from 147.75.109.163 port 51698 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:41:51.872833 sshd-session[4498]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:41:51.879760 systemd-logind[1513]: New session 51 of user core.
Aug 13 00:41:51.883676 systemd[1]: Started session-51.scope - Session 51 of User core.
Aug 13 00:41:52.170810 sshd[4500]: Connection closed by 147.75.109.163 port 51698
Aug 13 00:41:52.171635 sshd-session[4498]: pam_unix(sshd:session): session closed for user core
Aug 13 00:41:52.176561 systemd[1]: sshd@50-172.237.133.249:22-147.75.109.163:51698.service: Deactivated successfully.
Aug 13 00:41:52.178985 systemd[1]: session-51.scope: Deactivated successfully.
Aug 13 00:41:52.180453 systemd-logind[1513]: Session 51 logged out. Waiting for processes to exit.
Aug 13 00:41:52.182081 systemd-logind[1513]: Removed session 51.
Aug 13 00:41:55.308494 kubelet[2778]: E0813 00:41:55.308457 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:41:57.247239 systemd[1]: Started sshd@51-172.237.133.249:22-147.75.109.163:51712.service - OpenSSH per-connection server daemon (147.75.109.163:51712).
Aug 13 00:41:57.609392 sshd[4513]: Accepted publickey for core from 147.75.109.163 port 51712 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:41:57.611674 sshd-session[4513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:41:57.619136 systemd-logind[1513]: New session 52 of user core.
Aug 13 00:41:57.627719 systemd[1]: Started session-52.scope - Session 52 of User core.
Aug 13 00:41:57.915109 sshd[4515]: Connection closed by 147.75.109.163 port 51712
Aug 13 00:41:57.915887 sshd-session[4513]: pam_unix(sshd:session): session closed for user core
Aug 13 00:41:57.920082 systemd-logind[1513]: Session 52 logged out. Waiting for processes to exit.
Aug 13 00:41:57.921092 systemd[1]: sshd@51-172.237.133.249:22-147.75.109.163:51712.service: Deactivated successfully.
Aug 13 00:41:57.923558 systemd[1]: session-52.scope: Deactivated successfully.
Aug 13 00:41:57.925884 systemd-logind[1513]: Removed session 52.
Aug 13 00:42:00.309731 kubelet[2778]: E0813 00:42:00.308658 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:42:02.985276 systemd[1]: Started sshd@52-172.237.133.249:22-147.75.109.163:55418.service - OpenSSH per-connection server daemon (147.75.109.163:55418).
Aug 13 00:42:03.337306 sshd[4527]: Accepted publickey for core from 147.75.109.163 port 55418 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:42:03.339006 sshd-session[4527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:42:03.347142 systemd-logind[1513]: New session 53 of user core.
Aug 13 00:42:03.351672 systemd[1]: Started session-53.scope - Session 53 of User core.
Aug 13 00:42:03.648435 sshd[4529]: Connection closed by 147.75.109.163 port 55418
Aug 13 00:42:03.649259 sshd-session[4527]: pam_unix(sshd:session): session closed for user core
Aug 13 00:42:03.654600 systemd[1]: sshd@52-172.237.133.249:22-147.75.109.163:55418.service: Deactivated successfully.
Aug 13 00:42:03.657932 systemd[1]: session-53.scope: Deactivated successfully.
Aug 13 00:42:03.659556 systemd-logind[1513]: Session 53 logged out. Waiting for processes to exit.
Aug 13 00:42:03.661512 systemd-logind[1513]: Removed session 53.
Aug 13 00:42:08.709829 systemd[1]: Started sshd@53-172.237.133.249:22-147.75.109.163:36524.service - OpenSSH per-connection server daemon (147.75.109.163:36524).
Aug 13 00:42:09.052614 sshd[4541]: Accepted publickey for core from 147.75.109.163 port 36524 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:42:09.054391 sshd-session[4541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:42:09.060010 systemd-logind[1513]: New session 54 of user core.
Aug 13 00:42:09.067657 systemd[1]: Started session-54.scope - Session 54 of User core.
Aug 13 00:42:09.354434 sshd[4543]: Connection closed by 147.75.109.163 port 36524
Aug 13 00:42:09.355953 sshd-session[4541]: pam_unix(sshd:session): session closed for user core
Aug 13 00:42:09.363498 systemd[1]: sshd@53-172.237.133.249:22-147.75.109.163:36524.service: Deactivated successfully.
Aug 13 00:42:09.367164 systemd[1]: session-54.scope: Deactivated successfully.
Aug 13 00:42:09.368393 systemd-logind[1513]: Session 54 logged out. Waiting for processes to exit.
Aug 13 00:42:09.371614 systemd-logind[1513]: Removed session 54.
Aug 13 00:42:14.310232 kubelet[2778]: E0813 00:42:14.309989 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:42:14.419800 systemd[1]: Started sshd@54-172.237.133.249:22-147.75.109.163:36532.service - OpenSSH per-connection server daemon (147.75.109.163:36532).
Aug 13 00:42:14.789606 sshd[4557]: Accepted publickey for core from 147.75.109.163 port 36532 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:42:14.791533 sshd-session[4557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:42:14.797593 systemd-logind[1513]: New session 55 of user core.
Aug 13 00:42:14.804672 systemd[1]: Started session-55.scope - Session 55 of User core.
Aug 13 00:42:15.106169 sshd[4559]: Connection closed by 147.75.109.163 port 36532
Aug 13 00:42:15.107012 sshd-session[4557]: pam_unix(sshd:session): session closed for user core
Aug 13 00:42:15.112333 systemd-logind[1513]: Session 55 logged out. Waiting for processes to exit.
Aug 13 00:42:15.113307 systemd[1]: sshd@54-172.237.133.249:22-147.75.109.163:36532.service: Deactivated successfully.
Aug 13 00:42:15.116144 systemd[1]: session-55.scope: Deactivated successfully.
Aug 13 00:42:15.118297 systemd-logind[1513]: Removed session 55.
Aug 13 00:42:20.181628 systemd[1]: Started sshd@55-172.237.133.249:22-147.75.109.163:49696.service - OpenSSH per-connection server daemon (147.75.109.163:49696).
Aug 13 00:42:20.534848 sshd[4571]: Accepted publickey for core from 147.75.109.163 port 49696 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:42:20.537189 sshd-session[4571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:42:20.543737 systemd-logind[1513]: New session 56 of user core.
Aug 13 00:42:20.551738 systemd[1]: Started session-56.scope - Session 56 of User core.
Aug 13 00:42:20.855012 sshd[4573]: Connection closed by 147.75.109.163 port 49696
Aug 13 00:42:20.855311 sshd-session[4571]: pam_unix(sshd:session): session closed for user core
Aug 13 00:42:20.861940 systemd-logind[1513]: Session 56 logged out. Waiting for processes to exit.
Aug 13 00:42:20.862272 systemd[1]: sshd@55-172.237.133.249:22-147.75.109.163:49696.service: Deactivated successfully.
Aug 13 00:42:20.865211 systemd[1]: session-56.scope: Deactivated successfully.
Aug 13 00:42:20.868219 systemd-logind[1513]: Removed session 56.
Aug 13 00:42:25.921106 systemd[1]: Started sshd@56-172.237.133.249:22-147.75.109.163:49706.service - OpenSSH per-connection server daemon (147.75.109.163:49706).
Aug 13 00:42:26.271571 sshd[4587]: Accepted publickey for core from 147.75.109.163 port 49706 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:42:26.273392 sshd-session[4587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:42:26.279181 systemd-logind[1513]: New session 57 of user core.
Aug 13 00:42:26.283676 systemd[1]: Started session-57.scope - Session 57 of User core.
Aug 13 00:42:26.594482 sshd[4589]: Connection closed by 147.75.109.163 port 49706
Aug 13 00:42:26.595644 sshd-session[4587]: pam_unix(sshd:session): session closed for user core
Aug 13 00:42:26.600088 systemd-logind[1513]: Session 57 logged out. Waiting for processes to exit.
Aug 13 00:42:26.600443 systemd[1]: sshd@56-172.237.133.249:22-147.75.109.163:49706.service: Deactivated successfully.
Aug 13 00:42:26.602658 systemd[1]: session-57.scope: Deactivated successfully.
Aug 13 00:42:26.604968 systemd-logind[1513]: Removed session 57.
Aug 13 00:42:26.661262 systemd[1]: Started sshd@57-172.237.133.249:22-147.75.109.163:49720.service - OpenSSH per-connection server daemon (147.75.109.163:49720).
Aug 13 00:42:27.021545 sshd[4601]: Accepted publickey for core from 147.75.109.163 port 49720 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:42:27.023554 sshd-session[4601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:42:27.029966 systemd-logind[1513]: New session 58 of user core.
Aug 13 00:42:27.032671 systemd[1]: Started session-58.scope - Session 58 of User core.
Aug 13 00:42:28.547562 containerd[1545]: time="2025-08-13T00:42:28.547355918Z" level=info msg="StopContainer for \"7fae0ac28f3ed5685ff286c721cadd9df2bb0270a3227f57feb331a286f95e9c\" with timeout 30 (s)"
Aug 13 00:42:28.550128 containerd[1545]: time="2025-08-13T00:42:28.548765182Z" level=info msg="Stop container \"7fae0ac28f3ed5685ff286c721cadd9df2bb0270a3227f57feb331a286f95e9c\" with signal terminated"
Aug 13 00:42:28.583749 systemd[1]: cri-containerd-7fae0ac28f3ed5685ff286c721cadd9df2bb0270a3227f57feb331a286f95e9c.scope: Deactivated successfully.
Aug 13 00:42:28.595972 containerd[1545]: time="2025-08-13T00:42:28.595902003Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7fae0ac28f3ed5685ff286c721cadd9df2bb0270a3227f57feb331a286f95e9c\" id:\"7fae0ac28f3ed5685ff286c721cadd9df2bb0270a3227f57feb331a286f95e9c\" pid:3343 exited_at:{seconds:1755045748 nanos:591643469}"
Aug 13 00:42:28.602563 containerd[1545]: time="2025-08-13T00:42:28.602482534Z" level=info msg="received exit event container_id:\"7fae0ac28f3ed5685ff286c721cadd9df2bb0270a3227f57feb331a286f95e9c\" id:\"7fae0ac28f3ed5685ff286c721cadd9df2bb0270a3227f57feb331a286f95e9c\" pid:3343 exited_at:{seconds:1755045748 nanos:591643469}"
Aug 13 00:42:28.610368 containerd[1545]: time="2025-08-13T00:42:28.610300139Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 13 00:42:28.624283 containerd[1545]: time="2025-08-13T00:42:28.624223694Z" level=info msg="TaskExit event in podsandbox handler container_id:\"aebad42a9b0d6f4199fe749efc961e77724ab71e808995ff121a511cc5ec84a0\" id:\"d0c2a31950eda73a360f31e79510981ebf1a1eee57e1825a0d2bfbf1cbe6c6e6\" pid:4632 exited_at:{seconds:1755045748 nanos:623116480}"
Aug 13 00:42:28.628812 containerd[1545]: time="2025-08-13T00:42:28.628636097Z" level=info msg="StopContainer for \"aebad42a9b0d6f4199fe749efc961e77724ab71e808995ff121a511cc5ec84a0\" with timeout 2 (s)"
Aug 13 00:42:28.629357 containerd[1545]: time="2025-08-13T00:42:28.629334830Z" level=info msg="Stop container \"aebad42a9b0d6f4199fe749efc961e77724ab71e808995ff121a511cc5ec84a0\" with signal terminated"
Aug 13 00:42:28.641206 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7fae0ac28f3ed5685ff286c721cadd9df2bb0270a3227f57feb331a286f95e9c-rootfs.mount: Deactivated successfully.
Aug 13 00:42:28.654385 systemd-networkd[1464]: lxc_health: Link DOWN
Aug 13 00:42:28.654571 systemd-networkd[1464]: lxc_health: Lost carrier
Aug 13 00:42:28.669079 containerd[1545]: time="2025-08-13T00:42:28.668846236Z" level=info msg="StopContainer for \"7fae0ac28f3ed5685ff286c721cadd9df2bb0270a3227f57feb331a286f95e9c\" returns successfully"
Aug 13 00:42:28.671497 containerd[1545]: time="2025-08-13T00:42:28.671110983Z" level=info msg="StopPodSandbox for \"cb11ea2cd087cabfe6658d57f7ecc8bc4a526e928c57af3a5371ed13e24e2109\""
Aug 13 00:42:28.671497 containerd[1545]: time="2025-08-13T00:42:28.671271654Z" level=info msg="Container to stop \"7fae0ac28f3ed5685ff286c721cadd9df2bb0270a3227f57feb331a286f95e9c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:42:28.673065 systemd[1]: cri-containerd-aebad42a9b0d6f4199fe749efc961e77724ab71e808995ff121a511cc5ec84a0.scope: Deactivated successfully.
Aug 13 00:42:28.673493 systemd[1]: cri-containerd-aebad42a9b0d6f4199fe749efc961e77724ab71e808995ff121a511cc5ec84a0.scope: Consumed 9.193s CPU time, 123.7M memory peak, 144K read from disk, 13.3M written to disk.
Aug 13 00:42:28.679313 containerd[1545]: time="2025-08-13T00:42:28.679183489Z" level=info msg="received exit event container_id:\"aebad42a9b0d6f4199fe749efc961e77724ab71e808995ff121a511cc5ec84a0\" id:\"aebad42a9b0d6f4199fe749efc961e77724ab71e808995ff121a511cc5ec84a0\" pid:3415 exited_at:{seconds:1755045748 nanos:678453266}"
Aug 13 00:42:28.680353 containerd[1545]: time="2025-08-13T00:42:28.680314503Z" level=info msg="TaskExit event in podsandbox handler container_id:\"aebad42a9b0d6f4199fe749efc961e77724ab71e808995ff121a511cc5ec84a0\" id:\"aebad42a9b0d6f4199fe749efc961e77724ab71e808995ff121a511cc5ec84a0\" pid:3415 exited_at:{seconds:1755045748 nanos:678453266}"
Aug 13 00:42:28.699811 systemd[1]: cri-containerd-cb11ea2cd087cabfe6658d57f7ecc8bc4a526e928c57af3a5371ed13e24e2109.scope: Deactivated successfully.
Aug 13 00:42:28.707250 containerd[1545]: time="2025-08-13T00:42:28.707206039Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cb11ea2cd087cabfe6658d57f7ecc8bc4a526e928c57af3a5371ed13e24e2109\" id:\"cb11ea2cd087cabfe6658d57f7ecc8bc4a526e928c57af3a5371ed13e24e2109\" pid:3007 exit_status:137 exited_at:{seconds:1755045748 nanos:706783467}"
Aug 13 00:42:28.734742 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aebad42a9b0d6f4199fe749efc961e77724ab71e808995ff121a511cc5ec84a0-rootfs.mount: Deactivated successfully.
Aug 13 00:42:28.748074 containerd[1545]: time="2025-08-13T00:42:28.747819169Z" level=info msg="StopContainer for \"aebad42a9b0d6f4199fe749efc961e77724ab71e808995ff121a511cc5ec84a0\" returns successfully"
Aug 13 00:42:28.749307 containerd[1545]: time="2025-08-13T00:42:28.748299180Z" level=info msg="StopPodSandbox for \"ac52f87e4d0813202bd2fe8db8bdf469a7028a263e13e4462af1037209babf29\""
Aug 13 00:42:28.749307 containerd[1545]: time="2025-08-13T00:42:28.748401860Z" level=info msg="Container to stop \"6df3cbee06e3fadd8fff80e7a1c0811dcf873a48e25f6cd952cf4fbf7f3913f5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:42:28.749307 containerd[1545]: time="2025-08-13T00:42:28.748414860Z" level=info msg="Container to stop \"aebad42a9b0d6f4199fe749efc961e77724ab71e808995ff121a511cc5ec84a0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:42:28.749307 containerd[1545]: time="2025-08-13T00:42:28.748425990Z" level=info msg="Container to stop \"b7ff2f2be46d4bf202790029c8aee6482605d2cdc69f9e279d9311d4e8b59121\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:42:28.749307 containerd[1545]: time="2025-08-13T00:42:28.748435290Z" level=info msg="Container to stop \"6884cf5513721e2a4877d687342a22f4c96833d556dfc855888a4c2160d6c7f6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:42:28.749307 containerd[1545]: time="2025-08-13T00:42:28.748443790Z" level=info msg="Container to stop \"d7b1ad6bb62f776850c0767252fdb330501b773ad7cde93aabe2c17ca2bb9c4a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:42:28.763816 systemd[1]: cri-containerd-ac52f87e4d0813202bd2fe8db8bdf469a7028a263e13e4462af1037209babf29.scope: Deactivated successfully.
Aug 13 00:42:28.779757 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb11ea2cd087cabfe6658d57f7ecc8bc4a526e928c57af3a5371ed13e24e2109-rootfs.mount: Deactivated successfully.
Aug 13 00:42:28.783684 containerd[1545]: time="2025-08-13T00:42:28.783639903Z" level=info msg="shim disconnected" id=cb11ea2cd087cabfe6658d57f7ecc8bc4a526e928c57af3a5371ed13e24e2109 namespace=k8s.io
Aug 13 00:42:28.783684 containerd[1545]: time="2025-08-13T00:42:28.783682343Z" level=warning msg="cleaning up after shim disconnected" id=cb11ea2cd087cabfe6658d57f7ecc8bc4a526e928c57af3a5371ed13e24e2109 namespace=k8s.io
Aug 13 00:42:28.783875 containerd[1545]: time="2025-08-13T00:42:28.783696603Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:42:28.800258 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac52f87e4d0813202bd2fe8db8bdf469a7028a263e13e4462af1037209babf29-rootfs.mount: Deactivated successfully.
Aug 13 00:42:28.810402 containerd[1545]: time="2025-08-13T00:42:28.810322318Z" level=info msg="shim disconnected" id=ac52f87e4d0813202bd2fe8db8bdf469a7028a263e13e4462af1037209babf29 namespace=k8s.io
Aug 13 00:42:28.810402 containerd[1545]: time="2025-08-13T00:42:28.810383128Z" level=warning msg="cleaning up after shim disconnected" id=ac52f87e4d0813202bd2fe8db8bdf469a7028a263e13e4462af1037209babf29 namespace=k8s.io
Aug 13 00:42:28.810402 containerd[1545]: time="2025-08-13T00:42:28.810392528Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:42:28.817744 containerd[1545]: time="2025-08-13T00:42:28.817694552Z" level=info msg="received exit event sandbox_id:\"cb11ea2cd087cabfe6658d57f7ecc8bc4a526e928c57af3a5371ed13e24e2109\" exit_status:137 exited_at:{seconds:1755045748 nanos:706783467}"
Aug 13 00:42:28.821396 containerd[1545]: time="2025-08-13T00:42:28.821357533Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ac52f87e4d0813202bd2fe8db8bdf469a7028a263e13e4462af1037209babf29\" id:\"ac52f87e4d0813202bd2fe8db8bdf469a7028a263e13e4462af1037209babf29\" pid:2916 exit_status:137 exited_at:{seconds:1755045748 nanos:767437181}"
Aug 13 00:42:28.821558 containerd[1545]: time="2025-08-13T00:42:28.821506014Z" level=info msg="received exit event sandbox_id:\"ac52f87e4d0813202bd2fe8db8bdf469a7028a263e13e4462af1037209babf29\" exit_status:137 exited_at:{seconds:1755045748 nanos:767437181}"
Aug 13 00:42:28.821942 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cb11ea2cd087cabfe6658d57f7ecc8bc4a526e928c57af3a5371ed13e24e2109-shm.mount: Deactivated successfully.
Aug 13 00:42:28.823261 containerd[1545]: time="2025-08-13T00:42:28.821831905Z" level=info msg="TearDown network for sandbox \"cb11ea2cd087cabfe6658d57f7ecc8bc4a526e928c57af3a5371ed13e24e2109\" successfully"
Aug 13 00:42:28.823631 containerd[1545]: time="2025-08-13T00:42:28.823410690Z" level=info msg="StopPodSandbox for \"cb11ea2cd087cabfe6658d57f7ecc8bc4a526e928c57af3a5371ed13e24e2109\" returns successfully"
Aug 13 00:42:28.823631 containerd[1545]: time="2025-08-13T00:42:28.821984816Z" level=info msg="TearDown network for sandbox \"ac52f87e4d0813202bd2fe8db8bdf469a7028a263e13e4462af1037209babf29\" successfully"
Aug 13 00:42:28.823631 containerd[1545]: time="2025-08-13T00:42:28.823549530Z" level=info msg="StopPodSandbox for \"ac52f87e4d0813202bd2fe8db8bdf469a7028a263e13e4462af1037209babf29\" returns successfully"
Aug 13 00:42:29.008037 kubelet[2778]: I0813 00:42:29.007966 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/44a121c6-5869-4359-934f-20fd0b863ad3-host-proc-sys-net\") pod \"44a121c6-5869-4359-934f-20fd0b863ad3\" (UID: \"44a121c6-5869-4359-934f-20fd0b863ad3\") "
Aug 13 00:42:29.008037 kubelet[2778]: I0813 00:42:29.008026 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/44a121c6-5869-4359-934f-20fd0b863ad3-bpf-maps\") pod \"44a121c6-5869-4359-934f-20fd0b863ad3\" (UID: \"44a121c6-5869-4359-934f-20fd0b863ad3\") "
Aug 13 00:42:29.008897 kubelet[2778]: I0813 00:42:29.008053 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/44a121c6-5869-4359-934f-20fd0b863ad3-xtables-lock\") pod \"44a121c6-5869-4359-934f-20fd0b863ad3\" (UID: \"44a121c6-5869-4359-934f-20fd0b863ad3\") "
Aug 13 00:42:29.008897 kubelet[2778]: I0813 00:42:29.008087 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c3f90c0b-aecb-47e6-a95e-d3afb284eda4-cilium-config-path\") pod \"c3f90c0b-aecb-47e6-a95e-d3afb284eda4\" (UID: \"c3f90c0b-aecb-47e6-a95e-d3afb284eda4\") "
Aug 13 00:42:29.008897 kubelet[2778]: I0813 00:42:29.008112 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/44a121c6-5869-4359-934f-20fd0b863ad3-cilium-cgroup\") pod \"44a121c6-5869-4359-934f-20fd0b863ad3\" (UID: \"44a121c6-5869-4359-934f-20fd0b863ad3\") "
Aug 13 00:42:29.008897 kubelet[2778]: I0813 00:42:29.008129 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/44a121c6-5869-4359-934f-20fd0b863ad3-etc-cni-netd\") pod \"44a121c6-5869-4359-934f-20fd0b863ad3\" (UID: \"44a121c6-5869-4359-934f-20fd0b863ad3\") "
Aug 13 00:42:29.008897 kubelet[2778]: I0813 00:42:29.008157 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/44a121c6-5869-4359-934f-20fd0b863ad3-hubble-tls\") pod \"44a121c6-5869-4359-934f-20fd0b863ad3\" (UID: \"44a121c6-5869-4359-934f-20fd0b863ad3\") "
Aug 13 00:42:29.008897 kubelet[2778]: I0813 00:42:29.008172 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/44a121c6-5869-4359-934f-20fd0b863ad3-cilium-run\") pod \"44a121c6-5869-4359-934f-20fd0b863ad3\" (UID: \"44a121c6-5869-4359-934f-20fd0b863ad3\") "
Aug 13 00:42:29.009213 kubelet[2778]: I0813 00:42:29.008191 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/44a121c6-5869-4359-934f-20fd0b863ad3-cilium-config-path\") pod \"44a121c6-5869-4359-934f-20fd0b863ad3\" (UID: \"44a121c6-5869-4359-934f-20fd0b863ad3\") "
Aug 13 00:42:29.009213 kubelet[2778]: I0813 00:42:29.008218 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/44a121c6-5869-4359-934f-20fd0b863ad3-clustermesh-secrets\") pod \"44a121c6-5869-4359-934f-20fd0b863ad3\" (UID: \"44a121c6-5869-4359-934f-20fd0b863ad3\") "
Aug 13 00:42:29.009213 kubelet[2778]: I0813 00:42:29.008240 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7n75q\" (UniqueName: \"kubernetes.io/projected/44a121c6-5869-4359-934f-20fd0b863ad3-kube-api-access-7n75q\") pod \"44a121c6-5869-4359-934f-20fd0b863ad3\" (UID: \"44a121c6-5869-4359-934f-20fd0b863ad3\") "
Aug 13 00:42:29.009213 kubelet[2778]: I0813 00:42:29.008259 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lx5d6\" (UniqueName: \"kubernetes.io/projected/c3f90c0b-aecb-47e6-a95e-d3afb284eda4-kube-api-access-lx5d6\") pod \"c3f90c0b-aecb-47e6-a95e-d3afb284eda4\" (UID: \"c3f90c0b-aecb-47e6-a95e-d3afb284eda4\") "
Aug 13 00:42:29.009213 kubelet[2778]: I0813 00:42:29.008275 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/44a121c6-5869-4359-934f-20fd0b863ad3-hostproc\") pod \"44a121c6-5869-4359-934f-20fd0b863ad3\" (UID: \"44a121c6-5869-4359-934f-20fd0b863ad3\") "
Aug 13 00:42:29.009213 kubelet[2778]: I0813 00:42:29.008289 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/44a121c6-5869-4359-934f-20fd0b863ad3-cni-path\") pod \"44a121c6-5869-4359-934f-20fd0b863ad3\" (UID: \"44a121c6-5869-4359-934f-20fd0b863ad3\") "
Aug 13 00:42:29.009509 kubelet[2778]: I0813 00:42:29.008308 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName:
\"kubernetes.io/host-path/44a121c6-5869-4359-934f-20fd0b863ad3-host-proc-sys-kernel\") pod \"44a121c6-5869-4359-934f-20fd0b863ad3\" (UID: \"44a121c6-5869-4359-934f-20fd0b863ad3\") " Aug 13 00:42:29.009509 kubelet[2778]: I0813 00:42:29.008327 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/44a121c6-5869-4359-934f-20fd0b863ad3-lib-modules\") pod \"44a121c6-5869-4359-934f-20fd0b863ad3\" (UID: \"44a121c6-5869-4359-934f-20fd0b863ad3\") " Aug 13 00:42:29.009509 kubelet[2778]: I0813 00:42:29.008850 2778 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44a121c6-5869-4359-934f-20fd0b863ad3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "44a121c6-5869-4359-934f-20fd0b863ad3" (UID: "44a121c6-5869-4359-934f-20fd0b863ad3"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:42:29.009509 kubelet[2778]: I0813 00:42:29.008938 2778 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44a121c6-5869-4359-934f-20fd0b863ad3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "44a121c6-5869-4359-934f-20fd0b863ad3" (UID: "44a121c6-5869-4359-934f-20fd0b863ad3"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:42:29.009509 kubelet[2778]: I0813 00:42:29.008966 2778 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44a121c6-5869-4359-934f-20fd0b863ad3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "44a121c6-5869-4359-934f-20fd0b863ad3" (UID: "44a121c6-5869-4359-934f-20fd0b863ad3"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:42:29.009959 kubelet[2778]: I0813 00:42:29.008987 2778 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44a121c6-5869-4359-934f-20fd0b863ad3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "44a121c6-5869-4359-934f-20fd0b863ad3" (UID: "44a121c6-5869-4359-934f-20fd0b863ad3"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:42:29.011542 kubelet[2778]: I0813 00:42:29.010032 2778 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44a121c6-5869-4359-934f-20fd0b863ad3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "44a121c6-5869-4359-934f-20fd0b863ad3" (UID: "44a121c6-5869-4359-934f-20fd0b863ad3"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:42:29.011800 kubelet[2778]: I0813 00:42:29.011758 2778 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44a121c6-5869-4359-934f-20fd0b863ad3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "44a121c6-5869-4359-934f-20fd0b863ad3" (UID: "44a121c6-5869-4359-934f-20fd0b863ad3"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:42:29.011934 kubelet[2778]: I0813 00:42:29.011913 2778 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44a121c6-5869-4359-934f-20fd0b863ad3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "44a121c6-5869-4359-934f-20fd0b863ad3" (UID: "44a121c6-5869-4359-934f-20fd0b863ad3"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:42:29.017883 kubelet[2778]: I0813 00:42:29.017830 2778 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44a121c6-5869-4359-934f-20fd0b863ad3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "44a121c6-5869-4359-934f-20fd0b863ad3" (UID: "44a121c6-5869-4359-934f-20fd0b863ad3"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:42:29.018189 kubelet[2778]: I0813 00:42:29.018139 2778 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44a121c6-5869-4359-934f-20fd0b863ad3-hostproc" (OuterVolumeSpecName: "hostproc") pod "44a121c6-5869-4359-934f-20fd0b863ad3" (UID: "44a121c6-5869-4359-934f-20fd0b863ad3"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:42:29.018280 kubelet[2778]: I0813 00:42:29.018156 2778 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44a121c6-5869-4359-934f-20fd0b863ad3-cni-path" (OuterVolumeSpecName: "cni-path") pod "44a121c6-5869-4359-934f-20fd0b863ad3" (UID: "44a121c6-5869-4359-934f-20fd0b863ad3"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:42:29.018652 kubelet[2778]: I0813 00:42:29.018607 2778 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44a121c6-5869-4359-934f-20fd0b863ad3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "44a121c6-5869-4359-934f-20fd0b863ad3" (UID: "44a121c6-5869-4359-934f-20fd0b863ad3"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 00:42:29.019076 kubelet[2778]: I0813 00:42:29.019034 2778 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44a121c6-5869-4359-934f-20fd0b863ad3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "44a121c6-5869-4359-934f-20fd0b863ad3" (UID: "44a121c6-5869-4359-934f-20fd0b863ad3"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 00:42:29.021380 kubelet[2778]: I0813 00:42:29.021343 2778 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44a121c6-5869-4359-934f-20fd0b863ad3-kube-api-access-7n75q" (OuterVolumeSpecName: "kube-api-access-7n75q") pod "44a121c6-5869-4359-934f-20fd0b863ad3" (UID: "44a121c6-5869-4359-934f-20fd0b863ad3"). InnerVolumeSpecName "kube-api-access-7n75q". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 00:42:29.021380 kubelet[2778]: I0813 00:42:29.021373 2778 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44a121c6-5869-4359-934f-20fd0b863ad3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "44a121c6-5869-4359-934f-20fd0b863ad3" (UID: "44a121c6-5869-4359-934f-20fd0b863ad3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 00:42:29.021915 kubelet[2778]: I0813 00:42:29.021883 2778 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3f90c0b-aecb-47e6-a95e-d3afb284eda4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c3f90c0b-aecb-47e6-a95e-d3afb284eda4" (UID: "c3f90c0b-aecb-47e6-a95e-d3afb284eda4"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 00:42:29.022306 kubelet[2778]: I0813 00:42:29.022283 2778 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3f90c0b-aecb-47e6-a95e-d3afb284eda4-kube-api-access-lx5d6" (OuterVolumeSpecName: "kube-api-access-lx5d6") pod "c3f90c0b-aecb-47e6-a95e-d3afb284eda4" (UID: "c3f90c0b-aecb-47e6-a95e-d3afb284eda4"). InnerVolumeSpecName "kube-api-access-lx5d6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 00:42:29.108947 kubelet[2778]: I0813 00:42:29.108770 2778 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/44a121c6-5869-4359-934f-20fd0b863ad3-host-proc-sys-kernel\") on node \"172-237-133-249\" DevicePath \"\"" Aug 13 00:42:29.108947 kubelet[2778]: I0813 00:42:29.108822 2778 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/44a121c6-5869-4359-934f-20fd0b863ad3-lib-modules\") on node \"172-237-133-249\" DevicePath \"\"" Aug 13 00:42:29.108947 kubelet[2778]: I0813 00:42:29.108834 2778 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/44a121c6-5869-4359-934f-20fd0b863ad3-bpf-maps\") on node \"172-237-133-249\" DevicePath \"\"" Aug 13 00:42:29.108947 kubelet[2778]: I0813 00:42:29.108862 2778 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/44a121c6-5869-4359-934f-20fd0b863ad3-xtables-lock\") on node \"172-237-133-249\" DevicePath \"\"" Aug 13 00:42:29.108947 kubelet[2778]: I0813 00:42:29.108873 2778 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c3f90c0b-aecb-47e6-a95e-d3afb284eda4-cilium-config-path\") on node \"172-237-133-249\" DevicePath \"\"" Aug 13 00:42:29.108947 kubelet[2778]: I0813 00:42:29.108885 2778 reconciler_common.go:299] "Volume 
detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/44a121c6-5869-4359-934f-20fd0b863ad3-cilium-cgroup\") on node \"172-237-133-249\" DevicePath \"\"" Aug 13 00:42:29.108947 kubelet[2778]: I0813 00:42:29.108895 2778 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/44a121c6-5869-4359-934f-20fd0b863ad3-host-proc-sys-net\") on node \"172-237-133-249\" DevicePath \"\"" Aug 13 00:42:29.108947 kubelet[2778]: I0813 00:42:29.108908 2778 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/44a121c6-5869-4359-934f-20fd0b863ad3-etc-cni-netd\") on node \"172-237-133-249\" DevicePath \"\"" Aug 13 00:42:29.109311 kubelet[2778]: I0813 00:42:29.108920 2778 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/44a121c6-5869-4359-934f-20fd0b863ad3-cilium-run\") on node \"172-237-133-249\" DevicePath \"\"" Aug 13 00:42:29.109865 kubelet[2778]: I0813 00:42:29.109844 2778 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/44a121c6-5869-4359-934f-20fd0b863ad3-cilium-config-path\") on node \"172-237-133-249\" DevicePath \"\"" Aug 13 00:42:29.110020 kubelet[2778]: I0813 00:42:29.109961 2778 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/44a121c6-5869-4359-934f-20fd0b863ad3-clustermesh-secrets\") on node \"172-237-133-249\" DevicePath \"\"" Aug 13 00:42:29.110020 kubelet[2778]: I0813 00:42:29.109977 2778 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/44a121c6-5869-4359-934f-20fd0b863ad3-hubble-tls\") on node \"172-237-133-249\" DevicePath \"\"" Aug 13 00:42:29.110020 kubelet[2778]: I0813 00:42:29.109987 2778 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7n75q\" (UniqueName: 
\"kubernetes.io/projected/44a121c6-5869-4359-934f-20fd0b863ad3-kube-api-access-7n75q\") on node \"172-237-133-249\" DevicePath \"\"" Aug 13 00:42:29.110020 kubelet[2778]: I0813 00:42:29.109996 2778 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/44a121c6-5869-4359-934f-20fd0b863ad3-hostproc\") on node \"172-237-133-249\" DevicePath \"\"" Aug 13 00:42:29.110347 kubelet[2778]: I0813 00:42:29.110306 2778 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lx5d6\" (UniqueName: \"kubernetes.io/projected/c3f90c0b-aecb-47e6-a95e-d3afb284eda4-kube-api-access-lx5d6\") on node \"172-237-133-249\" DevicePath \"\"" Aug 13 00:42:29.110347 kubelet[2778]: I0813 00:42:29.110336 2778 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/44a121c6-5869-4359-934f-20fd0b863ad3-cni-path\") on node \"172-237-133-249\" DevicePath \"\"" Aug 13 00:42:29.363353 kubelet[2778]: I0813 00:42:29.363049 2778 scope.go:117] "RemoveContainer" containerID="7fae0ac28f3ed5685ff286c721cadd9df2bb0270a3227f57feb331a286f95e9c" Aug 13 00:42:29.369966 containerd[1545]: time="2025-08-13T00:42:29.369900935Z" level=info msg="RemoveContainer for \"7fae0ac28f3ed5685ff286c721cadd9df2bb0270a3227f57feb331a286f95e9c\"" Aug 13 00:42:29.376484 containerd[1545]: time="2025-08-13T00:42:29.376280104Z" level=info msg="RemoveContainer for \"7fae0ac28f3ed5685ff286c721cadd9df2bb0270a3227f57feb331a286f95e9c\" returns successfully" Aug 13 00:42:29.377223 kubelet[2778]: I0813 00:42:29.377195 2778 scope.go:117] "RemoveContainer" containerID="7fae0ac28f3ed5685ff286c721cadd9df2bb0270a3227f57feb331a286f95e9c" Aug 13 00:42:29.377775 containerd[1545]: time="2025-08-13T00:42:29.377477709Z" level=error msg="ContainerStatus for \"7fae0ac28f3ed5685ff286c721cadd9df2bb0270a3227f57feb331a286f95e9c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"7fae0ac28f3ed5685ff286c721cadd9df2bb0270a3227f57feb331a286f95e9c\": not found" Aug 13 00:42:29.379211 kubelet[2778]: E0813 00:42:29.379175 2778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7fae0ac28f3ed5685ff286c721cadd9df2bb0270a3227f57feb331a286f95e9c\": not found" containerID="7fae0ac28f3ed5685ff286c721cadd9df2bb0270a3227f57feb331a286f95e9c" Aug 13 00:42:29.380985 kubelet[2778]: I0813 00:42:29.379237 2778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7fae0ac28f3ed5685ff286c721cadd9df2bb0270a3227f57feb331a286f95e9c"} err="failed to get container status \"7fae0ac28f3ed5685ff286c721cadd9df2bb0270a3227f57feb331a286f95e9c\": rpc error: code = NotFound desc = an error occurred when try to find container \"7fae0ac28f3ed5685ff286c721cadd9df2bb0270a3227f57feb331a286f95e9c\": not found" Aug 13 00:42:29.380320 systemd[1]: Removed slice kubepods-besteffort-podc3f90c0b_aecb_47e6_a95e_d3afb284eda4.slice - libcontainer container kubepods-besteffort-podc3f90c0b_aecb_47e6_a95e_d3afb284eda4.slice. Aug 13 00:42:29.384542 kubelet[2778]: I0813 00:42:29.384101 2778 scope.go:117] "RemoveContainer" containerID="aebad42a9b0d6f4199fe749efc961e77724ab71e808995ff121a511cc5ec84a0" Aug 13 00:42:29.395627 systemd[1]: Removed slice kubepods-burstable-pod44a121c6_5869_4359_934f_20fd0b863ad3.slice - libcontainer container kubepods-burstable-pod44a121c6_5869_4359_934f_20fd0b863ad3.slice. Aug 13 00:42:29.395730 systemd[1]: kubepods-burstable-pod44a121c6_5869_4359_934f_20fd0b863ad3.slice: Consumed 9.554s CPU time, 124.2M memory peak, 144K read from disk, 13.3M written to disk. 
Aug 13 00:42:29.403235 containerd[1545]: time="2025-08-13T00:42:29.402762059Z" level=info msg="RemoveContainer for \"aebad42a9b0d6f4199fe749efc961e77724ab71e808995ff121a511cc5ec84a0\"" Aug 13 00:42:29.411205 containerd[1545]: time="2025-08-13T00:42:29.411145466Z" level=info msg="RemoveContainer for \"aebad42a9b0d6f4199fe749efc961e77724ab71e808995ff121a511cc5ec84a0\" returns successfully" Aug 13 00:42:29.411916 kubelet[2778]: I0813 00:42:29.411862 2778 scope.go:117] "RemoveContainer" containerID="b7ff2f2be46d4bf202790029c8aee6482605d2cdc69f9e279d9311d4e8b59121" Aug 13 00:42:29.414982 containerd[1545]: time="2025-08-13T00:42:29.414947508Z" level=info msg="RemoveContainer for \"b7ff2f2be46d4bf202790029c8aee6482605d2cdc69f9e279d9311d4e8b59121\"" Aug 13 00:42:29.426916 containerd[1545]: time="2025-08-13T00:42:29.426652516Z" level=info msg="RemoveContainer for \"b7ff2f2be46d4bf202790029c8aee6482605d2cdc69f9e279d9311d4e8b59121\" returns successfully" Aug 13 00:42:29.427631 kubelet[2778]: I0813 00:42:29.427610 2778 scope.go:117] "RemoveContainer" containerID="6df3cbee06e3fadd8fff80e7a1c0811dcf873a48e25f6cd952cf4fbf7f3913f5" Aug 13 00:42:29.433785 containerd[1545]: time="2025-08-13T00:42:29.433741679Z" level=info msg="RemoveContainer for \"6df3cbee06e3fadd8fff80e7a1c0811dcf873a48e25f6cd952cf4fbf7f3913f5\"" Aug 13 00:42:29.438618 containerd[1545]: time="2025-08-13T00:42:29.438590864Z" level=info msg="RemoveContainer for \"6df3cbee06e3fadd8fff80e7a1c0811dcf873a48e25f6cd952cf4fbf7f3913f5\" returns successfully" Aug 13 00:42:29.438953 kubelet[2778]: I0813 00:42:29.438863 2778 scope.go:117] "RemoveContainer" containerID="d7b1ad6bb62f776850c0767252fdb330501b773ad7cde93aabe2c17ca2bb9c4a" Aug 13 00:42:29.440476 containerd[1545]: time="2025-08-13T00:42:29.440447359Z" level=info msg="RemoveContainer for \"d7b1ad6bb62f776850c0767252fdb330501b773ad7cde93aabe2c17ca2bb9c4a\"" Aug 13 00:42:29.443346 containerd[1545]: time="2025-08-13T00:42:29.443317609Z" level=info msg="RemoveContainer 
for \"d7b1ad6bb62f776850c0767252fdb330501b773ad7cde93aabe2c17ca2bb9c4a\" returns successfully" Aug 13 00:42:29.443580 kubelet[2778]: I0813 00:42:29.443508 2778 scope.go:117] "RemoveContainer" containerID="6884cf5513721e2a4877d687342a22f4c96833d556dfc855888a4c2160d6c7f6" Aug 13 00:42:29.445174 containerd[1545]: time="2025-08-13T00:42:29.445146095Z" level=info msg="RemoveContainer for \"6884cf5513721e2a4877d687342a22f4c96833d556dfc855888a4c2160d6c7f6\"" Aug 13 00:42:29.447928 containerd[1545]: time="2025-08-13T00:42:29.447839474Z" level=info msg="RemoveContainer for \"6884cf5513721e2a4877d687342a22f4c96833d556dfc855888a4c2160d6c7f6\" returns successfully" Aug 13 00:42:29.448158 kubelet[2778]: I0813 00:42:29.448085 2778 scope.go:117] "RemoveContainer" containerID="aebad42a9b0d6f4199fe749efc961e77724ab71e808995ff121a511cc5ec84a0" Aug 13 00:42:29.448385 containerd[1545]: time="2025-08-13T00:42:29.448338085Z" level=error msg="ContainerStatus for \"aebad42a9b0d6f4199fe749efc961e77724ab71e808995ff121a511cc5ec84a0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"aebad42a9b0d6f4199fe749efc961e77724ab71e808995ff121a511cc5ec84a0\": not found" Aug 13 00:42:29.448596 kubelet[2778]: E0813 00:42:29.448506 2778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"aebad42a9b0d6f4199fe749efc961e77724ab71e808995ff121a511cc5ec84a0\": not found" containerID="aebad42a9b0d6f4199fe749efc961e77724ab71e808995ff121a511cc5ec84a0" Aug 13 00:42:29.448776 kubelet[2778]: I0813 00:42:29.448683 2778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"aebad42a9b0d6f4199fe749efc961e77724ab71e808995ff121a511cc5ec84a0"} err="failed to get container status \"aebad42a9b0d6f4199fe749efc961e77724ab71e808995ff121a511cc5ec84a0\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"aebad42a9b0d6f4199fe749efc961e77724ab71e808995ff121a511cc5ec84a0\": not found" Aug 13 00:42:29.448776 kubelet[2778]: I0813 00:42:29.448711 2778 scope.go:117] "RemoveContainer" containerID="b7ff2f2be46d4bf202790029c8aee6482605d2cdc69f9e279d9311d4e8b59121" Aug 13 00:42:29.449107 containerd[1545]: time="2025-08-13T00:42:29.448999977Z" level=error msg="ContainerStatus for \"b7ff2f2be46d4bf202790029c8aee6482605d2cdc69f9e279d9311d4e8b59121\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b7ff2f2be46d4bf202790029c8aee6482605d2cdc69f9e279d9311d4e8b59121\": not found" Aug 13 00:42:29.449313 kubelet[2778]: E0813 00:42:29.449214 2778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b7ff2f2be46d4bf202790029c8aee6482605d2cdc69f9e279d9311d4e8b59121\": not found" containerID="b7ff2f2be46d4bf202790029c8aee6482605d2cdc69f9e279d9311d4e8b59121" Aug 13 00:42:29.449313 kubelet[2778]: I0813 00:42:29.449290 2778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b7ff2f2be46d4bf202790029c8aee6482605d2cdc69f9e279d9311d4e8b59121"} err="failed to get container status \"b7ff2f2be46d4bf202790029c8aee6482605d2cdc69f9e279d9311d4e8b59121\": rpc error: code = NotFound desc = an error occurred when try to find container \"b7ff2f2be46d4bf202790029c8aee6482605d2cdc69f9e279d9311d4e8b59121\": not found" Aug 13 00:42:29.449508 kubelet[2778]: I0813 00:42:29.449428 2778 scope.go:117] "RemoveContainer" containerID="6df3cbee06e3fadd8fff80e7a1c0811dcf873a48e25f6cd952cf4fbf7f3913f5" Aug 13 00:42:29.449830 containerd[1545]: time="2025-08-13T00:42:29.449789700Z" level=error msg="ContainerStatus for \"6df3cbee06e3fadd8fff80e7a1c0811dcf873a48e25f6cd952cf4fbf7f3913f5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"6df3cbee06e3fadd8fff80e7a1c0811dcf873a48e25f6cd952cf4fbf7f3913f5\": not found" Aug 13 00:42:29.449963 kubelet[2778]: E0813 00:42:29.449943 2778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6df3cbee06e3fadd8fff80e7a1c0811dcf873a48e25f6cd952cf4fbf7f3913f5\": not found" containerID="6df3cbee06e3fadd8fff80e7a1c0811dcf873a48e25f6cd952cf4fbf7f3913f5" Aug 13 00:42:29.450096 kubelet[2778]: I0813 00:42:29.450038 2778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6df3cbee06e3fadd8fff80e7a1c0811dcf873a48e25f6cd952cf4fbf7f3913f5"} err="failed to get container status \"6df3cbee06e3fadd8fff80e7a1c0811dcf873a48e25f6cd952cf4fbf7f3913f5\": rpc error: code = NotFound desc = an error occurred when try to find container \"6df3cbee06e3fadd8fff80e7a1c0811dcf873a48e25f6cd952cf4fbf7f3913f5\": not found" Aug 13 00:42:29.450096 kubelet[2778]: I0813 00:42:29.450058 2778 scope.go:117] "RemoveContainer" containerID="d7b1ad6bb62f776850c0767252fdb330501b773ad7cde93aabe2c17ca2bb9c4a" Aug 13 00:42:29.450380 containerd[1545]: time="2025-08-13T00:42:29.450316001Z" level=error msg="ContainerStatus for \"d7b1ad6bb62f776850c0767252fdb330501b773ad7cde93aabe2c17ca2bb9c4a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d7b1ad6bb62f776850c0767252fdb330501b773ad7cde93aabe2c17ca2bb9c4a\": not found" Aug 13 00:42:29.450653 kubelet[2778]: E0813 00:42:29.450603 2778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d7b1ad6bb62f776850c0767252fdb330501b773ad7cde93aabe2c17ca2bb9c4a\": not found" containerID="d7b1ad6bb62f776850c0767252fdb330501b773ad7cde93aabe2c17ca2bb9c4a" Aug 13 00:42:29.450653 kubelet[2778]: I0813 00:42:29.450622 2778 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"d7b1ad6bb62f776850c0767252fdb330501b773ad7cde93aabe2c17ca2bb9c4a"} err="failed to get container status \"d7b1ad6bb62f776850c0767252fdb330501b773ad7cde93aabe2c17ca2bb9c4a\": rpc error: code = NotFound desc = an error occurred when try to find container \"d7b1ad6bb62f776850c0767252fdb330501b773ad7cde93aabe2c17ca2bb9c4a\": not found" Aug 13 00:42:29.450741 kubelet[2778]: I0813 00:42:29.450635 2778 scope.go:117] "RemoveContainer" containerID="6884cf5513721e2a4877d687342a22f4c96833d556dfc855888a4c2160d6c7f6" Aug 13 00:42:29.451008 containerd[1545]: time="2025-08-13T00:42:29.450944483Z" level=error msg="ContainerStatus for \"6884cf5513721e2a4877d687342a22f4c96833d556dfc855888a4c2160d6c7f6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6884cf5513721e2a4877d687342a22f4c96833d556dfc855888a4c2160d6c7f6\": not found" Aug 13 00:42:29.451189 kubelet[2778]: E0813 00:42:29.451153 2778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6884cf5513721e2a4877d687342a22f4c96833d556dfc855888a4c2160d6c7f6\": not found" containerID="6884cf5513721e2a4877d687342a22f4c96833d556dfc855888a4c2160d6c7f6" Aug 13 00:42:29.451271 kubelet[2778]: I0813 00:42:29.451254 2778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6884cf5513721e2a4877d687342a22f4c96833d556dfc855888a4c2160d6c7f6"} err="failed to get container status \"6884cf5513721e2a4877d687342a22f4c96833d556dfc855888a4c2160d6c7f6\": rpc error: code = NotFound desc = an error occurred when try to find container \"6884cf5513721e2a4877d687342a22f4c96833d556dfc855888a4c2160d6c7f6\": not found" Aug 13 00:42:29.530857 kubelet[2778]: E0813 00:42:29.530753 2778 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni 
plugin not initialized" Aug 13 00:42:29.640284 systemd[1]: var-lib-kubelet-pods-c3f90c0b\x2daecb\x2d47e6\x2da95e\x2dd3afb284eda4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlx5d6.mount: Deactivated successfully. Aug 13 00:42:29.640412 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ac52f87e4d0813202bd2fe8db8bdf469a7028a263e13e4462af1037209babf29-shm.mount: Deactivated successfully. Aug 13 00:42:29.640574 systemd[1]: var-lib-kubelet-pods-44a121c6\x2d5869\x2d4359\x2d934f\x2d20fd0b863ad3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7n75q.mount: Deactivated successfully. Aug 13 00:42:29.640657 systemd[1]: var-lib-kubelet-pods-44a121c6\x2d5869\x2d4359\x2d934f\x2d20fd0b863ad3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 13 00:42:29.640741 systemd[1]: var-lib-kubelet-pods-44a121c6\x2d5869\x2d4359\x2d934f\x2d20fd0b863ad3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 13 00:42:30.312809 kubelet[2778]: I0813 00:42:30.312686 2778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44a121c6-5869-4359-934f-20fd0b863ad3" path="/var/lib/kubelet/pods/44a121c6-5869-4359-934f-20fd0b863ad3/volumes" Aug 13 00:42:30.313952 kubelet[2778]: I0813 00:42:30.313921 2778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3f90c0b-aecb-47e6-a95e-d3afb284eda4" path="/var/lib/kubelet/pods/c3f90c0b-aecb-47e6-a95e-d3afb284eda4/volumes" Aug 13 00:42:30.541461 sshd[4603]: Connection closed by 147.75.109.163 port 49720 Aug 13 00:42:30.542277 sshd-session[4601]: pam_unix(sshd:session): session closed for user core Aug 13 00:42:30.547118 systemd[1]: sshd@57-172.237.133.249:22-147.75.109.163:49720.service: Deactivated successfully. Aug 13 00:42:30.549456 systemd[1]: session-58.scope: Deactivated successfully. Aug 13 00:42:30.550746 systemd-logind[1513]: Session 58 logged out. Waiting for processes to exit. 
Aug 13 00:42:30.552627 systemd-logind[1513]: Removed session 58. Aug 13 00:42:30.605930 systemd[1]: Started sshd@58-172.237.133.249:22-147.75.109.163:60534.service - OpenSSH per-connection server daemon (147.75.109.163:60534). Aug 13 00:42:30.964738 sshd[4762]: Accepted publickey for core from 147.75.109.163 port 60534 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:42:30.966690 sshd-session[4762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:42:30.973583 systemd-logind[1513]: New session 59 of user core. Aug 13 00:42:30.982696 systemd[1]: Started session-59.scope - Session 59 of User core. Aug 13 00:42:31.704675 kubelet[2778]: I0813 00:42:31.704317 2778 memory_manager.go:355] "RemoveStaleState removing state" podUID="44a121c6-5869-4359-934f-20fd0b863ad3" containerName="cilium-agent" Aug 13 00:42:31.706541 kubelet[2778]: I0813 00:42:31.705340 2778 memory_manager.go:355] "RemoveStaleState removing state" podUID="c3f90c0b-aecb-47e6-a95e-d3afb284eda4" containerName="cilium-operator" Aug 13 00:42:31.718142 systemd[1]: Created slice kubepods-burstable-poda7be821e_7c78_4d0d_a1c3_387ccac6e816.slice - libcontainer container kubepods-burstable-poda7be821e_7c78_4d0d_a1c3_387ccac6e816.slice. 
Aug 13 00:42:31.725570 kubelet[2778]: I0813 00:42:31.725455 2778 status_manager.go:890] "Failed to get status for pod" podUID="a7be821e-7c78-4d0d-a1c3-387ccac6e816" pod="kube-system/cilium-jz7cl" err="pods \"cilium-jz7cl\" is forbidden: User \"system:node:172-237-133-249\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-237-133-249' and this object" Aug 13 00:42:31.726068 kubelet[2778]: W0813 00:42:31.725943 2778 reflector.go:569] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:172-237-133-249" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172-237-133-249' and this object Aug 13 00:42:31.726068 kubelet[2778]: E0813 00:42:31.726022 2778 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:172-237-133-249\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-237-133-249' and this object" logger="UnhandledError" Aug 13 00:42:31.726277 kubelet[2778]: W0813 00:42:31.726259 2778 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:172-237-133-249" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172-237-133-249' and this object Aug 13 00:42:31.726346 kubelet[2778]: E0813 00:42:31.726328 2778 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:172-237-133-249\" cannot list resource \"secrets\" in API group \"\" in the 
namespace \"kube-system\": no relationship found between node '172-237-133-249' and this object" logger="UnhandledError" Aug 13 00:42:31.726423 sshd[4764]: Connection closed by 147.75.109.163 port 60534 Aug 13 00:42:31.726952 kubelet[2778]: W0813 00:42:31.726846 2778 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:172-237-133-249" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172-237-133-249' and this object Aug 13 00:42:31.726952 kubelet[2778]: E0813 00:42:31.726871 2778 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:172-237-133-249\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-237-133-249' and this object" logger="UnhandledError" Aug 13 00:42:31.726952 kubelet[2778]: W0813 00:42:31.726923 2778 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:172-237-133-249" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172-237-133-249' and this object Aug 13 00:42:31.726952 kubelet[2778]: E0813 00:42:31.726936 2778 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:172-237-133-249\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-237-133-249' and this object" logger="UnhandledError" Aug 13 00:42:31.728755 sshd-session[4762]: pam_unix(sshd:session): session closed for user core Aug 
13 00:42:31.734568 systemd[1]: sshd@58-172.237.133.249:22-147.75.109.163:60534.service: Deactivated successfully. Aug 13 00:42:31.739455 systemd[1]: session-59.scope: Deactivated successfully. Aug 13 00:42:31.746389 systemd-logind[1513]: Session 59 logged out. Waiting for processes to exit. Aug 13 00:42:31.749468 systemd-logind[1513]: Removed session 59. Aug 13 00:42:31.795629 systemd[1]: Started sshd@59-172.237.133.249:22-147.75.109.163:60548.service - OpenSSH per-connection server daemon (147.75.109.163:60548). Aug 13 00:42:31.831563 kubelet[2778]: I0813 00:42:31.830694 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a7be821e-7c78-4d0d-a1c3-387ccac6e816-cilium-cgroup\") pod \"cilium-jz7cl\" (UID: \"a7be821e-7c78-4d0d-a1c3-387ccac6e816\") " pod="kube-system/cilium-jz7cl" Aug 13 00:42:31.831563 kubelet[2778]: I0813 00:42:31.830783 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a7be821e-7c78-4d0d-a1c3-387ccac6e816-cni-path\") pod \"cilium-jz7cl\" (UID: \"a7be821e-7c78-4d0d-a1c3-387ccac6e816\") " pod="kube-system/cilium-jz7cl" Aug 13 00:42:31.831563 kubelet[2778]: I0813 00:42:31.830869 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a7be821e-7c78-4d0d-a1c3-387ccac6e816-host-proc-sys-kernel\") pod \"cilium-jz7cl\" (UID: \"a7be821e-7c78-4d0d-a1c3-387ccac6e816\") " pod="kube-system/cilium-jz7cl" Aug 13 00:42:31.831563 kubelet[2778]: I0813 00:42:31.830947 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a7be821e-7c78-4d0d-a1c3-387ccac6e816-bpf-maps\") pod \"cilium-jz7cl\" (UID: \"a7be821e-7c78-4d0d-a1c3-387ccac6e816\") " 
pod="kube-system/cilium-jz7cl" Aug 13 00:42:31.831563 kubelet[2778]: I0813 00:42:31.830969 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a7be821e-7c78-4d0d-a1c3-387ccac6e816-hostproc\") pod \"cilium-jz7cl\" (UID: \"a7be821e-7c78-4d0d-a1c3-387ccac6e816\") " pod="kube-system/cilium-jz7cl" Aug 13 00:42:31.831563 kubelet[2778]: I0813 00:42:31.831047 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a7be821e-7c78-4d0d-a1c3-387ccac6e816-hubble-tls\") pod \"cilium-jz7cl\" (UID: \"a7be821e-7c78-4d0d-a1c3-387ccac6e816\") " pod="kube-system/cilium-jz7cl" Aug 13 00:42:31.831909 kubelet[2778]: I0813 00:42:31.831110 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a7be821e-7c78-4d0d-a1c3-387ccac6e816-xtables-lock\") pod \"cilium-jz7cl\" (UID: \"a7be821e-7c78-4d0d-a1c3-387ccac6e816\") " pod="kube-system/cilium-jz7cl" Aug 13 00:42:31.831909 kubelet[2778]: I0813 00:42:31.831134 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a7be821e-7c78-4d0d-a1c3-387ccac6e816-clustermesh-secrets\") pod \"cilium-jz7cl\" (UID: \"a7be821e-7c78-4d0d-a1c3-387ccac6e816\") " pod="kube-system/cilium-jz7cl" Aug 13 00:42:31.831909 kubelet[2778]: I0813 00:42:31.831274 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a7be821e-7c78-4d0d-a1c3-387ccac6e816-cilium-ipsec-secrets\") pod \"cilium-jz7cl\" (UID: \"a7be821e-7c78-4d0d-a1c3-387ccac6e816\") " pod="kube-system/cilium-jz7cl" Aug 13 00:42:31.831909 kubelet[2778]: I0813 00:42:31.831341 2778 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a7be821e-7c78-4d0d-a1c3-387ccac6e816-lib-modules\") pod \"cilium-jz7cl\" (UID: \"a7be821e-7c78-4d0d-a1c3-387ccac6e816\") " pod="kube-system/cilium-jz7cl" Aug 13 00:42:31.831909 kubelet[2778]: I0813 00:42:31.831364 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a7be821e-7c78-4d0d-a1c3-387ccac6e816-cilium-run\") pod \"cilium-jz7cl\" (UID: \"a7be821e-7c78-4d0d-a1c3-387ccac6e816\") " pod="kube-system/cilium-jz7cl" Aug 13 00:42:31.831909 kubelet[2778]: I0813 00:42:31.831421 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a7be821e-7c78-4d0d-a1c3-387ccac6e816-etc-cni-netd\") pod \"cilium-jz7cl\" (UID: \"a7be821e-7c78-4d0d-a1c3-387ccac6e816\") " pod="kube-system/cilium-jz7cl" Aug 13 00:42:31.832061 kubelet[2778]: I0813 00:42:31.831448 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a7be821e-7c78-4d0d-a1c3-387ccac6e816-host-proc-sys-net\") pod \"cilium-jz7cl\" (UID: \"a7be821e-7c78-4d0d-a1c3-387ccac6e816\") " pod="kube-system/cilium-jz7cl" Aug 13 00:42:31.832061 kubelet[2778]: I0813 00:42:31.831567 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pl8sv\" (UniqueName: \"kubernetes.io/projected/a7be821e-7c78-4d0d-a1c3-387ccac6e816-kube-api-access-pl8sv\") pod \"cilium-jz7cl\" (UID: \"a7be821e-7c78-4d0d-a1c3-387ccac6e816\") " pod="kube-system/cilium-jz7cl" Aug 13 00:42:31.832061 kubelet[2778]: I0813 00:42:31.831598 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/a7be821e-7c78-4d0d-a1c3-387ccac6e816-cilium-config-path\") pod \"cilium-jz7cl\" (UID: \"a7be821e-7c78-4d0d-a1c3-387ccac6e816\") " pod="kube-system/cilium-jz7cl" Aug 13 00:42:32.162564 sshd[4774]: Accepted publickey for core from 147.75.109.163 port 60548 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:42:32.164265 sshd-session[4774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:42:32.171270 systemd-logind[1513]: New session 60 of user core. Aug 13 00:42:32.175815 systemd[1]: Started session-60.scope - Session 60 of User core. Aug 13 00:42:32.415875 sshd[4777]: Connection closed by 147.75.109.163 port 60548 Aug 13 00:42:32.416620 sshd-session[4774]: pam_unix(sshd:session): session closed for user core Aug 13 00:42:32.422114 systemd[1]: sshd@59-172.237.133.249:22-147.75.109.163:60548.service: Deactivated successfully. Aug 13 00:42:32.424795 systemd[1]: session-60.scope: Deactivated successfully. Aug 13 00:42:32.425845 systemd-logind[1513]: Session 60 logged out. Waiting for processes to exit. Aug 13 00:42:32.428421 systemd-logind[1513]: Removed session 60. Aug 13 00:42:32.480214 systemd[1]: Started sshd@60-172.237.133.249:22-147.75.109.163:60550.service - OpenSSH per-connection server daemon (147.75.109.163:60550). 
Aug 13 00:42:32.637582 kubelet[2778]: I0813 00:42:32.637510 2778 setters.go:602] "Node became not ready" node="172-237-133-249" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T00:42:32Z","lastTransitionTime":"2025-08-13T00:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Aug 13 00:42:32.821372 sshd[4784]: Accepted publickey for core from 147.75.109.163 port 60550 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:42:32.824314 sshd-session[4784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:42:32.832389 systemd-logind[1513]: New session 61 of user core. Aug 13 00:42:32.840720 systemd[1]: Started session-61.scope - Session 61 of User core. Aug 13 00:42:32.933430 kubelet[2778]: E0813 00:42:32.933348 2778 secret.go:189] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition Aug 13 00:42:32.933955 kubelet[2778]: E0813 00:42:32.933545 2778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7be821e-7c78-4d0d-a1c3-387ccac6e816-cilium-ipsec-secrets podName:a7be821e-7c78-4d0d-a1c3-387ccac6e816 nodeName:}" failed. No retries permitted until 2025-08-13 00:42:33.433470037 +0000 UTC m=+379.263265109 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/a7be821e-7c78-4d0d-a1c3-387ccac6e816-cilium-ipsec-secrets") pod "cilium-jz7cl" (UID: "a7be821e-7c78-4d0d-a1c3-387ccac6e816") : failed to sync secret cache: timed out waiting for the condition Aug 13 00:42:32.933955 kubelet[2778]: E0813 00:42:32.933850 2778 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Aug 13 00:42:32.933955 kubelet[2778]: E0813 00:42:32.933918 2778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7be821e-7c78-4d0d-a1c3-387ccac6e816-clustermesh-secrets podName:a7be821e-7c78-4d0d-a1c3-387ccac6e816 nodeName:}" failed. No retries permitted until 2025-08-13 00:42:33.433907838 +0000 UTC m=+379.263702910 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/a7be821e-7c78-4d0d-a1c3-387ccac6e816-clustermesh-secrets") pod "cilium-jz7cl" (UID: "a7be821e-7c78-4d0d-a1c3-387ccac6e816") : failed to sync secret cache: timed out waiting for the condition Aug 13 00:42:32.934745 kubelet[2778]: E0813 00:42:32.934387 2778 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Aug 13 00:42:32.934745 kubelet[2778]: E0813 00:42:32.934440 2778 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-jz7cl: failed to sync secret cache: timed out waiting for the condition Aug 13 00:42:32.934745 kubelet[2778]: E0813 00:42:32.934684 2778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a7be821e-7c78-4d0d-a1c3-387ccac6e816-hubble-tls podName:a7be821e-7c78-4d0d-a1c3-387ccac6e816 nodeName:}" failed. No retries permitted until 2025-08-13 00:42:33.434660711 +0000 UTC m=+379.264455783 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/a7be821e-7c78-4d0d-a1c3-387ccac6e816-hubble-tls") pod "cilium-jz7cl" (UID: "a7be821e-7c78-4d0d-a1c3-387ccac6e816") : failed to sync secret cache: timed out waiting for the condition Aug 13 00:42:33.525826 kubelet[2778]: E0813 00:42:33.525359 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:42:33.526871 containerd[1545]: time="2025-08-13T00:42:33.526838117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jz7cl,Uid:a7be821e-7c78-4d0d-a1c3-387ccac6e816,Namespace:kube-system,Attempt:0,}" Aug 13 00:42:33.555391 containerd[1545]: time="2025-08-13T00:42:33.555170646Z" level=info msg="connecting to shim de6e4989526b75ad8545063ceba04f1a10b7c8027c29422b23229d7c3bb45deb" address="unix:///run/containerd/s/eeff6286eaf77fd89b7f469c81b54c15abd51c42c501e41280305bd11b1821de" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:42:33.589751 systemd[1]: Started cri-containerd-de6e4989526b75ad8545063ceba04f1a10b7c8027c29422b23229d7c3bb45deb.scope - libcontainer container de6e4989526b75ad8545063ceba04f1a10b7c8027c29422b23229d7c3bb45deb. 
Aug 13 00:42:33.623149 containerd[1545]: time="2025-08-13T00:42:33.623100211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jz7cl,Uid:a7be821e-7c78-4d0d-a1c3-387ccac6e816,Namespace:kube-system,Attempt:0,} returns sandbox id \"de6e4989526b75ad8545063ceba04f1a10b7c8027c29422b23229d7c3bb45deb\"" Aug 13 00:42:33.624476 kubelet[2778]: E0813 00:42:33.624438 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:42:33.628000 containerd[1545]: time="2025-08-13T00:42:33.627877116Z" level=info msg="CreateContainer within sandbox \"de6e4989526b75ad8545063ceba04f1a10b7c8027c29422b23229d7c3bb45deb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 00:42:33.639877 containerd[1545]: time="2025-08-13T00:42:33.639836114Z" level=info msg="Container 11c3f1344dd87b5d427c6a311dffda437f730463e8af919d54a1941302e9dc23: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:42:33.646317 containerd[1545]: time="2025-08-13T00:42:33.646280394Z" level=info msg="CreateContainer within sandbox \"de6e4989526b75ad8545063ceba04f1a10b7c8027c29422b23229d7c3bb45deb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"11c3f1344dd87b5d427c6a311dffda437f730463e8af919d54a1941302e9dc23\"" Aug 13 00:42:33.647822 containerd[1545]: time="2025-08-13T00:42:33.647711559Z" level=info msg="StartContainer for \"11c3f1344dd87b5d427c6a311dffda437f730463e8af919d54a1941302e9dc23\"" Aug 13 00:42:33.650839 containerd[1545]: time="2025-08-13T00:42:33.650805519Z" level=info msg="connecting to shim 11c3f1344dd87b5d427c6a311dffda437f730463e8af919d54a1941302e9dc23" address="unix:///run/containerd/s/eeff6286eaf77fd89b7f469c81b54c15abd51c42c501e41280305bd11b1821de" protocol=ttrpc version=3 Aug 13 00:42:33.677703 systemd[1]: Started cri-containerd-11c3f1344dd87b5d427c6a311dffda437f730463e8af919d54a1941302e9dc23.scope - 
libcontainer container 11c3f1344dd87b5d427c6a311dffda437f730463e8af919d54a1941302e9dc23. Aug 13 00:42:33.720709 containerd[1545]: time="2025-08-13T00:42:33.720632770Z" level=info msg="StartContainer for \"11c3f1344dd87b5d427c6a311dffda437f730463e8af919d54a1941302e9dc23\" returns successfully" Aug 13 00:42:33.734662 systemd[1]: cri-containerd-11c3f1344dd87b5d427c6a311dffda437f730463e8af919d54a1941302e9dc23.scope: Deactivated successfully. Aug 13 00:42:33.739179 containerd[1545]: time="2025-08-13T00:42:33.739116139Z" level=info msg="TaskExit event in podsandbox handler container_id:\"11c3f1344dd87b5d427c6a311dffda437f730463e8af919d54a1941302e9dc23\" id:\"11c3f1344dd87b5d427c6a311dffda437f730463e8af919d54a1941302e9dc23\" pid:4855 exited_at:{seconds:1755045753 nanos:738540137}" Aug 13 00:42:33.739732 containerd[1545]: time="2025-08-13T00:42:33.739142298Z" level=info msg="received exit event container_id:\"11c3f1344dd87b5d427c6a311dffda437f730463e8af919d54a1941302e9dc23\" id:\"11c3f1344dd87b5d427c6a311dffda437f730463e8af919d54a1941302e9dc23\" pid:4855 exited_at:{seconds:1755045753 nanos:738540137}" Aug 13 00:42:34.310362 kubelet[2778]: E0813 00:42:34.309378 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:42:34.389470 kubelet[2778]: E0813 00:42:34.389424 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:42:34.395814 containerd[1545]: time="2025-08-13T00:42:34.395573753Z" level=info msg="CreateContainer within sandbox \"de6e4989526b75ad8545063ceba04f1a10b7c8027c29422b23229d7c3bb45deb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 00:42:34.405279 containerd[1545]: time="2025-08-13T00:42:34.405209014Z" level=info msg="Container 
b0c4fd468a9f2672d349b95e70e285d6044059ce9d554530bc12525900b6ee88: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:42:34.411546 containerd[1545]: time="2025-08-13T00:42:34.411472903Z" level=info msg="CreateContainer within sandbox \"de6e4989526b75ad8545063ceba04f1a10b7c8027c29422b23229d7c3bb45deb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b0c4fd468a9f2672d349b95e70e285d6044059ce9d554530bc12525900b6ee88\"" Aug 13 00:42:34.414569 containerd[1545]: time="2025-08-13T00:42:34.414542024Z" level=info msg="StartContainer for \"b0c4fd468a9f2672d349b95e70e285d6044059ce9d554530bc12525900b6ee88\"" Aug 13 00:42:34.418997 containerd[1545]: time="2025-08-13T00:42:34.418960758Z" level=info msg="connecting to shim b0c4fd468a9f2672d349b95e70e285d6044059ce9d554530bc12525900b6ee88" address="unix:///run/containerd/s/eeff6286eaf77fd89b7f469c81b54c15abd51c42c501e41280305bd11b1821de" protocol=ttrpc version=3 Aug 13 00:42:34.450103 systemd[1]: Started cri-containerd-b0c4fd468a9f2672d349b95e70e285d6044059ce9d554530bc12525900b6ee88.scope - libcontainer container b0c4fd468a9f2672d349b95e70e285d6044059ce9d554530bc12525900b6ee88. Aug 13 00:42:34.455341 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount740613276.mount: Deactivated successfully. Aug 13 00:42:34.502230 containerd[1545]: time="2025-08-13T00:42:34.502125510Z" level=info msg="StartContainer for \"b0c4fd468a9f2672d349b95e70e285d6044059ce9d554530bc12525900b6ee88\" returns successfully" Aug 13 00:42:34.515997 systemd[1]: cri-containerd-b0c4fd468a9f2672d349b95e70e285d6044059ce9d554530bc12525900b6ee88.scope: Deactivated successfully. 
Aug 13 00:42:34.517911 containerd[1545]: time="2025-08-13T00:42:34.517856740Z" level=info msg="received exit event container_id:\"b0c4fd468a9f2672d349b95e70e285d6044059ce9d554530bc12525900b6ee88\" id:\"b0c4fd468a9f2672d349b95e70e285d6044059ce9d554530bc12525900b6ee88\" pid:4900 exited_at:{seconds:1755045754 nanos:517578439}" Aug 13 00:42:34.518265 containerd[1545]: time="2025-08-13T00:42:34.518232451Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b0c4fd468a9f2672d349b95e70e285d6044059ce9d554530bc12525900b6ee88\" id:\"b0c4fd468a9f2672d349b95e70e285d6044059ce9d554530bc12525900b6ee88\" pid:4900 exited_at:{seconds:1755045754 nanos:517578439}" Aug 13 00:42:34.539729 kubelet[2778]: E0813 00:42:34.539665 2778 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 13 00:42:34.563774 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b0c4fd468a9f2672d349b95e70e285d6044059ce9d554530bc12525900b6ee88-rootfs.mount: Deactivated successfully. 
Aug 13 00:42:35.395139 kubelet[2778]: E0813 00:42:35.395035 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:42:35.402602 containerd[1545]: time="2025-08-13T00:42:35.402228761Z" level=info msg="CreateContainer within sandbox \"de6e4989526b75ad8545063ceba04f1a10b7c8027c29422b23229d7c3bb45deb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 00:42:35.429405 containerd[1545]: time="2025-08-13T00:42:35.429327616Z" level=info msg="Container 9ba8d07b6dd9bc8def9b9262ab38c8ee62fc377a75d51adba2ba8d4c20f6bf75: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:42:35.453256 containerd[1545]: time="2025-08-13T00:42:35.453151181Z" level=info msg="CreateContainer within sandbox \"de6e4989526b75ad8545063ceba04f1a10b7c8027c29422b23229d7c3bb45deb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9ba8d07b6dd9bc8def9b9262ab38c8ee62fc377a75d51adba2ba8d4c20f6bf75\"" Aug 13 00:42:35.456318 containerd[1545]: time="2025-08-13T00:42:35.456283992Z" level=info msg="StartContainer for \"9ba8d07b6dd9bc8def9b9262ab38c8ee62fc377a75d51adba2ba8d4c20f6bf75\"" Aug 13 00:42:35.462256 containerd[1545]: time="2025-08-13T00:42:35.461665579Z" level=info msg="connecting to shim 9ba8d07b6dd9bc8def9b9262ab38c8ee62fc377a75d51adba2ba8d4c20f6bf75" address="unix:///run/containerd/s/eeff6286eaf77fd89b7f469c81b54c15abd51c42c501e41280305bd11b1821de" protocol=ttrpc version=3 Aug 13 00:42:35.512864 systemd[1]: Started cri-containerd-9ba8d07b6dd9bc8def9b9262ab38c8ee62fc377a75d51adba2ba8d4c20f6bf75.scope - libcontainer container 9ba8d07b6dd9bc8def9b9262ab38c8ee62fc377a75d51adba2ba8d4c20f6bf75. Aug 13 00:42:35.637909 systemd[1]: cri-containerd-9ba8d07b6dd9bc8def9b9262ab38c8ee62fc377a75d51adba2ba8d4c20f6bf75.scope: Deactivated successfully. 
Aug 13 00:42:35.647584 containerd[1545]: time="2025-08-13T00:42:35.646788612Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9ba8d07b6dd9bc8def9b9262ab38c8ee62fc377a75d51adba2ba8d4c20f6bf75\" id:\"9ba8d07b6dd9bc8def9b9262ab38c8ee62fc377a75d51adba2ba8d4c20f6bf75\" pid:4944 exited_at:{seconds:1755045755 nanos:644510286}" Aug 13 00:42:35.648891 containerd[1545]: time="2025-08-13T00:42:35.648625959Z" level=info msg="received exit event container_id:\"9ba8d07b6dd9bc8def9b9262ab38c8ee62fc377a75d51adba2ba8d4c20f6bf75\" id:\"9ba8d07b6dd9bc8def9b9262ab38c8ee62fc377a75d51adba2ba8d4c20f6bf75\" pid:4944 exited_at:{seconds:1755045755 nanos:644510286}" Aug 13 00:42:35.670173 containerd[1545]: time="2025-08-13T00:42:35.670032826Z" level=info msg="StartContainer for \"9ba8d07b6dd9bc8def9b9262ab38c8ee62fc377a75d51adba2ba8d4c20f6bf75\" returns successfully" Aug 13 00:42:35.703806 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9ba8d07b6dd9bc8def9b9262ab38c8ee62fc377a75d51adba2ba8d4c20f6bf75-rootfs.mount: Deactivated successfully. 
Aug 13 00:42:36.405609 kubelet[2778]: E0813 00:42:36.405295 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:42:36.414187 containerd[1545]: time="2025-08-13T00:42:36.414077990Z" level=info msg="CreateContainer within sandbox \"de6e4989526b75ad8545063ceba04f1a10b7c8027c29422b23229d7c3bb45deb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 13 00:42:36.455115 containerd[1545]: time="2025-08-13T00:42:36.454860238Z" level=info msg="Container 5dae55e96bc74dc75462fe364de1c9d4ce1401a669628ce86b12651721b84cc5: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:42:36.505921 containerd[1545]: time="2025-08-13T00:42:36.505821108Z" level=info msg="CreateContainer within sandbox \"de6e4989526b75ad8545063ceba04f1a10b7c8027c29422b23229d7c3bb45deb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5dae55e96bc74dc75462fe364de1c9d4ce1401a669628ce86b12651721b84cc5\"" Aug 13 00:42:36.507304 containerd[1545]: time="2025-08-13T00:42:36.507243093Z" level=info msg="StartContainer for \"5dae55e96bc74dc75462fe364de1c9d4ce1401a669628ce86b12651721b84cc5\"" Aug 13 00:42:36.511560 containerd[1545]: time="2025-08-13T00:42:36.511036354Z" level=info msg="connecting to shim 5dae55e96bc74dc75462fe364de1c9d4ce1401a669628ce86b12651721b84cc5" address="unix:///run/containerd/s/eeff6286eaf77fd89b7f469c81b54c15abd51c42c501e41280305bd11b1821de" protocol=ttrpc version=3 Aug 13 00:42:36.564848 systemd[1]: Started cri-containerd-5dae55e96bc74dc75462fe364de1c9d4ce1401a669628ce86b12651721b84cc5.scope - libcontainer container 5dae55e96bc74dc75462fe364de1c9d4ce1401a669628ce86b12651721b84cc5. Aug 13 00:42:36.651110 systemd[1]: cri-containerd-5dae55e96bc74dc75462fe364de1c9d4ce1401a669628ce86b12651721b84cc5.scope: Deactivated successfully. 
Aug 13 00:42:36.657810 containerd[1545]: time="2025-08-13T00:42:36.657462065Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5dae55e96bc74dc75462fe364de1c9d4ce1401a669628ce86b12651721b84cc5\" id:\"5dae55e96bc74dc75462fe364de1c9d4ce1401a669628ce86b12651721b84cc5\" pid:4984 exited_at:{seconds:1755045756 nanos:655813481}" Aug 13 00:42:36.658436 containerd[1545]: time="2025-08-13T00:42:36.658313589Z" level=info msg="received exit event container_id:\"5dae55e96bc74dc75462fe364de1c9d4ce1401a669628ce86b12651721b84cc5\" id:\"5dae55e96bc74dc75462fe364de1c9d4ce1401a669628ce86b12651721b84cc5\" pid:4984 exited_at:{seconds:1755045756 nanos:655813481}" Aug 13 00:42:36.660086 containerd[1545]: time="2025-08-13T00:42:36.659997004Z" level=info msg="StartContainer for \"5dae55e96bc74dc75462fe364de1c9d4ce1401a669628ce86b12651721b84cc5\" returns successfully" Aug 13 00:42:36.704684 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5dae55e96bc74dc75462fe364de1c9d4ce1401a669628ce86b12651721b84cc5-rootfs.mount: Deactivated successfully. Aug 13 00:42:37.417318 kubelet[2778]: E0813 00:42:37.416759 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:42:37.422630 containerd[1545]: time="2025-08-13T00:42:37.422231971Z" level=info msg="CreateContainer within sandbox \"de6e4989526b75ad8545063ceba04f1a10b7c8027c29422b23229d7c3bb45deb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 13 00:42:37.466578 containerd[1545]: time="2025-08-13T00:42:37.464782544Z" level=info msg="Container e544fae19c3a1475479630ae5abf7d06cbeafd3a694766fcb9ca5260f3c21b3b: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:42:37.476156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1573177381.mount: Deactivated successfully. 
Aug 13 00:42:37.507141 containerd[1545]: time="2025-08-13T00:42:37.507063906Z" level=info msg="CreateContainer within sandbox \"de6e4989526b75ad8545063ceba04f1a10b7c8027c29422b23229d7c3bb45deb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e544fae19c3a1475479630ae5abf7d06cbeafd3a694766fcb9ca5260f3c21b3b\"" Aug 13 00:42:37.509220 containerd[1545]: time="2025-08-13T00:42:37.508999713Z" level=info msg="StartContainer for \"e544fae19c3a1475479630ae5abf7d06cbeafd3a694766fcb9ca5260f3c21b3b\"" Aug 13 00:42:37.511903 containerd[1545]: time="2025-08-13T00:42:37.511834042Z" level=info msg="connecting to shim e544fae19c3a1475479630ae5abf7d06cbeafd3a694766fcb9ca5260f3c21b3b" address="unix:///run/containerd/s/eeff6286eaf77fd89b7f469c81b54c15abd51c42c501e41280305bd11b1821de" protocol=ttrpc version=3 Aug 13 00:42:37.563398 systemd[1]: Started cri-containerd-e544fae19c3a1475479630ae5abf7d06cbeafd3a694766fcb9ca5260f3c21b3b.scope - libcontainer container e544fae19c3a1475479630ae5abf7d06cbeafd3a694766fcb9ca5260f3c21b3b. 
Aug 13 00:42:37.669879 containerd[1545]: time="2025-08-13T00:42:37.669716418Z" level=info msg="StartContainer for \"e544fae19c3a1475479630ae5abf7d06cbeafd3a694766fcb9ca5260f3c21b3b\" returns successfully" Aug 13 00:42:37.890026 containerd[1545]: time="2025-08-13T00:42:37.889922110Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e544fae19c3a1475479630ae5abf7d06cbeafd3a694766fcb9ca5260f3c21b3b\" id:\"64d03dbf529528fb6fbd8c3f6b45cf2c36a4076e97a7357204ef8b016258c94e\" pid:5051 exited_at:{seconds:1755045757 nanos:888889856}" Aug 13 00:42:38.395950 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Aug 13 00:42:38.428329 kubelet[2778]: E0813 00:42:38.428246 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:42:39.468407 containerd[1545]: time="2025-08-13T00:42:39.468335237Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e544fae19c3a1475479630ae5abf7d06cbeafd3a694766fcb9ca5260f3c21b3b\" id:\"fe6c103c2c6b81e24a856432d2fa6b71310947eda14a9f54ba071cf87acc6009\" pid:5133 exit_status:1 exited_at:{seconds:1755045759 nanos:467437434}" Aug 13 00:42:39.527140 kubelet[2778]: E0813 00:42:39.527093 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:42:41.332599 systemd-networkd[1464]: lxc_health: Link UP Aug 13 00:42:41.338346 systemd-networkd[1464]: lxc_health: Gained carrier Aug 13 00:42:41.533994 kubelet[2778]: E0813 00:42:41.533891 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:42:41.606503 kubelet[2778]: I0813 00:42:41.604025 2778 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jz7cl" podStartSLOduration=10.60397875 podStartE2EDuration="10.60397875s" podCreationTimestamp="2025-08-13 00:42:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:42:38.452000853 +0000 UTC m=+384.281795935" watchObservedRunningTime="2025-08-13 00:42:41.60397875 +0000 UTC m=+387.433773822" Aug 13 00:42:41.699503 containerd[1545]: time="2025-08-13T00:42:41.699422207Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e544fae19c3a1475479630ae5abf7d06cbeafd3a694766fcb9ca5260f3c21b3b\" id:\"481fd9af06930cba97b7d881ddc699f055026b9d04ff3f9c0764619d43c16c04\" pid:5549 exited_at:{seconds:1755045761 nanos:699034837}" Aug 13 00:42:42.449135 kubelet[2778]: E0813 00:42:42.449079 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:42:42.454898 systemd-networkd[1464]: lxc_health: Gained IPv6LL Aug 13 00:42:43.451089 kubelet[2778]: E0813 00:42:43.450145 2778 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:42:43.951201 containerd[1545]: time="2025-08-13T00:42:43.951137755Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e544fae19c3a1475479630ae5abf7d06cbeafd3a694766fcb9ca5260f3c21b3b\" id:\"22744a03ef6cb6d57c80a68eecd2628891df3ecf95684730f82cd6c8ccace5d9\" pid:5588 exited_at:{seconds:1755045763 nanos:950317723}" Aug 13 00:42:46.123554 containerd[1545]: time="2025-08-13T00:42:46.123221339Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e544fae19c3a1475479630ae5abf7d06cbeafd3a694766fcb9ca5260f3c21b3b\" 
id:\"8876ec0eabb32d845d889bbbd8144307a4f725f60980454aebecacb83bf8f5d6\" pid:5617 exited_at:{seconds:1755045766 nanos:122869608}" Aug 13 00:42:48.220143 containerd[1545]: time="2025-08-13T00:42:48.219996727Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e544fae19c3a1475479630ae5abf7d06cbeafd3a694766fcb9ca5260f3c21b3b\" id:\"e12f9777643eb8ce434cfb1b4aa94c1b3f836b4803bd554eba29085963882100\" pid:5641 exited_at:{seconds:1755045768 nanos:219423146}" Aug 13 00:42:48.310851 sshd[4787]: Connection closed by 147.75.109.163 port 60550 Aug 13 00:42:48.312205 sshd-session[4784]: pam_unix(sshd:session): session closed for user core Aug 13 00:42:48.319137 systemd[1]: sshd@60-172.237.133.249:22-147.75.109.163:60550.service: Deactivated successfully. Aug 13 00:42:48.322092 systemd[1]: session-61.scope: Deactivated successfully. Aug 13 00:42:48.324786 systemd-logind[1513]: Session 61 logged out. Waiting for processes to exit. Aug 13 00:42:48.326575 systemd-logind[1513]: Removed session 61.