Aug 13 01:51:48.086726 kernel: Linux version 6.12.40-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 21:42:48 -00 2025
Aug 13 01:51:48.086772 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21
Aug 13 01:51:48.086782 kernel: BIOS-provided physical RAM map:
Aug 13 01:51:48.086794 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Aug 13 01:51:48.086800 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Aug 13 01:51:48.086806 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Aug 13 01:51:48.086813 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Aug 13 01:51:48.086819 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Aug 13 01:51:48.086825 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Aug 13 01:51:48.086832 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Aug 13 01:51:48.086838 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Aug 13 01:51:48.086844 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Aug 13 01:51:48.086853 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Aug 13 01:51:48.086859 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Aug 13 01:51:48.086867 kernel: NX (Execute Disable) protection: active
Aug 13 01:51:48.086873 kernel: APIC: Static calls initialized
Aug 13 01:51:48.086880 kernel: SMBIOS 2.8 present.
Aug 13 01:51:48.086889 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Aug 13 01:51:48.086896 kernel: DMI: Memory slots populated: 1/1
Aug 13 01:51:48.086902 kernel: Hypervisor detected: KVM
Aug 13 01:51:48.086909 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Aug 13 01:51:48.086915 kernel: kvm-clock: using sched offset of 8029205878 cycles
Aug 13 01:51:48.086923 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Aug 13 01:51:48.086930 kernel: tsc: Detected 2000.000 MHz processor
Aug 13 01:51:48.086937 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 13 01:51:48.086944 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 13 01:51:48.086951 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Aug 13 01:51:48.086960 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Aug 13 01:51:48.086967 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 13 01:51:48.086974 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Aug 13 01:51:48.086981 kernel: Using GB pages for direct mapping
Aug 13 01:51:48.086988 kernel: ACPI: Early table checksum verification disabled
Aug 13 01:51:48.086995 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Aug 13 01:51:48.087002 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:51:48.087009 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:51:48.087016 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:51:48.087024 kernel: ACPI: FACS 0x000000007FFE0000 000040
Aug 13 01:51:48.087031 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:51:48.087038 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:51:48.087045 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:51:48.087055 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:51:48.087063 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Aug 13 01:51:48.087072 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Aug 13 01:51:48.087079 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Aug 13 01:51:48.087087 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Aug 13 01:51:48.087093 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Aug 13 01:51:48.087101 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Aug 13 01:51:48.087108 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Aug 13 01:51:48.087115 kernel: No NUMA configuration found
Aug 13 01:51:48.087122 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Aug 13 01:51:48.087132 kernel: NODE_DATA(0) allocated [mem 0x17fff6dc0-0x17fffdfff]
Aug 13 01:51:48.087139 kernel: Zone ranges:
Aug 13 01:51:48.087146 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 13 01:51:48.087153 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Aug 13 01:51:48.087161 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Aug 13 01:51:48.087168 kernel: Device empty
Aug 13 01:51:48.087175 kernel: Movable zone start for each node
Aug 13 01:51:48.087182 kernel: Early memory node ranges
Aug 13 01:51:48.087190 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Aug 13 01:51:48.087199 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Aug 13 01:51:48.087207 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Aug 13 01:51:48.087214 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Aug 13 01:51:48.087221 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 01:51:48.087228 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Aug 13 01:51:48.087235 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Aug 13 01:51:48.087243 kernel: ACPI: PM-Timer IO Port: 0x608
Aug 13 01:51:48.087250 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Aug 13 01:51:48.087258 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Aug 13 01:51:48.087267 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Aug 13 01:51:48.087274 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Aug 13 01:51:48.087281 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 13 01:51:48.087289 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Aug 13 01:51:48.087296 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Aug 13 01:51:48.087303 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 13 01:51:48.087310 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Aug 13 01:51:48.087317 kernel: TSC deadline timer available
Aug 13 01:51:48.087324 kernel: CPU topo: Max. logical packages: 1
Aug 13 01:51:48.087334 kernel: CPU topo: Max. logical dies: 1
Aug 13 01:51:48.087341 kernel: CPU topo: Max. dies per package: 1
Aug 13 01:51:48.087348 kernel: CPU topo: Max. threads per core: 1
Aug 13 01:51:48.087355 kernel: CPU topo: Num. cores per package: 2
Aug 13 01:51:48.087362 kernel: CPU topo: Num. threads per package: 2
Aug 13 01:51:48.087369 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Aug 13 01:51:48.087376 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Aug 13 01:51:48.087384 kernel: kvm-guest: KVM setup pv remote TLB flush
Aug 13 01:51:48.087391 kernel: kvm-guest: setup PV sched yield
Aug 13 01:51:48.087400 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Aug 13 01:51:48.087407 kernel: Booting paravirtualized kernel on KVM
Aug 13 01:51:48.087415 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 13 01:51:48.087422 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Aug 13 01:51:48.087429 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Aug 13 01:51:48.087436 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Aug 13 01:51:48.087443 kernel: pcpu-alloc: [0] 0 1
Aug 13 01:51:48.087450 kernel: kvm-guest: PV spinlocks enabled
Aug 13 01:51:48.087458 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Aug 13 01:51:48.087468 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21
Aug 13 01:51:48.087476 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 01:51:48.087483 kernel: random: crng init done
Aug 13 01:51:48.087491 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 13 01:51:48.087502 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 13 01:51:48.087509 kernel: Fallback order for Node 0: 0
Aug 13 01:51:48.087570 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Aug 13 01:51:48.087577 kernel: Policy zone: Normal
Aug 13 01:51:48.087587 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 01:51:48.087594 kernel: software IO TLB: area num 2.
Aug 13 01:51:48.087602 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Aug 13 01:51:48.087610 kernel: ftrace: allocating 40098 entries in 157 pages
Aug 13 01:51:48.087617 kernel: ftrace: allocated 157 pages with 5 groups
Aug 13 01:51:48.087624 kernel: Dynamic Preempt: voluntary
Aug 13 01:51:48.087631 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 01:51:48.087640 kernel: rcu: RCU event tracing is enabled.
Aug 13 01:51:48.087648 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Aug 13 01:51:48.087657 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 01:51:48.087665 kernel: Rude variant of Tasks RCU enabled.
Aug 13 01:51:48.087672 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 01:51:48.087679 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 01:51:48.087686 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Aug 13 01:51:48.087694 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 01:51:48.087708 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 01:51:48.087718 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 01:51:48.087726 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Aug 13 01:51:48.087733 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 13 01:51:48.087741 kernel: Console: colour VGA+ 80x25
Aug 13 01:51:48.087748 kernel: printk: legacy console [tty0] enabled
Aug 13 01:51:48.087758 kernel: printk: legacy console [ttyS0] enabled
Aug 13 01:51:48.087765 kernel: ACPI: Core revision 20240827
Aug 13 01:51:48.087773 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Aug 13 01:51:48.087781 kernel: APIC: Switch to symmetric I/O mode setup
Aug 13 01:51:48.087788 kernel: x2apic enabled
Aug 13 01:51:48.087798 kernel: APIC: Switched APIC routing to: physical x2apic
Aug 13 01:51:48.087806 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Aug 13 01:51:48.087813 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Aug 13 01:51:48.087821 kernel: kvm-guest: setup PV IPIs
Aug 13 01:51:48.087829 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Aug 13 01:51:48.087836 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Aug 13 01:51:48.087844 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000)
Aug 13 01:51:48.087852 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Aug 13 01:51:48.087859 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Aug 13 01:51:48.087869 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Aug 13 01:51:48.087876 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 13 01:51:48.087884 kernel: Spectre V2 : Mitigation: Retpolines
Aug 13 01:51:48.087892 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Aug 13 01:51:48.087899 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Aug 13 01:51:48.087907 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Aug 13 01:51:48.087915 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Aug 13 01:51:48.087922 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Aug 13 01:51:48.087933 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Aug 13 01:51:48.087941 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Aug 13 01:51:48.087948 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Aug 13 01:51:48.087956 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Aug 13 01:51:48.087963 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 13 01:51:48.087971 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 13 01:51:48.087979 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 13 01:51:48.087986 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Aug 13 01:51:48.087994 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 13 01:51:48.088003 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Aug 13 01:51:48.088011 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Aug 13 01:51:48.088018 kernel: Freeing SMP alternatives memory: 32K
Aug 13 01:51:48.088026 kernel: pid_max: default: 32768 minimum: 301
Aug 13 01:51:48.088033 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Aug 13 01:51:48.088041 kernel: landlock: Up and running.
Aug 13 01:51:48.088048 kernel: SELinux: Initializing.
Aug 13 01:51:48.088056 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 01:51:48.088063 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 01:51:48.088073 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Aug 13 01:51:48.088081 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Aug 13 01:51:48.088088 kernel: ... version: 0
Aug 13 01:51:48.088096 kernel: ... bit width: 48
Aug 13 01:51:48.088103 kernel: ... generic registers: 6
Aug 13 01:51:48.088111 kernel: ... value mask: 0000ffffffffffff
Aug 13 01:51:48.088118 kernel: ... max period: 00007fffffffffff
Aug 13 01:51:48.088126 kernel: ... fixed-purpose events: 0
Aug 13 01:51:48.088134 kernel: ... event mask: 000000000000003f
Aug 13 01:51:48.088143 kernel: signal: max sigframe size: 3376
Aug 13 01:51:48.088150 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 01:51:48.088158 kernel: rcu: Max phase no-delay instances is 400.
Aug 13 01:51:48.088166 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Aug 13 01:51:48.088173 kernel: smp: Bringing up secondary CPUs ...
Aug 13 01:51:48.088180 kernel: smpboot: x86: Booting SMP configuration:
Aug 13 01:51:48.088188 kernel: .... node #0, CPUs: #1
Aug 13 01:51:48.088195 kernel: smp: Brought up 1 node, 2 CPUs
Aug 13 01:51:48.088203 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS)
Aug 13 01:51:48.088213 kernel: Memory: 3961048K/4193772K available (14336K kernel code, 2430K rwdata, 9960K rodata, 54444K init, 2524K bss, 227296K reserved, 0K cma-reserved)
Aug 13 01:51:48.088220 kernel: devtmpfs: initialized
Aug 13 01:51:48.088228 kernel: x86/mm: Memory block size: 128MB
Aug 13 01:51:48.088235 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 01:51:48.088243 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Aug 13 01:51:48.088250 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 01:51:48.088258 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 01:51:48.088266 kernel: audit: initializing netlink subsys (disabled)
Aug 13 01:51:48.088273 kernel: audit: type=2000 audit(1755049904.483:1): state=initialized audit_enabled=0 res=1
Aug 13 01:51:48.088283 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 01:51:48.088291 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 13 01:51:48.088298 kernel: cpuidle: using governor menu
Aug 13 01:51:48.088306 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 01:51:48.088313 kernel: dca service started, version 1.12.1
Aug 13 01:51:48.088321 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Aug 13 01:51:48.088328 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Aug 13 01:51:48.088336 kernel: PCI: Using configuration type 1 for base access
Aug 13 01:51:48.088344 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 13 01:51:48.088353 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 01:51:48.088361 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Aug 13 01:51:48.088368 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 01:51:48.088375 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Aug 13 01:51:48.088383 kernel: ACPI: Added _OSI(Module Device)
Aug 13 01:51:48.088391 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 01:51:48.088398 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 01:51:48.088406 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 13 01:51:48.088413 kernel: ACPI: Interpreter enabled
Aug 13 01:51:48.088422 kernel: ACPI: PM: (supports S0 S3 S5)
Aug 13 01:51:48.088430 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 13 01:51:48.088438 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 13 01:51:48.088445 kernel: PCI: Using E820 reservations for host bridge windows
Aug 13 01:51:48.088453 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Aug 13 01:51:48.088460 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 13 01:51:48.088742 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 01:51:48.088861 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Aug 13 01:51:48.088984 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Aug 13 01:51:48.088995 kernel: PCI host bridge to bus 0000:00
Aug 13 01:51:48.089113 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 13 01:51:48.089219 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 13 01:51:48.089319 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 13 01:51:48.089425 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Aug 13 01:51:48.089594 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Aug 13 01:51:48.089712 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Aug 13 01:51:48.089812 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 13 01:51:48.089942 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Aug 13 01:51:48.090074 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Aug 13 01:51:48.090185 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Aug 13 01:51:48.090291 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Aug 13 01:51:48.090412 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Aug 13 01:51:48.090597 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Aug 13 01:51:48.090732 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Aug 13 01:51:48.090927 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f]
Aug 13 01:51:48.091074 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Aug 13 01:51:48.091184 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Aug 13 01:51:48.091303 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Aug 13 01:51:48.091421 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Aug 13 01:51:48.091583 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Aug 13 01:51:48.091696 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Aug 13 01:51:48.091802 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Aug 13 01:51:48.091946 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Aug 13 01:51:48.092072 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Aug 13 01:51:48.092223 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Aug 13 01:51:48.092336 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df]
Aug 13 01:51:48.092479 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff]
Aug 13 01:51:48.092670 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Aug 13 01:51:48.092819 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Aug 13 01:51:48.092833 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 13 01:51:48.092841 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 13 01:51:48.092854 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 13 01:51:48.092861 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 13 01:51:48.092868 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Aug 13 01:51:48.092876 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Aug 13 01:51:48.092883 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Aug 13 01:51:48.092890 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Aug 13 01:51:48.092898 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Aug 13 01:51:48.092905 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Aug 13 01:51:48.092912 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Aug 13 01:51:48.092922 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Aug 13 01:51:48.092929 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Aug 13 01:51:48.092936 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Aug 13 01:51:48.092944 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Aug 13 01:51:48.092951 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Aug 13 01:51:48.092958 kernel: iommu: Default domain type: Translated
Aug 13 01:51:48.092966 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 13 01:51:48.092974 kernel: PCI: Using ACPI for IRQ routing
Aug 13 01:51:48.092981 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 13 01:51:48.092991 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Aug 13 01:51:48.092998 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Aug 13 01:51:48.093109 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Aug 13 01:51:48.093259 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Aug 13 01:51:48.093369 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Aug 13 01:51:48.093379 kernel: vgaarb: loaded
Aug 13 01:51:48.093388 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Aug 13 01:51:48.093395 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Aug 13 01:51:48.093402 kernel: clocksource: Switched to clocksource kvm-clock
Aug 13 01:51:48.093414 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 01:51:48.093421 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 01:51:48.093429 kernel: pnp: PnP ACPI init
Aug 13 01:51:48.093574 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Aug 13 01:51:48.093586 kernel: pnp: PnP ACPI: found 5 devices
Aug 13 01:51:48.093594 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 13 01:51:48.093602 kernel: NET: Registered PF_INET protocol family
Aug 13 01:51:48.093609 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 01:51:48.093620 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 13 01:51:48.093628 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 01:51:48.093635 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 13 01:51:48.093643 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 13 01:51:48.093650 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 13 01:51:48.093657 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 01:51:48.093665 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 01:51:48.093672 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 01:51:48.093679 kernel: NET: Registered PF_XDP protocol family
Aug 13 01:51:48.093782 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 13 01:51:48.093920 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 13 01:51:48.094049 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 13 01:51:48.094202 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Aug 13 01:51:48.094322 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Aug 13 01:51:48.094421 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Aug 13 01:51:48.094431 kernel: PCI: CLS 0 bytes, default 64
Aug 13 01:51:48.094439 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Aug 13 01:51:48.094451 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Aug 13 01:51:48.094459 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Aug 13 01:51:48.094466 kernel: Initialise system trusted keyrings
Aug 13 01:51:48.094474 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 13 01:51:48.094481 kernel: Key type asymmetric registered
Aug 13 01:51:48.094499 kernel: Asymmetric key parser 'x509' registered
Aug 13 01:51:48.094521 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Aug 13 01:51:48.094543 kernel: io scheduler mq-deadline registered
Aug 13 01:51:48.094551 kernel: io scheduler kyber registered
Aug 13 01:51:48.094562 kernel: io scheduler bfq registered
Aug 13 01:51:48.094569 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 13 01:51:48.094578 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Aug 13 01:51:48.094586 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Aug 13 01:51:48.094593 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 01:51:48.094601 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 13 01:51:48.094608 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 13 01:51:48.094616 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 13 01:51:48.094623 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 13 01:51:48.094633 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Aug 13 01:51:48.094761 kernel: rtc_cmos 00:03: RTC can wake from S4
Aug 13 01:51:48.094867 kernel: rtc_cmos 00:03: registered as rtc0
Aug 13 01:51:48.094967 kernel: rtc_cmos 00:03: setting system clock to 2025-08-13T01:51:47 UTC (1755049907)
Aug 13 01:51:48.095075 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Aug 13 01:51:48.095085 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Aug 13 01:51:48.095093 kernel: NET: Registered PF_INET6 protocol family
Aug 13 01:51:48.095100 kernel: Segment Routing with IPv6
Aug 13 01:51:48.095111 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 01:51:48.095119 kernel: NET: Registered PF_PACKET protocol family
Aug 13 01:51:48.095127 kernel: Key type dns_resolver registered
Aug 13 01:51:48.095134 kernel: IPI shorthand broadcast: enabled
Aug 13 01:51:48.095142 kernel: sched_clock: Marking stable (4566005572, 231814089)->(4887630896, -89811235)
Aug 13 01:51:48.095149 kernel: registered taskstats version 1
Aug 13 01:51:48.095156 kernel: Loading compiled-in X.509 certificates
Aug 13 01:51:48.095164 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.40-flatcar: dee0b464d3f7f8d09744a2392f69dde258bc95c0'
Aug 13 01:51:48.095171 kernel: Demotion targets for Node 0: null
Aug 13 01:51:48.095180 kernel: Key type .fscrypt registered
Aug 13 01:51:48.095188 kernel: Key type fscrypt-provisioning registered
Aug 13 01:51:48.095195 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 13 01:51:48.095202 kernel: ima: Allocated hash algorithm: sha1
Aug 13 01:51:48.095209 kernel: ima: No architecture policies found
Aug 13 01:51:48.095217 kernel: clk: Disabling unused clocks
Aug 13 01:51:48.095224 kernel: Warning: unable to open an initial console.
Aug 13 01:51:48.095232 kernel: Freeing unused kernel image (initmem) memory: 54444K
Aug 13 01:51:48.095239 kernel: Write protecting the kernel read-only data: 24576k
Aug 13 01:51:48.095248 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K
Aug 13 01:51:48.095256 kernel: Run /init as init process
Aug 13 01:51:48.095263 kernel: with arguments:
Aug 13 01:51:48.095270 kernel: /init
Aug 13 01:51:48.095277 kernel: with environment:
Aug 13 01:51:48.095285 kernel: HOME=/
Aug 13 01:51:48.095306 kernel: TERM=linux
Aug 13 01:51:48.095315 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 01:51:48.095324 systemd[1]: Successfully made /usr/ read-only.
Aug 13 01:51:48.095338 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 13 01:51:48.095347 systemd[1]: Detected virtualization kvm.
Aug 13 01:51:48.095355 systemd[1]: Detected architecture x86-64.
Aug 13 01:51:48.095363 systemd[1]: Running in initrd.
Aug 13 01:51:48.095371 systemd[1]: No hostname configured, using default hostname.
Aug 13 01:51:48.095387 systemd[1]: Hostname set to .
Aug 13 01:51:48.095403 systemd[1]: Initializing machine ID from random generator.
Aug 13 01:51:48.095421 systemd[1]: Queued start job for default target initrd.target.
Aug 13 01:51:48.095435 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 01:51:48.095445 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 01:51:48.095454 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 13 01:51:48.095462 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 01:51:48.095470 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 13 01:51:48.095479 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 13 01:51:48.095491 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 13 01:51:48.095499 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 13 01:51:48.095527 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 01:51:48.095536 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 01:51:48.095544 systemd[1]: Reached target paths.target - Path Units.
Aug 13 01:51:48.095553 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 01:51:48.095561 systemd[1]: Reached target swap.target - Swaps.
Aug 13 01:51:48.095569 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 01:51:48.095580 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 01:51:48.095588 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 01:51:48.095596 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 13 01:51:48.095630 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Aug 13 01:51:48.095640 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 01:51:48.095649 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 01:51:48.095657 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 01:51:48.095665 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 01:51:48.095676 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 13 01:51:48.095685 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 01:51:48.095693 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 13 01:51:48.095702 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Aug 13 01:51:48.095710 systemd[1]: Starting systemd-fsck-usr.service... Aug 13 01:51:48.095721 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 01:51:48.095729 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 01:51:48.095737 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 01:51:48.095783 systemd-journald[206]: Collecting audit messages is disabled. Aug 13 01:51:48.095833 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 13 01:51:48.095868 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 01:51:48.095877 systemd[1]: Finished systemd-fsck-usr.service. 
Aug 13 01:51:48.095886 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 01:51:48.095897 systemd-journald[206]: Journal started Aug 13 01:51:48.095918 systemd-journald[206]: Runtime Journal (/run/log/journal/2b6f71b7df5e48968ed8df35e7724d5b) is 8M, max 78.5M, 70.5M free. Aug 13 01:51:48.041711 systemd-modules-load[207]: Inserted module 'overlay' Aug 13 01:51:48.152788 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 13 01:51:48.153075 kernel: Bridge firewalling registered Aug 13 01:51:48.151443 systemd-modules-load[207]: Inserted module 'br_netfilter' Aug 13 01:51:48.212544 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 01:51:48.214229 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 01:51:48.216070 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 01:51:48.221203 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 01:51:48.223645 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 01:51:48.226770 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 01:51:48.231734 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 01:51:48.240397 systemd-tmpfiles[224]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Aug 13 01:51:48.256757 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 01:51:48.259022 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 01:51:48.260447 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Aug 13 01:51:48.267624 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 01:51:48.271637 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 13 01:51:48.276428 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 01:51:48.279001 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 01:51:48.293105 dracut-cmdline[242]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21 Aug 13 01:51:48.323827 systemd-resolved[243]: Positive Trust Anchors: Aug 13 01:51:48.323841 systemd-resolved[243]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 01:51:48.323872 systemd-resolved[243]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 01:51:48.327920 systemd-resolved[243]: Defaulting to hostname 'linux'. Aug 13 01:51:48.331203 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 01:51:48.332148 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Aug 13 01:51:48.382576 kernel: SCSI subsystem initialized Aug 13 01:51:48.391533 kernel: Loading iSCSI transport class v2.0-870. Aug 13 01:51:48.402564 kernel: iscsi: registered transport (tcp) Aug 13 01:51:48.422541 kernel: iscsi: registered transport (qla4xxx) Aug 13 01:51:48.422581 kernel: QLogic iSCSI HBA Driver Aug 13 01:51:48.445728 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 13 01:51:48.463172 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 01:51:48.466235 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 01:51:48.560614 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 13 01:51:48.563065 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 13 01:51:48.624551 kernel: raid6: avx2x4 gen() 24769 MB/s Aug 13 01:51:48.642568 kernel: raid6: avx2x2 gen() 24014 MB/s Aug 13 01:51:48.660932 kernel: raid6: avx2x1 gen() 14126 MB/s Aug 13 01:51:48.661061 kernel: raid6: using algorithm avx2x4 gen() 24769 MB/s Aug 13 01:51:48.679996 kernel: raid6: .... xor() 3204 MB/s, rmw enabled Aug 13 01:51:48.680114 kernel: raid6: using avx2x2 recovery algorithm Aug 13 01:51:48.717574 kernel: xor: automatically using best checksumming function avx Aug 13 01:51:48.901555 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 13 01:51:48.908781 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 13 01:51:48.910912 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 01:51:48.941542 systemd-udevd[453]: Using default interface naming scheme 'v255'. Aug 13 01:51:48.947424 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 01:51:48.950713 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Aug 13 01:51:48.974847 dracut-pre-trigger[461]: rd.md=0: removing MD RAID activation Aug 13 01:51:49.002756 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 01:51:49.004850 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 01:51:49.069649 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 01:51:49.073636 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 13 01:51:49.190550 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues Aug 13 01:51:49.196436 kernel: scsi host0: Virtio SCSI HBA Aug 13 01:51:49.202581 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Aug 13 01:51:49.214546 kernel: cryptd: max_cpu_qlen set to 1000 Aug 13 01:51:49.224718 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 01:51:49.226617 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 01:51:49.244592 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 01:51:49.247121 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 01:51:49.250791 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Aug 13 01:51:49.273564 kernel: libata version 3.00 loaded. 
Aug 13 01:51:49.284532 kernel: AES CTR mode by8 optimization enabled Aug 13 01:51:49.284582 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Aug 13 01:51:49.430543 kernel: ahci 0000:00:1f.2: version 3.0 Aug 13 01:51:49.431627 kernel: sd 0:0:0:0: Power-on or device reset occurred Aug 13 01:51:49.433688 kernel: sd 0:0:0:0: [sda] 9297920 512-byte logical blocks: (4.76 GB/4.43 GiB) Aug 13 01:51:49.433837 kernel: sd 0:0:0:0: [sda] Write Protect is off Aug 13 01:51:49.433980 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Aug 13 01:51:49.434116 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Aug 13 01:51:49.434249 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Aug 13 01:51:49.452275 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Aug 13 01:51:49.452653 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Aug 13 01:51:49.452801 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Aug 13 01:51:49.458191 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 13 01:51:49.458240 kernel: GPT:9289727 != 9297919 Aug 13 01:51:49.458252 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 13 01:51:49.458263 kernel: GPT:9289727 != 9297919 Aug 13 01:51:49.458272 kernel: GPT: Use GNU Parted to correct GPT errors. 
Aug 13 01:51:49.458281 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 01:51:49.459569 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Aug 13 01:51:49.483539 kernel: scsi host1: ahci Aug 13 01:51:49.483778 kernel: scsi host2: ahci Aug 13 01:51:49.483932 kernel: scsi host3: ahci Aug 13 01:51:49.484576 kernel: scsi host4: ahci Aug 13 01:51:49.484756 kernel: scsi host5: ahci Aug 13 01:51:49.485535 kernel: scsi host6: ahci Aug 13 01:51:49.485694 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 lpm-pol 0 Aug 13 01:51:49.485713 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 lpm-pol 0 Aug 13 01:51:49.485723 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 lpm-pol 0 Aug 13 01:51:49.485738 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 lpm-pol 0 Aug 13 01:51:49.485748 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 lpm-pol 0 Aug 13 01:51:49.485758 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 lpm-pol 0 Aug 13 01:51:49.561089 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Aug 13 01:51:49.604852 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 01:51:49.615194 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Aug 13 01:51:49.630718 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Aug 13 01:51:49.638134 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Aug 13 01:51:49.638794 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Aug 13 01:51:49.641903 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 13 01:51:49.669370 disk-uuid[624]: Primary Header is updated. 
Aug 13 01:51:49.669370 disk-uuid[624]: Secondary Entries is updated. Aug 13 01:51:49.669370 disk-uuid[624]: Secondary Header is updated. Aug 13 01:51:49.680176 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 01:51:49.693536 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 01:51:49.798127 kernel: ata2: SATA link down (SStatus 0 SControl 300) Aug 13 01:51:49.798216 kernel: ata1: SATA link down (SStatus 0 SControl 300) Aug 13 01:51:49.798229 kernel: ata5: SATA link down (SStatus 0 SControl 300) Aug 13 01:51:49.798250 kernel: ata4: SATA link down (SStatus 0 SControl 300) Aug 13 01:51:49.798261 kernel: ata6: SATA link down (SStatus 0 SControl 300) Aug 13 01:51:49.801843 kernel: ata3: SATA link down (SStatus 0 SControl 300) Aug 13 01:51:49.867423 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 13 01:51:49.885206 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 01:51:49.886667 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 01:51:49.887962 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 01:51:49.889690 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 13 01:51:49.916395 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 13 01:51:50.712133 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 01:51:50.714333 disk-uuid[625]: The operation has completed successfully. Aug 13 01:51:50.777200 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 01:51:50.777332 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 13 01:51:50.799265 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 13 01:51:50.816013 sh[652]: Success Aug 13 01:51:50.835037 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Aug 13 01:51:50.835151 kernel: device-mapper: uevent: version 1.0.3 Aug 13 01:51:50.835918 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Aug 13 01:51:50.847536 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Aug 13 01:51:50.896016 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 13 01:51:50.900019 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 13 01:51:50.913489 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Aug 13 01:51:50.924942 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Aug 13 01:51:50.925000 kernel: BTRFS: device fsid 0c0338fb-9434-41c1-99a2-737cbe2351c4 devid 1 transid 44 /dev/mapper/usr (254:0) scanned by mount (664) Aug 13 01:51:50.931846 kernel: BTRFS info (device dm-0): first mount of filesystem 0c0338fb-9434-41c1-99a2-737cbe2351c4 Aug 13 01:51:50.931886 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Aug 13 01:51:50.931906 kernel: BTRFS info (device dm-0): using free-space-tree Aug 13 01:51:50.941811 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 13 01:51:50.942999 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Aug 13 01:51:50.943708 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 13 01:51:50.944575 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 13 01:51:50.948233 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Aug 13 01:51:50.981739 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (701) Aug 13 01:51:50.981825 kernel: BTRFS info (device sda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 01:51:50.983918 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 01:51:50.985682 kernel: BTRFS info (device sda6): using free-space-tree Aug 13 01:51:50.995546 kernel: BTRFS info (device sda6): last unmount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 01:51:50.996369 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 13 01:51:50.999565 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 13 01:51:51.047734 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 01:51:51.051684 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 01:51:51.144578 systemd-networkd[833]: lo: Link UP Aug 13 01:51:51.144599 systemd-networkd[833]: lo: Gained carrier Aug 13 01:51:51.146533 systemd-networkd[833]: Enumeration completed Aug 13 01:51:51.146901 systemd-networkd[833]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 01:51:51.146906 systemd-networkd[833]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 01:51:51.147724 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 01:51:51.148411 systemd[1]: Reached target network.target - Network. Aug 13 01:51:51.203402 systemd-networkd[833]: eth0: Link UP Aug 13 01:51:51.203693 systemd-networkd[833]: eth0: Gained carrier Aug 13 01:51:51.203721 systemd-networkd[833]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Aug 13 01:51:51.507426 ignition[764]: Ignition 2.21.0 Aug 13 01:51:51.507443 ignition[764]: Stage: fetch-offline Aug 13 01:51:51.507485 ignition[764]: no configs at "/usr/lib/ignition/base.d" Aug 13 01:51:51.507495 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 01:51:51.512365 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 01:51:51.507663 ignition[764]: parsed url from cmdline: "" Aug 13 01:51:51.507670 ignition[764]: no config URL provided Aug 13 01:51:51.507680 ignition[764]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 01:51:51.514225 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Aug 13 01:51:51.507695 ignition[764]: no config at "/usr/lib/ignition/user.ign" Aug 13 01:51:51.507703 ignition[764]: failed to fetch config: resource requires networking Aug 13 01:51:51.508304 ignition[764]: Ignition finished successfully Aug 13 01:51:51.585795 ignition[841]: Ignition 2.21.0 Aug 13 01:51:51.585809 ignition[841]: Stage: fetch Aug 13 01:51:51.586040 ignition[841]: no configs at "/usr/lib/ignition/base.d" Aug 13 01:51:51.586055 ignition[841]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 01:51:51.586156 ignition[841]: parsed url from cmdline: "" Aug 13 01:51:51.586160 ignition[841]: no config URL provided Aug 13 01:51:51.586166 ignition[841]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 01:51:51.586174 ignition[841]: no config at "/usr/lib/ignition/user.ign" Aug 13 01:51:51.586207 ignition[841]: PUT http://169.254.169.254/v1/token: attempt #1 Aug 13 01:51:51.587067 ignition[841]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Aug 13 01:51:51.787247 ignition[841]: PUT http://169.254.169.254/v1/token: attempt #2 Aug 13 01:51:51.787544 ignition[841]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Aug 
13 01:51:51.847629 systemd-networkd[833]: eth0: DHCPv4 address 172.236.112.106/24, gateway 172.236.112.1 acquired from 23.40.196.251 Aug 13 01:51:52.187730 ignition[841]: PUT http://169.254.169.254/v1/token: attempt #3 Aug 13 01:51:52.295058 ignition[841]: PUT result: OK Aug 13 01:51:52.295145 ignition[841]: GET http://169.254.169.254/v1/user-data: attempt #1 Aug 13 01:51:52.424596 ignition[841]: GET result: OK Aug 13 01:51:52.424793 ignition[841]: parsing config with SHA512: 8a6b0921555cdc867e9ea15282338957406a0f216e4963312090021d60880654c88ec639752e28eb46d374a8c5282c906ed975faad9eca084a318a81bb02c641 Aug 13 01:51:52.428722 unknown[841]: fetched base config from "system" Aug 13 01:51:52.428736 unknown[841]: fetched base config from "system" Aug 13 01:51:52.428978 ignition[841]: fetch: fetch complete Aug 13 01:51:52.428742 unknown[841]: fetched user config from "akamai" Aug 13 01:51:52.428983 ignition[841]: fetch: fetch passed Aug 13 01:51:52.432586 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Aug 13 01:51:52.429025 ignition[841]: Ignition finished successfully Aug 13 01:51:52.456755 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Aug 13 01:51:52.572717 ignition[849]: Ignition 2.21.0 Aug 13 01:51:52.572735 ignition[849]: Stage: kargs Aug 13 01:51:52.572899 ignition[849]: no configs at "/usr/lib/ignition/base.d" Aug 13 01:51:52.572910 ignition[849]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 01:51:52.574124 ignition[849]: kargs: kargs passed Aug 13 01:51:52.576088 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 13 01:51:52.574176 ignition[849]: Ignition finished successfully Aug 13 01:51:52.578651 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Aug 13 01:51:52.703055 ignition[856]: Ignition 2.21.0 Aug 13 01:51:52.703082 ignition[856]: Stage: disks Aug 13 01:51:52.703319 ignition[856]: no configs at "/usr/lib/ignition/base.d" Aug 13 01:51:52.703333 ignition[856]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 01:51:52.705523 ignition[856]: disks: disks passed Aug 13 01:51:52.705589 ignition[856]: Ignition finished successfully Aug 13 01:51:52.708425 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 13 01:51:52.709662 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 13 01:51:52.710566 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 13 01:51:52.711909 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 01:51:52.713286 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 01:51:52.714378 systemd[1]: Reached target basic.target - Basic System. Aug 13 01:51:52.716894 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 13 01:51:52.743749 systemd-fsck[865]: ROOT: clean, 15/553520 files, 52789/553472 blocks Aug 13 01:51:52.745890 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 13 01:51:52.749658 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 13 01:51:52.899544 kernel: EXT4-fs (sda9): mounted filesystem 069caac6-7833-4acd-8940-01a7ff7d1281 r/w with ordered data mode. Quota mode: none. Aug 13 01:51:52.900206 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 13 01:51:52.901455 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 13 01:51:52.903594 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 01:51:52.905673 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 13 01:51:52.907815 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Aug 13 01:51:52.908982 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 13 01:51:52.909019 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 01:51:52.919446 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 13 01:51:52.920921 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Aug 13 01:51:52.931673 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (873) Aug 13 01:51:52.937206 kernel: BTRFS info (device sda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 01:51:52.937266 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 01:51:52.937280 kernel: BTRFS info (device sda6): using free-space-tree Aug 13 01:51:52.938091 systemd-networkd[833]: eth0: Gained IPv6LL Aug 13 01:51:52.947210 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 13 01:51:52.992306 initrd-setup-root[897]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 01:51:52.997536 initrd-setup-root[904]: cut: /sysroot/etc/group: No such file or directory Aug 13 01:51:53.002485 initrd-setup-root[911]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 01:51:53.008113 initrd-setup-root[918]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 01:51:53.149870 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 13 01:51:53.152157 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 13 01:51:53.153322 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 13 01:51:53.184697 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 13 01:51:53.187260 kernel: BTRFS info (device sda6): last unmount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 01:51:53.205967 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Aug 13 01:51:53.217034 ignition[986]: INFO : Ignition 2.21.0 Aug 13 01:51:53.217034 ignition[986]: INFO : Stage: mount Aug 13 01:51:53.218419 ignition[986]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 01:51:53.218419 ignition[986]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 01:51:53.219946 ignition[986]: INFO : mount: mount passed Aug 13 01:51:53.219946 ignition[986]: INFO : Ignition finished successfully Aug 13 01:51:53.221270 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 13 01:51:53.224349 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 13 01:51:53.902740 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 01:51:53.939569 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (997) Aug 13 01:51:53.943692 kernel: BTRFS info (device sda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 01:51:53.943757 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 01:51:53.945773 kernel: BTRFS info (device sda6): using free-space-tree Aug 13 01:51:53.953598 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Aug 13 01:51:54.051459 ignition[1014]: INFO : Ignition 2.21.0 Aug 13 01:51:54.051459 ignition[1014]: INFO : Stage: files Aug 13 01:51:54.053605 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 01:51:54.053605 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 01:51:54.053605 ignition[1014]: DEBUG : files: compiled without relabeling support, skipping Aug 13 01:51:54.056389 ignition[1014]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 01:51:54.056389 ignition[1014]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 01:51:54.056389 ignition[1014]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 01:51:54.056389 ignition[1014]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 01:51:54.060226 ignition[1014]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 01:51:54.060226 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 13 01:51:54.060226 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Aug 13 01:51:54.056750 unknown[1014]: wrote ssh authorized keys file for user: core Aug 13 01:51:54.603403 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 13 01:51:55.075037 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 13 01:51:55.075037 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 13 01:51:55.075037 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Aug 13 01:51:55.300905 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Aug 13 01:51:55.414396 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 13 01:51:55.414396 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Aug 13 01:51:55.416403 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Aug 13 01:51:55.416403 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 13 01:51:55.416403 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 13 01:51:55.416403 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 01:51:55.416403 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 01:51:55.416403 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 01:51:55.416403 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 01:51:55.423684 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 01:51:55.423684 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 01:51:55.423684 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: 
op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 01:51:55.423684 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 01:51:55.423684 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 01:51:55.423684 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Aug 13 01:51:56.150141 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Aug 13 01:51:57.269144 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 01:51:57.269144 ignition[1014]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Aug 13 01:51:57.272184 ignition[1014]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 01:51:57.272184 ignition[1014]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 01:51:57.272184 ignition[1014]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Aug 13 01:51:57.272184 ignition[1014]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Aug 13 01:51:57.272184 ignition[1014]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Aug 13 01:51:57.278293 ignition[1014]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Aug 13 01:51:57.278293 ignition[1014]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Aug 13 01:51:57.278293 ignition[1014]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Aug 13 01:51:57.278293 ignition[1014]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Aug 13 01:51:57.278293 ignition[1014]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 01:51:57.278293 ignition[1014]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 01:51:57.278293 ignition[1014]: INFO : files: files passed
Aug 13 01:51:57.278293 ignition[1014]: INFO : Ignition finished successfully
Aug 13 01:51:57.277528 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 13 01:51:57.281684 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 13 01:51:57.284658 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 13 01:51:57.313012 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 13 01:51:57.313605 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 13 01:51:57.323269 initrd-setup-root-after-ignition[1043]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 01:51:57.323269 initrd-setup-root-after-ignition[1043]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 01:51:57.325881 initrd-setup-root-after-ignition[1047]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 01:51:57.327379 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 01:51:57.328814 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 13 01:51:57.330801 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 13 01:51:57.387592 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 13 01:51:57.387735 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 13 01:51:57.389129 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 13 01:51:57.390222 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 13 01:51:57.391490 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 13 01:51:57.393629 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 13 01:51:57.428059 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 01:51:57.430670 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 13 01:51:57.450430 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 13 01:51:57.451241 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 01:51:57.452625 systemd[1]: Stopped target timers.target - Timer Units.
Aug 13 01:51:57.453828 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 13 01:51:57.453980 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 01:51:57.455272 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 13 01:51:57.456065 systemd[1]: Stopped target basic.target - Basic System.
Aug 13 01:51:57.457591 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 13 01:51:57.458929 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 01:51:57.460116 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 13 01:51:57.461541 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Aug 13 01:51:57.462891 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 13 01:51:57.464434 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 01:51:57.465624 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 13 01:51:57.466903 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 13 01:51:57.468162 systemd[1]: Stopped target swap.target - Swaps.
Aug 13 01:51:57.469366 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 13 01:51:57.469583 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 01:51:57.470764 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 13 01:51:57.471549 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 01:51:57.472711 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 13 01:51:57.472878 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 01:51:57.473970 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 13 01:51:57.474104 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 13 01:51:57.475697 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 13 01:51:57.475820 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 01:51:57.476586 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 13 01:51:57.476725 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 13 01:51:57.478630 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 13 01:51:57.482716 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 13 01:51:57.484000 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 13 01:51:57.484184 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 01:51:57.486143 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 13 01:51:57.486242 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 01:51:57.490849 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 13 01:51:57.507969 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 13 01:51:57.534977 ignition[1067]: INFO : Ignition 2.21.0
Aug 13 01:51:57.534977 ignition[1067]: INFO : Stage: umount
Aug 13 01:51:57.534977 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 01:51:57.534977 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 01:51:57.534977 ignition[1067]: INFO : umount: umount passed
Aug 13 01:51:57.534977 ignition[1067]: INFO : Ignition finished successfully
Aug 13 01:51:57.531135 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 13 01:51:57.531287 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 13 01:51:57.537790 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 13 01:51:57.537850 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 13 01:51:57.538788 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 13 01:51:57.538839 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 13 01:51:57.539394 systemd[1]: ignition-fetch.service: Deactivated successfully.
Aug 13 01:51:57.539444 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Aug 13 01:51:57.541753 systemd[1]: Stopped target network.target - Network.
Aug 13 01:51:57.542386 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 13 01:51:57.542439 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 01:51:57.543001 systemd[1]: Stopped target paths.target - Path Units.
Aug 13 01:51:57.543477 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 13 01:51:57.548041 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 01:51:57.549295 systemd[1]: Stopped target slices.target - Slice Units.
Aug 13 01:51:57.549809 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 13 01:51:57.551648 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 13 01:51:57.551707 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 01:51:57.552290 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 13 01:51:57.552340 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 01:51:57.553550 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 13 01:51:57.553606 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 13 01:51:57.554649 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 13 01:51:57.554695 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 13 01:51:57.555987 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 13 01:51:57.557196 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 13 01:51:57.559679 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 13 01:51:57.560295 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 13 01:51:57.560398 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 13 01:51:57.562868 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 13 01:51:57.562974 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 13 01:51:57.566470 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Aug 13 01:51:57.566912 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 13 01:51:57.567025 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 13 01:51:57.568851 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Aug 13 01:51:57.570119 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Aug 13 01:51:57.570744 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 13 01:51:57.570785 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 01:51:57.571953 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 13 01:51:57.572006 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 13 01:51:57.574117 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 13 01:51:57.575841 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 13 01:51:57.575934 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 01:51:57.577177 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 01:51:57.577225 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 13 01:51:57.580694 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 13 01:51:57.580742 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 13 01:51:57.582604 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 13 01:51:57.582651 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 01:51:57.584050 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 01:51:57.587364 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Aug 13 01:51:57.587433 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Aug 13 01:51:57.601167 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 13 01:51:57.602672 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 01:51:57.604582 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 13 01:51:57.604694 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 13 01:51:57.606216 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 13 01:51:57.606283 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 13 01:51:57.607625 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 13 01:51:57.607661 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 01:51:57.608825 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 13 01:51:57.608877 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 01:51:57.610617 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 13 01:51:57.610665 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 13 01:51:57.611683 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 01:51:57.611736 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 01:51:57.614618 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 13 01:51:57.615573 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Aug 13 01:51:57.615626 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Aug 13 01:51:57.617612 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 13 01:51:57.617666 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 01:51:57.620503 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Aug 13 01:51:57.620584 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 01:51:57.621699 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 13 01:51:57.621744 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 01:51:57.622544 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 01:51:57.622591 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 01:51:57.625531 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Aug 13 01:51:57.625590 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Aug 13 01:51:57.625632 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Aug 13 01:51:57.625676 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Aug 13 01:51:57.631711 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 13 01:51:57.631836 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 13 01:51:57.633186 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 13 01:51:57.635148 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 13 01:51:57.668980 systemd[1]: Switching root.
Aug 13 01:51:57.713140 systemd-journald[206]: Journal stopped
Aug 13 01:51:59.123490 systemd-journald[206]: Received SIGTERM from PID 1 (systemd).
Aug 13 01:51:59.123622 kernel: SELinux: policy capability network_peer_controls=1
Aug 13 01:51:59.123645 kernel: SELinux: policy capability open_perms=1
Aug 13 01:51:59.123660 kernel: SELinux: policy capability extended_socket_class=1
Aug 13 01:51:59.123669 kernel: SELinux: policy capability always_check_network=0
Aug 13 01:51:59.123678 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 13 01:51:59.123688 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 13 01:51:59.123698 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 13 01:51:59.123708 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 13 01:51:59.123719 kernel: SELinux: policy capability userspace_initial_context=0
Aug 13 01:51:59.123734 kernel: audit: type=1403 audit(1755049917.886:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 13 01:51:59.123744 systemd[1]: Successfully loaded SELinux policy in 59.391ms.
Aug 13 01:51:59.123756 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.771ms.
Aug 13 01:51:59.123767 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 13 01:51:59.123779 systemd[1]: Detected virtualization kvm.
Aug 13 01:51:59.123792 systemd[1]: Detected architecture x86-64.
Aug 13 01:51:59.123802 systemd[1]: Detected first boot.
Aug 13 01:51:59.123812 systemd[1]: Initializing machine ID from random generator.
Aug 13 01:51:59.123823 zram_generator::config[1112]: No configuration found.
Aug 13 01:51:59.123833 kernel: Guest personality initialized and is inactive
Aug 13 01:51:59.123847 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Aug 13 01:51:59.123857 kernel: Initialized host personality
Aug 13 01:51:59.123870 kernel: NET: Registered PF_VSOCK protocol family
Aug 13 01:51:59.123881 systemd[1]: Populated /etc with preset unit settings.
Aug 13 01:51:59.123893 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Aug 13 01:51:59.123904 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Aug 13 01:51:59.123914 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Aug 13 01:51:59.123924 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug 13 01:51:59.123934 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 13 01:51:59.123947 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 13 01:51:59.123958 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 13 01:51:59.123968 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 13 01:51:59.123979 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 13 01:51:59.123990 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 13 01:51:59.124000 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 13 01:51:59.124010 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 13 01:51:59.124022 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 01:51:59.124033 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 01:51:59.124044 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 13 01:51:59.124055 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 13 01:51:59.124068 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 13 01:51:59.124079 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 01:51:59.124090 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Aug 13 01:51:59.124101 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 01:51:59.124114 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 01:51:59.124124 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Aug 13 01:51:59.124135 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Aug 13 01:51:59.124146 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Aug 13 01:51:59.124157 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 13 01:51:59.124168 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 01:51:59.124180 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 01:51:59.124190 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 01:51:59.124203 systemd[1]: Reached target swap.target - Swaps.
Aug 13 01:51:59.124218 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 13 01:51:59.124236 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 13 01:51:59.124249 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Aug 13 01:51:59.124260 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 01:51:59.124274 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 01:51:59.124285 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 01:51:59.124296 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 13 01:51:59.124309 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 13 01:51:59.124323 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 13 01:51:59.124338 systemd[1]: Mounting media.mount - External Media Directory...
Aug 13 01:51:59.124350 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:51:59.124361 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 13 01:51:59.124375 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 13 01:51:59.124386 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 13 01:51:59.124402 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 13 01:51:59.124413 systemd[1]: Reached target machines.target - Containers.
Aug 13 01:51:59.124424 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 13 01:51:59.124435 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 01:51:59.124447 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 01:51:59.124458 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 13 01:51:59.124470 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 01:51:59.124482 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 01:51:59.124493 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 01:51:59.124504 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 13 01:51:59.124549 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 01:51:59.124568 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 13 01:51:59.124582 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Aug 13 01:51:59.124594 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Aug 13 01:51:59.124605 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Aug 13 01:51:59.124620 systemd[1]: Stopped systemd-fsck-usr.service.
Aug 13 01:51:59.124633 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 13 01:51:59.124644 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 01:51:59.124655 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 01:51:59.124666 kernel: ACPI: bus type drm_connector registered
Aug 13 01:51:59.124676 kernel: loop: module loaded
Aug 13 01:51:59.124688 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 13 01:51:59.124705 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 13 01:51:59.124720 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Aug 13 01:51:59.124730 kernel: fuse: init (API version 7.41)
Aug 13 01:51:59.124741 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 01:51:59.124752 systemd[1]: verity-setup.service: Deactivated successfully.
Aug 13 01:51:59.124762 systemd[1]: Stopped verity-setup.service.
Aug 13 01:51:59.124774 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:51:59.124791 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 13 01:51:59.124807 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 13 01:51:59.124822 systemd[1]: Mounted media.mount - External Media Directory.
Aug 13 01:51:59.124833 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 13 01:51:59.124843 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 13 01:51:59.124854 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 13 01:51:59.124897 systemd-journald[1189]: Collecting audit messages is disabled.
Aug 13 01:51:59.124922 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 01:51:59.124948 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 13 01:51:59.124960 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 13 01:51:59.124972 systemd-journald[1189]: Journal started
Aug 13 01:51:59.124993 systemd-journald[1189]: Runtime Journal (/run/log/journal/edf433f65cec4171987cce3d7e0049ee) is 8M, max 78.5M, 70.5M free.
Aug 13 01:51:58.589363 systemd[1]: Queued start job for default target multi-user.target.
Aug 13 01:51:58.602563 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Aug 13 01:51:58.603177 systemd[1]: systemd-journald.service: Deactivated successfully.
Aug 13 01:51:59.127549 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 13 01:51:59.130772 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 01:51:59.131299 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 01:51:59.131760 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 01:51:59.132784 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 01:51:59.132998 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 01:51:59.133922 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 01:51:59.134242 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 01:51:59.135267 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 13 01:51:59.135556 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 13 01:51:59.136445 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 01:51:59.136704 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 01:51:59.137962 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 01:51:59.138915 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 13 01:51:59.139982 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 13 01:51:59.140929 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Aug 13 01:51:59.155974 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 13 01:51:59.159616 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 13 01:51:59.163773 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 13 01:51:59.165613 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 13 01:51:59.165647 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 01:51:59.189876 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Aug 13 01:51:59.194648 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 13 01:51:59.195339 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 01:51:59.199284 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 13 01:51:59.202912 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 13 01:51:59.204615 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 01:51:59.205746 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 13 01:51:59.207621 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 01:51:59.210721 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 01:51:59.215683 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug 13 01:51:59.282234 systemd-journald[1189]: Time spent on flushing to /var/log/journal/edf433f65cec4171987cce3d7e0049ee is 94.352ms for 1002 entries.
Aug 13 01:51:59.282234 systemd-journald[1189]: System Journal (/var/log/journal/edf433f65cec4171987cce3d7e0049ee) is 8M, max 195.6M, 187.6M free.
Aug 13 01:51:59.380362 systemd-journald[1189]: Received client request to flush runtime journal.
Aug 13 01:51:59.279306 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 01:51:59.282276 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 13 01:51:59.285799 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 13 01:51:59.336208 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 01:51:59.350685 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 13 01:51:59.351552 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 13 01:51:59.384136 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Aug 13 01:51:59.392896 kernel: loop0: detected capacity change from 0 to 113872
Aug 13 01:51:59.398778 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 13 01:51:59.424549 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 13 01:51:59.466418 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 01:51:59.469183 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Aug 13 01:51:59.478537 kernel: loop1: detected capacity change from 0 to 221472
Aug 13 01:51:59.513337 systemd-tmpfiles[1238]: ACLs are not supported, ignoring.
Aug 13 01:51:59.513817 systemd-tmpfiles[1238]: ACLs are not supported, ignoring.
Aug 13 01:51:59.520948 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 01:51:59.524713 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 13 01:51:59.548580 kernel: loop2: detected capacity change from 0 to 146240
Aug 13 01:51:59.605237 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 13 01:51:59.609375 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 13 01:51:59.615792 kernel: loop3: detected capacity change from 0 to 8
Aug 13 01:51:59.612688 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 01:51:59.638538 kernel: loop4: detected capacity change from 0 to 113872
Aug 13 01:51:59.744360 kernel: loop5: detected capacity change from 0 to 221472
Aug 13 01:51:59.767406 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
Aug 13 01:51:59.767859 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
Aug 13 01:51:59.776858 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 01:51:59.808579 kernel: loop6: detected capacity change from 0 to 146240 Aug 13 01:51:59.890177 kernel: loop7: detected capacity change from 0 to 8 Aug 13 01:51:59.888949 (sd-merge)[1262]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'. Aug 13 01:51:59.889687 (sd-merge)[1262]: Merged extensions into '/usr'. Aug 13 01:51:59.897031 systemd[1]: Reload requested from client PID 1237 ('systemd-sysext') (unit systemd-sysext.service)... Aug 13 01:51:59.897234 systemd[1]: Reloading... Aug 13 01:52:00.179694 zram_generator::config[1288]: No configuration found. Aug 13 01:52:00.347799 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:52:00.436482 systemd[1]: Reloading finished in 537 ms. Aug 13 01:52:00.496044 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 13 01:52:00.508131 systemd[1]: Starting ensure-sysext.service... Aug 13 01:52:00.554448 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 01:52:00.611221 systemd[1]: Reload requested from client PID 1332 ('systemctl') (unit ensure-sysext.service)... Aug 13 01:52:00.613558 systemd[1]: Reloading... Aug 13 01:52:00.763823 ldconfig[1232]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 01:52:00.764382 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Aug 13 01:52:00.764423 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. 
Aug 13 01:52:00.764823 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 01:52:00.765093 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 13 01:52:00.766064 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 01:52:00.766309 systemd-tmpfiles[1333]: ACLs are not supported, ignoring. Aug 13 01:52:00.766372 systemd-tmpfiles[1333]: ACLs are not supported, ignoring. Aug 13 01:52:00.771654 systemd-tmpfiles[1333]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 01:52:00.771665 systemd-tmpfiles[1333]: Skipping /boot Aug 13 01:52:00.868552 zram_generator::config[1361]: No configuration found. Aug 13 01:52:00.870916 systemd-tmpfiles[1333]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 01:52:00.887633 systemd-tmpfiles[1333]: Skipping /boot Aug 13 01:52:01.011916 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:52:01.103807 systemd[1]: Reloading finished in 489 ms. Aug 13 01:52:01.119025 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 13 01:52:01.120079 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 13 01:52:01.142247 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 01:52:01.153922 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 13 01:52:01.158809 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 13 01:52:01.161238 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Aug 13 01:52:01.167986 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 01:52:01.180645 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 01:52:01.183342 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 13 01:52:01.189670 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:52:01.189902 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 01:52:01.193801 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 01:52:01.197273 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 01:52:01.207909 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 01:52:01.208784 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 01:52:01.208940 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 01:52:01.209049 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:52:01.236013 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 13 01:52:01.243291 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 13 01:52:01.266851 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 13 01:52:01.268365 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Aug 13 01:52:01.268772 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 01:52:01.274381 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 01:52:01.288959 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 13 01:52:01.301848 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:52:01.302266 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 01:52:01.312842 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 01:52:01.314714 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 01:52:01.314859 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 01:52:01.314965 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:52:01.321641 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:52:01.321958 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 01:52:01.326260 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 01:52:01.328082 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Aug 13 01:52:01.328210 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 01:52:01.328353 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:52:01.330132 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 01:52:01.330488 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 01:52:01.336627 systemd[1]: Finished ensure-sysext.service. Aug 13 01:52:01.338961 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 01:52:01.339192 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 01:52:01.340823 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 13 01:52:01.348095 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 01:52:01.352295 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Aug 13 01:52:01.403691 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 13 01:52:01.405984 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 01:52:01.406224 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 01:52:01.407400 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 01:52:01.407703 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 01:52:01.409845 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Aug 13 01:52:01.409896 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 01:52:01.425028 systemd-udevd[1412]: Using default interface naming scheme 'v255'. Aug 13 01:52:01.437366 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 13 01:52:01.447237 augenrules[1454]: No rules Aug 13 01:52:01.452763 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 01:52:01.457813 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 13 01:52:01.471322 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 01:52:01.479736 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 01:52:01.819596 systemd-resolved[1410]: Positive Trust Anchors: Aug 13 01:52:01.819618 systemd-resolved[1410]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 01:52:01.819650 systemd-resolved[1410]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 01:52:01.822589 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 01:52:01.826106 systemd-resolved[1410]: Defaulting to hostname 'linux'. Aug 13 01:52:01.830277 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 01:52:01.831210 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Aug 13 01:52:01.874115 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Aug 13 01:52:01.874723 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Aug 13 01:52:01.876555 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 01:52:01.877229 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 13 01:52:01.877899 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 13 01:52:01.879149 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Aug 13 01:52:01.879732 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 13 01:52:01.880966 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 01:52:01.881002 systemd[1]: Reached target paths.target - Path Units. Aug 13 01:52:01.881611 systemd[1]: Reached target time-set.target - System Time Set. Aug 13 01:52:01.883769 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 13 01:52:01.884461 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 13 01:52:01.885066 systemd[1]: Reached target timers.target - Timer Units. Aug 13 01:52:01.887323 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 13 01:52:01.891060 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 13 01:52:01.895109 systemd-networkd[1465]: lo: Link UP Aug 13 01:52:01.895126 systemd-networkd[1465]: lo: Gained carrier Aug 13 01:52:01.896648 systemd-networkd[1465]: Enumeration completed Aug 13 01:52:01.914686 systemd-timesyncd[1436]: No network connectivity, watching for changes. 
Aug 13 01:52:01.921246 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Aug 13 01:52:01.921265 systemd-networkd[1465]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 01:52:01.921271 systemd-networkd[1465]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 01:52:01.922814 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Aug 13 01:52:01.923990 systemd-networkd[1465]: eth0: Link UP Aug 13 01:52:01.924065 systemd[1]: Reached target ssh-access.target - SSH Access Available. Aug 13 01:52:01.924297 systemd-networkd[1465]: eth0: Gained carrier Aug 13 01:52:01.924313 systemd-networkd[1465]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 01:52:01.930543 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Aug 13 01:52:01.933595 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 13 01:52:01.956032 kernel: ACPI: button: Power Button [PWRF] Aug 13 01:52:01.965579 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Aug 13 01:52:01.967459 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 01:52:01.969094 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 13 01:52:01.971171 systemd[1]: Reached target network.target - Network. Aug 13 01:52:01.972409 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 01:52:01.973265 systemd[1]: Reached target basic.target - Basic System. Aug 13 01:52:01.974159 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 13 01:52:01.974190 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Aug 13 01:52:01.977889 systemd[1]: Starting containerd.service - containerd container runtime... Aug 13 01:52:01.979694 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Aug 13 01:52:01.985155 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 13 01:52:01.990711 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Aug 13 01:52:01.990958 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Aug 13 01:52:01.991316 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 13 01:52:01.998250 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 13 01:52:02.000956 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 13 01:52:02.002594 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 13 01:52:02.003931 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Aug 13 01:52:02.007027 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 13 01:52:02.039394 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 13 01:52:02.054986 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 13 01:52:02.067425 extend-filesystems[1509]: Found /dev/sda6 Aug 13 01:52:02.067425 extend-filesystems[1509]: Found /dev/sda9 Aug 13 01:52:02.088550 jq[1508]: false Aug 13 01:52:02.082902 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 13 01:52:02.099257 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 13 01:52:02.106658 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
Aug 13 01:52:02.113104 extend-filesystems[1509]: Checking size of /dev/sda9 Aug 13 01:52:02.135144 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 13 01:52:02.138051 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 01:52:02.139658 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 13 01:52:02.151476 systemd[1]: Starting update-engine.service - Update Engine... Aug 13 01:52:02.174797 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 13 01:52:02.205208 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 13 01:52:02.206214 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 01:52:02.206467 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 13 01:52:02.235528 oslogin_cache_refresh[1510]: Refreshing passwd entry cache Aug 13 01:52:02.237952 google_oslogin_nss_cache[1510]: oslogin_cache_refresh[1510]: Refreshing passwd entry cache Aug 13 01:52:02.230677 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 01:52:02.238143 jq[1543]: true Aug 13 01:52:02.230974 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 13 01:52:02.231888 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 01:52:02.232120 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 13 01:52:02.248553 google_oslogin_nss_cache[1510]: oslogin_cache_refresh[1510]: Failure getting users, quitting Aug 13 01:52:02.248553 google_oslogin_nss_cache[1510]: oslogin_cache_refresh[1510]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Aug 13 01:52:02.248553 google_oslogin_nss_cache[1510]: oslogin_cache_refresh[1510]: Refreshing group entry cache Aug 13 01:52:02.248553 google_oslogin_nss_cache[1510]: oslogin_cache_refresh[1510]: Failure getting groups, quitting Aug 13 01:52:02.248553 google_oslogin_nss_cache[1510]: oslogin_cache_refresh[1510]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Aug 13 01:52:02.243649 oslogin_cache_refresh[1510]: Failure getting users, quitting Aug 13 01:52:02.243672 oslogin_cache_refresh[1510]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Aug 13 01:52:02.243719 oslogin_cache_refresh[1510]: Refreshing group entry cache Aug 13 01:52:02.244228 oslogin_cache_refresh[1510]: Failure getting groups, quitting Aug 13 01:52:02.244238 oslogin_cache_refresh[1510]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Aug 13 01:52:02.249430 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Aug 13 01:52:02.250992 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Aug 13 01:52:02.301990 coreos-metadata[1505]: Aug 13 01:52:02.301 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Aug 13 01:52:02.329680 extend-filesystems[1509]: Resized partition /dev/sda9 Aug 13 01:52:02.335586 extend-filesystems[1563]: resize2fs 1.47.2 (1-Jan-2025) Aug 13 01:52:02.338069 jq[1548]: true Aug 13 01:52:02.341400 update_engine[1539]: I20250813 01:52:02.341300 1539 main.cc:92] Flatcar Update Engine starting Aug 13 01:52:02.492533 tar[1547]: linux-amd64/helm Aug 13 01:52:02.545540 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 555003 blocks Aug 13 01:52:02.545638 kernel: EXT4-fs (sda9): resized filesystem to 555003 Aug 13 01:52:02.558964 (ntainerd)[1568]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 13 01:52:02.692605 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. 
Aug 13 01:52:02.700474 extend-filesystems[1563]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Aug 13 01:52:02.700474 extend-filesystems[1563]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 13 01:52:02.700474 extend-filesystems[1563]: The filesystem on /dev/sda9 is now 555003 (4k) blocks long. Aug 13 01:52:02.696298 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 13 01:52:02.724013 extend-filesystems[1509]: Resized filesystem in /dev/sda9 Aug 13 01:52:02.697261 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 01:52:02.697554 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 13 01:52:02.765432 dbus-daemon[1506]: [system] SELinux support is enabled Aug 13 01:52:02.765691 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 13 01:52:02.768682 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 01:52:02.768711 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 13 01:52:02.769375 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 01:52:02.769394 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 13 01:52:02.818984 systemd[1]: Started update-engine.service - Update Engine. Aug 13 01:52:02.838806 update_engine[1539]: I20250813 01:52:02.838413 1539 update_check_scheduler.cc:74] Next update check in 9m54s Aug 13 01:52:02.848575 bash[1591]: Updated "/home/core/.ssh/authorized_keys" Aug 13 01:52:02.884890 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Aug 13 01:52:02.886444 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 13 01:52:02.887572 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 13 01:52:03.143538 sshd_keygen[1538]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 01:52:03.201943 systemd[1]: Starting sshkeys.service... Aug 13 01:52:03.307910 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Aug 13 01:52:03.313815 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Aug 13 01:52:03.355354 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 13 01:52:03.361207 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 13 01:52:03.372204 coreos-metadata[1505]: Aug 13 01:52:03.370 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Aug 13 01:52:03.388360 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Aug 13 01:52:03.427831 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 01:52:03.428159 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 13 01:52:03.455909 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 13 01:52:03.523808 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 01:52:03.570232 systemd-networkd[1465]: eth0: Gained IPv6LL Aug 13 01:52:03.570877 systemd-timesyncd[1436]: Network configuration changed, trying to establish connection. Aug 13 01:52:03.743781 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 13 01:52:03.769369 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Aug 13 01:52:03.823045 coreos-metadata[1609]: Aug 13 01:52:03.822 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Aug 13 01:52:03.825605 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 13 01:52:03.826583 systemd[1]: Reached target getty.target - Login Prompts. Aug 13 01:52:03.859050 locksmithd[1592]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 01:52:03.963567 kernel: EDAC MC: Ver: 3.0.0 Aug 13 01:52:04.009487 containerd[1568]: time="2025-08-13T01:52:04Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Aug 13 01:52:04.016711 containerd[1568]: time="2025-08-13T01:52:04.016662359Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Aug 13 01:52:04.030408 systemd-logind[1525]: Watching system buttons on /dev/input/event2 (Power Button) Aug 13 01:52:04.032602 systemd-logind[1525]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 13 01:52:04.077856 containerd[1568]: time="2025-08-13T01:52:04.077777699Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.67µs" Aug 13 01:52:04.079571 containerd[1568]: time="2025-08-13T01:52:04.079541538Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Aug 13 01:52:04.079685 containerd[1568]: time="2025-08-13T01:52:04.079663198Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Aug 13 01:52:04.079962 containerd[1568]: time="2025-08-13T01:52:04.079942618Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Aug 13 01:52:04.080041 containerd[1568]: time="2025-08-13T01:52:04.080023098Z" level=info msg="loading plugin" 
id=io.containerd.content.v1.content type=io.containerd.content.v1 Aug 13 01:52:04.080126 containerd[1568]: time="2025-08-13T01:52:04.080108728Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 13 01:52:04.080312 containerd[1568]: time="2025-08-13T01:52:04.080289238Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 13 01:52:04.080365 containerd[1568]: time="2025-08-13T01:52:04.080351328Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 13 01:52:04.080700 containerd[1568]: time="2025-08-13T01:52:04.080675977Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 13 01:52:04.080756 containerd[1568]: time="2025-08-13T01:52:04.080740917Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 13 01:52:04.080806 containerd[1568]: time="2025-08-13T01:52:04.080790677Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 13 01:52:04.080863 containerd[1568]: time="2025-08-13T01:52:04.080849927Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Aug 13 01:52:04.080996 containerd[1568]: time="2025-08-13T01:52:04.080979907Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Aug 13 01:52:04.081390 containerd[1568]: time="2025-08-13T01:52:04.081369267Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 13 
Aug 13 01:52:04.081621 containerd[1568]: time="2025-08-13T01:52:04.081549867Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Aug 13 01:52:04.081806 systemd-logind[1525]: New seat seat0.
Aug 13 01:52:04.082052 containerd[1568]: time="2025-08-13T01:52:04.082023867Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Aug 13 01:52:04.082489 containerd[1568]: time="2025-08-13T01:52:04.082353377Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Aug 13 01:52:04.083804 containerd[1568]: time="2025-08-13T01:52:04.083784476Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Aug 13 01:52:04.084723 containerd[1568]: time="2025-08-13T01:52:04.084696485Z" level=info msg="metadata content store policy set" policy=shared
Aug 13 01:52:04.087787 containerd[1568]: time="2025-08-13T01:52:04.087765244Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Aug 13 01:52:04.087881 containerd[1568]: time="2025-08-13T01:52:04.087866324Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Aug 13 01:52:04.087981 containerd[1568]: time="2025-08-13T01:52:04.087964804Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Aug 13 01:52:04.088051 containerd[1568]: time="2025-08-13T01:52:04.088038484Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Aug 13 01:52:04.088112 containerd[1568]: time="2025-08-13T01:52:04.088099534Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Aug 13 01:52:04.088159 containerd[1568]: time="2025-08-13T01:52:04.088147964Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Aug 13 01:52:04.088203 containerd[1568]: time="2025-08-13T01:52:04.088192874Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Aug 13 01:52:04.088246 containerd[1568]: time="2025-08-13T01:52:04.088236274Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Aug 13 01:52:04.088288 containerd[1568]: time="2025-08-13T01:52:04.088277724Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Aug 13 01:52:04.088328 containerd[1568]: time="2025-08-13T01:52:04.088318424Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Aug 13 01:52:04.088378 containerd[1568]: time="2025-08-13T01:52:04.088366484Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Aug 13 01:52:04.088424 containerd[1568]: time="2025-08-13T01:52:04.088413704Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Aug 13 01:52:04.088629 containerd[1568]: time="2025-08-13T01:52:04.088611843Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Aug 13 01:52:04.088701 containerd[1568]: time="2025-08-13T01:52:04.088688123Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Aug 13 01:52:04.088756 containerd[1568]: time="2025-08-13T01:52:04.088744473Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Aug 13 01:52:04.088825 containerd[1568]: time="2025-08-13T01:52:04.088812773Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Aug 13 01:52:04.088869 containerd[1568]: time="2025-08-13T01:52:04.088858953Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Aug 13 01:52:04.090540 containerd[1568]: time="2025-08-13T01:52:04.089544763Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Aug 13 01:52:04.090540 containerd[1568]: time="2025-08-13T01:52:04.089575053Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Aug 13 01:52:04.090540 containerd[1568]: time="2025-08-13T01:52:04.089587123Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Aug 13 01:52:04.090540 containerd[1568]: time="2025-08-13T01:52:04.089599793Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Aug 13 01:52:04.090540 containerd[1568]: time="2025-08-13T01:52:04.089610423Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Aug 13 01:52:04.090540 containerd[1568]: time="2025-08-13T01:52:04.089623953Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Aug 13 01:52:04.090540 containerd[1568]: time="2025-08-13T01:52:04.089696953Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Aug 13 01:52:04.090540 containerd[1568]: time="2025-08-13T01:52:04.089710073Z" level=info msg="Start snapshots syncer"
Aug 13 01:52:04.090540 containerd[1568]: time="2025-08-13T01:52:04.089734193Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Aug 13 01:52:04.090758 containerd[1568]: time="2025-08-13T01:52:04.089997793Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Aug 13 01:52:04.090758 containerd[1568]: time="2025-08-13T01:52:04.090045303Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Aug 13 01:52:04.090881 containerd[1568]: time="2025-08-13T01:52:04.090121623Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Aug 13 01:52:04.090881 containerd[1568]: time="2025-08-13T01:52:04.090229863Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Aug 13 01:52:04.090881 containerd[1568]: time="2025-08-13T01:52:04.090260693Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Aug 13 01:52:04.090881 containerd[1568]: time="2025-08-13T01:52:04.090270913Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Aug 13 01:52:04.090881 containerd[1568]: time="2025-08-13T01:52:04.090282813Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Aug 13 01:52:04.090881 containerd[1568]: time="2025-08-13T01:52:04.090294833Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Aug 13 01:52:04.090881 containerd[1568]: time="2025-08-13T01:52:04.090305203Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Aug 13 01:52:04.090881 containerd[1568]: time="2025-08-13T01:52:04.090315533Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Aug 13 01:52:04.090881 containerd[1568]: time="2025-08-13T01:52:04.090344673Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Aug 13 01:52:04.090881 containerd[1568]: time="2025-08-13T01:52:04.090355673Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Aug 13 01:52:04.090881 containerd[1568]: time="2025-08-13T01:52:04.090366823Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Aug 13 01:52:04.090881 containerd[1568]: time="2025-08-13T01:52:04.090396173Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Aug 13 01:52:04.090881 containerd[1568]: time="2025-08-13T01:52:04.090410413Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Aug 13 01:52:04.090881 containerd[1568]: time="2025-08-13T01:52:04.090418493Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Aug 13 01:52:04.091111 containerd[1568]: time="2025-08-13T01:52:04.090427543Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Aug 13 01:52:04.091111 containerd[1568]: time="2025-08-13T01:52:04.090434553Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Aug 13 01:52:04.091111 containerd[1568]: time="2025-08-13T01:52:04.090444423Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Aug 13 01:52:04.091111 containerd[1568]: time="2025-08-13T01:52:04.090455003Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Aug 13 01:52:04.091111 containerd[1568]: time="2025-08-13T01:52:04.090472073Z" level=info msg="runtime interface created"
Aug 13 01:52:04.091111 containerd[1568]: time="2025-08-13T01:52:04.090477373Z" level=info msg="created NRI interface"
Aug 13 01:52:04.091111 containerd[1568]: time="2025-08-13T01:52:04.090487473Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Aug 13 01:52:04.091111 containerd[1568]: time="2025-08-13T01:52:04.090498013Z" level=info msg="Connect containerd service"
Aug 13 01:52:04.091346 containerd[1568]: time="2025-08-13T01:52:04.091328792Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Aug 13 01:52:04.092747 containerd[1568]: time="2025-08-13T01:52:04.092725641Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 13 01:52:04.140238 systemd[1]: Started systemd-logind.service - User Login Management.
Aug 13 01:52:04.145580 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 01:52:04.344865 systemd-networkd[1465]: eth0: DHCPv4 address 172.236.112.106/24, gateway 172.236.112.1 acquired from 23.40.196.251
Aug 13 01:52:04.345064 dbus-daemon[1506]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.3' (uid=244 pid=1465 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Aug 13 01:52:04.345708 systemd-timesyncd[1436]: Network configuration changed, trying to establish connection.
Aug 13 01:52:04.347055 systemd-timesyncd[1436]: Network configuration changed, trying to establish connection.
Aug 13 01:52:04.347943 systemd-timesyncd[1436]: Network configuration changed, trying to establish connection.
Aug 13 01:52:04.353434 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Aug 13 01:52:04.355583 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Aug 13 01:52:04.360849 systemd[1]: Reached target network-online.target - Network is Online.
Aug 13 01:52:04.365722 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 01:52:04.386985 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Aug 13 01:52:04.584932 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Aug 13 01:52:04.611049 systemd[1]: Started sshd@0-172.236.112.106:22-147.75.109.163:34900.service - OpenSSH per-connection server daemon (147.75.109.163:34900).
Aug 13 01:52:04.663829 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Aug 13 01:52:04.879579 coreos-metadata[1609]: Aug 13 01:52:04.879 INFO Putting http://169.254.169.254/v1/token: Attempt #2
Aug 13 01:52:04.941538 tar[1547]: linux-amd64/LICENSE
Aug 13 01:52:04.941538 tar[1547]: linux-amd64/README.md
Aug 13 01:52:04.992546 containerd[1568]: time="2025-08-13T01:52:04.991955502Z" level=info msg="Start subscribing containerd event"
Aug 13 01:52:04.992546 containerd[1568]: time="2025-08-13T01:52:04.992016122Z" level=info msg="Start recovering state"
Aug 13 01:52:04.992546 containerd[1568]: time="2025-08-13T01:52:04.992141132Z" level=info msg="Start event monitor"
Aug 13 01:52:04.992546 containerd[1568]: time="2025-08-13T01:52:04.992168102Z" level=info msg="Start cni network conf syncer for default"
Aug 13 01:52:04.992546 containerd[1568]: time="2025-08-13T01:52:04.992181342Z" level=info msg="Start streaming server"
Aug 13 01:52:04.992546 containerd[1568]: time="2025-08-13T01:52:04.992197422Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Aug 13 01:52:04.992546 containerd[1568]: time="2025-08-13T01:52:04.992209662Z" level=info msg="runtime interface starting up..."
Aug 13 01:52:04.992546 containerd[1568]: time="2025-08-13T01:52:04.992220332Z" level=info msg="starting plugins..."
Aug 13 01:52:04.992546 containerd[1568]: time="2025-08-13T01:52:04.992241712Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Aug 13 01:52:04.992546 containerd[1568]: time="2025-08-13T01:52:04.992475701Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Aug 13 01:52:04.993700 containerd[1568]: time="2025-08-13T01:52:04.993045811Z" level=info msg=serving... address=/run/containerd/containerd.sock
Aug 13 01:52:04.993700 containerd[1568]: time="2025-08-13T01:52:04.993629601Z" level=info msg="containerd successfully booted in 0.984697s"
Aug 13 01:52:04.993450 systemd[1]: Started containerd.service - containerd container runtime.
Aug 13 01:52:04.996660 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Aug 13 01:52:05.015875 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Aug 13 01:52:05.018129 dbus-daemon[1506]: [system] Successfully activated service 'org.freedesktop.hostname1'
Aug 13 01:52:05.018562 dbus-daemon[1506]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1646 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Aug 13 01:52:05.025689 systemd[1]: Starting polkit.service - Authorization Manager...
Aug 13 01:52:05.030845 coreos-metadata[1609]: Aug 13 01:52:05.030 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1
Aug 13 01:52:05.204251 coreos-metadata[1609]: Aug 13 01:52:05.204 INFO Fetch successful
Aug 13 01:52:05.210096 polkitd[1673]: Started polkitd version 126
Aug 13 01:52:05.215711 polkitd[1673]: Loading rules from directory /etc/polkit-1/rules.d
Aug 13 01:52:05.216339 polkitd[1673]: Loading rules from directory /run/polkit-1/rules.d
Aug 13 01:52:05.216382 polkitd[1673]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Aug 13 01:52:05.216712 polkitd[1673]: Loading rules from directory /usr/local/share/polkit-1/rules.d
Aug 13 01:52:05.216773 polkitd[1673]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Aug 13 01:52:05.216860 polkitd[1673]: Loading rules from directory /usr/share/polkit-1/rules.d
Aug 13 01:52:05.217481 polkitd[1673]: Finished loading, compiling and executing 2 rules
Aug 13 01:52:05.217767 systemd[1]: Started polkit.service - Authorization Manager.
Aug 13 01:52:05.219785 dbus-daemon[1506]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Aug 13 01:52:05.220099 polkitd[1673]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Aug 13 01:52:05.228299 sshd[1658]: Accepted publickey for core from 147.75.109.163 port 34900 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao
Aug 13 01:52:05.234208 sshd-session[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:52:05.239683 systemd-resolved[1410]: System hostname changed to '172-236-112-106'.
Aug 13 01:52:05.240305 systemd-hostnamed[1646]: Hostname set to <172-236-112-106> (transient)
Aug 13 01:52:05.248977 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Aug 13 01:52:05.251557 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Aug 13 01:52:05.254745 update-ssh-keys[1682]: Updated "/home/core/.ssh/authorized_keys"
Aug 13 01:52:05.284920 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Aug 13 01:52:05.297660 systemd[1]: Finished sshkeys.service.
Aug 13 01:52:05.323388 systemd-logind[1525]: New session 1 of user core.
Aug 13 01:52:05.415076 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Aug 13 01:52:05.421724 systemd[1]: Starting user@500.service - User Manager for UID 500...
Aug 13 01:52:05.431276 coreos-metadata[1505]: Aug 13 01:52:05.430 INFO Putting http://169.254.169.254/v1/token: Attempt #3
Aug 13 01:52:05.442392 (systemd)[1689]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Aug 13 01:52:05.446797 systemd-logind[1525]: New session c1 of user core.
Aug 13 01:52:05.554267 coreos-metadata[1505]: Aug 13 01:52:05.553 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1
Aug 13 01:52:05.667581 systemd[1689]: Queued start job for default target default.target.
Aug 13 01:52:05.686420 systemd[1689]: Created slice app.slice - User Application Slice.
Aug 13 01:52:05.686677 systemd[1689]: Reached target paths.target - Paths.
Aug 13 01:52:05.686851 systemd[1689]: Reached target timers.target - Timers.
Aug 13 01:52:05.689505 systemd[1689]: Starting dbus.socket - D-Bus User Message Bus Socket...
Aug 13 01:52:05.742797 systemd[1689]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Aug 13 01:52:05.742984 systemd[1689]: Reached target sockets.target - Sockets.
Aug 13 01:52:05.743187 systemd[1689]: Reached target basic.target - Basic System.
Aug 13 01:52:05.743273 systemd[1]: Started user@500.service - User Manager for UID 500.
Aug 13 01:52:05.743982 systemd[1689]: Reached target default.target - Main User Target.
Aug 13 01:52:05.744228 systemd[1689]: Startup finished in 287ms.
Aug 13 01:52:05.767419 systemd[1]: Started session-1.scope - Session 1 of User core.
Aug 13 01:52:05.773416 coreos-metadata[1505]: Aug 13 01:52:05.772 INFO Fetch successful
Aug 13 01:52:05.773416 coreos-metadata[1505]: Aug 13 01:52:05.773 INFO Fetching http://169.254.169.254/v1/network: Attempt #1
Aug 13 01:52:06.066592 systemd[1]: Started sshd@1-172.236.112.106:22-147.75.109.163:34912.service - OpenSSH per-connection server daemon (147.75.109.163:34912).
Aug 13 01:52:06.075636 coreos-metadata[1505]: Aug 13 01:52:06.075 INFO Fetch successful
Aug 13 01:52:06.376429 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Aug 13 01:52:06.378490 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Aug 13 01:52:06.379618 systemd-timesyncd[1436]: Network configuration changed, trying to establish connection.
Aug 13 01:52:06.477090 sshd[1701]: Accepted publickey for core from 147.75.109.163 port 34912 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao
Aug 13 01:52:06.505146 sshd-session[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:52:06.516942 systemd-logind[1525]: New session 2 of user core.
Aug 13 01:52:06.543987 systemd[1]: Started session-2.scope - Session 2 of User core.
Aug 13 01:52:06.840692 sshd[1722]: Connection closed by 147.75.109.163 port 34912
Aug 13 01:52:06.842765 sshd-session[1701]: pam_unix(sshd:session): session closed for user core
Aug 13 01:52:06.850018 systemd-logind[1525]: Session 2 logged out. Waiting for processes to exit.
Aug 13 01:52:06.854365 systemd[1]: sshd@1-172.236.112.106:22-147.75.109.163:34912.service: Deactivated successfully.
Aug 13 01:52:06.859536 systemd[1]: session-2.scope: Deactivated successfully.
Aug 13 01:52:06.864987 systemd-logind[1525]: Removed session 2.
Aug 13 01:52:06.909889 systemd[1]: Started sshd@2-172.236.112.106:22-147.75.109.163:34918.service - OpenSSH per-connection server daemon (147.75.109.163:34918).
Aug 13 01:52:07.415260 sshd[1728]: Accepted publickey for core from 147.75.109.163 port 34918 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao
Aug 13 01:52:07.414915 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:52:07.426920 systemd-logind[1525]: New session 3 of user core.
Aug 13 01:52:07.431871 systemd[1]: Started session-3.scope - Session 3 of User core.
Aug 13 01:52:07.749218 sshd[1730]: Connection closed by 147.75.109.163 port 34918
Aug 13 01:52:07.747885 sshd-session[1728]: pam_unix(sshd:session): session closed for user core
Aug 13 01:52:07.758491 systemd[1]: sshd@2-172.236.112.106:22-147.75.109.163:34918.service: Deactivated successfully.
Aug 13 01:52:07.763217 systemd[1]: session-3.scope: Deactivated successfully.
Aug 13 01:52:07.767608 systemd-logind[1525]: Session 3 logged out. Waiting for processes to exit.
Aug 13 01:52:07.771754 systemd-logind[1525]: Removed session 3.
Aug 13 01:52:08.326731 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 01:52:08.329252 systemd[1]: Reached target multi-user.target - Multi-User System.
Aug 13 01:52:08.340650 systemd[1]: Startup finished in 4.698s (kernel) + 10.160s (initrd) + 10.511s (userspace) = 25.371s.
Aug 13 01:52:08.402590 (kubelet)[1740]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 01:52:10.280075 kubelet[1740]: E0813 01:52:10.279901 1740 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 01:52:10.286359 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 01:52:10.286698 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 01:52:10.287722 systemd[1]: kubelet.service: Consumed 3.098s CPU time, 265.5M memory peak.
Aug 13 01:52:17.798243 systemd[1]: Started sshd@3-172.236.112.106:22-147.75.109.163:43082.service - OpenSSH per-connection server daemon (147.75.109.163:43082).
Aug 13 01:52:18.151954 sshd[1752]: Accepted publickey for core from 147.75.109.163 port 43082 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao
Aug 13 01:52:18.153958 sshd-session[1752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:52:18.159211 systemd-logind[1525]: New session 4 of user core.
Aug 13 01:52:18.166737 systemd[1]: Started session-4.scope - Session 4 of User core.
Aug 13 01:52:18.398197 sshd[1754]: Connection closed by 147.75.109.163 port 43082
Aug 13 01:52:18.399048 sshd-session[1752]: pam_unix(sshd:session): session closed for user core
Aug 13 01:52:18.404145 systemd-logind[1525]: Session 4 logged out. Waiting for processes to exit.
Aug 13 01:52:18.404388 systemd[1]: sshd@3-172.236.112.106:22-147.75.109.163:43082.service: Deactivated successfully.
Aug 13 01:52:18.406868 systemd[1]: session-4.scope: Deactivated successfully.
Aug 13 01:52:18.408479 systemd-logind[1525]: Removed session 4.
Aug 13 01:52:18.461730 systemd[1]: Started sshd@4-172.236.112.106:22-147.75.109.163:43084.service - OpenSSH per-connection server daemon (147.75.109.163:43084).
Aug 13 01:52:18.801245 sshd[1760]: Accepted publickey for core from 147.75.109.163 port 43084 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao
Aug 13 01:52:18.803119 sshd-session[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:52:18.809495 systemd-logind[1525]: New session 5 of user core.
Aug 13 01:52:18.825796 systemd[1]: Started session-5.scope - Session 5 of User core.
Aug 13 01:52:19.038777 sshd[1762]: Connection closed by 147.75.109.163 port 43084
Aug 13 01:52:19.039672 sshd-session[1760]: pam_unix(sshd:session): session closed for user core
Aug 13 01:52:19.044969 systemd-logind[1525]: Session 5 logged out. Waiting for processes to exit.
Aug 13 01:52:19.045219 systemd[1]: sshd@4-172.236.112.106:22-147.75.109.163:43084.service: Deactivated successfully.
Aug 13 01:52:19.047328 systemd[1]: session-5.scope: Deactivated successfully.
Aug 13 01:52:19.049112 systemd-logind[1525]: Removed session 5.
Aug 13 01:52:19.115565 systemd[1]: Started sshd@5-172.236.112.106:22-147.75.109.163:43088.service - OpenSSH per-connection server daemon (147.75.109.163:43088).
Aug 13 01:52:19.476627 sshd[1768]: Accepted publickey for core from 147.75.109.163 port 43088 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao
Aug 13 01:52:19.478767 sshd-session[1768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:52:19.484354 systemd-logind[1525]: New session 6 of user core.
Aug 13 01:52:19.495678 systemd[1]: Started session-6.scope - Session 6 of User core.
Aug 13 01:52:19.748845 sshd[1770]: Connection closed by 147.75.109.163 port 43088
Aug 13 01:52:19.749625 sshd-session[1768]: pam_unix(sshd:session): session closed for user core
Aug 13 01:52:19.754303 systemd[1]: sshd@5-172.236.112.106:22-147.75.109.163:43088.service: Deactivated successfully.
Aug 13 01:52:19.756902 systemd[1]: session-6.scope: Deactivated successfully.
Aug 13 01:52:19.757825 systemd-logind[1525]: Session 6 logged out. Waiting for processes to exit.
Aug 13 01:52:19.759283 systemd-logind[1525]: Removed session 6.
Aug 13 01:52:19.815607 systemd[1]: Started sshd@6-172.236.112.106:22-147.75.109.163:43098.service - OpenSSH per-connection server daemon (147.75.109.163:43098).
Aug 13 01:52:20.173498 sshd[1776]: Accepted publickey for core from 147.75.109.163 port 43098 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao
Aug 13 01:52:20.175500 sshd-session[1776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:52:20.181750 systemd-logind[1525]: New session 7 of user core.
Aug 13 01:52:20.194837 systemd[1]: Started session-7.scope - Session 7 of User core.
Aug 13 01:52:20.388045 sudo[1779]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Aug 13 01:52:20.388371 sudo[1779]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 01:52:20.389572 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Aug 13 01:52:20.392668 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 01:52:20.408694 sudo[1779]: pam_unix(sudo:session): session closed for user root
Aug 13 01:52:20.462546 sshd[1778]: Connection closed by 147.75.109.163 port 43098
Aug 13 01:52:20.462074 sshd-session[1776]: pam_unix(sshd:session): session closed for user core
Aug 13 01:52:20.469575 systemd[1]: sshd@6-172.236.112.106:22-147.75.109.163:43098.service: Deactivated successfully.
Aug 13 01:52:20.472262 systemd[1]: session-7.scope: Deactivated successfully.
Aug 13 01:52:20.475445 systemd-logind[1525]: Session 7 logged out. Waiting for processes to exit.
Aug 13 01:52:20.478733 systemd-logind[1525]: Removed session 7.
Aug 13 01:52:20.524791 systemd[1]: Started sshd@7-172.236.112.106:22-147.75.109.163:43100.service - OpenSSH per-connection server daemon (147.75.109.163:43100).
Aug 13 01:52:20.610201 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 01:52:20.620942 (kubelet)[1795]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 01:52:20.736298 kubelet[1795]: E0813 01:52:20.736080 1795 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 01:52:20.743207 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 01:52:20.743442 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 01:52:20.744131 systemd[1]: kubelet.service: Consumed 291ms CPU time, 109.6M memory peak.
Aug 13 01:52:20.871972 sshd[1788]: Accepted publickey for core from 147.75.109.163 port 43100 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao
Aug 13 01:52:20.874809 sshd-session[1788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:52:20.884586 systemd-logind[1525]: New session 8 of user core.
Aug 13 01:52:20.891973 systemd[1]: Started session-8.scope - Session 8 of User core.
Aug 13 01:52:21.069664 sudo[1804]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Aug 13 01:52:21.070096 sudo[1804]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 01:52:21.076622 sudo[1804]: pam_unix(sudo:session): session closed for user root
Aug 13 01:52:21.085185 sudo[1803]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Aug 13 01:52:21.085774 sudo[1803]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 01:52:21.100025 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Aug 13 01:52:21.152052 augenrules[1826]: No rules
Aug 13 01:52:21.154113 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 13 01:52:21.154426 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Aug 13 01:52:21.156312 sudo[1803]: pam_unix(sudo:session): session closed for user root
Aug 13 01:52:21.207189 sshd[1802]: Connection closed by 147.75.109.163 port 43100
Aug 13 01:52:21.208058 sshd-session[1788]: pam_unix(sshd:session): session closed for user core
Aug 13 01:52:21.213566 systemd[1]: sshd@7-172.236.112.106:22-147.75.109.163:43100.service: Deactivated successfully.
Aug 13 01:52:21.216433 systemd[1]: session-8.scope: Deactivated successfully.
Aug 13 01:52:21.219727 systemd-logind[1525]: Session 8 logged out. Waiting for processes to exit.
Aug 13 01:52:21.221262 systemd-logind[1525]: Removed session 8.
Aug 13 01:52:21.268385 systemd[1]: Started sshd@8-172.236.112.106:22-147.75.109.163:43102.service - OpenSSH per-connection server daemon (147.75.109.163:43102).
Aug 13 01:52:21.626399 sshd[1835]: Accepted publickey for core from 147.75.109.163 port 43102 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao
Aug 13 01:52:21.628587 sshd-session[1835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:52:21.635691 systemd-logind[1525]: New session 9 of user core.
Aug 13 01:52:21.641747 systemd[1]: Started session-9.scope - Session 9 of User core.
Aug 13 01:52:21.823446 sudo[1838]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Aug 13 01:52:21.823837 sudo[1838]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 01:52:23.140456 systemd[1]: Starting docker.service - Docker Application Container Engine...
Aug 13 01:52:23.163138 (dockerd)[1855]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Aug 13 01:52:23.942191 dockerd[1855]: time="2025-08-13T01:52:23.941882474Z" level=info msg="Starting up"
Aug 13 01:52:23.943767 dockerd[1855]: time="2025-08-13T01:52:23.943740704Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Aug 13 01:52:24.042144 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1595325056-merged.mount: Deactivated successfully.
Aug 13 01:52:24.051946 systemd[1]: var-lib-docker-metacopy\x2dcheck4271252431-merged.mount: Deactivated successfully.
Aug 13 01:52:24.073406 dockerd[1855]: time="2025-08-13T01:52:24.073350499Z" level=info msg="Loading containers: start."
Aug 13 01:52:24.084550 kernel: Initializing XFRM netlink socket
Aug 13 01:52:24.337396 systemd-timesyncd[1436]: Network configuration changed, trying to establish connection.
Aug 13 01:52:24.387871 systemd-networkd[1465]: docker0: Link UP
Aug 13 01:52:24.392192 dockerd[1855]: time="2025-08-13T01:52:24.392128919Z" level=info msg="Loading containers: done."
Aug 13 01:52:24.424742 dockerd[1855]: time="2025-08-13T01:52:24.424686473Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Aug 13 01:52:24.424922 dockerd[1855]: time="2025-08-13T01:52:24.424786463Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Aug 13 01:52:24.424922 dockerd[1855]: time="2025-08-13T01:52:24.424891953Z" level=info msg="Initializing buildkit"
Aug 13 01:52:24.465016 dockerd[1855]: time="2025-08-13T01:52:24.464945563Z" level=info msg="Completed buildkit initialization"
Aug 13 01:52:24.469725 dockerd[1855]: time="2025-08-13T01:52:24.469692120Z" level=info msg="Daemon has completed initialization"
Aug 13 01:52:24.470005 systemd[1]: Started docker.service - Docker Application Container Engine.
Aug 13 01:52:24.470837 dockerd[1855]: time="2025-08-13T01:52:24.470765740Z" level=info msg="API listen on /run/docker.sock"
Aug 13 01:52:25.675512 systemd-resolved[1410]: Clock change detected. Flushing caches.
Aug 13 01:52:25.676164 systemd-timesyncd[1436]: Contacted time server [2603:c020:0:8369::bad:beef]:123 (2.flatcar.pool.ntp.org).
Aug 13 01:52:25.677235 systemd-timesyncd[1436]: Initial clock synchronization to Wed 2025-08-13 01:52:25.675247 UTC.
Aug 13 01:52:26.503765 containerd[1568]: time="2025-08-13T01:52:26.503701192Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\""
Aug 13 01:52:27.325240 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2415183899.mount: Deactivated successfully.
Aug 13 01:52:29.177825 containerd[1568]: time="2025-08-13T01:52:29.177769025Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:52:29.179141 containerd[1568]: time="2025-08-13T01:52:29.178872174Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.11: active requests=0, bytes read=28077759"
Aug 13 01:52:29.179782 containerd[1568]: time="2025-08-13T01:52:29.179750644Z" level=info msg="ImageCreate event name:\"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:52:29.182375 containerd[1568]: time="2025-08-13T01:52:29.182343893Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:52:29.183676 containerd[1568]: time="2025-08-13T01:52:29.183647072Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.11\" with image id \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\", size \"28074559\" in 2.67990135s"
Aug 13 01:52:29.183746 containerd[1568]: time="2025-08-13T01:52:29.183684442Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\""
Aug 13 01:52:29.184315 containerd[1568]: time="2025-08-13T01:52:29.184288792Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\""
Aug 13 01:52:31.458296 containerd[1568]: time="2025-08-13T01:52:31.458220734Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:52:31.459322 containerd[1568]: time="2025-08-13T01:52:31.459213854Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.11: active requests=0, bytes read=24713245"
Aug 13 01:52:31.459894 containerd[1568]: time="2025-08-13T01:52:31.459865694Z" level=info msg="ImageCreate event name:\"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:52:31.462114 containerd[1568]: time="2025-08-13T01:52:31.462082183Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:52:31.463140 containerd[1568]: time="2025-08-13T01:52:31.463113822Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.11\" with image id \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\", size \"26315079\" in 2.27879783s"
Aug 13 01:52:31.463222 containerd[1568]: time="2025-08-13T01:52:31.463207302Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\""
Aug 13 01:52:31.464041 containerd[1568]: time="2025-08-13T01:52:31.463807792Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\""
Aug 13 01:52:31.832384 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Aug 13 01:52:31.834826 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 01:52:32.083608 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 01:52:32.093898 (kubelet)[2122]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 01:52:32.229520 kubelet[2122]: E0813 01:52:32.229432 2122 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 01:52:32.234313 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 01:52:32.234851 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 01:52:32.235340 systemd[1]: kubelet.service: Consumed 317ms CPU time, 108.4M memory peak. Aug 13 01:52:33.335031 containerd[1568]: time="2025-08-13T01:52:33.334927906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:52:33.336348 containerd[1568]: time="2025-08-13T01:52:33.336297425Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.11: active requests=0, bytes read=18783700" Aug 13 01:52:33.337154 containerd[1568]: time="2025-08-13T01:52:33.337101855Z" level=info msg="ImageCreate event name:\"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:52:33.339110 containerd[1568]: time="2025-08-13T01:52:33.339080794Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:52:33.340521 containerd[1568]: time="2025-08-13T01:52:33.339902663Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.11\" with image id 
\"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\", size \"20385552\" in 1.876069061s" Aug 13 01:52:33.340521 containerd[1568]: time="2025-08-13T01:52:33.339935513Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\"" Aug 13 01:52:33.340887 containerd[1568]: time="2025-08-13T01:52:33.340851563Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\"" Aug 13 01:52:35.002241 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2004805166.mount: Deactivated successfully. Aug 13 01:52:35.858038 containerd[1568]: time="2025-08-13T01:52:35.857964234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:52:35.859038 containerd[1568]: time="2025-08-13T01:52:35.858847214Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.11: active requests=0, bytes read=30383612" Aug 13 01:52:35.859546 containerd[1568]: time="2025-08-13T01:52:35.859512943Z" level=info msg="ImageCreate event name:\"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:52:35.861623 containerd[1568]: time="2025-08-13T01:52:35.861579572Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:52:35.862013 containerd[1568]: time="2025-08-13T01:52:35.861970512Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.11\" with image id \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\", repo 
tag \"registry.k8s.io/kube-proxy:v1.31.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\", size \"30382631\" in 2.521084069s" Aug 13 01:52:35.862013 containerd[1568]: time="2025-08-13T01:52:35.862011452Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\"" Aug 13 01:52:35.862994 containerd[1568]: time="2025-08-13T01:52:35.862923652Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 01:52:36.305811 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Aug 13 01:52:36.599532 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount301793671.mount: Deactivated successfully. Aug 13 01:52:37.966609 containerd[1568]: time="2025-08-13T01:52:37.966537270Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:52:37.967593 containerd[1568]: time="2025-08-13T01:52:37.967558359Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Aug 13 01:52:37.968376 containerd[1568]: time="2025-08-13T01:52:37.968302879Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:52:37.971015 containerd[1568]: time="2025-08-13T01:52:37.970936017Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:52:37.972109 containerd[1568]: time="2025-08-13T01:52:37.971922587Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id 
\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.108931065s" Aug 13 01:52:37.972109 containerd[1568]: time="2025-08-13T01:52:37.971958337Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 01:52:37.972642 containerd[1568]: time="2025-08-13T01:52:37.972603467Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 01:52:38.645750 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2472525656.mount: Deactivated successfully. Aug 13 01:52:38.651302 containerd[1568]: time="2025-08-13T01:52:38.651075997Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 01:52:38.651842 containerd[1568]: time="2025-08-13T01:52:38.651821597Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Aug 13 01:52:38.652576 containerd[1568]: time="2025-08-13T01:52:38.652538226Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 01:52:38.654373 containerd[1568]: time="2025-08-13T01:52:38.654329716Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 01:52:38.654955 containerd[1568]: time="2025-08-13T01:52:38.654925085Z" level=info msg="Pulled 
image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 682.292908ms" Aug 13 01:52:38.655013 containerd[1568]: time="2025-08-13T01:52:38.654958585Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 01:52:38.655686 containerd[1568]: time="2025-08-13T01:52:38.655619935Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Aug 13 01:52:39.403296 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2212479064.mount: Deactivated successfully. Aug 13 01:52:41.572519 containerd[1568]: time="2025-08-13T01:52:41.572367166Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:52:41.573803 containerd[1568]: time="2025-08-13T01:52:41.573773175Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Aug 13 01:52:41.574167 containerd[1568]: time="2025-08-13T01:52:41.574094915Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:52:41.576748 containerd[1568]: time="2025-08-13T01:52:41.576712024Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:52:41.578566 containerd[1568]: time="2025-08-13T01:52:41.578535023Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag 
\"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.922890868s" Aug 13 01:52:41.578645 containerd[1568]: time="2025-08-13T01:52:41.578572393Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Aug 13 01:52:42.302768 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Aug 13 01:52:42.307546 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:52:42.498970 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:52:42.507806 (kubelet)[2282]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 01:52:42.664388 kubelet[2282]: E0813 01:52:42.664304 2282 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 01:52:42.669530 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 01:52:42.669730 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 01:52:42.670241 systemd[1]: kubelet.service: Consumed 318ms CPU time, 110.8M memory peak. Aug 13 01:52:44.442741 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:52:44.442946 systemd[1]: kubelet.service: Consumed 318ms CPU time, 110.8M memory peak. Aug 13 01:52:44.445464 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:52:44.501598 systemd[1]: Reload requested from client PID 2296 ('systemctl') (unit session-9.scope)... 
Aug 13 01:52:44.501617 systemd[1]: Reloading... Aug 13 01:52:44.720527 zram_generator::config[2357]: No configuration found. Aug 13 01:52:44.806042 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:52:44.933332 systemd[1]: Reloading finished in 431 ms. Aug 13 01:52:45.003808 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 13 01:52:45.003937 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 13 01:52:45.004279 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:52:45.004367 systemd[1]: kubelet.service: Consumed 178ms CPU time, 98.5M memory peak. Aug 13 01:52:45.006463 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:52:45.185916 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:52:45.195811 (kubelet)[2394]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 01:52:45.284828 kubelet[2394]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:52:45.284828 kubelet[2394]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 01:52:45.284828 kubelet[2394]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 13 01:52:45.284828 kubelet[2394]: I0813 01:52:45.284785 2394 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 01:52:45.686498 kubelet[2394]: I0813 01:52:45.686102 2394 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 01:52:45.686498 kubelet[2394]: I0813 01:52:45.686140 2394 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 01:52:45.686684 kubelet[2394]: I0813 01:52:45.686532 2394 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 01:52:45.718137 kubelet[2394]: E0813 01:52:45.718099 2394 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.236.112.106:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.236.112.106:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:52:45.719454 kubelet[2394]: I0813 01:52:45.719440 2394 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 01:52:45.725803 kubelet[2394]: I0813 01:52:45.725766 2394 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 13 01:52:45.748110 kubelet[2394]: I0813 01:52:45.748046 2394 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 01:52:45.749620 kubelet[2394]: I0813 01:52:45.749448 2394 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 01:52:45.749853 kubelet[2394]: I0813 01:52:45.749767 2394 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 01:52:45.750218 kubelet[2394]: I0813 01:52:45.749841 2394 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-236-112-106","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerP
olicyOptions":null,"CgroupVersion":2} Aug 13 01:52:45.750397 kubelet[2394]: I0813 01:52:45.750227 2394 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 01:52:45.750397 kubelet[2394]: I0813 01:52:45.750247 2394 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 01:52:45.750576 kubelet[2394]: I0813 01:52:45.750467 2394 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:52:45.754048 kubelet[2394]: I0813 01:52:45.753521 2394 kubelet.go:408] "Attempting to sync node with API server" Aug 13 01:52:45.754048 kubelet[2394]: I0813 01:52:45.753564 2394 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 01:52:45.754048 kubelet[2394]: I0813 01:52:45.753622 2394 kubelet.go:314] "Adding apiserver pod source" Aug 13 01:52:45.754048 kubelet[2394]: I0813 01:52:45.753655 2394 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 01:52:45.762372 kubelet[2394]: W0813 01:52:45.762258 2394 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.236.112.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.236.112.106:6443: connect: connection refused Aug 13 01:52:45.762372 kubelet[2394]: E0813 01:52:45.762349 2394 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.236.112.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.236.112.106:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:52:45.762635 kubelet[2394]: W0813 01:52:45.762319 2394 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.236.112.106:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-236-112-106&limit=500&resourceVersion=0": dial tcp 172.236.112.106:6443: connect: 
connection refused Aug 13 01:52:45.762727 kubelet[2394]: E0813 01:52:45.762707 2394 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.236.112.106:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-236-112-106&limit=500&resourceVersion=0\": dial tcp 172.236.112.106:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:52:45.762916 kubelet[2394]: I0813 01:52:45.762882 2394 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Aug 13 01:52:45.763375 kubelet[2394]: I0813 01:52:45.763347 2394 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 01:52:45.764072 kubelet[2394]: W0813 01:52:45.764041 2394 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 01:52:45.766053 kubelet[2394]: I0813 01:52:45.765977 2394 server.go:1274] "Started kubelet" Aug 13 01:52:45.768311 kubelet[2394]: I0813 01:52:45.768280 2394 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 01:52:45.771742 kubelet[2394]: I0813 01:52:45.771683 2394 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 01:52:45.773741 kubelet[2394]: I0813 01:52:45.772833 2394 server.go:449] "Adding debug handlers to kubelet server" Aug 13 01:52:45.778383 kubelet[2394]: E0813 01:52:45.777287 2394 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.236.112.106:6443/api/v1/namespaces/default/events\": dial tcp 172.236.112.106:6443: connect: connection refused" event="&Event{ObjectMeta:{172-236-112-106.185b30a52c8a64c7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-236-112-106,UID:172-236-112-106,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:172-236-112-106,},FirstTimestamp:2025-08-13 01:52:45.765944519 +0000 UTC m=+0.563319329,LastTimestamp:2025-08-13 01:52:45.765944519 +0000 UTC m=+0.563319329,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-236-112-106,}" Aug 13 01:52:45.782076 kubelet[2394]: I0813 01:52:45.782011 2394 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 01:52:45.784753 kubelet[2394]: I0813 01:52:45.784733 2394 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 01:52:45.809882 kubelet[2394]: I0813 01:52:45.809827 2394 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 01:52:45.810060 kubelet[2394]: I0813 01:52:45.806371 2394 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 01:52:45.810060 kubelet[2394]: I0813 01:52:45.785060 2394 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 01:52:45.810153 kubelet[2394]: I0813 01:52:45.810110 2394 reconciler.go:26] "Reconciler: start to sync state" Aug 13 01:52:45.810195 kubelet[2394]: E0813 01:52:45.806409 2394 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-236-112-106\" not found" Aug 13 01:52:45.811701 kubelet[2394]: E0813 01:52:45.811633 2394 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.236.112.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-236-112-106?timeout=10s\": dial tcp 172.236.112.106:6443: connect: connection refused" interval="200ms" Aug 13 01:52:45.812013 kubelet[2394]: I0813 01:52:45.811976 2394 factory.go:219] Registration of the crio container factory failed: Get 
"http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 01:52:45.814207 kubelet[2394]: E0813 01:52:45.814174 2394 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 01:52:45.814760 kubelet[2394]: I0813 01:52:45.814730 2394 factory.go:221] Registration of the containerd container factory successfully Aug 13 01:52:45.814760 kubelet[2394]: I0813 01:52:45.814751 2394 factory.go:221] Registration of the systemd container factory successfully Aug 13 01:52:45.825417 kubelet[2394]: W0813 01:52:45.825338 2394 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.236.112.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.236.112.106:6443: connect: connection refused Aug 13 01:52:45.825835 kubelet[2394]: E0813 01:52:45.825569 2394 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.236.112.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.236.112.106:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:52:45.829386 kubelet[2394]: I0813 01:52:45.829338 2394 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 01:52:45.831528 kubelet[2394]: I0813 01:52:45.831362 2394 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 01:52:45.831528 kubelet[2394]: I0813 01:52:45.831389 2394 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 01:52:45.831528 kubelet[2394]: I0813 01:52:45.831414 2394 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 01:52:45.831721 kubelet[2394]: E0813 01:52:45.831536 2394 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 01:52:45.840992 kubelet[2394]: W0813 01:52:45.840832 2394 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.236.112.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.236.112.106:6443: connect: connection refused Aug 13 01:52:45.840992 kubelet[2394]: E0813 01:52:45.840893 2394 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.236.112.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.236.112.106:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:52:45.854226 kubelet[2394]: I0813 01:52:45.854182 2394 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 01:52:45.854226 kubelet[2394]: I0813 01:52:45.854212 2394 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 01:52:45.854339 kubelet[2394]: I0813 01:52:45.854236 2394 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:52:45.856135 kubelet[2394]: I0813 01:52:45.856095 2394 policy_none.go:49] "None policy: Start" Aug 13 01:52:45.857306 kubelet[2394]: I0813 01:52:45.856951 2394 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 01:52:45.857306 kubelet[2394]: I0813 01:52:45.857010 2394 state_mem.go:35] "Initializing new in-memory state store" Aug 13 01:52:45.864172 systemd[1]: Created slice kubepods.slice - 
libcontainer container kubepods.slice. Aug 13 01:52:45.890316 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 13 01:52:45.894382 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 13 01:52:45.905499 kubelet[2394]: I0813 01:52:45.905452 2394 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 01:52:45.905954 kubelet[2394]: I0813 01:52:45.905934 2394 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 01:52:45.905995 kubelet[2394]: I0813 01:52:45.905953 2394 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 01:52:45.906611 kubelet[2394]: I0813 01:52:45.906386 2394 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 01:52:45.909297 kubelet[2394]: E0813 01:52:45.909275 2394 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-236-112-106\" not found" Aug 13 01:52:45.942990 systemd[1]: Created slice kubepods-burstable-pod17e9581650b027b7404c8df4c295a813.slice - libcontainer container kubepods-burstable-pod17e9581650b027b7404c8df4c295a813.slice. Aug 13 01:52:45.954543 systemd[1]: Created slice kubepods-burstable-pod7612615acb7d2783c3487ab09dc4fda0.slice - libcontainer container kubepods-burstable-pod7612615acb7d2783c3487ab09dc4fda0.slice. Aug 13 01:52:45.976121 systemd[1]: Created slice kubepods-burstable-pod32f9006e59298087db75d8ffe2720ed9.slice - libcontainer container kubepods-burstable-pod32f9006e59298087db75d8ffe2720ed9.slice. 
Aug 13 01:52:46.009447 kubelet[2394]: I0813 01:52:46.009366 2394 kubelet_node_status.go:72] "Attempting to register node" node="172-236-112-106" Aug 13 01:52:46.010174 kubelet[2394]: E0813 01:52:46.010109 2394 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.236.112.106:6443/api/v1/nodes\": dial tcp 172.236.112.106:6443: connect: connection refused" node="172-236-112-106" Aug 13 01:52:46.010614 kubelet[2394]: I0813 01:52:46.010588 2394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/17e9581650b027b7404c8df4c295a813-kubeconfig\") pod \"kube-scheduler-172-236-112-106\" (UID: \"17e9581650b027b7404c8df4c295a813\") " pod="kube-system/kube-scheduler-172-236-112-106" Aug 13 01:52:46.010673 kubelet[2394]: I0813 01:52:46.010619 2394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/32f9006e59298087db75d8ffe2720ed9-ca-certs\") pod \"kube-controller-manager-172-236-112-106\" (UID: \"32f9006e59298087db75d8ffe2720ed9\") " pod="kube-system/kube-controller-manager-172-236-112-106" Aug 13 01:52:46.010673 kubelet[2394]: I0813 01:52:46.010638 2394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/32f9006e59298087db75d8ffe2720ed9-k8s-certs\") pod \"kube-controller-manager-172-236-112-106\" (UID: \"32f9006e59298087db75d8ffe2720ed9\") " pod="kube-system/kube-controller-manager-172-236-112-106" Aug 13 01:52:46.010673 kubelet[2394]: I0813 01:52:46.010654 2394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/32f9006e59298087db75d8ffe2720ed9-usr-share-ca-certificates\") pod \"kube-controller-manager-172-236-112-106\" (UID: 
\"32f9006e59298087db75d8ffe2720ed9\") " pod="kube-system/kube-controller-manager-172-236-112-106" Aug 13 01:52:46.010673 kubelet[2394]: I0813 01:52:46.010672 2394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7612615acb7d2783c3487ab09dc4fda0-ca-certs\") pod \"kube-apiserver-172-236-112-106\" (UID: \"7612615acb7d2783c3487ab09dc4fda0\") " pod="kube-system/kube-apiserver-172-236-112-106" Aug 13 01:52:46.010821 kubelet[2394]: I0813 01:52:46.010685 2394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7612615acb7d2783c3487ab09dc4fda0-k8s-certs\") pod \"kube-apiserver-172-236-112-106\" (UID: \"7612615acb7d2783c3487ab09dc4fda0\") " pod="kube-system/kube-apiserver-172-236-112-106" Aug 13 01:52:46.010821 kubelet[2394]: I0813 01:52:46.010703 2394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7612615acb7d2783c3487ab09dc4fda0-usr-share-ca-certificates\") pod \"kube-apiserver-172-236-112-106\" (UID: \"7612615acb7d2783c3487ab09dc4fda0\") " pod="kube-system/kube-apiserver-172-236-112-106" Aug 13 01:52:46.010821 kubelet[2394]: I0813 01:52:46.010721 2394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/32f9006e59298087db75d8ffe2720ed9-flexvolume-dir\") pod \"kube-controller-manager-172-236-112-106\" (UID: \"32f9006e59298087db75d8ffe2720ed9\") " pod="kube-system/kube-controller-manager-172-236-112-106" Aug 13 01:52:46.010821 kubelet[2394]: I0813 01:52:46.010736 2394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/32f9006e59298087db75d8ffe2720ed9-kubeconfig\") pod 
\"kube-controller-manager-172-236-112-106\" (UID: \"32f9006e59298087db75d8ffe2720ed9\") " pod="kube-system/kube-controller-manager-172-236-112-106" Aug 13 01:52:46.012348 kubelet[2394]: E0813 01:52:46.012297 2394 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.236.112.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-236-112-106?timeout=10s\": dial tcp 172.236.112.106:6443: connect: connection refused" interval="400ms" Aug 13 01:52:46.212566 kubelet[2394]: I0813 01:52:46.212421 2394 kubelet_node_status.go:72] "Attempting to register node" node="172-236-112-106" Aug 13 01:52:46.213204 kubelet[2394]: E0813 01:52:46.213174 2394 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.236.112.106:6443/api/v1/nodes\": dial tcp 172.236.112.106:6443: connect: connection refused" node="172-236-112-106" Aug 13 01:52:46.253185 kubelet[2394]: E0813 01:52:46.253076 2394 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 01:52:46.254290 containerd[1568]: time="2025-08-13T01:52:46.254217295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-236-112-106,Uid:17e9581650b027b7404c8df4c295a813,Namespace:kube-system,Attempt:0,}" Aug 13 01:52:46.280505 kubelet[2394]: E0813 01:52:46.273193 2394 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 01:52:46.280505 kubelet[2394]: E0813 01:52:46.279169 2394 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 01:52:46.280763 containerd[1568]: time="2025-08-13T01:52:46.273922935Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-236-112-106,Uid:7612615acb7d2783c3487ab09dc4fda0,Namespace:kube-system,Attempt:0,}" Aug 13 01:52:46.291064 containerd[1568]: time="2025-08-13T01:52:46.291003116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-236-112-106,Uid:32f9006e59298087db75d8ffe2720ed9,Namespace:kube-system,Attempt:0,}" Aug 13 01:52:46.377971 containerd[1568]: time="2025-08-13T01:52:46.377879973Z" level=info msg="connecting to shim 676a96e7d73defdf162f86b676d1942877804ed72740f7aff9a9ade9a4f06173" address="unix:///run/containerd/s/89eb94d2360279177dbeff9605ede092ba964dbeae8c6d59e0fca81d9121000c" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:52:46.385296 containerd[1568]: time="2025-08-13T01:52:46.385172339Z" level=info msg="connecting to shim 296604d5a181c8b292004d1c231754f63d04fb0c72ea2234ea5750a0b06dccb0" address="unix:///run/containerd/s/c592a812da482ddef7549b1b2156a7a4ebe8cf248075d389662195aad823a789" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:52:46.398653 containerd[1568]: time="2025-08-13T01:52:46.398594393Z" level=info msg="connecting to shim 79f2c3213699071df833a63da562afff9999cd840ad081c950e2b1bc5a924454" address="unix:///run/containerd/s/afbee406252319d1f4c3acc116897fea5ff2eaef8df9a421c5413c34371869c7" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:52:46.423859 kubelet[2394]: E0813 01:52:46.423457 2394 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.236.112.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-236-112-106?timeout=10s\": dial tcp 172.236.112.106:6443: connect: connection refused" interval="800ms" Aug 13 01:52:46.466754 systemd[1]: Started cri-containerd-676a96e7d73defdf162f86b676d1942877804ed72740f7aff9a9ade9a4f06173.scope - libcontainer container 676a96e7d73defdf162f86b676d1942877804ed72740f7aff9a9ade9a4f06173. 
Aug 13 01:52:46.516036 systemd[1]: Started cri-containerd-296604d5a181c8b292004d1c231754f63d04fb0c72ea2234ea5750a0b06dccb0.scope - libcontainer container 296604d5a181c8b292004d1c231754f63d04fb0c72ea2234ea5750a0b06dccb0. Aug 13 01:52:46.548114 systemd[1]: Started cri-containerd-79f2c3213699071df833a63da562afff9999cd840ad081c950e2b1bc5a924454.scope - libcontainer container 79f2c3213699071df833a63da562afff9999cd840ad081c950e2b1bc5a924454. Aug 13 01:52:46.625792 kubelet[2394]: I0813 01:52:46.621814 2394 kubelet_node_status.go:72] "Attempting to register node" node="172-236-112-106" Aug 13 01:52:46.625792 kubelet[2394]: E0813 01:52:46.622569 2394 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.236.112.106:6443/api/v1/nodes\": dial tcp 172.236.112.106:6443: connect: connection refused" node="172-236-112-106" Aug 13 01:52:46.674221 containerd[1568]: time="2025-08-13T01:52:46.674148595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-236-112-106,Uid:7612615acb7d2783c3487ab09dc4fda0,Namespace:kube-system,Attempt:0,} returns sandbox id \"676a96e7d73defdf162f86b676d1942877804ed72740f7aff9a9ade9a4f06173\"" Aug 13 01:52:46.678531 kubelet[2394]: E0813 01:52:46.677835 2394 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 01:52:46.683741 containerd[1568]: time="2025-08-13T01:52:46.683220240Z" level=info msg="CreateContainer within sandbox \"676a96e7d73defdf162f86b676d1942877804ed72740f7aff9a9ade9a4f06173\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 01:52:46.722754 containerd[1568]: time="2025-08-13T01:52:46.722608750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-236-112-106,Uid:32f9006e59298087db75d8ffe2720ed9,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"79f2c3213699071df833a63da562afff9999cd840ad081c950e2b1bc5a924454\"" Aug 13 01:52:46.725981 kubelet[2394]: E0813 01:52:46.725936 2394 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 01:52:46.728498 containerd[1568]: time="2025-08-13T01:52:46.728015288Z" level=info msg="Container 3644d2bd85041de4a22453e4d5e84b3ea267f5ca4806a653b0ae2d9498be5975: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:52:46.729864 containerd[1568]: time="2025-08-13T01:52:46.729838397Z" level=info msg="CreateContainer within sandbox \"79f2c3213699071df833a63da562afff9999cd840ad081c950e2b1bc5a924454\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 01:52:46.738635 containerd[1568]: time="2025-08-13T01:52:46.738599152Z" level=info msg="CreateContainer within sandbox \"676a96e7d73defdf162f86b676d1942877804ed72740f7aff9a9ade9a4f06173\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3644d2bd85041de4a22453e4d5e84b3ea267f5ca4806a653b0ae2d9498be5975\"" Aug 13 01:52:46.740606 containerd[1568]: time="2025-08-13T01:52:46.740576201Z" level=info msg="Container cfad0dc4f62f0d5fdbd90fd35c9895bec121428b6417bd1463d21327ca36b18b: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:52:46.741899 containerd[1568]: time="2025-08-13T01:52:46.741260781Z" level=info msg="StartContainer for \"3644d2bd85041de4a22453e4d5e84b3ea267f5ca4806a653b0ae2d9498be5975\"" Aug 13 01:52:46.749526 containerd[1568]: time="2025-08-13T01:52:46.749467227Z" level=info msg="connecting to shim 3644d2bd85041de4a22453e4d5e84b3ea267f5ca4806a653b0ae2d9498be5975" address="unix:///run/containerd/s/89eb94d2360279177dbeff9605ede092ba964dbeae8c6d59e0fca81d9121000c" protocol=ttrpc version=3 Aug 13 01:52:46.754555 containerd[1568]: time="2025-08-13T01:52:46.754524504Z" level=info msg="CreateContainer within sandbox 
\"79f2c3213699071df833a63da562afff9999cd840ad081c950e2b1bc5a924454\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"cfad0dc4f62f0d5fdbd90fd35c9895bec121428b6417bd1463d21327ca36b18b\"" Aug 13 01:52:46.755639 containerd[1568]: time="2025-08-13T01:52:46.755606444Z" level=info msg="StartContainer for \"cfad0dc4f62f0d5fdbd90fd35c9895bec121428b6417bd1463d21327ca36b18b\"" Aug 13 01:52:46.757341 containerd[1568]: time="2025-08-13T01:52:46.757279253Z" level=info msg="connecting to shim cfad0dc4f62f0d5fdbd90fd35c9895bec121428b6417bd1463d21327ca36b18b" address="unix:///run/containerd/s/afbee406252319d1f4c3acc116897fea5ff2eaef8df9a421c5413c34371869c7" protocol=ttrpc version=3 Aug 13 01:52:46.762937 kubelet[2394]: W0813 01:52:46.762834 2394 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.236.112.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.236.112.106:6443: connect: connection refused Aug 13 01:52:46.762937 kubelet[2394]: E0813 01:52:46.762914 2394 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.236.112.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.236.112.106:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:52:46.765561 containerd[1568]: time="2025-08-13T01:52:46.765415149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-236-112-106,Uid:17e9581650b027b7404c8df4c295a813,Namespace:kube-system,Attempt:0,} returns sandbox id \"296604d5a181c8b292004d1c231754f63d04fb0c72ea2234ea5750a0b06dccb0\"" Aug 13 01:52:46.766808 kubelet[2394]: E0813 01:52:46.766577 2394 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 01:52:46.769334 containerd[1568]: time="2025-08-13T01:52:46.769302587Z" level=info msg="CreateContainer within sandbox \"296604d5a181c8b292004d1c231754f63d04fb0c72ea2234ea5750a0b06dccb0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 01:52:46.788547 containerd[1568]: time="2025-08-13T01:52:46.788318848Z" level=info msg="Container e087009e6cdb998b5588f006d6bfb90f18b3b784e500bd6418ad284744ea3698: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:52:46.791652 systemd[1]: Started cri-containerd-3644d2bd85041de4a22453e4d5e84b3ea267f5ca4806a653b0ae2d9498be5975.scope - libcontainer container 3644d2bd85041de4a22453e4d5e84b3ea267f5ca4806a653b0ae2d9498be5975. Aug 13 01:52:46.799703 containerd[1568]: time="2025-08-13T01:52:46.799268112Z" level=info msg="CreateContainer within sandbox \"296604d5a181c8b292004d1c231754f63d04fb0c72ea2234ea5750a0b06dccb0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e087009e6cdb998b5588f006d6bfb90f18b3b784e500bd6418ad284744ea3698\"" Aug 13 01:52:46.801961 containerd[1568]: time="2025-08-13T01:52:46.800641711Z" level=info msg="StartContainer for \"e087009e6cdb998b5588f006d6bfb90f18b3b784e500bd6418ad284744ea3698\"" Aug 13 01:52:46.801720 systemd[1]: Started cri-containerd-cfad0dc4f62f0d5fdbd90fd35c9895bec121428b6417bd1463d21327ca36b18b.scope - libcontainer container cfad0dc4f62f0d5fdbd90fd35c9895bec121428b6417bd1463d21327ca36b18b. 
Aug 13 01:52:46.805270 containerd[1568]: time="2025-08-13T01:52:46.805224539Z" level=info msg="connecting to shim e087009e6cdb998b5588f006d6bfb90f18b3b784e500bd6418ad284744ea3698" address="unix:///run/containerd/s/c592a812da482ddef7549b1b2156a7a4ebe8cf248075d389662195aad823a789" protocol=ttrpc version=3 Aug 13 01:52:46.852855 systemd[1]: Started cri-containerd-e087009e6cdb998b5588f006d6bfb90f18b3b784e500bd6418ad284744ea3698.scope - libcontainer container e087009e6cdb998b5588f006d6bfb90f18b3b784e500bd6418ad284744ea3698. Aug 13 01:52:46.945808 containerd[1568]: time="2025-08-13T01:52:46.945751229Z" level=info msg="StartContainer for \"cfad0dc4f62f0d5fdbd90fd35c9895bec121428b6417bd1463d21327ca36b18b\" returns successfully" Aug 13 01:52:46.954324 containerd[1568]: time="2025-08-13T01:52:46.954178775Z" level=info msg="StartContainer for \"3644d2bd85041de4a22453e4d5e84b3ea267f5ca4806a653b0ae2d9498be5975\" returns successfully" Aug 13 01:52:47.060326 kubelet[2394]: W0813 01:52:47.058623 2394 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.236.112.106:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-236-112-106&limit=500&resourceVersion=0": dial tcp 172.236.112.106:6443: connect: connection refused Aug 13 01:52:47.060326 kubelet[2394]: E0813 01:52:47.058724 2394 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.236.112.106:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-236-112-106&limit=500&resourceVersion=0\": dial tcp 172.236.112.106:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:52:47.153748 containerd[1568]: time="2025-08-13T01:52:47.153608395Z" level=info msg="StartContainer for \"e087009e6cdb998b5588f006d6bfb90f18b3b784e500bd6418ad284744ea3698\" returns successfully" Aug 13 01:52:47.427574 kubelet[2394]: I0813 01:52:47.427511 2394 kubelet_node_status.go:72] "Attempting to register 
node" node="172-236-112-106" Aug 13 01:52:47.879917 kubelet[2394]: E0813 01:52:47.879727 2394 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 01:52:47.880947 kubelet[2394]: E0813 01:52:47.880920 2394 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 01:52:47.885856 kubelet[2394]: E0813 01:52:47.885801 2394 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 01:52:48.888714 kubelet[2394]: E0813 01:52:48.888681 2394 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 01:52:48.889792 kubelet[2394]: E0813 01:52:48.889524 2394 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 01:52:48.889792 kubelet[2394]: E0813 01:52:48.889757 2394 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 01:52:49.024697 update_engine[1539]: I20250813 01:52:49.024564 1539 update_attempter.cc:509] Updating boot flags... 
Aug 13 01:52:49.731027 kubelet[2394]: E0813 01:52:49.730945 2394 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-236-112-106\" not found" node="172-236-112-106" Aug 13 01:52:49.886301 kubelet[2394]: I0813 01:52:49.886046 2394 kubelet_node_status.go:75] "Successfully registered node" node="172-236-112-106" Aug 13 01:52:49.886301 kubelet[2394]: E0813 01:52:49.886097 2394 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172-236-112-106\": node \"172-236-112-106\" not found" Aug 13 01:52:49.908096 kubelet[2394]: E0813 01:52:49.907768 2394 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-172-236-112-106\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-236-112-106" Aug 13 01:52:49.908096 kubelet[2394]: E0813 01:52:49.907984 2394 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 01:52:50.764765 kubelet[2394]: I0813 01:52:50.764707 2394 apiserver.go:52] "Watching apiserver" Aug 13 01:52:50.810856 kubelet[2394]: I0813 01:52:50.810774 2394 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 01:52:51.640828 systemd[1]: Reload requested from client PID 2682 ('systemctl') (unit session-9.scope)... Aug 13 01:52:51.640852 systemd[1]: Reloading... Aug 13 01:52:51.810532 zram_generator::config[2737]: No configuration found. Aug 13 01:52:51.904763 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:52:52.033076 systemd[1]: Reloading finished in 391 ms. 
Aug 13 01:52:52.075787 kubelet[2394]: E0813 01:52:52.075674 2394 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 01:52:52.079202 kubelet[2394]: I0813 01:52:52.079067 2394 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 01:52:52.080563 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:52:52.095052 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 01:52:52.095410 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:52:52.095498 systemd[1]: kubelet.service: Consumed 1.119s CPU time, 130.4M memory peak. Aug 13 01:52:52.099178 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:52:52.293028 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:52:52.303014 (kubelet)[2776]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 01:52:52.393662 kubelet[2776]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:52:52.393662 kubelet[2776]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 01:52:52.393662 kubelet[2776]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 13 01:52:52.394222 kubelet[2776]: I0813 01:52:52.393743 2776 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 01:52:52.402581 kubelet[2776]: I0813 01:52:52.402546 2776 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 01:52:52.402581 kubelet[2776]: I0813 01:52:52.402573 2776 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 01:52:52.402945 kubelet[2776]: I0813 01:52:52.402922 2776 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 01:52:52.404668 kubelet[2776]: I0813 01:52:52.404648 2776 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 13 01:52:52.406714 kubelet[2776]: I0813 01:52:52.406690 2776 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 01:52:52.430678 kubelet[2776]: I0813 01:52:52.430644 2776 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 13 01:52:52.434774 kubelet[2776]: I0813 01:52:52.434746 2776 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 01:52:52.435027 kubelet[2776]: I0813 01:52:52.435013 2776 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 01:52:52.435280 kubelet[2776]: I0813 01:52:52.435247 2776 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 01:52:52.435524 kubelet[2776]: I0813 01:52:52.435330 2776 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-236-112-106","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerP
olicyOptions":null,"CgroupVersion":2} Aug 13 01:52:52.435665 kubelet[2776]: I0813 01:52:52.435652 2776 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 01:52:52.435718 kubelet[2776]: I0813 01:52:52.435708 2776 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 01:52:52.435797 kubelet[2776]: I0813 01:52:52.435785 2776 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:52:52.435973 kubelet[2776]: I0813 01:52:52.435957 2776 kubelet.go:408] "Attempting to sync node with API server" Aug 13 01:52:52.436104 kubelet[2776]: I0813 01:52:52.436089 2776 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 01:52:52.436195 kubelet[2776]: I0813 01:52:52.436185 2776 kubelet.go:314] "Adding apiserver pod source" Aug 13 01:52:52.436250 kubelet[2776]: I0813 01:52:52.436240 2776 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 01:52:52.443205 kubelet[2776]: I0813 01:52:52.442272 2776 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Aug 13 01:52:52.443205 kubelet[2776]: I0813 01:52:52.442754 2776 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 01:52:52.443205 kubelet[2776]: I0813 01:52:52.443193 2776 server.go:1274] "Started kubelet" Aug 13 01:52:52.444486 kubelet[2776]: I0813 01:52:52.444298 2776 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 01:52:52.445820 kubelet[2776]: I0813 01:52:52.445254 2776 server.go:449] "Adding debug handlers to kubelet server" Aug 13 01:52:52.448144 kubelet[2776]: I0813 01:52:52.447377 2776 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 01:52:52.448144 kubelet[2776]: I0813 01:52:52.447679 2776 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 
01:52:52.449052 kubelet[2776]: I0813 01:52:52.448891 2776 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 01:52:52.451450 kubelet[2776]: I0813 01:52:52.451421 2776 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 01:52:52.458718 kubelet[2776]: I0813 01:52:52.458056 2776 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 01:52:52.458718 kubelet[2776]: I0813 01:52:52.458203 2776 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 01:52:52.458718 kubelet[2776]: I0813 01:52:52.458343 2776 reconciler.go:26] "Reconciler: start to sync state" Aug 13 01:52:52.462212 kubelet[2776]: I0813 01:52:52.462182 2776 factory.go:221] Registration of the systemd container factory successfully Aug 13 01:52:52.462398 kubelet[2776]: I0813 01:52:52.462375 2776 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 01:52:52.465978 kubelet[2776]: E0813 01:52:52.465951 2776 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 01:52:52.467240 kubelet[2776]: I0813 01:52:52.467052 2776 factory.go:221] Registration of the containerd container factory successfully Aug 13 01:52:52.482959 kubelet[2776]: I0813 01:52:52.482902 2776 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 01:52:52.484362 kubelet[2776]: I0813 01:52:52.484345 2776 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 01:52:52.484489 kubelet[2776]: I0813 01:52:52.484460 2776 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 01:52:52.484574 kubelet[2776]: I0813 01:52:52.484564 2776 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 01:52:52.484675 kubelet[2776]: E0813 01:52:52.484655 2776 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 01:52:52.535627 kubelet[2776]: I0813 01:52:52.535598 2776 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 01:52:52.535844 kubelet[2776]: I0813 01:52:52.535828 2776 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 01:52:52.535908 kubelet[2776]: I0813 01:52:52.535899 2776 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:52:52.536128 kubelet[2776]: I0813 01:52:52.536109 2776 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 01:52:52.536237 kubelet[2776]: I0813 01:52:52.536208 2776 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 01:52:52.536286 kubelet[2776]: I0813 01:52:52.536279 2776 policy_none.go:49] "None policy: Start" Aug 13 01:52:52.537028 kubelet[2776]: I0813 01:52:52.537014 2776 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 01:52:52.537181 kubelet[2776]: I0813 01:52:52.537169 2776 state_mem.go:35] "Initializing new in-memory state store" Aug 13 01:52:52.537384 kubelet[2776]: I0813 01:52:52.537371 2776 state_mem.go:75] "Updated machine memory state" Aug 13 01:52:52.547870 kubelet[2776]: I0813 01:52:52.547422 2776 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 01:52:52.547870 kubelet[2776]: I0813 01:52:52.547760 2776 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 01:52:52.547870 kubelet[2776]: I0813 01:52:52.547774 2776 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 01:52:52.549651 kubelet[2776]: I0813 01:52:52.548508 2776 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 01:52:52.592648 kubelet[2776]: E0813 01:52:52.592539 2776 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-172-236-112-106\" already exists" pod="kube-system/kube-controller-manager-172-236-112-106" Aug 13 01:52:52.638720 sudo[2807]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Aug 13 01:52:52.639198 sudo[2807]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Aug 13 01:52:52.660423 kubelet[2776]: I0813 01:52:52.660376 2776 kubelet_node_status.go:72] "Attempting to register node" node="172-236-112-106" Aug 13 01:52:52.671393 kubelet[2776]: I0813 01:52:52.671345 2776 kubelet_node_status.go:111] "Node was previously registered" node="172-236-112-106" Aug 13 01:52:52.671580 kubelet[2776]: I0813 01:52:52.671444 2776 kubelet_node_status.go:75] "Successfully registered node" node="172-236-112-106" Aug 13 01:52:52.758985 kubelet[2776]: I0813 01:52:52.758862 2776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/32f9006e59298087db75d8ffe2720ed9-k8s-certs\") pod \"kube-controller-manager-172-236-112-106\" (UID: \"32f9006e59298087db75d8ffe2720ed9\") " pod="kube-system/kube-controller-manager-172-236-112-106" Aug 13 01:52:52.758985 kubelet[2776]: I0813 01:52:52.758988 2776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/32f9006e59298087db75d8ffe2720ed9-kubeconfig\") pod \"kube-controller-manager-172-236-112-106\" (UID: \"32f9006e59298087db75d8ffe2720ed9\") " pod="kube-system/kube-controller-manager-172-236-112-106" Aug 13 01:52:52.759338 kubelet[2776]: I0813 
01:52:52.759049 2776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/32f9006e59298087db75d8ffe2720ed9-usr-share-ca-certificates\") pod \"kube-controller-manager-172-236-112-106\" (UID: \"32f9006e59298087db75d8ffe2720ed9\") " pod="kube-system/kube-controller-manager-172-236-112-106" Aug 13 01:52:52.759338 kubelet[2776]: I0813 01:52:52.759072 2776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/17e9581650b027b7404c8df4c295a813-kubeconfig\") pod \"kube-scheduler-172-236-112-106\" (UID: \"17e9581650b027b7404c8df4c295a813\") " pod="kube-system/kube-scheduler-172-236-112-106" Aug 13 01:52:52.759338 kubelet[2776]: I0813 01:52:52.759122 2776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7612615acb7d2783c3487ab09dc4fda0-usr-share-ca-certificates\") pod \"kube-apiserver-172-236-112-106\" (UID: \"7612615acb7d2783c3487ab09dc4fda0\") " pod="kube-system/kube-apiserver-172-236-112-106" Aug 13 01:52:52.759338 kubelet[2776]: I0813 01:52:52.759140 2776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/32f9006e59298087db75d8ffe2720ed9-ca-certs\") pod \"kube-controller-manager-172-236-112-106\" (UID: \"32f9006e59298087db75d8ffe2720ed9\") " pod="kube-system/kube-controller-manager-172-236-112-106" Aug 13 01:52:52.759338 kubelet[2776]: I0813 01:52:52.759156 2776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/32f9006e59298087db75d8ffe2720ed9-flexvolume-dir\") pod \"kube-controller-manager-172-236-112-106\" (UID: \"32f9006e59298087db75d8ffe2720ed9\") " 
pod="kube-system/kube-controller-manager-172-236-112-106" Aug 13 01:52:52.759541 kubelet[2776]: I0813 01:52:52.759378 2776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7612615acb7d2783c3487ab09dc4fda0-ca-certs\") pod \"kube-apiserver-172-236-112-106\" (UID: \"7612615acb7d2783c3487ab09dc4fda0\") " pod="kube-system/kube-apiserver-172-236-112-106" Aug 13 01:52:52.759541 kubelet[2776]: I0813 01:52:52.759398 2776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7612615acb7d2783c3487ab09dc4fda0-k8s-certs\") pod \"kube-apiserver-172-236-112-106\" (UID: \"7612615acb7d2783c3487ab09dc4fda0\") " pod="kube-system/kube-apiserver-172-236-112-106" Aug 13 01:52:52.894181 kubelet[2776]: E0813 01:52:52.894122 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 01:52:52.896487 kubelet[2776]: E0813 01:52:52.894944 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 01:52:52.896487 kubelet[2776]: E0813 01:52:52.895199 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 01:52:53.444460 kubelet[2776]: I0813 01:52:53.444391 2776 apiserver.go:52] "Watching apiserver" Aug 13 01:52:53.468868 kubelet[2776]: I0813 01:52:53.468816 2776 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 01:52:53.507276 kubelet[2776]: E0813 01:52:53.507178 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 01:52:53.521062 kubelet[2776]: E0813 01:52:53.521008 2776 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-172-236-112-106\" already exists" pod="kube-system/kube-apiserver-172-236-112-106" Aug 13 01:52:53.522176 kubelet[2776]: E0813 01:52:53.521238 2776 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-172-236-112-106\" already exists" pod="kube-system/kube-scheduler-172-236-112-106" Aug 13 01:52:53.522176 kubelet[2776]: E0813 01:52:53.521930 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 01:52:53.523029 kubelet[2776]: E0813 01:52:53.523012 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 01:52:53.552501 kubelet[2776]: I0813 01:52:53.552417 2776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-236-112-106" podStartSLOduration=1.552391315 podStartE2EDuration="1.552391315s" podCreationTimestamp="2025-08-13 01:52:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:52:53.54268897 +0000 UTC m=+1.213543944" watchObservedRunningTime="2025-08-13 01:52:53.552391315 +0000 UTC m=+1.223246289" Aug 13 01:52:53.561929 kubelet[2776]: I0813 01:52:53.561590 2776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-236-112-106" podStartSLOduration=1.56156733 podStartE2EDuration="1.56156733s" podCreationTimestamp="2025-08-13 01:52:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:52:53.553180034 +0000 UTC m=+1.224035008" watchObservedRunningTime="2025-08-13 01:52:53.56156733 +0000 UTC m=+1.232422294" Aug 13 01:52:53.570761 kubelet[2776]: I0813 01:52:53.570697 2776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-236-112-106" podStartSLOduration=1.570666076 podStartE2EDuration="1.570666076s" podCreationTimestamp="2025-08-13 01:52:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:52:53.56255316 +0000 UTC m=+1.233408124" watchObservedRunningTime="2025-08-13 01:52:53.570666076 +0000 UTC m=+1.241521040" Aug 13 01:52:53.723197 sudo[2807]: pam_unix(sudo:session): session closed for user root Aug 13 01:52:54.509234 kubelet[2776]: E0813 01:52:54.508918 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 01:52:54.509234 kubelet[2776]: E0813 01:52:54.509055 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 01:52:55.788749 sudo[1838]: pam_unix(sudo:session): session closed for user root Aug 13 01:52:55.841538 sshd[1837]: Connection closed by 147.75.109.163 port 43102 Aug 13 01:52:55.841985 sshd-session[1835]: pam_unix(sshd:session): session closed for user core Aug 13 01:52:55.849387 systemd[1]: sshd@8-172.236.112.106:22-147.75.109.163:43102.service: Deactivated successfully. Aug 13 01:52:55.852590 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 01:52:55.852883 systemd[1]: session-9.scope: Consumed 7.290s CPU time, 269.4M memory peak. Aug 13 01:52:55.854648 systemd-logind[1525]: Session 9 logged out. 
Waiting for processes to exit. Aug 13 01:52:55.856965 systemd-logind[1525]: Removed session 9. Aug 13 01:52:57.399960 kubelet[2776]: I0813 01:52:57.399906 2776 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 01:52:57.401128 kubelet[2776]: I0813 01:52:57.400660 2776 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 01:52:57.401175 containerd[1568]: time="2025-08-13T01:52:57.400383520Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 01:52:57.939301 systemd[1]: Created slice kubepods-besteffort-pod5a0bd478_50c4_42d2_86de_976c8b3dca1f.slice - libcontainer container kubepods-besteffort-pod5a0bd478_50c4_42d2_86de_976c8b3dca1f.slice. Aug 13 01:52:57.962015 systemd[1]: Created slice kubepods-burstable-pode4cc5d6f_9acd_479d_8967_faf3e92c7070.slice - libcontainer container kubepods-burstable-pode4cc5d6f_9acd_479d_8967_faf3e92c7070.slice. 
Aug 13 01:52:58.044825 kubelet[2776]: I0813 01:52:58.044776 2776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5a0bd478-50c4-42d2-86de-976c8b3dca1f-kube-proxy\") pod \"kube-proxy-n69hs\" (UID: \"5a0bd478-50c4-42d2-86de-976c8b3dca1f\") " pod="kube-system/kube-proxy-n69hs" Aug 13 01:52:58.045777 kubelet[2776]: I0813 01:52:58.045253 2776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a0bd478-50c4-42d2-86de-976c8b3dca1f-lib-modules\") pod \"kube-proxy-n69hs\" (UID: \"5a0bd478-50c4-42d2-86de-976c8b3dca1f\") " pod="kube-system/kube-proxy-n69hs" Aug 13 01:52:58.045980 kubelet[2776]: I0813 01:52:58.045953 2776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e4cc5d6f-9acd-479d-8967-faf3e92c7070-bpf-maps\") pod \"cilium-vfxxm\" (UID: \"e4cc5d6f-9acd-479d-8967-faf3e92c7070\") " pod="kube-system/cilium-vfxxm" Aug 13 01:52:58.046181 kubelet[2776]: I0813 01:52:58.046131 2776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e4cc5d6f-9acd-479d-8967-faf3e92c7070-host-proc-sys-net\") pod \"cilium-vfxxm\" (UID: \"e4cc5d6f-9acd-479d-8967-faf3e92c7070\") " pod="kube-system/cilium-vfxxm" Aug 13 01:52:58.046181 kubelet[2776]: I0813 01:52:58.046158 2776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e4cc5d6f-9acd-479d-8967-faf3e92c7070-cilium-run\") pod \"cilium-vfxxm\" (UID: \"e4cc5d6f-9acd-479d-8967-faf3e92c7070\") " pod="kube-system/cilium-vfxxm" Aug 13 01:52:58.046439 kubelet[2776]: I0813 01:52:58.046354 2776 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg8x5\" (UniqueName: \"kubernetes.io/projected/e4cc5d6f-9acd-479d-8967-faf3e92c7070-kube-api-access-mg8x5\") pod \"cilium-vfxxm\" (UID: \"e4cc5d6f-9acd-479d-8967-faf3e92c7070\") " pod="kube-system/cilium-vfxxm" Aug 13 01:52:58.046737 kubelet[2776]: I0813 01:52:58.046689 2776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e4cc5d6f-9acd-479d-8967-faf3e92c7070-hostproc\") pod \"cilium-vfxxm\" (UID: \"e4cc5d6f-9acd-479d-8967-faf3e92c7070\") " pod="kube-system/cilium-vfxxm" Aug 13 01:52:58.046896 kubelet[2776]: I0813 01:52:58.046804 2776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e4cc5d6f-9acd-479d-8967-faf3e92c7070-lib-modules\") pod \"cilium-vfxxm\" (UID: \"e4cc5d6f-9acd-479d-8967-faf3e92c7070\") " pod="kube-system/cilium-vfxxm" Aug 13 01:52:58.047095 kubelet[2776]: I0813 01:52:58.047079 2776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5a0bd478-50c4-42d2-86de-976c8b3dca1f-xtables-lock\") pod \"kube-proxy-n69hs\" (UID: \"5a0bd478-50c4-42d2-86de-976c8b3dca1f\") " pod="kube-system/kube-proxy-n69hs" Aug 13 01:52:58.047281 kubelet[2776]: I0813 01:52:58.047144 2776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e4cc5d6f-9acd-479d-8967-faf3e92c7070-hubble-tls\") pod \"cilium-vfxxm\" (UID: \"e4cc5d6f-9acd-479d-8967-faf3e92c7070\") " pod="kube-system/cilium-vfxxm" Aug 13 01:52:58.047281 kubelet[2776]: I0813 01:52:58.047162 2776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/e4cc5d6f-9acd-479d-8967-faf3e92c7070-xtables-lock\") pod \"cilium-vfxxm\" (UID: \"e4cc5d6f-9acd-479d-8967-faf3e92c7070\") " pod="kube-system/cilium-vfxxm" Aug 13 01:52:58.047281 kubelet[2776]: I0813 01:52:58.047178 2776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e4cc5d6f-9acd-479d-8967-faf3e92c7070-cilium-config-path\") pod \"cilium-vfxxm\" (UID: \"e4cc5d6f-9acd-479d-8967-faf3e92c7070\") " pod="kube-system/cilium-vfxxm" Aug 13 01:52:58.048564 kubelet[2776]: I0813 01:52:58.047606 2776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9gsl\" (UniqueName: \"kubernetes.io/projected/5a0bd478-50c4-42d2-86de-976c8b3dca1f-kube-api-access-q9gsl\") pod \"kube-proxy-n69hs\" (UID: \"5a0bd478-50c4-42d2-86de-976c8b3dca1f\") " pod="kube-system/kube-proxy-n69hs" Aug 13 01:52:58.048564 kubelet[2776]: I0813 01:52:58.047637 2776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e4cc5d6f-9acd-479d-8967-faf3e92c7070-cni-path\") pod \"cilium-vfxxm\" (UID: \"e4cc5d6f-9acd-479d-8967-faf3e92c7070\") " pod="kube-system/cilium-vfxxm" Aug 13 01:52:58.048564 kubelet[2776]: I0813 01:52:58.047662 2776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e4cc5d6f-9acd-479d-8967-faf3e92c7070-cilium-cgroup\") pod \"cilium-vfxxm\" (UID: \"e4cc5d6f-9acd-479d-8967-faf3e92c7070\") " pod="kube-system/cilium-vfxxm" Aug 13 01:52:58.048564 kubelet[2776]: I0813 01:52:58.047679 2776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e4cc5d6f-9acd-479d-8967-faf3e92c7070-clustermesh-secrets\") pod \"cilium-vfxxm\" 
(UID: \"e4cc5d6f-9acd-479d-8967-faf3e92c7070\") " pod="kube-system/cilium-vfxxm" Aug 13 01:52:58.048564 kubelet[2776]: I0813 01:52:58.047694 2776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e4cc5d6f-9acd-479d-8967-faf3e92c7070-etc-cni-netd\") pod \"cilium-vfxxm\" (UID: \"e4cc5d6f-9acd-479d-8967-faf3e92c7070\") " pod="kube-system/cilium-vfxxm" Aug 13 01:52:58.048745 kubelet[2776]: I0813 01:52:58.047708 2776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e4cc5d6f-9acd-479d-8967-faf3e92c7070-host-proc-sys-kernel\") pod \"cilium-vfxxm\" (UID: \"e4cc5d6f-9acd-479d-8967-faf3e92c7070\") " pod="kube-system/cilium-vfxxm" Aug 13 01:52:58.260411 kubelet[2776]: E0813 01:52:58.259729 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 01:52:58.262160 containerd[1568]: time="2025-08-13T01:52:58.261931639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n69hs,Uid:5a0bd478-50c4-42d2-86de-976c8b3dca1f,Namespace:kube-system,Attempt:0,}" Aug 13 01:52:58.269022 kubelet[2776]: E0813 01:52:58.268977 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 01:52:58.269904 containerd[1568]: time="2025-08-13T01:52:58.269871178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vfxxm,Uid:e4cc5d6f-9acd-479d-8967-faf3e92c7070,Namespace:kube-system,Attempt:0,}" Aug 13 01:52:58.301307 containerd[1568]: time="2025-08-13T01:52:58.301248589Z" level=info msg="connecting to shim f6e0c66137d411b2770b860dba62768932a114d19d97d1e8716a0981a1b7947b" 
address="unix:///run/containerd/s/3b4c441df8a2580bc01a8ed897a60396e9d9f1b8a5887a0185e18e5664e9cf40" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:52:58.344658 containerd[1568]: time="2025-08-13T01:52:58.344599624Z" level=info msg="connecting to shim 1705078478ab458176399fbecaeb74a4c44162ef7d0f854ae8b4908c40c3c154" address="unix:///run/containerd/s/c5e75c303f8ffbfc957afe9511b072584e3fb2a77e5c4da1d6fcb7961146cf56" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:52:58.396099 systemd[1]: Started cri-containerd-1705078478ab458176399fbecaeb74a4c44162ef7d0f854ae8b4908c40c3c154.scope - libcontainer container 1705078478ab458176399fbecaeb74a4c44162ef7d0f854ae8b4908c40c3c154. Aug 13 01:52:58.427665 systemd[1]: Started cri-containerd-f6e0c66137d411b2770b860dba62768932a114d19d97d1e8716a0981a1b7947b.scope - libcontainer container f6e0c66137d411b2770b860dba62768932a114d19d97d1e8716a0981a1b7947b. Aug 13 01:52:58.512362 containerd[1568]: time="2025-08-13T01:52:58.512205219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vfxxm,Uid:e4cc5d6f-9acd-479d-8967-faf3e92c7070,Namespace:kube-system,Attempt:0,} returns sandbox id \"1705078478ab458176399fbecaeb74a4c44162ef7d0f854ae8b4908c40c3c154\"" Aug 13 01:52:58.515379 kubelet[2776]: E0813 01:52:58.515300 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 01:52:58.518809 containerd[1568]: time="2025-08-13T01:52:58.518731439Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Aug 13 01:52:58.573312 containerd[1568]: time="2025-08-13T01:52:58.573248261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n69hs,Uid:5a0bd478-50c4-42d2-86de-976c8b3dca1f,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"f6e0c66137d411b2770b860dba62768932a114d19d97d1e8716a0981a1b7947b\"" Aug 13 01:52:58.574492 kubelet[2776]: E0813 01:52:58.574186 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 01:52:58.577816 containerd[1568]: time="2025-08-13T01:52:58.577633396Z" level=info msg="CreateContainer within sandbox \"f6e0c66137d411b2770b860dba62768932a114d19d97d1e8716a0981a1b7947b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 01:52:58.590723 systemd[1]: Created slice kubepods-besteffort-podaf06653a_d19b_41b2_aff8_7c6df089793a.slice - libcontainer container kubepods-besteffort-podaf06653a_d19b_41b2_aff8_7c6df089793a.slice. Aug 13 01:52:58.602702 containerd[1568]: time="2025-08-13T01:52:58.602633903Z" level=info msg="Container 559310e89a523d2547b011144105fb660bf65662ff7baef9b4c48c28e69ff556: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:52:58.617966 containerd[1568]: time="2025-08-13T01:52:58.617904487Z" level=info msg="CreateContainer within sandbox \"f6e0c66137d411b2770b860dba62768932a114d19d97d1e8716a0981a1b7947b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"559310e89a523d2547b011144105fb660bf65662ff7baef9b4c48c28e69ff556\"" Aug 13 01:52:58.619762 containerd[1568]: time="2025-08-13T01:52:58.618698312Z" level=info msg="StartContainer for \"559310e89a523d2547b011144105fb660bf65662ff7baef9b4c48c28e69ff556\"" Aug 13 01:52:58.620851 containerd[1568]: time="2025-08-13T01:52:58.620803600Z" level=info msg="connecting to shim 559310e89a523d2547b011144105fb660bf65662ff7baef9b4c48c28e69ff556" address="unix:///run/containerd/s/3b4c441df8a2580bc01a8ed897a60396e9d9f1b8a5887a0185e18e5664e9cf40" protocol=ttrpc version=3 Aug 13 01:52:58.643653 systemd[1]: Started cri-containerd-559310e89a523d2547b011144105fb660bf65662ff7baef9b4c48c28e69ff556.scope - libcontainer container 
559310e89a523d2547b011144105fb660bf65662ff7baef9b4c48c28e69ff556. Aug 13 01:52:58.753309 kubelet[2776]: I0813 01:52:58.752937 2776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwshb\" (UniqueName: \"kubernetes.io/projected/af06653a-d19b-41b2-aff8-7c6df089793a-kube-api-access-bwshb\") pod \"cilium-operator-5d85765b45-8ck97\" (UID: \"af06653a-d19b-41b2-aff8-7c6df089793a\") " pod="kube-system/cilium-operator-5d85765b45-8ck97" Aug 13 01:52:58.753309 kubelet[2776]: I0813 01:52:58.752992 2776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/af06653a-d19b-41b2-aff8-7c6df089793a-cilium-config-path\") pod \"cilium-operator-5d85765b45-8ck97\" (UID: \"af06653a-d19b-41b2-aff8-7c6df089793a\") " pod="kube-system/cilium-operator-5d85765b45-8ck97" Aug 13 01:52:58.766165 containerd[1568]: time="2025-08-13T01:52:58.766014910Z" level=info msg="StartContainer for \"559310e89a523d2547b011144105fb660bf65662ff7baef9b4c48c28e69ff556\" returns successfully" Aug 13 01:52:58.896502 kubelet[2776]: E0813 01:52:58.895986 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 01:52:58.896775 containerd[1568]: time="2025-08-13T01:52:58.896738780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-8ck97,Uid:af06653a-d19b-41b2-aff8-7c6df089793a,Namespace:kube-system,Attempt:0,}" Aug 13 01:52:58.957312 containerd[1568]: time="2025-08-13T01:52:58.957233798Z" level=info msg="connecting to shim 2325ea9042fad40778b5f318c34cdb3c0c9100ca35902a4beb596811d078fd4d" address="unix:///run/containerd/s/a08eb5922d2a3d7150fce0101f74d4852b6f67c4630ec5f09c4fc6c5ae826aaf" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:52:58.999178 systemd[1]: Started 
cri-containerd-2325ea9042fad40778b5f318c34cdb3c0c9100ca35902a4beb596811d078fd4d.scope - libcontainer container 2325ea9042fad40778b5f318c34cdb3c0c9100ca35902a4beb596811d078fd4d. Aug 13 01:52:59.085975 containerd[1568]: time="2025-08-13T01:52:59.085931423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-8ck97,Uid:af06653a-d19b-41b2-aff8-7c6df089793a,Namespace:kube-system,Attempt:0,} returns sandbox id \"2325ea9042fad40778b5f318c34cdb3c0c9100ca35902a4beb596811d078fd4d\"" Aug 13 01:52:59.087987 kubelet[2776]: E0813 01:52:59.087894 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 01:52:59.540096 kubelet[2776]: E0813 01:52:59.539084 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 01:52:59.633565 kubelet[2776]: I0813 01:52:59.617505 2776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-n69hs" podStartSLOduration=2.617484793 podStartE2EDuration="2.617484793s" podCreationTimestamp="2025-08-13 01:52:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:52:59.616360723 +0000 UTC m=+7.287215707" watchObservedRunningTime="2025-08-13 01:52:59.617484793 +0000 UTC m=+7.288339767" Aug 13 01:53:00.968316 kubelet[2776]: E0813 01:53:00.967933 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 01:53:01.233695 kubelet[2776]: E0813 01:53:01.233416 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 01:53:01.565975 kubelet[2776]: E0813 01:53:01.565676 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 01:53:01.566578 kubelet[2776]: E0813 01:53:01.566194 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 01:53:02.925207 kubelet[2776]: E0813 01:53:02.925134 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 01:53:05.069811 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1398452511.mount: Deactivated successfully. Aug 13 01:53:08.999509 containerd[1568]: time="2025-08-13T01:53:08.999411435Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:53:09.000294 containerd[1568]: time="2025-08-13T01:53:09.000270642Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Aug 13 01:53:09.001417 containerd[1568]: time="2025-08-13T01:53:09.001225367Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:53:09.002519 containerd[1568]: time="2025-08-13T01:53:09.002491234Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id 
\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.483705188s" Aug 13 01:53:09.002566 containerd[1568]: time="2025-08-13T01:53:09.002526587Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Aug 13 01:53:09.003918 containerd[1568]: time="2025-08-13T01:53:09.003892860Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Aug 13 01:53:09.006275 containerd[1568]: time="2025-08-13T01:53:09.006079913Z" level=info msg="CreateContainer within sandbox \"1705078478ab458176399fbecaeb74a4c44162ef7d0f854ae8b4908c40c3c154\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 01:53:09.018935 containerd[1568]: time="2025-08-13T01:53:09.017259843Z" level=info msg="Container 5b1b5a39bfb41b67f785151979dae59c10f6fb0ef2c7a3914a18686719613fce: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:53:09.021101 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount885934597.mount: Deactivated successfully. 
Aug 13 01:53:09.026143 containerd[1568]: time="2025-08-13T01:53:09.026107761Z" level=info msg="CreateContainer within sandbox \"1705078478ab458176399fbecaeb74a4c44162ef7d0f854ae8b4908c40c3c154\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5b1b5a39bfb41b67f785151979dae59c10f6fb0ef2c7a3914a18686719613fce\"" Aug 13 01:53:09.026730 containerd[1568]: time="2025-08-13T01:53:09.026706120Z" level=info msg="StartContainer for \"5b1b5a39bfb41b67f785151979dae59c10f6fb0ef2c7a3914a18686719613fce\"" Aug 13 01:53:09.028057 containerd[1568]: time="2025-08-13T01:53:09.027982867Z" level=info msg="connecting to shim 5b1b5a39bfb41b67f785151979dae59c10f6fb0ef2c7a3914a18686719613fce" address="unix:///run/containerd/s/c5e75c303f8ffbfc957afe9511b072584e3fb2a77e5c4da1d6fcb7961146cf56" protocol=ttrpc version=3 Aug 13 01:53:09.114800 systemd[1]: Started cri-containerd-5b1b5a39bfb41b67f785151979dae59c10f6fb0ef2c7a3914a18686719613fce.scope - libcontainer container 5b1b5a39bfb41b67f785151979dae59c10f6fb0ef2c7a3914a18686719613fce. Aug 13 01:53:09.175599 containerd[1568]: time="2025-08-13T01:53:09.175531636Z" level=info msg="StartContainer for \"5b1b5a39bfb41b67f785151979dae59c10f6fb0ef2c7a3914a18686719613fce\" returns successfully" Aug 13 01:53:09.190314 systemd[1]: cri-containerd-5b1b5a39bfb41b67f785151979dae59c10f6fb0ef2c7a3914a18686719613fce.scope: Deactivated successfully. 
Aug 13 01:53:09.194878 containerd[1568]: time="2025-08-13T01:53:09.194654638Z" level=info msg="received exit event container_id:\"5b1b5a39bfb41b67f785151979dae59c10f6fb0ef2c7a3914a18686719613fce\" id:\"5b1b5a39bfb41b67f785151979dae59c10f6fb0ef2c7a3914a18686719613fce\" pid:3188 exited_at:{seconds:1755049989 nanos:192450665}" Aug 13 01:53:09.195140 containerd[1568]: time="2025-08-13T01:53:09.195069423Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5b1b5a39bfb41b67f785151979dae59c10f6fb0ef2c7a3914a18686719613fce\" id:\"5b1b5a39bfb41b67f785151979dae59c10f6fb0ef2c7a3914a18686719613fce\" pid:3188 exited_at:{seconds:1755049989 nanos:192450665}" Aug 13 01:53:09.261115 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b1b5a39bfb41b67f785151979dae59c10f6fb0ef2c7a3914a18686719613fce-rootfs.mount: Deactivated successfully. Aug 13 01:53:09.591320 kubelet[2776]: E0813 01:53:09.591214 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 01:53:09.596108 containerd[1568]: time="2025-08-13T01:53:09.596053390Z" level=info msg="CreateContainer within sandbox \"1705078478ab458176399fbecaeb74a4c44162ef7d0f854ae8b4908c40c3c154\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 01:53:09.606000 containerd[1568]: time="2025-08-13T01:53:09.605966196Z" level=info msg="Container e3e5fa0a328229815ba5a8a3d89b50b1b064f45727f9cb876a1ea6f38bb46a9a: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:53:09.610319 containerd[1568]: time="2025-08-13T01:53:09.610270254Z" level=info msg="CreateContainer within sandbox \"1705078478ab458176399fbecaeb74a4c44162ef7d0f854ae8b4908c40c3c154\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e3e5fa0a328229815ba5a8a3d89b50b1b064f45727f9cb876a1ea6f38bb46a9a\"" Aug 13 01:53:09.611518 containerd[1568]: 
time="2025-08-13T01:53:09.610981123Z" level=info msg="StartContainer for \"e3e5fa0a328229815ba5a8a3d89b50b1b064f45727f9cb876a1ea6f38bb46a9a\"" Aug 13 01:53:09.612448 containerd[1568]: time="2025-08-13T01:53:09.612380700Z" level=info msg="connecting to shim e3e5fa0a328229815ba5a8a3d89b50b1b064f45727f9cb876a1ea6f38bb46a9a" address="unix:///run/containerd/s/c5e75c303f8ffbfc957afe9511b072584e3fb2a77e5c4da1d6fcb7961146cf56" protocol=ttrpc version=3 Aug 13 01:53:09.677754 systemd[1]: Started cri-containerd-e3e5fa0a328229815ba5a8a3d89b50b1b064f45727f9cb876a1ea6f38bb46a9a.scope - libcontainer container e3e5fa0a328229815ba5a8a3d89b50b1b064f45727f9cb876a1ea6f38bb46a9a. Aug 13 01:53:09.814574 containerd[1568]: time="2025-08-13T01:53:09.814503553Z" level=info msg="StartContainer for \"e3e5fa0a328229815ba5a8a3d89b50b1b064f45727f9cb876a1ea6f38bb46a9a\" returns successfully" Aug 13 01:53:09.860940 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 01:53:09.861224 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 01:53:09.861448 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Aug 13 01:53:09.864370 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 01:53:09.867872 systemd[1]: cri-containerd-e3e5fa0a328229815ba5a8a3d89b50b1b064f45727f9cb876a1ea6f38bb46a9a.scope: Deactivated successfully. 
Aug 13 01:53:09.872922 containerd[1568]: time="2025-08-13T01:53:09.872046457Z" level=info msg="received exit event container_id:\"e3e5fa0a328229815ba5a8a3d89b50b1b064f45727f9cb876a1ea6f38bb46a9a\" id:\"e3e5fa0a328229815ba5a8a3d89b50b1b064f45727f9cb876a1ea6f38bb46a9a\" pid:3232 exited_at:{seconds:1755049989 nanos:870867898}" Aug 13 01:53:09.873561 containerd[1568]: time="2025-08-13T01:53:09.872054658Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e3e5fa0a328229815ba5a8a3d89b50b1b064f45727f9cb876a1ea6f38bb46a9a\" id:\"e3e5fa0a328229815ba5a8a3d89b50b1b064f45727f9cb876a1ea6f38bb46a9a\" pid:3232 exited_at:{seconds:1755049989 nanos:870867898}" Aug 13 01:53:09.963051 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 01:53:10.612443 kubelet[2776]: E0813 01:53:10.612366 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 01:53:10.620727 containerd[1568]: time="2025-08-13T01:53:10.620655033Z" level=info msg="CreateContainer within sandbox \"1705078478ab458176399fbecaeb74a4c44162ef7d0f854ae8b4908c40c3c154\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 01:53:10.693497 containerd[1568]: time="2025-08-13T01:53:10.689410680Z" level=info msg="Container c49e26f1db3b9b0c9dc7ecbb7a6687f4d54c731d78263502996ebd19e0cb2394: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:53:10.691377 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3346871436.mount: Deactivated successfully. 
Aug 13 01:53:10.708536 containerd[1568]: time="2025-08-13T01:53:10.708462196Z" level=info msg="CreateContainer within sandbox \"1705078478ab458176399fbecaeb74a4c44162ef7d0f854ae8b4908c40c3c154\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c49e26f1db3b9b0c9dc7ecbb7a6687f4d54c731d78263502996ebd19e0cb2394\"" Aug 13 01:53:10.710769 containerd[1568]: time="2025-08-13T01:53:10.710738534Z" level=info msg="StartContainer for \"c49e26f1db3b9b0c9dc7ecbb7a6687f4d54c731d78263502996ebd19e0cb2394\"" Aug 13 01:53:10.712137 containerd[1568]: time="2025-08-13T01:53:10.712096161Z" level=info msg="connecting to shim c49e26f1db3b9b0c9dc7ecbb7a6687f4d54c731d78263502996ebd19e0cb2394" address="unix:///run/containerd/s/c5e75c303f8ffbfc957afe9511b072584e3fb2a77e5c4da1d6fcb7961146cf56" protocol=ttrpc version=3 Aug 13 01:53:10.804864 systemd[1]: Started cri-containerd-c49e26f1db3b9b0c9dc7ecbb7a6687f4d54c731d78263502996ebd19e0cb2394.scope - libcontainer container c49e26f1db3b9b0c9dc7ecbb7a6687f4d54c731d78263502996ebd19e0cb2394. Aug 13 01:53:10.970463 containerd[1568]: time="2025-08-13T01:53:10.969326608Z" level=info msg="StartContainer for \"c49e26f1db3b9b0c9dc7ecbb7a6687f4d54c731d78263502996ebd19e0cb2394\" returns successfully" Aug 13 01:53:10.972303 systemd[1]: cri-containerd-c49e26f1db3b9b0c9dc7ecbb7a6687f4d54c731d78263502996ebd19e0cb2394.scope: Deactivated successfully. 
Aug 13 01:53:10.976614 containerd[1568]: time="2025-08-13T01:53:10.976328324Z" level=info msg="received exit event container_id:\"c49e26f1db3b9b0c9dc7ecbb7a6687f4d54c731d78263502996ebd19e0cb2394\" id:\"c49e26f1db3b9b0c9dc7ecbb7a6687f4d54c731d78263502996ebd19e0cb2394\" pid:3295 exited_at:{seconds:1755049990 nanos:975137281}" Aug 13 01:53:10.976871 containerd[1568]: time="2025-08-13T01:53:10.976849355Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c49e26f1db3b9b0c9dc7ecbb7a6687f4d54c731d78263502996ebd19e0cb2394\" id:\"c49e26f1db3b9b0c9dc7ecbb7a6687f4d54c731d78263502996ebd19e0cb2394\" pid:3295 exited_at:{seconds:1755049990 nanos:975137281}" Aug 13 01:53:10.986916 containerd[1568]: time="2025-08-13T01:53:10.986847595Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:53:10.988389 containerd[1568]: time="2025-08-13T01:53:10.988316030Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Aug 13 01:53:10.989523 containerd[1568]: time="2025-08-13T01:53:10.988879284Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:53:10.991616 containerd[1568]: time="2025-08-13T01:53:10.991564874Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.987546103s" Aug 13 01:53:10.991616 containerd[1568]: 
time="2025-08-13T01:53:10.991609927Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Aug 13 01:53:10.999858 containerd[1568]: time="2025-08-13T01:53:10.999794456Z" level=info msg="CreateContainer within sandbox \"2325ea9042fad40778b5f318c34cdb3c0c9100ca35902a4beb596811d078fd4d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Aug 13 01:53:11.015360 containerd[1568]: time="2025-08-13T01:53:11.014666092Z" level=info msg="Container 583b0f89eca8bc9f1d3f3d409e9ccdf967ee400b6c75702eeafac3e692ee53b1: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:53:11.020873 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c49e26f1db3b9b0c9dc7ecbb7a6687f4d54c731d78263502996ebd19e0cb2394-rootfs.mount: Deactivated successfully. Aug 13 01:53:11.036191 containerd[1568]: time="2025-08-13T01:53:11.035965840Z" level=info msg="CreateContainer within sandbox \"2325ea9042fad40778b5f318c34cdb3c0c9100ca35902a4beb596811d078fd4d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"583b0f89eca8bc9f1d3f3d409e9ccdf967ee400b6c75702eeafac3e692ee53b1\"" Aug 13 01:53:11.039062 containerd[1568]: time="2025-08-13T01:53:11.038859231Z" level=info msg="StartContainer for \"583b0f89eca8bc9f1d3f3d409e9ccdf967ee400b6c75702eeafac3e692ee53b1\"" Aug 13 01:53:11.040950 containerd[1568]: time="2025-08-13T01:53:11.040371162Z" level=info msg="connecting to shim 583b0f89eca8bc9f1d3f3d409e9ccdf967ee400b6c75702eeafac3e692ee53b1" address="unix:///run/containerd/s/a08eb5922d2a3d7150fce0101f74d4852b6f67c4630ec5f09c4fc6c5ae826aaf" protocol=ttrpc version=3 Aug 13 01:53:11.087643 systemd[1]: Started cri-containerd-583b0f89eca8bc9f1d3f3d409e9ccdf967ee400b6c75702eeafac3e692ee53b1.scope - libcontainer container 
583b0f89eca8bc9f1d3f3d409e9ccdf967ee400b6c75702eeafac3e692ee53b1. Aug 13 01:53:11.125589 containerd[1568]: time="2025-08-13T01:53:11.125526001Z" level=info msg="StartContainer for \"583b0f89eca8bc9f1d3f3d409e9ccdf967ee400b6c75702eeafac3e692ee53b1\" returns successfully" Aug 13 01:53:11.646386 kubelet[2776]: E0813 01:53:11.646066 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 01:53:11.649874 kubelet[2776]: E0813 01:53:11.649780 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 01:53:11.650891 containerd[1568]: time="2025-08-13T01:53:11.650857964Z" level=info msg="CreateContainer within sandbox \"1705078478ab458176399fbecaeb74a4c44162ef7d0f854ae8b4908c40c3c154\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 13 01:53:11.663399 containerd[1568]: time="2025-08-13T01:53:11.662703811Z" level=info msg="Container 5a3810be179362ad2dab86a252bd15e6de4d66c1039ed48f58345b9e8caeac13: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:53:11.675590 containerd[1568]: time="2025-08-13T01:53:11.675533870Z" level=info msg="CreateContainer within sandbox \"1705078478ab458176399fbecaeb74a4c44162ef7d0f854ae8b4908c40c3c154\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5a3810be179362ad2dab86a252bd15e6de4d66c1039ed48f58345b9e8caeac13\"" Aug 13 01:53:11.677251 containerd[1568]: time="2025-08-13T01:53:11.677225493Z" level=info msg="StartContainer for \"5a3810be179362ad2dab86a252bd15e6de4d66c1039ed48f58345b9e8caeac13\"" Aug 13 01:53:11.679683 containerd[1568]: time="2025-08-13T01:53:11.679633080Z" level=info msg="connecting to shim 5a3810be179362ad2dab86a252bd15e6de4d66c1039ed48f58345b9e8caeac13" 
address="unix:///run/containerd/s/c5e75c303f8ffbfc957afe9511b072584e3fb2a77e5c4da1d6fcb7961146cf56" protocol=ttrpc version=3 Aug 13 01:53:11.773561 systemd[1]: Started cri-containerd-5a3810be179362ad2dab86a252bd15e6de4d66c1039ed48f58345b9e8caeac13.scope - libcontainer container 5a3810be179362ad2dab86a252bd15e6de4d66c1039ed48f58345b9e8caeac13. Aug 13 01:53:12.020881 containerd[1568]: time="2025-08-13T01:53:12.020673515Z" level=info msg="StartContainer for \"5a3810be179362ad2dab86a252bd15e6de4d66c1039ed48f58345b9e8caeac13\" returns successfully" Aug 13 01:53:12.024716 containerd[1568]: time="2025-08-13T01:53:12.024682300Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5a3810be179362ad2dab86a252bd15e6de4d66c1039ed48f58345b9e8caeac13\" id:\"5a3810be179362ad2dab86a252bd15e6de4d66c1039ed48f58345b9e8caeac13\" pid:3371 exited_at:{seconds:1755049992 nanos:24258501}" Aug 13 01:53:12.024790 systemd[1]: cri-containerd-5a3810be179362ad2dab86a252bd15e6de4d66c1039ed48f58345b9e8caeac13.scope: Deactivated successfully. Aug 13 01:53:12.025756 containerd[1568]: time="2025-08-13T01:53:12.025351376Z" level=info msg="received exit event container_id:\"5a3810be179362ad2dab86a252bd15e6de4d66c1039ed48f58345b9e8caeac13\" id:\"5a3810be179362ad2dab86a252bd15e6de4d66c1039ed48f58345b9e8caeac13\" pid:3371 exited_at:{seconds:1755049992 nanos:24258501}" Aug 13 01:53:12.065032 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a3810be179362ad2dab86a252bd15e6de4d66c1039ed48f58345b9e8caeac13-rootfs.mount: Deactivated successfully. 
Aug 13 01:53:12.668595 kubelet[2776]: E0813 01:53:12.668158 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 01:53:12.669541 kubelet[2776]: E0813 01:53:12.669524 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 01:53:12.672298 containerd[1568]: time="2025-08-13T01:53:12.672218812Z" level=info msg="CreateContainer within sandbox \"1705078478ab458176399fbecaeb74a4c44162ef7d0f854ae8b4908c40c3c154\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 13 01:53:12.734614 containerd[1568]: time="2025-08-13T01:53:12.729773928Z" level=info msg="Container 9b29225ae4627cba7616d0bad3f81bb5d423758d26fa52bf4b867beff5dc3562: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:53:12.743203 kubelet[2776]: I0813 01:53:12.742621 2776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-8ck97" podStartSLOduration=2.841005721 podStartE2EDuration="14.742592776s" podCreationTimestamp="2025-08-13 01:52:58 +0000 UTC" firstStartedPulling="2025-08-13 01:52:59.091594635 +0000 UTC m=+6.762449599" lastFinishedPulling="2025-08-13 01:53:10.99318169 +0000 UTC m=+18.664036654" observedRunningTime="2025-08-13 01:53:11.94568925 +0000 UTC m=+19.616544234" watchObservedRunningTime="2025-08-13 01:53:12.742592776 +0000 UTC m=+20.413447740" Aug 13 01:53:12.747892 containerd[1568]: time="2025-08-13T01:53:12.747819834Z" level=info msg="CreateContainer within sandbox \"1705078478ab458176399fbecaeb74a4c44162ef7d0f854ae8b4908c40c3c154\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9b29225ae4627cba7616d0bad3f81bb5d423758d26fa52bf4b867beff5dc3562\"" Aug 13 01:53:12.749751 containerd[1568]: 
time="2025-08-13T01:53:12.749697533Z" level=info msg="StartContainer for \"9b29225ae4627cba7616d0bad3f81bb5d423758d26fa52bf4b867beff5dc3562\"" Aug 13 01:53:12.753050 containerd[1568]: time="2025-08-13T01:53:12.752798115Z" level=info msg="connecting to shim 9b29225ae4627cba7616d0bad3f81bb5d423758d26fa52bf4b867beff5dc3562" address="unix:///run/containerd/s/c5e75c303f8ffbfc957afe9511b072584e3fb2a77e5c4da1d6fcb7961146cf56" protocol=ttrpc version=3 Aug 13 01:53:12.779112 kubelet[2776]: I0813 01:53:12.778818 2776 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:53:12.779112 kubelet[2776]: I0813 01:53:12.778875 2776 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:53:12.805528 kubelet[2776]: I0813 01:53:12.805464 2776 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:53:12.875081 systemd[1]: Started cri-containerd-9b29225ae4627cba7616d0bad3f81bb5d423758d26fa52bf4b867beff5dc3562.scope - libcontainer container 9b29225ae4627cba7616d0bad3f81bb5d423758d26fa52bf4b867beff5dc3562. 
Aug 13 01:53:12.949619 kubelet[2776]: I0813 01:53:12.949271 2776 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:53:12.949619 kubelet[2776]: I0813 01:53:12.949442 2776 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-vfxxm","kube-system/cilium-operator-5d85765b45-8ck97","kube-system/kube-controller-manager-172-236-112-106","kube-system/kube-proxy-n69hs","kube-system/kube-apiserver-172-236-112-106","kube-system/kube-scheduler-172-236-112-106"] Aug 13 01:53:12.950851 kubelet[2776]: E0813 01:53:12.950748 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-vfxxm" Aug 13 01:53:12.950851 kubelet[2776]: E0813 01:53:12.950786 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-8ck97" Aug 13 01:53:12.950851 kubelet[2776]: E0813 01:53:12.950797 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-112-106" Aug 13 01:53:12.950851 kubelet[2776]: E0813 01:53:12.950807 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-n69hs" Aug 13 01:53:12.950851 kubelet[2776]: E0813 01:53:12.950816 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-112-106" Aug 13 01:53:12.950851 kubelet[2776]: E0813 01:53:12.950825 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-112-106" Aug 13 01:53:12.950851 kubelet[2776]: I0813 01:53:12.950835 2776 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 01:53:13.004660 containerd[1568]: time="2025-08-13T01:53:13.004596080Z" level=info msg="StartContainer for \"9b29225ae4627cba7616d0bad3f81bb5d423758d26fa52bf4b867beff5dc3562\" 
returns successfully" Aug 13 01:53:13.251892 containerd[1568]: time="2025-08-13T01:53:13.251744063Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9b29225ae4627cba7616d0bad3f81bb5d423758d26fa52bf4b867beff5dc3562\" id:\"883710e8785fdc85c485f052b00b09e60c7b9004023989017e789d40f1b6e1c9\" pid:3440 exited_at:{seconds:1755049993 nanos:251106562}" Aug 13 01:53:13.333199 kubelet[2776]: I0813 01:53:13.333152 2776 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Aug 13 01:53:13.680995 kubelet[2776]: E0813 01:53:13.680935 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 01:53:14.683403 kubelet[2776]: E0813 01:53:14.683356 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 01:53:15.211643 systemd-networkd[1465]: cilium_host: Link UP Aug 13 01:53:15.211944 systemd-networkd[1465]: cilium_net: Link UP Aug 13 01:53:15.213433 systemd-networkd[1465]: cilium_host: Gained carrier Aug 13 01:53:15.214833 systemd-networkd[1465]: cilium_net: Gained carrier Aug 13 01:53:15.216571 systemd-networkd[1465]: cilium_net: Gained IPv6LL Aug 13 01:53:15.287690 systemd-networkd[1465]: cilium_host: Gained IPv6LL Aug 13 01:53:15.360077 systemd-networkd[1465]: cilium_vxlan: Link UP Aug 13 01:53:15.360091 systemd-networkd[1465]: cilium_vxlan: Gained carrier Aug 13 01:53:15.696054 kubelet[2776]: E0813 01:53:15.696003 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 01:53:15.788521 kernel: NET: Registered PF_ALG protocol family Aug 13 01:53:16.644691 systemd-networkd[1465]: lxc_health: Link UP Aug 13 
01:53:16.665966 systemd-networkd[1465]: lxc_health: Gained carrier Aug 13 01:53:17.127594 systemd-networkd[1465]: cilium_vxlan: Gained IPv6LL Aug 13 01:53:18.272370 kubelet[2776]: E0813 01:53:18.270991 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 01:53:18.294675 kubelet[2776]: I0813 01:53:18.294369 2776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vfxxm" podStartSLOduration=10.808727653 podStartE2EDuration="21.294349231s" podCreationTimestamp="2025-08-13 01:52:57 +0000 UTC" firstStartedPulling="2025-08-13 01:52:58.518184985 +0000 UTC m=+6.189039949" lastFinishedPulling="2025-08-13 01:53:09.003806553 +0000 UTC m=+16.674661527" observedRunningTime="2025-08-13 01:53:13.7023075 +0000 UTC m=+21.373162464" watchObservedRunningTime="2025-08-13 01:53:18.294349231 +0000 UTC m=+25.965204195" Aug 13 01:53:18.406771 systemd-networkd[1465]: lxc_health: Gained IPv6LL Aug 13 01:53:22.966607 kubelet[2776]: I0813 01:53:22.966546 2776 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:53:22.966607 kubelet[2776]: I0813 01:53:22.966620 2776 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:53:22.968908 kubelet[2776]: I0813 01:53:22.968325 2776 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:53:22.983359 kubelet[2776]: I0813 01:53:22.983324 2776 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:53:22.983586 kubelet[2776]: I0813 01:53:22.983462 2776 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" 
pods=["kube-system/cilium-operator-5d85765b45-8ck97","kube-system/cilium-vfxxm","kube-system/kube-controller-manager-172-236-112-106","kube-system/kube-proxy-n69hs","kube-system/kube-apiserver-172-236-112-106","kube-system/kube-scheduler-172-236-112-106"] Aug 13 01:53:22.983586 kubelet[2776]: E0813 01:53:22.983530 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-8ck97" Aug 13 01:53:22.983586 kubelet[2776]: E0813 01:53:22.983544 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-vfxxm" Aug 13 01:53:22.983586 kubelet[2776]: E0813 01:53:22.983554 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-112-106" Aug 13 01:53:22.983586 kubelet[2776]: E0813 01:53:22.983568 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-n69hs" Aug 13 01:53:22.983586 kubelet[2776]: E0813 01:53:22.983578 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-112-106" Aug 13 01:53:22.983586 kubelet[2776]: E0813 01:53:22.983589 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-112-106" Aug 13 01:53:22.983751 kubelet[2776]: I0813 01:53:22.983601 2776 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 01:53:25.794800 kubelet[2776]: I0813 01:53:25.794543 2776 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 01:53:25.798636 kubelet[2776]: E0813 01:53:25.798598 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 01:53:26.722264 kubelet[2776]: E0813 01:53:26.722219 2776 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 01:53:33.003044 kubelet[2776]: I0813 01:53:33.002891 2776 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:53:33.003044 kubelet[2776]: I0813 01:53:33.002955 2776 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:53:33.007086 kubelet[2776]: I0813 01:53:33.007030 2776 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:53:33.022834 kubelet[2776]: I0813 01:53:33.022733 2776 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:53:33.023042 kubelet[2776]: I0813 01:53:33.022952 2776 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-8ck97","kube-system/cilium-vfxxm","kube-system/kube-controller-manager-172-236-112-106","kube-system/kube-proxy-n69hs","kube-system/kube-apiserver-172-236-112-106","kube-system/kube-scheduler-172-236-112-106"] Aug 13 01:53:33.023042 kubelet[2776]: E0813 01:53:33.023012 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-8ck97" Aug 13 01:53:33.023042 kubelet[2776]: E0813 01:53:33.023033 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-vfxxm" Aug 13 01:53:33.023126 kubelet[2776]: E0813 01:53:33.023049 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-112-106" Aug 13 01:53:33.023126 kubelet[2776]: E0813 01:53:33.023064 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-n69hs" Aug 13 01:53:33.023126 kubelet[2776]: E0813 01:53:33.023078 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical 
pod" pod="kube-system/kube-apiserver-172-236-112-106" Aug 13 01:53:33.023126 kubelet[2776]: E0813 01:53:33.023093 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-112-106" Aug 13 01:53:33.023126 kubelet[2776]: I0813 01:53:33.023110 2776 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 01:53:43.039245 kubelet[2776]: I0813 01:53:43.038801 2776 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:53:43.039245 kubelet[2776]: I0813 01:53:43.038846 2776 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:53:43.041159 kubelet[2776]: I0813 01:53:43.041102 2776 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:53:43.052243 kubelet[2776]: I0813 01:53:43.052220 2776 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:53:43.052342 kubelet[2776]: I0813 01:53:43.052326 2776 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-8ck97","kube-system/cilium-vfxxm","kube-system/kube-controller-manager-172-236-112-106","kube-system/kube-proxy-n69hs","kube-system/kube-apiserver-172-236-112-106","kube-system/kube-scheduler-172-236-112-106"] Aug 13 01:53:43.052431 kubelet[2776]: E0813 01:53:43.052368 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-8ck97" Aug 13 01:53:43.052431 kubelet[2776]: E0813 01:53:43.052381 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-vfxxm" Aug 13 01:53:43.052431 kubelet[2776]: E0813 01:53:43.052393 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-112-106" Aug 13 01:53:43.052431 kubelet[2776]: E0813 01:53:43.052406 2776 
eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-n69hs" Aug 13 01:53:43.052431 kubelet[2776]: E0813 01:53:43.052420 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-112-106" Aug 13 01:53:43.052431 kubelet[2776]: E0813 01:53:43.052433 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-112-106" Aug 13 01:53:43.052594 kubelet[2776]: I0813 01:53:43.052447 2776 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 01:53:53.067949 kubelet[2776]: I0813 01:53:53.067909 2776 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:53:53.067949 kubelet[2776]: I0813 01:53:53.067954 2776 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:53:53.069080 kubelet[2776]: I0813 01:53:53.068079 2776 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-8ck97","kube-system/cilium-vfxxm","kube-system/kube-controller-manager-172-236-112-106","kube-system/kube-proxy-n69hs","kube-system/kube-apiserver-172-236-112-106","kube-system/kube-scheduler-172-236-112-106"] Aug 13 01:53:53.069080 kubelet[2776]: E0813 01:53:53.068117 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-8ck97" Aug 13 01:53:53.069080 kubelet[2776]: E0813 01:53:53.068129 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-vfxxm" Aug 13 01:53:53.069080 kubelet[2776]: E0813 01:53:53.068139 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-112-106" Aug 13 01:53:53.069080 kubelet[2776]: E0813 01:53:53.068147 2776 
eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-n69hs" Aug 13 01:53:53.069080 kubelet[2776]: E0813 01:53:53.068158 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-112-106" Aug 13 01:53:53.069080 kubelet[2776]: E0813 01:53:53.068167 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-112-106" Aug 13 01:53:53.069080 kubelet[2776]: I0813 01:53:53.068178 2776 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 01:54:03.084406 kubelet[2776]: I0813 01:54:03.084355 2776 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:54:03.084406 kubelet[2776]: I0813 01:54:03.084400 2776 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:54:03.087092 kubelet[2776]: I0813 01:54:03.087071 2776 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:54:03.098608 kubelet[2776]: I0813 01:54:03.098463 2776 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:54:03.098795 kubelet[2776]: I0813 01:54:03.098684 2776 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-8ck97","kube-system/cilium-vfxxm","kube-system/kube-controller-manager-172-236-112-106","kube-system/kube-proxy-n69hs","kube-system/kube-apiserver-172-236-112-106","kube-system/kube-scheduler-172-236-112-106"] Aug 13 01:54:03.098795 kubelet[2776]: E0813 01:54:03.098736 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-8ck97" Aug 13 01:54:03.098795 kubelet[2776]: E0813 01:54:03.098755 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-vfxxm" Aug 13 01:54:03.098795 
kubelet[2776]: E0813 01:54:03.098772 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-112-106" Aug 13 01:54:03.098795 kubelet[2776]: E0813 01:54:03.098788 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-n69hs" Aug 13 01:54:03.098795 kubelet[2776]: E0813 01:54:03.098799 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-112-106" Aug 13 01:54:03.098993 kubelet[2776]: E0813 01:54:03.098807 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-112-106" Aug 13 01:54:03.098993 kubelet[2776]: I0813 01:54:03.098819 2776 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 01:54:06.486881 kubelet[2776]: E0813 01:54:06.485606 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 01:54:12.486503 kubelet[2776]: E0813 01:54:12.486256 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 01:54:13.114166 kubelet[2776]: I0813 01:54:13.114125 2776 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:54:13.114166 kubelet[2776]: I0813 01:54:13.114170 2776 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:54:13.116751 kubelet[2776]: I0813 01:54:13.116696 2776 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:54:13.129435 kubelet[2776]: I0813 01:54:13.129395 2776 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:54:13.129634 
kubelet[2776]: I0813 01:54:13.129534 2776 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-8ck97","kube-system/cilium-vfxxm","kube-system/kube-controller-manager-172-236-112-106","kube-system/kube-proxy-n69hs","kube-system/kube-apiserver-172-236-112-106","kube-system/kube-scheduler-172-236-112-106"] Aug 13 01:54:13.129634 kubelet[2776]: E0813 01:54:13.129581 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-8ck97" Aug 13 01:54:13.129634 kubelet[2776]: E0813 01:54:13.129593 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-vfxxm" Aug 13 01:54:13.129634 kubelet[2776]: E0813 01:54:13.129602 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-112-106" Aug 13 01:54:13.129634 kubelet[2776]: E0813 01:54:13.129612 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-n69hs" Aug 13 01:54:13.129634 kubelet[2776]: E0813 01:54:13.129621 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-112-106" Aug 13 01:54:13.129634 kubelet[2776]: E0813 01:54:13.129629 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-112-106" Aug 13 01:54:13.129804 kubelet[2776]: I0813 01:54:13.129639 2776 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 01:54:22.486522 kubelet[2776]: E0813 01:54:22.486345 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 01:54:23.146211 kubelet[2776]: I0813 01:54:23.146172 2776 eviction_manager.go:369] "Eviction manager: attempting to 
reclaim" resourceName="ephemeral-storage" Aug 13 01:54:23.146567 kubelet[2776]: I0813 01:54:23.146439 2776 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:54:23.148660 kubelet[2776]: I0813 01:54:23.148591 2776 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:54:23.161212 kubelet[2776]: I0813 01:54:23.161181 2776 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:54:23.161423 kubelet[2776]: I0813 01:54:23.161275 2776 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-8ck97","kube-system/cilium-vfxxm","kube-system/kube-controller-manager-172-236-112-106","kube-system/kube-proxy-n69hs","kube-system/kube-apiserver-172-236-112-106","kube-system/kube-scheduler-172-236-112-106"] Aug 13 01:54:23.161423 kubelet[2776]: E0813 01:54:23.161323 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-8ck97" Aug 13 01:54:23.161423 kubelet[2776]: E0813 01:54:23.161344 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-vfxxm" Aug 13 01:54:23.161423 kubelet[2776]: E0813 01:54:23.161359 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-112-106" Aug 13 01:54:23.161423 kubelet[2776]: E0813 01:54:23.161376 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-n69hs" Aug 13 01:54:23.161423 kubelet[2776]: E0813 01:54:23.161392 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-112-106" Aug 13 01:54:23.161423 kubelet[2776]: E0813 01:54:23.161405 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-112-106" Aug 13 01:54:23.161423 kubelet[2776]: 
I0813 01:54:23.161415 2776 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 01:54:31.486123 kubelet[2776]: E0813 01:54:31.486031 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 01:54:33.177228 kubelet[2776]: I0813 01:54:33.177175 2776 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:54:33.177228 kubelet[2776]: I0813 01:54:33.177220 2776 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:54:33.180901 kubelet[2776]: I0813 01:54:33.180877 2776 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:54:33.193930 kubelet[2776]: I0813 01:54:33.193892 2776 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:54:33.194091 kubelet[2776]: I0813 01:54:33.194067 2776 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-8ck97","kube-system/cilium-vfxxm","kube-system/kube-controller-manager-172-236-112-106","kube-system/kube-proxy-n69hs","kube-system/kube-apiserver-172-236-112-106","kube-system/kube-scheduler-172-236-112-106"] Aug 13 01:54:33.194151 kubelet[2776]: E0813 01:54:33.194122 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-8ck97" Aug 13 01:54:33.194182 kubelet[2776]: E0813 01:54:33.194150 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-vfxxm" Aug 13 01:54:33.194182 kubelet[2776]: E0813 01:54:33.194167 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-112-106" Aug 13 01:54:33.194182 kubelet[2776]: E0813 01:54:33.194180 2776 eviction_manager.go:598] "Eviction 
manager: cannot evict a critical pod" pod="kube-system/kube-proxy-n69hs" Aug 13 01:54:33.194275 kubelet[2776]: E0813 01:54:33.194192 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-112-106" Aug 13 01:54:33.194275 kubelet[2776]: E0813 01:54:33.194203 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-112-106" Aug 13 01:54:33.194275 kubelet[2776]: I0813 01:54:33.194216 2776 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 01:54:35.486177 kubelet[2776]: E0813 01:54:35.486117 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 01:54:43.210017 kubelet[2776]: I0813 01:54:43.209978 2776 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:54:43.210017 kubelet[2776]: I0813 01:54:43.210021 2776 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:54:43.213099 kubelet[2776]: I0813 01:54:43.213058 2776 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:54:43.224050 kubelet[2776]: I0813 01:54:43.224029 2776 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:54:43.224160 kubelet[2776]: I0813 01:54:43.224145 2776 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-8ck97","kube-system/cilium-vfxxm","kube-system/kube-controller-manager-172-236-112-106","kube-system/kube-proxy-n69hs","kube-system/kube-apiserver-172-236-112-106","kube-system/kube-scheduler-172-236-112-106"] Aug 13 01:54:43.224193 kubelet[2776]: E0813 01:54:43.224184 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" 
pod="kube-system/cilium-operator-5d85765b45-8ck97" Aug 13 01:54:43.224220 kubelet[2776]: E0813 01:54:43.224198 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-vfxxm" Aug 13 01:54:43.224220 kubelet[2776]: E0813 01:54:43.224208 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-112-106" Aug 13 01:54:43.224220 kubelet[2776]: E0813 01:54:43.224217 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-n69hs" Aug 13 01:54:43.224290 kubelet[2776]: E0813 01:54:43.224225 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-112-106" Aug 13 01:54:43.224290 kubelet[2776]: E0813 01:54:43.224234 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-112-106" Aug 13 01:54:43.224290 kubelet[2776]: I0813 01:54:43.224243 2776 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 01:54:47.485958 kubelet[2776]: E0813 01:54:47.485691 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 01:54:53.289669 kubelet[2776]: I0813 01:54:53.282546 2776 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:54:53.289669 kubelet[2776]: I0813 01:54:53.282626 2776 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:54:53.289669 kubelet[2776]: I0813 01:54:53.287492 2776 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:54:53.293393 kubelet[2776]: I0813 01:54:53.293339 2776 image_gc_manager.go:487] "Removing image to free bytes" imageID="sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" size=18562039 
runtimeHandler="" Aug 13 01:54:53.294202 containerd[1568]: time="2025-08-13T01:54:53.294142524Z" level=info msg="RemoveImage \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 01:54:53.309741 containerd[1568]: time="2025-08-13T01:54:53.309321139Z" level=info msg="ImageDelete event name:\"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 01:54:53.315788 containerd[1568]: time="2025-08-13T01:54:53.315113636Z" level=info msg="ImageDelete event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\"" Aug 13 01:54:53.317216 containerd[1568]: time="2025-08-13T01:54:53.317102903Z" level=info msg="RemoveImage \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" returns successfully" Aug 13 01:54:53.317632 containerd[1568]: time="2025-08-13T01:54:53.317564334Z" level=info msg="ImageDelete event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 01:54:53.318597 kubelet[2776]: I0813 01:54:53.318107 2776 image_gc_manager.go:487] "Removing image to free bytes" imageID="sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" size=56909194 runtimeHandler="" Aug 13 01:54:53.319052 containerd[1568]: time="2025-08-13T01:54:53.319005514Z" level=info msg="RemoveImage \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Aug 13 01:54:53.326733 containerd[1568]: time="2025-08-13T01:54:53.326656785Z" level=info msg="ImageDelete event name:\"registry.k8s.io/etcd:3.5.15-0\"" Aug 13 01:54:53.328856 containerd[1568]: time="2025-08-13T01:54:53.328778725Z" level=info msg="ImageDelete event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\"" Aug 13 01:54:53.330877 containerd[1568]: time="2025-08-13T01:54:53.330756872Z" level=info msg="RemoveImage \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" returns successfully" Aug 13 
01:54:53.331311 containerd[1568]: time="2025-08-13T01:54:53.331072929Z" level=info msg="ImageDelete event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Aug 13 01:54:53.347329 kubelet[2776]: I0813 01:54:53.347280 2776 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:54:53.347980 kubelet[2776]: I0813 01:54:53.347739 2776 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-8ck97","kube-system/cilium-vfxxm","kube-system/kube-controller-manager-172-236-112-106","kube-system/kube-proxy-n69hs","kube-system/kube-apiserver-172-236-112-106","kube-system/kube-scheduler-172-236-112-106"] Aug 13 01:54:53.347980 kubelet[2776]: E0813 01:54:53.347831 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-8ck97" Aug 13 01:54:53.347980 kubelet[2776]: E0813 01:54:53.347863 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-vfxxm" Aug 13 01:54:53.347980 kubelet[2776]: E0813 01:54:53.347893 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-112-106" Aug 13 01:54:53.347980 kubelet[2776]: E0813 01:54:53.347912 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-n69hs" Aug 13 01:54:53.347980 kubelet[2776]: E0813 01:54:53.347928 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-112-106" Aug 13 01:54:53.347980 kubelet[2776]: E0813 01:54:53.347941 2776 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-112-106" Aug 13 01:54:53.347980 kubelet[2776]: I0813 01:54:53.347956 2776 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the 
node" Aug 13 01:55:12.206754 systemd[1]: Started sshd@9-172.236.112.106:22-147.75.109.163:50610.service - OpenSSH per-connection server daemon (147.75.109.163:50610). Aug 13 01:55:12.564629 sshd[3879]: Accepted publickey for core from 147.75.109.163 port 50610 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:55:12.565667 sshd-session[3879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:55:12.576310 systemd-logind[1525]: New session 10 of user core. Aug 13 01:55:12.581702 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 13 01:55:12.886954 sshd[3881]: Connection closed by 147.75.109.163 port 50610 Aug 13 01:55:12.888767 sshd-session[3879]: pam_unix(sshd:session): session closed for user core Aug 13 01:55:12.894321 systemd-logind[1525]: Session 10 logged out. Waiting for processes to exit. Aug 13 01:55:12.895096 systemd[1]: sshd@9-172.236.112.106:22-147.75.109.163:50610.service: Deactivated successfully. Aug 13 01:55:12.899071 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 01:55:12.901316 systemd-logind[1525]: Removed session 10. Aug 13 01:55:17.486411 kubelet[2776]: E0813 01:55:17.486349 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 01:55:17.954537 systemd[1]: Started sshd@10-172.236.112.106:22-147.75.109.163:50612.service - OpenSSH per-connection server daemon (147.75.109.163:50612). Aug 13 01:55:18.306260 sshd[3894]: Accepted publickey for core from 147.75.109.163 port 50612 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:55:18.308119 sshd-session[3894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:55:18.313402 systemd-logind[1525]: New session 11 of user core. Aug 13 01:55:18.325801 systemd[1]: Started session-11.scope - Session 11 of User core. 
Aug 13 01:55:18.629417 sshd[3896]: Connection closed by 147.75.109.163 port 50612 Aug 13 01:55:18.630195 sshd-session[3894]: pam_unix(sshd:session): session closed for user core Aug 13 01:55:18.634192 systemd[1]: sshd@10-172.236.112.106:22-147.75.109.163:50612.service: Deactivated successfully. Aug 13 01:55:18.636683 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 01:55:18.638141 systemd-logind[1525]: Session 11 logged out. Waiting for processes to exit. Aug 13 01:55:18.640972 systemd-logind[1525]: Removed session 11. Aug 13 01:55:23.698217 systemd[1]: Started sshd@11-172.236.112.106:22-147.75.109.163:58296.service - OpenSSH per-connection server daemon (147.75.109.163:58296). Aug 13 01:55:24.050168 sshd[3909]: Accepted publickey for core from 147.75.109.163 port 58296 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:55:24.051857 sshd-session[3909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:55:24.057198 systemd-logind[1525]: New session 12 of user core. Aug 13 01:55:24.061686 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 13 01:55:24.369930 sshd[3911]: Connection closed by 147.75.109.163 port 58296 Aug 13 01:55:24.370896 sshd-session[3909]: pam_unix(sshd:session): session closed for user core Aug 13 01:55:24.376231 systemd-logind[1525]: Session 12 logged out. Waiting for processes to exit. Aug 13 01:55:24.377878 systemd[1]: sshd@11-172.236.112.106:22-147.75.109.163:58296.service: Deactivated successfully. Aug 13 01:55:24.381159 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 01:55:24.383442 systemd-logind[1525]: Removed session 12. 
Aug 13 01:55:24.486536 kubelet[2776]: E0813 01:55:24.485287 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 01:55:27.486248 kubelet[2776]: E0813 01:55:27.486189 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 01:55:29.432506 systemd[1]: Started sshd@12-172.236.112.106:22-147.75.109.163:51874.service - OpenSSH per-connection server daemon (147.75.109.163:51874). Aug 13 01:55:29.790427 sshd[3926]: Accepted publickey for core from 147.75.109.163 port 51874 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:55:29.792674 sshd-session[3926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:55:29.798487 systemd-logind[1525]: New session 13 of user core. Aug 13 01:55:29.803645 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 13 01:55:30.120188 sshd[3928]: Connection closed by 147.75.109.163 port 51874 Aug 13 01:55:30.121079 sshd-session[3926]: pam_unix(sshd:session): session closed for user core Aug 13 01:55:30.125276 systemd[1]: sshd@12-172.236.112.106:22-147.75.109.163:51874.service: Deactivated successfully. Aug 13 01:55:30.128183 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 01:55:30.130178 systemd-logind[1525]: Session 13 logged out. Waiting for processes to exit. Aug 13 01:55:30.132095 systemd-logind[1525]: Removed session 13. Aug 13 01:55:30.187449 systemd[1]: Started sshd@13-172.236.112.106:22-147.75.109.163:51878.service - OpenSSH per-connection server daemon (147.75.109.163:51878). 
Aug 13 01:55:30.543647 sshd[3941]: Accepted publickey for core from 147.75.109.163 port 51878 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:55:30.544791 sshd-session[3941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:55:30.551834 systemd-logind[1525]: New session 14 of user core. Aug 13 01:55:30.558729 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 13 01:55:30.879217 sshd[3943]: Connection closed by 147.75.109.163 port 51878 Aug 13 01:55:30.879953 sshd-session[3941]: pam_unix(sshd:session): session closed for user core Aug 13 01:55:30.884822 systemd-logind[1525]: Session 14 logged out. Waiting for processes to exit. Aug 13 01:55:30.885060 systemd[1]: sshd@13-172.236.112.106:22-147.75.109.163:51878.service: Deactivated successfully. Aug 13 01:55:30.887447 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 01:55:30.889787 systemd-logind[1525]: Removed session 14. Aug 13 01:55:30.940701 systemd[1]: Started sshd@14-172.236.112.106:22-147.75.109.163:51884.service - OpenSSH per-connection server daemon (147.75.109.163:51884). Aug 13 01:55:31.288323 sshd[3953]: Accepted publickey for core from 147.75.109.163 port 51884 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:55:31.290653 sshd-session[3953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:55:31.298141 systemd-logind[1525]: New session 15 of user core. Aug 13 01:55:31.305711 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 13 01:55:31.599061 sshd[3955]: Connection closed by 147.75.109.163 port 51884 Aug 13 01:55:31.599977 sshd-session[3953]: pam_unix(sshd:session): session closed for user core Aug 13 01:55:31.605107 systemd-logind[1525]: Session 15 logged out. Waiting for processes to exit. Aug 13 01:55:31.606048 systemd[1]: sshd@14-172.236.112.106:22-147.75.109.163:51884.service: Deactivated successfully. 
Aug 13 01:55:31.608675 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 01:55:31.610726 systemd-logind[1525]: Removed session 15. Aug 13 01:55:32.486551 kubelet[2776]: E0813 01:55:32.486162 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 01:55:36.661578 systemd[1]: Started sshd@15-172.236.112.106:22-147.75.109.163:51892.service - OpenSSH per-connection server daemon (147.75.109.163:51892). Aug 13 01:55:37.008324 sshd[3966]: Accepted publickey for core from 147.75.109.163 port 51892 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:55:37.010176 sshd-session[3966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:55:37.016044 systemd-logind[1525]: New session 16 of user core. Aug 13 01:55:37.019695 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 13 01:55:37.318882 sshd[3968]: Connection closed by 147.75.109.163 port 51892 Aug 13 01:55:37.319573 sshd-session[3966]: pam_unix(sshd:session): session closed for user core Aug 13 01:55:37.324361 systemd-logind[1525]: Session 16 logged out. Waiting for processes to exit. Aug 13 01:55:37.325195 systemd[1]: sshd@15-172.236.112.106:22-147.75.109.163:51892.service: Deactivated successfully. Aug 13 01:55:37.327999 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 01:55:37.329860 systemd-logind[1525]: Removed session 16. Aug 13 01:55:42.386088 systemd[1]: Started sshd@16-172.236.112.106:22-147.75.109.163:48624.service - OpenSSH per-connection server daemon (147.75.109.163:48624). 
Aug 13 01:55:42.740337 sshd[3980]: Accepted publickey for core from 147.75.109.163 port 48624 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:55:42.742454 sshd-session[3980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:55:42.749398 systemd-logind[1525]: New session 17 of user core. Aug 13 01:55:42.752620 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 13 01:55:43.053543 sshd[3982]: Connection closed by 147.75.109.163 port 48624 Aug 13 01:55:43.054740 sshd-session[3980]: pam_unix(sshd:session): session closed for user core Aug 13 01:55:43.059710 systemd[1]: sshd@16-172.236.112.106:22-147.75.109.163:48624.service: Deactivated successfully. Aug 13 01:55:43.063525 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 01:55:43.065456 systemd-logind[1525]: Session 17 logged out. Waiting for processes to exit. Aug 13 01:55:43.068744 systemd-logind[1525]: Removed session 17. Aug 13 01:55:46.487017 kubelet[2776]: E0813 01:55:46.485758 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 01:55:48.123059 systemd[1]: Started sshd@17-172.236.112.106:22-147.75.109.163:60398.service - OpenSSH per-connection server daemon (147.75.109.163:60398). Aug 13 01:55:48.484000 sshd[4004]: Accepted publickey for core from 147.75.109.163 port 60398 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:55:48.487018 sshd-session[4004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:55:48.494730 systemd-logind[1525]: New session 18 of user core. Aug 13 01:55:48.501658 systemd[1]: Started session-18.scope - Session 18 of User core. 
Aug 13 01:55:48.796286 sshd[4006]: Connection closed by 147.75.109.163 port 60398 Aug 13 01:55:48.798739 sshd-session[4004]: pam_unix(sshd:session): session closed for user core Aug 13 01:55:48.804084 systemd[1]: sshd@17-172.236.112.106:22-147.75.109.163:60398.service: Deactivated successfully. Aug 13 01:55:48.805993 systemd[1]: session-18.scope: Deactivated successfully. Aug 13 01:55:48.807301 systemd-logind[1525]: Session 18 logged out. Waiting for processes to exit. Aug 13 01:55:48.809313 systemd-logind[1525]: Removed session 18. Aug 13 01:55:48.865332 systemd[1]: Started sshd@18-172.236.112.106:22-147.75.109.163:60410.service - OpenSSH per-connection server daemon (147.75.109.163:60410). Aug 13 01:55:49.201993 sshd[4018]: Accepted publickey for core from 147.75.109.163 port 60410 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:55:49.204126 sshd-session[4018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:55:49.211813 systemd-logind[1525]: New session 19 of user core. Aug 13 01:55:49.214693 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 13 01:55:49.544544 sshd[4020]: Connection closed by 147.75.109.163 port 60410 Aug 13 01:55:49.545636 sshd-session[4018]: pam_unix(sshd:session): session closed for user core Aug 13 01:55:49.551175 systemd[1]: sshd@18-172.236.112.106:22-147.75.109.163:60410.service: Deactivated successfully. Aug 13 01:55:49.554716 systemd[1]: session-19.scope: Deactivated successfully. Aug 13 01:55:49.556064 systemd-logind[1525]: Session 19 logged out. Waiting for processes to exit. Aug 13 01:55:49.557994 systemd-logind[1525]: Removed session 19. Aug 13 01:55:49.609774 systemd[1]: Started sshd@19-172.236.112.106:22-147.75.109.163:60424.service - OpenSSH per-connection server daemon (147.75.109.163:60424). 
Aug 13 01:55:49.970754 sshd[4030]: Accepted publickey for core from 147.75.109.163 port 60424 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:55:49.972826 sshd-session[4030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:55:49.980527 systemd-logind[1525]: New session 20 of user core. Aug 13 01:55:49.989661 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 13 01:55:51.467914 sshd[4032]: Connection closed by 147.75.109.163 port 60424 Aug 13 01:55:51.468851 sshd-session[4030]: pam_unix(sshd:session): session closed for user core Aug 13 01:55:51.474655 systemd[1]: sshd@19-172.236.112.106:22-147.75.109.163:60424.service: Deactivated successfully. Aug 13 01:55:51.477565 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 01:55:51.477849 systemd[1]: session-20.scope: Consumed 521ms CPU time, 64.9M memory peak. Aug 13 01:55:51.478868 systemd-logind[1525]: Session 20 logged out. Waiting for processes to exit. Aug 13 01:55:51.481333 systemd-logind[1525]: Removed session 20. Aug 13 01:55:51.527047 systemd[1]: Started sshd@20-172.236.112.106:22-147.75.109.163:60436.service - OpenSSH per-connection server daemon (147.75.109.163:60436). Aug 13 01:55:51.885346 sshd[4049]: Accepted publickey for core from 147.75.109.163 port 60436 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:55:51.887349 sshd-session[4049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:55:51.894968 systemd-logind[1525]: New session 21 of user core. Aug 13 01:55:51.901046 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 13 01:55:52.321296 sshd[4051]: Connection closed by 147.75.109.163 port 60436 Aug 13 01:55:52.322061 sshd-session[4049]: pam_unix(sshd:session): session closed for user core Aug 13 01:55:52.328308 systemd[1]: sshd@20-172.236.112.106:22-147.75.109.163:60436.service: Deactivated successfully. 
Aug 13 01:55:52.331877 systemd[1]: session-21.scope: Deactivated successfully. Aug 13 01:55:52.334276 systemd-logind[1525]: Session 21 logged out. Waiting for processes to exit. Aug 13 01:55:52.336200 systemd-logind[1525]: Removed session 21. Aug 13 01:55:52.383173 systemd[1]: Started sshd@21-172.236.112.106:22-147.75.109.163:60448.service - OpenSSH per-connection server daemon (147.75.109.163:60448). Aug 13 01:55:52.487002 kubelet[2776]: E0813 01:55:52.486616 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 01:55:52.735454 sshd[4061]: Accepted publickey for core from 147.75.109.163 port 60448 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:55:52.737521 sshd-session[4061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:55:52.742842 systemd-logind[1525]: New session 22 of user core. Aug 13 01:55:52.748681 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 13 01:55:53.037878 sshd[4065]: Connection closed by 147.75.109.163 port 60448 Aug 13 01:55:53.039081 sshd-session[4061]: pam_unix(sshd:session): session closed for user core Aug 13 01:55:53.045062 systemd-logind[1525]: Session 22 logged out. Waiting for processes to exit. Aug 13 01:55:53.046116 systemd[1]: sshd@21-172.236.112.106:22-147.75.109.163:60448.service: Deactivated successfully. Aug 13 01:55:53.048998 systemd[1]: session-22.scope: Deactivated successfully. Aug 13 01:55:53.050785 systemd-logind[1525]: Removed session 22. Aug 13 01:55:58.106040 systemd[1]: Started sshd@22-172.236.112.106:22-147.75.109.163:34790.service - OpenSSH per-connection server daemon (147.75.109.163:34790). 
Aug 13 01:55:58.451583 sshd[4077]: Accepted publickey for core from 147.75.109.163 port 34790 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:55:58.453427 sshd-session[4077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:55:58.460140 systemd-logind[1525]: New session 23 of user core. Aug 13 01:55:58.471683 systemd[1]: Started session-23.scope - Session 23 of User core. Aug 13 01:55:58.757300 sshd[4079]: Connection closed by 147.75.109.163 port 34790 Aug 13 01:55:58.758493 sshd-session[4077]: pam_unix(sshd:session): session closed for user core Aug 13 01:55:58.763330 systemd[1]: sshd@22-172.236.112.106:22-147.75.109.163:34790.service: Deactivated successfully. Aug 13 01:55:58.765302 systemd[1]: session-23.scope: Deactivated successfully. Aug 13 01:55:58.766656 systemd-logind[1525]: Session 23 logged out. Waiting for processes to exit. Aug 13 01:55:58.768585 systemd-logind[1525]: Removed session 23. Aug 13 01:56:03.825648 systemd[1]: Started sshd@23-172.236.112.106:22-147.75.109.163:34792.service - OpenSSH per-connection server daemon (147.75.109.163:34792). Aug 13 01:56:04.172966 sshd[4104]: Accepted publickey for core from 147.75.109.163 port 34792 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:56:04.174983 sshd-session[4104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:56:04.181957 systemd-logind[1525]: New session 24 of user core. Aug 13 01:56:04.190713 systemd[1]: Started session-24.scope - Session 24 of User core. Aug 13 01:56:04.486754 sshd[4106]: Connection closed by 147.75.109.163 port 34792 Aug 13 01:56:04.487127 sshd-session[4104]: pam_unix(sshd:session): session closed for user core Aug 13 01:56:04.494725 systemd-logind[1525]: Session 24 logged out. Waiting for processes to exit. Aug 13 01:56:04.495417 systemd[1]: sshd@23-172.236.112.106:22-147.75.109.163:34792.service: Deactivated successfully. 
Aug 13 01:56:04.501318 systemd[1]: session-24.scope: Deactivated successfully. Aug 13 01:56:04.505233 systemd-logind[1525]: Removed session 24. Aug 13 01:56:09.551175 systemd[1]: Started sshd@24-172.236.112.106:22-147.75.109.163:38648.service - OpenSSH per-connection server daemon (147.75.109.163:38648). Aug 13 01:56:09.911464 sshd[4118]: Accepted publickey for core from 147.75.109.163 port 38648 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:56:09.913743 sshd-session[4118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:56:09.919821 systemd-logind[1525]: New session 25 of user core. Aug 13 01:56:09.923685 systemd[1]: Started session-25.scope - Session 25 of User core. Aug 13 01:56:10.225902 sshd[4120]: Connection closed by 147.75.109.163 port 38648 Aug 13 01:56:10.226841 sshd-session[4118]: pam_unix(sshd:session): session closed for user core Aug 13 01:56:10.231646 systemd[1]: sshd@24-172.236.112.106:22-147.75.109.163:38648.service: Deactivated successfully. Aug 13 01:56:10.234914 systemd[1]: session-25.scope: Deactivated successfully. Aug 13 01:56:10.236252 systemd-logind[1525]: Session 25 logged out. Waiting for processes to exit. Aug 13 01:56:10.238226 systemd-logind[1525]: Removed session 25. Aug 13 01:56:15.296697 systemd[1]: Started sshd@25-172.236.112.106:22-147.75.109.163:38654.service - OpenSSH per-connection server daemon (147.75.109.163:38654). Aug 13 01:56:15.650126 sshd[4132]: Accepted publickey for core from 147.75.109.163 port 38654 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:56:15.652415 sshd-session[4132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:56:15.660000 systemd-logind[1525]: New session 26 of user core. Aug 13 01:56:15.667818 systemd[1]: Started session-26.scope - Session 26 of User core. 
Aug 13 01:56:15.953914 sshd[4134]: Connection closed by 147.75.109.163 port 38654
Aug 13 01:56:15.954833 sshd-session[4132]: pam_unix(sshd:session): session closed for user core
Aug 13 01:56:15.959626 systemd-logind[1525]: Session 26 logged out. Waiting for processes to exit.
Aug 13 01:56:15.960316 systemd[1]: sshd@25-172.236.112.106:22-147.75.109.163:38654.service: Deactivated successfully.
Aug 13 01:56:15.962215 systemd[1]: session-26.scope: Deactivated successfully.
Aug 13 01:56:15.964740 systemd-logind[1525]: Removed session 26.
Aug 13 01:56:21.023926 systemd[1]: Started sshd@26-172.236.112.106:22-147.75.109.163:33556.service - OpenSSH per-connection server daemon (147.75.109.163:33556).
Aug 13 01:56:21.383372 sshd[4146]: Accepted publickey for core from 147.75.109.163 port 33556 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao
Aug 13 01:56:21.385221 sshd-session[4146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:56:21.391521 systemd-logind[1525]: New session 27 of user core.
Aug 13 01:56:21.396644 systemd[1]: Started session-27.scope - Session 27 of User core.
Aug 13 01:56:21.689145 sshd[4148]: Connection closed by 147.75.109.163 port 33556
Aug 13 01:56:21.690002 sshd-session[4146]: pam_unix(sshd:session): session closed for user core
Aug 13 01:56:21.695354 systemd[1]: sshd@26-172.236.112.106:22-147.75.109.163:33556.service: Deactivated successfully.
Aug 13 01:56:21.698318 systemd[1]: session-27.scope: Deactivated successfully.
Aug 13 01:56:21.699524 systemd-logind[1525]: Session 27 logged out. Waiting for processes to exit.
Aug 13 01:56:21.701902 systemd-logind[1525]: Removed session 27.
Aug 13 01:56:26.752938 systemd[1]: Started sshd@27-172.236.112.106:22-147.75.109.163:33562.service - OpenSSH per-connection server daemon (147.75.109.163:33562).
Aug 13 01:56:27.099755 sshd[4160]: Accepted publickey for core from 147.75.109.163 port 33562 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao
Aug 13 01:56:27.101943 sshd-session[4160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:56:27.107813 systemd-logind[1525]: New session 28 of user core.
Aug 13 01:56:27.113649 systemd[1]: Started session-28.scope - Session 28 of User core.
Aug 13 01:56:27.417530 sshd[4163]: Connection closed by 147.75.109.163 port 33562
Aug 13 01:56:27.418268 sshd-session[4160]: pam_unix(sshd:session): session closed for user core
Aug 13 01:56:27.424415 systemd-logind[1525]: Session 28 logged out. Waiting for processes to exit.
Aug 13 01:56:27.425377 systemd[1]: sshd@27-172.236.112.106:22-147.75.109.163:33562.service: Deactivated successfully.
Aug 13 01:56:27.427792 systemd[1]: session-28.scope: Deactivated successfully.
Aug 13 01:56:27.429289 systemd-logind[1525]: Removed session 28.
Aug 13 01:56:32.486007 systemd[1]: Started sshd@28-172.236.112.106:22-147.75.109.163:47492.service - OpenSSH per-connection server daemon (147.75.109.163:47492).
Aug 13 01:56:32.839843 sshd[4177]: Accepted publickey for core from 147.75.109.163 port 47492 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao
Aug 13 01:56:32.842153 sshd-session[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:56:32.849372 systemd-logind[1525]: New session 29 of user core.
Aug 13 01:56:32.856947 systemd[1]: Started session-29.scope - Session 29 of User core.
Aug 13 01:56:33.149232 sshd[4180]: Connection closed by 147.75.109.163 port 47492
Aug 13 01:56:33.149760 sshd-session[4177]: pam_unix(sshd:session): session closed for user core
Aug 13 01:56:33.156979 systemd[1]: sshd@28-172.236.112.106:22-147.75.109.163:47492.service: Deactivated successfully.
Aug 13 01:56:33.159960 systemd[1]: session-29.scope: Deactivated successfully.
Aug 13 01:56:33.161572 systemd-logind[1525]: Session 29 logged out. Waiting for processes to exit.
Aug 13 01:56:33.163260 systemd-logind[1525]: Removed session 29.
Aug 13 01:56:38.212992 systemd[1]: Started sshd@29-172.236.112.106:22-147.75.109.163:53374.service - OpenSSH per-connection server daemon (147.75.109.163:53374).
Aug 13 01:56:38.566105 sshd[4192]: Accepted publickey for core from 147.75.109.163 port 53374 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao
Aug 13 01:56:38.568570 sshd-session[4192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:56:38.576189 systemd-logind[1525]: New session 30 of user core.
Aug 13 01:56:38.580655 systemd[1]: Started session-30.scope - Session 30 of User core.
Aug 13 01:56:38.880771 sshd[4195]: Connection closed by 147.75.109.163 port 53374
Aug 13 01:56:38.881738 sshd-session[4192]: pam_unix(sshd:session): session closed for user core
Aug 13 01:56:38.886856 systemd-logind[1525]: Session 30 logged out. Waiting for processes to exit.
Aug 13 01:56:38.887302 systemd[1]: sshd@29-172.236.112.106:22-147.75.109.163:53374.service: Deactivated successfully.
Aug 13 01:56:38.890255 systemd[1]: session-30.scope: Deactivated successfully.
Aug 13 01:56:38.892201 systemd-logind[1525]: Removed session 30.
Aug 13 01:56:42.488560 kubelet[2776]: E0813 01:56:42.488393 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Aug 13 01:56:43.487002 kubelet[2776]: E0813 01:56:43.486890 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Aug 13 01:56:43.947294 systemd[1]: Started sshd@30-172.236.112.106:22-147.75.109.163:53378.service - OpenSSH per-connection server daemon (147.75.109.163:53378).
Aug 13 01:56:44.302888 sshd[4207]: Accepted publickey for core from 147.75.109.163 port 53378 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao
Aug 13 01:56:44.304735 sshd-session[4207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:56:44.310942 systemd-logind[1525]: New session 31 of user core.
Aug 13 01:56:44.318691 systemd[1]: Started session-31.scope - Session 31 of User core.
Aug 13 01:56:44.608960 sshd[4209]: Connection closed by 147.75.109.163 port 53378
Aug 13 01:56:44.609930 sshd-session[4207]: pam_unix(sshd:session): session closed for user core
Aug 13 01:56:44.615825 systemd[1]: sshd@30-172.236.112.106:22-147.75.109.163:53378.service: Deactivated successfully.
Aug 13 01:56:44.619111 systemd[1]: session-31.scope: Deactivated successfully.
Aug 13 01:56:44.620409 systemd-logind[1525]: Session 31 logged out. Waiting for processes to exit.
Aug 13 01:56:44.622298 systemd-logind[1525]: Removed session 31.
Aug 13 01:56:49.676279 systemd[1]: Started sshd@31-172.236.112.106:22-147.75.109.163:34842.service - OpenSSH per-connection server daemon (147.75.109.163:34842).
Aug 13 01:56:50.030260 sshd[4221]: Accepted publickey for core from 147.75.109.163 port 34842 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao
Aug 13 01:56:50.032095 sshd-session[4221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:56:50.038588 systemd-logind[1525]: New session 32 of user core.
Aug 13 01:56:50.043714 systemd[1]: Started session-32.scope - Session 32 of User core.
Aug 13 01:56:50.353820 sshd[4223]: Connection closed by 147.75.109.163 port 34842
Aug 13 01:56:50.354720 sshd-session[4221]: pam_unix(sshd:session): session closed for user core
Aug 13 01:56:50.358276 systemd-logind[1525]: Session 32 logged out. Waiting for processes to exit.
Aug 13 01:56:50.359094 systemd[1]: sshd@31-172.236.112.106:22-147.75.109.163:34842.service: Deactivated successfully.
Aug 13 01:56:50.361725 systemd[1]: session-32.scope: Deactivated successfully.
Aug 13 01:56:50.364965 systemd-logind[1525]: Removed session 32.
Aug 13 01:56:53.486332 kubelet[2776]: E0813 01:56:53.486263 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Aug 13 01:56:55.418771 systemd[1]: Started sshd@32-172.236.112.106:22-147.75.109.163:34848.service - OpenSSH per-connection server daemon (147.75.109.163:34848).
Aug 13 01:56:55.754882 sshd[4237]: Accepted publickey for core from 147.75.109.163 port 34848 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao
Aug 13 01:56:55.756841 sshd-session[4237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:56:55.763711 systemd-logind[1525]: New session 33 of user core.
Aug 13 01:56:55.768704 systemd[1]: Started session-33.scope - Session 33 of User core.
Aug 13 01:56:56.063184 sshd[4239]: Connection closed by 147.75.109.163 port 34848
Aug 13 01:56:56.064551 sshd-session[4237]: pam_unix(sshd:session): session closed for user core
Aug 13 01:56:56.069736 systemd[1]: sshd@32-172.236.112.106:22-147.75.109.163:34848.service: Deactivated successfully.
Aug 13 01:56:56.072567 systemd[1]: session-33.scope: Deactivated successfully.
Aug 13 01:56:56.074771 systemd-logind[1525]: Session 33 logged out. Waiting for processes to exit.
Aug 13 01:56:56.076919 systemd-logind[1525]: Removed session 33.
Aug 13 01:56:59.486502 kubelet[2776]: E0813 01:56:59.486320 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Aug 13 01:56:59.487022 kubelet[2776]: E0813 01:56:59.486643 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Aug 13 01:57:01.129770 systemd[1]: Started sshd@33-172.236.112.106:22-147.75.109.163:44212.service - OpenSSH per-connection server daemon (147.75.109.163:44212).
Aug 13 01:57:01.484674 sshd[4254]: Accepted publickey for core from 147.75.109.163 port 44212 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao
Aug 13 01:57:01.486868 sshd-session[4254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:57:01.492526 systemd-logind[1525]: New session 34 of user core.
Aug 13 01:57:01.503776 systemd[1]: Started session-34.scope - Session 34 of User core.
Aug 13 01:57:01.801250 sshd[4256]: Connection closed by 147.75.109.163 port 44212
Aug 13 01:57:01.802369 sshd-session[4254]: pam_unix(sshd:session): session closed for user core
Aug 13 01:57:01.807457 systemd[1]: sshd@33-172.236.112.106:22-147.75.109.163:44212.service: Deactivated successfully.
Aug 13 01:57:01.809885 systemd[1]: session-34.scope: Deactivated successfully.
Aug 13 01:57:01.810942 systemd-logind[1525]: Session 34 logged out. Waiting for processes to exit.
Aug 13 01:57:01.812917 systemd-logind[1525]: Removed session 34.
Aug 13 01:57:06.863248 systemd[1]: Started sshd@34-172.236.112.106:22-147.75.109.163:44214.service - OpenSSH per-connection server daemon (147.75.109.163:44214).
Aug 13 01:57:07.205670 sshd[4268]: Accepted publickey for core from 147.75.109.163 port 44214 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao
Aug 13 01:57:07.207955 sshd-session[4268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:57:07.215238 systemd-logind[1525]: New session 35 of user core.
Aug 13 01:57:07.221669 systemd[1]: Started session-35.scope - Session 35 of User core.
Aug 13 01:57:07.511762 sshd[4270]: Connection closed by 147.75.109.163 port 44214
Aug 13 01:57:07.512802 sshd-session[4268]: pam_unix(sshd:session): session closed for user core
Aug 13 01:57:07.517747 systemd-logind[1525]: Session 35 logged out. Waiting for processes to exit.
Aug 13 01:57:07.518896 systemd[1]: sshd@34-172.236.112.106:22-147.75.109.163:44214.service: Deactivated successfully.
Aug 13 01:57:07.521116 systemd[1]: session-35.scope: Deactivated successfully.
Aug 13 01:57:07.523124 systemd-logind[1525]: Removed session 35.
Aug 13 01:57:10.486144 kubelet[2776]: E0813 01:57:10.485652 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Aug 13 01:57:12.579050 systemd[1]: Started sshd@35-172.236.112.106:22-147.75.109.163:46856.service - OpenSSH per-connection server daemon (147.75.109.163:46856).
Aug 13 01:57:12.939761 sshd[4281]: Accepted publickey for core from 147.75.109.163 port 46856 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao
Aug 13 01:57:12.941716 sshd-session[4281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:57:12.947852 systemd-logind[1525]: New session 36 of user core.
Aug 13 01:57:12.952733 systemd[1]: Started session-36.scope - Session 36 of User core.
Aug 13 01:57:13.250365 sshd[4283]: Connection closed by 147.75.109.163 port 46856
Aug 13 01:57:13.251718 sshd-session[4281]: pam_unix(sshd:session): session closed for user core
Aug 13 01:57:13.256360 systemd-logind[1525]: Session 36 logged out. Waiting for processes to exit.
Aug 13 01:57:13.256853 systemd[1]: sshd@35-172.236.112.106:22-147.75.109.163:46856.service: Deactivated successfully.
Aug 13 01:57:13.260900 systemd[1]: session-36.scope: Deactivated successfully.
Aug 13 01:57:13.264837 systemd-logind[1525]: Removed session 36.
Aug 13 01:57:18.311657 systemd[1]: Started sshd@36-172.236.112.106:22-147.75.109.163:60276.service - OpenSSH per-connection server daemon (147.75.109.163:60276).
Aug 13 01:57:18.663944 sshd[4296]: Accepted publickey for core from 147.75.109.163 port 60276 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao
Aug 13 01:57:18.665827 sshd-session[4296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:57:18.671624 systemd-logind[1525]: New session 37 of user core.
Aug 13 01:57:18.675644 systemd[1]: Started session-37.scope - Session 37 of User core.
Aug 13 01:57:18.974632 sshd[4298]: Connection closed by 147.75.109.163 port 60276
Aug 13 01:57:18.975772 sshd-session[4296]: pam_unix(sshd:session): session closed for user core
Aug 13 01:57:18.981626 systemd[1]: sshd@36-172.236.112.106:22-147.75.109.163:60276.service: Deactivated successfully.
Aug 13 01:57:18.984809 systemd[1]: session-37.scope: Deactivated successfully.
Aug 13 01:57:18.986982 systemd-logind[1525]: Session 37 logged out. Waiting for processes to exit.
Aug 13 01:57:18.988926 systemd-logind[1525]: Removed session 37.
Aug 13 01:57:24.043898 systemd[1]: Started sshd@37-172.236.112.106:22-147.75.109.163:60288.service - OpenSSH per-connection server daemon (147.75.109.163:60288).
Aug 13 01:57:24.396703 sshd[4310]: Accepted publickey for core from 147.75.109.163 port 60288 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao
Aug 13 01:57:24.398604 sshd-session[4310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:57:24.404443 systemd-logind[1525]: New session 38 of user core.
Aug 13 01:57:24.409635 systemd[1]: Started session-38.scope - Session 38 of User core.
Aug 13 01:57:24.712458 sshd[4312]: Connection closed by 147.75.109.163 port 60288
Aug 13 01:57:24.713730 sshd-session[4310]: pam_unix(sshd:session): session closed for user core
Aug 13 01:57:24.720021 systemd-logind[1525]: Session 38 logged out. Waiting for processes to exit.
Aug 13 01:57:24.720529 systemd[1]: sshd@37-172.236.112.106:22-147.75.109.163:60288.service: Deactivated successfully.
Aug 13 01:57:24.723511 systemd[1]: session-38.scope: Deactivated successfully.
Aug 13 01:57:24.726293 systemd-logind[1525]: Removed session 38.
Aug 13 01:57:29.789454 systemd[1]: Started sshd@38-172.236.112.106:22-147.75.109.163:52946.service - OpenSSH per-connection server daemon (147.75.109.163:52946).
Aug 13 01:57:30.144184 sshd[4326]: Accepted publickey for core from 147.75.109.163 port 52946 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao
Aug 13 01:57:30.145990 sshd-session[4326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:57:30.152141 systemd-logind[1525]: New session 39 of user core.
Aug 13 01:57:30.165667 systemd[1]: Started session-39.scope - Session 39 of User core.
Aug 13 01:57:30.452545 sshd[4328]: Connection closed by 147.75.109.163 port 52946
Aug 13 01:57:30.453611 sshd-session[4326]: pam_unix(sshd:session): session closed for user core
Aug 13 01:57:30.459167 systemd[1]: sshd@38-172.236.112.106:22-147.75.109.163:52946.service: Deactivated successfully.
Aug 13 01:57:30.461839 systemd[1]: session-39.scope: Deactivated successfully.
Aug 13 01:57:30.462839 systemd-logind[1525]: Session 39 logged out. Waiting for processes to exit.
Aug 13 01:57:30.464874 systemd-logind[1525]: Removed session 39.
Aug 13 01:57:35.516543 systemd[1]: Started sshd@39-172.236.112.106:22-147.75.109.163:52962.service - OpenSSH per-connection server daemon (147.75.109.163:52962).
Aug 13 01:57:35.868302 sshd[4341]: Accepted publickey for core from 147.75.109.163 port 52962 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao
Aug 13 01:57:35.870540 sshd-session[4341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:57:35.877292 systemd-logind[1525]: New session 40 of user core.
Aug 13 01:57:35.883721 systemd[1]: Started session-40.scope - Session 40 of User core.
Aug 13 01:57:36.172290 sshd[4343]: Connection closed by 147.75.109.163 port 52962
Aug 13 01:57:36.173061 sshd-session[4341]: pam_unix(sshd:session): session closed for user core
Aug 13 01:57:36.179064 systemd[1]: sshd@39-172.236.112.106:22-147.75.109.163:52962.service: Deactivated successfully.
Aug 13 01:57:36.181321 systemd[1]: session-40.scope: Deactivated successfully.
Aug 13 01:57:36.182297 systemd-logind[1525]: Session 40 logged out. Waiting for processes to exit.
Aug 13 01:57:36.184345 systemd-logind[1525]: Removed session 40.
Aug 13 01:57:41.234793 systemd[1]: Started sshd@40-172.236.112.106:22-147.75.109.163:48710.service - OpenSSH per-connection server daemon (147.75.109.163:48710).
Aug 13 01:57:41.571103 sshd[4355]: Accepted publickey for core from 147.75.109.163 port 48710 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao
Aug 13 01:57:41.572939 sshd-session[4355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:57:41.579128 systemd-logind[1525]: New session 41 of user core.
Aug 13 01:57:41.583698 systemd[1]: Started session-41.scope - Session 41 of User core.
Aug 13 01:57:41.870878 sshd[4357]: Connection closed by 147.75.109.163 port 48710
Aug 13 01:57:41.871634 sshd-session[4355]: pam_unix(sshd:session): session closed for user core
Aug 13 01:57:41.877266 systemd[1]: sshd@40-172.236.112.106:22-147.75.109.163:48710.service: Deactivated successfully.
Aug 13 01:57:41.880257 systemd[1]: session-41.scope: Deactivated successfully.
Aug 13 01:57:41.882162 systemd-logind[1525]: Session 41 logged out. Waiting for processes to exit.
Aug 13 01:57:41.883833 systemd-logind[1525]: Removed session 41.
Aug 13 01:57:46.674729 containerd[1568]: time="2025-08-13T01:57:46.674575615Z" level=warning msg="container event discarded" container=676a96e7d73defdf162f86b676d1942877804ed72740f7aff9a9ade9a4f06173 type=CONTAINER_CREATED_EVENT
Aug 13 01:57:46.685986 containerd[1568]: time="2025-08-13T01:57:46.685923053Z" level=warning msg="container event discarded" container=676a96e7d73defdf162f86b676d1942877804ed72740f7aff9a9ade9a4f06173 type=CONTAINER_STARTED_EVENT
Aug 13 01:57:46.733289 containerd[1568]: time="2025-08-13T01:57:46.733212858Z" level=warning msg="container event discarded" container=79f2c3213699071df833a63da562afff9999cd840ad081c950e2b1bc5a924454 type=CONTAINER_CREATED_EVENT
Aug 13 01:57:46.733289 containerd[1568]: time="2025-08-13T01:57:46.733277088Z" level=warning msg="container event discarded" container=79f2c3213699071df833a63da562afff9999cd840ad081c950e2b1bc5a924454 type=CONTAINER_STARTED_EVENT
Aug 13 01:57:46.746610 containerd[1568]: time="2025-08-13T01:57:46.746531317Z" level=warning msg="container event discarded" container=3644d2bd85041de4a22453e4d5e84b3ea267f5ca4806a653b0ae2d9498be5975 type=CONTAINER_CREATED_EVENT
Aug 13 01:57:46.763908 containerd[1568]: time="2025-08-13T01:57:46.763797438Z" level=warning msg="container event discarded" container=cfad0dc4f62f0d5fdbd90fd35c9895bec121428b6417bd1463d21327ca36b18b type=CONTAINER_CREATED_EVENT
Aug 13 01:57:46.776192 containerd[1568]: time="2025-08-13T01:57:46.776121613Z" level=warning msg="container event discarded" container=296604d5a181c8b292004d1c231754f63d04fb0c72ea2234ea5750a0b06dccb0 type=CONTAINER_CREATED_EVENT
Aug 13 01:57:46.776192 containerd[1568]: time="2025-08-13T01:57:46.776177592Z" level=warning msg="container event discarded" container=296604d5a181c8b292004d1c231754f63d04fb0c72ea2234ea5750a0b06dccb0 type=CONTAINER_STARTED_EVENT
Aug 13 01:57:46.808562 containerd[1568]: time="2025-08-13T01:57:46.808436196Z" level=warning msg="container event discarded" container=e087009e6cdb998b5588f006d6bfb90f18b3b784e500bd6418ad284744ea3698 type=CONTAINER_CREATED_EVENT
Aug 13 01:57:46.933052 systemd[1]: Started sshd@41-172.236.112.106:22-147.75.109.163:48712.service - OpenSSH per-connection server daemon (147.75.109.163:48712).
Aug 13 01:57:46.953245 containerd[1568]: time="2025-08-13T01:57:46.953161288Z" level=warning msg="container event discarded" container=cfad0dc4f62f0d5fdbd90fd35c9895bec121428b6417bd1463d21327ca36b18b type=CONTAINER_STARTED_EVENT
Aug 13 01:57:46.964551 containerd[1568]: time="2025-08-13T01:57:46.964444566Z" level=warning msg="container event discarded" container=3644d2bd85041de4a22453e4d5e84b3ea267f5ca4806a653b0ae2d9498be5975 type=CONTAINER_STARTED_EVENT
Aug 13 01:57:47.161397 containerd[1568]: time="2025-08-13T01:57:47.161297524Z" level=warning msg="container event discarded" container=e087009e6cdb998b5588f006d6bfb90f18b3b784e500bd6418ad284744ea3698 type=CONTAINER_STARTED_EVENT
Aug 13 01:57:47.283245 sshd[4370]: Accepted publickey for core from 147.75.109.163 port 48712 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao
Aug 13 01:57:47.285116 sshd-session[4370]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:57:47.290688 systemd-logind[1525]: New session 42 of user core.
Aug 13 01:57:47.296661 systemd[1]: Started session-42.scope - Session 42 of User core.
Aug 13 01:57:47.585174 sshd[4372]: Connection closed by 147.75.109.163 port 48712
Aug 13 01:57:47.586073 sshd-session[4370]: pam_unix(sshd:session): session closed for user core
Aug 13 01:57:47.592275 systemd[1]: sshd@41-172.236.112.106:22-147.75.109.163:48712.service: Deactivated successfully.
Aug 13 01:57:47.595664 systemd[1]: session-42.scope: Deactivated successfully.
Aug 13 01:57:47.596835 systemd-logind[1525]: Session 42 logged out. Waiting for processes to exit.
Aug 13 01:57:47.599394 systemd-logind[1525]: Removed session 42.
Aug 13 01:57:48.486430 kubelet[2776]: E0813 01:57:48.486305 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Aug 13 01:57:52.651817 systemd[1]: Started sshd@42-172.236.112.106:22-147.75.109.163:56890.service - OpenSSH per-connection server daemon (147.75.109.163:56890).
Aug 13 01:57:53.001354 sshd[4386]: Accepted publickey for core from 147.75.109.163 port 56890 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao
Aug 13 01:57:53.003559 sshd-session[4386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:57:53.010042 systemd-logind[1525]: New session 43 of user core.
Aug 13 01:57:53.016672 systemd[1]: Started session-43.scope - Session 43 of User core.
Aug 13 01:57:53.303376 sshd[4388]: Connection closed by 147.75.109.163 port 56890
Aug 13 01:57:53.304284 sshd-session[4386]: pam_unix(sshd:session): session closed for user core
Aug 13 01:57:53.309761 systemd-logind[1525]: Session 43 logged out. Waiting for processes to exit.
Aug 13 01:57:53.310234 systemd[1]: sshd@42-172.236.112.106:22-147.75.109.163:56890.service: Deactivated successfully.
Aug 13 01:57:53.312799 systemd[1]: session-43.scope: Deactivated successfully.
Aug 13 01:57:53.314954 systemd-logind[1525]: Removed session 43.
Aug 13 01:57:57.486889 kubelet[2776]: E0813 01:57:57.486786 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Aug 13 01:57:58.373673 systemd[1]: Started sshd@43-172.236.112.106:22-147.75.109.163:36462.service - OpenSSH per-connection server daemon (147.75.109.163:36462).
Aug 13 01:57:58.523408 containerd[1568]: time="2025-08-13T01:57:58.523258736Z" level=warning msg="container event discarded" container=1705078478ab458176399fbecaeb74a4c44162ef7d0f854ae8b4908c40c3c154 type=CONTAINER_CREATED_EVENT
Aug 13 01:57:58.523408 containerd[1568]: time="2025-08-13T01:57:58.523357375Z" level=warning msg="container event discarded" container=1705078478ab458176399fbecaeb74a4c44162ef7d0f854ae8b4908c40c3c154 type=CONTAINER_STARTED_EVENT
Aug 13 01:57:58.583814 containerd[1568]: time="2025-08-13T01:57:58.583672564Z" level=warning msg="container event discarded" container=f6e0c66137d411b2770b860dba62768932a114d19d97d1e8716a0981a1b7947b type=CONTAINER_CREATED_EVENT
Aug 13 01:57:58.583814 containerd[1568]: time="2025-08-13T01:57:58.583750863Z" level=warning msg="container event discarded" container=f6e0c66137d411b2770b860dba62768932a114d19d97d1e8716a0981a1b7947b type=CONTAINER_STARTED_EVENT
Aug 13 01:57:58.626293 containerd[1568]: time="2025-08-13T01:57:58.626057967Z" level=warning msg="container event discarded" container=559310e89a523d2547b011144105fb660bf65662ff7baef9b4c48c28e69ff556 type=CONTAINER_CREATED_EVENT
Aug 13 01:57:58.723248 sshd[4400]: Accepted publickey for core from 147.75.109.163 port 36462 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao
Aug 13 01:57:58.725176 sshd-session[4400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:57:58.730804 systemd-logind[1525]: New session 44 of user core.
Aug 13 01:57:58.739688 systemd[1]: Started session-44.scope - Session 44 of User core.
Aug 13 01:57:58.775536 containerd[1568]: time="2025-08-13T01:57:58.775439826Z" level=warning msg="container event discarded" container=559310e89a523d2547b011144105fb660bf65662ff7baef9b4c48c28e69ff556 type=CONTAINER_STARTED_EVENT
Aug 13 01:57:59.037550 sshd[4402]: Connection closed by 147.75.109.163 port 36462
Aug 13 01:57:59.038574 sshd-session[4400]: pam_unix(sshd:session): session closed for user core
Aug 13 01:57:59.043942 systemd-logind[1525]: Session 44 logged out. Waiting for processes to exit.
Aug 13 01:57:59.044355 systemd[1]: sshd@43-172.236.112.106:22-147.75.109.163:36462.service: Deactivated successfully.
Aug 13 01:57:59.046887 systemd[1]: session-44.scope: Deactivated successfully.
Aug 13 01:57:59.049621 systemd-logind[1525]: Removed session 44.
Aug 13 01:57:59.096282 containerd[1568]: time="2025-08-13T01:57:59.096152656Z" level=warning msg="container event discarded" container=2325ea9042fad40778b5f318c34cdb3c0c9100ca35902a4beb596811d078fd4d type=CONTAINER_CREATED_EVENT
Aug 13 01:57:59.096282 containerd[1568]: time="2025-08-13T01:57:59.096250746Z" level=warning msg="container event discarded" container=2325ea9042fad40778b5f318c34cdb3c0c9100ca35902a4beb596811d078fd4d type=CONTAINER_STARTED_EVENT
Aug 13 01:58:01.486153 kubelet[2776]: E0813 01:58:01.485987 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Aug 13 01:58:04.101411 systemd[1]: Started sshd@44-172.236.112.106:22-147.75.109.163:36472.service - OpenSSH per-connection server daemon (147.75.109.163:36472).
Aug 13 01:58:04.445278 sshd[4416]: Accepted publickey for core from 147.75.109.163 port 36472 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao
Aug 13 01:58:04.447220 sshd-session[4416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:58:04.452903 systemd-logind[1525]: New session 45 of user core.
Aug 13 01:58:04.460665 systemd[1]: Started session-45.scope - Session 45 of User core.
Aug 13 01:58:04.746878 sshd[4418]: Connection closed by 147.75.109.163 port 36472
Aug 13 01:58:04.747721 sshd-session[4416]: pam_unix(sshd:session): session closed for user core
Aug 13 01:58:04.754486 systemd[1]: sshd@44-172.236.112.106:22-147.75.109.163:36472.service: Deactivated successfully.
Aug 13 01:58:04.757746 systemd[1]: session-45.scope: Deactivated successfully.
Aug 13 01:58:04.759063 systemd-logind[1525]: Session 45 logged out. Waiting for processes to exit.
Aug 13 01:58:04.761376 systemd-logind[1525]: Removed session 45.
Aug 13 01:58:09.036719 containerd[1568]: time="2025-08-13T01:58:09.036629502Z" level=warning msg="container event discarded" container=5b1b5a39bfb41b67f785151979dae59c10f6fb0ef2c7a3914a18686719613fce type=CONTAINER_CREATED_EVENT
Aug 13 01:58:09.185337 containerd[1568]: time="2025-08-13T01:58:09.185202430Z" level=warning msg="container event discarded" container=5b1b5a39bfb41b67f785151979dae59c10f6fb0ef2c7a3914a18686719613fce type=CONTAINER_STARTED_EVENT
Aug 13 01:58:09.330816 containerd[1568]: time="2025-08-13T01:58:09.330598379Z" level=warning msg="container event discarded" container=5b1b5a39bfb41b67f785151979dae59c10f6fb0ef2c7a3914a18686719613fce type=CONTAINER_STOPPED_EVENT
Aug 13 01:58:09.620064 containerd[1568]: time="2025-08-13T01:58:09.619939097Z" level=warning msg="container event discarded" container=e3e5fa0a328229815ba5a8a3d89b50b1b064f45727f9cb876a1ea6f38bb46a9a type=CONTAINER_CREATED_EVENT
Aug 13 01:58:09.818269 systemd[1]: Started sshd@45-172.236.112.106:22-147.75.109.163:60930.service - OpenSSH per-connection server daemon (147.75.109.163:60930).
Aug 13 01:58:09.824562 containerd[1568]: time="2025-08-13T01:58:09.824464494Z" level=warning msg="container event discarded" container=e3e5fa0a328229815ba5a8a3d89b50b1b064f45727f9cb876a1ea6f38bb46a9a type=CONTAINER_STARTED_EVENT
Aug 13 01:58:09.999147 containerd[1568]: time="2025-08-13T01:58:09.998935285Z" level=warning msg="container event discarded" container=e3e5fa0a328229815ba5a8a3d89b50b1b064f45727f9cb876a1ea6f38bb46a9a type=CONTAINER_STOPPED_EVENT
Aug 13 01:58:10.159101 sshd[4430]: Accepted publickey for core from 147.75.109.163 port 60930 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao
Aug 13 01:58:10.161097 sshd-session[4430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:58:10.167207 systemd-logind[1525]: New session 46 of user core.
Aug 13 01:58:10.172674 systemd[1]: Started session-46.scope - Session 46 of User core.
Aug 13 01:58:10.473102 sshd[4432]: Connection closed by 147.75.109.163 port 60930
Aug 13 01:58:10.473877 sshd-session[4430]: pam_unix(sshd:session): session closed for user core
Aug 13 01:58:10.478744 systemd[1]: sshd@45-172.236.112.106:22-147.75.109.163:60930.service: Deactivated successfully.
Aug 13 01:58:10.481208 systemd[1]: session-46.scope: Deactivated successfully.
Aug 13 01:58:10.482013 systemd-logind[1525]: Session 46 logged out. Waiting for processes to exit.
Aug 13 01:58:10.483794 systemd-logind[1525]: Removed session 46.
Aug 13 01:58:10.486610 kubelet[2776]: E0813 01:58:10.485542 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Aug 13 01:58:10.718619 containerd[1568]: time="2025-08-13T01:58:10.718524533Z" level=warning msg="container event discarded" container=c49e26f1db3b9b0c9dc7ecbb7a6687f4d54c731d78263502996ebd19e0cb2394 type=CONTAINER_CREATED_EVENT
Aug 13 01:58:10.979289 containerd[1568]: time="2025-08-13T01:58:10.979200596Z" level=warning msg="container event discarded" container=c49e26f1db3b9b0c9dc7ecbb7a6687f4d54c731d78263502996ebd19e0cb2394 type=CONTAINER_STARTED_EVENT
Aug 13 01:58:11.045607 containerd[1568]: time="2025-08-13T01:58:11.045530845Z" level=warning msg="container event discarded" container=583b0f89eca8bc9f1d3f3d409e9ccdf967ee400b6c75702eeafac3e692ee53b1 type=CONTAINER_CREATED_EVENT
Aug 13 01:58:11.045607 containerd[1568]: time="2025-08-13T01:58:11.045599034Z" level=warning msg="container event discarded" container=c49e26f1db3b9b0c9dc7ecbb7a6687f4d54c731d78263502996ebd19e0cb2394 type=CONTAINER_STOPPED_EVENT
Aug 13 01:58:11.134917 containerd[1568]: time="2025-08-13T01:58:11.134827812Z" level=warning msg="container event discarded" container=583b0f89eca8bc9f1d3f3d409e9ccdf967ee400b6c75702eeafac3e692ee53b1 type=CONTAINER_STARTED_EVENT
Aug 13 01:58:11.685654 containerd[1568]: time="2025-08-13T01:58:11.685531800Z" level=warning msg="container event discarded" container=5a3810be179362ad2dab86a252bd15e6de4d66c1039ed48f58345b9e8caeac13 type=CONTAINER_CREATED_EVENT
Aug 13 01:58:12.029512 containerd[1568]: time="2025-08-13T01:58:12.029292510Z" level=warning msg="container event discarded" container=5a3810be179362ad2dab86a252bd15e6de4d66c1039ed48f58345b9e8caeac13 type=CONTAINER_STARTED_EVENT
Aug 13 01:58:12.092722 containerd[1568]: time="2025-08-13T01:58:12.092620554Z" level=warning msg="container event discarded" container=5a3810be179362ad2dab86a252bd15e6de4d66c1039ed48f58345b9e8caeac13 type=CONTAINER_STOPPED_EVENT
Aug 13 01:58:12.486334 kubelet[2776]: E0813 01:58:12.485666 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Aug 13 01:58:12.754563 containerd[1568]: time="2025-08-13T01:58:12.754337806Z" level=warning msg="container event discarded" container=9b29225ae4627cba7616d0bad3f81bb5d423758d26fa52bf4b867beff5dc3562 type=CONTAINER_CREATED_EVENT
Aug 13 01:58:13.015053 containerd[1568]: time="2025-08-13T01:58:13.014886517Z" level=warning msg="container event discarded" container=9b29225ae4627cba7616d0bad3f81bb5d423758d26fa52bf4b867beff5dc3562 type=CONTAINER_STARTED_EVENT
Aug 13 01:58:15.539304 systemd[1]: Started sshd@46-172.236.112.106:22-147.75.109.163:60938.service - OpenSSH per-connection server daemon (147.75.109.163:60938).
Aug 13 01:58:15.889929 sshd[4443]: Accepted publickey for core from 147.75.109.163 port 60938 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao
Aug 13 01:58:15.891973 sshd-session[4443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:58:15.896799 systemd-logind[1525]: New session 47 of user core.
Aug 13 01:58:15.905710 systemd[1]: Started session-47.scope - Session 47 of User core.
Aug 13 01:58:16.196596 sshd[4445]: Connection closed by 147.75.109.163 port 60938
Aug 13 01:58:16.197598 sshd-session[4443]: pam_unix(sshd:session): session closed for user core
Aug 13 01:58:16.202853 systemd-logind[1525]: Session 47 logged out. Waiting for processes to exit.
Aug 13 01:58:16.203073 systemd[1]: sshd@46-172.236.112.106:22-147.75.109.163:60938.service: Deactivated successfully.
Aug 13 01:58:16.205924 systemd[1]: session-47.scope: Deactivated successfully.
Aug 13 01:58:16.208085 systemd-logind[1525]: Removed session 47.
Aug 13 01:58:20.486139 kubelet[2776]: E0813 01:58:20.485466 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Aug 13 01:58:21.261586 systemd[1]: Started sshd@47-172.236.112.106:22-147.75.109.163:40838.service - OpenSSH per-connection server daemon (147.75.109.163:40838).
Aug 13 01:58:21.606264 sshd[4457]: Accepted publickey for core from 147.75.109.163 port 40838 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao
Aug 13 01:58:21.608435 sshd-session[4457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:58:21.614380 systemd-logind[1525]: New session 48 of user core.
Aug 13 01:58:21.625781 systemd[1]: Started session-48.scope - Session 48 of User core.
Aug 13 01:58:21.912770 sshd[4459]: Connection closed by 147.75.109.163 port 40838
Aug 13 01:58:21.913881 sshd-session[4457]: pam_unix(sshd:session): session closed for user core
Aug 13 01:58:21.918646 systemd[1]: sshd@47-172.236.112.106:22-147.75.109.163:40838.service: Deactivated successfully.
Aug 13 01:58:21.922377 systemd[1]: session-48.scope: Deactivated successfully.
Aug 13 01:58:21.924213 systemd-logind[1525]: Session 48 logged out. Waiting for processes to exit.
Aug 13 01:58:21.927252 systemd-logind[1525]: Removed session 48.
Aug 13 01:58:26.979981 systemd[1]: Started sshd@48-172.236.112.106:22-147.75.109.163:40854.service - OpenSSH per-connection server daemon (147.75.109.163:40854).
Aug 13 01:58:27.324254 sshd[4471]: Accepted publickey for core from 147.75.109.163 port 40854 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao
Aug 13 01:58:27.326431 sshd-session[4471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:58:27.332706 systemd-logind[1525]: New session 49 of user core.
Aug 13 01:58:27.337640 systemd[1]: Started session-49.scope - Session 49 of User core.
Aug 13 01:58:27.647912 sshd[4473]: Connection closed by 147.75.109.163 port 40854
Aug 13 01:58:27.648888 sshd-session[4471]: pam_unix(sshd:session): session closed for user core
Aug 13 01:58:27.656584 systemd[1]: sshd@48-172.236.112.106:22-147.75.109.163:40854.service: Deactivated successfully.
Aug 13 01:58:27.661915 systemd[1]: session-49.scope: Deactivated successfully.
Aug 13 01:58:27.663693 systemd-logind[1525]: Session 49 logged out. Waiting for processes to exit.
Aug 13 01:58:27.667758 systemd-logind[1525]: Removed session 49.
Aug 13 01:58:32.714298 systemd[1]: Started sshd@49-172.236.112.106:22-147.75.109.163:59980.service - OpenSSH per-connection server daemon (147.75.109.163:59980).
Aug 13 01:58:33.067141 sshd[4487]: Accepted publickey for core from 147.75.109.163 port 59980 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao
Aug 13 01:58:33.068934 sshd-session[4487]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:58:33.073762 systemd-logind[1525]: New session 50 of user core.
Aug 13 01:58:33.078616 systemd[1]: Started session-50.scope - Session 50 of User core.
Aug 13 01:58:33.375946 sshd[4489]: Connection closed by 147.75.109.163 port 59980
Aug 13 01:58:33.376789 sshd-session[4487]: pam_unix(sshd:session): session closed for user core
Aug 13 01:58:33.381516 systemd[1]: sshd@49-172.236.112.106:22-147.75.109.163:59980.service: Deactivated successfully.
Aug 13 01:58:33.383855 systemd[1]: session-50.scope: Deactivated successfully.
Aug 13 01:58:33.384769 systemd-logind[1525]: Session 50 logged out. Waiting for processes to exit.
Aug 13 01:58:33.386335 systemd-logind[1525]: Removed session 50.
Aug 13 01:58:38.442271 systemd[1]: Started sshd@50-172.236.112.106:22-147.75.109.163:54846.service - OpenSSH per-connection server daemon (147.75.109.163:54846).
Aug 13 01:58:38.797793 sshd[4501]: Accepted publickey for core from 147.75.109.163 port 54846 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao
Aug 13 01:58:38.800082 sshd-session[4501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:58:38.806666 systemd-logind[1525]: New session 51 of user core.
Aug 13 01:58:38.813701 systemd[1]: Started session-51.scope - Session 51 of User core.
Aug 13 01:58:39.107914 sshd[4503]: Connection closed by 147.75.109.163 port 54846
Aug 13 01:58:39.108318 sshd-session[4501]: pam_unix(sshd:session): session closed for user core
Aug 13 01:58:39.115798 systemd-logind[1525]: Session 51 logged out. Waiting for processes to exit.
Aug 13 01:58:39.116297 systemd[1]: sshd@50-172.236.112.106:22-147.75.109.163:54846.service: Deactivated successfully.
Aug 13 01:58:39.119206 systemd[1]: session-51.scope: Deactivated successfully.
Aug 13 01:58:39.121946 systemd-logind[1525]: Removed session 51.
Aug 13 01:58:44.174506 systemd[1]: Started sshd@51-172.236.112.106:22-147.75.109.163:54850.service - OpenSSH per-connection server daemon (147.75.109.163:54850).
Aug 13 01:58:44.524795 sshd[4514]: Accepted publickey for core from 147.75.109.163 port 54850 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao
Aug 13 01:58:44.527375 sshd-session[4514]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:58:44.533820 systemd-logind[1525]: New session 52 of user core.
Aug 13 01:58:44.539686 systemd[1]: Started session-52.scope - Session 52 of User core.
Aug 13 01:58:44.832590 sshd[4516]: Connection closed by 147.75.109.163 port 54850
Aug 13 01:58:44.833760 sshd-session[4514]: pam_unix(sshd:session): session closed for user core
Aug 13 01:58:44.839962 systemd[1]: sshd@51-172.236.112.106:22-147.75.109.163:54850.service: Deactivated successfully.
Aug 13 01:58:44.843423 systemd[1]: session-52.scope: Deactivated successfully.
Aug 13 01:58:44.845291 systemd-logind[1525]: Session 52 logged out. Waiting for processes to exit.
Aug 13 01:58:44.847342 systemd-logind[1525]: Removed session 52.
Aug 13 01:58:49.893620 systemd[1]: Started sshd@52-172.236.112.106:22-147.75.109.163:42196.service - OpenSSH per-connection server daemon (147.75.109.163:42196).
Aug 13 01:58:50.229879 sshd[4528]: Accepted publickey for core from 147.75.109.163 port 42196 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao
Aug 13 01:58:50.232254 sshd-session[4528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:58:50.238989 systemd-logind[1525]: New session 53 of user core.
Aug 13 01:58:50.251709 systemd[1]: Started session-53.scope - Session 53 of User core.
Aug 13 01:58:50.534515 sshd[4530]: Connection closed by 147.75.109.163 port 42196
Aug 13 01:58:50.535240 sshd-session[4528]: pam_unix(sshd:session): session closed for user core
Aug 13 01:58:50.539991 systemd[1]: sshd@52-172.236.112.106:22-147.75.109.163:42196.service: Deactivated successfully.
Aug 13 01:58:50.542213 systemd[1]: session-53.scope: Deactivated successfully.
Aug 13 01:58:50.543138 systemd-logind[1525]: Session 53 logged out. Waiting for processes to exit.
Aug 13 01:58:50.544698 systemd-logind[1525]: Removed session 53.
Aug 13 01:58:55.598013 systemd[1]: Started sshd@53-172.236.112.106:22-147.75.109.163:42200.service - OpenSSH per-connection server daemon (147.75.109.163:42200).
Aug 13 01:58:55.954228 sshd[4544]: Accepted publickey for core from 147.75.109.163 port 42200 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao
Aug 13 01:58:55.956033 sshd-session[4544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:58:55.961177 systemd-logind[1525]: New session 54 of user core.
Aug 13 01:58:55.969719 systemd[1]: Started session-54.scope - Session 54 of User core.
Aug 13 01:58:56.267955 sshd[4546]: Connection closed by 147.75.109.163 port 42200
Aug 13 01:58:56.268685 sshd-session[4544]: pam_unix(sshd:session): session closed for user core
Aug 13 01:58:56.273836 systemd[1]: sshd@53-172.236.112.106:22-147.75.109.163:42200.service: Deactivated successfully.
Aug 13 01:58:56.276675 systemd[1]: session-54.scope: Deactivated successfully.
Aug 13 01:58:56.278057 systemd-logind[1525]: Session 54 logged out. Waiting for processes to exit.
Aug 13 01:58:56.280303 systemd-logind[1525]: Removed session 54.
Aug 13 01:58:58.486518 kubelet[2776]: E0813 01:58:58.485756 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Aug 13 01:59:00.487171 kubelet[2776]: E0813 01:59:00.486516 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Aug 13 01:59:01.328609 systemd[1]: Started sshd@54-172.236.112.106:22-147.75.109.163:43314.service - OpenSSH per-connection server daemon (147.75.109.163:43314).
Aug 13 01:59:01.682824 sshd[4559]: Accepted publickey for core from 147.75.109.163 port 43314 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao
Aug 13 01:59:01.685039 sshd-session[4559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:59:01.690549 systemd-logind[1525]: New session 55 of user core.
Aug 13 01:59:01.694635 systemd[1]: Started session-55.scope - Session 55 of User core.
Aug 13 01:59:01.986020 sshd[4561]: Connection closed by 147.75.109.163 port 43314
Aug 13 01:59:01.986756 sshd-session[4559]: pam_unix(sshd:session): session closed for user core
Aug 13 01:59:01.994090 systemd-logind[1525]: Session 55 logged out. Waiting for processes to exit.
Aug 13 01:59:01.994244 systemd[1]: sshd@54-172.236.112.106:22-147.75.109.163:43314.service: Deactivated successfully.
Aug 13 01:59:01.997065 systemd[1]: session-55.scope: Deactivated successfully.
Aug 13 01:59:01.999438 systemd-logind[1525]: Removed session 55.
Aug 13 01:59:07.067105 systemd[1]: Started sshd@55-172.236.112.106:22-147.75.109.163:43318.service - OpenSSH per-connection server daemon (147.75.109.163:43318).
Aug 13 01:59:07.500834 sshd[4572]: Accepted publickey for core from 147.75.109.163 port 43318 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao
Aug 13 01:59:07.504573 sshd-session[4572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:59:07.514684 systemd-logind[1525]: New session 56 of user core.
Aug 13 01:59:07.522325 systemd[1]: Started session-56.scope - Session 56 of User core.
Aug 13 01:59:07.889015 sshd[4574]: Connection closed by 147.75.109.163 port 43318
Aug 13 01:59:07.889990 sshd-session[4572]: pam_unix(sshd:session): session closed for user core
Aug 13 01:59:07.905171 systemd[1]: sshd@55-172.236.112.106:22-147.75.109.163:43318.service: Deactivated successfully.
Aug 13 01:59:07.911282 systemd[1]: session-56.scope: Deactivated successfully.
Aug 13 01:59:07.917205 systemd-logind[1525]: Session 56 logged out. Waiting for processes to exit.
Aug 13 01:59:07.922191 systemd-logind[1525]: Removed session 56.
Aug 13 01:59:12.949062 systemd[1]: Started sshd@56-172.236.112.106:22-147.75.109.163:34514.service - OpenSSH per-connection server daemon (147.75.109.163:34514).
Aug 13 01:59:13.303448 sshd[4586]: Accepted publickey for core from 147.75.109.163 port 34514 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao
Aug 13 01:59:13.307623 sshd-session[4586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:59:13.315641 systemd-logind[1525]: New session 57 of user core.
Aug 13 01:59:13.324760 systemd[1]: Started session-57.scope - Session 57 of User core.
Aug 13 01:59:13.613455 sshd[4588]: Connection closed by 147.75.109.163 port 34514
Aug 13 01:59:13.614308 sshd-session[4586]: pam_unix(sshd:session): session closed for user core
Aug 13 01:59:13.619839 systemd-logind[1525]: Session 57 logged out. Waiting for processes to exit.
Aug 13 01:59:13.620783 systemd[1]: sshd@56-172.236.112.106:22-147.75.109.163:34514.service: Deactivated successfully.
Aug 13 01:59:13.623287 systemd[1]: session-57.scope: Deactivated successfully.
Aug 13 01:59:13.626182 systemd-logind[1525]: Removed session 57.
Aug 13 01:59:13.676292 systemd[1]: Started sshd@57-172.236.112.106:22-147.75.109.163:34516.service - OpenSSH per-connection server daemon (147.75.109.163:34516).
Aug 13 01:59:14.030943 sshd[4599]: Accepted publickey for core from 147.75.109.163 port 34516 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao
Aug 13 01:59:14.032603 sshd-session[4599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:59:14.039589 systemd-logind[1525]: New session 58 of user core.
Aug 13 01:59:14.047755 systemd[1]: Started session-58.scope - Session 58 of User core.
Aug 13 01:59:14.420028 systemd[1]: Started sshd@58-172.236.112.106:22-220.178.8.154:33236.service - OpenSSH per-connection server daemon (220.178.8.154:33236).
Aug 13 01:59:15.781147 containerd[1568]: time="2025-08-13T01:59:15.780905915Z" level=info msg="StopContainer for \"583b0f89eca8bc9f1d3f3d409e9ccdf967ee400b6c75702eeafac3e692ee53b1\" with timeout 30 (s)"
Aug 13 01:59:15.782118 containerd[1568]: time="2025-08-13T01:59:15.782074673Z" level=info msg="Stop container \"583b0f89eca8bc9f1d3f3d409e9ccdf967ee400b6c75702eeafac3e692ee53b1\" with signal terminated"
Aug 13 01:59:15.806563 systemd[1]: cri-containerd-583b0f89eca8bc9f1d3f3d409e9ccdf967ee400b6c75702eeafac3e692ee53b1.scope: Deactivated successfully.
Aug 13 01:59:15.811769 containerd[1568]: time="2025-08-13T01:59:15.811698441Z" level=info msg="received exit event container_id:\"583b0f89eca8bc9f1d3f3d409e9ccdf967ee400b6c75702eeafac3e692ee53b1\" id:\"583b0f89eca8bc9f1d3f3d409e9ccdf967ee400b6c75702eeafac3e692ee53b1\" pid:3339 exited_at:{seconds:1755050355 nanos:811165957}"
Aug 13 01:59:15.814569 containerd[1568]: time="2025-08-13T01:59:15.814247495Z" level=info msg="TaskExit event in podsandbox handler container_id:\"583b0f89eca8bc9f1d3f3d409e9ccdf967ee400b6c75702eeafac3e692ee53b1\" id:\"583b0f89eca8bc9f1d3f3d409e9ccdf967ee400b6c75702eeafac3e692ee53b1\" pid:3339 exited_at:{seconds:1755050355 nanos:811165957}"
Aug 13 01:59:15.814569 containerd[1568]: time="2025-08-13T01:59:15.814396304Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 13 01:59:15.823554 containerd[1568]: time="2025-08-13T01:59:15.823494961Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9b29225ae4627cba7616d0bad3f81bb5d423758d26fa52bf4b867beff5dc3562\" id:\"77305eb42546139d2db07bdbea8ebe6d89feee718261fe9e0faa95787954e31a\" pid:4631 exited_at:{seconds:1755050355 nanos:822825868}"
Aug 13 01:59:15.826725 containerd[1568]: time="2025-08-13T01:59:15.826667709Z" level=info msg="StopContainer for \"9b29225ae4627cba7616d0bad3f81bb5d423758d26fa52bf4b867beff5dc3562\" with timeout 2 (s)"
Aug 13 01:59:15.827339 containerd[1568]: time="2025-08-13T01:59:15.827233534Z" level=info msg="Stop container \"9b29225ae4627cba7616d0bad3f81bb5d423758d26fa52bf4b867beff5dc3562\" with signal terminated"
Aug 13 01:59:15.838211 systemd-networkd[1465]: lxc_health: Link DOWN
Aug 13 01:59:15.838231 systemd-networkd[1465]: lxc_health: Lost carrier
Aug 13 01:59:15.858360 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-583b0f89eca8bc9f1d3f3d409e9ccdf967ee400b6c75702eeafac3e692ee53b1-rootfs.mount: Deactivated successfully.
Aug 13 01:59:15.864636 systemd[1]: cri-containerd-9b29225ae4627cba7616d0bad3f81bb5d423758d26fa52bf4b867beff5dc3562.scope: Deactivated successfully.
Aug 13 01:59:15.865024 systemd[1]: cri-containerd-9b29225ae4627cba7616d0bad3f81bb5d423758d26fa52bf4b867beff5dc3562.scope: Consumed 8.512s CPU time, 123M memory peak, 144K read from disk, 13.3M written to disk.
Aug 13 01:59:15.868174 containerd[1568]: time="2025-08-13T01:59:15.867956319Z" level=info msg="received exit event container_id:\"9b29225ae4627cba7616d0bad3f81bb5d423758d26fa52bf4b867beff5dc3562\" id:\"9b29225ae4627cba7616d0bad3f81bb5d423758d26fa52bf4b867beff5dc3562\" pid:3408 exited_at:{seconds:1755050355 nanos:867012829}"
Aug 13 01:59:15.872778 containerd[1568]: time="2025-08-13T01:59:15.872325645Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9b29225ae4627cba7616d0bad3f81bb5d423758d26fa52bf4b867beff5dc3562\" id:\"9b29225ae4627cba7616d0bad3f81bb5d423758d26fa52bf4b867beff5dc3562\" pid:3408 exited_at:{seconds:1755050355 nanos:867012829}"
Aug 13 01:59:15.878138 containerd[1568]: time="2025-08-13T01:59:15.878083526Z" level=info msg="StopContainer for \"583b0f89eca8bc9f1d3f3d409e9ccdf967ee400b6c75702eeafac3e692ee53b1\" returns successfully"
Aug 13 01:59:15.880334 containerd[1568]: time="2025-08-13T01:59:15.880290864Z" level=info msg="StopPodSandbox for \"2325ea9042fad40778b5f318c34cdb3c0c9100ca35902a4beb596811d078fd4d\""
Aug 13 01:59:15.880614 containerd[1568]: time="2025-08-13T01:59:15.880583311Z" level=info msg="Container to stop \"583b0f89eca8bc9f1d3f3d409e9ccdf967ee400b6c75702eeafac3e692ee53b1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 01:59:15.892140 systemd[1]: cri-containerd-2325ea9042fad40778b5f318c34cdb3c0c9100ca35902a4beb596811d078fd4d.scope: Deactivated successfully.
Aug 13 01:59:15.895441 containerd[1568]: time="2025-08-13T01:59:15.895393500Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2325ea9042fad40778b5f318c34cdb3c0c9100ca35902a4beb596811d078fd4d\" id:\"2325ea9042fad40778b5f318c34cdb3c0c9100ca35902a4beb596811d078fd4d\" pid:3042 exit_status:137 exited_at:{seconds:1755050355 nanos:894981865}"
Aug 13 01:59:15.913961 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9b29225ae4627cba7616d0bad3f81bb5d423758d26fa52bf4b867beff5dc3562-rootfs.mount: Deactivated successfully.
Aug 13 01:59:15.924432 containerd[1568]: time="2025-08-13T01:59:15.923908040Z" level=info msg="StopContainer for \"9b29225ae4627cba7616d0bad3f81bb5d423758d26fa52bf4b867beff5dc3562\" returns successfully"
Aug 13 01:59:15.924637 containerd[1568]: time="2025-08-13T01:59:15.924568873Z" level=info msg="StopPodSandbox for \"1705078478ab458176399fbecaeb74a4c44162ef7d0f854ae8b4908c40c3c154\""
Aug 13 01:59:15.924671 containerd[1568]: time="2025-08-13T01:59:15.924635443Z" level=info msg="Container to stop \"5b1b5a39bfb41b67f785151979dae59c10f6fb0ef2c7a3914a18686719613fce\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 01:59:15.924671 containerd[1568]: time="2025-08-13T01:59:15.924647513Z" level=info msg="Container to stop \"5a3810be179362ad2dab86a252bd15e6de4d66c1039ed48f58345b9e8caeac13\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 01:59:15.924671 containerd[1568]: time="2025-08-13T01:59:15.924655643Z" level=info msg="Container to stop \"9b29225ae4627cba7616d0bad3f81bb5d423758d26fa52bf4b867beff5dc3562\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 01:59:15.924671 containerd[1568]: time="2025-08-13T01:59:15.924664073Z" level=info msg="Container to stop \"e3e5fa0a328229815ba5a8a3d89b50b1b064f45727f9cb876a1ea6f38bb46a9a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 01:59:15.924671 containerd[1568]: time="2025-08-13T01:59:15.924671762Z" level=info msg="Container to stop \"c49e26f1db3b9b0c9dc7ecbb7a6687f4d54c731d78263502996ebd19e0cb2394\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 01:59:15.934817 systemd[1]: cri-containerd-1705078478ab458176399fbecaeb74a4c44162ef7d0f854ae8b4908c40c3c154.scope: Deactivated successfully.
Aug 13 01:59:15.957332 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2325ea9042fad40778b5f318c34cdb3c0c9100ca35902a4beb596811d078fd4d-rootfs.mount: Deactivated successfully.
Aug 13 01:59:15.957922 containerd[1568]: time="2025-08-13T01:59:15.957881194Z" level=info msg="shim disconnected" id=2325ea9042fad40778b5f318c34cdb3c0c9100ca35902a4beb596811d078fd4d namespace=k8s.io
Aug 13 01:59:15.959543 containerd[1568]: time="2025-08-13T01:59:15.958374560Z" level=warning msg="cleaning up after shim disconnected" id=2325ea9042fad40778b5f318c34cdb3c0c9100ca35902a4beb596811d078fd4d namespace=k8s.io
Aug 13 01:59:15.960160 containerd[1568]: time="2025-08-13T01:59:15.960011113Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 01:59:15.980106 containerd[1568]: time="2025-08-13T01:59:15.980046019Z" level=info msg="received exit event sandbox_id:\"2325ea9042fad40778b5f318c34cdb3c0c9100ca35902a4beb596811d078fd4d\" exit_status:137 exited_at:{seconds:1755050355 nanos:894981865}"
Aug 13 01:59:15.982547 containerd[1568]: time="2025-08-13T01:59:15.980274267Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1705078478ab458176399fbecaeb74a4c44162ef7d0f854ae8b4908c40c3c154\" id:\"1705078478ab458176399fbecaeb74a4c44162ef7d0f854ae8b4908c40c3c154\" pid:2912 exit_status:137 exited_at:{seconds:1755050355 nanos:938184715}"
Aug 13 01:59:15.983642 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2325ea9042fad40778b5f318c34cdb3c0c9100ca35902a4beb596811d078fd4d-shm.mount: Deactivated successfully.
Aug 13 01:59:15.985148 containerd[1568]: time="2025-08-13T01:59:15.984982369Z" level=info msg="TearDown network for sandbox \"2325ea9042fad40778b5f318c34cdb3c0c9100ca35902a4beb596811d078fd4d\" successfully"
Aug 13 01:59:15.985148 containerd[1568]: time="2025-08-13T01:59:15.985014089Z" level=info msg="StopPodSandbox for \"2325ea9042fad40778b5f318c34cdb3c0c9100ca35902a4beb596811d078fd4d\" returns successfully"
Aug 13 01:59:15.995004 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1705078478ab458176399fbecaeb74a4c44162ef7d0f854ae8b4908c40c3c154-rootfs.mount: Deactivated successfully.
Aug 13 01:59:16.004924 containerd[1568]: time="2025-08-13T01:59:16.004809057Z" level=info msg="shim disconnected" id=1705078478ab458176399fbecaeb74a4c44162ef7d0f854ae8b4908c40c3c154 namespace=k8s.io
Aug 13 01:59:16.004924 containerd[1568]: time="2025-08-13T01:59:16.004869417Z" level=warning msg="cleaning up after shim disconnected" id=1705078478ab458176399fbecaeb74a4c44162ef7d0f854ae8b4908c40c3c154 namespace=k8s.io
Aug 13 01:59:16.004924 containerd[1568]: time="2025-08-13T01:59:16.004882567Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 01:59:16.006524 containerd[1568]: time="2025-08-13T01:59:16.006255472Z" level=info msg="received exit event sandbox_id:\"1705078478ab458176399fbecaeb74a4c44162ef7d0f854ae8b4908c40c3c154\" exit_status:137 exited_at:{seconds:1755050355 nanos:938184715}"
Aug 13 01:59:16.009098 containerd[1568]: time="2025-08-13T01:59:16.009066044Z" level=info msg="TearDown network for sandbox \"1705078478ab458176399fbecaeb74a4c44162ef7d0f854ae8b4908c40c3c154\" successfully"
Aug 13 01:59:16.009280 containerd[1568]: time="2025-08-13T01:59:16.009215503Z" level=info msg="StopPodSandbox for \"1705078478ab458176399fbecaeb74a4c44162ef7d0f854ae8b4908c40c3c154\" returns successfully"
Aug 13 01:59:16.086071 kubelet[2776]: I0813 01:59:16.086020 2776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e4cc5d6f-9acd-479d-8967-faf3e92c7070-etc-cni-netd\") pod \"e4cc5d6f-9acd-479d-8967-faf3e92c7070\" (UID: \"e4cc5d6f-9acd-479d-8967-faf3e92c7070\") "
Aug 13 01:59:16.086071 kubelet[2776]: I0813 01:59:16.086076 2776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e4cc5d6f-9acd-479d-8967-faf3e92c7070-lib-modules\") pod \"e4cc5d6f-9acd-479d-8967-faf3e92c7070\" (UID: \"e4cc5d6f-9acd-479d-8967-faf3e92c7070\") "
Aug 13 01:59:16.087672 kubelet[2776]: I0813 01:59:16.086108 2776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e4cc5d6f-9acd-479d-8967-faf3e92c7070-clustermesh-secrets\") pod \"e4cc5d6f-9acd-479d-8967-faf3e92c7070\" (UID: \"e4cc5d6f-9acd-479d-8967-faf3e92c7070\") "
Aug 13 01:59:16.087672 kubelet[2776]: I0813 01:59:16.086131 2776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwshb\" (UniqueName: \"kubernetes.io/projected/af06653a-d19b-41b2-aff8-7c6df089793a-kube-api-access-bwshb\") pod \"af06653a-d19b-41b2-aff8-7c6df089793a\" (UID: \"af06653a-d19b-41b2-aff8-7c6df089793a\") "
Aug 13 01:59:16.087672 kubelet[2776]: I0813 01:59:16.086151 2776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg8x5\" (UniqueName: \"kubernetes.io/projected/e4cc5d6f-9acd-479d-8967-faf3e92c7070-kube-api-access-mg8x5\") pod \"e4cc5d6f-9acd-479d-8967-faf3e92c7070\" (UID: \"e4cc5d6f-9acd-479d-8967-faf3e92c7070\") "
Aug 13 01:59:16.087672 kubelet[2776]: I0813 01:59:16.086167 2776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e4cc5d6f-9acd-479d-8967-faf3e92c7070-host-proc-sys-kernel\") pod \"e4cc5d6f-9acd-479d-8967-faf3e92c7070\" (UID: \"e4cc5d6f-9acd-479d-8967-faf3e92c7070\") "
Aug 13 01:59:16.087672 kubelet[2776]: I0813 01:59:16.086183 2776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e4cc5d6f-9acd-479d-8967-faf3e92c7070-cilium-run\") pod \"e4cc5d6f-9acd-479d-8967-faf3e92c7070\" (UID: \"e4cc5d6f-9acd-479d-8967-faf3e92c7070\") "
Aug 13 01:59:16.087672 kubelet[2776]: I0813 01:59:16.086205 2776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e4cc5d6f-9acd-479d-8967-faf3e92c7070-xtables-lock\") pod \"e4cc5d6f-9acd-479d-8967-faf3e92c7070\" (UID: \"e4cc5d6f-9acd-479d-8967-faf3e92c7070\") "
Aug 13 01:59:16.087894 kubelet[2776]: I0813 01:59:16.086222 2776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e4cc5d6f-9acd-479d-8967-faf3e92c7070-cilium-cgroup\") pod \"e4cc5d6f-9acd-479d-8967-faf3e92c7070\" (UID: \"e4cc5d6f-9acd-479d-8967-faf3e92c7070\") "
Aug 13 01:59:16.087894 kubelet[2776]: I0813 01:59:16.086247 2776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/af06653a-d19b-41b2-aff8-7c6df089793a-cilium-config-path\") pod \"af06653a-d19b-41b2-aff8-7c6df089793a\" (UID: \"af06653a-d19b-41b2-aff8-7c6df089793a\") "
Aug 13 01:59:16.087894 kubelet[2776]: I0813 01:59:16.086264 2776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e4cc5d6f-9acd-479d-8967-faf3e92c7070-hostproc\") pod \"e4cc5d6f-9acd-479d-8967-faf3e92c7070\" (UID: \"e4cc5d6f-9acd-479d-8967-faf3e92c7070\") "
Aug 13 01:59:16.087894 kubelet[2776]: I0813 01:59:16.086281 2776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e4cc5d6f-9acd-479d-8967-faf3e92c7070-hubble-tls\") pod \"e4cc5d6f-9acd-479d-8967-faf3e92c7070\" (UID: \"e4cc5d6f-9acd-479d-8967-faf3e92c7070\") "
Aug 13 01:59:16.087894 kubelet[2776]: I0813 01:59:16.086296 2776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e4cc5d6f-9acd-479d-8967-faf3e92c7070-bpf-maps\") pod \"e4cc5d6f-9acd-479d-8967-faf3e92c7070\" (UID: \"e4cc5d6f-9acd-479d-8967-faf3e92c7070\") "
Aug 13 01:59:16.087894 kubelet[2776]: I0813 01:59:16.086316 2776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e4cc5d6f-9acd-479d-8967-faf3e92c7070-cilium-config-path\") pod \"e4cc5d6f-9acd-479d-8967-faf3e92c7070\" (UID: \"e4cc5d6f-9acd-479d-8967-faf3e92c7070\") "
Aug 13 01:59:16.088060 kubelet[2776]: I0813 01:59:16.086332 2776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e4cc5d6f-9acd-479d-8967-faf3e92c7070-host-proc-sys-net\") pod \"e4cc5d6f-9acd-479d-8967-faf3e92c7070\" (UID: \"e4cc5d6f-9acd-479d-8967-faf3e92c7070\") "
Aug 13 01:59:16.088060 kubelet[2776]: I0813 01:59:16.086350 2776 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e4cc5d6f-9acd-479d-8967-faf3e92c7070-cni-path\") pod \"e4cc5d6f-9acd-479d-8967-faf3e92c7070\" (UID: \"e4cc5d6f-9acd-479d-8967-faf3e92c7070\") "
Aug 13 01:59:16.088060 kubelet[2776]: I0813 01:59:16.086427 2776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4cc5d6f-9acd-479d-8967-faf3e92c7070-cni-path" (OuterVolumeSpecName: "cni-path") pod "e4cc5d6f-9acd-479d-8967-faf3e92c7070" (UID: "e4cc5d6f-9acd-479d-8967-faf3e92c7070"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 01:59:16.088060 kubelet[2776]: I0813 01:59:16.086584 2776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4cc5d6f-9acd-479d-8967-faf3e92c7070-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e4cc5d6f-9acd-479d-8967-faf3e92c7070" (UID: "e4cc5d6f-9acd-479d-8967-faf3e92c7070"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 01:59:16.088060 kubelet[2776]: I0813 01:59:16.086610 2776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4cc5d6f-9acd-479d-8967-faf3e92c7070-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e4cc5d6f-9acd-479d-8967-faf3e92c7070" (UID: "e4cc5d6f-9acd-479d-8967-faf3e92c7070"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 01:59:16.088179 kubelet[2776]: I0813 01:59:16.087030 2776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4cc5d6f-9acd-479d-8967-faf3e92c7070-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e4cc5d6f-9acd-479d-8967-faf3e92c7070" (UID: "e4cc5d6f-9acd-479d-8967-faf3e92c7070"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 01:59:16.090973 kubelet[2776]: I0813 01:59:16.090868 2776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4cc5d6f-9acd-479d-8967-faf3e92c7070-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e4cc5d6f-9acd-479d-8967-faf3e92c7070" (UID: "e4cc5d6f-9acd-479d-8967-faf3e92c7070"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 01:59:16.090973 kubelet[2776]: I0813 01:59:16.090904 2776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4cc5d6f-9acd-479d-8967-faf3e92c7070-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e4cc5d6f-9acd-479d-8967-faf3e92c7070" (UID: "e4cc5d6f-9acd-479d-8967-faf3e92c7070"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 01:59:16.090973 kubelet[2776]: I0813 01:59:16.090921 2776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4cc5d6f-9acd-479d-8967-faf3e92c7070-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e4cc5d6f-9acd-479d-8967-faf3e92c7070" (UID: "e4cc5d6f-9acd-479d-8967-faf3e92c7070"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 01:59:16.091752 kubelet[2776]: I0813 01:59:16.091674 2776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af06653a-d19b-41b2-aff8-7c6df089793a-kube-api-access-bwshb" (OuterVolumeSpecName: "kube-api-access-bwshb") pod "af06653a-d19b-41b2-aff8-7c6df089793a" (UID: "af06653a-d19b-41b2-aff8-7c6df089793a"). InnerVolumeSpecName "kube-api-access-bwshb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 01:59:16.091752 kubelet[2776]: I0813 01:59:16.091717 2776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4cc5d6f-9acd-479d-8967-faf3e92c7070-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e4cc5d6f-9acd-479d-8967-faf3e92c7070" (UID: "e4cc5d6f-9acd-479d-8967-faf3e92c7070"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 01:59:16.091752 kubelet[2776]: I0813 01:59:16.091734 2776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4cc5d6f-9acd-479d-8967-faf3e92c7070-hostproc" (OuterVolumeSpecName: "hostproc") pod "e4cc5d6f-9acd-479d-8967-faf3e92c7070" (UID: "e4cc5d6f-9acd-479d-8967-faf3e92c7070"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 01:59:16.092942 kubelet[2776]: I0813 01:59:16.092897 2776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4cc5d6f-9acd-479d-8967-faf3e92c7070-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e4cc5d6f-9acd-479d-8967-faf3e92c7070" (UID: "e4cc5d6f-9acd-479d-8967-faf3e92c7070"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 01:59:16.097829 kubelet[2776]: I0813 01:59:16.097754 2776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af06653a-d19b-41b2-aff8-7c6df089793a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "af06653a-d19b-41b2-aff8-7c6df089793a" (UID: "af06653a-d19b-41b2-aff8-7c6df089793a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 01:59:16.098252 kubelet[2776]: I0813 01:59:16.098188 2776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4cc5d6f-9acd-479d-8967-faf3e92c7070-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e4cc5d6f-9acd-479d-8967-faf3e92c7070" (UID: "e4cc5d6f-9acd-479d-8967-faf3e92c7070"). InnerVolumeSpecName "clustermesh-secrets".
PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 01:59:16.098699 kubelet[2776]: I0813 01:59:16.098662 2776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4cc5d6f-9acd-479d-8967-faf3e92c7070-kube-api-access-mg8x5" (OuterVolumeSpecName: "kube-api-access-mg8x5") pod "e4cc5d6f-9acd-479d-8967-faf3e92c7070" (UID: "e4cc5d6f-9acd-479d-8967-faf3e92c7070"). InnerVolumeSpecName "kube-api-access-mg8x5". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 01:59:16.099767 kubelet[2776]: I0813 01:59:16.099737 2776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4cc5d6f-9acd-479d-8967-faf3e92c7070-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e4cc5d6f-9acd-479d-8967-faf3e92c7070" (UID: "e4cc5d6f-9acd-479d-8967-faf3e92c7070"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 01:59:16.099865 kubelet[2776]: I0813 01:59:16.099800 2776 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4cc5d6f-9acd-479d-8967-faf3e92c7070-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e4cc5d6f-9acd-479d-8967-faf3e92c7070" (UID: "e4cc5d6f-9acd-479d-8967-faf3e92c7070"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 01:59:16.187361 kubelet[2776]: I0813 01:59:16.187294 2776 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e4cc5d6f-9acd-479d-8967-faf3e92c7070-lib-modules\") on node \"172-236-112-106\" DevicePath \"\"" Aug 13 01:59:16.187361 kubelet[2776]: I0813 01:59:16.187343 2776 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e4cc5d6f-9acd-479d-8967-faf3e92c7070-clustermesh-secrets\") on node \"172-236-112-106\" DevicePath \"\"" Aug 13 01:59:16.187361 kubelet[2776]: I0813 01:59:16.187356 2776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwshb\" (UniqueName: \"kubernetes.io/projected/af06653a-d19b-41b2-aff8-7c6df089793a-kube-api-access-bwshb\") on node \"172-236-112-106\" DevicePath \"\"" Aug 13 01:59:16.187361 kubelet[2776]: I0813 01:59:16.187368 2776 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg8x5\" (UniqueName: \"kubernetes.io/projected/e4cc5d6f-9acd-479d-8967-faf3e92c7070-kube-api-access-mg8x5\") on node \"172-236-112-106\" DevicePath \"\"" Aug 13 01:59:16.187361 kubelet[2776]: I0813 01:59:16.187379 2776 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e4cc5d6f-9acd-479d-8967-faf3e92c7070-host-proc-sys-kernel\") on node \"172-236-112-106\" DevicePath \"\"" Aug 13 01:59:16.187361 kubelet[2776]: I0813 01:59:16.187389 2776 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e4cc5d6f-9acd-479d-8967-faf3e92c7070-cilium-run\") on node \"172-236-112-106\" DevicePath \"\"" Aug 13 01:59:16.187361 kubelet[2776]: I0813 01:59:16.187398 2776 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e4cc5d6f-9acd-479d-8967-faf3e92c7070-xtables-lock\") on node 
\"172-236-112-106\" DevicePath \"\"" Aug 13 01:59:16.187774 kubelet[2776]: I0813 01:59:16.187408 2776 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e4cc5d6f-9acd-479d-8967-faf3e92c7070-cilium-cgroup\") on node \"172-236-112-106\" DevicePath \"\"" Aug 13 01:59:16.187774 kubelet[2776]: I0813 01:59:16.187418 2776 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/af06653a-d19b-41b2-aff8-7c6df089793a-cilium-config-path\") on node \"172-236-112-106\" DevicePath \"\"" Aug 13 01:59:16.187774 kubelet[2776]: I0813 01:59:16.187428 2776 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e4cc5d6f-9acd-479d-8967-faf3e92c7070-bpf-maps\") on node \"172-236-112-106\" DevicePath \"\"" Aug 13 01:59:16.187774 kubelet[2776]: I0813 01:59:16.187436 2776 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e4cc5d6f-9acd-479d-8967-faf3e92c7070-hostproc\") on node \"172-236-112-106\" DevicePath \"\"" Aug 13 01:59:16.187774 kubelet[2776]: I0813 01:59:16.187444 2776 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e4cc5d6f-9acd-479d-8967-faf3e92c7070-hubble-tls\") on node \"172-236-112-106\" DevicePath \"\"" Aug 13 01:59:16.187774 kubelet[2776]: I0813 01:59:16.187453 2776 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e4cc5d6f-9acd-479d-8967-faf3e92c7070-cilium-config-path\") on node \"172-236-112-106\" DevicePath \"\"" Aug 13 01:59:16.187774 kubelet[2776]: I0813 01:59:16.187462 2776 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e4cc5d6f-9acd-479d-8967-faf3e92c7070-host-proc-sys-net\") on node \"172-236-112-106\" DevicePath \"\"" Aug 13 01:59:16.187774 
kubelet[2776]: I0813 01:59:16.187515 2776 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e4cc5d6f-9acd-479d-8967-faf3e92c7070-cni-path\") on node \"172-236-112-106\" DevicePath \"\"" Aug 13 01:59:16.187974 kubelet[2776]: I0813 01:59:16.187526 2776 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e4cc5d6f-9acd-479d-8967-faf3e92c7070-etc-cni-netd\") on node \"172-236-112-106\" DevicePath \"\"" Aug 13 01:59:16.494803 systemd[1]: Removed slice kubepods-burstable-pode4cc5d6f_9acd_479d_8967_faf3e92c7070.slice - libcontainer container kubepods-burstable-pode4cc5d6f_9acd_479d_8967_faf3e92c7070.slice. Aug 13 01:59:16.494949 systemd[1]: kubepods-burstable-pode4cc5d6f_9acd_479d_8967_faf3e92c7070.slice: Consumed 8.727s CPU time, 123.5M memory peak, 144K read from disk, 13.3M written to disk. Aug 13 01:59:16.498167 systemd[1]: Removed slice kubepods-besteffort-podaf06653a_d19b_41b2_aff8_7c6df089793a.slice - libcontainer container kubepods-besteffort-podaf06653a_d19b_41b2_aff8_7c6df089793a.slice. 
Aug 13 01:59:16.521908 kubelet[2776]: I0813 01:59:16.521744 2776 scope.go:117] "RemoveContainer" containerID="583b0f89eca8bc9f1d3f3d409e9ccdf967ee400b6c75702eeafac3e692ee53b1" Aug 13 01:59:16.535479 containerd[1568]: time="2025-08-13T01:59:16.534688963Z" level=info msg="RemoveContainer for \"583b0f89eca8bc9f1d3f3d409e9ccdf967ee400b6c75702eeafac3e692ee53b1\"" Aug 13 01:59:16.541240 containerd[1568]: time="2025-08-13T01:59:16.541205647Z" level=info msg="RemoveContainer for \"583b0f89eca8bc9f1d3f3d409e9ccdf967ee400b6c75702eeafac3e692ee53b1\" returns successfully" Aug 13 01:59:16.541895 kubelet[2776]: I0813 01:59:16.541822 2776 scope.go:117] "RemoveContainer" containerID="583b0f89eca8bc9f1d3f3d409e9ccdf967ee400b6c75702eeafac3e692ee53b1" Aug 13 01:59:16.542639 containerd[1568]: time="2025-08-13T01:59:16.542111667Z" level=error msg="ContainerStatus for \"583b0f89eca8bc9f1d3f3d409e9ccdf967ee400b6c75702eeafac3e692ee53b1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"583b0f89eca8bc9f1d3f3d409e9ccdf967ee400b6c75702eeafac3e692ee53b1\": not found" Aug 13 01:59:16.544740 kubelet[2776]: E0813 01:59:16.544699 2776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"583b0f89eca8bc9f1d3f3d409e9ccdf967ee400b6c75702eeafac3e692ee53b1\": not found" containerID="583b0f89eca8bc9f1d3f3d409e9ccdf967ee400b6c75702eeafac3e692ee53b1" Aug 13 01:59:16.545597 kubelet[2776]: I0813 01:59:16.544748 2776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"583b0f89eca8bc9f1d3f3d409e9ccdf967ee400b6c75702eeafac3e692ee53b1"} err="failed to get container status \"583b0f89eca8bc9f1d3f3d409e9ccdf967ee400b6c75702eeafac3e692ee53b1\": rpc error: code = NotFound desc = an error occurred when try to find container \"583b0f89eca8bc9f1d3f3d409e9ccdf967ee400b6c75702eeafac3e692ee53b1\": not found" Aug 13 01:59:16.546387 
kubelet[2776]: I0813 01:59:16.546355 2776 scope.go:117] "RemoveContainer" containerID="9b29225ae4627cba7616d0bad3f81bb5d423758d26fa52bf4b867beff5dc3562" Aug 13 01:59:16.549455 containerd[1568]: time="2025-08-13T01:59:16.549427082Z" level=info msg="RemoveContainer for \"9b29225ae4627cba7616d0bad3f81bb5d423758d26fa52bf4b867beff5dc3562\"" Aug 13 01:59:16.557388 containerd[1568]: time="2025-08-13T01:59:16.557261462Z" level=info msg="RemoveContainer for \"9b29225ae4627cba7616d0bad3f81bb5d423758d26fa52bf4b867beff5dc3562\" returns successfully" Aug 13 01:59:16.558248 kubelet[2776]: I0813 01:59:16.558179 2776 scope.go:117] "RemoveContainer" containerID="5a3810be179362ad2dab86a252bd15e6de4d66c1039ed48f58345b9e8caeac13" Aug 13 01:59:16.562105 containerd[1568]: time="2025-08-13T01:59:16.562017284Z" level=info msg="RemoveContainer for \"5a3810be179362ad2dab86a252bd15e6de4d66c1039ed48f58345b9e8caeac13\"" Aug 13 01:59:16.568855 containerd[1568]: time="2025-08-13T01:59:16.568746866Z" level=info msg="RemoveContainer for \"5a3810be179362ad2dab86a252bd15e6de4d66c1039ed48f58345b9e8caeac13\" returns successfully" Aug 13 01:59:16.569209 kubelet[2776]: I0813 01:59:16.568953 2776 scope.go:117] "RemoveContainer" containerID="c49e26f1db3b9b0c9dc7ecbb7a6687f4d54c731d78263502996ebd19e0cb2394" Aug 13 01:59:16.570794 containerd[1568]: time="2025-08-13T01:59:16.570746315Z" level=info msg="RemoveContainer for \"c49e26f1db3b9b0c9dc7ecbb7a6687f4d54c731d78263502996ebd19e0cb2394\"" Aug 13 01:59:16.573788 containerd[1568]: time="2025-08-13T01:59:16.573761704Z" level=info msg="RemoveContainer for \"c49e26f1db3b9b0c9dc7ecbb7a6687f4d54c731d78263502996ebd19e0cb2394\" returns successfully" Aug 13 01:59:16.573934 kubelet[2776]: I0813 01:59:16.573909 2776 scope.go:117] "RemoveContainer" containerID="e3e5fa0a328229815ba5a8a3d89b50b1b064f45727f9cb876a1ea6f38bb46a9a" Aug 13 01:59:16.575160 containerd[1568]: time="2025-08-13T01:59:16.575128691Z" level=info msg="RemoveContainer for 
\"e3e5fa0a328229815ba5a8a3d89b50b1b064f45727f9cb876a1ea6f38bb46a9a\"" Aug 13 01:59:16.577346 containerd[1568]: time="2025-08-13T01:59:16.577321208Z" level=info msg="RemoveContainer for \"e3e5fa0a328229815ba5a8a3d89b50b1b064f45727f9cb876a1ea6f38bb46a9a\" returns successfully" Aug 13 01:59:16.577528 kubelet[2776]: I0813 01:59:16.577442 2776 scope.go:117] "RemoveContainer" containerID="5b1b5a39bfb41b67f785151979dae59c10f6fb0ef2c7a3914a18686719613fce" Aug 13 01:59:16.578801 containerd[1568]: time="2025-08-13T01:59:16.578772784Z" level=info msg="RemoveContainer for \"5b1b5a39bfb41b67f785151979dae59c10f6fb0ef2c7a3914a18686719613fce\"" Aug 13 01:59:16.581143 containerd[1568]: time="2025-08-13T01:59:16.581107030Z" level=info msg="RemoveContainer for \"5b1b5a39bfb41b67f785151979dae59c10f6fb0ef2c7a3914a18686719613fce\" returns successfully" Aug 13 01:59:16.581306 kubelet[2776]: I0813 01:59:16.581271 2776 scope.go:117] "RemoveContainer" containerID="9b29225ae4627cba7616d0bad3f81bb5d423758d26fa52bf4b867beff5dc3562" Aug 13 01:59:16.581460 containerd[1568]: time="2025-08-13T01:59:16.581425226Z" level=error msg="ContainerStatus for \"9b29225ae4627cba7616d0bad3f81bb5d423758d26fa52bf4b867beff5dc3562\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9b29225ae4627cba7616d0bad3f81bb5d423758d26fa52bf4b867beff5dc3562\": not found" Aug 13 01:59:16.581659 kubelet[2776]: E0813 01:59:16.581638 2776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9b29225ae4627cba7616d0bad3f81bb5d423758d26fa52bf4b867beff5dc3562\": not found" containerID="9b29225ae4627cba7616d0bad3f81bb5d423758d26fa52bf4b867beff5dc3562" Aug 13 01:59:16.581728 kubelet[2776]: I0813 01:59:16.581668 2776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9b29225ae4627cba7616d0bad3f81bb5d423758d26fa52bf4b867beff5dc3562"} err="failed to get 
container status \"9b29225ae4627cba7616d0bad3f81bb5d423758d26fa52bf4b867beff5dc3562\": rpc error: code = NotFound desc = an error occurred when try to find container \"9b29225ae4627cba7616d0bad3f81bb5d423758d26fa52bf4b867beff5dc3562\": not found" Aug 13 01:59:16.581728 kubelet[2776]: I0813 01:59:16.581691 2776 scope.go:117] "RemoveContainer" containerID="5a3810be179362ad2dab86a252bd15e6de4d66c1039ed48f58345b9e8caeac13" Aug 13 01:59:16.581906 containerd[1568]: time="2025-08-13T01:59:16.581884561Z" level=error msg="ContainerStatus for \"5a3810be179362ad2dab86a252bd15e6de4d66c1039ed48f58345b9e8caeac13\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5a3810be179362ad2dab86a252bd15e6de4d66c1039ed48f58345b9e8caeac13\": not found" Aug 13 01:59:16.582072 kubelet[2776]: E0813 01:59:16.582034 2776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5a3810be179362ad2dab86a252bd15e6de4d66c1039ed48f58345b9e8caeac13\": not found" containerID="5a3810be179362ad2dab86a252bd15e6de4d66c1039ed48f58345b9e8caeac13" Aug 13 01:59:16.582105 kubelet[2776]: I0813 01:59:16.582080 2776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5a3810be179362ad2dab86a252bd15e6de4d66c1039ed48f58345b9e8caeac13"} err="failed to get container status \"5a3810be179362ad2dab86a252bd15e6de4d66c1039ed48f58345b9e8caeac13\": rpc error: code = NotFound desc = an error occurred when try to find container \"5a3810be179362ad2dab86a252bd15e6de4d66c1039ed48f58345b9e8caeac13\": not found" Aug 13 01:59:16.582105 kubelet[2776]: I0813 01:59:16.582096 2776 scope.go:117] "RemoveContainer" containerID="c49e26f1db3b9b0c9dc7ecbb7a6687f4d54c731d78263502996ebd19e0cb2394" Aug 13 01:59:16.582299 containerd[1568]: time="2025-08-13T01:59:16.582276508Z" level=error msg="ContainerStatus for 
\"c49e26f1db3b9b0c9dc7ecbb7a6687f4d54c731d78263502996ebd19e0cb2394\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c49e26f1db3b9b0c9dc7ecbb7a6687f4d54c731d78263502996ebd19e0cb2394\": not found" Aug 13 01:59:16.582377 kubelet[2776]: E0813 01:59:16.582359 2776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c49e26f1db3b9b0c9dc7ecbb7a6687f4d54c731d78263502996ebd19e0cb2394\": not found" containerID="c49e26f1db3b9b0c9dc7ecbb7a6687f4d54c731d78263502996ebd19e0cb2394" Aug 13 01:59:16.582377 kubelet[2776]: I0813 01:59:16.582378 2776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c49e26f1db3b9b0c9dc7ecbb7a6687f4d54c731d78263502996ebd19e0cb2394"} err="failed to get container status \"c49e26f1db3b9b0c9dc7ecbb7a6687f4d54c731d78263502996ebd19e0cb2394\": rpc error: code = NotFound desc = an error occurred when try to find container \"c49e26f1db3b9b0c9dc7ecbb7a6687f4d54c731d78263502996ebd19e0cb2394\": not found" Aug 13 01:59:16.582458 kubelet[2776]: I0813 01:59:16.582390 2776 scope.go:117] "RemoveContainer" containerID="e3e5fa0a328229815ba5a8a3d89b50b1b064f45727f9cb876a1ea6f38bb46a9a" Aug 13 01:59:16.582620 containerd[1568]: time="2025-08-13T01:59:16.582558935Z" level=error msg="ContainerStatus for \"e3e5fa0a328229815ba5a8a3d89b50b1b064f45727f9cb876a1ea6f38bb46a9a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e3e5fa0a328229815ba5a8a3d89b50b1b064f45727f9cb876a1ea6f38bb46a9a\": not found" Aug 13 01:59:16.586486 kubelet[2776]: E0813 01:59:16.584538 2776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e3e5fa0a328229815ba5a8a3d89b50b1b064f45727f9cb876a1ea6f38bb46a9a\": not found" 
containerID="e3e5fa0a328229815ba5a8a3d89b50b1b064f45727f9cb876a1ea6f38bb46a9a" Aug 13 01:59:16.586486 kubelet[2776]: I0813 01:59:16.584584 2776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e3e5fa0a328229815ba5a8a3d89b50b1b064f45727f9cb876a1ea6f38bb46a9a"} err="failed to get container status \"e3e5fa0a328229815ba5a8a3d89b50b1b064f45727f9cb876a1ea6f38bb46a9a\": rpc error: code = NotFound desc = an error occurred when try to find container \"e3e5fa0a328229815ba5a8a3d89b50b1b064f45727f9cb876a1ea6f38bb46a9a\": not found" Aug 13 01:59:16.586486 kubelet[2776]: I0813 01:59:16.584599 2776 scope.go:117] "RemoveContainer" containerID="5b1b5a39bfb41b67f785151979dae59c10f6fb0ef2c7a3914a18686719613fce" Aug 13 01:59:16.586486 kubelet[2776]: E0813 01:59:16.585705 2776 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5b1b5a39bfb41b67f785151979dae59c10f6fb0ef2c7a3914a18686719613fce\": not found" containerID="5b1b5a39bfb41b67f785151979dae59c10f6fb0ef2c7a3914a18686719613fce" Aug 13 01:59:16.586486 kubelet[2776]: I0813 01:59:16.585721 2776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5b1b5a39bfb41b67f785151979dae59c10f6fb0ef2c7a3914a18686719613fce"} err="failed to get container status \"5b1b5a39bfb41b67f785151979dae59c10f6fb0ef2c7a3914a18686719613fce\": rpc error: code = NotFound desc = an error occurred when try to find container \"5b1b5a39bfb41b67f785151979dae59c10f6fb0ef2c7a3914a18686719613fce\": not found" Aug 13 01:59:16.586647 containerd[1568]: time="2025-08-13T01:59:16.585576964Z" level=error msg="ContainerStatus for \"5b1b5a39bfb41b67f785151979dae59c10f6fb0ef2c7a3914a18686719613fce\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5b1b5a39bfb41b67f785151979dae59c10f6fb0ef2c7a3914a18686719613fce\": not found" Aug 13 
01:59:16.858532 systemd[1]: var-lib-kubelet-pods-af06653a\x2dd19b\x2d41b2\x2daff8\x2d7c6df089793a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbwshb.mount: Deactivated successfully. Aug 13 01:59:16.859161 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1705078478ab458176399fbecaeb74a4c44162ef7d0f854ae8b4908c40c3c154-shm.mount: Deactivated successfully. Aug 13 01:59:16.859260 systemd[1]: var-lib-kubelet-pods-e4cc5d6f\x2d9acd\x2d479d\x2d8967\x2dfaf3e92c7070-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmg8x5.mount: Deactivated successfully. Aug 13 01:59:16.859335 systemd[1]: var-lib-kubelet-pods-e4cc5d6f\x2d9acd\x2d479d\x2d8967\x2dfaf3e92c7070-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 13 01:59:16.859416 systemd[1]: var-lib-kubelet-pods-e4cc5d6f\x2d9acd\x2d479d\x2d8967\x2dfaf3e92c7070-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 13 01:59:17.678069 kubelet[2776]: E0813 01:59:17.677975 2776 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 13 01:59:17.768502 sshd[4601]: Connection closed by 147.75.109.163 port 34516 Aug 13 01:59:17.768331 sshd-session[4599]: pam_unix(sshd:session): session closed for user core Aug 13 01:59:17.774836 systemd[1]: sshd@57-172.236.112.106:22-147.75.109.163:34516.service: Deactivated successfully. Aug 13 01:59:17.777622 systemd[1]: session-58.scope: Deactivated successfully. Aug 13 01:59:17.779731 systemd-logind[1525]: Session 58 logged out. Waiting for processes to exit. Aug 13 01:59:17.781745 systemd-logind[1525]: Removed session 58. Aug 13 01:59:17.834062 systemd[1]: Started sshd@59-172.236.112.106:22-147.75.109.163:34524.service - OpenSSH per-connection server daemon (147.75.109.163:34524). 
Aug 13 01:59:17.866854 sshd[4609]: maximum authentication attempts exceeded for root from 220.178.8.154 port 33236 ssh2 [preauth] Aug 13 01:59:17.867303 sshd[4609]: Disconnecting authenticating user root 220.178.8.154 port 33236: Too many authentication failures [preauth] Aug 13 01:59:17.869386 systemd[1]: sshd@58-172.236.112.106:22-220.178.8.154:33236.service: Deactivated successfully. Aug 13 01:59:18.178396 sshd[4760]: Accepted publickey for core from 147.75.109.163 port 34524 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:59:18.180363 sshd-session[4760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:59:18.187622 systemd-logind[1525]: New session 59 of user core. Aug 13 01:59:18.196716 systemd[1]: Started session-59.scope - Session 59 of User core. Aug 13 01:59:18.262708 systemd[1]: Started sshd@60-172.236.112.106:22-220.178.8.154:34048.service - OpenSSH per-connection server daemon (220.178.8.154:34048). Aug 13 01:59:18.489016 kubelet[2776]: I0813 01:59:18.488824 2776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af06653a-d19b-41b2-aff8-7c6df089793a" path="/var/lib/kubelet/pods/af06653a-d19b-41b2-aff8-7c6df089793a/volumes" Aug 13 01:59:18.490721 kubelet[2776]: I0813 01:59:18.490640 2776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4cc5d6f-9acd-479d-8967-faf3e92c7070" path="/var/lib/kubelet/pods/e4cc5d6f-9acd-479d-8967-faf3e92c7070/volumes" Aug 13 01:59:18.815646 kubelet[2776]: E0813 01:59:18.814546 2776 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e4cc5d6f-9acd-479d-8967-faf3e92c7070" containerName="clean-cilium-state" Aug 13 01:59:18.815646 kubelet[2776]: E0813 01:59:18.814611 2776 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e4cc5d6f-9acd-479d-8967-faf3e92c7070" containerName="cilium-agent" Aug 13 01:59:18.815646 kubelet[2776]: E0813 01:59:18.814620 2776 cpu_manager.go:395] "RemoveStaleState: removing 
container" podUID="e4cc5d6f-9acd-479d-8967-faf3e92c7070" containerName="mount-cgroup" Aug 13 01:59:18.815646 kubelet[2776]: E0813 01:59:18.814627 2776 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e4cc5d6f-9acd-479d-8967-faf3e92c7070" containerName="apply-sysctl-overwrites" Aug 13 01:59:18.815646 kubelet[2776]: E0813 01:59:18.814634 2776 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e4cc5d6f-9acd-479d-8967-faf3e92c7070" containerName="mount-bpf-fs" Aug 13 01:59:18.815646 kubelet[2776]: E0813 01:59:18.814640 2776 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="af06653a-d19b-41b2-aff8-7c6df089793a" containerName="cilium-operator" Aug 13 01:59:18.815646 kubelet[2776]: I0813 01:59:18.814685 2776 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4cc5d6f-9acd-479d-8967-faf3e92c7070" containerName="cilium-agent" Aug 13 01:59:18.815646 kubelet[2776]: I0813 01:59:18.814693 2776 memory_manager.go:354] "RemoveStaleState removing state" podUID="af06653a-d19b-41b2-aff8-7c6df089793a" containerName="cilium-operator" Aug 13 01:59:18.825660 systemd[1]: Created slice kubepods-burstable-pod8b8e015b_4c56_4864_8fbf_d93312830e64.slice - libcontainer container kubepods-burstable-pod8b8e015b_4c56_4864_8fbf_d93312830e64.slice. Aug 13 01:59:18.841148 sshd[4764]: Connection closed by 147.75.109.163 port 34524 Aug 13 01:59:18.841984 sshd-session[4760]: pam_unix(sshd:session): session closed for user core Aug 13 01:59:18.848823 systemd-logind[1525]: Session 59 logged out. Waiting for processes to exit. Aug 13 01:59:18.853313 systemd[1]: sshd@59-172.236.112.106:22-147.75.109.163:34524.service: Deactivated successfully. Aug 13 01:59:18.858207 systemd[1]: session-59.scope: Deactivated successfully. Aug 13 01:59:18.863596 systemd-logind[1525]: Removed session 59. Aug 13 01:59:18.913966 systemd[1]: Started sshd@61-172.236.112.106:22-147.75.109.163:45064.service - OpenSSH per-connection server daemon (147.75.109.163:45064). 
Aug 13 01:59:19.001773 kubelet[2776]: I0813 01:59:19.001289 2776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8b8e015b-4c56-4864-8fbf-d93312830e64-cilium-config-path\") pod \"cilium-gjxt8\" (UID: \"8b8e015b-4c56-4864-8fbf-d93312830e64\") " pod="kube-system/cilium-gjxt8" Aug 13 01:59:19.001773 kubelet[2776]: I0813 01:59:19.001372 2776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56pd2\" (UniqueName: \"kubernetes.io/projected/8b8e015b-4c56-4864-8fbf-d93312830e64-kube-api-access-56pd2\") pod \"cilium-gjxt8\" (UID: \"8b8e015b-4c56-4864-8fbf-d93312830e64\") " pod="kube-system/cilium-gjxt8" Aug 13 01:59:19.001773 kubelet[2776]: I0813 01:59:19.001419 2776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8b8e015b-4c56-4864-8fbf-d93312830e64-cilium-run\") pod \"cilium-gjxt8\" (UID: \"8b8e015b-4c56-4864-8fbf-d93312830e64\") " pod="kube-system/cilium-gjxt8" Aug 13 01:59:19.001773 kubelet[2776]: I0813 01:59:19.001449 2776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8b8e015b-4c56-4864-8fbf-d93312830e64-cni-path\") pod \"cilium-gjxt8\" (UID: \"8b8e015b-4c56-4864-8fbf-d93312830e64\") " pod="kube-system/cilium-gjxt8" Aug 13 01:59:19.001773 kubelet[2776]: I0813 01:59:19.001492 2776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8b8e015b-4c56-4864-8fbf-d93312830e64-clustermesh-secrets\") pod \"cilium-gjxt8\" (UID: \"8b8e015b-4c56-4864-8fbf-d93312830e64\") " pod="kube-system/cilium-gjxt8" Aug 13 01:59:19.001773 kubelet[2776]: I0813 01:59:19.001534 2776 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8b8e015b-4c56-4864-8fbf-d93312830e64-hubble-tls\") pod \"cilium-gjxt8\" (UID: \"8b8e015b-4c56-4864-8fbf-d93312830e64\") " pod="kube-system/cilium-gjxt8"
Aug 13 01:59:19.002103 kubelet[2776]: I0813 01:59:19.001551 2776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8b8e015b-4c56-4864-8fbf-d93312830e64-cilium-cgroup\") pod \"cilium-gjxt8\" (UID: \"8b8e015b-4c56-4864-8fbf-d93312830e64\") " pod="kube-system/cilium-gjxt8"
Aug 13 01:59:19.002103 kubelet[2776]: I0813 01:59:19.001568 2776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8b8e015b-4c56-4864-8fbf-d93312830e64-bpf-maps\") pod \"cilium-gjxt8\" (UID: \"8b8e015b-4c56-4864-8fbf-d93312830e64\") " pod="kube-system/cilium-gjxt8"
Aug 13 01:59:19.002103 kubelet[2776]: I0813 01:59:19.001585 2776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8b8e015b-4c56-4864-8fbf-d93312830e64-etc-cni-netd\") pod \"cilium-gjxt8\" (UID: \"8b8e015b-4c56-4864-8fbf-d93312830e64\") " pod="kube-system/cilium-gjxt8"
Aug 13 01:59:19.002103 kubelet[2776]: I0813 01:59:19.001602 2776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8b8e015b-4c56-4864-8fbf-d93312830e64-lib-modules\") pod \"cilium-gjxt8\" (UID: \"8b8e015b-4c56-4864-8fbf-d93312830e64\") " pod="kube-system/cilium-gjxt8"
Aug 13 01:59:19.002103 kubelet[2776]: I0813 01:59:19.001621 2776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8b8e015b-4c56-4864-8fbf-d93312830e64-xtables-lock\") pod \"cilium-gjxt8\" (UID: \"8b8e015b-4c56-4864-8fbf-d93312830e64\") " pod="kube-system/cilium-gjxt8"
Aug 13 01:59:19.002103 kubelet[2776]: I0813 01:59:19.001641 2776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8b8e015b-4c56-4864-8fbf-d93312830e64-cilium-ipsec-secrets\") pod \"cilium-gjxt8\" (UID: \"8b8e015b-4c56-4864-8fbf-d93312830e64\") " pod="kube-system/cilium-gjxt8"
Aug 13 01:59:19.002234 kubelet[2776]: I0813 01:59:19.001661 2776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8b8e015b-4c56-4864-8fbf-d93312830e64-host-proc-sys-kernel\") pod \"cilium-gjxt8\" (UID: \"8b8e015b-4c56-4864-8fbf-d93312830e64\") " pod="kube-system/cilium-gjxt8"
Aug 13 01:59:19.002234 kubelet[2776]: I0813 01:59:19.001684 2776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8b8e015b-4c56-4864-8fbf-d93312830e64-host-proc-sys-net\") pod \"cilium-gjxt8\" (UID: \"8b8e015b-4c56-4864-8fbf-d93312830e64\") " pod="kube-system/cilium-gjxt8"
Aug 13 01:59:19.002234 kubelet[2776]: I0813 01:59:19.001699 2776 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8b8e015b-4c56-4864-8fbf-d93312830e64-hostproc\") pod \"cilium-gjxt8\" (UID: \"8b8e015b-4c56-4864-8fbf-d93312830e64\") " pod="kube-system/cilium-gjxt8"
Aug 13 01:59:19.264614 sshd[4778]: Accepted publickey for core from 147.75.109.163 port 45064 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao
Aug 13 01:59:19.266885 sshd-session[4778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:59:19.273837 systemd-logind[1525]: New session 60 of user core.
Aug 13 01:59:19.279738 systemd[1]: Started session-60.scope - Session 60 of User core.
Aug 13 01:59:19.432525 kubelet[2776]: E0813 01:59:19.431701 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Aug 13 01:59:19.434045 containerd[1568]: time="2025-08-13T01:59:19.433968841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gjxt8,Uid:8b8e015b-4c56-4864-8fbf-d93312830e64,Namespace:kube-system,Attempt:0,}"
Aug 13 01:59:19.453912 containerd[1568]: time="2025-08-13T01:59:19.453805698Z" level=info msg="connecting to shim eaf618d6b395b0a824576842de77aadda10be39ce1a6a543c00a3939bee9b6a6" address="unix:///run/containerd/s/e41e096e97fdc290b32f55fcc9c5791491628c7362bdf7cd5ed47141f4e7bb74" namespace=k8s.io protocol=ttrpc version=3
Aug 13 01:59:19.482646 systemd[1]: Started cri-containerd-eaf618d6b395b0a824576842de77aadda10be39ce1a6a543c00a3939bee9b6a6.scope - libcontainer container eaf618d6b395b0a824576842de77aadda10be39ce1a6a543c00a3939bee9b6a6.
Aug 13 01:59:19.513319 sshd[4784]: Connection closed by 147.75.109.163 port 45064
Aug 13 01:59:19.516386 containerd[1568]: time="2025-08-13T01:59:19.513082058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gjxt8,Uid:8b8e015b-4c56-4864-8fbf-d93312830e64,Namespace:kube-system,Attempt:0,} returns sandbox id \"eaf618d6b395b0a824576842de77aadda10be39ce1a6a543c00a3939bee9b6a6\""
Aug 13 01:59:19.517179 kubelet[2776]: E0813 01:59:19.514336 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Aug 13 01:59:19.513907 sshd-session[4778]: pam_unix(sshd:session): session closed for user core
Aug 13 01:59:19.518326 containerd[1568]: time="2025-08-13T01:59:19.517646651Z" level=info msg="CreateContainer within sandbox \"eaf618d6b395b0a824576842de77aadda10be39ce1a6a543c00a3939bee9b6a6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Aug 13 01:59:19.524972 systemd[1]: sshd@61-172.236.112.106:22-147.75.109.163:45064.service: Deactivated successfully.
Aug 13 01:59:19.529221 containerd[1568]: time="2025-08-13T01:59:19.529040574Z" level=info msg="Container de20f8c95a9b93be84a210ca0a8f77cd129a79a121fe973fd256c29f67aaa5ed: CDI devices from CRI Config.CDIDevices: []"
Aug 13 01:59:19.530162 systemd[1]: session-60.scope: Deactivated successfully.
Aug 13 01:59:19.535537 systemd-logind[1525]: Session 60 logged out. Waiting for processes to exit.
Aug 13 01:59:19.537583 containerd[1568]: time="2025-08-13T01:59:19.536574186Z" level=info msg="CreateContainer within sandbox \"eaf618d6b395b0a824576842de77aadda10be39ce1a6a543c00a3939bee9b6a6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"de20f8c95a9b93be84a210ca0a8f77cd129a79a121fe973fd256c29f67aaa5ed\""
Aug 13 01:59:19.538268 containerd[1568]: time="2025-08-13T01:59:19.538239920Z" level=info msg="StartContainer for \"de20f8c95a9b93be84a210ca0a8f77cd129a79a121fe973fd256c29f67aaa5ed\""
Aug 13 01:59:19.538650 systemd-logind[1525]: Removed session 60.
Aug 13 01:59:19.540134 containerd[1568]: time="2025-08-13T01:59:19.540065011Z" level=info msg="connecting to shim de20f8c95a9b93be84a210ca0a8f77cd129a79a121fe973fd256c29f67aaa5ed" address="unix:///run/containerd/s/e41e096e97fdc290b32f55fcc9c5791491628c7362bdf7cd5ed47141f4e7bb74" protocol=ttrpc version=3
Aug 13 01:59:19.570706 systemd[1]: Started cri-containerd-de20f8c95a9b93be84a210ca0a8f77cd129a79a121fe973fd256c29f67aaa5ed.scope - libcontainer container de20f8c95a9b93be84a210ca0a8f77cd129a79a121fe973fd256c29f67aaa5ed.
Aug 13 01:59:19.580019 systemd[1]: Started sshd@62-172.236.112.106:22-147.75.109.163:45072.service - OpenSSH per-connection server daemon (147.75.109.163:45072).
Aug 13 01:59:19.623790 containerd[1568]: time="2025-08-13T01:59:19.623720011Z" level=info msg="StartContainer for \"de20f8c95a9b93be84a210ca0a8f77cd129a79a121fe973fd256c29f67aaa5ed\" returns successfully"
Aug 13 01:59:19.637591 systemd[1]: cri-containerd-de20f8c95a9b93be84a210ca0a8f77cd129a79a121fe973fd256c29f67aaa5ed.scope: Deactivated successfully.
Aug 13 01:59:19.641329 containerd[1568]: time="2025-08-13T01:59:19.641263710Z" level=info msg="received exit event container_id:\"de20f8c95a9b93be84a210ca0a8f77cd129a79a121fe973fd256c29f67aaa5ed\" id:\"de20f8c95a9b93be84a210ca0a8f77cd129a79a121fe973fd256c29f67aaa5ed\" pid:4849 exited_at:{seconds:1755050359 nanos:640672206}"
Aug 13 01:59:19.642008 containerd[1568]: time="2025-08-13T01:59:19.641984833Z" level=info msg="TaskExit event in podsandbox handler container_id:\"de20f8c95a9b93be84a210ca0a8f77cd129a79a121fe973fd256c29f67aaa5ed\" id:\"de20f8c95a9b93be84a210ca0a8f77cd129a79a121fe973fd256c29f67aaa5ed\" pid:4849 exited_at:{seconds:1755050359 nanos:640672206}"
Aug 13 01:59:19.920625 sshd[4852]: Accepted publickey for core from 147.75.109.163 port 45072 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao
Aug 13 01:59:19.922598 sshd-session[4852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:59:19.929420 systemd-logind[1525]: New session 61 of user core.
Aug 13 01:59:19.935669 systemd[1]: Started session-61.scope - Session 61 of User core.
Aug 13 01:59:20.563595 kubelet[2776]: E0813 01:59:20.563509 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Aug 13 01:59:20.566790 containerd[1568]: time="2025-08-13T01:59:20.566739898Z" level=info msg="CreateContainer within sandbox \"eaf618d6b395b0a824576842de77aadda10be39ce1a6a543c00a3939bee9b6a6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Aug 13 01:59:20.581492 containerd[1568]: time="2025-08-13T01:59:20.581400257Z" level=info msg="Container 7bc8b13eb28e658b26696d0c6d136e9eea69e372a48c2580eabaa3b910650c98: CDI devices from CRI Config.CDIDevices: []"
Aug 13 01:59:20.594165 containerd[1568]: time="2025-08-13T01:59:20.594033157Z" level=info msg="CreateContainer within sandbox \"eaf618d6b395b0a824576842de77aadda10be39ce1a6a543c00a3939bee9b6a6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7bc8b13eb28e658b26696d0c6d136e9eea69e372a48c2580eabaa3b910650c98\""
Aug 13 01:59:20.594912 containerd[1568]: time="2025-08-13T01:59:20.594886448Z" level=info msg="StartContainer for \"7bc8b13eb28e658b26696d0c6d136e9eea69e372a48c2580eabaa3b910650c98\""
Aug 13 01:59:20.596437 containerd[1568]: time="2025-08-13T01:59:20.596293383Z" level=info msg="connecting to shim 7bc8b13eb28e658b26696d0c6d136e9eea69e372a48c2580eabaa3b910650c98" address="unix:///run/containerd/s/e41e096e97fdc290b32f55fcc9c5791491628c7362bdf7cd5ed47141f4e7bb74" protocol=ttrpc version=3
Aug 13 01:59:20.620648 systemd[1]: Started cri-containerd-7bc8b13eb28e658b26696d0c6d136e9eea69e372a48c2580eabaa3b910650c98.scope - libcontainer container 7bc8b13eb28e658b26696d0c6d136e9eea69e372a48c2580eabaa3b910650c98.
Aug 13 01:59:20.660668 containerd[1568]: time="2025-08-13T01:59:20.660605300Z" level=info msg="StartContainer for \"7bc8b13eb28e658b26696d0c6d136e9eea69e372a48c2580eabaa3b910650c98\" returns successfully"
Aug 13 01:59:20.668644 systemd[1]: cri-containerd-7bc8b13eb28e658b26696d0c6d136e9eea69e372a48c2580eabaa3b910650c98.scope: Deactivated successfully.
Aug 13 01:59:20.670496 containerd[1568]: time="2025-08-13T01:59:20.670407319Z" level=info msg="received exit event container_id:\"7bc8b13eb28e658b26696d0c6d136e9eea69e372a48c2580eabaa3b910650c98\" id:\"7bc8b13eb28e658b26696d0c6d136e9eea69e372a48c2580eabaa3b910650c98\" pid:4903 exited_at:{seconds:1755050360 nanos:670123803}"
Aug 13 01:59:20.670811 containerd[1568]: time="2025-08-13T01:59:20.670787145Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7bc8b13eb28e658b26696d0c6d136e9eea69e372a48c2580eabaa3b910650c98\" id:\"7bc8b13eb28e658b26696d0c6d136e9eea69e372a48c2580eabaa3b910650c98\" pid:4903 exited_at:{seconds:1755050360 nanos:670123803}"
Aug 13 01:59:20.692250 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7bc8b13eb28e658b26696d0c6d136e9eea69e372a48c2580eabaa3b910650c98-rootfs.mount: Deactivated successfully.
Aug 13 01:59:21.039277 sshd[4766]: maximum authentication attempts exceeded for root from 220.178.8.154 port 34048 ssh2 [preauth]
Aug 13 01:59:21.039277 sshd[4766]: Disconnecting authenticating user root 220.178.8.154 port 34048: Too many authentication failures [preauth]
Aug 13 01:59:21.039887 systemd[1]: sshd@60-172.236.112.106:22-220.178.8.154:34048.service: Deactivated successfully.
Aug 13 01:59:21.213764 kubelet[2776]: I0813 01:59:21.213700 2776 setters.go:600] "Node became not ready" node="172-236-112-106" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T01:59:21Z","lastTransitionTime":"2025-08-13T01:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Aug 13 01:59:21.467157 systemd[1]: Started sshd@63-172.236.112.106:22-220.178.8.154:34654.service - OpenSSH per-connection server daemon (220.178.8.154:34654).
Aug 13 01:59:21.486510 kubelet[2776]: E0813 01:59:21.486171 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Aug 13 01:59:21.568733 kubelet[2776]: E0813 01:59:21.568669 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Aug 13 01:59:21.573283 containerd[1568]: time="2025-08-13T01:59:21.573243676Z" level=info msg="CreateContainer within sandbox \"eaf618d6b395b0a824576842de77aadda10be39ce1a6a543c00a3939bee9b6a6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Aug 13 01:59:21.585212 containerd[1568]: time="2025-08-13T01:59:21.585151563Z" level=info msg="Container 2e47b4454af4be06f127df34e490422f6bc7a4d7dac310048543287d2f3d196b: CDI devices from CRI Config.CDIDevices: []"
Aug 13 01:59:21.591825 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount430439952.mount: Deactivated successfully.
Aug 13 01:59:21.606128 containerd[1568]: time="2025-08-13T01:59:21.606067456Z" level=info msg="CreateContainer within sandbox \"eaf618d6b395b0a824576842de77aadda10be39ce1a6a543c00a3939bee9b6a6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2e47b4454af4be06f127df34e490422f6bc7a4d7dac310048543287d2f3d196b\""
Aug 13 01:59:21.607291 containerd[1568]: time="2025-08-13T01:59:21.606953998Z" level=info msg="StartContainer for \"2e47b4454af4be06f127df34e490422f6bc7a4d7dac310048543287d2f3d196b\""
Aug 13 01:59:21.610272 containerd[1568]: time="2025-08-13T01:59:21.610188074Z" level=info msg="connecting to shim 2e47b4454af4be06f127df34e490422f6bc7a4d7dac310048543287d2f3d196b" address="unix:///run/containerd/s/e41e096e97fdc290b32f55fcc9c5791491628c7362bdf7cd5ed47141f4e7bb74" protocol=ttrpc version=3
Aug 13 01:59:21.637990 systemd[1]: Started cri-containerd-2e47b4454af4be06f127df34e490422f6bc7a4d7dac310048543287d2f3d196b.scope - libcontainer container 2e47b4454af4be06f127df34e490422f6bc7a4d7dac310048543287d2f3d196b.
Aug 13 01:59:21.701075 systemd[1]: cri-containerd-2e47b4454af4be06f127df34e490422f6bc7a4d7dac310048543287d2f3d196b.scope: Deactivated successfully.
Aug 13 01:59:21.705491 containerd[1568]: time="2025-08-13T01:59:21.705415340Z" level=info msg="received exit event container_id:\"2e47b4454af4be06f127df34e490422f6bc7a4d7dac310048543287d2f3d196b\" id:\"2e47b4454af4be06f127df34e490422f6bc7a4d7dac310048543287d2f3d196b\" pid:4954 exited_at:{seconds:1755050361 nanos:705172763}"
Aug 13 01:59:21.705771 containerd[1568]: time="2025-08-13T01:59:21.705749007Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2e47b4454af4be06f127df34e490422f6bc7a4d7dac310048543287d2f3d196b\" id:\"2e47b4454af4be06f127df34e490422f6bc7a4d7dac310048543287d2f3d196b\" pid:4954 exited_at:{seconds:1755050361 nanos:705172763}"
Aug 13 01:59:21.707641 containerd[1568]: time="2025-08-13T01:59:21.707620267Z" level=info msg="StartContainer for \"2e47b4454af4be06f127df34e490422f6bc7a4d7dac310048543287d2f3d196b\" returns successfully"
Aug 13 01:59:21.740336 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e47b4454af4be06f127df34e490422f6bc7a4d7dac310048543287d2f3d196b-rootfs.mount: Deactivated successfully.
Aug 13 01:59:22.574609 kubelet[2776]: E0813 01:59:22.574542 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Aug 13 01:59:22.581021 containerd[1568]: time="2025-08-13T01:59:22.580425230Z" level=info msg="CreateContainer within sandbox \"eaf618d6b395b0a824576842de77aadda10be39ce1a6a543c00a3939bee9b6a6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Aug 13 01:59:22.599199 containerd[1568]: time="2025-08-13T01:59:22.598857369Z" level=info msg="Container b96c1b31abbac32e3534e9735871f0cd173b356a56859072e43f2e207647ff05: CDI devices from CRI Config.CDIDevices: []"
Aug 13 01:59:22.607419 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4007039605.mount: Deactivated successfully.
Aug 13 01:59:22.615674 containerd[1568]: time="2025-08-13T01:59:22.615626926Z" level=info msg="CreateContainer within sandbox \"eaf618d6b395b0a824576842de77aadda10be39ce1a6a543c00a3939bee9b6a6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b96c1b31abbac32e3534e9735871f0cd173b356a56859072e43f2e207647ff05\""
Aug 13 01:59:22.616740 containerd[1568]: time="2025-08-13T01:59:22.616624765Z" level=info msg="StartContainer for \"b96c1b31abbac32e3534e9735871f0cd173b356a56859072e43f2e207647ff05\""
Aug 13 01:59:22.618668 containerd[1568]: time="2025-08-13T01:59:22.617998571Z" level=info msg="connecting to shim b96c1b31abbac32e3534e9735871f0cd173b356a56859072e43f2e207647ff05" address="unix:///run/containerd/s/e41e096e97fdc290b32f55fcc9c5791491628c7362bdf7cd5ed47141f4e7bb74" protocol=ttrpc version=3
Aug 13 01:59:22.651714 systemd[1]: Started cri-containerd-b96c1b31abbac32e3534e9735871f0cd173b356a56859072e43f2e207647ff05.scope - libcontainer container b96c1b31abbac32e3534e9735871f0cd173b356a56859072e43f2e207647ff05.
Aug 13 01:59:22.682460 kubelet[2776]: E0813 01:59:22.682413 2776 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Aug 13 01:59:22.694987 systemd[1]: cri-containerd-b96c1b31abbac32e3534e9735871f0cd173b356a56859072e43f2e207647ff05.scope: Deactivated successfully.
Aug 13 01:59:22.698752 containerd[1568]: time="2025-08-13T01:59:22.698699165Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b96c1b31abbac32e3534e9735871f0cd173b356a56859072e43f2e207647ff05\" id:\"b96c1b31abbac32e3534e9735871f0cd173b356a56859072e43f2e207647ff05\" pid:4992 exited_at:{seconds:1755050362 nanos:694954284}"
Aug 13 01:59:22.700688 containerd[1568]: time="2025-08-13T01:59:22.700662005Z" level=info msg="received exit event container_id:\"b96c1b31abbac32e3534e9735871f0cd173b356a56859072e43f2e207647ff05\" id:\"b96c1b31abbac32e3534e9735871f0cd173b356a56859072e43f2e207647ff05\" pid:4992 exited_at:{seconds:1755050362 nanos:694954284}"
Aug 13 01:59:22.710089 containerd[1568]: time="2025-08-13T01:59:22.709952218Z" level=info msg="StartContainer for \"b96c1b31abbac32e3534e9735871f0cd173b356a56859072e43f2e207647ff05\" returns successfully"
Aug 13 01:59:22.732737 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b96c1b31abbac32e3534e9735871f0cd173b356a56859072e43f2e207647ff05-rootfs.mount: Deactivated successfully.
Aug 13 01:59:23.579995 kubelet[2776]: E0813 01:59:23.579901 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Aug 13 01:59:23.582791 containerd[1568]: time="2025-08-13T01:59:23.582718880Z" level=info msg="CreateContainer within sandbox \"eaf618d6b395b0a824576842de77aadda10be39ce1a6a543c00a3939bee9b6a6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Aug 13 01:59:23.593833 containerd[1568]: time="2025-08-13T01:59:23.593693535Z" level=info msg="Container 43a97b57dabd182c912b58b7cac9f1f86d48e6cedc745308db7403a811f581fb: CDI devices from CRI Config.CDIDevices: []"
Aug 13 01:59:23.605313 containerd[1568]: time="2025-08-13T01:59:23.604966339Z" level=info msg="CreateContainer within sandbox \"eaf618d6b395b0a824576842de77aadda10be39ce1a6a543c00a3939bee9b6a6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"43a97b57dabd182c912b58b7cac9f1f86d48e6cedc745308db7403a811f581fb\""
Aug 13 01:59:23.611840 containerd[1568]: time="2025-08-13T01:59:23.611015076Z" level=info msg="StartContainer for \"43a97b57dabd182c912b58b7cac9f1f86d48e6cedc745308db7403a811f581fb\""
Aug 13 01:59:23.614590 containerd[1568]: time="2025-08-13T01:59:23.614554479Z" level=info msg="connecting to shim 43a97b57dabd182c912b58b7cac9f1f86d48e6cedc745308db7403a811f581fb" address="unix:///run/containerd/s/e41e096e97fdc290b32f55fcc9c5791491628c7362bdf7cd5ed47141f4e7bb74" protocol=ttrpc version=3
Aug 13 01:59:23.643682 systemd[1]: Started cri-containerd-43a97b57dabd182c912b58b7cac9f1f86d48e6cedc745308db7403a811f581fb.scope - libcontainer container 43a97b57dabd182c912b58b7cac9f1f86d48e6cedc745308db7403a811f581fb.
Aug 13 01:59:23.688132 containerd[1568]: time="2025-08-13T01:59:23.688078735Z" level=info msg="StartContainer for \"43a97b57dabd182c912b58b7cac9f1f86d48e6cedc745308db7403a811f581fb\" returns successfully"
Aug 13 01:59:23.775252 containerd[1568]: time="2025-08-13T01:59:23.775205210Z" level=info msg="TaskExit event in podsandbox handler container_id:\"43a97b57dabd182c912b58b7cac9f1f86d48e6cedc745308db7403a811f581fb\" id:\"bba3e4457e8285fb3ae63b3eef1be9cc16357023caa6bbd924136ccfbec34610\" pid:5061 exited_at:{seconds:1755050363 nanos:774615046}"
Aug 13 01:59:23.906061 sshd[4938]: maximum authentication attempts exceeded for root from 220.178.8.154 port 34654 ssh2 [preauth]
Aug 13 01:59:23.906061 sshd[4938]: Disconnecting authenticating user root 220.178.8.154 port 34654: Too many authentication failures [preauth]
Aug 13 01:59:23.910099 systemd[1]: sshd@63-172.236.112.106:22-220.178.8.154:34654.service: Deactivated successfully.
Aug 13 01:59:24.174498 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Aug 13 01:59:24.327782 systemd[1]: Started sshd@64-172.236.112.106:22-220.178.8.154:35340.service - OpenSSH per-connection server daemon (220.178.8.154:35340).
Aug 13 01:59:24.585912 kubelet[2776]: E0813 01:59:24.585888 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Aug 13 01:59:24.603879 kubelet[2776]: I0813 01:59:24.603796 2776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gjxt8" podStartSLOduration=6.603772267 podStartE2EDuration="6.603772267s" podCreationTimestamp="2025-08-13 01:59:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:59:24.603068996 +0000 UTC m=+392.273923980" watchObservedRunningTime="2025-08-13 01:59:24.603772267 +0000 UTC m=+392.274627231"
Aug 13 01:59:25.587830 kubelet[2776]: E0813 01:59:25.587776 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Aug 13 01:59:26.474546 containerd[1568]: time="2025-08-13T01:59:26.474249482Z" level=info msg="TaskExit event in podsandbox handler container_id:\"43a97b57dabd182c912b58b7cac9f1f86d48e6cedc745308db7403a811f581fb\" id:\"3ed86f01c919de6e8d4a1d5c3d1cd2f743423b79fd21635133a3985e7dbc97cf\" pid:5365 exit_status:1 exited_at:{seconds:1755050366 nanos:472432921}"
Aug 13 01:59:26.478234 kubelet[2776]: E0813 01:59:26.478176 2776 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:55940->127.0.0.1:45037: write tcp 127.0.0.1:55940->127.0.0.1:45037: write: broken pipe
Aug 13 01:59:26.589950 kubelet[2776]: E0813 01:59:26.589868 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Aug 13 01:59:26.691520 sshd[5129]: Received disconnect from 220.178.8.154 port 35340:11: disconnected by user [preauth]
Aug 13 01:59:26.691520 sshd[5129]: Disconnected from authenticating user root 220.178.8.154 port 35340 [preauth]
Aug 13 01:59:26.694224 systemd[1]: sshd@64-172.236.112.106:22-220.178.8.154:35340.service: Deactivated successfully.
Aug 13 01:59:26.911807 systemd[1]: Started sshd@65-172.236.112.106:22-220.178.8.154:35826.service - OpenSSH per-connection server daemon (220.178.8.154:35826).
Aug 13 01:59:27.016018 systemd-networkd[1465]: lxc_health: Link UP
Aug 13 01:59:27.016460 systemd-networkd[1465]: lxc_health: Gained carrier
Aug 13 01:59:27.593090 kubelet[2776]: E0813 01:59:27.593022 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Aug 13 01:59:28.480525 sshd[5521]: Invalid user admin from 220.178.8.154 port 35826
Aug 13 01:59:28.583763 systemd-networkd[1465]: lxc_health: Gained IPv6LL
Aug 13 01:59:28.595855 kubelet[2776]: E0813 01:59:28.595812 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Aug 13 01:59:28.621350 containerd[1568]: time="2025-08-13T01:59:28.621288134Z" level=info msg="TaskExit event in podsandbox handler container_id:\"43a97b57dabd182c912b58b7cac9f1f86d48e6cedc745308db7403a811f581fb\" id:\"3eb1d18c7c1e06803b076d02deb2e7db5fe82090fe5f8b0d889fffefebc892da\" pid:5585 exited_at:{seconds:1755050368 nanos:620011938}"
Aug 13 01:59:29.534159 sshd[5521]: maximum authentication attempts exceeded for invalid user admin from 220.178.8.154 port 35826 ssh2 [preauth]
Aug 13 01:59:29.534159 sshd[5521]: Disconnecting invalid user admin 220.178.8.154 port 35826: Too many authentication failures [preauth]
Aug 13 01:59:29.538771 systemd[1]: sshd@65-172.236.112.106:22-220.178.8.154:35826.service: Deactivated successfully.
Aug 13 01:59:29.596805 kubelet[2776]: E0813 01:59:29.596750 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Aug 13 01:59:30.800619 containerd[1568]: time="2025-08-13T01:59:30.800036163Z" level=info msg="TaskExit event in podsandbox handler container_id:\"43a97b57dabd182c912b58b7cac9f1f86d48e6cedc745308db7403a811f581fb\" id:\"f452132b1e5128457f89c414bb17003a6ecf69efd9cdd50a48ac9bd09c3c3955\" pid:5617 exited_at:{seconds:1755050370 nanos:799449650}"
Aug 13 01:59:30.972430 systemd[1]: Started sshd@66-172.236.112.106:22-220.178.8.154:36524.service - OpenSSH per-connection server daemon (220.178.8.154:36524).
Aug 13 01:59:31.486137 kubelet[2776]: E0813 01:59:31.486067 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15"
Aug 13 01:59:32.935150 containerd[1568]: time="2025-08-13T01:59:32.935080450Z" level=info msg="TaskExit event in podsandbox handler container_id:\"43a97b57dabd182c912b58b7cac9f1f86d48e6cedc745308db7403a811f581fb\" id:\"74c3c4ed75e77ea24328adf6a1a4ebbfb0b77df78229215a92ed1b3417b8b6c2\" pid:5647 exited_at:{seconds:1755050372 nanos:934119850}"
Aug 13 01:59:32.992093 sshd[4883]: Connection closed by 147.75.109.163 port 45072
Aug 13 01:59:32.992982 sshd-session[4852]: pam_unix(sshd:session): session closed for user core
Aug 13 01:59:32.997860 systemd-logind[1525]: Session 61 logged out. Waiting for processes to exit.
Aug 13 01:59:32.998532 systemd[1]: sshd@62-172.236.112.106:22-147.75.109.163:45072.service: Deactivated successfully.
Aug 13 01:59:33.001134 systemd[1]: session-61.scope: Deactivated successfully.
Aug 13 01:59:33.002885 systemd-logind[1525]: Removed session 61.
Aug 13 01:59:33.417849 sshd[5628]: Invalid user admin from 220.178.8.154 port 36524
Aug 13 01:59:35.383897 sshd[5628]: maximum authentication attempts exceeded for invalid user admin from 220.178.8.154 port 36524 ssh2 [preauth]
Aug 13 01:59:35.383897 sshd[5628]: Disconnecting invalid user admin 220.178.8.154 port 36524: Too many authentication failures [preauth]
Aug 13 01:59:35.387942 systemd[1]: sshd@66-172.236.112.106:22-220.178.8.154:36524.service: Deactivated successfully.
Aug 13 01:59:35.796052 systemd[1]: Started sshd@67-172.236.112.106:22-220.178.8.154:37734.service - OpenSSH per-connection server daemon (220.178.8.154:37734).
Aug 13 01:59:37.867910 sshd[5666]: Invalid user admin from 220.178.8.154 port 37734
Aug 13 01:59:38.682247 sshd[5666]: Received disconnect from 220.178.8.154 port 37734:11: disconnected by user [preauth]
Aug 13 01:59:38.682247 sshd[5666]: Disconnected from invalid user admin 220.178.8.154 port 37734 [preauth]
Aug 13 01:59:38.684560 systemd[1]: sshd@67-172.236.112.106:22-220.178.8.154:37734.service: Deactivated successfully.
Aug 13 01:59:38.905340 systemd[1]: Started sshd@68-172.236.112.106:22-220.178.8.154:38312.service - OpenSSH per-connection server daemon (220.178.8.154:38312).
Aug 13 01:59:41.582634 sshd[5671]: Invalid user oracle from 220.178.8.154 port 38312
Aug 13 01:59:42.643691 sshd[5671]: maximum authentication attempts exceeded for invalid user oracle from 220.178.8.154 port 38312 ssh2 [preauth]
Aug 13 01:59:42.643691 sshd[5671]: Disconnecting invalid user oracle 220.178.8.154 port 38312: Too many authentication failures [preauth]
Aug 13 01:59:42.647044 systemd[1]: sshd@68-172.236.112.106:22-220.178.8.154:39142.service: Deactivated successfully.
Aug 13 01:59:43.047710 systemd[1]: Started sshd@69-172.236.112.106:22-220.178.8.154:39142.service - OpenSSH per-connection server daemon (220.178.8.154:39142).