Aug 13 00:46:46.849511 kernel: Linux version 6.12.40-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 21:42:48 -00 2025
Aug 13 00:46:46.849536 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21
Aug 13 00:46:46.849544 kernel: BIOS-provided physical RAM map:
Aug 13 00:46:46.849553 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Aug 13 00:46:46.849558 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Aug 13 00:46:46.849564 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Aug 13 00:46:46.849570 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Aug 13 00:46:46.849576 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Aug 13 00:46:46.849581 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Aug 13 00:46:46.849587 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Aug 13 00:46:46.849592 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Aug 13 00:46:46.849598 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Aug 13 00:46:46.849605 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Aug 13 00:46:46.849611 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Aug 13 00:46:46.849618 kernel: NX (Execute Disable) protection: active
Aug 13 00:46:46.849624 kernel: APIC: Static calls initialized
Aug 13 00:46:46.849630 kernel: SMBIOS 2.8 present.
Aug 13 00:46:46.849638 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Aug 13 00:46:46.849644 kernel: DMI: Memory slots populated: 1/1
Aug 13 00:46:46.849650 kernel: Hypervisor detected: KVM
Aug 13 00:46:46.849656 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Aug 13 00:46:46.849662 kernel: kvm-clock: using sched offset of 5748705280 cycles
Aug 13 00:46:46.849668 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Aug 13 00:46:46.849674 kernel: tsc: Detected 2000.000 MHz processor
Aug 13 00:46:46.849681 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 13 00:46:46.849687 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 13 00:46:46.849693 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Aug 13 00:46:46.849701 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Aug 13 00:46:46.849708 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 13 00:46:46.849714 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Aug 13 00:46:46.849720 kernel: Using GB pages for direct mapping
Aug 13 00:46:46.849726 kernel: ACPI: Early table checksum verification disabled
Aug 13 00:46:46.849732 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Aug 13 00:46:46.849738 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:46:46.849745 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:46:46.849751 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:46:46.849759 kernel: ACPI: FACS 0x000000007FFE0000 000040
Aug 13 00:46:46.849765 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:46:46.849771 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:46:46.849777 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:46:46.849786 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:46:46.849793 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Aug 13 00:46:46.849801 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Aug 13 00:46:46.849807 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Aug 13 00:46:46.849814 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Aug 13 00:46:46.849820 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Aug 13 00:46:46.849827 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Aug 13 00:46:46.849833 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Aug 13 00:46:46.849839 kernel: No NUMA configuration found
Aug 13 00:46:46.849845 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Aug 13 00:46:46.849854 kernel: NODE_DATA(0) allocated [mem 0x17fff8dc0-0x17fffffff]
Aug 13 00:46:46.849860 kernel: Zone ranges:
Aug 13 00:46:46.849867 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 13 00:46:46.849873 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Aug 13 00:46:46.849879 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Aug 13 00:46:46.849885 kernel: Device empty
Aug 13 00:46:46.849892 kernel: Movable zone start for each node
Aug 13 00:46:46.849898 kernel: Early memory node ranges
Aug 13 00:46:46.849905 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Aug 13 00:46:46.849911 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Aug 13 00:46:46.849919 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Aug 13 00:46:46.851613 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Aug 13 00:46:46.851622 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 00:46:46.851629 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Aug 13 00:46:46.851635 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Aug 13 00:46:46.851642 kernel: ACPI: PM-Timer IO Port: 0x608
Aug 13 00:46:46.851648 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Aug 13 00:46:46.851654 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Aug 13 00:46:46.851661 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Aug 13 00:46:46.851671 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Aug 13 00:46:46.851677 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 13 00:46:46.851684 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Aug 13 00:46:46.851690 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Aug 13 00:46:46.851696 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 13 00:46:46.851703 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Aug 13 00:46:46.851709 kernel: TSC deadline timer available
Aug 13 00:46:46.851716 kernel: CPU topo: Max. logical packages: 1
Aug 13 00:46:46.851722 kernel: CPU topo: Max. logical dies: 1
Aug 13 00:46:46.851730 kernel: CPU topo: Max. dies per package: 1
Aug 13 00:46:46.851737 kernel: CPU topo: Max. threads per core: 1
Aug 13 00:46:46.851743 kernel: CPU topo: Num. cores per package: 2
Aug 13 00:46:46.851749 kernel: CPU topo: Num. threads per package: 2
Aug 13 00:46:46.851756 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Aug 13 00:46:46.851762 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Aug 13 00:46:46.851768 kernel: kvm-guest: KVM setup pv remote TLB flush
Aug 13 00:46:46.851775 kernel: kvm-guest: setup PV sched yield
Aug 13 00:46:46.851781 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Aug 13 00:46:46.851789 kernel: Booting paravirtualized kernel on KVM
Aug 13 00:46:46.851796 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 13 00:46:46.851802 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Aug 13 00:46:46.851809 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Aug 13 00:46:46.851815 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Aug 13 00:46:46.851821 kernel: pcpu-alloc: [0] 0 1
Aug 13 00:46:46.851828 kernel: kvm-guest: PV spinlocks enabled
Aug 13 00:46:46.851834 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Aug 13 00:46:46.851842 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21
Aug 13 00:46:46.851850 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 00:46:46.851857 kernel: random: crng init done
Aug 13 00:46:46.851863 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 13 00:46:46.851870 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 13 00:46:46.851876 kernel: Fallback order for Node 0: 0
Aug 13 00:46:46.851882 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Aug 13 00:46:46.851889 kernel: Policy zone: Normal
Aug 13 00:46:46.851895 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 00:46:46.851903 kernel: software IO TLB: area num 2.
Aug 13 00:46:46.851910 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Aug 13 00:46:46.851916 kernel: ftrace: allocating 40098 entries in 157 pages
Aug 13 00:46:46.851942 kernel: ftrace: allocated 157 pages with 5 groups
Aug 13 00:46:46.851949 kernel: Dynamic Preempt: voluntary
Aug 13 00:46:46.851955 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 00:46:46.851963 kernel: rcu: RCU event tracing is enabled.
Aug 13 00:46:46.851970 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Aug 13 00:46:46.851976 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 00:46:46.851985 kernel: Rude variant of Tasks RCU enabled.
Aug 13 00:46:46.851991 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 00:46:46.851998 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 00:46:46.852004 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Aug 13 00:46:46.852011 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 00:46:46.852023 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 00:46:46.852032 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 00:46:46.852039 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Aug 13 00:46:46.852045 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 13 00:46:46.852052 kernel: Console: colour VGA+ 80x25
Aug 13 00:46:46.852059 kernel: printk: legacy console [tty0] enabled
Aug 13 00:46:46.852065 kernel: printk: legacy console [ttyS0] enabled
Aug 13 00:46:46.852074 kernel: ACPI: Core revision 20240827
Aug 13 00:46:46.852081 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Aug 13 00:46:46.852087 kernel: APIC: Switch to symmetric I/O mode setup
Aug 13 00:46:46.852094 kernel: x2apic enabled
Aug 13 00:46:46.852101 kernel: APIC: Switched APIC routing to: physical x2apic
Aug 13 00:46:46.852109 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Aug 13 00:46:46.852116 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Aug 13 00:46:46.852123 kernel: kvm-guest: setup PV IPIs
Aug 13 00:46:46.852130 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Aug 13 00:46:46.852137 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Aug 13 00:46:46.852143 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000)
Aug 13 00:46:46.852150 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Aug 13 00:46:46.852157 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Aug 13 00:46:46.852163 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Aug 13 00:46:46.852172 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 13 00:46:46.852179 kernel: Spectre V2 : Mitigation: Retpolines
Aug 13 00:46:46.852185 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Aug 13 00:46:46.852192 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Aug 13 00:46:46.852199 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Aug 13 00:46:46.852205 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Aug 13 00:46:46.852212 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Aug 13 00:46:46.852220 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Aug 13 00:46:46.852228 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Aug 13 00:46:46.852235 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Aug 13 00:46:46.852242 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Aug 13 00:46:46.852248 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 13 00:46:46.852255 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 13 00:46:46.852262 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 13 00:46:46.852269 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Aug 13 00:46:46.852275 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 13 00:46:46.852282 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Aug 13 00:46:46.852291 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Aug 13 00:46:46.852297 kernel: Freeing SMP alternatives memory: 32K
Aug 13 00:46:46.852304 kernel: pid_max: default: 32768 minimum: 301
Aug 13 00:46:46.852311 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Aug 13 00:46:46.852317 kernel: landlock: Up and running.
Aug 13 00:46:46.852324 kernel: SELinux: Initializing.
Aug 13 00:46:46.852331 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 00:46:46.852338 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 00:46:46.852345 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Aug 13 00:46:46.852353 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Aug 13 00:46:46.852360 kernel: ... version:                0
Aug 13 00:46:46.852366 kernel: ... bit width:              48
Aug 13 00:46:46.852373 kernel: ... generic registers:      6
Aug 13 00:46:46.852380 kernel: ... value mask:             0000ffffffffffff
Aug 13 00:46:46.852386 kernel: ... max period:             00007fffffffffff
Aug 13 00:46:46.852393 kernel: ... fixed-purpose events:   0
Aug 13 00:46:46.852400 kernel: ... event mask:             000000000000003f
Aug 13 00:46:46.852406 kernel: signal: max sigframe size: 3376
Aug 13 00:46:46.852415 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 00:46:46.852421 kernel: rcu: Max phase no-delay instances is 400.
Aug 13 00:46:46.852428 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Aug 13 00:46:46.852435 kernel: smp: Bringing up secondary CPUs ...
Aug 13 00:46:46.852441 kernel: smpboot: x86: Booting SMP configuration:
Aug 13 00:46:46.852448 kernel: .... node #0, CPUs: #1
Aug 13 00:46:46.852455 kernel: smp: Brought up 1 node, 2 CPUs
Aug 13 00:46:46.852461 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS)
Aug 13 00:46:46.852468 kernel: Memory: 3961808K/4193772K available (14336K kernel code, 2430K rwdata, 9960K rodata, 54444K init, 2524K bss, 227288K reserved, 0K cma-reserved)
Aug 13 00:46:46.852477 kernel: devtmpfs: initialized
Aug 13 00:46:46.852484 kernel: x86/mm: Memory block size: 128MB
Aug 13 00:46:46.852490 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 00:46:46.852497 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Aug 13 00:46:46.852504 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 00:46:46.852510 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 00:46:46.852517 kernel: audit: initializing netlink subsys (disabled)
Aug 13 00:46:46.852524 kernel: audit: type=2000 audit(1755046004.900:1): state=initialized audit_enabled=0 res=1
Aug 13 00:46:46.852530 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 00:46:46.852539 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 13 00:46:46.852546 kernel: cpuidle: using governor menu
Aug 13 00:46:46.852552 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 00:46:46.852559 kernel: dca service started, version 1.12.1
Aug 13 00:46:46.852566 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Aug 13 00:46:46.852572 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Aug 13 00:46:46.852579 kernel: PCI: Using configuration type 1 for base access
Aug 13 00:46:46.852586 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 13 00:46:46.852592 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 00:46:46.852601 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Aug 13 00:46:46.852608 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 00:46:46.852614 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Aug 13 00:46:46.852621 kernel: ACPI: Added _OSI(Module Device)
Aug 13 00:46:46.852628 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 00:46:46.852634 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 00:46:46.852641 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 13 00:46:46.852648 kernel: ACPI: Interpreter enabled
Aug 13 00:46:46.852654 kernel: ACPI: PM: (supports S0 S3 S5)
Aug 13 00:46:46.852662 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 13 00:46:46.852669 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 13 00:46:46.852676 kernel: PCI: Using E820 reservations for host bridge windows
Aug 13 00:46:46.852683 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Aug 13 00:46:46.852689 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 13 00:46:46.852848 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 00:46:46.853020 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Aug 13 00:46:46.853137 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Aug 13 00:46:46.853147 kernel: PCI host bridge to bus 0000:00
Aug 13 00:46:46.853285 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 13 00:46:46.853391 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 13 00:46:46.853488 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 13 00:46:46.853583 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Aug 13 00:46:46.853678 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Aug 13 00:46:46.853777 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Aug 13 00:46:46.853871 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 13 00:46:46.854257 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Aug 13 00:46:46.854386 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Aug 13 00:46:46.854497 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Aug 13 00:46:46.854602 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Aug 13 00:46:46.854707 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Aug 13 00:46:46.854817 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Aug 13 00:46:46.855408 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Aug 13 00:46:46.855528 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f]
Aug 13 00:46:46.855637 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Aug 13 00:46:46.855743 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Aug 13 00:46:46.855857 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Aug 13 00:46:46.855990 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Aug 13 00:46:46.856098 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Aug 13 00:46:46.856204 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Aug 13 00:46:46.856308 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Aug 13 00:46:46.856423 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Aug 13 00:46:46.856528 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Aug 13 00:46:46.856640 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Aug 13 00:46:46.856749 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df]
Aug 13 00:46:46.856852 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff]
Aug 13 00:46:46.860457 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Aug 13 00:46:46.860576 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Aug 13 00:46:46.860587 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 13 00:46:46.860595 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 13 00:46:46.860601 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 13 00:46:46.860612 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 13 00:46:46.860619 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Aug 13 00:46:46.860625 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Aug 13 00:46:46.860632 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Aug 13 00:46:46.860639 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Aug 13 00:46:46.860646 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Aug 13 00:46:46.860652 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Aug 13 00:46:46.860659 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Aug 13 00:46:46.860666 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Aug 13 00:46:46.860674 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Aug 13 00:46:46.860681 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Aug 13 00:46:46.860688 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Aug 13 00:46:46.860695 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Aug 13 00:46:46.860701 kernel: iommu: Default domain type: Translated
Aug 13 00:46:46.860708 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 13 00:46:46.860715 kernel: PCI: Using ACPI for IRQ routing
Aug 13 00:46:46.860721 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 13 00:46:46.860728 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Aug 13 00:46:46.860737 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Aug 13 00:46:46.860844 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Aug 13 00:46:46.860971 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Aug 13 00:46:46.861078 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Aug 13 00:46:46.861087 kernel: vgaarb: loaded
Aug 13 00:46:46.861094 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Aug 13 00:46:46.861101 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Aug 13 00:46:46.861108 kernel: clocksource: Switched to clocksource kvm-clock
Aug 13 00:46:46.861115 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 00:46:46.861125 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 00:46:46.861132 kernel: pnp: PnP ACPI init
Aug 13 00:46:46.861247 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Aug 13 00:46:46.861257 kernel: pnp: PnP ACPI: found 5 devices
Aug 13 00:46:46.861264 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 13 00:46:46.861271 kernel: NET: Registered PF_INET protocol family
Aug 13 00:46:46.861278 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 00:46:46.861285 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 13 00:46:46.861294 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 00:46:46.861301 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 13 00:46:46.861308 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 13 00:46:46.861314 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 13 00:46:46.861321 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 00:46:46.861328 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 00:46:46.861335 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 00:46:46.861341 kernel: NET: Registered PF_XDP protocol family
Aug 13 00:46:46.861439 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 13 00:46:46.861538 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 13 00:46:46.861633 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 13 00:46:46.861728 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Aug 13 00:46:46.861822 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Aug 13 00:46:46.861916 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Aug 13 00:46:46.861948 kernel: PCI: CLS 0 bytes, default 64
Aug 13 00:46:46.861956 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Aug 13 00:46:46.861962 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Aug 13 00:46:46.861973 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Aug 13 00:46:46.861980 kernel: Initialise system trusted keyrings
Aug 13 00:46:46.861986 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 13 00:46:46.861994 kernel: Key type asymmetric registered
Aug 13 00:46:46.862000 kernel: Asymmetric key parser 'x509' registered
Aug 13 00:46:46.862007 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Aug 13 00:46:46.862014 kernel: io scheduler mq-deadline registered
Aug 13 00:46:46.862020 kernel: io scheduler kyber registered
Aug 13 00:46:46.862027 kernel: io scheduler bfq registered
Aug 13 00:46:46.862036 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 13 00:46:46.862043 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Aug 13 00:46:46.862050 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Aug 13 00:46:46.862056 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 00:46:46.862063 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 13 00:46:46.862070 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 13 00:46:46.862076 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 13 00:46:46.862083 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 13 00:46:46.862090 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Aug 13 00:46:46.862207 kernel: rtc_cmos 00:03: RTC can wake from S4
Aug 13 00:46:46.862308 kernel: rtc_cmos 00:03: registered as rtc0
Aug 13 00:46:46.862407 kernel: rtc_cmos 00:03: setting system clock to 2025-08-13T00:46:46 UTC (1755046006)
Aug 13 00:46:46.862511 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Aug 13 00:46:46.862521 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Aug 13 00:46:46.862527 kernel: NET: Registered PF_INET6 protocol family
Aug 13 00:46:46.862534 kernel: Segment Routing with IPv6
Aug 13 00:46:46.862541 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 00:46:46.862550 kernel: NET: Registered PF_PACKET protocol family
Aug 13 00:46:46.862557 kernel: Key type dns_resolver registered
Aug 13 00:46:46.862563 kernel: IPI shorthand broadcast: enabled
Aug 13 00:46:46.862570 kernel: sched_clock: Marking stable (2726003319, 222468953)->(2985593717, -37121445)
Aug 13 00:46:46.862577 kernel: registered taskstats version 1
Aug 13 00:46:46.862584 kernel: Loading compiled-in X.509 certificates
Aug 13 00:46:46.862590 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.40-flatcar: dee0b464d3f7f8d09744a2392f69dde258bc95c0'
Aug 13 00:46:46.862597 kernel: Demotion targets for Node 0: null
Aug 13 00:46:46.862604 kernel: Key type .fscrypt registered
Aug 13 00:46:46.862612 kernel: Key type fscrypt-provisioning registered
Aug 13 00:46:46.862619 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 13 00:46:46.862625 kernel: ima: Allocated hash algorithm: sha1
Aug 13 00:46:46.862632 kernel: ima: No architecture policies found
Aug 13 00:46:46.862639 kernel: clk: Disabling unused clocks
Aug 13 00:46:46.862645 kernel: Warning: unable to open an initial console.
Aug 13 00:46:46.862652 kernel: Freeing unused kernel image (initmem) memory: 54444K
Aug 13 00:46:46.862659 kernel: Write protecting the kernel read-only data: 24576k
Aug 13 00:46:46.862668 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K
Aug 13 00:46:46.862674 kernel: Run /init as init process
Aug 13 00:46:46.862681 kernel: with arguments:
Aug 13 00:46:46.862687 kernel: /init
Aug 13 00:46:46.862694 kernel: with environment:
Aug 13 00:46:46.862701 kernel: HOME=/
Aug 13 00:46:46.862720 kernel: TERM=linux
Aug 13 00:46:46.862729 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 00:46:46.862737 systemd[1]: Successfully made /usr/ read-only.
Aug 13 00:46:46.862748 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 13 00:46:46.862756 systemd[1]: Detected virtualization kvm.
Aug 13 00:46:46.862764 systemd[1]: Detected architecture x86-64.
Aug 13 00:46:46.862771 systemd[1]: Running in initrd.
Aug 13 00:46:46.862778 systemd[1]: No hostname configured, using default hostname.
Aug 13 00:46:46.862785 systemd[1]: Hostname set to .
Aug 13 00:46:46.862793 systemd[1]: Initializing machine ID from random generator.
Aug 13 00:46:46.862802 systemd[1]: Queued start job for default target initrd.target.
Aug 13 00:46:46.862809 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 00:46:46.862816 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 00:46:46.862824 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 13 00:46:46.862832 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 00:46:46.862839 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 13 00:46:46.862847 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 13 00:46:46.862857 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 13 00:46:46.862865 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 13 00:46:46.862872 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 00:46:46.862880 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 00:46:46.862887 systemd[1]: Reached target paths.target - Path Units.
Aug 13 00:46:46.862894 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 00:46:46.862902 systemd[1]: Reached target swap.target - Swaps.
Aug 13 00:46:46.862909 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 00:46:46.862916 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 00:46:46.863954 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 00:46:46.863963 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 13 00:46:46.863971 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Aug 13 00:46:46.863978 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 00:46:46.863986 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 00:46:46.863993 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 00:46:46.864001 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 00:46:46.864011 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 13 00:46:46.864018 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 00:46:46.864026 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 13 00:46:46.864034 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Aug 13 00:46:46.864041 systemd[1]: Starting systemd-fsck-usr.service...
Aug 13 00:46:46.864049 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 00:46:46.864058 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 00:46:46.864065 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:46:46.864073 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 13 00:46:46.864081 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 00:46:46.864109 systemd-journald[206]: Collecting audit messages is disabled.
Aug 13 00:46:46.864129 systemd[1]: Finished systemd-fsck-usr.service.
Aug 13 00:46:46.864137 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 00:46:46.864145 systemd-journald[206]: Journal started
Aug 13 00:46:46.864164 systemd-journald[206]: Runtime Journal (/run/log/journal/41f838dd5eee4cbaac4acbbb3e32c9fa) is 8M, max 78.5M, 70.5M free.
Aug 13 00:46:46.847418 systemd-modules-load[207]: Inserted module 'overlay'
Aug 13 00:46:46.890558 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 00:46:46.895002 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 13 00:46:46.895872 systemd-modules-load[207]: Inserted module 'br_netfilter'
Aug 13 00:46:46.941530 kernel: Bridge firewalling registered
Aug 13 00:46:46.942148 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 00:46:46.943612 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:46:46.944342 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 00:46:46.948008 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 00:46:46.950022 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 00:46:46.952189 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 00:46:46.963034 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 00:46:46.974043 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 00:46:46.974789 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 00:46:46.980393 systemd-tmpfiles[227]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Aug 13 00:46:46.981981 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:46:46.983198 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 13 00:46:46.986874 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 00:46:46.993018 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 00:46:47.005890 dracut-cmdline[243]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21
Aug 13 00:46:47.031239 systemd-resolved[245]: Positive Trust Anchors:
Aug 13 00:46:47.031865 systemd-resolved[245]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 00:46:47.031893 systemd-resolved[245]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 00:46:47.034770 systemd-resolved[245]: Defaulting to hostname 'linux'.
Aug 13 00:46:47.037814 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 00:46:47.038622 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 00:46:47.081952 kernel: SCSI subsystem initialized
Aug 13 00:46:47.090983 kernel: Loading iSCSI transport class v2.0-870.
Aug 13 00:46:47.100949 kernel: iscsi: registered transport (tcp)
Aug 13 00:46:47.119130 kernel: iscsi: registered transport (qla4xxx)
Aug 13 00:46:47.119172 kernel: QLogic iSCSI HBA Driver
Aug 13 00:46:47.135014 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 13 00:46:47.150164 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 13 00:46:47.152364 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 13 00:46:47.190502 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 13 00:46:47.192872 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 13 00:46:47.239946 kernel: raid6: avx2x4 gen() 32806 MB/s
Aug 13 00:46:47.257944 kernel: raid6: avx2x2 gen() 31652 MB/s
Aug 13 00:46:47.276259 kernel: raid6: avx2x1 gen() 23233 MB/s
Aug 13 00:46:47.276274 kernel: raid6: using algorithm avx2x4 gen() 32806 MB/s
Aug 13 00:46:47.295212 kernel: raid6: .... xor() 5407 MB/s, rmw enabled
Aug 13 00:46:47.295237 kernel: raid6: using avx2x2 recovery algorithm
Aug 13 00:46:47.313950 kernel: xor: automatically using best checksumming function avx
Aug 13 00:46:47.439949 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 13 00:46:47.446243 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 00:46:47.448118 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 00:46:47.479045 systemd-udevd[454]: Using default interface naming scheme 'v255'.
Aug 13 00:46:47.483771 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 00:46:47.486468 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 13 00:46:47.509147 dracut-pre-trigger[460]: rd.md=0: removing MD RAID activation
Aug 13 00:46:47.531156 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 00:46:47.532728 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 00:46:47.595206 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 00:46:47.599407 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 13 00:46:47.645947 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues
Aug 13 00:46:47.654940 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Aug 13 00:46:47.654988 kernel: scsi host0: Virtio SCSI HBA
Aug 13 00:46:47.672961 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Aug 13 00:46:47.803272 kernel: libata version 3.00 loaded.
Aug 13 00:46:47.803337 kernel: cryptd: max_cpu_qlen set to 1000
Aug 13 00:46:47.826958 kernel: ahci 0000:00:1f.2: version 3.0
Aug 13 00:46:47.829105 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Aug 13 00:46:47.835279 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Aug 13 00:46:47.835573 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Aug 13 00:46:47.835705 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Aug 13 00:46:47.861868 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 00:46:47.864672 kernel: scsi host1: ahci
Aug 13 00:46:47.864851 kernel: scsi host2: ahci
Aug 13 00:46:47.862413 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:46:47.865053 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:46:47.870301 kernel: sd 0:0:0:0: Power-on or device reset occurred
Aug 13 00:46:47.870540 kernel: sd 0:0:0:0: [sda] 9297920 512-byte logical blocks: (4.76 GB/4.43 GiB)
Aug 13 00:46:47.870281 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:46:47.879987 kernel: sd 0:0:0:0: [sda] Write Protect is off
Aug 13 00:46:47.880149 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Aug 13 00:46:47.880284 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Aug 13 00:46:47.880417 kernel: scsi host3: ahci
Aug 13 00:46:47.880555 kernel: scsi host4: ahci
Aug 13 00:46:47.879781 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Aug 13 00:46:47.890782 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 13 00:46:47.890797 kernel: GPT:9289727 != 9297919
Aug 13 00:46:47.890806 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 13 00:46:47.890815 kernel: GPT:9289727 != 9297919
Aug 13 00:46:47.890824 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 13 00:46:47.890832 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 00:46:47.890841 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Aug 13 00:46:47.893253 kernel: scsi host5: ahci
Aug 13 00:46:47.893288 kernel: AES CTR mode by8 optimization enabled
Aug 13 00:46:47.902950 kernel: scsi host6: ahci
Aug 13 00:46:47.903222 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 lpm-pol 0
Aug 13 00:46:47.903236 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 lpm-pol 0
Aug 13 00:46:47.903246 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 lpm-pol 0
Aug 13 00:46:47.903256 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 lpm-pol 0
Aug 13 00:46:47.926835 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 lpm-pol 0
Aug 13 00:46:47.930316 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 lpm-pol 0
Aug 13 00:46:47.989975 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Aug 13 00:46:48.023150 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:46:48.038216 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Aug 13 00:46:48.046398 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Aug 13 00:46:48.053025 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Aug 13 00:46:48.053626 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Aug 13 00:46:48.056732 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 13 00:46:48.078170 disk-uuid[620]: Primary Header is updated.
Aug 13 00:46:48.078170 disk-uuid[620]: Secondary Entries is updated.
Aug 13 00:46:48.078170 disk-uuid[620]: Secondary Header is updated.
Aug 13 00:46:48.088950 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 00:46:48.104108 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 00:46:48.235957 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Aug 13 00:46:48.236028 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Aug 13 00:46:48.238716 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Aug 13 00:46:48.243955 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Aug 13 00:46:48.246890 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Aug 13 00:46:48.246915 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Aug 13 00:46:48.267815 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 13 00:46:48.269275 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 00:46:48.270251 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 00:46:48.271530 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 00:46:48.273632 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 13 00:46:48.296361 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 00:46:49.101953 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 00:46:49.103368 disk-uuid[621]: The operation has completed successfully.
Aug 13 00:46:49.156242 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 13 00:46:49.156351 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 13 00:46:49.179026 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 13 00:46:49.190672 sh[656]: Success
Aug 13 00:46:49.208191 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 13 00:46:49.208214 kernel: device-mapper: uevent: version 1.0.3
Aug 13 00:46:49.210378 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Aug 13 00:46:49.218946 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Aug 13 00:46:49.259876 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 13 00:46:49.263003 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 13 00:46:49.272361 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 13 00:46:49.286985 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Aug 13 00:46:49.287027 kernel: BTRFS: device fsid 0c0338fb-9434-41c1-99a2-737cbe2351c4 devid 1 transid 44 /dev/mapper/usr (254:0) scanned by mount (668)
Aug 13 00:46:49.287949 kernel: BTRFS info (device dm-0): first mount of filesystem 0c0338fb-9434-41c1-99a2-737cbe2351c4
Aug 13 00:46:49.290119 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Aug 13 00:46:49.292810 kernel: BTRFS info (device dm-0): using free-space-tree
Aug 13 00:46:49.300041 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 13 00:46:49.300844 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Aug 13 00:46:49.301716 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 13 00:46:49.302336 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 13 00:46:49.313011 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 13 00:46:49.330980 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (691)
Aug 13 00:46:49.336788 kernel: BTRFS info (device sda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f
Aug 13 00:46:49.336815 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 00:46:49.336825 kernel: BTRFS info (device sda6): using free-space-tree
Aug 13 00:46:49.344941 kernel: BTRFS info (device sda6): last unmount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f
Aug 13 00:46:49.345559 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 13 00:46:49.347728 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 13 00:46:49.418254 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 00:46:49.421798 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 00:46:49.451236 ignition[754]: Ignition 2.21.0
Aug 13 00:46:49.451248 ignition[754]: Stage: fetch-offline
Aug 13 00:46:49.451274 ignition[754]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:46:49.451283 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 00:46:49.451356 ignition[754]: parsed url from cmdline: ""
Aug 13 00:46:49.451360 ignition[754]: no config URL provided
Aug 13 00:46:49.451364 ignition[754]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 00:46:49.456908 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 00:46:49.451372 ignition[754]: no config at "/usr/lib/ignition/user.ign"
Aug 13 00:46:49.451377 ignition[754]: failed to fetch config: resource requires networking
Aug 13 00:46:49.452311 ignition[754]: Ignition finished successfully
Aug 13 00:46:49.465505 systemd-networkd[839]: lo: Link UP
Aug 13 00:46:49.465516 systemd-networkd[839]: lo: Gained carrier
Aug 13 00:46:49.466915 systemd-networkd[839]: Enumeration completed
Aug 13 00:46:49.467273 systemd-networkd[839]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 00:46:49.467277 systemd-networkd[839]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 00:46:49.470441 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 00:46:49.472000 systemd[1]: Reached target network.target - Network.
Aug 13 00:46:49.472091 systemd-networkd[839]: eth0: Link UP
Aug 13 00:46:49.472252 systemd-networkd[839]: eth0: Gained carrier
Aug 13 00:46:49.472261 systemd-networkd[839]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 00:46:49.475057 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Aug 13 00:46:49.497477 ignition[847]: Ignition 2.21.0
Aug 13 00:46:49.497492 ignition[847]: Stage: fetch
Aug 13 00:46:49.497610 ignition[847]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:46:49.497620 ignition[847]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 00:46:49.497685 ignition[847]: parsed url from cmdline: ""
Aug 13 00:46:49.497688 ignition[847]: no config URL provided
Aug 13 00:46:49.497693 ignition[847]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 00:46:49.497701 ignition[847]: no config at "/usr/lib/ignition/user.ign"
Aug 13 00:46:49.497729 ignition[847]: PUT http://169.254.169.254/v1/token: attempt #1
Aug 13 00:46:49.497855 ignition[847]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Aug 13 00:46:49.698734 ignition[847]: PUT http://169.254.169.254/v1/token: attempt #2
Aug 13 00:46:49.698999 ignition[847]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Aug 13 00:46:50.008996 systemd-networkd[839]: eth0: DHCPv4 address 172.234.29.69/24, gateway 172.234.29.1 acquired from 23.40.197.137
Aug 13 00:46:50.100171 ignition[847]: PUT http://169.254.169.254/v1/token: attempt #3
Aug 13 00:46:50.209452 ignition[847]: PUT result: OK
Aug 13 00:46:50.209516 ignition[847]: GET http://169.254.169.254/v1/user-data: attempt #1
Aug 13 00:46:50.340183 ignition[847]: GET result: OK
Aug 13 00:46:50.340275 ignition[847]: parsing config with SHA512: 0b2b7acffe084948734e80ad218d6eb9b91b801bcb17dde68a678597a74d60937afa225d95d7868d71fd655846cbe2b5a2b7ed5b0ee5f4fcbb9f6bbd1d0fc734
Aug 13 00:46:50.346876 unknown[847]: fetched base config from "system"
Aug 13 00:46:50.346890 unknown[847]: fetched base config from "system"
Aug 13 00:46:50.347351 ignition[847]: fetch: fetch complete
Aug 13 00:46:50.346896 unknown[847]: fetched user config from "akamai"
Aug 13 00:46:50.347357 ignition[847]: fetch: fetch passed
Aug 13 00:46:50.347402 ignition[847]: Ignition finished successfully
Aug 13 00:46:50.350851 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Aug 13 00:46:50.373974 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 13 00:46:50.403197 ignition[855]: Ignition 2.21.0
Aug 13 00:46:50.403211 ignition[855]: Stage: kargs
Aug 13 00:46:50.403338 ignition[855]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:46:50.403351 ignition[855]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 00:46:50.405467 ignition[855]: kargs: kargs passed
Aug 13 00:46:50.405511 ignition[855]: Ignition finished successfully
Aug 13 00:46:50.407218 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 13 00:46:50.409093 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 13 00:46:50.439819 ignition[862]: Ignition 2.21.0
Aug 13 00:46:50.439830 ignition[862]: Stage: disks
Aug 13 00:46:50.439975 ignition[862]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:46:50.439987 ignition[862]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 00:46:50.443692 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 13 00:46:50.441025 ignition[862]: disks: disks passed
Aug 13 00:46:50.441069 ignition[862]: Ignition finished successfully
Aug 13 00:46:50.445489 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 13 00:46:50.446667 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 13 00:46:50.447765 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 00:46:50.449018 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 00:46:50.450273 systemd[1]: Reached target basic.target - Basic System.
Aug 13 00:46:50.452401 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 13 00:46:50.476066 systemd-fsck[871]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Aug 13 00:46:50.480420 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 13 00:46:50.483890 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 13 00:46:50.592979 kernel: EXT4-fs (sda9): mounted filesystem 069caac6-7833-4acd-8940-01a7ff7d1281 r/w with ordered data mode. Quota mode: none.
Aug 13 00:46:50.593905 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 13 00:46:50.595185 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 13 00:46:50.597856 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 00:46:50.599671 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 13 00:46:50.602202 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Aug 13 00:46:50.602799 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 13 00:46:50.602826 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 00:46:50.613110 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 13 00:46:50.616615 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 13 00:46:50.623440 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (879)
Aug 13 00:46:50.623486 kernel: BTRFS info (device sda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f
Aug 13 00:46:50.627836 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 00:46:50.627862 kernel: BTRFS info (device sda6): using free-space-tree
Aug 13 00:46:50.633081 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 00:46:50.673109 initrd-setup-root[903]: cut: /sysroot/etc/passwd: No such file or directory
Aug 13 00:46:50.677794 initrd-setup-root[910]: cut: /sysroot/etc/group: No such file or directory
Aug 13 00:46:50.682310 initrd-setup-root[917]: cut: /sysroot/etc/shadow: No such file or directory
Aug 13 00:46:50.686053 initrd-setup-root[924]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 13 00:46:50.765904 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 13 00:46:50.767819 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 13 00:46:50.769617 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 13 00:46:50.783966 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 13 00:46:50.786536 kernel: BTRFS info (device sda6): last unmount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f
Aug 13 00:46:50.802886 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 13 00:46:50.814574 ignition[992]: INFO : Ignition 2.21.0
Aug 13 00:46:50.814574 ignition[992]: INFO : Stage: mount
Aug 13 00:46:50.815814 ignition[992]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 00:46:50.815814 ignition[992]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 00:46:50.815814 ignition[992]: INFO : mount: mount passed
Aug 13 00:46:50.815814 ignition[992]: INFO : Ignition finished successfully
Aug 13 00:46:50.816823 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 13 00:46:50.820003 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 13 00:46:51.036165 systemd-networkd[839]: eth0: Gained IPv6LL
Aug 13 00:46:51.596004 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 00:46:51.617999 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1004)
Aug 13 00:46:51.621357 kernel: BTRFS info (device sda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f
Aug 13 00:46:51.621382 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 00:46:51.623129 kernel: BTRFS info (device sda6): using free-space-tree
Aug 13 00:46:51.629049 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 00:46:51.655567 ignition[1021]: INFO : Ignition 2.21.0
Aug 13 00:46:51.655567 ignition[1021]: INFO : Stage: files
Aug 13 00:46:51.658215 ignition[1021]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 00:46:51.658215 ignition[1021]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 00:46:51.658215 ignition[1021]: DEBUG : files: compiled without relabeling support, skipping
Aug 13 00:46:51.660344 ignition[1021]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 13 00:46:51.660344 ignition[1021]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 13 00:46:51.662021 ignition[1021]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 13 00:46:51.662021 ignition[1021]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 13 00:46:51.662021 ignition[1021]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 13 00:46:51.661550 unknown[1021]: wrote ssh authorized keys file for user: core
Aug 13 00:46:51.665396 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 13 00:46:51.665396 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Aug 13 00:46:51.851979 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 13 00:46:52.405272 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 13 00:46:52.406713 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Aug 13 00:46:52.406713 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Aug 13 00:46:52.540466 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Aug 13 00:46:52.619476 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Aug 13 00:46:52.620536 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Aug 13 00:46:52.620536 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Aug 13 00:46:52.620536 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 00:46:52.620536 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 00:46:52.620536 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 00:46:52.620536 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 00:46:52.620536 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 00:46:52.620536 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 00:46:52.627265 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 00:46:52.627265 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 00:46:52.627265 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 00:46:52.627265 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 00:46:52.627265 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 00:46:52.627265 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Aug 13 00:46:53.019707 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Aug 13 00:46:53.463499 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 00:46:53.463499 ignition[1021]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Aug 13 00:46:53.467094 ignition[1021]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 00:46:53.467094 ignition[1021]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 00:46:53.467094 ignition[1021]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Aug 13 00:46:53.467094 ignition[1021]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Aug 13 00:46:53.467094 ignition[1021]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Aug 13 00:46:53.467094 ignition[1021]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Aug 13 00:46:53.467094 ignition[1021]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Aug 13 00:46:53.467094 ignition[1021]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Aug 13 00:46:53.477912 ignition[1021]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Aug 13 00:46:53.477912 ignition[1021]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 00:46:53.477912 ignition[1021]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 00:46:53.477912 ignition[1021]: INFO : files: files passed
Aug 13 00:46:53.477912 ignition[1021]: INFO : Ignition finished successfully
Aug 13 00:46:53.470787 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 13 00:46:53.475100 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 13 00:46:53.480297 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 13 00:46:53.490571 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 13 00:46:53.492073 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 13 00:46:53.502982 initrd-setup-root-after-ignition[1050]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 00:46:53.502982 initrd-setup-root-after-ignition[1050]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 00:46:53.505898 initrd-setup-root-after-ignition[1054]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 00:46:53.507286 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 00:46:53.509361 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 13 00:46:53.511239 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 13 00:46:53.560130 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 13 00:46:53.560272 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 13 00:46:53.561953 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 13 00:46:53.563236 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 13 00:46:53.564662 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 13 00:46:53.565555 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 13 00:46:53.586698 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 00:46:53.589442 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 13 00:46:53.607654 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 13 00:46:53.608378 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 00:46:53.609749 systemd[1]: Stopped target timers.target - Timer Units.
Aug 13 00:46:53.611030 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 13 00:46:53.611213 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 00:46:53.612602 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 13 00:46:53.613522 systemd[1]: Stopped target basic.target - Basic System.
Aug 13 00:46:53.614761 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 13 00:46:53.616029 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 00:46:53.617212 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 13 00:46:53.618545 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Aug 13 00:46:53.619852 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 13 00:46:53.621112 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 00:46:53.622496 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 13 00:46:53.623724 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 13 00:46:53.625088 systemd[1]: Stopped target swap.target - Swaps.
Aug 13 00:46:53.626240 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 13 00:46:53.626373 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 00:46:53.627820 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 13 00:46:53.628756 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 00:46:53.629838 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 13 00:46:53.629966 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 00:46:53.631194 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 13 00:46:53.631368 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 13 00:46:53.633021 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 13 00:46:53.633170 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 00:46:53.634003 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 13 00:46:53.634181 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 13 00:46:53.637041 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 13 00:46:53.640145 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 13 00:46:53.641035 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 13 00:46:53.642067 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 00:46:53.644510 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 13 00:46:53.644692 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 00:46:53.657567 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 13 00:46:53.658415 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 13 00:46:53.682639 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 13 00:46:53.692883 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 13 00:46:53.693062 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 13 00:46:53.696329 ignition[1074]: INFO : Ignition 2.21.0
Aug 13 00:46:53.697034 ignition[1074]: INFO : Stage: umount
Aug 13 00:46:53.697814 ignition[1074]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 00:46:53.697814 ignition[1074]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 00:46:53.699537 ignition[1074]: INFO : umount: umount passed
Aug 13 00:46:53.699537 ignition[1074]: INFO : Ignition finished successfully
Aug 13 00:46:53.700209 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 13 00:46:53.700331 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 13 00:46:53.701798 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 13 00:46:53.701893 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 13 00:46:53.702865 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 13 00:46:53.702939 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 13 00:46:53.704043 systemd[1]: ignition-fetch.service: Deactivated successfully.
Aug 13 00:46:53.704100 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Aug 13 00:46:53.705156 systemd[1]: Stopped target network.target - Network.
Aug 13 00:46:53.706241 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 13 00:46:53.706305 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 00:46:53.707399 systemd[1]: Stopped target paths.target - Path Units.
Aug 13 00:46:53.708454 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 13 00:46:53.708519 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 00:46:53.709618 systemd[1]: Stopped target slices.target - Slice Units.
Aug 13 00:46:53.710688 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 13 00:46:53.711780 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 13 00:46:53.711833 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 00:46:53.713095 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 13 00:46:53.713145 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 00:46:53.714491 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 13 00:46:53.714552 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 13 00:46:53.715590 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 13 00:46:53.715647 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 13 00:46:53.716824 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 13 00:46:53.716890 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 13 00:46:53.718351 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 13 00:46:53.719717 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 13 00:46:53.726755 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 13 00:46:53.726904 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 13 00:46:53.731714 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Aug 13 00:46:53.731987 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 13 00:46:53.732093 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 13 00:46:53.733915 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Aug 13 00:46:53.734400 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Aug 13 00:46:53.735582 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 13 00:46:53.735621 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 00:46:53.737458 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 13 00:46:53.738858 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 13 00:46:53.738906 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 00:46:53.740913 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 00:46:53.740975 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 13 00:46:53.743029 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 13 00:46:53.743075 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 13 00:46:53.743819 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 13 00:46:53.743862 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 00:46:53.745036 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 00:46:53.748601 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Aug 13 00:46:53.748660 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Aug 13 00:46:53.762344 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 13 00:46:53.762672 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 00:46:53.763990 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 13 00:46:53.764054 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 13 00:46:53.764970 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 13 00:46:53.765006 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 00:46:53.766658 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 13 00:46:53.766705 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 00:46:53.768356 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 13 00:46:53.768400 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 13 00:46:53.769538 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 00:46:53.769585 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:46:53.771541 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 13 00:46:53.773246 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Aug 13 00:46:53.773297 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Aug 13 00:46:53.775713 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 13 00:46:53.775763 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 00:46:53.777008 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Aug 13 00:46:53.777052 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 00:46:53.778837 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 13 00:46:53.778880 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 00:46:53.779673 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 00:46:53.779715 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:46:53.786001 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Aug 13 00:46:53.786054 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Aug 13 00:46:53.786098 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Aug 13 00:46:53.786139 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Aug 13 00:46:53.786485 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 13 00:46:53.786581 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 13 00:46:53.789108 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 13 00:46:53.789215 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 13 00:46:53.791093 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 13 00:46:53.793072 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 13 00:46:53.820110 systemd[1]: Switching root.
Aug 13 00:46:53.852374 systemd-journald[206]: Journal stopped
Aug 13 00:46:54.878953 systemd-journald[206]: Received SIGTERM from PID 1 (systemd).
Aug 13 00:46:54.880962 kernel: SELinux: policy capability network_peer_controls=1
Aug 13 00:46:54.880978 kernel: SELinux: policy capability open_perms=1
Aug 13 00:46:54.881008 kernel: SELinux: policy capability extended_socket_class=1
Aug 13 00:46:54.881017 kernel: SELinux: policy capability always_check_network=0
Aug 13 00:46:54.881026 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 13 00:46:54.881035 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 13 00:46:54.881044 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 13 00:46:54.881053 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 13 00:46:54.881061 kernel: SELinux: policy capability userspace_initial_context=0
Aug 13 00:46:54.881072 kernel: audit: type=1403 audit(1755046014.002:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 13 00:46:54.881083 systemd[1]: Successfully loaded SELinux policy in 55.258ms.
Aug 13 00:46:54.881093 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.833ms.
Aug 13 00:46:54.881104 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 13 00:46:54.881114 systemd[1]: Detected virtualization kvm.
Aug 13 00:46:54.881126 systemd[1]: Detected architecture x86-64.
Aug 13 00:46:54.881135 systemd[1]: Detected first boot.
Aug 13 00:46:54.881145 systemd[1]: Initializing machine ID from random generator.
Aug 13 00:46:54.881154 zram_generator::config[1116]: No configuration found.
Aug 13 00:46:54.881164 kernel: Guest personality initialized and is inactive
Aug 13 00:46:54.881173 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Aug 13 00:46:54.881182 kernel: Initialized host personality
Aug 13 00:46:54.881193 kernel: NET: Registered PF_VSOCK protocol family
Aug 13 00:46:54.881202 systemd[1]: Populated /etc with preset unit settings.
Aug 13 00:46:54.881212 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Aug 13 00:46:54.881222 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Aug 13 00:46:54.881231 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Aug 13 00:46:54.881241 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug 13 00:46:54.881250 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 13 00:46:54.881262 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 13 00:46:54.881272 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 13 00:46:54.881281 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 13 00:46:54.881291 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 13 00:46:54.881301 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 13 00:46:54.881311 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 13 00:46:54.881321 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 13 00:46:54.881332 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 00:46:54.881343 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 00:46:54.881353 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 13 00:46:54.881362 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 13 00:46:54.881375 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 13 00:46:54.881385 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 00:46:54.881395 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Aug 13 00:46:54.881405 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 00:46:54.881416 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 00:46:54.881426 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Aug 13 00:46:54.881436 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Aug 13 00:46:54.881446 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Aug 13 00:46:54.881455 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 13 00:46:54.881465 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 00:46:54.881475 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 00:46:54.881484 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 00:46:54.881496 systemd[1]: Reached target swap.target - Swaps.
Aug 13 00:46:54.881506 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 13 00:46:54.881515 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 13 00:46:54.881525 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Aug 13 00:46:54.881535 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 00:46:54.881547 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 00:46:54.881557 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 00:46:54.881567 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 13 00:46:54.881577 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 13 00:46:54.881586 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 13 00:46:54.881596 systemd[1]: Mounting media.mount - External Media Directory...
Aug 13 00:46:54.881606 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 00:46:54.881616 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 13 00:46:54.881627 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 13 00:46:54.881637 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 13 00:46:54.881647 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 13 00:46:54.881657 systemd[1]: Reached target machines.target - Containers.
Aug 13 00:46:54.881666 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 13 00:46:54.881676 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 00:46:54.881686 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 00:46:54.881696 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 13 00:46:54.881707 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 00:46:54.881717 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 00:46:54.881727 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 00:46:54.881737 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 13 00:46:54.881746 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 00:46:54.881756 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 13 00:46:54.881768 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Aug 13 00:46:54.881777 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Aug 13 00:46:54.881787 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Aug 13 00:46:54.881799 systemd[1]: Stopped systemd-fsck-usr.service.
Aug 13 00:46:54.881809 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 13 00:46:54.881819 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 00:46:54.881828 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 00:46:54.881838 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 13 00:46:54.881848 kernel: fuse: init (API version 7.41)
Aug 13 00:46:54.881857 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 13 00:46:54.881867 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Aug 13 00:46:54.881879 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 00:46:54.881889 systemd[1]: verity-setup.service: Deactivated successfully.
Aug 13 00:46:54.881901 systemd[1]: Stopped verity-setup.service.
Aug 13 00:46:54.881911 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 00:46:54.883940 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 13 00:46:54.883973 kernel: loop: module loaded
Aug 13 00:46:54.883983 kernel: ACPI: bus type drm_connector registered
Aug 13 00:46:54.883994 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 13 00:46:54.884029 systemd-journald[1207]: Collecting audit messages is disabled.
Aug 13 00:46:54.884049 systemd[1]: Mounted media.mount - External Media Directory.
Aug 13 00:46:54.884061 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 13 00:46:54.884071 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 13 00:46:54.884082 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 13 00:46:54.884094 systemd-journald[1207]: Journal started
Aug 13 00:46:54.884113 systemd-journald[1207]: Runtime Journal (/run/log/journal/e621d23a0ee74666b27b6acadfe262ef) is 8M, max 78.5M, 70.5M free.
Aug 13 00:46:54.542023 systemd[1]: Queued start job for default target multi-user.target.
Aug 13 00:46:54.557488 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Aug 13 00:46:54.558119 systemd[1]: systemd-journald.service: Deactivated successfully.
Aug 13 00:46:54.886573 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 00:46:54.889618 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 13 00:46:54.890521 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 00:46:54.891494 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 13 00:46:54.891803 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 13 00:46:54.892680 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 00:46:54.893039 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 00:46:54.893885 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 00:46:54.894189 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 00:46:54.895230 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 00:46:54.895485 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 00:46:54.896443 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 13 00:46:54.896631 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 13 00:46:54.897513 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 00:46:54.897786 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 00:46:54.898902 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 00:46:54.900266 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 13 00:46:54.901233 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 13 00:46:54.902187 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Aug 13 00:46:54.916763 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 13 00:46:54.920020 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 13 00:46:54.923011 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 13 00:46:54.923632 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 13 00:46:54.923660 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 00:46:54.926651 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Aug 13 00:46:54.936029 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 13 00:46:54.936652 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 00:46:54.939028 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 13 00:46:54.942656 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 13 00:46:54.943258 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 00:46:54.946028 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 13 00:46:54.946605 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 00:46:54.950849 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 00:46:54.953218 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug 13 00:46:54.963300 systemd-journald[1207]: Time spent on flushing to /var/log/journal/e621d23a0ee74666b27b6acadfe262ef is 30.758ms for 1002 entries.
Aug 13 00:46:54.963300 systemd-journald[1207]: System Journal (/var/log/journal/e621d23a0ee74666b27b6acadfe262ef) is 8M, max 195.6M, 187.6M free.
Aug 13 00:46:54.998896 systemd-journald[1207]: Received client request to flush runtime journal.
Aug 13 00:46:54.958038 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 00:46:54.961649 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 13 00:46:54.962652 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 13 00:46:54.970798 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 13 00:46:54.975600 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 13 00:46:54.982068 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Aug 13 00:46:55.007939 kernel: loop0: detected capacity change from 0 to 221472
Aug 13 00:46:55.004479 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 13 00:46:55.022466 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Aug 13 00:46:55.038183 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 13 00:46:55.046057 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 00:46:55.068310 kernel: loop1: detected capacity change from 0 to 113872
Aug 13 00:46:55.070747 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 00:46:55.076334 systemd-tmpfiles[1242]: ACLs are not supported, ignoring.
Aug 13 00:46:55.076886 systemd-tmpfiles[1242]: ACLs are not supported, ignoring.
Aug 13 00:46:55.088805 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 00:46:55.093126 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 13 00:46:55.105393 kernel: loop2: detected capacity change from 0 to 8
Aug 13 00:46:55.121947 kernel: loop3: detected capacity change from 0 to 146240
Aug 13 00:46:55.154039 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 13 00:46:55.157301 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 00:46:55.169947 kernel: loop4: detected capacity change from 0 to 221472
Aug 13 00:46:55.181236 systemd-tmpfiles[1266]: ACLs are not supported, ignoring.
Aug 13 00:46:55.181254 systemd-tmpfiles[1266]: ACLs are not supported, ignoring.
Aug 13 00:46:55.186531 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 00:46:55.195983 kernel: loop5: detected capacity change from 0 to 113872 Aug 13 00:46:55.214058 kernel: loop6: detected capacity change from 0 to 8 Aug 13 00:46:55.214103 kernel: loop7: detected capacity change from 0 to 146240 Aug 13 00:46:55.232686 (sd-merge)[1267]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'. Aug 13 00:46:55.233844 (sd-merge)[1267]: Merged extensions into '/usr'. Aug 13 00:46:55.238397 systemd[1]: Reload requested from client PID 1241 ('systemd-sysext') (unit systemd-sysext.service)... Aug 13 00:46:55.238483 systemd[1]: Reloading... Aug 13 00:46:55.338968 zram_generator::config[1291]: No configuration found. Aug 13 00:46:55.476096 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:46:55.495455 ldconfig[1236]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 00:46:55.545267 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 00:46:55.545820 systemd[1]: Reloading finished in 306 ms. Aug 13 00:46:55.561221 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 13 00:46:55.562534 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 13 00:46:55.575052 systemd[1]: Starting ensure-sysext.service... Aug 13 00:46:55.578043 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 00:46:55.604989 systemd[1]: Reload requested from client PID 1338 ('systemctl') (unit ensure-sysext.service)... Aug 13 00:46:55.605073 systemd[1]: Reloading... Aug 13 00:46:55.622868 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. 
Aug 13 00:46:55.624212 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Aug 13 00:46:55.624498 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 00:46:55.624724 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 13 00:46:55.626396 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 00:46:55.626709 systemd-tmpfiles[1339]: ACLs are not supported, ignoring. Aug 13 00:46:55.626876 systemd-tmpfiles[1339]: ACLs are not supported, ignoring. Aug 13 00:46:55.634624 systemd-tmpfiles[1339]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 00:46:55.634675 systemd-tmpfiles[1339]: Skipping /boot Aug 13 00:46:55.649549 systemd-tmpfiles[1339]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 00:46:55.649567 systemd-tmpfiles[1339]: Skipping /boot Aug 13 00:46:55.686969 zram_generator::config[1366]: No configuration found. Aug 13 00:46:55.778495 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:46:55.848132 systemd[1]: Reloading finished in 242 ms. Aug 13 00:46:55.871775 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 13 00:46:55.879729 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:46:55.888172 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 13 00:46:55.891075 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 13 00:46:55.899469 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Aug 13 00:46:55.902060 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 00:46:55.905061 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 00:46:55.908352 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 13 00:46:55.911455 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:46:55.911880 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:46:55.914159 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:46:55.918647 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:46:55.922157 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:46:55.922777 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:46:55.922860 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 00:46:55.922968 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:46:55.928724 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 13 00:46:55.931789 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:46:55.931975 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Aug 13 00:46:55.932119 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:46:55.932190 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 00:46:55.932260 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:46:55.935772 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:46:55.936150 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:46:55.942168 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 00:46:55.943396 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:46:55.943468 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 00:46:55.943561 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:46:55.952803 systemd[1]: Finished ensure-sysext.service. Aug 13 00:46:55.957466 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Aug 13 00:46:55.958334 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:46:55.958551 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Aug 13 00:46:55.971110 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 13 00:46:55.977257 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 13 00:46:55.978744 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:46:55.979277 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:46:55.980743 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:46:55.981117 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 00:46:55.984563 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 13 00:46:55.988464 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:46:55.988669 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:46:55.988725 systemd-udevd[1416]: Using default interface naming scheme 'v255'. Aug 13 00:46:55.995404 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:46:55.995470 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 00:46:56.020736 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 13 00:46:56.035042 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 13 00:46:56.036796 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:46:56.039153 augenrules[1452]: No rules Aug 13 00:46:56.040445 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 00:46:56.040666 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Aug 13 00:46:56.043592 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 13 00:46:56.045433 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:46:56.051042 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 00:46:56.174982 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Aug 13 00:46:56.248165 systemd-resolved[1414]: Positive Trust Anchors: Aug 13 00:46:56.248185 systemd-resolved[1414]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:46:56.248213 systemd-resolved[1414]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 00:46:56.255739 systemd-resolved[1414]: Defaulting to hostname 'linux'. Aug 13 00:46:56.258358 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 00:46:56.259710 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 00:46:56.262674 systemd-networkd[1463]: lo: Link UP Aug 13 00:46:56.262894 systemd-networkd[1463]: lo: Gained carrier Aug 13 00:46:56.264524 systemd-networkd[1463]: Enumeration completed Aug 13 00:46:56.264656 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 00:46:56.265016 systemd-networkd[1463]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Aug 13 00:46:56.265084 systemd-networkd[1463]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:46:56.265725 systemd[1]: Reached target network.target - Network. Aug 13 00:46:56.265942 systemd-networkd[1463]: eth0: Link UP Aug 13 00:46:56.266153 systemd-networkd[1463]: eth0: Gained carrier Aug 13 00:46:56.266209 systemd-networkd[1463]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:46:56.269084 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Aug 13 00:46:56.269972 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 00:46:56.274873 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 13 00:46:56.286520 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Aug 13 00:46:56.288074 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 00:46:56.288671 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 13 00:46:56.289664 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 13 00:46:56.290704 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Aug 13 00:46:56.291678 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 13 00:46:56.292888 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 00:46:56.292935 systemd[1]: Reached target paths.target - Path Units. Aug 13 00:46:56.293821 systemd[1]: Reached target time-set.target - System Time Set. Aug 13 00:46:56.294881 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
Aug 13 00:46:56.296056 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 13 00:46:56.297977 systemd[1]: Reached target timers.target - Timer Units. Aug 13 00:46:56.300142 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 13 00:46:56.303540 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 13 00:46:56.309363 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Aug 13 00:46:56.311372 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Aug 13 00:46:56.312521 systemd[1]: Reached target ssh-access.target - SSH Access Available. Aug 13 00:46:56.322154 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 13 00:46:56.324382 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Aug 13 00:46:56.325876 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 13 00:46:56.328726 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Aug 13 00:46:56.332126 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 00:46:56.333494 systemd[1]: Reached target basic.target - Basic System. Aug 13 00:46:56.335093 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 13 00:46:56.335135 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 13 00:46:56.337424 systemd[1]: Starting containerd.service - containerd container runtime... Aug 13 00:46:56.341367 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Aug 13 00:46:56.344556 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 13 00:46:56.373758 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Aug 13 00:46:56.375231 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 13 00:46:56.380035 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 13 00:46:56.381033 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 13 00:46:56.384099 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Aug 13 00:46:56.392128 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 13 00:46:56.401052 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 13 00:46:56.405130 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 13 00:46:56.411594 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 13 00:46:56.428541 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 13 00:46:56.429812 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 00:46:56.430444 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 13 00:46:56.432089 systemd[1]: Starting update-engine.service - Update Engine... Aug 13 00:46:56.436141 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 13 00:46:56.441727 jq[1512]: false Aug 13 00:46:56.443755 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 13 00:46:56.449828 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 00:46:56.450879 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 13 00:46:56.451244 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Aug 13 00:46:56.451475 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 13 00:46:56.455304 google_oslogin_nss_cache[1514]: oslogin_cache_refresh[1514]: Refreshing passwd entry cache Aug 13 00:46:56.455147 oslogin_cache_refresh[1514]: Refreshing passwd entry cache Aug 13 00:46:56.468001 google_oslogin_nss_cache[1514]: oslogin_cache_refresh[1514]: Failure getting users, quitting Aug 13 00:46:56.468001 google_oslogin_nss_cache[1514]: oslogin_cache_refresh[1514]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Aug 13 00:46:56.468001 google_oslogin_nss_cache[1514]: oslogin_cache_refresh[1514]: Refreshing group entry cache Aug 13 00:46:56.465587 oslogin_cache_refresh[1514]: Failure getting users, quitting Aug 13 00:46:56.465601 oslogin_cache_refresh[1514]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Aug 13 00:46:56.465640 oslogin_cache_refresh[1514]: Refreshing group entry cache Aug 13 00:46:56.473021 google_oslogin_nss_cache[1514]: oslogin_cache_refresh[1514]: Failure getting groups, quitting Aug 13 00:46:56.473021 google_oslogin_nss_cache[1514]: oslogin_cache_refresh[1514]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Aug 13 00:46:56.472169 oslogin_cache_refresh[1514]: Failure getting groups, quitting Aug 13 00:46:56.472179 oslogin_cache_refresh[1514]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Aug 13 00:46:56.475341 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Aug 13 00:46:56.477245 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Aug 13 00:46:56.490498 jq[1524]: true Aug 13 00:46:56.495346 update_engine[1523]: I20250813 00:46:56.495288 1523 main.cc:92] Flatcar Update Engine starting Aug 13 00:46:56.520617 extend-filesystems[1513]: Found /dev/sda6 Aug 13 00:46:56.521731 systemd[1]: motdgen.service: Deactivated successfully. 
Aug 13 00:46:56.523434 (ntainerd)[1542]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 13 00:46:56.540110 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 13 00:46:56.557943 jq[1546]: true Aug 13 00:46:56.559160 extend-filesystems[1513]: Found /dev/sda9 Aug 13 00:46:56.562783 tar[1533]: linux-amd64/helm Aug 13 00:46:56.571536 extend-filesystems[1513]: Checking size of /dev/sda9 Aug 13 00:46:56.584117 dbus-daemon[1510]: [system] SELinux support is enabled Aug 13 00:46:56.584261 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 13 00:46:56.588396 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 00:46:56.588419 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 13 00:46:56.590970 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 00:46:56.590990 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 13 00:46:56.632823 systemd[1]: Started update-engine.service - Update Engine. Aug 13 00:46:56.635659 update_engine[1523]: I20250813 00:46:56.635102 1523 update_check_scheduler.cc:74] Next update check in 4m22s Aug 13 00:46:56.636117 extend-filesystems[1513]: Resized partition /dev/sda9 Aug 13 00:46:56.638177 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Aug 13 00:46:56.647861 extend-filesystems[1565]: resize2fs 1.47.2 (1-Jan-2025) Aug 13 00:46:56.663993 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 555003 blocks Aug 13 00:46:56.669873 bash[1580]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:46:56.670081 kernel: EXT4-fs (sda9): resized filesystem to 555003 Aug 13 00:46:56.671954 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 13 00:46:56.682116 systemd[1]: Starting sshkeys.service... Aug 13 00:46:56.694032 coreos-metadata[1509]: Aug 13 00:46:56.685 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Aug 13 00:46:56.694377 extend-filesystems[1565]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Aug 13 00:46:56.694377 extend-filesystems[1565]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 13 00:46:56.694377 extend-filesystems[1565]: The filesystem on /dev/sda9 is now 555003 (4k) blocks long. Aug 13 00:46:56.696702 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 00:46:56.703037 extend-filesystems[1513]: Resized filesystem in /dev/sda9 Aug 13 00:46:56.698361 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 13 00:46:56.707895 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Aug 13 00:46:56.712595 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Aug 13 00:46:56.716745 systemd-logind[1522]: New seat seat0. Aug 13 00:46:56.722213 systemd[1]: Started systemd-logind.service - User Login Management. 
Aug 13 00:46:56.732969 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Aug 13 00:46:56.828009 systemd-networkd[1463]: eth0: DHCPv4 address 172.234.29.69/24, gateway 172.234.29.1 acquired from 23.40.197.137 Aug 13 00:46:56.829772 dbus-daemon[1510]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.3' (uid=244 pid=1463 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Aug 13 00:46:56.830692 systemd-timesyncd[1429]: Network configuration changed, trying to establish connection. Aug 13 00:46:56.833954 kernel: ACPI: button: Power Button [PWRF] Aug 13 00:46:56.840073 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Aug 13 00:46:56.841448 containerd[1542]: time="2025-08-13T00:46:56Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Aug 13 00:46:56.851386 containerd[1542]: time="2025-08-13T00:46:56.851337784Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Aug 13 00:46:56.877257 coreos-metadata[1591]: Aug 13 00:46:56.877 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Aug 13 00:46:56.887350 sshd_keygen[1550]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 00:46:56.909214 containerd[1542]: time="2025-08-13T00:46:56.909168765Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.26µs" Aug 13 00:46:56.909214 containerd[1542]: time="2025-08-13T00:46:56.909206255Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Aug 13 00:46:56.909278 containerd[1542]: time="2025-08-13T00:46:56.909225885Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt 
type=io.containerd.internal.v1 Aug 13 00:46:56.909404 containerd[1542]: time="2025-08-13T00:46:56.909376475Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Aug 13 00:46:56.909404 containerd[1542]: time="2025-08-13T00:46:56.909401265Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Aug 13 00:46:56.909451 containerd[1542]: time="2025-08-13T00:46:56.909425305Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 13 00:46:56.909518 containerd[1542]: time="2025-08-13T00:46:56.909492644Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 13 00:46:56.909518 containerd[1542]: time="2025-08-13T00:46:56.909512834Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 13 00:46:56.909751 containerd[1542]: time="2025-08-13T00:46:56.909720044Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 13 00:46:56.909751 containerd[1542]: time="2025-08-13T00:46:56.909741774Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 13 00:46:56.909794 containerd[1542]: time="2025-08-13T00:46:56.909752254Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 13 00:46:56.909794 containerd[1542]: time="2025-08-13T00:46:56.909760794Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Aug 13 00:46:56.909876 containerd[1542]: 
time="2025-08-13T00:46:56.909849684Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Aug 13 00:46:56.912815 containerd[1542]: time="2025-08-13T00:46:56.912779053Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 13 00:46:56.912865 containerd[1542]: time="2025-08-13T00:46:56.912823413Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 13 00:46:56.912865 containerd[1542]: time="2025-08-13T00:46:56.912834093Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Aug 13 00:46:56.912995 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Aug 13 00:46:56.917534 containerd[1542]: time="2025-08-13T00:46:56.917499940Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Aug 13 00:46:56.918006 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Aug 13 00:46:56.920305 containerd[1542]: time="2025-08-13T00:46:56.920269579Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Aug 13 00:46:56.920382 containerd[1542]: time="2025-08-13T00:46:56.920356149Z" level=info msg="metadata content store policy set" policy=shared Aug 13 00:46:56.926708 containerd[1542]: time="2025-08-13T00:46:56.926672596Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Aug 13 00:46:56.926754 containerd[1542]: time="2025-08-13T00:46:56.926736216Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Aug 13 00:46:56.926754 containerd[1542]: time="2025-08-13T00:46:56.926752496Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 
Aug 13 00:46:56.926824 containerd[1542]: time="2025-08-13T00:46:56.926770206Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Aug 13 00:46:56.926850 containerd[1542]: time="2025-08-13T00:46:56.926823156Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Aug 13 00:46:56.926850 containerd[1542]: time="2025-08-13T00:46:56.926837176Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Aug 13 00:46:56.926883 containerd[1542]: time="2025-08-13T00:46:56.926850756Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Aug 13 00:46:56.926883 containerd[1542]: time="2025-08-13T00:46:56.926863676Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Aug 13 00:46:56.926883 containerd[1542]: time="2025-08-13T00:46:56.926873686Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Aug 13 00:46:56.926883 containerd[1542]: time="2025-08-13T00:46:56.926883186Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Aug 13 00:46:56.926996 containerd[1542]: time="2025-08-13T00:46:56.926892606Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Aug 13 00:46:56.926996 containerd[1542]: time="2025-08-13T00:46:56.926905306Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Aug 13 00:46:56.927429 containerd[1542]: time="2025-08-13T00:46:56.927030736Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Aug 13 00:46:56.927429 containerd[1542]: time="2025-08-13T00:46:56.927056096Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Aug 13 00:46:56.927429 containerd[1542]: time="2025-08-13T00:46:56.927070856Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Aug 13 00:46:56.927429 containerd[1542]: time="2025-08-13T00:46:56.927081666Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Aug 13 00:46:56.927429 containerd[1542]: time="2025-08-13T00:46:56.927092756Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Aug 13 00:46:56.927429 containerd[1542]: time="2025-08-13T00:46:56.927108666Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Aug 13 00:46:56.927429 containerd[1542]: time="2025-08-13T00:46:56.927119476Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Aug 13 00:46:56.927429 containerd[1542]: time="2025-08-13T00:46:56.927128816Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Aug 13 00:46:56.927429 containerd[1542]: time="2025-08-13T00:46:56.927142256Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Aug 13 00:46:56.927429 containerd[1542]: time="2025-08-13T00:46:56.927156396Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Aug 13 00:46:56.927429 containerd[1542]: time="2025-08-13T00:46:56.927167446Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Aug 13 00:46:56.927429 containerd[1542]: time="2025-08-13T00:46:56.927226396Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Aug 13 00:46:56.927429 containerd[1542]: time="2025-08-13T00:46:56.927241266Z" level=info msg="Start snapshots syncer"
Aug 13 00:46:56.927429 containerd[1542]: time="2025-08-13T00:46:56.927274256Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Aug 13 00:46:56.927683 containerd[1542]: time="2025-08-13T00:46:56.927523835Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Aug 13 00:46:56.927683 containerd[1542]: time="2025-08-13T00:46:56.927581875Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Aug 13 00:46:56.929213 containerd[1542]: time="2025-08-13T00:46:56.929167385Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Aug 13 00:46:56.929334 containerd[1542]: time="2025-08-13T00:46:56.929305775Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Aug 13 00:46:56.929361 containerd[1542]: time="2025-08-13T00:46:56.929335755Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Aug 13 00:46:56.929361 containerd[1542]: time="2025-08-13T00:46:56.929347915Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Aug 13 00:46:56.929361 containerd[1542]: time="2025-08-13T00:46:56.929357815Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Aug 13 00:46:56.929422 containerd[1542]: time="2025-08-13T00:46:56.929369065Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Aug 13 00:46:56.929422 containerd[1542]: time="2025-08-13T00:46:56.929379685Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Aug 13 00:46:56.929422 containerd[1542]: time="2025-08-13T00:46:56.929389045Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Aug 13 00:46:56.929422 containerd[1542]: time="2025-08-13T00:46:56.929411175Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Aug 13 00:46:56.929422 containerd[1542]: time="2025-08-13T00:46:56.929421025Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Aug 13 00:46:56.929502 containerd[1542]: time="2025-08-13T00:46:56.929430724Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Aug 13 00:46:56.930170 containerd[1542]: time="2025-08-13T00:46:56.930139884Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Aug 13 00:46:56.930170 containerd[1542]: time="2025-08-13T00:46:56.930166984Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Aug 13 00:46:56.930244 containerd[1542]: time="2025-08-13T00:46:56.930176404Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Aug 13 00:46:56.930244 containerd[1542]: time="2025-08-13T00:46:56.930241194Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Aug 13 00:46:56.930282 containerd[1542]: time="2025-08-13T00:46:56.930249354Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Aug 13 00:46:56.930282 containerd[1542]: time="2025-08-13T00:46:56.930264724Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Aug 13 00:46:56.930282 containerd[1542]: time="2025-08-13T00:46:56.930275694Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Aug 13 00:46:56.930341 containerd[1542]: time="2025-08-13T00:46:56.930293634Z" level=info msg="runtime interface created"
Aug 13 00:46:56.930341 containerd[1542]: time="2025-08-13T00:46:56.930299194Z" level=info msg="created NRI interface"
Aug 13 00:46:56.930341 containerd[1542]: time="2025-08-13T00:46:56.930306774Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Aug 13 00:46:56.930341 containerd[1542]: time="2025-08-13T00:46:56.930318674Z" level=info msg="Connect containerd service"
Aug 13 00:46:56.930406 containerd[1542]: time="2025-08-13T00:46:56.930344854Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Aug 13 00:46:56.937951 containerd[1542]: time="2025-08-13T00:46:56.937666000Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 13 00:46:56.968016 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Aug 13 00:46:56.975101 systemd[1]: Starting issuegen.service - Generate /run/issue...
Aug 13 00:46:57.014193 kernel: EDAC MC: Ver: 3.0.0
Aug 13 00:46:57.022156 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:46:58.249796 systemd-resolved[1414]: Clock change detected. Flushing caches.
Aug 13 00:46:58.252503 systemd-timesyncd[1429]: Contacted time server 137.110.222.27:123 (0.flatcar.pool.ntp.org).
Aug 13 00:46:58.252568 systemd-timesyncd[1429]: Initial clock synchronization to Wed 2025-08-13 00:46:58.249451 UTC.
Aug 13 00:46:58.257906 locksmithd[1564]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Aug 13 00:46:58.258428 coreos-metadata[1591]: Aug 13 00:46:58.257 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1
Aug 13 00:46:58.263044 systemd[1]: issuegen.service: Deactivated successfully.
Aug 13 00:46:58.263313 systemd[1]: Finished issuegen.service - Generate /run/issue.
Aug 13 00:46:58.268422 systemd-logind[1522]: Watching system buttons on /dev/input/event2 (Power Button)
Aug 13 00:46:58.268513 systemd-logind[1522]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Aug 13 00:46:58.272125 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Aug 13 00:46:58.286916 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Aug 13 00:46:58.291772 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Aug 13 00:46:58.322226 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Aug 13 00:46:58.342800 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Aug 13 00:46:58.346001 systemd[1]: Started getty@tty1.service - Getty on tty1.
Aug 13 00:46:58.350467 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Aug 13 00:46:58.351163 systemd[1]: Reached target getty.target - Login Prompts.
Aug 13 00:46:58.379111 containerd[1542]: time="2025-08-13T00:46:58.379068546Z" level=info msg="Start subscribing containerd event"
Aug 13 00:46:58.379286 containerd[1542]: time="2025-08-13T00:46:58.379249436Z" level=info msg="Start recovering state"
Aug 13 00:46:58.383188 containerd[1542]: time="2025-08-13T00:46:58.381171205Z" level=info msg="Start event monitor"
Aug 13 00:46:58.383188 containerd[1542]: time="2025-08-13T00:46:58.381293105Z" level=info msg="Start cni network conf syncer for default"
Aug 13 00:46:58.383188 containerd[1542]: time="2025-08-13T00:46:58.381427795Z" level=info msg="Start streaming server"
Aug 13 00:46:58.383188 containerd[1542]: time="2025-08-13T00:46:58.381406845Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Aug 13 00:46:58.383188 containerd[1542]: time="2025-08-13T00:46:58.381518305Z" level=info msg=serving... address=/run/containerd/containerd.sock
Aug 13 00:46:58.383188 containerd[1542]: time="2025-08-13T00:46:58.381547925Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Aug 13 00:46:58.383188 containerd[1542]: time="2025-08-13T00:46:58.381677375Z" level=info msg="runtime interface starting up..."
Aug 13 00:46:58.383188 containerd[1542]: time="2025-08-13T00:46:58.381683465Z" level=info msg="starting plugins..."
Aug 13 00:46:58.383188 containerd[1542]: time="2025-08-13T00:46:58.382165084Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Aug 13 00:46:58.383206 systemd[1]: Started containerd.service - containerd container runtime.
Aug 13 00:46:58.385415 containerd[1542]: time="2025-08-13T00:46:58.384178133Z" level=info msg="containerd successfully booted in 0.331073s"
Aug 13 00:46:58.419734 coreos-metadata[1591]: Aug 13 00:46:58.419 INFO Fetch successful
Aug 13 00:46:58.588025 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Aug 13 00:46:58.588337 dbus-daemon[1510]: [system] Successfully activated service 'org.freedesktop.hostname1'
Aug 13 00:46:58.590892 dbus-daemon[1510]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1598 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Aug 13 00:46:58.592369 update-ssh-keys[1653]: Updated "/home/core/.ssh/authorized_keys"
Aug 13 00:46:58.603038 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Aug 13 00:46:58.605228 systemd[1]: Finished sshkeys.service.
Aug 13 00:46:58.617738 systemd[1]: Starting polkit.service - Authorization Manager...
Aug 13 00:46:58.714426 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:46:58.741625 polkitd[1657]: Started polkitd version 126
Aug 13 00:46:58.746042 polkitd[1657]: Loading rules from directory /etc/polkit-1/rules.d
Aug 13 00:46:58.746391 polkitd[1657]: Loading rules from directory /run/polkit-1/rules.d
Aug 13 00:46:58.746472 polkitd[1657]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Aug 13 00:46:58.746726 polkitd[1657]: Loading rules from directory /usr/local/share/polkit-1/rules.d
Aug 13 00:46:58.746783 polkitd[1657]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Aug 13 00:46:58.746869 polkitd[1657]: Loading rules from directory /usr/share/polkit-1/rules.d
Aug 13 00:46:58.747468 polkitd[1657]: Finished loading, compiling and executing 2 rules
Aug 13 00:46:58.747872 systemd[1]: Started polkit.service - Authorization Manager.
Aug 13 00:46:58.748629 dbus-daemon[1510]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Aug 13 00:46:58.749411 polkitd[1657]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Aug 13 00:46:58.759675 systemd-hostnamed[1598]: Hostname set to <172-234-29-69> (transient)
Aug 13 00:46:58.759729 systemd-resolved[1414]: System hostname changed to '172-234-29-69'.
Aug 13 00:46:58.780022 tar[1533]: linux-amd64/LICENSE
Aug 13 00:46:58.780022 tar[1533]: linux-amd64/README.md
Aug 13 00:46:58.795885 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Aug 13 00:46:58.907794 coreos-metadata[1509]: Aug 13 00:46:58.907 INFO Putting http://169.254.169.254/v1/token: Attempt #2
Aug 13 00:46:59.014438 coreos-metadata[1509]: Aug 13 00:46:59.014 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1
Aug 13 00:46:59.224475 systemd-networkd[1463]: eth0: Gained IPv6LL
Aug 13 00:46:59.227202 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Aug 13 00:46:59.229080 coreos-metadata[1509]: Aug 13 00:46:59.229 INFO Fetch successful
Aug 13 00:46:59.229380 coreos-metadata[1509]: Aug 13 00:46:59.229 INFO Fetching http://169.254.169.254/v1/network: Attempt #1
Aug 13 00:46:59.229722 systemd[1]: Reached target network-online.target - Network is Online.
Aug 13 00:46:59.233674 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 00:46:59.236509 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Aug 13 00:46:59.258507 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Aug 13 00:46:59.515824 coreos-metadata[1509]: Aug 13 00:46:59.515 INFO Fetch successful
Aug 13 00:46:59.608142 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Aug 13 00:46:59.609921 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Aug 13 00:47:00.097736 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 00:47:00.098891 systemd[1]: Reached target multi-user.target - Multi-User System.
Aug 13 00:47:00.100279 systemd[1]: Startup finished in 2.811s (kernel) + 7.348s (initrd) + 4.938s (userspace) = 15.098s.
Aug 13 00:47:00.142150 (kubelet)[1708]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 00:47:00.627814 kubelet[1708]: E0813 00:47:00.627719 1708 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 00:47:00.632359 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 00:47:00.632585 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 00:47:00.633041 systemd[1]: kubelet.service: Consumed 841ms CPU time, 263.6M memory peak.
Aug 13 00:47:01.673979 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Aug 13 00:47:01.675527 systemd[1]: Started sshd@0-172.234.29.69:22-147.75.109.163:35020.service - OpenSSH per-connection server daemon (147.75.109.163:35020).
Aug 13 00:47:02.017397 sshd[1720]: Accepted publickey for core from 147.75.109.163 port 35020 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:47:02.019490 sshd-session[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:47:02.026493 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Aug 13 00:47:02.027966 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Aug 13 00:47:02.036281 systemd-logind[1522]: New session 1 of user core.
Aug 13 00:47:02.050318 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Aug 13 00:47:02.053772 systemd[1]: Starting user@500.service - User Manager for UID 500...
Aug 13 00:47:02.066576 (systemd)[1724]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:47:02.068703 systemd-logind[1522]: New session c1 of user core.
Aug 13 00:47:02.204329 systemd[1724]: Queued start job for default target default.target.
Aug 13 00:47:02.216500 systemd[1724]: Created slice app.slice - User Application Slice.
Aug 13 00:47:02.216528 systemd[1724]: Reached target paths.target - Paths.
Aug 13 00:47:02.216569 systemd[1724]: Reached target timers.target - Timers.
Aug 13 00:47:02.218337 systemd[1724]: Starting dbus.socket - D-Bus User Message Bus Socket...
Aug 13 00:47:02.229259 systemd[1724]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Aug 13 00:47:02.229391 systemd[1724]: Reached target sockets.target - Sockets.
Aug 13 00:47:02.229427 systemd[1724]: Reached target basic.target - Basic System.
Aug 13 00:47:02.229465 systemd[1724]: Reached target default.target - Main User Target.
Aug 13 00:47:02.229493 systemd[1724]: Startup finished in 155ms.
Aug 13 00:47:02.229653 systemd[1]: Started user@500.service - User Manager for UID 500.
Aug 13 00:47:02.231987 systemd[1]: Started session-1.scope - Session 1 of User core.
Aug 13 00:47:02.492521 systemd[1]: Started sshd@1-172.234.29.69:22-147.75.109.163:35028.service - OpenSSH per-connection server daemon (147.75.109.163:35028).
Aug 13 00:47:02.841179 sshd[1735]: Accepted publickey for core from 147.75.109.163 port 35028 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:47:02.842871 sshd-session[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:47:02.849611 systemd-logind[1522]: New session 2 of user core.
Aug 13 00:47:02.855411 systemd[1]: Started session-2.scope - Session 2 of User core.
Aug 13 00:47:03.087794 sshd[1737]: Connection closed by 147.75.109.163 port 35028
Aug 13 00:47:03.088717 sshd-session[1735]: pam_unix(sshd:session): session closed for user core
Aug 13 00:47:03.093402 systemd[1]: sshd@1-172.234.29.69:22-147.75.109.163:35028.service: Deactivated successfully.
Aug 13 00:47:03.096113 systemd[1]: session-2.scope: Deactivated successfully.
Aug 13 00:47:03.098664 systemd-logind[1522]: Session 2 logged out. Waiting for processes to exit.
Aug 13 00:47:03.099865 systemd-logind[1522]: Removed session 2.
Aug 13 00:47:03.147973 systemd[1]: Started sshd@2-172.234.29.69:22-147.75.109.163:35032.service - OpenSSH per-connection server daemon (147.75.109.163:35032).
Aug 13 00:47:03.501850 sshd[1743]: Accepted publickey for core from 147.75.109.163 port 35032 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:47:03.504041 sshd-session[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:47:03.511463 systemd-logind[1522]: New session 3 of user core.
Aug 13 00:47:03.516436 systemd[1]: Started session-3.scope - Session 3 of User core.
Aug 13 00:47:03.743315 sshd[1745]: Connection closed by 147.75.109.163 port 35032
Aug 13 00:47:03.744566 sshd-session[1743]: pam_unix(sshd:session): session closed for user core
Aug 13 00:47:03.749630 systemd[1]: sshd@2-172.234.29.69:22-147.75.109.163:35032.service: Deactivated successfully.
Aug 13 00:47:03.751497 systemd[1]: session-3.scope: Deactivated successfully.
Aug 13 00:47:03.752568 systemd-logind[1522]: Session 3 logged out. Waiting for processes to exit.
Aug 13 00:47:03.753727 systemd-logind[1522]: Removed session 3.
Aug 13 00:47:03.808011 systemd[1]: Started sshd@3-172.234.29.69:22-147.75.109.163:35034.service - OpenSSH per-connection server daemon (147.75.109.163:35034).
Aug 13 00:47:04.148050 sshd[1751]: Accepted publickey for core from 147.75.109.163 port 35034 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:47:04.150411 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:47:04.156870 systemd-logind[1522]: New session 4 of user core.
Aug 13 00:47:04.162502 systemd[1]: Started session-4.scope - Session 4 of User core.
Aug 13 00:47:04.395221 sshd[1753]: Connection closed by 147.75.109.163 port 35034
Aug 13 00:47:04.395909 sshd-session[1751]: pam_unix(sshd:session): session closed for user core
Aug 13 00:47:04.400974 systemd[1]: sshd@3-172.234.29.69:22-147.75.109.163:35034.service: Deactivated successfully.
Aug 13 00:47:04.406161 systemd[1]: session-4.scope: Deactivated successfully.
Aug 13 00:47:04.407579 systemd-logind[1522]: Session 4 logged out. Waiting for processes to exit.
Aug 13 00:47:04.408814 systemd-logind[1522]: Removed session 4.
Aug 13 00:47:04.455111 systemd[1]: Started sshd@4-172.234.29.69:22-147.75.109.163:35038.service - OpenSSH per-connection server daemon (147.75.109.163:35038).
Aug 13 00:47:04.785640 sshd[1759]: Accepted publickey for core from 147.75.109.163 port 35038 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:47:04.787357 sshd-session[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:47:04.792712 systemd-logind[1522]: New session 5 of user core.
Aug 13 00:47:04.797441 systemd[1]: Started session-5.scope - Session 5 of User core.
Aug 13 00:47:04.992384 sudo[1762]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Aug 13 00:47:04.992684 sudo[1762]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 00:47:05.011538 sudo[1762]: pam_unix(sudo:session): session closed for user root
Aug 13 00:47:05.061741 sshd[1761]: Connection closed by 147.75.109.163 port 35038
Aug 13 00:47:05.062942 sshd-session[1759]: pam_unix(sshd:session): session closed for user core
Aug 13 00:47:05.068154 systemd[1]: sshd@4-172.234.29.69:22-147.75.109.163:35038.service: Deactivated successfully.
Aug 13 00:47:05.070056 systemd[1]: session-5.scope: Deactivated successfully.
Aug 13 00:47:05.070795 systemd-logind[1522]: Session 5 logged out. Waiting for processes to exit.
Aug 13 00:47:05.073235 systemd-logind[1522]: Removed session 5.
Aug 13 00:47:05.133383 systemd[1]: Started sshd@5-172.234.29.69:22-147.75.109.163:35044.service - OpenSSH per-connection server daemon (147.75.109.163:35044).
Aug 13 00:47:05.479848 sshd[1768]: Accepted publickey for core from 147.75.109.163 port 35044 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:47:05.482078 sshd-session[1768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:47:05.489323 systemd-logind[1522]: New session 6 of user core.
Aug 13 00:47:05.498424 systemd[1]: Started session-6.scope - Session 6 of User core.
Aug 13 00:47:05.681074 sudo[1772]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Aug 13 00:47:05.681409 sudo[1772]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 00:47:05.686827 sudo[1772]: pam_unix(sudo:session): session closed for user root
Aug 13 00:47:05.692986 sudo[1771]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Aug 13 00:47:05.693291 sudo[1771]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 00:47:05.703842 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Aug 13 00:47:05.742934 augenrules[1794]: No rules
Aug 13 00:47:05.744393 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 13 00:47:05.744679 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Aug 13 00:47:05.746163 sudo[1771]: pam_unix(sudo:session): session closed for user root
Aug 13 00:47:05.797336 sshd[1770]: Connection closed by 147.75.109.163 port 35044
Aug 13 00:47:05.798039 sshd-session[1768]: pam_unix(sshd:session): session closed for user core
Aug 13 00:47:05.804279 systemd[1]: sshd@5-172.234.29.69:22-147.75.109.163:35044.service: Deactivated successfully.
Aug 13 00:47:05.807524 systemd[1]: session-6.scope: Deactivated successfully.
Aug 13 00:47:05.808921 systemd-logind[1522]: Session 6 logged out. Waiting for processes to exit.
Aug 13 00:47:05.810684 systemd-logind[1522]: Removed session 6.
Aug 13 00:47:05.865327 systemd[1]: Started sshd@6-172.234.29.69:22-147.75.109.163:35048.service - OpenSSH per-connection server daemon (147.75.109.163:35048).
Aug 13 00:47:06.223108 sshd[1803]: Accepted publickey for core from 147.75.109.163 port 35048 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:47:06.225474 sshd-session[1803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:47:06.236062 systemd-logind[1522]: New session 7 of user core.
Aug 13 00:47:06.242408 systemd[1]: Started session-7.scope - Session 7 of User core.
Aug 13 00:47:06.425823 sudo[1806]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Aug 13 00:47:06.426136 sudo[1806]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 00:47:06.718766 systemd[1]: Starting docker.service - Docker Application Container Engine...
Aug 13 00:47:06.729618 (dockerd)[1824]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Aug 13 00:47:06.916760 dockerd[1824]: time="2025-08-13T00:47:06.916519906Z" level=info msg="Starting up"
Aug 13 00:47:06.917605 dockerd[1824]: time="2025-08-13T00:47:06.917587796Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Aug 13 00:47:06.947388 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2575386003-merged.mount: Deactivated successfully.
Aug 13 00:47:06.973242 dockerd[1824]: time="2025-08-13T00:47:06.972601828Z" level=info msg="Loading containers: start."
Aug 13 00:47:06.983321 kernel: Initializing XFRM netlink socket
Aug 13 00:47:07.202313 systemd-networkd[1463]: docker0: Link UP
Aug 13 00:47:07.204867 dockerd[1824]: time="2025-08-13T00:47:07.204844462Z" level=info msg="Loading containers: done."
Aug 13 00:47:07.216825 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2137056577-merged.mount: Deactivated successfully.
Aug 13 00:47:07.218405 dockerd[1824]: time="2025-08-13T00:47:07.218372915Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Aug 13 00:47:07.218473 dockerd[1824]: time="2025-08-13T00:47:07.218425305Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Aug 13 00:47:07.218530 dockerd[1824]: time="2025-08-13T00:47:07.218515025Z" level=info msg="Initializing buildkit"
Aug 13 00:47:07.235747 dockerd[1824]: time="2025-08-13T00:47:07.235682716Z" level=info msg="Completed buildkit initialization"
Aug 13 00:47:07.241541 dockerd[1824]: time="2025-08-13T00:47:07.241521064Z" level=info msg="Daemon has completed initialization"
Aug 13 00:47:07.241666 dockerd[1824]: time="2025-08-13T00:47:07.241629733Z" level=info msg="API listen on /run/docker.sock"
Aug 13 00:47:07.241714 systemd[1]: Started docker.service - Docker Application Container Engine.
Aug 13 00:47:07.894436 containerd[1542]: time="2025-08-13T00:47:07.894365207Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\""
Aug 13 00:47:08.727798 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2200454986.mount: Deactivated successfully.
Aug 13 00:47:09.764483 containerd[1542]: time="2025-08-13T00:47:09.764422682Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:47:09.765383 containerd[1542]: time="2025-08-13T00:47:09.765147791Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.11: active requests=0, bytes read=28077759"
Aug 13 00:47:09.766140 containerd[1542]: time="2025-08-13T00:47:09.766118451Z" level=info msg="ImageCreate event name:\"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:47:09.768193 containerd[1542]: time="2025-08-13T00:47:09.768164660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:47:09.768983 containerd[1542]: time="2025-08-13T00:47:09.768962760Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.11\" with image id \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\", size \"28074559\" in 1.874544233s"
Aug 13 00:47:09.769626 containerd[1542]: time="2025-08-13T00:47:09.769228219Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\""
Aug 13 00:47:09.770610 containerd[1542]: time="2025-08-13T00:47:09.770588439Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\""
Aug 13 00:47:10.654413 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Aug 13 00:47:10.657577 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 00:47:10.847484 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 00:47:10.859747 (kubelet)[2091]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 00:47:10.898010 kubelet[2091]: E0813 00:47:10.897969 2091 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 00:47:10.903609 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 00:47:10.903789 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 00:47:10.904129 systemd[1]: kubelet.service: Consumed 198ms CPU time, 108.5M memory peak.
Aug 13 00:47:11.297418 containerd[1542]: time="2025-08-13T00:47:11.297242325Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:47:11.298415 containerd[1542]: time="2025-08-13T00:47:11.298206595Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.11: active requests=0, bytes read=24713245"
Aug 13 00:47:11.298958 containerd[1542]: time="2025-08-13T00:47:11.298922344Z" level=info msg="ImageCreate event name:\"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:47:11.301073 containerd[1542]: time="2025-08-13T00:47:11.301030733Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:47:11.301978 containerd[1542]: time="2025-08-13T00:47:11.301954223Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.11\" with image id \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\", size \"26315079\" in 1.531340074s"
Aug 13 00:47:11.302400 containerd[1542]: time="2025-08-13T00:47:11.302202573Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\""
Aug 13 00:47:11.303284 containerd[1542]: time="2025-08-13T00:47:11.303241512Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\""
Aug 13 00:47:12.495036 containerd[1542]: time="2025-08-13T00:47:12.494979576Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:47:12.496038 containerd[1542]: time="2025-08-13T00:47:12.495796206Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.11: active requests=0, bytes read=18783700"
Aug 13 00:47:12.496653 containerd[1542]: time="2025-08-13T00:47:12.496621165Z" level=info msg="ImageCreate event name:\"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:47:12.498765 containerd[1542]: time="2025-08-13T00:47:12.498727504Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:47:12.499600 containerd[1542]: time="2025-08-13T00:47:12.499576754Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.11\" with image id \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\", size \"20385552\" in 1.196303942s"
Aug 13 00:47:12.499688 containerd[1542]: time="2025-08-13T00:47:12.499670264Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\""
Aug 13 00:47:12.500555 containerd[1542]: time="2025-08-13T00:47:12.500519053Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\""
Aug 13 00:47:13.711793 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1214851426.mount: Deactivated successfully.
Aug 13 00:47:14.044473 containerd[1542]: time="2025-08-13T00:47:14.044193901Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:47:14.045470 containerd[1542]: time="2025-08-13T00:47:14.045432571Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.11: active requests=0, bytes read=30383612"
Aug 13 00:47:14.047330 containerd[1542]: time="2025-08-13T00:47:14.046724740Z" level=info msg="ImageCreate event name:\"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:47:14.049905 containerd[1542]: time="2025-08-13T00:47:14.049876719Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:47:14.051533 containerd[1542]: time="2025-08-13T00:47:14.051508698Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.11\" with image id \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\", repo tag \"registry.k8s.io/kube-proxy:v1.31.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\", size \"30382631\" in 1.550959415s"
Aug 13 00:47:14.051649 containerd[1542]: time="2025-08-13T00:47:14.051631508Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\""
Aug 13 00:47:14.052425 containerd[1542]: time="2025-08-13T00:47:14.052393737Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Aug 13 00:47:14.754627 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2038023466.mount: Deactivated successfully.
Aug 13 00:47:15.361551 containerd[1542]: time="2025-08-13T00:47:15.361488733Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:47:15.362478 containerd[1542]: time="2025-08-13T00:47:15.362439062Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Aug 13 00:47:15.366314 containerd[1542]: time="2025-08-13T00:47:15.366143450Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:47:15.367890 containerd[1542]: time="2025-08-13T00:47:15.367858179Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:47:15.368846 containerd[1542]: time="2025-08-13T00:47:15.368820129Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.316325092s"
Aug 13 00:47:15.368925 containerd[1542]: time="2025-08-13T00:47:15.368910299Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Aug 13 00:47:15.369760 containerd[1542]: time="2025-08-13T00:47:15.369707278Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Aug 13 00:47:16.025677 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount413467119.mount: Deactivated successfully.
Aug 13 00:47:16.029582 containerd[1542]: time="2025-08-13T00:47:16.029540508Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 13 00:47:16.030470 containerd[1542]: time="2025-08-13T00:47:16.030446008Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Aug 13 00:47:16.033337 containerd[1542]: time="2025-08-13T00:47:16.032946217Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 13 00:47:16.034553 containerd[1542]: time="2025-08-13T00:47:16.034517336Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 13 00:47:16.035640 containerd[1542]: time="2025-08-13T00:47:16.034994666Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 665.259068ms"
Aug 13 00:47:16.035640 containerd[1542]: time="2025-08-13T00:47:16.035020346Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Aug 13 00:47:16.035640 containerd[1542]: time="2025-08-13T00:47:16.035378736Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Aug 13 00:47:16.741738 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount189563316.mount: Deactivated successfully.
Aug 13 00:47:18.199911 containerd[1542]: time="2025-08-13T00:47:18.199856093Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:47:18.201593 containerd[1542]: time="2025-08-13T00:47:18.201487662Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013"
Aug 13 00:47:18.202412 containerd[1542]: time="2025-08-13T00:47:18.202381002Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:47:18.204995 containerd[1542]: time="2025-08-13T00:47:18.204955721Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:47:18.206333 containerd[1542]: time="2025-08-13T00:47:18.205840130Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.170442334s"
Aug 13 00:47:18.206333 containerd[1542]: time="2025-08-13T00:47:18.205871010Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Aug 13 00:47:19.929956 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 00:47:19.930166 systemd[1]: kubelet.service: Consumed 198ms CPU time, 108.5M memory peak.
Aug 13 00:47:19.934030 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 00:47:19.963412 systemd[1]: Reload requested from client PID 2249 ('systemctl') (unit session-7.scope)...
Aug 13 00:47:19.963543 systemd[1]: Reloading...
Aug 13 00:47:20.096364 zram_generator::config[2292]: No configuration found.
Aug 13 00:47:20.209801 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 00:47:20.318628 systemd[1]: Reloading finished in 354 ms.
Aug 13 00:47:20.382232 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Aug 13 00:47:20.382373 systemd[1]: kubelet.service: Failed with result 'signal'.
Aug 13 00:47:20.382710 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 00:47:20.382771 systemd[1]: kubelet.service: Consumed 141ms CPU time, 98.2M memory peak.
Aug 13 00:47:20.384843 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 00:47:20.581426 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 00:47:20.590741 (kubelet)[2347]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 13 00:47:20.626208 kubelet[2347]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 00:47:20.626564 kubelet[2347]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Aug 13 00:47:20.626614 kubelet[2347]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 00:47:20.626756 kubelet[2347]: I0813 00:47:20.626725 2347 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 13 00:47:20.805610 kubelet[2347]: I0813 00:47:20.805541 2347 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Aug 13 00:47:20.805610 kubelet[2347]: I0813 00:47:20.805580 2347 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 13 00:47:20.805982 kubelet[2347]: I0813 00:47:20.805951 2347 server.go:934] "Client rotation is on, will bootstrap in background"
Aug 13 00:47:20.833376 kubelet[2347]: E0813 00:47:20.833010 2347 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.234.29.69:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.234.29.69:6443: connect: connection refused" logger="UnhandledError"
Aug 13 00:47:20.835749 kubelet[2347]: I0813 00:47:20.835731 2347 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 13 00:47:20.844681 kubelet[2347]: I0813 00:47:20.844644 2347 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Aug 13 00:47:20.849940 kubelet[2347]: I0813 00:47:20.849920 2347 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 13 00:47:20.850677 kubelet[2347]: I0813 00:47:20.850644 2347 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Aug 13 00:47:20.850864 kubelet[2347]: I0813 00:47:20.850822 2347 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 13 00:47:20.851057 kubelet[2347]: I0813 00:47:20.850853 2347 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-234-29-69","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Aug 13 00:47:20.851375 kubelet[2347]: I0813 00:47:20.851062 2347 topology_manager.go:138] "Creating topology manager with none policy"
Aug 13 00:47:20.851375 kubelet[2347]: I0813 00:47:20.851267 2347 container_manager_linux.go:300] "Creating device plugin manager"
Aug 13 00:47:20.851462 kubelet[2347]: I0813 00:47:20.851439 2347 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 00:47:20.854386 kubelet[2347]: I0813 00:47:20.854346 2347 kubelet.go:408] "Attempting to sync node with API server"
Aug 13 00:47:20.854421 kubelet[2347]: I0813 00:47:20.854391 2347 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 13 00:47:20.854442 kubelet[2347]: I0813 00:47:20.854437 2347 kubelet.go:314] "Adding apiserver pod source"
Aug 13 00:47:20.854470 kubelet[2347]: I0813 00:47:20.854459 2347 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 13 00:47:20.861720 kubelet[2347]: W0813 00:47:20.861435 2347 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.234.29.69:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-234-29-69&limit=500&resourceVersion=0": dial tcp 172.234.29.69:6443: connect: connection refused
Aug 13 00:47:20.861720 kubelet[2347]: E0813 00:47:20.861480 2347 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.234.29.69:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-234-29-69&limit=500&resourceVersion=0\": dial tcp 172.234.29.69:6443: connect: connection refused" logger="UnhandledError"
Aug 13 00:47:20.861868 kubelet[2347]: I0813 00:47:20.861839 2347 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Aug 13 00:47:20.862088 kubelet[2347]: W0813 00:47:20.862040 2347 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.234.29.69:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.234.29.69:6443: connect: connection refused
Aug 13 00:47:20.862172 kubelet[2347]: E0813 00:47:20.862158 2347 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.234.29.69:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.234.29.69:6443: connect: connection refused" logger="UnhandledError"
Aug 13 00:47:20.862366 kubelet[2347]: I0813 00:47:20.862212 2347 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 13 00:47:20.863032 kubelet[2347]: W0813 00:47:20.862988 2347 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Aug 13 00:47:20.866255 kubelet[2347]: I0813 00:47:20.865513 2347 server.go:1274] "Started kubelet"
Aug 13 00:47:20.867339 kubelet[2347]: I0813 00:47:20.867273 2347 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 13 00:47:20.867691 kubelet[2347]: I0813 00:47:20.867660 2347 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 13 00:47:20.867778 kubelet[2347]: I0813 00:47:20.867745 2347 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Aug 13 00:47:20.869150 kubelet[2347]: I0813 00:47:20.869133 2347 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 13 00:47:20.869685 kubelet[2347]: I0813 00:47:20.869654 2347 server.go:449] "Adding debug handlers to kubelet server"
Aug 13 00:47:20.878619 kubelet[2347]: I0813 00:47:20.878581 2347 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Aug 13 00:47:20.880351 kubelet[2347]: I0813 00:47:20.880322 2347 volume_manager.go:289] "Starting Kubelet Volume Manager"
Aug 13 00:47:20.880567 kubelet[2347]: E0813 00:47:20.880531 2347 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-234-29-69\" not found"
Aug 13 00:47:20.881126 kubelet[2347]: I0813 00:47:20.881094 2347 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Aug 13 00:47:20.881194 kubelet[2347]: I0813 00:47:20.881168 2347 reconciler.go:26] "Reconciler: start to sync state"
Aug 13 00:47:20.889990 kubelet[2347]: E0813 00:47:20.888831 2347 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.234.29.69:6443/api/v1/namespaces/default/events\": dial tcp 172.234.29.69:6443: connect: connection refused" event="&Event{ObjectMeta:{172-234-29-69.185b2d13563e437e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-234-29-69,UID:172-234-29-69,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-234-29-69,},FirstTimestamp:2025-08-13 00:47:20.86548979 +0000 UTC m=+0.271070426,LastTimestamp:2025-08-13 00:47:20.86548979 +0000 UTC m=+0.271070426,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-234-29-69,}"
Aug 13 00:47:20.894768 kubelet[2347]: W0813 00:47:20.893905 2347 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.234.29.69:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.234.29.69:6443: connect: connection refused
Aug 13 00:47:20.894768 kubelet[2347]: E0813 00:47:20.893955 2347 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.234.29.69:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.234.29.69:6443: connect: connection refused" logger="UnhandledError"
Aug 13 00:47:20.894768 kubelet[2347]: E0813 00:47:20.894043 2347 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.29.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-29-69?timeout=10s\": dial tcp 172.234.29.69:6443: connect: connection refused" interval="200ms"
Aug 13 00:47:20.894768 kubelet[2347]: I0813 00:47:20.894452 2347 factory.go:221] Registration of the systemd container factory successfully
Aug 13 00:47:20.894768 kubelet[2347]: I0813 00:47:20.894522 2347 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 13 00:47:20.897404 kubelet[2347]: I0813 00:47:20.897279 2347 factory.go:221] Registration of the containerd container factory successfully
Aug 13 00:47:20.904369 kubelet[2347]: I0813 00:47:20.904063 2347 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 13 00:47:20.905553 kubelet[2347]: I0813 00:47:20.905537 2347 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 13 00:47:20.905615 kubelet[2347]: I0813 00:47:20.905606 2347 status_manager.go:217] "Starting to sync pod status with apiserver"
Aug 13 00:47:20.905685 kubelet[2347]: I0813 00:47:20.905676 2347 kubelet.go:2321] "Starting kubelet main sync loop"
Aug 13 00:47:20.905789 kubelet[2347]: E0813 00:47:20.905764 2347 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 13 00:47:20.916436 kubelet[2347]: W0813 00:47:20.916388 2347 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.234.29.69:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.234.29.69:6443: connect: connection refused
Aug 13 00:47:20.916555 kubelet[2347]: E0813 00:47:20.916535 2347 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.234.29.69:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.234.29.69:6443: connect: connection refused" logger="UnhandledError"
Aug 13 00:47:20.921435 kubelet[2347]: E0813 00:47:20.921375 2347 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 13 00:47:20.928814 kubelet[2347]: I0813 00:47:20.928786 2347 cpu_manager.go:214] "Starting CPU manager" policy="none"
Aug 13 00:47:20.929134 kubelet[2347]: I0813 00:47:20.928896 2347 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Aug 13 00:47:20.929134 kubelet[2347]: I0813 00:47:20.928917 2347 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 00:47:20.930872 kubelet[2347]: I0813 00:47:20.930859 2347 policy_none.go:49] "None policy: Start"
Aug 13 00:47:20.931642 kubelet[2347]: I0813 00:47:20.931602 2347 memory_manager.go:170] "Starting memorymanager" policy="None"
Aug 13 00:47:20.931642 kubelet[2347]: I0813 00:47:20.931642 2347 state_mem.go:35] "Initializing new in-memory state store"
Aug 13 00:47:20.939294 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Aug 13 00:47:20.952370 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Aug 13 00:47:20.956091 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Aug 13 00:47:20.968294 kubelet[2347]: I0813 00:47:20.968255 2347 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 13 00:47:20.968565 kubelet[2347]: I0813 00:47:20.968523 2347 eviction_manager.go:189] "Eviction manager: starting control loop"
Aug 13 00:47:20.968855 kubelet[2347]: I0813 00:47:20.968754 2347 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Aug 13 00:47:20.969179 kubelet[2347]: I0813 00:47:20.969093 2347 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 13 00:47:20.971459 kubelet[2347]: E0813 00:47:20.971410 2347 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-234-29-69\" not found"
Aug 13 00:47:21.017386 systemd[1]: Created slice kubepods-burstable-podd457df60158022c6dd89e86681eac010.slice - libcontainer container kubepods-burstable-podd457df60158022c6dd89e86681eac010.slice.
Aug 13 00:47:21.031286 systemd[1]: Created slice kubepods-burstable-pod9522e501dbf625a4a5d70a80c7d3d40b.slice - libcontainer container kubepods-burstable-pod9522e501dbf625a4a5d70a80c7d3d40b.slice.
Aug 13 00:47:21.045511 systemd[1]: Created slice kubepods-burstable-pod3ab9091b4db9555010c8b582706b7dae.slice - libcontainer container kubepods-burstable-pod3ab9091b4db9555010c8b582706b7dae.slice.
Aug 13 00:47:21.072155 kubelet[2347]: I0813 00:47:21.072107 2347 kubelet_node_status.go:72] "Attempting to register node" node="172-234-29-69"
Aug 13 00:47:21.072927 kubelet[2347]: E0813 00:47:21.072875 2347 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.234.29.69:6443/api/v1/nodes\": dial tcp 172.234.29.69:6443: connect: connection refused" node="172-234-29-69"
Aug 13 00:47:21.082442 kubelet[2347]: I0813 00:47:21.082333 2347 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d457df60158022c6dd89e86681eac010-usr-share-ca-certificates\") pod \"kube-controller-manager-172-234-29-69\" (UID: \"d457df60158022c6dd89e86681eac010\") " pod="kube-system/kube-controller-manager-172-234-29-69"
Aug 13 00:47:21.082442 kubelet[2347]: I0813 00:47:21.082371 2347 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9522e501dbf625a4a5d70a80c7d3d40b-kubeconfig\") pod \"kube-scheduler-172-234-29-69\" (UID: \"9522e501dbf625a4a5d70a80c7d3d40b\") " pod="kube-system/kube-scheduler-172-234-29-69"
Aug 13 00:47:21.082442 kubelet[2347]: I0813 00:47:21.082396 2347 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3ab9091b4db9555010c8b582706b7dae-usr-share-ca-certificates\") pod \"kube-apiserver-172-234-29-69\" (UID: \"3ab9091b4db9555010c8b582706b7dae\") " pod="kube-system/kube-apiserver-172-234-29-69"
Aug 13 00:47:21.082442 kubelet[2347]: I0813 00:47:21.082422 2347 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d457df60158022c6dd89e86681eac010-flexvolume-dir\") pod \"kube-controller-manager-172-234-29-69\" (UID: \"d457df60158022c6dd89e86681eac010\") " pod="kube-system/kube-controller-manager-172-234-29-69"
Aug 13 00:47:21.082442 kubelet[2347]: I0813 00:47:21.082442 2347 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d457df60158022c6dd89e86681eac010-k8s-certs\") pod \"kube-controller-manager-172-234-29-69\" (UID: \"d457df60158022c6dd89e86681eac010\") " pod="kube-system/kube-controller-manager-172-234-29-69"
Aug 13 00:47:21.082684 kubelet[2347]: I0813 00:47:21.082460 2347 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d457df60158022c6dd89e86681eac010-kubeconfig\") pod \"kube-controller-manager-172-234-29-69\" (UID: \"d457df60158022c6dd89e86681eac010\") " pod="kube-system/kube-controller-manager-172-234-29-69"
Aug 13 00:47:21.082684 kubelet[2347]: I0813 00:47:21.082477 2347 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d457df60158022c6dd89e86681eac010-ca-certs\") pod \"kube-controller-manager-172-234-29-69\" (UID: \"d457df60158022c6dd89e86681eac010\") " pod="kube-system/kube-controller-manager-172-234-29-69"
Aug 13 00:47:21.082684 kubelet[2347]: I0813 00:47:21.082495 2347 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3ab9091b4db9555010c8b582706b7dae-ca-certs\") pod \"kube-apiserver-172-234-29-69\" (UID: \"3ab9091b4db9555010c8b582706b7dae\") " pod="kube-system/kube-apiserver-172-234-29-69"
Aug 13 00:47:21.082684 kubelet[2347]: I0813 00:47:21.082520 2347 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3ab9091b4db9555010c8b582706b7dae-k8s-certs\") pod \"kube-apiserver-172-234-29-69\" (UID: \"3ab9091b4db9555010c8b582706b7dae\") " pod="kube-system/kube-apiserver-172-234-29-69"
Aug 13 00:47:21.095625 kubelet[2347]: E0813 00:47:21.095537 2347 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.29.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-29-69?timeout=10s\": dial tcp 172.234.29.69:6443: connect: connection refused" interval="400ms"
Aug 13 00:47:21.276081 kubelet[2347]: I0813 00:47:21.276026 2347 kubelet_node_status.go:72] "Attempting to register node" node="172-234-29-69"
Aug 13 00:47:21.276491 kubelet[2347]: E0813 00:47:21.276457 2347 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.234.29.69:6443/api/v1/nodes\": dial tcp 172.234.29.69:6443: connect: connection refused" node="172-234-29-69"
Aug 13 00:47:21.329081 kubelet[2347]: E0813 00:47:21.329056 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:47:21.330754 containerd[1542]: time="2025-08-13T00:47:21.330612627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-234-29-69,Uid:d457df60158022c6dd89e86681eac010,Namespace:kube-system,Attempt:0,}"
Aug 13 00:47:21.346916 kubelet[2347]: E0813 00:47:21.345053 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:47:21.346972 containerd[1542]: time="2025-08-13T00:47:21.346615559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-234-29-69,Uid:9522e501dbf625a4a5d70a80c7d3d40b,Namespace:kube-system,Attempt:0,}"
Aug 13 00:47:21.348082 kubelet[2347]: E0813 00:47:21.348047 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:47:21.355092 containerd[1542]: time="2025-08-13T00:47:21.355035145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-234-29-69,Uid:3ab9091b4db9555010c8b582706b7dae,Namespace:kube-system,Attempt:0,}"
Aug 13 00:47:21.359468 containerd[1542]: time="2025-08-13T00:47:21.359425823Z" level=info msg="connecting to shim 792bf77fcdd13aa7ea36135ff0f62e5ddd4a6aa4611c72d3a5bc6448431c99e2" address="unix:///run/containerd/s/b422a8e72bbbaefd700ad4a3649158a841349893e9c800ba4015fffb7bdbd30e" namespace=k8s.io protocol=ttrpc version=3
Aug 13 00:47:21.407009 containerd[1542]: time="2025-08-13T00:47:21.406963119Z" level=info msg="connecting to shim 793edb8665df509c13c17dc8212b7dd1d650744faadef47b29a58e1c949580b0" address="unix:///run/containerd/s/94acc84390c2dc95fde4010528338581d0717557afbb9e6e7bee4868fda740cd" namespace=k8s.io protocol=ttrpc version=3
Aug 13 00:47:21.411105 containerd[1542]: time="2025-08-13T00:47:21.411078087Z" level=info msg="connecting to shim 7c7f735336402d5ed593ed7c9bea3de4def12034a19eee026f79e221435c6941" address="unix:///run/containerd/s/c3fbe430d76b3025787d99ea09d22ace5cad9cc66f0d6c149cdb48cf2e8cf128" namespace=k8s.io protocol=ttrpc version=3
Aug 13 00:47:21.433604 systemd[1]: Started cri-containerd-792bf77fcdd13aa7ea36135ff0f62e5ddd4a6aa4611c72d3a5bc6448431c99e2.scope - libcontainer container 792bf77fcdd13aa7ea36135ff0f62e5ddd4a6aa4611c72d3a5bc6448431c99e2.
Aug 13 00:47:21.451442 systemd[1]: Started cri-containerd-7c7f735336402d5ed593ed7c9bea3de4def12034a19eee026f79e221435c6941.scope - libcontainer container 7c7f735336402d5ed593ed7c9bea3de4def12034a19eee026f79e221435c6941.
Aug 13 00:47:21.469043 systemd[1]: Started cri-containerd-793edb8665df509c13c17dc8212b7dd1d650744faadef47b29a58e1c949580b0.scope - libcontainer container 793edb8665df509c13c17dc8212b7dd1d650744faadef47b29a58e1c949580b0.
Aug 13 00:47:21.497116 kubelet[2347]: E0813 00:47:21.497050 2347 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.29.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-29-69?timeout=10s\": dial tcp 172.234.29.69:6443: connect: connection refused" interval="800ms" Aug 13 00:47:21.527698 containerd[1542]: time="2025-08-13T00:47:21.527591379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-234-29-69,Uid:d457df60158022c6dd89e86681eac010,Namespace:kube-system,Attempt:0,} returns sandbox id \"792bf77fcdd13aa7ea36135ff0f62e5ddd4a6aa4611c72d3a5bc6448431c99e2\"" Aug 13 00:47:21.529082 kubelet[2347]: E0813 00:47:21.529058 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:47:21.532400 containerd[1542]: time="2025-08-13T00:47:21.532358936Z" level=info msg="CreateContainer within sandbox \"792bf77fcdd13aa7ea36135ff0f62e5ddd4a6aa4611c72d3a5bc6448431c99e2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 00:47:21.538474 containerd[1542]: time="2025-08-13T00:47:21.538441063Z" level=info msg="Container ad1ef7b0ea53a6102f7086000985e8f04c15b7bf455714b280f9ebbd8f2ca60b: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:47:21.543678 containerd[1542]: time="2025-08-13T00:47:21.543618701Z" level=info msg="CreateContainer within sandbox \"792bf77fcdd13aa7ea36135ff0f62e5ddd4a6aa4611c72d3a5bc6448431c99e2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ad1ef7b0ea53a6102f7086000985e8f04c15b7bf455714b280f9ebbd8f2ca60b\"" Aug 13 00:47:21.545581 containerd[1542]: time="2025-08-13T00:47:21.544534780Z" level=info msg="StartContainer for \"ad1ef7b0ea53a6102f7086000985e8f04c15b7bf455714b280f9ebbd8f2ca60b\"" Aug 13 00:47:21.545922 containerd[1542]: 
time="2025-08-13T00:47:21.545889190Z" level=info msg="connecting to shim ad1ef7b0ea53a6102f7086000985e8f04c15b7bf455714b280f9ebbd8f2ca60b" address="unix:///run/containerd/s/b422a8e72bbbaefd700ad4a3649158a841349893e9c800ba4015fffb7bdbd30e" protocol=ttrpc version=3 Aug 13 00:47:21.576748 systemd[1]: Started cri-containerd-ad1ef7b0ea53a6102f7086000985e8f04c15b7bf455714b280f9ebbd8f2ca60b.scope - libcontainer container ad1ef7b0ea53a6102f7086000985e8f04c15b7bf455714b280f9ebbd8f2ca60b. Aug 13 00:47:21.592096 containerd[1542]: time="2025-08-13T00:47:21.592050457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-234-29-69,Uid:3ab9091b4db9555010c8b582706b7dae,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c7f735336402d5ed593ed7c9bea3de4def12034a19eee026f79e221435c6941\"" Aug 13 00:47:21.594234 kubelet[2347]: E0813 00:47:21.593878 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:47:21.597769 containerd[1542]: time="2025-08-13T00:47:21.597660174Z" level=info msg="CreateContainer within sandbox \"7c7f735336402d5ed593ed7c9bea3de4def12034a19eee026f79e221435c6941\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 00:47:21.611006 containerd[1542]: time="2025-08-13T00:47:21.610968657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-234-29-69,Uid:9522e501dbf625a4a5d70a80c7d3d40b,Namespace:kube-system,Attempt:0,} returns sandbox id \"793edb8665df509c13c17dc8212b7dd1d650744faadef47b29a58e1c949580b0\"" Aug 13 00:47:21.611837 kubelet[2347]: E0813 00:47:21.611707 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:47:21.614235 containerd[1542]: time="2025-08-13T00:47:21.614214255Z" level=info 
msg="CreateContainer within sandbox \"793edb8665df509c13c17dc8212b7dd1d650744faadef47b29a58e1c949580b0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 00:47:21.619028 containerd[1542]: time="2025-08-13T00:47:21.619009123Z" level=info msg="Container ce78cbcc2a4760665cf2b812a6f96f7bd0fd1fc6f3180edb1d2dcc46d5e4bd69: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:47:21.624591 containerd[1542]: time="2025-08-13T00:47:21.624555260Z" level=info msg="Container a65c2704ee9e9de15fe756f537e77dd7c119ac2faaff6ed7cf6ece55ed245049: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:47:21.627544 containerd[1542]: time="2025-08-13T00:47:21.627524199Z" level=info msg="CreateContainer within sandbox \"7c7f735336402d5ed593ed7c9bea3de4def12034a19eee026f79e221435c6941\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ce78cbcc2a4760665cf2b812a6f96f7bd0fd1fc6f3180edb1d2dcc46d5e4bd69\"" Aug 13 00:47:21.629353 containerd[1542]: time="2025-08-13T00:47:21.629335618Z" level=info msg="StartContainer for \"ce78cbcc2a4760665cf2b812a6f96f7bd0fd1fc6f3180edb1d2dcc46d5e4bd69\"" Aug 13 00:47:21.630767 containerd[1542]: time="2025-08-13T00:47:21.630745907Z" level=info msg="connecting to shim ce78cbcc2a4760665cf2b812a6f96f7bd0fd1fc6f3180edb1d2dcc46d5e4bd69" address="unix:///run/containerd/s/c3fbe430d76b3025787d99ea09d22ace5cad9cc66f0d6c149cdb48cf2e8cf128" protocol=ttrpc version=3 Aug 13 00:47:21.632609 containerd[1542]: time="2025-08-13T00:47:21.632587906Z" level=info msg="CreateContainer within sandbox \"793edb8665df509c13c17dc8212b7dd1d650744faadef47b29a58e1c949580b0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a65c2704ee9e9de15fe756f537e77dd7c119ac2faaff6ed7cf6ece55ed245049\"" Aug 13 00:47:21.633131 containerd[1542]: time="2025-08-13T00:47:21.633112746Z" level=info msg="StartContainer for \"a65c2704ee9e9de15fe756f537e77dd7c119ac2faaff6ed7cf6ece55ed245049\"" Aug 13 00:47:21.636322 containerd[1542]: 
time="2025-08-13T00:47:21.636274674Z" level=info msg="connecting to shim a65c2704ee9e9de15fe756f537e77dd7c119ac2faaff6ed7cf6ece55ed245049" address="unix:///run/containerd/s/94acc84390c2dc95fde4010528338581d0717557afbb9e6e7bee4868fda740cd" protocol=ttrpc version=3 Aug 13 00:47:21.672483 containerd[1542]: time="2025-08-13T00:47:21.672462446Z" level=info msg="StartContainer for \"ad1ef7b0ea53a6102f7086000985e8f04c15b7bf455714b280f9ebbd8f2ca60b\" returns successfully" Aug 13 00:47:21.678097 kubelet[2347]: I0813 00:47:21.678077 2347 kubelet_node_status.go:72] "Attempting to register node" node="172-234-29-69" Aug 13 00:47:21.678771 systemd[1]: Started cri-containerd-a65c2704ee9e9de15fe756f537e77dd7c119ac2faaff6ed7cf6ece55ed245049.scope - libcontainer container a65c2704ee9e9de15fe756f537e77dd7c119ac2faaff6ed7cf6ece55ed245049. Aug 13 00:47:21.679214 kubelet[2347]: E0813 00:47:21.679142 2347 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.234.29.69:6443/api/v1/nodes\": dial tcp 172.234.29.69:6443: connect: connection refused" node="172-234-29-69" Aug 13 00:47:21.693837 systemd[1]: Started cri-containerd-ce78cbcc2a4760665cf2b812a6f96f7bd0fd1fc6f3180edb1d2dcc46d5e4bd69.scope - libcontainer container ce78cbcc2a4760665cf2b812a6f96f7bd0fd1fc6f3180edb1d2dcc46d5e4bd69. 
Aug 13 00:47:21.768817 containerd[1542]: time="2025-08-13T00:47:21.768760038Z" level=info msg="StartContainer for \"a65c2704ee9e9de15fe756f537e77dd7c119ac2faaff6ed7cf6ece55ed245049\" returns successfully" Aug 13 00:47:21.793956 kubelet[2347]: W0813 00:47:21.793872 2347 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.234.29.69:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-234-29-69&limit=500&resourceVersion=0": dial tcp 172.234.29.69:6443: connect: connection refused Aug 13 00:47:21.794007 kubelet[2347]: E0813 00:47:21.793958 2347 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.234.29.69:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-234-29-69&limit=500&resourceVersion=0\": dial tcp 172.234.29.69:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:47:21.818313 containerd[1542]: time="2025-08-13T00:47:21.818257283Z" level=info msg="StartContainer for \"ce78cbcc2a4760665cf2b812a6f96f7bd0fd1fc6f3180edb1d2dcc46d5e4bd69\" returns successfully" Aug 13 00:47:21.930656 kubelet[2347]: E0813 00:47:21.930618 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:47:21.935666 kubelet[2347]: E0813 00:47:21.935527 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:47:21.938225 kubelet[2347]: E0813 00:47:21.938200 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:47:22.482323 kubelet[2347]: I0813 00:47:22.481406 2347 kubelet_node_status.go:72] "Attempting 
to register node" node="172-234-29-69" Aug 13 00:47:22.903244 kubelet[2347]: E0813 00:47:22.902904 2347 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-234-29-69\" not found" node="172-234-29-69" Aug 13 00:47:22.939424 kubelet[2347]: E0813 00:47:22.939324 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:47:22.939424 kubelet[2347]: E0813 00:47:22.939390 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:47:22.984322 kubelet[2347]: I0813 00:47:22.982353 2347 kubelet_node_status.go:75] "Successfully registered node" node="172-234-29-69" Aug 13 00:47:22.984322 kubelet[2347]: E0813 00:47:22.982377 2347 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172-234-29-69\": node \"172-234-29-69\" not found" Aug 13 00:47:22.993603 kubelet[2347]: E0813 00:47:22.993578 2347 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-234-29-69\" not found" Aug 13 00:47:23.093923 kubelet[2347]: E0813 00:47:23.093887 2347 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-234-29-69\" not found" Aug 13 00:47:23.194552 kubelet[2347]: E0813 00:47:23.194371 2347 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-234-29-69\" not found" Aug 13 00:47:23.295143 kubelet[2347]: E0813 00:47:23.295099 2347 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-234-29-69\" not found" Aug 13 00:47:23.395478 kubelet[2347]: E0813 00:47:23.395426 2347 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-234-29-69\" not found" Aug 
13 00:47:23.496202 kubelet[2347]: E0813 00:47:23.496075 2347 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-234-29-69\" not found" Aug 13 00:47:23.596595 kubelet[2347]: E0813 00:47:23.596547 2347 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-234-29-69\" not found" Aug 13 00:47:23.697205 kubelet[2347]: E0813 00:47:23.697132 2347 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-234-29-69\" not found" Aug 13 00:47:23.797742 kubelet[2347]: E0813 00:47:23.797676 2347 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-234-29-69\" not found" Aug 13 00:47:23.898264 kubelet[2347]: E0813 00:47:23.898204 2347 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-234-29-69\" not found" Aug 13 00:47:23.999232 kubelet[2347]: E0813 00:47:23.999172 2347 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-234-29-69\" not found" Aug 13 00:47:24.100201 kubelet[2347]: E0813 00:47:24.099917 2347 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-234-29-69\" not found" Aug 13 00:47:24.867397 kubelet[2347]: I0813 00:47:24.867365 2347 apiserver.go:52] "Watching apiserver" Aug 13 00:47:24.882127 kubelet[2347]: I0813 00:47:24.882111 2347 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 00:47:25.031342 systemd[1]: Reload requested from client PID 2615 ('systemctl') (unit session-7.scope)... Aug 13 00:47:25.031360 systemd[1]: Reloading... Aug 13 00:47:25.125344 zram_generator::config[2665]: No configuration found. Aug 13 00:47:25.201872 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Aug 13 00:47:25.319111 systemd[1]: Reloading finished in 287 ms. Aug 13 00:47:25.342534 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:47:25.356517 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:47:25.356797 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:47:25.356845 systemd[1]: kubelet.service: Consumed 669ms CPU time, 129.1M memory peak. Aug 13 00:47:25.359530 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:47:25.542463 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:47:25.552211 (kubelet)[2711]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:47:25.601041 kubelet[2711]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:47:25.601041 kubelet[2711]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 00:47:25.601041 kubelet[2711]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 13 00:47:25.601420 kubelet[2711]: I0813 00:47:25.601108 2711 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:47:25.607320 kubelet[2711]: I0813 00:47:25.607099 2711 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 00:47:25.607320 kubelet[2711]: I0813 00:47:25.607121 2711 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:47:25.607464 kubelet[2711]: I0813 00:47:25.607450 2711 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 00:47:25.608702 kubelet[2711]: I0813 00:47:25.608685 2711 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 13 00:47:25.610501 kubelet[2711]: I0813 00:47:25.610475 2711 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:47:25.616628 kubelet[2711]: I0813 00:47:25.616600 2711 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 13 00:47:25.622203 kubelet[2711]: I0813 00:47:25.622179 2711 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 00:47:25.622457 kubelet[2711]: I0813 00:47:25.622443 2711 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 00:47:25.622665 kubelet[2711]: I0813 00:47:25.622644 2711 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:47:25.622861 kubelet[2711]: I0813 00:47:25.622717 2711 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-234-29-69","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPol
icyOptions":null,"CgroupVersion":2} Aug 13 00:47:25.622990 kubelet[2711]: I0813 00:47:25.622979 2711 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:47:25.623037 kubelet[2711]: I0813 00:47:25.623029 2711 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 00:47:25.623105 kubelet[2711]: I0813 00:47:25.623096 2711 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:47:25.623314 kubelet[2711]: I0813 00:47:25.623249 2711 kubelet.go:408] "Attempting to sync node with API server" Aug 13 00:47:25.623314 kubelet[2711]: I0813 00:47:25.623264 2711 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:47:25.624035 kubelet[2711]: I0813 00:47:25.623965 2711 kubelet.go:314] "Adding apiserver pod source" Aug 13 00:47:25.624035 kubelet[2711]: I0813 00:47:25.623981 2711 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:47:25.629536 kubelet[2711]: I0813 00:47:25.629211 2711 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Aug 13 00:47:25.629610 kubelet[2711]: I0813 00:47:25.629580 2711 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:47:25.629958 kubelet[2711]: I0813 00:47:25.629931 2711 server.go:1274] "Started kubelet" Aug 13 00:47:25.632778 kubelet[2711]: I0813 00:47:25.632755 2711 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:47:25.637946 kubelet[2711]: I0813 00:47:25.637924 2711 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:47:25.638963 kubelet[2711]: I0813 00:47:25.638947 2711 server.go:449] "Adding debug handlers to kubelet server" Aug 13 00:47:25.639707 kubelet[2711]: I0813 00:47:25.639684 2711 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:47:25.639910 kubelet[2711]: I0813 00:47:25.639897 2711 
server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:47:25.640175 kubelet[2711]: I0813 00:47:25.640134 2711 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:47:25.641249 kubelet[2711]: I0813 00:47:25.641237 2711 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 00:47:25.641395 kubelet[2711]: I0813 00:47:25.641384 2711 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 00:47:25.641553 kubelet[2711]: I0813 00:47:25.641542 2711 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:47:25.643167 kubelet[2711]: I0813 00:47:25.642929 2711 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:47:25.644586 kubelet[2711]: I0813 00:47:25.644571 2711 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:47:25.644640 kubelet[2711]: I0813 00:47:25.644632 2711 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:47:25.647616 kubelet[2711]: E0813 00:47:25.647586 2711 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:47:25.651017 kubelet[2711]: I0813 00:47:25.650981 2711 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:47:25.653524 kubelet[2711]: I0813 00:47:25.653499 2711 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 00:47:25.653524 kubelet[2711]: I0813 00:47:25.653523 2711 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 00:47:25.653595 kubelet[2711]: I0813 00:47:25.653546 2711 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 00:47:25.653595 kubelet[2711]: E0813 00:47:25.653591 2711 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:47:25.689095 kubelet[2711]: I0813 00:47:25.689072 2711 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 00:47:25.689212 kubelet[2711]: I0813 00:47:25.689201 2711 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 00:47:25.689265 kubelet[2711]: I0813 00:47:25.689256 2711 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:47:25.689454 kubelet[2711]: I0813 00:47:25.689435 2711 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 00:47:25.689515 kubelet[2711]: I0813 00:47:25.689494 2711 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 00:47:25.689567 kubelet[2711]: I0813 00:47:25.689559 2711 policy_none.go:49] "None policy: Start" Aug 13 00:47:25.697783 kubelet[2711]: I0813 00:47:25.697733 2711 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 00:47:25.697873 kubelet[2711]: I0813 00:47:25.697825 2711 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:47:25.698019 kubelet[2711]: I0813 00:47:25.698002 2711 state_mem.go:75] "Updated machine memory state" Aug 13 00:47:25.703204 kubelet[2711]: I0813 00:47:25.703179 2711 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:47:25.703376 kubelet[2711]: I0813 00:47:25.703360 2711 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:47:25.703425 kubelet[2711]: I0813 00:47:25.703375 2711 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:47:25.703850 kubelet[2711]: I0813 00:47:25.703833 2711 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:47:25.817798 kubelet[2711]: I0813 00:47:25.817689 2711 kubelet_node_status.go:72] "Attempting to register node" node="172-234-29-69" Aug 13 00:47:25.824175 kubelet[2711]: I0813 00:47:25.824100 2711 kubelet_node_status.go:111] "Node was previously registered" node="172-234-29-69" Aug 13 00:47:25.824175 kubelet[2711]: I0813 00:47:25.824163 2711 kubelet_node_status.go:75] "Successfully registered node" node="172-234-29-69" Aug 13 00:47:25.942958 kubelet[2711]: I0813 00:47:25.942889 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9522e501dbf625a4a5d70a80c7d3d40b-kubeconfig\") pod \"kube-scheduler-172-234-29-69\" (UID: \"9522e501dbf625a4a5d70a80c7d3d40b\") " pod="kube-system/kube-scheduler-172-234-29-69" Aug 13 00:47:25.942958 kubelet[2711]: I0813 00:47:25.942927 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3ab9091b4db9555010c8b582706b7dae-k8s-certs\") pod \"kube-apiserver-172-234-29-69\" (UID: \"3ab9091b4db9555010c8b582706b7dae\") " pod="kube-system/kube-apiserver-172-234-29-69" Aug 13 00:47:25.942958 kubelet[2711]: I0813 00:47:25.942948 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3ab9091b4db9555010c8b582706b7dae-usr-share-ca-certificates\") pod \"kube-apiserver-172-234-29-69\" (UID: \"3ab9091b4db9555010c8b582706b7dae\") " pod="kube-system/kube-apiserver-172-234-29-69" Aug 13 00:47:25.943175 kubelet[2711]: I0813 00:47:25.942973 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d457df60158022c6dd89e86681eac010-ca-certs\") pod \"kube-controller-manager-172-234-29-69\" (UID: \"d457df60158022c6dd89e86681eac010\") " pod="kube-system/kube-controller-manager-172-234-29-69" Aug 13 00:47:25.943175 kubelet[2711]: I0813 00:47:25.942988 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d457df60158022c6dd89e86681eac010-flexvolume-dir\") pod \"kube-controller-manager-172-234-29-69\" (UID: \"d457df60158022c6dd89e86681eac010\") " pod="kube-system/kube-controller-manager-172-234-29-69" Aug 13 00:47:25.943175 kubelet[2711]: I0813 00:47:25.943001 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d457df60158022c6dd89e86681eac010-k8s-certs\") pod \"kube-controller-manager-172-234-29-69\" (UID: \"d457df60158022c6dd89e86681eac010\") " pod="kube-system/kube-controller-manager-172-234-29-69" Aug 13 00:47:25.943175 kubelet[2711]: I0813 00:47:25.943015 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d457df60158022c6dd89e86681eac010-usr-share-ca-certificates\") pod \"kube-controller-manager-172-234-29-69\" (UID: \"d457df60158022c6dd89e86681eac010\") " pod="kube-system/kube-controller-manager-172-234-29-69" Aug 13 00:47:25.943175 kubelet[2711]: I0813 00:47:25.943029 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3ab9091b4db9555010c8b582706b7dae-ca-certs\") pod \"kube-apiserver-172-234-29-69\" (UID: \"3ab9091b4db9555010c8b582706b7dae\") " pod="kube-system/kube-apiserver-172-234-29-69" Aug 13 00:47:25.943294 kubelet[2711]: I0813 00:47:25.943041 2711 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d457df60158022c6dd89e86681eac010-kubeconfig\") pod \"kube-controller-manager-172-234-29-69\" (UID: \"d457df60158022c6dd89e86681eac010\") " pod="kube-system/kube-controller-manager-172-234-29-69" Aug 13 00:47:26.037076 sudo[2743]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Aug 13 00:47:26.037529 sudo[2743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Aug 13 00:47:26.066446 kubelet[2711]: E0813 00:47:26.066365 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:47:26.067434 kubelet[2711]: E0813 00:47:26.067323 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:47:26.067434 kubelet[2711]: E0813 00:47:26.067375 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:47:26.530293 sudo[2743]: pam_unix(sudo:session): session closed for user root Aug 13 00:47:26.624882 kubelet[2711]: I0813 00:47:26.624845 2711 apiserver.go:52] "Watching apiserver" Aug 13 00:47:26.642311 kubelet[2711]: I0813 00:47:26.642280 2711 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 00:47:26.673037 kubelet[2711]: E0813 00:47:26.673012 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:47:26.680146 kubelet[2711]: E0813 00:47:26.680122 2711 kubelet.go:1915] 
"Failed creating a mirror pod for" err="pods \"kube-apiserver-172-234-29-69\" already exists" pod="kube-system/kube-apiserver-172-234-29-69" Aug 13 00:47:26.680575 kubelet[2711]: E0813 00:47:26.680455 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:47:26.685268 kubelet[2711]: E0813 00:47:26.685212 2711 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-172-234-29-69\" already exists" pod="kube-system/kube-controller-manager-172-234-29-69" Aug 13 00:47:26.685648 kubelet[2711]: E0813 00:47:26.685627 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:47:26.702753 kubelet[2711]: I0813 00:47:26.702507 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-234-29-69" podStartSLOduration=1.702497001 podStartE2EDuration="1.702497001s" podCreationTimestamp="2025-08-13 00:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:47:26.702022951 +0000 UTC m=+1.144561529" watchObservedRunningTime="2025-08-13 00:47:26.702497001 +0000 UTC m=+1.145035579" Aug 13 00:47:26.714449 kubelet[2711]: I0813 00:47:26.714368 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-234-29-69" podStartSLOduration=1.7143582849999999 podStartE2EDuration="1.714358285s" podCreationTimestamp="2025-08-13 00:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:47:26.708518878 +0000 UTC m=+1.151057456" watchObservedRunningTime="2025-08-13 
00:47:26.714358285 +0000 UTC m=+1.156896863" Aug 13 00:47:27.674178 kubelet[2711]: E0813 00:47:27.674144 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:47:27.675394 kubelet[2711]: E0813 00:47:27.674646 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:47:27.942220 sudo[1806]: pam_unix(sudo:session): session closed for user root Aug 13 00:47:27.993405 sshd[1805]: Connection closed by 147.75.109.163 port 35048 Aug 13 00:47:27.993911 sshd-session[1803]: pam_unix(sshd:session): session closed for user core Aug 13 00:47:27.998811 systemd-logind[1522]: Session 7 logged out. Waiting for processes to exit. Aug 13 00:47:27.999471 systemd[1]: sshd@6-172.234.29.69:22-147.75.109.163:35048.service: Deactivated successfully. Aug 13 00:47:28.004510 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 00:47:28.004794 systemd[1]: session-7.scope: Consumed 3.693s CPU time, 267M memory peak. Aug 13 00:47:28.007895 systemd-logind[1522]: Removed session 7. Aug 13 00:47:28.676110 kubelet[2711]: E0813 00:47:28.676073 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:47:28.794505 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Aug 13 00:47:29.107346 kubelet[2711]: E0813 00:47:29.107182 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:47:29.859654 kubelet[2711]: I0813 00:47:29.859627 2711 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 00:47:29.860288 containerd[1542]: time="2025-08-13T00:47:29.860254511Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 00:47:29.860583 kubelet[2711]: I0813 00:47:29.860532 2711 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 00:47:30.735598 kubelet[2711]: I0813 00:47:30.735546 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-234-29-69" podStartSLOduration=5.735512714 podStartE2EDuration="5.735512714s" podCreationTimestamp="2025-08-13 00:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:47:26.714800585 +0000 UTC m=+1.157339163" watchObservedRunningTime="2025-08-13 00:47:30.735512714 +0000 UTC m=+5.178051292" Aug 13 00:47:30.743588 systemd[1]: Created slice kubepods-besteffort-podcdf0a4cf_b96d_4aa6_afd3_51499022755f.slice - libcontainer container kubepods-besteffort-podcdf0a4cf_b96d_4aa6_afd3_51499022755f.slice. Aug 13 00:47:30.759542 systemd[1]: Created slice kubepods-burstable-pod8df6994e_8a85_41b6_8da6_9a30b65a07d4.slice - libcontainer container kubepods-burstable-pod8df6994e_8a85_41b6_8da6_9a30b65a07d4.slice. 
Aug 13 00:47:30.773292 kubelet[2711]: I0813 00:47:30.773262 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cdf0a4cf-b96d-4aa6-afd3-51499022755f-xtables-lock\") pod \"kube-proxy-tzrbj\" (UID: \"cdf0a4cf-b96d-4aa6-afd3-51499022755f\") " pod="kube-system/kube-proxy-tzrbj" Aug 13 00:47:30.774208 kubelet[2711]: I0813 00:47:30.774185 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8df6994e-8a85-41b6-8da6-9a30b65a07d4-hubble-tls\") pod \"cilium-h9fdf\" (UID: \"8df6994e-8a85-41b6-8da6-9a30b65a07d4\") " pod="kube-system/cilium-h9fdf" Aug 13 00:47:30.774251 kubelet[2711]: I0813 00:47:30.774214 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cdf0a4cf-b96d-4aa6-afd3-51499022755f-kube-proxy\") pod \"kube-proxy-tzrbj\" (UID: \"cdf0a4cf-b96d-4aa6-afd3-51499022755f\") " pod="kube-system/kube-proxy-tzrbj" Aug 13 00:47:30.774280 kubelet[2711]: I0813 00:47:30.774265 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8df6994e-8a85-41b6-8da6-9a30b65a07d4-cilium-run\") pod \"cilium-h9fdf\" (UID: \"8df6994e-8a85-41b6-8da6-9a30b65a07d4\") " pod="kube-system/cilium-h9fdf" Aug 13 00:47:30.774314 kubelet[2711]: I0813 00:47:30.774280 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8df6994e-8a85-41b6-8da6-9a30b65a07d4-host-proc-sys-kernel\") pod \"cilium-h9fdf\" (UID: \"8df6994e-8a85-41b6-8da6-9a30b65a07d4\") " pod="kube-system/cilium-h9fdf" Aug 13 00:47:30.774341 kubelet[2711]: I0813 00:47:30.774330 2711 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8df6994e-8a85-41b6-8da6-9a30b65a07d4-etc-cni-netd\") pod \"cilium-h9fdf\" (UID: \"8df6994e-8a85-41b6-8da6-9a30b65a07d4\") " pod="kube-system/cilium-h9fdf" Aug 13 00:47:30.774362 kubelet[2711]: I0813 00:47:30.774345 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cdf0a4cf-b96d-4aa6-afd3-51499022755f-lib-modules\") pod \"kube-proxy-tzrbj\" (UID: \"cdf0a4cf-b96d-4aa6-afd3-51499022755f\") " pod="kube-system/kube-proxy-tzrbj" Aug 13 00:47:30.774362 kubelet[2711]: I0813 00:47:30.774356 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8df6994e-8a85-41b6-8da6-9a30b65a07d4-lib-modules\") pod \"cilium-h9fdf\" (UID: \"8df6994e-8a85-41b6-8da6-9a30b65a07d4\") " pod="kube-system/cilium-h9fdf" Aug 13 00:47:30.774406 kubelet[2711]: I0813 00:47:30.774368 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8df6994e-8a85-41b6-8da6-9a30b65a07d4-bpf-maps\") pod \"cilium-h9fdf\" (UID: \"8df6994e-8a85-41b6-8da6-9a30b65a07d4\") " pod="kube-system/cilium-h9fdf" Aug 13 00:47:30.774447 kubelet[2711]: I0813 00:47:30.774429 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8df6994e-8a85-41b6-8da6-9a30b65a07d4-hostproc\") pod \"cilium-h9fdf\" (UID: \"8df6994e-8a85-41b6-8da6-9a30b65a07d4\") " pod="kube-system/cilium-h9fdf" Aug 13 00:47:30.774510 kubelet[2711]: I0813 00:47:30.774449 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/8df6994e-8a85-41b6-8da6-9a30b65a07d4-cilium-config-path\") pod \"cilium-h9fdf\" (UID: \"8df6994e-8a85-41b6-8da6-9a30b65a07d4\") " pod="kube-system/cilium-h9fdf" Aug 13 00:47:30.774533 kubelet[2711]: I0813 00:47:30.774515 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ghgm\" (UniqueName: \"kubernetes.io/projected/8df6994e-8a85-41b6-8da6-9a30b65a07d4-kube-api-access-6ghgm\") pod \"cilium-h9fdf\" (UID: \"8df6994e-8a85-41b6-8da6-9a30b65a07d4\") " pod="kube-system/cilium-h9fdf" Aug 13 00:47:30.774590 kubelet[2711]: I0813 00:47:30.774574 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl9d2\" (UniqueName: \"kubernetes.io/projected/cdf0a4cf-b96d-4aa6-afd3-51499022755f-kube-api-access-wl9d2\") pod \"kube-proxy-tzrbj\" (UID: \"cdf0a4cf-b96d-4aa6-afd3-51499022755f\") " pod="kube-system/kube-proxy-tzrbj" Aug 13 00:47:30.774618 kubelet[2711]: I0813 00:47:30.774596 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8df6994e-8a85-41b6-8da6-9a30b65a07d4-cni-path\") pod \"cilium-h9fdf\" (UID: \"8df6994e-8a85-41b6-8da6-9a30b65a07d4\") " pod="kube-system/cilium-h9fdf" Aug 13 00:47:30.774664 kubelet[2711]: I0813 00:47:30.774648 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8df6994e-8a85-41b6-8da6-9a30b65a07d4-clustermesh-secrets\") pod \"cilium-h9fdf\" (UID: \"8df6994e-8a85-41b6-8da6-9a30b65a07d4\") " pod="kube-system/cilium-h9fdf" Aug 13 00:47:30.774687 kubelet[2711]: I0813 00:47:30.774667 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8df6994e-8a85-41b6-8da6-9a30b65a07d4-host-proc-sys-net\") 
pod \"cilium-h9fdf\" (UID: \"8df6994e-8a85-41b6-8da6-9a30b65a07d4\") " pod="kube-system/cilium-h9fdf" Aug 13 00:47:30.774687 kubelet[2711]: I0813 00:47:30.774680 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8df6994e-8a85-41b6-8da6-9a30b65a07d4-cilium-cgroup\") pod \"cilium-h9fdf\" (UID: \"8df6994e-8a85-41b6-8da6-9a30b65a07d4\") " pod="kube-system/cilium-h9fdf" Aug 13 00:47:30.775311 kubelet[2711]: I0813 00:47:30.774728 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8df6994e-8a85-41b6-8da6-9a30b65a07d4-xtables-lock\") pod \"cilium-h9fdf\" (UID: \"8df6994e-8a85-41b6-8da6-9a30b65a07d4\") " pod="kube-system/cilium-h9fdf" Aug 13 00:47:30.938665 systemd[1]: Created slice kubepods-besteffort-pod90281167_bc96_4ee7_975d_6bd06c3bd885.slice - libcontainer container kubepods-besteffort-pod90281167_bc96_4ee7_975d_6bd06c3bd885.slice. 
Aug 13 00:47:30.976806 kubelet[2711]: I0813 00:47:30.976785 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/90281167-bc96-4ee7-975d-6bd06c3bd885-cilium-config-path\") pod \"cilium-operator-5d85765b45-r5fdz\" (UID: \"90281167-bc96-4ee7-975d-6bd06c3bd885\") " pod="kube-system/cilium-operator-5d85765b45-r5fdz" Aug 13 00:47:30.977106 kubelet[2711]: I0813 00:47:30.976815 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhwvf\" (UniqueName: \"kubernetes.io/projected/90281167-bc96-4ee7-975d-6bd06c3bd885-kube-api-access-vhwvf\") pod \"cilium-operator-5d85765b45-r5fdz\" (UID: \"90281167-bc96-4ee7-975d-6bd06c3bd885\") " pod="kube-system/cilium-operator-5d85765b45-r5fdz" Aug 13 00:47:31.054210 kubelet[2711]: E0813 00:47:31.053787 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:47:31.054819 containerd[1542]: time="2025-08-13T00:47:31.054286002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tzrbj,Uid:cdf0a4cf-b96d-4aa6-afd3-51499022755f,Namespace:kube-system,Attempt:0,}" Aug 13 00:47:31.065531 kubelet[2711]: E0813 00:47:31.064979 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:47:31.066176 containerd[1542]: time="2025-08-13T00:47:31.066134261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h9fdf,Uid:8df6994e-8a85-41b6-8da6-9a30b65a07d4,Namespace:kube-system,Attempt:0,}" Aug 13 00:47:31.072627 containerd[1542]: time="2025-08-13T00:47:31.072572982Z" level=info msg="connecting to shim a85a14baa6b7cf73493e8e2402d6db12d6f36515377d6a9ed8dee83285252e8e" 
address="unix:///run/containerd/s/9d3975d94cf242b950dbaf1363080302b730c9894b74da31725694bf2c7020e9" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:47:31.098727 containerd[1542]: time="2025-08-13T00:47:31.098388523Z" level=info msg="connecting to shim 68b809f38d919330218ef88f91cd0ed7ad095b1d88a46b8d6e7c3c2d5f3c21a0" address="unix:///run/containerd/s/e59bb171a2e69ac09978844359e9a65bfc48235531ad9007c570e8f31aca3c60" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:47:31.098548 systemd[1]: Started cri-containerd-a85a14baa6b7cf73493e8e2402d6db12d6f36515377d6a9ed8dee83285252e8e.scope - libcontainer container a85a14baa6b7cf73493e8e2402d6db12d6f36515377d6a9ed8dee83285252e8e. Aug 13 00:47:31.127553 systemd[1]: Started cri-containerd-68b809f38d919330218ef88f91cd0ed7ad095b1d88a46b8d6e7c3c2d5f3c21a0.scope - libcontainer container 68b809f38d919330218ef88f91cd0ed7ad095b1d88a46b8d6e7c3c2d5f3c21a0. Aug 13 00:47:31.136460 containerd[1542]: time="2025-08-13T00:47:31.136428300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tzrbj,Uid:cdf0a4cf-b96d-4aa6-afd3-51499022755f,Namespace:kube-system,Attempt:0,} returns sandbox id \"a85a14baa6b7cf73493e8e2402d6db12d6f36515377d6a9ed8dee83285252e8e\"" Aug 13 00:47:31.138346 kubelet[2711]: E0813 00:47:31.137824 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:47:31.139954 containerd[1542]: time="2025-08-13T00:47:31.139917293Z" level=info msg="CreateContainer within sandbox \"a85a14baa6b7cf73493e8e2402d6db12d6f36515377d6a9ed8dee83285252e8e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 00:47:31.150370 containerd[1542]: time="2025-08-13T00:47:31.150339393Z" level=info msg="Container 5522689ec1f6b93d43b3f894d0c37412031d26f0e198c0aa37a151edba8d3876: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:47:31.157093 containerd[1542]: 
time="2025-08-13T00:47:31.157053172Z" level=info msg="CreateContainer within sandbox \"a85a14baa6b7cf73493e8e2402d6db12d6f36515377d6a9ed8dee83285252e8e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5522689ec1f6b93d43b3f894d0c37412031d26f0e198c0aa37a151edba8d3876\"" Aug 13 00:47:31.157541 containerd[1542]: time="2025-08-13T00:47:31.157507878Z" level=info msg="StartContainer for \"5522689ec1f6b93d43b3f894d0c37412031d26f0e198c0aa37a151edba8d3876\"" Aug 13 00:47:31.160281 containerd[1542]: time="2025-08-13T00:47:31.160251177Z" level=info msg="connecting to shim 5522689ec1f6b93d43b3f894d0c37412031d26f0e198c0aa37a151edba8d3876" address="unix:///run/containerd/s/9d3975d94cf242b950dbaf1363080302b730c9894b74da31725694bf2c7020e9" protocol=ttrpc version=3 Aug 13 00:47:31.169612 containerd[1542]: time="2025-08-13T00:47:31.169490436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h9fdf,Uid:8df6994e-8a85-41b6-8da6-9a30b65a07d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"68b809f38d919330218ef88f91cd0ed7ad095b1d88a46b8d6e7c3c2d5f3c21a0\"" Aug 13 00:47:31.173504 kubelet[2711]: E0813 00:47:31.173445 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:47:31.175105 containerd[1542]: time="2025-08-13T00:47:31.175068033Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Aug 13 00:47:31.202435 systemd[1]: Started cri-containerd-5522689ec1f6b93d43b3f894d0c37412031d26f0e198c0aa37a151edba8d3876.scope - libcontainer container 5522689ec1f6b93d43b3f894d0c37412031d26f0e198c0aa37a151edba8d3876. 
Aug 13 00:47:31.242605 kubelet[2711]: E0813 00:47:31.242554 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:47:31.244155 containerd[1542]: time="2025-08-13T00:47:31.244120951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-r5fdz,Uid:90281167-bc96-4ee7-975d-6bd06c3bd885,Namespace:kube-system,Attempt:0,}" Aug 13 00:47:31.245132 containerd[1542]: time="2025-08-13T00:47:31.245100244Z" level=info msg="StartContainer for \"5522689ec1f6b93d43b3f894d0c37412031d26f0e198c0aa37a151edba8d3876\" returns successfully" Aug 13 00:47:31.261880 containerd[1542]: time="2025-08-13T00:47:31.261710096Z" level=info msg="connecting to shim 5e23f51dd5bc9c8e01794cfdf9bad4cc4b19150c19744920f7e0f8becd14e7a8" address="unix:///run/containerd/s/a821f1b182031fd466961cd92b8cbde715230aa8951044ed4e12f3be9779a8f7" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:47:31.289656 systemd[1]: Started cri-containerd-5e23f51dd5bc9c8e01794cfdf9bad4cc4b19150c19744920f7e0f8becd14e7a8.scope - libcontainer container 5e23f51dd5bc9c8e01794cfdf9bad4cc4b19150c19744920f7e0f8becd14e7a8. 
Aug 13 00:47:31.348243 containerd[1542]: time="2025-08-13T00:47:31.347998951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-r5fdz,Uid:90281167-bc96-4ee7-975d-6bd06c3bd885,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e23f51dd5bc9c8e01794cfdf9bad4cc4b19150c19744920f7e0f8becd14e7a8\"" Aug 13 00:47:31.349557 kubelet[2711]: E0813 00:47:31.349511 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:47:31.683057 kubelet[2711]: E0813 00:47:31.682778 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:47:31.698873 kubelet[2711]: I0813 00:47:31.698816 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tzrbj" podStartSLOduration=1.6987983309999999 podStartE2EDuration="1.698798331s" podCreationTimestamp="2025-08-13 00:47:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:47:31.698062077 +0000 UTC m=+6.140600675" watchObservedRunningTime="2025-08-13 00:47:31.698798331 +0000 UTC m=+6.141336909" Aug 13 00:47:32.405676 kubelet[2711]: E0813 00:47:32.405627 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:47:32.690163 kubelet[2711]: E0813 00:47:32.690018 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:47:34.801792 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3338263794.mount: Deactivated successfully. Aug 13 00:47:35.753544 kubelet[2711]: I0813 00:47:35.753013 2711 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:47:35.753544 kubelet[2711]: I0813 00:47:35.753259 2711 container_gc.go:88] "Attempting to delete unused containers" Aug 13 00:47:35.755746 kubelet[2711]: I0813 00:47:35.755651 2711 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:47:35.766267 kubelet[2711]: I0813 00:47:35.766250 2711 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:47:35.766769 kubelet[2711]: I0813 00:47:35.766665 2711 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-r5fdz","kube-system/cilium-h9fdf","kube-system/kube-controller-manager-172-234-29-69","kube-system/kube-apiserver-172-234-29-69","kube-system/kube-scheduler-172-234-29-69","kube-system/kube-proxy-tzrbj"] Aug 13 00:47:35.766769 kubelet[2711]: E0813 00:47:35.766701 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-r5fdz" Aug 13 00:47:35.766769 kubelet[2711]: E0813 00:47:35.766711 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-h9fdf" Aug 13 00:47:35.766769 kubelet[2711]: E0813 00:47:35.766723 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-29-69" Aug 13 00:47:35.766769 kubelet[2711]: E0813 00:47:35.766731 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-29-69" Aug 13 00:47:35.766769 kubelet[2711]: E0813 00:47:35.766739 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-29-69" Aug 13 
00:47:35.766769 kubelet[2711]: E0813 00:47:35.766747 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-tzrbj" Aug 13 00:47:35.766769 kubelet[2711]: I0813 00:47:35.766757 2711 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 00:47:36.348009 containerd[1542]: time="2025-08-13T00:47:36.347965399Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:47:36.348842 containerd[1542]: time="2025-08-13T00:47:36.348641254Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Aug 13 00:47:36.349256 containerd[1542]: time="2025-08-13T00:47:36.349228821Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:47:36.350541 containerd[1542]: time="2025-08-13T00:47:36.350505695Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 5.175410432s" Aug 13 00:47:36.350541 containerd[1542]: time="2025-08-13T00:47:36.350538204Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Aug 13 00:47:36.351995 containerd[1542]: time="2025-08-13T00:47:36.351727087Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Aug 13 00:47:36.353239 containerd[1542]: time="2025-08-13T00:47:36.353204089Z" level=info msg="CreateContainer within sandbox \"68b809f38d919330218ef88f91cd0ed7ad095b1d88a46b8d6e7c3c2d5f3c21a0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 00:47:36.360990 containerd[1542]: time="2025-08-13T00:47:36.360956804Z" level=info msg="Container ca05aee711ed5f20920afe55ee30e6fc8657921e32332a57bdaa1ed33b42c9c1: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:47:36.367515 containerd[1542]: time="2025-08-13T00:47:36.367489637Z" level=info msg="CreateContainer within sandbox \"68b809f38d919330218ef88f91cd0ed7ad095b1d88a46b8d6e7c3c2d5f3c21a0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ca05aee711ed5f20920afe55ee30e6fc8657921e32332a57bdaa1ed33b42c9c1\"" Aug 13 00:47:36.367952 containerd[1542]: time="2025-08-13T00:47:36.367920745Z" level=info msg="StartContainer for \"ca05aee711ed5f20920afe55ee30e6fc8657921e32332a57bdaa1ed33b42c9c1\"" Aug 13 00:47:36.369279 containerd[1542]: time="2025-08-13T00:47:36.369255667Z" level=info msg="connecting to shim ca05aee711ed5f20920afe55ee30e6fc8657921e32332a57bdaa1ed33b42c9c1" address="unix:///run/containerd/s/e59bb171a2e69ac09978844359e9a65bfc48235531ad9007c570e8f31aca3c60" protocol=ttrpc version=3 Aug 13 00:47:36.397426 systemd[1]: Started cri-containerd-ca05aee711ed5f20920afe55ee30e6fc8657921e32332a57bdaa1ed33b42c9c1.scope - libcontainer container ca05aee711ed5f20920afe55ee30e6fc8657921e32332a57bdaa1ed33b42c9c1. Aug 13 00:47:36.427501 containerd[1542]: time="2025-08-13T00:47:36.427449865Z" level=info msg="StartContainer for \"ca05aee711ed5f20920afe55ee30e6fc8657921e32332a57bdaa1ed33b42c9c1\" returns successfully" Aug 13 00:47:36.439502 systemd[1]: cri-containerd-ca05aee711ed5f20920afe55ee30e6fc8657921e32332a57bdaa1ed33b42c9c1.scope: Deactivated successfully. 
Aug 13 00:47:36.443196 containerd[1542]: time="2025-08-13T00:47:36.443169135Z" level=info msg="received exit event container_id:\"ca05aee711ed5f20920afe55ee30e6fc8657921e32332a57bdaa1ed33b42c9c1\" id:\"ca05aee711ed5f20920afe55ee30e6fc8657921e32332a57bdaa1ed33b42c9c1\" pid:3121 exited_at:{seconds:1755046056 nanos:442766957}" Aug 13 00:47:36.443384 containerd[1542]: time="2025-08-13T00:47:36.443352404Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca05aee711ed5f20920afe55ee30e6fc8657921e32332a57bdaa1ed33b42c9c1\" id:\"ca05aee711ed5f20920afe55ee30e6fc8657921e32332a57bdaa1ed33b42c9c1\" pid:3121 exited_at:{seconds:1755046056 nanos:442766957}" Aug 13 00:47:36.469094 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca05aee711ed5f20920afe55ee30e6fc8657921e32332a57bdaa1ed33b42c9c1-rootfs.mount: Deactivated successfully. Aug 13 00:47:36.696810 kubelet[2711]: E0813 00:47:36.696778 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:47:36.699781 containerd[1542]: time="2025-08-13T00:47:36.699737940Z" level=info msg="CreateContainer within sandbox \"68b809f38d919330218ef88f91cd0ed7ad095b1d88a46b8d6e7c3c2d5f3c21a0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 00:47:36.709476 containerd[1542]: time="2025-08-13T00:47:36.709095376Z" level=info msg="Container 876faeb7c58d7a3056d1969a8d7450b52a51eb4bc56e043d05e5925e0e5ccc29: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:47:36.716731 containerd[1542]: time="2025-08-13T00:47:36.716706332Z" level=info msg="CreateContainer within sandbox \"68b809f38d919330218ef88f91cd0ed7ad095b1d88a46b8d6e7c3c2d5f3c21a0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"876faeb7c58d7a3056d1969a8d7450b52a51eb4bc56e043d05e5925e0e5ccc29\"" Aug 13 00:47:36.718396 containerd[1542]: 
time="2025-08-13T00:47:36.718361354Z" level=info msg="StartContainer for \"876faeb7c58d7a3056d1969a8d7450b52a51eb4bc56e043d05e5925e0e5ccc29\"" Aug 13 00:47:36.721067 containerd[1542]: time="2025-08-13T00:47:36.721027618Z" level=info msg="connecting to shim 876faeb7c58d7a3056d1969a8d7450b52a51eb4bc56e043d05e5925e0e5ccc29" address="unix:///run/containerd/s/e59bb171a2e69ac09978844359e9a65bfc48235531ad9007c570e8f31aca3c60" protocol=ttrpc version=3 Aug 13 00:47:36.742446 systemd[1]: Started cri-containerd-876faeb7c58d7a3056d1969a8d7450b52a51eb4bc56e043d05e5925e0e5ccc29.scope - libcontainer container 876faeb7c58d7a3056d1969a8d7450b52a51eb4bc56e043d05e5925e0e5ccc29. Aug 13 00:47:36.773399 containerd[1542]: time="2025-08-13T00:47:36.773290440Z" level=info msg="StartContainer for \"876faeb7c58d7a3056d1969a8d7450b52a51eb4bc56e043d05e5925e0e5ccc29\" returns successfully" Aug 13 00:47:36.792289 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 00:47:36.793218 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:47:36.794173 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Aug 13 00:47:36.798357 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 00:47:36.798584 systemd[1]: cri-containerd-876faeb7c58d7a3056d1969a8d7450b52a51eb4bc56e043d05e5925e0e5ccc29.scope: Deactivated successfully. 
Aug 13 00:47:36.803695 containerd[1542]: time="2025-08-13T00:47:36.803657936Z" level=info msg="received exit event container_id:\"876faeb7c58d7a3056d1969a8d7450b52a51eb4bc56e043d05e5925e0e5ccc29\" id:\"876faeb7c58d7a3056d1969a8d7450b52a51eb4bc56e043d05e5925e0e5ccc29\" pid:3165 exited_at:{seconds:1755046056 nanos:802901180}" Aug 13 00:47:36.803928 containerd[1542]: time="2025-08-13T00:47:36.803834815Z" level=info msg="TaskExit event in podsandbox handler container_id:\"876faeb7c58d7a3056d1969a8d7450b52a51eb4bc56e043d05e5925e0e5ccc29\" id:\"876faeb7c58d7a3056d1969a8d7450b52a51eb4bc56e043d05e5925e0e5ccc29\" pid:3165 exited_at:{seconds:1755046056 nanos:802901180}" Aug 13 00:47:36.825288 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:47:37.239523 containerd[1542]: time="2025-08-13T00:47:37.239484794Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:47:37.240286 containerd[1542]: time="2025-08-13T00:47:37.240253450Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Aug 13 00:47:37.241171 containerd[1542]: time="2025-08-13T00:47:37.240977056Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:47:37.242047 containerd[1542]: time="2025-08-13T00:47:37.242020211Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest 
\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 890.264394ms" Aug 13 00:47:37.242121 containerd[1542]: time="2025-08-13T00:47:37.242107100Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Aug 13 00:47:37.244909 containerd[1542]: time="2025-08-13T00:47:37.244851545Z" level=info msg="CreateContainer within sandbox \"5e23f51dd5bc9c8e01794cfdf9bad4cc4b19150c19744920f7e0f8becd14e7a8\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Aug 13 00:47:37.250888 containerd[1542]: time="2025-08-13T00:47:37.250864112Z" level=info msg="Container 7c278e5781c0f00bdbd9b76cf8dd05a6478545bd84cb448a3954c34de2e6a72e: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:47:37.254653 containerd[1542]: time="2025-08-13T00:47:37.254613522Z" level=info msg="CreateContainer within sandbox \"5e23f51dd5bc9c8e01794cfdf9bad4cc4b19150c19744920f7e0f8becd14e7a8\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"7c278e5781c0f00bdbd9b76cf8dd05a6478545bd84cb448a3954c34de2e6a72e\"" Aug 13 00:47:37.255268 containerd[1542]: time="2025-08-13T00:47:37.255198389Z" level=info msg="StartContainer for \"7c278e5781c0f00bdbd9b76cf8dd05a6478545bd84cb448a3954c34de2e6a72e\"" Aug 13 00:47:37.256271 containerd[1542]: time="2025-08-13T00:47:37.256233394Z" level=info msg="connecting to shim 7c278e5781c0f00bdbd9b76cf8dd05a6478545bd84cb448a3954c34de2e6a72e" address="unix:///run/containerd/s/a821f1b182031fd466961cd92b8cbde715230aa8951044ed4e12f3be9779a8f7" protocol=ttrpc version=3 Aug 13 00:47:37.275426 systemd[1]: Started cri-containerd-7c278e5781c0f00bdbd9b76cf8dd05a6478545bd84cb448a3954c34de2e6a72e.scope - libcontainer container 7c278e5781c0f00bdbd9b76cf8dd05a6478545bd84cb448a3954c34de2e6a72e. 
Aug 13 00:47:37.312963 containerd[1542]: time="2025-08-13T00:47:37.312931378Z" level=info msg="StartContainer for \"7c278e5781c0f00bdbd9b76cf8dd05a6478545bd84cb448a3954c34de2e6a72e\" returns successfully" Aug 13 00:47:37.701132 kubelet[2711]: E0813 00:47:37.701086 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:47:37.705484 kubelet[2711]: E0813 00:47:37.705467 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:47:37.708619 containerd[1542]: time="2025-08-13T00:47:37.708577837Z" level=info msg="CreateContainer within sandbox \"68b809f38d919330218ef88f91cd0ed7ad095b1d88a46b8d6e7c3c2d5f3c21a0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 00:47:37.717105 kubelet[2711]: I0813 00:47:37.717039 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-r5fdz" podStartSLOduration=1.825216793 podStartE2EDuration="7.716845172s" podCreationTimestamp="2025-08-13 00:47:30 +0000 UTC" firstStartedPulling="2025-08-13 00:47:31.351357956 +0000 UTC m=+5.793896534" lastFinishedPulling="2025-08-13 00:47:37.242986335 +0000 UTC m=+11.685524913" observedRunningTime="2025-08-13 00:47:37.71366753 +0000 UTC m=+12.156206108" watchObservedRunningTime="2025-08-13 00:47:37.716845172 +0000 UTC m=+12.159383760" Aug 13 00:47:37.724238 containerd[1542]: time="2025-08-13T00:47:37.724206373Z" level=info msg="Container 81263c8649857a46f8bdb198072c0f97ba4060f974c6367674c23854d4dff5b6: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:47:37.736886 containerd[1542]: time="2025-08-13T00:47:37.736852265Z" level=info msg="CreateContainer within sandbox \"68b809f38d919330218ef88f91cd0ed7ad095b1d88a46b8d6e7c3c2d5f3c21a0\" 
for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"81263c8649857a46f8bdb198072c0f97ba4060f974c6367674c23854d4dff5b6\"" Aug 13 00:47:37.737164 containerd[1542]: time="2025-08-13T00:47:37.737136093Z" level=info msg="StartContainer for \"81263c8649857a46f8bdb198072c0f97ba4060f974c6367674c23854d4dff5b6\"" Aug 13 00:47:37.738404 containerd[1542]: time="2025-08-13T00:47:37.738346006Z" level=info msg="connecting to shim 81263c8649857a46f8bdb198072c0f97ba4060f974c6367674c23854d4dff5b6" address="unix:///run/containerd/s/e59bb171a2e69ac09978844359e9a65bfc48235531ad9007c570e8f31aca3c60" protocol=ttrpc version=3 Aug 13 00:47:37.776774 systemd[1]: Started cri-containerd-81263c8649857a46f8bdb198072c0f97ba4060f974c6367674c23854d4dff5b6.scope - libcontainer container 81263c8649857a46f8bdb198072c0f97ba4060f974c6367674c23854d4dff5b6. Aug 13 00:47:37.888405 containerd[1542]: time="2025-08-13T00:47:37.888359098Z" level=info msg="StartContainer for \"81263c8649857a46f8bdb198072c0f97ba4060f974c6367674c23854d4dff5b6\" returns successfully" Aug 13 00:47:37.897971 systemd[1]: cri-containerd-81263c8649857a46f8bdb198072c0f97ba4060f974c6367674c23854d4dff5b6.scope: Deactivated successfully. 
Aug 13 00:47:37.900778 containerd[1542]: time="2025-08-13T00:47:37.900745572Z" level=info msg="received exit event container_id:\"81263c8649857a46f8bdb198072c0f97ba4060f974c6367674c23854d4dff5b6\" id:\"81263c8649857a46f8bdb198072c0f97ba4060f974c6367674c23854d4dff5b6\" pid:3262 exited_at:{seconds:1755046057 nanos:899587518}" Aug 13 00:47:37.901007 containerd[1542]: time="2025-08-13T00:47:37.900894981Z" level=info msg="TaskExit event in podsandbox handler container_id:\"81263c8649857a46f8bdb198072c0f97ba4060f974c6367674c23854d4dff5b6\" id:\"81263c8649857a46f8bdb198072c0f97ba4060f974c6367674c23854d4dff5b6\" pid:3262 exited_at:{seconds:1755046057 nanos:899587518}" Aug 13 00:47:37.937260 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81263c8649857a46f8bdb198072c0f97ba4060f974c6367674c23854d4dff5b6-rootfs.mount: Deactivated successfully. Aug 13 00:47:38.214318 kubelet[2711]: E0813 00:47:38.213435 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:47:38.716393 kubelet[2711]: E0813 00:47:38.715446 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:47:38.716393 kubelet[2711]: E0813 00:47:38.715780 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:47:38.719108 containerd[1542]: time="2025-08-13T00:47:38.719002744Z" level=info msg="CreateContainer within sandbox \"68b809f38d919330218ef88f91cd0ed7ad095b1d88a46b8d6e7c3c2d5f3c21a0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 13 00:47:38.732777 containerd[1542]: time="2025-08-13T00:47:38.732037037Z" level=info msg="Container 
0546c72aef099fdd6782322a334120ba5116a36f4a9dc7e4781cecfe842ba7df: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:47:38.740196 containerd[1542]: time="2025-08-13T00:47:38.740169586Z" level=info msg="CreateContainer within sandbox \"68b809f38d919330218ef88f91cd0ed7ad095b1d88a46b8d6e7c3c2d5f3c21a0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0546c72aef099fdd6782322a334120ba5116a36f4a9dc7e4781cecfe842ba7df\"" Aug 13 00:47:38.740806 containerd[1542]: time="2025-08-13T00:47:38.740788653Z" level=info msg="StartContainer for \"0546c72aef099fdd6782322a334120ba5116a36f4a9dc7e4781cecfe842ba7df\"" Aug 13 00:47:38.741832 containerd[1542]: time="2025-08-13T00:47:38.741808239Z" level=info msg="connecting to shim 0546c72aef099fdd6782322a334120ba5116a36f4a9dc7e4781cecfe842ba7df" address="unix:///run/containerd/s/e59bb171a2e69ac09978844359e9a65bfc48235531ad9007c570e8f31aca3c60" protocol=ttrpc version=3 Aug 13 00:47:38.766525 systemd[1]: Started cri-containerd-0546c72aef099fdd6782322a334120ba5116a36f4a9dc7e4781cecfe842ba7df.scope - libcontainer container 0546c72aef099fdd6782322a334120ba5116a36f4a9dc7e4781cecfe842ba7df. Aug 13 00:47:38.801075 systemd[1]: cri-containerd-0546c72aef099fdd6782322a334120ba5116a36f4a9dc7e4781cecfe842ba7df.scope: Deactivated successfully. 
Aug 13 00:47:38.803576 containerd[1542]: time="2025-08-13T00:47:38.801990412Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0546c72aef099fdd6782322a334120ba5116a36f4a9dc7e4781cecfe842ba7df\" id:\"0546c72aef099fdd6782322a334120ba5116a36f4a9dc7e4781cecfe842ba7df\" pid:3300 exited_at:{seconds:1755046058 nanos:801413835}" Aug 13 00:47:38.803576 containerd[1542]: time="2025-08-13T00:47:38.803273206Z" level=info msg="received exit event container_id:\"0546c72aef099fdd6782322a334120ba5116a36f4a9dc7e4781cecfe842ba7df\" id:\"0546c72aef099fdd6782322a334120ba5116a36f4a9dc7e4781cecfe842ba7df\" pid:3300 exited_at:{seconds:1755046058 nanos:801413835}" Aug 13 00:47:38.812920 containerd[1542]: time="2025-08-13T00:47:38.812884277Z" level=info msg="StartContainer for \"0546c72aef099fdd6782322a334120ba5116a36f4a9dc7e4781cecfe842ba7df\" returns successfully" Aug 13 00:47:38.827679 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0546c72aef099fdd6782322a334120ba5116a36f4a9dc7e4781cecfe842ba7df-rootfs.mount: Deactivated successfully. 
Aug 13 00:47:39.114937 kubelet[2711]: E0813 00:47:39.114259 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:47:39.719598 kubelet[2711]: E0813 00:47:39.719545 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:47:39.722746 containerd[1542]: time="2025-08-13T00:47:39.722679721Z" level=info msg="CreateContainer within sandbox \"68b809f38d919330218ef88f91cd0ed7ad095b1d88a46b8d6e7c3c2d5f3c21a0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 13 00:47:39.743444 containerd[1542]: time="2025-08-13T00:47:39.741374601Z" level=info msg="Container 037f6ca83e7beac0860b9e77b4a8845d634d7930c8d828548df518102a85d581: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:47:39.744035 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2923666493.mount: Deactivated successfully. 
Aug 13 00:47:39.748585 containerd[1542]: time="2025-08-13T00:47:39.748560636Z" level=info msg="CreateContainer within sandbox \"68b809f38d919330218ef88f91cd0ed7ad095b1d88a46b8d6e7c3c2d5f3c21a0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"037f6ca83e7beac0860b9e77b4a8845d634d7930c8d828548df518102a85d581\"" Aug 13 00:47:39.749318 containerd[1542]: time="2025-08-13T00:47:39.749004325Z" level=info msg="StartContainer for \"037f6ca83e7beac0860b9e77b4a8845d634d7930c8d828548df518102a85d581\"" Aug 13 00:47:39.750198 containerd[1542]: time="2025-08-13T00:47:39.750175699Z" level=info msg="connecting to shim 037f6ca83e7beac0860b9e77b4a8845d634d7930c8d828548df518102a85d581" address="unix:///run/containerd/s/e59bb171a2e69ac09978844359e9a65bfc48235531ad9007c570e8f31aca3c60" protocol=ttrpc version=3 Aug 13 00:47:39.778426 systemd[1]: Started cri-containerd-037f6ca83e7beac0860b9e77b4a8845d634d7930c8d828548df518102a85d581.scope - libcontainer container 037f6ca83e7beac0860b9e77b4a8845d634d7930c8d828548df518102a85d581. 
Aug 13 00:47:39.816440 containerd[1542]: time="2025-08-13T00:47:39.816382741Z" level=info msg="StartContainer for \"037f6ca83e7beac0860b9e77b4a8845d634d7930c8d828548df518102a85d581\" returns successfully" Aug 13 00:47:39.881454 containerd[1542]: time="2025-08-13T00:47:39.881409060Z" level=info msg="TaskExit event in podsandbox handler container_id:\"037f6ca83e7beac0860b9e77b4a8845d634d7930c8d828548df518102a85d581\" id:\"c5d0c26863bc0faa1f9fc8503af110f3ce8c985f66391d54c517e0fc76b3e302\" pid:3368 exited_at:{seconds:1755046059 nanos:880721123}" Aug 13 00:47:39.922275 kubelet[2711]: I0813 00:47:39.922170 2711 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Aug 13 00:47:40.725655 kubelet[2711]: E0813 00:47:40.725469 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:47:40.739323 kubelet[2711]: I0813 00:47:40.739223 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-h9fdf" podStartSLOduration=5.561940445 podStartE2EDuration="10.739201225s" podCreationTimestamp="2025-08-13 00:47:30 +0000 UTC" firstStartedPulling="2025-08-13 00:47:31.174205979 +0000 UTC m=+5.616744557" lastFinishedPulling="2025-08-13 00:47:36.351466759 +0000 UTC m=+10.794005337" observedRunningTime="2025-08-13 00:47:40.737292994 +0000 UTC m=+15.179831572" watchObservedRunningTime="2025-08-13 00:47:40.739201225 +0000 UTC m=+15.181739803" Aug 13 00:47:41.728138 kubelet[2711]: E0813 00:47:41.728064 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:47:41.992006 systemd-networkd[1463]: cilium_host: Link UP Aug 13 00:47:41.992170 systemd-networkd[1463]: cilium_net: Link UP Aug 13 00:47:41.993424 systemd-networkd[1463]: cilium_host: Gained 
carrier Aug 13 00:47:41.994430 systemd-networkd[1463]: cilium_net: Gained carrier Aug 13 00:47:42.093663 systemd-networkd[1463]: cilium_vxlan: Link UP Aug 13 00:47:42.093676 systemd-networkd[1463]: cilium_vxlan: Gained carrier Aug 13 00:47:42.307520 kernel: NET: Registered PF_ALG protocol family Aug 13 00:47:42.617834 systemd-networkd[1463]: cilium_net: Gained IPv6LL Aug 13 00:47:42.680871 systemd-networkd[1463]: cilium_host: Gained IPv6LL Aug 13 00:47:42.730448 kubelet[2711]: E0813 00:47:42.730409 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:47:42.987576 systemd-networkd[1463]: lxc_health: Link UP Aug 13 00:47:42.990573 systemd-networkd[1463]: lxc_health: Gained carrier Aug 13 00:47:43.006201 update_engine[1523]: I20250813 00:47:43.004946 1523 update_attempter.cc:509] Updating boot flags... Aug 13 00:47:43.386618 systemd-networkd[1463]: cilium_vxlan: Gained IPv6LL Aug 13 00:47:43.732390 kubelet[2711]: E0813 00:47:43.732275 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:47:44.733503 kubelet[2711]: E0813 00:47:44.733451 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:47:44.792545 systemd-networkd[1463]: lxc_health: Gained IPv6LL Aug 13 00:47:45.735488 kubelet[2711]: E0813 00:47:45.735428 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:47:45.788628 kubelet[2711]: I0813 00:47:45.788604 2711 eviction_manager.go:369] "Eviction manager: attempting to reclaim" 
resourceName="ephemeral-storage" Aug 13 00:47:45.788927 kubelet[2711]: I0813 00:47:45.788728 2711 container_gc.go:88] "Attempting to delete unused containers" Aug 13 00:47:45.790690 kubelet[2711]: I0813 00:47:45.790676 2711 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:47:45.803117 kubelet[2711]: I0813 00:47:45.802889 2711 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:47:45.803117 kubelet[2711]: I0813 00:47:45.802955 2711 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-r5fdz","kube-system/kube-controller-manager-172-234-29-69","kube-system/kube-proxy-tzrbj","kube-system/kube-apiserver-172-234-29-69","kube-system/cilium-h9fdf","kube-system/kube-scheduler-172-234-29-69"] Aug 13 00:47:45.803117 kubelet[2711]: E0813 00:47:45.802984 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-r5fdz" Aug 13 00:47:45.803117 kubelet[2711]: E0813 00:47:45.802994 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-29-69" Aug 13 00:47:45.803117 kubelet[2711]: E0813 00:47:45.803003 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-tzrbj" Aug 13 00:47:45.803117 kubelet[2711]: E0813 00:47:45.803010 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-29-69" Aug 13 00:47:45.803117 kubelet[2711]: E0813 00:47:45.803020 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-h9fdf" Aug 13 00:47:45.803117 kubelet[2711]: E0813 00:47:45.803027 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-29-69" Aug 13 00:47:45.803117 kubelet[2711]: I0813 00:47:45.803036 
2711 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 00:47:55.820758 kubelet[2711]: I0813 00:47:55.820717 2711 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:47:55.821691 kubelet[2711]: I0813 00:47:55.821366 2711 container_gc.go:88] "Attempting to delete unused containers" Aug 13 00:47:55.823211 kubelet[2711]: I0813 00:47:55.823171 2711 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:47:55.834511 kubelet[2711]: I0813 00:47:55.834467 2711 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:47:55.834675 kubelet[2711]: I0813 00:47:55.834593 2711 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-r5fdz","kube-system/cilium-h9fdf","kube-system/kube-controller-manager-172-234-29-69","kube-system/kube-proxy-tzrbj","kube-system/kube-apiserver-172-234-29-69","kube-system/kube-scheduler-172-234-29-69"] Aug 13 00:47:55.834675 kubelet[2711]: E0813 00:47:55.834636 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-r5fdz" Aug 13 00:47:55.834675 kubelet[2711]: E0813 00:47:55.834650 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-h9fdf" Aug 13 00:47:55.834675 kubelet[2711]: E0813 00:47:55.834661 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-29-69" Aug 13 00:47:55.834675 kubelet[2711]: E0813 00:47:55.834673 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-tzrbj" Aug 13 00:47:55.834798 kubelet[2711]: E0813 00:47:55.834685 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-29-69" Aug 13 00:47:55.834798 
kubelet[2711]: E0813 00:47:55.834696 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-29-69" Aug 13 00:47:55.834798 kubelet[2711]: I0813 00:47:55.834706 2711 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 00:48:05.847620 kubelet[2711]: I0813 00:48:05.847584 2711 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:48:05.847620 kubelet[2711]: I0813 00:48:05.847615 2711 container_gc.go:88] "Attempting to delete unused containers" Aug 13 00:48:05.849164 kubelet[2711]: I0813 00:48:05.849148 2711 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:48:05.858084 kubelet[2711]: I0813 00:48:05.858068 2711 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:48:05.858163 kubelet[2711]: I0813 00:48:05.858143 2711 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-r5fdz","kube-system/cilium-h9fdf","kube-system/kube-controller-manager-172-234-29-69","kube-system/kube-proxy-tzrbj","kube-system/kube-apiserver-172-234-29-69","kube-system/kube-scheduler-172-234-29-69"] Aug 13 00:48:05.858199 kubelet[2711]: E0813 00:48:05.858175 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-r5fdz" Aug 13 00:48:05.858199 kubelet[2711]: E0813 00:48:05.858185 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-h9fdf" Aug 13 00:48:05.858199 kubelet[2711]: E0813 00:48:05.858193 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-29-69" Aug 13 00:48:05.858199 kubelet[2711]: E0813 00:48:05.858200 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" 
pod="kube-system/kube-proxy-tzrbj" Aug 13 00:48:05.858314 kubelet[2711]: E0813 00:48:05.858208 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-29-69" Aug 13 00:48:05.858314 kubelet[2711]: E0813 00:48:05.858215 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-29-69" Aug 13 00:48:05.858314 kubelet[2711]: I0813 00:48:05.858224 2711 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 00:48:15.872647 kubelet[2711]: I0813 00:48:15.872597 2711 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:48:15.872647 kubelet[2711]: I0813 00:48:15.872635 2711 container_gc.go:88] "Attempting to delete unused containers" Aug 13 00:48:15.874319 kubelet[2711]: I0813 00:48:15.874285 2711 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:48:15.887908 kubelet[2711]: I0813 00:48:15.887893 2711 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:48:15.887990 kubelet[2711]: I0813 00:48:15.887968 2711 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-r5fdz","kube-system/cilium-h9fdf","kube-system/kube-controller-manager-172-234-29-69","kube-system/kube-proxy-tzrbj","kube-system/kube-apiserver-172-234-29-69","kube-system/kube-scheduler-172-234-29-69"] Aug 13 00:48:15.888020 kubelet[2711]: E0813 00:48:15.887999 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-r5fdz" Aug 13 00:48:15.888020 kubelet[2711]: E0813 00:48:15.888011 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-h9fdf" Aug 13 00:48:15.888020 kubelet[2711]: E0813 00:48:15.888019 2711 eviction_manager.go:598] "Eviction manager: 
cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-29-69" Aug 13 00:48:15.888113 kubelet[2711]: E0813 00:48:15.888026 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-tzrbj" Aug 13 00:48:15.888113 kubelet[2711]: E0813 00:48:15.888033 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-29-69" Aug 13 00:48:15.888113 kubelet[2711]: E0813 00:48:15.888040 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-29-69" Aug 13 00:48:15.888113 kubelet[2711]: I0813 00:48:15.888048 2711 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 00:48:25.904863 kubelet[2711]: I0813 00:48:25.904833 2711 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:48:25.904863 kubelet[2711]: I0813 00:48:25.904870 2711 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:48:25.905291 kubelet[2711]: I0813 00:48:25.904965 2711 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-r5fdz","kube-system/cilium-h9fdf","kube-system/kube-controller-manager-172-234-29-69","kube-system/kube-proxy-tzrbj","kube-system/kube-apiserver-172-234-29-69","kube-system/kube-scheduler-172-234-29-69"] Aug 13 00:48:25.905291 kubelet[2711]: E0813 00:48:25.904993 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-r5fdz" Aug 13 00:48:25.905291 kubelet[2711]: E0813 00:48:25.905003 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-h9fdf" Aug 13 00:48:25.905291 kubelet[2711]: E0813 00:48:25.905011 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" 
pod="kube-system/kube-controller-manager-172-234-29-69" Aug 13 00:48:25.905291 kubelet[2711]: E0813 00:48:25.905019 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-tzrbj" Aug 13 00:48:25.905291 kubelet[2711]: E0813 00:48:25.905026 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-29-69" Aug 13 00:48:25.905291 kubelet[2711]: E0813 00:48:25.905035 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-29-69" Aug 13 00:48:25.905291 kubelet[2711]: I0813 00:48:25.905043 2711 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 00:48:35.920520 kubelet[2711]: I0813 00:48:35.920481 2711 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:48:35.920520 kubelet[2711]: I0813 00:48:35.920521 2711 container_gc.go:88] "Attempting to delete unused containers" Aug 13 00:48:35.922887 kubelet[2711]: I0813 00:48:35.922873 2711 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:48:35.932548 kubelet[2711]: I0813 00:48:35.932520 2711 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:48:35.932628 kubelet[2711]: I0813 00:48:35.932609 2711 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-r5fdz","kube-system/cilium-h9fdf","kube-system/kube-controller-manager-172-234-29-69","kube-system/kube-proxy-tzrbj","kube-system/kube-apiserver-172-234-29-69","kube-system/kube-scheduler-172-234-29-69"] Aug 13 00:48:35.932659 kubelet[2711]: E0813 00:48:35.932640 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-r5fdz" Aug 13 00:48:35.932659 kubelet[2711]: E0813 00:48:35.932651 2711 
eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-h9fdf" Aug 13 00:48:35.932723 kubelet[2711]: E0813 00:48:35.932660 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-29-69" Aug 13 00:48:35.932723 kubelet[2711]: E0813 00:48:35.932668 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-tzrbj" Aug 13 00:48:35.932723 kubelet[2711]: E0813 00:48:35.932684 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-29-69" Aug 13 00:48:35.932723 kubelet[2711]: E0813 00:48:35.932691 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-29-69" Aug 13 00:48:35.932723 kubelet[2711]: I0813 00:48:35.932700 2711 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 00:48:40.655263 kubelet[2711]: E0813 00:48:40.655200 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:48:41.655088 kubelet[2711]: E0813 00:48:41.654409 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:48:45.949440 kubelet[2711]: I0813 00:48:45.949414 2711 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:48:45.950371 kubelet[2711]: I0813 00:48:45.949457 2711 container_gc.go:88] "Attempting to delete unused containers" Aug 13 00:48:45.951500 kubelet[2711]: I0813 00:48:45.951488 2711 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:48:45.960612 kubelet[2711]: I0813 00:48:45.960590 2711 
eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:48:45.960698 kubelet[2711]: I0813 00:48:45.960682 2711 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-r5fdz","kube-system/cilium-h9fdf","kube-system/kube-controller-manager-172-234-29-69","kube-system/kube-proxy-tzrbj","kube-system/kube-apiserver-172-234-29-69","kube-system/kube-scheduler-172-234-29-69"] Aug 13 00:48:45.960729 kubelet[2711]: E0813 00:48:45.960711 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-r5fdz" Aug 13 00:48:45.960729 kubelet[2711]: E0813 00:48:45.960722 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-h9fdf" Aug 13 00:48:45.960729 kubelet[2711]: E0813 00:48:45.960729 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-29-69" Aug 13 00:48:45.960816 kubelet[2711]: E0813 00:48:45.960737 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-tzrbj" Aug 13 00:48:45.960816 kubelet[2711]: E0813 00:48:45.960745 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-29-69" Aug 13 00:48:45.960816 kubelet[2711]: E0813 00:48:45.960751 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-29-69" Aug 13 00:48:45.960816 kubelet[2711]: I0813 00:48:45.960760 2711 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 00:48:51.656263 kubelet[2711]: E0813 00:48:51.655752 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 
00:48:55.655586 kubelet[2711]: E0813 00:48:55.655508 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:48:55.977911 kubelet[2711]: I0813 00:48:55.977588 2711 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:48:55.977911 kubelet[2711]: I0813 00:48:55.977645 2711 container_gc.go:88] "Attempting to delete unused containers" Aug 13 00:48:55.980156 kubelet[2711]: I0813 00:48:55.980102 2711 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:48:55.998340 kubelet[2711]: I0813 00:48:55.998293 2711 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:48:55.998426 kubelet[2711]: I0813 00:48:55.998413 2711 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-r5fdz","kube-system/cilium-h9fdf","kube-system/kube-controller-manager-172-234-29-69","kube-system/kube-proxy-tzrbj","kube-system/kube-apiserver-172-234-29-69","kube-system/kube-scheduler-172-234-29-69"] Aug 13 00:48:55.998478 kubelet[2711]: E0813 00:48:55.998453 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-r5fdz" Aug 13 00:48:55.998478 kubelet[2711]: E0813 00:48:55.998478 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-h9fdf" Aug 13 00:48:55.998566 kubelet[2711]: E0813 00:48:55.998487 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-29-69" Aug 13 00:48:55.998566 kubelet[2711]: E0813 00:48:55.998497 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-tzrbj" Aug 13 00:48:55.998566 kubelet[2711]: E0813 
00:48:55.998505 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-29-69"
Aug 13 00:48:55.998566 kubelet[2711]: E0813 00:48:55.998514 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-29-69"
Aug 13 00:48:55.998566 kubelet[2711]: I0813 00:48:55.998523 2711 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 00:49:02.655091 kubelet[2711]: E0813 00:49:02.654951 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:49:06.012730 kubelet[2711]: I0813 00:49:06.012692 2711 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 00:49:06.012730 kubelet[2711]: I0813 00:49:06.012727 2711 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 00:49:06.014975 kubelet[2711]: I0813 00:49:06.014913 2711 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 00:49:06.025950 kubelet[2711]: I0813 00:49:06.025776 2711 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 00:49:06.025950 kubelet[2711]: I0813 00:49:06.025853 2711 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-r5fdz","kube-system/cilium-h9fdf","kube-system/kube-proxy-tzrbj","kube-system/kube-controller-manager-172-234-29-69","kube-system/kube-apiserver-172-234-29-69","kube-system/kube-scheduler-172-234-29-69"]
Aug 13 00:49:06.025950 kubelet[2711]: E0813 00:49:06.025881 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-r5fdz"
Aug 13 00:49:06.025950 kubelet[2711]: E0813 00:49:06.025894 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-h9fdf"
Aug 13 00:49:06.025950 kubelet[2711]: E0813 00:49:06.025902 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-tzrbj"
Aug 13 00:49:06.025950 kubelet[2711]: E0813 00:49:06.025910 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-29-69"
Aug 13 00:49:06.025950 kubelet[2711]: E0813 00:49:06.025918 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-29-69"
Aug 13 00:49:06.025950 kubelet[2711]: E0813 00:49:06.025927 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-29-69"
Aug 13 00:49:06.025950 kubelet[2711]: I0813 00:49:06.025936 2711 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 00:49:09.655163 kubelet[2711]: E0813 00:49:09.654938 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:49:16.046055 kubelet[2711]: I0813 00:49:16.045987 2711 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 00:49:16.046055 kubelet[2711]: I0813 00:49:16.046034 2711 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 00:49:16.048998 kubelet[2711]: I0813 00:49:16.048966 2711 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 00:49:16.060869 kubelet[2711]: I0813 00:49:16.060846 2711 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 00:49:16.060973 kubelet[2711]: I0813 00:49:16.060941 2711 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-r5fdz","kube-system/cilium-h9fdf","kube-system/kube-controller-manager-172-234-29-69","kube-system/kube-proxy-tzrbj","kube-system/kube-apiserver-172-234-29-69","kube-system/kube-scheduler-172-234-29-69"]
Aug 13 00:49:16.061009 kubelet[2711]: E0813 00:49:16.060976 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-r5fdz"
Aug 13 00:49:16.061009 kubelet[2711]: E0813 00:49:16.060989 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-h9fdf"
Aug 13 00:49:16.061009 kubelet[2711]: E0813 00:49:16.060998 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-29-69"
Aug 13 00:49:16.061009 kubelet[2711]: E0813 00:49:16.061007 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-tzrbj"
Aug 13 00:49:16.061103 kubelet[2711]: E0813 00:49:16.061015 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-29-69"
Aug 13 00:49:16.061103 kubelet[2711]: E0813 00:49:16.061026 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-29-69"
Aug 13 00:49:16.061103 kubelet[2711]: I0813 00:49:16.061035 2711 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 00:49:24.158117 systemd[1]: Started sshd@7-172.234.29.69:22-147.75.109.163:58142.service - OpenSSH per-connection server daemon (147.75.109.163:58142).
Aug 13 00:49:24.493732 sshd[3815]: Accepted publickey for core from 147.75.109.163 port 58142 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:49:24.495566 sshd-session[3815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:49:24.502787 systemd-logind[1522]: New session 8 of user core.
Aug 13 00:49:24.506526 systemd[1]: Started session-8.scope - Session 8 of User core.
Aug 13 00:49:24.814506 sshd[3817]: Connection closed by 147.75.109.163 port 58142
Aug 13 00:49:24.815866 sshd-session[3815]: pam_unix(sshd:session): session closed for user core
Aug 13 00:49:24.821491 systemd[1]: sshd@7-172.234.29.69:22-147.75.109.163:58142.service: Deactivated successfully.
Aug 13 00:49:24.824110 systemd[1]: session-8.scope: Deactivated successfully.
Aug 13 00:49:24.825398 systemd-logind[1522]: Session 8 logged out. Waiting for processes to exit.
Aug 13 00:49:24.827771 systemd-logind[1522]: Removed session 8.
Aug 13 00:49:26.079476 kubelet[2711]: I0813 00:49:26.079431 2711 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 00:49:26.079476 kubelet[2711]: I0813 00:49:26.079492 2711 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 00:49:26.080361 kubelet[2711]: I0813 00:49:26.079592 2711 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-r5fdz","kube-system/cilium-h9fdf","kube-system/kube-controller-manager-172-234-29-69","kube-system/kube-proxy-tzrbj","kube-system/kube-apiserver-172-234-29-69","kube-system/kube-scheduler-172-234-29-69"]
Aug 13 00:49:26.080361 kubelet[2711]: E0813 00:49:26.079634 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-r5fdz"
Aug 13 00:49:26.080361 kubelet[2711]: E0813 00:49:26.079648 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-h9fdf"
Aug 13 00:49:26.080361 kubelet[2711]: E0813 00:49:26.079658 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-29-69"
Aug 13 00:49:26.080361 kubelet[2711]: E0813 00:49:26.079667 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-tzrbj"
Aug 13 00:49:26.080361 kubelet[2711]: E0813 00:49:26.079676 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-29-69"
Aug 13 00:49:26.080361 kubelet[2711]: E0813 00:49:26.079685 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-29-69"
Aug 13 00:49:26.080361 kubelet[2711]: I0813 00:49:26.079694 2711 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 00:49:29.875738 systemd[1]: Started sshd@8-172.234.29.69:22-147.75.109.163:55466.service - OpenSSH per-connection server daemon (147.75.109.163:55466).
Aug 13 00:49:30.201765 sshd[3831]: Accepted publickey for core from 147.75.109.163 port 55466 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:49:30.205894 sshd-session[3831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:49:30.214907 systemd-logind[1522]: New session 9 of user core.
Aug 13 00:49:30.219427 systemd[1]: Started session-9.scope - Session 9 of User core.
Aug 13 00:49:30.514132 sshd[3833]: Connection closed by 147.75.109.163 port 55466
Aug 13 00:49:30.514887 sshd-session[3831]: pam_unix(sshd:session): session closed for user core
Aug 13 00:49:30.524733 systemd[1]: sshd@8-172.234.29.69:22-147.75.109.163:55466.service: Deactivated successfully.
Aug 13 00:49:30.529616 systemd[1]: session-9.scope: Deactivated successfully.
Aug 13 00:49:30.532389 systemd-logind[1522]: Session 9 logged out. Waiting for processes to exit.
Aug 13 00:49:30.535160 systemd-logind[1522]: Removed session 9.
Aug 13 00:49:35.593119 systemd[1]: Started sshd@9-172.234.29.69:22-147.75.109.163:55468.service - OpenSSH per-connection server daemon (147.75.109.163:55468).
Aug 13 00:49:35.946584 sshd[3847]: Accepted publickey for core from 147.75.109.163 port 55468 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:49:35.948284 sshd-session[3847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:49:35.955007 systemd-logind[1522]: New session 10 of user core.
Aug 13 00:49:35.958519 systemd[1]: Started session-10.scope - Session 10 of User core.
Aug 13 00:49:36.095239 kubelet[2711]: I0813 00:49:36.095181 2711 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 00:49:36.095239 kubelet[2711]: I0813 00:49:36.095219 2711 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 00:49:36.098818 kubelet[2711]: I0813 00:49:36.098800 2711 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 00:49:36.106312 kubelet[2711]: I0813 00:49:36.104945 2711 image_gc_manager.go:487] "Removing image to free bytes" imageID="sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" size=18562039 runtimeHandler=""
Aug 13 00:49:36.107008 containerd[1542]: time="2025-08-13T00:49:36.106976421Z" level=info msg="RemoveImage \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Aug 13 00:49:36.108458 containerd[1542]: time="2025-08-13T00:49:36.108436576Z" level=info msg="ImageDelete event name:\"registry.k8s.io/coredns/coredns:v1.11.3\""
Aug 13 00:49:36.109123 containerd[1542]: time="2025-08-13T00:49:36.109093464Z" level=info msg="ImageDelete event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\""
Aug 13 00:49:36.109647 containerd[1542]: time="2025-08-13T00:49:36.109616302Z" level=info msg="RemoveImage \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" returns successfully"
Aug 13 00:49:36.109891 containerd[1542]: time="2025-08-13T00:49:36.109690331Z" level=info msg="ImageDelete event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Aug 13 00:49:36.109933 kubelet[2711]: I0813 00:49:36.109825 2711 image_gc_manager.go:487] "Removing image to free bytes" imageID="sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" size=56909194 runtimeHandler=""
Aug 13 00:49:36.109985 containerd[1542]: time="2025-08-13T00:49:36.109957381Z" level=info msg="RemoveImage \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Aug 13 00:49:36.110760 containerd[1542]: time="2025-08-13T00:49:36.110731538Z" level=info msg="ImageDelete event name:\"registry.k8s.io/etcd:3.5.15-0\""
Aug 13 00:49:36.111215 containerd[1542]: time="2025-08-13T00:49:36.111198917Z" level=info msg="ImageDelete event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\""
Aug 13 00:49:36.111632 containerd[1542]: time="2025-08-13T00:49:36.111609595Z" level=info msg="RemoveImage \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" returns successfully"
Aug 13 00:49:36.111754 containerd[1542]: time="2025-08-13T00:49:36.111673085Z" level=info msg="ImageDelete event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Aug 13 00:49:36.126746 kubelet[2711]: I0813 00:49:36.126717 2711 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 00:49:36.126837 kubelet[2711]: I0813 00:49:36.126809 2711 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-r5fdz","kube-system/cilium-h9fdf","kube-system/kube-proxy-tzrbj","kube-system/kube-controller-manager-172-234-29-69","kube-system/kube-apiserver-172-234-29-69","kube-system/kube-scheduler-172-234-29-69"]
Aug 13 00:49:36.126870 kubelet[2711]: E0813 00:49:36.126845 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-r5fdz"
Aug 13 00:49:36.126870 kubelet[2711]: E0813 00:49:36.126856 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-h9fdf"
Aug 13 00:49:36.126870 kubelet[2711]: E0813 00:49:36.126866 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-tzrbj"
Aug 13 00:49:36.126938 kubelet[2711]: E0813 00:49:36.126875 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-29-69"
Aug 13 00:49:36.126938 kubelet[2711]: E0813 00:49:36.126884 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-29-69"
Aug 13 00:49:36.126938 kubelet[2711]: E0813 00:49:36.126892 2711 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-29-69"
Aug 13 00:49:36.126938 kubelet[2711]: I0813 00:49:36.126900 2711 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 00:49:36.275390 sshd[3849]: Connection closed by 147.75.109.163 port 55468
Aug 13 00:49:36.275551 sshd-session[3847]: pam_unix(sshd:session): session closed for user core
Aug 13 00:49:36.282195 systemd-logind[1522]: Session 10 logged out. Waiting for processes to exit.
Aug 13 00:49:36.283604 systemd[1]: sshd@9-172.234.29.69:22-147.75.109.163:55468.service: Deactivated successfully.
Aug 13 00:49:36.286678 systemd[1]: session-10.scope: Deactivated successfully.
Aug 13 00:49:36.290209 systemd-logind[1522]: Removed session 10.
Aug 13 00:49:41.341230 systemd[1]: Started sshd@10-172.234.29.69:22-147.75.109.163:44512.service - OpenSSH per-connection server daemon (147.75.109.163:44512).
Aug 13 00:49:41.692330 sshd[3862]: Accepted publickey for core from 147.75.109.163 port 44512 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:49:41.694292 sshd-session[3862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:49:41.700763 systemd-logind[1522]: New session 11 of user core.
Aug 13 00:49:41.705451 systemd[1]: Started session-11.scope - Session 11 of User core.
Aug 13 00:49:42.013633 sshd[3864]: Connection closed by 147.75.109.163 port 44512
Aug 13 00:49:42.014870 sshd-session[3862]: pam_unix(sshd:session): session closed for user core
Aug 13 00:49:42.021027 systemd-logind[1522]: Session 11 logged out. Waiting for processes to exit.
Aug 13 00:49:42.021769 systemd[1]: sshd@10-172.234.29.69:22-147.75.109.163:44512.service: Deactivated successfully.
Aug 13 00:49:42.024391 systemd[1]: session-11.scope: Deactivated successfully.
Aug 13 00:49:42.026774 systemd-logind[1522]: Removed session 11.
Aug 13 00:49:42.075000 systemd[1]: Started sshd@11-172.234.29.69:22-147.75.109.163:44516.service - OpenSSH per-connection server daemon (147.75.109.163:44516).
Aug 13 00:49:42.415875 sshd[3877]: Accepted publickey for core from 147.75.109.163 port 44516 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:49:42.417801 sshd-session[3877]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:49:42.425139 systemd-logind[1522]: New session 12 of user core.
Aug 13 00:49:42.435479 systemd[1]: Started session-12.scope - Session 12 of User core.
Aug 13 00:49:42.758663 sshd[3879]: Connection closed by 147.75.109.163 port 44516
Aug 13 00:49:42.759422 sshd-session[3877]: pam_unix(sshd:session): session closed for user core
Aug 13 00:49:42.765714 systemd-logind[1522]: Session 12 logged out. Waiting for processes to exit.
Aug 13 00:49:42.766758 systemd[1]: sshd@11-172.234.29.69:22-147.75.109.163:44516.service: Deactivated successfully.
Aug 13 00:49:42.769137 systemd[1]: session-12.scope: Deactivated successfully.
Aug 13 00:49:42.771281 systemd-logind[1522]: Removed session 12.
Aug 13 00:49:42.826690 systemd[1]: Started sshd@12-172.234.29.69:22-147.75.109.163:44532.service - OpenSSH per-connection server daemon (147.75.109.163:44532).
Aug 13 00:49:43.174873 sshd[3889]: Accepted publickey for core from 147.75.109.163 port 44532 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:49:43.176835 sshd-session[3889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:49:43.183384 systemd-logind[1522]: New session 13 of user core.
Aug 13 00:49:43.190423 systemd[1]: Started session-13.scope - Session 13 of User core.
Aug 13 00:49:43.491729 sshd[3891]: Connection closed by 147.75.109.163 port 44532
Aug 13 00:49:43.492480 sshd-session[3889]: pam_unix(sshd:session): session closed for user core
Aug 13 00:49:43.499156 systemd-logind[1522]: Session 13 logged out. Waiting for processes to exit.
Aug 13 00:49:43.499662 systemd[1]: sshd@12-172.234.29.69:22-147.75.109.163:44532.service: Deactivated successfully.
Aug 13 00:49:43.502166 systemd[1]: session-13.scope: Deactivated successfully.
Aug 13 00:49:43.504526 systemd-logind[1522]: Removed session 13.
Aug 13 00:49:45.655041 kubelet[2711]: E0813 00:49:45.654605 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:49:48.552478 systemd[1]: Started sshd@13-172.234.29.69:22-147.75.109.163:49836.service - OpenSSH per-connection server daemon (147.75.109.163:49836).
Aug 13 00:49:48.892350 sshd[3911]: Accepted publickey for core from 147.75.109.163 port 49836 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:49:48.894121 sshd-session[3911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:49:48.899271 systemd-logind[1522]: New session 14 of user core.
Aug 13 00:49:48.906462 systemd[1]: Started session-14.scope - Session 14 of User core.
Aug 13 00:49:49.194276 sshd[3913]: Connection closed by 147.75.109.163 port 49836
Aug 13 00:49:49.194959 sshd-session[3911]: pam_unix(sshd:session): session closed for user core
Aug 13 00:49:49.199890 systemd[1]: sshd@13-172.234.29.69:22-147.75.109.163:49836.service: Deactivated successfully.
Aug 13 00:49:49.206459 systemd[1]: session-14.scope: Deactivated successfully.
Aug 13 00:49:49.207838 systemd-logind[1522]: Session 14 logged out. Waiting for processes to exit.
Aug 13 00:49:49.210063 systemd-logind[1522]: Removed session 14.
Aug 13 00:49:54.263652 systemd[1]: Started sshd@14-172.234.29.69:22-147.75.109.163:49842.service - OpenSSH per-connection server daemon (147.75.109.163:49842).
Aug 13 00:49:54.615779 sshd[3925]: Accepted publickey for core from 147.75.109.163 port 49842 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:49:54.617868 sshd-session[3925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:49:54.623369 systemd-logind[1522]: New session 15 of user core.
Aug 13 00:49:54.628459 systemd[1]: Started session-15.scope - Session 15 of User core.
Aug 13 00:49:54.925002 sshd[3927]: Connection closed by 147.75.109.163 port 49842
Aug 13 00:49:54.925818 sshd-session[3925]: pam_unix(sshd:session): session closed for user core
Aug 13 00:49:54.931157 systemd[1]: sshd@14-172.234.29.69:22-147.75.109.163:49842.service: Deactivated successfully.
Aug 13 00:49:54.933996 systemd[1]: session-15.scope: Deactivated successfully.
Aug 13 00:49:54.935483 systemd-logind[1522]: Session 15 logged out. Waiting for processes to exit.
Aug 13 00:49:54.937374 systemd-logind[1522]: Removed session 15.
Aug 13 00:49:54.988184 systemd[1]: Started sshd@15-172.234.29.69:22-147.75.109.163:49846.service - OpenSSH per-connection server daemon (147.75.109.163:49846).
Aug 13 00:49:55.334073 sshd[3938]: Accepted publickey for core from 147.75.109.163 port 49846 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:49:55.336254 sshd-session[3938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:49:55.345646 systemd-logind[1522]: New session 16 of user core.
Aug 13 00:49:55.352549 systemd[1]: Started session-16.scope - Session 16 of User core.
Aug 13 00:49:55.679889 sshd[3940]: Connection closed by 147.75.109.163 port 49846
Aug 13 00:49:55.680525 sshd-session[3938]: pam_unix(sshd:session): session closed for user core
Aug 13 00:49:55.685987 systemd-logind[1522]: Session 16 logged out. Waiting for processes to exit.
Aug 13 00:49:55.687209 systemd[1]: sshd@15-172.234.29.69:22-147.75.109.163:49846.service: Deactivated successfully.
Aug 13 00:49:55.689957 systemd[1]: session-16.scope: Deactivated successfully.
Aug 13 00:49:55.691777 systemd-logind[1522]: Removed session 16.
Aug 13 00:49:55.736257 systemd[1]: Started sshd@16-172.234.29.69:22-147.75.109.163:49850.service - OpenSSH per-connection server daemon (147.75.109.163:49850).
Aug 13 00:49:56.078061 sshd[3950]: Accepted publickey for core from 147.75.109.163 port 49850 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:49:56.080029 sshd-session[3950]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:49:56.085685 systemd-logind[1522]: New session 17 of user core.
Aug 13 00:49:56.097443 systemd[1]: Started session-17.scope - Session 17 of User core.
Aug 13 00:49:57.470373 sshd[3952]: Connection closed by 147.75.109.163 port 49850
Aug 13 00:49:57.471168 sshd-session[3950]: pam_unix(sshd:session): session closed for user core
Aug 13 00:49:57.476232 systemd[1]: sshd@16-172.234.29.69:22-147.75.109.163:49850.service: Deactivated successfully.
Aug 13 00:49:57.479196 systemd[1]: session-17.scope: Deactivated successfully.
Aug 13 00:49:57.479453 systemd[1]: session-17.scope: Consumed 476ms CPU time, 64.3M memory peak.
Aug 13 00:49:57.480407 systemd-logind[1522]: Session 17 logged out. Waiting for processes to exit.
Aug 13 00:49:57.482326 systemd-logind[1522]: Removed session 17.
Aug 13 00:49:57.545993 systemd[1]: Started sshd@17-172.234.29.69:22-147.75.109.163:49858.service - OpenSSH per-connection server daemon (147.75.109.163:49858).
Aug 13 00:49:57.897130 sshd[3969]: Accepted publickey for core from 147.75.109.163 port 49858 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:49:57.899058 sshd-session[3969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:49:57.904150 systemd-logind[1522]: New session 18 of user core.
Aug 13 00:49:57.910453 systemd[1]: Started session-18.scope - Session 18 of User core.
Aug 13 00:49:58.321186 sshd[3971]: Connection closed by 147.75.109.163 port 49858
Aug 13 00:49:58.321832 sshd-session[3969]: pam_unix(sshd:session): session closed for user core
Aug 13 00:49:58.326487 systemd-logind[1522]: Session 18 logged out. Waiting for processes to exit.
Aug 13 00:49:58.327390 systemd[1]: sshd@17-172.234.29.69:22-147.75.109.163:49858.service: Deactivated successfully.
Aug 13 00:49:58.329282 systemd[1]: session-18.scope: Deactivated successfully.
Aug 13 00:49:58.331787 systemd-logind[1522]: Removed session 18.
Aug 13 00:49:58.387418 systemd[1]: Started sshd@18-172.234.29.69:22-147.75.109.163:38598.service - OpenSSH per-connection server daemon (147.75.109.163:38598).
Aug 13 00:49:58.726861 sshd[3981]: Accepted publickey for core from 147.75.109.163 port 38598 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:49:58.728954 sshd-session[3981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:49:58.734353 systemd-logind[1522]: New session 19 of user core.
Aug 13 00:49:58.744460 systemd[1]: Started session-19.scope - Session 19 of User core.
Aug 13 00:49:59.028806 sshd[3983]: Connection closed by 147.75.109.163 port 38598
Aug 13 00:49:59.029790 sshd-session[3981]: pam_unix(sshd:session): session closed for user core
Aug 13 00:49:59.034711 systemd-logind[1522]: Session 19 logged out. Waiting for processes to exit.
Aug 13 00:49:59.035362 systemd[1]: sshd@18-172.234.29.69:22-147.75.109.163:38598.service: Deactivated successfully.
Aug 13 00:49:59.038606 systemd[1]: session-19.scope: Deactivated successfully.
Aug 13 00:49:59.040501 systemd-logind[1522]: Removed session 19.
Aug 13 00:50:00.654645 kubelet[2711]: E0813 00:50:00.654606 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:50:03.655352 kubelet[2711]: E0813 00:50:03.655015 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:50:04.093754 systemd[1]: Started sshd@19-172.234.29.69:22-147.75.109.163:38614.service - OpenSSH per-connection server daemon (147.75.109.163:38614).
Aug 13 00:50:04.439158 sshd[4000]: Accepted publickey for core from 147.75.109.163 port 38614 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:50:04.440889 sshd-session[4000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:50:04.445173 systemd-logind[1522]: New session 20 of user core.
Aug 13 00:50:04.450573 systemd[1]: Started session-20.scope - Session 20 of User core.
Aug 13 00:50:04.744419 sshd[4002]: Connection closed by 147.75.109.163 port 38614
Aug 13 00:50:04.745409 sshd-session[4000]: pam_unix(sshd:session): session closed for user core
Aug 13 00:50:04.749128 systemd[1]: sshd@19-172.234.29.69:22-147.75.109.163:38614.service: Deactivated successfully.
Aug 13 00:50:04.751787 systemd[1]: session-20.scope: Deactivated successfully.
Aug 13 00:50:04.754447 systemd-logind[1522]: Session 20 logged out. Waiting for processes to exit.
Aug 13 00:50:04.755907 systemd-logind[1522]: Removed session 20.
Aug 13 00:50:09.805670 systemd[1]: Started sshd@20-172.234.29.69:22-147.75.109.163:50844.service - OpenSSH per-connection server daemon (147.75.109.163:50844).
Aug 13 00:50:10.148238 sshd[4021]: Accepted publickey for core from 147.75.109.163 port 50844 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:50:10.148741 sshd-session[4021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:50:10.154117 systemd-logind[1522]: New session 21 of user core.
Aug 13 00:50:10.161470 systemd[1]: Started session-21.scope - Session 21 of User core.
Aug 13 00:50:10.449159 sshd[4023]: Connection closed by 147.75.109.163 port 50844
Aug 13 00:50:10.449754 sshd-session[4021]: pam_unix(sshd:session): session closed for user core
Aug 13 00:50:10.454099 systemd-logind[1522]: Session 21 logged out. Waiting for processes to exit.
Aug 13 00:50:10.454720 systemd[1]: sshd@20-172.234.29.69:22-147.75.109.163:50844.service: Deactivated successfully.
Aug 13 00:50:10.456940 systemd[1]: session-21.scope: Deactivated successfully.
Aug 13 00:50:10.459107 systemd-logind[1522]: Removed session 21.
Aug 13 00:50:13.655370 kubelet[2711]: E0813 00:50:13.655099 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:50:15.521421 systemd[1]: Started sshd@21-172.234.29.69:22-147.75.109.163:50854.service - OpenSSH per-connection server daemon (147.75.109.163:50854).
Aug 13 00:50:15.862606 sshd[4035]: Accepted publickey for core from 147.75.109.163 port 50854 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:50:15.863998 sshd-session[4035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:50:15.869279 systemd-logind[1522]: New session 22 of user core.
Aug 13 00:50:15.873437 systemd[1]: Started session-22.scope - Session 22 of User core.
Aug 13 00:50:16.161131 sshd[4037]: Connection closed by 147.75.109.163 port 50854
Aug 13 00:50:16.161875 sshd-session[4035]: pam_unix(sshd:session): session closed for user core
Aug 13 00:50:16.167685 systemd-logind[1522]: Session 22 logged out. Waiting for processes to exit.
Aug 13 00:50:16.168502 systemd[1]: sshd@21-172.234.29.69:22-147.75.109.163:50854.service: Deactivated successfully.
Aug 13 00:50:16.170403 systemd[1]: session-22.scope: Deactivated successfully.
Aug 13 00:50:16.171897 systemd-logind[1522]: Removed session 22.
Aug 13 00:50:19.655346 kubelet[2711]: E0813 00:50:19.654954 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:50:21.222135 systemd[1]: Started sshd@22-172.234.29.69:22-147.75.109.163:49732.service - OpenSSH per-connection server daemon (147.75.109.163:49732).
Aug 13 00:50:21.550345 sshd[4049]: Accepted publickey for core from 147.75.109.163 port 49732 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:50:21.551939 sshd-session[4049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:50:21.557457 systemd-logind[1522]: New session 23 of user core.
Aug 13 00:50:21.561421 systemd[1]: Started session-23.scope - Session 23 of User core.
Aug 13 00:50:21.846137 sshd[4052]: Connection closed by 147.75.109.163 port 49732
Aug 13 00:50:21.846894 sshd-session[4049]: pam_unix(sshd:session): session closed for user core
Aug 13 00:50:21.850777 systemd[1]: sshd@22-172.234.29.69:22-147.75.109.163:49732.service: Deactivated successfully.
Aug 13 00:50:21.853059 systemd[1]: session-23.scope: Deactivated successfully.
Aug 13 00:50:21.854549 systemd-logind[1522]: Session 23 logged out. Waiting for processes to exit.
Aug 13 00:50:21.856160 systemd-logind[1522]: Removed session 23.
Aug 13 00:50:26.914528 systemd[1]: Started sshd@23-172.234.29.69:22-147.75.109.163:49748.service - OpenSSH per-connection server daemon (147.75.109.163:49748).
Aug 13 00:50:27.254060 sshd[4066]: Accepted publickey for core from 147.75.109.163 port 49748 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:50:27.255499 sshd-session[4066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:50:27.260994 systemd-logind[1522]: New session 24 of user core.
Aug 13 00:50:27.271432 systemd[1]: Started session-24.scope - Session 24 of User core.
Aug 13 00:50:27.559449 sshd[4068]: Connection closed by 147.75.109.163 port 49748
Aug 13 00:50:27.560276 sshd-session[4066]: pam_unix(sshd:session): session closed for user core
Aug 13 00:50:27.565461 systemd[1]: sshd@23-172.234.29.69:22-147.75.109.163:49748.service: Deactivated successfully.
Aug 13 00:50:27.568357 systemd[1]: session-24.scope: Deactivated successfully.
Aug 13 00:50:27.569369 systemd-logind[1522]: Session 24 logged out. Waiting for processes to exit.
Aug 13 00:50:27.574014 systemd-logind[1522]: Removed session 24.
Aug 13 00:50:29.655359 kubelet[2711]: E0813 00:50:29.654787 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:50:32.630746 systemd[1]: Started sshd@24-172.234.29.69:22-147.75.109.163:51634.service - OpenSSH per-connection server daemon (147.75.109.163:51634).
Aug 13 00:50:32.970908 sshd[4082]: Accepted publickey for core from 147.75.109.163 port 51634 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:50:32.972686 sshd-session[4082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:50:32.977484 systemd-logind[1522]: New session 25 of user core.
Aug 13 00:50:32.982424 systemd[1]: Started session-25.scope - Session 25 of User core.
Aug 13 00:50:33.277858 sshd[4084]: Connection closed by 147.75.109.163 port 51634 Aug 13 00:50:33.278646 sshd-session[4082]: pam_unix(sshd:session): session closed for user core Aug 13 00:50:33.283862 systemd[1]: sshd@24-172.234.29.69:22-147.75.109.163:51634.service: Deactivated successfully. Aug 13 00:50:33.286407 systemd[1]: session-25.scope: Deactivated successfully. Aug 13 00:50:33.287189 systemd-logind[1522]: Session 25 logged out. Waiting for processes to exit. Aug 13 00:50:33.289245 systemd-logind[1522]: Removed session 25. Aug 13 00:50:38.351638 systemd[1]: Started sshd@25-172.234.29.69:22-147.75.109.163:49586.service - OpenSSH per-connection server daemon (147.75.109.163:49586). Aug 13 00:50:38.693328 sshd[4096]: Accepted publickey for core from 147.75.109.163 port 49586 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:50:38.694986 sshd-session[4096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:50:38.700550 systemd-logind[1522]: New session 26 of user core. Aug 13 00:50:38.705504 systemd[1]: Started session-26.scope - Session 26 of User core. Aug 13 00:50:38.992743 sshd[4098]: Connection closed by 147.75.109.163 port 49586 Aug 13 00:50:38.993649 sshd-session[4096]: pam_unix(sshd:session): session closed for user core Aug 13 00:50:38.997664 systemd-logind[1522]: Session 26 logged out. Waiting for processes to exit. Aug 13 00:50:38.998431 systemd[1]: sshd@25-172.234.29.69:22-147.75.109.163:49586.service: Deactivated successfully. Aug 13 00:50:39.000243 systemd[1]: session-26.scope: Deactivated successfully. Aug 13 00:50:39.002450 systemd-logind[1522]: Removed session 26. Aug 13 00:50:44.063448 systemd[1]: Started sshd@26-172.234.29.69:22-147.75.109.163:49598.service - OpenSSH per-connection server daemon (147.75.109.163:49598). 
Aug 13 00:50:44.413338 sshd[4109]: Accepted publickey for core from 147.75.109.163 port 49598 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:50:44.414281 sshd-session[4109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:50:44.418976 systemd-logind[1522]: New session 27 of user core.
Aug 13 00:50:44.431436 systemd[1]: Started session-27.scope - Session 27 of User core.
Aug 13 00:50:44.712365 sshd[4111]: Connection closed by 147.75.109.163 port 49598
Aug 13 00:50:44.713073 sshd-session[4109]: pam_unix(sshd:session): session closed for user core
Aug 13 00:50:44.716343 systemd[1]: sshd@26-172.234.29.69:22-147.75.109.163:49598.service: Deactivated successfully.
Aug 13 00:50:44.718227 systemd[1]: session-27.scope: Deactivated successfully.
Aug 13 00:50:44.720074 systemd-logind[1522]: Session 27 logged out. Waiting for processes to exit.
Aug 13 00:50:44.725197 systemd-logind[1522]: Removed session 27.
Aug 13 00:50:48.654224 kubelet[2711]: E0813 00:50:48.654136 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:50:49.770100 systemd[1]: Started sshd@27-172.234.29.69:22-147.75.109.163:48602.service - OpenSSH per-connection server daemon (147.75.109.163:48602).
Aug 13 00:50:50.096971 sshd[4123]: Accepted publickey for core from 147.75.109.163 port 48602 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:50:50.098318 sshd-session[4123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:50:50.103292 systemd-logind[1522]: New session 28 of user core.
Aug 13 00:50:50.108413 systemd[1]: Started session-28.scope - Session 28 of User core.
Aug 13 00:50:50.390462 sshd[4125]: Connection closed by 147.75.109.163 port 48602
Aug 13 00:50:50.391155 sshd-session[4123]: pam_unix(sshd:session): session closed for user core
Aug 13 00:50:50.394909 systemd-logind[1522]: Session 28 logged out. Waiting for processes to exit.
Aug 13 00:50:50.395872 systemd[1]: sshd@27-172.234.29.69:22-147.75.109.163:48602.service: Deactivated successfully.
Aug 13 00:50:50.397886 systemd[1]: session-28.scope: Deactivated successfully.
Aug 13 00:50:50.399325 systemd-logind[1522]: Removed session 28.
Aug 13 00:50:55.452589 systemd[1]: Started sshd@28-172.234.29.69:22-147.75.109.163:48616.service - OpenSSH per-connection server daemon (147.75.109.163:48616).
Aug 13 00:50:55.785149 sshd[4137]: Accepted publickey for core from 147.75.109.163 port 48616 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:50:55.786646 sshd-session[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:50:55.794516 systemd-logind[1522]: New session 29 of user core.
Aug 13 00:50:55.798896 systemd[1]: Started session-29.scope - Session 29 of User core.
Aug 13 00:50:56.075036 sshd[4141]: Connection closed by 147.75.109.163 port 48616
Aug 13 00:50:56.075761 sshd-session[4137]: pam_unix(sshd:session): session closed for user core
Aug 13 00:50:56.079842 systemd[1]: sshd@28-172.234.29.69:22-147.75.109.163:48616.service: Deactivated successfully.
Aug 13 00:50:56.082228 systemd[1]: session-29.scope: Deactivated successfully.
Aug 13 00:50:56.083560 systemd-logind[1522]: Session 29 logged out. Waiting for processes to exit.
Aug 13 00:50:56.085155 systemd-logind[1522]: Removed session 29.
Aug 13 00:51:01.140982 systemd[1]: Started sshd@29-172.234.29.69:22-147.75.109.163:34834.service - OpenSSH per-connection server daemon (147.75.109.163:34834).
Aug 13 00:51:01.492586 sshd[4153]: Accepted publickey for core from 147.75.109.163 port 34834 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:51:01.494083 sshd-session[4153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:51:01.499773 systemd-logind[1522]: New session 30 of user core.
Aug 13 00:51:01.509488 systemd[1]: Started session-30.scope - Session 30 of User core.
Aug 13 00:51:01.816115 sshd[4158]: Connection closed by 147.75.109.163 port 34834
Aug 13 00:51:01.816514 sshd-session[4153]: pam_unix(sshd:session): session closed for user core
Aug 13 00:51:01.822112 systemd[1]: sshd@29-172.234.29.69:22-147.75.109.163:34834.service: Deactivated successfully.
Aug 13 00:51:01.824867 systemd[1]: session-30.scope: Deactivated successfully.
Aug 13 00:51:01.825949 systemd-logind[1522]: Session 30 logged out. Waiting for processes to exit.
Aug 13 00:51:01.827690 systemd-logind[1522]: Removed session 30.
Aug 13 00:51:06.877635 systemd[1]: Started sshd@30-172.234.29.69:22-147.75.109.163:34844.service - OpenSSH per-connection server daemon (147.75.109.163:34844).
Aug 13 00:51:07.212182 sshd[4170]: Accepted publickey for core from 147.75.109.163 port 34844 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:51:07.213770 sshd-session[4170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:51:07.219651 systemd-logind[1522]: New session 31 of user core.
Aug 13 00:51:07.223473 systemd[1]: Started session-31.scope - Session 31 of User core.
Aug 13 00:51:07.509664 sshd[4172]: Connection closed by 147.75.109.163 port 34844
Aug 13 00:51:07.510211 sshd-session[4170]: pam_unix(sshd:session): session closed for user core
Aug 13 00:51:07.515049 systemd[1]: sshd@30-172.234.29.69:22-147.75.109.163:34844.service: Deactivated successfully.
Aug 13 00:51:07.517376 systemd[1]: session-31.scope: Deactivated successfully.
Aug 13 00:51:07.518568 systemd-logind[1522]: Session 31 logged out. Waiting for processes to exit.
Aug 13 00:51:07.519983 systemd-logind[1522]: Removed session 31.
Aug 13 00:51:12.578370 systemd[1]: Started sshd@31-172.234.29.69:22-147.75.109.163:57334.service - OpenSSH per-connection server daemon (147.75.109.163:57334).
Aug 13 00:51:12.921060 sshd[4184]: Accepted publickey for core from 147.75.109.163 port 57334 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:51:12.922442 sshd-session[4184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:51:12.927205 systemd-logind[1522]: New session 32 of user core.
Aug 13 00:51:12.939424 systemd[1]: Started session-32.scope - Session 32 of User core.
Aug 13 00:51:13.221022 sshd[4186]: Connection closed by 147.75.109.163 port 57334
Aug 13 00:51:13.222466 sshd-session[4184]: pam_unix(sshd:session): session closed for user core
Aug 13 00:51:13.225989 systemd-logind[1522]: Session 32 logged out. Waiting for processes to exit.
Aug 13 00:51:13.226744 systemd[1]: sshd@31-172.234.29.69:22-147.75.109.163:57334.service: Deactivated successfully.
Aug 13 00:51:13.228757 systemd[1]: session-32.scope: Deactivated successfully.
Aug 13 00:51:13.230407 systemd-logind[1522]: Removed session 32.
Aug 13 00:51:14.654007 kubelet[2711]: E0813 00:51:14.653973 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:51:18.280589 systemd[1]: Started sshd@32-172.234.29.69:22-147.75.109.163:51260.service - OpenSSH per-connection server daemon (147.75.109.163:51260).
Aug 13 00:51:18.619939 sshd[4198]: Accepted publickey for core from 147.75.109.163 port 51260 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:51:18.621400 sshd-session[4198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:51:18.626547 systemd-logind[1522]: New session 33 of user core.
Aug 13 00:51:18.629426 systemd[1]: Started session-33.scope - Session 33 of User core.
Aug 13 00:51:18.918942 sshd[4200]: Connection closed by 147.75.109.163 port 51260
Aug 13 00:51:18.919633 sshd-session[4198]: pam_unix(sshd:session): session closed for user core
Aug 13 00:51:18.925087 systemd-logind[1522]: Session 33 logged out. Waiting for processes to exit.
Aug 13 00:51:18.925915 systemd[1]: sshd@32-172.234.29.69:22-147.75.109.163:51260.service: Deactivated successfully.
Aug 13 00:51:18.928626 systemd[1]: session-33.scope: Deactivated successfully.
Aug 13 00:51:18.930127 systemd-logind[1522]: Removed session 33.
Aug 13 00:51:20.002871 update_engine[1523]: I20250813 00:51:20.002811 1523 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Aug 13 00:51:20.002871 update_engine[1523]: I20250813 00:51:20.002864 1523 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Aug 13 00:51:20.003369 update_engine[1523]: I20250813 00:51:20.003047 1523 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Aug 13 00:51:20.003631 update_engine[1523]: I20250813 00:51:20.003607 1523 omaha_request_params.cc:62] Current group set to beta
Aug 13 00:51:20.004325 update_engine[1523]: I20250813 00:51:20.003698 1523 update_attempter.cc:499] Already updated boot flags. Skipping.
Aug 13 00:51:20.004325 update_engine[1523]: I20250813 00:51:20.003711 1523 update_attempter.cc:643] Scheduling an action processor start.
Aug 13 00:51:20.004325 update_engine[1523]: I20250813 00:51:20.003727 1523 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Aug 13 00:51:20.004325 update_engine[1523]: I20250813 00:51:20.003751 1523 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Aug 13 00:51:20.004325 update_engine[1523]: I20250813 00:51:20.003805 1523 omaha_request_action.cc:271] Posting an Omaha request to disabled
Aug 13 00:51:20.004325 update_engine[1523]: I20250813 00:51:20.003814 1523 omaha_request_action.cc:272] Request:
Aug 13 00:51:20.004325 update_engine[1523]:
Aug 13 00:51:20.004325 update_engine[1523]:
Aug 13 00:51:20.004325 update_engine[1523]:
Aug 13 00:51:20.004325 update_engine[1523]:
Aug 13 00:51:20.004325 update_engine[1523]:
Aug 13 00:51:20.004325 update_engine[1523]:
Aug 13 00:51:20.004325 update_engine[1523]:
Aug 13 00:51:20.004325 update_engine[1523]:
Aug 13 00:51:20.004325 update_engine[1523]: I20250813 00:51:20.003821 1523 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Aug 13 00:51:20.004830 locksmithd[1564]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Aug 13 00:51:20.005031 update_engine[1523]: I20250813 00:51:20.005003 1523 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Aug 13 00:51:20.005775 update_engine[1523]: I20250813 00:51:20.005750 1523 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Aug 13 00:51:20.029260 update_engine[1523]: E20250813 00:51:20.029191 1523 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Aug 13 00:51:20.029379 update_engine[1523]: I20250813 00:51:20.029346 1523 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Aug 13 00:51:22.655064 kubelet[2711]: E0813 00:51:22.655034 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:51:23.984487 systemd[1]: Started sshd@33-172.234.29.69:22-147.75.109.163:51264.service - OpenSSH per-connection server daemon (147.75.109.163:51264).
Aug 13 00:51:24.328783 sshd[4212]: Accepted publickey for core from 147.75.109.163 port 51264 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:51:24.331010 sshd-session[4212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:51:24.337725 systemd-logind[1522]: New session 34 of user core.
Aug 13 00:51:24.348463 systemd[1]: Started session-34.scope - Session 34 of User core.
Aug 13 00:51:24.631911 sshd[4214]: Connection closed by 147.75.109.163 port 51264
Aug 13 00:51:24.633324 sshd-session[4212]: pam_unix(sshd:session): session closed for user core
Aug 13 00:51:24.637108 systemd[1]: sshd@33-172.234.29.69:22-147.75.109.163:51264.service: Deactivated successfully.
Aug 13 00:51:24.639195 systemd[1]: session-34.scope: Deactivated successfully.
Aug 13 00:51:24.640015 systemd-logind[1522]: Session 34 logged out. Waiting for processes to exit.
Aug 13 00:51:24.641765 systemd-logind[1522]: Removed session 34.
Aug 13 00:51:27.654589 kubelet[2711]: E0813 00:51:27.654177 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:51:29.695352 systemd[1]: Started sshd@34-172.234.29.69:22-147.75.109.163:40818.service - OpenSSH per-connection server daemon (147.75.109.163:40818).
Aug 13 00:51:30.006252 update_engine[1523]: I20250813 00:51:30.006114 1523 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Aug 13 00:51:30.006654 update_engine[1523]: I20250813 00:51:30.006384 1523 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Aug 13 00:51:30.006654 update_engine[1523]: I20250813 00:51:30.006610 1523 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Aug 13 00:51:30.007175 update_engine[1523]: E20250813 00:51:30.007133 1523 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Aug 13 00:51:30.007222 update_engine[1523]: I20250813 00:51:30.007179 1523 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Aug 13 00:51:30.028493 sshd[4227]: Accepted publickey for core from 147.75.109.163 port 40818 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:51:30.029457 sshd-session[4227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:51:30.034737 systemd-logind[1522]: New session 35 of user core.
Aug 13 00:51:30.038495 systemd[1]: Started session-35.scope - Session 35 of User core.
Aug 13 00:51:30.330445 sshd[4229]: Connection closed by 147.75.109.163 port 40818
Aug 13 00:51:30.331780 sshd-session[4227]: pam_unix(sshd:session): session closed for user core
Aug 13 00:51:30.337087 systemd[1]: sshd@34-172.234.29.69:22-147.75.109.163:40818.service: Deactivated successfully.
Aug 13 00:51:30.339927 systemd[1]: session-35.scope: Deactivated successfully.
Aug 13 00:51:30.341690 systemd-logind[1522]: Session 35 logged out. Waiting for processes to exit.
Aug 13 00:51:30.343098 systemd-logind[1522]: Removed session 35.
Aug 13 00:51:32.654634 kubelet[2711]: E0813 00:51:32.654600 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:51:35.391644 systemd[1]: Started sshd@35-172.234.29.69:22-147.75.109.163:40820.service - OpenSSH per-connection server daemon (147.75.109.163:40820).
Aug 13 00:51:35.728883 sshd[4243]: Accepted publickey for core from 147.75.109.163 port 40820 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:51:35.730271 sshd-session[4243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:51:35.735351 systemd-logind[1522]: New session 36 of user core.
Aug 13 00:51:35.739497 systemd[1]: Started session-36.scope - Session 36 of User core.
Aug 13 00:51:36.026116 sshd[4245]: Connection closed by 147.75.109.163 port 40820
Aug 13 00:51:36.027469 sshd-session[4243]: pam_unix(sshd:session): session closed for user core
Aug 13 00:51:36.030996 systemd[1]: sshd@35-172.234.29.69:22-147.75.109.163:40820.service: Deactivated successfully.
Aug 13 00:51:36.032973 systemd[1]: session-36.scope: Deactivated successfully.
Aug 13 00:51:36.034009 systemd-logind[1522]: Session 36 logged out. Waiting for processes to exit.
Aug 13 00:51:36.035671 systemd-logind[1522]: Removed session 36.
Aug 13 00:51:37.655419 kubelet[2711]: E0813 00:51:37.655173 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:51:40.005417 update_engine[1523]: I20250813 00:51:40.005338 1523 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Aug 13 00:51:40.005824 update_engine[1523]: I20250813 00:51:40.005618 1523 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Aug 13 00:51:40.005940 update_engine[1523]: I20250813 00:51:40.005888 1523 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Aug 13 00:51:40.006685 update_engine[1523]: E20250813 00:51:40.006624 1523 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Aug 13 00:51:40.006725 update_engine[1523]: I20250813 00:51:40.006713 1523 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Aug 13 00:51:41.095240 systemd[1]: Started sshd@36-172.234.29.69:22-147.75.109.163:59848.service - OpenSSH per-connection server daemon (147.75.109.163:59848).
Aug 13 00:51:41.441349 sshd[4257]: Accepted publickey for core from 147.75.109.163 port 59848 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:51:41.442709 sshd-session[4257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:51:41.449676 systemd-logind[1522]: New session 37 of user core.
Aug 13 00:51:41.456444 systemd[1]: Started session-37.scope - Session 37 of User core.
Aug 13 00:51:41.742538 sshd[4259]: Connection closed by 147.75.109.163 port 59848
Aug 13 00:51:41.743087 sshd-session[4257]: pam_unix(sshd:session): session closed for user core
Aug 13 00:51:41.747528 systemd[1]: sshd@36-172.234.29.69:22-147.75.109.163:59848.service: Deactivated successfully.
Aug 13 00:51:41.750098 systemd[1]: session-37.scope: Deactivated successfully.
Aug 13 00:51:41.750987 systemd-logind[1522]: Session 37 logged out. Waiting for processes to exit.
Aug 13 00:51:41.752411 systemd-logind[1522]: Removed session 37.
Aug 13 00:51:46.803924 systemd[1]: Started sshd@37-172.234.29.69:22-147.75.109.163:59854.service - OpenSSH per-connection server daemon (147.75.109.163:59854).
Aug 13 00:51:47.143667 sshd[4271]: Accepted publickey for core from 147.75.109.163 port 59854 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:51:47.144849 sshd-session[4271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:51:47.149383 systemd-logind[1522]: New session 38 of user core.
Aug 13 00:51:47.156404 systemd[1]: Started session-38.scope - Session 38 of User core.
Aug 13 00:51:47.437319 sshd[4273]: Connection closed by 147.75.109.163 port 59854
Aug 13 00:51:47.438506 sshd-session[4271]: pam_unix(sshd:session): session closed for user core
Aug 13 00:51:47.442626 systemd-logind[1522]: Session 38 logged out. Waiting for processes to exit.
Aug 13 00:51:47.443628 systemd[1]: sshd@37-172.234.29.69:22-147.75.109.163:59854.service: Deactivated successfully.
Aug 13 00:51:47.445927 systemd[1]: session-38.scope: Deactivated successfully.
Aug 13 00:51:47.447792 systemd-logind[1522]: Removed session 38.
Aug 13 00:51:50.002834 update_engine[1523]: I20250813 00:51:50.002754 1523 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Aug 13 00:51:50.003244 update_engine[1523]: I20250813 00:51:50.003023 1523 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Aug 13 00:51:50.003244 update_engine[1523]: I20250813 00:51:50.003232 1523 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Aug 13 00:51:50.004010 update_engine[1523]: E20250813 00:51:50.003973 1523 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Aug 13 00:51:50.004053 update_engine[1523]: I20250813 00:51:50.004016 1523 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Aug 13 00:51:50.004053 update_engine[1523]: I20250813 00:51:50.004025 1523 omaha_request_action.cc:617] Omaha request response:
Aug 13 00:51:50.004136 update_engine[1523]: E20250813 00:51:50.004108 1523 omaha_request_action.cc:636] Omaha request network transfer failed.
Aug 13 00:51:50.004136 update_engine[1523]: I20250813 00:51:50.004132 1523 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Aug 13 00:51:50.004188 update_engine[1523]: I20250813 00:51:50.004138 1523 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Aug 13 00:51:50.004188 update_engine[1523]: I20250813 00:51:50.004144 1523 update_attempter.cc:306] Processing Done.
Aug 13 00:51:50.004188 update_engine[1523]: E20250813 00:51:50.004158 1523 update_attempter.cc:619] Update failed.
Aug 13 00:51:50.004188 update_engine[1523]: I20250813 00:51:50.004163 1523 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Aug 13 00:51:50.004188 update_engine[1523]: I20250813 00:51:50.004169 1523 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Aug 13 00:51:50.004188 update_engine[1523]: I20250813 00:51:50.004175 1523 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Aug 13 00:51:50.004313 update_engine[1523]: I20250813 00:51:50.004242 1523 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Aug 13 00:51:50.004313 update_engine[1523]: I20250813 00:51:50.004261 1523 omaha_request_action.cc:271] Posting an Omaha request to disabled
Aug 13 00:51:50.004313 update_engine[1523]: I20250813 00:51:50.004267 1523 omaha_request_action.cc:272] Request:
Aug 13 00:51:50.004313 update_engine[1523]:
Aug 13 00:51:50.004313 update_engine[1523]:
Aug 13 00:51:50.004313 update_engine[1523]:
Aug 13 00:51:50.004313 update_engine[1523]:
Aug 13 00:51:50.004313 update_engine[1523]:
Aug 13 00:51:50.004313 update_engine[1523]:
Aug 13 00:51:50.004313 update_engine[1523]: I20250813 00:51:50.004273 1523 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Aug 13 00:51:50.004495 update_engine[1523]: I20250813 00:51:50.004438 1523 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Aug 13 00:51:50.004879 update_engine[1523]: I20250813 00:51:50.004793 1523 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Aug 13 00:51:50.005044 locksmithd[1564]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Aug 13 00:51:50.005964 update_engine[1523]: E20250813 00:51:50.005905 1523 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Aug 13 00:51:50.006089 update_engine[1523]: I20250813 00:51:50.005980 1523 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Aug 13 00:51:50.006089 update_engine[1523]: I20250813 00:51:50.005990 1523 omaha_request_action.cc:617] Omaha request response:
Aug 13 00:51:50.006089 update_engine[1523]: I20250813 00:51:50.005998 1523 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Aug 13 00:51:50.006089 update_engine[1523]: I20250813 00:51:50.006003 1523 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Aug 13 00:51:50.006089 update_engine[1523]: I20250813 00:51:50.006008 1523 update_attempter.cc:306] Processing Done.
Aug 13 00:51:50.006089 update_engine[1523]: I20250813 00:51:50.006016 1523 update_attempter.cc:310] Error event sent.
Aug 13 00:51:50.006089 update_engine[1523]: I20250813 00:51:50.006026 1523 update_check_scheduler.cc:74] Next update check in 42m25s
Aug 13 00:51:50.006359 locksmithd[1564]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Aug 13 00:51:52.505734 systemd[1]: Started sshd@38-172.234.29.69:22-147.75.109.163:40510.service - OpenSSH per-connection server daemon (147.75.109.163:40510).
Aug 13 00:51:52.848065 sshd[4286]: Accepted publickey for core from 147.75.109.163 port 40510 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:51:52.849721 sshd-session[4286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:51:52.854741 systemd-logind[1522]: New session 39 of user core.
Aug 13 00:51:52.857422 systemd[1]: Started session-39.scope - Session 39 of User core.
Aug 13 00:51:53.150094 sshd[4288]: Connection closed by 147.75.109.163 port 40510
Aug 13 00:51:53.151884 sshd-session[4286]: pam_unix(sshd:session): session closed for user core
Aug 13 00:51:53.156166 systemd[1]: sshd@38-172.234.29.69:22-147.75.109.163:40510.service: Deactivated successfully.
Aug 13 00:51:53.158173 systemd[1]: session-39.scope: Deactivated successfully.
Aug 13 00:51:53.160095 systemd-logind[1522]: Session 39 logged out. Waiting for processes to exit.
Aug 13 00:51:53.161700 systemd-logind[1522]: Removed session 39.
Aug 13 00:51:58.211185 systemd[1]: Started sshd@39-172.234.29.69:22-147.75.109.163:55354.service - OpenSSH per-connection server daemon (147.75.109.163:55354).
Aug 13 00:51:58.553822 sshd[4301]: Accepted publickey for core from 147.75.109.163 port 55354 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:51:58.555430 sshd-session[4301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:51:58.561900 systemd-logind[1522]: New session 40 of user core.
Aug 13 00:51:58.566495 systemd[1]: Started session-40.scope - Session 40 of User core.
Aug 13 00:51:58.859128 sshd[4303]: Connection closed by 147.75.109.163 port 55354
Aug 13 00:51:58.860506 sshd-session[4301]: pam_unix(sshd:session): session closed for user core
Aug 13 00:51:58.865485 systemd[1]: sshd@39-172.234.29.69:22-147.75.109.163:55354.service: Deactivated successfully.
Aug 13 00:51:58.868500 systemd[1]: session-40.scope: Deactivated successfully.
Aug 13 00:51:58.870074 systemd-logind[1522]: Session 40 logged out. Waiting for processes to exit.
Aug 13 00:51:58.872045 systemd-logind[1522]: Removed session 40.
Aug 13 00:52:03.920376 systemd[1]: Started sshd@40-172.234.29.69:22-147.75.109.163:55358.service - OpenSSH per-connection server daemon (147.75.109.163:55358).
Aug 13 00:52:04.257383 sshd[4317]: Accepted publickey for core from 147.75.109.163 port 55358 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:52:04.257899 sshd-session[4317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:52:04.263052 systemd-logind[1522]: New session 41 of user core.
Aug 13 00:52:04.266471 systemd[1]: Started session-41.scope - Session 41 of User core.
Aug 13 00:52:04.546151 sshd[4319]: Connection closed by 147.75.109.163 port 55358
Aug 13 00:52:04.546888 sshd-session[4317]: pam_unix(sshd:session): session closed for user core
Aug 13 00:52:04.550645 systemd-logind[1522]: Session 41 logged out. Waiting for processes to exit.
Aug 13 00:52:04.551444 systemd[1]: sshd@40-172.234.29.69:22-147.75.109.163:55358.service: Deactivated successfully.
Aug 13 00:52:04.553441 systemd[1]: session-41.scope: Deactivated successfully.
Aug 13 00:52:04.555526 systemd-logind[1522]: Removed session 41.
Aug 13 00:52:09.613514 systemd[1]: Started sshd@41-172.234.29.69:22-147.75.109.163:49478.service - OpenSSH per-connection server daemon (147.75.109.163:49478).
Aug 13 00:52:09.966314 sshd[4331]: Accepted publickey for core from 147.75.109.163 port 49478 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:52:09.967779 sshd-session[4331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:52:09.973489 systemd-logind[1522]: New session 42 of user core.
Aug 13 00:52:09.981440 systemd[1]: Started session-42.scope - Session 42 of User core.
Aug 13 00:52:10.285219 sshd[4333]: Connection closed by 147.75.109.163 port 49478
Aug 13 00:52:10.286093 sshd-session[4331]: pam_unix(sshd:session): session closed for user core
Aug 13 00:52:10.289971 systemd[1]: sshd@41-172.234.29.69:22-147.75.109.163:49478.service: Deactivated successfully.
Aug 13 00:52:10.293091 systemd[1]: session-42.scope: Deactivated successfully.
Aug 13 00:52:10.294760 systemd-logind[1522]: Session 42 logged out. Waiting for processes to exit.
Aug 13 00:52:10.297174 systemd-logind[1522]: Removed session 42.
Aug 13 00:52:15.348471 systemd[1]: Started sshd@42-172.234.29.69:22-147.75.109.163:49492.service - OpenSSH per-connection server daemon (147.75.109.163:49492).
Aug 13 00:52:15.683887 sshd[4344]: Accepted publickey for core from 147.75.109.163 port 49492 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:52:15.685443 sshd-session[4344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:52:15.690421 systemd-logind[1522]: New session 43 of user core.
Aug 13 00:52:15.698484 systemd[1]: Started session-43.scope - Session 43 of User core.
Aug 13 00:52:15.972064 sshd[4346]: Connection closed by 147.75.109.163 port 49492
Aug 13 00:52:15.972727 sshd-session[4344]: pam_unix(sshd:session): session closed for user core
Aug 13 00:52:15.976853 systemd[1]: sshd@42-172.234.29.69:22-147.75.109.163:49492.service: Deactivated successfully.
Aug 13 00:52:15.978945 systemd[1]: session-43.scope: Deactivated successfully.
Aug 13 00:52:15.979891 systemd-logind[1522]: Session 43 logged out. Waiting for processes to exit.
Aug 13 00:52:15.981626 systemd-logind[1522]: Removed session 43.
Aug 13 00:52:17.655338 kubelet[2711]: E0813 00:52:17.655020 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:52:21.041503 systemd[1]: Started sshd@43-172.234.29.69:22-147.75.109.163:39138.service - OpenSSH per-connection server daemon (147.75.109.163:39138).
Aug 13 00:52:21.384407 sshd[4357]: Accepted publickey for core from 147.75.109.163 port 39138 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:52:21.385383 sshd-session[4357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:52:21.390453 systemd-logind[1522]: New session 44 of user core.
Aug 13 00:52:21.402454 systemd[1]: Started session-44.scope - Session 44 of User core.
Aug 13 00:52:21.527857 containerd[1542]: time="2025-08-13T00:52:21.527761805Z" level=warning msg="container event discarded" container=792bf77fcdd13aa7ea36135ff0f62e5ddd4a6aa4611c72d3a5bc6448431c99e2 type=CONTAINER_CREATED_EVENT
Aug 13 00:52:21.539153 containerd[1542]: time="2025-08-13T00:52:21.539082463Z" level=warning msg="container event discarded" container=792bf77fcdd13aa7ea36135ff0f62e5ddd4a6aa4611c72d3a5bc6448431c99e2 type=CONTAINER_STARTED_EVENT
Aug 13 00:52:21.553424 containerd[1542]: time="2025-08-13T00:52:21.553366437Z" level=warning msg="container event discarded" container=ad1ef7b0ea53a6102f7086000985e8f04c15b7bf455714b280f9ebbd8f2ca60b type=CONTAINER_CREATED_EVENT
Aug 13 00:52:21.602655 containerd[1542]: time="2025-08-13T00:52:21.602603119Z" level=warning msg="container event discarded" container=7c7f735336402d5ed593ed7c9bea3de4def12034a19eee026f79e221435c6941 type=CONTAINER_CREATED_EVENT
Aug 13 00:52:21.602655 containerd[1542]: time="2025-08-13T00:52:21.602643219Z" level=warning msg="container event discarded" container=7c7f735336402d5ed593ed7c9bea3de4def12034a19eee026f79e221435c6941 type=CONTAINER_STARTED_EVENT
Aug 13 00:52:21.621899 containerd[1542]: time="2025-08-13T00:52:21.621823431Z" level=warning msg="container event discarded" container=793edb8665df509c13c17dc8212b7dd1d650744faadef47b29a58e1c949580b0 type=CONTAINER_CREATED_EVENT
Aug 13 00:52:21.621899 containerd[1542]: time="2025-08-13T00:52:21.621884541Z" level=warning msg="container event discarded" container=793edb8665df509c13c17dc8212b7dd1d650744faadef47b29a58e1c949580b0 type=CONTAINER_STARTED_EVENT
Aug 13 00:52:21.636686 containerd[1542]: time="2025-08-13T00:52:21.636577612Z" level=warning msg="container event discarded" container=ce78cbcc2a4760665cf2b812a6f96f7bd0fd1fc6f3180edb1d2dcc46d5e4bd69 type=CONTAINER_CREATED_EVENT
Aug 13 00:52:21.636686 containerd[1542]: time="2025-08-13T00:52:21.636612632Z" level=warning msg="container event discarded" container=a65c2704ee9e9de15fe756f537e77dd7c119ac2faaff6ed7cf6ece55ed245049 type=CONTAINER_CREATED_EVENT
Aug 13 00:52:21.682258 containerd[1542]: time="2025-08-13T00:52:21.682205363Z" level=warning msg="container event discarded" container=ad1ef7b0ea53a6102f7086000985e8f04c15b7bf455714b280f9ebbd8f2ca60b type=CONTAINER_STARTED_EVENT
Aug 13 00:52:21.689178 sshd[4359]: Connection closed by 147.75.109.163 port 39138
Aug 13 00:52:21.690518 sshd-session[4357]: pam_unix(sshd:session): session closed for user core
Aug 13 00:52:21.695389 systemd-logind[1522]: Session 44 logged out. Waiting for processes to exit.
Aug 13 00:52:21.696073 systemd[1]: sshd@43-172.234.29.69:22-147.75.109.163:39138.service: Deactivated successfully.
Aug 13 00:52:21.698419 systemd[1]: session-44.scope: Deactivated successfully.
Aug 13 00:52:21.700420 systemd-logind[1522]: Removed session 44.
Aug 13 00:52:21.778509 containerd[1542]: time="2025-08-13T00:52:21.778458068Z" level=warning msg="container event discarded" container=a65c2704ee9e9de15fe756f537e77dd7c119ac2faaff6ed7cf6ece55ed245049 type=CONTAINER_STARTED_EVENT
Aug 13 00:52:21.827823 containerd[1542]: time="2025-08-13T00:52:21.827760070Z" level=warning msg="container event discarded" container=ce78cbcc2a4760665cf2b812a6f96f7bd0fd1fc6f3180edb1d2dcc46d5e4bd69 type=CONTAINER_STARTED_EVENT
Aug 13 00:52:25.655053 kubelet[2711]: E0813 00:52:25.654340 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:52:26.748481 systemd[1]: Started sshd@44-172.234.29.69:22-147.75.109.163:39146.service - OpenSSH per-connection server daemon (147.75.109.163:39146).
Aug 13 00:52:27.082396 sshd[4373]: Accepted publickey for core from 147.75.109.163 port 39146 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:52:27.083277 sshd-session[4373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:52:27.087610 systemd-logind[1522]: New session 45 of user core.
Aug 13 00:52:27.094401 systemd[1]: Started session-45.scope - Session 45 of User core.
Aug 13 00:52:27.383079 sshd[4375]: Connection closed by 147.75.109.163 port 39146
Aug 13 00:52:27.384456 sshd-session[4373]: pam_unix(sshd:session): session closed for user core
Aug 13 00:52:27.388237 systemd-logind[1522]: Session 45 logged out. Waiting for processes to exit.
Aug 13 00:52:27.388413 systemd[1]: sshd@44-172.234.29.69:22-147.75.109.163:39146.service: Deactivated successfully.
Aug 13 00:52:27.390403 systemd[1]: session-45.scope: Deactivated successfully.
Aug 13 00:52:27.392120 systemd-logind[1522]: Removed session 45.
Aug 13 00:52:31.146841 containerd[1542]: time="2025-08-13T00:52:31.146790165Z" level=warning msg="container event discarded" container=a85a14baa6b7cf73493e8e2402d6db12d6f36515377d6a9ed8dee83285252e8e type=CONTAINER_CREATED_EVENT
Aug 13 00:52:31.146841 containerd[1542]: time="2025-08-13T00:52:31.146828445Z" level=warning msg="container event discarded" container=a85a14baa6b7cf73493e8e2402d6db12d6f36515377d6a9ed8dee83285252e8e type=CONTAINER_STARTED_EVENT
Aug 13 00:52:31.167463 containerd[1542]: time="2025-08-13T00:52:31.167408156Z" level=warning msg="container event discarded" container=5522689ec1f6b93d43b3f894d0c37412031d26f0e198c0aa37a151edba8d3876 type=CONTAINER_CREATED_EVENT
Aug 13 00:52:31.179600 containerd[1542]: time="2025-08-13T00:52:31.179564202Z" level=warning msg="container event discarded" container=68b809f38d919330218ef88f91cd0ed7ad095b1d88a46b8d6e7c3c2d5f3c21a0 type=CONTAINER_CREATED_EVENT
Aug 13 00:52:31.179600 containerd[1542]: time="2025-08-13T00:52:31.179592702Z" level=warning msg="container event discarded" container=68b809f38d919330218ef88f91cd0ed7ad095b1d88a46b8d6e7c3c2d5f3c21a0 type=CONTAINER_STARTED_EVENT
Aug 13 00:52:31.251877 containerd[1542]: time="2025-08-13T00:52:31.251843957Z" level=warning msg="container event discarded" container=5522689ec1f6b93d43b3f894d0c37412031d26f0e198c0aa37a151edba8d3876 type=CONTAINER_STARTED_EVENT
Aug 13 00:52:31.358678 containerd[1542]: time="2025-08-13T00:52:31.358650641Z" level=warning msg="container event discarded" container=5e23f51dd5bc9c8e01794cfdf9bad4cc4b19150c19744920f7e0f8becd14e7a8 type=CONTAINER_CREATED_EVENT
Aug 13 00:52:31.358678 containerd[1542]: time="2025-08-13T00:52:31.358673870Z" level=warning msg="container event discarded" container=5e23f51dd5bc9c8e01794cfdf9bad4cc4b19150c19744920f7e0f8becd14e7a8 type=CONTAINER_STARTED_EVENT
Aug 13 00:52:32.449358 systemd[1]: Started sshd@45-172.234.29.69:22-147.75.109.163:46070.service - OpenSSH per-connection server daemon (147.75.109.163:46070).
Aug 13 00:52:32.783414 sshd[4390]: Accepted publickey for core from 147.75.109.163 port 46070 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:52:32.784852 sshd-session[4390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:52:32.789335 systemd-logind[1522]: New session 46 of user core.
Aug 13 00:52:32.795432 systemd[1]: Started session-46.scope - Session 46 of User core.
Aug 13 00:52:33.088967 sshd[4392]: Connection closed by 147.75.109.163 port 46070
Aug 13 00:52:33.089720 sshd-session[4390]: pam_unix(sshd:session): session closed for user core
Aug 13 00:52:33.093335 systemd[1]: sshd@45-172.234.29.69:22-147.75.109.163:46070.service: Deactivated successfully.
Aug 13 00:52:33.095651 systemd[1]: session-46.scope: Deactivated successfully.
Aug 13 00:52:33.097189 systemd-logind[1522]: Session 46 logged out. Waiting for processes to exit.
Aug 13 00:52:33.099000 systemd-logind[1522]: Removed session 46.
Aug 13 00:52:34.654374 kubelet[2711]: E0813 00:52:34.654264 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:52:36.377900 containerd[1542]: time="2025-08-13T00:52:36.377853156Z" level=warning msg="container event discarded" container=ca05aee711ed5f20920afe55ee30e6fc8657921e32332a57bdaa1ed33b42c9c1 type=CONTAINER_CREATED_EVENT
Aug 13 00:52:36.437476 containerd[1542]: time="2025-08-13T00:52:36.437415452Z" level=warning msg="container event discarded" container=ca05aee711ed5f20920afe55ee30e6fc8657921e32332a57bdaa1ed33b42c9c1 type=CONTAINER_STARTED_EVENT
Aug 13 00:52:36.535811 containerd[1542]: time="2025-08-13T00:52:36.535773175Z" level=warning msg="container event discarded" container=ca05aee711ed5f20920afe55ee30e6fc8657921e32332a57bdaa1ed33b42c9c1 type=CONTAINER_STOPPED_EVENT
Aug 13 00:52:36.727148 containerd[1542]: time="2025-08-13T00:52:36.727094251Z" level=warning msg="container event discarded" container=876faeb7c58d7a3056d1969a8d7450b52a51eb4bc56e043d05e5925e0e5ccc29 type=CONTAINER_CREATED_EVENT
Aug 13 00:52:36.783464 containerd[1542]: time="2025-08-13T00:52:36.783403942Z" level=warning msg="container event discarded" container=876faeb7c58d7a3056d1969a8d7450b52a51eb4bc56e043d05e5925e0e5ccc29 type=CONTAINER_STARTED_EVENT
Aug 13 00:52:36.859807 containerd[1542]: time="2025-08-13T00:52:36.859745067Z" level=warning msg="container event discarded" container=876faeb7c58d7a3056d1969a8d7450b52a51eb4bc56e043d05e5925e0e5ccc29 type=CONTAINER_STOPPED_EVENT
Aug 13 00:52:37.264608 containerd[1542]: time="2025-08-13T00:52:37.264530312Z" level=warning msg="container event discarded" container=7c278e5781c0f00bdbd9b76cf8dd05a6478545bd84cb448a3954c34de2e6a72e type=CONTAINER_CREATED_EVENT
Aug 13 00:52:37.323188 containerd[1542]: time="2025-08-13T00:52:37.323152904Z" level=warning msg="container event discarded" container=7c278e5781c0f00bdbd9b76cf8dd05a6478545bd84cb448a3954c34de2e6a72e type=CONTAINER_STARTED_EVENT
Aug 13 00:52:37.746779 containerd[1542]: time="2025-08-13T00:52:37.746724242Z" level=warning msg="container event discarded" container=81263c8649857a46f8bdb198072c0f97ba4060f974c6367674c23854d4dff5b6 type=CONTAINER_CREATED_EVENT
Aug 13 00:52:37.898155 containerd[1542]: time="2025-08-13T00:52:37.898072392Z" level=warning msg="container event discarded" container=81263c8649857a46f8bdb198072c0f97ba4060f974c6367674c23854d4dff5b6 type=CONTAINER_STARTED_EVENT
Aug 13 00:52:37.985468 containerd[1542]: time="2025-08-13T00:52:37.985416893Z" level=warning msg="container event discarded" container=81263c8649857a46f8bdb198072c0f97ba4060f974c6367674c23854d4dff5b6 type=CONTAINER_STOPPED_EVENT
Aug 13 00:52:38.150838 systemd[1]: Started sshd@46-172.234.29.69:22-147.75.109.163:55360.service - OpenSSH per-connection server daemon (147.75.109.163:55360).
Aug 13 00:52:38.481981 sshd[4404]: Accepted publickey for core from 147.75.109.163 port 55360 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:52:38.483284 sshd-session[4404]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:52:38.488881 systemd-logind[1522]: New session 47 of user core.
Aug 13 00:52:38.491466 systemd[1]: Started session-47.scope - Session 47 of User core.
Aug 13 00:52:38.750644 containerd[1542]: time="2025-08-13T00:52:38.750516441Z" level=warning msg="container event discarded" container=0546c72aef099fdd6782322a334120ba5116a36f4a9dc7e4781cecfe842ba7df type=CONTAINER_CREATED_EVENT
Aug 13 00:52:38.779777 sshd[4406]: Connection closed by 147.75.109.163 port 55360
Aug 13 00:52:38.780235 sshd-session[4404]: pam_unix(sshd:session): session closed for user core
Aug 13 00:52:38.784739 systemd-logind[1522]: Session 47 logged out. Waiting for processes to exit.
Aug 13 00:52:38.785489 systemd[1]: sshd@46-172.234.29.69:22-147.75.109.163:55360.service: Deactivated successfully.
Aug 13 00:52:38.787880 systemd[1]: session-47.scope: Deactivated successfully.
Aug 13 00:52:38.789785 systemd-logind[1522]: Removed session 47.
Aug 13 00:52:38.814764 containerd[1542]: time="2025-08-13T00:52:38.814732291Z" level=warning msg="container event discarded" container=0546c72aef099fdd6782322a334120ba5116a36f4a9dc7e4781cecfe842ba7df type=CONTAINER_STARTED_EVENT
Aug 13 00:52:38.845080 containerd[1542]: time="2025-08-13T00:52:38.845045673Z" level=warning msg="container event discarded" container=0546c72aef099fdd6782322a334120ba5116a36f4a9dc7e4781cecfe842ba7df type=CONTAINER_STOPPED_EVENT
Aug 13 00:52:39.758871 containerd[1542]: time="2025-08-13T00:52:39.758807293Z" level=warning msg="container event discarded" container=037f6ca83e7beac0860b9e77b4a8845d634d7930c8d828548df518102a85d581 type=CONTAINER_CREATED_EVENT
Aug 13 00:52:39.821057 containerd[1542]: time="2025-08-13T00:52:39.821028703Z" level=warning msg="container event discarded" container=037f6ca83e7beac0860b9e77b4a8845d634d7930c8d828548df518102a85d581 type=CONTAINER_STARTED_EVENT
Aug 13 00:52:43.655340 kubelet[2711]: E0813 00:52:43.654869 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:52:43.847025 systemd[1]: Started sshd@47-172.234.29.69:22-147.75.109.163:55376.service - OpenSSH per-connection server daemon (147.75.109.163:55376).
Aug 13 00:52:44.190574 sshd[4418]: Accepted publickey for core from 147.75.109.163 port 55376 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:52:44.191901 sshd-session[4418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:52:44.196243 systemd-logind[1522]: New session 48 of user core.
Aug 13 00:52:44.204551 systemd[1]: Started session-48.scope - Session 48 of User core.
Aug 13 00:52:44.485828 sshd[4420]: Connection closed by 147.75.109.163 port 55376
Aug 13 00:52:44.486565 sshd-session[4418]: pam_unix(sshd:session): session closed for user core
Aug 13 00:52:44.490365 systemd-logind[1522]: Session 48 logged out. Waiting for processes to exit.
Aug 13 00:52:44.490620 systemd[1]: sshd@47-172.234.29.69:22-147.75.109.163:55376.service: Deactivated successfully.
Aug 13 00:52:44.492626 systemd[1]: session-48.scope: Deactivated successfully.
Aug 13 00:52:44.494079 systemd-logind[1522]: Removed session 48.
Aug 13 00:52:46.654785 kubelet[2711]: E0813 00:52:46.654752 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:52:49.555629 systemd[1]: Started sshd@48-172.234.29.69:22-147.75.109.163:59104.service - OpenSSH per-connection server daemon (147.75.109.163:59104).
Aug 13 00:52:49.900920 sshd[4432]: Accepted publickey for core from 147.75.109.163 port 59104 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:52:49.901381 sshd-session[4432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:52:49.905822 systemd-logind[1522]: New session 49 of user core.
Aug 13 00:52:49.912421 systemd[1]: Started session-49.scope - Session 49 of User core.
Aug 13 00:52:50.201315 sshd[4434]: Connection closed by 147.75.109.163 port 59104
Aug 13 00:52:50.202241 sshd-session[4432]: pam_unix(sshd:session): session closed for user core
Aug 13 00:52:50.207842 systemd[1]: sshd@48-172.234.29.69:22-147.75.109.163:59104.service: Deactivated successfully.
Aug 13 00:52:50.209537 systemd[1]: session-49.scope: Deactivated successfully.
Aug 13 00:52:50.210650 systemd-logind[1522]: Session 49 logged out. Waiting for processes to exit.
Aug 13 00:52:50.212902 systemd-logind[1522]: Removed session 49.
Aug 13 00:52:55.265688 systemd[1]: Started sshd@49-172.234.29.69:22-147.75.109.163:59108.service - OpenSSH per-connection server daemon (147.75.109.163:59108).
Aug 13 00:52:55.609684 sshd[4446]: Accepted publickey for core from 147.75.109.163 port 59108 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:52:55.611773 sshd-session[4446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:52:55.617779 systemd-logind[1522]: New session 50 of user core.
Aug 13 00:52:55.620466 systemd[1]: Started session-50.scope - Session 50 of User core.
Aug 13 00:52:55.907821 sshd[4448]: Connection closed by 147.75.109.163 port 59108
Aug 13 00:52:55.908536 sshd-session[4446]: pam_unix(sshd:session): session closed for user core
Aug 13 00:52:55.911731 systemd[1]: sshd@49-172.234.29.69:22-147.75.109.163:59108.service: Deactivated successfully.
Aug 13 00:52:55.914864 systemd[1]: session-50.scope: Deactivated successfully.
Aug 13 00:52:55.917575 systemd-logind[1522]: Session 50 logged out. Waiting for processes to exit.
Aug 13 00:52:55.919210 systemd-logind[1522]: Removed session 50.
Aug 13 00:52:56.654257 kubelet[2711]: E0813 00:52:56.654224 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:53:00.974922 systemd[1]: Started sshd@50-172.234.29.69:22-147.75.109.163:32848.service - OpenSSH per-connection server daemon (147.75.109.163:32848).
Aug 13 00:53:01.321948 sshd[4460]: Accepted publickey for core from 147.75.109.163 port 32848 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:53:01.323400 sshd-session[4460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:53:01.327756 systemd-logind[1522]: New session 51 of user core.
Aug 13 00:53:01.332439 systemd[1]: Started session-51.scope - Session 51 of User core.
Aug 13 00:53:01.618441 sshd[4462]: Connection closed by 147.75.109.163 port 32848
Aug 13 00:53:01.619127 sshd-session[4460]: pam_unix(sshd:session): session closed for user core
Aug 13 00:53:01.623341 systemd-logind[1522]: Session 51 logged out. Waiting for processes to exit.
Aug 13 00:53:01.623981 systemd[1]: sshd@50-172.234.29.69:22-147.75.109.163:32848.service: Deactivated successfully.
Aug 13 00:53:01.625885 systemd[1]: session-51.scope: Deactivated successfully.
Aug 13 00:53:01.627692 systemd-logind[1522]: Removed session 51.
Aug 13 00:53:06.685464 systemd[1]: Started sshd@51-172.234.29.69:22-147.75.109.163:32856.service - OpenSSH per-connection server daemon (147.75.109.163:32856).
Aug 13 00:53:07.032070 sshd[4476]: Accepted publickey for core from 147.75.109.163 port 32856 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:53:07.033539 sshd-session[4476]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:53:07.038389 systemd-logind[1522]: New session 52 of user core.
Aug 13 00:53:07.045437 systemd[1]: Started session-52.scope - Session 52 of User core.
Aug 13 00:53:07.334352 sshd[4478]: Connection closed by 147.75.109.163 port 32856
Aug 13 00:53:07.334962 sshd-session[4476]: pam_unix(sshd:session): session closed for user core
Aug 13 00:53:07.340137 systemd[1]: sshd@51-172.234.29.69:22-147.75.109.163:32856.service: Deactivated successfully.
Aug 13 00:53:07.342749 systemd[1]: session-52.scope: Deactivated successfully.
Aug 13 00:53:07.343845 systemd-logind[1522]: Session 52 logged out. Waiting for processes to exit.
Aug 13 00:53:07.345512 systemd-logind[1522]: Removed session 52.
Aug 13 00:53:12.394043 systemd[1]: Started sshd@52-172.234.29.69:22-147.75.109.163:36826.service - OpenSSH per-connection server daemon (147.75.109.163:36826).
Aug 13 00:53:12.724616 sshd[4489]: Accepted publickey for core from 147.75.109.163 port 36826 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:53:12.726147 sshd-session[4489]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:53:12.730511 systemd-logind[1522]: New session 53 of user core.
Aug 13 00:53:12.736489 systemd[1]: Started session-53.scope - Session 53 of User core.
Aug 13 00:53:13.018458 sshd[4491]: Connection closed by 147.75.109.163 port 36826
Aug 13 00:53:13.019189 sshd-session[4489]: pam_unix(sshd:session): session closed for user core
Aug 13 00:53:13.023165 systemd-logind[1522]: Session 53 logged out. Waiting for processes to exit.
Aug 13 00:53:13.023875 systemd[1]: sshd@52-172.234.29.69:22-147.75.109.163:36826.service: Deactivated successfully.
Aug 13 00:53:13.026689 systemd[1]: session-53.scope: Deactivated successfully.
Aug 13 00:53:13.028539 systemd-logind[1522]: Removed session 53.
Aug 13 00:53:18.088627 systemd[1]: Started sshd@53-172.234.29.69:22-147.75.109.163:57344.service - OpenSSH per-connection server daemon (147.75.109.163:57344).
Aug 13 00:53:18.429479 sshd[4502]: Accepted publickey for core from 147.75.109.163 port 57344 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:53:18.431345 sshd-session[4502]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:53:18.436209 systemd-logind[1522]: New session 54 of user core.
Aug 13 00:53:18.442424 systemd[1]: Started session-54.scope - Session 54 of User core.
Aug 13 00:53:18.737802 sshd[4504]: Connection closed by 147.75.109.163 port 57344
Aug 13 00:53:18.738391 sshd-session[4502]: pam_unix(sshd:session): session closed for user core
Aug 13 00:53:18.742370 systemd-logind[1522]: Session 54 logged out. Waiting for processes to exit.
Aug 13 00:53:18.743486 systemd[1]: sshd@53-172.234.29.69:22-147.75.109.163:57344.service: Deactivated successfully.
Aug 13 00:53:18.745772 systemd[1]: session-54.scope: Deactivated successfully.
Aug 13 00:53:18.747137 systemd-logind[1522]: Removed session 54.
Aug 13 00:53:18.800370 systemd[1]: Started sshd@54-172.234.29.69:22-147.75.109.163:57352.service - OpenSSH per-connection server daemon (147.75.109.163:57352).
Aug 13 00:53:19.130843 sshd[4516]: Accepted publickey for core from 147.75.109.163 port 57352 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:53:19.132466 sshd-session[4516]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:53:19.137111 systemd-logind[1522]: New session 55 of user core.
Aug 13 00:53:19.141426 systemd[1]: Started session-55.scope - Session 55 of User core.
Aug 13 00:53:20.620594 containerd[1542]: time="2025-08-13T00:53:20.620343207Z" level=info msg="StopContainer for \"7c278e5781c0f00bdbd9b76cf8dd05a6478545bd84cb448a3954c34de2e6a72e\" with timeout 30 (s)"
Aug 13 00:53:20.621773 containerd[1542]: time="2025-08-13T00:53:20.621731063Z" level=info msg="Stop container \"7c278e5781c0f00bdbd9b76cf8dd05a6478545bd84cb448a3954c34de2e6a72e\" with signal terminated"
Aug 13 00:53:20.636720 systemd[1]: cri-containerd-7c278e5781c0f00bdbd9b76cf8dd05a6478545bd84cb448a3954c34de2e6a72e.scope: Deactivated successfully.
Aug 13 00:53:20.639992 containerd[1542]: time="2025-08-13T00:53:20.639950248Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7c278e5781c0f00bdbd9b76cf8dd05a6478545bd84cb448a3954c34de2e6a72e\" id:\"7c278e5781c0f00bdbd9b76cf8dd05a6478545bd84cb448a3954c34de2e6a72e\" pid:3229 exited_at:{seconds:1755046400 nanos:639385400}"
Aug 13 00:53:20.640133 containerd[1542]: time="2025-08-13T00:53:20.639962898Z" level=info msg="received exit event container_id:\"7c278e5781c0f00bdbd9b76cf8dd05a6478545bd84cb448a3954c34de2e6a72e\" id:\"7c278e5781c0f00bdbd9b76cf8dd05a6478545bd84cb448a3954c34de2e6a72e\" pid:3229 exited_at:{seconds:1755046400 nanos:639385400}"
Aug 13 00:53:20.656490 containerd[1542]: time="2025-08-13T00:53:20.656467659Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 13 00:53:20.662655 containerd[1542]: time="2025-08-13T00:53:20.662587217Z" level=info msg="TaskExit event in podsandbox handler container_id:\"037f6ca83e7beac0860b9e77b4a8845d634d7930c8d828548df518102a85d581\" id:\"abfdc370d92884a0249d808ab03433d013a83e5ea25e4347866dbca45e78a2c7\" pid:4541 exited_at:{seconds:1755046400 nanos:661894570}"
Aug 13 00:53:20.666994 containerd[1542]: time="2025-08-13T00:53:20.666962961Z" level=info msg="StopContainer for \"037f6ca83e7beac0860b9e77b4a8845d634d7930c8d828548df518102a85d581\" with timeout 2 (s)"
Aug 13 00:53:20.668683 containerd[1542]: time="2025-08-13T00:53:20.668614605Z" level=info msg="Stop container \"037f6ca83e7beac0860b9e77b4a8845d634d7930c8d828548df518102a85d581\" with signal terminated"
Aug 13 00:53:20.680274 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c278e5781c0f00bdbd9b76cf8dd05a6478545bd84cb448a3954c34de2e6a72e-rootfs.mount: Deactivated successfully.
Aug 13 00:53:20.686921 systemd-networkd[1463]: lxc_health: Link DOWN
Aug 13 00:53:20.687341 systemd-networkd[1463]: lxc_health: Lost carrier
Aug 13 00:53:20.705766 containerd[1542]: time="2025-08-13T00:53:20.705744843Z" level=info msg="StopContainer for \"7c278e5781c0f00bdbd9b76cf8dd05a6478545bd84cb448a3954c34de2e6a72e\" returns successfully"
Aug 13 00:53:20.707125 containerd[1542]: time="2025-08-13T00:53:20.707109058Z" level=info msg="StopPodSandbox for \"5e23f51dd5bc9c8e01794cfdf9bad4cc4b19150c19744920f7e0f8becd14e7a8\""
Aug 13 00:53:20.708332 containerd[1542]: time="2025-08-13T00:53:20.707235528Z" level=info msg="Container to stop \"7c278e5781c0f00bdbd9b76cf8dd05a6478545bd84cb448a3954c34de2e6a72e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:53:20.713217 systemd[1]: cri-containerd-037f6ca83e7beac0860b9e77b4a8845d634d7930c8d828548df518102a85d581.scope: Deactivated successfully.
Aug 13 00:53:20.713619 systemd[1]: cri-containerd-037f6ca83e7beac0860b9e77b4a8845d634d7930c8d828548df518102a85d581.scope: Consumed 6.188s CPU time, 121M memory peak, 144K read from disk, 13.3M written to disk.
Aug 13 00:53:20.714414 containerd[1542]: time="2025-08-13T00:53:20.714394302Z" level=info msg="TaskExit event in podsandbox handler container_id:\"037f6ca83e7beac0860b9e77b4a8845d634d7930c8d828548df518102a85d581\" id:\"037f6ca83e7beac0860b9e77b4a8845d634d7930c8d828548df518102a85d581\" pid:3340 exited_at:{seconds:1755046400 nanos:713093587}"
Aug 13 00:53:20.714691 containerd[1542]: time="2025-08-13T00:53:20.714544811Z" level=info msg="received exit event container_id:\"037f6ca83e7beac0860b9e77b4a8845d634d7930c8d828548df518102a85d581\" id:\"037f6ca83e7beac0860b9e77b4a8845d634d7930c8d828548df518102a85d581\" pid:3340 exited_at:{seconds:1755046400 nanos:713093587}"
Aug 13 00:53:20.725673 systemd[1]: cri-containerd-5e23f51dd5bc9c8e01794cfdf9bad4cc4b19150c19744920f7e0f8becd14e7a8.scope: Deactivated successfully.
Aug 13 00:53:20.731831 containerd[1542]: time="2025-08-13T00:53:20.731810120Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5e23f51dd5bc9c8e01794cfdf9bad4cc4b19150c19744920f7e0f8becd14e7a8\" id:\"5e23f51dd5bc9c8e01794cfdf9bad4cc4b19150c19744920f7e0f8becd14e7a8\" pid:2940 exit_status:137 exited_at:{seconds:1755046400 nanos:731621321}"
Aug 13 00:53:20.747682 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-037f6ca83e7beac0860b9e77b4a8845d634d7930c8d828548df518102a85d581-rootfs.mount: Deactivated successfully.
Aug 13 00:53:20.758680 containerd[1542]: time="2025-08-13T00:53:20.758640194Z" level=info msg="StopContainer for \"037f6ca83e7beac0860b9e77b4a8845d634d7930c8d828548df518102a85d581\" returns successfully"
Aug 13 00:53:20.759241 containerd[1542]: time="2025-08-13T00:53:20.759219762Z" level=info msg="StopPodSandbox for \"68b809f38d919330218ef88f91cd0ed7ad095b1d88a46b8d6e7c3c2d5f3c21a0\""
Aug 13 00:53:20.759493 containerd[1542]: time="2025-08-13T00:53:20.759475632Z" level=info msg="Container to stop \"ca05aee711ed5f20920afe55ee30e6fc8657921e32332a57bdaa1ed33b42c9c1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:53:20.759684 containerd[1542]: time="2025-08-13T00:53:20.759670231Z" level=info msg="Container to stop \"876faeb7c58d7a3056d1969a8d7450b52a51eb4bc56e043d05e5925e0e5ccc29\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:53:20.759737 containerd[1542]: time="2025-08-13T00:53:20.759725251Z" level=info msg="Container to stop \"81263c8649857a46f8bdb198072c0f97ba4060f974c6367674c23854d4dff5b6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:53:20.759784 containerd[1542]: time="2025-08-13T00:53:20.759773131Z" level=info msg="Container to stop \"037f6ca83e7beac0860b9e77b4a8845d634d7930c8d828548df518102a85d581\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:53:20.760361 containerd[1542]: time="2025-08-13T00:53:20.760344398Z" level=info msg="Container to stop \"0546c72aef099fdd6782322a334120ba5116a36f4a9dc7e4781cecfe842ba7df\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:53:20.778281 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e23f51dd5bc9c8e01794cfdf9bad4cc4b19150c19744920f7e0f8becd14e7a8-rootfs.mount: Deactivated successfully.
Aug 13 00:53:20.783141 systemd[1]: cri-containerd-68b809f38d919330218ef88f91cd0ed7ad095b1d88a46b8d6e7c3c2d5f3c21a0.scope: Deactivated successfully.
Aug 13 00:53:20.787877 containerd[1542]: time="2025-08-13T00:53:20.787741161Z" level=info msg="shim disconnected" id=5e23f51dd5bc9c8e01794cfdf9bad4cc4b19150c19744920f7e0f8becd14e7a8 namespace=k8s.io
Aug 13 00:53:20.788130 containerd[1542]: time="2025-08-13T00:53:20.787776131Z" level=warning msg="cleaning up after shim disconnected" id=5e23f51dd5bc9c8e01794cfdf9bad4cc4b19150c19744920f7e0f8becd14e7a8 namespace=k8s.io
Aug 13 00:53:20.788130 containerd[1542]: time="2025-08-13T00:53:20.788082460Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:53:20.797797 kubelet[2711]: E0813 00:53:20.797743 2711 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Aug 13 00:53:20.814908 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68b809f38d919330218ef88f91cd0ed7ad095b1d88a46b8d6e7c3c2d5f3c21a0-rootfs.mount: Deactivated successfully.
Aug 13 00:53:20.817729 containerd[1542]: time="2025-08-13T00:53:20.817638504Z" level=info msg="received exit event sandbox_id:\"5e23f51dd5bc9c8e01794cfdf9bad4cc4b19150c19744920f7e0f8becd14e7a8\" exit_status:137 exited_at:{seconds:1755046400 nanos:731621321}"
Aug 13 00:53:20.818267 containerd[1542]: time="2025-08-13T00:53:20.818206861Z" level=info msg="shim disconnected" id=68b809f38d919330218ef88f91cd0ed7ad095b1d88a46b8d6e7c3c2d5f3c21a0 namespace=k8s.io
Aug 13 00:53:20.818267 containerd[1542]: time="2025-08-13T00:53:20.818237401Z" level=warning msg="cleaning up after shim disconnected" id=68b809f38d919330218ef88f91cd0ed7ad095b1d88a46b8d6e7c3c2d5f3c21a0 namespace=k8s.io
Aug 13 00:53:20.818267 containerd[1542]: time="2025-08-13T00:53:20.818245471Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:53:20.819714 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5e23f51dd5bc9c8e01794cfdf9bad4cc4b19150c19744920f7e0f8becd14e7a8-shm.mount: Deactivated successfully.
Aug 13 00:53:20.821069 containerd[1542]: time="2025-08-13T00:53:20.820657923Z" level=info msg="TaskExit event in podsandbox handler container_id:\"68b809f38d919330218ef88f91cd0ed7ad095b1d88a46b8d6e7c3c2d5f3c21a0\" id:\"68b809f38d919330218ef88f91cd0ed7ad095b1d88a46b8d6e7c3c2d5f3c21a0\" pid:2863 exit_status:137 exited_at:{seconds:1755046400 nanos:781551903}"
Aug 13 00:53:20.821281 containerd[1542]: time="2025-08-13T00:53:20.821262941Z" level=info msg="received exit event sandbox_id:\"68b809f38d919330218ef88f91cd0ed7ad095b1d88a46b8d6e7c3c2d5f3c21a0\" exit_status:137 exited_at:{seconds:1755046400 nanos:781551903}"
Aug 13 00:53:20.822824 containerd[1542]: time="2025-08-13T00:53:20.822791055Z" level=info msg="TearDown network for sandbox \"5e23f51dd5bc9c8e01794cfdf9bad4cc4b19150c19744920f7e0f8becd14e7a8\" successfully"
Aug 13 00:53:20.822824 containerd[1542]: time="2025-08-13T00:53:20.822819195Z" level=info msg="StopPodSandbox for \"5e23f51dd5bc9c8e01794cfdf9bad4cc4b19150c19744920f7e0f8becd14e7a8\" returns successfully"
Aug 13 00:53:20.823506 containerd[1542]: time="2025-08-13T00:53:20.823295034Z" level=info msg="TearDown network for sandbox \"68b809f38d919330218ef88f91cd0ed7ad095b1d88a46b8d6e7c3c2d5f3c21a0\" successfully"
Aug 13 00:53:20.823506 containerd[1542]: time="2025-08-13T00:53:20.823356274Z" level=info msg="StopPodSandbox for \"68b809f38d919330218ef88f91cd0ed7ad095b1d88a46b8d6e7c3c2d5f3c21a0\" returns successfully"
Aug 13 00:53:20.890126 kubelet[2711]: I0813 00:53:20.889628 2711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhwvf\" (UniqueName: \"kubernetes.io/projected/90281167-bc96-4ee7-975d-6bd06c3bd885-kube-api-access-vhwvf\") pod \"90281167-bc96-4ee7-975d-6bd06c3bd885\" (UID: \"90281167-bc96-4ee7-975d-6bd06c3bd885\") "
Aug 13 00:53:20.890126 kubelet[2711]: I0813 00:53:20.889679 2711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/90281167-bc96-4ee7-975d-6bd06c3bd885-cilium-config-path\") pod \"90281167-bc96-4ee7-975d-6bd06c3bd885\" (UID: \"90281167-bc96-4ee7-975d-6bd06c3bd885\") "
Aug 13 00:53:20.893170 kubelet[2711]: I0813 00:53:20.893131 2711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90281167-bc96-4ee7-975d-6bd06c3bd885-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "90281167-bc96-4ee7-975d-6bd06c3bd885" (UID: "90281167-bc96-4ee7-975d-6bd06c3bd885"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 00:53:20.894475 kubelet[2711]: I0813 00:53:20.894446 2711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90281167-bc96-4ee7-975d-6bd06c3bd885-kube-api-access-vhwvf" (OuterVolumeSpecName: "kube-api-access-vhwvf") pod "90281167-bc96-4ee7-975d-6bd06c3bd885" (UID: "90281167-bc96-4ee7-975d-6bd06c3bd885"). InnerVolumeSpecName "kube-api-access-vhwvf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 00:53:20.991250 kubelet[2711]: I0813 00:53:20.990601 2711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8df6994e-8a85-41b6-8da6-9a30b65a07d4-hubble-tls\") pod \"8df6994e-8a85-41b6-8da6-9a30b65a07d4\" (UID: \"8df6994e-8a85-41b6-8da6-9a30b65a07d4\") "
Aug 13 00:53:20.991250 kubelet[2711]: I0813 00:53:20.990639 2711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8df6994e-8a85-41b6-8da6-9a30b65a07d4-cilium-run\") pod \"8df6994e-8a85-41b6-8da6-9a30b65a07d4\" (UID: \"8df6994e-8a85-41b6-8da6-9a30b65a07d4\") "
Aug 13 00:53:20.991250 kubelet[2711]: I0813 00:53:20.990657 2711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8df6994e-8a85-41b6-8da6-9a30b65a07d4-host-proc-sys-kernel\") pod \"8df6994e-8a85-41b6-8da6-9a30b65a07d4\" (UID: \"8df6994e-8a85-41b6-8da6-9a30b65a07d4\") "
Aug 13 00:53:20.991250 kubelet[2711]: I0813 00:53:20.990677 2711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8df6994e-8a85-41b6-8da6-9a30b65a07d4-clustermesh-secrets\") pod \"8df6994e-8a85-41b6-8da6-9a30b65a07d4\" (UID: \"8df6994e-8a85-41b6-8da6-9a30b65a07d4\") "
Aug 13 00:53:20.991250 kubelet[2711]: I0813 00:53:20.990693 2711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8df6994e-8a85-41b6-8da6-9a30b65a07d4-etc-cni-netd\") pod \"8df6994e-8a85-41b6-8da6-9a30b65a07d4\" (UID: \"8df6994e-8a85-41b6-8da6-9a30b65a07d4\") "
Aug 13 00:53:20.991250 kubelet[2711]: I0813 00:53:20.990713 2711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ghgm\" (UniqueName: \"kubernetes.io/projected/8df6994e-8a85-41b6-8da6-9a30b65a07d4-kube-api-access-6ghgm\") pod \"8df6994e-8a85-41b6-8da6-9a30b65a07d4\" (UID: \"8df6994e-8a85-41b6-8da6-9a30b65a07d4\") "
Aug 13 00:53:20.991465 kubelet[2711]: I0813 00:53:20.990729 2711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8df6994e-8a85-41b6-8da6-9a30b65a07d4-lib-modules\") pod \"8df6994e-8a85-41b6-8da6-9a30b65a07d4\" (UID: \"8df6994e-8a85-41b6-8da6-9a30b65a07d4\") "
Aug 13 00:53:20.991465 kubelet[2711]: I0813 00:53:20.990746 2711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8df6994e-8a85-41b6-8da6-9a30b65a07d4-cni-path\") pod \"8df6994e-8a85-41b6-8da6-9a30b65a07d4\" (UID: \"8df6994e-8a85-41b6-8da6-9a30b65a07d4\") "
Aug 13 00:53:20.991465 kubelet[2711]: I0813 00:53:20.990761 2711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8df6994e-8a85-41b6-8da6-9a30b65a07d4-host-proc-sys-net\") pod \"8df6994e-8a85-41b6-8da6-9a30b65a07d4\" (UID: \"8df6994e-8a85-41b6-8da6-9a30b65a07d4\") "
Aug 13 00:53:20.991465 kubelet[2711]: I0813 00:53:20.990776 2711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8df6994e-8a85-41b6-8da6-9a30b65a07d4-hostproc\") pod \"8df6994e-8a85-41b6-8da6-9a30b65a07d4\" (UID: \"8df6994e-8a85-41b6-8da6-9a30b65a07d4\") "
Aug 13 00:53:20.991465 kubelet[2711]: I0813 00:53:20.990793 2711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8df6994e-8a85-41b6-8da6-9a30b65a07d4-bpf-maps\") pod \"8df6994e-8a85-41b6-8da6-9a30b65a07d4\" (UID: \"8df6994e-8a85-41b6-8da6-9a30b65a07d4\") "
Aug 13 00:53:20.991250 kubelet[2711]: I0813 00:53:20.990813 2711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8df6994e-8a85-41b6-8da6-9a30b65a07d4-cilium-config-path\") pod \"8df6994e-8a85-41b6-8da6-9a30b65a07d4\" (UID: \"8df6994e-8a85-41b6-8da6-9a30b65a07d4\") "
Aug 13 00:53:20.991607 kubelet[2711]: I0813 00:53:20.990830 2711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8df6994e-8a85-41b6-8da6-9a30b65a07d4-xtables-lock\") pod \"8df6994e-8a85-41b6-8da6-9a30b65a07d4\" (UID: \"8df6994e-8a85-41b6-8da6-9a30b65a07d4\") "
Aug 13 00:53:20.991607 kubelet[2711]: I0813 00:53:20.990845 2711 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8df6994e-8a85-41b6-8da6-9a30b65a07d4-cilium-cgroup\") pod \"8df6994e-8a85-41b6-8da6-9a30b65a07d4\" (UID: \"8df6994e-8a85-41b6-8da6-9a30b65a07d4\") "
Aug 13 00:53:20.991607 kubelet[2711]: I0813 00:53:20.990878 2711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhwvf\" (UniqueName: \"kubernetes.io/projected/90281167-bc96-4ee7-975d-6bd06c3bd885-kube-api-access-vhwvf\") on node \"172-234-29-69\" DevicePath \"\""
Aug 13 00:53:20.991607 kubelet[2711]: I0813 00:53:20.990889 2711 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/90281167-bc96-4ee7-975d-6bd06c3bd885-cilium-config-path\") on node \"172-234-29-69\"
DevicePath \"\"" Aug 13 00:53:20.991607 kubelet[2711]: I0813 00:53:20.990943 2711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8df6994e-8a85-41b6-8da6-9a30b65a07d4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8df6994e-8a85-41b6-8da6-9a30b65a07d4" (UID: "8df6994e-8a85-41b6-8da6-9a30b65a07d4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:53:20.991761 kubelet[2711]: I0813 00:53:20.991741 2711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8df6994e-8a85-41b6-8da6-9a30b65a07d4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8df6994e-8a85-41b6-8da6-9a30b65a07d4" (UID: "8df6994e-8a85-41b6-8da6-9a30b65a07d4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:53:20.991833 kubelet[2711]: I0813 00:53:20.991820 2711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8df6994e-8a85-41b6-8da6-9a30b65a07d4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8df6994e-8a85-41b6-8da6-9a30b65a07d4" (UID: "8df6994e-8a85-41b6-8da6-9a30b65a07d4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:53:20.991896 kubelet[2711]: I0813 00:53:20.991882 2711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8df6994e-8a85-41b6-8da6-9a30b65a07d4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8df6994e-8a85-41b6-8da6-9a30b65a07d4" (UID: "8df6994e-8a85-41b6-8da6-9a30b65a07d4"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:53:20.992387 kubelet[2711]: I0813 00:53:20.992351 2711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8df6994e-8a85-41b6-8da6-9a30b65a07d4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8df6994e-8a85-41b6-8da6-9a30b65a07d4" (UID: "8df6994e-8a85-41b6-8da6-9a30b65a07d4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:53:20.995323 kubelet[2711]: I0813 00:53:20.994479 2711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8df6994e-8a85-41b6-8da6-9a30b65a07d4-cni-path" (OuterVolumeSpecName: "cni-path") pod "8df6994e-8a85-41b6-8da6-9a30b65a07d4" (UID: "8df6994e-8a85-41b6-8da6-9a30b65a07d4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:53:20.995323 kubelet[2711]: I0813 00:53:20.994510 2711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8df6994e-8a85-41b6-8da6-9a30b65a07d4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8df6994e-8a85-41b6-8da6-9a30b65a07d4" (UID: "8df6994e-8a85-41b6-8da6-9a30b65a07d4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:53:20.995323 kubelet[2711]: I0813 00:53:20.994525 2711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8df6994e-8a85-41b6-8da6-9a30b65a07d4-hostproc" (OuterVolumeSpecName: "hostproc") pod "8df6994e-8a85-41b6-8da6-9a30b65a07d4" (UID: "8df6994e-8a85-41b6-8da6-9a30b65a07d4"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:53:20.995323 kubelet[2711]: I0813 00:53:20.994538 2711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8df6994e-8a85-41b6-8da6-9a30b65a07d4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8df6994e-8a85-41b6-8da6-9a30b65a07d4" (UID: "8df6994e-8a85-41b6-8da6-9a30b65a07d4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:53:20.995323 kubelet[2711]: I0813 00:53:20.994651 2711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8df6994e-8a85-41b6-8da6-9a30b65a07d4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8df6994e-8a85-41b6-8da6-9a30b65a07d4" (UID: "8df6994e-8a85-41b6-8da6-9a30b65a07d4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:53:20.997652 kubelet[2711]: I0813 00:53:20.997613 2711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8df6994e-8a85-41b6-8da6-9a30b65a07d4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8df6994e-8a85-41b6-8da6-9a30b65a07d4" (UID: "8df6994e-8a85-41b6-8da6-9a30b65a07d4"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 00:53:20.999037 kubelet[2711]: I0813 00:53:20.999015 2711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8df6994e-8a85-41b6-8da6-9a30b65a07d4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8df6994e-8a85-41b6-8da6-9a30b65a07d4" (UID: "8df6994e-8a85-41b6-8da6-9a30b65a07d4"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 00:53:20.999329 kubelet[2711]: I0813 00:53:20.999313 2711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8df6994e-8a85-41b6-8da6-9a30b65a07d4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8df6994e-8a85-41b6-8da6-9a30b65a07d4" (UID: "8df6994e-8a85-41b6-8da6-9a30b65a07d4"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 00:53:20.999554 kubelet[2711]: I0813 00:53:20.999499 2711 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8df6994e-8a85-41b6-8da6-9a30b65a07d4-kube-api-access-6ghgm" (OuterVolumeSpecName: "kube-api-access-6ghgm") pod "8df6994e-8a85-41b6-8da6-9a30b65a07d4" (UID: "8df6994e-8a85-41b6-8da6-9a30b65a07d4"). InnerVolumeSpecName "kube-api-access-6ghgm". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 00:53:21.091782 kubelet[2711]: I0813 00:53:21.091741 2711 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8df6994e-8a85-41b6-8da6-9a30b65a07d4-etc-cni-netd\") on node \"172-234-29-69\" DevicePath \"\"" Aug 13 00:53:21.091782 kubelet[2711]: I0813 00:53:21.091766 2711 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ghgm\" (UniqueName: \"kubernetes.io/projected/8df6994e-8a85-41b6-8da6-9a30b65a07d4-kube-api-access-6ghgm\") on node \"172-234-29-69\" DevicePath \"\"" Aug 13 00:53:21.091782 kubelet[2711]: I0813 00:53:21.091783 2711 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8df6994e-8a85-41b6-8da6-9a30b65a07d4-lib-modules\") on node \"172-234-29-69\" DevicePath \"\"" Aug 13 00:53:21.091782 kubelet[2711]: I0813 00:53:21.091791 2711 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8df6994e-8a85-41b6-8da6-9a30b65a07d4-cni-path\") on node 
\"172-234-29-69\" DevicePath \"\"" Aug 13 00:53:21.091782 kubelet[2711]: I0813 00:53:21.091799 2711 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8df6994e-8a85-41b6-8da6-9a30b65a07d4-host-proc-sys-net\") on node \"172-234-29-69\" DevicePath \"\"" Aug 13 00:53:21.091782 kubelet[2711]: I0813 00:53:21.091806 2711 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8df6994e-8a85-41b6-8da6-9a30b65a07d4-hostproc\") on node \"172-234-29-69\" DevicePath \"\"" Aug 13 00:53:21.091782 kubelet[2711]: I0813 00:53:21.091814 2711 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8df6994e-8a85-41b6-8da6-9a30b65a07d4-bpf-maps\") on node \"172-234-29-69\" DevicePath \"\"" Aug 13 00:53:21.092173 kubelet[2711]: I0813 00:53:21.091824 2711 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8df6994e-8a85-41b6-8da6-9a30b65a07d4-cilium-config-path\") on node \"172-234-29-69\" DevicePath \"\"" Aug 13 00:53:21.092173 kubelet[2711]: I0813 00:53:21.091833 2711 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8df6994e-8a85-41b6-8da6-9a30b65a07d4-xtables-lock\") on node \"172-234-29-69\" DevicePath \"\"" Aug 13 00:53:21.092173 kubelet[2711]: I0813 00:53:21.091840 2711 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8df6994e-8a85-41b6-8da6-9a30b65a07d4-cilium-cgroup\") on node \"172-234-29-69\" DevicePath \"\"" Aug 13 00:53:21.092173 kubelet[2711]: I0813 00:53:21.091849 2711 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8df6994e-8a85-41b6-8da6-9a30b65a07d4-hubble-tls\") on node \"172-234-29-69\" DevicePath \"\"" Aug 13 00:53:21.092173 kubelet[2711]: I0813 00:53:21.091856 
2711 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8df6994e-8a85-41b6-8da6-9a30b65a07d4-cilium-run\") on node \"172-234-29-69\" DevicePath \"\"" Aug 13 00:53:21.092173 kubelet[2711]: I0813 00:53:21.091864 2711 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8df6994e-8a85-41b6-8da6-9a30b65a07d4-host-proc-sys-kernel\") on node \"172-234-29-69\" DevicePath \"\"" Aug 13 00:53:21.092173 kubelet[2711]: I0813 00:53:21.091872 2711 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8df6994e-8a85-41b6-8da6-9a30b65a07d4-clustermesh-secrets\") on node \"172-234-29-69\" DevicePath \"\"" Aug 13 00:53:21.385028 kubelet[2711]: I0813 00:53:21.384974 2711 scope.go:117] "RemoveContainer" containerID="7c278e5781c0f00bdbd9b76cf8dd05a6478545bd84cb448a3954c34de2e6a72e" Aug 13 00:53:21.387989 containerd[1542]: time="2025-08-13T00:53:21.387934223Z" level=info msg="RemoveContainer for \"7c278e5781c0f00bdbd9b76cf8dd05a6478545bd84cb448a3954c34de2e6a72e\"" Aug 13 00:53:21.397029 systemd[1]: Removed slice kubepods-besteffort-pod90281167_bc96_4ee7_975d_6bd06c3bd885.slice - libcontainer container kubepods-besteffort-pod90281167_bc96_4ee7_975d_6bd06c3bd885.slice. 
Aug 13 00:53:21.399551 containerd[1542]: time="2025-08-13T00:53:21.399143313Z" level=info msg="RemoveContainer for \"7c278e5781c0f00bdbd9b76cf8dd05a6478545bd84cb448a3954c34de2e6a72e\" returns successfully" Aug 13 00:53:21.399662 kubelet[2711]: I0813 00:53:21.399644 2711 scope.go:117] "RemoveContainer" containerID="7c278e5781c0f00bdbd9b76cf8dd05a6478545bd84cb448a3954c34de2e6a72e" Aug 13 00:53:21.401842 containerd[1542]: time="2025-08-13T00:53:21.401793994Z" level=error msg="ContainerStatus for \"7c278e5781c0f00bdbd9b76cf8dd05a6478545bd84cb448a3954c34de2e6a72e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7c278e5781c0f00bdbd9b76cf8dd05a6478545bd84cb448a3954c34de2e6a72e\": not found" Aug 13 00:53:21.402021 kubelet[2711]: E0813 00:53:21.401997 2711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7c278e5781c0f00bdbd9b76cf8dd05a6478545bd84cb448a3954c34de2e6a72e\": not found" containerID="7c278e5781c0f00bdbd9b76cf8dd05a6478545bd84cb448a3954c34de2e6a72e" Aug 13 00:53:21.402167 kubelet[2711]: I0813 00:53:21.402094 2711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7c278e5781c0f00bdbd9b76cf8dd05a6478545bd84cb448a3954c34de2e6a72e"} err="failed to get container status \"7c278e5781c0f00bdbd9b76cf8dd05a6478545bd84cb448a3954c34de2e6a72e\": rpc error: code = NotFound desc = an error occurred when try to find container \"7c278e5781c0f00bdbd9b76cf8dd05a6478545bd84cb448a3954c34de2e6a72e\": not found" Aug 13 00:53:21.402221 kubelet[2711]: I0813 00:53:21.402211 2711 scope.go:117] "RemoveContainer" containerID="037f6ca83e7beac0860b9e77b4a8845d634d7930c8d828548df518102a85d581" Aug 13 00:53:21.405147 containerd[1542]: time="2025-08-13T00:53:21.405040532Z" level=info msg="RemoveContainer for \"037f6ca83e7beac0860b9e77b4a8845d634d7930c8d828548df518102a85d581\"" Aug 13 00:53:21.408238 
systemd[1]: Removed slice kubepods-burstable-pod8df6994e_8a85_41b6_8da6_9a30b65a07d4.slice - libcontainer container kubepods-burstable-pod8df6994e_8a85_41b6_8da6_9a30b65a07d4.slice. Aug 13 00:53:21.408368 systemd[1]: kubepods-burstable-pod8df6994e_8a85_41b6_8da6_9a30b65a07d4.slice: Consumed 6.289s CPU time, 121.5M memory peak, 144K read from disk, 13.3M written to disk. Aug 13 00:53:21.415377 containerd[1542]: time="2025-08-13T00:53:21.415267286Z" level=info msg="RemoveContainer for \"037f6ca83e7beac0860b9e77b4a8845d634d7930c8d828548df518102a85d581\" returns successfully" Aug 13 00:53:21.416837 kubelet[2711]: I0813 00:53:21.416548 2711 scope.go:117] "RemoveContainer" containerID="0546c72aef099fdd6782322a334120ba5116a36f4a9dc7e4781cecfe842ba7df" Aug 13 00:53:21.423688 containerd[1542]: time="2025-08-13T00:53:21.423624576Z" level=info msg="RemoveContainer for \"0546c72aef099fdd6782322a334120ba5116a36f4a9dc7e4781cecfe842ba7df\"" Aug 13 00:53:21.428161 containerd[1542]: time="2025-08-13T00:53:21.428108449Z" level=info msg="RemoveContainer for \"0546c72aef099fdd6782322a334120ba5116a36f4a9dc7e4781cecfe842ba7df\" returns successfully" Aug 13 00:53:21.429524 kubelet[2711]: I0813 00:53:21.428385 2711 scope.go:117] "RemoveContainer" containerID="81263c8649857a46f8bdb198072c0f97ba4060f974c6367674c23854d4dff5b6" Aug 13 00:53:21.435696 containerd[1542]: time="2025-08-13T00:53:21.435155275Z" level=info msg="RemoveContainer for \"81263c8649857a46f8bdb198072c0f97ba4060f974c6367674c23854d4dff5b6\"" Aug 13 00:53:21.438960 containerd[1542]: time="2025-08-13T00:53:21.438929431Z" level=info msg="RemoveContainer for \"81263c8649857a46f8bdb198072c0f97ba4060f974c6367674c23854d4dff5b6\" returns successfully" Aug 13 00:53:21.439159 kubelet[2711]: I0813 00:53:21.439130 2711 scope.go:117] "RemoveContainer" containerID="876faeb7c58d7a3056d1969a8d7450b52a51eb4bc56e043d05e5925e0e5ccc29" Aug 13 00:53:21.440741 containerd[1542]: time="2025-08-13T00:53:21.440695225Z" level=info msg="RemoveContainer 
for \"876faeb7c58d7a3056d1969a8d7450b52a51eb4bc56e043d05e5925e0e5ccc29\"" Aug 13 00:53:21.448859 containerd[1542]: time="2025-08-13T00:53:21.448082448Z" level=info msg="RemoveContainer for \"876faeb7c58d7a3056d1969a8d7450b52a51eb4bc56e043d05e5925e0e5ccc29\" returns successfully" Aug 13 00:53:21.449001 kubelet[2711]: I0813 00:53:21.448920 2711 scope.go:117] "RemoveContainer" containerID="ca05aee711ed5f20920afe55ee30e6fc8657921e32332a57bdaa1ed33b42c9c1" Aug 13 00:53:21.450583 containerd[1542]: time="2025-08-13T00:53:21.450549130Z" level=info msg="RemoveContainer for \"ca05aee711ed5f20920afe55ee30e6fc8657921e32332a57bdaa1ed33b42c9c1\"" Aug 13 00:53:21.453321 containerd[1542]: time="2025-08-13T00:53:21.453286440Z" level=info msg="RemoveContainer for \"ca05aee711ed5f20920afe55ee30e6fc8657921e32332a57bdaa1ed33b42c9c1\" returns successfully" Aug 13 00:53:21.453567 kubelet[2711]: I0813 00:53:21.453542 2711 scope.go:117] "RemoveContainer" containerID="037f6ca83e7beac0860b9e77b4a8845d634d7930c8d828548df518102a85d581" Aug 13 00:53:21.453960 containerd[1542]: time="2025-08-13T00:53:21.453905888Z" level=error msg="ContainerStatus for \"037f6ca83e7beac0860b9e77b4a8845d634d7930c8d828548df518102a85d581\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"037f6ca83e7beac0860b9e77b4a8845d634d7930c8d828548df518102a85d581\": not found" Aug 13 00:53:21.454190 kubelet[2711]: E0813 00:53:21.454164 2711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"037f6ca83e7beac0860b9e77b4a8845d634d7930c8d828548df518102a85d581\": not found" containerID="037f6ca83e7beac0860b9e77b4a8845d634d7930c8d828548df518102a85d581" Aug 13 00:53:21.454334 kubelet[2711]: I0813 00:53:21.454256 2711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"037f6ca83e7beac0860b9e77b4a8845d634d7930c8d828548df518102a85d581"} err="failed to 
get container status \"037f6ca83e7beac0860b9e77b4a8845d634d7930c8d828548df518102a85d581\": rpc error: code = NotFound desc = an error occurred when try to find container \"037f6ca83e7beac0860b9e77b4a8845d634d7930c8d828548df518102a85d581\": not found" Aug 13 00:53:21.454385 kubelet[2711]: I0813 00:53:21.454290 2711 scope.go:117] "RemoveContainer" containerID="0546c72aef099fdd6782322a334120ba5116a36f4a9dc7e4781cecfe842ba7df" Aug 13 00:53:21.454695 containerd[1542]: time="2025-08-13T00:53:21.454648635Z" level=error msg="ContainerStatus for \"0546c72aef099fdd6782322a334120ba5116a36f4a9dc7e4781cecfe842ba7df\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0546c72aef099fdd6782322a334120ba5116a36f4a9dc7e4781cecfe842ba7df\": not found" Aug 13 00:53:21.454874 kubelet[2711]: E0813 00:53:21.454840 2711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0546c72aef099fdd6782322a334120ba5116a36f4a9dc7e4781cecfe842ba7df\": not found" containerID="0546c72aef099fdd6782322a334120ba5116a36f4a9dc7e4781cecfe842ba7df" Aug 13 00:53:21.454914 kubelet[2711]: I0813 00:53:21.454875 2711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0546c72aef099fdd6782322a334120ba5116a36f4a9dc7e4781cecfe842ba7df"} err="failed to get container status \"0546c72aef099fdd6782322a334120ba5116a36f4a9dc7e4781cecfe842ba7df\": rpc error: code = NotFound desc = an error occurred when try to find container \"0546c72aef099fdd6782322a334120ba5116a36f4a9dc7e4781cecfe842ba7df\": not found" Aug 13 00:53:21.454914 kubelet[2711]: I0813 00:53:21.454898 2711 scope.go:117] "RemoveContainer" containerID="81263c8649857a46f8bdb198072c0f97ba4060f974c6367674c23854d4dff5b6" Aug 13 00:53:21.455073 containerd[1542]: time="2025-08-13T00:53:21.455030044Z" level=error msg="ContainerStatus for 
\"81263c8649857a46f8bdb198072c0f97ba4060f974c6367674c23854d4dff5b6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"81263c8649857a46f8bdb198072c0f97ba4060f974c6367674c23854d4dff5b6\": not found" Aug 13 00:53:21.455426 kubelet[2711]: E0813 00:53:21.455188 2711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"81263c8649857a46f8bdb198072c0f97ba4060f974c6367674c23854d4dff5b6\": not found" containerID="81263c8649857a46f8bdb198072c0f97ba4060f974c6367674c23854d4dff5b6" Aug 13 00:53:21.455426 kubelet[2711]: I0813 00:53:21.455238 2711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"81263c8649857a46f8bdb198072c0f97ba4060f974c6367674c23854d4dff5b6"} err="failed to get container status \"81263c8649857a46f8bdb198072c0f97ba4060f974c6367674c23854d4dff5b6\": rpc error: code = NotFound desc = an error occurred when try to find container \"81263c8649857a46f8bdb198072c0f97ba4060f974c6367674c23854d4dff5b6\": not found" Aug 13 00:53:21.455426 kubelet[2711]: I0813 00:53:21.455253 2711 scope.go:117] "RemoveContainer" containerID="876faeb7c58d7a3056d1969a8d7450b52a51eb4bc56e043d05e5925e0e5ccc29" Aug 13 00:53:21.455524 containerd[1542]: time="2025-08-13T00:53:21.455491113Z" level=error msg="ContainerStatus for \"876faeb7c58d7a3056d1969a8d7450b52a51eb4bc56e043d05e5925e0e5ccc29\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"876faeb7c58d7a3056d1969a8d7450b52a51eb4bc56e043d05e5925e0e5ccc29\": not found" Aug 13 00:53:21.455687 kubelet[2711]: E0813 00:53:21.455668 2711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"876faeb7c58d7a3056d1969a8d7450b52a51eb4bc56e043d05e5925e0e5ccc29\": not found" 
containerID="876faeb7c58d7a3056d1969a8d7450b52a51eb4bc56e043d05e5925e0e5ccc29" Aug 13 00:53:21.455829 kubelet[2711]: I0813 00:53:21.455747 2711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"876faeb7c58d7a3056d1969a8d7450b52a51eb4bc56e043d05e5925e0e5ccc29"} err="failed to get container status \"876faeb7c58d7a3056d1969a8d7450b52a51eb4bc56e043d05e5925e0e5ccc29\": rpc error: code = NotFound desc = an error occurred when try to find container \"876faeb7c58d7a3056d1969a8d7450b52a51eb4bc56e043d05e5925e0e5ccc29\": not found" Aug 13 00:53:21.455829 kubelet[2711]: I0813 00:53:21.455766 2711 scope.go:117] "RemoveContainer" containerID="ca05aee711ed5f20920afe55ee30e6fc8657921e32332a57bdaa1ed33b42c9c1" Aug 13 00:53:21.456061 containerd[1542]: time="2025-08-13T00:53:21.456006190Z" level=error msg="ContainerStatus for \"ca05aee711ed5f20920afe55ee30e6fc8657921e32332a57bdaa1ed33b42c9c1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ca05aee711ed5f20920afe55ee30e6fc8657921e32332a57bdaa1ed33b42c9c1\": not found" Aug 13 00:53:21.456232 kubelet[2711]: E0813 00:53:21.456191 2711 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ca05aee711ed5f20920afe55ee30e6fc8657921e32332a57bdaa1ed33b42c9c1\": not found" containerID="ca05aee711ed5f20920afe55ee30e6fc8657921e32332a57bdaa1ed33b42c9c1" Aug 13 00:53:21.456267 kubelet[2711]: I0813 00:53:21.456237 2711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ca05aee711ed5f20920afe55ee30e6fc8657921e32332a57bdaa1ed33b42c9c1"} err="failed to get container status \"ca05aee711ed5f20920afe55ee30e6fc8657921e32332a57bdaa1ed33b42c9c1\": rpc error: code = NotFound desc = an error occurred when try to find container \"ca05aee711ed5f20920afe55ee30e6fc8657921e32332a57bdaa1ed33b42c9c1\": not found" Aug 13 
00:53:21.657318 kubelet[2711]: I0813 00:53:21.657263 2711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8df6994e-8a85-41b6-8da6-9a30b65a07d4" path="/var/lib/kubelet/pods/8df6994e-8a85-41b6-8da6-9a30b65a07d4/volumes" Aug 13 00:53:21.658176 kubelet[2711]: I0813 00:53:21.658138 2711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90281167-bc96-4ee7-975d-6bd06c3bd885" path="/var/lib/kubelet/pods/90281167-bc96-4ee7-975d-6bd06c3bd885/volumes" Aug 13 00:53:21.678542 systemd[1]: var-lib-kubelet-pods-90281167\x2dbc96\x2d4ee7\x2d975d\x2d6bd06c3bd885-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvhwvf.mount: Deactivated successfully. Aug 13 00:53:21.678689 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-68b809f38d919330218ef88f91cd0ed7ad095b1d88a46b8d6e7c3c2d5f3c21a0-shm.mount: Deactivated successfully. Aug 13 00:53:21.678788 systemd[1]: var-lib-kubelet-pods-8df6994e\x2d8a85\x2d41b6\x2d8da6\x2d9a30b65a07d4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6ghgm.mount: Deactivated successfully. Aug 13 00:53:21.678897 systemd[1]: var-lib-kubelet-pods-8df6994e\x2d8a85\x2d41b6\x2d8da6\x2d9a30b65a07d4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 13 00:53:21.678973 systemd[1]: var-lib-kubelet-pods-8df6994e\x2d8a85\x2d41b6\x2d8da6\x2d9a30b65a07d4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 13 00:53:22.619755 sshd[4518]: Connection closed by 147.75.109.163 port 57352 Aug 13 00:53:22.621620 sshd-session[4516]: pam_unix(sshd:session): session closed for user core Aug 13 00:53:22.627462 systemd[1]: sshd@54-172.234.29.69:22-147.75.109.163:57352.service: Deactivated successfully. Aug 13 00:53:22.629922 systemd[1]: session-55.scope: Deactivated successfully. Aug 13 00:53:22.634689 systemd-logind[1522]: Session 55 logged out. Waiting for processes to exit. Aug 13 00:53:22.637157 systemd-logind[1522]: Removed session 55. 
Aug 13 00:53:22.683483 systemd[1]: Started sshd@55-172.234.29.69:22-147.75.109.163:57362.service - OpenSSH per-connection server daemon (147.75.109.163:57362). Aug 13 00:53:23.022162 sshd[4669]: Accepted publickey for core from 147.75.109.163 port 57362 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw Aug 13 00:53:23.024611 sshd-session[4669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:53:23.031150 systemd-logind[1522]: New session 56 of user core. Aug 13 00:53:23.037444 systemd[1]: Started session-56.scope - Session 56 of User core. Aug 13 00:53:23.640531 kubelet[2711]: I0813 00:53:23.640083 2711 setters.go:600] "Node became not ready" node="172-234-29-69" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T00:53:23Z","lastTransitionTime":"2025-08-13T00:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Aug 13 00:53:23.645815 kubelet[2711]: E0813 00:53:23.645625 2711 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8df6994e-8a85-41b6-8da6-9a30b65a07d4" containerName="mount-cgroup" Aug 13 00:53:23.645815 kubelet[2711]: E0813 00:53:23.645662 2711 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8df6994e-8a85-41b6-8da6-9a30b65a07d4" containerName="clean-cilium-state" Aug 13 00:53:23.645815 kubelet[2711]: E0813 00:53:23.645669 2711 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8df6994e-8a85-41b6-8da6-9a30b65a07d4" containerName="cilium-agent" Aug 13 00:53:23.645815 kubelet[2711]: E0813 00:53:23.645676 2711 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8df6994e-8a85-41b6-8da6-9a30b65a07d4" containerName="apply-sysctl-overwrites" Aug 13 00:53:23.645815 kubelet[2711]: E0813 00:53:23.645681 2711 cpu_manager.go:395] "RemoveStaleState: removing container" 
podUID="90281167-bc96-4ee7-975d-6bd06c3bd885" containerName="cilium-operator" Aug 13 00:53:23.645815 kubelet[2711]: E0813 00:53:23.645686 2711 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8df6994e-8a85-41b6-8da6-9a30b65a07d4" containerName="mount-bpf-fs" Aug 13 00:53:23.645815 kubelet[2711]: I0813 00:53:23.645704 2711 memory_manager.go:354] "RemoveStaleState removing state" podUID="90281167-bc96-4ee7-975d-6bd06c3bd885" containerName="cilium-operator" Aug 13 00:53:23.645815 kubelet[2711]: I0813 00:53:23.645710 2711 memory_manager.go:354] "RemoveStaleState removing state" podUID="8df6994e-8a85-41b6-8da6-9a30b65a07d4" containerName="cilium-agent" Aug 13 00:53:23.651640 kubelet[2711]: W0813 00:53:23.651586 2711 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:172-234-29-69" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172-234-29-69' and this object Aug 13 00:53:23.651640 kubelet[2711]: E0813 00:53:23.651618 2711 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:172-234-29-69\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-234-29-69' and this object" logger="UnhandledError" Aug 13 00:53:23.656524 systemd[1]: Created slice kubepods-burstable-pod24d3614f_3041_43f9_8ac5_dc6d47893cdd.slice - libcontainer container kubepods-burstable-pod24d3614f_3041_43f9_8ac5_dc6d47893cdd.slice. 
Aug 13 00:53:23.660391 sshd[4671]: Connection closed by 147.75.109.163 port 57362
Aug 13 00:53:23.660804 sshd-session[4669]: pam_unix(sshd:session): session closed for user core
Aug 13 00:53:23.666271 systemd[1]: sshd@55-172.234.29.69:22-147.75.109.163:57362.service: Deactivated successfully.
Aug 13 00:53:23.668234 systemd[1]: session-56.scope: Deactivated successfully.
Aug 13 00:53:23.668265 systemd-logind[1522]: Session 56 logged out. Waiting for processes to exit.
Aug 13 00:53:23.673825 systemd-logind[1522]: Removed session 56.
Aug 13 00:53:23.708459 kubelet[2711]: I0813 00:53:23.708408 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/24d3614f-3041-43f9-8ac5-dc6d47893cdd-hubble-tls\") pod \"cilium-7m5sc\" (UID: \"24d3614f-3041-43f9-8ac5-dc6d47893cdd\") " pod="kube-system/cilium-7m5sc"
Aug 13 00:53:23.708578 kubelet[2711]: I0813 00:53:23.708498 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/24d3614f-3041-43f9-8ac5-dc6d47893cdd-cilium-ipsec-secrets\") pod \"cilium-7m5sc\" (UID: \"24d3614f-3041-43f9-8ac5-dc6d47893cdd\") " pod="kube-system/cilium-7m5sc"
Aug 13 00:53:23.708578 kubelet[2711]: I0813 00:53:23.708518 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g27v5\" (UniqueName: \"kubernetes.io/projected/24d3614f-3041-43f9-8ac5-dc6d47893cdd-kube-api-access-g27v5\") pod \"cilium-7m5sc\" (UID: \"24d3614f-3041-43f9-8ac5-dc6d47893cdd\") " pod="kube-system/cilium-7m5sc"
Aug 13 00:53:23.708578 kubelet[2711]: I0813 00:53:23.708534 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/24d3614f-3041-43f9-8ac5-dc6d47893cdd-cni-path\") pod \"cilium-7m5sc\" (UID: \"24d3614f-3041-43f9-8ac5-dc6d47893cdd\") " pod="kube-system/cilium-7m5sc"
Aug 13 00:53:23.708578 kubelet[2711]: I0813 00:53:23.708579 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/24d3614f-3041-43f9-8ac5-dc6d47893cdd-etc-cni-netd\") pod \"cilium-7m5sc\" (UID: \"24d3614f-3041-43f9-8ac5-dc6d47893cdd\") " pod="kube-system/cilium-7m5sc"
Aug 13 00:53:23.708664 kubelet[2711]: I0813 00:53:23.708598 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/24d3614f-3041-43f9-8ac5-dc6d47893cdd-cilium-config-path\") pod \"cilium-7m5sc\" (UID: \"24d3614f-3041-43f9-8ac5-dc6d47893cdd\") " pod="kube-system/cilium-7m5sc"
Aug 13 00:53:23.708664 kubelet[2711]: I0813 00:53:23.708649 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/24d3614f-3041-43f9-8ac5-dc6d47893cdd-host-proc-sys-kernel\") pod \"cilium-7m5sc\" (UID: \"24d3614f-3041-43f9-8ac5-dc6d47893cdd\") " pod="kube-system/cilium-7m5sc"
Aug 13 00:53:23.708712 kubelet[2711]: I0813 00:53:23.708665 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/24d3614f-3041-43f9-8ac5-dc6d47893cdd-cilium-run\") pod \"cilium-7m5sc\" (UID: \"24d3614f-3041-43f9-8ac5-dc6d47893cdd\") " pod="kube-system/cilium-7m5sc"
Aug 13 00:53:23.708712 kubelet[2711]: I0813 00:53:23.708681 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/24d3614f-3041-43f9-8ac5-dc6d47893cdd-bpf-maps\") pod \"cilium-7m5sc\" (UID: \"24d3614f-3041-43f9-8ac5-dc6d47893cdd\") " pod="kube-system/cilium-7m5sc"
Aug 13 00:53:23.708755 kubelet[2711]: I0813 00:53:23.708728 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/24d3614f-3041-43f9-8ac5-dc6d47893cdd-cilium-cgroup\") pod \"cilium-7m5sc\" (UID: \"24d3614f-3041-43f9-8ac5-dc6d47893cdd\") " pod="kube-system/cilium-7m5sc"
Aug 13 00:53:23.708755 kubelet[2711]: I0813 00:53:23.708742 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/24d3614f-3041-43f9-8ac5-dc6d47893cdd-xtables-lock\") pod \"cilium-7m5sc\" (UID: \"24d3614f-3041-43f9-8ac5-dc6d47893cdd\") " pod="kube-system/cilium-7m5sc"
Aug 13 00:53:23.708799 kubelet[2711]: I0813 00:53:23.708755 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/24d3614f-3041-43f9-8ac5-dc6d47893cdd-host-proc-sys-net\") pod \"cilium-7m5sc\" (UID: \"24d3614f-3041-43f9-8ac5-dc6d47893cdd\") " pod="kube-system/cilium-7m5sc"
Aug 13 00:53:23.708823 kubelet[2711]: I0813 00:53:23.708800 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/24d3614f-3041-43f9-8ac5-dc6d47893cdd-hostproc\") pod \"cilium-7m5sc\" (UID: \"24d3614f-3041-43f9-8ac5-dc6d47893cdd\") " pod="kube-system/cilium-7m5sc"
Aug 13 00:53:23.708823 kubelet[2711]: I0813 00:53:23.708816 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/24d3614f-3041-43f9-8ac5-dc6d47893cdd-lib-modules\") pod \"cilium-7m5sc\" (UID: \"24d3614f-3041-43f9-8ac5-dc6d47893cdd\") " pod="kube-system/cilium-7m5sc"
Aug 13 00:53:23.708866 kubelet[2711]: I0813 00:53:23.708829 2711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/24d3614f-3041-43f9-8ac5-dc6d47893cdd-clustermesh-secrets\") pod \"cilium-7m5sc\" (UID: \"24d3614f-3041-43f9-8ac5-dc6d47893cdd\") " pod="kube-system/cilium-7m5sc"
Aug 13 00:53:23.719681 systemd[1]: Started sshd@56-172.234.29.69:22-147.75.109.163:57370.service - OpenSSH per-connection server daemon (147.75.109.163:57370).
Aug 13 00:53:24.060849 sshd[4682]: Accepted publickey for core from 147.75.109.163 port 57370 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:53:24.062931 sshd-session[4682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:53:24.069004 systemd-logind[1522]: New session 57 of user core.
Aug 13 00:53:24.074418 systemd[1]: Started session-57.scope - Session 57 of User core.
Aug 13 00:53:24.303318 sshd[4689]: Connection closed by 147.75.109.163 port 57370
Aug 13 00:53:24.303888 sshd-session[4682]: pam_unix(sshd:session): session closed for user core
Aug 13 00:53:24.309002 systemd[1]: sshd@56-172.234.29.69:22-147.75.109.163:57370.service: Deactivated successfully.
Aug 13 00:53:24.311226 systemd[1]: session-57.scope: Deactivated successfully.
Aug 13 00:53:24.312834 systemd-logind[1522]: Session 57 logged out. Waiting for processes to exit.
Aug 13 00:53:24.314176 systemd-logind[1522]: Removed session 57.
Aug 13 00:53:24.370138 systemd[1]: Started sshd@57-172.234.29.69:22-147.75.109.163:57378.service - OpenSSH per-connection server daemon (147.75.109.163:57378).
Aug 13 00:53:24.715856 sshd[4696]: Accepted publickey for core from 147.75.109.163 port 57378 ssh2: RSA SHA256:P6SbzdJer/BWyEn9/9VAOqbJC08j66ZTaaExcpjUbsw
Aug 13 00:53:24.717765 sshd-session[4696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:53:24.723703 systemd-logind[1522]: New session 58 of user core.
Aug 13 00:53:24.731445 systemd[1]: Started session-58.scope - Session 58 of User core.
Aug 13 00:53:24.810939 kubelet[2711]: E0813 00:53:24.810894 2711 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition
Aug 13 00:53:24.811339 kubelet[2711]: E0813 00:53:24.811020 2711 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24d3614f-3041-43f9-8ac5-dc6d47893cdd-cilium-config-path podName:24d3614f-3041-43f9-8ac5-dc6d47893cdd nodeName:}" failed. No retries permitted until 2025-08-13 00:53:25.310996602 +0000 UTC m=+359.753535180 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/24d3614f-3041-43f9-8ac5-dc6d47893cdd-cilium-config-path") pod "cilium-7m5sc" (UID: "24d3614f-3041-43f9-8ac5-dc6d47893cdd") : failed to sync configmap cache: timed out waiting for the condition
Aug 13 00:53:25.469398 kubelet[2711]: E0813 00:53:25.469359 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:53:25.470676 containerd[1542]: time="2025-08-13T00:53:25.470614540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7m5sc,Uid:24d3614f-3041-43f9-8ac5-dc6d47893cdd,Namespace:kube-system,Attempt:0,}"
Aug 13 00:53:25.491343 containerd[1542]: time="2025-08-13T00:53:25.491272477Z" level=info msg="connecting to shim cb962117e185c0932b4107a08c2344bb41cbb60820e2c858881d3fb4f3e12add" address="unix:///run/containerd/s/97729cf201c8dc179c1e234e1c9f6e01f5494fd0a490dfa94935a8b3e2676ea5" namespace=k8s.io protocol=ttrpc version=3
Aug 13 00:53:25.523441 systemd[1]: Started cri-containerd-cb962117e185c0932b4107a08c2344bb41cbb60820e2c858881d3fb4f3e12add.scope - libcontainer container cb962117e185c0932b4107a08c2344bb41cbb60820e2c858881d3fb4f3e12add.
Aug 13 00:53:25.550143 containerd[1542]: time="2025-08-13T00:53:25.550106771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7m5sc,Uid:24d3614f-3041-43f9-8ac5-dc6d47893cdd,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb962117e185c0932b4107a08c2344bb41cbb60820e2c858881d3fb4f3e12add\""
Aug 13 00:53:25.551070 kubelet[2711]: E0813 00:53:25.551032 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:53:25.553051 containerd[1542]: time="2025-08-13T00:53:25.553023600Z" level=info msg="CreateContainer within sandbox \"cb962117e185c0932b4107a08c2344bb41cbb60820e2c858881d3fb4f3e12add\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Aug 13 00:53:25.565365 containerd[1542]: time="2025-08-13T00:53:25.563698673Z" level=info msg="Container aae19de4723c03e40650673441c649b17380c2bef9621e0d08cef791540a402c: CDI devices from CRI Config.CDIDevices: []"
Aug 13 00:53:25.567275 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4221202973.mount: Deactivated successfully.
Aug 13 00:53:25.571565 containerd[1542]: time="2025-08-13T00:53:25.571527265Z" level=info msg="CreateContainer within sandbox \"cb962117e185c0932b4107a08c2344bb41cbb60820e2c858881d3fb4f3e12add\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"aae19de4723c03e40650673441c649b17380c2bef9621e0d08cef791540a402c\""
Aug 13 00:53:25.572107 containerd[1542]: time="2025-08-13T00:53:25.572067074Z" level=info msg="StartContainer for \"aae19de4723c03e40650673441c649b17380c2bef9621e0d08cef791540a402c\""
Aug 13 00:53:25.572776 containerd[1542]: time="2025-08-13T00:53:25.572737682Z" level=info msg="connecting to shim aae19de4723c03e40650673441c649b17380c2bef9621e0d08cef791540a402c" address="unix:///run/containerd/s/97729cf201c8dc179c1e234e1c9f6e01f5494fd0a490dfa94935a8b3e2676ea5" protocol=ttrpc version=3
Aug 13 00:53:25.595423 systemd[1]: Started cri-containerd-aae19de4723c03e40650673441c649b17380c2bef9621e0d08cef791540a402c.scope - libcontainer container aae19de4723c03e40650673441c649b17380c2bef9621e0d08cef791540a402c.
Aug 13 00:53:25.627810 containerd[1542]: time="2025-08-13T00:53:25.627742268Z" level=info msg="StartContainer for \"aae19de4723c03e40650673441c649b17380c2bef9621e0d08cef791540a402c\" returns successfully"
Aug 13 00:53:25.637032 systemd[1]: cri-containerd-aae19de4723c03e40650673441c649b17380c2bef9621e0d08cef791540a402c.scope: Deactivated successfully.
Aug 13 00:53:25.638210 containerd[1542]: time="2025-08-13T00:53:25.638132202Z" level=info msg="TaskExit event in podsandbox handler container_id:\"aae19de4723c03e40650673441c649b17380c2bef9621e0d08cef791540a402c\" id:\"aae19de4723c03e40650673441c649b17380c2bef9621e0d08cef791540a402c\" pid:4761 exited_at:{seconds:1755046405 nanos:637635103}"
Aug 13 00:53:25.638210 containerd[1542]: time="2025-08-13T00:53:25.638200501Z" level=info msg="received exit event container_id:\"aae19de4723c03e40650673441c649b17380c2bef9621e0d08cef791540a402c\" id:\"aae19de4723c03e40650673441c649b17380c2bef9621e0d08cef791540a402c\" pid:4761 exited_at:{seconds:1755046405 nanos:637635103}"
Aug 13 00:53:25.666333 containerd[1542]: time="2025-08-13T00:53:25.665577745Z" level=info msg="StopPodSandbox for \"68b809f38d919330218ef88f91cd0ed7ad095b1d88a46b8d6e7c3c2d5f3c21a0\""
Aug 13 00:53:25.666333 containerd[1542]: time="2025-08-13T00:53:25.665739524Z" level=info msg="TearDown network for sandbox \"68b809f38d919330218ef88f91cd0ed7ad095b1d88a46b8d6e7c3c2d5f3c21a0\" successfully"
Aug 13 00:53:25.666333 containerd[1542]: time="2025-08-13T00:53:25.665751134Z" level=info msg="StopPodSandbox for \"68b809f38d919330218ef88f91cd0ed7ad095b1d88a46b8d6e7c3c2d5f3c21a0\" returns successfully"
Aug 13 00:53:25.666590 containerd[1542]: time="2025-08-13T00:53:25.666514082Z" level=info msg="RemovePodSandbox for \"68b809f38d919330218ef88f91cd0ed7ad095b1d88a46b8d6e7c3c2d5f3c21a0\""
Aug 13 00:53:25.666590 containerd[1542]: time="2025-08-13T00:53:25.666547392Z" level=info msg="Forcibly stopping sandbox \"68b809f38d919330218ef88f91cd0ed7ad095b1d88a46b8d6e7c3c2d5f3c21a0\""
Aug 13 00:53:25.666803 containerd[1542]: time="2025-08-13T00:53:25.666641152Z" level=info msg="TearDown network for sandbox \"68b809f38d919330218ef88f91cd0ed7ad095b1d88a46b8d6e7c3c2d5f3c21a0\" successfully"
Aug 13 00:53:25.672091 containerd[1542]: time="2025-08-13T00:53:25.672047112Z" level=info msg="Ensure that sandbox 68b809f38d919330218ef88f91cd0ed7ad095b1d88a46b8d6e7c3c2d5f3c21a0 in task-service has been cleanup successfully"
Aug 13 00:53:25.676101 containerd[1542]: time="2025-08-13T00:53:25.676038548Z" level=info msg="RemovePodSandbox \"68b809f38d919330218ef88f91cd0ed7ad095b1d88a46b8d6e7c3c2d5f3c21a0\" returns successfully"
Aug 13 00:53:25.676580 containerd[1542]: time="2025-08-13T00:53:25.676548877Z" level=info msg="StopPodSandbox for \"5e23f51dd5bc9c8e01794cfdf9bad4cc4b19150c19744920f7e0f8becd14e7a8\""
Aug 13 00:53:25.676689 containerd[1542]: time="2025-08-13T00:53:25.676647456Z" level=info msg="TearDown network for sandbox \"5e23f51dd5bc9c8e01794cfdf9bad4cc4b19150c19744920f7e0f8becd14e7a8\" successfully"
Aug 13 00:53:25.676689 containerd[1542]: time="2025-08-13T00:53:25.676659856Z" level=info msg="StopPodSandbox for \"5e23f51dd5bc9c8e01794cfdf9bad4cc4b19150c19744920f7e0f8becd14e7a8\" returns successfully"
Aug 13 00:53:25.677099 containerd[1542]: time="2025-08-13T00:53:25.677066464Z" level=info msg="RemovePodSandbox for \"5e23f51dd5bc9c8e01794cfdf9bad4cc4b19150c19744920f7e0f8becd14e7a8\""
Aug 13 00:53:25.677099 containerd[1542]: time="2025-08-13T00:53:25.677096804Z" level=info msg="Forcibly stopping sandbox \"5e23f51dd5bc9c8e01794cfdf9bad4cc4b19150c19744920f7e0f8becd14e7a8\""
Aug 13 00:53:25.677415 containerd[1542]: time="2025-08-13T00:53:25.677152204Z" level=info msg="TearDown network for sandbox \"5e23f51dd5bc9c8e01794cfdf9bad4cc4b19150c19744920f7e0f8becd14e7a8\" successfully"
Aug 13 00:53:25.678207 containerd[1542]: time="2025-08-13T00:53:25.678171341Z" level=info msg="Ensure that sandbox 5e23f51dd5bc9c8e01794cfdf9bad4cc4b19150c19744920f7e0f8becd14e7a8 in task-service has been cleanup successfully"
Aug 13 00:53:25.680007 containerd[1542]: time="2025-08-13T00:53:25.679983074Z" level=info msg="RemovePodSandbox \"5e23f51dd5bc9c8e01794cfdf9bad4cc4b19150c19744920f7e0f8becd14e7a8\" returns successfully"
Aug 13 00:53:25.799708 kubelet[2711]: E0813 00:53:25.799541 2711 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Aug 13 00:53:26.415724 kubelet[2711]: E0813 00:53:26.415625 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:53:26.421412 containerd[1542]: time="2025-08-13T00:53:26.421352371Z" level=info msg="CreateContainer within sandbox \"cb962117e185c0932b4107a08c2344bb41cbb60820e2c858881d3fb4f3e12add\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Aug 13 00:53:26.428804 containerd[1542]: time="2025-08-13T00:53:26.428768165Z" level=info msg="Container 00bea57169ce509e10952bcdc6b57ac9adfc50100f51d31f4e49ddfdba579767: CDI devices from CRI Config.CDIDevices: []"
Aug 13 00:53:26.433810 containerd[1542]: time="2025-08-13T00:53:26.433770058Z" level=info msg="CreateContainer within sandbox \"cb962117e185c0932b4107a08c2344bb41cbb60820e2c858881d3fb4f3e12add\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"00bea57169ce509e10952bcdc6b57ac9adfc50100f51d31f4e49ddfdba579767\""
Aug 13 00:53:26.435283 containerd[1542]: time="2025-08-13T00:53:26.435249672Z" level=info msg="StartContainer for \"00bea57169ce509e10952bcdc6b57ac9adfc50100f51d31f4e49ddfdba579767\""
Aug 13 00:53:26.437111 containerd[1542]: time="2025-08-13T00:53:26.437082786Z" level=info msg="connecting to shim 00bea57169ce509e10952bcdc6b57ac9adfc50100f51d31f4e49ddfdba579767" address="unix:///run/containerd/s/97729cf201c8dc179c1e234e1c9f6e01f5494fd0a490dfa94935a8b3e2676ea5" protocol=ttrpc version=3
Aug 13 00:53:26.457432 systemd[1]: Started cri-containerd-00bea57169ce509e10952bcdc6b57ac9adfc50100f51d31f4e49ddfdba579767.scope - libcontainer container 00bea57169ce509e10952bcdc6b57ac9adfc50100f51d31f4e49ddfdba579767.
Aug 13 00:53:26.509381 containerd[1542]: time="2025-08-13T00:53:26.509324773Z" level=info msg="StartContainer for \"00bea57169ce509e10952bcdc6b57ac9adfc50100f51d31f4e49ddfdba579767\" returns successfully"
Aug 13 00:53:26.513929 systemd[1]: cri-containerd-00bea57169ce509e10952bcdc6b57ac9adfc50100f51d31f4e49ddfdba579767.scope: Deactivated successfully.
Aug 13 00:53:26.516778 containerd[1542]: time="2025-08-13T00:53:26.516732806Z" level=info msg="received exit event container_id:\"00bea57169ce509e10952bcdc6b57ac9adfc50100f51d31f4e49ddfdba579767\" id:\"00bea57169ce509e10952bcdc6b57ac9adfc50100f51d31f4e49ddfdba579767\" pid:4808 exited_at:{seconds:1755046406 nanos:516234179}"
Aug 13 00:53:26.517020 containerd[1542]: time="2025-08-13T00:53:26.516831006Z" level=info msg="TaskExit event in podsandbox handler container_id:\"00bea57169ce509e10952bcdc6b57ac9adfc50100f51d31f4e49ddfdba579767\" id:\"00bea57169ce509e10952bcdc6b57ac9adfc50100f51d31f4e49ddfdba579767\" pid:4808 exited_at:{seconds:1755046406 nanos:516234179}"
Aug 13 00:53:26.542542 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-00bea57169ce509e10952bcdc6b57ac9adfc50100f51d31f4e49ddfdba579767-rootfs.mount: Deactivated successfully.
Aug 13 00:53:27.421391 kubelet[2711]: E0813 00:53:27.420281 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:53:27.423000 containerd[1542]: time="2025-08-13T00:53:27.422923623Z" level=info msg="CreateContainer within sandbox \"cb962117e185c0932b4107a08c2344bb41cbb60820e2c858881d3fb4f3e12add\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Aug 13 00:53:27.442606 containerd[1542]: time="2025-08-13T00:53:27.442445645Z" level=info msg="Container cd2e18362ea3cec84d2fbb41ec990b0455d0ea89d64bc100592319b5dfa896b7: CDI devices from CRI Config.CDIDevices: []"
Aug 13 00:53:27.444708 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2033999843.mount: Deactivated successfully.
Aug 13 00:53:27.452899 containerd[1542]: time="2025-08-13T00:53:27.452861698Z" level=info msg="CreateContainer within sandbox \"cb962117e185c0932b4107a08c2344bb41cbb60820e2c858881d3fb4f3e12add\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cd2e18362ea3cec84d2fbb41ec990b0455d0ea89d64bc100592319b5dfa896b7\""
Aug 13 00:53:27.453680 containerd[1542]: time="2025-08-13T00:53:27.453557566Z" level=info msg="StartContainer for \"cd2e18362ea3cec84d2fbb41ec990b0455d0ea89d64bc100592319b5dfa896b7\""
Aug 13 00:53:27.455272 containerd[1542]: time="2025-08-13T00:53:27.455238140Z" level=info msg="connecting to shim cd2e18362ea3cec84d2fbb41ec990b0455d0ea89d64bc100592319b5dfa896b7" address="unix:///run/containerd/s/97729cf201c8dc179c1e234e1c9f6e01f5494fd0a490dfa94935a8b3e2676ea5" protocol=ttrpc version=3
Aug 13 00:53:27.476540 systemd[1]: Started cri-containerd-cd2e18362ea3cec84d2fbb41ec990b0455d0ea89d64bc100592319b5dfa896b7.scope - libcontainer container cd2e18362ea3cec84d2fbb41ec990b0455d0ea89d64bc100592319b5dfa896b7.
Aug 13 00:53:27.523959 containerd[1542]: time="2025-08-13T00:53:27.523866440Z" level=info msg="StartContainer for \"cd2e18362ea3cec84d2fbb41ec990b0455d0ea89d64bc100592319b5dfa896b7\" returns successfully"
Aug 13 00:53:27.530211 systemd[1]: cri-containerd-cd2e18362ea3cec84d2fbb41ec990b0455d0ea89d64bc100592319b5dfa896b7.scope: Deactivated successfully.
Aug 13 00:53:27.531514 containerd[1542]: time="2025-08-13T00:53:27.531492853Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cd2e18362ea3cec84d2fbb41ec990b0455d0ea89d64bc100592319b5dfa896b7\" id:\"cd2e18362ea3cec84d2fbb41ec990b0455d0ea89d64bc100592319b5dfa896b7\" pid:4853 exited_at:{seconds:1755046407 nanos:531135314}"
Aug 13 00:53:27.531712 containerd[1542]: time="2025-08-13T00:53:27.531625383Z" level=info msg="received exit event container_id:\"cd2e18362ea3cec84d2fbb41ec990b0455d0ea89d64bc100592319b5dfa896b7\" id:\"cd2e18362ea3cec84d2fbb41ec990b0455d0ea89d64bc100592319b5dfa896b7\" pid:4853 exited_at:{seconds:1755046407 nanos:531135314}"
Aug 13 00:53:27.556030 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cd2e18362ea3cec84d2fbb41ec990b0455d0ea89d64bc100592319b5dfa896b7-rootfs.mount: Deactivated successfully.
Aug 13 00:53:28.425077 kubelet[2711]: E0813 00:53:28.425028 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:53:28.428855 containerd[1542]: time="2025-08-13T00:53:28.428556100Z" level=info msg="CreateContainer within sandbox \"cb962117e185c0932b4107a08c2344bb41cbb60820e2c858881d3fb4f3e12add\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Aug 13 00:53:28.439606 containerd[1542]: time="2025-08-13T00:53:28.439360863Z" level=info msg="Container 0a5f176a808d1f47dc5ad7b8e4ef4a3ad75af62bd24e2b4a29257e65f858160a: CDI devices from CRI Config.CDIDevices: []"
Aug 13 00:53:28.444490 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2903581371.mount: Deactivated successfully.
Aug 13 00:53:28.447332 containerd[1542]: time="2025-08-13T00:53:28.447264105Z" level=info msg="CreateContainer within sandbox \"cb962117e185c0932b4107a08c2344bb41cbb60820e2c858881d3fb4f3e12add\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0a5f176a808d1f47dc5ad7b8e4ef4a3ad75af62bd24e2b4a29257e65f858160a\""
Aug 13 00:53:28.448392 containerd[1542]: time="2025-08-13T00:53:28.448356741Z" level=info msg="StartContainer for \"0a5f176a808d1f47dc5ad7b8e4ef4a3ad75af62bd24e2b4a29257e65f858160a\""
Aug 13 00:53:28.450389 containerd[1542]: time="2025-08-13T00:53:28.450367484Z" level=info msg="connecting to shim 0a5f176a808d1f47dc5ad7b8e4ef4a3ad75af62bd24e2b4a29257e65f858160a" address="unix:///run/containerd/s/97729cf201c8dc179c1e234e1c9f6e01f5494fd0a490dfa94935a8b3e2676ea5" protocol=ttrpc version=3
Aug 13 00:53:28.476433 systemd[1]: Started cri-containerd-0a5f176a808d1f47dc5ad7b8e4ef4a3ad75af62bd24e2b4a29257e65f858160a.scope - libcontainer container 0a5f176a808d1f47dc5ad7b8e4ef4a3ad75af62bd24e2b4a29257e65f858160a.
Aug 13 00:53:28.504709 systemd[1]: cri-containerd-0a5f176a808d1f47dc5ad7b8e4ef4a3ad75af62bd24e2b4a29257e65f858160a.scope: Deactivated successfully.
Aug 13 00:53:28.506659 containerd[1542]: time="2025-08-13T00:53:28.506485849Z" level=info msg="received exit event container_id:\"0a5f176a808d1f47dc5ad7b8e4ef4a3ad75af62bd24e2b4a29257e65f858160a\" id:\"0a5f176a808d1f47dc5ad7b8e4ef4a3ad75af62bd24e2b4a29257e65f858160a\" pid:4893 exited_at:{seconds:1755046408 nanos:506019371}"
Aug 13 00:53:28.506659 containerd[1542]: time="2025-08-13T00:53:28.506637618Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0a5f176a808d1f47dc5ad7b8e4ef4a3ad75af62bd24e2b4a29257e65f858160a\" id:\"0a5f176a808d1f47dc5ad7b8e4ef4a3ad75af62bd24e2b4a29257e65f858160a\" pid:4893 exited_at:{seconds:1755046408 nanos:506019371}"
Aug 13 00:53:28.508804 containerd[1542]: time="2025-08-13T00:53:28.508785211Z" level=info msg="StartContainer for \"0a5f176a808d1f47dc5ad7b8e4ef4a3ad75af62bd24e2b4a29257e65f858160a\" returns successfully"
Aug 13 00:53:28.529274 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a5f176a808d1f47dc5ad7b8e4ef4a3ad75af62bd24e2b4a29257e65f858160a-rootfs.mount: Deactivated successfully.
Aug 13 00:53:29.431734 kubelet[2711]: E0813 00:53:29.431668 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:53:29.438218 containerd[1542]: time="2025-08-13T00:53:29.436536099Z" level=info msg="CreateContainer within sandbox \"cb962117e185c0932b4107a08c2344bb41cbb60820e2c858881d3fb4f3e12add\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Aug 13 00:53:29.450929 containerd[1542]: time="2025-08-13T00:53:29.450879650Z" level=info msg="Container 515f7b11a5c03ad08abac7df5a41ad590540b934a8f20239028ada982f4db29c: CDI devices from CRI Config.CDIDevices: []"
Aug 13 00:53:29.452609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount650299479.mount: Deactivated successfully.
Aug 13 00:53:29.464908 containerd[1542]: time="2025-08-13T00:53:29.464874881Z" level=info msg="CreateContainer within sandbox \"cb962117e185c0932b4107a08c2344bb41cbb60820e2c858881d3fb4f3e12add\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"515f7b11a5c03ad08abac7df5a41ad590540b934a8f20239028ada982f4db29c\""
Aug 13 00:53:29.465606 containerd[1542]: time="2025-08-13T00:53:29.465516869Z" level=info msg="StartContainer for \"515f7b11a5c03ad08abac7df5a41ad590540b934a8f20239028ada982f4db29c\""
Aug 13 00:53:29.467441 containerd[1542]: time="2025-08-13T00:53:29.467411863Z" level=info msg="connecting to shim 515f7b11a5c03ad08abac7df5a41ad590540b934a8f20239028ada982f4db29c" address="unix:///run/containerd/s/97729cf201c8dc179c1e234e1c9f6e01f5494fd0a490dfa94935a8b3e2676ea5" protocol=ttrpc version=3
Aug 13 00:53:29.503463 systemd[1]: Started cri-containerd-515f7b11a5c03ad08abac7df5a41ad590540b934a8f20239028ada982f4db29c.scope - libcontainer container 515f7b11a5c03ad08abac7df5a41ad590540b934a8f20239028ada982f4db29c.
Aug 13 00:53:29.538411 containerd[1542]: time="2025-08-13T00:53:29.538338895Z" level=info msg="StartContainer for \"515f7b11a5c03ad08abac7df5a41ad590540b934a8f20239028ada982f4db29c\" returns successfully"
Aug 13 00:53:29.613585 containerd[1542]: time="2025-08-13T00:53:29.613509874Z" level=info msg="TaskExit event in podsandbox handler container_id:\"515f7b11a5c03ad08abac7df5a41ad590540b934a8f20239028ada982f4db29c\" id:\"03fa3c4c48a24ee2d83de3c0ea601af7c6a019ed8e468892fb570dbdb843426e\" pid:4961 exited_at:{seconds:1755046409 nanos:613150666}"
Aug 13 00:53:29.997123 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Aug 13 00:53:30.438869 kubelet[2711]: E0813 00:53:30.438838 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:53:31.216373 containerd[1542]: time="2025-08-13T00:53:31.216313303Z" level=info msg="TaskExit event in podsandbox handler container_id:\"515f7b11a5c03ad08abac7df5a41ad590540b934a8f20239028ada982f4db29c\" id:\"af0446fc4ac9ee137f0aa2c861e3b78c6dabff22492f51eb920c00884116b033\" pid:5043 exit_status:1 exited_at:{seconds:1755046411 nanos:215360516}"
Aug 13 00:53:31.470542 kubelet[2711]: E0813 00:53:31.470105 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"
Aug 13 00:53:32.694548 systemd-networkd[1463]: lxc_health: Link UP
Aug 13 00:53:32.710772 systemd-networkd[1463]: lxc_health: Gained carrier
Aug 13 00:53:33.346224 containerd[1542]: time="2025-08-13T00:53:33.346150940Z" level=info msg="TaskExit event in podsandbox handler container_id:\"515f7b11a5c03ad08abac7df5a41ad590540b934a8f20239028ada982f4db29c\" id:\"8f499e63876afe7a87f810ba1fae0526b1d5fa10667052d7bd4373292862fce8\" pid:5465 exited_at:{seconds:1755046413 nanos:345823781}"
Aug 13 00:53:33.350398 kubelet[2711]: E0813 00:53:33.350262 2711 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:46886->127.0.0.1:38663: write tcp 127.0.0.1:46886->127.0.0.1:38663: write: broken pipe Aug 13 00:53:33.472383 kubelet[2711]: E0813 00:53:33.472002 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:53:33.491622 kubelet[2711]: I0813 00:53:33.491566 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7m5sc" podStartSLOduration=10.491532899 podStartE2EDuration="10.491532899s" podCreationTimestamp="2025-08-13 00:53:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:53:30.455076822 +0000 UTC m=+364.897615400" watchObservedRunningTime="2025-08-13 00:53:33.491532899 +0000 UTC m=+367.934071477" Aug 13 00:53:34.232661 systemd-networkd[1463]: lxc_health: Gained IPv6LL Aug 13 00:53:34.448324 kubelet[2711]: E0813 00:53:34.448077 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:53:35.452619 kubelet[2711]: E0813 00:53:35.452423 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16" Aug 13 00:53:35.473931 containerd[1542]: time="2025-08-13T00:53:35.473881094Z" level=info msg="TaskExit event in podsandbox handler container_id:\"515f7b11a5c03ad08abac7df5a41ad590540b934a8f20239028ada982f4db29c\" id:\"628800d658d68a1f4cb41b5632e285613e463f0bc4e0774e57e57415b20c6d4d\" pid:5499 exited_at:{seconds:1755046415 nanos:473278696}" Aug 13 00:53:37.581384 
containerd[1542]: time="2025-08-13T00:53:37.581334806Z" level=info msg="TaskExit event in podsandbox handler container_id:\"515f7b11a5c03ad08abac7df5a41ad590540b934a8f20239028ada982f4db29c\" id:\"b3e7f3f3b1a440be9fb7e790ae5ba64e9ce3bfb6f2921019d3ca73ec2a414786\" pid:5529 exited_at:{seconds:1755046417 nanos:580704157}" Aug 13 00:53:39.673705 containerd[1542]: time="2025-08-13T00:53:39.673494695Z" level=info msg="TaskExit event in podsandbox handler container_id:\"515f7b11a5c03ad08abac7df5a41ad590540b934a8f20239028ada982f4db29c\" id:\"81f3a47e32d23499421b00469dc94b5fb8a8307ac9de88110a2abbc7c78c33ba\" pid:5553 exited_at:{seconds:1755046419 nanos:672815147}" Aug 13 00:53:39.731375 sshd[4698]: Connection closed by 147.75.109.163 port 57378 Aug 13 00:53:39.731942 sshd-session[4696]: pam_unix(sshd:session): session closed for user core Aug 13 00:53:39.736816 systemd[1]: sshd@57-172.234.29.69:22-147.75.109.163:57378.service: Deactivated successfully. Aug 13 00:53:39.739010 systemd[1]: session-58.scope: Deactivated successfully. Aug 13 00:53:39.739925 systemd-logind[1522]: Session 58 logged out. Waiting for processes to exit. Aug 13 00:53:39.742035 systemd-logind[1522]: Removed session 58. Aug 13 00:53:42.654770 kubelet[2711]: E0813 00:53:42.654720 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.18 172.232.0.17 172.232.0.16"