Jul 6 23:24:15.939437 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 21:53:45 -00 2025
Jul 6 23:24:15.939459 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=7c120d8449636ab812a1f5387d02879f5beb6138a028d7566d1b80b47231d762
Jul 6 23:24:15.939468 kernel: BIOS-provided physical RAM map:
Jul 6 23:24:15.939480 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Jul 6 23:24:15.939485 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Jul 6 23:24:15.939494 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 6 23:24:15.939501 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Jul 6 23:24:15.939507 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Jul 6 23:24:15.939512 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 6 23:24:15.939518 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jul 6 23:24:15.939524 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 6 23:24:15.939530 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 6 23:24:15.939536 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Jul 6 23:24:15.939542 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 6 23:24:15.939552 kernel: NX (Execute Disable) protection: active
Jul 6 23:24:15.939559 kernel: APIC: Static calls initialized
Jul 6 23:24:15.939565 kernel: SMBIOS 2.8 present.
Jul 6 23:24:15.939571 kernel: DMI: Linode Compute Instance, BIOS Not Specified
Jul 6 23:24:15.939577 kernel: Hypervisor detected: KVM
Jul 6 23:24:15.939586 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 6 23:24:15.939592 kernel: kvm-clock: using sched offset of 4693460526 cycles
Jul 6 23:24:15.939599 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 6 23:24:15.939606 kernel: tsc: Detected 2000.000 MHz processor
Jul 6 23:24:15.939612 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 6 23:24:15.939619 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 6 23:24:15.939626 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Jul 6 23:24:15.939632 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 6 23:24:15.939639 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 6 23:24:15.939648 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Jul 6 23:24:15.939654 kernel: Using GB pages for direct mapping
Jul 6 23:24:15.939661 kernel: ACPI: Early table checksum verification disabled
Jul 6 23:24:15.939667 kernel: ACPI: RSDP 0x00000000000F51B0 000014 (v00 BOCHS )
Jul 6 23:24:15.939673 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:24:15.939680 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:24:15.939686 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:24:15.939692 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jul 6 23:24:15.939699 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:24:15.939708 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:24:15.939714 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:24:15.939721 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:24:15.939737 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Jul 6 23:24:15.939749 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Jul 6 23:24:15.939760 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jul 6 23:24:15.939786 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Jul 6 23:24:15.939793 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Jul 6 23:24:15.939800 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Jul 6 23:24:15.939806 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Jul 6 23:24:15.939813 kernel: No NUMA configuration found
Jul 6 23:24:15.939820 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Jul 6 23:24:15.939826 kernel: NODE_DATA(0) allocated [mem 0x17fff8000-0x17fffdfff]
Jul 6 23:24:15.939833 kernel: Zone ranges:
Jul 6 23:24:15.939840 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 6 23:24:15.939850 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jul 6 23:24:15.939856 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Jul 6 23:24:15.939863 kernel: Movable zone start for each node
Jul 6 23:24:15.939870 kernel: Early memory node ranges
Jul 6 23:24:15.939876 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 6 23:24:15.939883 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Jul 6 23:24:15.939890 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Jul 6 23:24:15.939896 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Jul 6 23:24:15.939903 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 6 23:24:15.939912 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 6 23:24:15.939919 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Jul 6 23:24:15.939926 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 6 23:24:15.939932 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 6 23:24:15.939939 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 6 23:24:15.939946 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 6 23:24:15.939952 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 6 23:24:15.939959 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 6 23:24:15.939966 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 6 23:24:15.939975 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 6 23:24:15.939982 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 6 23:24:15.939989 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 6 23:24:15.939995 kernel: TSC deadline timer available
Jul 6 23:24:15.940002 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jul 6 23:24:15.940009 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 6 23:24:15.940015 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 6 23:24:15.940022 kernel: kvm-guest: setup PV sched yield
Jul 6 23:24:15.940028 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jul 6 23:24:15.940038 kernel: Booting paravirtualized kernel on KVM
Jul 6 23:24:15.940044 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 6 23:24:15.940051 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul 6 23:24:15.940058 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Jul 6 23:24:15.940064 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Jul 6 23:24:15.940071 kernel: pcpu-alloc: [0] 0 1
Jul 6 23:24:15.940078 kernel: kvm-guest: PV spinlocks enabled
Jul 6 23:24:15.940084 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 6 23:24:15.940092 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=7c120d8449636ab812a1f5387d02879f5beb6138a028d7566d1b80b47231d762
Jul 6 23:24:15.940102 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 6 23:24:15.940108 kernel: random: crng init done
Jul 6 23:24:15.940115 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 6 23:24:15.940122 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 6 23:24:15.940128 kernel: Fallback order for Node 0: 0
Jul 6 23:24:15.940135 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Jul 6 23:24:15.940142 kernel: Policy zone: Normal
Jul 6 23:24:15.940148 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 6 23:24:15.940157 kernel: software IO TLB: area num 2.
Jul 6 23:24:15.940164 kernel: Memory: 3964156K/4193772K available (14336K kernel code, 2295K rwdata, 22872K rodata, 43492K init, 1584K bss, 229356K reserved, 0K cma-reserved)
Jul 6 23:24:15.940171 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 6 23:24:15.940178 kernel: ftrace: allocating 37940 entries in 149 pages
Jul 6 23:24:15.940184 kernel: ftrace: allocated 149 pages with 4 groups
Jul 6 23:24:15.940191 kernel: Dynamic Preempt: voluntary
Jul 6 23:24:15.940198 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 6 23:24:15.940205 kernel: rcu: RCU event tracing is enabled.
Jul 6 23:24:15.940212 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 6 23:24:15.940221 kernel: Trampoline variant of Tasks RCU enabled.
Jul 6 23:24:15.940228 kernel: Rude variant of Tasks RCU enabled.
Jul 6 23:24:15.940235 kernel: Tracing variant of Tasks RCU enabled.
Jul 6 23:24:15.940241 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 6 23:24:15.940248 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 6 23:24:15.940255 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jul 6 23:24:15.940261 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 6 23:24:15.940268 kernel: Console: colour VGA+ 80x25
Jul 6 23:24:15.940275 kernel: printk: console [tty0] enabled
Jul 6 23:24:15.940281 kernel: printk: console [ttyS0] enabled
Jul 6 23:24:15.940291 kernel: ACPI: Core revision 20230628
Jul 6 23:24:15.940297 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 6 23:24:15.940304 kernel: APIC: Switch to symmetric I/O mode setup
Jul 6 23:24:15.940319 kernel: x2apic enabled
Jul 6 23:24:15.940329 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 6 23:24:15.940336 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 6 23:24:15.940343 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 6 23:24:15.940350 kernel: kvm-guest: setup PV IPIs
Jul 6 23:24:15.940357 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 6 23:24:15.940364 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jul 6 23:24:15.940371 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000)
Jul 6 23:24:15.940381 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 6 23:24:15.940388 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 6 23:24:15.940395 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 6 23:24:15.940402 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 6 23:24:15.940408 kernel: Spectre V2 : Mitigation: Retpolines
Jul 6 23:24:15.940418 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 6 23:24:15.940425 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jul 6 23:24:15.940432 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 6 23:24:15.940439 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 6 23:24:15.940446 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 6 23:24:15.940453 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 6 23:24:15.940460 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 6 23:24:15.940467 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 6 23:24:15.940476 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 6 23:24:15.940483 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 6 23:24:15.940490 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jul 6 23:24:15.940497 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 6 23:24:15.940503 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Jul 6 23:24:15.940510 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Jul 6 23:24:15.940517 kernel: Freeing SMP alternatives memory: 32K
Jul 6 23:24:15.940524 kernel: pid_max: default: 32768 minimum: 301
Jul 6 23:24:15.940530 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 6 23:24:15.940540 kernel: landlock: Up and running.
Jul 6 23:24:15.940546 kernel: SELinux: Initializing.
Jul 6 23:24:15.940553 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 6 23:24:15.940560 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 6 23:24:15.940567 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jul 6 23:24:15.940574 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 6 23:24:15.940581 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 6 23:24:15.940587 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 6 23:24:15.940594 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 6 23:24:15.940604 kernel: ... version: 0
Jul 6 23:24:15.940611 kernel: ... bit width: 48
Jul 6 23:24:15.940617 kernel: ... generic registers: 6
Jul 6 23:24:15.940624 kernel: ... value mask: 0000ffffffffffff
Jul 6 23:24:15.940631 kernel: ... max period: 00007fffffffffff
Jul 6 23:24:15.940637 kernel: ... fixed-purpose events: 0
Jul 6 23:24:15.940644 kernel: ... event mask: 000000000000003f
Jul 6 23:24:15.940651 kernel: signal: max sigframe size: 3376
Jul 6 23:24:15.940658 kernel: rcu: Hierarchical SRCU implementation.
Jul 6 23:24:15.940667 kernel: rcu: Max phase no-delay instances is 400.
Jul 6 23:24:15.940674 kernel: smp: Bringing up secondary CPUs ...
Jul 6 23:24:15.940680 kernel: smpboot: x86: Booting SMP configuration:
Jul 6 23:24:15.940687 kernel: .... node #0, CPUs: #1
Jul 6 23:24:15.940694 kernel: smp: Brought up 1 node, 2 CPUs
Jul 6 23:24:15.940700 kernel: smpboot: Max logical packages: 1
Jul 6 23:24:15.940707 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS)
Jul 6 23:24:15.940714 kernel: devtmpfs: initialized
Jul 6 23:24:15.940721 kernel: x86/mm: Memory block size: 128MB
Jul 6 23:24:15.940727 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 6 23:24:15.940737 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 6 23:24:15.940744 kernel: pinctrl core: initialized pinctrl subsystem
Jul 6 23:24:15.940750 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 6 23:24:15.940757 kernel: audit: initializing netlink subsys (disabled)
Jul 6 23:24:15.940764 kernel: audit: type=2000 audit(1751844255.605:1): state=initialized audit_enabled=0 res=1
Jul 6 23:24:15.940786 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 6 23:24:15.940793 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 6 23:24:15.940800 kernel: cpuidle: using governor menu
Jul 6 23:24:15.940806 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 6 23:24:15.940816 kernel: dca service started, version 1.12.1
Jul 6 23:24:15.940823 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jul 6 23:24:15.940830 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jul 6 23:24:15.940837 kernel: PCI: Using configuration type 1 for base access
Jul 6 23:24:15.940844 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 6 23:24:15.940850 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 6 23:24:15.940857 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 6 23:24:15.940864 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 6 23:24:15.940873 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 6 23:24:15.940880 kernel: ACPI: Added _OSI(Module Device)
Jul 6 23:24:15.940887 kernel: ACPI: Added _OSI(Processor Device)
Jul 6 23:24:15.940894 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 6 23:24:15.940900 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 6 23:24:15.940907 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 6 23:24:15.940914 kernel: ACPI: Interpreter enabled
Jul 6 23:24:15.940921 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 6 23:24:15.940927 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 6 23:24:15.940934 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 6 23:24:15.940944 kernel: PCI: Using E820 reservations for host bridge windows
Jul 6 23:24:15.940951 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 6 23:24:15.940957 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 6 23:24:15.941134 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 6 23:24:15.941259 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 6 23:24:15.941373 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 6 23:24:15.941383 kernel: PCI host bridge to bus 0000:00
Jul 6 23:24:15.941509 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 6 23:24:15.941616 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 6 23:24:15.941719 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 6 23:24:15.945863 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Jul 6 23:24:15.945981 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 6 23:24:15.946087 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Jul 6 23:24:15.946197 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 6 23:24:15.946333 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jul 6 23:24:15.946459 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jul 6 23:24:15.946575 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jul 6 23:24:15.946687 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jul 6 23:24:15.946897 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jul 6 23:24:15.947017 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 6 23:24:15.947144 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000
Jul 6 23:24:15.947257 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc000-0xc03f]
Jul 6 23:24:15.947369 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jul 6 23:24:15.947480 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jul 6 23:24:15.947605 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jul 6 23:24:15.947722 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Jul 6 23:24:15.948039 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jul 6 23:24:15.948168 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jul 6 23:24:15.948283 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jul 6 23:24:15.948404 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jul 6 23:24:15.948518 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 6 23:24:15.948640 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jul 6 23:24:15.948753 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0c0-0xc0df]
Jul 6 23:24:15.948932 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd3000-0xfebd3fff]
Jul 6 23:24:15.949057 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jul 6 23:24:15.949168 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jul 6 23:24:15.949178 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 6 23:24:15.949185 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 6 23:24:15.949192 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 6 23:24:15.949199 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 6 23:24:15.949206 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 6 23:24:15.949218 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 6 23:24:15.949225 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 6 23:24:15.949232 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 6 23:24:15.949239 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 6 23:24:15.949246 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 6 23:24:15.949253 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 6 23:24:15.949260 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 6 23:24:15.949267 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 6 23:24:15.949274 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 6 23:24:15.949284 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 6 23:24:15.949291 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 6 23:24:15.949298 kernel: iommu: Default domain type: Translated
Jul 6 23:24:15.949305 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 6 23:24:15.949312 kernel: PCI: Using ACPI for IRQ routing
Jul 6 23:24:15.949319 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 6 23:24:15.949326 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Jul 6 23:24:15.949334 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Jul 6 23:24:15.949443 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 6 23:24:15.949556 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 6 23:24:15.949666 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 6 23:24:15.949676 kernel: vgaarb: loaded
Jul 6 23:24:15.949683 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 6 23:24:15.949690 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 6 23:24:15.949697 kernel: clocksource: Switched to clocksource kvm-clock
Jul 6 23:24:15.949704 kernel: VFS: Disk quotas dquot_6.6.0
Jul 6 23:24:15.949711 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 6 23:24:15.949722 kernel: pnp: PnP ACPI init
Jul 6 23:24:15.949866 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jul 6 23:24:15.949878 kernel: pnp: PnP ACPI: found 5 devices
Jul 6 23:24:15.949886 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 6 23:24:15.949893 kernel: NET: Registered PF_INET protocol family
Jul 6 23:24:15.949900 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 6 23:24:15.949907 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 6 23:24:15.949914 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 6 23:24:15.949922 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 6 23:24:15.949933 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 6 23:24:15.949940 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 6 23:24:15.949947 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 6 23:24:15.949954 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 6 23:24:15.949961 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 6 23:24:15.949968 kernel: NET: Registered PF_XDP protocol family
Jul 6 23:24:15.950070 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 6 23:24:15.950174 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 6 23:24:15.950280 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 6 23:24:15.950383 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Jul 6 23:24:15.950487 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jul 6 23:24:15.950806 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Jul 6 23:24:15.950817 kernel: PCI: CLS 0 bytes, default 64
Jul 6 23:24:15.950824 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jul 6 23:24:15.950832 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Jul 6 23:24:15.950839 kernel: Initialise system trusted keyrings
Jul 6 23:24:15.950845 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 6 23:24:15.950856 kernel: Key type asymmetric registered
Jul 6 23:24:15.950863 kernel: Asymmetric key parser 'x509' registered
Jul 6 23:24:15.950871 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jul 6 23:24:15.950878 kernel: io scheduler mq-deadline registered
Jul 6 23:24:15.950884 kernel: io scheduler kyber registered
Jul 6 23:24:15.950892 kernel: io scheduler bfq registered
Jul 6 23:24:15.950899 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 6 23:24:15.950906 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 6 23:24:15.950914 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 6 23:24:15.950923 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 6 23:24:15.950930 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 6 23:24:15.950937 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 6 23:24:15.950945 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 6 23:24:15.950952 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 6 23:24:15.950959 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 6 23:24:15.951078 kernel: rtc_cmos 00:03: RTC can wake from S4
Jul 6 23:24:15.951186 kernel: rtc_cmos 00:03: registered as rtc0
Jul 6 23:24:15.951299 kernel: rtc_cmos 00:03: setting system clock to 2025-07-06T23:24:15 UTC (1751844255)
Jul 6 23:24:15.951407 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 6 23:24:15.951416 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 6 23:24:15.951423 kernel: NET: Registered PF_INET6 protocol family
Jul 6 23:24:15.951431 kernel: Segment Routing with IPv6
Jul 6 23:24:15.951438 kernel: In-situ OAM (IOAM) with IPv6
Jul 6 23:24:15.951445 kernel: NET: Registered PF_PACKET protocol family
Jul 6 23:24:15.951452 kernel: Key type dns_resolver registered
Jul 6 23:24:15.951459 kernel: IPI shorthand broadcast: enabled
Jul 6 23:24:15.951469 kernel: sched_clock: Marking stable (737003182, 200156335)->(998395251, -61235734)
Jul 6 23:24:15.951476 kernel: registered taskstats version 1
Jul 6 23:24:15.951483 kernel: Loading compiled-in X.509 certificates
Jul 6 23:24:15.951491 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: f74b958d282931d4f0d8d911dd18abd0ec707734'
Jul 6 23:24:15.951497 kernel: Key type .fscrypt registered
Jul 6 23:24:15.951504 kernel: Key type fscrypt-provisioning registered
Jul 6 23:24:15.951512 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 6 23:24:15.951519 kernel: ima: Allocated hash algorithm: sha1
Jul 6 23:24:15.951528 kernel: ima: No architecture policies found
Jul 6 23:24:15.951535 kernel: clk: Disabling unused clocks
Jul 6 23:24:15.951542 kernel: Freeing unused kernel image (initmem) memory: 43492K
Jul 6 23:24:15.951549 kernel: Write protecting the kernel read-only data: 38912k
Jul 6 23:24:15.951556 kernel: Freeing unused kernel image (rodata/data gap) memory: 1704K
Jul 6 23:24:15.951563 kernel: Run /init as init process
Jul 6 23:24:15.951570 kernel: with arguments:
Jul 6 23:24:15.951577 kernel: /init
Jul 6 23:24:15.951584 kernel: with environment:
Jul 6 23:24:15.951591 kernel: HOME=/
Jul 6 23:24:15.951600 kernel: TERM=linux
Jul 6 23:24:15.951608 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 6 23:24:15.951616 systemd[1]: Successfully made /usr/ read-only.
Jul 6 23:24:15.951626 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 6 23:24:15.951634 systemd[1]: Detected virtualization kvm.
Jul 6 23:24:15.951641 systemd[1]: Detected architecture x86-64.
Jul 6 23:24:15.951649 systemd[1]: Running in initrd.
Jul 6 23:24:15.951658 systemd[1]: No hostname configured, using default hostname.
Jul 6 23:24:15.951666 systemd[1]: Hostname set to .
Jul 6 23:24:15.951673 systemd[1]: Initializing machine ID from random generator.
Jul 6 23:24:15.951680 systemd[1]: Queued start job for default target initrd.target.
Jul 6 23:24:15.951688 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:24:15.951711 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:24:15.951724 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 6 23:24:15.951732 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 6 23:24:15.951740 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 6 23:24:15.951748 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 6 23:24:15.951756 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 6 23:24:15.951764 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 6 23:24:15.951791 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:24:15.951803 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:24:15.951811 systemd[1]: Reached target paths.target - Path Units.
Jul 6 23:24:15.951818 systemd[1]: Reached target slices.target - Slice Units.
Jul 6 23:24:15.951826 systemd[1]: Reached target swap.target - Swaps.
Jul 6 23:24:15.951833 systemd[1]: Reached target timers.target - Timer Units.
Jul 6 23:24:15.951841 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 6 23:24:15.951849 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 6 23:24:15.951857 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 6 23:24:15.951867 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 6 23:24:15.951875 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:24:15.951883 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:24:15.951891 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:24:15.951898 systemd[1]: Reached target sockets.target - Socket Units.
Jul 6 23:24:15.951906 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 6 23:24:15.951914 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 6 23:24:15.951922 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 6 23:24:15.951929 systemd[1]: Starting systemd-fsck-usr.service...
Jul 6 23:24:15.951939 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 6 23:24:15.951947 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 6 23:24:15.951954 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:24:15.951962 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 6 23:24:15.951970 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:24:15.951981 systemd[1]: Finished systemd-fsck-usr.service.
Jul 6 23:24:15.952011 systemd-journald[178]: Collecting audit messages is disabled.
Jul 6 23:24:15.952033 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 6 23:24:15.952042 systemd-journald[178]: Journal started
Jul 6 23:24:15.952060 systemd-journald[178]: Runtime Journal (/run/log/journal/41a139fcf43747e7baa06c49b6fe3459) is 8M, max 78.3M, 70.3M free.
Jul 6 23:24:15.938341 systemd-modules-load[179]: Inserted module 'overlay'
Jul 6 23:24:16.005849 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 6 23:24:16.005872 kernel: Bridge firewalling registered
Jul 6 23:24:15.971253 systemd-modules-load[179]: Inserted module 'br_netfilter'
Jul 6 23:24:16.009850 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 6 23:24:16.010873 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:24:16.011592 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:24:16.013492 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 6 23:24:16.020926 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:24:16.023028 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:24:16.025894 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 6 23:24:16.031646 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 6 23:24:16.056066 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:24:16.065969 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:24:16.066741 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:24:16.072988 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 6 23:24:16.075280 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:24:16.079942 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 6 23:24:16.087418 dracut-cmdline[211]: dracut-dracut-053
Jul 6 23:24:16.090812 dracut-cmdline[211]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=7c120d8449636ab812a1f5387d02879f5beb6138a028d7566d1b80b47231d762
Jul 6 23:24:16.124123 systemd-resolved[214]: Positive Trust Anchors:
Jul 6 23:24:16.124868 systemd-resolved[214]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 6 23:24:16.124895 systemd-resolved[214]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 6 23:24:16.130056 systemd-resolved[214]: Defaulting to hostname 'linux'.
Jul 6 23:24:16.131360 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 6 23:24:16.131938 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:24:16.164903 kernel: SCSI subsystem initialized
Jul 6 23:24:16.174858 kernel: Loading iSCSI transport class v2.0-870.
Jul 6 23:24:16.185822 kernel: iscsi: registered transport (tcp)
Jul 6 23:24:16.207325 kernel: iscsi: registered transport (qla4xxx)
Jul 6 23:24:16.207379 kernel: QLogic iSCSI HBA Driver
Jul 6 23:24:16.257383 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 6 23:24:16.267045 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 6 23:24:16.292349 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 6 23:24:16.292387 kernel: device-mapper: uevent: version 1.0.3
Jul 6 23:24:16.292962 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 6 23:24:16.336810 kernel: raid6: avx2x4 gen() 29756 MB/s
Jul 6 23:24:16.354808 kernel: raid6: avx2x2 gen() 30470 MB/s
Jul 6 23:24:16.373391 kernel: raid6: avx2x1 gen() 21506 MB/s
Jul 6 23:24:16.373417 kernel: raid6: using algorithm avx2x2 gen() 30470 MB/s
Jul 6 23:24:16.392355 kernel: raid6: .... xor() 30025 MB/s, rmw enabled
Jul 6 23:24:16.392395 kernel: raid6: using avx2x2 recovery algorithm
Jul 6 23:24:16.411813 kernel: xor: automatically using best checksumming function avx
Jul 6 23:24:16.534809 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 6 23:24:16.545832 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 6 23:24:16.551912 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:24:16.565106 systemd-udevd[397]: Using default interface naming scheme 'v255'.
Jul 6 23:24:16.570110 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:24:16.578921 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 6 23:24:16.592813 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation
Jul 6 23:24:16.622141 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 6 23:24:16.626888 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 6 23:24:16.687989 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:24:16.693986 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 6 23:24:16.709845 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 6 23:24:16.711448 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 6 23:24:16.712875 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:24:16.713420 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 6 23:24:16.718930 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 6 23:24:16.738981 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 6 23:24:16.762634 kernel: cryptd: max_cpu_qlen set to 1000
Jul 6 23:24:16.856847 kernel: AVX2 version of gcm_enc/dec engaged.
Jul 6 23:24:16.856924 kernel: AES CTR mode by8 optimization enabled
Jul 6 23:24:16.870801 kernel: libata version 3.00 loaded.
Jul 6 23:24:16.872926 kernel: scsi host0: Virtio SCSI HBA
Jul 6 23:24:16.877864 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Jul 6 23:24:16.918591 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 6 23:24:16.919431 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:24:16.920902 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:24:16.922274 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:24:16.926852 kernel: ahci 0000:00:1f.2: version 3.0
Jul 6 23:24:16.922452 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:24:16.924146 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:24:16.930870 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jul 6 23:24:16.932502 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:24:16.948645 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jul 6 23:24:16.948935 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jul 6 23:24:16.949077 kernel: scsi host1: ahci
Jul 6 23:24:16.949229 kernel: scsi host2: ahci
Jul 6 23:24:16.949364 kernel: scsi host3: ahci
Jul 6 23:24:16.949496 kernel: scsi host4: ahci
Jul 6 23:24:16.949631 kernel: scsi host5: ahci
Jul 6 23:24:16.949763 kernel: scsi host6: ahci
Jul 6 23:24:16.933998 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 6 23:24:16.957500 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46
Jul 6 23:24:16.957599 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46
Jul 6 23:24:16.957612 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46
Jul 6 23:24:16.957622 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46
Jul 6 23:24:16.957647 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46
Jul 6 23:24:16.957657 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46
Jul 6 23:24:16.962812 kernel: sd 0:0:0:0: Power-on or device reset occurred
Jul 6 23:24:16.963191 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
Jul 6 23:24:16.963351 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jul 6 23:24:16.963494 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Jul 6 23:24:16.963632 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jul 6 23:24:16.965797 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 6 23:24:16.965827 kernel: GPT:9289727 != 167739391
Jul 6 23:24:16.965838 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 6 23:24:16.965847 kernel: GPT:9289727 != 167739391
Jul 6 23:24:16.965855 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 6 23:24:16.965864 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 6 23:24:16.966826 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jul 6 23:24:17.034141 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:24:17.041957 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:24:17.083172 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:24:17.272234 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jul 6 23:24:17.272271 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jul 6 23:24:17.272282 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jul 6 23:24:17.272299 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Jul 6 23:24:17.272308 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jul 6 23:24:17.272797 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jul 6 23:24:17.312807 kernel: BTRFS: device fsid 25bdfe43-d649-4808-8940-e1722efc7a2e devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (471)
Jul 6 23:24:17.319800 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by (udev-worker) (454)
Jul 6 23:24:17.328960 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Jul 6 23:24:17.343578 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Jul 6 23:24:17.354619 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jul 6 23:24:17.364501 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Jul 6 23:24:17.365246 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Jul 6 23:24:17.384003 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 6 23:24:17.389623 disk-uuid[571]: Primary Header is updated.
Jul 6 23:24:17.389623 disk-uuid[571]: Secondary Entries is updated.
Jul 6 23:24:17.389623 disk-uuid[571]: Secondary Header is updated.
Jul 6 23:24:17.394873 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 6 23:24:17.400828 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 6 23:24:18.406551 disk-uuid[572]: The operation has completed successfully.
Jul 6 23:24:18.407575 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 6 23:24:18.464752 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 6 23:24:18.464897 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 6 23:24:18.495897 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 6 23:24:18.499495 sh[586]: Success
Jul 6 23:24:18.513342 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jul 6 23:24:18.564472 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 6 23:24:18.576866 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 6 23:24:18.577575 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 6 23:24:18.599507 kernel: BTRFS info (device dm-0): first mount of filesystem 25bdfe43-d649-4808-8940-e1722efc7a2e
Jul 6 23:24:18.599547 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:24:18.599566 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 6 23:24:18.603849 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 6 23:24:18.603877 kernel: BTRFS info (device dm-0): using free space tree
Jul 6 23:24:18.611803 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jul 6 23:24:18.613695 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 6 23:24:18.615627 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 6 23:24:18.622896 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 6 23:24:18.625029 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 6 23:24:18.651564 kernel: BTRFS info (device sda6): first mount of filesystem 520cc21d-4438-4aef-a59e-8797d7bc85f5
Jul 6 23:24:18.651815 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:24:18.651829 kernel: BTRFS info (device sda6): using free space tree
Jul 6 23:24:18.658055 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jul 6 23:24:18.658091 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 6 23:24:18.666832 kernel: BTRFS info (device sda6): last unmount of filesystem 520cc21d-4438-4aef-a59e-8797d7bc85f5
Jul 6 23:24:18.668887 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 6 23:24:18.674126 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 6 23:24:18.737096 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 6 23:24:18.744975 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 6 23:24:18.759274 ignition[693]: Ignition 2.20.0
Jul 6 23:24:18.760009 ignition[693]: Stage: fetch-offline
Jul 6 23:24:18.760045 ignition[693]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:24:18.760055 ignition[693]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jul 6 23:24:18.760134 ignition[693]: parsed url from cmdline: ""
Jul 6 23:24:18.760137 ignition[693]: no config URL provided
Jul 6 23:24:18.760143 ignition[693]: reading system config file "/usr/lib/ignition/user.ign"
Jul 6 23:24:18.760151 ignition[693]: no config at "/usr/lib/ignition/user.ign"
Jul 6 23:24:18.760156 ignition[693]: failed to fetch config: resource requires networking
Jul 6 23:24:18.760312 ignition[693]: Ignition finished successfully
Jul 6 23:24:18.765031 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 6 23:24:18.785354 systemd-networkd[769]: lo: Link UP
Jul 6 23:24:18.785365 systemd-networkd[769]: lo: Gained carrier
Jul 6 23:24:18.787376 systemd-networkd[769]: Enumeration completed
Jul 6 23:24:18.787450 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 6 23:24:18.788041 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:24:18.788045 systemd-networkd[769]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 6 23:24:18.789799 systemd[1]: Reached target network.target - Network.
Jul 6 23:24:18.789929 systemd-networkd[769]: eth0: Link UP
Jul 6 23:24:18.789933 systemd-networkd[769]: eth0: Gained carrier
Jul 6 23:24:18.789940 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:24:18.796972 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 6 23:24:18.808324 ignition[774]: Ignition 2.20.0
Jul 6 23:24:18.808335 ignition[774]: Stage: fetch
Jul 6 23:24:18.808473 ignition[774]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:24:18.808484 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jul 6 23:24:18.808564 ignition[774]: parsed url from cmdline: ""
Jul 6 23:24:18.808567 ignition[774]: no config URL provided
Jul 6 23:24:18.808572 ignition[774]: reading system config file "/usr/lib/ignition/user.ign"
Jul 6 23:24:18.808581 ignition[774]: no config at "/usr/lib/ignition/user.ign"
Jul 6 23:24:18.808602 ignition[774]: PUT http://169.254.169.254/v1/token: attempt #1
Jul 6 23:24:18.808746 ignition[774]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Jul 6 23:24:19.009833 ignition[774]: PUT http://169.254.169.254/v1/token: attempt #2
Jul 6 23:24:19.009987 ignition[774]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Jul 6 23:24:19.243872 systemd-networkd[769]: eth0: DHCPv4 address 172.237.135.91/24, gateway 172.237.135.1 acquired from 23.205.167.148
Jul 6 23:24:19.410164 ignition[774]: PUT http://169.254.169.254/v1/token: attempt #3
Jul 6 23:24:19.501086 ignition[774]: PUT result: OK
Jul 6 23:24:19.501140 ignition[774]: GET http://169.254.169.254/v1/user-data: attempt #1
Jul 6 23:24:19.608384 ignition[774]: GET result: OK
Jul 6 23:24:19.608470 ignition[774]: parsing config with SHA512: 73019c001873fe29909b1fe681c742d60323ff3fbea97a72d4f2895dad8651f036624b2ce2be4a09eff5252d026baf8ffdd4a7f516951575bcc47ffa8b8f59e1
Jul 6 23:24:19.611723 unknown[774]: fetched base config from "system"
Jul 6 23:24:19.611733 unknown[774]: fetched base config from "system"
Jul 6 23:24:19.612027 ignition[774]: fetch: fetch complete
Jul 6 23:24:19.611739 unknown[774]: fetched user config from "akamai"
Jul 6 23:24:19.612032 ignition[774]: fetch: fetch passed
Jul 6 23:24:19.614273 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 6 23:24:19.612074 ignition[774]: Ignition finished successfully
Jul 6 23:24:19.621767 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 6 23:24:19.635054 ignition[781]: Ignition 2.20.0
Jul 6 23:24:19.635065 ignition[781]: Stage: kargs
Jul 6 23:24:19.635197 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:24:19.637898 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 6 23:24:19.635207 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jul 6 23:24:19.635914 ignition[781]: kargs: kargs passed
Jul 6 23:24:19.635952 ignition[781]: Ignition finished successfully
Jul 6 23:24:19.645927 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 6 23:24:19.657397 ignition[788]: Ignition 2.20.0
Jul 6 23:24:19.657407 ignition[788]: Stage: disks
Jul 6 23:24:19.657562 ignition[788]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:24:19.659999 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 6 23:24:19.657573 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jul 6 23:24:19.661556 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 6 23:24:19.658418 ignition[788]: disks: disks passed
Jul 6 23:24:19.662132 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 6 23:24:19.658455 ignition[788]: Ignition finished successfully
Jul 6 23:24:19.694028 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 6 23:24:19.695141 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 6 23:24:19.696091 systemd[1]: Reached target basic.target - Basic System.
Jul 6 23:24:19.701952 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 6 23:24:19.718283 systemd-fsck[796]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 6 23:24:19.722745 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 6 23:24:19.730889 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 6 23:24:19.818049 kernel: EXT4-fs (sda9): mounted filesystem daab0c95-3783-44c0-bef8-9d61a5c53c14 r/w with ordered data mode. Quota mode: none.
Jul 6 23:24:19.818889 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 6 23:24:19.820171 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 6 23:24:19.833901 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 6 23:24:19.836333 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 6 23:24:19.837809 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 6 23:24:19.839404 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 6 23:24:19.839428 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 6 23:24:19.844132 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 6 23:24:19.846153 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 6 23:24:19.851791 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (804)
Jul 6 23:24:19.857015 kernel: BTRFS info (device sda6): first mount of filesystem 520cc21d-4438-4aef-a59e-8797d7bc85f5
Jul 6 23:24:19.857059 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:24:19.857071 kernel: BTRFS info (device sda6): using free space tree
Jul 6 23:24:19.864901 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jul 6 23:24:19.864926 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 6 23:24:19.868184 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 6 23:24:19.908130 initrd-setup-root[828]: cut: /sysroot/etc/passwd: No such file or directory
Jul 6 23:24:19.913413 initrd-setup-root[835]: cut: /sysroot/etc/group: No such file or directory
Jul 6 23:24:19.917654 initrd-setup-root[842]: cut: /sysroot/etc/shadow: No such file or directory
Jul 6 23:24:19.922933 initrd-setup-root[849]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 6 23:24:20.011260 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 6 23:24:20.017885 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 6 23:24:20.020814 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 6 23:24:20.027666 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 6 23:24:20.030057 kernel: BTRFS info (device sda6): last unmount of filesystem 520cc21d-4438-4aef-a59e-8797d7bc85f5
Jul 6 23:24:20.048389 ignition[916]: INFO : Ignition 2.20.0
Jul 6 23:24:20.049316 ignition[916]: INFO : Stage: mount
Jul 6 23:24:20.050624 ignition[916]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:24:20.050624 ignition[916]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jul 6 23:24:20.052785 ignition[916]: INFO : mount: mount passed
Jul 6 23:24:20.052785 ignition[916]: INFO : Ignition finished successfully
Jul 6 23:24:20.051217 systemd-networkd[769]: eth0: Gained IPv6LL
Jul 6 23:24:20.055744 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 6 23:24:20.061873 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 6 23:24:20.062612 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 6 23:24:20.823948 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 6 23:24:20.837797 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (928)
Jul 6 23:24:20.837850 kernel: BTRFS info (device sda6): first mount of filesystem 520cc21d-4438-4aef-a59e-8797d7bc85f5
Jul 6 23:24:20.840089 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:24:20.842699 kernel: BTRFS info (device sda6): using free space tree
Jul 6 23:24:20.847798 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jul 6 23:24:20.847825 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 6 23:24:20.850036 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 6 23:24:20.870522 ignition[944]: INFO : Ignition 2.20.0
Jul 6 23:24:20.870522 ignition[944]: INFO : Stage: files
Jul 6 23:24:20.871863 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:24:20.871863 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jul 6 23:24:20.871863 ignition[944]: DEBUG : files: compiled without relabeling support, skipping
Jul 6 23:24:20.874011 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 6 23:24:20.874011 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 6 23:24:20.875650 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 6 23:24:20.875650 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 6 23:24:20.875650 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 6 23:24:20.874823 unknown[944]: wrote ssh authorized keys file for user: core
Jul 6 23:24:20.878484 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jul 6 23:24:20.878484 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Jul 6 23:24:21.109934 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 6 23:24:21.481171 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jul 6 23:24:21.481171 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 6 23:24:21.483142 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jul 6 23:24:21.904380 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 6 23:24:22.039550 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 6 23:24:22.039550 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 6 23:24:22.041735 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 6 23:24:22.041735 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 6 23:24:22.041735 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 6 23:24:22.041735 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 6 23:24:22.041735 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 6 23:24:22.041735 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 6 23:24:22.041735 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 6 23:24:22.041735 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 6 23:24:22.041735 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 6 23:24:22.041735 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 6 23:24:22.041735 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 6 23:24:22.041735 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 6 23:24:22.041735 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Jul 6 23:24:22.567787 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 6 23:24:23.352964 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 6 23:24:23.378745 ignition[944]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 6 23:24:23.378745 ignition[944]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 6 23:24:23.378745 ignition[944]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 6 23:24:23.378745 ignition[944]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 6 23:24:23.378745 ignition[944]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jul 6 23:24:23.378745 ignition[944]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jul 6 23:24:23.378745 ignition[944]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jul 6 23:24:23.378745 ignition[944]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jul 6 23:24:23.378745 ignition[944]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Jul 6 23:24:23.378745 ignition[944]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Jul 6 23:24:23.378745 ignition[944]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 6 23:24:23.378745 ignition[944]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 6 23:24:23.378745 ignition[944]: INFO : files: files passed
Jul 6 23:24:23.378745 ignition[944]: INFO : Ignition finished successfully
Jul 6 23:24:23.367026 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 6 23:24:23.389997 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 6 23:24:23.392940 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 6 23:24:23.394721 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 6 23:24:23.394859 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 6 23:24:23.409203 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:24:23.409203 initrd-setup-root-after-ignition[973]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:24:23.411126 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:24:23.411741 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 6 23:24:23.413190 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 6 23:24:23.418909 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 6 23:24:23.442193 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 6 23:24:23.442327 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 6 23:24:23.443907 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 6 23:24:23.444669 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 6 23:24:23.445853 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 6 23:24:23.455949 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 6 23:24:23.468801 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 6 23:24:23.478961 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 6 23:24:23.488114 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:24:23.488768 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:24:23.490017 systemd[1]: Stopped target timers.target - Timer Units.
Jul 6 23:24:23.490798 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 6 23:24:23.490975 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 6 23:24:23.492257 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 6 23:24:23.493026 systemd[1]: Stopped target basic.target - Basic System.
Jul 6 23:24:23.493995 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 6 23:24:23.495189 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 6 23:24:23.496291 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 6 23:24:23.497375 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 6 23:24:23.498559 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 6 23:24:23.499810 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 6 23:24:23.500964 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 6 23:24:23.502155 systemd[1]: Stopped target swap.target - Swaps.
Jul 6 23:24:23.503337 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 6 23:24:23.503444 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 6 23:24:23.504938 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:24:23.505679 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:24:23.506671 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 6 23:24:23.507981 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:24:23.509184 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 6 23:24:23.509289 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 6 23:24:23.510940 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 6 23:24:23.511048 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 6 23:24:23.511790 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 6 23:24:23.511935 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 6 23:24:23.524280 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 6 23:24:23.525842 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 6 23:24:23.528144 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 6 23:24:23.528303 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:24:23.531085 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 6 23:24:23.531225 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 6 23:24:23.540029 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 6 23:24:23.540141 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 6 23:24:23.545812 ignition[997]: INFO : Ignition 2.20.0
Jul 6 23:24:23.545812 ignition[997]: INFO : Stage: umount
Jul 6 23:24:23.547285 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:24:23.547285 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jul 6 23:24:23.547285 ignition[997]: INFO : umount: umount passed
Jul 6 23:24:23.547285 ignition[997]: INFO : Ignition finished successfully
Jul 6 23:24:23.551099 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 6 23:24:23.551209 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 6 23:24:23.551916 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 6 23:24:23.551967 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 6 23:24:23.553064 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 6 23:24:23.553111 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 6 23:24:23.555193 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 6 23:24:23.555241 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 6 23:24:23.557368 systemd[1]: Stopped target network.target - Network.
Jul 6 23:24:23.560230 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 6 23:24:23.560286 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 6 23:24:23.593535 systemd[1]: Stopped target paths.target - Path Units.
Jul 6 23:24:23.594642 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 6 23:24:23.594708 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:24:23.596087 systemd[1]: Stopped target slices.target - Slice Units.
Jul 6 23:24:23.597085 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 6 23:24:23.598428 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 6 23:24:23.598476 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 6 23:24:23.599885 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 6 23:24:23.599922 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 6 23:24:23.600941 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 6 23:24:23.600994 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 6 23:24:23.602221 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 6 23:24:23.602269 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 6 23:24:23.603365 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 6 23:24:23.604589 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 6 23:24:23.608281 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 6 23:24:23.608959 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 6 23:24:23.609081 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 6 23:24:23.612922 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 6 23:24:23.613129 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 6 23:24:23.613244 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 6 23:24:23.615156 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 6 23:24:23.615438 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 6 23:24:23.615545 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 6 23:24:23.617609 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 6 23:24:23.617679 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:24:23.619049 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 6 23:24:23.619101 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 6 23:24:23.624914 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 6 23:24:23.625434 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 6 23:24:23.625491 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 6 23:24:23.627272 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 6 23:24:23.627329 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:24:23.630059 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 6 23:24:23.630132 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:24:23.631572 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 6 23:24:23.631623 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:24:23.633222 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:24:23.635897 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 6 23:24:23.635966 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 6 23:24:23.647122 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 6 23:24:23.647250 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 6 23:24:23.649258 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 6 23:24:23.649419 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:24:23.650898 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 6 23:24:23.650968 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:24:23.652094 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 6 23:24:23.652131 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:24:23.653343 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 6 23:24:23.653393 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 6 23:24:23.655211 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 6 23:24:23.655263 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 6 23:24:23.656604 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 6 23:24:23.656653 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:24:23.668148 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 6 23:24:23.668702 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 6 23:24:23.668756 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:24:23.669837 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 6 23:24:23.669886 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 6 23:24:23.670457 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 6 23:24:23.670501 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:24:23.671078 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:24:23.671123 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:24:23.676084 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 6 23:24:23.676147 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 6 23:24:23.676525 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 6 23:24:23.676632 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 6 23:24:23.677862 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 6 23:24:23.684947 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 6 23:24:23.691674 systemd[1]: Switching root.
Jul 6 23:24:23.730392 systemd-journald[178]: Journal stopped
Jul 6 23:24:24.793029 systemd-journald[178]: Received SIGTERM from PID 1 (systemd).
Jul 6 23:24:24.793053 kernel: SELinux: policy capability network_peer_controls=1
Jul 6 23:24:24.793065 kernel: SELinux: policy capability open_perms=1
Jul 6 23:24:24.793074 kernel: SELinux: policy capability extended_socket_class=1
Jul 6 23:24:24.793083 kernel: SELinux: policy capability always_check_network=0
Jul 6 23:24:24.793095 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 6 23:24:24.793105 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 6 23:24:24.793114 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 6 23:24:24.793122 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 6 23:24:24.793131 kernel: audit: type=1403 audit(1751844263.857:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 6 23:24:24.793141 systemd[1]: Successfully loaded SELinux policy in 45.588ms.
Jul 6 23:24:24.793154 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.287ms.
Jul 6 23:24:24.793164 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 6 23:24:24.793174 systemd[1]: Detected virtualization kvm.
Jul 6 23:24:24.793185 systemd[1]: Detected architecture x86-64.
Jul 6 23:24:24.793194 systemd[1]: Detected first boot.
Jul 6 23:24:24.793207 systemd[1]: Initializing machine ID from random generator.
Jul 6 23:24:24.793217 zram_generator::config[1042]: No configuration found.
Jul 6 23:24:24.793227 kernel: Guest personality initialized and is inactive
Jul 6 23:24:24.793236 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jul 6 23:24:24.793245 kernel: Initialized host personality
Jul 6 23:24:24.793254 kernel: NET: Registered PF_VSOCK protocol family
Jul 6 23:24:24.793264 systemd[1]: Populated /etc with preset unit settings.
Jul 6 23:24:24.793276 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 6 23:24:24.793286 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 6 23:24:24.793295 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 6 23:24:24.793305 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 6 23:24:24.793315 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 6 23:24:24.793325 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 6 23:24:24.793336 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 6 23:24:24.793348 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 6 23:24:24.793358 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 6 23:24:24.793368 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 6 23:24:24.793378 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 6 23:24:24.793388 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 6 23:24:24.793397 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:24:24.793407 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:24:24.793417 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 6 23:24:24.793427 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 6 23:24:24.793439 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 6 23:24:24.793452 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 6 23:24:24.793463 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 6 23:24:24.793473 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:24:24.793483 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 6 23:24:24.793493 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 6 23:24:24.793502 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 6 23:24:24.793515 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 6 23:24:24.793525 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:24:24.793535 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 6 23:24:24.793545 systemd[1]: Reached target slices.target - Slice Units.
Jul 6 23:24:24.793555 systemd[1]: Reached target swap.target - Swaps.
Jul 6 23:24:24.793565 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 6 23:24:24.793766 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 6 23:24:24.794201 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 6 23:24:24.794222 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:24:24.794238 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:24:24.794248 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:24:24.794259 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 6 23:24:24.794269 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 6 23:24:24.794281 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 6 23:24:24.794292 systemd[1]: Mounting media.mount - External Media Directory...
Jul 6 23:24:24.794302 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:24:24.794313 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 6 23:24:24.794323 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 6 23:24:24.794333 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 6 23:24:24.794344 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 6 23:24:24.794354 systemd[1]: Reached target machines.target - Containers.
Jul 6 23:24:24.794366 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 6 23:24:24.794377 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:24:24.794387 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 6 23:24:24.794398 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 6 23:24:24.794463 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:24:24.794476 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 6 23:24:24.794486 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:24:24.794496 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 6 23:24:24.794507 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 6 23:24:24.794520 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 6 23:24:24.794531 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 6 23:24:24.794541 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 6 23:24:24.794551 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 6 23:24:24.794561 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 6 23:24:24.794572 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 6 23:24:24.794582 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 6 23:24:24.794592 kernel: fuse: init (API version 7.39)
Jul 6 23:24:24.794808 kernel: loop: module loaded
Jul 6 23:24:24.794819 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 6 23:24:24.794830 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 6 23:24:24.794840 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 6 23:24:24.794850 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 6 23:24:24.794860 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 6 23:24:24.794893 systemd-journald[1125]: Collecting audit messages is disabled.
Jul 6 23:24:24.794921 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 6 23:24:24.794932 systemd[1]: Stopped verity-setup.service.
Jul 6 23:24:24.794942 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:24:24.794953 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 6 23:24:24.794966 systemd-journald[1125]: Journal started
Jul 6 23:24:24.794985 systemd-journald[1125]: Runtime Journal (/run/log/journal/6d41bddb57434495b10ab86410876e45) is 8M, max 78.3M, 70.3M free.
Jul 6 23:24:24.479572 systemd[1]: Queued start job for default target multi-user.target.
Jul 6 23:24:24.491445 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jul 6 23:24:24.492102 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 6 23:24:24.804179 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 6 23:24:24.804212 kernel: ACPI: bus type drm_connector registered
Jul 6 23:24:24.803732 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 6 23:24:24.805708 systemd[1]: Mounted media.mount - External Media Directory.
Jul 6 23:24:24.806341 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 6 23:24:24.808978 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 6 23:24:24.809947 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 6 23:24:24.811105 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 6 23:24:24.814485 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:24:24.815738 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 6 23:24:24.816049 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 6 23:24:24.817282 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:24:24.817563 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:24:24.818646 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 6 23:24:24.819270 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 6 23:24:24.820267 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:24:24.820518 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:24:24.821613 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 6 23:24:24.821898 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 6 23:24:24.822805 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:24:24.823170 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:24:24.824333 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:24:24.825279 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 6 23:24:24.826306 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 6 23:24:24.830848 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 6 23:24:24.843562 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 6 23:24:24.850864 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 6 23:24:24.855407 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 6 23:24:24.856099 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 6 23:24:24.856183 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 6 23:24:24.857555 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 6 23:24:24.873293 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 6 23:24:24.878930 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 6 23:24:24.880460 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:24:24.882559 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 6 23:24:24.885205 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 6 23:24:24.886307 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 6 23:24:24.891952 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 6 23:24:24.893670 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 6 23:24:24.895857 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:24:24.918612 systemd-journald[1125]: Time spent on flushing to /var/log/journal/6d41bddb57434495b10ab86410876e45 is 79.502ms for 988 entries.
Jul 6 23:24:24.918612 systemd-journald[1125]: System Journal (/var/log/journal/6d41bddb57434495b10ab86410876e45) is 8M, max 195.6M, 187.6M free.
Jul 6 23:24:25.022755 systemd-journald[1125]: Received client request to flush runtime journal.
Jul 6 23:24:25.027870 kernel: loop0: detected capacity change from 0 to 224512
Jul 6 23:24:25.027894 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 6 23:24:24.901898 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 6 23:24:24.906597 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 6 23:24:24.910543 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 6 23:24:24.911213 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 6 23:24:24.912545 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 6 23:24:24.945102 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 6 23:24:24.946348 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 6 23:24:24.954902 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 6 23:24:24.966467 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:24:24.975156 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 6 23:24:25.001603 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:24:25.003685 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 6 23:24:25.019361 udevadm[1178]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jul 6 23:24:25.023705 systemd-tmpfiles[1169]: ACLs are not supported, ignoring.
Jul 6 23:24:25.023716 systemd-tmpfiles[1169]: ACLs are not supported, ignoring.
Jul 6 23:24:25.033488 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 6 23:24:25.039176 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 6 23:24:25.047745 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 6 23:24:25.060796 kernel: loop1: detected capacity change from 0 to 8
Jul 6 23:24:25.078381 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 6 23:24:25.087016 kernel: loop2: detected capacity change from 0 to 147912
Jul 6 23:24:25.085937 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 6 23:24:25.103516 systemd-tmpfiles[1192]: ACLs are not supported, ignoring.
Jul 6 23:24:25.104014 systemd-tmpfiles[1192]: ACLs are not supported, ignoring.
Jul 6 23:24:25.109603 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:24:25.142893 kernel: loop3: detected capacity change from 0 to 138176
Jul 6 23:24:25.188918 kernel: loop4: detected capacity change from 0 to 224512
Jul 6 23:24:25.207806 kernel: loop5: detected capacity change from 0 to 8
Jul 6 23:24:25.212798 kernel: loop6: detected capacity change from 0 to 147912
Jul 6 23:24:25.239797 kernel: loop7: detected capacity change from 0 to 138176
Jul 6 23:24:25.259616 (sd-merge)[1197]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'.
Jul 6 23:24:25.260268 (sd-merge)[1197]: Merged extensions into '/usr'.
Jul 6 23:24:25.264281 systemd[1]: Reload requested from client PID 1168 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 6 23:24:25.264295 systemd[1]: Reloading...
Jul 6 23:24:25.358427 zram_generator::config[1228]: No configuration found.
Jul 6 23:24:25.465651 ldconfig[1163]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 6 23:24:25.493873 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:24:25.554599 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 6 23:24:25.555148 systemd[1]: Reloading finished in 290 ms.
Jul 6 23:24:25.574071 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 6 23:24:25.575099 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 6 23:24:25.576015 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 6 23:24:25.585307 systemd[1]: Starting ensure-sysext.service...
Jul 6 23:24:25.588919 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 6 23:24:25.592939 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:24:25.610997 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 6 23:24:25.611247 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 6 23:24:25.611873 systemd[1]: Reload requested from client PID 1269 ('systemctl') (unit ensure-sysext.service)...
Jul 6 23:24:25.611884 systemd[1]: Reloading...
Jul 6 23:24:25.614099 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
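[The (sd-merge)[1197] entries above are systemd-sysext activating the system extension images staged on disk, including the kubernetes image that Ignition symlinked into /etc/extensions earlier in this log. A sysext image is only merged into /usr if it carries an extension-release file whose fields match the host; an illustrative sketch follows, with field values assumed rather than read from the actual image:]

    # Inside the extension image:
    # /usr/lib/extension-release.d/extension-release.kubernetes
    # sd-merge refuses the image unless ID (and SYSEXT_LEVEL or
    # VERSION_ID, if set) are compatible with the host's os-release.
    ID=flatcar
    SYSEXT_LEVEL=1.0
    ARCHITECTURE=x86-64

[After the merge, systemd reloads its unit set (the "Reload requested from client PID 1168 ('systemd-sysext')" entry) so that units shipped by the extensions become visible.]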
Jul 6 23:24:25.614343 systemd-tmpfiles[1270]: ACLs are not supported, ignoring.
Jul 6 23:24:25.614424 systemd-tmpfiles[1270]: ACLs are not supported, ignoring.
Jul 6 23:24:25.620172 systemd-tmpfiles[1270]: Detected autofs mount point /boot during canonicalization of boot.
Jul 6 23:24:25.620247 systemd-tmpfiles[1270]: Skipping /boot
Jul 6 23:24:25.633187 systemd-tmpfiles[1270]: Detected autofs mount point /boot during canonicalization of boot.
Jul 6 23:24:25.633245 systemd-tmpfiles[1270]: Skipping /boot
Jul 6 23:24:25.655545 systemd-udevd[1271]: Using default interface naming scheme 'v255'.
Jul 6 23:24:25.706804 zram_generator::config[1300]: No configuration found.
Jul 6 23:24:25.850808 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1301)
Jul 6 23:24:25.860045 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jul 6 23:24:25.876174 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:24:25.911837 kernel: ACPI: button: Power Button [PWRF]
Jul 6 23:24:25.921812 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jul 6 23:24:25.922082 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jul 6 23:24:25.927332 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jul 6 23:24:25.930797 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jul 6 23:24:25.944958 kernel: EDAC MC: Ver: 3.0.0
Jul 6 23:24:25.970420 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 6 23:24:25.972056 systemd[1]: Reloading finished in 359 ms.
Jul 6 23:24:25.982519 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:24:25.984401 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:24:26.012800 kernel: mousedev: PS/2 mouse device common for all mice
Jul 6 23:24:26.031961 systemd[1]: Finished ensure-sysext.service.
Jul 6 23:24:26.033110 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 6 23:24:26.057090 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jul 6 23:24:26.059946 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:24:26.064955 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 6 23:24:26.069736 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 6 23:24:26.070659 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:24:26.072131 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 6 23:24:26.077904 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:24:26.085283 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 6 23:24:26.094850 lvm[1380]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 6 23:24:26.094930 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:24:26.102997 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
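[The docker.socket warning repeated during the two reloads above ("ListenStream= references a path below legacy directory /var/run/") is systemd rewriting the socket path on the fly; the unit keeps working. A drop-in of this shape, with a hypothetical name and location, would make the fix permanent, which is what the log's "please update the unit file accordingly" is asking for:]

    # /etc/systemd/system/docker.socket.d/10-run-path.conf (illustrative)
    [Socket]
    # An empty assignment clears the inherited ListenStream list ...
    ListenStream=
    # ... and the socket is re-declared under /run instead of /var/run.
    ListenStream=/run/docker.sock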
Jul 6 23:24:26.103659 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:24:26.105951 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 6 23:24:26.107646 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 6 23:24:26.111887 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 6 23:24:26.121230 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 6 23:24:26.126457 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 6 23:24:26.137930 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 6 23:24:26.145452 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 6 23:24:26.150174 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:24:26.151904 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:24:26.153524 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 6 23:24:26.155450 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:24:26.155665 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:24:26.161462 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 6 23:24:26.162582 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 6 23:24:26.164141 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:24:26.164843 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:24:26.166317 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:24:26.167124 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:24:26.168549 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 6 23:24:26.187100 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 6 23:24:26.188832 augenrules[1419]: No rules
Jul 6 23:24:26.189637 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 6 23:24:26.190134 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 6 23:24:26.192546 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 6 23:24:26.198156 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:24:26.204447 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 6 23:24:26.205376 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 6 23:24:26.205440 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 6 23:24:26.206960 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 6 23:24:26.210860 lvm[1427]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 6 23:24:26.215909 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 6 23:24:26.240033 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 6 23:24:26.241050 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 6 23:24:26.248624 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 6 23:24:26.256528 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 6 23:24:26.321038 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 6 23:24:26.322880 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:24:26.364870 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 6 23:24:26.365528 systemd[1]: Reached target time-set.target - System Time Set.
Jul 6 23:24:26.387322 systemd-resolved[1399]: Positive Trust Anchors:
Jul 6 23:24:26.387334 systemd-resolved[1399]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 6 23:24:26.387361 systemd-resolved[1399]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 6 23:24:26.389755 systemd-networkd[1396]: lo: Link UP
Jul 6 23:24:26.390004 systemd-networkd[1396]: lo: Gained carrier
Jul 6 23:24:26.390669 systemd-resolved[1399]: Defaulting to hostname 'linux'.
Jul 6 23:24:26.391770 systemd-networkd[1396]: Enumeration completed
Jul 6 23:24:26.392256 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 6 23:24:26.392940 systemd-networkd[1396]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:24:26.392951 systemd-networkd[1396]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 6 23:24:26.393054 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 6 23:24:26.393684 systemd-networkd[1396]: eth0: Link UP
Jul 6 23:24:26.393691 systemd[1]: Reached target network.target - Network.
Jul 6 23:24:26.393692 systemd-networkd[1396]: eth0: Gained carrier
Jul 6 23:24:26.393704 systemd-networkd[1396]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:24:26.394364 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:24:26.395021 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 6 23:24:26.395742 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 6 23:24:26.396464 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 6 23:24:26.397285 systemd[1]: Started logrotate.timer - Daily rotation of log files.
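[eth0 above was configured from /usr/lib/systemd/network/zz-default.network, the catch-all .network file shipped in the Flatcar image; the "potentially unpredictable interface name" note is networkd pointing out that the match is not pinned to a stable name. Reconstructed from the behaviour seen in this log (a DHCPv4 lease for eth0 arrives a little further down), not copied from the image, the file is essentially:]

    # /usr/lib/systemd/network/zz-default.network (outline, reconstructed)
    [Match]
    # Wildcard match: claims any interface no earlier .network file took.
    Name=*

    [Network]
    # Produces the DHCPv4 address/gateway logged later in this boot.
    DHCP=yes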
Jul 6 23:24:26.397909 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 6 23:24:26.398472 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 6 23:24:26.399049 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 6 23:24:26.399084 systemd[1]: Reached target paths.target - Path Units.
Jul 6 23:24:26.399569 systemd[1]: Reached target timers.target - Timer Units.
Jul 6 23:24:26.401171 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 6 23:24:26.403174 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 6 23:24:26.405891 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 6 23:24:26.406605 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 6 23:24:26.407188 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 6 23:24:26.409580 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 6 23:24:26.410519 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 6 23:24:26.412253 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 6 23:24:26.413930 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 6 23:24:26.417239 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 6 23:24:26.418097 systemd[1]: Reached target sockets.target - Socket Units.
Jul 6 23:24:26.418654 systemd[1]: Reached target basic.target - Basic System.
Jul 6 23:24:26.419214 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 6 23:24:26.419247 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 6 23:24:26.420921 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 6 23:24:26.424918 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jul 6 23:24:26.427904 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 6 23:24:26.433979 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 6 23:24:26.436812 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 6 23:24:26.437720 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 6 23:24:26.440942 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 6 23:24:26.457143 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 6 23:24:26.463349 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 6 23:24:26.470347 jq[1453]: false
Jul 6 23:24:26.470973 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 6 23:24:26.482230 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 6 23:24:26.485089 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 6 23:24:26.486944 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 6 23:24:26.491442 systemd[1]: Starting update-engine.service - Update Engine...
Jul 6 23:24:26.495801 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 6 23:24:26.498886 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 6 23:24:26.508094 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 6 23:24:26.508315 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 6 23:24:26.510330 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 6 23:24:26.510920 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 6 23:24:26.511356 dbus-daemon[1452]: [system] SELinux support is enabled
Jul 6 23:24:26.513039 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 6 23:24:26.527158 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 6 23:24:26.527624 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 6 23:24:26.529180 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 6 23:24:26.529202 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 6 23:24:26.555253 jq[1468]: true
Jul 6 23:24:26.562087 (ntainerd)[1482]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 6 23:24:26.563065 extend-filesystems[1454]: Found loop4
Jul 6 23:24:26.563065 extend-filesystems[1454]: Found loop5
Jul 6 23:24:26.563065 extend-filesystems[1454]: Found loop6
Jul 6 23:24:26.563065 extend-filesystems[1454]: Found loop7
Jul 6 23:24:26.563065 extend-filesystems[1454]: Found sda
Jul 6 23:24:26.563065 extend-filesystems[1454]: Found sda1
Jul 6 23:24:26.563065 extend-filesystems[1454]: Found sda2
Jul 6 23:24:26.563065 extend-filesystems[1454]: Found sda3
Jul 6 23:24:26.563065 extend-filesystems[1454]: Found usr
Jul 6 23:24:26.563065 extend-filesystems[1454]: Found sda4
Jul 6 23:24:26.563065 extend-filesystems[1454]: Found sda6
Jul 6 23:24:26.626453 extend-filesystems[1454]: Found sda7
Jul 6 23:24:26.626453 extend-filesystems[1454]: Found sda9
Jul 6 23:24:26.626453 extend-filesystems[1454]: Checking size of /dev/sda9
Jul 6 23:24:26.629460 update_engine[1466]: I20250706 23:24:26.588603 1466 main.cc:92] Flatcar Update Engine starting
Jul 6 23:24:26.629460 update_engine[1466]: I20250706 23:24:26.616122 1466 update_check_scheduler.cc:74] Next update check in 10m15s
Jul 6 23:24:26.584978 systemd[1]: motdgen.service: Deactivated successfully.
Jul 6 23:24:26.633158 coreos-metadata[1451]: Jul 06 23:24:26.617 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Jul 6 23:24:26.633341 tar[1474]: linux-amd64/LICENSE
Jul 6 23:24:26.633341 tar[1474]: linux-amd64/helm
Jul 6 23:24:26.635827 extend-filesystems[1454]: Resized partition /dev/sda9
Jul 6 23:24:26.585247 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 6 23:24:26.636446 jq[1485]: true
Jul 6 23:24:26.623365 systemd[1]: Started update-engine.service - Update Engine.
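[update_engine above, and locksmithd started on the next line, take their policy from /etc/flatcar/update.conf, the file Ignition wrote during the files stage. Its exact contents are not in the log; a sketch with illustrative values that would match the behaviour recorded here (an update check scheduled, locksmithd running with strategy="reboot"):]

    # /etc/flatcar/update.conf (illustrative values)
    GROUP=stable
    # locksmithd coordinates the reboot after update_engine applies an update.
    REBOOT_STRATEGY=reboot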
Jul 6 23:24:26.632313 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 6 23:24:26.640172 extend-filesystems[1494]: resize2fs 1.47.1 (20-May-2024)
Jul 6 23:24:26.646319 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks
Jul 6 23:24:26.735598 bash[1511]: Updated "/home/core/.ssh/authorized_keys"
Jul 6 23:24:26.741211 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 6 23:24:26.751795 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1302)
Jul 6 23:24:26.753151 systemd[1]: Starting sshkeys.service...
Jul 6 23:24:26.779806 systemd-logind[1463]: Watching system buttons on /dev/input/event1 (Power Button)
Jul 6 23:24:26.779865 systemd-logind[1463]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 6 23:24:26.781025 systemd-logind[1463]: New seat seat0.
Jul 6 23:24:26.786328 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 6 23:24:26.801659 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jul 6 23:24:26.812079 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jul 6 23:24:26.864765 kernel: EXT4-fs (sda9): resized filesystem to 20360187
Jul 6 23:24:26.865072 systemd-networkd[1396]: eth0: DHCPv4 address 172.237.135.91/24, gateway 172.237.135.1 acquired from 23.205.167.148
Jul 6 23:24:26.873829 systemd-timesyncd[1402]: Network configuration changed, trying to establish connection.
Jul 6 23:24:26.866905 dbus-daemon[1452]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1396 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jul 6 23:24:26.882717 extend-filesystems[1494]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Jul 6 23:24:26.882717 extend-filesystems[1494]: old_desc_blocks = 1, new_desc_blocks = 10
Jul 6 23:24:26.882717 extend-filesystems[1494]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long.
Jul 6 23:24:26.881033 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jul 6 23:24:26.917002 extend-filesystems[1454]: Resized filesystem in /dev/sda9
Jul 6 23:24:26.883445 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 6 23:24:26.883691 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 6 23:24:26.971905 coreos-metadata[1518]: Jul 06 23:24:26.971 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Jul 6 23:24:27.026797 sshd_keygen[1470]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 6 23:24:27.050083 locksmithd[1491]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 6 23:24:27.051465 containerd[1482]: time="2025-07-06T23:24:27.050966973Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jul 6 23:24:27.082841 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 6 23:24:27.087878 coreos-metadata[1518]: Jul 06 23:24:27.087 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1
Jul 6 23:24:27.091069 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 6 23:24:27.094618 containerd[1482]: time="2025-07-06T23:24:27.094583491Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 6 23:24:27.096173 containerd[1482]: time="2025-07-06T23:24:27.096132981Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 6 23:24:27.096173 containerd[1482]: time="2025-07-06T23:24:27.096159110Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 6 23:24:27.096173 containerd[1482]: time="2025-07-06T23:24:27.096173070Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 6 23:24:27.096355 containerd[1482]: time="2025-07-06T23:24:27.096334630Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 6 23:24:27.096383 containerd[1482]: time="2025-07-06T23:24:27.096355950Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 6 23:24:27.096460 containerd[1482]: time="2025-07-06T23:24:27.096438980Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 6 23:24:27.096481 containerd[1482]: time="2025-07-06T23:24:27.096457710Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 6 23:24:27.096666 containerd[1482]: time="2025-07-06T23:24:27.096643680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 6 23:24:27.096687 containerd[1482]: time="2025-07-06T23:24:27.096663350Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 6 23:24:27.096687 containerd[1482]: time="2025-07-06T23:24:27.096675070Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jul 6 23:24:27.096687 containerd[1482]: time="2025-07-06T23:24:27.096683340Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 6 23:24:27.096805 containerd[1482]: time="2025-07-06T23:24:27.096768370Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 6 23:24:27.097049 containerd[1482]: time="2025-07-06T23:24:27.097027610Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 6 23:24:27.097230 containerd[1482]: time="2025-07-06T23:24:27.097209290Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 6 23:24:27.097262 containerd[1482]: time="2025-07-06T23:24:27.097227820Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 6 23:24:27.097340 containerd[1482]: time="2025-07-06T23:24:27.097321300Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 6 23:24:27.097398 containerd[1482]: time="2025-07-06T23:24:27.097380270Z" level=info msg="metadata content store policy set" policy=shared
Jul 6 23:24:27.101491 containerd[1482]: time="2025-07-06T23:24:27.101465988Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 6 23:24:27.101529 containerd[1482]: time="2025-07-06T23:24:27.101514758Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 6 23:24:27.101550 containerd[1482]: time="2025-07-06T23:24:27.101529528Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 6 23:24:27.101550 containerd[1482]: time="2025-07-06T23:24:27.101543078Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 6 23:24:27.101596 containerd[1482]: time="2025-07-06T23:24:27.101586728Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 6 23:24:27.103844 containerd[1482]: time="2025-07-06T23:24:27.101916688Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 6 23:24:27.103844 containerd[1482]: time="2025-07-06T23:24:27.102550377Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 6 23:24:27.103844 containerd[1482]: time="2025-07-06T23:24:27.102654557Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 6 23:24:27.103844 containerd[1482]: time="2025-07-06T23:24:27.102668457Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 6 23:24:27.103844 containerd[1482]: time="2025-07-06T23:24:27.102679887Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 6 23:24:27.103844 containerd[1482]: time="2025-07-06T23:24:27.102691867Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 6 23:24:27.103844 containerd[1482]: time="2025-07-06T23:24:27.102703327Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 6 23:24:27.103844 containerd[1482]: time="2025-07-06T23:24:27.102712967Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 6 23:24:27.103844 containerd[1482]: time="2025-07-06T23:24:27.102724087Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 6 23:24:27.103844 containerd[1482]: time="2025-07-06T23:24:27.102739247Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 6 23:24:27.103844 containerd[1482]: time="2025-07-06T23:24:27.102750007Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 6 23:24:27.103844 containerd[1482]: time="2025-07-06T23:24:27.102760257Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 6 23:24:27.103844 containerd[1482]: time="2025-07-06T23:24:27.102769317Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..."
type=io.containerd.service.v1 Jul 6 23:24:27.103844 containerd[1482]: time="2025-07-06T23:24:27.102801367Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 6 23:24:27.104102 containerd[1482]: time="2025-07-06T23:24:27.102812677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 6 23:24:27.104102 containerd[1482]: time="2025-07-06T23:24:27.102822817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 6 23:24:27.104102 containerd[1482]: time="2025-07-06T23:24:27.102833197Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 6 23:24:27.104102 containerd[1482]: time="2025-07-06T23:24:27.102843597Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 6 23:24:27.104102 containerd[1482]: time="2025-07-06T23:24:27.102853817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 6 23:24:27.104102 containerd[1482]: time="2025-07-06T23:24:27.102863697Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 6 23:24:27.104102 containerd[1482]: time="2025-07-06T23:24:27.102873827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 6 23:24:27.104102 containerd[1482]: time="2025-07-06T23:24:27.102885947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 6 23:24:27.104102 containerd[1482]: time="2025-07-06T23:24:27.102898487Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 6 23:24:27.104102 containerd[1482]: time="2025-07-06T23:24:27.102908167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 6 23:24:27.104102 containerd[1482]: time="2025-07-06T23:24:27.102917777Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 6 23:24:27.104102 containerd[1482]: time="2025-07-06T23:24:27.102933837Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 6 23:24:27.104102 containerd[1482]: time="2025-07-06T23:24:27.102945867Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 6 23:24:27.104102 containerd[1482]: time="2025-07-06T23:24:27.102961827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 6 23:24:27.104102 containerd[1482]: time="2025-07-06T23:24:27.102972187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 6 23:24:27.104333 containerd[1482]: time="2025-07-06T23:24:27.102982237Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 6 23:24:27.104333 containerd[1482]: time="2025-07-06T23:24:27.103660657Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 6 23:24:27.104333 containerd[1482]: time="2025-07-06T23:24:27.103678977Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 6 23:24:27.104333 containerd[1482]: time="2025-07-06T23:24:27.103687947Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 6 23:24:27.104333 containerd[1482]: time="2025-07-06T23:24:27.103752757Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 6 23:24:27.104333 containerd[1482]: time="2025-07-06T23:24:27.103764437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 6 23:24:27.104333 containerd[1482]: time="2025-07-06T23:24:27.103792157Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 6 23:24:27.104333 containerd[1482]: time="2025-07-06T23:24:27.103802917Z" level=info msg="NRI interface is disabled by configuration." Jul 6 23:24:27.104333 containerd[1482]: time="2025-07-06T23:24:27.103822387Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 6 23:24:27.104476 containerd[1482]: time="2025-07-06T23:24:27.104076067Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 6 23:24:27.104476 containerd[1482]: time="2025-07-06T23:24:27.104114357Z" level=info msg="Connect containerd service" Jul 6 23:24:27.104476 containerd[1482]: time="2025-07-06T23:24:27.104137287Z" level=info msg="using legacy CRI server" Jul 6 23:24:27.104476 containerd[1482]: time="2025-07-06T23:24:27.104143347Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 6 23:24:27.104476 containerd[1482]: time="2025-07-06T23:24:27.104233716Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 6 23:24:27.108599 containerd[1482]: time="2025-07-06T23:24:27.105701106Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 6 23:24:27.108599 containerd[1482]: time="2025-07-06T23:24:27.105806486Z" level=info msg="Start subscribing containerd event" Jul 6 23:24:27.108599 containerd[1482]: time="2025-07-06T23:24:27.105938016Z" level=info msg="Start recovering state" Jul 6 23:24:27.108599 containerd[1482]: time="2025-07-06T23:24:27.105987726Z" level=info msg="Start event monitor" Jul 6 23:24:27.108599 containerd[1482]: time="2025-07-06T23:24:27.106002086Z" level=info msg="Start snapshots syncer" Jul 6 23:24:27.108599 containerd[1482]: time="2025-07-06T23:24:27.106009256Z" level=info msg="Start cni network conf syncer for default" Jul 6 23:24:27.108599 containerd[1482]: time="2025-07-06T23:24:27.106015876Z" level=info msg="Start streaming server" Jul 6 23:24:27.108599 containerd[1482]: time="2025-07-06T23:24:27.106805635Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 6 23:24:27.108599 containerd[1482]: time="2025-07-06T23:24:27.106886945Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 6 23:24:27.110233 containerd[1482]: time="2025-07-06T23:24:27.110210303Z" level=info msg="containerd successfully booted in 0.061152s" Jul 6 23:24:27.110332 systemd[1]: Started containerd.service - containerd container runtime. Jul 6 23:24:27.114166 systemd[1]: issuegen.service: Deactivated successfully. Jul 6 23:24:27.114809 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 6 23:24:27.124501 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 6 23:24:27.128669 dbus-daemon[1452]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 6 23:24:27.129118 dbus-daemon[1452]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1523 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 6 23:24:27.135843 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jul 6 23:24:27.146123 systemd[1]: Starting polkit.service - Authorization Manager... 
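[Annotation] The long CRI config dump above is containerd's in-memory view after merging its defaults with /etc/containerd/config.toml. A minimal TOML sketch of settings that would produce the values visible in the log (overlayfs snapshotter, runc via io.containerd.runc.v2 with SystemdCgroup, pause:3.8, the standard CNI paths) — an illustration of the mapping, not the actual file shipped on this image:

    # containerd 1.7 config sketch; keys mirror the logged PluginConfig.
    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"
      [plugins."io.containerd.grpc.v1.cri".containerd]
        snapshotter = "overlayfs"
        default_runtime_name = "runc"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
      [plugins."io.containerd.grpc.v1.cri".cni]
        bin_dir = "/opt/cni/bin"
        conf_dir = "/etc/cni/net.d"

The "failed to load cni during init" error above is expected at this stage: /etc/cni/net.d is empty until a CNI plugin is installed, and the CRI plugin retries once a conflist appears.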
Jul 6 23:24:27.159887 polkitd[1548]: Started polkitd version 121 Jul 6 23:24:27.166388 polkitd[1548]: Loading rules from directory /etc/polkit-1/rules.d Jul 6 23:24:27.166444 polkitd[1548]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 6 23:24:27.166992 polkitd[1548]: Finished loading, compiling and executing 2 rules Jul 6 23:24:27.167142 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 6 23:24:27.167364 dbus-daemon[1452]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 6 23:24:27.167645 polkitd[1548]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 6 23:24:27.168987 systemd[1]: Started polkit.service - Authorization Manager. Jul 6 23:24:27.177156 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 6 23:24:27.178875 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 6 23:24:27.179592 systemd[1]: Reached target getty.target - Login Prompts. Jul 6 23:24:27.184580 systemd-resolved[1399]: System hostname changed to '172-237-135-91'. Jul 6 23:24:27.184672 systemd-hostnamed[1523]: Hostname set to <172-237-135-91> (transient) Jul 6 23:24:27.223466 coreos-metadata[1518]: Jul 06 23:24:27.223 INFO Fetch successful Jul 6 23:24:27.246487 update-ssh-keys[1561]: Updated "/home/core/.ssh/authorized_keys" Jul 6 23:24:27.247857 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 6 23:24:27.250109 systemd[1]: Finished sshkeys.service. Jul 6 23:24:27.417598 tar[1474]: linux-amd64/README.md Jul 6 23:24:27.428249 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 6 23:24:27.610038 coreos-metadata[1451]: Jul 06 23:24:27.609 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Jul 6 23:24:27.699048 coreos-metadata[1451]: Jul 06 23:24:27.698 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Jul 6 23:24:27.877501 coreos-metadata[1451]: Jul 06 23:24:27.877 INFO Fetch successful Jul 6 23:24:27.877630 coreos-metadata[1451]: Jul 06 23:24:27.877 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Jul 6 23:24:27.986929 systemd-networkd[1396]: eth0: Gained IPv6LL Jul 6 23:24:27.987461 systemd-timesyncd[1402]: Network configuration changed, trying to establish connection. Jul 6 23:24:27.989906 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 6 23:24:27.990990 systemd[1]: Reached target network-online.target - Network is Online. Jul 6 23:24:27.996932 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:24:27.999662 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 6 23:24:28.021865 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 6 23:24:28.128534 coreos-metadata[1451]: Jul 06 23:24:28.128 INFO Fetch successful Jul 6 23:24:28.209549 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 6 23:24:28.211756 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 6 23:24:29.046490 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:24:29.047506 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 6 23:24:29.048691 systemd[1]: Startup finished in 864ms (kernel) + 8.155s (initrd) + 5.235s (userspace) = 14.255s. 
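[Annotation] coreos-metadata's "Putting .../v1/token" followed by "Fetching .../v1/ssh-keys" above is a token-then-fetch exchange against the Linode metadata service: a PUT mints a short-lived token, which is then presented on subsequent GETs. A rough sketch of that exchange (the Metadata-Token header names below are an assumption for illustration; the log only shows the URLs):

    import urllib.request

    BASE = "http://169.254.169.254/v1"

    # Step 1: PUT /v1/token to mint a short-lived token.
    # Header names are assumed here, not taken from the log.
    req = urllib.request.Request(
        f"{BASE}/token", method="PUT",
        headers={"Metadata-Token-Expiry-Seconds": "3600"})
    token = urllib.request.urlopen(req).read().decode()

    # Step 2: GET /v1/ssh-keys with the token attached.
    req = urllib.request.Request(
        f"{BASE}/ssh-keys", headers={"Metadata-Token": token})
    print(urllib.request.urlopen(req).read().decode())

The "Fetch successful" entry above is the point where the fetched keys are handed to update-ssh-keys and written to /home/core/.ssh/authorized_keys.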
Jul 6 23:24:29.092220 (kubelet)[1605]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:24:29.488297 systemd-timesyncd[1402]: Network configuration changed, trying to establish connection. Jul 6 23:24:29.628241 kubelet[1605]: E0706 23:24:29.628186 1605 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:24:29.631437 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:24:29.631643 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:24:29.632084 systemd[1]: kubelet.service: Consumed 920ms CPU time, 264.9M memory peak. Jul 6 23:24:30.309369 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 6 23:24:30.314134 systemd[1]: Started sshd@0-172.237.135.91:22-147.75.109.163:46700.service - OpenSSH per-connection server daemon (147.75.109.163:46700). Jul 6 23:24:30.664535 sshd[1617]: Accepted publickey for core from 147.75.109.163 port 46700 ssh2: RSA SHA256:/eDCPZUdFWI+U3+wi39zDPruseM35VxqLVYPXblev1E Jul 6 23:24:30.666952 sshd-session[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:24:30.674264 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 6 23:24:30.678991 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 6 23:24:30.686006 systemd-logind[1463]: New session 1 of user core. Jul 6 23:24:30.692392 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 6 23:24:30.700519 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 6 23:24:30.704175 (systemd)[1621]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 6 23:24:30.707185 systemd-logind[1463]: New session c1 of user core. Jul 6 23:24:30.865524 systemd[1621]: Queued start job for default target default.target. Jul 6 23:24:30.874444 systemd[1621]: Created slice app.slice - User Application Slice. Jul 6 23:24:30.874480 systemd[1621]: Reached target paths.target - Paths. Jul 6 23:24:30.874527 systemd[1621]: Reached target timers.target - Timers. Jul 6 23:24:30.876703 systemd[1621]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 6 23:24:30.897730 systemd[1621]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 6 23:24:30.897885 systemd[1621]: Reached target sockets.target - Sockets. Jul 6 23:24:30.897931 systemd[1621]: Reached target basic.target - Basic System. Jul 6 23:24:30.897975 systemd[1621]: Reached target default.target - Main User Target. Jul 6 23:24:30.898007 systemd[1621]: Startup finished in 182ms. Jul 6 23:24:30.898178 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 6 23:24:30.905043 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 6 23:24:31.059880 systemd-timesyncd[1402]: Network configuration changed, trying to establish connection. Jul 6 23:24:31.170981 systemd[1]: Started sshd@1-172.237.135.91:22-147.75.109.163:46708.service - OpenSSH per-connection server daemon (147.75.109.163:46708). 
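[Annotation] The kubelet exit near the start of this stretch (open /var/lib/kubelet/config.yaml: no such file or directory) is the expected first-boot state: the unit starts before the config file exists, and that file is normally written by kubeadm during init/join; systemd simply restarts the unit until it appears. For orientation, a minimal sketch of what lands there, using only fields consistent with values seen later in this log (cgroupDriver, static pod path, client CA) — not the exact file kubeadm generates:

    # /var/lib/kubelet/config.yaml (sketch)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd                # matches CgroupDriver:systemd in the NodeConfig dump below
    staticPodPath: /etc/kubernetes/manifests
    authentication:
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt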
Jul 6 23:24:31.499412 sshd[1633]: Accepted publickey for core from 147.75.109.163 port 46708 ssh2: RSA SHA256:/eDCPZUdFWI+U3+wi39zDPruseM35VxqLVYPXblev1E Jul 6 23:24:31.501070 sshd-session[1633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:24:31.505460 systemd-logind[1463]: New session 2 of user core. Jul 6 23:24:31.516916 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 6 23:24:31.746748 sshd[1635]: Connection closed by 147.75.109.163 port 46708 Jul 6 23:24:31.747266 sshd-session[1633]: pam_unix(sshd:session): session closed for user core Jul 6 23:24:31.750763 systemd[1]: sshd@1-172.237.135.91:22-147.75.109.163:46708.service: Deactivated successfully. Jul 6 23:24:31.752394 systemd[1]: session-2.scope: Deactivated successfully. Jul 6 23:24:31.754085 systemd-logind[1463]: Session 2 logged out. Waiting for processes to exit. Jul 6 23:24:31.755207 systemd-logind[1463]: Removed session 2. Jul 6 23:24:31.808213 systemd[1]: Started sshd@2-172.237.135.91:22-147.75.109.163:46714.service - OpenSSH per-connection server daemon (147.75.109.163:46714). Jul 6 23:24:32.145770 sshd[1641]: Accepted publickey for core from 147.75.109.163 port 46714 ssh2: RSA SHA256:/eDCPZUdFWI+U3+wi39zDPruseM35VxqLVYPXblev1E Jul 6 23:24:32.146957 sshd-session[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:24:32.151021 systemd-logind[1463]: New session 3 of user core. Jul 6 23:24:32.162912 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 6 23:24:32.390446 sshd[1643]: Connection closed by 147.75.109.163 port 46714 Jul 6 23:24:32.391259 sshd-session[1641]: pam_unix(sshd:session): session closed for user core Jul 6 23:24:32.395385 systemd[1]: sshd@2-172.237.135.91:22-147.75.109.163:46714.service: Deactivated successfully. Jul 6 23:24:32.397353 systemd[1]: session-3.scope: Deactivated successfully. Jul 6 23:24:32.398076 systemd-logind[1463]: Session 3 logged out. Waiting for processes to exit. Jul 6 23:24:32.398912 systemd-logind[1463]: Removed session 3. Jul 6 23:24:32.454972 systemd[1]: Started sshd@3-172.237.135.91:22-147.75.109.163:46726.service - OpenSSH per-connection server daemon (147.75.109.163:46726). Jul 6 23:24:32.781599 sshd[1649]: Accepted publickey for core from 147.75.109.163 port 46726 ssh2: RSA SHA256:/eDCPZUdFWI+U3+wi39zDPruseM35VxqLVYPXblev1E Jul 6 23:24:32.782987 sshd-session[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:24:32.787062 systemd-logind[1463]: New session 4 of user core. Jul 6 23:24:32.796955 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 6 23:24:33.030468 sshd[1651]: Connection closed by 147.75.109.163 port 46726 Jul 6 23:24:33.031269 sshd-session[1649]: pam_unix(sshd:session): session closed for user core Jul 6 23:24:33.034958 systemd[1]: sshd@3-172.237.135.91:22-147.75.109.163:46726.service: Deactivated successfully. Jul 6 23:24:33.036762 systemd[1]: session-4.scope: Deactivated successfully. Jul 6 23:24:33.037470 systemd-logind[1463]: Session 4 logged out. Waiting for processes to exit. Jul 6 23:24:33.038236 systemd-logind[1463]: Removed session 4. Jul 6 23:24:33.099152 systemd[1]: Started sshd@4-172.237.135.91:22-147.75.109.163:46732.service - OpenSSH per-connection server daemon (147.75.109.163:46732). 
Jul 6 23:24:33.441285 sshd[1657]: Accepted publickey for core from 147.75.109.163 port 46732 ssh2: RSA SHA256:/eDCPZUdFWI+U3+wi39zDPruseM35VxqLVYPXblev1E Jul 6 23:24:33.442735 sshd-session[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:24:33.446524 systemd-logind[1463]: New session 5 of user core. Jul 6 23:24:33.460888 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 6 23:24:33.653131 sudo[1660]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 6 23:24:33.653746 sudo[1660]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:24:33.672077 sudo[1660]: pam_unix(sudo:session): session closed for user root Jul 6 23:24:33.725486 sshd[1659]: Connection closed by 147.75.109.163 port 46732 Jul 6 23:24:33.726376 sshd-session[1657]: pam_unix(sshd:session): session closed for user core Jul 6 23:24:33.731361 systemd[1]: sshd@4-172.237.135.91:22-147.75.109.163:46732.service: Deactivated successfully. Jul 6 23:24:33.733274 systemd[1]: session-5.scope: Deactivated successfully. Jul 6 23:24:33.734003 systemd-logind[1463]: Session 5 logged out. Waiting for processes to exit. Jul 6 23:24:33.734942 systemd-logind[1463]: Removed session 5. Jul 6 23:24:33.792027 systemd[1]: Started sshd@5-172.237.135.91:22-147.75.109.163:46736.service - OpenSSH per-connection server daemon (147.75.109.163:46736). Jul 6 23:24:34.119719 sshd[1666]: Accepted publickey for core from 147.75.109.163 port 46736 ssh2: RSA SHA256:/eDCPZUdFWI+U3+wi39zDPruseM35VxqLVYPXblev1E Jul 6 23:24:34.121809 sshd-session[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:24:34.126713 systemd-logind[1463]: New session 6 of user core. Jul 6 23:24:34.132926 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 6 23:24:34.316287 sudo[1670]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 6 23:24:34.316645 sudo[1670]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:24:34.320117 sudo[1670]: pam_unix(sudo:session): session closed for user root Jul 6 23:24:34.326150 sudo[1669]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 6 23:24:34.326534 sudo[1669]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:24:34.350321 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 6 23:24:34.377895 augenrules[1692]: No rules Jul 6 23:24:34.379740 systemd[1]: audit-rules.service: Deactivated successfully. Jul 6 23:24:34.380014 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 6 23:24:34.380978 sudo[1669]: pam_unix(sudo:session): session closed for user root Jul 6 23:24:34.431337 sshd[1668]: Connection closed by 147.75.109.163 port 46736 Jul 6 23:24:34.431710 sshd-session[1666]: pam_unix(sshd:session): session closed for user core Jul 6 23:24:34.435573 systemd[1]: sshd@5-172.237.135.91:22-147.75.109.163:46736.service: Deactivated successfully. Jul 6 23:24:34.437537 systemd[1]: session-6.scope: Deactivated successfully. Jul 6 23:24:34.438512 systemd-logind[1463]: Session 6 logged out. Waiting for processes to exit. Jul 6 23:24:34.439520 systemd-logind[1463]: Removed session 6. Jul 6 23:24:34.503009 systemd[1]: Started sshd@6-172.237.135.91:22-147.75.109.163:46742.service - OpenSSH per-connection server daemon (147.75.109.163:46742). 
Jul 6 23:24:34.829517 sshd[1701]: Accepted publickey for core from 147.75.109.163 port 46742 ssh2: RSA SHA256:/eDCPZUdFWI+U3+wi39zDPruseM35VxqLVYPXblev1E Jul 6 23:24:34.831173 sshd-session[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:24:34.835529 systemd-logind[1463]: New session 7 of user core. Jul 6 23:24:34.840916 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 6 23:24:35.026677 sudo[1704]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 6 23:24:35.027273 sudo[1704]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:24:35.302121 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 6 23:24:35.302363 (dockerd)[1722]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 6 23:24:35.581877 dockerd[1722]: time="2025-07-06T23:24:35.581438467Z" level=info msg="Starting up" Jul 6 23:24:35.654707 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3610512521-merged.mount: Deactivated successfully. Jul 6 23:24:35.682235 dockerd[1722]: time="2025-07-06T23:24:35.681998247Z" level=info msg="Loading containers: start." Jul 6 23:24:35.845819 kernel: Initializing XFRM netlink socket Jul 6 23:24:35.871021 systemd-timesyncd[1402]: Network configuration changed, trying to establish connection. Jul 6 23:24:35.871331 systemd-timesyncd[1402]: Network configuration changed, trying to establish connection. Jul 6 23:24:35.879034 systemd-timesyncd[1402]: Network configuration changed, trying to establish connection. Jul 6 23:24:35.933474 systemd-networkd[1396]: docker0: Link UP Jul 6 23:24:35.934201 systemd-timesyncd[1402]: Network configuration changed, trying to establish connection. Jul 6 23:24:35.971381 dockerd[1722]: time="2025-07-06T23:24:35.971341032Z" level=info msg="Loading containers: done." Jul 6 23:24:35.988653 dockerd[1722]: time="2025-07-06T23:24:35.988609293Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 6 23:24:35.989080 dockerd[1722]: time="2025-07-06T23:24:35.988678153Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jul 6 23:24:35.989080 dockerd[1722]: time="2025-07-06T23:24:35.989017773Z" level=info msg="Daemon has completed initialization" Jul 6 23:24:36.016229 dockerd[1722]: time="2025-07-06T23:24:36.016168809Z" level=info msg="API listen on /run/docker.sock" Jul 6 23:24:36.018974 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 6 23:24:36.650320 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3937254302-merged.mount: Deactivated successfully. Jul 6 23:24:36.863595 containerd[1482]: time="2025-07-06T23:24:36.863561146Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 6 23:24:37.584360 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2616338078.mount: Deactivated successfully. 
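[Annotation] Once dockerd logs "API listen on /run/docker.sock" above, the daemon is reachable over that UNIX socket with plain HTTP. A small stdlib-only sketch of poking its health endpoint (/_ping is Docker's standard liveness route):

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials a UNIX socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/docker.sock")
    conn.request("GET", "/_ping")
    resp = conn.getresponse()
    print(resp.status, resp.read().decode())   # expect: 200 OK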
Jul 6 23:24:38.922735 containerd[1482]: time="2025-07-06T23:24:38.922625286Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:24:38.923559 containerd[1482]: time="2025-07-06T23:24:38.923482135Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799051" Jul 6 23:24:38.924254 containerd[1482]: time="2025-07-06T23:24:38.924232555Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:24:38.927805 containerd[1482]: time="2025-07-06T23:24:38.927739503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:24:38.929030 containerd[1482]: time="2025-07-06T23:24:38.928923783Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 2.065329597s" Jul 6 23:24:38.929030 containerd[1482]: time="2025-07-06T23:24:38.928982873Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\"" Jul 6 23:24:38.930841 containerd[1482]: time="2025-07-06T23:24:38.930656122Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 6 23:24:39.882101 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 6 23:24:39.887184 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:24:40.051960 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:24:40.053097 (kubelet)[1976]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:24:40.096789 kubelet[1976]: E0706 23:24:40.096460 1976 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:24:40.101606 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:24:40.101805 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:24:40.102168 systemd[1]: kubelet.service: Consumed 179ms CPU time, 108.7M memory peak. 
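[Annotation] Each PullImage record above pairs an image size with a wall-clock duration, so effective registry throughput falls out directly. For the kube-apiserver pull that follows:

    # Numbers from the kube-apiserver PullImage record below.
    size_bytes = 28_795_845          # size "28795845"
    seconds = 2.065329597            # "in 2.065329597s"
    print(f"{size_bytes / seconds / 1e6:.1f} MB/s")   # ≈ 13.9 MB/s

The same calculation applies to the controller-manager, scheduler, proxy, coredns, pause, and etcd pulls that follow.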
Jul 6 23:24:40.483234 containerd[1482]: time="2025-07-06T23:24:40.483164565Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:24:40.484414 containerd[1482]: time="2025-07-06T23:24:40.484360235Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24783918" Jul 6 23:24:40.485725 containerd[1482]: time="2025-07-06T23:24:40.485291644Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:24:40.487965 containerd[1482]: time="2025-07-06T23:24:40.487923243Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:24:40.489019 containerd[1482]: time="2025-07-06T23:24:40.488986482Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 1.55830722s" Jul 6 23:24:40.489065 containerd[1482]: time="2025-07-06T23:24:40.489018642Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\"" Jul 6 23:24:40.489645 containerd[1482]: time="2025-07-06T23:24:40.489597832Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 6 23:24:41.682539 containerd[1482]: time="2025-07-06T23:24:41.682488566Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:24:41.683593 containerd[1482]: time="2025-07-06T23:24:41.683554795Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176922" Jul 6 23:24:41.684501 containerd[1482]: time="2025-07-06T23:24:41.684208465Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:24:41.686691 containerd[1482]: time="2025-07-06T23:24:41.686663533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:24:41.690543 containerd[1482]: time="2025-07-06T23:24:41.690517202Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 1.20087065s" Jul 6 23:24:41.690578 containerd[1482]: time="2025-07-06T23:24:41.690545362Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\"" Jul 6 23:24:41.691327 containerd[1482]: 
time="2025-07-06T23:24:41.691293821Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 6 23:24:42.621175 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1469224227.mount: Deactivated successfully. Jul 6 23:24:44.292969 containerd[1482]: time="2025-07-06T23:24:44.292890720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:24:44.293906 containerd[1482]: time="2025-07-06T23:24:44.293531790Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895369" Jul 6 23:24:44.294183 containerd[1482]: time="2025-07-06T23:24:44.294152069Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:24:44.295555 containerd[1482]: time="2025-07-06T23:24:44.295516909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:24:44.296668 containerd[1482]: time="2025-07-06T23:24:44.296133708Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 2.604805817s" Jul 6 23:24:44.296668 containerd[1482]: time="2025-07-06T23:24:44.296161698Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\"" Jul 6 23:24:44.296989 containerd[1482]: time="2025-07-06T23:24:44.296972998Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 6 23:24:44.856698 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount28318977.mount: Deactivated successfully. 
Jul 6 23:24:45.619976 containerd[1482]: time="2025-07-06T23:24:45.619914276Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:24:45.621211 containerd[1482]: time="2025-07-06T23:24:45.621118966Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565247" Jul 6 23:24:45.622315 containerd[1482]: time="2025-07-06T23:24:45.622151095Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:24:45.624494 containerd[1482]: time="2025-07-06T23:24:45.624473384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:24:45.625618 containerd[1482]: time="2025-07-06T23:24:45.625574294Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.328517906s" Jul 6 23:24:45.625655 containerd[1482]: time="2025-07-06T23:24:45.625618704Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 6 23:24:45.626669 containerd[1482]: time="2025-07-06T23:24:45.626623563Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 6 23:24:46.117270 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1090087098.mount: Deactivated successfully. 
Jul 6 23:24:46.123228 containerd[1482]: time="2025-07-06T23:24:46.122419285Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:24:46.123228 containerd[1482]: time="2025-07-06T23:24:46.123190145Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321144" Jul 6 23:24:46.123472 containerd[1482]: time="2025-07-06T23:24:46.123451855Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:24:46.125264 containerd[1482]: time="2025-07-06T23:24:46.125243604Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:24:46.125974 containerd[1482]: time="2025-07-06T23:24:46.125926943Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 499.2689ms" Jul 6 23:24:46.125974 containerd[1482]: time="2025-07-06T23:24:46.125971793Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 6 23:24:46.126755 containerd[1482]: time="2025-07-06T23:24:46.126682943Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 6 23:24:46.665370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2326527345.mount: Deactivated successfully. Jul 6 23:24:48.168559 containerd[1482]: time="2025-07-06T23:24:48.168485142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:24:48.169978 containerd[1482]: time="2025-07-06T23:24:48.169699021Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551366" Jul 6 23:24:48.170951 containerd[1482]: time="2025-07-06T23:24:48.170485691Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:24:48.173267 containerd[1482]: time="2025-07-06T23:24:48.173227089Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:24:48.174425 containerd[1482]: time="2025-07-06T23:24:48.174399509Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.047685026s" Jul 6 23:24:48.174506 containerd[1482]: time="2025-07-06T23:24:48.174490869Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jul 6 23:24:50.009734 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 6 23:24:50.009914 systemd[1]: kubelet.service: Consumed 179ms CPU time, 108.7M memory peak. Jul 6 23:24:50.016025 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:24:50.049249 systemd[1]: Reload requested from client PID 2132 ('systemctl') (unit session-7.scope)... Jul 6 23:24:50.049270 systemd[1]: Reloading... Jul 6 23:24:50.217534 zram_generator::config[2195]: No configuration found. Jul 6 23:24:50.317305 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:24:50.410278 systemd[1]: Reloading finished in 360 ms. Jul 6 23:24:50.455570 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:24:50.466268 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:24:50.467671 systemd[1]: kubelet.service: Deactivated successfully. Jul 6 23:24:50.468029 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:24:50.468071 systemd[1]: kubelet.service: Consumed 140ms CPU time, 98.3M memory peak. Jul 6 23:24:50.474111 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:24:50.617484 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:24:50.621615 (kubelet)[2234]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:24:50.663817 kubelet[2234]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:24:50.663817 kubelet[2234]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 6 23:24:50.663817 kubelet[2234]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
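[Annotation] The daemon reload above flags /usr/lib/systemd/system/docker.socket for referencing the legacy /var/run tree; systemd rewrites the path on the fly but asks for the unit to be updated. The clean fix is a one-line override, e.g. via a drop-in (the drop-in file name here is illustrative):

    # /etc/systemd/system/docker.socket.d/10-run-path.conf (sketch)
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock

Clearing ListenStream= first is the drop-in idiom for list-valued settings: without the empty assignment, the new path would be appended rather than replace the legacy one.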
Jul 6 23:24:50.663817 kubelet[2234]: I0706 23:24:50.662654 2234 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:24:50.788642 kubelet[2234]: I0706 23:24:50.788598 2234 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 6 23:24:50.788642 kubelet[2234]: I0706 23:24:50.788626 2234 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:24:50.788943 kubelet[2234]: I0706 23:24:50.788926 2234 server.go:954] "Client rotation is on, will bootstrap in background" Jul 6 23:24:50.813923 kubelet[2234]: E0706 23:24:50.813885 2234 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.237.135.91:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.237.135.91:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:24:50.815086 kubelet[2234]: I0706 23:24:50.814899 2234 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:24:50.828246 kubelet[2234]: E0706 23:24:50.828157 2234 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 6 23:24:50.828246 kubelet[2234]: I0706 23:24:50.828203 2234 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 6 23:24:50.832557 kubelet[2234]: I0706 23:24:50.832529 2234 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 6 23:24:50.835344 kubelet[2234]: I0706 23:24:50.835293 2234 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:24:50.835507 kubelet[2234]: I0706 23:24:50.835331 2234 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-237-135-91","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:24:50.835604 kubelet[2234]: I0706 23:24:50.835517 2234 topology_manager.go:138] "Creating topology manager with none policy" Jul 6 23:24:50.835604 kubelet[2234]: I0706 23:24:50.835529 2234 container_manager_linux.go:304] "Creating device plugin manager" Jul 6 23:24:50.835908 kubelet[2234]: I0706 23:24:50.835863 2234 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:24:50.839354 kubelet[2234]: I0706 23:24:50.839212 2234 kubelet.go:446] "Attempting to sync node with API server" Jul 6 23:24:50.839354 kubelet[2234]: I0706 23:24:50.839245 2234 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:24:50.839354 kubelet[2234]: I0706 23:24:50.839260 2234 kubelet.go:352] "Adding apiserver pod source" Jul 6 23:24:50.839354 kubelet[2234]: I0706 23:24:50.839269 2234 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:24:50.844507 kubelet[2234]: W0706 23:24:50.843870 2234 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.237.135.91:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-237-135-91&limit=500&resourceVersion=0": dial tcp 172.237.135.91:6443: connect: connection refused Jul 6 23:24:50.844507 kubelet[2234]: E0706 23:24:50.843937 2234 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.237.135.91:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-237-135-91&limit=500&resourceVersion=0\": dial tcp 172.237.135.91:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:24:50.844507 kubelet[2234]: W0706 23:24:50.844313 2234 
reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.237.135.91:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.237.135.91:6443: connect: connection refused Jul 6 23:24:50.844507 kubelet[2234]: E0706 23:24:50.844353 2234 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.237.135.91:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.237.135.91:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:24:50.844817 kubelet[2234]: I0706 23:24:50.844769 2234 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jul 6 23:24:50.845648 kubelet[2234]: I0706 23:24:50.845205 2234 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 6 23:24:50.845905 kubelet[2234]: W0706 23:24:50.845872 2234 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 6 23:24:50.849096 kubelet[2234]: I0706 23:24:50.848871 2234 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 6 23:24:50.849096 kubelet[2234]: I0706 23:24:50.848908 2234 server.go:1287] "Started kubelet" Jul 6 23:24:50.850101 kubelet[2234]: I0706 23:24:50.849661 2234 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:24:50.851024 kubelet[2234]: I0706 23:24:50.850577 2234 server.go:479] "Adding debug handlers to kubelet server" Jul 6 23:24:50.852653 kubelet[2234]: I0706 23:24:50.852033 2234 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:24:50.852653 kubelet[2234]: I0706 23:24:50.852331 2234 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:24:50.852888 kubelet[2234]: I0706 23:24:50.852863 2234 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:24:50.854111 kubelet[2234]: E0706 23:24:50.852517 2234 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.237.135.91:6443/api/v1/namespaces/default/events\": dial tcp 172.237.135.91:6443: connect: connection refused" event="&Event{ObjectMeta:{172-237-135-91.184fcd19d1d814f3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-237-135-91,UID:172-237-135-91,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-237-135-91,},FirstTimestamp:2025-07-06 23:24:50.848888051 +0000 UTC m=+0.220348771,LastTimestamp:2025-07-06 23:24:50.848888051 +0000 UTC m=+0.220348771,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-237-135-91,}" Jul 6 23:24:50.855618 kubelet[2234]: I0706 23:24:50.855564 2234 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:24:50.859126 kubelet[2234]: E0706 23:24:50.859050 2234 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-237-135-91\" not found" Jul 6 23:24:50.859178 kubelet[2234]: I0706 23:24:50.859146 2234 volume_manager.go:297] "Starting 
Kubelet Volume Manager" Jul 6 23:24:50.859383 kubelet[2234]: I0706 23:24:50.859332 2234 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 6 23:24:50.859433 kubelet[2234]: I0706 23:24:50.859412 2234 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:24:50.860010 kubelet[2234]: W0706 23:24:50.859967 2234 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.237.135.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.237.135.91:6443: connect: connection refused Jul 6 23:24:50.860046 kubelet[2234]: E0706 23:24:50.860012 2234 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.237.135.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.237.135.91:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:24:50.860463 kubelet[2234]: E0706 23:24:50.860101 2234 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:24:50.860463 kubelet[2234]: I0706 23:24:50.860242 2234 factory.go:221] Registration of the systemd container factory successfully Jul 6 23:24:50.860463 kubelet[2234]: I0706 23:24:50.860294 2234 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:24:50.860918 kubelet[2234]: E0706 23:24:50.860886 2234 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.237.135.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-237-135-91?timeout=10s\": dial tcp 172.237.135.91:6443: connect: connection refused" interval="200ms" Jul 6 23:24:50.861849 kubelet[2234]: I0706 23:24:50.861770 2234 factory.go:221] Registration of the containerd container factory successfully Jul 6 23:24:50.875460 kubelet[2234]: I0706 23:24:50.875384 2234 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 6 23:24:50.876683 kubelet[2234]: I0706 23:24:50.876645 2234 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 6 23:24:50.876683 kubelet[2234]: I0706 23:24:50.876672 2234 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 6 23:24:50.876752 kubelet[2234]: I0706 23:24:50.876689 2234 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 6 23:24:50.876752 kubelet[2234]: I0706 23:24:50.876698 2234 kubelet.go:2382] "Starting kubelet main sync loop" Jul 6 23:24:50.876834 kubelet[2234]: E0706 23:24:50.876747 2234 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:24:50.886836 kubelet[2234]: W0706 23:24:50.886627 2234 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.237.135.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.237.135.91:6443: connect: connection refused Jul 6 23:24:50.886836 kubelet[2234]: E0706 23:24:50.886683 2234 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.237.135.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.237.135.91:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:24:50.888356 kubelet[2234]: I0706 23:24:50.888338 2234 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 6 23:24:50.888663 kubelet[2234]: I0706 23:24:50.888466 2234 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 6 23:24:50.888663 kubelet[2234]: I0706 23:24:50.888486 2234 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:24:50.890065 kubelet[2234]: I0706 23:24:50.889872 2234 policy_none.go:49] "None policy: Start" Jul 6 23:24:50.890065 kubelet[2234]: I0706 23:24:50.889887 2234 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 6 23:24:50.890065 kubelet[2234]: I0706 23:24:50.889897 2234 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:24:50.895744 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 6 23:24:50.909654 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 6 23:24:50.912503 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 6 23:24:50.920560 kubelet[2234]: I0706 23:24:50.920527 2234 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 6 23:24:50.920720 kubelet[2234]: I0706 23:24:50.920695 2234 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:24:50.920755 kubelet[2234]: I0706 23:24:50.920712 2234 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:24:50.922546 kubelet[2234]: I0706 23:24:50.921400 2234 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:24:50.922665 kubelet[2234]: E0706 23:24:50.922635 2234 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 6 23:24:50.922696 kubelet[2234]: E0706 23:24:50.922688 2234 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-237-135-91\" not found" Jul 6 23:24:50.987081 systemd[1]: Created slice kubepods-burstable-pod1505bd80c60514aac03f14b9a9c5bd0e.slice - libcontainer container kubepods-burstable-pod1505bd80c60514aac03f14b9a9c5bd0e.slice. 
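Annotation: the kubepods.slice / kubepods-burstable.slice / kubepods-besteffort.slice units systemd creates above are the QoS-tier cgroups, and the kubepods-burstable-pod1505bd80c60514aac03f14b9a9c5bd0e.slice that follows is the per-pod cgroup for one of the static pods (static pods use the manifest hash as UID). A simplified sketch of how those slice names are derived; the real kubelet additionally applies systemd name escaping (e.g. '-' in a UID becomes '_'), which never shows here because these hash UIDs contain no dashes:

```go
// Simplified sketch of the per-pod slice naming visible in the log
// (kubepods-burstable-pod<uid>.slice); systemd escaping is omitted.
package main

import "fmt"

func podSlice(qos, uid string) string {
	switch qos {
	case "Guaranteed": // guaranteed pods sit directly under kubepods.slice
		return fmt.Sprintf("kubepods-pod%s.slice", uid)
	case "Burstable":
		return fmt.Sprintf("kubepods-burstable-pod%s.slice", uid)
	default: // BestEffort
		return fmt.Sprintf("kubepods-besteffort-pod%s.slice", uid)
	}
}

func main() {
	// UID taken from the kube-scheduler static pod in the log.
	fmt.Println(podSlice("Burstable", "1505bd80c60514aac03f14b9a9c5bd0e"))
	// -> kubepods-burstable-pod1505bd80c60514aac03f14b9a9c5bd0e.slice
}
```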
Jul 6 23:24:51.005940 kubelet[2234]: E0706 23:24:51.005802 2234 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-135-91\" not found" node="172-237-135-91" Jul 6 23:24:51.008926 systemd[1]: Created slice kubepods-burstable-pod7721788227a0286b4262a9e30d8cff9e.slice - libcontainer container kubepods-burstable-pod7721788227a0286b4262a9e30d8cff9e.slice. Jul 6 23:24:51.011540 kubelet[2234]: E0706 23:24:51.011448 2234 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-135-91\" not found" node="172-237-135-91" Jul 6 23:24:51.013646 systemd[1]: Created slice kubepods-burstable-podbb0976de9d5e8821d3c98b22fd0839c0.slice - libcontainer container kubepods-burstable-podbb0976de9d5e8821d3c98b22fd0839c0.slice. Jul 6 23:24:51.015398 kubelet[2234]: E0706 23:24:51.015380 2234 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-135-91\" not found" node="172-237-135-91" Jul 6 23:24:51.023380 kubelet[2234]: I0706 23:24:51.023361 2234 kubelet_node_status.go:75] "Attempting to register node" node="172-237-135-91" Jul 6 23:24:51.023707 kubelet[2234]: E0706 23:24:51.023676 2234 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.237.135.91:6443/api/v1/nodes\": dial tcp 172.237.135.91:6443: connect: connection refused" node="172-237-135-91" Jul 6 23:24:51.061738 kubelet[2234]: E0706 23:24:51.061692 2234 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.237.135.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-237-135-91?timeout=10s\": dial tcp 172.237.135.91:6443: connect: connection refused" interval="400ms" Jul 6 23:24:51.161016 kubelet[2234]: I0706 23:24:51.160884 2234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bb0976de9d5e8821d3c98b22fd0839c0-usr-share-ca-certificates\") pod \"kube-controller-manager-172-237-135-91\" (UID: \"bb0976de9d5e8821d3c98b22fd0839c0\") " pod="kube-system/kube-controller-manager-172-237-135-91" Jul 6 23:24:51.161016 kubelet[2234]: I0706 23:24:51.160919 2234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7721788227a0286b4262a9e30d8cff9e-ca-certs\") pod \"kube-apiserver-172-237-135-91\" (UID: \"7721788227a0286b4262a9e30d8cff9e\") " pod="kube-system/kube-apiserver-172-237-135-91" Jul 6 23:24:51.161016 kubelet[2234]: I0706 23:24:51.160937 2234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bb0976de9d5e8821d3c98b22fd0839c0-ca-certs\") pod \"kube-controller-manager-172-237-135-91\" (UID: \"bb0976de9d5e8821d3c98b22fd0839c0\") " pod="kube-system/kube-controller-manager-172-237-135-91" Jul 6 23:24:51.161016 kubelet[2234]: I0706 23:24:51.160954 2234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/bb0976de9d5e8821d3c98b22fd0839c0-flexvolume-dir\") pod \"kube-controller-manager-172-237-135-91\" (UID: \"bb0976de9d5e8821d3c98b22fd0839c0\") " pod="kube-system/kube-controller-manager-172-237-135-91" Jul 6 23:24:51.161016 kubelet[2234]: I0706 23:24:51.160977 2234 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bb0976de9d5e8821d3c98b22fd0839c0-kubeconfig\") pod \"kube-controller-manager-172-237-135-91\" (UID: \"bb0976de9d5e8821d3c98b22fd0839c0\") " pod="kube-system/kube-controller-manager-172-237-135-91" Jul 6 23:24:51.161257 kubelet[2234]: I0706 23:24:51.160992 2234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7721788227a0286b4262a9e30d8cff9e-usr-share-ca-certificates\") pod \"kube-apiserver-172-237-135-91\" (UID: \"7721788227a0286b4262a9e30d8cff9e\") " pod="kube-system/kube-apiserver-172-237-135-91" Jul 6 23:24:51.161257 kubelet[2234]: I0706 23:24:51.161006 2234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bb0976de9d5e8821d3c98b22fd0839c0-k8s-certs\") pod \"kube-controller-manager-172-237-135-91\" (UID: \"bb0976de9d5e8821d3c98b22fd0839c0\") " pod="kube-system/kube-controller-manager-172-237-135-91" Jul 6 23:24:51.161257 kubelet[2234]: I0706 23:24:51.161025 2234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1505bd80c60514aac03f14b9a9c5bd0e-kubeconfig\") pod \"kube-scheduler-172-237-135-91\" (UID: \"1505bd80c60514aac03f14b9a9c5bd0e\") " pod="kube-system/kube-scheduler-172-237-135-91" Jul 6 23:24:51.161257 kubelet[2234]: I0706 23:24:51.161041 2234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7721788227a0286b4262a9e30d8cff9e-k8s-certs\") pod \"kube-apiserver-172-237-135-91\" (UID: \"7721788227a0286b4262a9e30d8cff9e\") " pod="kube-system/kube-apiserver-172-237-135-91" Jul 6 23:24:51.226034 kubelet[2234]: I0706 23:24:51.225967 2234 kubelet_node_status.go:75] "Attempting to register node" node="172-237-135-91" Jul 6 23:24:51.226404 kubelet[2234]: E0706 23:24:51.226343 2234 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.237.135.91:6443/api/v1/nodes\": dial tcp 172.237.135.91:6443: connect: connection refused" node="172-237-135-91" Jul 6 23:24:51.306941 kubelet[2234]: E0706 23:24:51.306907 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:24:51.307733 containerd[1482]: time="2025-07-06T23:24:51.307685222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-237-135-91,Uid:1505bd80c60514aac03f14b9a9c5bd0e,Namespace:kube-system,Attempt:0,}" Jul 6 23:24:51.312163 kubelet[2234]: E0706 23:24:51.311771 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:24:51.312401 containerd[1482]: time="2025-07-06T23:24:51.312328689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-237-135-91,Uid:7721788227a0286b4262a9e30d8cff9e,Namespace:kube-system,Attempt:0,}" Jul 6 23:24:51.317557 kubelet[2234]: E0706 23:24:51.317532 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 
172.232.0.21 172.232.0.13" Jul 6 23:24:51.317945 containerd[1482]: time="2025-07-06T23:24:51.317918937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-237-135-91,Uid:bb0976de9d5e8821d3c98b22fd0839c0,Namespace:kube-system,Attempt:0,}" Jul 6 23:24:51.462704 kubelet[2234]: E0706 23:24:51.462584 2234 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.237.135.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-237-135-91?timeout=10s\": dial tcp 172.237.135.91:6443: connect: connection refused" interval="800ms" Jul 6 23:24:51.628890 kubelet[2234]: I0706 23:24:51.628852 2234 kubelet_node_status.go:75] "Attempting to register node" node="172-237-135-91" Jul 6 23:24:51.629333 kubelet[2234]: E0706 23:24:51.629306 2234 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.237.135.91:6443/api/v1/nodes\": dial tcp 172.237.135.91:6443: connect: connection refused" node="172-237-135-91" Jul 6 23:24:51.749677 kubelet[2234]: W0706 23:24:51.749521 2234 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.237.135.91:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.237.135.91:6443: connect: connection refused Jul 6 23:24:51.749677 kubelet[2234]: E0706 23:24:51.749595 2234 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.237.135.91:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.237.135.91:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:24:51.839388 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2153243694.mount: Deactivated successfully. 
Jul 6 23:24:51.845986 containerd[1482]: time="2025-07-06T23:24:51.844901333Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:24:51.848985 containerd[1482]: time="2025-07-06T23:24:51.848417281Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 6 23:24:51.848985 containerd[1482]: time="2025-07-06T23:24:51.848460551Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 6 23:24:51.849091 containerd[1482]: time="2025-07-06T23:24:51.849024591Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312062" Jul 6 23:24:51.849152 containerd[1482]: time="2025-07-06T23:24:51.849094371Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:24:51.853807 containerd[1482]: time="2025-07-06T23:24:51.853752819Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:24:51.855385 containerd[1482]: time="2025-07-06T23:24:51.855188148Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 547.372876ms" Jul 6 23:24:51.856338 containerd[1482]: time="2025-07-06T23:24:51.856308527Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:24:51.857381 containerd[1482]: time="2025-07-06T23:24:51.857352547Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 544.968578ms" Jul 6 23:24:51.857755 containerd[1482]: time="2025-07-06T23:24:51.857689397Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 539.71784ms" Jul 6 23:24:51.858096 containerd[1482]: time="2025-07-06T23:24:51.858076397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:24:51.955946 containerd[1482]: time="2025-07-06T23:24:51.955570358Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:24:51.955946 containerd[1482]: time="2025-07-06T23:24:51.955639508Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:24:51.955946 containerd[1482]: time="2025-07-06T23:24:51.955652778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:24:51.955946 containerd[1482]: time="2025-07-06T23:24:51.955719268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:24:51.958579 containerd[1482]: time="2025-07-06T23:24:51.957561807Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:24:51.958579 containerd[1482]: time="2025-07-06T23:24:51.957607987Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:24:51.958579 containerd[1482]: time="2025-07-06T23:24:51.957618157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:24:51.958579 containerd[1482]: time="2025-07-06T23:24:51.957683597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:24:51.961521 containerd[1482]: time="2025-07-06T23:24:51.961262795Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:24:51.961521 containerd[1482]: time="2025-07-06T23:24:51.961303785Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:24:51.961521 containerd[1482]: time="2025-07-06T23:24:51.961320485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:24:51.961521 containerd[1482]: time="2025-07-06T23:24:51.961383395Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:24:51.963324 kubelet[2234]: W0706 23:24:51.963257 2234 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.237.135.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.237.135.91:6443: connect: connection refused Jul 6 23:24:51.963324 kubelet[2234]: E0706 23:24:51.963297 2234 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.237.135.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.237.135.91:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:24:51.983120 systemd[1]: Started cri-containerd-c057dc0f1cdffdcb60f387f920ab587215fe9d48a01b8a5ab860569b2a1278d6.scope - libcontainer container c057dc0f1cdffdcb60f387f920ab587215fe9d48a01b8a5ab860569b2a1278d6. Jul 6 23:24:51.989869 systemd[1]: Started cri-containerd-5fe1824d284e9f36f0e08ae6b7e63e772f4288a621091faab93b716001ff2e65.scope - libcontainer container 5fe1824d284e9f36f0e08ae6b7e63e772f4288a621091faab93b716001ff2e65. 
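Annotation: the three "Pulled image registry.k8s.io/pause:3.8 ... in <duration>" lines above are the sandbox (pause) image being resolved once for each of the three RunPodSandbox calls, after which one runc v2 shim is loaded per sandbox. A small sketch that just re-parses those logged durations with the standard library (values copied verbatim from the log):

```go
// containerd prints pull times in Go duration syntax, so they can be
// parsed back directly; this reproduces the totals for the three pause
// image pulls recorded above.
package main

import (
	"fmt"
	"time"
)

func main() {
	pulls := []string{"547.372876ms", "544.968578ms", "539.71784ms"}
	var total time.Duration
	for _, p := range pulls {
		d, err := time.ParseDuration(p)
		if err != nil {
			panic(err)
		}
		total += d
	}
	fmt.Printf("3 sandbox image pulls, %v total, %v mean\n", total, total/3)
}
```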
Jul 6 23:24:51.991135 systemd[1]: Started cri-containerd-caecdfac7a130f79b60b1fc115fa6ddf5909521f0931a333bb5211ecb76345c0.scope - libcontainer container caecdfac7a130f79b60b1fc115fa6ddf5909521f0931a333bb5211ecb76345c0. Jul 6 23:24:52.034660 containerd[1482]: time="2025-07-06T23:24:52.034073519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-237-135-91,Uid:bb0976de9d5e8821d3c98b22fd0839c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"5fe1824d284e9f36f0e08ae6b7e63e772f4288a621091faab93b716001ff2e65\"" Jul 6 23:24:52.036495 kubelet[2234]: E0706 23:24:52.036462 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:24:52.041635 containerd[1482]: time="2025-07-06T23:24:52.041536375Z" level=info msg="CreateContainer within sandbox \"5fe1824d284e9f36f0e08ae6b7e63e772f4288a621091faab93b716001ff2e65\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 6 23:24:52.052411 containerd[1482]: time="2025-07-06T23:24:52.052387119Z" level=info msg="CreateContainer within sandbox \"5fe1824d284e9f36f0e08ae6b7e63e772f4288a621091faab93b716001ff2e65\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a06d16f789f377fcd945f130dda646bef8d329214228cd13228381769ba37d95\"" Jul 6 23:24:52.053256 containerd[1482]: time="2025-07-06T23:24:52.053237409Z" level=info msg="StartContainer for \"a06d16f789f377fcd945f130dda646bef8d329214228cd13228381769ba37d95\"" Jul 6 23:24:52.057875 containerd[1482]: time="2025-07-06T23:24:52.057811867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-237-135-91,Uid:7721788227a0286b4262a9e30d8cff9e,Namespace:kube-system,Attempt:0,} returns sandbox id \"caecdfac7a130f79b60b1fc115fa6ddf5909521f0931a333bb5211ecb76345c0\"" Jul 6 23:24:52.059130 kubelet[2234]: E0706 23:24:52.059055 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:24:52.062077 containerd[1482]: time="2025-07-06T23:24:52.061958775Z" level=info msg="CreateContainer within sandbox \"caecdfac7a130f79b60b1fc115fa6ddf5909521f0931a333bb5211ecb76345c0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 6 23:24:52.080181 containerd[1482]: time="2025-07-06T23:24:52.080130156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-237-135-91,Uid:1505bd80c60514aac03f14b9a9c5bd0e,Namespace:kube-system,Attempt:0,} returns sandbox id \"c057dc0f1cdffdcb60f387f920ab587215fe9d48a01b8a5ab860569b2a1278d6\"" Jul 6 23:24:52.080536 containerd[1482]: time="2025-07-06T23:24:52.080494325Z" level=info msg="CreateContainer within sandbox \"caecdfac7a130f79b60b1fc115fa6ddf5909521f0931a333bb5211ecb76345c0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b402fb401e20a73170f86dc8e24a8032e26b48fbd54004e2234f536ff186826e\"" Jul 6 23:24:52.081141 containerd[1482]: time="2025-07-06T23:24:52.081124425Z" level=info msg="StartContainer for \"b402fb401e20a73170f86dc8e24a8032e26b48fbd54004e2234f536ff186826e\"" Jul 6 23:24:52.081292 kubelet[2234]: E0706 23:24:52.081216 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 
23:24:52.087321 containerd[1482]: time="2025-07-06T23:24:52.087300142Z" level=info msg="CreateContainer within sandbox \"c057dc0f1cdffdcb60f387f920ab587215fe9d48a01b8a5ab860569b2a1278d6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 6 23:24:52.091017 systemd[1]: Started cri-containerd-a06d16f789f377fcd945f130dda646bef8d329214228cd13228381769ba37d95.scope - libcontainer container a06d16f789f377fcd945f130dda646bef8d329214228cd13228381769ba37d95. Jul 6 23:24:52.100021 containerd[1482]: time="2025-07-06T23:24:52.099986426Z" level=info msg="CreateContainer within sandbox \"c057dc0f1cdffdcb60f387f920ab587215fe9d48a01b8a5ab860569b2a1278d6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bbd59c98b556dd078c864ea9d993f325206fb83be2393b5e86cb8bafa7d2894c\"" Jul 6 23:24:52.100439 containerd[1482]: time="2025-07-06T23:24:52.100409355Z" level=info msg="StartContainer for \"bbd59c98b556dd078c864ea9d993f325206fb83be2393b5e86cb8bafa7d2894c\"" Jul 6 23:24:52.120260 systemd[1]: Started cri-containerd-b402fb401e20a73170f86dc8e24a8032e26b48fbd54004e2234f536ff186826e.scope - libcontainer container b402fb401e20a73170f86dc8e24a8032e26b48fbd54004e2234f536ff186826e. Jul 6 23:24:52.142966 systemd[1]: Started cri-containerd-bbd59c98b556dd078c864ea9d993f325206fb83be2393b5e86cb8bafa7d2894c.scope - libcontainer container bbd59c98b556dd078c864ea9d993f325206fb83be2393b5e86cb8bafa7d2894c. Jul 6 23:24:52.166835 containerd[1482]: time="2025-07-06T23:24:52.166602012Z" level=info msg="StartContainer for \"a06d16f789f377fcd945f130dda646bef8d329214228cd13228381769ba37d95\" returns successfully" Jul 6 23:24:52.167852 kubelet[2234]: W0706 23:24:52.167793 2234 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.237.135.91:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-237-135-91&limit=500&resourceVersion=0": dial tcp 172.237.135.91:6443: connect: connection refused Jul 6 23:24:52.167852 kubelet[2234]: E0706 23:24:52.167849 2234 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.237.135.91:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-237-135-91&limit=500&resourceVersion=0\": dial tcp 172.237.135.91:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:24:52.229152 containerd[1482]: time="2025-07-06T23:24:52.228846431Z" level=info msg="StartContainer for \"b402fb401e20a73170f86dc8e24a8032e26b48fbd54004e2234f536ff186826e\" returns successfully" Jul 6 23:24:52.263451 kubelet[2234]: E0706 23:24:52.263386 2234 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.237.135.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-237-135-91?timeout=10s\": dial tcp 172.237.135.91:6443: connect: connection refused" interval="1.6s" Jul 6 23:24:52.272193 kubelet[2234]: W0706 23:24:52.271940 2234 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.237.135.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.237.135.91:6443: connect: connection refused Jul 6 23:24:52.273641 kubelet[2234]: E0706 23:24:52.272529 2234 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.237.135.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 
172.237.135.91:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:24:52.302727 containerd[1482]: time="2025-07-06T23:24:52.302567534Z" level=info msg="StartContainer for \"bbd59c98b556dd078c864ea9d993f325206fb83be2393b5e86cb8bafa7d2894c\" returns successfully" Jul 6 23:24:52.431542 kubelet[2234]: I0706 23:24:52.431498 2234 kubelet_node_status.go:75] "Attempting to register node" node="172-237-135-91" Jul 6 23:24:52.897546 kubelet[2234]: E0706 23:24:52.897503 2234 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-135-91\" not found" node="172-237-135-91" Jul 6 23:24:52.898049 kubelet[2234]: E0706 23:24:52.897621 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:24:52.902571 kubelet[2234]: E0706 23:24:52.902544 2234 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-135-91\" not found" node="172-237-135-91" Jul 6 23:24:52.902660 kubelet[2234]: E0706 23:24:52.902637 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:24:52.904440 kubelet[2234]: E0706 23:24:52.904411 2234 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-135-91\" not found" node="172-237-135-91" Jul 6 23:24:52.904531 kubelet[2234]: E0706 23:24:52.904508 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:24:53.664302 kubelet[2234]: I0706 23:24:53.664111 2234 kubelet_node_status.go:78] "Successfully registered node" node="172-237-135-91" Jul 6 23:24:53.664302 kubelet[2234]: E0706 23:24:53.664150 2234 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"172-237-135-91\": node \"172-237-135-91\" not found" Jul 6 23:24:53.696230 kubelet[2234]: E0706 23:24:53.696196 2234 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-237-135-91\" not found" Jul 6 23:24:53.797261 kubelet[2234]: E0706 23:24:53.796981 2234 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-237-135-91\" not found" Jul 6 23:24:53.897681 kubelet[2234]: E0706 23:24:53.897620 2234 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-237-135-91\" not found" Jul 6 23:24:53.906699 kubelet[2234]: E0706 23:24:53.906601 2234 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-135-91\" not found" node="172-237-135-91" Jul 6 23:24:53.906995 kubelet[2234]: E0706 23:24:53.906717 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:24:53.906995 kubelet[2234]: E0706 23:24:53.906876 2234 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-135-91\" not found" node="172-237-135-91" Jul 6 23:24:53.906995 kubelet[2234]: E0706 23:24:53.906951 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:24:53.998552 kubelet[2234]: E0706 23:24:53.998419 2234 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-237-135-91\" not found" Jul 6 23:24:54.099176 kubelet[2234]: E0706 23:24:54.099130 2234 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-237-135-91\" not found" Jul 6 23:24:54.199853 kubelet[2234]: E0706 23:24:54.199811 2234 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-237-135-91\" not found" Jul 6 23:24:54.300738 kubelet[2234]: E0706 23:24:54.300589 2234 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-237-135-91\" not found" Jul 6 23:24:54.400874 kubelet[2234]: E0706 23:24:54.400829 2234 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-237-135-91\" not found" Jul 6 23:24:54.501298 kubelet[2234]: E0706 23:24:54.501245 2234 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-237-135-91\" not found" Jul 6 23:24:54.602298 kubelet[2234]: E0706 23:24:54.602080 2234 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-237-135-91\" not found" Jul 6 23:24:54.703233 kubelet[2234]: E0706 23:24:54.703174 2234 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-237-135-91\" not found" Jul 6 23:24:54.804952 kubelet[2234]: E0706 23:24:54.804230 2234 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-237-135-91\" not found" Jul 6 23:24:54.905664 kubelet[2234]: E0706 23:24:54.905364 2234 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-237-135-91\" not found" Jul 6 23:24:55.005805 kubelet[2234]: E0706 23:24:55.005716 2234 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-237-135-91\" not found" Jul 6 23:24:55.106414 kubelet[2234]: E0706 23:24:55.106374 2234 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-237-135-91\" not found" Jul 6 23:24:55.206930 kubelet[2234]: E0706 23:24:55.206823 2234 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-237-135-91\" not found" Jul 6 23:24:55.261188 kubelet[2234]: I0706 23:24:55.261155 2234 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-237-135-91" Jul 6 23:24:55.271311 kubelet[2234]: I0706 23:24:55.271255 2234 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-237-135-91" Jul 6 23:24:55.274581 kubelet[2234]: I0706 23:24:55.274528 2234 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-237-135-91" Jul 6 23:24:55.583960 systemd[1]: Reload requested from client PID 2507 ('systemctl') (unit session-7.scope)... Jul 6 23:24:55.583978 systemd[1]: Reloading... Jul 6 23:24:55.683858 zram_generator::config[2553]: No configuration found. Jul 6 23:24:55.790837 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
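Annotation: the "Creating a mirror pod for static pod" lines above close the bootstrap loop: the control-plane pods came from manifest files under the /etc/kubernetes/manifests path logged earlier, and once the API server those manifests produced is reachable, the kubelet publishes read-only "mirror" pods for them. A toy sketch of the static-pod source under that assumption; the real kubelet uses an event-driven file source rather than this bounded polling loop:

```go
// Hypothetical, simplified static-pod source: scan the manifest directory
// (path from the log) and report new manifests, which the kubelet would
// run locally and then mirror to the API server.
package main

import (
	"fmt"
	"os"
	"time"
)

func main() {
	const dir = "/etc/kubernetes/manifests" // path from the log
	seen := map[string]bool{}
	for i := 0; i < 3; i++ { // real kubelet watches forever; bounded here
		entries, err := os.ReadDir(dir)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		for _, e := range entries {
			if !seen[e.Name()] {
				seen[e.Name()] = true
				fmt.Println("new static pod manifest:", e.Name())
			}
		}
		time.Sleep(1 * time.Second)
	}
}
```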
Jul 6 23:24:55.849714 kubelet[2234]: I0706 23:24:55.848395 2234 apiserver.go:52] "Watching apiserver" Jul 6 23:24:55.853551 kubelet[2234]: E0706 23:24:55.853103 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:24:55.854108 kubelet[2234]: E0706 23:24:55.853849 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:24:55.854469 kubelet[2234]: E0706 23:24:55.854447 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:24:55.859671 kubelet[2234]: I0706 23:24:55.859621 2234 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 6 23:24:55.913142 kubelet[2234]: E0706 23:24:55.913105 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:24:55.914483 systemd[1]: Reloading finished in 330 ms. Jul 6 23:24:55.945079 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:24:55.971670 systemd[1]: kubelet.service: Deactivated successfully. Jul 6 23:24:55.971971 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:24:55.972018 systemd[1]: kubelet.service: Consumed 621ms CPU time, 133.5M memory peak. Jul 6 23:24:55.979059 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:24:56.146317 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:24:56.150568 (kubelet)[2603]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:24:56.187109 kubelet[2603]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:24:56.187109 kubelet[2603]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 6 23:24:56.187109 kubelet[2603]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:24:56.187954 kubelet[2603]: I0706 23:24:56.187911 2603 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:24:56.195751 kubelet[2603]: I0706 23:24:56.195710 2603 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 6 23:24:56.195751 kubelet[2603]: I0706 23:24:56.195731 2603 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:24:56.196012 kubelet[2603]: I0706 23:24:56.195986 2603 server.go:954] "Client rotation is on, will bootstrap in background" Jul 6 23:24:56.197159 kubelet[2603]: I0706 23:24:56.197103 2603 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
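Annotation: on restart the new kubelet (PID change: kubelet[2234] becomes kubelet[2603]) logs "Client rotation is on, will bootstrap in background" and loads /var/lib/kubelet/pki/kubelet-client-current.pem, a single PEM file holding both the rotated client certificate and key. A read-only sketch that loads that pair and reports its expiry, which is the property rotation exists to keep ahead of:

```go
// Inspect the kubelet's rotated client credential (path from the log);
// the same file is passed as both cert and key because it contains both.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"os"
)

func main() {
	const pem = "/var/lib/kubelet/pki/kubelet-client-current.pem"
	pair, err := tls.LoadX509KeyPair(pem, pem)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	leaf, err := x509.ParseCertificate(pair.Certificate[0])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Printf("subject=%s notAfter=%s\n", leaf.Subject, leaf.NotAfter)
}
```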
Jul 6 23:24:56.202105 kubelet[2603]: I0706 23:24:56.201894 2603 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:24:56.204265 kubelet[2603]: E0706 23:24:56.204211 2603 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 6 23:24:56.204346 kubelet[2603]: I0706 23:24:56.204334 2603 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 6 23:24:56.208613 kubelet[2603]: I0706 23:24:56.208581 2603 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 6 23:24:56.208896 kubelet[2603]: I0706 23:24:56.208868 2603 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:24:56.209230 kubelet[2603]: I0706 23:24:56.208896 2603 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-237-135-91","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:24:56.209230 kubelet[2603]: I0706 23:24:56.209230 2603 topology_manager.go:138] "Creating topology manager with none policy" Jul 6 23:24:56.209334 kubelet[2603]: I0706 23:24:56.209240 2603 container_manager_linux.go:304] "Creating device plugin manager" Jul 6 23:24:56.209334 kubelet[2603]: I0706 23:24:56.209283 2603 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:24:56.209452 kubelet[2603]: I0706 23:24:56.209423 2603 kubelet.go:446] "Attempting to sync node with API server" Jul 6 23:24:56.209452 kubelet[2603]: I0706 23:24:56.209446 2603 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:24:56.210313 kubelet[2603]: I0706 23:24:56.209461 2603 kubelet.go:352] "Adding apiserver pod source" Jul 6 23:24:56.210313 kubelet[2603]: I0706 23:24:56.209470 2603 apiserver.go:42] "Waiting for node sync before watching 
apiserver pods" Jul 6 23:24:56.216319 kubelet[2603]: I0706 23:24:56.216303 2603 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jul 6 23:24:56.216846 kubelet[2603]: I0706 23:24:56.216833 2603 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 6 23:24:56.217357 kubelet[2603]: I0706 23:24:56.217343 2603 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 6 23:24:56.217440 kubelet[2603]: I0706 23:24:56.217430 2603 server.go:1287] "Started kubelet" Jul 6 23:24:56.219811 kubelet[2603]: I0706 23:24:56.219748 2603 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:24:56.226445 kubelet[2603]: I0706 23:24:56.224362 2603 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:24:56.232026 kubelet[2603]: I0706 23:24:56.230533 2603 server.go:479] "Adding debug handlers to kubelet server" Jul 6 23:24:56.232026 kubelet[2603]: I0706 23:24:56.231298 2603 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:24:56.232026 kubelet[2603]: I0706 23:24:56.231488 2603 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:24:56.232026 kubelet[2603]: I0706 23:24:56.231598 2603 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:24:56.233323 kubelet[2603]: I0706 23:24:56.233160 2603 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 6 23:24:56.233323 kubelet[2603]: E0706 23:24:56.233312 2603 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-237-135-91\" not found" Jul 6 23:24:56.237081 kubelet[2603]: I0706 23:24:56.237051 2603 factory.go:221] Registration of the systemd container factory successfully Jul 6 23:24:56.237253 kubelet[2603]: I0706 23:24:56.237211 2603 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:24:56.238958 kubelet[2603]: I0706 23:24:56.238928 2603 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 6 23:24:56.239324 kubelet[2603]: I0706 23:24:56.239223 2603 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:24:56.241369 kubelet[2603]: I0706 23:24:56.241303 2603 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 6 23:24:56.241546 kubelet[2603]: I0706 23:24:56.241477 2603 factory.go:221] Registration of the containerd container factory successfully Jul 6 23:24:56.242439 kubelet[2603]: I0706 23:24:56.242415 2603 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 6 23:24:56.242487 kubelet[2603]: I0706 23:24:56.242444 2603 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 6 23:24:56.242487 kubelet[2603]: I0706 23:24:56.242462 2603 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 6 23:24:56.242487 kubelet[2603]: I0706 23:24:56.242468 2603 kubelet.go:2382] "Starting kubelet main sync loop" Jul 6 23:24:56.242968 kubelet[2603]: E0706 23:24:56.242512 2603 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:24:56.246569 kubelet[2603]: E0706 23:24:56.246548 2603 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:24:56.298616 kubelet[2603]: I0706 23:24:56.298568 2603 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 6 23:24:56.298616 kubelet[2603]: I0706 23:24:56.298604 2603 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 6 23:24:56.298616 kubelet[2603]: I0706 23:24:56.298621 2603 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:24:56.299067 kubelet[2603]: I0706 23:24:56.298755 2603 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 6 23:24:56.299067 kubelet[2603]: I0706 23:24:56.298765 2603 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 6 23:24:56.299067 kubelet[2603]: I0706 23:24:56.298796 2603 policy_none.go:49] "None policy: Start" Jul 6 23:24:56.299067 kubelet[2603]: I0706 23:24:56.298805 2603 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 6 23:24:56.299067 kubelet[2603]: I0706 23:24:56.298814 2603 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:24:56.299067 kubelet[2603]: I0706 23:24:56.298905 2603 state_mem.go:75] "Updated machine memory state" Jul 6 23:24:56.304944 kubelet[2603]: I0706 23:24:56.304927 2603 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 6 23:24:56.305102 kubelet[2603]: I0706 23:24:56.305076 2603 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:24:56.305142 kubelet[2603]: I0706 23:24:56.305092 2603 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:24:56.305519 kubelet[2603]: I0706 23:24:56.305504 2603 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:24:56.309816 kubelet[2603]: E0706 23:24:56.307935 2603 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 6 23:24:56.343888 kubelet[2603]: I0706 23:24:56.343857 2603 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-237-135-91" Jul 6 23:24:56.344418 kubelet[2603]: I0706 23:24:56.344387 2603 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-237-135-91" Jul 6 23:24:56.344634 kubelet[2603]: I0706 23:24:56.344604 2603 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-237-135-91" Jul 6 23:24:56.351026 kubelet[2603]: E0706 23:24:56.350946 2603 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-237-135-91\" already exists" pod="kube-system/kube-scheduler-172-237-135-91" Jul 6 23:24:56.351428 kubelet[2603]: E0706 23:24:56.351410 2603 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-237-135-91\" already exists" pod="kube-system/kube-apiserver-172-237-135-91" Jul 6 23:24:56.351548 kubelet[2603]: E0706 23:24:56.351522 2603 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-237-135-91\" already exists" pod="kube-system/kube-controller-manager-172-237-135-91" Jul 6 23:24:56.409406 kubelet[2603]: I0706 23:24:56.409030 2603 kubelet_node_status.go:75] "Attempting to register node" node="172-237-135-91" Jul 6 23:24:56.416744 kubelet[2603]: I0706 23:24:56.416716 2603 kubelet_node_status.go:124] "Node was previously registered" node="172-237-135-91" Jul 6 23:24:56.416855 kubelet[2603]: I0706 23:24:56.416768 2603 kubelet_node_status.go:78] "Successfully registered node" node="172-237-135-91" Jul 6 23:24:56.540712 kubelet[2603]: I0706 23:24:56.540644 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1505bd80c60514aac03f14b9a9c5bd0e-kubeconfig\") pod \"kube-scheduler-172-237-135-91\" (UID: \"1505bd80c60514aac03f14b9a9c5bd0e\") " pod="kube-system/kube-scheduler-172-237-135-91" Jul 6 23:24:56.540712 kubelet[2603]: I0706 23:24:56.540685 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7721788227a0286b4262a9e30d8cff9e-ca-certs\") pod \"kube-apiserver-172-237-135-91\" (UID: \"7721788227a0286b4262a9e30d8cff9e\") " pod="kube-system/kube-apiserver-172-237-135-91" Jul 6 23:24:56.540712 kubelet[2603]: I0706 23:24:56.540705 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7721788227a0286b4262a9e30d8cff9e-k8s-certs\") pod \"kube-apiserver-172-237-135-91\" (UID: \"7721788227a0286b4262a9e30d8cff9e\") " pod="kube-system/kube-apiserver-172-237-135-91" Jul 6 23:24:56.540712 kubelet[2603]: I0706 23:24:56.540723 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7721788227a0286b4262a9e30d8cff9e-usr-share-ca-certificates\") pod \"kube-apiserver-172-237-135-91\" (UID: \"7721788227a0286b4262a9e30d8cff9e\") " pod="kube-system/kube-apiserver-172-237-135-91" Jul 6 23:24:56.540999 kubelet[2603]: I0706 23:24:56.540743 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bb0976de9d5e8821d3c98b22fd0839c0-k8s-certs\") pod 
\"kube-controller-manager-172-237-135-91\" (UID: \"bb0976de9d5e8821d3c98b22fd0839c0\") " pod="kube-system/kube-controller-manager-172-237-135-91" Jul 6 23:24:56.540999 kubelet[2603]: I0706 23:24:56.540765 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bb0976de9d5e8821d3c98b22fd0839c0-usr-share-ca-certificates\") pod \"kube-controller-manager-172-237-135-91\" (UID: \"bb0976de9d5e8821d3c98b22fd0839c0\") " pod="kube-system/kube-controller-manager-172-237-135-91" Jul 6 23:24:56.540999 kubelet[2603]: I0706 23:24:56.540806 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bb0976de9d5e8821d3c98b22fd0839c0-kubeconfig\") pod \"kube-controller-manager-172-237-135-91\" (UID: \"bb0976de9d5e8821d3c98b22fd0839c0\") " pod="kube-system/kube-controller-manager-172-237-135-91" Jul 6 23:24:56.540999 kubelet[2603]: I0706 23:24:56.540832 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bb0976de9d5e8821d3c98b22fd0839c0-ca-certs\") pod \"kube-controller-manager-172-237-135-91\" (UID: \"bb0976de9d5e8821d3c98b22fd0839c0\") " pod="kube-system/kube-controller-manager-172-237-135-91" Jul 6 23:24:56.540999 kubelet[2603]: I0706 23:24:56.540848 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/bb0976de9d5e8821d3c98b22fd0839c0-flexvolume-dir\") pod \"kube-controller-manager-172-237-135-91\" (UID: \"bb0976de9d5e8821d3c98b22fd0839c0\") " pod="kube-system/kube-controller-manager-172-237-135-91" Jul 6 23:24:56.588684 sudo[2634]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 6 23:24:56.589192 sudo[2634]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 6 23:24:56.652280 kubelet[2603]: E0706 23:24:56.652011 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:24:56.653546 kubelet[2603]: E0706 23:24:56.653512 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:24:56.653727 kubelet[2603]: E0706 23:24:56.653699 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:24:57.128313 sudo[2634]: pam_unix(sudo:session): session closed for user root Jul 6 23:24:57.202590 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Jul 6 23:24:57.212054 kubelet[2603]: I0706 23:24:57.211126 2603 apiserver.go:52] "Watching apiserver" Jul 6 23:24:57.240090 kubelet[2603]: I0706 23:24:57.240048 2603 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 6 23:24:57.270518 kubelet[2603]: E0706 23:24:57.270158 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:24:57.270919 kubelet[2603]: E0706 23:24:57.270904 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:24:57.272804 kubelet[2603]: I0706 23:24:57.271110 2603 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-237-135-91" Jul 6 23:24:57.281142 kubelet[2603]: E0706 23:24:57.281085 2603 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-237-135-91\" already exists" pod="kube-system/kube-scheduler-172-237-135-91" Jul 6 23:24:57.284362 kubelet[2603]: E0706 23:24:57.282932 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:24:57.308357 kubelet[2603]: I0706 23:24:57.308148 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-237-135-91" podStartSLOduration=2.308133931 podStartE2EDuration="2.308133931s" podCreationTimestamp="2025-07-06 23:24:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:24:57.307537551 +0000 UTC m=+1.150062066" watchObservedRunningTime="2025-07-06 23:24:57.308133931 +0000 UTC m=+1.150658446" Jul 6 23:24:57.322634 kubelet[2603]: I0706 23:24:57.322264 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-237-135-91" podStartSLOduration=2.322248814 podStartE2EDuration="2.322248814s" podCreationTimestamp="2025-07-06 23:24:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:24:57.316124757 +0000 UTC m=+1.158649282" watchObservedRunningTime="2025-07-06 23:24:57.322248814 +0000 UTC m=+1.164773329" Jul 6 23:24:58.271759 kubelet[2603]: E0706 23:24:58.271496 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:24:58.272179 kubelet[2603]: E0706 23:24:58.272039 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:24:58.598192 sudo[1704]: pam_unix(sudo:session): session closed for user root Jul 6 23:24:58.649362 sshd[1703]: Connection closed by 147.75.109.163 port 46742 Jul 6 23:24:58.650358 sshd-session[1701]: pam_unix(sshd:session): session closed for user core Jul 6 23:24:58.654052 systemd[1]: sshd@6-172.237.135.91:22-147.75.109.163:46742.service: Deactivated successfully. Jul 6 23:24:58.656238 systemd[1]: session-7.scope: Deactivated successfully. 
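Annotation: the pod_startup_latency_tracker lines compute podStartSLOduration as the watch-observed running time minus podCreationTimestamp (the pull timestamps are zero because static control-plane images were preloaded). Redoing that arithmetic with the kube-apiserver timestamps copied from the log reproduces the logged 2.308133931s:

```go
// Recompute podStartSLOduration for kube-apiserver-172-237-135-91 from
// the timestamps in the log line above.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-07-06 23:24:55 +0000 UTC")
	running := mustParse("2025-07-06 23:24:57.308133931 +0000 UTC")
	fmt.Println("podStartSLOduration:", running.Sub(created)) // 2.308133931s
}
```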
Jul 6 23:24:58.656455 systemd[1]: session-7.scope: Consumed 3.905s CPU time, 260.3M memory peak. Jul 6 23:24:58.659685 systemd-logind[1463]: Session 7 logged out. Waiting for processes to exit. Jul 6 23:24:58.660804 systemd-logind[1463]: Removed session 7. Jul 6 23:25:00.335683 kubelet[2603]: E0706 23:25:00.335610 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:25:01.704578 kubelet[2603]: I0706 23:25:01.704508 2603 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 6 23:25:01.705354 containerd[1482]: time="2025-07-06T23:25:01.704940692Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 6 23:25:01.707645 kubelet[2603]: I0706 23:25:01.705808 2603 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 6 23:25:02.301046 kubelet[2603]: I0706 23:25:02.300990 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-237-135-91" podStartSLOduration=7.300977024 podStartE2EDuration="7.300977024s" podCreationTimestamp="2025-07-06 23:24:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:24:57.323018973 +0000 UTC m=+1.165543488" watchObservedRunningTime="2025-07-06 23:25:02.300977024 +0000 UTC m=+6.143501539" Jul 6 23:25:02.307194 kubelet[2603]: I0706 23:25:02.307022 2603 status_manager.go:890] "Failed to get status for pod" podUID="f7ebeb06-9142-482e-a7f9-c823fb312d94" pod="kube-system/kube-proxy-58l5j" err="pods \"kube-proxy-58l5j\" is forbidden: User \"system:node:172-237-135-91\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-237-135-91' and this object" Jul 6 23:25:02.307194 kubelet[2603]: W0706 23:25:02.307085 2603 reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:172-237-135-91" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172-237-135-91' and this object Jul 6 23:25:02.307194 kubelet[2603]: E0706 23:25:02.307108 2603 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:172-237-135-91\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-237-135-91' and this object" logger="UnhandledError" Jul 6 23:25:02.307194 kubelet[2603]: W0706 23:25:02.307160 2603 reflector.go:569] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:172-237-135-91" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172-237-135-91' and this object Jul 6 23:25:02.307194 kubelet[2603]: E0706 23:25:02.307171 2603 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:172-237-135-91\" cannot list resource \"configmaps\" in API 
group \"\" in the namespace \"kube-system\": no relationship found between node '172-237-135-91' and this object" logger="UnhandledError" Jul 6 23:25:02.312563 systemd[1]: Created slice kubepods-besteffort-podf7ebeb06_9142_482e_a7f9_c823fb312d94.slice - libcontainer container kubepods-besteffort-podf7ebeb06_9142_482e_a7f9_c823fb312d94.slice. Jul 6 23:25:02.340831 systemd[1]: Created slice kubepods-burstable-pod350b79fb_a9de_4337_af9f_51a63aa99973.slice - libcontainer container kubepods-burstable-pod350b79fb_a9de_4337_af9f_51a63aa99973.slice. Jul 6 23:25:02.381048 kubelet[2603]: I0706 23:25:02.381002 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/350b79fb-a9de-4337-af9f-51a63aa99973-host-proc-sys-kernel\") pod \"cilium-hc6q5\" (UID: \"350b79fb-a9de-4337-af9f-51a63aa99973\") " pod="kube-system/cilium-hc6q5" Jul 6 23:25:02.381177 kubelet[2603]: I0706 23:25:02.381075 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f7ebeb06-9142-482e-a7f9-c823fb312d94-kube-proxy\") pod \"kube-proxy-58l5j\" (UID: \"f7ebeb06-9142-482e-a7f9-c823fb312d94\") " pod="kube-system/kube-proxy-58l5j" Jul 6 23:25:02.381177 kubelet[2603]: I0706 23:25:02.381110 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/350b79fb-a9de-4337-af9f-51a63aa99973-host-proc-sys-net\") pod \"cilium-hc6q5\" (UID: \"350b79fb-a9de-4337-af9f-51a63aa99973\") " pod="kube-system/cilium-hc6q5" Jul 6 23:25:02.381177 kubelet[2603]: I0706 23:25:02.381135 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/350b79fb-a9de-4337-af9f-51a63aa99973-hubble-tls\") pod \"cilium-hc6q5\" (UID: \"350b79fb-a9de-4337-af9f-51a63aa99973\") " pod="kube-system/cilium-hc6q5" Jul 6 23:25:02.381177 kubelet[2603]: I0706 23:25:02.381161 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/350b79fb-a9de-4337-af9f-51a63aa99973-cilium-cgroup\") pod \"cilium-hc6q5\" (UID: \"350b79fb-a9de-4337-af9f-51a63aa99973\") " pod="kube-system/cilium-hc6q5" Jul 6 23:25:02.381282 kubelet[2603]: I0706 23:25:02.381191 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/350b79fb-a9de-4337-af9f-51a63aa99973-xtables-lock\") pod \"cilium-hc6q5\" (UID: \"350b79fb-a9de-4337-af9f-51a63aa99973\") " pod="kube-system/cilium-hc6q5" Jul 6 23:25:02.381282 kubelet[2603]: I0706 23:25:02.381214 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/350b79fb-a9de-4337-af9f-51a63aa99973-cilium-run\") pod \"cilium-hc6q5\" (UID: \"350b79fb-a9de-4337-af9f-51a63aa99973\") " pod="kube-system/cilium-hc6q5" Jul 6 23:25:02.381282 kubelet[2603]: I0706 23:25:02.381229 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7ebeb06-9142-482e-a7f9-c823fb312d94-lib-modules\") pod \"kube-proxy-58l5j\" (UID: \"f7ebeb06-9142-482e-a7f9-c823fb312d94\") " pod="kube-system/kube-proxy-58l5j" Jul 6 
23:25:02.381282 kubelet[2603]: I0706 23:25:02.381249 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdvnf\" (UniqueName: \"kubernetes.io/projected/f7ebeb06-9142-482e-a7f9-c823fb312d94-kube-api-access-fdvnf\") pod \"kube-proxy-58l5j\" (UID: \"f7ebeb06-9142-482e-a7f9-c823fb312d94\") " pod="kube-system/kube-proxy-58l5j" Jul 6 23:25:02.381282 kubelet[2603]: I0706 23:25:02.381274 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/350b79fb-a9de-4337-af9f-51a63aa99973-hostproc\") pod \"cilium-hc6q5\" (UID: \"350b79fb-a9de-4337-af9f-51a63aa99973\") " pod="kube-system/cilium-hc6q5" Jul 6 23:25:02.381416 kubelet[2603]: I0706 23:25:02.381299 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/350b79fb-a9de-4337-af9f-51a63aa99973-etc-cni-netd\") pod \"cilium-hc6q5\" (UID: \"350b79fb-a9de-4337-af9f-51a63aa99973\") " pod="kube-system/cilium-hc6q5" Jul 6 23:25:02.381416 kubelet[2603]: I0706 23:25:02.381319 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/350b79fb-a9de-4337-af9f-51a63aa99973-bpf-maps\") pod \"cilium-hc6q5\" (UID: \"350b79fb-a9de-4337-af9f-51a63aa99973\") " pod="kube-system/cilium-hc6q5" Jul 6 23:25:02.381416 kubelet[2603]: I0706 23:25:02.381339 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/350b79fb-a9de-4337-af9f-51a63aa99973-cilium-config-path\") pod \"cilium-hc6q5\" (UID: \"350b79fb-a9de-4337-af9f-51a63aa99973\") " pod="kube-system/cilium-hc6q5" Jul 6 23:25:02.381416 kubelet[2603]: I0706 23:25:02.381364 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f7ebeb06-9142-482e-a7f9-c823fb312d94-xtables-lock\") pod \"kube-proxy-58l5j\" (UID: \"f7ebeb06-9142-482e-a7f9-c823fb312d94\") " pod="kube-system/kube-proxy-58l5j" Jul 6 23:25:02.381416 kubelet[2603]: I0706 23:25:02.381386 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/350b79fb-a9de-4337-af9f-51a63aa99973-cni-path\") pod \"cilium-hc6q5\" (UID: \"350b79fb-a9de-4337-af9f-51a63aa99973\") " pod="kube-system/cilium-hc6q5" Jul 6 23:25:02.381416 kubelet[2603]: I0706 23:25:02.381409 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/350b79fb-a9de-4337-af9f-51a63aa99973-lib-modules\") pod \"cilium-hc6q5\" (UID: \"350b79fb-a9de-4337-af9f-51a63aa99973\") " pod="kube-system/cilium-hc6q5" Jul 6 23:25:02.381575 kubelet[2603]: I0706 23:25:02.381426 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/350b79fb-a9de-4337-af9f-51a63aa99973-clustermesh-secrets\") pod \"cilium-hc6q5\" (UID: \"350b79fb-a9de-4337-af9f-51a63aa99973\") " pod="kube-system/cilium-hc6q5" Jul 6 23:25:02.381575 kubelet[2603]: I0706 23:25:02.381447 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffcwg\" (UniqueName: 
\"kubernetes.io/projected/350b79fb-a9de-4337-af9f-51a63aa99973-kube-api-access-ffcwg\") pod \"cilium-hc6q5\" (UID: \"350b79fb-a9de-4337-af9f-51a63aa99973\") " pod="kube-system/cilium-hc6q5" Jul 6 23:25:02.760050 systemd[1]: Created slice kubepods-besteffort-podbf6a07b2_b936_4a4b_ab91_f7bf64c8ff77.slice - libcontainer container kubepods-besteffort-podbf6a07b2_b936_4a4b_ab91_f7bf64c8ff77.slice. Jul 6 23:25:02.783670 kubelet[2603]: I0706 23:25:02.783569 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-259cd\" (UniqueName: \"kubernetes.io/projected/bf6a07b2-b936-4a4b-ab91-f7bf64c8ff77-kube-api-access-259cd\") pod \"cilium-operator-6c4d7847fc-w4hnw\" (UID: \"bf6a07b2-b936-4a4b-ab91-f7bf64c8ff77\") " pod="kube-system/cilium-operator-6c4d7847fc-w4hnw" Jul 6 23:25:02.783670 kubelet[2603]: I0706 23:25:02.783613 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bf6a07b2-b936-4a4b-ab91-f7bf64c8ff77-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-w4hnw\" (UID: \"bf6a07b2-b936-4a4b-ab91-f7bf64c8ff77\") " pod="kube-system/cilium-operator-6c4d7847fc-w4hnw" Jul 6 23:25:03.244702 kubelet[2603]: E0706 23:25:03.244665 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:25:03.245228 containerd[1482]: time="2025-07-06T23:25:03.245084762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hc6q5,Uid:350b79fb-a9de-4337-af9f-51a63aa99973,Namespace:kube-system,Attempt:0,}" Jul 6 23:25:03.272060 containerd[1482]: time="2025-07-06T23:25:03.271983058Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:25:03.272060 containerd[1482]: time="2025-07-06T23:25:03.272032418Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:25:03.272217 containerd[1482]: time="2025-07-06T23:25:03.272043108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:25:03.272217 containerd[1482]: time="2025-07-06T23:25:03.272120028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:25:03.300937 systemd[1]: Started cri-containerd-773e16ba7139203572fc1f18bdda682ed8323d3838b6de16ff21898541a50640.scope - libcontainer container 773e16ba7139203572fc1f18bdda682ed8323d3838b6de16ff21898541a50640. 
Jul 6 23:25:03.328869 containerd[1482]: time="2025-07-06T23:25:03.328736460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hc6q5,Uid:350b79fb-a9de-4337-af9f-51a63aa99973,Namespace:kube-system,Attempt:0,} returns sandbox id \"773e16ba7139203572fc1f18bdda682ed8323d3838b6de16ff21898541a50640\"" Jul 6 23:25:03.329574 kubelet[2603]: E0706 23:25:03.329518 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:25:03.332205 containerd[1482]: time="2025-07-06T23:25:03.332159808Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 6 23:25:03.362845 kubelet[2603]: E0706 23:25:03.362813 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:25:03.363279 containerd[1482]: time="2025-07-06T23:25:03.363223673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-w4hnw,Uid:bf6a07b2-b936-4a4b-ab91-f7bf64c8ff77,Namespace:kube-system,Attempt:0,}" Jul 6 23:25:03.389360 containerd[1482]: time="2025-07-06T23:25:03.389261600Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:25:03.389360 containerd[1482]: time="2025-07-06T23:25:03.389318630Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:25:03.389360 containerd[1482]: time="2025-07-06T23:25:03.389331420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:25:03.391129 containerd[1482]: time="2025-07-06T23:25:03.389637159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:25:03.413938 systemd[1]: Started cri-containerd-8990f3edcdebe8951608ee85e58a4cca99cf733a32a0164384f14e1ca45d972e.scope - libcontainer container 8990f3edcdebe8951608ee85e58a4cca99cf733a32a0164384f14e1ca45d972e. Jul 6 23:25:03.455394 containerd[1482]: time="2025-07-06T23:25:03.455291047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-w4hnw,Uid:bf6a07b2-b936-4a4b-ab91-f7bf64c8ff77,Namespace:kube-system,Attempt:0,} returns sandbox id \"8990f3edcdebe8951608ee85e58a4cca99cf733a32a0164384f14e1ca45d972e\"" Jul 6 23:25:03.456859 kubelet[2603]: E0706 23:25:03.456575 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:25:03.484750 kubelet[2603]: E0706 23:25:03.484712 2603 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Jul 6 23:25:03.484873 kubelet[2603]: E0706 23:25:03.484828 2603 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f7ebeb06-9142-482e-a7f9-c823fb312d94-kube-proxy podName:f7ebeb06-9142-482e-a7f9-c823fb312d94 nodeName:}" failed. No retries permitted until 2025-07-06 23:25:03.984790352 +0000 UTC m=+7.827314867 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/f7ebeb06-9142-482e-a7f9-c823fb312d94-kube-proxy") pod "kube-proxy-58l5j" (UID: "f7ebeb06-9142-482e-a7f9-c823fb312d94") : failed to sync configmap cache: timed out waiting for the condition Jul 6 23:25:04.137141 kubelet[2603]: E0706 23:25:04.136743 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:25:04.138385 containerd[1482]: time="2025-07-06T23:25:04.138334635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-58l5j,Uid:f7ebeb06-9142-482e-a7f9-c823fb312d94,Namespace:kube-system,Attempt:0,}" Jul 6 23:25:04.166442 containerd[1482]: time="2025-07-06T23:25:04.166351141Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:25:04.166442 containerd[1482]: time="2025-07-06T23:25:04.166397791Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:25:04.166442 containerd[1482]: time="2025-07-06T23:25:04.166412021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:25:04.166635 containerd[1482]: time="2025-07-06T23:25:04.166477011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:25:04.190926 systemd[1]: Started cri-containerd-71b46a138576b702d6d63f405db49d5835bbda4fbcb86b3e32e887d2271318c7.scope - libcontainer container 71b46a138576b702d6d63f405db49d5835bbda4fbcb86b3e32e887d2271318c7. Jul 6 23:25:04.212484 containerd[1482]: time="2025-07-06T23:25:04.212449628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-58l5j,Uid:f7ebeb06-9142-482e-a7f9-c823fb312d94,Namespace:kube-system,Attempt:0,} returns sandbox id \"71b46a138576b702d6d63f405db49d5835bbda4fbcb86b3e32e887d2271318c7\"" Jul 6 23:25:04.213069 kubelet[2603]: E0706 23:25:04.213047 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:25:04.216338 containerd[1482]: time="2025-07-06T23:25:04.216298406Z" level=info msg="CreateContainer within sandbox \"71b46a138576b702d6d63f405db49d5835bbda4fbcb86b3e32e887d2271318c7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 6 23:25:04.229578 containerd[1482]: time="2025-07-06T23:25:04.229505469Z" level=info msg="CreateContainer within sandbox \"71b46a138576b702d6d63f405db49d5835bbda4fbcb86b3e32e887d2271318c7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7c708f3260a186f54ccf5cc3ad339634f3a45fbb1e8c9d93e603c71e76eabdf7\"" Jul 6 23:25:04.230378 containerd[1482]: time="2025-07-06T23:25:04.230196979Z" level=info msg="StartContainer for \"7c708f3260a186f54ccf5cc3ad339634f3a45fbb1e8c9d93e603c71e76eabdf7\"" Jul 6 23:25:04.253905 systemd[1]: Started cri-containerd-7c708f3260a186f54ccf5cc3ad339634f3a45fbb1e8c9d93e603c71e76eabdf7.scope - libcontainer container 7c708f3260a186f54ccf5cc3ad339634f3a45fbb1e8c9d93e603c71e76eabdf7. 
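Note: the MountVolume.SetUp failure above is transient — the kube-proxy ConfigMap could not be read until the node-to-pod relationship was established (the "forbidden ... no relationship found" errors earlier) — so the kubelet schedules a retry with durationBeforeRetry 500ms. On repeated failures the delay grows exponentially; a sketch assuming the common doubling-with-cap scheme (only the 500ms starting value comes from this log; the factor and cap here are assumptions):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay := 500 * time.Millisecond // durationBeforeRetry seen in the log
        const factor = 2                // assumed doubling
        maxDelay := 2 * time.Minute     // assumed cap
        for attempt := 1; attempt <= 6; attempt++ {
            fmt.Printf("attempt %d: wait %v\n", attempt, delay)
            delay *= factor
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }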
Jul 6 23:25:04.287157 containerd[1482]: time="2025-07-06T23:25:04.287031741Z" level=info msg="StartContainer for \"7c708f3260a186f54ccf5cc3ad339634f3a45fbb1e8c9d93e603c71e76eabdf7\" returns successfully" Jul 6 23:25:05.300808 kubelet[2603]: E0706 23:25:05.299024 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:25:05.309969 kubelet[2603]: I0706 23:25:05.309805 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-58l5j" podStartSLOduration=3.309792469 podStartE2EDuration="3.309792469s" podCreationTimestamp="2025-07-06 23:25:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:25:05.309466849 +0000 UTC m=+9.151991364" watchObservedRunningTime="2025-07-06 23:25:05.309792469 +0000 UTC m=+9.152316984" Jul 6 23:25:06.998274 systemd-timesyncd[1402]: Contacted time server [2600:3c00::f03c:93ff:fe5b:29d1]:123 (2.flatcar.pool.ntp.org). Jul 6 23:25:06.998331 systemd-timesyncd[1402]: Initial clock synchronization to Sun 2025-07-06 23:25:06.998021 UTC. Jul 6 23:25:06.998382 systemd-resolved[1399]: Clock change detected. Flushing caches. Jul 6 23:25:07.219513 kubelet[2603]: E0706 23:25:07.219484 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:25:07.376156 kubelet[2603]: E0706 23:25:07.374213 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:25:07.376156 kubelet[2603]: E0706 23:25:07.374951 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:25:08.220685 kubelet[2603]: E0706 23:25:08.220658 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:25:08.221122 kubelet[2603]: E0706 23:25:08.221090 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:25:09.221915 kubelet[2603]: E0706 23:25:09.221784 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:25:11.247890 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1201602886.mount: Deactivated successfully. 
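Note: the kube-proxy pod above shows the CRI sequence the kubelet drives for every pod in this log: RunPodSandbox returns a sandbox ID, CreateContainer places each container inside that sandbox, and StartContainer runs it. A structural sketch of that ordering — the interface here is a hypothetical stand-in with simplified signatures, not the real CRI gRPC service:

    package main

    import "fmt"

    // rt is a hypothetical stand-in for the CRI runtime service; the method
    // names mirror the calls visible in this log.
    type rt interface {
        RunPodSandbox(pod string) (string, error)
        CreateContainer(sandboxID, name string) (string, error)
        StartContainer(id string) error
    }

    // startPod drives the ordering seen above: sandbox first, then each
    // container is created inside it and started.
    func startPod(r rt, pod string, containers []string) error {
        sb, err := r.RunPodSandbox(pod)
        if err != nil {
            return err
        }
        for _, name := range containers {
            id, err := r.CreateContainer(sb, name)
            if err != nil {
                return err
            }
            if err := r.StartContainer(id); err != nil {
                return err
            }
        }
        return nil
    }

    type fake struct{ n int }

    func (f *fake) RunPodSandbox(pod string) (string, error) {
        f.n++
        return fmt.Sprintf("sb-%d", f.n), nil
    }
    func (f *fake) CreateContainer(sb, name string) (string, error) {
        f.n++
        return fmt.Sprintf("ctr-%d", f.n), nil
    }
    func (f *fake) StartContainer(id string) error {
        fmt.Println("started", id)
        return nil
    }

    func main() {
        var f fake
        _ = startPod(&f, "kube-proxy-58l5j", []string{"kube-proxy"})
    }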
Jul 6 23:25:11.261637 kubelet[2603]: E0706 23:25:11.261410 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:25:12.228057 kubelet[2603]: E0706 23:25:12.227369 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:25:12.719129 update_engine[1466]: I20250706 23:25:12.719070 1466 update_attempter.cc:509] Updating boot flags... Jul 6 23:25:12.816082 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2997) Jul 6 23:25:12.874491 containerd[1482]: time="2025-07-06T23:25:12.873705542Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:25:12.874491 containerd[1482]: time="2025-07-06T23:25:12.874399322Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jul 6 23:25:12.874491 containerd[1482]: time="2025-07-06T23:25:12.874454282Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:25:12.877615 containerd[1482]: time="2025-07-06T23:25:12.877488500Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.625948536s" Jul 6 23:25:12.877615 containerd[1482]: time="2025-07-06T23:25:12.877538240Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 6 23:25:12.883843 containerd[1482]: time="2025-07-06T23:25:12.883682837Z" level=info msg="CreateContainer within sandbox \"773e16ba7139203572fc1f18bdda682ed8323d3838b6de16ff21898541a50640\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 6 23:25:12.885057 containerd[1482]: time="2025-07-06T23:25:12.884722626Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 6 23:25:12.904854 containerd[1482]: time="2025-07-06T23:25:12.904809926Z" level=info msg="CreateContainer within sandbox \"773e16ba7139203572fc1f18bdda682ed8323d3838b6de16ff21898541a50640\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e011096720aeaff334cbf000cec985fbd3dc9830b7038f39d1b3277e1d7e0732\"" Jul 6 23:25:12.907866 containerd[1482]: time="2025-07-06T23:25:12.907784065Z" level=info msg="StartContainer for \"e011096720aeaff334cbf000cec985fbd3dc9830b7038f39d1b3277e1d7e0732\"" Jul 6 23:25:12.915062 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (3000) Jul 6 23:25:12.998155 systemd[1]: Started 
cri-containerd-e011096720aeaff334cbf000cec985fbd3dc9830b7038f39d1b3277e1d7e0732.scope - libcontainer container e011096720aeaff334cbf000cec985fbd3dc9830b7038f39d1b3277e1d7e0732. Jul 6 23:25:13.042123 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (3000) Jul 6 23:25:13.085943 containerd[1482]: time="2025-07-06T23:25:13.083743527Z" level=info msg="StartContainer for \"e011096720aeaff334cbf000cec985fbd3dc9830b7038f39d1b3277e1d7e0732\" returns successfully" Jul 6 23:25:13.100339 systemd[1]: cri-containerd-e011096720aeaff334cbf000cec985fbd3dc9830b7038f39d1b3277e1d7e0732.scope: Deactivated successfully. Jul 6 23:25:13.235674 kubelet[2603]: E0706 23:25:13.235649 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:25:13.249754 containerd[1482]: time="2025-07-06T23:25:13.249373214Z" level=info msg="shim disconnected" id=e011096720aeaff334cbf000cec985fbd3dc9830b7038f39d1b3277e1d7e0732 namespace=k8s.io Jul 6 23:25:13.249754 containerd[1482]: time="2025-07-06T23:25:13.249684234Z" level=warning msg="cleaning up after shim disconnected" id=e011096720aeaff334cbf000cec985fbd3dc9830b7038f39d1b3277e1d7e0732 namespace=k8s.io Jul 6 23:25:13.249754 containerd[1482]: time="2025-07-06T23:25:13.249705734Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:25:13.898067 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e011096720aeaff334cbf000cec985fbd3dc9830b7038f39d1b3277e1d7e0732-rootfs.mount: Deactivated successfully. Jul 6 23:25:13.903513 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3738903921.mount: Deactivated successfully. Jul 6 23:25:14.238118 kubelet[2603]: E0706 23:25:14.237911 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:25:14.241754 containerd[1482]: time="2025-07-06T23:25:14.240869278Z" level=info msg="CreateContainer within sandbox \"773e16ba7139203572fc1f18bdda682ed8323d3838b6de16ff21898541a50640\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 6 23:25:14.260055 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1777832175.mount: Deactivated successfully. Jul 6 23:25:14.265856 containerd[1482]: time="2025-07-06T23:25:14.265782296Z" level=info msg="CreateContainer within sandbox \"773e16ba7139203572fc1f18bdda682ed8323d3838b6de16ff21898541a50640\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ab8b1e9a918a82627ba6ac8ec0d7cee1359f480b5f89ea55f2c0010e2442f194\"" Jul 6 23:25:14.268038 containerd[1482]: time="2025-07-06T23:25:14.267344165Z" level=info msg="StartContainer for \"ab8b1e9a918a82627ba6ac8ec0d7cee1359f480b5f89ea55f2c0010e2442f194\"" Jul 6 23:25:14.308169 systemd[1]: Started cri-containerd-ab8b1e9a918a82627ba6ac8ec0d7cee1359f480b5f89ea55f2c0010e2442f194.scope - libcontainer container ab8b1e9a918a82627ba6ac8ec0d7cee1359f480b5f89ea55f2c0010e2442f194. Jul 6 23:25:14.338023 containerd[1482]: time="2025-07-06T23:25:14.337888650Z" level=info msg="StartContainer for \"ab8b1e9a918a82627ba6ac8ec0d7cee1359f480b5f89ea55f2c0010e2442f194\" returns successfully" Jul 6 23:25:14.354868 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 6 23:25:14.355105 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
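Note: the cilium image pull above reports 166730503 bytes read in 8.625948536s — roughly 18 MiB/s from quay.io — and the empty repo tag is expected because the image was pulled by digest. A quick check of that rate:

    package main

    import "fmt"

    func main() {
        const bytesRead = 166730503 // from the "stop pulling image" entry
        const seconds = 8.625948536 // from the "Pulled image ... in" entry
        rate := bytesRead / seconds // bytes per second
        fmt.Printf("%.1f MiB/s\n", rate/(1<<20)) // ~18.4 MiB/s
    }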
Jul 6 23:25:14.355653 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:25:14.365537 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:25:14.365774 systemd[1]: cri-containerd-ab8b1e9a918a82627ba6ac8ec0d7cee1359f480b5f89ea55f2c0010e2442f194.scope: Deactivated successfully. Jul 6 23:25:14.387177 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:25:14.406616 containerd[1482]: time="2025-07-06T23:25:14.406546245Z" level=info msg="shim disconnected" id=ab8b1e9a918a82627ba6ac8ec0d7cee1359f480b5f89ea55f2c0010e2442f194 namespace=k8s.io Jul 6 23:25:14.406865 containerd[1482]: time="2025-07-06T23:25:14.406616945Z" level=warning msg="cleaning up after shim disconnected" id=ab8b1e9a918a82627ba6ac8ec0d7cee1359f480b5f89ea55f2c0010e2442f194 namespace=k8s.io Jul 6 23:25:14.406865 containerd[1482]: time="2025-07-06T23:25:14.406628535Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:25:15.243343 kubelet[2603]: E0706 23:25:15.242902 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:25:15.248928 containerd[1482]: time="2025-07-06T23:25:15.248246304Z" level=info msg="CreateContainer within sandbox \"773e16ba7139203572fc1f18bdda682ed8323d3838b6de16ff21898541a50640\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 6 23:25:15.274010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4129342923.mount: Deactivated successfully. Jul 6 23:25:15.284319 containerd[1482]: time="2025-07-06T23:25:15.284191176Z" level=info msg="CreateContainer within sandbox \"773e16ba7139203572fc1f18bdda682ed8323d3838b6de16ff21898541a50640\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cf1c91b67bcfcec4301f0faec71cdecbc6b96ae5bbb3f93afe5620f741f63563\"" Jul 6 23:25:15.284791 containerd[1482]: time="2025-07-06T23:25:15.284767476Z" level=info msg="StartContainer for \"cf1c91b67bcfcec4301f0faec71cdecbc6b96ae5bbb3f93afe5620f741f63563\"" Jul 6 23:25:15.324236 systemd[1]: Started cri-containerd-cf1c91b67bcfcec4301f0faec71cdecbc6b96ae5bbb3f93afe5620f741f63563.scope - libcontainer container cf1c91b67bcfcec4301f0faec71cdecbc6b96ae5bbb3f93afe5620f741f63563. Jul 6 23:25:15.366589 containerd[1482]: time="2025-07-06T23:25:15.366555615Z" level=info msg="StartContainer for \"cf1c91b67bcfcec4301f0faec71cdecbc6b96ae5bbb3f93afe5620f741f63563\" returns successfully" Jul 6 23:25:15.371277 systemd[1]: cri-containerd-cf1c91b67bcfcec4301f0faec71cdecbc6b96ae5bbb3f93afe5620f741f63563.scope: Deactivated successfully. 
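Note: the create/start/deactivate cycles above step through Cilium's init containers in order; each one runs to completion (its cri-containerd scope deactivates and the shim disconnects) before the next is created, and the last two stages follow in the entries below. The sequence observed across this log:

    package main

    import "fmt"

    // Container order for the cilium-hc6q5 pod as it appears in this log;
    // each init container exits before the next one is created.
    var sequence = []string{
        "mount-cgroup",            // 23:25:12-13
        "apply-sysctl-overwrites", // 23:25:14
        "mount-bpf-fs",            // 23:25:15
        "clean-cilium-state",      // 23:25:16 (below)
        "cilium-agent",            // 23:25:17 (below), long-running
    }

    func main() {
        for i, name := range sequence {
            fmt.Printf("%d. %s\n", i+1, name)
        }
    }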
Jul 6 23:25:15.393249 containerd[1482]: time="2025-07-06T23:25:15.393191842Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:25:15.399438 containerd[1482]: time="2025-07-06T23:25:15.399404109Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jul 6 23:25:15.403048 containerd[1482]: time="2025-07-06T23:25:15.401352548Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:25:15.403048 containerd[1482]: time="2025-07-06T23:25:15.402721787Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.517970031s" Jul 6 23:25:15.403048 containerd[1482]: time="2025-07-06T23:25:15.402756427Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 6 23:25:15.412552 containerd[1482]: time="2025-07-06T23:25:15.412509822Z" level=info msg="CreateContainer within sandbox \"8990f3edcdebe8951608ee85e58a4cca99cf733a32a0164384f14e1ca45d972e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 6 23:25:15.416929 containerd[1482]: time="2025-07-06T23:25:15.416894150Z" level=info msg="shim disconnected" id=cf1c91b67bcfcec4301f0faec71cdecbc6b96ae5bbb3f93afe5620f741f63563 namespace=k8s.io Jul 6 23:25:15.416985 containerd[1482]: time="2025-07-06T23:25:15.416929050Z" level=warning msg="cleaning up after shim disconnected" id=cf1c91b67bcfcec4301f0faec71cdecbc6b96ae5bbb3f93afe5620f741f63563 namespace=k8s.io Jul 6 23:25:15.416985 containerd[1482]: time="2025-07-06T23:25:15.416937120Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:25:15.424183 containerd[1482]: time="2025-07-06T23:25:15.424154126Z" level=info msg="CreateContainer within sandbox \"8990f3edcdebe8951608ee85e58a4cca99cf733a32a0164384f14e1ca45d972e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9e46150e2861656b601d09ebc2b5756fc6e738ffb0bd80706a1f4234906ae570\"" Jul 6 23:25:15.425179 containerd[1482]: time="2025-07-06T23:25:15.425122906Z" level=info msg="StartContainer for \"9e46150e2861656b601d09ebc2b5756fc6e738ffb0bd80706a1f4234906ae570\"" Jul 6 23:25:15.430726 containerd[1482]: time="2025-07-06T23:25:15.430659903Z" level=warning msg="cleanup warnings time=\"2025-07-06T23:25:15Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 6 23:25:15.457156 systemd[1]: Started cri-containerd-9e46150e2861656b601d09ebc2b5756fc6e738ffb0bd80706a1f4234906ae570.scope - libcontainer container 9e46150e2861656b601d09ebc2b5756fc6e738ffb0bd80706a1f4234906ae570. 
Jul 6 23:25:15.482386 containerd[1482]: time="2025-07-06T23:25:15.482343857Z" level=info msg="StartContainer for \"9e46150e2861656b601d09ebc2b5756fc6e738ffb0bd80706a1f4234906ae570\" returns successfully" Jul 6 23:25:15.898192 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cf1c91b67bcfcec4301f0faec71cdecbc6b96ae5bbb3f93afe5620f741f63563-rootfs.mount: Deactivated successfully. Jul 6 23:25:16.246072 kubelet[2603]: E0706 23:25:16.245218 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:25:16.248680 kubelet[2603]: E0706 23:25:16.248483 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:25:16.250102 containerd[1482]: time="2025-07-06T23:25:16.250065643Z" level=info msg="CreateContainer within sandbox \"773e16ba7139203572fc1f18bdda682ed8323d3838b6de16ff21898541a50640\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 6 23:25:16.261243 containerd[1482]: time="2025-07-06T23:25:16.260132188Z" level=info msg="CreateContainer within sandbox \"773e16ba7139203572fc1f18bdda682ed8323d3838b6de16ff21898541a50640\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ccc789f3181a8b556709cee76ccd0022b78084a11ad2f5abfdb6597d6aaf0e5f\"" Jul 6 23:25:16.261243 containerd[1482]: time="2025-07-06T23:25:16.261143448Z" level=info msg="StartContainer for \"ccc789f3181a8b556709cee76ccd0022b78084a11ad2f5abfdb6597d6aaf0e5f\"" Jul 6 23:25:16.326212 systemd[1]: Started cri-containerd-ccc789f3181a8b556709cee76ccd0022b78084a11ad2f5abfdb6597d6aaf0e5f.scope - libcontainer container ccc789f3181a8b556709cee76ccd0022b78084a11ad2f5abfdb6597d6aaf0e5f. Jul 6 23:25:16.374161 systemd[1]: cri-containerd-ccc789f3181a8b556709cee76ccd0022b78084a11ad2f5abfdb6597d6aaf0e5f.scope: Deactivated successfully. Jul 6 23:25:16.375444 containerd[1482]: time="2025-07-06T23:25:16.375338251Z" level=info msg="StartContainer for \"ccc789f3181a8b556709cee76ccd0022b78084a11ad2f5abfdb6597d6aaf0e5f\" returns successfully" Jul 6 23:25:16.400336 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ccc789f3181a8b556709cee76ccd0022b78084a11ad2f5abfdb6597d6aaf0e5f-rootfs.mount: Deactivated successfully. 
Jul 6 23:25:16.404876 containerd[1482]: time="2025-07-06T23:25:16.404551366Z" level=info msg="shim disconnected" id=ccc789f3181a8b556709cee76ccd0022b78084a11ad2f5abfdb6597d6aaf0e5f namespace=k8s.io Jul 6 23:25:16.404876 containerd[1482]: time="2025-07-06T23:25:16.404622916Z" level=warning msg="cleaning up after shim disconnected" id=ccc789f3181a8b556709cee76ccd0022b78084a11ad2f5abfdb6597d6aaf0e5f namespace=k8s.io Jul 6 23:25:16.404876 containerd[1482]: time="2025-07-06T23:25:16.404633056Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:25:17.253373 kubelet[2603]: E0706 23:25:17.252095 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:25:17.253373 kubelet[2603]: E0706 23:25:17.252146 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:25:17.254563 containerd[1482]: time="2025-07-06T23:25:17.254514941Z" level=info msg="CreateContainer within sandbox \"773e16ba7139203572fc1f18bdda682ed8323d3838b6de16ff21898541a50640\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 6 23:25:17.280871 containerd[1482]: time="2025-07-06T23:25:17.275669790Z" level=info msg="CreateContainer within sandbox \"773e16ba7139203572fc1f18bdda682ed8323d3838b6de16ff21898541a50640\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f98aa878b43d863a23dd4b10baf6784da68e51b7d22147e18b1bc2bde6564b71\"" Jul 6 23:25:17.279991 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount486104662.mount: Deactivated successfully. Jul 6 23:25:17.285070 containerd[1482]: time="2025-07-06T23:25:17.281333138Z" level=info msg="StartContainer for \"f98aa878b43d863a23dd4b10baf6784da68e51b7d22147e18b1bc2bde6564b71\"" Jul 6 23:25:17.285146 kubelet[2603]: I0706 23:25:17.284573 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-w4hnw" podStartSLOduration=4.255249622 podStartE2EDuration="15.284552536s" podCreationTimestamp="2025-07-06 23:25:02 +0000 UTC" firstStartedPulling="2025-07-06 23:25:03.457990275 +0000 UTC m=+7.300514800" lastFinishedPulling="2025-07-06 23:25:15.406542485 +0000 UTC m=+18.329817714" observedRunningTime="2025-07-06 23:25:16.320937648 +0000 UTC m=+19.244212867" watchObservedRunningTime="2025-07-06 23:25:17.284552536 +0000 UTC m=+20.207827765" Jul 6 23:25:17.323202 systemd[1]: Started cri-containerd-f98aa878b43d863a23dd4b10baf6784da68e51b7d22147e18b1bc2bde6564b71.scope - libcontainer container f98aa878b43d863a23dd4b10baf6784da68e51b7d22147e18b1bc2bde6564b71. Jul 6 23:25:17.353716 containerd[1482]: time="2025-07-06T23:25:17.353654901Z" level=info msg="StartContainer for \"f98aa878b43d863a23dd4b10baf6784da68e51b7d22147e18b1bc2bde6564b71\" returns successfully" Jul 6 23:25:17.450812 kubelet[2603]: I0706 23:25:17.450778 2603 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 6 23:25:17.478465 systemd[1]: Created slice kubepods-burstable-poda40ba20e_13ec_4e80_b63e_b47884160139.slice - libcontainer container kubepods-burstable-poda40ba20e_13ec_4e80_b63e_b47884160139.slice. Jul 6 23:25:17.488611 systemd[1]: Created slice kubepods-burstable-podf3b21596_4e73_4683_b790_53e033e56cb1.slice - libcontainer container kubepods-burstable-podf3b21596_4e73_4683_b790_53e033e56cb1.slice. 
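Note: "Fast updating node status as it just became ready" above marks the point where the cilium agent brought networking up and the node flipped to Ready, which is what unblocks scheduling of the two coredns pods whose slices are created next. A sketch of reading that condition with client-go (kubeconfig wiring via the KUBECONFIG environment variable is an assumption of the sketch):

    package main

    import (
        "context"
        "fmt"
        "os"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), "172-237-135-91", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                fmt.Printf("Ready=%s reason=%s\n", c.Status, c.Reason)
            }
        }
    }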
Jul 6 23:25:17.600475 kubelet[2603]: I0706 23:25:17.600307 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgqlr\" (UniqueName: \"kubernetes.io/projected/a40ba20e-13ec-4e80-b63e-b47884160139-kube-api-access-qgqlr\") pod \"coredns-668d6bf9bc-h7qd6\" (UID: \"a40ba20e-13ec-4e80-b63e-b47884160139\") " pod="kube-system/coredns-668d6bf9bc-h7qd6" Jul 6 23:25:17.600475 kubelet[2603]: I0706 23:25:17.600370 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a40ba20e-13ec-4e80-b63e-b47884160139-config-volume\") pod \"coredns-668d6bf9bc-h7qd6\" (UID: \"a40ba20e-13ec-4e80-b63e-b47884160139\") " pod="kube-system/coredns-668d6bf9bc-h7qd6" Jul 6 23:25:17.600475 kubelet[2603]: I0706 23:25:17.600402 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f3b21596-4e73-4683-b790-53e033e56cb1-config-volume\") pod \"coredns-668d6bf9bc-5bfwc\" (UID: \"f3b21596-4e73-4683-b790-53e033e56cb1\") " pod="kube-system/coredns-668d6bf9bc-5bfwc" Jul 6 23:25:17.600475 kubelet[2603]: I0706 23:25:17.600420 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9r8w\" (UniqueName: \"kubernetes.io/projected/f3b21596-4e73-4683-b790-53e033e56cb1-kube-api-access-t9r8w\") pod \"coredns-668d6bf9bc-5bfwc\" (UID: \"f3b21596-4e73-4683-b790-53e033e56cb1\") " pod="kube-system/coredns-668d6bf9bc-5bfwc" Jul 6 23:25:17.786547 kubelet[2603]: E0706 23:25:17.786483 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:25:17.787499 containerd[1482]: time="2025-07-06T23:25:17.787192305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-h7qd6,Uid:a40ba20e-13ec-4e80-b63e-b47884160139,Namespace:kube-system,Attempt:0,}" Jul 6 23:25:17.791260 kubelet[2603]: E0706 23:25:17.791228 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:25:17.791884 containerd[1482]: time="2025-07-06T23:25:17.791803912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5bfwc,Uid:f3b21596-4e73-4683-b790-53e033e56cb1,Namespace:kube-system,Attempt:0,}" Jul 6 23:25:18.256815 kubelet[2603]: E0706 23:25:18.256603 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:25:18.287048 kubelet[2603]: I0706 23:25:18.285825 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hc6q5" podStartSLOduration=7.654864081 podStartE2EDuration="16.285810945s" podCreationTimestamp="2025-07-06 23:25:02 +0000 UTC" firstStartedPulling="2025-07-06 23:25:03.330204469 +0000 UTC m=+7.172728984" lastFinishedPulling="2025-07-06 23:25:12.880400629 +0000 UTC m=+15.803675848" observedRunningTime="2025-07-06 23:25:18.285543705 +0000 UTC m=+21.208818924" watchObservedRunningTime="2025-07-06 23:25:18.285810945 +0000 UTC m=+21.209086164" Jul 6 23:25:19.258469 kubelet[2603]: E0706 23:25:19.258424 2603 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:25:19.611546 systemd-networkd[1396]: cilium_host: Link UP Jul 6 23:25:19.611727 systemd-networkd[1396]: cilium_net: Link UP Jul 6 23:25:19.611918 systemd-networkd[1396]: cilium_net: Gained carrier Jul 6 23:25:19.616154 systemd-networkd[1396]: cilium_host: Gained carrier Jul 6 23:25:19.732360 systemd-networkd[1396]: cilium_vxlan: Link UP Jul 6 23:25:19.732501 systemd-networkd[1396]: cilium_vxlan: Gained carrier Jul 6 23:25:19.933268 kernel: NET: Registered PF_ALG protocol family Jul 6 23:25:20.234196 systemd-networkd[1396]: cilium_host: Gained IPv6LL Jul 6 23:25:20.260940 kubelet[2603]: E0706 23:25:20.260918 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:25:20.426165 systemd-networkd[1396]: cilium_net: Gained IPv6LL Jul 6 23:25:20.570412 systemd-networkd[1396]: lxc_health: Link UP Jul 6 23:25:20.574119 systemd-networkd[1396]: lxc_health: Gained carrier Jul 6 23:25:20.844075 systemd-networkd[1396]: lxc638701837001: Link UP Jul 6 23:25:20.869393 kernel: eth0: renamed from tmp58664 Jul 6 23:25:20.867486 systemd-networkd[1396]: lxca1f0b5263b14: Link UP Jul 6 23:25:20.876113 kernel: eth0: renamed from tmp8cafa Jul 6 23:25:20.881154 systemd-networkd[1396]: lxc638701837001: Gained carrier Jul 6 23:25:20.882006 systemd-networkd[1396]: lxca1f0b5263b14: Gained carrier Jul 6 23:25:20.939128 systemd-networkd[1396]: cilium_vxlan: Gained IPv6LL Jul 6 23:25:21.770266 systemd-networkd[1396]: lxc_health: Gained IPv6LL Jul 6 23:25:22.166007 kubelet[2603]: E0706 23:25:22.165217 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:25:22.277204 kubelet[2603]: E0706 23:25:22.276857 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:25:22.602209 systemd-networkd[1396]: lxc638701837001: Gained IPv6LL Jul 6 23:25:22.602540 systemd-networkd[1396]: lxca1f0b5263b14: Gained IPv6LL Jul 6 23:25:23.278830 kubelet[2603]: E0706 23:25:23.278750 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:25:24.141057 containerd[1482]: time="2025-07-06T23:25:24.140293847Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:25:24.141057 containerd[1482]: time="2025-07-06T23:25:24.140375807Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:25:24.141057 containerd[1482]: time="2025-07-06T23:25:24.140407597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:25:24.142342 containerd[1482]: time="2025-07-06T23:25:24.142144306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:25:24.176858 systemd[1]: Started cri-containerd-8cafae8a3ad1f6d82a22aa1409b357c1a4566225c5286631a8acdd13defa5d70.scope - libcontainer container 8cafae8a3ad1f6d82a22aa1409b357c1a4566225c5286631a8acdd13defa5d70. Jul 6 23:25:24.184140 containerd[1482]: time="2025-07-06T23:25:24.182152146Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:25:24.184140 containerd[1482]: time="2025-07-06T23:25:24.182205166Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:25:24.184140 containerd[1482]: time="2025-07-06T23:25:24.182215676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:25:24.184140 containerd[1482]: time="2025-07-06T23:25:24.182283046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:25:24.222312 systemd[1]: Started cri-containerd-58664bf08ebd2fb5bd85270ed35573925c17d3bbc16dc60efe1ad85921647294.scope - libcontainer container 58664bf08ebd2fb5bd85270ed35573925c17d3bbc16dc60efe1ad85921647294. Jul 6 23:25:24.254616 containerd[1482]: time="2025-07-06T23:25:24.254573760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5bfwc,Uid:f3b21596-4e73-4683-b790-53e033e56cb1,Namespace:kube-system,Attempt:0,} returns sandbox id \"8cafae8a3ad1f6d82a22aa1409b357c1a4566225c5286631a8acdd13defa5d70\"" Jul 6 23:25:24.257360 kubelet[2603]: E0706 23:25:24.256937 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:25:24.260319 containerd[1482]: time="2025-07-06T23:25:24.260291257Z" level=info msg="CreateContainer within sandbox \"8cafae8a3ad1f6d82a22aa1409b357c1a4566225c5286631a8acdd13defa5d70\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:25:24.276398 containerd[1482]: time="2025-07-06T23:25:24.276371459Z" level=info msg="CreateContainer within sandbox \"8cafae8a3ad1f6d82a22aa1409b357c1a4566225c5286631a8acdd13defa5d70\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a331e0c28321dd4c77a082e226c0e4832bbcb14ee2a82ee1b4a4ceda05735c66\"" Jul 6 23:25:24.279191 containerd[1482]: time="2025-07-06T23:25:24.277167609Z" level=info msg="StartContainer for \"a331e0c28321dd4c77a082e226c0e4832bbcb14ee2a82ee1b4a4ceda05735c66\"" Jul 6 23:25:24.310150 systemd[1]: Started cri-containerd-a331e0c28321dd4c77a082e226c0e4832bbcb14ee2a82ee1b4a4ceda05735c66.scope - libcontainer container a331e0c28321dd4c77a082e226c0e4832bbcb14ee2a82ee1b4a4ceda05735c66. 
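Note: the kernel "eth0: renamed from tmp58664" and "renamed from tmp8cafa" entries logged earlier line up with the sandbox IDs returned here (58664bf08ebd..., 8cafae8a3ad1...): the container-side veth starts out named "tmp" plus the first five characters of the sandbox ID and is renamed to eth0 inside the pod's network namespace. This naming is an observation from these entries, not a documented interface:

    package main

    import "fmt"

    // tmpIfaceName reproduces the temporary interface name seen in this log
    // before the rename to eth0.
    func tmpIfaceName(sandboxID string) string {
        return "tmp" + sandboxID[:5]
    }

    func main() {
        fmt.Println(tmpIfaceName("58664bf08ebd2fb5bd85270ed35573925c17d3bbc16dc60efe1ad85921647294")) // tmp58664
        fmt.Println(tmpIfaceName("8cafae8a3ad1f6d82a22aa1409b357c1a4566225c5286631a8acdd13defa5d70")) // tmp8cafa
    }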
Jul 6 23:25:24.315678 containerd[1482]: time="2025-07-06T23:25:24.315535050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-h7qd6,Uid:a40ba20e-13ec-4e80-b63e-b47884160139,Namespace:kube-system,Attempt:0,} returns sandbox id \"58664bf08ebd2fb5bd85270ed35573925c17d3bbc16dc60efe1ad85921647294\"" Jul 6 23:25:24.317557 kubelet[2603]: E0706 23:25:24.317540 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:25:24.320690 containerd[1482]: time="2025-07-06T23:25:24.320641787Z" level=info msg="CreateContainer within sandbox \"58664bf08ebd2fb5bd85270ed35573925c17d3bbc16dc60efe1ad85921647294\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:25:24.336116 containerd[1482]: time="2025-07-06T23:25:24.334869700Z" level=info msg="CreateContainer within sandbox \"58664bf08ebd2fb5bd85270ed35573925c17d3bbc16dc60efe1ad85921647294\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1ade83ab3de11f68d41a70e931a4cf774392d38bc65a8b1cb384ea1612f15b2e\"" Jul 6 23:25:24.338697 containerd[1482]: time="2025-07-06T23:25:24.337687069Z" level=info msg="StartContainer for \"1ade83ab3de11f68d41a70e931a4cf774392d38bc65a8b1cb384ea1612f15b2e\"" Jul 6 23:25:24.358981 containerd[1482]: time="2025-07-06T23:25:24.358934168Z" level=info msg="StartContainer for \"a331e0c28321dd4c77a082e226c0e4832bbcb14ee2a82ee1b4a4ceda05735c66\" returns successfully" Jul 6 23:25:24.384336 systemd[1]: Started cri-containerd-1ade83ab3de11f68d41a70e931a4cf774392d38bc65a8b1cb384ea1612f15b2e.scope - libcontainer container 1ade83ab3de11f68d41a70e931a4cf774392d38bc65a8b1cb384ea1612f15b2e. Jul 6 23:25:24.431039 containerd[1482]: time="2025-07-06T23:25:24.430628572Z" level=info msg="StartContainer for \"1ade83ab3de11f68d41a70e931a4cf774392d38bc65a8b1cb384ea1612f15b2e\" returns successfully" Jul 6 23:25:25.285780 kubelet[2603]: E0706 23:25:25.285745 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:25:25.288363 kubelet[2603]: E0706 23:25:25.288338 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:25:25.310601 kubelet[2603]: I0706 23:25:25.310541 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-5bfwc" podStartSLOduration=23.310527812 podStartE2EDuration="23.310527812s" podCreationTimestamp="2025-07-06 23:25:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:25:25.299832287 +0000 UTC m=+28.223107506" watchObservedRunningTime="2025-07-06 23:25:25.310527812 +0000 UTC m=+28.233803031" Jul 6 23:25:25.323507 kubelet[2603]: I0706 23:25:25.322969 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-h7qd6" podStartSLOduration=23.322951396 podStartE2EDuration="23.322951396s" podCreationTimestamp="2025-07-06 23:25:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:25:25.311122772 +0000 UTC m=+28.234397991" watchObservedRunningTime="2025-07-06 
23:25:25.322951396 +0000 UTC m=+28.246226615" Jul 6 23:25:26.289367 kubelet[2603]: E0706 23:25:26.289250 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:25:26.289367 kubelet[2603]: E0706 23:25:26.289250 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:25:27.290829 kubelet[2603]: E0706 23:25:27.290644 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:25:27.290829 kubelet[2603]: E0706 23:25:27.290740 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:26:16.162774 kubelet[2603]: E0706 23:26:16.162432 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:26:18.163368 kubelet[2603]: E0706 23:26:18.163327 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:26:21.163468 kubelet[2603]: E0706 23:26:21.162456 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:26:31.162913 kubelet[2603]: E0706 23:26:31.162461 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:26:33.163263 kubelet[2603]: E0706 23:26:33.162483 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:26:38.162499 kubelet[2603]: E0706 23:26:38.162466 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:26:40.162491 kubelet[2603]: E0706 23:26:40.162461 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:26:44.162267 kubelet[2603]: E0706 23:26:44.162222 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:26:56.299255 systemd[1]: Started sshd@7-172.237.135.91:22-147.75.109.163:33600.service - OpenSSH per-connection server daemon (147.75.109.163:33600). 
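Note: the sshd@7-172.237.135.91:22-147.75.109.163:33600.service unit above comes from socket-activated sshd: each accepted connection gets its own per-connection service instance whose name encodes a connection counter plus the local and remote endpoints, and which is torn down when the session closes. A sketch of composing that instance name, with the layout inferred from the entries in this log:

    package main

    import "fmt"

    // sshdUnitName mirrors the per-connection unit names seen here:
    // sshd@<n>-<local-ip>:<local-port>-<remote-ip>:<remote-port>.service
    func sshdUnitName(n int, localIP string, localPort int, remoteIP string, remotePort int) string {
        return fmt.Sprintf("sshd@%d-%s:%d-%s:%d.service", n, localIP, localPort, remoteIP, remotePort)
    }

    func main() {
        fmt.Println(sshdUnitName(7, "172.237.135.91", 22, "147.75.109.163", 33600))
        // sshd@7-172.237.135.91:22-147.75.109.163:33600.service
    }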
Jul 6 23:26:56.629104 sshd[3988]: Accepted publickey for core from 147.75.109.163 port 33600 ssh2: RSA SHA256:/eDCPZUdFWI+U3+wi39zDPruseM35VxqLVYPXblev1E Jul 6 23:26:56.630507 sshd-session[3988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:26:56.637693 systemd-logind[1463]: New session 8 of user core. Jul 6 23:26:56.643154 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 6 23:26:56.951418 sshd[3990]: Connection closed by 147.75.109.163 port 33600 Jul 6 23:26:56.952832 sshd-session[3988]: pam_unix(sshd:session): session closed for user core Jul 6 23:26:56.960491 systemd[1]: sshd@7-172.237.135.91:22-147.75.109.163:33600.service: Deactivated successfully. Jul 6 23:26:56.960580 systemd-logind[1463]: Session 8 logged out. Waiting for processes to exit. Jul 6 23:26:56.963123 systemd[1]: session-8.scope: Deactivated successfully. Jul 6 23:26:56.964054 systemd-logind[1463]: Removed session 8. Jul 6 23:27:02.025225 systemd[1]: Started sshd@8-172.237.135.91:22-147.75.109.163:33614.service - OpenSSH per-connection server daemon (147.75.109.163:33614). Jul 6 23:27:02.392890 sshd[4005]: Accepted publickey for core from 147.75.109.163 port 33614 ssh2: RSA SHA256:/eDCPZUdFWI+U3+wi39zDPruseM35VxqLVYPXblev1E Jul 6 23:27:02.394825 sshd-session[4005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:27:02.400148 systemd-logind[1463]: New session 9 of user core. Jul 6 23:27:02.404155 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 6 23:27:02.721255 sshd[4007]: Connection closed by 147.75.109.163 port 33614 Jul 6 23:27:02.722173 sshd-session[4005]: pam_unix(sshd:session): session closed for user core Jul 6 23:27:02.726715 systemd[1]: sshd@8-172.237.135.91:22-147.75.109.163:33614.service: Deactivated successfully. Jul 6 23:27:02.729925 systemd[1]: session-9.scope: Deactivated successfully. Jul 6 23:27:02.731092 systemd-logind[1463]: Session 9 logged out. Waiting for processes to exit. Jul 6 23:27:02.732076 systemd-logind[1463]: Removed session 9. Jul 6 23:27:07.794424 systemd[1]: Started sshd@9-172.237.135.91:22-147.75.109.163:41788.service - OpenSSH per-connection server daemon (147.75.109.163:41788). Jul 6 23:27:08.162878 sshd[4022]: Accepted publickey for core from 147.75.109.163 port 41788 ssh2: RSA SHA256:/eDCPZUdFWI+U3+wi39zDPruseM35VxqLVYPXblev1E Jul 6 23:27:08.164437 sshd-session[4022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:27:08.170303 systemd-logind[1463]: New session 10 of user core. Jul 6 23:27:08.174146 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 6 23:27:08.508955 sshd[4024]: Connection closed by 147.75.109.163 port 41788 Jul 6 23:27:08.509778 sshd-session[4022]: pam_unix(sshd:session): session closed for user core Jul 6 23:27:08.514109 systemd-logind[1463]: Session 10 logged out. Waiting for processes to exit. Jul 6 23:27:08.514594 systemd[1]: sshd@9-172.237.135.91:22-147.75.109.163:41788.service: Deactivated successfully. Jul 6 23:27:08.516765 systemd[1]: session-10.scope: Deactivated successfully. Jul 6 23:27:08.517917 systemd-logind[1463]: Removed session 10. Jul 6 23:27:08.576212 systemd[1]: Started sshd@10-172.237.135.91:22-147.75.109.163:41796.service - OpenSSH per-connection server daemon (147.75.109.163:41796). 
Jul 6 23:27:08.932792 sshd[4037]: Accepted publickey for core from 147.75.109.163 port 41796 ssh2: RSA SHA256:/eDCPZUdFWI+U3+wi39zDPruseM35VxqLVYPXblev1E Jul 6 23:27:08.934670 sshd-session[4037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:27:08.939094 systemd-logind[1463]: New session 11 of user core. Jul 6 23:27:08.943155 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 6 23:27:09.277858 sshd[4039]: Connection closed by 147.75.109.163 port 41796 Jul 6 23:27:09.278738 sshd-session[4037]: pam_unix(sshd:session): session closed for user core Jul 6 23:27:09.282075 systemd-logind[1463]: Session 11 logged out. Waiting for processes to exit. Jul 6 23:27:09.283016 systemd[1]: sshd@10-172.237.135.91:22-147.75.109.163:41796.service: Deactivated successfully. Jul 6 23:27:09.284994 systemd[1]: session-11.scope: Deactivated successfully. Jul 6 23:27:09.285836 systemd-logind[1463]: Removed session 11. Jul 6 23:27:09.358325 systemd[1]: Started sshd@11-172.237.135.91:22-147.75.109.163:41798.service - OpenSSH per-connection server daemon (147.75.109.163:41798). Jul 6 23:27:09.728513 sshd[4049]: Accepted publickey for core from 147.75.109.163 port 41798 ssh2: RSA SHA256:/eDCPZUdFWI+U3+wi39zDPruseM35VxqLVYPXblev1E Jul 6 23:27:09.730139 sshd-session[4049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:27:09.734656 systemd-logind[1463]: New session 12 of user core. Jul 6 23:27:09.737174 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 6 23:27:10.068740 sshd[4051]: Connection closed by 147.75.109.163 port 41798 Jul 6 23:27:10.069697 sshd-session[4049]: pam_unix(sshd:session): session closed for user core Jul 6 23:27:10.077119 systemd[1]: sshd@11-172.237.135.91:22-147.75.109.163:41798.service: Deactivated successfully. Jul 6 23:27:10.079817 systemd[1]: session-12.scope: Deactivated successfully. Jul 6 23:27:10.080612 systemd-logind[1463]: Session 12 logged out. Waiting for processes to exit. Jul 6 23:27:10.081515 systemd-logind[1463]: Removed session 12. Jul 6 23:27:15.142232 systemd[1]: Started sshd@12-172.237.135.91:22-147.75.109.163:41802.service - OpenSSH per-connection server daemon (147.75.109.163:41802). Jul 6 23:27:15.513509 sshd[4064]: Accepted publickey for core from 147.75.109.163 port 41802 ssh2: RSA SHA256:/eDCPZUdFWI+U3+wi39zDPruseM35VxqLVYPXblev1E Jul 6 23:27:15.515271 sshd-session[4064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:27:15.520597 systemd-logind[1463]: New session 13 of user core. Jul 6 23:27:15.524184 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 6 23:27:15.832914 sshd[4066]: Connection closed by 147.75.109.163 port 41802 Jul 6 23:27:15.836140 sshd-session[4064]: pam_unix(sshd:session): session closed for user core Jul 6 23:27:15.840420 systemd[1]: sshd@12-172.237.135.91:22-147.75.109.163:41802.service: Deactivated successfully. Jul 6 23:27:15.843332 systemd[1]: session-13.scope: Deactivated successfully. Jul 6 23:27:15.844918 systemd-logind[1463]: Session 13 logged out. Waiting for processes to exit. Jul 6 23:27:15.845822 systemd-logind[1463]: Removed session 13. Jul 6 23:27:20.904373 systemd[1]: Started sshd@13-172.237.135.91:22-147.75.109.163:56016.service - OpenSSH per-connection server daemon (147.75.109.163:56016). 
Jul 6 23:27:21.274623 sshd[4079]: Accepted publickey for core from 147.75.109.163 port 56016 ssh2: RSA SHA256:/eDCPZUdFWI+U3+wi39zDPruseM35VxqLVYPXblev1E Jul 6 23:27:21.276073 sshd-session[4079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:27:21.280962 systemd-logind[1463]: New session 14 of user core. Jul 6 23:27:21.287136 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 6 23:27:21.602622 sshd[4081]: Connection closed by 147.75.109.163 port 56016 Jul 6 23:27:21.603139 sshd-session[4079]: pam_unix(sshd:session): session closed for user core Jul 6 23:27:21.606506 systemd-logind[1463]: Session 14 logged out. Waiting for processes to exit. Jul 6 23:27:21.607333 systemd[1]: sshd@13-172.237.135.91:22-147.75.109.163:56016.service: Deactivated successfully. Jul 6 23:27:21.609789 systemd[1]: session-14.scope: Deactivated successfully. Jul 6 23:27:21.610836 systemd-logind[1463]: Removed session 14. Jul 6 23:27:21.674240 systemd[1]: Started sshd@14-172.237.135.91:22-147.75.109.163:56030.service - OpenSSH per-connection server daemon (147.75.109.163:56030). Jul 6 23:27:22.044067 sshd[4093]: Accepted publickey for core from 147.75.109.163 port 56030 ssh2: RSA SHA256:/eDCPZUdFWI+U3+wi39zDPruseM35VxqLVYPXblev1E Jul 6 23:27:22.045552 sshd-session[4093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:27:22.049783 systemd-logind[1463]: New session 15 of user core. Jul 6 23:27:22.059160 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 6 23:27:22.534954 sshd[4095]: Connection closed by 147.75.109.163 port 56030 Jul 6 23:27:22.535572 sshd-session[4093]: pam_unix(sshd:session): session closed for user core Jul 6 23:27:22.539270 systemd-logind[1463]: Session 15 logged out. Waiting for processes to exit. Jul 6 23:27:22.540280 systemd[1]: sshd@14-172.237.135.91:22-147.75.109.163:56030.service: Deactivated successfully. Jul 6 23:27:22.542637 systemd[1]: session-15.scope: Deactivated successfully. Jul 6 23:27:22.543680 systemd-logind[1463]: Removed session 15. Jul 6 23:27:22.610795 systemd[1]: Started sshd@15-172.237.135.91:22-147.75.109.163:56046.service - OpenSSH per-connection server daemon (147.75.109.163:56046). Jul 6 23:27:22.994134 sshd[4105]: Accepted publickey for core from 147.75.109.163 port 56046 ssh2: RSA SHA256:/eDCPZUdFWI+U3+wi39zDPruseM35VxqLVYPXblev1E Jul 6 23:27:22.995682 sshd-session[4105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:27:23.000250 systemd-logind[1463]: New session 16 of user core. Jul 6 23:27:23.006170 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 6 23:27:23.811545 sshd[4107]: Connection closed by 147.75.109.163 port 56046 Jul 6 23:27:23.812290 sshd-session[4105]: pam_unix(sshd:session): session closed for user core Jul 6 23:27:23.816398 systemd[1]: sshd@15-172.237.135.91:22-147.75.109.163:56046.service: Deactivated successfully. Jul 6 23:27:23.818681 systemd[1]: session-16.scope: Deactivated successfully. Jul 6 23:27:23.819852 systemd-logind[1463]: Session 16 logged out. Waiting for processes to exit. Jul 6 23:27:23.820968 systemd-logind[1463]: Removed session 16. Jul 6 23:27:23.878220 systemd[1]: Started sshd@16-172.237.135.91:22-147.75.109.163:56060.service - OpenSSH per-connection server daemon (147.75.109.163:56060). 
Jul 6 23:27:24.162807 kubelet[2603]: E0706 23:27:24.162781 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:27:24.234674 sshd[4124]: Accepted publickey for core from 147.75.109.163 port 56060 ssh2: RSA SHA256:/eDCPZUdFWI+U3+wi39zDPruseM35VxqLVYPXblev1E Jul 6 23:27:24.236251 sshd-session[4124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:27:24.240557 systemd-logind[1463]: New session 17 of user core. Jul 6 23:27:24.250290 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 6 23:27:24.666847 sshd[4126]: Connection closed by 147.75.109.163 port 56060 Jul 6 23:27:24.667406 sshd-session[4124]: pam_unix(sshd:session): session closed for user core Jul 6 23:27:24.671081 systemd[1]: sshd@16-172.237.135.91:22-147.75.109.163:56060.service: Deactivated successfully. Jul 6 23:27:24.673135 systemd[1]: session-17.scope: Deactivated successfully. Jul 6 23:27:24.674003 systemd-logind[1463]: Session 17 logged out. Waiting for processes to exit. Jul 6 23:27:24.674869 systemd-logind[1463]: Removed session 17. Jul 6 23:27:24.743396 systemd[1]: Started sshd@17-172.237.135.91:22-147.75.109.163:56076.service - OpenSSH per-connection server daemon (147.75.109.163:56076). Jul 6 23:27:25.115816 sshd[4136]: Accepted publickey for core from 147.75.109.163 port 56076 ssh2: RSA SHA256:/eDCPZUdFWI+U3+wi39zDPruseM35VxqLVYPXblev1E Jul 6 23:27:25.117468 sshd-session[4136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:27:25.122615 systemd-logind[1463]: New session 18 of user core. Jul 6 23:27:25.124359 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 6 23:27:25.439107 sshd[4138]: Connection closed by 147.75.109.163 port 56076 Jul 6 23:27:25.439791 sshd-session[4136]: pam_unix(sshd:session): session closed for user core Jul 6 23:27:25.443851 systemd[1]: sshd@17-172.237.135.91:22-147.75.109.163:56076.service: Deactivated successfully. Jul 6 23:27:25.446298 systemd[1]: session-18.scope: Deactivated successfully. Jul 6 23:27:25.447156 systemd-logind[1463]: Session 18 logged out. Waiting for processes to exit. Jul 6 23:27:25.448163 systemd-logind[1463]: Removed session 18. Jul 6 23:27:30.510253 systemd[1]: Started sshd@18-172.237.135.91:22-147.75.109.163:45184.service - OpenSSH per-connection server daemon (147.75.109.163:45184). Jul 6 23:27:30.879967 sshd[4152]: Accepted publickey for core from 147.75.109.163 port 45184 ssh2: RSA SHA256:/eDCPZUdFWI+U3+wi39zDPruseM35VxqLVYPXblev1E Jul 6 23:27:30.881402 sshd-session[4152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:27:30.885252 systemd-logind[1463]: New session 19 of user core. Jul 6 23:27:30.896136 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 6 23:27:31.206667 sshd[4154]: Connection closed by 147.75.109.163 port 45184 Jul 6 23:27:31.207230 sshd-session[4152]: pam_unix(sshd:session): session closed for user core Jul 6 23:27:31.210886 systemd[1]: sshd@18-172.237.135.91:22-147.75.109.163:45184.service: Deactivated successfully. Jul 6 23:27:31.213551 systemd[1]: session-19.scope: Deactivated successfully. Jul 6 23:27:31.214239 systemd-logind[1463]: Session 19 logged out. Waiting for processes to exit. Jul 6 23:27:31.215671 systemd-logind[1463]: Removed session 19. 
Jul 6 23:27:34.164975 kubelet[2603]: E0706 23:27:34.163696 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:27:35.163393 kubelet[2603]: E0706 23:27:35.162718 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:27:36.277265 systemd[1]: Started sshd@19-172.237.135.91:22-147.75.109.163:59074.service - OpenSSH per-connection server daemon (147.75.109.163:59074). Jul 6 23:27:36.633864 sshd[4168]: Accepted publickey for core from 147.75.109.163 port 59074 ssh2: RSA SHA256:/eDCPZUdFWI+U3+wi39zDPruseM35VxqLVYPXblev1E Jul 6 23:27:36.635262 sshd-session[4168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:27:36.639833 systemd-logind[1463]: New session 20 of user core. Jul 6 23:27:36.646156 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 6 23:27:36.948899 sshd[4170]: Connection closed by 147.75.109.163 port 59074 Jul 6 23:27:36.949504 sshd-session[4168]: pam_unix(sshd:session): session closed for user core Jul 6 23:27:36.953993 systemd[1]: sshd@19-172.237.135.91:22-147.75.109.163:59074.service: Deactivated successfully. Jul 6 23:27:36.956620 systemd[1]: session-20.scope: Deactivated successfully. Jul 6 23:27:36.957421 systemd-logind[1463]: Session 20 logged out. Waiting for processes to exit. Jul 6 23:27:36.958637 systemd-logind[1463]: Removed session 20. Jul 6 23:27:41.164218 kubelet[2603]: E0706 23:27:41.163226 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Jul 6 23:27:42.026260 systemd[1]: Started sshd@20-172.237.135.91:22-147.75.109.163:59078.service - OpenSSH per-connection server daemon (147.75.109.163:59078). Jul 6 23:27:42.395159 sshd[4182]: Accepted publickey for core from 147.75.109.163 port 59078 ssh2: RSA SHA256:/eDCPZUdFWI+U3+wi39zDPruseM35VxqLVYPXblev1E Jul 6 23:27:42.396552 sshd-session[4182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:27:42.400634 systemd-logind[1463]: New session 21 of user core. Jul 6 23:27:42.407383 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 6 23:27:42.727281 sshd[4184]: Connection closed by 147.75.109.163 port 59078 Jul 6 23:27:42.728403 sshd-session[4182]: pam_unix(sshd:session): session closed for user core Jul 6 23:27:42.731662 systemd[1]: sshd@20-172.237.135.91:22-147.75.109.163:59078.service: Deactivated successfully. Jul 6 23:27:42.734009 systemd[1]: session-21.scope: Deactivated successfully. Jul 6 23:27:42.735775 systemd-logind[1463]: Session 21 logged out. Waiting for processes to exit. Jul 6 23:27:42.737589 systemd-logind[1463]: Removed session 21. Jul 6 23:27:42.794288 systemd[1]: Started sshd@21-172.237.135.91:22-147.75.109.163:59088.service - OpenSSH per-connection server daemon (147.75.109.163:59088). Jul 6 23:27:43.148399 sshd[4196]: Accepted publickey for core from 147.75.109.163 port 59088 ssh2: RSA SHA256:/eDCPZUdFWI+U3+wi39zDPruseM35VxqLVYPXblev1E Jul 6 23:27:43.149773 sshd-session[4196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:27:43.157152 systemd-logind[1463]: New session 22 of user core. 
Jul 6 23:27:43.160142 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 6 23:27:44.655360 containerd[1482]: time="2025-07-06T23:27:44.650942891Z" level=info msg="StopContainer for \"9e46150e2861656b601d09ebc2b5756fc6e738ffb0bd80706a1f4234906ae570\" with timeout 30 (s)" Jul 6 23:27:44.656653 containerd[1482]: time="2025-07-06T23:27:44.656208521Z" level=info msg="Stop container \"9e46150e2861656b601d09ebc2b5756fc6e738ffb0bd80706a1f4234906ae570\" with signal terminated" Jul 6 23:27:44.665597 containerd[1482]: time="2025-07-06T23:27:44.665563676Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 6 23:27:44.674173 containerd[1482]: time="2025-07-06T23:27:44.674149082Z" level=info msg="StopContainer for \"f98aa878b43d863a23dd4b10baf6784da68e51b7d22147e18b1bc2bde6564b71\" with timeout 2 (s)" Jul 6 23:27:44.674933 containerd[1482]: time="2025-07-06T23:27:44.674800640Z" level=info msg="Stop container \"f98aa878b43d863a23dd4b10baf6784da68e51b7d22147e18b1bc2bde6564b71\" with signal terminated" Jul 6 23:27:44.675174 systemd[1]: cri-containerd-9e46150e2861656b601d09ebc2b5756fc6e738ffb0bd80706a1f4234906ae570.scope: Deactivated successfully. Jul 6 23:27:44.684890 systemd-networkd[1396]: lxc_health: Link DOWN Jul 6 23:27:44.684900 systemd-networkd[1396]: lxc_health: Lost carrier Jul 6 23:27:44.703549 systemd[1]: cri-containerd-f98aa878b43d863a23dd4b10baf6784da68e51b7d22147e18b1bc2bde6564b71.scope: Deactivated successfully. Jul 6 23:27:44.703885 systemd[1]: cri-containerd-f98aa878b43d863a23dd4b10baf6784da68e51b7d22147e18b1bc2bde6564b71.scope: Consumed 6.639s CPU time, 125.5M memory peak, 120K read from disk, 13.3M written to disk. Jul 6 23:27:44.717564 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e46150e2861656b601d09ebc2b5756fc6e738ffb0bd80706a1f4234906ae570-rootfs.mount: Deactivated successfully. Jul 6 23:27:44.725534 containerd[1482]: time="2025-07-06T23:27:44.725484266Z" level=info msg="shim disconnected" id=9e46150e2861656b601d09ebc2b5756fc6e738ffb0bd80706a1f4234906ae570 namespace=k8s.io Jul 6 23:27:44.725534 containerd[1482]: time="2025-07-06T23:27:44.725532486Z" level=warning msg="cleaning up after shim disconnected" id=9e46150e2861656b601d09ebc2b5756fc6e738ffb0bd80706a1f4234906ae570 namespace=k8s.io Jul 6 23:27:44.725864 containerd[1482]: time="2025-07-06T23:27:44.725541176Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:27:44.740178 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f98aa878b43d863a23dd4b10baf6784da68e51b7d22147e18b1bc2bde6564b71-rootfs.mount: Deactivated successfully. 
Jul 6 23:27:44.743290 containerd[1482]: time="2025-07-06T23:27:44.743227628Z" level=info msg="shim disconnected" id=f98aa878b43d863a23dd4b10baf6784da68e51b7d22147e18b1bc2bde6564b71 namespace=k8s.io Jul 6 23:27:44.743290 containerd[1482]: time="2025-07-06T23:27:44.743279968Z" level=warning msg="cleaning up after shim disconnected" id=f98aa878b43d863a23dd4b10baf6784da68e51b7d22147e18b1bc2bde6564b71 namespace=k8s.io Jul 6 23:27:44.743290 containerd[1482]: time="2025-07-06T23:27:44.743288688Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:27:44.746646 containerd[1482]: time="2025-07-06T23:27:44.746612535Z" level=warning msg="cleanup warnings time=\"2025-07-06T23:27:44Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 6 23:27:44.748968 containerd[1482]: time="2025-07-06T23:27:44.748860436Z" level=info msg="StopContainer for \"9e46150e2861656b601d09ebc2b5756fc6e738ffb0bd80706a1f4234906ae570\" returns successfully" Jul 6 23:27:44.751504 containerd[1482]: time="2025-07-06T23:27:44.749679133Z" level=info msg="StopPodSandbox for \"8990f3edcdebe8951608ee85e58a4cca99cf733a32a0164384f14e1ca45d972e\"" Jul 6 23:27:44.751504 containerd[1482]: time="2025-07-06T23:27:44.749738672Z" level=info msg="Container to stop \"9e46150e2861656b601d09ebc2b5756fc6e738ffb0bd80706a1f4234906ae570\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:27:44.755281 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8990f3edcdebe8951608ee85e58a4cca99cf733a32a0164384f14e1ca45d972e-shm.mount: Deactivated successfully. Jul 6 23:27:44.763725 systemd[1]: cri-containerd-8990f3edcdebe8951608ee85e58a4cca99cf733a32a0164384f14e1ca45d972e.scope: Deactivated successfully. 
Jul 6 23:27:44.766756 containerd[1482]: time="2025-07-06T23:27:44.766669058Z" level=info msg="StopContainer for \"f98aa878b43d863a23dd4b10baf6784da68e51b7d22147e18b1bc2bde6564b71\" returns successfully" Jul 6 23:27:44.767183 containerd[1482]: time="2025-07-06T23:27:44.767156296Z" level=info msg="StopPodSandbox for \"773e16ba7139203572fc1f18bdda682ed8323d3838b6de16ff21898541a50640\"" Jul 6 23:27:44.767223 containerd[1482]: time="2025-07-06T23:27:44.767189706Z" level=info msg="Container to stop \"ab8b1e9a918a82627ba6ac8ec0d7cee1359f480b5f89ea55f2c0010e2442f194\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:27:44.767223 containerd[1482]: time="2025-07-06T23:27:44.767217516Z" level=info msg="Container to stop \"cf1c91b67bcfcec4301f0faec71cdecbc6b96ae5bbb3f93afe5620f741f63563\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:27:44.767381 containerd[1482]: time="2025-07-06T23:27:44.767226446Z" level=info msg="Container to stop \"ccc789f3181a8b556709cee76ccd0022b78084a11ad2f5abfdb6597d6aaf0e5f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:27:44.767381 containerd[1482]: time="2025-07-06T23:27:44.767235466Z" level=info msg="Container to stop \"e011096720aeaff334cbf000cec985fbd3dc9830b7038f39d1b3277e1d7e0732\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:27:44.767381 containerd[1482]: time="2025-07-06T23:27:44.767242706Z" level=info msg="Container to stop \"f98aa878b43d863a23dd4b10baf6784da68e51b7d22147e18b1bc2bde6564b71\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:27:44.770852 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-773e16ba7139203572fc1f18bdda682ed8323d3838b6de16ff21898541a50640-shm.mount: Deactivated successfully. Jul 6 23:27:44.776288 systemd[1]: cri-containerd-773e16ba7139203572fc1f18bdda682ed8323d3838b6de16ff21898541a50640.scope: Deactivated successfully. 
Jul 6 23:27:44.800756 containerd[1482]: time="2025-07-06T23:27:44.800683878Z" level=info msg="shim disconnected" id=8990f3edcdebe8951608ee85e58a4cca99cf733a32a0164384f14e1ca45d972e namespace=k8s.io Jul 6 23:27:44.801267 containerd[1482]: time="2025-07-06T23:27:44.801073406Z" level=warning msg="cleaning up after shim disconnected" id=8990f3edcdebe8951608ee85e58a4cca99cf733a32a0164384f14e1ca45d972e namespace=k8s.io Jul 6 23:27:44.801267 containerd[1482]: time="2025-07-06T23:27:44.801089436Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:27:44.813999 containerd[1482]: time="2025-07-06T23:27:44.813919917Z" level=info msg="shim disconnected" id=773e16ba7139203572fc1f18bdda682ed8323d3838b6de16ff21898541a50640 namespace=k8s.io Jul 6 23:27:44.814424 containerd[1482]: time="2025-07-06T23:27:44.814394525Z" level=warning msg="cleaning up after shim disconnected" id=773e16ba7139203572fc1f18bdda682ed8323d3838b6de16ff21898541a50640 namespace=k8s.io Jul 6 23:27:44.814424 containerd[1482]: time="2025-07-06T23:27:44.814415025Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:27:44.825469 containerd[1482]: time="2025-07-06T23:27:44.825432953Z" level=info msg="TearDown network for sandbox \"8990f3edcdebe8951608ee85e58a4cca99cf733a32a0164384f14e1ca45d972e\" successfully" Jul 6 23:27:44.825469 containerd[1482]: time="2025-07-06T23:27:44.825460573Z" level=info msg="StopPodSandbox for \"8990f3edcdebe8951608ee85e58a4cca99cf733a32a0164384f14e1ca45d972e\" returns successfully" Jul 6 23:27:44.832730 containerd[1482]: time="2025-07-06T23:27:44.832691175Z" level=warning msg="cleanup warnings time=\"2025-07-06T23:27:44Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 6 23:27:44.834590 containerd[1482]: time="2025-07-06T23:27:44.834452898Z" level=info msg="TearDown network for sandbox \"773e16ba7139203572fc1f18bdda682ed8323d3838b6de16ff21898541a50640\" successfully" Jul 6 23:27:44.834590 containerd[1482]: time="2025-07-06T23:27:44.834474588Z" level=info msg="StopPodSandbox for \"773e16ba7139203572fc1f18bdda682ed8323d3838b6de16ff21898541a50640\" returns successfully" Jul 6 23:27:44.957672 kubelet[2603]: I0706 23:27:44.957544 2603 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/350b79fb-a9de-4337-af9f-51a63aa99973-bpf-maps\") pod \"350b79fb-a9de-4337-af9f-51a63aa99973\" (UID: \"350b79fb-a9de-4337-af9f-51a63aa99973\") " Jul 6 23:27:44.957672 kubelet[2603]: I0706 23:27:44.957582 2603 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/350b79fb-a9de-4337-af9f-51a63aa99973-host-proc-sys-net\") pod \"350b79fb-a9de-4337-af9f-51a63aa99973\" (UID: \"350b79fb-a9de-4337-af9f-51a63aa99973\") " Jul 6 23:27:44.957672 kubelet[2603]: I0706 23:27:44.957610 2603 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bf6a07b2-b936-4a4b-ab91-f7bf64c8ff77-cilium-config-path\") pod \"bf6a07b2-b936-4a4b-ab91-f7bf64c8ff77\" (UID: \"bf6a07b2-b936-4a4b-ab91-f7bf64c8ff77\") " Jul 6 23:27:44.957672 kubelet[2603]: I0706 23:27:44.957628 2603 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/350b79fb-a9de-4337-af9f-51a63aa99973-cilium-cgroup\") pod 
\"350b79fb-a9de-4337-af9f-51a63aa99973\" (UID: \"350b79fb-a9de-4337-af9f-51a63aa99973\") " Jul 6 23:27:44.957672 kubelet[2603]: I0706 23:27:44.957645 2603 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-259cd\" (UniqueName: \"kubernetes.io/projected/bf6a07b2-b936-4a4b-ab91-f7bf64c8ff77-kube-api-access-259cd\") pod \"bf6a07b2-b936-4a4b-ab91-f7bf64c8ff77\" (UID: \"bf6a07b2-b936-4a4b-ab91-f7bf64c8ff77\") " Jul 6 23:27:44.957672 kubelet[2603]: I0706 23:27:44.957659 2603 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/350b79fb-a9de-4337-af9f-51a63aa99973-host-proc-sys-kernel\") pod \"350b79fb-a9de-4337-af9f-51a63aa99973\" (UID: \"350b79fb-a9de-4337-af9f-51a63aa99973\") " Jul 6 23:27:44.960471 kubelet[2603]: I0706 23:27:44.957673 2603 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/350b79fb-a9de-4337-af9f-51a63aa99973-hostproc\") pod \"350b79fb-a9de-4337-af9f-51a63aa99973\" (UID: \"350b79fb-a9de-4337-af9f-51a63aa99973\") " Jul 6 23:27:44.960471 kubelet[2603]: I0706 23:27:44.957687 2603 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/350b79fb-a9de-4337-af9f-51a63aa99973-cni-path\") pod \"350b79fb-a9de-4337-af9f-51a63aa99973\" (UID: \"350b79fb-a9de-4337-af9f-51a63aa99973\") " Jul 6 23:27:44.960471 kubelet[2603]: I0706 23:27:44.957703 2603 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/350b79fb-a9de-4337-af9f-51a63aa99973-clustermesh-secrets\") pod \"350b79fb-a9de-4337-af9f-51a63aa99973\" (UID: \"350b79fb-a9de-4337-af9f-51a63aa99973\") " Jul 6 23:27:44.960471 kubelet[2603]: I0706 23:27:44.957721 2603 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/350b79fb-a9de-4337-af9f-51a63aa99973-hubble-tls\") pod \"350b79fb-a9de-4337-af9f-51a63aa99973\" (UID: \"350b79fb-a9de-4337-af9f-51a63aa99973\") " Jul 6 23:27:44.960471 kubelet[2603]: I0706 23:27:44.957734 2603 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/350b79fb-a9de-4337-af9f-51a63aa99973-xtables-lock\") pod \"350b79fb-a9de-4337-af9f-51a63aa99973\" (UID: \"350b79fb-a9de-4337-af9f-51a63aa99973\") " Jul 6 23:27:44.960471 kubelet[2603]: I0706 23:27:44.957751 2603 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/350b79fb-a9de-4337-af9f-51a63aa99973-cilium-run\") pod \"350b79fb-a9de-4337-af9f-51a63aa99973\" (UID: \"350b79fb-a9de-4337-af9f-51a63aa99973\") " Jul 6 23:27:44.960601 kubelet[2603]: I0706 23:27:44.957764 2603 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/350b79fb-a9de-4337-af9f-51a63aa99973-etc-cni-netd\") pod \"350b79fb-a9de-4337-af9f-51a63aa99973\" (UID: \"350b79fb-a9de-4337-af9f-51a63aa99973\") " Jul 6 23:27:44.960601 kubelet[2603]: I0706 23:27:44.957780 2603 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/350b79fb-a9de-4337-af9f-51a63aa99973-cilium-config-path\") pod \"350b79fb-a9de-4337-af9f-51a63aa99973\" (UID: 
\"350b79fb-a9de-4337-af9f-51a63aa99973\") " Jul 6 23:27:44.960601 kubelet[2603]: I0706 23:27:44.957794 2603 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/350b79fb-a9de-4337-af9f-51a63aa99973-lib-modules\") pod \"350b79fb-a9de-4337-af9f-51a63aa99973\" (UID: \"350b79fb-a9de-4337-af9f-51a63aa99973\") " Jul 6 23:27:44.960601 kubelet[2603]: I0706 23:27:44.957811 2603 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffcwg\" (UniqueName: \"kubernetes.io/projected/350b79fb-a9de-4337-af9f-51a63aa99973-kube-api-access-ffcwg\") pod \"350b79fb-a9de-4337-af9f-51a63aa99973\" (UID: \"350b79fb-a9de-4337-af9f-51a63aa99973\") " Jul 6 23:27:44.960601 kubelet[2603]: I0706 23:27:44.958878 2603 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/350b79fb-a9de-4337-af9f-51a63aa99973-cni-path" (OuterVolumeSpecName: "cni-path") pod "350b79fb-a9de-4337-af9f-51a63aa99973" (UID: "350b79fb-a9de-4337-af9f-51a63aa99973"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:27:44.960601 kubelet[2603]: I0706 23:27:44.958952 2603 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/350b79fb-a9de-4337-af9f-51a63aa99973-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "350b79fb-a9de-4337-af9f-51a63aa99973" (UID: "350b79fb-a9de-4337-af9f-51a63aa99973"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:27:44.960755 kubelet[2603]: I0706 23:27:44.958970 2603 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/350b79fb-a9de-4337-af9f-51a63aa99973-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "350b79fb-a9de-4337-af9f-51a63aa99973" (UID: "350b79fb-a9de-4337-af9f-51a63aa99973"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:27:44.961098 kubelet[2603]: I0706 23:27:44.961063 2603 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/350b79fb-a9de-4337-af9f-51a63aa99973-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "350b79fb-a9de-4337-af9f-51a63aa99973" (UID: "350b79fb-a9de-4337-af9f-51a63aa99973"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:27:44.962362 kubelet[2603]: I0706 23:27:44.962340 2603 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/350b79fb-a9de-4337-af9f-51a63aa99973-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "350b79fb-a9de-4337-af9f-51a63aa99973" (UID: "350b79fb-a9de-4337-af9f-51a63aa99973"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:27:44.962524 kubelet[2603]: I0706 23:27:44.962509 2603 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/350b79fb-a9de-4337-af9f-51a63aa99973-hostproc" (OuterVolumeSpecName: "hostproc") pod "350b79fb-a9de-4337-af9f-51a63aa99973" (UID: "350b79fb-a9de-4337-af9f-51a63aa99973"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:27:44.963218 kubelet[2603]: I0706 23:27:44.963202 2603 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/350b79fb-a9de-4337-af9f-51a63aa99973-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "350b79fb-a9de-4337-af9f-51a63aa99973" (UID: "350b79fb-a9de-4337-af9f-51a63aa99973"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:27:44.963351 kubelet[2603]: I0706 23:27:44.963336 2603 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/350b79fb-a9de-4337-af9f-51a63aa99973-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "350b79fb-a9de-4337-af9f-51a63aa99973" (UID: "350b79fb-a9de-4337-af9f-51a63aa99973"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:27:44.963443 kubelet[2603]: I0706 23:27:44.963429 2603 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/350b79fb-a9de-4337-af9f-51a63aa99973-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "350b79fb-a9de-4337-af9f-51a63aa99973" (UID: "350b79fb-a9de-4337-af9f-51a63aa99973"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:27:44.963742 kubelet[2603]: I0706 23:27:44.963662 2603 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/350b79fb-a9de-4337-af9f-51a63aa99973-kube-api-access-ffcwg" (OuterVolumeSpecName: "kube-api-access-ffcwg") pod "350b79fb-a9de-4337-af9f-51a63aa99973" (UID: "350b79fb-a9de-4337-af9f-51a63aa99973"). InnerVolumeSpecName "kube-api-access-ffcwg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 6 23:27:44.963742 kubelet[2603]: I0706 23:27:44.963715 2603 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/350b79fb-a9de-4337-af9f-51a63aa99973-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "350b79fb-a9de-4337-af9f-51a63aa99973" (UID: "350b79fb-a9de-4337-af9f-51a63aa99973"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:27:44.967517 kubelet[2603]: I0706 23:27:44.966938 2603 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf6a07b2-b936-4a4b-ab91-f7bf64c8ff77-kube-api-access-259cd" (OuterVolumeSpecName: "kube-api-access-259cd") pod "bf6a07b2-b936-4a4b-ab91-f7bf64c8ff77" (UID: "bf6a07b2-b936-4a4b-ab91-f7bf64c8ff77"). InnerVolumeSpecName "kube-api-access-259cd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 6 23:27:44.967517 kubelet[2603]: I0706 23:27:44.967479 2603 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf6a07b2-b936-4a4b-ab91-f7bf64c8ff77-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bf6a07b2-b936-4a4b-ab91-f7bf64c8ff77" (UID: "bf6a07b2-b936-4a4b-ab91-f7bf64c8ff77"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 6 23:27:44.967686 kubelet[2603]: I0706 23:27:44.967670 2603 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/350b79fb-a9de-4337-af9f-51a63aa99973-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "350b79fb-a9de-4337-af9f-51a63aa99973" (UID: "350b79fb-a9de-4337-af9f-51a63aa99973"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 6 23:27:44.969161 kubelet[2603]: I0706 23:27:44.969135 2603 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/350b79fb-a9de-4337-af9f-51a63aa99973-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "350b79fb-a9de-4337-af9f-51a63aa99973" (UID: "350b79fb-a9de-4337-af9f-51a63aa99973"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 6 23:27:44.970270 kubelet[2603]: I0706 23:27:44.970221 2603 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/350b79fb-a9de-4337-af9f-51a63aa99973-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "350b79fb-a9de-4337-af9f-51a63aa99973" (UID: "350b79fb-a9de-4337-af9f-51a63aa99973"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 6 23:27:45.059086 kubelet[2603]: I0706 23:27:45.058780 2603 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/350b79fb-a9de-4337-af9f-51a63aa99973-cilium-cgroup\") on node \"172-237-135-91\" DevicePath \"\"" Jul 6 23:27:45.059086 kubelet[2603]: I0706 23:27:45.058828 2603 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-259cd\" (UniqueName: \"kubernetes.io/projected/bf6a07b2-b936-4a4b-ab91-f7bf64c8ff77-kube-api-access-259cd\") on node \"172-237-135-91\" DevicePath \"\"" Jul 6 23:27:45.059086 kubelet[2603]: I0706 23:27:45.059070 2603 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/350b79fb-a9de-4337-af9f-51a63aa99973-host-proc-sys-kernel\") on node \"172-237-135-91\" DevicePath \"\"" Jul 6 23:27:45.059086 kubelet[2603]: I0706 23:27:45.059087 2603 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/350b79fb-a9de-4337-af9f-51a63aa99973-hostproc\") on node \"172-237-135-91\" DevicePath \"\"" Jul 6 23:27:45.059086 kubelet[2603]: I0706 23:27:45.059105 2603 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/350b79fb-a9de-4337-af9f-51a63aa99973-cni-path\") on node \"172-237-135-91\" DevicePath \"\"" Jul 6 23:27:45.059354 kubelet[2603]: I0706 23:27:45.059119 2603 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/350b79fb-a9de-4337-af9f-51a63aa99973-clustermesh-secrets\") on node \"172-237-135-91\" DevicePath \"\"" Jul 6 23:27:45.059354 kubelet[2603]: I0706 23:27:45.059133 2603 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/350b79fb-a9de-4337-af9f-51a63aa99973-hubble-tls\") on node \"172-237-135-91\" DevicePath \"\"" Jul 6 23:27:45.059354 kubelet[2603]: I0706 23:27:45.059146 2603 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/350b79fb-a9de-4337-af9f-51a63aa99973-xtables-lock\") on node \"172-237-135-91\" DevicePath \"\"" Jul 6 23:27:45.059354 kubelet[2603]: I0706 23:27:45.059159 2603 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/350b79fb-a9de-4337-af9f-51a63aa99973-cilium-run\") on node \"172-237-135-91\" DevicePath \"\"" Jul 6 23:27:45.059354 kubelet[2603]: I0706 23:27:45.059172 2603 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" 
(UniqueName: \"kubernetes.io/host-path/350b79fb-a9de-4337-af9f-51a63aa99973-etc-cni-netd\") on node \"172-237-135-91\" DevicePath \"\"" Jul 6 23:27:45.059354 kubelet[2603]: I0706 23:27:45.059185 2603 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/350b79fb-a9de-4337-af9f-51a63aa99973-cilium-config-path\") on node \"172-237-135-91\" DevicePath \"\"" Jul 6 23:27:45.059354 kubelet[2603]: I0706 23:27:45.059198 2603 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/350b79fb-a9de-4337-af9f-51a63aa99973-lib-modules\") on node \"172-237-135-91\" DevicePath \"\"" Jul 6 23:27:45.059354 kubelet[2603]: I0706 23:27:45.059210 2603 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ffcwg\" (UniqueName: \"kubernetes.io/projected/350b79fb-a9de-4337-af9f-51a63aa99973-kube-api-access-ffcwg\") on node \"172-237-135-91\" DevicePath \"\"" Jul 6 23:27:45.059532 kubelet[2603]: I0706 23:27:45.059222 2603 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/350b79fb-a9de-4337-af9f-51a63aa99973-bpf-maps\") on node \"172-237-135-91\" DevicePath \"\"" Jul 6 23:27:45.059532 kubelet[2603]: I0706 23:27:45.059234 2603 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/350b79fb-a9de-4337-af9f-51a63aa99973-host-proc-sys-net\") on node \"172-237-135-91\" DevicePath \"\"" Jul 6 23:27:45.059532 kubelet[2603]: I0706 23:27:45.059247 2603 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bf6a07b2-b936-4a4b-ab91-f7bf64c8ff77-cilium-config-path\") on node \"172-237-135-91\" DevicePath \"\"" Jul 6 23:27:45.172270 systemd[1]: Removed slice kubepods-besteffort-podbf6a07b2_b936_4a4b_ab91_f7bf64c8ff77.slice - libcontainer container kubepods-besteffort-podbf6a07b2_b936_4a4b_ab91_f7bf64c8ff77.slice. Jul 6 23:27:45.173930 systemd[1]: Removed slice kubepods-burstable-pod350b79fb_a9de_4337_af9f_51a63aa99973.slice - libcontainer container kubepods-burstable-pod350b79fb_a9de_4337_af9f_51a63aa99973.slice. Jul 6 23:27:45.174233 systemd[1]: kubepods-burstable-pod350b79fb_a9de_4337_af9f_51a63aa99973.slice: Consumed 6.736s CPU time, 125.9M memory peak, 120K read from disk, 13.3M written to disk. 
Jul 6 23:27:45.523348 kubelet[2603]: I0706 23:27:45.523150 2603 scope.go:117] "RemoveContainer" containerID="9e46150e2861656b601d09ebc2b5756fc6e738ffb0bd80706a1f4234906ae570" Jul 6 23:27:45.525625 containerd[1482]: time="2025-07-06T23:27:45.525368332Z" level=info msg="RemoveContainer for \"9e46150e2861656b601d09ebc2b5756fc6e738ffb0bd80706a1f4234906ae570\"" Jul 6 23:27:45.530334 containerd[1482]: time="2025-07-06T23:27:45.529954324Z" level=info msg="RemoveContainer for \"9e46150e2861656b601d09ebc2b5756fc6e738ffb0bd80706a1f4234906ae570\" returns successfully" Jul 6 23:27:45.531334 kubelet[2603]: I0706 23:27:45.530610 2603 scope.go:117] "RemoveContainer" containerID="9e46150e2861656b601d09ebc2b5756fc6e738ffb0bd80706a1f4234906ae570" Jul 6 23:27:45.531898 containerd[1482]: time="2025-07-06T23:27:45.531586008Z" level=error msg="ContainerStatus for \"9e46150e2861656b601d09ebc2b5756fc6e738ffb0bd80706a1f4234906ae570\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9e46150e2861656b601d09ebc2b5756fc6e738ffb0bd80706a1f4234906ae570\": not found" Jul 6 23:27:45.532254 kubelet[2603]: E0706 23:27:45.532011 2603 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9e46150e2861656b601d09ebc2b5756fc6e738ffb0bd80706a1f4234906ae570\": not found" containerID="9e46150e2861656b601d09ebc2b5756fc6e738ffb0bd80706a1f4234906ae570" Jul 6 23:27:45.532557 kubelet[2603]: I0706 23:27:45.532158 2603 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9e46150e2861656b601d09ebc2b5756fc6e738ffb0bd80706a1f4234906ae570"} err="failed to get container status \"9e46150e2861656b601d09ebc2b5756fc6e738ffb0bd80706a1f4234906ae570\": rpc error: code = NotFound desc = an error occurred when try to find container \"9e46150e2861656b601d09ebc2b5756fc6e738ffb0bd80706a1f4234906ae570\": not found" Jul 6 23:27:45.532557 kubelet[2603]: I0706 23:27:45.532406 2603 scope.go:117] "RemoveContainer" containerID="f98aa878b43d863a23dd4b10baf6784da68e51b7d22147e18b1bc2bde6564b71" Jul 6 23:27:45.534224 containerd[1482]: time="2025-07-06T23:27:45.533671990Z" level=info msg="RemoveContainer for \"f98aa878b43d863a23dd4b10baf6784da68e51b7d22147e18b1bc2bde6564b71\"" Jul 6 23:27:45.537049 containerd[1482]: time="2025-07-06T23:27:45.537013388Z" level=info msg="RemoveContainer for \"f98aa878b43d863a23dd4b10baf6784da68e51b7d22147e18b1bc2bde6564b71\" returns successfully" Jul 6 23:27:45.537234 kubelet[2603]: I0706 23:27:45.537190 2603 scope.go:117] "RemoveContainer" containerID="ccc789f3181a8b556709cee76ccd0022b78084a11ad2f5abfdb6597d6aaf0e5f" Jul 6 23:27:45.540016 containerd[1482]: time="2025-07-06T23:27:45.539990197Z" level=info msg="RemoveContainer for \"ccc789f3181a8b556709cee76ccd0022b78084a11ad2f5abfdb6597d6aaf0e5f\"" Jul 6 23:27:45.543520 containerd[1482]: time="2025-07-06T23:27:45.543497833Z" level=info msg="RemoveContainer for \"ccc789f3181a8b556709cee76ccd0022b78084a11ad2f5abfdb6597d6aaf0e5f\" returns successfully" Jul 6 23:27:45.543775 kubelet[2603]: I0706 23:27:45.543655 2603 scope.go:117] "RemoveContainer" containerID="cf1c91b67bcfcec4301f0faec71cdecbc6b96ae5bbb3f93afe5620f741f63563" Jul 6 23:27:45.544362 containerd[1482]: time="2025-07-06T23:27:45.544339180Z" level=info msg="RemoveContainer for \"cf1c91b67bcfcec4301f0faec71cdecbc6b96ae5bbb3f93afe5620f741f63563\"" Jul 6 23:27:45.546462 containerd[1482]: time="2025-07-06T23:27:45.546431972Z" level=info 
msg="RemoveContainer for \"cf1c91b67bcfcec4301f0faec71cdecbc6b96ae5bbb3f93afe5620f741f63563\" returns successfully" Jul 6 23:27:45.546558 kubelet[2603]: I0706 23:27:45.546542 2603 scope.go:117] "RemoveContainer" containerID="ab8b1e9a918a82627ba6ac8ec0d7cee1359f480b5f89ea55f2c0010e2442f194" Jul 6 23:27:45.549384 containerd[1482]: time="2025-07-06T23:27:45.549071753Z" level=info msg="RemoveContainer for \"ab8b1e9a918a82627ba6ac8ec0d7cee1359f480b5f89ea55f2c0010e2442f194\"" Jul 6 23:27:45.553428 containerd[1482]: time="2025-07-06T23:27:45.553402057Z" level=info msg="RemoveContainer for \"ab8b1e9a918a82627ba6ac8ec0d7cee1359f480b5f89ea55f2c0010e2442f194\" returns successfully" Jul 6 23:27:45.553851 kubelet[2603]: I0706 23:27:45.553814 2603 scope.go:117] "RemoveContainer" containerID="e011096720aeaff334cbf000cec985fbd3dc9830b7038f39d1b3277e1d7e0732" Jul 6 23:27:45.554680 containerd[1482]: time="2025-07-06T23:27:45.554659372Z" level=info msg="RemoveContainer for \"e011096720aeaff334cbf000cec985fbd3dc9830b7038f39d1b3277e1d7e0732\"" Jul 6 23:27:45.563057 containerd[1482]: time="2025-07-06T23:27:45.562768931Z" level=info msg="RemoveContainer for \"e011096720aeaff334cbf000cec985fbd3dc9830b7038f39d1b3277e1d7e0732\" returns successfully" Jul 6 23:27:45.563760 kubelet[2603]: I0706 23:27:45.563741 2603 scope.go:117] "RemoveContainer" containerID="f98aa878b43d863a23dd4b10baf6784da68e51b7d22147e18b1bc2bde6564b71" Jul 6 23:27:45.563947 containerd[1482]: time="2025-07-06T23:27:45.563920456Z" level=error msg="ContainerStatus for \"f98aa878b43d863a23dd4b10baf6784da68e51b7d22147e18b1bc2bde6564b71\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f98aa878b43d863a23dd4b10baf6784da68e51b7d22147e18b1bc2bde6564b71\": not found" Jul 6 23:27:45.564213 kubelet[2603]: E0706 23:27:45.564092 2603 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f98aa878b43d863a23dd4b10baf6784da68e51b7d22147e18b1bc2bde6564b71\": not found" containerID="f98aa878b43d863a23dd4b10baf6784da68e51b7d22147e18b1bc2bde6564b71" Jul 6 23:27:45.564213 kubelet[2603]: I0706 23:27:45.564140 2603 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f98aa878b43d863a23dd4b10baf6784da68e51b7d22147e18b1bc2bde6564b71"} err="failed to get container status \"f98aa878b43d863a23dd4b10baf6784da68e51b7d22147e18b1bc2bde6564b71\": rpc error: code = NotFound desc = an error occurred when try to find container \"f98aa878b43d863a23dd4b10baf6784da68e51b7d22147e18b1bc2bde6564b71\": not found" Jul 6 23:27:45.564213 kubelet[2603]: I0706 23:27:45.564158 2603 scope.go:117] "RemoveContainer" containerID="ccc789f3181a8b556709cee76ccd0022b78084a11ad2f5abfdb6597d6aaf0e5f" Jul 6 23:27:45.564330 containerd[1482]: time="2025-07-06T23:27:45.564292695Z" level=error msg="ContainerStatus for \"ccc789f3181a8b556709cee76ccd0022b78084a11ad2f5abfdb6597d6aaf0e5f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ccc789f3181a8b556709cee76ccd0022b78084a11ad2f5abfdb6597d6aaf0e5f\": not found" Jul 6 23:27:45.564559 kubelet[2603]: E0706 23:27:45.564542 2603 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ccc789f3181a8b556709cee76ccd0022b78084a11ad2f5abfdb6597d6aaf0e5f\": not found" containerID="ccc789f3181a8b556709cee76ccd0022b78084a11ad2f5abfdb6597d6aaf0e5f" Jul 6 
23:27:45.564608 kubelet[2603]: I0706 23:27:45.564561 2603 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ccc789f3181a8b556709cee76ccd0022b78084a11ad2f5abfdb6597d6aaf0e5f"} err="failed to get container status \"ccc789f3181a8b556709cee76ccd0022b78084a11ad2f5abfdb6597d6aaf0e5f\": rpc error: code = NotFound desc = an error occurred when try to find container \"ccc789f3181a8b556709cee76ccd0022b78084a11ad2f5abfdb6597d6aaf0e5f\": not found" Jul 6 23:27:45.564608 kubelet[2603]: I0706 23:27:45.564575 2603 scope.go:117] "RemoveContainer" containerID="cf1c91b67bcfcec4301f0faec71cdecbc6b96ae5bbb3f93afe5620f741f63563" Jul 6 23:27:45.564716 containerd[1482]: time="2025-07-06T23:27:45.564693954Z" level=error msg="ContainerStatus for \"cf1c91b67bcfcec4301f0faec71cdecbc6b96ae5bbb3f93afe5620f741f63563\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cf1c91b67bcfcec4301f0faec71cdecbc6b96ae5bbb3f93afe5620f741f63563\": not found" Jul 6 23:27:45.564860 kubelet[2603]: E0706 23:27:45.564827 2603 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cf1c91b67bcfcec4301f0faec71cdecbc6b96ae5bbb3f93afe5620f741f63563\": not found" containerID="cf1c91b67bcfcec4301f0faec71cdecbc6b96ae5bbb3f93afe5620f741f63563" Jul 6 23:27:45.565074 kubelet[2603]: I0706 23:27:45.564859 2603 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cf1c91b67bcfcec4301f0faec71cdecbc6b96ae5bbb3f93afe5620f741f63563"} err="failed to get container status \"cf1c91b67bcfcec4301f0faec71cdecbc6b96ae5bbb3f93afe5620f741f63563\": rpc error: code = NotFound desc = an error occurred when try to find container \"cf1c91b67bcfcec4301f0faec71cdecbc6b96ae5bbb3f93afe5620f741f63563\": not found" Jul 6 23:27:45.565074 kubelet[2603]: I0706 23:27:45.564871 2603 scope.go:117] "RemoveContainer" containerID="ab8b1e9a918a82627ba6ac8ec0d7cee1359f480b5f89ea55f2c0010e2442f194" Jul 6 23:27:45.565139 containerd[1482]: time="2025-07-06T23:27:45.564999103Z" level=error msg="ContainerStatus for \"ab8b1e9a918a82627ba6ac8ec0d7cee1359f480b5f89ea55f2c0010e2442f194\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ab8b1e9a918a82627ba6ac8ec0d7cee1359f480b5f89ea55f2c0010e2442f194\": not found" Jul 6 23:27:45.565162 kubelet[2603]: E0706 23:27:45.565109 2603 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ab8b1e9a918a82627ba6ac8ec0d7cee1359f480b5f89ea55f2c0010e2442f194\": not found" containerID="ab8b1e9a918a82627ba6ac8ec0d7cee1359f480b5f89ea55f2c0010e2442f194" Jul 6 23:27:45.565162 kubelet[2603]: I0706 23:27:45.565124 2603 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ab8b1e9a918a82627ba6ac8ec0d7cee1359f480b5f89ea55f2c0010e2442f194"} err="failed to get container status \"ab8b1e9a918a82627ba6ac8ec0d7cee1359f480b5f89ea55f2c0010e2442f194\": rpc error: code = NotFound desc = an error occurred when try to find container \"ab8b1e9a918a82627ba6ac8ec0d7cee1359f480b5f89ea55f2c0010e2442f194\": not found" Jul 6 23:27:45.565162 kubelet[2603]: I0706 23:27:45.565136 2603 scope.go:117] "RemoveContainer" containerID="e011096720aeaff334cbf000cec985fbd3dc9830b7038f39d1b3277e1d7e0732" Jul 6 23:27:45.565262 containerd[1482]: time="2025-07-06T23:27:45.565236882Z" level=error 
msg="ContainerStatus for \"e011096720aeaff334cbf000cec985fbd3dc9830b7038f39d1b3277e1d7e0732\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e011096720aeaff334cbf000cec985fbd3dc9830b7038f39d1b3277e1d7e0732\": not found" Jul 6 23:27:45.565598 kubelet[2603]: E0706 23:27:45.565573 2603 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e011096720aeaff334cbf000cec985fbd3dc9830b7038f39d1b3277e1d7e0732\": not found" containerID="e011096720aeaff334cbf000cec985fbd3dc9830b7038f39d1b3277e1d7e0732" Jul 6 23:27:45.565626 kubelet[2603]: I0706 23:27:45.565599 2603 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e011096720aeaff334cbf000cec985fbd3dc9830b7038f39d1b3277e1d7e0732"} err="failed to get container status \"e011096720aeaff334cbf000cec985fbd3dc9830b7038f39d1b3277e1d7e0732\": rpc error: code = NotFound desc = an error occurred when try to find container \"e011096720aeaff334cbf000cec985fbd3dc9830b7038f39d1b3277e1d7e0732\": not found" Jul 6 23:27:45.652985 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8990f3edcdebe8951608ee85e58a4cca99cf733a32a0164384f14e1ca45d972e-rootfs.mount: Deactivated successfully. Jul 6 23:27:45.653118 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-773e16ba7139203572fc1f18bdda682ed8323d3838b6de16ff21898541a50640-rootfs.mount: Deactivated successfully. Jul 6 23:27:45.653196 systemd[1]: var-lib-kubelet-pods-350b79fb\x2da9de\x2d4337\x2daf9f\x2d51a63aa99973-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dffcwg.mount: Deactivated successfully. Jul 6 23:27:45.653281 systemd[1]: var-lib-kubelet-pods-bf6a07b2\x2db936\x2d4a4b\x2dab91\x2df7bf64c8ff77-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d259cd.mount: Deactivated successfully. Jul 6 23:27:45.653352 systemd[1]: var-lib-kubelet-pods-350b79fb\x2da9de\x2d4337\x2daf9f\x2d51a63aa99973-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 6 23:27:45.653566 systemd[1]: var-lib-kubelet-pods-350b79fb\x2da9de\x2d4337\x2daf9f\x2d51a63aa99973-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 6 23:27:46.652986 sshd[4198]: Connection closed by 147.75.109.163 port 59088 Jul 6 23:27:46.653848 sshd-session[4196]: pam_unix(sshd:session): session closed for user core Jul 6 23:27:46.657070 systemd[1]: sshd@21-172.237.135.91:22-147.75.109.163:59088.service: Deactivated successfully. Jul 6 23:27:46.659503 systemd[1]: session-22.scope: Deactivated successfully. Jul 6 23:27:46.661140 systemd-logind[1463]: Session 22 logged out. Waiting for processes to exit. Jul 6 23:27:46.662680 systemd-logind[1463]: Removed session 22. Jul 6 23:27:46.722227 systemd[1]: Started sshd@22-172.237.135.91:22-147.75.109.163:41164.service - OpenSSH per-connection server daemon (147.75.109.163:41164). Jul 6 23:27:47.082111 sshd[4362]: Accepted publickey for core from 147.75.109.163 port 41164 ssh2: RSA SHA256:/eDCPZUdFWI+U3+wi39zDPruseM35VxqLVYPXblev1E Jul 6 23:27:47.083265 sshd-session[4362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:27:47.088640 systemd-logind[1463]: New session 23 of user core. Jul 6 23:27:47.093211 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jul 6 23:27:47.164631 kubelet[2603]: I0706 23:27:47.164593 2603 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="350b79fb-a9de-4337-af9f-51a63aa99973" path="/var/lib/kubelet/pods/350b79fb-a9de-4337-af9f-51a63aa99973/volumes"
Jul 6 23:27:47.165391 kubelet[2603]: I0706 23:27:47.165375 2603 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf6a07b2-b936-4a4b-ab91-f7bf64c8ff77" path="/var/lib/kubelet/pods/bf6a07b2-b936-4a4b-ab91-f7bf64c8ff77/volumes"
Jul 6 23:27:47.281924 kubelet[2603]: E0706 23:27:47.281871 2603 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 6 23:27:47.765308 kubelet[2603]: I0706 23:27:47.765271 2603 memory_manager.go:355] "RemoveStaleState removing state" podUID="350b79fb-a9de-4337-af9f-51a63aa99973" containerName="cilium-agent"
Jul 6 23:27:47.765308 kubelet[2603]: I0706 23:27:47.765299 2603 memory_manager.go:355] "RemoveStaleState removing state" podUID="bf6a07b2-b936-4a4b-ab91-f7bf64c8ff77" containerName="cilium-operator"
Jul 6 23:27:47.773742 systemd[1]: Created slice kubepods-burstable-pod21c28911_0cd7_4d45_988f_8aa4ec2d4f26.slice - libcontainer container kubepods-burstable-pod21c28911_0cd7_4d45_988f_8aa4ec2d4f26.slice.
Jul 6 23:27:47.808679 sshd[4364]: Connection closed by 147.75.109.163 port 41164
Jul 6 23:27:47.807507 sshd-session[4362]: pam_unix(sshd:session): session closed for user core
Jul 6 23:27:47.810448 systemd-logind[1463]: Session 23 logged out. Waiting for processes to exit.
Jul 6 23:27:47.811440 systemd[1]: sshd@22-172.237.135.91:22-147.75.109.163:41164.service: Deactivated successfully.
Jul 6 23:27:47.814812 systemd[1]: session-23.scope: Deactivated successfully.
Jul 6 23:27:47.817379 systemd-logind[1463]: Removed session 23.
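"Cleaned up orphaned pod volumes dir" is the last step of pod teardown: once every volume under a pod's directory has been unmounted (the .mount units deactivated above), the kubelet removes /var/lib/kubelet/pods/<podUID>/volumes. A stdlib-only sketch of that path layout, using a UID from the log; podVolumesDir is a hypothetical helper, not kubelet code:

package main

import (
	"fmt"
	"path/filepath"
)

// podVolumesDir reproduces the layout visible in the log:
// /var/lib/kubelet/pods/<podUID>/volumes
func podVolumesDir(kubeletRoot, podUID string) string {
	return filepath.Join(kubeletRoot, "pods", podUID, "volumes")
}

func main() {
	fmt.Println(podVolumesDir("/var/lib/kubelet", "350b79fb-a9de-4337-af9f-51a63aa99973"))
	// The kubelet only deletes this directory once nothing is mounted
	// beneath it; removing a still-mounted secret or projected volume
	// would fail, which is why the mount units deactivate first.
}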
Jul 6 23:27:47.877481 kubelet[2603]: I0706 23:27:47.877436 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/21c28911-0cd7-4d45-988f-8aa4ec2d4f26-cilium-ipsec-secrets\") pod \"cilium-7nx8b\" (UID: \"21c28911-0cd7-4d45-988f-8aa4ec2d4f26\") " pod="kube-system/cilium-7nx8b"
Jul 6 23:27:47.877481 kubelet[2603]: I0706 23:27:47.877471 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/21c28911-0cd7-4d45-988f-8aa4ec2d4f26-hubble-tls\") pod \"cilium-7nx8b\" (UID: \"21c28911-0cd7-4d45-988f-8aa4ec2d4f26\") " pod="kube-system/cilium-7nx8b"
Jul 6 23:27:47.877620 kubelet[2603]: I0706 23:27:47.877490 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/21c28911-0cd7-4d45-988f-8aa4ec2d4f26-cilium-config-path\") pod \"cilium-7nx8b\" (UID: \"21c28911-0cd7-4d45-988f-8aa4ec2d4f26\") " pod="kube-system/cilium-7nx8b"
Jul 6 23:27:47.877620 kubelet[2603]: I0706 23:27:47.877507 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/21c28911-0cd7-4d45-988f-8aa4ec2d4f26-lib-modules\") pod \"cilium-7nx8b\" (UID: \"21c28911-0cd7-4d45-988f-8aa4ec2d4f26\") " pod="kube-system/cilium-7nx8b"
Jul 6 23:27:47.877620 kubelet[2603]: I0706 23:27:47.877520 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/21c28911-0cd7-4d45-988f-8aa4ec2d4f26-xtables-lock\") pod \"cilium-7nx8b\" (UID: \"21c28911-0cd7-4d45-988f-8aa4ec2d4f26\") " pod="kube-system/cilium-7nx8b"
Jul 6 23:27:47.877620 kubelet[2603]: I0706 23:27:47.877532 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgbxt\" (UniqueName: \"kubernetes.io/projected/21c28911-0cd7-4d45-988f-8aa4ec2d4f26-kube-api-access-pgbxt\") pod \"cilium-7nx8b\" (UID: \"21c28911-0cd7-4d45-988f-8aa4ec2d4f26\") " pod="kube-system/cilium-7nx8b"
Jul 6 23:27:47.877620 kubelet[2603]: I0706 23:27:47.877545 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/21c28911-0cd7-4d45-988f-8aa4ec2d4f26-bpf-maps\") pod \"cilium-7nx8b\" (UID: \"21c28911-0cd7-4d45-988f-8aa4ec2d4f26\") " pod="kube-system/cilium-7nx8b"
Jul 6 23:27:47.877620 kubelet[2603]: I0706 23:27:47.877557 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/21c28911-0cd7-4d45-988f-8aa4ec2d4f26-cilium-run\") pod \"cilium-7nx8b\" (UID: \"21c28911-0cd7-4d45-988f-8aa4ec2d4f26\") " pod="kube-system/cilium-7nx8b"
Jul 6 23:27:47.877744 kubelet[2603]: I0706 23:27:47.877568 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/21c28911-0cd7-4d45-988f-8aa4ec2d4f26-hostproc\") pod \"cilium-7nx8b\" (UID: \"21c28911-0cd7-4d45-988f-8aa4ec2d4f26\") " pod="kube-system/cilium-7nx8b"
Jul 6 23:27:47.877744 kubelet[2603]: I0706 23:27:47.877582 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/21c28911-0cd7-4d45-988f-8aa4ec2d4f26-cni-path\") pod \"cilium-7nx8b\" (UID: \"21c28911-0cd7-4d45-988f-8aa4ec2d4f26\") " pod="kube-system/cilium-7nx8b"
Jul 6 23:27:47.877744 kubelet[2603]: I0706 23:27:47.877597 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/21c28911-0cd7-4d45-988f-8aa4ec2d4f26-host-proc-sys-kernel\") pod \"cilium-7nx8b\" (UID: \"21c28911-0cd7-4d45-988f-8aa4ec2d4f26\") " pod="kube-system/cilium-7nx8b"
Jul 6 23:27:47.877744 kubelet[2603]: I0706 23:27:47.877610 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/21c28911-0cd7-4d45-988f-8aa4ec2d4f26-cilium-cgroup\") pod \"cilium-7nx8b\" (UID: \"21c28911-0cd7-4d45-988f-8aa4ec2d4f26\") " pod="kube-system/cilium-7nx8b"
Jul 6 23:27:47.877744 kubelet[2603]: I0706 23:27:47.877624 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/21c28911-0cd7-4d45-988f-8aa4ec2d4f26-etc-cni-netd\") pod \"cilium-7nx8b\" (UID: \"21c28911-0cd7-4d45-988f-8aa4ec2d4f26\") " pod="kube-system/cilium-7nx8b"
Jul 6 23:27:47.877744 kubelet[2603]: I0706 23:27:47.877637 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/21c28911-0cd7-4d45-988f-8aa4ec2d4f26-clustermesh-secrets\") pod \"cilium-7nx8b\" (UID: \"21c28911-0cd7-4d45-988f-8aa4ec2d4f26\") " pod="kube-system/cilium-7nx8b"
Jul 6 23:27:47.877863 kubelet[2603]: I0706 23:27:47.877652 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/21c28911-0cd7-4d45-988f-8aa4ec2d4f26-host-proc-sys-net\") pod \"cilium-7nx8b\" (UID: \"21c28911-0cd7-4d45-988f-8aa4ec2d4f26\") " pod="kube-system/cilium-7nx8b"
Jul 6 23:27:47.883208 systemd[1]: Started sshd@23-172.237.135.91:22-147.75.109.163:41174.service - OpenSSH per-connection server daemon (147.75.109.163:41174).
Jul 6 23:27:48.077754 kubelet[2603]: E0706 23:27:48.077637 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Jul 6 23:27:48.078312 containerd[1482]: time="2025-07-06T23:27:48.078268844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7nx8b,Uid:21c28911-0cd7-4d45-988f-8aa4ec2d4f26,Namespace:kube-system,Attempt:0,}"
Jul 6 23:27:48.105100 containerd[1482]: time="2025-07-06T23:27:48.104899697Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 6 23:27:48.105433 containerd[1482]: time="2025-07-06T23:27:48.105269876Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 6 23:27:48.105433 containerd[1482]: time="2025-07-06T23:27:48.105296896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:27:48.105433 containerd[1482]: time="2025-07-06T23:27:48.105391186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:27:48.133159 systemd[1]: Started cri-containerd-cad9dd2f116b3b0718788407414d3bd8c0beb00b3ec0e7de3e2ec58a3b25af65.scope - libcontainer container cad9dd2f116b3b0718788407414d3bd8c0beb00b3ec0e7de3e2ec58a3b25af65.
Jul 6 23:27:48.159375 containerd[1482]: time="2025-07-06T23:27:48.159331671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7nx8b,Uid:21c28911-0cd7-4d45-988f-8aa4ec2d4f26,Namespace:kube-system,Attempt:0,} returns sandbox id \"cad9dd2f116b3b0718788407414d3bd8c0beb00b3ec0e7de3e2ec58a3b25af65\""
Jul 6 23:27:48.160244 kubelet[2603]: E0706 23:27:48.160220 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Jul 6 23:27:48.163007 kubelet[2603]: E0706 23:27:48.162984 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Jul 6 23:27:48.164222 containerd[1482]: time="2025-07-06T23:27:48.163877444Z" level=info msg="CreateContainer within sandbox \"cad9dd2f116b3b0718788407414d3bd8c0beb00b3ec0e7de3e2ec58a3b25af65\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 6 23:27:48.175481 containerd[1482]: time="2025-07-06T23:27:48.175440442Z" level=info msg="CreateContainer within sandbox \"cad9dd2f116b3b0718788407414d3bd8c0beb00b3ec0e7de3e2ec58a3b25af65\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"522b0194ad28a5a8ca7d91e22c0d2255c0dc9db327dbab5b3091a633a1241ac3\""
Jul 6 23:27:48.176117 containerd[1482]: time="2025-07-06T23:27:48.175874841Z" level=info msg="StartContainer for \"522b0194ad28a5a8ca7d91e22c0d2255c0dc9db327dbab5b3091a633a1241ac3\""
Jul 6 23:27:48.205152 systemd[1]: Started cri-containerd-522b0194ad28a5a8ca7d91e22c0d2255c0dc9db327dbab5b3091a633a1241ac3.scope - libcontainer container 522b0194ad28a5a8ca7d91e22c0d2255c0dc9db327dbab5b3091a633a1241ac3.
Jul 6 23:27:48.246437 containerd[1482]: time="2025-07-06T23:27:48.246405876Z" level=info msg="StartContainer for \"522b0194ad28a5a8ca7d91e22c0d2255c0dc9db327dbab5b3091a633a1241ac3\" returns successfully"
Jul 6 23:27:48.248547 systemd[1]: cri-containerd-522b0194ad28a5a8ca7d91e22c0d2255c0dc9db327dbab5b3091a633a1241ac3.scope: Deactivated successfully.
Jul 6 23:27:48.253053 sshd[4374]: Accepted publickey for core from 147.75.109.163 port 41174 ssh2: RSA SHA256:/eDCPZUdFWI+U3+wi39zDPruseM35VxqLVYPXblev1E
Jul 6 23:27:48.254253 sshd-session[4374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:27:48.262500 systemd-logind[1463]: New session 24 of user core.
Jul 6 23:27:48.266976 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 6 23:27:48.283206 containerd[1482]: time="2025-07-06T23:27:48.283107853Z" level=info msg="shim disconnected" id=522b0194ad28a5a8ca7d91e22c0d2255c0dc9db327dbab5b3091a633a1241ac3 namespace=k8s.io
Jul 6 23:27:48.283358 containerd[1482]: time="2025-07-06T23:27:48.283203693Z" level=warning msg="cleaning up after shim disconnected" id=522b0194ad28a5a8ca7d91e22c0d2255c0dc9db327dbab5b3091a633a1241ac3 namespace=k8s.io
Jul 6 23:27:48.283358 containerd[1482]: time="2025-07-06T23:27:48.283219043Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:27:48.523321 sshd[4470]: Connection closed by 147.75.109.163 port 41174
Jul 6 23:27:48.524062 sshd-session[4374]: pam_unix(sshd:session): session closed for user core
Jul 6 23:27:48.528674 systemd-logind[1463]: Session 24 logged out. Waiting for processes to exit.
Jul 6 23:27:48.529179 systemd[1]: sshd@23-172.237.135.91:22-147.75.109.163:41174.service: Deactivated successfully.
Jul 6 23:27:48.532389 systemd[1]: session-24.scope: Deactivated successfully.
Jul 6 23:27:48.533378 systemd-logind[1463]: Removed session 24.
Jul 6 23:27:48.539630 kubelet[2603]: E0706 23:27:48.539607 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Jul 6 23:27:48.542045 containerd[1482]: time="2025-07-06T23:27:48.541993058Z" level=info msg="CreateContainer within sandbox \"cad9dd2f116b3b0718788407414d3bd8c0beb00b3ec0e7de3e2ec58a3b25af65\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 6 23:27:48.557919 containerd[1482]: time="2025-07-06T23:27:48.557886491Z" level=info msg="CreateContainer within sandbox \"cad9dd2f116b3b0718788407414d3bd8c0beb00b3ec0e7de3e2ec58a3b25af65\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e989a9ef523923686d567e736e490bd4a84b303f99e042adfd1d3ccd1351df68\""
Jul 6 23:27:48.560532 containerd[1482]: time="2025-07-06T23:27:48.560476782Z" level=info msg="StartContainer for \"e989a9ef523923686d567e736e490bd4a84b303f99e042adfd1d3ccd1351df68\""
Jul 6 23:27:48.607168 systemd[1]: Started cri-containerd-e989a9ef523923686d567e736e490bd4a84b303f99e042adfd1d3ccd1351df68.scope - libcontainer container e989a9ef523923686d567e736e490bd4a84b303f99e042adfd1d3ccd1351df68.
Jul 6 23:27:48.611510 systemd[1]: Started sshd@24-172.237.135.91:22-147.75.109.163:41180.service - OpenSSH per-connection server daemon (147.75.109.163:41180).
Jul 6 23:27:48.643158 containerd[1482]: time="2025-07-06T23:27:48.643127742Z" level=info msg="StartContainer for \"e989a9ef523923686d567e736e490bd4a84b303f99e042adfd1d3ccd1351df68\" returns successfully"
Jul 6 23:27:48.650282 systemd[1]: cri-containerd-e989a9ef523923686d567e736e490bd4a84b303f99e042adfd1d3ccd1351df68.scope: Deactivated successfully.
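The "shim disconnected" / "cleaning up dead shim" triplet is containerd's normal reaction to an init container exiting. The containerd entries are logfmt key=value pairs embedded inside the journal line; when grepping such logs, a small field extractor helps. A rough stdlib sketch (the regular expression is approximate and ignores some quoting edge cases):

package main

import (
	"fmt"
	"regexp"
)

// One key=value field as containerd prints it: either a bare token or a
// double-quoted string with backslash escapes. An approximation, not a
// complete logfmt parser.
var field = regexp.MustCompile(`(\w+)=("(?:[^"\\]|\\.)*"|\S+)`)

func main() {
	line := `time="2025-07-06T23:27:48.283107853Z" level=info msg="shim disconnected" namespace=k8s.io`
	for _, m := range field.FindAllStringSubmatch(line, -1) {
		fmt.Printf("%-10s %s\n", m[1], m[2])
	}
}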
Jul 6 23:27:48.672406 containerd[1482]: time="2025-07-06T23:27:48.672334407Z" level=info msg="shim disconnected" id=e989a9ef523923686d567e736e490bd4a84b303f99e042adfd1d3ccd1351df68 namespace=k8s.io
Jul 6 23:27:48.672571 containerd[1482]: time="2025-07-06T23:27:48.672417927Z" level=warning msg="cleaning up after shim disconnected" id=e989a9ef523923686d567e736e490bd4a84b303f99e042adfd1d3ccd1351df68 namespace=k8s.io
Jul 6 23:27:48.672571 containerd[1482]: time="2025-07-06T23:27:48.672427637Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:27:48.993647 sshd[4506]: Accepted publickey for core from 147.75.109.163 port 41180 ssh2: RSA SHA256:/eDCPZUdFWI+U3+wi39zDPruseM35VxqLVYPXblev1E
Jul 6 23:27:48.995213 sshd-session[4506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:27:49.000309 systemd-logind[1463]: New session 25 of user core.
Jul 6 23:27:49.006206 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 6 23:27:49.545275 kubelet[2603]: E0706 23:27:49.544143 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Jul 6 23:27:49.548445 containerd[1482]: time="2025-07-06T23:27:49.548375429Z" level=info msg="CreateContainer within sandbox \"cad9dd2f116b3b0718788407414d3bd8c0beb00b3ec0e7de3e2ec58a3b25af65\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 6 23:27:49.565586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2612515711.mount: Deactivated successfully.
Jul 6 23:27:49.568308 containerd[1482]: time="2025-07-06T23:27:49.568274428Z" level=info msg="CreateContainer within sandbox \"cad9dd2f116b3b0718788407414d3bd8c0beb00b3ec0e7de3e2ec58a3b25af65\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b1957b9748dbf0dc0ec396b0edcc4d66d2a92a50a23e327ad3154e0321b73b8d\""
Jul 6 23:27:49.569177 containerd[1482]: time="2025-07-06T23:27:49.569146944Z" level=info msg="StartContainer for \"b1957b9748dbf0dc0ec396b0edcc4d66d2a92a50a23e327ad3154e0321b73b8d\""
Jul 6 23:27:49.618169 systemd[1]: Started cri-containerd-b1957b9748dbf0dc0ec396b0edcc4d66d2a92a50a23e327ad3154e0321b73b8d.scope - libcontainer container b1957b9748dbf0dc0ec396b0edcc4d66d2a92a50a23e327ad3154e0321b73b8d.
Jul 6 23:27:49.674075 containerd[1482]: time="2025-07-06T23:27:49.674013101Z" level=info msg="StartContainer for \"b1957b9748dbf0dc0ec396b0edcc4d66d2a92a50a23e327ad3154e0321b73b8d\" returns successfully"
Jul 6 23:27:49.676217 systemd[1]: cri-containerd-b1957b9748dbf0dc0ec396b0edcc4d66d2a92a50a23e327ad3154e0321b73b8d.scope: Deactivated successfully.
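The mount-unit names in this stretch (var-lib-containerd-tmpmounts-containerd\x2dmount2612515711.mount here, the var-lib-kubelet-pods-… units earlier) follow systemd's path escaping: the leading slash is dropped, remaining slashes become '-', and other special bytes, including '-' itself, become \xXX. A loose sketch approximating `systemd-escape --path` (edge cases such as empty paths and leading dots are ignored):

package main

import (
	"fmt"
	"strings"
)

// escapePath approximates systemd's path escaping as seen in the unit
// names above; it is not a faithful reimplementation.
func escapePath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z', c >= '0' && c <= '9', c == ':', c == '_', c == '.':
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c) // '-' becomes \x2d, '~' becomes \x7e
		}
	}
	return b.String()
}

func main() {
	// Prints var-lib-kubelet-pods-350b79fb\x2da9de\x2d4337\x2daf9f\x2d51a63aa99973-volumes.mount
	fmt.Println(escapePath("/var/lib/kubelet/pods/350b79fb-a9de-4337-af9f-51a63aa99973/volumes") + ".mount")
}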
Jul 6 23:27:49.704181 containerd[1482]: time="2025-07-06T23:27:49.703896144Z" level=info msg="shim disconnected" id=b1957b9748dbf0dc0ec396b0edcc4d66d2a92a50a23e327ad3154e0321b73b8d namespace=k8s.io
Jul 6 23:27:49.704181 containerd[1482]: time="2025-07-06T23:27:49.703975444Z" level=warning msg="cleaning up after shim disconnected" id=b1957b9748dbf0dc0ec396b0edcc4d66d2a92a50a23e327ad3154e0321b73b8d namespace=k8s.io
Jul 6 23:27:49.704181 containerd[1482]: time="2025-07-06T23:27:49.704009884Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:27:49.948670 kubelet[2603]: I0706 23:27:49.948625 2603 setters.go:602] "Node became not ready" node="172-237-135-91" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-06T23:27:49Z","lastTransitionTime":"2025-07-06T23:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 6 23:27:49.987651 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b1957b9748dbf0dc0ec396b0edcc4d66d2a92a50a23e327ad3154e0321b73b8d-rootfs.mount: Deactivated successfully.
Jul 6 23:27:50.162603 kubelet[2603]: E0706 23:27:50.162571 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Jul 6 23:27:50.547380 kubelet[2603]: E0706 23:27:50.547343 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Jul 6 23:27:50.549365 containerd[1482]: time="2025-07-06T23:27:50.549339548Z" level=info msg="CreateContainer within sandbox \"cad9dd2f116b3b0718788407414d3bd8c0beb00b3ec0e7de3e2ec58a3b25af65\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 6 23:27:50.564822 containerd[1482]: time="2025-07-06T23:27:50.564779943Z" level=info msg="CreateContainer within sandbox \"cad9dd2f116b3b0718788407414d3bd8c0beb00b3ec0e7de3e2ec58a3b25af65\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0f046bd0576f5171b5c3d6eab08b2836b39275c96a7cb35ac4a00a4c89e5977b\""
Jul 6 23:27:50.565470 containerd[1482]: time="2025-07-06T23:27:50.565422041Z" level=info msg="StartContainer for \"0f046bd0576f5171b5c3d6eab08b2836b39275c96a7cb35ac4a00a4c89e5977b\""
Jul 6 23:27:50.600164 systemd[1]: Started cri-containerd-0f046bd0576f5171b5c3d6eab08b2836b39275c96a7cb35ac4a00a4c89e5977b.scope - libcontainer container 0f046bd0576f5171b5c3d6eab08b2836b39275c96a7cb35ac4a00a4c89e5977b.
Jul 6 23:27:50.622673 systemd[1]: cri-containerd-0f046bd0576f5171b5c3d6eab08b2836b39275c96a7cb35ac4a00a4c89e5977b.scope: Deactivated successfully.
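The setters.go entry marks the node NotReady while the CNI plugin is still initializing, and the condition it logs is plain JSON. Decoding it with a reduced struct, copied from the payload above (the real NodeCondition in k8s.io/api/core/v1 uses typed timestamps; this stand-in keeps them as strings):

package main

import (
	"encoding/json"
	"fmt"
)

// NodeCondition mirrors only the fields visible in the log entry.
type NodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-06T23:27:49Z","lastTransitionTime":"2025-07-06T23:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}`
	var c NodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("%s=%s (%s)\n", c.Type, c.Status, c.Reason) // Ready=False (KubeletNotReady)
}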
Jul 6 23:27:50.624439 containerd[1482]: time="2025-07-06T23:27:50.624379944Z" level=info msg="StartContainer for \"0f046bd0576f5171b5c3d6eab08b2836b39275c96a7cb35ac4a00a4c89e5977b\" returns successfully"
Jul 6 23:27:50.645543 containerd[1482]: time="2025-07-06T23:27:50.645488040Z" level=info msg="shim disconnected" id=0f046bd0576f5171b5c3d6eab08b2836b39275c96a7cb35ac4a00a4c89e5977b namespace=k8s.io
Jul 6 23:27:50.645543 containerd[1482]: time="2025-07-06T23:27:50.645528530Z" level=warning msg="cleaning up after shim disconnected" id=0f046bd0576f5171b5c3d6eab08b2836b39275c96a7cb35ac4a00a4c89e5977b namespace=k8s.io
Jul 6 23:27:50.645543 containerd[1482]: time="2025-07-06T23:27:50.645536600Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:27:50.988574 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f046bd0576f5171b5c3d6eab08b2836b39275c96a7cb35ac4a00a4c89e5977b-rootfs.mount: Deactivated successfully.
Jul 6 23:27:51.552106 kubelet[2603]: E0706 23:27:51.550924 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Jul 6 23:27:51.552726 containerd[1482]: time="2025-07-06T23:27:51.552263778Z" level=info msg="CreateContainer within sandbox \"cad9dd2f116b3b0718788407414d3bd8c0beb00b3ec0e7de3e2ec58a3b25af65\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 6 23:27:51.567765 containerd[1482]: time="2025-07-06T23:27:51.567657445Z" level=info msg="CreateContainer within sandbox \"cad9dd2f116b3b0718788407414d3bd8c0beb00b3ec0e7de3e2ec58a3b25af65\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6f975f2d9601587377ab6be75bc55eed362d13606421237ca930871ee9059b4d\""
Jul 6 23:27:51.570046 containerd[1482]: time="2025-07-06T23:27:51.568103423Z" level=info msg="StartContainer for \"6f975f2d9601587377ab6be75bc55eed362d13606421237ca930871ee9059b4d\""
Jul 6 23:27:51.569488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount661499333.mount: Deactivated successfully.
Jul 6 23:27:51.607157 systemd[1]: Started cri-containerd-6f975f2d9601587377ab6be75bc55eed362d13606421237ca930871ee9059b4d.scope - libcontainer container 6f975f2d9601587377ab6be75bc55eed362d13606421237ca930871ee9059b4d.
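With clean-cilium-state finished, the long-running cilium-agent container is created and started. Across the last few seconds the pattern is the standard CRI bring-up: one RunPodSandbox, then a CreateContainer/StartContainer pair per container, with each init container's scope deactivating before the next is created. A toy reconstruction of that ordering (fakeRuntime is a stand-in so the sketch runs; the real CRI client speaks gRPC to containerd and passes config structs, not strings):

package main

import "fmt"

// fakeRuntime is a deliberately simplified stand-in for a CRI client.
type fakeRuntime struct{ n int }

func (f *fakeRuntime) RunPodSandbox(pod string) (string, error) { return "sandbox-for-" + pod, nil }

func (f *fakeRuntime) CreateContainer(sandboxID, name string) (string, error) {
	f.n++
	return fmt.Sprintf("ctr-%d-%s", f.n, name), nil
}

func (f *fakeRuntime) StartContainer(id string) error {
	fmt.Println("started", id)
	return nil
}

func main() {
	rt := &fakeRuntime{}
	sandboxID, _ := rt.RunPodSandbox("cilium-7nx8b")
	// The first four are init containers: each must exit 0 (its scope
	// deactivates, its shim disconnects) before the next is created.
	// cilium-agent is the long-running main container.
	for _, name := range []string{"mount-cgroup", "apply-sysctl-overwrites", "mount-bpf-fs", "clean-cilium-state", "cilium-agent"} {
		id, _ := rt.CreateContainer(sandboxID, name)
		_ = rt.StartContainer(id)
	}
}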
Jul 6 23:27:51.634129 containerd[1482]: time="2025-07-06T23:27:51.634013444Z" level=info msg="StartContainer for \"6f975f2d9601587377ab6be75bc55eed362d13606421237ca930871ee9059b4d\" returns successfully"
Jul 6 23:27:52.067058 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jul 6 23:27:52.554770 kubelet[2603]: E0706 23:27:52.554702 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Jul 6 23:27:54.080042 kubelet[2603]: E0706 23:27:54.079950 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Jul 6 23:27:55.012162 systemd-networkd[1396]: lxc_health: Link UP
Jul 6 23:27:55.020728 systemd-networkd[1396]: lxc_health: Gained carrier
Jul 6 23:27:56.076129 systemd-networkd[1396]: lxc_health: Gained IPv6LL
Jul 6 23:27:56.081343 kubelet[2603]: E0706 23:27:56.081088 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Jul 6 23:27:56.100523 kubelet[2603]: I0706 23:27:56.099758 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7nx8b" podStartSLOduration=9.099745722 podStartE2EDuration="9.099745722s" podCreationTimestamp="2025-07-06 23:27:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:27:52.568117529 +0000 UTC m=+175.491392748" watchObservedRunningTime="2025-07-06 23:27:56.099745722 +0000 UTC m=+179.023020941"
Jul 6 23:27:56.561877 kubelet[2603]: E0706 23:27:56.561825 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Jul 6 23:27:57.165280 containerd[1482]: time="2025-07-06T23:27:57.165135999Z" level=info msg="StopPodSandbox for \"773e16ba7139203572fc1f18bdda682ed8323d3838b6de16ff21898541a50640\""
Jul 6 23:27:57.165280 containerd[1482]: time="2025-07-06T23:27:57.165234389Z" level=info msg="TearDown network for sandbox \"773e16ba7139203572fc1f18bdda682ed8323d3838b6de16ff21898541a50640\" successfully"
Jul 6 23:27:57.165280 containerd[1482]: time="2025-07-06T23:27:57.165244949Z" level=info msg="StopPodSandbox for \"773e16ba7139203572fc1f18bdda682ed8323d3838b6de16ff21898541a50640\" returns successfully"
Jul 6 23:27:57.169050 containerd[1482]: time="2025-07-06T23:27:57.168280410Z" level=info msg="RemovePodSandbox for \"773e16ba7139203572fc1f18bdda682ed8323d3838b6de16ff21898541a50640\""
Jul 6 23:27:57.169050 containerd[1482]: time="2025-07-06T23:27:57.168304820Z" level=info msg="Forcibly stopping sandbox \"773e16ba7139203572fc1f18bdda682ed8323d3838b6de16ff21898541a50640\""
Jul 6 23:27:57.169050 containerd[1482]: time="2025-07-06T23:27:57.168349289Z" level=info msg="TearDown network for sandbox \"773e16ba7139203572fc1f18bdda682ed8323d3838b6de16ff21898541a50640\" successfully"
Jul 6 23:27:57.172597 containerd[1482]: time="2025-07-06T23:27:57.172569445Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"773e16ba7139203572fc1f18bdda682ed8323d3838b6de16ff21898541a50640\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 6 23:27:57.172721 containerd[1482]: time="2025-07-06T23:27:57.172704435Z" level=info msg="RemovePodSandbox \"773e16ba7139203572fc1f18bdda682ed8323d3838b6de16ff21898541a50640\" returns successfully"
Jul 6 23:27:57.173148 containerd[1482]: time="2025-07-06T23:27:57.173121093Z" level=info msg="StopPodSandbox for \"8990f3edcdebe8951608ee85e58a4cca99cf733a32a0164384f14e1ca45d972e\""
Jul 6 23:27:57.173391 containerd[1482]: time="2025-07-06T23:27:57.173340903Z" level=info msg="TearDown network for sandbox \"8990f3edcdebe8951608ee85e58a4cca99cf733a32a0164384f14e1ca45d972e\" successfully"
Jul 6 23:27:57.173391 containerd[1482]: time="2025-07-06T23:27:57.173355253Z" level=info msg="StopPodSandbox for \"8990f3edcdebe8951608ee85e58a4cca99cf733a32a0164384f14e1ca45d972e\" returns successfully"
Jul 6 23:27:57.174858 containerd[1482]: time="2025-07-06T23:27:57.173844711Z" level=info msg="RemovePodSandbox for \"8990f3edcdebe8951608ee85e58a4cca99cf733a32a0164384f14e1ca45d972e\""
Jul 6 23:27:57.174858 containerd[1482]: time="2025-07-06T23:27:57.173867311Z" level=info msg="Forcibly stopping sandbox \"8990f3edcdebe8951608ee85e58a4cca99cf733a32a0164384f14e1ca45d972e\""
Jul 6 23:27:57.174858 containerd[1482]: time="2025-07-06T23:27:57.173920631Z" level=info msg="TearDown network for sandbox \"8990f3edcdebe8951608ee85e58a4cca99cf733a32a0164384f14e1ca45d972e\" successfully"
Jul 6 23:27:57.176544 containerd[1482]: time="2025-07-06T23:27:57.176524313Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8990f3edcdebe8951608ee85e58a4cca99cf733a32a0164384f14e1ca45d972e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 6 23:27:57.176668 containerd[1482]: time="2025-07-06T23:27:57.176652642Z" level=info msg="RemovePodSandbox \"8990f3edcdebe8951608ee85e58a4cca99cf733a32a0164384f14e1ca45d972e\" returns successfully"
Jul 6 23:27:57.563772 kubelet[2603]: E0706 23:27:57.563424 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Jul 6 23:27:57.642691 systemd[1]: run-containerd-runc-k8s.io-6f975f2d9601587377ab6be75bc55eed362d13606421237ca930871ee9059b4d-runc.7qCjed.mount: Deactivated successfully.
Jul 6 23:28:01.163768 kubelet[2603]: E0706 23:28:01.162924 2603 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Jul 6 23:28:01.966314 sshd[4553]: Connection closed by 147.75.109.163 port 41180
Jul 6 23:28:01.967258 sshd-session[4506]: pam_unix(sshd:session): session closed for user core
Jul 6 23:28:01.970828 systemd[1]: sshd@24-172.237.135.91:22-147.75.109.163:41180.service: Deactivated successfully.
Jul 6 23:28:01.973109 systemd[1]: session-25.scope: Deactivated successfully.
Jul 6 23:28:01.975777 systemd-logind[1463]: Session 25 logged out. Waiting for processes to exit.
Jul 6 23:28:01.977822 systemd-logind[1463]: Removed session 25.
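The podStartSLOduration=9.099745722 figure in the pod_startup_latency_tracker entry above matches, to the nanosecond, watchObservedRunningTime minus podCreationTimestamp, and no image pull contributed (firstStartedPulling is the zero time). Checking the arithmetic with the timestamps copied from the log (a stdlib sketch; the layout string is chosen to match the log's timestamp format):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Go accepts a fractional-seconds field when parsing even though the
	// layout omits it, so one layout covers both timestamps.
	const layout = "2006-01-02 15:04:05 -0700 MST"
	created, err := time.Parse(layout, "2025-07-06 23:27:47 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2025-07-06 23:27:56.099745722 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println(running.Sub(created)) // 9.099745722s, matching podStartSLOduration
}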