Jun 20 19:58:55.855937 kernel: Linux version 6.12.34-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Jun 20 17:06:39 -00 2025 Jun 20 19:58:55.855957 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=b7bb3b1ced9c5d47870a8b74c6c30075189c27e25d75251cfa7215e4bbff75ea Jun 20 19:58:55.855965 kernel: BIOS-provided physical RAM map: Jun 20 19:58:55.855973 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable Jun 20 19:58:55.855979 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved Jun 20 19:58:55.855984 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jun 20 19:58:55.855990 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable Jun 20 19:58:55.855996 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved Jun 20 19:58:55.856002 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jun 20 19:58:55.856008 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jun 20 19:58:55.856014 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jun 20 19:58:55.856019 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jun 20 19:58:55.856027 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable Jun 20 19:58:55.856033 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jun 20 19:58:55.856040 kernel: NX (Execute Disable) protection: active Jun 20 19:58:55.856046 kernel: APIC: Static calls initialized Jun 20 19:58:55.856052 kernel: SMBIOS 2.8 present. 
Jun 20 19:58:55.856060 kernel: DMI: Linode Compute Instance, BIOS Not Specified Jun 20 19:58:55.856066 kernel: DMI: Memory slots populated: 1/1 Jun 20 19:58:55.856072 kernel: Hypervisor detected: KVM Jun 20 19:58:55.856077 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jun 20 19:58:55.856084 kernel: kvm-clock: using sched offset of 5245826321 cycles Jun 20 19:58:55.856090 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jun 20 19:58:55.856097 kernel: tsc: Detected 2000.000 MHz processor Jun 20 19:58:55.856103 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jun 20 19:58:55.856110 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jun 20 19:58:55.856116 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000 Jun 20 19:58:55.856124 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jun 20 19:58:55.856130 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jun 20 19:58:55.856136 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000 Jun 20 19:58:55.856142 kernel: Using GB pages for direct mapping Jun 20 19:58:55.856149 kernel: ACPI: Early table checksum verification disabled Jun 20 19:58:55.856155 kernel: ACPI: RSDP 0x00000000000F51B0 000014 (v00 BOCHS ) Jun 20 19:58:55.856161 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 20 19:58:55.856167 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jun 20 19:58:55.856174 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 20 19:58:55.856182 kernel: ACPI: FACS 0x000000007FFE0000 000040 Jun 20 19:58:55.856188 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 20 19:58:55.856194 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 20 19:58:55.856201 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS 
BXPC 00000001 BXPC 00000001) Jun 20 19:58:55.856210 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 20 19:58:55.856216 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea] Jun 20 19:58:55.856225 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6] Jun 20 19:58:55.856231 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Jun 20 19:58:55.856291 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a] Jun 20 19:58:55.856300 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2] Jun 20 19:58:55.856307 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de] Jun 20 19:58:55.856313 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306] Jun 20 19:58:55.856320 kernel: No NUMA configuration found Jun 20 19:58:55.856326 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff] Jun 20 19:58:55.856336 kernel: NODE_DATA(0) allocated [mem 0x17fff6dc0-0x17fffdfff] Jun 20 19:58:55.856342 kernel: Zone ranges: Jun 20 19:58:55.856349 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jun 20 19:58:55.856355 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jun 20 19:58:55.856362 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff] Jun 20 19:58:55.856368 kernel: Device empty Jun 20 19:58:55.856375 kernel: Movable zone start for each node Jun 20 19:58:55.856381 kernel: Early memory node ranges Jun 20 19:58:55.856388 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jun 20 19:58:55.856394 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff] Jun 20 19:58:55.856402 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff] Jun 20 19:58:55.856409 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff] Jun 20 19:58:55.856415 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jun 20 19:58:55.856422 kernel: On node 0, zone DMA: 97 pages in unavailable 
ranges Jun 20 19:58:55.856428 kernel: On node 0, zone Normal: 35 pages in unavailable ranges Jun 20 19:58:55.856435 kernel: ACPI: PM-Timer IO Port: 0x608 Jun 20 19:58:55.856441 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jun 20 19:58:55.856448 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jun 20 19:58:55.856454 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jun 20 19:58:55.856463 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jun 20 19:58:55.856469 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jun 20 19:58:55.856475 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jun 20 19:58:55.856482 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jun 20 19:58:55.856489 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jun 20 19:58:55.856495 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jun 20 19:58:55.856501 kernel: TSC deadline timer available Jun 20 19:58:55.856508 kernel: CPU topo: Max. logical packages: 1 Jun 20 19:58:55.856514 kernel: CPU topo: Max. logical dies: 1 Jun 20 19:58:55.856523 kernel: CPU topo: Max. dies per package: 1 Jun 20 19:58:55.856529 kernel: CPU topo: Max. threads per core: 1 Jun 20 19:58:55.856535 kernel: CPU topo: Num. cores per package: 2 Jun 20 19:58:55.856542 kernel: CPU topo: Num. 
threads per package: 2 Jun 20 19:58:55.856548 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Jun 20 19:58:55.856555 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jun 20 19:58:55.856561 kernel: kvm-guest: KVM setup pv remote TLB flush Jun 20 19:58:55.856568 kernel: kvm-guest: setup PV sched yield Jun 20 19:58:55.856574 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jun 20 19:58:55.856582 kernel: Booting paravirtualized kernel on KVM Jun 20 19:58:55.856589 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jun 20 19:58:55.856596 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jun 20 19:58:55.856602 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Jun 20 19:58:55.856609 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Jun 20 19:58:55.856615 kernel: pcpu-alloc: [0] 0 1 Jun 20 19:58:55.856621 kernel: kvm-guest: PV spinlocks enabled Jun 20 19:58:55.856628 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jun 20 19:58:55.856636 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=b7bb3b1ced9c5d47870a8b74c6c30075189c27e25d75251cfa7215e4bbff75ea Jun 20 19:58:55.856645 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Jun 20 19:58:55.856651 kernel: random: crng init done Jun 20 19:58:55.856658 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jun 20 19:58:55.856664 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 20 19:58:55.856671 kernel: Fallback order for Node 0: 0 Jun 20 19:58:55.856677 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443 Jun 20 19:58:55.856684 kernel: Policy zone: Normal Jun 20 19:58:55.856690 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 20 19:58:55.856698 kernel: software IO TLB: area num 2. Jun 20 19:58:55.856705 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jun 20 19:58:55.856711 kernel: ftrace: allocating 40093 entries in 157 pages Jun 20 19:58:55.856718 kernel: ftrace: allocated 157 pages with 5 groups Jun 20 19:58:55.856724 kernel: Dynamic Preempt: voluntary Jun 20 19:58:55.856731 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 20 19:58:55.856738 kernel: rcu: RCU event tracing is enabled. Jun 20 19:58:55.856745 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jun 20 19:58:55.856752 kernel: Trampoline variant of Tasks RCU enabled. Jun 20 19:58:55.856758 kernel: Rude variant of Tasks RCU enabled. Jun 20 19:58:55.856767 kernel: Tracing variant of Tasks RCU enabled. Jun 20 19:58:55.856773 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 20 19:58:55.856779 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jun 20 19:58:55.856786 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jun 20 19:58:55.856798 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jun 20 19:58:55.856807 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jun 20 19:58:55.856814 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jun 20 19:58:55.856821 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 20 19:58:55.856827 kernel: Console: colour VGA+ 80x25 Jun 20 19:58:55.856834 kernel: printk: legacy console [tty0] enabled Jun 20 19:58:55.856840 kernel: printk: legacy console [ttyS0] enabled Jun 20 19:58:55.856847 kernel: ACPI: Core revision 20240827 Jun 20 19:58:55.856855 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jun 20 19:58:55.856862 kernel: APIC: Switch to symmetric I/O mode setup Jun 20 19:58:55.856868 kernel: x2apic enabled Jun 20 19:58:55.856875 kernel: APIC: Switched APIC routing to: physical x2apic Jun 20 19:58:55.856883 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jun 20 19:58:55.856890 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jun 20 19:58:55.856896 kernel: kvm-guest: setup PV IPIs Jun 20 19:58:55.856903 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jun 20 19:58:55.856910 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns Jun 20 19:58:55.856916 kernel: Calibrating delay loop (skipped) preset value.. 
4000.00 BogoMIPS (lpj=2000000) Jun 20 19:58:55.856923 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jun 20 19:58:55.856930 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jun 20 19:58:55.856936 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jun 20 19:58:55.856945 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jun 20 19:58:55.856951 kernel: Spectre V2 : Mitigation: Retpolines Jun 20 19:58:55.856958 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jun 20 19:58:55.856965 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Jun 20 19:58:55.856971 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jun 20 19:58:55.856978 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jun 20 19:58:55.856985 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jun 20 19:58:55.856992 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jun 20 19:58:55.856998 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jun 20 19:58:55.857007 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jun 20 19:58:55.857013 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jun 20 19:58:55.857020 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jun 20 19:58:55.857027 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Jun 20 19:58:55.857033 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jun 20 19:58:55.857040 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8 Jun 20 19:58:55.857046 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format. 
Jun 20 19:58:55.857053 kernel: Freeing SMP alternatives memory: 32K Jun 20 19:58:55.857061 kernel: pid_max: default: 32768 minimum: 301 Jun 20 19:58:55.857068 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jun 20 19:58:55.857074 kernel: landlock: Up and running. Jun 20 19:58:55.857081 kernel: SELinux: Initializing. Jun 20 19:58:55.857087 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 20 19:58:55.857094 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 20 19:58:55.857101 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Jun 20 19:58:55.857107 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jun 20 19:58:55.857114 kernel: ... version: 0 Jun 20 19:58:55.857122 kernel: ... bit width: 48 Jun 20 19:58:55.857129 kernel: ... generic registers: 6 Jun 20 19:58:55.857135 kernel: ... value mask: 0000ffffffffffff Jun 20 19:58:55.857142 kernel: ... max period: 00007fffffffffff Jun 20 19:58:55.857148 kernel: ... fixed-purpose events: 0 Jun 20 19:58:55.857155 kernel: ... event mask: 000000000000003f Jun 20 19:58:55.857162 kernel: signal: max sigframe size: 3376 Jun 20 19:58:55.857168 kernel: rcu: Hierarchical SRCU implementation. Jun 20 19:58:55.857175 kernel: rcu: Max phase no-delay instances is 400. Jun 20 19:58:55.857181 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jun 20 19:58:55.857190 kernel: smp: Bringing up secondary CPUs ... Jun 20 19:58:55.857196 kernel: smpboot: x86: Booting SMP configuration: Jun 20 19:58:55.857203 kernel: .... 
node #0, CPUs: #1 Jun 20 19:58:55.857209 kernel: smp: Brought up 1 node, 2 CPUs Jun 20 19:58:55.857216 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS) Jun 20 19:58:55.857223 kernel: Memory: 3961048K/4193772K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54424K init, 2544K bss, 227296K reserved, 0K cma-reserved) Jun 20 19:58:55.857230 kernel: devtmpfs: initialized Jun 20 19:58:55.857236 kernel: x86/mm: Memory block size: 128MB Jun 20 19:58:55.857270 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 20 19:58:55.857279 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jun 20 19:58:55.857286 kernel: pinctrl core: initialized pinctrl subsystem Jun 20 19:58:55.857305 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 20 19:58:55.857312 kernel: audit: initializing netlink subsys (disabled) Jun 20 19:58:55.857319 kernel: audit: type=2000 audit(1750449534.337:1): state=initialized audit_enabled=0 res=1 Jun 20 19:58:55.857325 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 20 19:58:55.857332 kernel: thermal_sys: Registered thermal governor 'user_space' Jun 20 19:58:55.857338 kernel: cpuidle: using governor menu Jun 20 19:58:55.857347 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 20 19:58:55.857354 kernel: dca service started, version 1.12.1 Jun 20 19:58:55.857360 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Jun 20 19:58:55.857367 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jun 20 19:58:55.857374 kernel: PCI: Using configuration type 1 for base access Jun 20 19:58:55.857381 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jun 20 19:58:55.857387 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 20 19:58:55.857394 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jun 20 19:58:55.857401 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 20 19:58:55.857409 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jun 20 19:58:55.857415 kernel: ACPI: Added _OSI(Module Device) Jun 20 19:58:55.857422 kernel: ACPI: Added _OSI(Processor Device) Jun 20 19:58:55.857429 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 20 19:58:55.857435 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 20 19:58:55.857442 kernel: ACPI: Interpreter enabled Jun 20 19:58:55.857448 kernel: ACPI: PM: (supports S0 S3 S5) Jun 20 19:58:55.857455 kernel: ACPI: Using IOAPIC for interrupt routing Jun 20 19:58:55.857461 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jun 20 19:58:55.857470 kernel: PCI: Using E820 reservations for host bridge windows Jun 20 19:58:55.857476 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jun 20 19:58:55.857483 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jun 20 19:58:55.857664 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jun 20 19:58:55.857780 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jun 20 19:58:55.857891 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jun 20 19:58:55.857900 kernel: PCI host bridge to bus 0000:00 Jun 20 19:58:55.858024 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jun 20 19:58:55.858128 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jun 20 19:58:55.858226 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jun 20 19:58:55.859922 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Jun 20 19:58:55.860029 kernel: 
pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jun 20 19:58:55.860128 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window] Jun 20 19:58:55.860225 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jun 20 19:58:55.860390 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jun 20 19:58:55.860519 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Jun 20 19:58:55.860629 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref] Jun 20 19:58:55.860738 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff] Jun 20 19:58:55.860850 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref] Jun 20 19:58:55.860955 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jun 20 19:58:55.861078 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint Jun 20 19:58:55.861190 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f] Jun 20 19:58:55.861359 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff] Jun 20 19:58:55.861470 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref] Jun 20 19:58:55.861593 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jun 20 19:58:55.861700 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f] Jun 20 19:58:55.861806 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff] Jun 20 19:58:55.861911 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref] Jun 20 19:58:55.862025 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref] Jun 20 19:58:55.862142 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jun 20 19:58:55.862281 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jun 20 19:58:55.862410 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jun 20 19:58:55.862516 
kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df] Jun 20 19:58:55.862620 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff] Jun 20 19:58:55.862742 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jun 20 19:58:55.862848 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Jun 20 19:58:55.862858 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jun 20 19:58:55.862865 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jun 20 19:58:55.862872 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jun 20 19:58:55.862878 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jun 20 19:58:55.862885 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jun 20 19:58:55.862891 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jun 20 19:58:55.862901 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jun 20 19:58:55.862907 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jun 20 19:58:55.862914 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jun 20 19:58:55.862920 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jun 20 19:58:55.862927 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jun 20 19:58:55.862934 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jun 20 19:58:55.862941 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jun 20 19:58:55.862947 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jun 20 19:58:55.862954 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jun 20 19:58:55.862962 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jun 20 19:58:55.862969 kernel: iommu: Default domain type: Translated Jun 20 19:58:55.862976 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jun 20 19:58:55.862982 kernel: PCI: Using ACPI for IRQ routing Jun 20 19:58:55.862989 kernel: PCI: pci_cache_line_size set to 64 bytes Jun 20 19:58:55.862995 kernel: e820: reserve RAM 
buffer [mem 0x0009f800-0x0009ffff] Jun 20 19:58:55.863002 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff] Jun 20 19:58:55.863107 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jun 20 19:58:55.863212 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jun 20 19:58:55.865539 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jun 20 19:58:55.865553 kernel: vgaarb: loaded Jun 20 19:58:55.865561 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jun 20 19:58:55.865569 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jun 20 19:58:55.865576 kernel: clocksource: Switched to clocksource kvm-clock Jun 20 19:58:55.865582 kernel: VFS: Disk quotas dquot_6.6.0 Jun 20 19:58:55.865589 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 20 19:58:55.865596 kernel: pnp: PnP ACPI init Jun 20 19:58:55.865736 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Jun 20 19:58:55.865748 kernel: pnp: PnP ACPI: found 5 devices Jun 20 19:58:55.865755 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jun 20 19:58:55.865762 kernel: NET: Registered PF_INET protocol family Jun 20 19:58:55.865769 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jun 20 19:58:55.865776 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jun 20 19:58:55.865783 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 20 19:58:55.865789 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jun 20 19:58:55.865799 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jun 20 19:58:55.865806 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jun 20 19:58:55.865812 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 20 19:58:55.865819 kernel: UDP-Lite hash table entries: 2048 
(order: 4, 65536 bytes, linear) Jun 20 19:58:55.865826 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 20 19:58:55.865833 kernel: NET: Registered PF_XDP protocol family Jun 20 19:58:55.865934 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jun 20 19:58:55.866033 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jun 20 19:58:55.866130 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jun 20 19:58:55.866231 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Jun 20 19:58:55.866350 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jun 20 19:58:55.866448 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window] Jun 20 19:58:55.866458 kernel: PCI: CLS 0 bytes, default 64 Jun 20 19:58:55.866465 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jun 20 19:58:55.866472 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB) Jun 20 19:58:55.866479 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns Jun 20 19:58:55.866486 kernel: Initialise system trusted keyrings Jun 20 19:58:55.866496 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jun 20 19:58:55.866503 kernel: Key type asymmetric registered Jun 20 19:58:55.866510 kernel: Asymmetric key parser 'x509' registered Jun 20 19:58:55.866517 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jun 20 19:58:55.866524 kernel: io scheduler mq-deadline registered Jun 20 19:58:55.866531 kernel: io scheduler kyber registered Jun 20 19:58:55.866537 kernel: io scheduler bfq registered Jun 20 19:58:55.866544 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jun 20 19:58:55.866551 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jun 20 19:58:55.866559 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jun 20 19:58:55.866567 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 20 
19:58:55.866574 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jun 20 19:58:55.866581 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jun 20 19:58:55.866588 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jun 20 19:58:55.866595 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jun 20 19:58:55.866708 kernel: rtc_cmos 00:03: RTC can wake from S4 Jun 20 19:58:55.866718 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jun 20 19:58:55.866817 kernel: rtc_cmos 00:03: registered as rtc0 Jun 20 19:58:55.866921 kernel: rtc_cmos 00:03: setting system clock to 2025-06-20T19:58:55 UTC (1750449535) Jun 20 19:58:55.867027 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jun 20 19:58:55.867037 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jun 20 19:58:55.867043 kernel: NET: Registered PF_INET6 protocol family Jun 20 19:58:55.867050 kernel: Segment Routing with IPv6 Jun 20 19:58:55.867056 kernel: In-situ OAM (IOAM) with IPv6 Jun 20 19:58:55.867063 kernel: NET: Registered PF_PACKET protocol family Jun 20 19:58:55.867070 kernel: Key type dns_resolver registered Jun 20 19:58:55.867076 kernel: IPI shorthand broadcast: enabled Jun 20 19:58:55.867085 kernel: sched_clock: Marking stable (2487003971, 187670921)->(2699812900, -25138008) Jun 20 19:58:55.867092 kernel: registered taskstats version 1 Jun 20 19:58:55.867099 kernel: Loading compiled-in X.509 certificates Jun 20 19:58:55.867106 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.34-flatcar: 9a085d119111c823c157514215d0379e3a2f1b94' Jun 20 19:58:55.867113 kernel: Demotion targets for Node 0: null Jun 20 19:58:55.867120 kernel: Key type .fscrypt registered Jun 20 19:58:55.867126 kernel: Key type fscrypt-provisioning registered Jun 20 19:58:55.867133 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jun 20 19:58:55.867142 kernel: ima: Allocated hash algorithm: sha1 Jun 20 19:58:55.867148 kernel: ima: No architecture policies found Jun 20 19:58:55.867155 kernel: clk: Disabling unused clocks Jun 20 19:58:55.867161 kernel: Warning: unable to open an initial console. Jun 20 19:58:55.867168 kernel: Freeing unused kernel image (initmem) memory: 54424K Jun 20 19:58:55.867175 kernel: Write protecting the kernel read-only data: 24576k Jun 20 19:58:55.867182 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Jun 20 19:58:55.867188 kernel: Run /init as init process Jun 20 19:58:55.867195 kernel: with arguments: Jun 20 19:58:55.867203 kernel: /init Jun 20 19:58:55.867210 kernel: with environment: Jun 20 19:58:55.867216 kernel: HOME=/ Jun 20 19:58:55.867223 kernel: TERM=linux Jun 20 19:58:55.867229 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 20 19:58:55.867473 systemd[1]: Successfully made /usr/ read-only. Jun 20 19:58:55.867488 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 20 19:58:55.867497 systemd[1]: Detected virtualization kvm. Jun 20 19:58:55.867506 systemd[1]: Detected architecture x86-64. Jun 20 19:58:55.867513 systemd[1]: Running in initrd. Jun 20 19:58:55.867520 systemd[1]: No hostname configured, using default hostname. Jun 20 19:58:55.867528 systemd[1]: Hostname set to . Jun 20 19:58:55.867535 systemd[1]: Initializing machine ID from random generator. Jun 20 19:58:55.867543 systemd[1]: Queued start job for default target initrd.target. Jun 20 19:58:55.867550 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jun 20 19:58:55.867557 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 19:58:55.867568 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jun 20 19:58:55.867576 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 20 19:58:55.867584 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jun 20 19:58:55.867592 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jun 20 19:58:55.867601 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jun 20 19:58:55.867609 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jun 20 19:58:55.867618 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 19:58:55.867626 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 20 19:58:55.867633 systemd[1]: Reached target paths.target - Path Units. Jun 20 19:58:55.867640 systemd[1]: Reached target slices.target - Slice Units. Jun 20 19:58:55.867648 systemd[1]: Reached target swap.target - Swaps. Jun 20 19:58:55.867655 systemd[1]: Reached target timers.target - Timer Units. Jun 20 19:58:55.867663 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 20 19:58:55.867670 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 20 19:58:55.867678 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 20 19:58:55.867687 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jun 20 19:58:55.867695 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jun 20 19:58:55.867702 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 20 19:58:55.867709 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 20 19:58:55.867719 systemd[1]: Reached target sockets.target - Socket Units.
Jun 20 19:58:55.867727 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jun 20 19:58:55.867736 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 20 19:58:55.867743 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jun 20 19:58:55.867751 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jun 20 19:58:55.867758 systemd[1]: Starting systemd-fsck-usr.service...
Jun 20 19:58:55.867766 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 20 19:58:55.867773 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 20 19:58:55.867780 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 19:58:55.867788 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jun 20 19:58:55.867818 systemd-journald[206]: Collecting audit messages is disabled.
Jun 20 19:58:55.867839 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 20 19:58:55.867847 systemd[1]: Finished systemd-fsck-usr.service.
Jun 20 19:58:55.867855 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jun 20 19:58:55.867863 systemd-journald[206]: Journal started
Jun 20 19:58:55.867880 systemd-journald[206]: Runtime Journal (/run/log/journal/240c7b621b6148c1a8b37cbc31d4ad26) is 8M, max 78.5M, 70.5M free.
Jun 20 19:58:55.848271 systemd-modules-load[207]: Inserted module 'overlay'
Jun 20 19:58:55.871267 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 20 19:58:55.873762 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jun 20 19:58:55.876953 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 20 19:58:55.945759 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jun 20 19:58:55.945789 kernel: Bridge firewalling registered
Jun 20 19:58:55.904460 systemd-modules-load[207]: Inserted module 'br_netfilter'
Jun 20 19:58:55.908729 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 20 19:58:55.913156 systemd-tmpfiles[219]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jun 20 19:58:55.947430 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 19:58:55.948343 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jun 20 19:58:55.952345 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 20 19:58:55.955405 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 20 19:58:55.961350 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 20 19:58:55.969275 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 20 19:58:55.974826 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jun 20 19:58:55.980666 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 20 19:58:55.982532 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 20 19:58:55.984778 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jun 20 19:58:56.005582 dracut-cmdline[246]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=b7bb3b1ced9c5d47870a8b74c6c30075189c27e25d75251cfa7215e4bbff75ea
Jun 20 19:58:56.012918 systemd-resolved[237]: Positive Trust Anchors:
Jun 20 19:58:56.012935 systemd-resolved[237]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jun 20 19:58:56.012958 systemd-resolved[237]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jun 20 19:58:56.018230 systemd-resolved[237]: Defaulting to hostname 'linux'.
Jun 20 19:58:56.019164 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jun 20 19:58:56.019857 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jun 20 19:58:56.102284 kernel: SCSI subsystem initialized
Jun 20 19:58:56.111331 kernel: Loading iSCSI transport class v2.0-870.
Jun 20 19:58:56.120275 kernel: iscsi: registered transport (tcp)
Jun 20 19:58:56.137751 kernel: iscsi: registered transport (qla4xxx)
Jun 20 19:58:56.137790 kernel: QLogic iSCSI HBA Driver
Jun 20 19:58:56.157702 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jun 20 19:58:56.175408 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jun 20 19:58:56.178086 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jun 20 19:58:56.234564 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jun 20 19:58:56.237224 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jun 20 19:58:56.282269 kernel: raid6: avx2x4 gen() 38790 MB/s
Jun 20 19:58:56.300267 kernel: raid6: avx2x2 gen() 33965 MB/s
Jun 20 19:58:56.318406 kernel: raid6: avx2x1 gen() 25041 MB/s
Jun 20 19:58:56.318433 kernel: raid6: using algorithm avx2x4 gen() 38790 MB/s
Jun 20 19:58:56.337555 kernel: raid6: .... xor() 5298 MB/s, rmw enabled
Jun 20 19:58:56.337598 kernel: raid6: using avx2x2 recovery algorithm
Jun 20 19:58:56.354271 kernel: xor: automatically using best checksumming function avx
Jun 20 19:58:56.484288 kernel: Btrfs loaded, zoned=no, fsverity=no
Jun 20 19:58:56.492676 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jun 20 19:58:56.494686 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 20 19:58:56.515529 systemd-udevd[454]: Using default interface naming scheme 'v255'.
Jun 20 19:58:56.520567 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 20 19:58:56.523213 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jun 20 19:58:56.550445 dracut-pre-trigger[460]: rd.md=0: removing MD RAID activation
Jun 20 19:58:56.580753 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jun 20 19:58:56.582722 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jun 20 19:58:56.656154 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 20 19:58:56.658135 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jun 20 19:58:56.711322 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues
Jun 20 19:58:56.723071 kernel: scsi host0: Virtio SCSI HBA
Jun 20 19:58:56.731288 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Jun 20 19:58:56.738263 kernel: cryptd: max_cpu_qlen set to 1000
Jun 20 19:58:56.738285 kernel: libata version 3.00 loaded.
Jun 20 19:58:56.816608 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jun 20 19:58:56.828086 kernel: AES CTR mode by8 optimization enabled
Jun 20 19:58:56.822985 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 20 19:58:56.823087 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 19:58:56.847008 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 19:58:56.862822 kernel: ahci 0000:00:1f.2: version 3.0
Jun 20 19:58:56.863057 kernel: sd 0:0:0:0: Power-on or device reset occurred
Jun 20 19:58:56.863227 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jun 20 19:58:56.863256 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
Jun 20 19:58:56.863404 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jun 20 19:58:56.863536 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jun 20 19:58:56.863666 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jun 20 19:58:56.863793 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jun 20 19:58:56.863925 kernel: scsi host1: ahci
Jun 20 19:58:56.864064 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Jun 20 19:58:56.848697 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 19:58:56.873501 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jun 20 19:58:56.873686 kernel: scsi host2: ahci
Jun 20 19:58:56.873836 kernel: scsi host3: ahci
Jun 20 19:58:56.857699 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jun 20 19:58:56.875256 kernel: scsi host4: ahci
Jun 20 19:58:56.879565 kernel: scsi host5: ahci
Jun 20 19:58:56.893802 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jun 20 19:58:56.893831 kernel: GPT:9289727 != 167739391
Jun 20 19:58:56.893842 kernel: scsi host6: ahci
Jun 20 19:58:56.894000 kernel: GPT:Alternate GPT header not at the end of the disk.
Jun 20 19:58:56.894011 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 lpm-pol 0
Jun 20 19:58:56.894021 kernel: GPT:9289727 != 167739391
Jun 20 19:58:56.894030 kernel: GPT: Use GNU Parted to correct GPT errors.
Jun 20 19:58:56.894044 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jun 20 19:58:56.894053 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 lpm-pol 0
Jun 20 19:58:56.894062 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jun 20 19:58:56.894202 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 lpm-pol 0
Jun 20 19:58:56.900278 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 lpm-pol 0
Jun 20 19:58:56.900307 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 lpm-pol 0
Jun 20 19:58:56.900318 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 lpm-pol 0
Jun 20 19:58:56.960798 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 19:58:57.207928 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jun 20 19:58:57.207953 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jun 20 19:58:57.208253 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jun 20 19:58:57.210802 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jun 20 19:58:57.211263 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Jun 20 19:58:57.215261 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jun 20 19:58:57.261966 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Jun 20 19:58:57.263473 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Jun 20 19:58:57.271268 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Jun 20 19:58:57.277811 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jun 20 19:58:57.285929 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Jun 20 19:58:57.293412 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jun 20 19:58:57.294676 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jun 20 19:58:57.295424 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 20 19:58:57.296644 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jun 20 19:58:57.298465 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jun 20 19:58:57.300326 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jun 20 19:58:57.313851 disk-uuid[629]: Primary Header is updated.
Jun 20 19:58:57.313851 disk-uuid[629]: Secondary Entries is updated.
Jun 20 19:58:57.313851 disk-uuid[629]: Secondary Header is updated.
Jun 20 19:58:57.316861 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jun 20 19:58:57.322322 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jun 20 19:58:57.329259 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jun 20 19:58:58.336326 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jun 20 19:58:58.337592 disk-uuid[634]: The operation has completed successfully.
Jun 20 19:58:58.383548 systemd[1]: disk-uuid.service: Deactivated successfully.
Jun 20 19:58:58.383662 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jun 20 19:58:58.403596 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jun 20 19:58:58.419173 sh[651]: Success
Jun 20 19:58:58.434547 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jun 20 19:58:58.434579 kernel: device-mapper: uevent: version 1.0.3
Jun 20 19:58:58.436620 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jun 20 19:58:58.444277 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Jun 20 19:58:58.484276 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jun 20 19:58:58.487825 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jun 20 19:58:58.508781 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jun 20 19:58:58.520520 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jun 20 19:58:58.520552 kernel: BTRFS: device fsid 048b924a-9f97-43f5-98d6-0fff18874966 devid 1 transid 41 /dev/mapper/usr (254:0) scanned by mount (663)
Jun 20 19:58:58.523272 kernel: BTRFS info (device dm-0): first mount of filesystem 048b924a-9f97-43f5-98d6-0fff18874966
Jun 20 19:58:58.525542 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jun 20 19:58:58.528083 kernel: BTRFS info (device dm-0): using free-space-tree
Jun 20 19:58:58.535185 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jun 20 19:58:58.536137 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jun 20 19:58:58.537028 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jun 20 19:58:58.537677 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jun 20 19:58:58.540348 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jun 20 19:58:58.564633 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 (8:6) scanned by mount (696)
Jun 20 19:58:58.564661 kernel: BTRFS info (device sda6): first mount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32
Jun 20 19:58:58.568208 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jun 20 19:58:58.568228 kernel: BTRFS info (device sda6): using free-space-tree
Jun 20 19:58:58.579334 kernel: BTRFS info (device sda6): last unmount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32
Jun 20 19:58:58.580313 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jun 20 19:58:58.583337 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jun 20 19:58:58.657173 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jun 20 19:58:58.662359 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jun 20 19:58:58.691020 ignition[759]: Ignition 2.21.0
Jun 20 19:58:58.691038 ignition[759]: Stage: fetch-offline
Jun 20 19:58:58.691070 ignition[759]: no configs at "/usr/lib/ignition/base.d"
Jun 20 19:58:58.692886 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jun 20 19:58:58.691080 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jun 20 19:58:58.691144 ignition[759]: parsed url from cmdline: ""
Jun 20 19:58:58.691147 ignition[759]: no config URL provided
Jun 20 19:58:58.691151 ignition[759]: reading system config file "/usr/lib/ignition/user.ign"
Jun 20 19:58:58.691158 ignition[759]: no config at "/usr/lib/ignition/user.ign"
Jun 20 19:58:58.691162 ignition[759]: failed to fetch config: resource requires networking
Jun 20 19:58:58.691306 ignition[759]: Ignition finished successfully
Jun 20 19:58:58.701739 systemd-networkd[838]: lo: Link UP
Jun 20 19:58:58.701750 systemd-networkd[838]: lo: Gained carrier
Jun 20 19:58:58.702975 systemd-networkd[838]: Enumeration completed
Jun 20 19:58:58.703045 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jun 20 19:58:58.703637 systemd[1]: Reached target network.target - Network.
Jun 20 19:58:58.703939 systemd-networkd[838]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 19:58:58.703942 systemd-networkd[838]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jun 20 19:58:58.706666 systemd-networkd[838]: eth0: Link UP
Jun 20 19:58:58.706670 systemd-networkd[838]: eth0: Gained carrier
Jun 20 19:58:58.706678 systemd-networkd[838]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 19:58:58.706879 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jun 20 19:58:58.733620 ignition[842]: Ignition 2.21.0
Jun 20 19:58:58.733643 ignition[842]: Stage: fetch
Jun 20 19:58:58.733755 ignition[842]: no configs at "/usr/lib/ignition/base.d"
Jun 20 19:58:58.733766 ignition[842]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jun 20 19:58:58.733832 ignition[842]: parsed url from cmdline: ""
Jun 20 19:58:58.733836 ignition[842]: no config URL provided
Jun 20 19:58:58.733841 ignition[842]: reading system config file "/usr/lib/ignition/user.ign"
Jun 20 19:58:58.733849 ignition[842]: no config at "/usr/lib/ignition/user.ign"
Jun 20 19:58:58.733880 ignition[842]: PUT http://169.254.169.254/v1/token: attempt #1
Jun 20 19:58:58.734019 ignition[842]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Jun 20 19:58:58.934641 ignition[842]: PUT http://169.254.169.254/v1/token: attempt #2
Jun 20 19:58:58.934845 ignition[842]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Jun 20 19:58:59.181308 systemd-networkd[838]: eth0: DHCPv4 address 172.237.156.205/24, gateway 172.237.156.1 acquired from 23.205.167.145
Jun 20 19:58:59.335338 ignition[842]: PUT http://169.254.169.254/v1/token: attempt #3
Jun 20 19:58:59.428869 ignition[842]: PUT result: OK
Jun 20 19:58:59.428938 ignition[842]: GET http://169.254.169.254/v1/user-data: attempt #1
Jun 20 19:58:59.537628 ignition[842]: GET result: OK
Jun 20 19:58:59.537739 ignition[842]: parsing config with SHA512: e109c2e0aee52a2fe52a53cdea07898583f3b6c0523036edc7b4fab5fc24b3c616882087455ee87e24fea643c15fc14143c6aabcb4bee66cdbbd6428620d49db
Jun 20 19:58:59.543303 unknown[842]: fetched base config from "system"
Jun 20 19:58:59.543313 unknown[842]: fetched base config from "system"
Jun 20 19:58:59.543588 ignition[842]: fetch: fetch complete
Jun 20 19:58:59.543319 unknown[842]: fetched user config from "akamai"
Jun 20 19:58:59.543593 ignition[842]: fetch: fetch passed
Jun 20 19:58:59.543635 ignition[842]: Ignition finished successfully
Jun 20 19:58:59.546805 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jun 20 19:58:59.548856 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jun 20 19:58:59.595808 ignition[850]: Ignition 2.21.0
Jun 20 19:58:59.595822 ignition[850]: Stage: kargs
Jun 20 19:58:59.595945 ignition[850]: no configs at "/usr/lib/ignition/base.d"
Jun 20 19:58:59.595955 ignition[850]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jun 20 19:58:59.596818 ignition[850]: kargs: kargs passed
Jun 20 19:58:59.598377 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jun 20 19:58:59.596857 ignition[850]: Ignition finished successfully
Jun 20 19:58:59.600570 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jun 20 19:58:59.620263 ignition[857]: Ignition 2.21.0
Jun 20 19:58:59.620273 ignition[857]: Stage: disks
Jun 20 19:58:59.620370 ignition[857]: no configs at "/usr/lib/ignition/base.d"
Jun 20 19:58:59.622879 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jun 20 19:58:59.620380 ignition[857]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jun 20 19:58:59.623721 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jun 20 19:58:59.621088 ignition[857]: disks: disks passed
Jun 20 19:58:59.624640 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jun 20 19:58:59.621120 ignition[857]: Ignition finished successfully
Jun 20 19:58:59.625864 systemd[1]: Reached target local-fs.target - Local File Systems.
Jun 20 19:58:59.627003 systemd[1]: Reached target sysinit.target - System Initialization.
Jun 20 19:58:59.627968 systemd[1]: Reached target basic.target - Basic System.
Jun 20 19:58:59.629938 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jun 20 19:58:59.655512 systemd-fsck[866]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jun 20 19:58:59.657432 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jun 20 19:58:59.659987 systemd[1]: Mounting sysroot.mount - /sysroot...
Jun 20 19:58:59.769263 kernel: EXT4-fs (sda9): mounted filesystem 6290a154-3512-46a6-a5f5-a7fb62c65caa r/w with ordered data mode. Quota mode: none.
Jun 20 19:58:59.770584 systemd[1]: Mounted sysroot.mount - /sysroot.
Jun 20 19:58:59.772067 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jun 20 19:58:59.774487 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jun 20 19:58:59.777324 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jun 20 19:58:59.779037 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jun 20 19:58:59.779958 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jun 20 19:58:59.779984 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jun 20 19:58:59.787180 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jun 20 19:58:59.788744 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jun 20 19:58:59.799541 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 (8:6) scanned by mount (874)
Jun 20 19:58:59.799569 kernel: BTRFS info (device sda6): first mount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32
Jun 20 19:58:59.801381 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jun 20 19:58:59.803334 kernel: BTRFS info (device sda6): using free-space-tree
Jun 20 19:58:59.809383 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jun 20 19:58:59.838951 initrd-setup-root[898]: cut: /sysroot/etc/passwd: No such file or directory
Jun 20 19:58:59.843439 initrd-setup-root[905]: cut: /sysroot/etc/group: No such file or directory
Jun 20 19:58:59.848390 initrd-setup-root[912]: cut: /sysroot/etc/shadow: No such file or directory
Jun 20 19:58:59.851911 initrd-setup-root[919]: cut: /sysroot/etc/gshadow: No such file or directory
Jun 20 19:58:59.944475 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jun 20 19:58:59.946496 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jun 20 19:58:59.948235 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jun 20 19:58:59.962905 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jun 20 19:58:59.965276 kernel: BTRFS info (device sda6): last unmount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32
Jun 20 19:58:59.981025 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jun 20 19:58:59.991751 ignition[989]: INFO : Ignition 2.21.0
Jun 20 19:58:59.993339 ignition[989]: INFO : Stage: mount
Jun 20 19:58:59.993339 ignition[989]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 20 19:58:59.993339 ignition[989]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jun 20 19:58:59.995327 ignition[989]: INFO : mount: mount passed
Jun 20 19:58:59.995327 ignition[989]: INFO : Ignition finished successfully
Jun 20 19:58:59.995117 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jun 20 19:58:59.997221 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jun 20 19:59:00.719456 systemd-networkd[838]: eth0: Gained IPv6LL
Jun 20 19:59:00.771308 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jun 20 19:59:00.793294 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 (8:6) scanned by mount (1000)
Jun 20 19:59:00.793360 kernel: BTRFS info (device sda6): first mount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32
Jun 20 19:59:00.796289 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jun 20 19:59:00.796309 kernel: BTRFS info (device sda6): using free-space-tree
Jun 20 19:59:00.802040 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jun 20 19:59:00.829637 ignition[1016]: INFO : Ignition 2.21.0
Jun 20 19:59:00.829637 ignition[1016]: INFO : Stage: files
Jun 20 19:59:00.830891 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 20 19:59:00.830891 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jun 20 19:59:00.830891 ignition[1016]: DEBUG : files: compiled without relabeling support, skipping
Jun 20 19:59:00.833521 ignition[1016]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jun 20 19:59:00.833521 ignition[1016]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jun 20 19:59:00.836230 ignition[1016]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jun 20 19:59:00.837062 ignition[1016]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jun 20 19:59:00.837062 ignition[1016]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jun 20 19:59:00.836716 unknown[1016]: wrote ssh authorized keys file for user: core
Jun 20 19:59:00.839194 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jun 20 19:59:00.839194 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Jun 20 19:59:01.044561 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jun 20 19:59:01.168568 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jun 20 19:59:01.169616 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jun 20 19:59:01.169616 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jun 20 19:59:01.412467 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jun 20 19:59:01.525479 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jun 20 19:59:01.526587 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jun 20 19:59:01.526587 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jun 20 19:59:01.526587 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jun 20 19:59:01.526587 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jun 20 19:59:01.526587 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jun 20 19:59:01.526587 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jun 20 19:59:01.526587 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jun 20 19:59:01.526587 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jun 20 19:59:01.533255 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jun 20 19:59:01.533255 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jun 20 19:59:01.533255 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jun 20 19:59:01.533255 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jun 20 19:59:01.533255 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jun 20 19:59:01.533255 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Jun 20 19:59:02.081891 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jun 20 19:59:02.502546 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jun 20 19:59:02.502546 ignition[1016]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jun 20 19:59:02.504964 ignition[1016]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jun 20 19:59:02.506035 ignition[1016]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jun 20 19:59:02.506035 ignition[1016]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jun 20 19:59:02.506035 ignition[1016]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jun 20 19:59:02.506035 ignition[1016]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jun 20 19:59:02.506035 ignition[1016]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jun 20 19:59:02.506035 ignition[1016]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jun 20 19:59:02.506035 ignition[1016]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Jun 20 19:59:02.513908 ignition[1016]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Jun 20 19:59:02.513908 ignition[1016]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Jun 20 19:59:02.513908 ignition[1016]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jun 20 19:59:02.513908 ignition[1016]: INFO : files: files passed
Jun 20 19:59:02.513908 ignition[1016]: INFO : Ignition finished successfully
Jun 20 19:59:02.508536 systemd[1]: Finished ignition-files.service - Ignition (files).
Jun 20 19:59:02.512372 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jun 20 19:59:02.516362 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jun 20 19:59:02.526596 systemd[1]: ignition-quench.service: Deactivated successfully.
Jun 20 19:59:02.527530 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jun 20 19:59:02.533479 initrd-setup-root-after-ignition[1047]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jun 20 19:59:02.533479 initrd-setup-root-after-ignition[1047]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jun 20 19:59:02.535300 initrd-setup-root-after-ignition[1051]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jun 20 19:59:02.536428 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 20 19:59:02.538300 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jun 20 19:59:02.539610 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jun 20 19:59:02.577546 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jun 20 19:59:02.577701 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jun 20 19:59:02.579199 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jun 20 19:59:02.580080 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jun 20 19:59:02.581370 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jun 20 19:59:02.582059 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jun 20 19:59:02.620528 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jun 20 19:59:02.622442 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jun 20 19:59:02.637675 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jun 20 19:59:02.638404 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 20 19:59:02.639738 systemd[1]: Stopped target timers.target - Timer Units.
Jun 20 19:59:02.640960 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jun 20 19:59:02.641093 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jun 20 19:59:02.643435 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jun 20 19:59:02.644052 systemd[1]: Stopped target basic.target - Basic System.
Jun 20 19:59:02.645223 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jun 20 19:59:02.646378 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jun 20 19:59:02.647597 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jun 20 19:59:02.648859 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jun 20 19:59:02.650127 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jun 20 19:59:02.651444 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jun 20 19:59:02.652750 systemd[1]: Stopped target sysinit.target - System Initialization.
Jun 20 19:59:02.654001 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jun 20 19:59:02.655201 systemd[1]: Stopped target swap.target - Swaps.
Jun 20 19:59:02.656284 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jun 20 19:59:02.656432 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jun 20 19:59:02.657696 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jun 20 19:59:02.658490 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 20 19:59:02.659747 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jun 20 19:59:02.660362 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 20 19:59:02.660986 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jun 20 19:59:02.661081 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jun 20 19:59:02.662720 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jun 20 19:59:02.662870 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 20 19:59:02.663579 systemd[1]: ignition-files.service: Deactivated successfully.
Jun 20 19:59:02.663671 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jun 20 19:59:02.667320 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jun 20 19:59:02.668704 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jun 20 19:59:02.670344 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jun 20 19:59:02.670496 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 20 19:59:02.671646 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jun 20 19:59:02.672171 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jun 20 19:59:02.679417 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jun 20 19:59:02.680087 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jun 20 19:59:02.694269 ignition[1071]: INFO : Ignition 2.21.0
Jun 20 19:59:02.694269 ignition[1071]: INFO : Stage: umount
Jun 20 19:59:02.694269 ignition[1071]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 20 19:59:02.694269 ignition[1071]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jun 20 19:59:02.725169 ignition[1071]: INFO : umount: umount passed
Jun 20 19:59:02.725169 ignition[1071]: INFO : Ignition finished successfully
Jun 20 19:59:02.702335 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jun 20 19:59:02.702939 systemd[1]: ignition-mount.service: Deactivated successfully.
Jun 20 19:59:02.703055 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jun 20 19:59:02.722761 systemd[1]: ignition-disks.service: Deactivated successfully.
Jun 20 19:59:02.722842 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jun 20 19:59:02.725965 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jun 20 19:59:02.726024 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jun 20 19:59:02.727224 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jun 20 19:59:02.727311 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jun 20 19:59:02.727840 systemd[1]: Stopped target network.target - Network.
Jun 20 19:59:02.728393 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jun 20 19:59:02.728442 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jun 20 19:59:02.730929 systemd[1]: Stopped target paths.target - Path Units.
Jun 20 19:59:02.733820 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jun 20 19:59:02.737486 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 20 19:59:02.738452 systemd[1]: Stopped target slices.target - Slice Units.
Jun 20 19:59:02.738913 systemd[1]: Stopped target sockets.target - Socket Units.
Jun 20 19:59:02.739578 systemd[1]: iscsid.socket: Deactivated successfully.
Jun 20 19:59:02.739622 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jun 20 19:59:02.740222 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jun 20 19:59:02.740297 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 20 19:59:02.740832 systemd[1]: ignition-setup.service: Deactivated successfully.
Jun 20 19:59:02.740882 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jun 20 19:59:02.741443 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jun 20 19:59:02.741488 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jun 20 19:59:02.742149 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jun 20 19:59:02.744690 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jun 20 19:59:02.754774 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jun 20 19:59:02.754950 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jun 20 19:59:02.758671 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jun 20 19:59:02.758922 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jun 20 19:59:02.759041 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jun 20 19:59:02.762232 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jun 20 19:59:02.762506 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jun 20 19:59:02.762632 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jun 20 19:59:02.764638 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jun 20 19:59:02.765758 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jun 20 19:59:02.765804 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jun 20 19:59:02.766974 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jun 20 19:59:02.767028 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jun 20 19:59:02.769010 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jun 20 19:59:02.770568 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jun 20 19:59:02.770619 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jun 20 19:59:02.772646 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jun 20 19:59:02.772703 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jun 20 19:59:02.775048 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jun 20 19:59:02.775096 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jun 20 19:59:02.776213 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jun 20 19:59:02.776288 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jun 20 19:59:02.778753 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 20 19:59:02.782049 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jun 20 19:59:02.782115 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jun 20 19:59:02.792578 systemd[1]: network-cleanup.service: Deactivated successfully.
Jun 20 19:59:02.792718 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jun 20 19:59:02.795653 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jun 20 19:59:02.795849 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 20 19:59:02.797470 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jun 20 19:59:02.797534 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jun 20 19:59:02.798411 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jun 20 19:59:02.798448 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 20 19:59:02.799630 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jun 20 19:59:02.799678 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jun 20 19:59:02.801384 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jun 20 19:59:02.801430 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jun 20 19:59:02.802578 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jun 20 19:59:02.802629 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 20 19:59:02.805355 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jun 20 19:59:02.806327 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jun 20 19:59:02.806379 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jun 20 19:59:02.809756 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jun 20 19:59:02.809810 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 20 19:59:02.810859 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jun 20 19:59:02.810906 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 20 19:59:02.812374 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jun 20 19:59:02.812419 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 20 19:59:02.813354 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 20 19:59:02.813398 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 19:59:02.818335 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jun 20 19:59:02.818392 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Jun 20 19:59:02.818433 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jun 20 19:59:02.818478 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jun 20 19:59:02.822672 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jun 20 19:59:02.822779 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jun 20 19:59:02.824996 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jun 20 19:59:02.827344 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jun 20 19:59:02.860581 systemd[1]: Switching root.
Jun 20 19:59:02.889888 systemd-journald[206]: Journal stopped
Jun 20 19:59:03.874359 systemd-journald[206]: Received SIGTERM from PID 1 (systemd).
Jun 20 19:59:03.874386 kernel: SELinux: policy capability network_peer_controls=1
Jun 20 19:59:03.874398 kernel: SELinux: policy capability open_perms=1
Jun 20 19:59:03.874410 kernel: SELinux: policy capability extended_socket_class=1
Jun 20 19:59:03.874419 kernel: SELinux: policy capability always_check_network=0
Jun 20 19:59:03.874427 kernel: SELinux: policy capability cgroup_seclabel=1
Jun 20 19:59:03.874437 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jun 20 19:59:03.874446 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jun 20 19:59:03.874454 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jun 20 19:59:03.874463 kernel: SELinux: policy capability userspace_initial_context=0
Jun 20 19:59:03.874474 kernel: audit: type=1403 audit(1750449543.016:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jun 20 19:59:03.874484 systemd[1]: Successfully loaded SELinux policy in 54.060ms.
Jun 20 19:59:03.874494 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.940ms.
Jun 20 19:59:03.874505 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jun 20 19:59:03.874516 systemd[1]: Detected virtualization kvm.
Jun 20 19:59:03.874528 systemd[1]: Detected architecture x86-64.
Jun 20 19:59:03.874537 systemd[1]: Detected first boot.
Jun 20 19:59:03.874547 systemd[1]: Initializing machine ID from random generator.
Jun 20 19:59:03.874556 zram_generator::config[1114]: No configuration found.
Jun 20 19:59:03.874566 kernel: Guest personality initialized and is inactive
Jun 20 19:59:03.874575 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jun 20 19:59:03.874584 kernel: Initialized host personality
Jun 20 19:59:03.874594 kernel: NET: Registered PF_VSOCK protocol family
Jun 20 19:59:03.874604 systemd[1]: Populated /etc with preset unit settings.
Jun 20 19:59:03.874614 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jun 20 19:59:03.874624 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jun 20 19:59:03.874633 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jun 20 19:59:03.874642 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jun 20 19:59:03.874652 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jun 20 19:59:03.874663 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jun 20 19:59:03.874673 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jun 20 19:59:03.874682 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jun 20 19:59:03.874692 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jun 20 19:59:03.874701 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jun 20 19:59:03.874711 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jun 20 19:59:03.874721 systemd[1]: Created slice user.slice - User and Session Slice.
Jun 20 19:59:03.874733 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 20 19:59:03.874742 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 20 19:59:03.874752 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jun 20 19:59:03.874762 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jun 20 19:59:03.874774 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jun 20 19:59:03.874784 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 20 19:59:03.874794 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jun 20 19:59:03.874804 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 20 19:59:03.874815 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 20 19:59:03.874825 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jun 20 19:59:03.874835 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jun 20 19:59:03.874845 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jun 20 19:59:03.874854 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jun 20 19:59:03.874864 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 20 19:59:03.874873 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jun 20 19:59:03.874883 systemd[1]: Reached target slices.target - Slice Units.
Jun 20 19:59:03.874894 systemd[1]: Reached target swap.target - Swaps.
Jun 20 19:59:03.874904 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jun 20 19:59:03.874914 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jun 20 19:59:03.874923 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jun 20 19:59:03.874934 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 20 19:59:03.874946 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 20 19:59:03.874956 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 20 19:59:03.874966 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jun 20 19:59:03.874975 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jun 20 19:59:03.874985 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jun 20 19:59:03.874995 systemd[1]: Mounting media.mount - External Media Directory...
Jun 20 19:59:03.875004 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:59:03.875014 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jun 20 19:59:03.875025 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jun 20 19:59:03.875035 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jun 20 19:59:03.875045 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jun 20 19:59:03.875055 systemd[1]: Reached target machines.target - Containers.
Jun 20 19:59:03.875065 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jun 20 19:59:03.875075 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 20 19:59:03.875085 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 20 19:59:03.875094 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jun 20 19:59:03.875106 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 20 19:59:03.875115 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jun 20 19:59:03.875125 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 20 19:59:03.875135 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jun 20 19:59:03.875145 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 20 19:59:03.875166 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jun 20 19:59:03.875176 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jun 20 19:59:03.875186 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jun 20 19:59:03.875955 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jun 20 19:59:03.875975 systemd[1]: Stopped systemd-fsck-usr.service.
Jun 20 19:59:03.875987 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jun 20 19:59:03.875997 kernel: loop: module loaded
Jun 20 19:59:03.876007 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 20 19:59:03.876017 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 20 19:59:03.876027 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jun 20 19:59:03.876036 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jun 20 19:59:03.876046 kernel: ACPI: bus type drm_connector registered
Jun 20 19:59:03.876058 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jun 20 19:59:03.876067 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jun 20 19:59:03.876099 systemd-journald[1198]: Collecting audit messages is disabled.
Jun 20 19:59:03.876121 systemd[1]: verity-setup.service: Deactivated successfully.
Jun 20 19:59:03.876134 systemd[1]: Stopped verity-setup.service.
Jun 20 19:59:03.876145 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:59:03.876155 systemd-journald[1198]: Journal started
Jun 20 19:59:03.876174 systemd-journald[1198]: Runtime Journal (/run/log/journal/2b225bd3ab5f4dc289f0a99134c26b38) is 8M, max 78.5M, 70.5M free.
Jun 20 19:59:03.569557 systemd[1]: Queued start job for default target multi-user.target.
Jun 20 19:59:03.583287 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jun 20 19:59:03.583846 systemd[1]: systemd-journald.service: Deactivated successfully.
Jun 20 19:59:03.883415 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 20 19:59:03.883444 kernel: fuse: init (API version 7.41)
Jun 20 19:59:03.886024 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jun 20 19:59:03.886689 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jun 20 19:59:03.887402 systemd[1]: Mounted media.mount - External Media Directory.
Jun 20 19:59:03.889481 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jun 20 19:59:03.890146 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jun 20 19:59:03.890844 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jun 20 19:59:03.892310 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jun 20 19:59:03.893678 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 20 19:59:03.895726 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jun 20 19:59:03.895974 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jun 20 19:59:03.896859 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 20 19:59:03.897690 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 20 19:59:03.900634 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jun 20 19:59:03.900921 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jun 20 19:59:03.903114 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 20 19:59:03.903371 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 20 19:59:03.904992 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jun 20 19:59:03.905390 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jun 20 19:59:03.906411 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 20 19:59:03.907033 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 20 19:59:03.908673 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 20 19:59:03.912392 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jun 20 19:59:03.915409 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jun 20 19:59:03.931135 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jun 20 19:59:03.935785 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jun 20 19:59:03.939316 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jun 20 19:59:03.942389 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jun 20 19:59:03.943693 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jun 20 19:59:03.943776 systemd[1]: Reached target local-fs.target - Local File Systems.
Jun 20 19:59:03.945195 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jun 20 19:59:03.951485 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jun 20 19:59:03.952280 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 20 19:59:03.956119 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jun 20 19:59:03.959348 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jun 20 19:59:03.959941 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jun 20 19:59:03.963369 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jun 20 19:59:03.964657 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jun 20 19:59:03.969193 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 20 19:59:03.973359 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jun 20 19:59:03.978865 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jun 20 19:59:03.982168 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jun 20 19:59:03.983686 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jun 20 19:59:03.991036 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jun 20 19:59:03.994915 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jun 20 19:59:04.001709 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jun 20 19:59:04.009832 systemd-journald[1198]: Time spent on flushing to /var/log/journal/2b225bd3ab5f4dc289f0a99134c26b38 is 51.982ms for 1006 entries.
Jun 20 19:59:04.009832 systemd-journald[1198]: System Journal (/var/log/journal/2b225bd3ab5f4dc289f0a99134c26b38) is 8M, max 195.6M, 187.6M free.
Jun 20 19:59:04.080180 systemd-journald[1198]: Received client request to flush runtime journal.
Jun 20 19:59:04.080225 kernel: loop0: detected capacity change from 0 to 8
Jun 20 19:59:04.080311 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jun 20 19:59:04.080334 kernel: loop1: detected capacity change from 0 to 146240
Jun 20 19:59:04.027306 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 20 19:59:04.056591 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 20 19:59:04.058205 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jun 20 19:59:04.083772 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jun 20 19:59:04.085145 systemd-tmpfiles[1240]: ACLs are not supported, ignoring.
Jun 20 19:59:04.085173 systemd-tmpfiles[1240]: ACLs are not supported, ignoring.
Jun 20 19:59:04.096333 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 20 19:59:04.101446 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jun 20 19:59:04.114286 kernel: loop2: detected capacity change from 0 to 224512
Jun 20 19:59:04.143883 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jun 20 19:59:04.146452 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 20 19:59:04.153272 kernel: loop3: detected capacity change from 0 to 113872
Jun 20 19:59:04.182498 systemd-tmpfiles[1261]: ACLs are not supported, ignoring.
Jun 20 19:59:04.182516 systemd-tmpfiles[1261]: ACLs are not supported, ignoring.
Jun 20 19:59:04.194782 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 20 19:59:04.203314 kernel: loop4: detected capacity change from 0 to 8
Jun 20 19:59:04.206288 kernel: loop5: detected capacity change from 0 to 146240
Jun 20 19:59:04.226268 kernel: loop6: detected capacity change from 0 to 224512
Jun 20 19:59:04.246299 kernel: loop7: detected capacity change from 0 to 113872
Jun 20 19:59:04.256008 (sd-merge)[1265]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'.
Jun 20 19:59:04.256885 (sd-merge)[1265]: Merged extensions into '/usr'.
Jun 20 19:59:04.261333 systemd[1]: Reload requested from client PID 1239 ('systemd-sysext') (unit systemd-sysext.service)...
Jun 20 19:59:04.261472 systemd[1]: Reloading...
Jun 20 19:59:04.349271 zram_generator::config[1290]: No configuration found.
Jun 20 19:59:04.486835 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 20 19:59:04.528320 ldconfig[1234]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jun 20 19:59:04.571316 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jun 20 19:59:04.571536 systemd[1]: Reloading finished in 309 ms.
Jun 20 19:59:04.582096 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jun 20 19:59:04.583327 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jun 20 19:59:04.597349 systemd[1]: Starting ensure-sysext.service...
Jun 20 19:59:04.601401 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jun 20 19:59:04.626054 systemd[1]: Reload requested from client PID 1334 ('systemctl') (unit ensure-sysext.service)...
Jun 20 19:59:04.626157 systemd[1]: Reloading...
Jun 20 19:59:04.637630 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jun 20 19:59:04.638002 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jun 20 19:59:04.638380 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jun 20 19:59:04.638730 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jun 20 19:59:04.639603 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jun 20 19:59:04.639916 systemd-tmpfiles[1335]: ACLs are not supported, ignoring.
Jun 20 19:59:04.640042 systemd-tmpfiles[1335]: ACLs are not supported, ignoring.
Jun 20 19:59:04.648212 systemd-tmpfiles[1335]: Detected autofs mount point /boot during canonicalization of boot.
Jun 20 19:59:04.648224 systemd-tmpfiles[1335]: Skipping /boot
Jun 20 19:59:04.671137 systemd-tmpfiles[1335]: Detected autofs mount point /boot during canonicalization of boot.
Jun 20 19:59:04.673265 systemd-tmpfiles[1335]: Skipping /boot
Jun 20 19:59:04.709362 zram_generator::config[1362]: No configuration found.
Jun 20 19:59:04.801768 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 20 19:59:04.872484 systemd[1]: Reloading finished in 245 ms.
Jun 20 19:59:04.882194 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jun 20 19:59:04.898862 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jun 20 19:59:04.906922 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jun 20 19:59:04.909439 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jun 20 19:59:04.916112 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jun 20 19:59:04.919487 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jun 20 19:59:04.924550 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 20 19:59:04.930084 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jun 20 19:59:04.938031 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:59:04.938432 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 20 19:59:04.940312 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 20 19:59:04.944698 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 20 19:59:04.949525 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 20 19:59:04.950427 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 20 19:59:04.950515 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jun 20 19:59:04.953094 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jun 20 19:59:04.954274 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:59:04.960728 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:59:04.961203 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 20 19:59:04.962400 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 20 19:59:04.962523 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jun 20 19:59:04.962645 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:59:04.967833 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:59:04.968479 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 20 19:59:04.975527 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jun 20 19:59:04.976310 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 20 19:59:04.977348 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jun 20 19:59:04.977491 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:59:04.978300 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jun 20 19:59:04.984805 systemd[1]: Finished ensure-sysext.service.
Jun 20 19:59:04.991442 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jun 20 19:59:05.000986 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jun 20 19:59:05.005087 systemd-udevd[1411]: Using default interface naming scheme 'v255'.
Jun 20 19:59:05.006426 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jun 20 19:59:05.018307 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 20 19:59:05.019531 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 20 19:59:05.021841 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 20 19:59:05.022062 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 20 19:59:05.025651 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jun 20 19:59:05.030093 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 20 19:59:05.031596 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 20 19:59:05.032638 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jun 20 19:59:05.037442 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jun 20 19:59:05.038677 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jun 20 19:59:05.047930 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jun 20 19:59:05.060283 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jun 20 19:59:05.061420 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jun 20 19:59:05.062007 augenrules[1448]: No rules
Jun 20 19:59:05.064757 systemd[1]: audit-rules.service: Deactivated successfully.
Jun 20 19:59:05.065314 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jun 20 19:59:05.066696 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 20 19:59:05.072456 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jun 20 19:59:05.075164 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jun 20 19:59:05.206384 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jun 20 19:59:05.236710 systemd-networkd[1458]: lo: Link UP
Jun 20 19:59:05.236719 systemd-networkd[1458]: lo: Gained carrier
Jun 20 19:59:05.238683 systemd-networkd[1458]: Enumeration completed
Jun 20 19:59:05.238760 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jun 20 19:59:05.240949 systemd-networkd[1458]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 19:59:05.240954 systemd-networkd[1458]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jun 20 19:59:05.241589 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jun 20 19:59:05.244607 systemd-networkd[1458]: eth0: Link UP
Jun 20 19:59:05.244776 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jun 20 19:59:05.244795 systemd-networkd[1458]: eth0: Gained carrier
Jun 20 19:59:05.244810 systemd-networkd[1458]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 19:59:05.265777 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jun 20 19:59:05.322371 systemd-resolved[1410]: Positive Trust Anchors:
Jun 20 19:59:05.322396 systemd-resolved[1410]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jun 20 19:59:05.322423 systemd-resolved[1410]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jun 20 19:59:05.329640 systemd-resolved[1410]: Defaulting to hostname 'linux'.
Jun 20 19:59:05.333764 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jun 20 19:59:05.334862 systemd[1]: Reached target network.target - Network.
Jun 20 19:59:05.336567 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jun 20 19:59:05.350807 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jun 20 19:59:05.351456 systemd[1]: Reached target sysinit.target - System Initialization.
Jun 20 19:59:05.352184 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jun 20 19:59:05.353364 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jun 20 19:59:05.353952 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jun 20 19:59:05.355503 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jun 20 19:59:05.356096 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jun 20 19:59:05.356125 systemd[1]: Reached target paths.target - Path Units.
Jun 20 19:59:05.356633 systemd[1]: Reached target time-set.target - System Time Set.
Jun 20 19:59:05.357519 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jun 20 19:59:05.358404 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jun 20 19:59:05.359379 systemd[1]: Reached target timers.target - Timer Units.
Jun 20 19:59:05.363478 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jun 20 19:59:05.368889 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jun 20 19:59:05.375389 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jun 20 19:59:05.376687 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jun 20 19:59:05.379076 kernel: mousedev: PS/2 mouse device common for all mice
Jun 20 19:59:05.378325 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jun 20 19:59:05.387548 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jun 20 19:59:05.389662 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jun 20 19:59:05.390883 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jun 20 19:59:05.392864 systemd[1]: Reached target sockets.target - Socket Units.
Jun 20 19:59:05.394082 systemd[1]: Reached target basic.target - Basic System.
Jun 20 19:59:05.395232 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jun 20 19:59:05.395298 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jun 20 19:59:05.397367 systemd[1]: Starting containerd.service - containerd container runtime...
Jun 20 19:59:05.400436 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jun 20 19:59:05.402456 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jun 20 19:59:05.408438 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jun 20 19:59:05.414123 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jun 20 19:59:05.411409 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jun 20 19:59:05.415847 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jun 20 19:59:05.416698 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jun 20 19:59:05.421791 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jun 20 19:59:05.422063 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jun 20 19:59:05.431428 kernel: ACPI: button: Power Button [PWRF]
Jun 20 19:59:05.429262 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jun 20 19:59:05.443434 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jun 20 19:59:05.455090 jq[1508]: false
Jun 20 19:59:05.455687 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jun 20 19:59:05.496470 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jun 20 19:59:05.505021 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jun 20 19:59:05.512543 systemd[1]: Starting systemd-logind.service - User Login Management...
Jun 20 19:59:05.515226 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jun 20 19:59:05.516479 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jun 20 19:59:05.522817 google_oslogin_nss_cache[1510]: oslogin_cache_refresh[1510]: Refreshing passwd entry cache
Jun 20 19:59:05.522951 systemd[1]: Starting update-engine.service - Update Engine...
Jun 20 19:59:05.525217 oslogin_cache_refresh[1510]: Refreshing passwd entry cache
Jun 20 19:59:05.535739 extend-filesystems[1509]: Found /dev/sda6
Jun 20 19:59:05.531683 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jun 20 19:59:05.551272 google_oslogin_nss_cache[1510]: oslogin_cache_refresh[1510]: Failure getting users, quitting
Jun 20 19:59:05.551272 google_oslogin_nss_cache[1510]: oslogin_cache_refresh[1510]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jun 20 19:59:05.551272 google_oslogin_nss_cache[1510]: oslogin_cache_refresh[1510]: Refreshing group entry cache
Jun 20 19:59:05.551272 google_oslogin_nss_cache[1510]: oslogin_cache_refresh[1510]: Failure getting groups, quitting
Jun 20 19:59:05.551272 google_oslogin_nss_cache[1510]: oslogin_cache_refresh[1510]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jun 20 19:59:05.550687 oslogin_cache_refresh[1510]: Failure getting users, quitting
Jun 20 19:59:05.550704 oslogin_cache_refresh[1510]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jun 20 19:59:05.550745 oslogin_cache_refresh[1510]: Refreshing group entry cache
Jun 20 19:59:05.551221 oslogin_cache_refresh[1510]: Failure getting groups, quitting
Jun 20 19:59:05.551229 oslogin_cache_refresh[1510]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jun 20 19:59:05.554499 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jun 20 19:59:05.555522 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jun 20 19:59:05.555744 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jun 20 19:59:05.557992 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jun 20 19:59:05.558210 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jun 20 19:59:05.568766 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jun 20 19:59:05.569024 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jun 20 19:59:05.579469 extend-filesystems[1509]: Found /dev/sda9
Jun 20 19:59:05.595916 extend-filesystems[1509]: Checking size of /dev/sda9
Jun 20 19:59:05.601506 jq[1524]: true
Jun 20 19:59:05.598869 (ntainerd)[1543]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jun 20 19:59:05.608888 update_engine[1523]: I20250620 19:59:05.605937 1523 main.cc:92] Flatcar Update Engine starting
Jun 20 19:59:05.620572 systemd[1]: motdgen.service: Deactivated successfully.
Jun 20 19:59:05.620865 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jun 20 19:59:05.629809 dbus-daemon[1506]: [system] SELinux support is enabled
Jun 20 19:59:05.630209 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jun 20 19:59:05.633040 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jun 20 19:59:05.633074 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jun 20 19:59:05.634715 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jun 20 19:59:05.634786 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jun 20 19:59:05.640063 extend-filesystems[1509]: Resized partition /dev/sda9
Jun 20 19:59:05.643094 coreos-metadata[1505]: Jun 20 19:59:05.642 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Jun 20 19:59:05.648896 tar[1530]: linux-amd64/LICENSE
Jun 20 19:59:05.652321 tar[1530]: linux-amd64/helm
Jun 20 19:59:05.652361 extend-filesystems[1556]: resize2fs 1.47.2 (1-Jan-2025)
Jun 20 19:59:05.657267 jq[1548]: true
Jun 20 19:59:05.663301 update_engine[1523]: I20250620 19:59:05.658731 1523 update_check_scheduler.cc:74] Next update check in 9m24s
Jun 20 19:59:05.659027 systemd[1]: Started update-engine.service - Update Engine.
Jun 20 19:59:05.667016 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks
Jun 20 19:59:05.667215 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jun 20 19:59:05.703202 systemd-logind[1522]: New seat seat0.
Jun 20 19:59:05.705535 systemd-networkd[1458]: eth0: DHCPv4 address 172.237.156.205/24, gateway 172.237.156.1 acquired from 23.205.167.145
Jun 20 19:59:05.705460 dbus-daemon[1506]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.3' (uid=244 pid=1458 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jun 20 19:59:05.711755 systemd-timesyncd[1434]: Network configuration changed, trying to establish connection.
Jun 20 19:59:05.714690 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jun 20 19:59:05.718394 systemd[1]: Started systemd-logind.service - User Login Management.
Jun 20 19:59:05.811801 kernel: EDAC MC: Ver: 3.0.0
Jun 20 19:59:05.835263 bash[1582]: Updated "/home/core/.ssh/authorized_keys"
Jun 20 19:59:05.836931 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jun 20 19:59:05.845039 systemd[1]: Starting sshkeys.service...
Jun 20 19:59:07.234258 systemd-resolved[1410]: Clock change detected. Flushing caches.
Jun 20 19:59:07.234262 systemd-timesyncd[1434]: Contacted time server 139.94.144.123:123 (0.flatcar.pool.ntp.org).
Jun 20 19:59:07.235700 systemd-timesyncd[1434]: Initial clock synchronization to Fri 2025-06-20 19:59:07.234020 UTC.
Jun 20 19:59:07.289052 containerd[1543]: time="2025-06-20T19:59:07Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jun 20 19:59:07.302742 containerd[1543]: time="2025-06-20T19:59:07.302702921Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Jun 20 19:59:07.303546 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jun 20 19:59:07.306715 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jun 20 19:59:07.344384 kernel: EXT4-fs (sda9): resized filesystem to 20360187
Jun 20 19:59:07.364214 containerd[1543]: time="2025-06-20T19:59:07.363116301Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.81µs"
Jun 20 19:59:07.364214 containerd[1543]: time="2025-06-20T19:59:07.363154161Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jun 20 19:59:07.364214 containerd[1543]: time="2025-06-20T19:59:07.363173761Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jun 20 19:59:07.364214 containerd[1543]: time="2025-06-20T19:59:07.363417710Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jun 20 19:59:07.364214 containerd[1543]: time="2025-06-20T19:59:07.363438470Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jun 20 19:59:07.364214 containerd[1543]: time="2025-06-20T19:59:07.363466670Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jun 20 19:59:07.364214 containerd[1543]: time="2025-06-20T19:59:07.363544280Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jun 20 19:59:07.364214 containerd[1543]: time="2025-06-20T19:59:07.363555270Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jun 20 19:59:07.364214 containerd[1543]: time="2025-06-20T19:59:07.363783460Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jun 20 19:59:07.364214 containerd[1543]: time="2025-06-20T19:59:07.363796470Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jun 20 19:59:07.364214 containerd[1543]: time="2025-06-20T19:59:07.363807330Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jun 20 19:59:07.364214 containerd[1543]: time="2025-06-20T19:59:07.363815380Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jun 20 19:59:07.364623 containerd[1543]: time="2025-06-20T19:59:07.363911430Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jun 20 19:59:07.364623 containerd[1543]: time="2025-06-20T19:59:07.364145890Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jun 20 19:59:07.364623 containerd[1543]: time="2025-06-20T19:59:07.364177500Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jun 20 19:59:07.364623 containerd[1543]: time="2025-06-20T19:59:07.364187420Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jun 20 19:59:07.364732 extend-filesystems[1556]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Jun 20 19:59:07.364732 extend-filesystems[1556]: old_desc_blocks = 1, new_desc_blocks = 10
Jun 20 19:59:07.364732 extend-filesystems[1556]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long.
Jun 20 19:59:07.368407 extend-filesystems[1509]: Resized filesystem in /dev/sda9
Jun 20 19:59:07.368165 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jun 20 19:59:07.369504 containerd[1543]: time="2025-06-20T19:59:07.366522849Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jun 20 19:59:07.369504 containerd[1543]: time="2025-06-20T19:59:07.369050248Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jun 20 19:59:07.369504 containerd[1543]: time="2025-06-20T19:59:07.369128028Z" level=info msg="metadata content store policy set" policy=shared
Jun 20 19:59:07.369510 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jun 20 19:59:07.373152 containerd[1543]: time="2025-06-20T19:59:07.372547706Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jun 20 19:59:07.373152 containerd[1543]: time="2025-06-20T19:59:07.372600636Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jun 20 19:59:07.373152 containerd[1543]: time="2025-06-20T19:59:07.372616876Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jun 20 19:59:07.373152 containerd[1543]: time="2025-06-20T19:59:07.372630006Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jun 20 19:59:07.373152 containerd[1543]: time="2025-06-20T19:59:07.372676176Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jun 20 19:59:07.373152 containerd[1543]: time="2025-06-20T19:59:07.372689046Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jun 20 19:59:07.373152 containerd[1543]: time="2025-06-20T19:59:07.372702976Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jun 20 19:59:07.373152 containerd[1543]: time="2025-06-20T19:59:07.372716326Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jun 20 19:59:07.373152 containerd[1543]: time="2025-06-20T19:59:07.372727696Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jun 20 19:59:07.373152 containerd[1543]: time="2025-06-20T19:59:07.372738086Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jun 20 19:59:07.373152 containerd[1543]: time="2025-06-20T19:59:07.372746916Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jun 20 19:59:07.373152 containerd[1543]: time="2025-06-20T19:59:07.372764656Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jun 20 19:59:07.373152 containerd[1543]: time="2025-06-20T19:59:07.372869896Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jun 20 19:59:07.373152 containerd[1543]: time="2025-06-20T19:59:07.372890436Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jun 20 19:59:07.373515 containerd[1543]: time="2025-06-20T19:59:07.372904326Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jun 20 19:59:07.373515 containerd[1543]: time="2025-06-20T19:59:07.372915286Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jun 20 19:59:07.373515 containerd[1543]: time="2025-06-20T19:59:07.372925856Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jun 20 19:59:07.373515 containerd[1543]: time="2025-06-20T19:59:07.372935866Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jun 20 19:59:07.373515 containerd[1543]: time="2025-06-20T19:59:07.372946996Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jun 20 19:59:07.373515 containerd[1543]: time="2025-06-20T19:59:07.372956796Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jun 20 19:59:07.373515 containerd[1543]: time="2025-06-20T19:59:07.372967426Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jun 20 19:59:07.373515 containerd[1543]: time="2025-06-20T19:59:07.372978686Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jun 20 19:59:07.373515 containerd[1543]: time="2025-06-20T19:59:07.372989486Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jun 20 19:59:07.373515 containerd[1543]: time="2025-06-20T19:59:07.373114456Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jun 20 19:59:07.373515 containerd[1543]: time="2025-06-20T19:59:07.373130166Z" level=info msg="Start snapshots syncer"
Jun 20 19:59:07.374165 containerd[1543]: time="2025-06-20T19:59:07.373708455Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jun 20 19:59:07.374165 containerd[1543]: time="2025-06-20T19:59:07.374038075Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jun 20 19:59:07.374330 containerd[1543]: time="2025-06-20T19:59:07.374100725Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jun 20 19:59:07.375874 containerd[1543]: time="2025-06-20T19:59:07.375852314Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jun 20 19:59:07.376320 containerd[1543]: time="2025-06-20T19:59:07.376032714Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jun 20 19:59:07.376320 containerd[1543]: time="2025-06-20T19:59:07.376058804Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jun 20 19:59:07.376320 containerd[1543]: time="2025-06-20T19:59:07.376070174Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jun 20 19:59:07.376320 containerd[1543]: time="2025-06-20T19:59:07.376081234Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jun 20 19:59:07.376320 containerd[1543]: time="2025-06-20T19:59:07.376094064Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jun 20 19:59:07.376320 containerd[1543]: time="2025-06-20T19:59:07.376104454Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jun 20 19:59:07.376320 containerd[1543]: time="2025-06-20T19:59:07.376114914Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jun 20 19:59:07.376320 containerd[1543]: time="2025-06-20T19:59:07.376137204Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jun 20 19:59:07.376320 containerd[1543]: time="2025-06-20T19:59:07.376147354Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jun 20 19:59:07.376320 containerd[1543]: time="2025-06-20T19:59:07.376159554Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jun 20 19:59:07.377147 containerd[1543]: time="2025-06-20T19:59:07.376911384Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jun 20 19:59:07.377147 containerd[1543]: time="2025-06-20T19:59:07.376936254Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jun 20 19:59:07.377147 containerd[1543]: time="2025-06-20T19:59:07.377003424Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jun 20 19:59:07.377147 containerd[1543]: time="2025-06-20T19:59:07.377015004Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jun 20 19:59:07.377147 containerd[1543]: time="2025-06-20T19:59:07.377022374Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jun 20 19:59:07.377147 containerd[1543]: time="2025-06-20T19:59:07.377031554Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jun 20 19:59:07.377147 containerd[1543]: time="2025-06-20T19:59:07.377041784Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jun 20
19:59:07.377147 containerd[1543]: time="2025-06-20T19:59:07.377062124Z" level=info msg="runtime interface created" Jun 20 19:59:07.377147 containerd[1543]: time="2025-06-20T19:59:07.377067064Z" level=info msg="created NRI interface" Jun 20 19:59:07.377147 containerd[1543]: time="2025-06-20T19:59:07.377073964Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jun 20 19:59:07.377147 containerd[1543]: time="2025-06-20T19:59:07.377084304Z" level=info msg="Connect containerd service" Jun 20 19:59:07.377147 containerd[1543]: time="2025-06-20T19:59:07.377118074Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 20 19:59:07.378248 containerd[1543]: time="2025-06-20T19:59:07.378228353Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 20 19:59:07.428610 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jun 20 19:59:07.432412 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 20 19:59:07.478483 coreos-metadata[1591]: Jun 20 19:59:07.478 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Jun 20 19:59:07.479740 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:59:07.506133 systemd-logind[1522]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 20 19:59:07.517998 systemd-logind[1522]: Watching system buttons on /dev/input/event2 (Power Button) Jun 20 19:59:07.525222 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Jun 20 19:59:07.572471 locksmithd[1560]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 20 19:59:07.574955 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jun 20 19:59:07.580563 dbus-daemon[1506]: [system] Successfully activated service 'org.freedesktop.hostname1' Jun 20 19:59:07.583670 dbus-daemon[1506]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1563 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jun 20 19:59:07.588209 systemd[1]: Starting polkit.service - Authorization Manager... Jun 20 19:59:07.599454 coreos-metadata[1591]: Jun 20 19:59:07.599 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Jun 20 19:59:07.644457 containerd[1543]: time="2025-06-20T19:59:07.644426830Z" level=info msg="Start subscribing containerd event" Jun 20 19:59:07.644614 containerd[1543]: time="2025-06-20T19:59:07.644563800Z" level=info msg="Start recovering state" Jun 20 19:59:07.645634 containerd[1543]: time="2025-06-20T19:59:07.645616029Z" level=info msg="Start event monitor" Jun 20 19:59:07.646775 containerd[1543]: time="2025-06-20T19:59:07.646530919Z" level=info msg="Start cni network conf syncer for default" Jun 20 19:59:07.646775 containerd[1543]: time="2025-06-20T19:59:07.646561319Z" level=info msg="Start streaming server" Jun 20 19:59:07.646775 containerd[1543]: time="2025-06-20T19:59:07.646634409Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jun 20 19:59:07.646775 containerd[1543]: time="2025-06-20T19:59:07.646642599Z" level=info msg="runtime interface starting up..." Jun 20 19:59:07.646775 containerd[1543]: time="2025-06-20T19:59:07.646648289Z" level=info msg="starting plugins..." 
Jun 20 19:59:07.646775 containerd[1543]: time="2025-06-20T19:59:07.646665689Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jun 20 19:59:07.646775 containerd[1543]: time="2025-06-20T19:59:07.645705829Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 20 19:59:07.647190 containerd[1543]: time="2025-06-20T19:59:07.647150478Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 20 19:59:07.647959 systemd[1]: Started containerd.service - containerd container runtime. Jun 20 19:59:07.648714 containerd[1543]: time="2025-06-20T19:59:07.648424808Z" level=info msg="containerd successfully booted in 0.362473s" Jun 20 19:59:07.691335 sshd_keygen[1552]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 20 19:59:07.736648 coreos-metadata[1591]: Jun 20 19:59:07.736 INFO Fetch successful Jun 20 19:59:07.752635 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 20 19:59:07.781672 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 20 19:59:07.791656 update-ssh-keys[1643]: Updated "/home/core/.ssh/authorized_keys" Jun 20 19:59:07.806764 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jun 20 19:59:07.842458 systemd[1]: Finished sshkeys.service. Jun 20 19:59:07.859009 polkitd[1628]: Started polkitd version 126 Jun 20 19:59:07.864262 polkitd[1628]: Loading rules from directory /etc/polkit-1/rules.d Jun 20 19:59:07.865639 polkitd[1628]: Loading rules from directory /run/polkit-1/rules.d Jun 20 19:59:07.866287 polkitd[1628]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jun 20 19:59:07.866740 polkitd[1628]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jun 20 19:59:07.866806 systemd[1]: issuegen.service: Deactivated successfully. Jun 20 19:59:07.867035 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Jun 20 19:59:07.867510 polkitd[1628]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jun 20 19:59:07.868094 polkitd[1628]: Loading rules from directory /usr/share/polkit-1/rules.d Jun 20 19:59:07.869781 polkitd[1628]: Finished loading, compiling and executing 2 rules Jun 20 19:59:07.870275 dbus-daemon[1506]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jun 20 19:59:07.871609 polkitd[1628]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jun 20 19:59:07.888049 systemd[1]: Started polkit.service - Authorization Manager. Jun 20 19:59:07.893616 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:59:07.903537 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 20 19:59:07.911633 systemd-hostnamed[1563]: Hostname set to <172-237-156-205> (transient) Jun 20 19:59:07.911775 systemd-resolved[1410]: System hostname changed to '172-237-156-205'. Jun 20 19:59:07.933745 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 20 19:59:07.937732 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 20 19:59:07.939601 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 20 19:59:07.940676 systemd[1]: Reached target getty.target - Login Prompts. Jun 20 19:59:08.034276 coreos-metadata[1505]: Jun 20 19:59:08.034 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Jun 20 19:59:08.124663 coreos-metadata[1505]: Jun 20 19:59:08.124 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Jun 20 19:59:08.130052 tar[1530]: linux-amd64/README.md Jun 20 19:59:08.151170 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Jun 20 19:59:08.337990 coreos-metadata[1505]: Jun 20 19:59:08.337 INFO Fetch successful Jun 20 19:59:08.338064 coreos-metadata[1505]: Jun 20 19:59:08.337 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Jun 20 19:59:08.500518 systemd-networkd[1458]: eth0: Gained IPv6LL Jun 20 19:59:08.503471 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 20 19:59:08.504533 systemd[1]: Reached target network-online.target - Network is Online. Jun 20 19:59:08.507258 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:59:08.510534 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 20 19:59:08.534062 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 20 19:59:08.600347 coreos-metadata[1505]: Jun 20 19:59:08.600 INFO Fetch successful Jun 20 19:59:08.700679 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jun 20 19:59:08.701595 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 20 19:59:09.413941 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:59:09.415088 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 20 19:59:09.416508 systemd[1]: Startup finished in 2.559s (kernel) + 7.375s (initrd) + 5.071s (userspace) = 15.007s. 
Jun 20 19:59:09.460886 (kubelet)[1705]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:59:09.972574 kubelet[1705]: E0620 19:59:09.972506 1705 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:59:09.976728 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:59:09.976944 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:59:09.977703 systemd[1]: kubelet.service: Consumed 844ms CPU time, 264.1M memory peak. Jun 20 19:59:11.164306 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 20 19:59:11.165910 systemd[1]: Started sshd@0-172.237.156.205:22-139.178.68.195:34464.service - OpenSSH per-connection server daemon (139.178.68.195:34464). Jun 20 19:59:11.510478 sshd[1716]: Accepted publickey for core from 139.178.68.195 port 34464 ssh2: RSA SHA256:DORs9jqbzw/nKviDUfk2h2gSdkTiP7QQB1UTf2L7NeA Jun 20 19:59:11.512652 sshd-session[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:59:11.520308 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 20 19:59:11.521393 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 20 19:59:11.528755 systemd-logind[1522]: New session 1 of user core. Jun 20 19:59:11.543554 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 20 19:59:11.547583 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jun 20 19:59:11.558917 (systemd)[1720]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 20 19:59:11.561491 systemd-logind[1522]: New session c1 of user core. Jun 20 19:59:11.691747 systemd[1720]: Queued start job for default target default.target. Jun 20 19:59:11.703624 systemd[1720]: Created slice app.slice - User Application Slice. Jun 20 19:59:11.703654 systemd[1720]: Reached target paths.target - Paths. Jun 20 19:59:11.703700 systemd[1720]: Reached target timers.target - Timers. Jun 20 19:59:11.705226 systemd[1720]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 20 19:59:11.716212 systemd[1720]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 20 19:59:11.716392 systemd[1720]: Reached target sockets.target - Sockets. Jun 20 19:59:11.716470 systemd[1720]: Reached target basic.target - Basic System. Jun 20 19:59:11.716517 systemd[1720]: Reached target default.target - Main User Target. Jun 20 19:59:11.716552 systemd[1720]: Startup finished in 148ms. Jun 20 19:59:11.716702 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 20 19:59:11.725484 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 20 19:59:11.985519 systemd[1]: Started sshd@1-172.237.156.205:22-139.178.68.195:34470.service - OpenSSH per-connection server daemon (139.178.68.195:34470). Jun 20 19:59:12.320660 sshd[1731]: Accepted publickey for core from 139.178.68.195 port 34470 ssh2: RSA SHA256:DORs9jqbzw/nKviDUfk2h2gSdkTiP7QQB1UTf2L7NeA Jun 20 19:59:12.322619 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:59:12.328105 systemd-logind[1522]: New session 2 of user core. Jun 20 19:59:12.342508 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jun 20 19:59:12.569926 sshd[1733]: Connection closed by 139.178.68.195 port 34470 Jun 20 19:59:12.570759 sshd-session[1731]: pam_unix(sshd:session): session closed for user core Jun 20 19:59:12.575769 systemd[1]: sshd@1-172.237.156.205:22-139.178.68.195:34470.service: Deactivated successfully. Jun 20 19:59:12.578590 systemd[1]: session-2.scope: Deactivated successfully. Jun 20 19:59:12.579418 systemd-logind[1522]: Session 2 logged out. Waiting for processes to exit. Jun 20 19:59:12.581185 systemd-logind[1522]: Removed session 2. Jun 20 19:59:12.630228 systemd[1]: Started sshd@2-172.237.156.205:22-139.178.68.195:34486.service - OpenSSH per-connection server daemon (139.178.68.195:34486). Jun 20 19:59:12.961281 sshd[1739]: Accepted publickey for core from 139.178.68.195 port 34486 ssh2: RSA SHA256:DORs9jqbzw/nKviDUfk2h2gSdkTiP7QQB1UTf2L7NeA Jun 20 19:59:12.963279 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:59:12.969094 systemd-logind[1522]: New session 3 of user core. Jun 20 19:59:12.974495 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 20 19:59:13.203599 sshd[1741]: Connection closed by 139.178.68.195 port 34486 Jun 20 19:59:13.204214 sshd-session[1739]: pam_unix(sshd:session): session closed for user core Jun 20 19:59:13.209300 systemd[1]: sshd@2-172.237.156.205:22-139.178.68.195:34486.service: Deactivated successfully. Jun 20 19:59:13.211348 systemd[1]: session-3.scope: Deactivated successfully. Jun 20 19:59:13.212605 systemd-logind[1522]: Session 3 logged out. Waiting for processes to exit. Jun 20 19:59:13.214008 systemd-logind[1522]: Removed session 3. Jun 20 19:59:13.269823 systemd[1]: Started sshd@3-172.237.156.205:22-139.178.68.195:34494.service - OpenSSH per-connection server daemon (139.178.68.195:34494). 
Jun 20 19:59:13.620307 sshd[1747]: Accepted publickey for core from 139.178.68.195 port 34494 ssh2: RSA SHA256:DORs9jqbzw/nKviDUfk2h2gSdkTiP7QQB1UTf2L7NeA Jun 20 19:59:13.622161 sshd-session[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:59:13.627461 systemd-logind[1522]: New session 4 of user core. Jun 20 19:59:13.633470 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 20 19:59:13.873163 sshd[1749]: Connection closed by 139.178.68.195 port 34494 Jun 20 19:59:13.873897 sshd-session[1747]: pam_unix(sshd:session): session closed for user core Jun 20 19:59:13.878413 systemd[1]: sshd@3-172.237.156.205:22-139.178.68.195:34494.service: Deactivated successfully. Jun 20 19:59:13.880737 systemd[1]: session-4.scope: Deactivated successfully. Jun 20 19:59:13.883056 systemd-logind[1522]: Session 4 logged out. Waiting for processes to exit. Jun 20 19:59:13.885352 systemd-logind[1522]: Removed session 4. Jun 20 19:59:13.932313 systemd[1]: Started sshd@4-172.237.156.205:22-139.178.68.195:41198.service - OpenSSH per-connection server daemon (139.178.68.195:41198). Jun 20 19:59:14.265890 sshd[1755]: Accepted publickey for core from 139.178.68.195 port 41198 ssh2: RSA SHA256:DORs9jqbzw/nKviDUfk2h2gSdkTiP7QQB1UTf2L7NeA Jun 20 19:59:14.268201 sshd-session[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:59:14.274307 systemd-logind[1522]: New session 5 of user core. Jun 20 19:59:14.281511 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jun 20 19:59:14.467249 sudo[1758]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 20 19:59:14.467574 sudo[1758]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:59:14.486575 sudo[1758]: pam_unix(sudo:session): session closed for user root Jun 20 19:59:14.536724 sshd[1757]: Connection closed by 139.178.68.195 port 41198 Jun 20 19:59:14.537600 sshd-session[1755]: pam_unix(sshd:session): session closed for user core Jun 20 19:59:14.541971 systemd[1]: sshd@4-172.237.156.205:22-139.178.68.195:41198.service: Deactivated successfully. Jun 20 19:59:14.544033 systemd[1]: session-5.scope: Deactivated successfully. Jun 20 19:59:14.544922 systemd-logind[1522]: Session 5 logged out. Waiting for processes to exit. Jun 20 19:59:14.546601 systemd-logind[1522]: Removed session 5. Jun 20 19:59:14.596256 systemd[1]: Started sshd@5-172.237.156.205:22-139.178.68.195:41200.service - OpenSSH per-connection server daemon (139.178.68.195:41200). Jun 20 19:59:14.928282 sshd[1764]: Accepted publickey for core from 139.178.68.195 port 41200 ssh2: RSA SHA256:DORs9jqbzw/nKviDUfk2h2gSdkTiP7QQB1UTf2L7NeA Jun 20 19:59:14.929650 sshd-session[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:59:14.934604 systemd-logind[1522]: New session 6 of user core. Jun 20 19:59:14.949480 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jun 20 19:59:15.122851 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 20 19:59:15.123138 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:59:15.127146 sudo[1768]: pam_unix(sudo:session): session closed for user root Jun 20 19:59:15.132077 sudo[1767]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jun 20 19:59:15.132349 sudo[1767]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:59:15.141411 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 20 19:59:15.183175 augenrules[1790]: No rules Jun 20 19:59:15.185218 systemd[1]: audit-rules.service: Deactivated successfully. Jun 20 19:59:15.185625 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 20 19:59:15.187315 sudo[1767]: pam_unix(sudo:session): session closed for user root Jun 20 19:59:15.237540 sshd[1766]: Connection closed by 139.178.68.195 port 41200 Jun 20 19:59:15.238039 sshd-session[1764]: pam_unix(sshd:session): session closed for user core Jun 20 19:59:15.242284 systemd[1]: sshd@5-172.237.156.205:22-139.178.68.195:41200.service: Deactivated successfully. Jun 20 19:59:15.244156 systemd[1]: session-6.scope: Deactivated successfully. Jun 20 19:59:15.244950 systemd-logind[1522]: Session 6 logged out. Waiting for processes to exit. Jun 20 19:59:15.246255 systemd-logind[1522]: Removed session 6. Jun 20 19:59:15.297732 systemd[1]: Started sshd@6-172.237.156.205:22-139.178.68.195:41212.service - OpenSSH per-connection server daemon (139.178.68.195:41212). 
Jun 20 19:59:15.633386 sshd[1799]: Accepted publickey for core from 139.178.68.195 port 41212 ssh2: RSA SHA256:DORs9jqbzw/nKviDUfk2h2gSdkTiP7QQB1UTf2L7NeA Jun 20 19:59:15.635148 sshd-session[1799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:59:15.640885 systemd-logind[1522]: New session 7 of user core. Jun 20 19:59:15.646502 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 20 19:59:15.829843 sudo[1802]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 20 19:59:15.830163 sudo[1802]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:59:16.105492 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 20 19:59:16.120716 (dockerd)[1820]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 20 19:59:16.299803 dockerd[1820]: time="2025-06-20T19:59:16.299731121Z" level=info msg="Starting up" Jun 20 19:59:16.301532 dockerd[1820]: time="2025-06-20T19:59:16.301509330Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jun 20 19:59:16.336947 systemd[1]: var-lib-docker-metacopy\x2dcheck3332907235-merged.mount: Deactivated successfully. Jun 20 19:59:16.355213 dockerd[1820]: time="2025-06-20T19:59:16.354989104Z" level=info msg="Loading containers: start." Jun 20 19:59:16.367399 kernel: Initializing XFRM netlink socket Jun 20 19:59:16.611063 systemd-networkd[1458]: docker0: Link UP Jun 20 19:59:16.614031 dockerd[1820]: time="2025-06-20T19:59:16.613988084Z" level=info msg="Loading containers: done." Jun 20 19:59:16.628269 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck710622521-merged.mount: Deactivated successfully. 
Jun 20 19:59:16.630328 dockerd[1820]: time="2025-06-20T19:59:16.630285756Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 20 19:59:16.630449 dockerd[1820]: time="2025-06-20T19:59:16.630348056Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jun 20 19:59:16.630511 dockerd[1820]: time="2025-06-20T19:59:16.630480326Z" level=info msg="Initializing buildkit" Jun 20 19:59:16.654165 dockerd[1820]: time="2025-06-20T19:59:16.654104664Z" level=info msg="Completed buildkit initialization" Jun 20 19:59:16.662325 dockerd[1820]: time="2025-06-20T19:59:16.662290400Z" level=info msg="Daemon has completed initialization" Jun 20 19:59:16.662606 dockerd[1820]: time="2025-06-20T19:59:16.662346690Z" level=info msg="API listen on /run/docker.sock" Jun 20 19:59:16.662747 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 20 19:59:17.578184 containerd[1543]: time="2025-06-20T19:59:17.578116522Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jun 20 19:59:18.156163 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3215429339.mount: Deactivated successfully. 
Jun 20 19:59:19.484715 containerd[1543]: time="2025-06-20T19:59:19.484655278Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:59:19.485634 containerd[1543]: time="2025-06-20T19:59:19.485414968Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799051" Jun 20 19:59:19.486343 containerd[1543]: time="2025-06-20T19:59:19.486285708Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:59:19.488380 containerd[1543]: time="2025-06-20T19:59:19.488322356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:59:19.489609 containerd[1543]: time="2025-06-20T19:59:19.489385706Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 1.911207644s" Jun 20 19:59:19.489647 containerd[1543]: time="2025-06-20T19:59:19.489615846Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\"" Jun 20 19:59:19.490621 containerd[1543]: time="2025-06-20T19:59:19.490556495Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jun 20 19:59:20.227212 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Jun 20 19:59:20.229475 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:59:20.401011 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:59:20.411580 (kubelet)[2083]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:59:20.457828 kubelet[2083]: E0620 19:59:20.457756 2083 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:59:20.463983 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:59:20.464174 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:59:20.464836 systemd[1]: kubelet.service: Consumed 190ms CPU time, 109.4M memory peak. 
Jun 20 19:59:21.180768 containerd[1543]: time="2025-06-20T19:59:21.179917320Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:59:21.180768 containerd[1543]: time="2025-06-20T19:59:21.180733700Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24783918" Jun 20 19:59:21.181300 containerd[1543]: time="2025-06-20T19:59:21.181278500Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:59:21.183068 containerd[1543]: time="2025-06-20T19:59:21.183044669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:59:21.183887 containerd[1543]: time="2025-06-20T19:59:21.183863729Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 1.693278024s" Jun 20 19:59:21.183963 containerd[1543]: time="2025-06-20T19:59:21.183949358Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\"" Jun 20 19:59:21.188483 containerd[1543]: time="2025-06-20T19:59:21.188445526Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jun 20 19:59:22.484970 containerd[1543]: time="2025-06-20T19:59:22.484881658Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:59:22.485624 containerd[1543]: time="2025-06-20T19:59:22.485549468Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176922" Jun 20 19:59:22.487310 containerd[1543]: time="2025-06-20T19:59:22.486485737Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:59:22.488627 containerd[1543]: time="2025-06-20T19:59:22.488611086Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:59:22.489221 containerd[1543]: time="2025-06-20T19:59:22.489197056Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 1.30071002s" Jun 20 19:59:22.489249 containerd[1543]: time="2025-06-20T19:59:22.489228306Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\"" Jun 20 19:59:22.490167 containerd[1543]: time="2025-06-20T19:59:22.490145475Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jun 20 19:59:23.486820 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3605113072.mount: Deactivated successfully. 
Jun 20 19:59:23.855394 containerd[1543]: time="2025-06-20T19:59:23.855068453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:59:23.857657 containerd[1543]: time="2025-06-20T19:59:23.857081132Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895369" Jun 20 19:59:23.858510 containerd[1543]: time="2025-06-20T19:59:23.858229101Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:59:23.860737 containerd[1543]: time="2025-06-20T19:59:23.860661380Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:59:23.861146 containerd[1543]: time="2025-06-20T19:59:23.861116040Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 1.370947315s" Jun 20 19:59:23.861176 containerd[1543]: time="2025-06-20T19:59:23.861145060Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\"" Jun 20 19:59:23.862905 containerd[1543]: time="2025-06-20T19:59:23.862873409Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jun 20 19:59:24.357780 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount841580513.mount: Deactivated successfully. 
Jun 20 19:59:25.069777 containerd[1543]: time="2025-06-20T19:59:25.069700915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:59:25.070768 containerd[1543]: time="2025-06-20T19:59:25.070727125Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565247" Jun 20 19:59:25.071382 containerd[1543]: time="2025-06-20T19:59:25.071323684Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:59:25.074956 containerd[1543]: time="2025-06-20T19:59:25.073609873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:59:25.074956 containerd[1543]: time="2025-06-20T19:59:25.074513343Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.211611954s" Jun 20 19:59:25.074956 containerd[1543]: time="2025-06-20T19:59:25.074565383Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jun 20 19:59:25.075244 containerd[1543]: time="2025-06-20T19:59:25.075202662Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jun 20 19:59:25.542854 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount567457664.mount: Deactivated successfully. 
Jun 20 19:59:25.549016 containerd[1543]: time="2025-06-20T19:59:25.548967195Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:59:25.549649 containerd[1543]: time="2025-06-20T19:59:25.549620765Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321144" Jun 20 19:59:25.551153 containerd[1543]: time="2025-06-20T19:59:25.550223065Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:59:25.551823 containerd[1543]: time="2025-06-20T19:59:25.551786084Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:59:25.552446 containerd[1543]: time="2025-06-20T19:59:25.552412904Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 477.180462ms" Jun 20 19:59:25.552529 containerd[1543]: time="2025-06-20T19:59:25.552513024Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jun 20 19:59:25.553330 containerd[1543]: time="2025-06-20T19:59:25.553207893Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jun 20 19:59:26.201834 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4115537963.mount: 
Deactivated successfully. Jun 20 19:59:28.060795 containerd[1543]: time="2025-06-20T19:59:28.060738659Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:59:28.061783 containerd[1543]: time="2025-06-20T19:59:28.061683899Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551366" Jun 20 19:59:28.062390 containerd[1543]: time="2025-06-20T19:59:28.062345968Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:59:28.064798 containerd[1543]: time="2025-06-20T19:59:28.064743127Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:59:28.065843 containerd[1543]: time="2025-06-20T19:59:28.065690707Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.512275024s" Jun 20 19:59:28.065843 containerd[1543]: time="2025-06-20T19:59:28.065718597Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jun 20 19:59:30.386996 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:59:30.387674 systemd[1]: kubelet.service: Consumed 190ms CPU time, 109.4M memory peak. Jun 20 19:59:30.394502 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jun 20 19:59:30.415115 systemd[1]: Reload requested from client PID 2244 ('systemctl') (unit session-7.scope)... Jun 20 19:59:30.415128 systemd[1]: Reloading... Jun 20 19:59:30.577405 zram_generator::config[2287]: No configuration found. Jun 20 19:59:30.666162 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:59:30.778260 systemd[1]: Reloading finished in 362 ms. Jun 20 19:59:30.844274 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 20 19:59:30.844404 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 20 19:59:30.844761 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:59:30.844814 systemd[1]: kubelet.service: Consumed 140ms CPU time, 98.3M memory peak. Jun 20 19:59:30.846893 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:59:31.032091 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:59:31.041001 (kubelet)[2341]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 19:59:31.076090 kubelet[2341]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:59:31.076090 kubelet[2341]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jun 20 19:59:31.076090 kubelet[2341]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:59:31.076090 kubelet[2341]: I0620 19:59:31.075965 2341 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 19:59:31.444864 kubelet[2341]: I0620 19:59:31.444734 2341 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jun 20 19:59:31.444864 kubelet[2341]: I0620 19:59:31.444762 2341 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 19:59:31.445034 kubelet[2341]: I0620 19:59:31.444947 2341 server.go:954] "Client rotation is on, will bootstrap in background" Jun 20 19:59:31.468523 kubelet[2341]: E0620 19:59:31.468481 2341 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.237.156.205:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.237.156.205:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:59:31.469178 kubelet[2341]: I0620 19:59:31.469063 2341 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 19:59:31.484866 kubelet[2341]: I0620 19:59:31.484826 2341 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 20 19:59:31.488743 kubelet[2341]: I0620 19:59:31.488695 2341 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 20 19:59:31.491369 kubelet[2341]: I0620 19:59:31.491310 2341 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 19:59:31.491562 kubelet[2341]: I0620 19:59:31.491347 2341 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-237-156-205","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 20 19:59:31.491663 kubelet[2341]: I0620 19:59:31.491563 2341 topology_manager.go:138] "Creating topology manager with none 
policy" Jun 20 19:59:31.491663 kubelet[2341]: I0620 19:59:31.491575 2341 container_manager_linux.go:304] "Creating device plugin manager" Jun 20 19:59:31.491767 kubelet[2341]: I0620 19:59:31.491738 2341 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:59:31.496251 kubelet[2341]: I0620 19:59:31.496128 2341 kubelet.go:446] "Attempting to sync node with API server" Jun 20 19:59:31.496251 kubelet[2341]: I0620 19:59:31.496172 2341 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 19:59:31.496251 kubelet[2341]: I0620 19:59:31.496201 2341 kubelet.go:352] "Adding apiserver pod source" Jun 20 19:59:31.496251 kubelet[2341]: I0620 19:59:31.496215 2341 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 19:59:31.502150 kubelet[2341]: I0620 19:59:31.502136 2341 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 20 19:59:31.502545 kubelet[2341]: I0620 19:59:31.502504 2341 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 20 19:59:31.504004 kubelet[2341]: W0620 19:59:31.503451 2341 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jun 20 19:59:31.505064 kubelet[2341]: I0620 19:59:31.505023 2341 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 20 19:59:31.505064 kubelet[2341]: I0620 19:59:31.505053 2341 server.go:1287] "Started kubelet" Jun 20 19:59:31.505777 kubelet[2341]: W0620 19:59:31.505166 2341 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.237.156.205:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-237-156-205&limit=500&resourceVersion=0": dial tcp 172.237.156.205:6443: connect: connection refused Jun 20 19:59:31.505777 kubelet[2341]: E0620 19:59:31.505225 2341 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.237.156.205:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-237-156-205&limit=500&resourceVersion=0\": dial tcp 172.237.156.205:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:59:31.506580 kubelet[2341]: W0620 19:59:31.506521 2341 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.237.156.205:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.237.156.205:6443: connect: connection refused Jun 20 19:59:31.506580 kubelet[2341]: E0620 19:59:31.506582 2341 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.237.156.205:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.237.156.205:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:59:31.506763 kubelet[2341]: I0620 19:59:31.506726 2341 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 19:59:31.508931 kubelet[2341]: I0620 19:59:31.508421 2341 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 
Jun 20 19:59:31.508931 kubelet[2341]: I0620 19:59:31.508758 2341 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 19:59:31.509250 kubelet[2341]: I0620 19:59:31.509214 2341 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 19:59:31.512103 kubelet[2341]: E0620 19:59:31.511060 2341 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.237.156.205:6443/api/v1/namespaces/default/events\": dial tcp 172.237.156.205:6443: connect: connection refused" event="&Event{ObjectMeta:{172-237-156-205.184ad89c6acae6db default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-237-156-205,UID:172-237-156-205,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-237-156-205,},FirstTimestamp:2025-06-20 19:59:31.505039067 +0000 UTC m=+0.460068851,LastTimestamp:2025-06-20 19:59:31.505039067 +0000 UTC m=+0.460068851,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-237-156-205,}" Jun 20 19:59:31.512835 kubelet[2341]: I0620 19:59:31.512821 2341 server.go:479] "Adding debug handlers to kubelet server" Jun 20 19:59:31.513949 kubelet[2341]: I0620 19:59:31.513931 2341 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 19:59:31.515633 kubelet[2341]: I0620 19:59:31.515601 2341 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 20 19:59:31.515849 kubelet[2341]: E0620 19:59:31.515819 2341 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-237-156-205\" not found" Jun 20 19:59:31.516604 kubelet[2341]: E0620 19:59:31.516125 2341 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://172.237.156.205:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-237-156-205?timeout=10s\": dial tcp 172.237.156.205:6443: connect: connection refused" interval="200ms" Jun 20 19:59:31.517085 kubelet[2341]: I0620 19:59:31.517065 2341 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 19:59:31.517523 kubelet[2341]: E0620 19:59:31.517507 2341 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 20 19:59:31.518614 kubelet[2341]: I0620 19:59:31.518136 2341 reconciler.go:26] "Reconciler: start to sync state" Jun 20 19:59:31.518614 kubelet[2341]: I0620 19:59:31.518169 2341 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 20 19:59:31.518776 kubelet[2341]: W0620 19:59:31.518747 2341 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.237.156.205:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.237.156.205:6443: connect: connection refused Jun 20 19:59:31.518848 kubelet[2341]: E0620 19:59:31.518834 2341 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.237.156.205:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.237.156.205:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:59:31.519114 kubelet[2341]: I0620 19:59:31.519099 2341 factory.go:221] Registration of the containerd container factory successfully Jun 20 19:59:31.519176 kubelet[2341]: I0620 19:59:31.519167 2341 factory.go:221] Registration of the systemd container factory successfully Jun 20 19:59:31.532448 kubelet[2341]: I0620 19:59:31.532386 2341 
kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 20 19:59:31.535200 kubelet[2341]: I0620 19:59:31.535062 2341 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 20 19:59:31.535200 kubelet[2341]: I0620 19:59:31.535086 2341 status_manager.go:227] "Starting to sync pod status with apiserver" Jun 20 19:59:31.535200 kubelet[2341]: I0620 19:59:31.535111 2341 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jun 20 19:59:31.535200 kubelet[2341]: I0620 19:59:31.535119 2341 kubelet.go:2382] "Starting kubelet main sync loop" Jun 20 19:59:31.538772 kubelet[2341]: E0620 19:59:31.538544 2341 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 19:59:31.543242 kubelet[2341]: W0620 19:59:31.543177 2341 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.237.156.205:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.237.156.205:6443: connect: connection refused Jun 20 19:59:31.543278 kubelet[2341]: E0620 19:59:31.543244 2341 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.237.156.205:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.237.156.205:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:59:31.544722 kubelet[2341]: I0620 19:59:31.544555 2341 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 20 19:59:31.544722 kubelet[2341]: I0620 19:59:31.544567 2341 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 20 19:59:31.544722 kubelet[2341]: I0620 19:59:31.544580 2341 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:59:31.546230 
kubelet[2341]: I0620 19:59:31.546198 2341 policy_none.go:49] "None policy: Start" Jun 20 19:59:31.546230 kubelet[2341]: I0620 19:59:31.546220 2341 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 20 19:59:31.546230 kubelet[2341]: I0620 19:59:31.546231 2341 state_mem.go:35] "Initializing new in-memory state store" Jun 20 19:59:31.553460 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 20 19:59:31.564935 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 20 19:59:31.568491 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jun 20 19:59:31.587331 kubelet[2341]: I0620 19:59:31.587257 2341 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 20 19:59:31.587590 kubelet[2341]: I0620 19:59:31.587467 2341 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 19:59:31.587590 kubelet[2341]: I0620 19:59:31.587484 2341 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 19:59:31.588623 kubelet[2341]: I0620 19:59:31.588028 2341 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 19:59:31.589032 kubelet[2341]: E0620 19:59:31.588989 2341 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jun 20 19:59:31.589032 kubelet[2341]: E0620 19:59:31.589029 2341 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-237-156-205\" not found" Jun 20 19:59:31.652683 systemd[1]: Created slice kubepods-burstable-podf1a97e01f61251f8a8d58648d9bb798e.slice - libcontainer container kubepods-burstable-podf1a97e01f61251f8a8d58648d9bb798e.slice. 
Jun 20 19:59:31.659243 kubelet[2341]: E0620 19:59:31.659214 2341 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-156-205\" not found" node="172-237-156-205" Jun 20 19:59:31.663695 systemd[1]: Created slice kubepods-burstable-pode8b7dcb3fb596de8c2bae8e478addb38.slice - libcontainer container kubepods-burstable-pode8b7dcb3fb596de8c2bae8e478addb38.slice. Jun 20 19:59:31.666237 kubelet[2341]: E0620 19:59:31.666213 2341 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-156-205\" not found" node="172-237-156-205" Jun 20 19:59:31.677322 systemd[1]: Created slice kubepods-burstable-podb6009c5a59f12dc86c390b20111c51e8.slice - libcontainer container kubepods-burstable-podb6009c5a59f12dc86c390b20111c51e8.slice. Jun 20 19:59:31.679187 kubelet[2341]: E0620 19:59:31.679153 2341 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-156-205\" not found" node="172-237-156-205" Jun 20 19:59:31.689389 kubelet[2341]: I0620 19:59:31.689319 2341 kubelet_node_status.go:75] "Attempting to register node" node="172-237-156-205" Jun 20 19:59:31.689961 kubelet[2341]: E0620 19:59:31.689935 2341 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.237.156.205:6443/api/v1/nodes\": dial tcp 172.237.156.205:6443: connect: connection refused" node="172-237-156-205" Jun 20 19:59:31.717545 kubelet[2341]: E0620 19:59:31.717445 2341 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.237.156.205:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-237-156-205?timeout=10s\": dial tcp 172.237.156.205:6443: connect: connection refused" interval="400ms" Jun 20 19:59:31.819284 kubelet[2341]: I0620 19:59:31.819227 2341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e8b7dcb3fb596de8c2bae8e478addb38-k8s-certs\") pod \"kube-controller-manager-172-237-156-205\" (UID: \"e8b7dcb3fb596de8c2bae8e478addb38\") " pod="kube-system/kube-controller-manager-172-237-156-205" Jun 20 19:59:31.819284 kubelet[2341]: I0620 19:59:31.819273 2341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e8b7dcb3fb596de8c2bae8e478addb38-usr-share-ca-certificates\") pod \"kube-controller-manager-172-237-156-205\" (UID: \"e8b7dcb3fb596de8c2bae8e478addb38\") " pod="kube-system/kube-controller-manager-172-237-156-205" Jun 20 19:59:31.819514 kubelet[2341]: I0620 19:59:31.819305 2341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f1a97e01f61251f8a8d58648d9bb798e-ca-certs\") pod \"kube-apiserver-172-237-156-205\" (UID: \"f1a97e01f61251f8a8d58648d9bb798e\") " pod="kube-system/kube-apiserver-172-237-156-205" Jun 20 19:59:31.819514 kubelet[2341]: I0620 19:59:31.819330 2341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f1a97e01f61251f8a8d58648d9bb798e-k8s-certs\") pod \"kube-apiserver-172-237-156-205\" (UID: \"f1a97e01f61251f8a8d58648d9bb798e\") " pod="kube-system/kube-apiserver-172-237-156-205" Jun 20 19:59:31.819514 kubelet[2341]: I0620 19:59:31.819381 2341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f1a97e01f61251f8a8d58648d9bb798e-usr-share-ca-certificates\") pod \"kube-apiserver-172-237-156-205\" (UID: \"f1a97e01f61251f8a8d58648d9bb798e\") " pod="kube-system/kube-apiserver-172-237-156-205" Jun 20 19:59:31.819514 kubelet[2341]: I0620 19:59:31.819406 2341 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e8b7dcb3fb596de8c2bae8e478addb38-ca-certs\") pod \"kube-controller-manager-172-237-156-205\" (UID: \"e8b7dcb3fb596de8c2bae8e478addb38\") " pod="kube-system/kube-controller-manager-172-237-156-205" Jun 20 19:59:31.819514 kubelet[2341]: I0620 19:59:31.819443 2341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e8b7dcb3fb596de8c2bae8e478addb38-flexvolume-dir\") pod \"kube-controller-manager-172-237-156-205\" (UID: \"e8b7dcb3fb596de8c2bae8e478addb38\") " pod="kube-system/kube-controller-manager-172-237-156-205" Jun 20 19:59:31.819669 kubelet[2341]: I0620 19:59:31.819469 2341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b6009c5a59f12dc86c390b20111c51e8-kubeconfig\") pod \"kube-scheduler-172-237-156-205\" (UID: \"b6009c5a59f12dc86c390b20111c51e8\") " pod="kube-system/kube-scheduler-172-237-156-205" Jun 20 19:59:31.819669 kubelet[2341]: I0620 19:59:31.819493 2341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e8b7dcb3fb596de8c2bae8e478addb38-kubeconfig\") pod \"kube-controller-manager-172-237-156-205\" (UID: \"e8b7dcb3fb596de8c2bae8e478addb38\") " pod="kube-system/kube-controller-manager-172-237-156-205" Jun 20 19:59:31.892377 kubelet[2341]: I0620 19:59:31.892330 2341 kubelet_node_status.go:75] "Attempting to register node" node="172-237-156-205" Jun 20 19:59:31.892791 kubelet[2341]: E0620 19:59:31.892760 2341 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.237.156.205:6443/api/v1/nodes\": dial tcp 172.237.156.205:6443: connect: connection refused" node="172-237-156-205" Jun 20 19:59:31.960580 
kubelet[2341]: E0620 19:59:31.960558 2341 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 19:59:31.961406 containerd[1543]: time="2025-06-20T19:59:31.961319748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-237-156-205,Uid:f1a97e01f61251f8a8d58648d9bb798e,Namespace:kube-system,Attempt:0,}" Jun 20 19:59:31.967733 kubelet[2341]: E0620 19:59:31.967618 2341 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 19:59:31.968582 containerd[1543]: time="2025-06-20T19:59:31.968135435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-237-156-205,Uid:e8b7dcb3fb596de8c2bae8e478addb38,Namespace:kube-system,Attempt:0,}" Jun 20 19:59:31.980330 kubelet[2341]: E0620 19:59:31.980180 2341 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 19:59:31.991064 containerd[1543]: time="2025-06-20T19:59:31.990785014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-237-156-205,Uid:b6009c5a59f12dc86c390b20111c51e8,Namespace:kube-system,Attempt:0,}" Jun 20 19:59:31.993765 containerd[1543]: time="2025-06-20T19:59:31.993721582Z" level=info msg="connecting to shim a571ee2d17540b551c9e00b0ffb26d99315792a1210ee5289d5b3a5e555898f8" address="unix:///run/containerd/s/c954924ae2e6140b0d7bf8040a111fb6ae3c05d2b0102d3194aa907276eac5d7" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:59:32.006613 containerd[1543]: time="2025-06-20T19:59:32.006583476Z" level=info msg="connecting to shim 92b5469d94343560880cd7cb352241ce0aca2eceffd7bd317b5826c46064d673" 
address="unix:///run/containerd/s/c9b2ed3dc7c4709d14256761a0d395abcf487d1eb56dd691474a1f857305c35c" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:59:32.033033 containerd[1543]: time="2025-06-20T19:59:32.032915623Z" level=info msg="connecting to shim e36dcadc9bd36d38761df4e3d1fe5401aa5ee109ebc76a9262bb499993a5b91d" address="unix:///run/containerd/s/dcc891ae1cedbdf23749e1a7e993dc4c7999b364247325b040462f8a60c008d2" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:59:32.033592 systemd[1]: Started cri-containerd-a571ee2d17540b551c9e00b0ffb26d99315792a1210ee5289d5b3a5e555898f8.scope - libcontainer container a571ee2d17540b551c9e00b0ffb26d99315792a1210ee5289d5b3a5e555898f8. Jun 20 19:59:32.051539 systemd[1]: Started cri-containerd-92b5469d94343560880cd7cb352241ce0aca2eceffd7bd317b5826c46064d673.scope - libcontainer container 92b5469d94343560880cd7cb352241ce0aca2eceffd7bd317b5826c46064d673. Jun 20 19:59:32.066563 systemd[1]: Started cri-containerd-e36dcadc9bd36d38761df4e3d1fe5401aa5ee109ebc76a9262bb499993a5b91d.scope - libcontainer container e36dcadc9bd36d38761df4e3d1fe5401aa5ee109ebc76a9262bb499993a5b91d. 
Jun 20 19:59:32.119096 kubelet[2341]: E0620 19:59:32.118849 2341 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.237.156.205:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-237-156-205?timeout=10s\": dial tcp 172.237.156.205:6443: connect: connection refused" interval="800ms" Jun 20 19:59:32.130194 containerd[1543]: time="2025-06-20T19:59:32.130163444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-237-156-205,Uid:e8b7dcb3fb596de8c2bae8e478addb38,Namespace:kube-system,Attempt:0,} returns sandbox id \"92b5469d94343560880cd7cb352241ce0aca2eceffd7bd317b5826c46064d673\"" Jun 20 19:59:32.132090 kubelet[2341]: E0620 19:59:32.131964 2341 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 19:59:32.134234 containerd[1543]: time="2025-06-20T19:59:32.134109932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-237-156-205,Uid:f1a97e01f61251f8a8d58648d9bb798e,Namespace:kube-system,Attempt:0,} returns sandbox id \"a571ee2d17540b551c9e00b0ffb26d99315792a1210ee5289d5b3a5e555898f8\"" Jun 20 19:59:32.135314 containerd[1543]: time="2025-06-20T19:59:32.135297161Z" level=info msg="CreateContainer within sandbox \"92b5469d94343560880cd7cb352241ce0aca2eceffd7bd317b5826c46064d673\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 20 19:59:32.135729 kubelet[2341]: E0620 19:59:32.135644 2341 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 19:59:32.147011 containerd[1543]: time="2025-06-20T19:59:32.146993186Z" level=info msg="CreateContainer within sandbox \"a571ee2d17540b551c9e00b0ffb26d99315792a1210ee5289d5b3a5e555898f8\" for container 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 20 19:59:32.147775 containerd[1543]: time="2025-06-20T19:59:32.147727195Z" level=info msg="Container ef8629f0ef439ef5b7cd19a4a2a7c944f89cc96d13425e7470b714bb3b4def30: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:59:32.154144 containerd[1543]: time="2025-06-20T19:59:32.154100852Z" level=info msg="CreateContainer within sandbox \"92b5469d94343560880cd7cb352241ce0aca2eceffd7bd317b5826c46064d673\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ef8629f0ef439ef5b7cd19a4a2a7c944f89cc96d13425e7470b714bb3b4def30\"" Jun 20 19:59:32.156343 containerd[1543]: time="2025-06-20T19:59:32.156289791Z" level=info msg="StartContainer for \"ef8629f0ef439ef5b7cd19a4a2a7c944f89cc96d13425e7470b714bb3b4def30\"" Jun 20 19:59:32.157323 containerd[1543]: time="2025-06-20T19:59:32.157288100Z" level=info msg="connecting to shim ef8629f0ef439ef5b7cd19a4a2a7c944f89cc96d13425e7470b714bb3b4def30" address="unix:///run/containerd/s/c9b2ed3dc7c4709d14256761a0d395abcf487d1eb56dd691474a1f857305c35c" protocol=ttrpc version=3 Jun 20 19:59:32.158596 containerd[1543]: time="2025-06-20T19:59:32.158562850Z" level=info msg="Container a9dab0bfb363ea1a88792b57bff7dc810865aae55b69ca9edbd394651e41dcbb: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:59:32.160073 containerd[1543]: time="2025-06-20T19:59:32.160039279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-237-156-205,Uid:b6009c5a59f12dc86c390b20111c51e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"e36dcadc9bd36d38761df4e3d1fe5401aa5ee109ebc76a9262bb499993a5b91d\"" Jun 20 19:59:32.160876 kubelet[2341]: E0620 19:59:32.160830 2341 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 19:59:32.162499 containerd[1543]: time="2025-06-20T19:59:32.162452788Z" level=info 
msg="CreateContainer within sandbox \"e36dcadc9bd36d38761df4e3d1fe5401aa5ee109ebc76a9262bb499993a5b91d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 20 19:59:32.167759 containerd[1543]: time="2025-06-20T19:59:32.167707955Z" level=info msg="CreateContainer within sandbox \"a571ee2d17540b551c9e00b0ffb26d99315792a1210ee5289d5b3a5e555898f8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a9dab0bfb363ea1a88792b57bff7dc810865aae55b69ca9edbd394651e41dcbb\"" Jun 20 19:59:32.168322 containerd[1543]: time="2025-06-20T19:59:32.168270635Z" level=info msg="StartContainer for \"a9dab0bfb363ea1a88792b57bff7dc810865aae55b69ca9edbd394651e41dcbb\"" Jun 20 19:59:32.169698 containerd[1543]: time="2025-06-20T19:59:32.169669964Z" level=info msg="connecting to shim a9dab0bfb363ea1a88792b57bff7dc810865aae55b69ca9edbd394651e41dcbb" address="unix:///run/containerd/s/c954924ae2e6140b0d7bf8040a111fb6ae3c05d2b0102d3194aa907276eac5d7" protocol=ttrpc version=3 Jun 20 19:59:32.173282 containerd[1543]: time="2025-06-20T19:59:32.173237743Z" level=info msg="Container ff5365218ed7dcbda8307062e1dd027a0a274fe675b6919f2aaa99908c0faba7: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:59:32.183169 containerd[1543]: time="2025-06-20T19:59:32.183124818Z" level=info msg="CreateContainer within sandbox \"e36dcadc9bd36d38761df4e3d1fe5401aa5ee109ebc76a9262bb499993a5b91d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ff5365218ed7dcbda8307062e1dd027a0a274fe675b6919f2aaa99908c0faba7\"" Jun 20 19:59:32.183965 containerd[1543]: time="2025-06-20T19:59:32.183564327Z" level=info msg="StartContainer for \"ff5365218ed7dcbda8307062e1dd027a0a274fe675b6919f2aaa99908c0faba7\"" Jun 20 19:59:32.185270 containerd[1543]: time="2025-06-20T19:59:32.185234797Z" level=info msg="connecting to shim ff5365218ed7dcbda8307062e1dd027a0a274fe675b6919f2aaa99908c0faba7" 
address="unix:///run/containerd/s/dcc891ae1cedbdf23749e1a7e993dc4c7999b364247325b040462f8a60c008d2" protocol=ttrpc version=3 Jun 20 19:59:32.185668 systemd[1]: Started cri-containerd-ef8629f0ef439ef5b7cd19a4a2a7c944f89cc96d13425e7470b714bb3b4def30.scope - libcontainer container ef8629f0ef439ef5b7cd19a4a2a7c944f89cc96d13425e7470b714bb3b4def30. Jun 20 19:59:32.197583 systemd[1]: Started cri-containerd-a9dab0bfb363ea1a88792b57bff7dc810865aae55b69ca9edbd394651e41dcbb.scope - libcontainer container a9dab0bfb363ea1a88792b57bff7dc810865aae55b69ca9edbd394651e41dcbb. Jun 20 19:59:32.212477 systemd[1]: Started cri-containerd-ff5365218ed7dcbda8307062e1dd027a0a274fe675b6919f2aaa99908c0faba7.scope - libcontainer container ff5365218ed7dcbda8307062e1dd027a0a274fe675b6919f2aaa99908c0faba7. Jun 20 19:59:32.299910 containerd[1543]: time="2025-06-20T19:59:32.299824089Z" level=info msg="StartContainer for \"ef8629f0ef439ef5b7cd19a4a2a7c944f89cc96d13425e7470b714bb3b4def30\" returns successfully" Jun 20 19:59:32.304373 containerd[1543]: time="2025-06-20T19:59:32.304122937Z" level=info msg="StartContainer for \"ff5365218ed7dcbda8307062e1dd027a0a274fe675b6919f2aaa99908c0faba7\" returns successfully" Jun 20 19:59:32.307283 kubelet[2341]: I0620 19:59:32.307012 2341 kubelet_node_status.go:75] "Attempting to register node" node="172-237-156-205" Jun 20 19:59:32.311083 kubelet[2341]: E0620 19:59:32.309727 2341 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.237.156.205:6443/api/v1/nodes\": dial tcp 172.237.156.205:6443: connect: connection refused" node="172-237-156-205" Jun 20 19:59:32.311544 containerd[1543]: time="2025-06-20T19:59:32.311530283Z" level=info msg="StartContainer for \"a9dab0bfb363ea1a88792b57bff7dc810865aae55b69ca9edbd394651e41dcbb\" returns successfully" Jun 20 19:59:32.554433 kubelet[2341]: E0620 19:59:32.554291 2341 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"172-237-156-205\" not found" node="172-237-156-205" Jun 20 19:59:32.556484 kubelet[2341]: E0620 19:59:32.556466 2341 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 19:59:32.556595 kubelet[2341]: E0620 19:59:32.556580 2341 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-156-205\" not found" node="172-237-156-205" Jun 20 19:59:32.556867 kubelet[2341]: E0620 19:59:32.556754 2341 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 19:59:32.558950 kubelet[2341]: E0620 19:59:32.558931 2341 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-156-205\" not found" node="172-237-156-205" Jun 20 19:59:32.559065 kubelet[2341]: E0620 19:59:32.559050 2341 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 19:59:33.111824 kubelet[2341]: I0620 19:59:33.111784 2341 kubelet_node_status.go:75] "Attempting to register node" node="172-237-156-205" Jun 20 19:59:33.434220 kubelet[2341]: E0620 19:59:33.434101 2341 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-237-156-205\" not found" node="172-237-156-205" Jun 20 19:59:33.508504 kubelet[2341]: I0620 19:59:33.508250 2341 apiserver.go:52] "Watching apiserver" Jun 20 19:59:33.519310 kubelet[2341]: I0620 19:59:33.519275 2341 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 20 19:59:33.527064 kubelet[2341]: I0620 19:59:33.527041 2341 kubelet_node_status.go:78] "Successfully registered 
node" node="172-237-156-205" Jun 20 19:59:33.559093 kubelet[2341]: I0620 19:59:33.558694 2341 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-237-156-205" Jun 20 19:59:33.559093 kubelet[2341]: I0620 19:59:33.558990 2341 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-237-156-205" Jun 20 19:59:33.562777 kubelet[2341]: E0620 19:59:33.562762 2341 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-237-156-205\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-237-156-205" Jun 20 19:59:33.562982 kubelet[2341]: E0620 19:59:33.562961 2341 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 19:59:33.563584 kubelet[2341]: E0620 19:59:33.563559 2341 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-237-156-205\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-237-156-205" Jun 20 19:59:33.563734 kubelet[2341]: E0620 19:59:33.563724 2341 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 19:59:33.617093 kubelet[2341]: I0620 19:59:33.617050 2341 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-237-156-205" Jun 20 19:59:33.618992 kubelet[2341]: E0620 19:59:33.618929 2341 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-237-156-205\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-237-156-205" Jun 20 19:59:33.618992 kubelet[2341]: I0620 19:59:33.618944 2341 kubelet.go:3194] "Creating 
a mirror pod for static pod" pod="kube-system/kube-scheduler-172-237-156-205" Jun 20 19:59:33.620293 kubelet[2341]: E0620 19:59:33.620268 2341 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-237-156-205\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-237-156-205" Jun 20 19:59:33.620423 kubelet[2341]: I0620 19:59:33.620374 2341 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-237-156-205" Jun 20 19:59:33.621304 kubelet[2341]: E0620 19:59:33.621288 2341 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-237-156-205\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-237-156-205" Jun 20 19:59:35.486274 systemd[1]: Reload requested from client PID 2613 ('systemctl') (unit session-7.scope)... Jun 20 19:59:35.486292 systemd[1]: Reloading... Jun 20 19:59:35.597445 zram_generator::config[2657]: No configuration found. Jun 20 19:59:35.681878 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:59:35.780998 systemd[1]: Reloading finished in 294 ms. Jun 20 19:59:35.806170 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:59:35.831055 systemd[1]: kubelet.service: Deactivated successfully. Jun 20 19:59:35.831308 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:59:35.831381 systemd[1]: kubelet.service: Consumed 845ms CPU time, 131.4M memory peak. Jun 20 19:59:35.832832 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:59:36.017884 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 20 19:59:36.026795 (kubelet)[2708]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 19:59:36.070955 kubelet[2708]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:59:36.070955 kubelet[2708]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jun 20 19:59:36.070955 kubelet[2708]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:59:36.071280 kubelet[2708]: I0620 19:59:36.071234 2708 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 19:59:36.076237 kubelet[2708]: I0620 19:59:36.076043 2708 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jun 20 19:59:36.076237 kubelet[2708]: I0620 19:59:36.076062 2708 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 19:59:36.076237 kubelet[2708]: I0620 19:59:36.076185 2708 server.go:954] "Client rotation is on, will bootstrap in background" Jun 20 19:59:36.077211 kubelet[2708]: I0620 19:59:36.077178 2708 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jun 20 19:59:36.080580 kubelet[2708]: I0620 19:59:36.080554 2708 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 19:59:36.087401 kubelet[2708]: I0620 19:59:36.086526 2708 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 20 19:59:36.089869 kubelet[2708]: I0620 19:59:36.089846 2708 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 20 19:59:36.090082 kubelet[2708]: I0620 19:59:36.090055 2708 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 19:59:36.090204 kubelet[2708]: I0620 19:59:36.090080 2708 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-237-156-205","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CP
UManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 20 19:59:36.090274 kubelet[2708]: I0620 19:59:36.090210 2708 topology_manager.go:138] "Creating topology manager with none policy" Jun 20 19:59:36.090274 kubelet[2708]: I0620 19:59:36.090218 2708 container_manager_linux.go:304] "Creating device plugin manager" Jun 20 19:59:36.090274 kubelet[2708]: I0620 19:59:36.090260 2708 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:59:36.090892 kubelet[2708]: I0620 19:59:36.090439 2708 kubelet.go:446] "Attempting to sync node with API server" Jun 20 19:59:36.090892 kubelet[2708]: I0620 19:59:36.090465 2708 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 19:59:36.090892 kubelet[2708]: I0620 19:59:36.090499 2708 kubelet.go:352] "Adding apiserver pod source" Jun 20 19:59:36.090892 kubelet[2708]: I0620 19:59:36.090519 2708 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 19:59:36.092787 kubelet[2708]: I0620 19:59:36.092772 2708 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 20 19:59:36.093174 kubelet[2708]: I0620 19:59:36.093161 2708 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 20 19:59:36.094006 kubelet[2708]: I0620 19:59:36.093761 2708 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 20 19:59:36.094087 kubelet[2708]: I0620 19:59:36.094069 2708 server.go:1287] "Started kubelet" Jun 20 19:59:36.095982 kubelet[2708]: I0620 19:59:36.095972 2708 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 19:59:36.101636 kubelet[2708]: I0620 19:59:36.101616 
2708 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 19:59:36.102385 kubelet[2708]: I0620 19:59:36.102373 2708 server.go:479] "Adding debug handlers to kubelet server" Jun 20 19:59:36.103135 kubelet[2708]: I0620 19:59:36.103098 2708 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 19:59:36.103596 kubelet[2708]: I0620 19:59:36.103582 2708 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 19:59:36.103677 kubelet[2708]: I0620 19:59:36.103669 2708 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 20 19:59:36.103862 kubelet[2708]: E0620 19:59:36.103849 2708 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-237-156-205\" not found" Jun 20 19:59:36.104640 kubelet[2708]: I0620 19:59:36.104612 2708 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 19:59:36.105395 kubelet[2708]: I0620 19:59:36.105382 2708 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 20 19:59:36.105526 kubelet[2708]: I0620 19:59:36.105516 2708 reconciler.go:26] "Reconciler: start to sync state" Jun 20 19:59:36.107115 kubelet[2708]: I0620 19:59:36.107013 2708 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 20 19:59:36.109945 kubelet[2708]: I0620 19:59:36.109906 2708 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 20 19:59:36.110049 kubelet[2708]: I0620 19:59:36.110040 2708 status_manager.go:227] "Starting to sync pod status with apiserver" Jun 20 19:59:36.110414 kubelet[2708]: I0620 19:59:36.110112 2708 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jun 20 19:59:36.110414 kubelet[2708]: I0620 19:59:36.110141 2708 kubelet.go:2382] "Starting kubelet main sync loop" Jun 20 19:59:36.110414 kubelet[2708]: E0620 19:59:36.110193 2708 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 19:59:36.110414 kubelet[2708]: I0620 19:59:36.110394 2708 factory.go:221] Registration of the systemd container factory successfully Jun 20 19:59:36.110524 kubelet[2708]: I0620 19:59:36.110499 2708 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 19:59:36.120235 kubelet[2708]: I0620 19:59:36.120220 2708 factory.go:221] Registration of the containerd container factory successfully Jun 20 19:59:36.163298 kubelet[2708]: I0620 19:59:36.163268 2708 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 20 19:59:36.163298 kubelet[2708]: I0620 19:59:36.163284 2708 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 20 19:59:36.163298 kubelet[2708]: I0620 19:59:36.163298 2708 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:59:36.163465 kubelet[2708]: I0620 19:59:36.163448 2708 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 20 19:59:36.163495 kubelet[2708]: I0620 19:59:36.163463 2708 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 20 19:59:36.163495 kubelet[2708]: I0620 19:59:36.163483 2708 policy_none.go:49] "None policy: Start" Jun 20 19:59:36.163495 kubelet[2708]: I0620 19:59:36.163492 2708 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 20 19:59:36.163548 kubelet[2708]: I0620 19:59:36.163501 2708 state_mem.go:35] "Initializing new in-memory state store" Jun 20 19:59:36.163596 kubelet[2708]: I0620 19:59:36.163581 2708 state_mem.go:75] "Updated machine memory state" Jun 20 19:59:36.167926 kubelet[2708]: 
I0620 19:59:36.167904 2708 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 20 19:59:36.168085 kubelet[2708]: I0620 19:59:36.168058 2708 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 19:59:36.168123 kubelet[2708]: I0620 19:59:36.168082 2708 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 19:59:36.168704 kubelet[2708]: I0620 19:59:36.168519 2708 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 19:59:36.170919 kubelet[2708]: E0620 19:59:36.170903 2708 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jun 20 19:59:36.211899 kubelet[2708]: I0620 19:59:36.211882 2708 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-237-156-205" Jun 20 19:59:36.212225 kubelet[2708]: I0620 19:59:36.212191 2708 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-237-156-205" Jun 20 19:59:36.212864 kubelet[2708]: I0620 19:59:36.212851 2708 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-237-156-205" Jun 20 19:59:36.271898 kubelet[2708]: I0620 19:59:36.271867 2708 kubelet_node_status.go:75] "Attempting to register node" node="172-237-156-205" Jun 20 19:59:36.278288 kubelet[2708]: I0620 19:59:36.278256 2708 kubelet_node_status.go:124] "Node was previously registered" node="172-237-156-205" Jun 20 19:59:36.278338 kubelet[2708]: I0620 19:59:36.278315 2708 kubelet_node_status.go:78] "Successfully registered node" node="172-237-156-205" Jun 20 19:59:36.306668 kubelet[2708]: I0620 19:59:36.306646 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/f1a97e01f61251f8a8d58648d9bb798e-ca-certs\") pod \"kube-apiserver-172-237-156-205\" (UID: \"f1a97e01f61251f8a8d58648d9bb798e\") " pod="kube-system/kube-apiserver-172-237-156-205" Jun 20 19:59:36.306746 kubelet[2708]: I0620 19:59:36.306672 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f1a97e01f61251f8a8d58648d9bb798e-usr-share-ca-certificates\") pod \"kube-apiserver-172-237-156-205\" (UID: \"f1a97e01f61251f8a8d58648d9bb798e\") " pod="kube-system/kube-apiserver-172-237-156-205" Jun 20 19:59:36.306746 kubelet[2708]: I0620 19:59:36.306696 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e8b7dcb3fb596de8c2bae8e478addb38-k8s-certs\") pod \"kube-controller-manager-172-237-156-205\" (UID: \"e8b7dcb3fb596de8c2bae8e478addb38\") " pod="kube-system/kube-controller-manager-172-237-156-205" Jun 20 19:59:36.306746 kubelet[2708]: I0620 19:59:36.306710 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e8b7dcb3fb596de8c2bae8e478addb38-kubeconfig\") pod \"kube-controller-manager-172-237-156-205\" (UID: \"e8b7dcb3fb596de8c2bae8e478addb38\") " pod="kube-system/kube-controller-manager-172-237-156-205" Jun 20 19:59:36.306746 kubelet[2708]: I0620 19:59:36.306724 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b6009c5a59f12dc86c390b20111c51e8-kubeconfig\") pod \"kube-scheduler-172-237-156-205\" (UID: \"b6009c5a59f12dc86c390b20111c51e8\") " pod="kube-system/kube-scheduler-172-237-156-205" Jun 20 19:59:36.306746 kubelet[2708]: I0620 19:59:36.306736 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f1a97e01f61251f8a8d58648d9bb798e-k8s-certs\") pod \"kube-apiserver-172-237-156-205\" (UID: \"f1a97e01f61251f8a8d58648d9bb798e\") " pod="kube-system/kube-apiserver-172-237-156-205" Jun 20 19:59:36.306861 kubelet[2708]: I0620 19:59:36.306748 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e8b7dcb3fb596de8c2bae8e478addb38-ca-certs\") pod \"kube-controller-manager-172-237-156-205\" (UID: \"e8b7dcb3fb596de8c2bae8e478addb38\") " pod="kube-system/kube-controller-manager-172-237-156-205" Jun 20 19:59:36.306861 kubelet[2708]: I0620 19:59:36.306764 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e8b7dcb3fb596de8c2bae8e478addb38-flexvolume-dir\") pod \"kube-controller-manager-172-237-156-205\" (UID: \"e8b7dcb3fb596de8c2bae8e478addb38\") " pod="kube-system/kube-controller-manager-172-237-156-205" Jun 20 19:59:36.306861 kubelet[2708]: I0620 19:59:36.306780 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e8b7dcb3fb596de8c2bae8e478addb38-usr-share-ca-certificates\") pod \"kube-controller-manager-172-237-156-205\" (UID: \"e8b7dcb3fb596de8c2bae8e478addb38\") " pod="kube-system/kube-controller-manager-172-237-156-205" Jun 20 19:59:36.488576 sudo[2742]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jun 20 19:59:36.488923 sudo[2742]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jun 20 19:59:36.524693 kubelet[2708]: E0620 19:59:36.523649 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 
172.232.0.20" Jun 20 19:59:36.524887 kubelet[2708]: E0620 19:59:36.524874 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 19:59:36.525022 kubelet[2708]: E0620 19:59:36.525010 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 19:59:36.986856 sudo[2742]: pam_unix(sudo:session): session closed for user root Jun 20 19:59:37.092020 kubelet[2708]: I0620 19:59:37.091968 2708 apiserver.go:52] "Watching apiserver" Jun 20 19:59:37.106147 kubelet[2708]: I0620 19:59:37.106074 2708 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 20 19:59:37.143556 kubelet[2708]: I0620 19:59:37.143283 2708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-237-156-205" podStartSLOduration=1.143261237 podStartE2EDuration="1.143261237s" podCreationTimestamp="2025-06-20 19:59:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:59:37.135796621 +0000 UTC m=+1.104037429" watchObservedRunningTime="2025-06-20 19:59:37.143261237 +0000 UTC m=+1.111502045" Jun 20 19:59:37.148374 kubelet[2708]: E0620 19:59:37.146880 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 19:59:37.148374 kubelet[2708]: I0620 19:59:37.147345 2708 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-237-156-205" Jun 20 19:59:37.148694 kubelet[2708]: I0620 19:59:37.148661 2708 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-apiserver-172-237-156-205" Jun 20 19:59:37.154698 kubelet[2708]: I0620 19:59:37.154655 2708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-237-156-205" podStartSLOduration=1.154646771 podStartE2EDuration="1.154646771s" podCreationTimestamp="2025-06-20 19:59:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:59:37.143629347 +0000 UTC m=+1.111870155" watchObservedRunningTime="2025-06-20 19:59:37.154646771 +0000 UTC m=+1.122887579" Jun 20 19:59:37.157137 kubelet[2708]: E0620 19:59:37.156931 2708 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-237-156-205\" already exists" pod="kube-system/kube-apiserver-172-237-156-205" Jun 20 19:59:37.157137 kubelet[2708]: E0620 19:59:37.157060 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 19:59:37.159201 kubelet[2708]: E0620 19:59:37.159153 2708 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-237-156-205\" already exists" pod="kube-system/kube-controller-manager-172-237-156-205" Jun 20 19:59:37.159297 kubelet[2708]: E0620 19:59:37.159269 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 19:59:37.169421 kubelet[2708]: I0620 19:59:37.169382 2708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-237-156-205" podStartSLOduration=1.169373444 podStartE2EDuration="1.169373444s" podCreationTimestamp="2025-06-20 19:59:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-06-20 19:59:37.155180151 +0000 UTC m=+1.123420959" watchObservedRunningTime="2025-06-20 19:59:37.169373444 +0000 UTC m=+1.137614252" Jun 20 19:59:37.951964 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jun 20 19:59:38.148112 kubelet[2708]: E0620 19:59:38.148057 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 19:59:38.148686 kubelet[2708]: E0620 19:59:38.148654 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 19:59:38.149479 kubelet[2708]: E0620 19:59:38.149196 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 19:59:38.820583 sudo[1802]: pam_unix(sudo:session): session closed for user root Jun 20 19:59:38.870945 sshd[1801]: Connection closed by 139.178.68.195 port 41212 Jun 20 19:59:38.871601 sshd-session[1799]: pam_unix(sshd:session): session closed for user core Jun 20 19:59:38.876596 systemd[1]: sshd@6-172.237.156.205:22-139.178.68.195:41212.service: Deactivated successfully. Jun 20 19:59:38.880125 systemd[1]: session-7.scope: Deactivated successfully. Jun 20 19:59:38.880415 systemd[1]: session-7.scope: Consumed 4.703s CPU time, 268.3M memory peak. Jun 20 19:59:38.882047 systemd-logind[1522]: Session 7 logged out. Waiting for processes to exit. Jun 20 19:59:38.884844 systemd-logind[1522]: Removed session 7. 
Jun 20 19:59:40.647835 kubelet[2708]: I0620 19:59:40.647782 2708 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 20 19:59:40.648310 containerd[1543]: time="2025-06-20T19:59:40.648145136Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 20 19:59:40.648599 kubelet[2708]: I0620 19:59:40.648465 2708 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 20 19:59:41.575705 systemd[1]: Created slice kubepods-besteffort-podd610bdb4_8754_4eac_805c_c397ada4c9c2.slice - libcontainer container kubepods-besteffort-podd610bdb4_8754_4eac_805c_c397ada4c9c2.slice. Jun 20 19:59:41.587299 systemd[1]: Created slice kubepods-burstable-pod6652ac11_e062_498c_b924_57a62eca484a.slice - libcontainer container kubepods-burstable-pod6652ac11_e062_498c_b924_57a62eca484a.slice. Jun 20 19:59:41.639940 kubelet[2708]: I0620 19:59:41.639664 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d610bdb4-8754-4eac-805c-c397ada4c9c2-lib-modules\") pod \"kube-proxy-gpnq7\" (UID: \"d610bdb4-8754-4eac-805c-c397ada4c9c2\") " pod="kube-system/kube-proxy-gpnq7" Jun 20 19:59:41.640183 kubelet[2708]: I0620 19:59:41.640134 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6652ac11-e062-498c-b924-57a62eca484a-cilium-cgroup\") pod \"cilium-hflpf\" (UID: \"6652ac11-e062-498c-b924-57a62eca484a\") " pod="kube-system/cilium-hflpf" Jun 20 19:59:41.640260 kubelet[2708]: I0620 19:59:41.640164 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6652ac11-e062-498c-b924-57a62eca484a-cni-path\") pod \"cilium-hflpf\" (UID: \"6652ac11-e062-498c-b924-57a62eca484a\") 
" pod="kube-system/cilium-hflpf" Jun 20 19:59:41.640471 kubelet[2708]: I0620 19:59:41.640338 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d610bdb4-8754-4eac-805c-c397ada4c9c2-kube-proxy\") pod \"kube-proxy-gpnq7\" (UID: \"d610bdb4-8754-4eac-805c-c397ada4c9c2\") " pod="kube-system/kube-proxy-gpnq7" Jun 20 19:59:41.640610 kubelet[2708]: I0620 19:59:41.640562 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d610bdb4-8754-4eac-805c-c397ada4c9c2-xtables-lock\") pod \"kube-proxy-gpnq7\" (UID: \"d610bdb4-8754-4eac-805c-c397ada4c9c2\") " pod="kube-system/kube-proxy-gpnq7" Jun 20 19:59:41.640610 kubelet[2708]: I0620 19:59:41.640590 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6652ac11-e062-498c-b924-57a62eca484a-xtables-lock\") pod \"cilium-hflpf\" (UID: \"6652ac11-e062-498c-b924-57a62eca484a\") " pod="kube-system/cilium-hflpf" Jun 20 19:59:41.640610 kubelet[2708]: I0620 19:59:41.640609 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6652ac11-e062-498c-b924-57a62eca484a-clustermesh-secrets\") pod \"cilium-hflpf\" (UID: \"6652ac11-e062-498c-b924-57a62eca484a\") " pod="kube-system/cilium-hflpf" Jun 20 19:59:41.640786 kubelet[2708]: I0620 19:59:41.640629 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6652ac11-e062-498c-b924-57a62eca484a-etc-cni-netd\") pod \"cilium-hflpf\" (UID: \"6652ac11-e062-498c-b924-57a62eca484a\") " pod="kube-system/cilium-hflpf" Jun 20 19:59:41.640786 kubelet[2708]: I0620 19:59:41.640647 2708 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6652ac11-e062-498c-b924-57a62eca484a-lib-modules\") pod \"cilium-hflpf\" (UID: \"6652ac11-e062-498c-b924-57a62eca484a\") " pod="kube-system/cilium-hflpf" Jun 20 19:59:41.640786 kubelet[2708]: I0620 19:59:41.640665 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxp6g\" (UniqueName: \"kubernetes.io/projected/6652ac11-e062-498c-b924-57a62eca484a-kube-api-access-mxp6g\") pod \"cilium-hflpf\" (UID: \"6652ac11-e062-498c-b924-57a62eca484a\") " pod="kube-system/cilium-hflpf" Jun 20 19:59:41.640786 kubelet[2708]: I0620 19:59:41.640690 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6652ac11-e062-498c-b924-57a62eca484a-cilium-config-path\") pod \"cilium-hflpf\" (UID: \"6652ac11-e062-498c-b924-57a62eca484a\") " pod="kube-system/cilium-hflpf" Jun 20 19:59:41.640786 kubelet[2708]: I0620 19:59:41.640710 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-796ch\" (UniqueName: \"kubernetes.io/projected/d610bdb4-8754-4eac-805c-c397ada4c9c2-kube-api-access-796ch\") pod \"kube-proxy-gpnq7\" (UID: \"d610bdb4-8754-4eac-805c-c397ada4c9c2\") " pod="kube-system/kube-proxy-gpnq7" Jun 20 19:59:41.640892 kubelet[2708]: I0620 19:59:41.640727 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6652ac11-e062-498c-b924-57a62eca484a-cilium-run\") pod \"cilium-hflpf\" (UID: \"6652ac11-e062-498c-b924-57a62eca484a\") " pod="kube-system/cilium-hflpf" Jun 20 19:59:41.640892 kubelet[2708]: I0620 19:59:41.640741 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"hostproc\" (UniqueName: \"kubernetes.io/host-path/6652ac11-e062-498c-b924-57a62eca484a-hostproc\") pod \"cilium-hflpf\" (UID: \"6652ac11-e062-498c-b924-57a62eca484a\") " pod="kube-system/cilium-hflpf" Jun 20 19:59:41.640892 kubelet[2708]: I0620 19:59:41.640761 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6652ac11-e062-498c-b924-57a62eca484a-host-proc-sys-net\") pod \"cilium-hflpf\" (UID: \"6652ac11-e062-498c-b924-57a62eca484a\") " pod="kube-system/cilium-hflpf" Jun 20 19:59:41.640892 kubelet[2708]: I0620 19:59:41.640780 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6652ac11-e062-498c-b924-57a62eca484a-bpf-maps\") pod \"cilium-hflpf\" (UID: \"6652ac11-e062-498c-b924-57a62eca484a\") " pod="kube-system/cilium-hflpf" Jun 20 19:59:41.640892 kubelet[2708]: I0620 19:59:41.640797 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6652ac11-e062-498c-b924-57a62eca484a-hubble-tls\") pod \"cilium-hflpf\" (UID: \"6652ac11-e062-498c-b924-57a62eca484a\") " pod="kube-system/cilium-hflpf" Jun 20 19:59:41.640892 kubelet[2708]: I0620 19:59:41.640817 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6652ac11-e062-498c-b924-57a62eca484a-host-proc-sys-kernel\") pod \"cilium-hflpf\" (UID: \"6652ac11-e062-498c-b924-57a62eca484a\") " pod="kube-system/cilium-hflpf" Jun 20 19:59:41.657186 systemd[1]: Created slice kubepods-besteffort-pod8595b000_4a1f_4f89_a025_4a6ecbe6f02b.slice - libcontainer container kubepods-besteffort-pod8595b000_4a1f_4f89_a025_4a6ecbe6f02b.slice. 
Jun 20 19:59:41.741850 kubelet[2708]: I0620 19:59:41.741802 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8595b000-4a1f-4f89-a025-4a6ecbe6f02b-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-s9482\" (UID: \"8595b000-4a1f-4f89-a025-4a6ecbe6f02b\") " pod="kube-system/cilium-operator-6c4d7847fc-s9482" Jun 20 19:59:41.742349 kubelet[2708]: I0620 19:59:41.741904 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d46fm\" (UniqueName: \"kubernetes.io/projected/8595b000-4a1f-4f89-a025-4a6ecbe6f02b-kube-api-access-d46fm\") pod \"cilium-operator-6c4d7847fc-s9482\" (UID: \"8595b000-4a1f-4f89-a025-4a6ecbe6f02b\") " pod="kube-system/cilium-operator-6c4d7847fc-s9482" Jun 20 19:59:41.802763 kubelet[2708]: E0620 19:59:41.802729 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 19:59:41.885923 kubelet[2708]: E0620 19:59:41.885604 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 19:59:41.886656 containerd[1543]: time="2025-06-20T19:59:41.886600790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gpnq7,Uid:d610bdb4-8754-4eac-805c-c397ada4c9c2,Namespace:kube-system,Attempt:0,}" Jun 20 19:59:41.892738 kubelet[2708]: E0620 19:59:41.892589 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 19:59:41.894380 containerd[1543]: time="2025-06-20T19:59:41.893285150Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-hflpf,Uid:6652ac11-e062-498c-b924-57a62eca484a,Namespace:kube-system,Attempt:0,}" Jun 20 19:59:41.913020 containerd[1543]: time="2025-06-20T19:59:41.912959470Z" level=info msg="connecting to shim 33b90a14eb2d388b5e761f4e647ecfdeea6c907a8a7d1faad1c1787b76cc767f" address="unix:///run/containerd/s/a20eaa5541681ad233bce9ce78adb432aff38be9709fa68ea5d47db482cec876" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:59:41.938118 systemd[1]: Started cri-containerd-33b90a14eb2d388b5e761f4e647ecfdeea6c907a8a7d1faad1c1787b76cc767f.scope - libcontainer container 33b90a14eb2d388b5e761f4e647ecfdeea6c907a8a7d1faad1c1787b76cc767f. Jun 20 19:59:41.939968 containerd[1543]: time="2025-06-20T19:59:41.939934139Z" level=info msg="connecting to shim 82c1f2c83509c932bda442f7c58c8d6a23f82b8319721d0c4c75255f797fe347" address="unix:///run/containerd/s/25356a9fa247a2983e5fefb3dbe9728050379a3292de6100d329585cb9ce77f4" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:59:41.961607 kubelet[2708]: E0620 19:59:41.961564 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 19:59:41.963837 containerd[1543]: time="2025-06-20T19:59:41.963793176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-s9482,Uid:8595b000-4a1f-4f89-a025-4a6ecbe6f02b,Namespace:kube-system,Attempt:0,}" Jun 20 19:59:41.968537 systemd[1]: Started cri-containerd-82c1f2c83509c932bda442f7c58c8d6a23f82b8319721d0c4c75255f797fe347.scope - libcontainer container 82c1f2c83509c932bda442f7c58c8d6a23f82b8319721d0c4c75255f797fe347. 
Jun 20 19:59:41.992446 containerd[1543]: time="2025-06-20T19:59:41.992418728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gpnq7,Uid:d610bdb4-8754-4eac-805c-c397ada4c9c2,Namespace:kube-system,Attempt:0,} returns sandbox id \"33b90a14eb2d388b5e761f4e647ecfdeea6c907a8a7d1faad1c1787b76cc767f\"" Jun 20 19:59:41.994891 kubelet[2708]: E0620 19:59:41.994873 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 19:59:42.009893 containerd[1543]: time="2025-06-20T19:59:42.006500510Z" level=info msg="CreateContainer within sandbox \"33b90a14eb2d388b5e761f4e647ecfdeea6c907a8a7d1faad1c1787b76cc767f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 20 19:59:42.012929 containerd[1543]: time="2025-06-20T19:59:42.012885446Z" level=info msg="connecting to shim 4a73b2e1b34a32f03c6da9563a29b4efd7429fc562acbc666c9d961c01e24fd9" address="unix:///run/containerd/s/50bf35a3570c46d1335d8d4702b23816b1a9aae5dcea3cdc5402131604fba7b9" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:59:42.022795 containerd[1543]: time="2025-06-20T19:59:42.022664582Z" level=info msg="Container a54bc94cdbeb770e2c101fc1833eaf42fb8029041ccd654cb962bd96cd4f5531: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:59:42.029752 containerd[1543]: time="2025-06-20T19:59:42.029729743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hflpf,Uid:6652ac11-e062-498c-b924-57a62eca484a,Namespace:kube-system,Attempt:0,} returns sandbox id \"82c1f2c83509c932bda442f7c58c8d6a23f82b8319721d0c4c75255f797fe347\"" Jun 20 19:59:42.030947 kubelet[2708]: E0620 19:59:42.030930 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 19:59:42.033451 containerd[1543]: 
time="2025-06-20T19:59:42.033011744Z" level=info msg="CreateContainer within sandbox \"33b90a14eb2d388b5e761f4e647ecfdeea6c907a8a7d1faad1c1787b76cc767f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a54bc94cdbeb770e2c101fc1833eaf42fb8029041ccd654cb962bd96cd4f5531\"" Jun 20 19:59:42.034172 containerd[1543]: time="2025-06-20T19:59:42.033291198Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jun 20 19:59:42.034404 containerd[1543]: time="2025-06-20T19:59:42.034118601Z" level=info msg="StartContainer for \"a54bc94cdbeb770e2c101fc1833eaf42fb8029041ccd654cb962bd96cd4f5531\"" Jun 20 19:59:42.036543 containerd[1543]: time="2025-06-20T19:59:42.036521681Z" level=info msg="connecting to shim a54bc94cdbeb770e2c101fc1833eaf42fb8029041ccd654cb962bd96cd4f5531" address="unix:///run/containerd/s/a20eaa5541681ad233bce9ce78adb432aff38be9709fa68ea5d47db482cec876" protocol=ttrpc version=3 Jun 20 19:59:42.048512 systemd[1]: Started cri-containerd-4a73b2e1b34a32f03c6da9563a29b4efd7429fc562acbc666c9d961c01e24fd9.scope - libcontainer container 4a73b2e1b34a32f03c6da9563a29b4efd7429fc562acbc666c9d961c01e24fd9. Jun 20 19:59:42.064544 systemd[1]: Started cri-containerd-a54bc94cdbeb770e2c101fc1833eaf42fb8029041ccd654cb962bd96cd4f5531.scope - libcontainer container a54bc94cdbeb770e2c101fc1833eaf42fb8029041ccd654cb962bd96cd4f5531. 
Jun 20 19:59:42.113979 containerd[1543]: time="2025-06-20T19:59:42.113756922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-s9482,Uid:8595b000-4a1f-4f89-a025-4a6ecbe6f02b,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a73b2e1b34a32f03c6da9563a29b4efd7429fc562acbc666c9d961c01e24fd9\"" Jun 20 19:59:42.115513 kubelet[2708]: E0620 19:59:42.115478 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 19:59:42.128179 containerd[1543]: time="2025-06-20T19:59:42.128136901Z" level=info msg="StartContainer for \"a54bc94cdbeb770e2c101fc1833eaf42fb8029041ccd654cb962bd96cd4f5531\" returns successfully" Jun 20 19:59:42.160346 kubelet[2708]: E0620 19:59:42.160157 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 19:59:42.171464 kubelet[2708]: I0620 19:59:42.170556 2708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gpnq7" podStartSLOduration=1.1705355530000001 podStartE2EDuration="1.170535553s" podCreationTimestamp="2025-06-20 19:59:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:59:42.169585752 +0000 UTC m=+6.137826560" watchObservedRunningTime="2025-06-20 19:59:42.170535553 +0000 UTC m=+6.138776361" Jun 20 19:59:42.667868 kubelet[2708]: E0620 19:59:42.667411 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 19:59:43.162473 kubelet[2708]: E0620 19:59:43.162409 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 19:59:44.041304 kubelet[2708]: E0620 19:59:44.041240 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 19:59:44.165058 kubelet[2708]: E0620 19:59:44.165019 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 19:59:44.165531 kubelet[2708]: E0620 19:59:44.165253 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 19:59:45.290605 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2616038492.mount: Deactivated successfully. Jun 20 19:59:46.743997 containerd[1543]: time="2025-06-20T19:59:46.743929948Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:59:46.745040 containerd[1543]: time="2025-06-20T19:59:46.744892393Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jun 20 19:59:46.745635 containerd[1543]: time="2025-06-20T19:59:46.745599941Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:59:46.746787 containerd[1543]: time="2025-06-20T19:59:46.746753772Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id 
\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 4.712426846s" Jun 20 19:59:46.746865 containerd[1543]: time="2025-06-20T19:59:46.746851021Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jun 20 19:59:46.748565 containerd[1543]: time="2025-06-20T19:59:46.748525874Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jun 20 19:59:46.751699 containerd[1543]: time="2025-06-20T19:59:46.751656112Z" level=info msg="CreateContainer within sandbox \"82c1f2c83509c932bda442f7c58c8d6a23f82b8319721d0c4c75255f797fe347\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 20 19:59:46.760754 containerd[1543]: time="2025-06-20T19:59:46.760142404Z" level=info msg="Container 6aed80e581f11c91a9943456e999fdcd8e1c9da959898eb51accf2106708681e: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:59:46.764273 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2107303959.mount: Deactivated successfully. 
Jun 20 19:59:46.769164 containerd[1543]: time="2025-06-20T19:59:46.769120928Z" level=info msg="CreateContainer within sandbox \"82c1f2c83509c932bda442f7c58c8d6a23f82b8319721d0c4c75255f797fe347\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6aed80e581f11c91a9943456e999fdcd8e1c9da959898eb51accf2106708681e\"" Jun 20 19:59:46.769675 containerd[1543]: time="2025-06-20T19:59:46.769630419Z" level=info msg="StartContainer for \"6aed80e581f11c91a9943456e999fdcd8e1c9da959898eb51accf2106708681e\"" Jun 20 19:59:46.770605 containerd[1543]: time="2025-06-20T19:59:46.770569564Z" level=info msg="connecting to shim 6aed80e581f11c91a9943456e999fdcd8e1c9da959898eb51accf2106708681e" address="unix:///run/containerd/s/25356a9fa247a2983e5fefb3dbe9728050379a3292de6100d329585cb9ce77f4" protocol=ttrpc version=3 Jun 20 19:59:46.796615 systemd[1]: Started cri-containerd-6aed80e581f11c91a9943456e999fdcd8e1c9da959898eb51accf2106708681e.scope - libcontainer container 6aed80e581f11c91a9943456e999fdcd8e1c9da959898eb51accf2106708681e. Jun 20 19:59:46.829209 containerd[1543]: time="2025-06-20T19:59:46.829148379Z" level=info msg="StartContainer for \"6aed80e581f11c91a9943456e999fdcd8e1c9da959898eb51accf2106708681e\" returns successfully" Jun 20 19:59:46.844040 systemd[1]: cri-containerd-6aed80e581f11c91a9943456e999fdcd8e1c9da959898eb51accf2106708681e.scope: Deactivated successfully. 
Jun 20 19:59:46.848354 containerd[1543]: time="2025-06-20T19:59:46.848324226Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6aed80e581f11c91a9943456e999fdcd8e1c9da959898eb51accf2106708681e\" id:\"6aed80e581f11c91a9943456e999fdcd8e1c9da959898eb51accf2106708681e\" pid:3127 exited_at:{seconds:1750449586 nanos:847902433}" Jun 20 19:59:46.848537 containerd[1543]: time="2025-06-20T19:59:46.848474974Z" level=info msg="received exit event container_id:\"6aed80e581f11c91a9943456e999fdcd8e1c9da959898eb51accf2106708681e\" id:\"6aed80e581f11c91a9943456e999fdcd8e1c9da959898eb51accf2106708681e\" pid:3127 exited_at:{seconds:1750449586 nanos:847902433}" Jun 20 19:59:46.873275 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6aed80e581f11c91a9943456e999fdcd8e1c9da959898eb51accf2106708681e-rootfs.mount: Deactivated successfully. Jun 20 19:59:47.173641 kubelet[2708]: E0620 19:59:47.172752 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 19:59:47.178520 containerd[1543]: time="2025-06-20T19:59:47.177807077Z" level=info msg="CreateContainer within sandbox \"82c1f2c83509c932bda442f7c58c8d6a23f82b8319721d0c4c75255f797fe347\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 20 19:59:47.186167 containerd[1543]: time="2025-06-20T19:59:47.186129180Z" level=info msg="Container 6e110e0062a86db2ad58f7cb9e310a5b605acdc5e48ceaa5a18f7d53d4d1674b: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:59:47.192529 containerd[1543]: time="2025-06-20T19:59:47.192493523Z" level=info msg="CreateContainer within sandbox \"82c1f2c83509c932bda442f7c58c8d6a23f82b8319721d0c4c75255f797fe347\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6e110e0062a86db2ad58f7cb9e310a5b605acdc5e48ceaa5a18f7d53d4d1674b\"" Jun 20 19:59:47.193034 containerd[1543]: 
time="2025-06-20T19:59:47.192998895Z" level=info msg="StartContainer for \"6e110e0062a86db2ad58f7cb9e310a5b605acdc5e48ceaa5a18f7d53d4d1674b\"" Jun 20 19:59:47.193981 containerd[1543]: time="2025-06-20T19:59:47.193932310Z" level=info msg="connecting to shim 6e110e0062a86db2ad58f7cb9e310a5b605acdc5e48ceaa5a18f7d53d4d1674b" address="unix:///run/containerd/s/25356a9fa247a2983e5fefb3dbe9728050379a3292de6100d329585cb9ce77f4" protocol=ttrpc version=3 Jun 20 19:59:47.224510 systemd[1]: Started cri-containerd-6e110e0062a86db2ad58f7cb9e310a5b605acdc5e48ceaa5a18f7d53d4d1674b.scope - libcontainer container 6e110e0062a86db2ad58f7cb9e310a5b605acdc5e48ceaa5a18f7d53d4d1674b. Jun 20 19:59:47.257041 containerd[1543]: time="2025-06-20T19:59:47.256962146Z" level=info msg="StartContainer for \"6e110e0062a86db2ad58f7cb9e310a5b605acdc5e48ceaa5a18f7d53d4d1674b\" returns successfully" Jun 20 19:59:47.273313 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 20 19:59:47.273895 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 20 19:59:47.274330 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jun 20 19:59:47.276764 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 19:59:47.277094 systemd[1]: cri-containerd-6e110e0062a86db2ad58f7cb9e310a5b605acdc5e48ceaa5a18f7d53d4d1674b.scope: Deactivated successfully. 
Jun 20 19:59:47.279685 containerd[1543]: time="2025-06-20T19:59:47.279148106Z" level=info msg="received exit event container_id:\"6e110e0062a86db2ad58f7cb9e310a5b605acdc5e48ceaa5a18f7d53d4d1674b\" id:\"6e110e0062a86db2ad58f7cb9e310a5b605acdc5e48ceaa5a18f7d53d4d1674b\" pid:3172 exited_at:{seconds:1750449587 nanos:278784621}" Jun 20 19:59:47.279962 containerd[1543]: time="2025-06-20T19:59:47.279917354Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6e110e0062a86db2ad58f7cb9e310a5b605acdc5e48ceaa5a18f7d53d4d1674b\" id:\"6e110e0062a86db2ad58f7cb9e310a5b605acdc5e48ceaa5a18f7d53d4d1674b\" pid:3172 exited_at:{seconds:1750449587 nanos:278784621}" Jun 20 19:59:47.305336 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 19:59:47.939884 containerd[1543]: time="2025-06-20T19:59:47.939840017Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:59:47.940855 containerd[1543]: time="2025-06-20T19:59:47.940665614Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jun 20 19:59:47.941377 containerd[1543]: time="2025-06-20T19:59:47.941331414Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:59:47.942485 containerd[1543]: time="2025-06-20T19:59:47.942464527Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest 
\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.193896154s" Jun 20 19:59:47.942563 containerd[1543]: time="2025-06-20T19:59:47.942547446Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jun 20 19:59:47.944660 containerd[1543]: time="2025-06-20T19:59:47.944611824Z" level=info msg="CreateContainer within sandbox \"4a73b2e1b34a32f03c6da9563a29b4efd7429fc562acbc666c9d961c01e24fd9\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jun 20 19:59:47.954395 containerd[1543]: time="2025-06-20T19:59:47.950254047Z" level=info msg="Container 7f6faa6d8883a6b052b528bbf95d803be75fefef4cddbcb9911b82a051144a08: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:59:47.965195 containerd[1543]: time="2025-06-20T19:59:47.965174319Z" level=info msg="CreateContainer within sandbox \"4a73b2e1b34a32f03c6da9563a29b4efd7429fc562acbc666c9d961c01e24fd9\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"7f6faa6d8883a6b052b528bbf95d803be75fefef4cddbcb9911b82a051144a08\"" Jun 20 19:59:47.966014 containerd[1543]: time="2025-06-20T19:59:47.965551863Z" level=info msg="StartContainer for \"7f6faa6d8883a6b052b528bbf95d803be75fefef4cddbcb9911b82a051144a08\"" Jun 20 19:59:47.966689 containerd[1543]: time="2025-06-20T19:59:47.966668506Z" level=info msg="connecting to shim 7f6faa6d8883a6b052b528bbf95d803be75fefef4cddbcb9911b82a051144a08" address="unix:///run/containerd/s/50bf35a3570c46d1335d8d4702b23816b1a9aae5dcea3cdc5402131604fba7b9" protocol=ttrpc version=3 Jun 20 19:59:47.987474 systemd[1]: Started cri-containerd-7f6faa6d8883a6b052b528bbf95d803be75fefef4cddbcb9911b82a051144a08.scope - libcontainer container 7f6faa6d8883a6b052b528bbf95d803be75fefef4cddbcb9911b82a051144a08. 
Jun 20 19:59:48.012149 containerd[1543]: time="2025-06-20T19:59:48.012118000Z" level=info msg="StartContainer for \"7f6faa6d8883a6b052b528bbf95d803be75fefef4cddbcb9911b82a051144a08\" returns successfully" Jun 20 19:59:48.178162 kubelet[2708]: E0620 19:59:48.178119 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 19:59:48.181650 kubelet[2708]: E0620 19:59:48.181623 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 19:59:48.185228 containerd[1543]: time="2025-06-20T19:59:48.185173140Z" level=info msg="CreateContainer within sandbox \"82c1f2c83509c932bda442f7c58c8d6a23f82b8319721d0c4c75255f797fe347\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 20 19:59:48.201573 containerd[1543]: time="2025-06-20T19:59:48.201477535Z" level=info msg="Container 785adc14ac52497b992a52bf772cc6d2777ffe50a8dff18bb0d7620c3570cb11: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:59:48.208633 containerd[1543]: time="2025-06-20T19:59:48.208606573Z" level=info msg="CreateContainer within sandbox \"82c1f2c83509c932bda442f7c58c8d6a23f82b8319721d0c4c75255f797fe347\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"785adc14ac52497b992a52bf772cc6d2777ffe50a8dff18bb0d7620c3570cb11\"" Jun 20 19:59:48.209146 containerd[1543]: time="2025-06-20T19:59:48.209117636Z" level=info msg="StartContainer for \"785adc14ac52497b992a52bf772cc6d2777ffe50a8dff18bb0d7620c3570cb11\"" Jun 20 19:59:48.210194 containerd[1543]: time="2025-06-20T19:59:48.210166540Z" level=info msg="connecting to shim 785adc14ac52497b992a52bf772cc6d2777ffe50a8dff18bb0d7620c3570cb11" address="unix:///run/containerd/s/25356a9fa247a2983e5fefb3dbe9728050379a3292de6100d329585cb9ce77f4" protocol=ttrpc 
version=3 Jun 20 19:59:48.224831 kubelet[2708]: I0620 19:59:48.224769 2708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-s9482" podStartSLOduration=1.398350562 podStartE2EDuration="7.224747521s" podCreationTimestamp="2025-06-20 19:59:41 +0000 UTC" firstStartedPulling="2025-06-20 19:59:42.116867446 +0000 UTC m=+6.085108254" lastFinishedPulling="2025-06-20 19:59:47.943264405 +0000 UTC m=+11.911505213" observedRunningTime="2025-06-20 19:59:48.222646591 +0000 UTC m=+12.190887399" watchObservedRunningTime="2025-06-20 19:59:48.224747521 +0000 UTC m=+12.192988319" Jun 20 19:59:48.237655 systemd[1]: Started cri-containerd-785adc14ac52497b992a52bf772cc6d2777ffe50a8dff18bb0d7620c3570cb11.scope - libcontainer container 785adc14ac52497b992a52bf772cc6d2777ffe50a8dff18bb0d7620c3570cb11. Jun 20 19:59:48.316899 containerd[1543]: time="2025-06-20T19:59:48.316847225Z" level=info msg="StartContainer for \"785adc14ac52497b992a52bf772cc6d2777ffe50a8dff18bb0d7620c3570cb11\" returns successfully" Jun 20 19:59:48.323656 systemd[1]: cri-containerd-785adc14ac52497b992a52bf772cc6d2777ffe50a8dff18bb0d7620c3570cb11.scope: Deactivated successfully. 
Jun 20 19:59:48.325136 containerd[1543]: time="2025-06-20T19:59:48.325081777Z" level=info msg="TaskExit event in podsandbox handler container_id:\"785adc14ac52497b992a52bf772cc6d2777ffe50a8dff18bb0d7620c3570cb11\" id:\"785adc14ac52497b992a52bf772cc6d2777ffe50a8dff18bb0d7620c3570cb11\" pid:3267 exited_at:{seconds:1750449588 nanos:324608724}" Jun 20 19:59:48.325264 containerd[1543]: time="2025-06-20T19:59:48.325215484Z" level=info msg="received exit event container_id:\"785adc14ac52497b992a52bf772cc6d2777ffe50a8dff18bb0d7620c3570cb11\" id:\"785adc14ac52497b992a52bf772cc6d2777ffe50a8dff18bb0d7620c3570cb11\" pid:3267 exited_at:{seconds:1750449588 nanos:324608724}" Jun 20 19:59:49.195190 kubelet[2708]: E0620 19:59:49.194704 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 19:59:49.195190 kubelet[2708]: E0620 19:59:49.194980 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 19:59:49.198245 containerd[1543]: time="2025-06-20T19:59:49.198201934Z" level=info msg="CreateContainer within sandbox \"82c1f2c83509c932bda442f7c58c8d6a23f82b8319721d0c4c75255f797fe347\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 20 19:59:49.209836 containerd[1543]: time="2025-06-20T19:59:49.209400562Z" level=info msg="Container 289c3cb45ceb171960e6770527ef40d591dc21a27f2bc6044e89c653eb33b31a: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:59:49.214884 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2839582437.mount: Deactivated successfully. 
Jun 20 19:59:49.218467 containerd[1543]: time="2025-06-20T19:59:49.218444201Z" level=info msg="CreateContainer within sandbox \"82c1f2c83509c932bda442f7c58c8d6a23f82b8319721d0c4c75255f797fe347\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"289c3cb45ceb171960e6770527ef40d591dc21a27f2bc6044e89c653eb33b31a\"" Jun 20 19:59:49.219669 containerd[1543]: time="2025-06-20T19:59:49.219571885Z" level=info msg="StartContainer for \"289c3cb45ceb171960e6770527ef40d591dc21a27f2bc6044e89c653eb33b31a\"" Jun 20 19:59:49.220340 containerd[1543]: time="2025-06-20T19:59:49.220309905Z" level=info msg="connecting to shim 289c3cb45ceb171960e6770527ef40d591dc21a27f2bc6044e89c653eb33b31a" address="unix:///run/containerd/s/25356a9fa247a2983e5fefb3dbe9728050379a3292de6100d329585cb9ce77f4" protocol=ttrpc version=3 Jun 20 19:59:49.243459 systemd[1]: Started cri-containerd-289c3cb45ceb171960e6770527ef40d591dc21a27f2bc6044e89c653eb33b31a.scope - libcontainer container 289c3cb45ceb171960e6770527ef40d591dc21a27f2bc6044e89c653eb33b31a. Jun 20 19:59:49.268090 systemd[1]: cri-containerd-289c3cb45ceb171960e6770527ef40d591dc21a27f2bc6044e89c653eb33b31a.scope: Deactivated successfully. 
Jun 20 19:59:49.270954 containerd[1543]: time="2025-06-20T19:59:49.270905702Z" level=info msg="TaskExit event in podsandbox handler container_id:\"289c3cb45ceb171960e6770527ef40d591dc21a27f2bc6044e89c653eb33b31a\" id:\"289c3cb45ceb171960e6770527ef40d591dc21a27f2bc6044e89c653eb33b31a\" pid:3304 exited_at:{seconds:1750449589 nanos:270659975}" Jun 20 19:59:49.271114 containerd[1543]: time="2025-06-20T19:59:49.270964320Z" level=info msg="received exit event container_id:\"289c3cb45ceb171960e6770527ef40d591dc21a27f2bc6044e89c653eb33b31a\" id:\"289c3cb45ceb171960e6770527ef40d591dc21a27f2bc6044e89c653eb33b31a\" pid:3304 exited_at:{seconds:1750449589 nanos:270659975}" Jun 20 19:59:49.271484 containerd[1543]: time="2025-06-20T19:59:49.271401994Z" level=info msg="StartContainer for \"289c3cb45ceb171960e6770527ef40d591dc21a27f2bc6044e89c653eb33b31a\" returns successfully" Jun 20 19:59:49.290341 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-289c3cb45ceb171960e6770527ef40d591dc21a27f2bc6044e89c653eb33b31a-rootfs.mount: Deactivated successfully. Jun 20 19:59:50.201167 kubelet[2708]: E0620 19:59:50.201126 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 19:59:50.206045 containerd[1543]: time="2025-06-20T19:59:50.205049865Z" level=info msg="CreateContainer within sandbox \"82c1f2c83509c932bda442f7c58c8d6a23f82b8319721d0c4c75255f797fe347\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 20 19:59:50.220171 containerd[1543]: time="2025-06-20T19:59:50.220150084Z" level=info msg="Container 398600b28d288e22da178d377bf05cc26a7a6ea1cb0e4e4e9fb7871aa31bac4b: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:59:50.226845 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3652143546.mount: Deactivated successfully. 
Jun 20 19:59:50.233141 containerd[1543]: time="2025-06-20T19:59:50.233065639Z" level=info msg="CreateContainer within sandbox \"82c1f2c83509c932bda442f7c58c8d6a23f82b8319721d0c4c75255f797fe347\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"398600b28d288e22da178d377bf05cc26a7a6ea1cb0e4e4e9fb7871aa31bac4b\"" Jun 20 19:59:50.234736 containerd[1543]: time="2025-06-20T19:59:50.234718588Z" level=info msg="StartContainer for \"398600b28d288e22da178d377bf05cc26a7a6ea1cb0e4e4e9fb7871aa31bac4b\"" Jun 20 19:59:50.236948 containerd[1543]: time="2025-06-20T19:59:50.236814702Z" level=info msg="connecting to shim 398600b28d288e22da178d377bf05cc26a7a6ea1cb0e4e4e9fb7871aa31bac4b" address="unix:///run/containerd/s/25356a9fa247a2983e5fefb3dbe9728050379a3292de6100d329585cb9ce77f4" protocol=ttrpc version=3 Jun 20 19:59:50.264540 systemd[1]: Started cri-containerd-398600b28d288e22da178d377bf05cc26a7a6ea1cb0e4e4e9fb7871aa31bac4b.scope - libcontainer container 398600b28d288e22da178d377bf05cc26a7a6ea1cb0e4e4e9fb7871aa31bac4b. Jun 20 19:59:50.303066 containerd[1543]: time="2025-06-20T19:59:50.303015441Z" level=info msg="StartContainer for \"398600b28d288e22da178d377bf05cc26a7a6ea1cb0e4e4e9fb7871aa31bac4b\" returns successfully" Jun 20 19:59:50.390045 containerd[1543]: time="2025-06-20T19:59:50.389773028Z" level=info msg="TaskExit event in podsandbox handler container_id:\"398600b28d288e22da178d377bf05cc26a7a6ea1cb0e4e4e9fb7871aa31bac4b\" id:\"1285a59fd3df54d588aef18f4f91bcf3fe6dcfdeb7e6b69ee716535102ddbf74\" pid:3374 exited_at:{seconds:1750449590 nanos:388536975}" Jun 20 19:59:50.448057 kubelet[2708]: I0620 19:59:50.448017 2708 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jun 20 19:59:50.483405 systemd[1]: Created slice kubepods-burstable-pode653801d_ff51_4212_b36a_638b2837e893.slice - libcontainer container kubepods-burstable-pode653801d_ff51_4212_b36a_638b2837e893.slice. 
Jun 20 19:59:50.502983 systemd[1]: Created slice kubepods-burstable-podbfcbedee_1682_49c6_a27a_3c606791f025.slice - libcontainer container kubepods-burstable-podbfcbedee_1682_49c6_a27a_3c606791f025.slice. Jun 20 19:59:50.506631 kubelet[2708]: I0620 19:59:50.506583 2708 status_manager.go:890] "Failed to get status for pod" podUID="e653801d-ff51-4212-b36a-638b2837e893" pod="kube-system/coredns-668d6bf9bc-mh44x" err="pods \"coredns-668d6bf9bc-mh44x\" is forbidden: User \"system:node:172-237-156-205\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-237-156-205' and this object" Jun 20 19:59:50.507973 kubelet[2708]: I0620 19:59:50.506888 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpn8q\" (UniqueName: \"kubernetes.io/projected/e653801d-ff51-4212-b36a-638b2837e893-kube-api-access-zpn8q\") pod \"coredns-668d6bf9bc-mh44x\" (UID: \"e653801d-ff51-4212-b36a-638b2837e893\") " pod="kube-system/coredns-668d6bf9bc-mh44x" Jun 20 19:59:50.507973 kubelet[2708]: I0620 19:59:50.507838 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e653801d-ff51-4212-b36a-638b2837e893-config-volume\") pod \"coredns-668d6bf9bc-mh44x\" (UID: \"e653801d-ff51-4212-b36a-638b2837e893\") " pod="kube-system/coredns-668d6bf9bc-mh44x" Jun 20 19:59:50.508374 kubelet[2708]: W0620 19:59:50.508277 2708 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:172-237-156-205" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172-237-156-205' and this object Jun 20 19:59:50.508374 kubelet[2708]: E0620 19:59:50.508313 2708 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch 
*v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:172-237-156-205\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-237-156-205' and this object" logger="UnhandledError" Jun 20 19:59:50.608829 kubelet[2708]: I0620 19:59:50.608716 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkshx\" (UniqueName: \"kubernetes.io/projected/bfcbedee-1682-49c6-a27a-3c606791f025-kube-api-access-qkshx\") pod \"coredns-668d6bf9bc-zvc65\" (UID: \"bfcbedee-1682-49c6-a27a-3c606791f025\") " pod="kube-system/coredns-668d6bf9bc-zvc65" Jun 20 19:59:50.609874 kubelet[2708]: I0620 19:59:50.609834 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bfcbedee-1682-49c6-a27a-3c606791f025-config-volume\") pod \"coredns-668d6bf9bc-zvc65\" (UID: \"bfcbedee-1682-49c6-a27a-3c606791f025\") " pod="kube-system/coredns-668d6bf9bc-zvc65" Jun 20 19:59:51.209224 kubelet[2708]: E0620 19:59:51.206421 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 19:59:51.690329 kubelet[2708]: E0620 19:59:51.689975 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 19:59:51.691267 containerd[1543]: time="2025-06-20T19:59:51.691186717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mh44x,Uid:e653801d-ff51-4212-b36a-638b2837e893,Namespace:kube-system,Attempt:0,}" Jun 20 19:59:51.708114 kubelet[2708]: E0620 19:59:51.708061 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 19:59:51.709769 containerd[1543]: time="2025-06-20T19:59:51.709628507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zvc65,Uid:bfcbedee-1682-49c6-a27a-3c606791f025,Namespace:kube-system,Attempt:0,}" Jun 20 19:59:51.807325 kubelet[2708]: E0620 19:59:51.807275 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 19:59:51.817233 kubelet[2708]: I0620 19:59:51.816944 2708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hflpf" podStartSLOduration=6.101391099 podStartE2EDuration="10.816917955s" podCreationTimestamp="2025-06-20 19:59:41 +0000 UTC" firstStartedPulling="2025-06-20 19:59:42.032626743 +0000 UTC m=+6.000867551" lastFinishedPulling="2025-06-20 19:59:46.748153599 +0000 UTC m=+10.716394407" observedRunningTime="2025-06-20 19:59:51.225866385 +0000 UTC m=+15.194107183" watchObservedRunningTime="2025-06-20 19:59:51.816917955 +0000 UTC m=+15.785158763" Jun 20 19:59:52.208700 kubelet[2708]: E0620 19:59:52.208655 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 19:59:52.411948 update_engine[1523]: I20250620 19:59:52.411852 1523 update_attempter.cc:509] Updating boot flags... 
Jun 20 19:59:52.580434 systemd-networkd[1458]: cilium_host: Link UP Jun 20 19:59:52.580620 systemd-networkd[1458]: cilium_net: Link UP Jun 20 19:59:52.580801 systemd-networkd[1458]: cilium_host: Gained carrier Jun 20 19:59:52.580971 systemd-networkd[1458]: cilium_net: Gained carrier Jun 20 19:59:52.793944 systemd-networkd[1458]: cilium_vxlan: Link UP Jun 20 19:59:52.793961 systemd-networkd[1458]: cilium_vxlan: Gained carrier Jun 20 19:59:53.004784 kernel: NET: Registered PF_ALG protocol family Jun 20 19:59:53.211338 kubelet[2708]: E0620 19:59:53.211285 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 19:59:53.364655 systemd-networkd[1458]: cilium_net: Gained IPv6LL Jun 20 19:59:53.492610 systemd-networkd[1458]: cilium_host: Gained IPv6LL Jun 20 19:59:53.623482 systemd-networkd[1458]: lxc_health: Link UP Jun 20 19:59:53.630457 systemd-networkd[1458]: lxc_health: Gained carrier Jun 20 19:59:53.744572 kernel: eth0: renamed from tmp921d6 Jun 20 19:59:53.746461 systemd-networkd[1458]: lxc04e1581db4cd: Link UP Jun 20 19:59:53.748266 systemd-networkd[1458]: lxc04e1581db4cd: Gained carrier Jun 20 19:59:54.196520 systemd-networkd[1458]: cilium_vxlan: Gained IPv6LL Jun 20 19:59:54.219764 kubelet[2708]: E0620 19:59:54.219722 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 19:59:54.235423 kernel: eth0: renamed from tmp82644 Jun 20 19:59:54.239826 systemd-networkd[1458]: lxcccda56ac5a78: Link UP Jun 20 19:59:54.241346 systemd-networkd[1458]: lxcccda56ac5a78: Gained carrier Jun 20 19:59:54.644931 systemd-networkd[1458]: lxc_health: Gained IPv6LL Jun 20 19:59:55.220928 systemd-networkd[1458]: lxc04e1581db4cd: Gained IPv6LL Jun 20 19:59:55.412571 systemd-networkd[1458]: lxcccda56ac5a78: 
Gained IPv6LL Jun 20 19:59:56.615159 containerd[1543]: time="2025-06-20T19:59:56.614941282Z" level=info msg="connecting to shim 921d61ea5a9b85bbee31d53c3bc8c0f8794e35436be083bcb4e004a7466d5e89" address="unix:///run/containerd/s/85000f75aa9e615c48cc9a16a2d801608393093c4e54a8093a29d7ba1b817b86" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:59:56.639267 containerd[1543]: time="2025-06-20T19:59:56.638828002Z" level=info msg="connecting to shim 82644f78fb0dda0b05d9101c11cd97c99a34cad4f2a36d9d83bbe88659497f5b" address="unix:///run/containerd/s/1158b8491bf74299d6e6e45c805c85a973baa85e8be7507995c5bc84074f9e99" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:59:56.666545 systemd[1]: Started cri-containerd-921d61ea5a9b85bbee31d53c3bc8c0f8794e35436be083bcb4e004a7466d5e89.scope - libcontainer container 921d61ea5a9b85bbee31d53c3bc8c0f8794e35436be083bcb4e004a7466d5e89. Jun 20 19:59:56.679623 systemd[1]: Started cri-containerd-82644f78fb0dda0b05d9101c11cd97c99a34cad4f2a36d9d83bbe88659497f5b.scope - libcontainer container 82644f78fb0dda0b05d9101c11cd97c99a34cad4f2a36d9d83bbe88659497f5b. 
Jun 20 19:59:56.763104 containerd[1543]: time="2025-06-20T19:59:56.763040621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zvc65,Uid:bfcbedee-1682-49c6-a27a-3c606791f025,Namespace:kube-system,Attempt:0,} returns sandbox id \"921d61ea5a9b85bbee31d53c3bc8c0f8794e35436be083bcb4e004a7466d5e89\"" Jun 20 19:59:56.764169 kubelet[2708]: E0620 19:59:56.764127 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 19:59:56.767764 containerd[1543]: time="2025-06-20T19:59:56.767497831Z" level=info msg="CreateContainer within sandbox \"921d61ea5a9b85bbee31d53c3bc8c0f8794e35436be083bcb4e004a7466d5e89\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 19:59:56.770557 containerd[1543]: time="2025-06-20T19:59:56.770538585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mh44x,Uid:e653801d-ff51-4212-b36a-638b2837e893,Namespace:kube-system,Attempt:0,} returns sandbox id \"82644f78fb0dda0b05d9101c11cd97c99a34cad4f2a36d9d83bbe88659497f5b\"" Jun 20 19:59:56.771501 kubelet[2708]: E0620 19:59:56.771483 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 19:59:56.774238 containerd[1543]: time="2025-06-20T19:59:56.774201073Z" level=info msg="CreateContainer within sandbox \"82644f78fb0dda0b05d9101c11cd97c99a34cad4f2a36d9d83bbe88659497f5b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 19:59:56.785148 containerd[1543]: time="2025-06-20T19:59:56.784865019Z" level=info msg="Container f974d31d675fc52eaee613ee88739585506f6ee894ebac8d3eac2a5f5885073a: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:59:56.790708 containerd[1543]: time="2025-06-20T19:59:56.790669768Z" level=info msg="Container 
41420678aa16f04c208ce00f984475033d6f16536ada810b084c64a60cc49667: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:59:56.797268 containerd[1543]: time="2025-06-20T19:59:56.796695215Z" level=info msg="CreateContainer within sandbox \"921d61ea5a9b85bbee31d53c3bc8c0f8794e35436be083bcb4e004a7466d5e89\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f974d31d675fc52eaee613ee88739585506f6ee894ebac8d3eac2a5f5885073a\"" Jun 20 19:59:56.798057 containerd[1543]: time="2025-06-20T19:59:56.797995383Z" level=info msg="StartContainer for \"f974d31d675fc52eaee613ee88739585506f6ee894ebac8d3eac2a5f5885073a\"" Jun 20 19:59:56.798883 containerd[1543]: time="2025-06-20T19:59:56.798848096Z" level=info msg="connecting to shim f974d31d675fc52eaee613ee88739585506f6ee894ebac8d3eac2a5f5885073a" address="unix:///run/containerd/s/85000f75aa9e615c48cc9a16a2d801608393093c4e54a8093a29d7ba1b817b86" protocol=ttrpc version=3 Jun 20 19:59:56.804857 containerd[1543]: time="2025-06-20T19:59:56.804827434Z" level=info msg="CreateContainer within sandbox \"82644f78fb0dda0b05d9101c11cd97c99a34cad4f2a36d9d83bbe88659497f5b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"41420678aa16f04c208ce00f984475033d6f16536ada810b084c64a60cc49667\"" Jun 20 19:59:56.805635 containerd[1543]: time="2025-06-20T19:59:56.805591566Z" level=info msg="StartContainer for \"41420678aa16f04c208ce00f984475033d6f16536ada810b084c64a60cc49667\"" Jun 20 19:59:56.806290 containerd[1543]: time="2025-06-20T19:59:56.806259261Z" level=info msg="connecting to shim 41420678aa16f04c208ce00f984475033d6f16536ada810b084c64a60cc49667" address="unix:///run/containerd/s/1158b8491bf74299d6e6e45c805c85a973baa85e8be7507995c5bc84074f9e99" protocol=ttrpc version=3 Jun 20 19:59:56.828656 systemd[1]: Started cri-containerd-f974d31d675fc52eaee613ee88739585506f6ee894ebac8d3eac2a5f5885073a.scope - libcontainer container f974d31d675fc52eaee613ee88739585506f6ee894ebac8d3eac2a5f5885073a. 
Jun 20 19:59:56.839530 systemd[1]: Started cri-containerd-41420678aa16f04c208ce00f984475033d6f16536ada810b084c64a60cc49667.scope - libcontainer container 41420678aa16f04c208ce00f984475033d6f16536ada810b084c64a60cc49667. Jun 20 19:59:56.875818 containerd[1543]: time="2025-06-20T19:59:56.875665701Z" level=info msg="StartContainer for \"f974d31d675fc52eaee613ee88739585506f6ee894ebac8d3eac2a5f5885073a\" returns successfully" Jun 20 19:59:56.883925 containerd[1543]: time="2025-06-20T19:59:56.883815299Z" level=info msg="StartContainer for \"41420678aa16f04c208ce00f984475033d6f16536ada810b084c64a60cc49667\" returns successfully" Jun 20 19:59:57.226705 kubelet[2708]: E0620 19:59:57.226331 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 19:59:57.232626 kubelet[2708]: E0620 19:59:57.232604 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 19:59:57.254677 kubelet[2708]: I0620 19:59:57.254638 2708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-mh44x" podStartSLOduration=16.254621024 podStartE2EDuration="16.254621024s" podCreationTimestamp="2025-06-20 19:59:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:59:57.243617414 +0000 UTC m=+21.211858222" watchObservedRunningTime="2025-06-20 19:59:57.254621024 +0000 UTC m=+21.222861832" Jun 20 19:59:57.270890 kubelet[2708]: I0620 19:59:57.270853 2708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-zvc65" podStartSLOduration=16.27083706 podStartE2EDuration="16.27083706s" podCreationTimestamp="2025-06-20 19:59:41 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:59:57.269513461 +0000 UTC m=+21.237754279" watchObservedRunningTime="2025-06-20 19:59:57.27083706 +0000 UTC m=+21.239077868" Jun 20 19:59:58.234157 kubelet[2708]: E0620 19:59:58.233718 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 19:59:58.234157 kubelet[2708]: E0620 19:59:58.233966 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 19:59:59.235881 kubelet[2708]: E0620 19:59:59.235715 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 19:59:59.235881 kubelet[2708]: E0620 19:59:59.235786 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 20:00:04.463398 kubelet[2708]: I0620 20:00:04.463284 2708 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 20:00:04.464723 kubelet[2708]: E0620 20:00:04.464306 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 20:00:05.247375 kubelet[2708]: E0620 20:00:05.247323 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 20:00:44.113872 kubelet[2708]: E0620 20:00:44.111544 2708 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 20:00:55.111472 kubelet[2708]: E0620 20:00:55.111432 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 20:01:12.111399 kubelet[2708]: E0620 20:01:12.110928 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 20:01:14.111089 kubelet[2708]: E0620 20:01:14.110747 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 20:01:20.111774 kubelet[2708]: E0620 20:01:20.111082 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 20:01:21.110892 kubelet[2708]: E0620 20:01:21.110859 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 20:01:26.111674 kubelet[2708]: E0620 20:01:26.111184 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 20:01:27.111127 kubelet[2708]: E0620 20:01:27.111085 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 20:01:45.110845 kubelet[2708]: E0620 20:01:45.110800 2708 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 20:01:57.309047 systemd[1]: Started sshd@7-172.237.156.205:22-139.178.68.195:53866.service - OpenSSH per-connection server daemon (139.178.68.195:53866). Jun 20 20:01:57.652668 sshd[4045]: Accepted publickey for core from 139.178.68.195 port 53866 ssh2: RSA SHA256:DORs9jqbzw/nKviDUfk2h2gSdkTiP7QQB1UTf2L7NeA Jun 20 20:01:57.654211 sshd-session[4045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 20:01:57.659540 systemd-logind[1522]: New session 8 of user core. Jun 20 20:01:57.666495 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 20 20:01:57.976097 sshd[4047]: Connection closed by 139.178.68.195 port 53866 Jun 20 20:01:57.977549 sshd-session[4045]: pam_unix(sshd:session): session closed for user core Jun 20 20:01:57.982129 systemd-logind[1522]: Session 8 logged out. Waiting for processes to exit. Jun 20 20:01:57.982496 systemd[1]: sshd@7-172.237.156.205:22-139.178.68.195:53866.service: Deactivated successfully. Jun 20 20:01:57.984880 systemd[1]: session-8.scope: Deactivated successfully. Jun 20 20:01:57.987042 systemd-logind[1522]: Removed session 8. Jun 20 20:02:03.040959 systemd[1]: Started sshd@8-172.237.156.205:22-139.178.68.195:53876.service - OpenSSH per-connection server daemon (139.178.68.195:53876). 
Jun 20 20:02:03.111643 kubelet[2708]: E0620 20:02:03.111571 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 20:02:03.383293 sshd[4060]: Accepted publickey for core from 139.178.68.195 port 53876 ssh2: RSA SHA256:DORs9jqbzw/nKviDUfk2h2gSdkTiP7QQB1UTf2L7NeA Jun 20 20:02:03.384522 sshd-session[4060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 20:02:03.389423 systemd-logind[1522]: New session 9 of user core. Jun 20 20:02:03.396516 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 20 20:02:03.688734 sshd[4062]: Connection closed by 139.178.68.195 port 53876 Jun 20 20:02:03.689745 sshd-session[4060]: pam_unix(sshd:session): session closed for user core Jun 20 20:02:03.694608 systemd-logind[1522]: Session 9 logged out. Waiting for processes to exit. Jun 20 20:02:03.695271 systemd[1]: sshd@8-172.237.156.205:22-139.178.68.195:53876.service: Deactivated successfully. Jun 20 20:02:03.697235 systemd[1]: session-9.scope: Deactivated successfully. Jun 20 20:02:03.699164 systemd-logind[1522]: Removed session 9. Jun 20 20:02:08.747772 systemd[1]: Started sshd@9-172.237.156.205:22-139.178.68.195:47032.service - OpenSSH per-connection server daemon (139.178.68.195:47032). Jun 20 20:02:09.085628 sshd[4076]: Accepted publickey for core from 139.178.68.195 port 47032 ssh2: RSA SHA256:DORs9jqbzw/nKviDUfk2h2gSdkTiP7QQB1UTf2L7NeA Jun 20 20:02:09.091700 sshd-session[4076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 20:02:09.098418 systemd-logind[1522]: New session 10 of user core. Jun 20 20:02:09.112696 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jun 20 20:02:09.391624 sshd[4078]: Connection closed by 139.178.68.195 port 47032 Jun 20 20:02:09.392573 sshd-session[4076]: pam_unix(sshd:session): session closed for user core Jun 20 20:02:09.396932 systemd[1]: sshd@9-172.237.156.205:22-139.178.68.195:47032.service: Deactivated successfully. Jun 20 20:02:09.399694 systemd[1]: session-10.scope: Deactivated successfully. Jun 20 20:02:09.400828 systemd-logind[1522]: Session 10 logged out. Waiting for processes to exit. Jun 20 20:02:09.402500 systemd-logind[1522]: Removed session 10. Jun 20 20:02:09.457125 systemd[1]: Started sshd@10-172.237.156.205:22-139.178.68.195:47048.service - OpenSSH per-connection server daemon (139.178.68.195:47048). Jun 20 20:02:09.800217 sshd[4092]: Accepted publickey for core from 139.178.68.195 port 47048 ssh2: RSA SHA256:DORs9jqbzw/nKviDUfk2h2gSdkTiP7QQB1UTf2L7NeA Jun 20 20:02:09.801651 sshd-session[4092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 20:02:09.807444 systemd-logind[1522]: New session 11 of user core. Jun 20 20:02:09.812546 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 20 20:02:10.139792 sshd[4094]: Connection closed by 139.178.68.195 port 47048 Jun 20 20:02:10.140549 sshd-session[4092]: pam_unix(sshd:session): session closed for user core Jun 20 20:02:10.145442 systemd-logind[1522]: Session 11 logged out. Waiting for processes to exit. Jun 20 20:02:10.146267 systemd[1]: sshd@10-172.237.156.205:22-139.178.68.195:47048.service: Deactivated successfully. Jun 20 20:02:10.148741 systemd[1]: session-11.scope: Deactivated successfully. Jun 20 20:02:10.150985 systemd-logind[1522]: Removed session 11. Jun 20 20:02:10.198577 systemd[1]: Started sshd@11-172.237.156.205:22-139.178.68.195:47064.service - OpenSSH per-connection server daemon (139.178.68.195:47064). 
Jun 20 20:02:10.530055 sshd[4104]: Accepted publickey for core from 139.178.68.195 port 47064 ssh2: RSA SHA256:DORs9jqbzw/nKviDUfk2h2gSdkTiP7QQB1UTf2L7NeA Jun 20 20:02:10.531487 sshd-session[4104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 20:02:10.536245 systemd-logind[1522]: New session 12 of user core. Jun 20 20:02:10.541492 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 20 20:02:10.823867 sshd[4106]: Connection closed by 139.178.68.195 port 47064 Jun 20 20:02:10.824740 sshd-session[4104]: pam_unix(sshd:session): session closed for user core Jun 20 20:02:10.828571 systemd-logind[1522]: Session 12 logged out. Waiting for processes to exit. Jun 20 20:02:10.829247 systemd[1]: sshd@11-172.237.156.205:22-139.178.68.195:47064.service: Deactivated successfully. Jun 20 20:02:10.831231 systemd[1]: session-12.scope: Deactivated successfully. Jun 20 20:02:10.832968 systemd-logind[1522]: Removed session 12. Jun 20 20:02:15.890558 systemd[1]: Started sshd@12-172.237.156.205:22-139.178.68.195:44692.service - OpenSSH per-connection server daemon (139.178.68.195:44692). Jun 20 20:02:16.228799 sshd[4120]: Accepted publickey for core from 139.178.68.195 port 44692 ssh2: RSA SHA256:DORs9jqbzw/nKviDUfk2h2gSdkTiP7QQB1UTf2L7NeA Jun 20 20:02:16.230069 sshd-session[4120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 20:02:16.234726 systemd-logind[1522]: New session 13 of user core. Jun 20 20:02:16.239495 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 20 20:02:16.531337 sshd[4122]: Connection closed by 139.178.68.195 port 44692 Jun 20 20:02:16.532586 sshd-session[4120]: pam_unix(sshd:session): session closed for user core Jun 20 20:02:16.537184 systemd-logind[1522]: Session 13 logged out. Waiting for processes to exit. Jun 20 20:02:16.537915 systemd[1]: sshd@12-172.237.156.205:22-139.178.68.195:44692.service: Deactivated successfully. 
Jun 20 20:02:16.540731 systemd[1]: session-13.scope: Deactivated successfully. Jun 20 20:02:16.542970 systemd-logind[1522]: Removed session 13. Jun 20 20:02:21.589714 systemd[1]: Started sshd@13-172.237.156.205:22-139.178.68.195:44706.service - OpenSSH per-connection server daemon (139.178.68.195:44706). Jun 20 20:02:21.920031 sshd[4134]: Accepted publickey for core from 139.178.68.195 port 44706 ssh2: RSA SHA256:DORs9jqbzw/nKviDUfk2h2gSdkTiP7QQB1UTf2L7NeA Jun 20 20:02:21.921206 sshd-session[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 20:02:21.925409 systemd-logind[1522]: New session 14 of user core. Jun 20 20:02:21.932475 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 20 20:02:22.209654 sshd[4136]: Connection closed by 139.178.68.195 port 44706 Jun 20 20:02:22.210482 sshd-session[4134]: pam_unix(sshd:session): session closed for user core Jun 20 20:02:22.214683 systemd[1]: sshd@13-172.237.156.205:22-139.178.68.195:44706.service: Deactivated successfully. Jun 20 20:02:22.216585 systemd[1]: session-14.scope: Deactivated successfully. Jun 20 20:02:22.217285 systemd-logind[1522]: Session 14 logged out. Waiting for processes to exit. Jun 20 20:02:22.218693 systemd-logind[1522]: Removed session 14. Jun 20 20:02:22.272506 systemd[1]: Started sshd@14-172.237.156.205:22-139.178.68.195:44718.service - OpenSSH per-connection server daemon (139.178.68.195:44718). Jun 20 20:02:22.618136 sshd[4148]: Accepted publickey for core from 139.178.68.195 port 44718 ssh2: RSA SHA256:DORs9jqbzw/nKviDUfk2h2gSdkTiP7QQB1UTf2L7NeA Jun 20 20:02:22.619667 sshd-session[4148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 20:02:22.624457 systemd-logind[1522]: New session 15 of user core. Jun 20 20:02:22.633472 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jun 20 20:02:22.926158 sshd[4150]: Connection closed by 139.178.68.195 port 44718 Jun 20 20:02:22.927237 sshd-session[4148]: pam_unix(sshd:session): session closed for user core Jun 20 20:02:22.931653 systemd[1]: sshd@14-172.237.156.205:22-139.178.68.195:44718.service: Deactivated successfully. Jun 20 20:02:22.933344 systemd[1]: session-15.scope: Deactivated successfully. Jun 20 20:02:22.934805 systemd-logind[1522]: Session 15 logged out. Waiting for processes to exit. Jun 20 20:02:22.936710 systemd-logind[1522]: Removed session 15. Jun 20 20:02:22.984717 systemd[1]: Started sshd@15-172.237.156.205:22-139.178.68.195:44734.service - OpenSSH per-connection server daemon (139.178.68.195:44734). Jun 20 20:02:23.314950 sshd[4160]: Accepted publickey for core from 139.178.68.195 port 44734 ssh2: RSA SHA256:DORs9jqbzw/nKviDUfk2h2gSdkTiP7QQB1UTf2L7NeA Jun 20 20:02:23.316698 sshd-session[4160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 20:02:23.321424 systemd-logind[1522]: New session 16 of user core. Jun 20 20:02:23.328494 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 20 20:02:24.122341 sshd[4162]: Connection closed by 139.178.68.195 port 44734 Jun 20 20:02:24.122910 sshd-session[4160]: pam_unix(sshd:session): session closed for user core Jun 20 20:02:24.127428 systemd-logind[1522]: Session 16 logged out. Waiting for processes to exit. Jun 20 20:02:24.128172 systemd[1]: sshd@15-172.237.156.205:22-139.178.68.195:44734.service: Deactivated successfully. Jun 20 20:02:24.130636 systemd[1]: session-16.scope: Deactivated successfully. Jun 20 20:02:24.133032 systemd-logind[1522]: Removed session 16. Jun 20 20:02:24.187841 systemd[1]: Started sshd@16-172.237.156.205:22-139.178.68.195:37216.service - OpenSSH per-connection server daemon (139.178.68.195:37216). 
Jun 20 20:02:24.542834 sshd[4180]: Accepted publickey for core from 139.178.68.195 port 37216 ssh2: RSA SHA256:DORs9jqbzw/nKviDUfk2h2gSdkTiP7QQB1UTf2L7NeA Jun 20 20:02:24.543417 sshd-session[4180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 20:02:24.549180 systemd-logind[1522]: New session 17 of user core. Jun 20 20:02:24.553520 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 20 20:02:24.929704 sshd[4182]: Connection closed by 139.178.68.195 port 37216 Jun 20 20:02:24.930489 sshd-session[4180]: pam_unix(sshd:session): session closed for user core Jun 20 20:02:24.934976 systemd[1]: sshd@16-172.237.156.205:22-139.178.68.195:37216.service: Deactivated successfully. Jun 20 20:02:24.937146 systemd[1]: session-17.scope: Deactivated successfully. Jun 20 20:02:24.938382 systemd-logind[1522]: Session 17 logged out. Waiting for processes to exit. Jun 20 20:02:24.940273 systemd-logind[1522]: Removed session 17. Jun 20 20:02:24.991544 systemd[1]: Started sshd@17-172.237.156.205:22-139.178.68.195:37220.service - OpenSSH per-connection server daemon (139.178.68.195:37220). Jun 20 20:02:25.329604 sshd[4192]: Accepted publickey for core from 139.178.68.195 port 37220 ssh2: RSA SHA256:DORs9jqbzw/nKviDUfk2h2gSdkTiP7QQB1UTf2L7NeA Jun 20 20:02:25.330968 sshd-session[4192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 20:02:25.335684 systemd-logind[1522]: New session 18 of user core. Jun 20 20:02:25.339468 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 20 20:02:25.623166 sshd[4194]: Connection closed by 139.178.68.195 port 37220 Jun 20 20:02:25.623982 sshd-session[4192]: pam_unix(sshd:session): session closed for user core Jun 20 20:02:25.627145 systemd[1]: sshd@17-172.237.156.205:22-139.178.68.195:37220.service: Deactivated successfully. Jun 20 20:02:25.629718 systemd[1]: session-18.scope: Deactivated successfully. 
Jun 20 20:02:25.631186 systemd-logind[1522]: Session 18 logged out. Waiting for processes to exit. Jun 20 20:02:25.632604 systemd-logind[1522]: Removed session 18. Jun 20 20:02:30.694687 systemd[1]: Started sshd@18-172.237.156.205:22-139.178.68.195:37226.service - OpenSSH per-connection server daemon (139.178.68.195:37226). Jun 20 20:02:31.029509 sshd[4208]: Accepted publickey for core from 139.178.68.195 port 37226 ssh2: RSA SHA256:DORs9jqbzw/nKviDUfk2h2gSdkTiP7QQB1UTf2L7NeA Jun 20 20:02:31.031201 sshd-session[4208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 20:02:31.036410 systemd-logind[1522]: New session 19 of user core. Jun 20 20:02:31.050504 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 20 20:02:31.330103 sshd[4210]: Connection closed by 139.178.68.195 port 37226 Jun 20 20:02:31.331059 sshd-session[4208]: pam_unix(sshd:session): session closed for user core Jun 20 20:02:31.335103 systemd[1]: sshd@18-172.237.156.205:22-139.178.68.195:37226.service: Deactivated successfully. Jun 20 20:02:31.337775 systemd[1]: session-19.scope: Deactivated successfully. Jun 20 20:02:31.338699 systemd-logind[1522]: Session 19 logged out. Waiting for processes to exit. Jun 20 20:02:31.340331 systemd-logind[1522]: Removed session 19. Jun 20 20:02:34.111932 kubelet[2708]: E0620 20:02:34.111200 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 20:02:36.394298 systemd[1]: Started sshd@19-172.237.156.205:22-139.178.68.195:44476.service - OpenSSH per-connection server daemon (139.178.68.195:44476). 
Jun 20 20:02:36.736946 sshd[4223]: Accepted publickey for core from 139.178.68.195 port 44476 ssh2: RSA SHA256:DORs9jqbzw/nKviDUfk2h2gSdkTiP7QQB1UTf2L7NeA Jun 20 20:02:36.738480 sshd-session[4223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 20:02:36.743381 systemd-logind[1522]: New session 20 of user core. Jun 20 20:02:36.747505 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 20 20:02:37.034966 sshd[4225]: Connection closed by 139.178.68.195 port 44476 Jun 20 20:02:37.035481 sshd-session[4223]: pam_unix(sshd:session): session closed for user core Jun 20 20:02:37.039798 systemd[1]: sshd@19-172.237.156.205:22-139.178.68.195:44476.service: Deactivated successfully. Jun 20 20:02:37.042100 systemd[1]: session-20.scope: Deactivated successfully. Jun 20 20:02:37.044451 systemd-logind[1522]: Session 20 logged out. Waiting for processes to exit. Jun 20 20:02:37.046283 systemd-logind[1522]: Removed session 20. Jun 20 20:02:38.111881 kubelet[2708]: E0620 20:02:38.111145 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 20:02:42.110945 systemd[1]: Started sshd@20-172.237.156.205:22-139.178.68.195:44480.service - OpenSSH per-connection server daemon (139.178.68.195:44480). Jun 20 20:02:42.458045 sshd[4237]: Accepted publickey for core from 139.178.68.195 port 44480 ssh2: RSA SHA256:DORs9jqbzw/nKviDUfk2h2gSdkTiP7QQB1UTf2L7NeA Jun 20 20:02:42.458540 sshd-session[4237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 20:02:42.463616 systemd-logind[1522]: New session 21 of user core. Jun 20 20:02:42.468497 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jun 20 20:02:42.771814 sshd[4241]: Connection closed by 139.178.68.195 port 44480 Jun 20 20:02:42.772484 sshd-session[4237]: pam_unix(sshd:session): session closed for user core Jun 20 20:02:42.777424 systemd-logind[1522]: Session 21 logged out. Waiting for processes to exit. Jun 20 20:02:42.777922 systemd[1]: sshd@20-172.237.156.205:22-139.178.68.195:44480.service: Deactivated successfully. Jun 20 20:02:42.780279 systemd[1]: session-21.scope: Deactivated successfully. Jun 20 20:02:42.782418 systemd-logind[1522]: Removed session 21. Jun 20 20:02:42.833754 systemd[1]: Started sshd@21-172.237.156.205:22-139.178.68.195:44490.service - OpenSSH per-connection server daemon (139.178.68.195:44490). Jun 20 20:02:43.111680 kubelet[2708]: E0620 20:02:43.111581 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Jun 20 20:02:43.177351 sshd[4253]: Accepted publickey for core from 139.178.68.195 port 44490 ssh2: RSA SHA256:DORs9jqbzw/nKviDUfk2h2gSdkTiP7QQB1UTf2L7NeA Jun 20 20:02:43.178905 sshd-session[4253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 20:02:43.184618 systemd-logind[1522]: New session 22 of user core. Jun 20 20:02:43.189467 systemd[1]: Started session-22.scope - Session 22 of User core. Jun 20 20:02:44.653163 containerd[1543]: time="2025-06-20T20:02:44.653117821Z" level=info msg="StopContainer for \"7f6faa6d8883a6b052b528bbf95d803be75fefef4cddbcb9911b82a051144a08\" with timeout 30 (s)" Jun 20 20:02:44.654677 containerd[1543]: time="2025-06-20T20:02:44.654177441Z" level=info msg="Stop container \"7f6faa6d8883a6b052b528bbf95d803be75fefef4cddbcb9911b82a051144a08\" with signal terminated" Jun 20 20:02:44.672478 systemd[1]: cri-containerd-7f6faa6d8883a6b052b528bbf95d803be75fefef4cddbcb9911b82a051144a08.scope: Deactivated successfully. 
Jun 20 20:02:44.673896 containerd[1543]: time="2025-06-20T20:02:44.673869600Z" level=info msg="received exit event container_id:\"7f6faa6d8883a6b052b528bbf95d803be75fefef4cddbcb9911b82a051144a08\" id:\"7f6faa6d8883a6b052b528bbf95d803be75fefef4cddbcb9911b82a051144a08\" pid:3234 exited_at:{seconds:1750449764 nanos:672198212}" Jun 20 20:02:44.674275 containerd[1543]: time="2025-06-20T20:02:44.674259370Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7f6faa6d8883a6b052b528bbf95d803be75fefef4cddbcb9911b82a051144a08\" id:\"7f6faa6d8883a6b052b528bbf95d803be75fefef4cddbcb9911b82a051144a08\" pid:3234 exited_at:{seconds:1750449764 nanos:672198212}" Jun 20 20:02:44.688835 containerd[1543]: time="2025-06-20T20:02:44.688797244Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 20 20:02:44.694906 containerd[1543]: time="2025-06-20T20:02:44.694875937Z" level=info msg="TaskExit event in podsandbox handler container_id:\"398600b28d288e22da178d377bf05cc26a7a6ea1cb0e4e4e9fb7871aa31bac4b\" id:\"6fa3ce7f760abc9fc47fd38ab4476b3c230f188f4937231c6d32d479a4da8d2a\" pid:4280 exited_at:{seconds:1750449764 nanos:694562498}" Jun 20 20:02:44.697482 containerd[1543]: time="2025-06-20T20:02:44.697444235Z" level=info msg="StopContainer for \"398600b28d288e22da178d377bf05cc26a7a6ea1cb0e4e4e9fb7871aa31bac4b\" with timeout 2 (s)" Jun 20 20:02:44.698626 containerd[1543]: time="2025-06-20T20:02:44.698553073Z" level=info msg="Stop container \"398600b28d288e22da178d377bf05cc26a7a6ea1cb0e4e4e9fb7871aa31bac4b\" with signal terminated" Jun 20 20:02:44.703770 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f6faa6d8883a6b052b528bbf95d803be75fefef4cddbcb9911b82a051144a08-rootfs.mount: Deactivated successfully. 
Jun 20 20:02:44.710034 containerd[1543]: time="2025-06-20T20:02:44.710014642Z" level=info msg="StopContainer for \"7f6faa6d8883a6b052b528bbf95d803be75fefef4cddbcb9911b82a051144a08\" returns successfully" Jun 20 20:02:44.711043 containerd[1543]: time="2025-06-20T20:02:44.711005171Z" level=info msg="StopPodSandbox for \"4a73b2e1b34a32f03c6da9563a29b4efd7429fc562acbc666c9d961c01e24fd9\"" Jun 20 20:02:44.711089 containerd[1543]: time="2025-06-20T20:02:44.711051091Z" level=info msg="Container to stop \"7f6faa6d8883a6b052b528bbf95d803be75fefef4cddbcb9911b82a051144a08\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 20:02:44.713282 systemd-networkd[1458]: lxc_health: Link DOWN Jun 20 20:02:44.713290 systemd-networkd[1458]: lxc_health: Lost carrier Jun 20 20:02:44.724834 systemd[1]: cri-containerd-4a73b2e1b34a32f03c6da9563a29b4efd7429fc562acbc666c9d961c01e24fd9.scope: Deactivated successfully. Jun 20 20:02:44.725421 containerd[1543]: time="2025-06-20T20:02:44.725218286Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4a73b2e1b34a32f03c6da9563a29b4efd7429fc562acbc666c9d961c01e24fd9\" id:\"4a73b2e1b34a32f03c6da9563a29b4efd7429fc562acbc666c9d961c01e24fd9\" pid:2919 exit_status:137 exited_at:{seconds:1750449764 nanos:724958646}" Jun 20 20:02:44.735046 systemd[1]: cri-containerd-398600b28d288e22da178d377bf05cc26a7a6ea1cb0e4e4e9fb7871aa31bac4b.scope: Deactivated successfully. Jun 20 20:02:44.735333 systemd[1]: cri-containerd-398600b28d288e22da178d377bf05cc26a7a6ea1cb0e4e4e9fb7871aa31bac4b.scope: Consumed 5.870s CPU time, 129.1M memory peak, 144K read from disk, 13.3M written to disk. 
Jun 20 20:02:44.738601 containerd[1543]: time="2025-06-20T20:02:44.738572432Z" level=info msg="received exit event container_id:\"398600b28d288e22da178d377bf05cc26a7a6ea1cb0e4e4e9fb7871aa31bac4b\" id:\"398600b28d288e22da178d377bf05cc26a7a6ea1cb0e4e4e9fb7871aa31bac4b\" pid:3344 exited_at:{seconds:1750449764 nanos:737935113}" Jun 20 20:02:44.802554 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-398600b28d288e22da178d377bf05cc26a7a6ea1cb0e4e4e9fb7871aa31bac4b-rootfs.mount: Deactivated successfully. Jun 20 20:02:44.802671 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a73b2e1b34a32f03c6da9563a29b4efd7429fc562acbc666c9d961c01e24fd9-rootfs.mount: Deactivated successfully. Jun 20 20:02:44.808586 containerd[1543]: time="2025-06-20T20:02:44.808539779Z" level=info msg="shim disconnected" id=4a73b2e1b34a32f03c6da9563a29b4efd7429fc562acbc666c9d961c01e24fd9 namespace=k8s.io Jun 20 20:02:44.808586 containerd[1543]: time="2025-06-20T20:02:44.808570108Z" level=warning msg="cleaning up after shim disconnected" id=4a73b2e1b34a32f03c6da9563a29b4efd7429fc562acbc666c9d961c01e24fd9 namespace=k8s.io Jun 20 20:02:44.808689 containerd[1543]: time="2025-06-20T20:02:44.808577708Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 20:02:44.818881 containerd[1543]: time="2025-06-20T20:02:44.818296538Z" level=info msg="StopContainer for \"398600b28d288e22da178d377bf05cc26a7a6ea1cb0e4e4e9fb7871aa31bac4b\" returns successfully" Jun 20 20:02:44.819332 containerd[1543]: time="2025-06-20T20:02:44.819315218Z" level=info msg="StopPodSandbox for \"82c1f2c83509c932bda442f7c58c8d6a23f82b8319721d0c4c75255f797fe347\"" Jun 20 20:02:44.819471 containerd[1543]: time="2025-06-20T20:02:44.819455588Z" level=info msg="Container to stop \"6e110e0062a86db2ad58f7cb9e310a5b605acdc5e48ceaa5a18f7d53d4d1674b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 20:02:44.820029 containerd[1543]: time="2025-06-20T20:02:44.820011747Z" level=info 
msg="Container to stop \"785adc14ac52497b992a52bf772cc6d2777ffe50a8dff18bb0d7620c3570cb11\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 20:02:44.820128 containerd[1543]: time="2025-06-20T20:02:44.820113197Z" level=info msg="Container to stop \"289c3cb45ceb171960e6770527ef40d591dc21a27f2bc6044e89c653eb33b31a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 20:02:44.820179 containerd[1543]: time="2025-06-20T20:02:44.820167977Z" level=info msg="Container to stop \"398600b28d288e22da178d377bf05cc26a7a6ea1cb0e4e4e9fb7871aa31bac4b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 20:02:44.820333 containerd[1543]: time="2025-06-20T20:02:44.820317946Z" level=info msg="Container to stop \"6aed80e581f11c91a9943456e999fdcd8e1c9da959898eb51accf2106708681e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 20:02:44.835803 systemd[1]: cri-containerd-82c1f2c83509c932bda442f7c58c8d6a23f82b8319721d0c4c75255f797fe347.scope: Deactivated successfully. 
Jun 20 20:02:44.841390 containerd[1543]: time="2025-06-20T20:02:44.840800345Z" level=info msg="TaskExit event in podsandbox handler container_id:\"398600b28d288e22da178d377bf05cc26a7a6ea1cb0e4e4e9fb7871aa31bac4b\" id:\"398600b28d288e22da178d377bf05cc26a7a6ea1cb0e4e4e9fb7871aa31bac4b\" pid:3344 exited_at:{seconds:1750449764 nanos:737935113}" Jun 20 20:02:44.841390 containerd[1543]: time="2025-06-20T20:02:44.840831575Z" level=info msg="TaskExit event in podsandbox handler container_id:\"82c1f2c83509c932bda442f7c58c8d6a23f82b8319721d0c4c75255f797fe347\" id:\"82c1f2c83509c932bda442f7c58c8d6a23f82b8319721d0c4c75255f797fe347\" pid:2861 exit_status:137 exited_at:{seconds:1750449764 nanos:837635529}" Jun 20 20:02:44.842308 containerd[1543]: time="2025-06-20T20:02:44.841769985Z" level=info msg="received exit event sandbox_id:\"4a73b2e1b34a32f03c6da9563a29b4efd7429fc562acbc666c9d961c01e24fd9\" exit_status:137 exited_at:{seconds:1750449764 nanos:724958646}" Jun 20 20:02:44.843026 containerd[1543]: time="2025-06-20T20:02:44.843008563Z" level=info msg="TearDown network for sandbox \"4a73b2e1b34a32f03c6da9563a29b4efd7429fc562acbc666c9d961c01e24fd9\" successfully" Jun 20 20:02:44.843985 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4a73b2e1b34a32f03c6da9563a29b4efd7429fc562acbc666c9d961c01e24fd9-shm.mount: Deactivated successfully. 
Jun 20 20:02:44.844084 containerd[1543]: time="2025-06-20T20:02:44.844067122Z" level=info msg="StopPodSandbox for \"4a73b2e1b34a32f03c6da9563a29b4efd7429fc562acbc666c9d961c01e24fd9\" returns successfully" Jun 20 20:02:44.870551 containerd[1543]: time="2025-06-20T20:02:44.870523474Z" level=info msg="received exit event sandbox_id:\"82c1f2c83509c932bda442f7c58c8d6a23f82b8319721d0c4c75255f797fe347\" exit_status:137 exited_at:{seconds:1750449764 nanos:837635529}" Jun 20 20:02:44.871841 containerd[1543]: time="2025-06-20T20:02:44.871689123Z" level=info msg="shim disconnected" id=82c1f2c83509c932bda442f7c58c8d6a23f82b8319721d0c4c75255f797fe347 namespace=k8s.io Jun 20 20:02:44.871841 containerd[1543]: time="2025-06-20T20:02:44.871708233Z" level=warning msg="cleaning up after shim disconnected" id=82c1f2c83509c932bda442f7c58c8d6a23f82b8319721d0c4c75255f797fe347 namespace=k8s.io Jun 20 20:02:44.871841 containerd[1543]: time="2025-06-20T20:02:44.871715713Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 20:02:44.871798 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-82c1f2c83509c932bda442f7c58c8d6a23f82b8319721d0c4c75255f797fe347-rootfs.mount: Deactivated successfully. 
Jun 20 20:02:44.875513 containerd[1543]: time="2025-06-20T20:02:44.871205714Z" level=info msg="TearDown network for sandbox \"82c1f2c83509c932bda442f7c58c8d6a23f82b8319721d0c4c75255f797fe347\" successfully" Jun 20 20:02:44.875513 containerd[1543]: time="2025-06-20T20:02:44.875170789Z" level=info msg="StopPodSandbox for \"82c1f2c83509c932bda442f7c58c8d6a23f82b8319721d0c4c75255f797fe347\" returns successfully" Jun 20 20:02:44.966457 kubelet[2708]: I0620 20:02:44.964553 2708 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6652ac11-e062-498c-b924-57a62eca484a-cilium-config-path\") pod \"6652ac11-e062-498c-b924-57a62eca484a\" (UID: \"6652ac11-e062-498c-b924-57a62eca484a\") " Jun 20 20:02:44.966457 kubelet[2708]: I0620 20:02:44.964611 2708 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6652ac11-e062-498c-b924-57a62eca484a-cilium-cgroup\") pod \"6652ac11-e062-498c-b924-57a62eca484a\" (UID: \"6652ac11-e062-498c-b924-57a62eca484a\") " Jun 20 20:02:44.966457 kubelet[2708]: I0620 20:02:44.964627 2708 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6652ac11-e062-498c-b924-57a62eca484a-host-proc-sys-net\") pod \"6652ac11-e062-498c-b924-57a62eca484a\" (UID: \"6652ac11-e062-498c-b924-57a62eca484a\") " Jun 20 20:02:44.966457 kubelet[2708]: I0620 20:02:44.964666 2708 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6652ac11-e062-498c-b924-57a62eca484a-xtables-lock\") pod \"6652ac11-e062-498c-b924-57a62eca484a\" (UID: \"6652ac11-e062-498c-b924-57a62eca484a\") " Jun 20 20:02:44.966457 kubelet[2708]: I0620 20:02:44.964684 2708 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxp6g\" 
(UniqueName: \"kubernetes.io/projected/6652ac11-e062-498c-b924-57a62eca484a-kube-api-access-mxp6g\") pod \"6652ac11-e062-498c-b924-57a62eca484a\" (UID: \"6652ac11-e062-498c-b924-57a62eca484a\") " Jun 20 20:02:44.966457 kubelet[2708]: I0620 20:02:44.964700 2708 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6652ac11-e062-498c-b924-57a62eca484a-cilium-run\") pod \"6652ac11-e062-498c-b924-57a62eca484a\" (UID: \"6652ac11-e062-498c-b924-57a62eca484a\") " Jun 20 20:02:44.966952 kubelet[2708]: I0620 20:02:44.964715 2708 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6652ac11-e062-498c-b924-57a62eca484a-hubble-tls\") pod \"6652ac11-e062-498c-b924-57a62eca484a\" (UID: \"6652ac11-e062-498c-b924-57a62eca484a\") " Jun 20 20:02:44.966952 kubelet[2708]: I0620 20:02:44.964753 2708 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8595b000-4a1f-4f89-a025-4a6ecbe6f02b-cilium-config-path\") pod \"8595b000-4a1f-4f89-a025-4a6ecbe6f02b\" (UID: \"8595b000-4a1f-4f89-a025-4a6ecbe6f02b\") " Jun 20 20:02:44.966952 kubelet[2708]: I0620 20:02:44.964768 2708 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d46fm\" (UniqueName: \"kubernetes.io/projected/8595b000-4a1f-4f89-a025-4a6ecbe6f02b-kube-api-access-d46fm\") pod \"8595b000-4a1f-4f89-a025-4a6ecbe6f02b\" (UID: \"8595b000-4a1f-4f89-a025-4a6ecbe6f02b\") " Jun 20 20:02:44.966952 kubelet[2708]: I0620 20:02:44.964783 2708 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6652ac11-e062-498c-b924-57a62eca484a-host-proc-sys-kernel\") pod \"6652ac11-e062-498c-b924-57a62eca484a\" (UID: \"6652ac11-e062-498c-b924-57a62eca484a\") " Jun 20 20:02:44.966952 
kubelet[2708]: I0620 20:02:44.964797 2708 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6652ac11-e062-498c-b924-57a62eca484a-etc-cni-netd\") pod \"6652ac11-e062-498c-b924-57a62eca484a\" (UID: \"6652ac11-e062-498c-b924-57a62eca484a\") " Jun 20 20:02:44.966952 kubelet[2708]: I0620 20:02:44.964832 2708 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6652ac11-e062-498c-b924-57a62eca484a-clustermesh-secrets\") pod \"6652ac11-e062-498c-b924-57a62eca484a\" (UID: \"6652ac11-e062-498c-b924-57a62eca484a\") " Jun 20 20:02:44.967083 kubelet[2708]: I0620 20:02:44.964845 2708 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6652ac11-e062-498c-b924-57a62eca484a-hostproc\") pod \"6652ac11-e062-498c-b924-57a62eca484a\" (UID: \"6652ac11-e062-498c-b924-57a62eca484a\") " Jun 20 20:02:44.967083 kubelet[2708]: I0620 20:02:44.964858 2708 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6652ac11-e062-498c-b924-57a62eca484a-bpf-maps\") pod \"6652ac11-e062-498c-b924-57a62eca484a\" (UID: \"6652ac11-e062-498c-b924-57a62eca484a\") " Jun 20 20:02:44.967083 kubelet[2708]: I0620 20:02:44.964870 2708 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6652ac11-e062-498c-b924-57a62eca484a-lib-modules\") pod \"6652ac11-e062-498c-b924-57a62eca484a\" (UID: \"6652ac11-e062-498c-b924-57a62eca484a\") " Jun 20 20:02:44.967083 kubelet[2708]: I0620 20:02:44.964899 2708 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6652ac11-e062-498c-b924-57a62eca484a-cni-path\") pod \"6652ac11-e062-498c-b924-57a62eca484a\" (UID: 
\"6652ac11-e062-498c-b924-57a62eca484a\") " Jun 20 20:02:44.967083 kubelet[2708]: I0620 20:02:44.964938 2708 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6652ac11-e062-498c-b924-57a62eca484a-cni-path" (OuterVolumeSpecName: "cni-path") pod "6652ac11-e062-498c-b924-57a62eca484a" (UID: "6652ac11-e062-498c-b924-57a62eca484a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 20:02:44.967083 kubelet[2708]: I0620 20:02:44.964984 2708 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6652ac11-e062-498c-b924-57a62eca484a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6652ac11-e062-498c-b924-57a62eca484a" (UID: "6652ac11-e062-498c-b924-57a62eca484a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 20:02:44.967205 kubelet[2708]: I0620 20:02:44.965000 2708 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6652ac11-e062-498c-b924-57a62eca484a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6652ac11-e062-498c-b924-57a62eca484a" (UID: "6652ac11-e062-498c-b924-57a62eca484a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 20:02:44.967205 kubelet[2708]: I0620 20:02:44.965015 2708 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6652ac11-e062-498c-b924-57a62eca484a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6652ac11-e062-498c-b924-57a62eca484a" (UID: "6652ac11-e062-498c-b924-57a62eca484a"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 20:02:44.967205 kubelet[2708]: I0620 20:02:44.966488 2708 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6652ac11-e062-498c-b924-57a62eca484a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6652ac11-e062-498c-b924-57a62eca484a" (UID: "6652ac11-e062-498c-b924-57a62eca484a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 20:02:44.967205 kubelet[2708]: I0620 20:02:44.966512 2708 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6652ac11-e062-498c-b924-57a62eca484a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6652ac11-e062-498c-b924-57a62eca484a" (UID: "6652ac11-e062-498c-b924-57a62eca484a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 20:02:44.968555 kubelet[2708]: I0620 20:02:44.968538 2708 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6652ac11-e062-498c-b924-57a62eca484a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6652ac11-e062-498c-b924-57a62eca484a" (UID: "6652ac11-e062-498c-b924-57a62eca484a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 20:02:44.970426 kubelet[2708]: I0620 20:02:44.970318 2708 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6652ac11-e062-498c-b924-57a62eca484a-hostproc" (OuterVolumeSpecName: "hostproc") pod "6652ac11-e062-498c-b924-57a62eca484a" (UID: "6652ac11-e062-498c-b924-57a62eca484a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 20:02:44.970523 kubelet[2708]: I0620 20:02:44.970509 2708 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6652ac11-e062-498c-b924-57a62eca484a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6652ac11-e062-498c-b924-57a62eca484a" (UID: "6652ac11-e062-498c-b924-57a62eca484a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 20:02:44.970600 kubelet[2708]: I0620 20:02:44.970587 2708 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6652ac11-e062-498c-b924-57a62eca484a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6652ac11-e062-498c-b924-57a62eca484a" (UID: "6652ac11-e062-498c-b924-57a62eca484a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 20:02:44.970827 kubelet[2708]: I0620 20:02:44.970802 2708 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6652ac11-e062-498c-b924-57a62eca484a-kube-api-access-mxp6g" (OuterVolumeSpecName: "kube-api-access-mxp6g") pod "6652ac11-e062-498c-b924-57a62eca484a" (UID: "6652ac11-e062-498c-b924-57a62eca484a"). InnerVolumeSpecName "kube-api-access-mxp6g". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jun 20 20:02:44.972115 kubelet[2708]: I0620 20:02:44.972088 2708 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6652ac11-e062-498c-b924-57a62eca484a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6652ac11-e062-498c-b924-57a62eca484a" (UID: "6652ac11-e062-498c-b924-57a62eca484a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jun 20 20:02:44.973371 kubelet[2708]: I0620 20:02:44.973338 2708 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6652ac11-e062-498c-b924-57a62eca484a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6652ac11-e062-498c-b924-57a62eca484a" (UID: "6652ac11-e062-498c-b924-57a62eca484a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jun 20 20:02:44.974057 kubelet[2708]: I0620 20:02:44.974039 2708 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6652ac11-e062-498c-b924-57a62eca484a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6652ac11-e062-498c-b924-57a62eca484a" (UID: "6652ac11-e062-498c-b924-57a62eca484a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jun 20 20:02:44.974101 kubelet[2708]: I0620 20:02:44.973671 2708 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8595b000-4a1f-4f89-a025-4a6ecbe6f02b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8595b000-4a1f-4f89-a025-4a6ecbe6f02b" (UID: "8595b000-4a1f-4f89-a025-4a6ecbe6f02b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jun 20 20:02:44.974805 kubelet[2708]: I0620 20:02:44.974785 2708 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8595b000-4a1f-4f89-a025-4a6ecbe6f02b-kube-api-access-d46fm" (OuterVolumeSpecName: "kube-api-access-d46fm") pod "8595b000-4a1f-4f89-a025-4a6ecbe6f02b" (UID: "8595b000-4a1f-4f89-a025-4a6ecbe6f02b"). InnerVolumeSpecName "kube-api-access-d46fm". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jun 20 20:02:45.065056 kubelet[2708]: I0620 20:02:45.065022 2708 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6652ac11-e062-498c-b924-57a62eca484a-host-proc-sys-kernel\") on node \"172-237-156-205\" DevicePath \"\""
Jun 20 20:02:45.065056 kubelet[2708]: I0620 20:02:45.065053 2708 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6652ac11-e062-498c-b924-57a62eca484a-etc-cni-netd\") on node \"172-237-156-205\" DevicePath \"\""
Jun 20 20:02:45.065056 kubelet[2708]: I0620 20:02:45.065063 2708 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6652ac11-e062-498c-b924-57a62eca484a-clustermesh-secrets\") on node \"172-237-156-205\" DevicePath \"\""
Jun 20 20:02:45.065056 kubelet[2708]: I0620 20:02:45.065075 2708 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6652ac11-e062-498c-b924-57a62eca484a-hostproc\") on node \"172-237-156-205\" DevicePath \"\""
Jun 20 20:02:45.065226 kubelet[2708]: I0620 20:02:45.065085 2708 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6652ac11-e062-498c-b924-57a62eca484a-bpf-maps\") on node \"172-237-156-205\" DevicePath \"\""
Jun 20 20:02:45.065226 kubelet[2708]: I0620 20:02:45.065094 2708 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6652ac11-e062-498c-b924-57a62eca484a-lib-modules\") on node \"172-237-156-205\" DevicePath \"\""
Jun 20 20:02:45.065226 kubelet[2708]: I0620 20:02:45.065103 2708 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6652ac11-e062-498c-b924-57a62eca484a-cni-path\") on node \"172-237-156-205\" DevicePath \"\""
Jun 20 20:02:45.065226 kubelet[2708]: I0620 20:02:45.065112 2708 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6652ac11-e062-498c-b924-57a62eca484a-cilium-config-path\") on node \"172-237-156-205\" DevicePath \"\""
Jun 20 20:02:45.065226 kubelet[2708]: I0620 20:02:45.065121 2708 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6652ac11-e062-498c-b924-57a62eca484a-cilium-cgroup\") on node \"172-237-156-205\" DevicePath \"\""
Jun 20 20:02:45.065226 kubelet[2708]: I0620 20:02:45.065131 2708 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6652ac11-e062-498c-b924-57a62eca484a-host-proc-sys-net\") on node \"172-237-156-205\" DevicePath \"\""
Jun 20 20:02:45.065226 kubelet[2708]: I0620 20:02:45.065140 2708 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6652ac11-e062-498c-b924-57a62eca484a-xtables-lock\") on node \"172-237-156-205\" DevicePath \"\""
Jun 20 20:02:45.065226 kubelet[2708]: I0620 20:02:45.065150 2708 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mxp6g\" (UniqueName: \"kubernetes.io/projected/6652ac11-e062-498c-b924-57a62eca484a-kube-api-access-mxp6g\") on node \"172-237-156-205\" DevicePath \"\""
Jun 20 20:02:45.065723 kubelet[2708]: I0620 20:02:45.065159 2708 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6652ac11-e062-498c-b924-57a62eca484a-cilium-run\") on node \"172-237-156-205\" DevicePath \"\""
Jun 20 20:02:45.065723 kubelet[2708]: I0620 20:02:45.065169 2708 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6652ac11-e062-498c-b924-57a62eca484a-hubble-tls\") on node \"172-237-156-205\" DevicePath \"\""
Jun 20 20:02:45.065723 kubelet[2708]: I0620 20:02:45.065179 2708 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8595b000-4a1f-4f89-a025-4a6ecbe6f02b-cilium-config-path\") on node \"172-237-156-205\" DevicePath \"\""
Jun 20 20:02:45.065723 kubelet[2708]: I0620 20:02:45.065199 2708 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d46fm\" (UniqueName: \"kubernetes.io/projected/8595b000-4a1f-4f89-a025-4a6ecbe6f02b-kube-api-access-d46fm\") on node \"172-237-156-205\" DevicePath \"\""
Jun 20 20:02:45.110853 kubelet[2708]: E0620 20:02:45.110744 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Jun 20 20:02:45.540515 kubelet[2708]: I0620 20:02:45.540484 2708 scope.go:117] "RemoveContainer" containerID="7f6faa6d8883a6b052b528bbf95d803be75fefef4cddbcb9911b82a051144a08"
Jun 20 20:02:45.545304 containerd[1543]: time="2025-06-20T20:02:45.545083252Z" level=info msg="RemoveContainer for \"7f6faa6d8883a6b052b528bbf95d803be75fefef4cddbcb9911b82a051144a08\""
Jun 20 20:02:45.550420 systemd[1]: Removed slice kubepods-besteffort-pod8595b000_4a1f_4f89_a025_4a6ecbe6f02b.slice - libcontainer container kubepods-besteffort-pod8595b000_4a1f_4f89_a025_4a6ecbe6f02b.slice.
Jun 20 20:02:45.555231 containerd[1543]: time="2025-06-20T20:02:45.555172211Z" level=info msg="RemoveContainer for \"7f6faa6d8883a6b052b528bbf95d803be75fefef4cddbcb9911b82a051144a08\" returns successfully"
Jun 20 20:02:45.557138 kubelet[2708]: I0620 20:02:45.557083 2708 scope.go:117] "RemoveContainer" containerID="7f6faa6d8883a6b052b528bbf95d803be75fefef4cddbcb9911b82a051144a08"
Jun 20 20:02:45.557688 containerd[1543]: time="2025-06-20T20:02:45.557610059Z" level=error msg="ContainerStatus for \"7f6faa6d8883a6b052b528bbf95d803be75fefef4cddbcb9911b82a051144a08\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7f6faa6d8883a6b052b528bbf95d803be75fefef4cddbcb9911b82a051144a08\": not found"
Jun 20 20:02:45.557848 kubelet[2708]: E0620 20:02:45.557818 2708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7f6faa6d8883a6b052b528bbf95d803be75fefef4cddbcb9911b82a051144a08\": not found" containerID="7f6faa6d8883a6b052b528bbf95d803be75fefef4cddbcb9911b82a051144a08"
Jun 20 20:02:45.557921 kubelet[2708]: I0620 20:02:45.557847 2708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7f6faa6d8883a6b052b528bbf95d803be75fefef4cddbcb9911b82a051144a08"} err="failed to get container status \"7f6faa6d8883a6b052b528bbf95d803be75fefef4cddbcb9911b82a051144a08\": rpc error: code = NotFound desc = an error occurred when try to find container \"7f6faa6d8883a6b052b528bbf95d803be75fefef4cddbcb9911b82a051144a08\": not found"
Jun 20 20:02:45.557921 kubelet[2708]: I0620 20:02:45.557906 2708 scope.go:117] "RemoveContainer" containerID="398600b28d288e22da178d377bf05cc26a7a6ea1cb0e4e4e9fb7871aa31bac4b"
Jun 20 20:02:45.562870 systemd[1]: Removed slice kubepods-burstable-pod6652ac11_e062_498c_b924_57a62eca484a.slice - libcontainer container kubepods-burstable-pod6652ac11_e062_498c_b924_57a62eca484a.slice.
Jun 20 20:02:45.562966 systemd[1]: kubepods-burstable-pod6652ac11_e062_498c_b924_57a62eca484a.slice: Consumed 5.962s CPU time, 129.6M memory peak, 144K read from disk, 13.3M written to disk.
Jun 20 20:02:45.563589 containerd[1543]: time="2025-06-20T20:02:45.563482803Z" level=info msg="RemoveContainer for \"398600b28d288e22da178d377bf05cc26a7a6ea1cb0e4e4e9fb7871aa31bac4b\""
Jun 20 20:02:45.569027 containerd[1543]: time="2025-06-20T20:02:45.568970747Z" level=info msg="RemoveContainer for \"398600b28d288e22da178d377bf05cc26a7a6ea1cb0e4e4e9fb7871aa31bac4b\" returns successfully"
Jun 20 20:02:45.569116 kubelet[2708]: I0620 20:02:45.569092 2708 scope.go:117] "RemoveContainer" containerID="289c3cb45ceb171960e6770527ef40d591dc21a27f2bc6044e89c653eb33b31a"
Jun 20 20:02:45.572005 containerd[1543]: time="2025-06-20T20:02:45.571978804Z" level=info msg="RemoveContainer for \"289c3cb45ceb171960e6770527ef40d591dc21a27f2bc6044e89c653eb33b31a\""
Jun 20 20:02:45.575465 containerd[1543]: time="2025-06-20T20:02:45.575432080Z" level=info msg="RemoveContainer for \"289c3cb45ceb171960e6770527ef40d591dc21a27f2bc6044e89c653eb33b31a\" returns successfully"
Jun 20 20:02:45.575702 kubelet[2708]: I0620 20:02:45.575660 2708 scope.go:117] "RemoveContainer" containerID="785adc14ac52497b992a52bf772cc6d2777ffe50a8dff18bb0d7620c3570cb11"
Jun 20 20:02:45.578926 containerd[1543]: time="2025-06-20T20:02:45.578428367Z" level=info msg="RemoveContainer for \"785adc14ac52497b992a52bf772cc6d2777ffe50a8dff18bb0d7620c3570cb11\""
Jun 20 20:02:45.582289 containerd[1543]: time="2025-06-20T20:02:45.582262383Z" level=info msg="RemoveContainer for \"785adc14ac52497b992a52bf772cc6d2777ffe50a8dff18bb0d7620c3570cb11\" returns successfully"
Jun 20 20:02:45.582481 kubelet[2708]: I0620 20:02:45.582426 2708 scope.go:117] "RemoveContainer" containerID="6e110e0062a86db2ad58f7cb9e310a5b605acdc5e48ceaa5a18f7d53d4d1674b"
Jun 20 20:02:45.583663 containerd[1543]: time="2025-06-20T20:02:45.583637332Z" level=info msg="RemoveContainer for \"6e110e0062a86db2ad58f7cb9e310a5b605acdc5e48ceaa5a18f7d53d4d1674b\""
Jun 20 20:02:45.585910 containerd[1543]: time="2025-06-20T20:02:45.585883290Z" level=info msg="RemoveContainer for \"6e110e0062a86db2ad58f7cb9e310a5b605acdc5e48ceaa5a18f7d53d4d1674b\" returns successfully"
Jun 20 20:02:45.586030 kubelet[2708]: I0620 20:02:45.586002 2708 scope.go:117] "RemoveContainer" containerID="6aed80e581f11c91a9943456e999fdcd8e1c9da959898eb51accf2106708681e"
Jun 20 20:02:45.587536 containerd[1543]: time="2025-06-20T20:02:45.587113488Z" level=info msg="RemoveContainer for \"6aed80e581f11c91a9943456e999fdcd8e1c9da959898eb51accf2106708681e\""
Jun 20 20:02:45.589434 containerd[1543]: time="2025-06-20T20:02:45.589408306Z" level=info msg="RemoveContainer for \"6aed80e581f11c91a9943456e999fdcd8e1c9da959898eb51accf2106708681e\" returns successfully"
Jun 20 20:02:45.589624 kubelet[2708]: I0620 20:02:45.589601 2708 scope.go:117] "RemoveContainer" containerID="398600b28d288e22da178d377bf05cc26a7a6ea1cb0e4e4e9fb7871aa31bac4b"
Jun 20 20:02:45.589912 containerd[1543]: time="2025-06-20T20:02:45.589878526Z" level=error msg="ContainerStatus for \"398600b28d288e22da178d377bf05cc26a7a6ea1cb0e4e4e9fb7871aa31bac4b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"398600b28d288e22da178d377bf05cc26a7a6ea1cb0e4e4e9fb7871aa31bac4b\": not found"
Jun 20 20:02:45.590117 kubelet[2708]: E0620 20:02:45.590085 2708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"398600b28d288e22da178d377bf05cc26a7a6ea1cb0e4e4e9fb7871aa31bac4b\": not found" containerID="398600b28d288e22da178d377bf05cc26a7a6ea1cb0e4e4e9fb7871aa31bac4b"
Jun 20 20:02:45.590207 kubelet[2708]: I0620 20:02:45.590176 2708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"398600b28d288e22da178d377bf05cc26a7a6ea1cb0e4e4e9fb7871aa31bac4b"} err="failed to get container status \"398600b28d288e22da178d377bf05cc26a7a6ea1cb0e4e4e9fb7871aa31bac4b\": rpc error: code = NotFound desc = an error occurred when try to find container \"398600b28d288e22da178d377bf05cc26a7a6ea1cb0e4e4e9fb7871aa31bac4b\": not found"
Jun 20 20:02:45.590207 kubelet[2708]: I0620 20:02:45.590200 2708 scope.go:117] "RemoveContainer" containerID="289c3cb45ceb171960e6770527ef40d591dc21a27f2bc6044e89c653eb33b31a"
Jun 20 20:02:45.590461 containerd[1543]: time="2025-06-20T20:02:45.590338714Z" level=error msg="ContainerStatus for \"289c3cb45ceb171960e6770527ef40d591dc21a27f2bc6044e89c653eb33b31a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"289c3cb45ceb171960e6770527ef40d591dc21a27f2bc6044e89c653eb33b31a\": not found"
Jun 20 20:02:45.590614 kubelet[2708]: E0620 20:02:45.590593 2708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"289c3cb45ceb171960e6770527ef40d591dc21a27f2bc6044e89c653eb33b31a\": not found" containerID="289c3cb45ceb171960e6770527ef40d591dc21a27f2bc6044e89c653eb33b31a"
Jun 20 20:02:45.590670 kubelet[2708]: I0620 20:02:45.590612 2708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"289c3cb45ceb171960e6770527ef40d591dc21a27f2bc6044e89c653eb33b31a"} err="failed to get container status \"289c3cb45ceb171960e6770527ef40d591dc21a27f2bc6044e89c653eb33b31a\": rpc error: code = NotFound desc = an error occurred when try to find container \"289c3cb45ceb171960e6770527ef40d591dc21a27f2bc6044e89c653eb33b31a\": not found"
Jun 20 20:02:45.590670 kubelet[2708]: I0620 20:02:45.590625 2708 scope.go:117] "RemoveContainer" containerID="785adc14ac52497b992a52bf772cc6d2777ffe50a8dff18bb0d7620c3570cb11"
Jun 20 20:02:45.590821 containerd[1543]: time="2025-06-20T20:02:45.590789334Z" level=error msg="ContainerStatus for \"785adc14ac52497b992a52bf772cc6d2777ffe50a8dff18bb0d7620c3570cb11\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"785adc14ac52497b992a52bf772cc6d2777ffe50a8dff18bb0d7620c3570cb11\": not found"
Jun 20 20:02:45.590929 kubelet[2708]: E0620 20:02:45.590909 2708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"785adc14ac52497b992a52bf772cc6d2777ffe50a8dff18bb0d7620c3570cb11\": not found" containerID="785adc14ac52497b992a52bf772cc6d2777ffe50a8dff18bb0d7620c3570cb11"
Jun 20 20:02:45.590958 kubelet[2708]: I0620 20:02:45.590928 2708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"785adc14ac52497b992a52bf772cc6d2777ffe50a8dff18bb0d7620c3570cb11"} err="failed to get container status \"785adc14ac52497b992a52bf772cc6d2777ffe50a8dff18bb0d7620c3570cb11\": rpc error: code = NotFound desc = an error occurred when try to find container \"785adc14ac52497b992a52bf772cc6d2777ffe50a8dff18bb0d7620c3570cb11\": not found"
Jun 20 20:02:45.590958 kubelet[2708]: I0620 20:02:45.590940 2708 scope.go:117] "RemoveContainer" containerID="6e110e0062a86db2ad58f7cb9e310a5b605acdc5e48ceaa5a18f7d53d4d1674b"
Jun 20 20:02:45.591179 containerd[1543]: time="2025-06-20T20:02:45.591092884Z" level=error msg="ContainerStatus for \"6e110e0062a86db2ad58f7cb9e310a5b605acdc5e48ceaa5a18f7d53d4d1674b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6e110e0062a86db2ad58f7cb9e310a5b605acdc5e48ceaa5a18f7d53d4d1674b\": not found"
Jun 20 20:02:45.591257 kubelet[2708]: E0620 20:02:45.591233 2708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6e110e0062a86db2ad58f7cb9e310a5b605acdc5e48ceaa5a18f7d53d4d1674b\": not found" containerID="6e110e0062a86db2ad58f7cb9e310a5b605acdc5e48ceaa5a18f7d53d4d1674b"
Jun 20 20:02:45.591257 kubelet[2708]: I0620 20:02:45.591253 2708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6e110e0062a86db2ad58f7cb9e310a5b605acdc5e48ceaa5a18f7d53d4d1674b"} err="failed to get container status \"6e110e0062a86db2ad58f7cb9e310a5b605acdc5e48ceaa5a18f7d53d4d1674b\": rpc error: code = NotFound desc = an error occurred when try to find container \"6e110e0062a86db2ad58f7cb9e310a5b605acdc5e48ceaa5a18f7d53d4d1674b\": not found"
Jun 20 20:02:45.591329 kubelet[2708]: I0620 20:02:45.591284 2708 scope.go:117] "RemoveContainer" containerID="6aed80e581f11c91a9943456e999fdcd8e1c9da959898eb51accf2106708681e"
Jun 20 20:02:45.591476 containerd[1543]: time="2025-06-20T20:02:45.591447864Z" level=error msg="ContainerStatus for \"6aed80e581f11c91a9943456e999fdcd8e1c9da959898eb51accf2106708681e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6aed80e581f11c91a9943456e999fdcd8e1c9da959898eb51accf2106708681e\": not found"
Jun 20 20:02:45.591592 kubelet[2708]: E0620 20:02:45.591555 2708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6aed80e581f11c91a9943456e999fdcd8e1c9da959898eb51accf2106708681e\": not found" containerID="6aed80e581f11c91a9943456e999fdcd8e1c9da959898eb51accf2106708681e"
Jun 20 20:02:45.591646 kubelet[2708]: I0620 20:02:45.591622 2708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6aed80e581f11c91a9943456e999fdcd8e1c9da959898eb51accf2106708681e"} err="failed to get container status \"6aed80e581f11c91a9943456e999fdcd8e1c9da959898eb51accf2106708681e\": rpc error: code = NotFound desc = an error occurred when try to find container \"6aed80e581f11c91a9943456e999fdcd8e1c9da959898eb51accf2106708681e\": not found"
Jun 20 20:02:45.700405 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-82c1f2c83509c932bda442f7c58c8d6a23f82b8319721d0c4c75255f797fe347-shm.mount: Deactivated successfully.
Jun 20 20:02:45.700514 systemd[1]: var-lib-kubelet-pods-8595b000\x2d4a1f\x2d4f89\x2da025\x2d4a6ecbe6f02b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd46fm.mount: Deactivated successfully.
Jun 20 20:02:45.700590 systemd[1]: var-lib-kubelet-pods-6652ac11\x2de062\x2d498c\x2db924\x2d57a62eca484a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmxp6g.mount: Deactivated successfully.
Jun 20 20:02:45.700667 systemd[1]: var-lib-kubelet-pods-6652ac11\x2de062\x2d498c\x2db924\x2d57a62eca484a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jun 20 20:02:45.700739 systemd[1]: var-lib-kubelet-pods-6652ac11\x2de062\x2d498c\x2db924\x2d57a62eca484a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jun 20 20:02:46.115466 kubelet[2708]: I0620 20:02:46.114389 2708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6652ac11-e062-498c-b924-57a62eca484a" path="/var/lib/kubelet/pods/6652ac11-e062-498c-b924-57a62eca484a/volumes"
Jun 20 20:02:46.115466 kubelet[2708]: I0620 20:02:46.115138 2708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8595b000-4a1f-4f89-a025-4a6ecbe6f02b" path="/var/lib/kubelet/pods/8595b000-4a1f-4f89-a025-4a6ecbe6f02b/volumes"
Jun 20 20:02:46.209267 kubelet[2708]: E0620 20:02:46.209230 2708 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jun 20 20:02:46.662212 sshd[4255]: Connection closed by 139.178.68.195 port 44490
Jun 20 20:02:46.662897 sshd-session[4253]: pam_unix(sshd:session): session closed for user core
Jun 20 20:02:46.666429 systemd[1]: sshd@21-172.237.156.205:22-139.178.68.195:44490.service: Deactivated successfully.
Jun 20 20:02:46.671329 systemd[1]: session-22.scope: Deactivated successfully.
Jun 20 20:02:46.673812 systemd-logind[1522]: Session 22 logged out. Waiting for processes to exit.
Jun 20 20:02:46.675691 systemd-logind[1522]: Removed session 22.
Jun 20 20:02:46.723097 systemd[1]: Started sshd@22-172.237.156.205:22-139.178.68.195:49810.service - OpenSSH per-connection server daemon (139.178.68.195:49810).
Jun 20 20:02:47.053283 sshd[4409]: Accepted publickey for core from 139.178.68.195 port 49810 ssh2: RSA SHA256:DORs9jqbzw/nKviDUfk2h2gSdkTiP7QQB1UTf2L7NeA
Jun 20 20:02:47.054755 sshd-session[4409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 20:02:47.059427 systemd-logind[1522]: New session 23 of user core.
Jun 20 20:02:47.067488 systemd[1]: Started session-23.scope - Session 23 of User core.
Jun 20 20:02:47.613690 kubelet[2708]: I0620 20:02:47.613610 2708 memory_manager.go:355] "RemoveStaleState removing state" podUID="8595b000-4a1f-4f89-a025-4a6ecbe6f02b" containerName="cilium-operator"
Jun 20 20:02:47.613690 kubelet[2708]: I0620 20:02:47.613637 2708 memory_manager.go:355] "RemoveStaleState removing state" podUID="6652ac11-e062-498c-b924-57a62eca484a" containerName="cilium-agent"
Jun 20 20:02:47.625017 systemd[1]: Created slice kubepods-burstable-pod897815c5_1874_4a22_bbfa_1491c1537a34.slice - libcontainer container kubepods-burstable-pod897815c5_1874_4a22_bbfa_1491c1537a34.slice.
Jun 20 20:02:47.649546 sshd[4411]: Connection closed by 139.178.68.195 port 49810
Jun 20 20:02:47.650559 sshd-session[4409]: pam_unix(sshd:session): session closed for user core
Jun 20 20:02:47.653780 systemd-logind[1522]: Session 23 logged out. Waiting for processes to exit.
Jun 20 20:02:47.655804 systemd[1]: sshd@22-172.237.156.205:22-139.178.68.195:49810.service: Deactivated successfully.
Jun 20 20:02:47.659238 systemd[1]: session-23.scope: Deactivated successfully.
Jun 20 20:02:47.663417 systemd-logind[1522]: Removed session 23.
Jun 20 20:02:47.678488 kubelet[2708]: I0620 20:02:47.678443 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/897815c5-1874-4a22-bbfa-1491c1537a34-cilium-config-path\") pod \"cilium-7pzzt\" (UID: \"897815c5-1874-4a22-bbfa-1491c1537a34\") " pod="kube-system/cilium-7pzzt"
Jun 20 20:02:47.678488 kubelet[2708]: I0620 20:02:47.678476 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/897815c5-1874-4a22-bbfa-1491c1537a34-cilium-ipsec-secrets\") pod \"cilium-7pzzt\" (UID: \"897815c5-1874-4a22-bbfa-1491c1537a34\") " pod="kube-system/cilium-7pzzt"
Jun 20 20:02:47.678605 kubelet[2708]: I0620 20:02:47.678496 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/897815c5-1874-4a22-bbfa-1491c1537a34-etc-cni-netd\") pod \"cilium-7pzzt\" (UID: \"897815c5-1874-4a22-bbfa-1491c1537a34\") " pod="kube-system/cilium-7pzzt"
Jun 20 20:02:47.678605 kubelet[2708]: I0620 20:02:47.678512 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/897815c5-1874-4a22-bbfa-1491c1537a34-cni-path\") pod \"cilium-7pzzt\" (UID: \"897815c5-1874-4a22-bbfa-1491c1537a34\") " pod="kube-system/cilium-7pzzt"
Jun 20 20:02:47.678605 kubelet[2708]: I0620 20:02:47.678528 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/897815c5-1874-4a22-bbfa-1491c1537a34-host-proc-sys-kernel\") pod \"cilium-7pzzt\" (UID: \"897815c5-1874-4a22-bbfa-1491c1537a34\") " pod="kube-system/cilium-7pzzt"
Jun 20 20:02:47.678605 kubelet[2708]: I0620 20:02:47.678544 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/897815c5-1874-4a22-bbfa-1491c1537a34-hostproc\") pod \"cilium-7pzzt\" (UID: \"897815c5-1874-4a22-bbfa-1491c1537a34\") " pod="kube-system/cilium-7pzzt"
Jun 20 20:02:47.678605 kubelet[2708]: I0620 20:02:47.678558 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/897815c5-1874-4a22-bbfa-1491c1537a34-bpf-maps\") pod \"cilium-7pzzt\" (UID: \"897815c5-1874-4a22-bbfa-1491c1537a34\") " pod="kube-system/cilium-7pzzt"
Jun 20 20:02:47.678605 kubelet[2708]: I0620 20:02:47.678571 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/897815c5-1874-4a22-bbfa-1491c1537a34-xtables-lock\") pod \"cilium-7pzzt\" (UID: \"897815c5-1874-4a22-bbfa-1491c1537a34\") " pod="kube-system/cilium-7pzzt"
Jun 20 20:02:47.678773 kubelet[2708]: I0620 20:02:47.678585 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/897815c5-1874-4a22-bbfa-1491c1537a34-host-proc-sys-net\") pod \"cilium-7pzzt\" (UID: \"897815c5-1874-4a22-bbfa-1491c1537a34\") " pod="kube-system/cilium-7pzzt"
Jun 20 20:02:47.678773 kubelet[2708]: I0620 20:02:47.678602 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/897815c5-1874-4a22-bbfa-1491c1537a34-lib-modules\") pod \"cilium-7pzzt\" (UID: \"897815c5-1874-4a22-bbfa-1491c1537a34\") " pod="kube-system/cilium-7pzzt"
Jun 20 20:02:47.678773 kubelet[2708]: I0620 20:02:47.678619 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45jgs\" (UniqueName: \"kubernetes.io/projected/897815c5-1874-4a22-bbfa-1491c1537a34-kube-api-access-45jgs\") pod \"cilium-7pzzt\" (UID: \"897815c5-1874-4a22-bbfa-1491c1537a34\") " pod="kube-system/cilium-7pzzt"
Jun 20 20:02:47.678773 kubelet[2708]: I0620 20:02:47.678634 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/897815c5-1874-4a22-bbfa-1491c1537a34-cilium-cgroup\") pod \"cilium-7pzzt\" (UID: \"897815c5-1874-4a22-bbfa-1491c1537a34\") " pod="kube-system/cilium-7pzzt"
Jun 20 20:02:47.678773 kubelet[2708]: I0620 20:02:47.678648 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/897815c5-1874-4a22-bbfa-1491c1537a34-clustermesh-secrets\") pod \"cilium-7pzzt\" (UID: \"897815c5-1874-4a22-bbfa-1491c1537a34\") " pod="kube-system/cilium-7pzzt"
Jun 20 20:02:47.678773 kubelet[2708]: I0620 20:02:47.678663 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/897815c5-1874-4a22-bbfa-1491c1537a34-hubble-tls\") pod \"cilium-7pzzt\" (UID: \"897815c5-1874-4a22-bbfa-1491c1537a34\") " pod="kube-system/cilium-7pzzt"
Jun 20 20:02:47.678925 kubelet[2708]: I0620 20:02:47.678678 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/897815c5-1874-4a22-bbfa-1491c1537a34-cilium-run\") pod \"cilium-7pzzt\" (UID: \"897815c5-1874-4a22-bbfa-1491c1537a34\") " pod="kube-system/cilium-7pzzt"
Jun 20 20:02:47.716929 systemd[1]: Started sshd@23-172.237.156.205:22-139.178.68.195:49826.service - OpenSSH per-connection server daemon (139.178.68.195:49826).
Jun 20 20:02:47.932770 kubelet[2708]: E0620 20:02:47.932216 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Jun 20 20:02:47.933404 containerd[1543]: time="2025-06-20T20:02:47.933373432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7pzzt,Uid:897815c5-1874-4a22-bbfa-1491c1537a34,Namespace:kube-system,Attempt:0,}"
Jun 20 20:02:47.947493 containerd[1543]: time="2025-06-20T20:02:47.947465478Z" level=info msg="connecting to shim 1a48d3069a81a75e4c4ac7033195b07ef18316b9ac7236d2f654f306ccd46f58" address="unix:///run/containerd/s/b86968ec7833314db146a373cb3220e90077a4ae7302c4ac84fdf71dd53b83c2" namespace=k8s.io protocol=ttrpc version=3
Jun 20 20:02:47.978477 systemd[1]: Started cri-containerd-1a48d3069a81a75e4c4ac7033195b07ef18316b9ac7236d2f654f306ccd46f58.scope - libcontainer container 1a48d3069a81a75e4c4ac7033195b07ef18316b9ac7236d2f654f306ccd46f58.
Jun 20 20:02:48.001249 containerd[1543]: time="2025-06-20T20:02:48.001221392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7pzzt,Uid:897815c5-1874-4a22-bbfa-1491c1537a34,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a48d3069a81a75e4c4ac7033195b07ef18316b9ac7236d2f654f306ccd46f58\""
Jun 20 20:02:48.001890 kubelet[2708]: E0620 20:02:48.001833 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Jun 20 20:02:48.006121 containerd[1543]: time="2025-06-20T20:02:48.005890527Z" level=info msg="CreateContainer within sandbox \"1a48d3069a81a75e4c4ac7033195b07ef18316b9ac7236d2f654f306ccd46f58\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jun 20 20:02:48.013792 containerd[1543]: time="2025-06-20T20:02:48.013767228Z" level=info msg="Container a41a0fcc9952e2676eda1d20a1483f931d5ba83b5eb79b4e96ae48f1cf4eb208: CDI devices from CRI Config.CDIDevices: []"
Jun 20 20:02:48.017762 containerd[1543]: time="2025-06-20T20:02:48.017728294Z" level=info msg="CreateContainer within sandbox \"1a48d3069a81a75e4c4ac7033195b07ef18316b9ac7236d2f654f306ccd46f58\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a41a0fcc9952e2676eda1d20a1483f931d5ba83b5eb79b4e96ae48f1cf4eb208\""
Jun 20 20:02:48.018608 containerd[1543]: time="2025-06-20T20:02:48.018569834Z" level=info msg="StartContainer for \"a41a0fcc9952e2676eda1d20a1483f931d5ba83b5eb79b4e96ae48f1cf4eb208\""
Jun 20 20:02:48.019669 containerd[1543]: time="2025-06-20T20:02:48.019593502Z" level=info msg="connecting to shim a41a0fcc9952e2676eda1d20a1483f931d5ba83b5eb79b4e96ae48f1cf4eb208" address="unix:///run/containerd/s/b86968ec7833314db146a373cb3220e90077a4ae7302c4ac84fdf71dd53b83c2" protocol=ttrpc version=3
Jun 20 20:02:48.046486 systemd[1]: Started cri-containerd-a41a0fcc9952e2676eda1d20a1483f931d5ba83b5eb79b4e96ae48f1cf4eb208.scope - libcontainer container a41a0fcc9952e2676eda1d20a1483f931d5ba83b5eb79b4e96ae48f1cf4eb208.
Jun 20 20:02:48.069344 sshd[4421]: Accepted publickey for core from 139.178.68.195 port 49826 ssh2: RSA SHA256:DORs9jqbzw/nKviDUfk2h2gSdkTiP7QQB1UTf2L7NeA
Jun 20 20:02:48.070875 sshd-session[4421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 20:02:48.076291 systemd-logind[1522]: New session 24 of user core.
Jun 20 20:02:48.080609 containerd[1543]: time="2025-06-20T20:02:48.080318810Z" level=info msg="StartContainer for \"a41a0fcc9952e2676eda1d20a1483f931d5ba83b5eb79b4e96ae48f1cf4eb208\" returns successfully"
Jun 20 20:02:48.083686 systemd[1]: Started session-24.scope - Session 24 of User core.
Jun 20 20:02:48.084030 systemd[1]: cri-containerd-a41a0fcc9952e2676eda1d20a1483f931d5ba83b5eb79b4e96ae48f1cf4eb208.scope: Deactivated successfully.
Jun 20 20:02:48.088864 containerd[1543]: time="2025-06-20T20:02:48.088833911Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a41a0fcc9952e2676eda1d20a1483f931d5ba83b5eb79b4e96ae48f1cf4eb208\" id:\"a41a0fcc9952e2676eda1d20a1483f931d5ba83b5eb79b4e96ae48f1cf4eb208\" pid:4485 exited_at:{seconds:1750449768 nanos:88617181}"
Jun 20 20:02:48.088912 containerd[1543]: time="2025-06-20T20:02:48.088889061Z" level=info msg="received exit event container_id:\"a41a0fcc9952e2676eda1d20a1483f931d5ba83b5eb79b4e96ae48f1cf4eb208\" id:\"a41a0fcc9952e2676eda1d20a1483f931d5ba83b5eb79b4e96ae48f1cf4eb208\" pid:4485 exited_at:{seconds:1750449768 nanos:88617181}"
Jun 20 20:02:48.317150 sshd[4505]: Connection closed by 139.178.68.195 port 49826
Jun 20 20:02:48.317886 sshd-session[4421]: pam_unix(sshd:session): session closed for user core
Jun 20 20:02:48.322584 systemd[1]: sshd@23-172.237.156.205:22-139.178.68.195:49826.service: Deactivated successfully.
Jun 20 20:02:48.325064 systemd[1]: session-24.scope: Deactivated successfully.
Jun 20 20:02:48.327044 systemd-logind[1522]: Session 24 logged out. Waiting for processes to exit.
Jun 20 20:02:48.329743 systemd-logind[1522]: Removed session 24.
Jun 20 20:02:48.386500 systemd[1]: Started sshd@24-172.237.156.205:22-139.178.68.195:49836.service - OpenSSH per-connection server daemon (139.178.68.195:49836).
Jun 20 20:02:48.577468 kubelet[2708]: E0620 20:02:48.574853 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Jun 20 20:02:48.577692 containerd[1543]: time="2025-06-20T20:02:48.577528166Z" level=info msg="CreateContainer within sandbox \"1a48d3069a81a75e4c4ac7033195b07ef18316b9ac7236d2f654f306ccd46f58\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jun 20 20:02:48.586468 containerd[1543]: time="2025-06-20T20:02:48.586434157Z" level=info msg="Container de570daa6de1d75cfe2872be3499b3ad292a2cbdb1451cc9fcb7bf0e67980804: CDI devices from CRI Config.CDIDevices: []"
Jun 20 20:02:48.591106 containerd[1543]: time="2025-06-20T20:02:48.591073552Z" level=info msg="CreateContainer within sandbox \"1a48d3069a81a75e4c4ac7033195b07ef18316b9ac7236d2f654f306ccd46f58\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"de570daa6de1d75cfe2872be3499b3ad292a2cbdb1451cc9fcb7bf0e67980804\""
Jun 20 20:02:48.592210 containerd[1543]: time="2025-06-20T20:02:48.592174301Z" level=info msg="StartContainer for \"de570daa6de1d75cfe2872be3499b3ad292a2cbdb1451cc9fcb7bf0e67980804\""
Jun 20 20:02:48.597530 containerd[1543]: time="2025-06-20T20:02:48.597501505Z" level=info msg="connecting to shim de570daa6de1d75cfe2872be3499b3ad292a2cbdb1451cc9fcb7bf0e67980804" address="unix:///run/containerd/s/b86968ec7833314db146a373cb3220e90077a4ae7302c4ac84fdf71dd53b83c2" protocol=ttrpc version=3
Jun 20 20:02:48.622486 systemd[1]: Started cri-containerd-de570daa6de1d75cfe2872be3499b3ad292a2cbdb1451cc9fcb7bf0e67980804.scope - libcontainer container de570daa6de1d75cfe2872be3499b3ad292a2cbdb1451cc9fcb7bf0e67980804.
Jun 20 20:02:48.656916 containerd[1543]: time="2025-06-20T20:02:48.656873504Z" level=info msg="StartContainer for \"de570daa6de1d75cfe2872be3499b3ad292a2cbdb1451cc9fcb7bf0e67980804\" returns successfully"
Jun 20 20:02:48.668658 systemd[1]: cri-containerd-de570daa6de1d75cfe2872be3499b3ad292a2cbdb1451cc9fcb7bf0e67980804.scope: Deactivated successfully.
Jun 20 20:02:48.668967 containerd[1543]: time="2025-06-20T20:02:48.668933322Z" level=info msg="received exit event container_id:\"de570daa6de1d75cfe2872be3499b3ad292a2cbdb1451cc9fcb7bf0e67980804\" id:\"de570daa6de1d75cfe2872be3499b3ad292a2cbdb1451cc9fcb7bf0e67980804\" pid:4538 exited_at:{seconds:1750449768 nanos:668691552}"
Jun 20 20:02:48.669223 containerd[1543]: time="2025-06-20T20:02:48.669184062Z" level=info msg="TaskExit event in podsandbox handler container_id:\"de570daa6de1d75cfe2872be3499b3ad292a2cbdb1451cc9fcb7bf0e67980804\" id:\"de570daa6de1d75cfe2872be3499b3ad292a2cbdb1451cc9fcb7bf0e67980804\" pid:4538 exited_at:{seconds:1750449768 nanos:668691552}"
Jun 20 20:02:48.735458 sshd[4524]: Accepted publickey for core from 139.178.68.195 port 49836 ssh2: RSA SHA256:DORs9jqbzw/nKviDUfk2h2gSdkTiP7QQB1UTf2L7NeA
Jun 20 20:02:48.737097 sshd-session[4524]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 20:02:48.742431 systemd-logind[1522]: New session 25 of user core.
Jun 20 20:02:48.747490 systemd[1]: Started session-25.scope - Session 25 of User core.
Jun 20 20:02:49.111669 kubelet[2708]: E0620 20:02:49.111644 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Jun 20 20:02:49.580253 kubelet[2708]: E0620 20:02:49.578643 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Jun 20 20:02:49.583792 containerd[1543]: time="2025-06-20T20:02:49.583749886Z" level=info msg="CreateContainer within sandbox \"1a48d3069a81a75e4c4ac7033195b07ef18316b9ac7236d2f654f306ccd46f58\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jun 20 20:02:49.597382 containerd[1543]: time="2025-06-20T20:02:49.597051123Z" level=info msg="Container f030631f052079249b0010a581237b26230f5f9c1aad9f6f45c619969ce84967: CDI devices from CRI Config.CDIDevices: []"
Jun 20 20:02:49.602145 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4233806327.mount: Deactivated successfully.
Jun 20 20:02:49.606374 containerd[1543]: time="2025-06-20T20:02:49.606324363Z" level=info msg="CreateContainer within sandbox \"1a48d3069a81a75e4c4ac7033195b07ef18316b9ac7236d2f654f306ccd46f58\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f030631f052079249b0010a581237b26230f5f9c1aad9f6f45c619969ce84967\""
Jun 20 20:02:49.606917 containerd[1543]: time="2025-06-20T20:02:49.606873393Z" level=info msg="StartContainer for \"f030631f052079249b0010a581237b26230f5f9c1aad9f6f45c619969ce84967\""
Jun 20 20:02:49.608509 containerd[1543]: time="2025-06-20T20:02:49.608446971Z" level=info msg="connecting to shim f030631f052079249b0010a581237b26230f5f9c1aad9f6f45c619969ce84967" address="unix:///run/containerd/s/b86968ec7833314db146a373cb3220e90077a4ae7302c4ac84fdf71dd53b83c2" protocol=ttrpc version=3
Jun 20 20:02:49.632492 systemd[1]: Started cri-containerd-f030631f052079249b0010a581237b26230f5f9c1aad9f6f45c619969ce84967.scope - libcontainer container f030631f052079249b0010a581237b26230f5f9c1aad9f6f45c619969ce84967.
Jun 20 20:02:49.671646 containerd[1543]: time="2025-06-20T20:02:49.671609416Z" level=info msg="StartContainer for \"f030631f052079249b0010a581237b26230f5f9c1aad9f6f45c619969ce84967\" returns successfully"
Jun 20 20:02:49.673934 systemd[1]: cri-containerd-f030631f052079249b0010a581237b26230f5f9c1aad9f6f45c619969ce84967.scope: Deactivated successfully.
Jun 20 20:02:49.676824 containerd[1543]: time="2025-06-20T20:02:49.676754351Z" level=info msg="received exit event container_id:\"f030631f052079249b0010a581237b26230f5f9c1aad9f6f45c619969ce84967\" id:\"f030631f052079249b0010a581237b26230f5f9c1aad9f6f45c619969ce84967\" pid:4588 exited_at:{seconds:1750449769 nanos:676208071}"
Jun 20 20:02:49.676824 containerd[1543]: time="2025-06-20T20:02:49.676803761Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f030631f052079249b0010a581237b26230f5f9c1aad9f6f45c619969ce84967\" id:\"f030631f052079249b0010a581237b26230f5f9c1aad9f6f45c619969ce84967\" pid:4588 exited_at:{seconds:1750449769 nanos:676208071}"
Jun 20 20:02:49.704021 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f030631f052079249b0010a581237b26230f5f9c1aad9f6f45c619969ce84967-rootfs.mount: Deactivated successfully.
Jun 20 20:02:50.172261 kubelet[2708]: I0620 20:02:50.172209 2708 setters.go:602] "Node became not ready" node="172-237-156-205" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-06-20T20:02:50Z","lastTransitionTime":"2025-06-20T20:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jun 20 20:02:50.584194 kubelet[2708]: E0620 20:02:50.584162 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Jun 20 20:02:50.587376 containerd[1543]: time="2025-06-20T20:02:50.587305892Z" level=info msg="CreateContainer within sandbox \"1a48d3069a81a75e4c4ac7033195b07ef18316b9ac7236d2f654f306ccd46f58\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jun 20 20:02:50.607844 containerd[1543]: time="2025-06-20T20:02:50.605123113Z" level=info msg="Container 59313123ec6b8682ee38d5c2e48bedec11b45e9f5aff44cbe8b164659365ca9b: CDI devices from CRI Config.CDIDevices: []"
Jun 20 20:02:50.613806 containerd[1543]: time="2025-06-20T20:02:50.613774465Z" level=info msg="CreateContainer within sandbox \"1a48d3069a81a75e4c4ac7033195b07ef18316b9ac7236d2f654f306ccd46f58\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"59313123ec6b8682ee38d5c2e48bedec11b45e9f5aff44cbe8b164659365ca9b\""
Jun 20 20:02:50.615209 containerd[1543]: time="2025-06-20T20:02:50.614351554Z" level=info msg="StartContainer for \"59313123ec6b8682ee38d5c2e48bedec11b45e9f5aff44cbe8b164659365ca9b\""
Jun 20 20:02:50.615397 containerd[1543]: time="2025-06-20T20:02:50.615345993Z" level=info msg="connecting to shim 59313123ec6b8682ee38d5c2e48bedec11b45e9f5aff44cbe8b164659365ca9b" address="unix:///run/containerd/s/b86968ec7833314db146a373cb3220e90077a4ae7302c4ac84fdf71dd53b83c2" protocol=ttrpc version=3
Jun 20 20:02:50.641546 systemd[1]: Started cri-containerd-59313123ec6b8682ee38d5c2e48bedec11b45e9f5aff44cbe8b164659365ca9b.scope - libcontainer container 59313123ec6b8682ee38d5c2e48bedec11b45e9f5aff44cbe8b164659365ca9b.
Jun 20 20:02:50.676726 systemd[1]: cri-containerd-59313123ec6b8682ee38d5c2e48bedec11b45e9f5aff44cbe8b164659365ca9b.scope: Deactivated successfully.
Jun 20 20:02:50.679796 containerd[1543]: time="2025-06-20T20:02:50.679761427Z" level=info msg="TaskExit event in podsandbox handler container_id:\"59313123ec6b8682ee38d5c2e48bedec11b45e9f5aff44cbe8b164659365ca9b\" id:\"59313123ec6b8682ee38d5c2e48bedec11b45e9f5aff44cbe8b164659365ca9b\" pid:4628 exited_at:{seconds:1750449770 nanos:678140499}"
Jun 20 20:02:50.680099 containerd[1543]: time="2025-06-20T20:02:50.680056567Z" level=info msg="received exit event container_id:\"59313123ec6b8682ee38d5c2e48bedec11b45e9f5aff44cbe8b164659365ca9b\" id:\"59313123ec6b8682ee38d5c2e48bedec11b45e9f5aff44cbe8b164659365ca9b\" pid:4628 exited_at:{seconds:1750449770 nanos:678140499}"
Jun 20 20:02:50.680420 containerd[1543]: time="2025-06-20T20:02:50.680332717Z" level=info msg="StartContainer for \"59313123ec6b8682ee38d5c2e48bedec11b45e9f5aff44cbe8b164659365ca9b\" returns successfully"
Jun 20 20:02:50.699171 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-59313123ec6b8682ee38d5c2e48bedec11b45e9f5aff44cbe8b164659365ca9b-rootfs.mount: Deactivated successfully.
Jun 20 20:02:51.210524 kubelet[2708]: E0620 20:02:51.210473 2708 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jun 20 20:02:51.588304 kubelet[2708]: E0620 20:02:51.588238 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Jun 20 20:02:51.591936 containerd[1543]: time="2025-06-20T20:02:51.591903148Z" level=info msg="CreateContainer within sandbox \"1a48d3069a81a75e4c4ac7033195b07ef18316b9ac7236d2f654f306ccd46f58\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jun 20 20:02:51.601462 containerd[1543]: time="2025-06-20T20:02:51.601423828Z" level=info msg="Container c4b0860b73b89536349f6d110df317b86841b8de3dfa01964534370e84e43251: CDI devices from CRI Config.CDIDevices: []"
Jun 20 20:02:51.604334 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3835906017.mount: Deactivated successfully.
Jun 20 20:02:51.611135 containerd[1543]: time="2025-06-20T20:02:51.611100958Z" level=info msg="CreateContainer within sandbox \"1a48d3069a81a75e4c4ac7033195b07ef18316b9ac7236d2f654f306ccd46f58\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c4b0860b73b89536349f6d110df317b86841b8de3dfa01964534370e84e43251\""
Jun 20 20:02:51.611653 containerd[1543]: time="2025-06-20T20:02:51.611626168Z" level=info msg="StartContainer for \"c4b0860b73b89536349f6d110df317b86841b8de3dfa01964534370e84e43251\""
Jun 20 20:02:51.612710 containerd[1543]: time="2025-06-20T20:02:51.612682366Z" level=info msg="connecting to shim c4b0860b73b89536349f6d110df317b86841b8de3dfa01964534370e84e43251" address="unix:///run/containerd/s/b86968ec7833314db146a373cb3220e90077a4ae7302c4ac84fdf71dd53b83c2" protocol=ttrpc version=3
Jun 20 20:02:51.633467 systemd[1]: Started cri-containerd-c4b0860b73b89536349f6d110df317b86841b8de3dfa01964534370e84e43251.scope - libcontainer container c4b0860b73b89536349f6d110df317b86841b8de3dfa01964534370e84e43251.
Jun 20 20:02:51.664003 containerd[1543]: time="2025-06-20T20:02:51.663948954Z" level=info msg="StartContainer for \"c4b0860b73b89536349f6d110df317b86841b8de3dfa01964534370e84e43251\" returns successfully"
Jun 20 20:02:51.725178 containerd[1543]: time="2025-06-20T20:02:51.725146411Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c4b0860b73b89536349f6d110df317b86841b8de3dfa01964534370e84e43251\" id:\"4524376eb12ed6a2aaae4de71f8f642156efdad8ce3b3ee9db14fa2b1a942533\" pid:4700 exited_at:{seconds:1750449771 nanos:724926571}"
Jun 20 20:02:52.098185 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Jun 20 20:02:52.111893 kubelet[2708]: E0620 20:02:52.111839 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Jun 20 20:02:52.595140 kubelet[2708]: E0620 20:02:52.594434 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Jun 20 20:02:53.085066 containerd[1543]: time="2025-06-20T20:02:53.085019878Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c4b0860b73b89536349f6d110df317b86841b8de3dfa01964534370e84e43251\" id:\"9b4d61b3bd839901a9d9eeae4911f53003fd739ebf819ec0e7187a713f776bd3\" pid:4776 exit_status:1 exited_at:{seconds:1750449773 nanos:84786457}"
Jun 20 20:02:53.934742 kubelet[2708]: E0620 20:02:53.934673 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Jun 20 20:02:54.871320 systemd-networkd[1458]: lxc_health: Link UP
Jun 20 20:02:54.871658 systemd-networkd[1458]: lxc_health: Gained carrier
Jun 20 20:02:55.212503 containerd[1543]: time="2025-06-20T20:02:55.211847499Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c4b0860b73b89536349f6d110df317b86841b8de3dfa01964534370e84e43251\" id:\"63dec4aa4064c7239bae14356cc449209dc3cd356bb575b06384db393b882720\" pid:5212 exited_at:{seconds:1750449775 nanos:211538289}"
Jun 20 20:02:55.215779 kubelet[2708]: E0620 20:02:55.215714 2708 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:43104->127.0.0.1:42309: write tcp 127.0.0.1:43104->127.0.0.1:42309: write: broken pipe
Jun 20 20:02:55.934026 kubelet[2708]: E0620 20:02:55.933796 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Jun 20 20:02:55.950551 kubelet[2708]: I0620 20:02:55.950314 2708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7pzzt" podStartSLOduration=8.950300433 podStartE2EDuration="8.950300433s" podCreationTimestamp="2025-06-20 20:02:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 20:02:52.608552972 +0000 UTC m=+196.576793780" watchObservedRunningTime="2025-06-20 20:02:55.950300433 +0000 UTC m=+199.918541241"
Jun 20 20:02:56.084644 systemd-networkd[1458]: lxc_health: Gained IPv6LL
Jun 20 20:02:56.600944 kubelet[2708]: E0620 20:02:56.600900 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Jun 20 20:02:57.361848 containerd[1543]: time="2025-06-20T20:02:57.361778679Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c4b0860b73b89536349f6d110df317b86841b8de3dfa01964534370e84e43251\" id:\"666f576863d0a970230328636b8ad3a442a6a7d9648b453a194861c75cae1859\" pid:5243 exited_at:{seconds:1750449777 nanos:360995829}"
Jun 20 20:02:57.603492 kubelet[2708]: E0620 20:02:57.603450 2708 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Jun 20 20:02:59.450699 containerd[1543]: time="2025-06-20T20:02:59.450615437Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c4b0860b73b89536349f6d110df317b86841b8de3dfa01964534370e84e43251\" id:\"e241dd08e02b54aad670f0427e0d542e29e32fb923105cd12d834c5e96bd2468\" pid:5285 exited_at:{seconds:1750449779 nanos:450111326}"
Jun 20 20:03:01.600217 containerd[1543]: time="2025-06-20T20:03:01.600166093Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c4b0860b73b89536349f6d110df317b86841b8de3dfa01964534370e84e43251\" id:\"86ce130920ac1cfed84ee2dc875622c7530966e8ea78d09e64ddfa09b9508d33\" pid:5307 exited_at:{seconds:1750449781 nanos:598979481}"
Jun 20 20:03:01.661308 sshd[4569]: Connection closed by 139.178.68.195 port 49836
Jun 20 20:03:01.661862 sshd-session[4524]: pam_unix(sshd:session): session closed for user core
Jun 20 20:03:01.666285 systemd-logind[1522]: Session 25 logged out. Waiting for processes to exit.
Jun 20 20:03:01.666948 systemd[1]: sshd@24-172.237.156.205:22-139.178.68.195:49836.service: Deactivated successfully.
Jun 20 20:03:01.669281 systemd[1]: session-25.scope: Deactivated successfully.
Jun 20 20:03:01.671484 systemd-logind[1522]: Removed session 25.