Jul 7 06:08:04.839834 kernel: Linux version 6.12.35-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 21:56:00 -00 2025 Jul 7 06:08:04.839861 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2e0b2c30526b1d273b6d599d4c30389a93a14ce36aaa5af83a05b11c5ea5ae50 Jul 7 06:08:04.839931 kernel: BIOS-provided physical RAM map: Jul 7 06:08:04.839938 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jul 7 06:08:04.839945 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable Jul 7 06:08:04.839951 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved Jul 7 06:08:04.839960 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Jul 7 06:08:04.839967 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Jul 7 06:08:04.839976 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable Jul 7 06:08:04.839983 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Jul 7 06:08:04.839990 kernel: NX (Execute Disable) protection: active Jul 7 06:08:04.839997 kernel: APIC: Static calls initialized Jul 7 06:08:04.840003 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable Jul 7 06:08:04.840011 kernel: extended physical RAM map: Jul 7 06:08:04.840022 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Jul 7 06:08:04.840029 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable Jul 7 06:08:04.840037 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable Jul 7 06:08:04.840045 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable Jul 7 06:08:04.840053 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved Jul 7 06:08:04.840060 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Jul 7 06:08:04.840068 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Jul 7 06:08:04.840075 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable Jul 7 06:08:04.840083 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Jul 7 06:08:04.840091 kernel: efi: EFI v2.7 by EDK II Jul 7 06:08:04.840100 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518 Jul 7 06:08:04.840108 kernel: secureboot: Secure boot disabled Jul 7 06:08:04.840116 kernel: SMBIOS 2.7 present. 
Jul 7 06:08:04.840123 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Jul 7 06:08:04.840131 kernel: DMI: Memory slots populated: 1/1 Jul 7 06:08:04.840138 kernel: Hypervisor detected: KVM Jul 7 06:08:04.840173 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 7 06:08:04.840183 kernel: kvm-clock: using sched offset of 5537255900 cycles Jul 7 06:08:04.840191 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 7 06:08:04.840199 kernel: tsc: Detected 2499.996 MHz processor Jul 7 06:08:04.840207 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 7 06:08:04.840218 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 7 06:08:04.840226 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Jul 7 06:08:04.840234 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jul 7 06:08:04.840241 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 7 06:08:04.840249 kernel: Using GB pages for direct mapping Jul 7 06:08:04.840261 kernel: ACPI: Early table checksum verification disabled Jul 7 06:08:04.840272 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON) Jul 7 06:08:04.840280 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013) Jul 7 06:08:04.840288 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Jul 7 06:08:04.840297 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Jul 7 06:08:04.840305 kernel: ACPI: FACS 0x00000000789D0000 000040 Jul 7 06:08:04.840313 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Jul 7 06:08:04.840322 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jul 7 06:08:04.840330 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jul 7 06:08:04.840340 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Jul 7 06:08:04.840349 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Jul 7 06:08:04.840357 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jul 7 06:08:04.840365 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jul 7 06:08:04.840374 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013) Jul 7 06:08:04.840382 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113] Jul 7 06:08:04.840390 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159] Jul 7 06:08:04.840399 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f] Jul 7 06:08:04.840409 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027] Jul 7 06:08:04.840417 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b] Jul 7 06:08:04.840425 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075] Jul 7 06:08:04.840434 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f] Jul 7 06:08:04.840442 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037] Jul 7 06:08:04.840451 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758] Jul 7 06:08:04.840459 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e] Jul 7 06:08:04.840467 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037] Jul 7 
06:08:04.840475 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Jul 7 06:08:04.840483 kernel: NUMA: Initialized distance table, cnt=1 Jul 7 06:08:04.840493 kernel: NODE_DATA(0) allocated [mem 0x7a8eddc0-0x7a8f4fff] Jul 7 06:08:04.840502 kernel: Zone ranges: Jul 7 06:08:04.840510 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 7 06:08:04.840519 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff] Jul 7 06:08:04.840527 kernel: Normal empty Jul 7 06:08:04.840535 kernel: Device empty Jul 7 06:08:04.840543 kernel: Movable zone start for each node Jul 7 06:08:04.840552 kernel: Early memory node ranges Jul 7 06:08:04.840560 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jul 7 06:08:04.840570 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff] Jul 7 06:08:04.840578 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff] Jul 7 06:08:04.840586 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff] Jul 7 06:08:04.840594 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 7 06:08:04.840603 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jul 7 06:08:04.840611 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Jul 7 06:08:04.840620 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges Jul 7 06:08:04.840628 kernel: ACPI: PM-Timer IO Port: 0xb008 Jul 7 06:08:04.840636 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 7 06:08:04.840644 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Jul 7 06:08:04.840655 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 7 06:08:04.840663 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 7 06:08:04.840672 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 7 06:08:04.840680 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 7 06:08:04.840689 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 7 06:08:04.840697 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jul 7 06:08:04.840705 kernel: TSC deadline timer available Jul 7 06:08:04.840714 kernel: CPU topo: Max. logical packages: 1 Jul 7 06:08:04.840722 kernel: CPU topo: Max. logical dies: 1 Jul 7 06:08:04.840732 kernel: CPU topo: Max. dies per package: 1 Jul 7 06:08:04.840741 kernel: CPU topo: Max. threads per core: 2 Jul 7 06:08:04.840749 kernel: CPU topo: Num. cores per package: 1 Jul 7 06:08:04.840757 kernel: CPU topo: Num. 
threads per package: 2 Jul 7 06:08:04.840766 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Jul 7 06:08:04.840774 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jul 7 06:08:04.840782 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices Jul 7 06:08:04.840791 kernel: Booting paravirtualized kernel on KVM Jul 7 06:08:04.840799 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 7 06:08:04.840810 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jul 7 06:08:04.840818 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Jul 7 06:08:04.840827 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Jul 7 06:08:04.840835 kernel: pcpu-alloc: [0] 0 1 Jul 7 06:08:04.840843 kernel: kvm-guest: PV spinlocks enabled Jul 7 06:08:04.840852 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 7 06:08:04.840861 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2e0b2c30526b1d273b6d599d4c30389a93a14ce36aaa5af83a05b11c5ea5ae50 Jul 7 06:08:04.840870 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 7 06:08:04.840881 kernel: random: crng init done Jul 7 06:08:04.840889 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 7 06:08:04.840897 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jul 7 06:08:04.840906 kernel: Fallback order for Node 0: 0 Jul 7 06:08:04.840914 kernel: Built 1 zonelists, mobility grouping on. Total pages: 509451 Jul 7 06:08:04.840923 kernel: Policy zone: DMA32 Jul 7 06:08:04.840939 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 7 06:08:04.840950 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 7 06:08:04.840959 kernel: Kernel/User page tables isolation: enabled Jul 7 06:08:04.840968 kernel: ftrace: allocating 40095 entries in 157 pages Jul 7 06:08:04.840976 kernel: ftrace: allocated 157 pages with 5 groups Jul 7 06:08:04.840987 kernel: Dynamic Preempt: voluntary Jul 7 06:08:04.840996 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 7 06:08:04.841006 kernel: rcu: RCU event tracing is enabled. Jul 7 06:08:04.841015 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 7 06:08:04.841024 kernel: Trampoline variant of Tasks RCU enabled. Jul 7 06:08:04.841033 kernel: Rude variant of Tasks RCU enabled. Jul 7 06:08:04.841044 kernel: Tracing variant of Tasks RCU enabled. Jul 7 06:08:04.841053 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 7 06:08:04.841061 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 7 06:08:04.841070 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 7 06:08:04.841079 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 7 06:08:04.841088 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jul 7 06:08:04.841097 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jul 7 06:08:04.841106 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 7 06:08:04.841117 kernel: Console: colour dummy device 80x25 Jul 7 06:08:04.841126 kernel: printk: legacy console [tty0] enabled Jul 7 06:08:04.841134 kernel: printk: legacy console [ttyS0] enabled Jul 7 06:08:04.841143 kernel: ACPI: Core revision 20240827 Jul 7 06:08:04.841168 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Jul 7 06:08:04.841177 kernel: APIC: Switch to symmetric I/O mode setup Jul 7 06:08:04.841186 kernel: x2apic enabled Jul 7 06:08:04.841195 kernel: APIC: Switched APIC routing to: physical x2apic Jul 7 06:08:04.841204 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Jul 7 06:08:04.841215 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996) Jul 7 06:08:04.841224 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jul 7 06:08:04.841233 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Jul 7 06:08:04.841242 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 7 06:08:04.841251 kernel: Spectre V2 : Mitigation: Retpolines Jul 7 06:08:04.841259 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jul 7 06:08:04.841268 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Jul 7 06:08:04.841277 kernel: RETBleed: Vulnerable Jul 7 06:08:04.841285 kernel: Speculative Store Bypass: Vulnerable Jul 7 06:08:04.841294 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Jul 7 06:08:04.841303 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jul 7 06:08:04.841314 kernel: GDS: Unknown: Dependent on hypervisor status Jul 7 06:08:04.841323 kernel: ITS: Mitigation: Aligned branch/return thunks Jul 7 06:08:04.841331 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 7 06:08:04.841340 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 7 06:08:04.841349 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 7 06:08:04.841358 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Jul 7 06:08:04.841366 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Jul 7 06:08:04.841375 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jul 7 06:08:04.841384 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jul 7 06:08:04.841392 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jul 7 06:08:04.841404 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Jul 7 06:08:04.841412 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 7 06:08:04.841421 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Jul 7 06:08:04.841430 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Jul 7 06:08:04.841438 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Jul 7 06:08:04.841447 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Jul 7 06:08:04.841455 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Jul 7 06:08:04.841464 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Jul 7 06:08:04.841473 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' 
format. Jul 7 06:08:04.841482 kernel: Freeing SMP alternatives memory: 32K Jul 7 06:08:04.841491 kernel: pid_max: default: 32768 minimum: 301 Jul 7 06:08:04.841499 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jul 7 06:08:04.841511 kernel: landlock: Up and running. Jul 7 06:08:04.841519 kernel: SELinux: Initializing. Jul 7 06:08:04.841528 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 7 06:08:04.841537 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 7 06:08:04.841546 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Jul 7 06:08:04.841555 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jul 7 06:08:04.841564 kernel: signal: max sigframe size: 3632 Jul 7 06:08:04.841573 kernel: rcu: Hierarchical SRCU implementation. Jul 7 06:08:04.841582 kernel: rcu: Max phase no-delay instances is 400. Jul 7 06:08:04.841591 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jul 7 06:08:04.841603 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jul 7 06:08:04.841612 kernel: smp: Bringing up secondary CPUs ... Jul 7 06:08:04.841620 kernel: smpboot: x86: Booting SMP configuration: Jul 7 06:08:04.841629 kernel: .... node #0, CPUs: #1 Jul 7 06:08:04.841638 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jul 7 06:08:04.841648 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jul 7 06:08:04.841657 kernel: smp: Brought up 1 node, 2 CPUs Jul 7 06:08:04.841665 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS) Jul 7 06:08:04.841675 kernel: Memory: 1908052K/2037804K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54432K init, 2536K bss, 125188K reserved, 0K cma-reserved) Jul 7 06:08:04.841686 kernel: devtmpfs: initialized Jul 7 06:08:04.841695 kernel: x86/mm: Memory block size: 128MB Jul 7 06:08:04.841704 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes) Jul 7 06:08:04.841713 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 7 06:08:04.841722 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 7 06:08:04.841731 kernel: pinctrl core: initialized pinctrl subsystem Jul 7 06:08:04.841740 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 7 06:08:04.841749 kernel: audit: initializing netlink subsys (disabled) Jul 7 06:08:04.841760 kernel: audit: type=2000 audit(1751868483.308:1): state=initialized audit_enabled=0 res=1 Jul 7 06:08:04.841769 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 7 06:08:04.841777 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 7 06:08:04.841786 kernel: cpuidle: using governor menu Jul 7 06:08:04.841795 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 7 06:08:04.841804 kernel: dca service started, version 1.12.1 Jul 7 06:08:04.841814 kernel: PCI: Using configuration type 1 for base access Jul 7 06:08:04.841822 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jul 7 06:08:04.841832 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 7 06:08:04.841843 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 7 06:08:04.841852 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 7 06:08:04.841861 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 7 06:08:04.841870 kernel: ACPI: Added _OSI(Module Device) Jul 7 06:08:04.841878 kernel: ACPI: Added _OSI(Processor Device) Jul 7 06:08:04.841887 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 7 06:08:04.841896 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jul 7 06:08:04.841905 kernel: ACPI: Interpreter enabled Jul 7 06:08:04.841914 kernel: ACPI: PM: (supports S0 S5) Jul 7 06:08:04.841926 kernel: ACPI: Using IOAPIC for interrupt routing Jul 7 06:08:04.841934 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 7 06:08:04.841944 kernel: PCI: Using E820 reservations for host bridge windows Jul 7 06:08:04.841953 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jul 7 06:08:04.841962 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 7 06:08:04.842123 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jul 7 06:08:04.842248 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jul 7 06:08:04.842340 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jul 7 06:08:04.842355 kernel: acpiphp: Slot [3] registered Jul 7 06:08:04.842365 kernel: acpiphp: Slot [4] registered Jul 7 06:08:04.842373 kernel: acpiphp: Slot [5] registered Jul 7 06:08:04.842382 kernel: acpiphp: Slot [6] registered Jul 7 06:08:04.842391 kernel: acpiphp: Slot [7] registered Jul 7 06:08:04.842400 kernel: acpiphp: Slot [8] registered Jul 7 06:08:04.842409 kernel: acpiphp: Slot [9] registered Jul 7 06:08:04.842418 kernel: acpiphp: Slot [10] registered Jul 7 06:08:04.842427 kernel: acpiphp: Slot [11] registered Jul 7 06:08:04.842438 kernel: acpiphp: Slot [12] registered Jul 7 06:08:04.842447 kernel: acpiphp: Slot [13] registered Jul 7 06:08:04.842456 kernel: acpiphp: Slot [14] registered Jul 7 06:08:04.842465 kernel: acpiphp: Slot [15] registered Jul 7 06:08:04.842474 kernel: acpiphp: Slot [16] registered Jul 7 06:08:04.842482 kernel: acpiphp: Slot [17] registered Jul 7 06:08:04.842491 kernel: acpiphp: Slot [18] registered Jul 7 06:08:04.842500 kernel: acpiphp: Slot [19] registered Jul 7 06:08:04.842509 kernel: acpiphp: Slot [20] registered Jul 7 06:08:04.842520 kernel: acpiphp: Slot [21] registered Jul 7 06:08:04.842528 kernel: acpiphp: Slot [22] registered Jul 7 06:08:04.842537 kernel: acpiphp: Slot [23] registered Jul 7 06:08:04.842546 kernel: acpiphp: Slot [24] registered Jul 7 06:08:04.842555 kernel: acpiphp: Slot [25] registered Jul 7 06:08:04.842564 kernel: acpiphp: Slot [26] registered Jul 7 06:08:04.842572 kernel: acpiphp: Slot [27] registered Jul 7 06:08:04.842581 kernel: acpiphp: Slot [28] registered Jul 7 06:08:04.842590 kernel: acpiphp: Slot [29] registered Jul 7 06:08:04.842598 kernel: acpiphp: Slot [30] registered Jul 7 06:08:04.842609 kernel: acpiphp: Slot [31] registered Jul 7 06:08:04.842618 kernel: PCI host bridge to bus 0000:00 Jul 7 06:08:04.842711 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 7 06:08:04.842793 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 7 
06:08:04.842874 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 7 06:08:04.842960 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jul 7 06:08:04.843040 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window] Jul 7 06:08:04.843124 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 7 06:08:04.843243 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Jul 7 06:08:04.843346 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint Jul 7 06:08:04.843444 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint Jul 7 06:08:04.843535 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jul 7 06:08:04.843635 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Jul 7 06:08:04.843727 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Jul 7 06:08:04.843817 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Jul 7 06:08:04.843906 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Jul 7 06:08:04.843997 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Jul 7 06:08:04.844086 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Jul 7 06:08:04.844762 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint Jul 7 06:08:04.844870 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref] Jul 7 06:08:04.844966 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref] Jul 7 06:08:04.845057 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 7 06:08:04.845168 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint Jul 7 06:08:04.845260 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff] Jul 7 06:08:04.845359 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint Jul 7 06:08:04.845450 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff] Jul 7 06:08:04.845465 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 7 06:08:04.845474 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 7 06:08:04.845483 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 7 06:08:04.845492 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 7 06:08:04.845503 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jul 7 06:08:04.845513 kernel: iommu: Default domain type: Translated Jul 7 06:08:04.845521 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 7 06:08:04.845530 kernel: efivars: Registered efivars operations Jul 7 06:08:04.845539 kernel: PCI: Using ACPI for IRQ routing Jul 7 06:08:04.845551 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 7 06:08:04.845560 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff] Jul 7 06:08:04.845570 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff] Jul 7 06:08:04.845578 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff] Jul 7 06:08:04.845666 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Jul 7 06:08:04.845755 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Jul 7 06:08:04.845845 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 7 06:08:04.845857 kernel: vgaarb: loaded Jul 7 06:08:04.845866 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Jul 7 06:08:04.845878 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Jul 7 
06:08:04.845887 kernel: clocksource: Switched to clocksource kvm-clock Jul 7 06:08:04.845896 kernel: VFS: Disk quotas dquot_6.6.0 Jul 7 06:08:04.845905 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 7 06:08:04.845914 kernel: pnp: PnP ACPI init Jul 7 06:08:04.845923 kernel: pnp: PnP ACPI: found 5 devices Jul 7 06:08:04.845932 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 7 06:08:04.845941 kernel: NET: Registered PF_INET protocol family Jul 7 06:08:04.845950 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 7 06:08:04.845962 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jul 7 06:08:04.848186 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 7 06:08:04.848202 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 7 06:08:04.848213 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jul 7 06:08:04.848223 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jul 7 06:08:04.848232 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 7 06:08:04.848241 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 7 06:08:04.848251 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 7 06:08:04.848264 kernel: NET: Registered PF_XDP protocol family Jul 7 06:08:04.848388 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 7 06:08:04.848473 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 7 06:08:04.848554 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 7 06:08:04.848634 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jul 7 06:08:04.848713 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window] Jul 7 06:08:04.848811 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jul 7 06:08:04.848824 kernel: PCI: CLS 0 bytes, default 64 Jul 7 06:08:04.848833 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jul 7 06:08:04.848846 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Jul 7 06:08:04.848856 kernel: clocksource: Switched to clocksource tsc Jul 7 06:08:04.848865 kernel: Initialise system trusted keyrings Jul 7 06:08:04.848874 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jul 7 06:08:04.848883 kernel: Key type asymmetric registered Jul 7 06:08:04.848892 kernel: Asymmetric key parser 'x509' registered Jul 7 06:08:04.848901 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 7 06:08:04.848910 kernel: io scheduler mq-deadline registered Jul 7 06:08:04.848919 kernel: io scheduler kyber registered Jul 7 06:08:04.848931 kernel: io scheduler bfq registered Jul 7 06:08:04.848940 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 7 06:08:04.848949 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 7 06:08:04.848958 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 7 06:08:04.848967 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 7 06:08:04.848976 kernel: i8042: Warning: Keylock active Jul 7 06:08:04.848985 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 7 06:08:04.848994 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 7 06:08:04.849111 kernel: rtc_cmos 00:00: RTC 
can wake from S4 Jul 7 06:08:04.849229 kernel: rtc_cmos 00:00: registered as rtc0 Jul 7 06:08:04.849314 kernel: rtc_cmos 00:00: setting system clock to 2025-07-07T06:08:04 UTC (1751868484) Jul 7 06:08:04.849397 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jul 7 06:08:04.849409 kernel: intel_pstate: CPU model not supported Jul 7 06:08:04.849437 kernel: efifb: probing for efifb Jul 7 06:08:04.849449 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k Jul 7 06:08:04.849458 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Jul 7 06:08:04.849470 kernel: efifb: scrolling: redraw Jul 7 06:08:04.849479 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jul 7 06:08:04.849489 kernel: Console: switching to colour frame buffer device 100x37 Jul 7 06:08:04.849499 kernel: fb0: EFI VGA frame buffer device Jul 7 06:08:04.849508 kernel: pstore: Using crash dump compression: deflate Jul 7 06:08:04.849518 kernel: pstore: Registered efi_pstore as persistent store backend Jul 7 06:08:04.849528 kernel: NET: Registered PF_INET6 protocol family Jul 7 06:08:04.849537 kernel: Segment Routing with IPv6 Jul 7 06:08:04.849546 kernel: In-situ OAM (IOAM) with IPv6 Jul 7 06:08:04.849558 kernel: NET: Registered PF_PACKET protocol family Jul 7 06:08:04.849568 kernel: Key type dns_resolver registered Jul 7 06:08:04.849577 kernel: IPI shorthand broadcast: enabled Jul 7 06:08:04.849587 kernel: sched_clock: Marking stable (2664001973, 145599600)->(2877989609, -68388036) Jul 7 06:08:04.849596 kernel: registered taskstats version 1 Jul 7 06:08:04.849606 kernel: Loading compiled-in X.509 certificates Jul 7 06:08:04.849615 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.35-flatcar: b8e96f4c6a9e663230fc9c12b186cf91fcc7a64e' Jul 7 06:08:04.849624 kernel: Demotion targets for Node 0: null Jul 7 06:08:04.849634 kernel: Key type .fscrypt registered Jul 7 06:08:04.849643 kernel: Key type fscrypt-provisioning registered Jul 7 06:08:04.849656 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 7 06:08:04.849666 kernel: ima: Allocated hash algorithm: sha1 Jul 7 06:08:04.849675 kernel: ima: No architecture policies found Jul 7 06:08:04.849684 kernel: clk: Disabling unused clocks Jul 7 06:08:04.849694 kernel: Warning: unable to open an initial console. Jul 7 06:08:04.849703 kernel: Freeing unused kernel image (initmem) memory: 54432K Jul 7 06:08:04.849713 kernel: Write protecting the kernel read-only data: 24576k Jul 7 06:08:04.849722 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Jul 7 06:08:04.849734 kernel: Run /init as init process Jul 7 06:08:04.849746 kernel: with arguments: Jul 7 06:08:04.849755 kernel: /init Jul 7 06:08:04.849764 kernel: with environment: Jul 7 06:08:04.849773 kernel: HOME=/ Jul 7 06:08:04.849783 kernel: TERM=linux Jul 7 06:08:04.849794 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 7 06:08:04.849805 systemd[1]: Successfully made /usr/ read-only. Jul 7 06:08:04.849817 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 7 06:08:04.849828 systemd[1]: Detected virtualization amazon. Jul 7 06:08:04.849837 systemd[1]: Detected architecture x86-64. Jul 7 06:08:04.849847 systemd[1]: Running in initrd. 
Jul 7 06:08:04.849857 systemd[1]: No hostname configured, using default hostname. Jul 7 06:08:04.849869 systemd[1]: Hostname set to . Jul 7 06:08:04.849879 systemd[1]: Initializing machine ID from VM UUID. Jul 7 06:08:04.849888 systemd[1]: Queued start job for default target initrd.target. Jul 7 06:08:04.849898 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 06:08:04.849908 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 06:08:04.849919 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 7 06:08:04.849929 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 7 06:08:04.849939 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 7 06:08:04.849952 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 7 06:08:04.849963 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 7 06:08:04.849973 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 7 06:08:04.849983 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 06:08:04.849993 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 06:08:04.850003 systemd[1]: Reached target paths.target - Path Units. Jul 7 06:08:04.850013 systemd[1]: Reached target slices.target - Slice Units. Jul 7 06:08:04.850025 systemd[1]: Reached target swap.target - Swaps. Jul 7 06:08:04.850035 systemd[1]: Reached target timers.target - Timer Units. Jul 7 06:08:04.850045 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 06:08:04.850055 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 06:08:04.850065 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 7 06:08:04.850075 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 7 06:08:04.850085 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 7 06:08:04.850095 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 7 06:08:04.850107 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 06:08:04.850117 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 06:08:04.850127 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 7 06:08:04.850137 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 7 06:08:04.850156 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 7 06:08:04.850167 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 7 06:08:04.850177 systemd[1]: Starting systemd-fsck-usr.service... Jul 7 06:08:04.850187 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 06:08:04.850204 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 06:08:04.850217 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 06:08:04.851186 systemd-journald[207]: Collecting audit messages is disabled. 
Jul 7 06:08:04.851224 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 7 06:08:04.851240 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 06:08:04.851251 systemd[1]: Finished systemd-fsck-usr.service. Jul 7 06:08:04.851262 systemd-journald[207]: Journal started Jul 7 06:08:04.851287 systemd-journald[207]: Runtime Journal (/run/log/journal/ec23c38c4525991e8dcab1dac89168aa) is 4.8M, max 38.4M, 33.6M free. Jul 7 06:08:04.853487 systemd-modules-load[208]: Inserted module 'overlay' Jul 7 06:08:04.854561 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 06:08:04.858278 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 7 06:08:04.860327 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 06:08:04.873832 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 06:08:04.879387 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 7 06:08:04.882977 systemd-tmpfiles[220]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 7 06:08:04.891171 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 7 06:08:04.893578 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 06:08:04.894976 kernel: Bridge firewalling registered Jul 7 06:08:04.894210 systemd-modules-load[208]: Inserted module 'br_netfilter' Jul 7 06:08:04.895570 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 7 06:08:04.896180 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 06:08:04.899295 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 06:08:04.901296 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 06:08:04.916555 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 06:08:04.922671 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 7 06:08:04.924450 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 06:08:04.926867 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 06:08:04.934462 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 06:08:04.952716 dracut-cmdline[243]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2e0b2c30526b1d273b6d599d4c30389a93a14ce36aaa5af83a05b11c5ea5ae50 Jul 7 06:08:04.993191 systemd-resolved[247]: Positive Trust Anchors: Jul 7 06:08:04.994100 systemd-resolved[247]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 06:08:04.994223 systemd-resolved[247]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 06:08:05.001521 systemd-resolved[247]: Defaulting to hostname 'linux'. Jul 7 06:08:05.007489 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 7 06:08:05.008428 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 06:08:05.009198 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 06:08:05.052182 kernel: SCSI subsystem initialized Jul 7 06:08:05.062178 kernel: Loading iSCSI transport class v2.0-870. Jul 7 06:08:05.074182 kernel: iscsi: registered transport (tcp) Jul 7 06:08:05.096381 kernel: iscsi: registered transport (qla4xxx) Jul 7 06:08:05.096463 kernel: QLogic iSCSI HBA Driver Jul 7 06:08:05.115855 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 7 06:08:05.131579 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 06:08:05.133883 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 7 06:08:05.181206 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 7 06:08:05.183350 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 7 06:08:05.236203 kernel: raid6: avx512x4 gen() 17721 MB/s Jul 7 06:08:05.254181 kernel: raid6: avx512x2 gen() 18177 MB/s Jul 7 06:08:05.272186 kernel: raid6: avx512x1 gen() 18221 MB/s Jul 7 06:08:05.290175 kernel: raid6: avx2x4 gen() 18027 MB/s Jul 7 06:08:05.308178 kernel: raid6: avx2x2 gen() 18095 MB/s Jul 7 06:08:05.326414 kernel: raid6: avx2x1 gen() 13930 MB/s Jul 7 06:08:05.326465 kernel: raid6: using algorithm avx512x1 gen() 18221 MB/s Jul 7 06:08:05.345389 kernel: raid6: .... xor() 21313 MB/s, rmw enabled Jul 7 06:08:05.345446 kernel: raid6: using avx512x2 recovery algorithm Jul 7 06:08:05.367192 kernel: xor: automatically using best checksumming function avx Jul 7 06:08:05.534182 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 7 06:08:05.540604 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 7 06:08:05.542715 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 06:08:05.570185 systemd-udevd[456]: Using default interface naming scheme 'v255'. Jul 7 06:08:05.576827 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 06:08:05.580834 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 7 06:08:05.610591 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation Jul 7 06:08:05.622276 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Jul 7 06:08:05.644135 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 7 06:08:05.646113 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Jul 7 06:08:05.702851 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 06:08:05.706078 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 7 06:08:05.770522 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jul 7 06:08:05.770739 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jul 7 06:08:05.774592 kernel: nvme nvme0: pci function 0000:00:04.0 Jul 7 06:08:05.774812 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jul 7 06:08:05.785399 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Jul 7 06:08:05.789334 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jul 7 06:08:05.794209 kernel: cryptd: max_cpu_qlen set to 1000 Jul 7 06:08:05.798533 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 06:08:05.809225 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 7 06:08:05.809260 kernel: GPT:9289727 != 16777215 Jul 7 06:08:05.809280 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 7 06:08:05.809299 kernel: GPT:9289727 != 16777215 Jul 7 06:08:05.809319 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 7 06:08:05.809339 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 7 06:08:05.798853 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 06:08:05.810719 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 06:08:05.813816 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 06:08:05.815702 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:13:23:1a:44:3d Jul 7 06:08:05.819350 (udev-worker)[504]: Network interface NamePolicy= disabled on kernel command line. Jul 7 06:08:05.831918 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 06:08:05.832058 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 06:08:05.838346 kernel: AES CTR mode by8 optimization enabled Jul 7 06:08:05.839326 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 7 06:08:05.853085 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 06:08:05.892821 kernel: nvme nvme0: using unchecked data buffer Jul 7 06:08:05.901372 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 06:08:05.975511 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jul 7 06:08:05.987786 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jul 7 06:08:05.999518 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jul 7 06:08:06.000281 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 7 06:08:06.026133 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jul 7 06:08:06.026693 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jul 7 06:08:06.027864 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 06:08:06.028902 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 06:08:06.030030 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
Jul 7 06:08:06.031668 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 7 06:08:06.033497 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 7 06:08:06.051380 disk-uuid[696]: Primary Header is updated. Jul 7 06:08:06.051380 disk-uuid[696]: Secondary Entries is updated. Jul 7 06:08:06.051380 disk-uuid[696]: Secondary Header is updated. Jul 7 06:08:06.059267 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 7 06:08:06.059232 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 7 06:08:07.074677 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 7 06:08:07.074739 disk-uuid[700]: The operation has completed successfully. Jul 7 06:08:07.194043 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 7 06:08:07.194170 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 7 06:08:07.231507 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 7 06:08:07.251685 sh[964]: Success Jul 7 06:08:07.277175 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 7 06:08:07.277244 kernel: device-mapper: uevent: version 1.0.3 Jul 7 06:08:07.280013 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 7 06:08:07.290165 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Jul 7 06:08:07.399332 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 7 06:08:07.404256 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 7 06:08:07.418000 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 7 06:08:07.442676 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 7 06:08:07.442738 kernel: BTRFS: device fsid 9d124217-7448-4fc6-a329-8a233bb5a0ac devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (987) Jul 7 06:08:07.449173 kernel: BTRFS info (device dm-0): first mount of filesystem 9d124217-7448-4fc6-a329-8a233bb5a0ac Jul 7 06:08:07.449262 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 7 06:08:07.449285 kernel: BTRFS info (device dm-0): using free-space-tree Jul 7 06:08:07.567522 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 7 06:08:07.568469 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 7 06:08:07.569021 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 7 06:08:07.569770 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 7 06:08:07.571738 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 7 06:08:07.607173 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1020) Jul 7 06:08:07.611430 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 847f3129-822b-493d-8278-974df083638f Jul 7 06:08:07.611490 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jul 7 06:08:07.611503 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 7 06:08:07.624177 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 847f3129-822b-493d-8278-974df083638f Jul 7 06:08:07.625316 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Jul 7 06:08:07.627287 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 7 06:08:07.667673 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 06:08:07.670039 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 06:08:07.710003 systemd-networkd[1156]: lo: Link UP Jul 7 06:08:07.710015 systemd-networkd[1156]: lo: Gained carrier Jul 7 06:08:07.711417 systemd-networkd[1156]: Enumeration completed Jul 7 06:08:07.711731 systemd-networkd[1156]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 06:08:07.711735 systemd-networkd[1156]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 06:08:07.712607 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 06:08:07.714571 systemd-networkd[1156]: eth0: Link UP Jul 7 06:08:07.714575 systemd-networkd[1156]: eth0: Gained carrier Jul 7 06:08:07.714589 systemd-networkd[1156]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 06:08:07.714806 systemd[1]: Reached target network.target - Network. Jul 7 06:08:07.726322 systemd-networkd[1156]: eth0: DHCPv4 address 172.31.29.6/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 7 06:08:08.077859 ignition[1100]: Ignition 2.21.0 Jul 7 06:08:08.077877 ignition[1100]: Stage: fetch-offline Jul 7 06:08:08.078101 ignition[1100]: no configs at "/usr/lib/ignition/base.d" Jul 7 06:08:08.078114 ignition[1100]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 7 06:08:08.078507 ignition[1100]: Ignition finished successfully Jul 7 06:08:08.081184 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 06:08:08.082867 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jul 7 06:08:08.107748 ignition[1166]: Ignition 2.21.0 Jul 7 06:08:08.107764 ignition[1166]: Stage: fetch Jul 7 06:08:08.108141 ignition[1166]: no configs at "/usr/lib/ignition/base.d" Jul 7 06:08:08.108173 ignition[1166]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 7 06:08:08.108294 ignition[1166]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 7 06:08:08.179090 ignition[1166]: PUT result: OK Jul 7 06:08:08.189680 ignition[1166]: parsed url from cmdline: "" Jul 7 06:08:08.189690 ignition[1166]: no config URL provided Jul 7 06:08:08.189699 ignition[1166]: reading system config file "/usr/lib/ignition/user.ign" Jul 7 06:08:08.189711 ignition[1166]: no config at "/usr/lib/ignition/user.ign" Jul 7 06:08:08.189735 ignition[1166]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 7 06:08:08.194723 ignition[1166]: PUT result: OK Jul 7 06:08:08.194805 ignition[1166]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jul 7 06:08:08.195540 ignition[1166]: GET result: OK Jul 7 06:08:08.195625 ignition[1166]: parsing config with SHA512: e4d6c0f5d5ea15527d9c869f0d4ad2e461194f5e4a61f4f4c98f1154b5a56fc82cc8e4e02e763fa0f2e5aff3beb18dcc537f29d3b5698e4b020006a6e45dcf7a Jul 7 06:08:08.202491 unknown[1166]: fetched base config from "system" Jul 7 06:08:08.202501 unknown[1166]: fetched base config from "system" Jul 7 06:08:08.202838 ignition[1166]: fetch: fetch complete Jul 7 06:08:08.202506 unknown[1166]: fetched user config from "aws" Jul 7 06:08:08.202842 ignition[1166]: fetch: fetch passed Jul 7 06:08:08.202880 ignition[1166]: Ignition finished successfully Jul 7 06:08:08.206165 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 7 06:08:08.207817 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 7 06:08:08.246037 ignition[1172]: Ignition 2.21.0 Jul 7 06:08:08.246054 ignition[1172]: Stage: kargs Jul 7 06:08:08.246359 ignition[1172]: no configs at "/usr/lib/ignition/base.d" Jul 7 06:08:08.246368 ignition[1172]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 7 06:08:08.246446 ignition[1172]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 7 06:08:08.247829 ignition[1172]: PUT result: OK Jul 7 06:08:08.250993 ignition[1172]: kargs: kargs passed Jul 7 06:08:08.251056 ignition[1172]: Ignition finished successfully Jul 7 06:08:08.252995 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 7 06:08:08.254449 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 7 06:08:08.279728 ignition[1179]: Ignition 2.21.0 Jul 7 06:08:08.279742 ignition[1179]: Stage: disks Jul 7 06:08:08.280125 ignition[1179]: no configs at "/usr/lib/ignition/base.d" Jul 7 06:08:08.280139 ignition[1179]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 7 06:08:08.280278 ignition[1179]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 7 06:08:08.281168 ignition[1179]: PUT result: OK Jul 7 06:08:08.285053 ignition[1179]: disks: disks passed Jul 7 06:08:08.286847 ignition[1179]: Ignition finished successfully Jul 7 06:08:08.288541 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 7 06:08:08.289456 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 7 06:08:08.290088 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 7 06:08:08.290500 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 06:08:08.291025 systemd[1]: Reached target sysinit.target - System Initialization. 
Jul 7 06:08:08.291587 systemd[1]: Reached target basic.target - Basic System. Jul 7 06:08:08.293135 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 7 06:08:08.337125 systemd-fsck[1188]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 7 06:08:08.339829 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 7 06:08:08.341561 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 7 06:08:08.534270 kernel: EXT4-fs (nvme0n1p9): mounted filesystem df0fa228-af1b-4496-9a54-2d4ccccd27d9 r/w with ordered data mode. Quota mode: none. Jul 7 06:08:08.535389 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 7 06:08:08.536245 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 7 06:08:08.538834 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 7 06:08:08.542286 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 7 06:08:08.543863 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 7 06:08:08.544564 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 7 06:08:08.544593 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 06:08:08.548901 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 7 06:08:08.550905 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 7 06:08:08.570186 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1207) Jul 7 06:08:08.575102 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 847f3129-822b-493d-8278-974df083638f Jul 7 06:08:08.575193 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jul 7 06:08:08.575215 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 7 06:08:08.584072 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 7 06:08:08.976821 initrd-setup-root[1231]: cut: /sysroot/etc/passwd: No such file or directory Jul 7 06:08:09.001716 initrd-setup-root[1238]: cut: /sysroot/etc/group: No such file or directory Jul 7 06:08:09.007050 initrd-setup-root[1245]: cut: /sysroot/etc/shadow: No such file or directory Jul 7 06:08:09.012578 initrd-setup-root[1252]: cut: /sysroot/etc/gshadow: No such file or directory Jul 7 06:08:09.308450 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 7 06:08:09.310906 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 7 06:08:09.312402 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 7 06:08:09.325697 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Jul 7 06:08:09.328163 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 847f3129-822b-493d-8278-974df083638f Jul 7 06:08:09.351583 ignition[1319]: INFO : Ignition 2.21.0 Jul 7 06:08:09.351583 ignition[1319]: INFO : Stage: mount Jul 7 06:08:09.352794 ignition[1319]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 06:08:09.352794 ignition[1319]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 7 06:08:09.352794 ignition[1319]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 7 06:08:09.355912 ignition[1319]: INFO : PUT result: OK Jul 7 06:08:09.355912 ignition[1319]: INFO : mount: mount passed Jul 7 06:08:09.355912 ignition[1319]: INFO : Ignition finished successfully Jul 7 06:08:09.358620 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 7 06:08:09.359772 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 7 06:08:09.360382 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 7 06:08:09.537420 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 7 06:08:09.572204 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1332) Jul 7 06:08:09.577071 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 847f3129-822b-493d-8278-974df083638f Jul 7 06:08:09.577144 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jul 7 06:08:09.577182 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 7 06:08:09.587205 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 7 06:08:09.616634 ignition[1348]: INFO : Ignition 2.21.0 Jul 7 06:08:09.616634 ignition[1348]: INFO : Stage: files Jul 7 06:08:09.618140 ignition[1348]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 06:08:09.618140 ignition[1348]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 7 06:08:09.618140 ignition[1348]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 7 06:08:09.618140 ignition[1348]: INFO : PUT result: OK Jul 7 06:08:09.623228 ignition[1348]: DEBUG : files: compiled without relabeling support, skipping Jul 7 06:08:09.625106 ignition[1348]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 7 06:08:09.625106 ignition[1348]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 7 06:08:09.629215 ignition[1348]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 7 06:08:09.629899 ignition[1348]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 7 06:08:09.629899 ignition[1348]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 7 06:08:09.629613 unknown[1348]: wrote ssh authorized keys file for user: core Jul 7 06:08:09.633654 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 7 06:08:09.634658 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jul 7 06:08:09.711721 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 7 06:08:09.723289 systemd-networkd[1156]: eth0: Gained IPv6LL Jul 7 06:08:10.369143 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 7 06:08:10.369143 ignition[1348]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 7 06:08:10.371032 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jul 7 06:08:10.743705 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 7 06:08:10.960275 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 7 06:08:10.961420 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 7 06:08:10.961420 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 7 06:08:10.961420 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 7 06:08:10.961420 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 7 06:08:10.961420 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 7 06:08:10.961420 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 7 06:08:10.961420 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 7 06:08:10.961420 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 7 06:08:10.968266 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 06:08:10.968266 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 06:08:10.968266 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 7 06:08:10.971131 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 7 06:08:10.971131 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 7 06:08:10.971131 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jul 7 06:08:11.565536 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 7 06:08:11.989806 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 7 06:08:11.989806 ignition[1348]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 7 06:08:11.992213 ignition[1348]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 7 06:08:11.996554 ignition[1348]: INFO : files: op(c): op(d): 
[finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 7 06:08:11.996554 ignition[1348]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jul 7 06:08:11.996554 ignition[1348]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jul 7 06:08:11.999135 ignition[1348]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jul 7 06:08:11.999135 ignition[1348]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 7 06:08:11.999135 ignition[1348]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 7 06:08:11.999135 ignition[1348]: INFO : files: files passed Jul 7 06:08:11.999135 ignition[1348]: INFO : Ignition finished successfully Jul 7 06:08:11.998376 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 7 06:08:12.001282 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 7 06:08:12.003917 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 7 06:08:12.016350 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 7 06:08:12.016728 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 7 06:08:12.021346 initrd-setup-root-after-ignition[1379]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 06:08:12.022864 initrd-setup-root-after-ignition[1383]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 06:08:12.023660 initrd-setup-root-after-ignition[1379]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 7 06:08:12.023169 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 06:08:12.024440 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 7 06:08:12.026430 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 7 06:08:12.074874 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 7 06:08:12.075013 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 7 06:08:12.076271 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 7 06:08:12.077331 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 7 06:08:12.078080 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 7 06:08:12.079386 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 7 06:08:12.118511 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 06:08:12.120756 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 7 06:08:12.146488 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 7 06:08:12.147171 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 06:08:12.148026 systemd[1]: Stopped target timers.target - Timer Units. Jul 7 06:08:12.148808 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 7 06:08:12.148981 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 06:08:12.149887 systemd[1]: Stopped target initrd.target - Initrd Default Target. 
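The files stage that just finished wrote two downloaded archives, a sysext image, an extensions symlink, several small files, and one enabled unit. A hedged sketch of the kind of Ignition v3 config that would request those operations; URLs and paths are copied from the log, while the spec version, the unit body, and the omitted inline files (install.sh, the YAML manifests, update.conf) are assumptions:

    import json

    config = {
        "ignition": {"version": "3.4.0"},  # assumed spec version
        "storage": {
            "files": [
                {"path": "/opt/helm-v3.17.0-linux-amd64.tar.gz",
                 "contents": {"source": "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz"}},
                {"path": "/opt/bin/cilium.tar.gz",
                 "contents": {"source": "https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz"}},
                {"path": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw",
                 "contents": {"source": "https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw"}},
            ],
            "links": [
                {"path": "/etc/extensions/kubernetes.raw",
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"},
            ],
        },
        "systemd": {
            "units": [
                # Unit body elided; only the name and enablement are taken from the log.
                {"name": "prepare-helm.service", "enabled": True,
                 "contents": "[Unit]\n...\n[Install]\nWantedBy=multi-user.target\n"},
            ],
        },
    }

    print(json.dumps(config, indent=2))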
Jul 7 06:08:12.150894 systemd[1]: Stopped target basic.target - Basic System. Jul 7 06:08:12.151677 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 7 06:08:12.152420 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 06:08:12.153275 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 7 06:08:12.154021 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 7 06:08:12.154893 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 7 06:08:12.155621 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 06:08:12.156481 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 7 06:08:12.157637 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 7 06:08:12.158517 systemd[1]: Stopped target swap.target - Swaps. Jul 7 06:08:12.159254 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 7 06:08:12.159487 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 7 06:08:12.160502 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 7 06:08:12.161319 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 06:08:12.161986 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 7 06:08:12.162413 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 06:08:12.162932 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 7 06:08:12.163185 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 7 06:08:12.164477 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 7 06:08:12.164657 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 06:08:12.165414 systemd[1]: ignition-files.service: Deactivated successfully. Jul 7 06:08:12.165603 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 7 06:08:12.168264 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 7 06:08:12.172355 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 7 06:08:12.173529 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 7 06:08:12.173730 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 06:08:12.175436 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 7 06:08:12.175595 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 7 06:08:12.182866 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 7 06:08:12.186580 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 7 06:08:12.201324 ignition[1403]: INFO : Ignition 2.21.0 Jul 7 06:08:12.203055 ignition[1403]: INFO : Stage: umount Jul 7 06:08:12.203055 ignition[1403]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 06:08:12.203055 ignition[1403]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 7 06:08:12.203055 ignition[1403]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 7 06:08:12.205316 ignition[1403]: INFO : PUT result: OK Jul 7 06:08:12.207793 ignition[1403]: INFO : umount: umount passed Jul 7 06:08:12.207793 ignition[1403]: INFO : Ignition finished successfully Jul 7 06:08:12.211951 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Jul 7 06:08:12.212983 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 7 06:08:12.213372 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 7 06:08:12.214000 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 7 06:08:12.214059 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 7 06:08:12.214765 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 7 06:08:12.214823 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 7 06:08:12.215444 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 7 06:08:12.215500 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 7 06:08:12.216096 systemd[1]: Stopped target network.target - Network. Jul 7 06:08:12.216680 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 7 06:08:12.216740 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 06:08:12.217410 systemd[1]: Stopped target paths.target - Path Units. Jul 7 06:08:12.217975 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 7 06:08:12.222325 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 06:08:12.222774 systemd[1]: Stopped target slices.target - Slice Units. Jul 7 06:08:12.223619 systemd[1]: Stopped target sockets.target - Socket Units. Jul 7 06:08:12.224271 systemd[1]: iscsid.socket: Deactivated successfully. Jul 7 06:08:12.224314 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 06:08:12.224808 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 7 06:08:12.224839 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 06:08:12.225390 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 7 06:08:12.225446 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 7 06:08:12.225982 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 7 06:08:12.226020 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 7 06:08:12.226735 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 7 06:08:12.227282 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 7 06:08:12.231457 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 7 06:08:12.231569 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 7 06:08:12.234744 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 7 06:08:12.235046 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 7 06:08:12.235091 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 06:08:12.238353 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 7 06:08:12.238638 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 7 06:08:12.238728 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 7 06:08:12.240442 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 7 06:08:12.240847 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 7 06:08:12.241490 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 7 06:08:12.241527 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 7 06:08:12.243589 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Jul 7 06:08:12.243913 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 7 06:08:12.243961 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 06:08:12.244408 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 7 06:08:12.244446 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 7 06:08:12.244862 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 7 06:08:12.244901 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 7 06:08:12.245778 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 06:08:12.248180 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 7 06:08:12.259847 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 7 06:08:12.260001 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 06:08:12.261012 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 7 06:08:12.261049 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 7 06:08:12.261928 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 7 06:08:12.261961 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 06:08:12.262796 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 7 06:08:12.262849 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 7 06:08:12.263380 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 7 06:08:12.263428 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 7 06:08:12.264344 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 7 06:08:12.264399 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 06:08:12.266388 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 7 06:08:12.268212 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 7 06:08:12.268273 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 06:08:12.269068 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 7 06:08:12.269116 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 06:08:12.273061 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 7 06:08:12.273119 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 06:08:12.273829 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 7 06:08:12.273881 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 06:08:12.274497 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 06:08:12.274539 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 06:08:12.275853 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 7 06:08:12.280270 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 7 06:08:12.285719 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 7 06:08:12.285817 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 7 06:08:12.347436 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Jul 7 06:08:12.347543 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 7 06:08:12.348598 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 7 06:08:12.349193 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 7 06:08:12.349253 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 7 06:08:12.352295 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 7 06:08:12.372282 systemd[1]: Switching root. Jul 7 06:08:12.415690 systemd-journald[207]: Journal stopped Jul 7 06:08:14.421790 systemd-journald[207]: Received SIGTERM from PID 1 (systemd). Jul 7 06:08:14.421872 kernel: SELinux: policy capability network_peer_controls=1 Jul 7 06:08:14.421893 kernel: SELinux: policy capability open_perms=1 Jul 7 06:08:14.421911 kernel: SELinux: policy capability extended_socket_class=1 Jul 7 06:08:14.421942 kernel: SELinux: policy capability always_check_network=0 Jul 7 06:08:14.421964 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 7 06:08:14.421987 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 7 06:08:14.422009 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 7 06:08:14.422027 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 7 06:08:14.422047 kernel: SELinux: policy capability userspace_initial_context=0 Jul 7 06:08:14.422069 kernel: audit: type=1403 audit(1751868492.935:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 7 06:08:14.422097 systemd[1]: Successfully loaded SELinux policy in 87.339ms. Jul 7 06:08:14.422119 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.077ms. Jul 7 06:08:14.422139 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 7 06:08:14.422190 systemd[1]: Detected virtualization amazon. Jul 7 06:08:14.422215 systemd[1]: Detected architecture x86-64. Jul 7 06:08:14.422234 systemd[1]: Detected first boot. Jul 7 06:08:14.422255 systemd[1]: Initializing machine ID from VM UUID. Jul 7 06:08:14.422276 zram_generator::config[1446]: No configuration found. Jul 7 06:08:14.422299 kernel: Guest personality initialized and is inactive Jul 7 06:08:14.422318 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jul 7 06:08:14.422338 kernel: Initialized host personality Jul 7 06:08:14.422357 kernel: NET: Registered PF_VSOCK protocol family Jul 7 06:08:14.422380 systemd[1]: Populated /etc with preset unit settings. Jul 7 06:08:14.422402 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 7 06:08:14.422422 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 7 06:08:14.422442 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 7 06:08:14.422463 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 7 06:08:14.422484 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 7 06:08:14.422505 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 7 06:08:14.422525 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 7 06:08:14.422545 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
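"Initializing machine ID from VM UUID" means the stable, hypervisor-provided DMI UUID is reused as /etc/machine-id instead of generating a random value, so the ID stays constant for this instance. An illustrative sketch of that derivation; the real code path is systemd's machine-id-setup, which has additional fallbacks this sketch omits:

    from pathlib import Path

    def machine_id_from_dmi() -> str:
        # The hypervisor exposes the instance UUID via DMI; machine-id is the
        # same 128 bits written as 32 lowercase hex digits without dashes.
        raw = Path("/sys/class/dmi/id/product_uuid").read_text().strip()
        return raw.replace("-", "").lower()

    if __name__ == "__main__":
        print(machine_id_from_dmi())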
Jul 7 06:08:14.422568 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 7 06:08:14.422589 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 7 06:08:14.422609 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 7 06:08:14.422629 systemd[1]: Created slice user.slice - User and Session Slice. Jul 7 06:08:14.422648 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 06:08:14.422669 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 06:08:14.422688 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 7 06:08:14.422708 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 7 06:08:14.422734 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 7 06:08:14.422755 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 7 06:08:14.422775 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 7 06:08:14.422795 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 06:08:14.422816 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 06:08:14.422836 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 7 06:08:14.422857 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 7 06:08:14.422878 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 7 06:08:14.422901 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 7 06:08:14.422922 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 06:08:14.422942 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 7 06:08:14.422962 systemd[1]: Reached target slices.target - Slice Units. Jul 7 06:08:14.422983 systemd[1]: Reached target swap.target - Swaps. Jul 7 06:08:14.423002 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 7 06:08:14.423023 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 7 06:08:14.423043 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 7 06:08:14.423064 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 7 06:08:14.423087 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 7 06:08:14.423107 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 06:08:14.423126 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 7 06:08:14.425183 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 7 06:08:14.425234 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 7 06:08:14.425254 systemd[1]: Mounting media.mount - External Media Directory... Jul 7 06:08:14.425275 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 06:08:14.425294 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 7 06:08:14.425314 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... 
Jul 7 06:08:14.425338 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 7 06:08:14.425359 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 7 06:08:14.425380 systemd[1]: Reached target machines.target - Containers. Jul 7 06:08:14.425399 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 7 06:08:14.425419 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 06:08:14.425437 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 7 06:08:14.425455 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 7 06:08:14.425473 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 06:08:14.425494 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 06:08:14.425513 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 06:08:14.425532 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 7 06:08:14.425550 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 06:08:14.425569 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 7 06:08:14.425588 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 7 06:08:14.425606 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 7 06:08:14.425624 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 7 06:08:14.425643 systemd[1]: Stopped systemd-fsck-usr.service. Jul 7 06:08:14.425664 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 06:08:14.425683 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 06:08:14.425701 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 06:08:14.425719 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 7 06:08:14.425738 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 7 06:08:14.425756 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 7 06:08:14.425775 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 06:08:14.425797 systemd[1]: verity-setup.service: Deactivated successfully. Jul 7 06:08:14.425817 systemd[1]: Stopped verity-setup.service. Jul 7 06:08:14.425835 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 06:08:14.425857 kernel: fuse: init (API version 7.41) Jul 7 06:08:14.425876 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 7 06:08:14.425895 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 7 06:08:14.425914 systemd[1]: Mounted media.mount - External Media Directory. Jul 7 06:08:14.425934 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Jul 7 06:08:14.425952 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 7 06:08:14.425982 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 7 06:08:14.426000 kernel: ACPI: bus type drm_connector registered Jul 7 06:08:14.426018 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 06:08:14.426041 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 7 06:08:14.426059 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 7 06:08:14.426079 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 06:08:14.426098 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 06:08:14.426115 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 06:08:14.426133 kernel: loop: module loaded Jul 7 06:08:14.426189 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 06:08:14.426207 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 06:08:14.426224 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 06:08:14.426248 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 7 06:08:14.426268 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 7 06:08:14.426285 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 06:08:14.426304 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 06:08:14.426322 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 7 06:08:14.426340 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 7 06:08:14.426361 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 06:08:14.426423 systemd-journald[1529]: Collecting audit messages is disabled. Jul 7 06:08:14.426470 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 7 06:08:14.426491 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 7 06:08:14.426512 systemd-journald[1529]: Journal started Jul 7 06:08:14.426553 systemd-journald[1529]: Runtime Journal (/run/log/journal/ec23c38c4525991e8dcab1dac89168aa) is 4.8M, max 38.4M, 33.6M free. Jul 7 06:08:14.007421 systemd[1]: Queued start job for default target multi-user.target. Jul 7 06:08:14.032552 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jul 7 06:08:14.033059 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 7 06:08:14.435179 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 7 06:08:14.456176 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 7 06:08:14.456273 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 7 06:08:14.461112 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 06:08:14.466182 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 7 06:08:14.472948 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 7 06:08:14.477178 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 06:08:14.485187 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Jul 7 06:08:14.490186 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 06:08:14.499176 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 7 06:08:14.504174 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 06:08:14.509891 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 06:08:14.520173 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 7 06:08:14.536179 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 7 06:08:14.536274 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 06:08:14.542499 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 7 06:08:14.544784 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 06:08:14.546540 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 7 06:08:14.548394 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 7 06:08:14.566193 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 7 06:08:14.580228 kernel: loop0: detected capacity change from 0 to 113872 Jul 7 06:08:14.578636 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 7 06:08:14.584322 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 7 06:08:14.587333 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 7 06:08:14.603785 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 06:08:14.608832 systemd-journald[1529]: Time spent on flushing to /var/log/journal/ec23c38c4525991e8dcab1dac89168aa is 64.445ms for 1023 entries. Jul 7 06:08:14.608832 systemd-journald[1529]: System Journal (/var/log/journal/ec23c38c4525991e8dcab1dac89168aa) is 8M, max 195.6M, 187.6M free. Jul 7 06:08:14.702666 systemd-journald[1529]: Received client request to flush runtime journal. Jul 7 06:08:14.626914 systemd-tmpfiles[1562]: ACLs are not supported, ignoring. Jul 7 06:08:14.626939 systemd-tmpfiles[1562]: ACLs are not supported, ignoring. Jul 7 06:08:14.640524 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 06:08:14.644352 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 7 06:08:14.706908 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 7 06:08:14.706729 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 7 06:08:14.710889 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 7 06:08:14.726341 kernel: loop1: detected capacity change from 0 to 72352 Jul 7 06:08:14.769375 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 7 06:08:14.772861 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 06:08:14.790302 kernel: loop2: detected capacity change from 0 to 146240 Jul 7 06:08:14.809175 systemd-tmpfiles[1602]: ACLs are not supported, ignoring. Jul 7 06:08:14.809586 systemd-tmpfiles[1602]: ACLs are not supported, ignoring. 
Jul 7 06:08:14.817357 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 06:08:14.902308 kernel: loop3: detected capacity change from 0 to 224512 Jul 7 06:08:14.955173 kernel: loop4: detected capacity change from 0 to 113872 Jul 7 06:08:14.974301 kernel: loop5: detected capacity change from 0 to 72352 Jul 7 06:08:14.997247 kernel: loop6: detected capacity change from 0 to 146240 Jul 7 06:08:15.029178 kernel: loop7: detected capacity change from 0 to 224512 Jul 7 06:08:15.034849 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 7 06:08:15.071499 (sd-merge)[1607]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jul 7 06:08:15.078979 (sd-merge)[1607]: Merged extensions into '/usr'. Jul 7 06:08:15.097356 systemd[1]: Reload requested from client PID 1561 ('systemd-sysext') (unit systemd-sysext.service)... Jul 7 06:08:15.097532 systemd[1]: Reloading... Jul 7 06:08:15.214211 zram_generator::config[1632]: No configuration found. Jul 7 06:08:15.436973 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 06:08:15.602981 systemd[1]: Reloading finished in 504 ms. Jul 7 06:08:15.626255 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 7 06:08:15.633314 systemd[1]: Starting ensure-sysext.service... Jul 7 06:08:15.642691 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 06:08:15.686812 systemd-tmpfiles[1685]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 7 06:08:15.686849 systemd-tmpfiles[1685]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 7 06:08:15.687080 systemd-tmpfiles[1685]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 7 06:08:15.687342 systemd-tmpfiles[1685]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 7 06:08:15.688158 systemd-tmpfiles[1685]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 7 06:08:15.688422 systemd-tmpfiles[1685]: ACLs are not supported, ignoring. Jul 7 06:08:15.688476 systemd-tmpfiles[1685]: ACLs are not supported, ignoring. Jul 7 06:08:15.693891 systemd[1]: Reload requested from client PID 1684 ('systemctl') (unit ensure-sysext.service)... Jul 7 06:08:15.694007 systemd[1]: Reloading... Jul 7 06:08:15.696653 systemd-tmpfiles[1685]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 06:08:15.696664 systemd-tmpfiles[1685]: Skipping /boot Jul 7 06:08:15.711636 systemd-tmpfiles[1685]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 06:08:15.711649 systemd-tmpfiles[1685]: Skipping /boot Jul 7 06:08:15.782235 zram_generator::config[1713]: No configuration found. Jul 7 06:08:15.908185 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 06:08:16.014263 systemd[1]: Reloading finished in 319 ms. Jul 7 06:08:16.027809 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
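The loopN "detected capacity change" lines and the sd-merge messages that follow are systemd-sysext at work: each extension image (containerd-flatcar, docker-flatcar, kubernetes, oem-ami) is attached as a loop device, its /usr tree becomes an overlayfs lower layer, and the combined view is mounted over /usr before systemd reloads to pick up the new units. A simplified sketch of the merge; the staging paths under /run/systemd/sysext are assumptions, and systemd-sysext performs this internally rather than by shelling out:

    import subprocess

    def merge_usr(extension_dirs: list[str]) -> None:
        # In overlayfs the first lowerdir entry is the topmost layer, so the
        # base /usr goes last and extension content wins on conflicts.
        lowerdir = ":".join(extension_dirs + ["/usr"])
        subprocess.run(
            ["mount", "-t", "overlay", "overlay",
             "-o", f"lowerdir={lowerdir},ro", "/usr"],
            check=True,
        )

    # Hypothetical staging paths for the four extensions named in the log:
    # merge_usr([
    #     "/run/systemd/sysext/containerd-flatcar/usr",
    #     "/run/systemd/sysext/docker-flatcar/usr",
    #     "/run/systemd/sysext/kubernetes/usr",
    #     "/run/systemd/sysext/oem-ami/usr",
    # ])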
Jul 7 06:08:16.033909 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 06:08:16.042388 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 7 06:08:16.048353 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 7 06:08:16.053787 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 7 06:08:16.063332 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 06:08:16.066398 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 06:08:16.076407 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 7 06:08:16.092175 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 06:08:16.093265 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 06:08:16.096701 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 06:08:16.100632 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 06:08:16.108461 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 06:08:16.109974 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 06:08:16.110541 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 06:08:16.110711 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 06:08:16.120547 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 7 06:08:16.128839 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 06:08:16.129179 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 06:08:16.129425 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 06:08:16.129552 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 06:08:16.129688 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 06:08:16.144369 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 06:08:16.144789 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 06:08:16.149368 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 06:08:16.150862 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jul 7 06:08:16.151808 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 06:08:16.152095 systemd[1]: Reached target time-set.target - System Time Set. Jul 7 06:08:16.154421 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 06:08:16.155584 systemd-udevd[1774]: Using default interface naming scheme 'v255'. Jul 7 06:08:16.181198 systemd[1]: Finished ensure-sysext.service. Jul 7 06:08:16.184481 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 06:08:16.184747 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 06:08:16.188014 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 06:08:16.192687 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 06:08:16.199775 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 06:08:16.201021 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 06:08:16.202965 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 7 06:08:16.205650 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 06:08:16.206394 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 06:08:16.226830 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 06:08:16.228296 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 06:08:16.258345 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 7 06:08:16.279085 augenrules[1804]: No rules Jul 7 06:08:16.280582 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 06:08:16.281651 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 7 06:08:16.282559 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 7 06:08:16.295865 ldconfig[1557]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 7 06:08:16.305796 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 06:08:16.308222 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 7 06:08:16.314418 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 06:08:16.320405 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 7 06:08:16.385240 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 7 06:08:16.386664 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 7 06:08:16.393201 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 7 06:08:16.449843 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 7 06:08:16.457888 (udev-worker)[1818]: Network interface NamePolicy= disabled on kernel command line. 
Jul 7 06:08:16.593185 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jul 7 06:08:16.595568 systemd-resolved[1771]: Positive Trust Anchors: Jul 7 06:08:16.595590 systemd-resolved[1771]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 06:08:16.595644 systemd-resolved[1771]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 06:08:16.607471 systemd-resolved[1771]: Defaulting to hostname 'linux'. Jul 7 06:08:16.611589 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 06:08:16.613369 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 06:08:16.614013 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 06:08:16.615005 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 7 06:08:16.616284 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 7 06:08:16.616840 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jul 7 06:08:16.617689 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 7 06:08:16.619381 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 7 06:08:16.619936 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 7 06:08:16.620481 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 7 06:08:16.620521 systemd[1]: Reached target paths.target - Path Units. Jul 7 06:08:16.621318 systemd[1]: Reached target timers.target - Timer Units. Jul 7 06:08:16.629607 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 7 06:08:16.638645 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 7 06:08:16.639170 kernel: ACPI: button: Power Button [PWRF] Jul 7 06:08:16.647865 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 7 06:08:16.649174 kernel: mousedev: PS/2 mouse device common for all mice Jul 7 06:08:16.652552 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 7 06:08:16.654791 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 7 06:08:16.655182 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5 Jul 7 06:08:16.658425 systemd-networkd[1825]: lo: Link UP Jul 7 06:08:16.658441 systemd-networkd[1825]: lo: Gained carrier Jul 7 06:08:16.663416 systemd-networkd[1825]: Enumeration completed Jul 7 06:08:16.664576 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 7 06:08:16.665652 systemd-networkd[1825]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
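The positive trust anchor printed by systemd-resolved above is the DNS root zone's DS record for the 2017 KSK, which resolved ships as its built-in DNSSEC anchor. A small parsing sketch of its RFC 4034 fields, purely for illustration:

    anchor = (". IN DS 20326 8 2 "
              "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

    name, _cls, rtype, key_tag, algorithm, digest_type, digest = anchor.split()
    assert rtype == "DS"
    print(f"owner={name} key_tag={key_tag} "
          f"algorithm={algorithm} (8 = RSA/SHA-256) "
          f"digest_type={digest_type} (2 = SHA-256) digest={digest[:16]}...")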
Jul 7 06:08:16.665668 systemd-networkd[1825]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 06:08:16.667132 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 7 06:08:16.670683 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 06:08:16.673906 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 7 06:08:16.674866 systemd-networkd[1825]: eth0: Link UP Jul 7 06:08:16.675056 systemd-networkd[1825]: eth0: Gained carrier Jul 7 06:08:16.675096 systemd-networkd[1825]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 06:08:16.676997 systemd[1]: Reached target network.target - Network. Jul 7 06:08:16.677906 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 06:08:16.679291 systemd[1]: Reached target basic.target - Basic System. Jul 7 06:08:16.680554 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 7 06:08:16.680605 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 7 06:08:16.683335 systemd[1]: Starting containerd.service - containerd container runtime... Jul 7 06:08:16.688671 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 7 06:08:16.692359 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 7 06:08:16.692923 systemd-networkd[1825]: eth0: DHCPv4 address 172.31.29.6/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 7 06:08:16.699379 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 7 06:08:16.703678 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 7 06:08:16.709332 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 7 06:08:16.709957 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 7 06:08:16.712025 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jul 7 06:08:16.723403 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 7 06:08:16.726006 jq[1864]: false Jul 7 06:08:16.733460 google_oslogin_nss_cache[1866]: oslogin_cache_refresh[1866]: Refreshing passwd entry cache Jul 7 06:08:16.732329 systemd[1]: Started ntpd.service - Network Time Service. Jul 7 06:08:16.735255 oslogin_cache_refresh[1866]: Refreshing passwd entry cache Jul 7 06:08:16.742684 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 7 06:08:16.743167 kernel: ACPI: button: Sleep Button [SLPF] Jul 7 06:08:16.751720 systemd[1]: Starting setup-oem.service - Setup OEM... Jul 7 06:08:16.755172 google_oslogin_nss_cache[1866]: oslogin_cache_refresh[1866]: Failure getting users, quitting Jul 7 06:08:16.755172 google_oslogin_nss_cache[1866]: oslogin_cache_refresh[1866]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 7 06:08:16.755172 google_oslogin_nss_cache[1866]: oslogin_cache_refresh[1866]: Refreshing group entry cache Jul 7 06:08:16.753796 oslogin_cache_refresh[1866]: Failure getting users, quitting Jul 7 06:08:16.753821 oslogin_cache_refresh[1866]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
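The DHCPv4 lease logged above (172.31.29.6/20 via gateway 172.31.16.1) can be sanity-checked with a few lines of Python; all values are taken from the log:

    import ipaddress

    iface = ipaddress.ip_interface("172.31.29.6/20")
    gateway = ipaddress.ip_address("172.31.16.1")

    print(iface.network)                # 172.31.16.0/20
    print(iface.network.num_addresses)  # 4096 addresses in the subnet
    print(gateway in iface.network)     # True: router sits in the same /20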
Jul 7 06:08:16.753880 oslogin_cache_refresh[1866]: Refreshing group entry cache Jul 7 06:08:16.759876 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 7 06:08:16.763781 google_oslogin_nss_cache[1866]: oslogin_cache_refresh[1866]: Failure getting groups, quitting Jul 7 06:08:16.763776 oslogin_cache_refresh[1866]: Failure getting groups, quitting Jul 7 06:08:16.763907 google_oslogin_nss_cache[1866]: oslogin_cache_refresh[1866]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 7 06:08:16.763794 oslogin_cache_refresh[1866]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 7 06:08:16.776399 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 7 06:08:16.793733 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 7 06:08:16.822394 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 7 06:08:16.826391 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 7 06:08:16.829819 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 7 06:08:16.830709 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 7 06:08:16.839425 systemd[1]: Starting update-engine.service - Update Engine... Jul 7 06:08:16.849333 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 7 06:08:16.853764 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 7 06:08:16.855584 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 7 06:08:16.858891 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 7 06:08:16.859785 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jul 7 06:08:16.860511 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jul 7 06:08:16.891969 extend-filesystems[1865]: Found /dev/nvme0n1p6 Jul 7 06:08:16.917767 extend-filesystems[1865]: Found /dev/nvme0n1p9 Jul 7 06:08:16.921091 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 7 06:08:16.938020 tar[1890]: linux-amd64/LICENSE Jul 7 06:08:16.938020 tar[1890]: linux-amd64/helm Jul 7 06:08:16.928958 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 7 06:08:16.946278 jq[1884]: true Jul 7 06:08:16.947189 extend-filesystems[1865]: Checking size of /dev/nvme0n1p9 Jul 7 06:08:16.965406 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 7 06:08:16.964422 dbus-daemon[1862]: [system] SELinux support is enabled Jul 7 06:08:16.975060 jq[1919]: true Jul 7 06:08:16.977093 dbus-daemon[1862]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1825 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 7 06:08:16.977742 systemd[1]: motdgen.service: Deactivated successfully. Jul 7 06:08:16.979230 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jul 7 06:08:16.986093 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 7 06:08:16.986170 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 7 06:08:16.987458 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 7 06:08:16.987486 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 7 06:08:16.998281 ntpd[1868]: ntpd 4.2.8p17@1.4004-o Sun Jul 6 21:17:10 UTC 2025 (1): Starting Jul 7 06:08:16.998683 ntpd[1868]: 7 Jul 06:08:16 ntpd[1868]: ntpd 4.2.8p17@1.4004-o Sun Jul 6 21:17:10 UTC 2025 (1): Starting Jul 7 06:08:16.998683 ntpd[1868]: 7 Jul 06:08:16 ntpd[1868]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 7 06:08:16.998683 ntpd[1868]: 7 Jul 06:08:16 ntpd[1868]: ---------------------------------------------------- Jul 7 06:08:16.998683 ntpd[1868]: 7 Jul 06:08:16 ntpd[1868]: ntp-4 is maintained by Network Time Foundation, Jul 7 06:08:16.998318 ntpd[1868]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 7 06:08:16.998329 ntpd[1868]: ---------------------------------------------------- Jul 7 06:08:16.998338 ntpd[1868]: ntp-4 is maintained by Network Time Foundation, Jul 7 06:08:17.004335 ntpd[1868]: 7 Jul 06:08:16 ntpd[1868]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 7 06:08:17.004335 ntpd[1868]: 7 Jul 06:08:16 ntpd[1868]: corporation. Support and training for ntp-4 are Jul 7 06:08:17.004335 ntpd[1868]: 7 Jul 06:08:16 ntpd[1868]: available at https://www.nwtime.org/support Jul 7 06:08:17.004335 ntpd[1868]: 7 Jul 06:08:16 ntpd[1868]: ---------------------------------------------------- Jul 7 06:08:16.998346 ntpd[1868]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 7 06:08:17.000640 ntpd[1868]: corporation. Support and training for ntp-4 are Jul 7 06:08:17.000658 ntpd[1868]: available at https://www.nwtime.org/support Jul 7 06:08:17.000667 ntpd[1868]: ---------------------------------------------------- Jul 7 06:08:17.006556 ntpd[1868]: proto: precision = 0.067 usec (-24) Jul 7 06:08:17.012581 ntpd[1868]: 7 Jul 06:08:17 ntpd[1868]: proto: precision = 0.067 usec (-24) Jul 7 06:08:17.012320 dbus-daemon[1862]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 7 06:08:17.025280 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Jul 7 06:08:17.032916 ntpd[1868]: 7 Jul 06:08:17 ntpd[1868]: basedate set to 2025-06-24 Jul 7 06:08:17.032916 ntpd[1868]: 7 Jul 06:08:17 ntpd[1868]: gps base set to 2025-06-29 (week 2373) Jul 7 06:08:17.029560 ntpd[1868]: basedate set to 2025-06-24 Jul 7 06:08:17.032635 (ntainerd)[1934]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 7 06:08:17.029582 ntpd[1868]: gps base set to 2025-06-29 (week 2373) Jul 7 06:08:17.051540 ntpd[1868]: Listen and drop on 0 v6wildcard [::]:123 Jul 7 06:08:17.051896 ntpd[1868]: 7 Jul 06:08:17 ntpd[1868]: Listen and drop on 0 v6wildcard [::]:123 Jul 7 06:08:17.051896 ntpd[1868]: 7 Jul 06:08:17 ntpd[1868]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 7 06:08:17.051601 ntpd[1868]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 7 06:08:17.052047 ntpd[1868]: Listen normally on 2 lo 127.0.0.1:123 Jul 7 06:08:17.054707 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 7 06:08:17.056903 ntpd[1868]: 7 Jul 06:08:17 ntpd[1868]: Listen normally on 2 lo 127.0.0.1:123 Jul 7 06:08:17.056903 ntpd[1868]: 7 Jul 06:08:17 ntpd[1868]: Listen normally on 3 eth0 172.31.29.6:123 Jul 7 06:08:17.056903 ntpd[1868]: 7 Jul 06:08:17 ntpd[1868]: Listen normally on 4 lo [::1]:123 Jul 7 06:08:17.056903 ntpd[1868]: 7 Jul 06:08:17 ntpd[1868]: bind(21) AF_INET6 fe80::413:23ff:fe1a:443d%2#123 flags 0x11 failed: Cannot assign requested address Jul 7 06:08:17.056903 ntpd[1868]: 7 Jul 06:08:17 ntpd[1868]: unable to create socket on eth0 (5) for fe80::413:23ff:fe1a:443d%2#123 Jul 7 06:08:17.056903 ntpd[1868]: 7 Jul 06:08:17 ntpd[1868]: failed to init interface for address fe80::413:23ff:fe1a:443d%2 Jul 7 06:08:17.056903 ntpd[1868]: 7 Jul 06:08:17 ntpd[1868]: Listening on routing socket on fd #21 for interface updates Jul 7 06:08:17.055312 ntpd[1868]: Listen normally on 3 eth0 172.31.29.6:123 Jul 7 06:08:17.055383 ntpd[1868]: Listen normally on 4 lo [::1]:123 Jul 7 06:08:17.055439 ntpd[1868]: bind(21) AF_INET6 fe80::413:23ff:fe1a:443d%2#123 flags 0x11 failed: Cannot assign requested address Jul 7 06:08:17.055461 ntpd[1868]: unable to create socket on eth0 (5) for fe80::413:23ff:fe1a:443d%2#123 Jul 7 06:08:17.055475 ntpd[1868]: failed to init interface for address fe80::413:23ff:fe1a:443d%2 Jul 7 06:08:17.055516 ntpd[1868]: Listening on routing socket on fd #21 for interface updates Jul 7 06:08:17.086785 extend-filesystems[1865]: Resized partition /dev/nvme0n1p9 Jul 7 06:08:17.099449 ntpd[1868]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 7 06:08:17.107959 ntpd[1868]: 7 Jul 06:08:17 ntpd[1868]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 7 06:08:17.107959 ntpd[1868]: 7 Jul 06:08:17 ntpd[1868]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 7 06:08:17.099488 ntpd[1868]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 7 06:08:17.113196 extend-filesystems[1943]: resize2fs 1.47.2 (1-Jan-2025) Jul 7 06:08:17.125219 update_engine[1883]: I20250707 06:08:17.120710 1883 main.cc:92] Flatcar Update Engine starting Jul 7 06:08:17.133675 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jul 7 06:08:17.128061 systemd[1]: Started update-engine.service - Update Engine. Jul 7 06:08:17.133860 update_engine[1883]: I20250707 06:08:17.128233 1883 update_check_scheduler.cc:74] Next update check in 8m38s Jul 7 06:08:17.137506 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
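ntpd above binds its listen sockets and reports the kernel clock as unsynchronized until it has selected a time source; the IPv6 bind failure is only because the link-local address on eth0 has not been assigned yet (it succeeds later in the log). Once the daemon is running, peer and sync state can be inspected with the standard query tool:

  # List configured peers and sync state; a '*' tally marks the selected source.
  ntpq -pn
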
Jul 7 06:08:17.184743 systemd[1]: Finished setup-oem.service - Setup OEM. Jul 7 06:08:17.243083 coreos-metadata[1861]: Jul 07 06:08:17.243 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 7 06:08:17.298961 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jul 7 06:08:17.301025 coreos-metadata[1861]: Jul 07 06:08:17.246 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jul 7 06:08:17.301025 coreos-metadata[1861]: Jul 07 06:08:17.249 INFO Fetch successful Jul 7 06:08:17.301025 coreos-metadata[1861]: Jul 07 06:08:17.249 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jul 7 06:08:17.301025 coreos-metadata[1861]: Jul 07 06:08:17.255 INFO Fetch successful Jul 7 06:08:17.301025 coreos-metadata[1861]: Jul 07 06:08:17.255 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jul 7 06:08:17.301025 coreos-metadata[1861]: Jul 07 06:08:17.257 INFO Fetch successful Jul 7 06:08:17.301025 coreos-metadata[1861]: Jul 07 06:08:17.257 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jul 7 06:08:17.301025 coreos-metadata[1861]: Jul 07 06:08:17.259 INFO Fetch successful Jul 7 06:08:17.301025 coreos-metadata[1861]: Jul 07 06:08:17.259 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jul 7 06:08:17.301025 coreos-metadata[1861]: Jul 07 06:08:17.260 INFO Fetch failed with 404: resource not found Jul 7 06:08:17.301025 coreos-metadata[1861]: Jul 07 06:08:17.260 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jul 7 06:08:17.301025 coreos-metadata[1861]: Jul 07 06:08:17.261 INFO Fetch successful Jul 7 06:08:17.301025 coreos-metadata[1861]: Jul 07 06:08:17.262 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jul 7 06:08:17.301025 coreos-metadata[1861]: Jul 07 06:08:17.262 INFO Fetch successful Jul 7 06:08:17.301025 coreos-metadata[1861]: Jul 07 06:08:17.262 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jul 7 06:08:17.301025 coreos-metadata[1861]: Jul 07 06:08:17.263 INFO Fetch successful Jul 7 06:08:17.301025 coreos-metadata[1861]: Jul 07 06:08:17.263 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jul 7 06:08:17.301025 coreos-metadata[1861]: Jul 07 06:08:17.264 INFO Fetch successful Jul 7 06:08:17.301025 coreos-metadata[1861]: Jul 07 06:08:17.264 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jul 7 06:08:17.301025 coreos-metadata[1861]: Jul 07 06:08:17.264 INFO Fetch successful Jul 7 06:08:17.331211 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jul 7 06:08:17.336075 extend-filesystems[1943]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jul 7 06:08:17.336075 extend-filesystems[1943]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 7 06:08:17.336075 extend-filesystems[1943]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jul 7 06:08:17.335348 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 7 06:08:17.378432 bash[1971]: Updated "/home/core/.ssh/authorized_keys" Jul 7 06:08:17.378566 extend-filesystems[1865]: Resized filesystem in /dev/nvme0n1p9 Jul 7 06:08:17.336667 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
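extend-filesystems above grows the root filesystem in place; ext4 supports online growth, so resize2fs runs while / stays mounted. The equivalent manual steps, shown only for illustration since the service has already done the work at this point:

  # Grow the mounted ext4 filesystem to fill its partition, then confirm the size.
  sudo resize2fs /dev/nvme0n1p9
  df -h /
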
Jul 7 06:08:17.370513 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 7 06:08:17.386471 systemd[1]: Starting sshkeys.service... Jul 7 06:08:17.429341 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 7 06:08:17.432346 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 7 06:08:17.434896 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 7 06:08:17.447821 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 7 06:08:17.509262 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 06:08:17.650746 locksmithd[1954]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 7 06:08:17.706649 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 06:08:17.706962 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 06:08:17.717274 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 06:08:17.767653 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jul 7 06:08:17.781207 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 7 06:08:17.818502 coreos-metadata[2027]: Jul 07 06:08:17.816 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 7 06:08:17.818502 coreos-metadata[2027]: Jul 07 06:08:17.818 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jul 7 06:08:17.820187 coreos-metadata[2027]: Jul 07 06:08:17.819 INFO Fetch successful Jul 7 06:08:17.820187 coreos-metadata[2027]: Jul 07 06:08:17.819 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 7 06:08:17.820955 coreos-metadata[2027]: Jul 07 06:08:17.820 INFO Fetch successful Jul 7 06:08:17.824259 unknown[2027]: wrote ssh authorized keys file for user: core Jul 7 06:08:17.888012 systemd-logind[1873]: Watching system buttons on /dev/input/event2 (Power Button) Jul 7 06:08:17.888047 systemd-logind[1873]: Watching system buttons on /dev/input/event3 (Sleep Button) Jul 7 06:08:17.888070 systemd-logind[1873]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 7 06:08:17.892315 systemd-logind[1873]: New seat seat0. Jul 7 06:08:17.901934 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 7 06:08:17.905586 systemd[1]: Started systemd-logind.service - User Login Management. Jul 7 06:08:17.907966 update-ssh-keys[2065]: Updated "/home/core/.ssh/authorized_keys" Jul 7 06:08:17.910718 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 7 06:08:17.916026 systemd[1]: Finished sshkeys.service. 
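The metadata fetches above use IMDSv2: a session token is obtained with a PUT to the token endpoint and then presented on each metadata GET. The key the sshkeys agent wrote to /home/core/.ssh/authorized_keys can be read back the same way:

  # IMDSv2: request a short-lived token, then read the public-key path used above.
  TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
    -H "X-aws-ec2-metadata-token-ttl-seconds: 300")
  curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
    "http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key"
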
Jul 7 06:08:17.945723 containerd[1934]: time="2025-07-07T06:08:17Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 7 06:08:17.970028 containerd[1934]: time="2025-07-07T06:08:17.969979469Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 7 06:08:18.001109 ntpd[1868]: bind(24) AF_INET6 fe80::413:23ff:fe1a:443d%2#123 flags 0x11 failed: Cannot assign requested address Jul 7 06:08:18.001748 ntpd[1868]: 7 Jul 06:08:18 ntpd[1868]: bind(24) AF_INET6 fe80::413:23ff:fe1a:443d%2#123 flags 0x11 failed: Cannot assign requested address Jul 7 06:08:18.001748 ntpd[1868]: 7 Jul 06:08:18 ntpd[1868]: unable to create socket on eth0 (6) for fe80::413:23ff:fe1a:443d%2#123 Jul 7 06:08:18.001748 ntpd[1868]: 7 Jul 06:08:18 ntpd[1868]: failed to init interface for address fe80::413:23ff:fe1a:443d%2 Jul 7 06:08:18.001477 ntpd[1868]: unable to create socket on eth0 (6) for fe80::413:23ff:fe1a:443d%2#123 Jul 7 06:08:18.001493 ntpd[1868]: failed to init interface for address fe80::413:23ff:fe1a:443d%2 Jul 7 06:08:18.036596 containerd[1934]: time="2025-07-07T06:08:18.033781370Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.613µs" Jul 7 06:08:18.036596 containerd[1934]: time="2025-07-07T06:08:18.033824389Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 7 06:08:18.036596 containerd[1934]: time="2025-07-07T06:08:18.033851806Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 7 06:08:18.036596 containerd[1934]: time="2025-07-07T06:08:18.034885298Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 7 06:08:18.036596 containerd[1934]: time="2025-07-07T06:08:18.034922561Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 7 06:08:18.036596 containerd[1934]: time="2025-07-07T06:08:18.034959416Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 7 06:08:18.036596 containerd[1934]: time="2025-07-07T06:08:18.035047386Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 7 06:08:18.036596 containerd[1934]: time="2025-07-07T06:08:18.035063532Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 7 06:08:18.043968 containerd[1934]: time="2025-07-07T06:08:18.043375783Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 7 06:08:18.043968 containerd[1934]: time="2025-07-07T06:08:18.043420194Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 7 06:08:18.043968 containerd[1934]: time="2025-07-07T06:08:18.043441953Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 7 06:08:18.043968 containerd[1934]: 
time="2025-07-07T06:08:18.043454403Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 7 06:08:18.043968 containerd[1934]: time="2025-07-07T06:08:18.043598882Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 7 06:08:18.043968 containerd[1934]: time="2025-07-07T06:08:18.043861877Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 7 06:08:18.043968 containerd[1934]: time="2025-07-07T06:08:18.043909348Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 7 06:08:18.043968 containerd[1934]: time="2025-07-07T06:08:18.043924144Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 7 06:08:18.048061 containerd[1934]: time="2025-07-07T06:08:18.046216761Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 7 06:08:18.048390 containerd[1934]: time="2025-07-07T06:08:18.048363132Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 7 06:08:18.053698 containerd[1934]: time="2025-07-07T06:08:18.052335164Z" level=info msg="metadata content store policy set" policy=shared Jul 7 06:08:18.060407 containerd[1934]: time="2025-07-07T06:08:18.060354087Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 7 06:08:18.060850 containerd[1934]: time="2025-07-07T06:08:18.060577796Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 7 06:08:18.060850 containerd[1934]: time="2025-07-07T06:08:18.060607064Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 7 06:08:18.060850 containerd[1934]: time="2025-07-07T06:08:18.060674738Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 7 06:08:18.060850 containerd[1934]: time="2025-07-07T06:08:18.060693760Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 7 06:08:18.060850 containerd[1934]: time="2025-07-07T06:08:18.060708540Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 7 06:08:18.060850 containerd[1934]: time="2025-07-07T06:08:18.060728057Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 7 06:08:18.060850 containerd[1934]: time="2025-07-07T06:08:18.060744605Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 7 06:08:18.060850 containerd[1934]: time="2025-07-07T06:08:18.060759441Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 7 06:08:18.060850 containerd[1934]: time="2025-07-07T06:08:18.060772954Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 7 06:08:18.060850 containerd[1934]: time="2025-07-07T06:08:18.060785593Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 7 06:08:18.060850 containerd[1934]: time="2025-07-07T06:08:18.060801878Z" level=info 
msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 7 06:08:18.065322 containerd[1934]: time="2025-07-07T06:08:18.064435868Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 7 06:08:18.065322 containerd[1934]: time="2025-07-07T06:08:18.064483880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 7 06:08:18.065322 containerd[1934]: time="2025-07-07T06:08:18.064511226Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 7 06:08:18.065322 containerd[1934]: time="2025-07-07T06:08:18.064528833Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 7 06:08:18.065322 containerd[1934]: time="2025-07-07T06:08:18.064545175Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 7 06:08:18.065322 containerd[1934]: time="2025-07-07T06:08:18.064561487Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 7 06:08:18.065322 containerd[1934]: time="2025-07-07T06:08:18.064580513Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 7 06:08:18.065322 containerd[1934]: time="2025-07-07T06:08:18.064596382Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 7 06:08:18.065322 containerd[1934]: time="2025-07-07T06:08:18.064633897Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 7 06:08:18.065322 containerd[1934]: time="2025-07-07T06:08:18.064651784Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 7 06:08:18.065322 containerd[1934]: time="2025-07-07T06:08:18.064669440Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 7 06:08:18.065322 containerd[1934]: time="2025-07-07T06:08:18.064754673Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 7 06:08:18.065322 containerd[1934]: time="2025-07-07T06:08:18.064774586Z" level=info msg="Start snapshots syncer" Jul 7 06:08:18.065322 containerd[1934]: time="2025-07-07T06:08:18.064806910Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 7 06:08:18.074452 containerd[1934]: time="2025-07-07T06:08:18.074022158Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 7 06:08:18.084331 containerd[1934]: time="2025-07-07T06:08:18.080258595Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 7 06:08:18.087785 containerd[1934]: time="2025-07-07T06:08:18.087733054Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 7 06:08:18.091463 containerd[1934]: time="2025-07-07T06:08:18.089708159Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 7 06:08:18.094168 containerd[1934]: time="2025-07-07T06:08:18.093725885Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 7 06:08:18.094168 containerd[1934]: time="2025-07-07T06:08:18.093787950Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 7 06:08:18.094168 containerd[1934]: time="2025-07-07T06:08:18.093820783Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 7 06:08:18.094168 containerd[1934]: time="2025-07-07T06:08:18.093842848Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 7 06:08:18.094168 containerd[1934]: time="2025-07-07T06:08:18.093865934Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 7 06:08:18.094168 containerd[1934]: time="2025-07-07T06:08:18.093892959Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 7 06:08:18.094168 containerd[1934]: time="2025-07-07T06:08:18.093943227Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 7 06:08:18.094168 containerd[1934]: 
time="2025-07-07T06:08:18.093967633Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 7 06:08:18.094168 containerd[1934]: time="2025-07-07T06:08:18.093991413Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 7 06:08:18.094168 containerd[1934]: time="2025-07-07T06:08:18.094039487Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 7 06:08:18.094168 containerd[1934]: time="2025-07-07T06:08:18.094068275Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 7 06:08:18.094168 containerd[1934]: time="2025-07-07T06:08:18.094088749Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 7 06:08:18.094168 containerd[1934]: time="2025-07-07T06:08:18.094106996Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 7 06:08:18.094168 containerd[1934]: time="2025-07-07T06:08:18.094135501Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 7 06:08:18.096259 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 06:08:18.097403 containerd[1934]: time="2025-07-07T06:08:18.095224856Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 7 06:08:18.097403 containerd[1934]: time="2025-07-07T06:08:18.097239490Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 7 06:08:18.100248 containerd[1934]: time="2025-07-07T06:08:18.099552570Z" level=info msg="runtime interface created" Jul 7 06:08:18.100248 containerd[1934]: time="2025-07-07T06:08:18.099573476Z" level=info msg="created NRI interface" Jul 7 06:08:18.100248 containerd[1934]: time="2025-07-07T06:08:18.099591860Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 7 06:08:18.100248 containerd[1934]: time="2025-07-07T06:08:18.099845090Z" level=info msg="Connect containerd service" Jul 7 06:08:18.103382 containerd[1934]: time="2025-07-07T06:08:18.101552522Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 7 06:08:18.118262 containerd[1934]: time="2025-07-07T06:08:18.117954920Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 06:08:18.325026 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 7 06:08:18.400574 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jul 7 06:08:18.400863 dbus-daemon[1862]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 7 06:08:18.406582 dbus-daemon[1862]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1938 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 7 06:08:18.422533 systemd[1]: Starting polkit.service - Authorization Manager... 
Jul 7 06:08:18.492669 systemd-networkd[1825]: eth0: Gained IPv6LL Jul 7 06:08:18.517412 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 7 06:08:18.519368 systemd[1]: Reached target network-online.target - Network is Online. Jul 7 06:08:18.522257 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jul 7 06:08:18.527461 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:08:18.532515 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 7 06:08:18.679175 containerd[1934]: time="2025-07-07T06:08:18.678105557Z" level=info msg="Start subscribing containerd event" Jul 7 06:08:18.679175 containerd[1934]: time="2025-07-07T06:08:18.678322680Z" level=info msg="Start recovering state" Jul 7 06:08:18.683477 containerd[1934]: time="2025-07-07T06:08:18.683437893Z" level=info msg="Start event monitor" Jul 7 06:08:18.683597 containerd[1934]: time="2025-07-07T06:08:18.683484104Z" level=info msg="Start cni network conf syncer for default" Jul 7 06:08:18.683597 containerd[1934]: time="2025-07-07T06:08:18.683503006Z" level=info msg="Start streaming server" Jul 7 06:08:18.683597 containerd[1934]: time="2025-07-07T06:08:18.683524814Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 7 06:08:18.683597 containerd[1934]: time="2025-07-07T06:08:18.683534445Z" level=info msg="runtime interface starting up..." Jul 7 06:08:18.683597 containerd[1934]: time="2025-07-07T06:08:18.683542173Z" level=info msg="starting plugins..." Jul 7 06:08:18.683597 containerd[1934]: time="2025-07-07T06:08:18.683563100Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 7 06:08:18.685689 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 7 06:08:18.689166 containerd[1934]: time="2025-07-07T06:08:18.687992261Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 7 06:08:18.689166 containerd[1934]: time="2025-07-07T06:08:18.688134341Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 7 06:08:18.689166 containerd[1934]: time="2025-07-07T06:08:18.688538047Z" level=info msg="containerd successfully booted in 0.743328s" Jul 7 06:08:18.688654 systemd[1]: Started containerd.service - containerd container runtime. Jul 7 06:08:18.742643 polkitd[2170]: Started polkitd version 126 Jul 7 06:08:18.758177 amazon-ssm-agent[2172]: Initializing new seelog logger Jul 7 06:08:18.758177 amazon-ssm-agent[2172]: New Seelog Logger Creation Complete Jul 7 06:08:18.758177 amazon-ssm-agent[2172]: 2025/07/07 06:08:18 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 7 06:08:18.758177 amazon-ssm-agent[2172]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jul 7 06:08:18.758627 amazon-ssm-agent[2172]: 2025/07/07 06:08:18 processing appconfig overrides Jul 7 06:08:18.759056 polkitd[2170]: Loading rules from directory /etc/polkit-1/rules.d Jul 7 06:08:18.759628 polkitd[2170]: Loading rules from directory /run/polkit-1/rules.d Jul 7 06:08:18.759691 polkitd[2170]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jul 7 06:08:18.760134 polkitd[2170]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jul 7 06:08:18.760183 polkitd[2170]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jul 7 06:08:18.760238 polkitd[2170]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 7 06:08:18.760847 amazon-ssm-agent[2172]: 2025/07/07 06:08:18 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 7 06:08:18.760847 amazon-ssm-agent[2172]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 7 06:08:18.760960 amazon-ssm-agent[2172]: 2025/07/07 06:08:18 processing appconfig overrides Jul 7 06:08:18.761758 amazon-ssm-agent[2172]: 2025/07/07 06:08:18 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 7 06:08:18.761758 amazon-ssm-agent[2172]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 7 06:08:18.761849 amazon-ssm-agent[2172]: 2025/07/07 06:08:18 processing appconfig overrides Jul 7 06:08:18.762631 polkitd[2170]: Finished loading, compiling and executing 2 rules Jul 7 06:08:18.762967 systemd[1]: Started polkit.service - Authorization Manager. Jul 7 06:08:18.766164 amazon-ssm-agent[2172]: 2025-07-07 06:08:18.7607 INFO Proxy environment variables: Jul 7 06:08:18.767323 dbus-daemon[1862]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 7 06:08:18.768233 polkitd[2170]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 7 06:08:18.771166 amazon-ssm-agent[2172]: 2025/07/07 06:08:18 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 7 06:08:18.771166 amazon-ssm-agent[2172]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 7 06:08:18.771166 amazon-ssm-agent[2172]: 2025/07/07 06:08:18 processing appconfig overrides Jul 7 06:08:18.802496 systemd-resolved[1771]: System hostname changed to 'ip-172-31-29-6'. Jul 7 06:08:18.802817 systemd-hostnamed[1938]: Hostname set to (transient) Jul 7 06:08:18.865565 amazon-ssm-agent[2172]: 2025-07-07 06:08:18.7608 INFO no_proxy: Jul 7 06:08:18.968836 amazon-ssm-agent[2172]: 2025-07-07 06:08:18.7608 INFO https_proxy: Jul 7 06:08:19.065999 amazon-ssm-agent[2172]: 2025-07-07 06:08:18.7608 INFO http_proxy: Jul 7 06:08:19.114494 tar[1890]: linux-amd64/README.md Jul 7 06:08:19.142442 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 7 06:08:19.165313 amazon-ssm-agent[2172]: 2025-07-07 06:08:18.7609 INFO Checking if agent identity type OnPrem can be assumed Jul 7 06:08:19.264094 amazon-ssm-agent[2172]: 2025-07-07 06:08:18.7616 INFO Checking if agent identity type EC2 can be assumed Jul 7 06:08:19.319025 sshd_keygen[1932]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 7 06:08:19.363327 amazon-ssm-agent[2172]: 2025-07-07 06:08:18.8608 INFO Agent will take identity from EC2 Jul 7 06:08:19.376778 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 7 06:08:19.382445 systemd[1]: Starting issuegen.service - Generate /run/issue... 
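polkitd above compiles the rules it finds under the existing rules.d directories (two rules on this image) and skips the directories that do not exist. Rules are small JavaScript callbacks; a hypothetical example of the format, not one of the rules actually shipped here:

  # Example polkit rule: allow members of wheel to reboot without extra auth.
  cat <<'EOF' | sudo tee /etc/polkit-1/rules.d/49-wheel-reboot.rules
  polkit.addRule(function(action, subject) {
      if (action.id == "org.freedesktop.login1.reboot" &&
          subject.isInGroup("wheel")) {
          return polkit.Result.YES;
      }
  });
  EOF
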
Jul 7 06:08:19.385876 systemd[1]: Started sshd@0-172.31.29.6:22-139.178.89.65:34652.service - OpenSSH per-connection server daemon (139.178.89.65:34652). Jul 7 06:08:19.407797 systemd[1]: issuegen.service: Deactivated successfully. Jul 7 06:08:19.408084 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 7 06:08:19.413738 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 7 06:08:19.463221 amazon-ssm-agent[2172]: 2025-07-07 06:08:18.8653 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Jul 7 06:08:19.463870 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 7 06:08:19.472286 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 7 06:08:19.476600 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 7 06:08:19.478357 systemd[1]: Reached target getty.target - Login Prompts. Jul 7 06:08:19.562890 amazon-ssm-agent[2172]: 2025-07-07 06:08:18.8653 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jul 7 06:08:19.663196 amazon-ssm-agent[2172]: 2025-07-07 06:08:18.8653 INFO [amazon-ssm-agent] Starting Core Agent Jul 7 06:08:19.719948 sshd[2215]: Accepted publickey for core from 139.178.89.65 port 34652 ssh2: RSA SHA256:1kMr/NVCVvXsWyefEhF1pHl3N2KP5iBIRut6ncABJco Jul 7 06:08:19.723319 sshd-session[2215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:08:19.740018 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 7 06:08:19.744505 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 7 06:08:19.757008 systemd-logind[1873]: New session 1 of user core. Jul 7 06:08:19.763946 amazon-ssm-agent[2172]: 2025-07-07 06:08:18.8653 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Jul 7 06:08:19.787186 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 7 06:08:19.795495 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 7 06:08:19.812833 (systemd)[2226]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 7 06:08:19.817858 systemd-logind[1873]: New session c1 of user core. Jul 7 06:08:19.865882 amazon-ssm-agent[2172]: 2025-07-07 06:08:18.8653 INFO [Registrar] Starting registrar module Jul 7 06:08:19.966199 amazon-ssm-agent[2172]: 2025-07-07 06:08:18.8707 INFO [EC2Identity] Checking disk for registration info Jul 7 06:08:20.065417 amazon-ssm-agent[2172]: 2025-07-07 06:08:18.8708 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Jul 7 06:08:20.078859 systemd[2226]: Queued start job for default target default.target. Jul 7 06:08:20.083293 systemd[2226]: Created slice app.slice - User Application Slice. Jul 7 06:08:20.083339 systemd[2226]: Reached target paths.target - Paths. Jul 7 06:08:20.083399 systemd[2226]: Reached target timers.target - Timers. Jul 7 06:08:20.085183 systemd[2226]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 7 06:08:20.096095 amazon-ssm-agent[2172]: 2025/07/07 06:08:20 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 7 06:08:20.096095 amazon-ssm-agent[2172]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 7 06:08:20.096365 amazon-ssm-agent[2172]: 2025/07/07 06:08:20 processing appconfig overrides Jul 7 06:08:20.103766 systemd[2226]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 7 06:08:20.103953 systemd[2226]: Reached target sockets.target - Sockets. 
Jul 7 06:08:20.104321 systemd[2226]: Reached target basic.target - Basic System. Jul 7 06:08:20.104397 systemd[2226]: Reached target default.target - Main User Target. Jul 7 06:08:20.104435 systemd[2226]: Startup finished in 276ms. Jul 7 06:08:20.104545 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 7 06:08:20.114674 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 7 06:08:20.134819 amazon-ssm-agent[2172]: 2025-07-07 06:08:18.8708 INFO [EC2Identity] Generating registration keypair Jul 7 06:08:20.134819 amazon-ssm-agent[2172]: 2025-07-07 06:08:20.0540 INFO [EC2Identity] Checking write access before registering Jul 7 06:08:20.134819 amazon-ssm-agent[2172]: 2025-07-07 06:08:20.0546 INFO [EC2Identity] Registering EC2 instance with Systems Manager Jul 7 06:08:20.134819 amazon-ssm-agent[2172]: 2025-07-07 06:08:20.0942 INFO [EC2Identity] EC2 registration was successful. Jul 7 06:08:20.134819 amazon-ssm-agent[2172]: 2025-07-07 06:08:20.0942 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Jul 7 06:08:20.134819 amazon-ssm-agent[2172]: 2025-07-07 06:08:20.0943 INFO [CredentialRefresher] credentialRefresher has started Jul 7 06:08:20.134819 amazon-ssm-agent[2172]: 2025-07-07 06:08:20.0943 INFO [CredentialRefresher] Starting credentials refresher loop Jul 7 06:08:20.134819 amazon-ssm-agent[2172]: 2025-07-07 06:08:20.1344 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jul 7 06:08:20.134819 amazon-ssm-agent[2172]: 2025-07-07 06:08:20.1346 INFO [CredentialRefresher] Credentials ready Jul 7 06:08:20.166030 amazon-ssm-agent[2172]: 2025-07-07 06:08:20.1347 INFO [CredentialRefresher] Next credential rotation will be in 29.999995546666668 minutes Jul 7 06:08:20.269975 systemd[1]: Started sshd@1-172.31.29.6:22-139.178.89.65:34662.service - OpenSSH per-connection server daemon (139.178.89.65:34662). Jul 7 06:08:20.436515 sshd[2237]: Accepted publickey for core from 139.178.89.65 port 34662 ssh2: RSA SHA256:1kMr/NVCVvXsWyefEhF1pHl3N2KP5iBIRut6ncABJco Jul 7 06:08:20.437792 sshd-session[2237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:08:20.444675 systemd-logind[1873]: New session 2 of user core. Jul 7 06:08:20.450456 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 7 06:08:20.480773 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:08:20.482754 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 7 06:08:20.485275 systemd[1]: Startup finished in 2.752s (kernel) + 8.244s (initrd) + 7.635s (userspace) = 18.632s. Jul 7 06:08:20.504671 (kubelet)[2245]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 06:08:20.568720 sshd[2241]: Connection closed by 139.178.89.65 port 34662 Jul 7 06:08:20.569542 sshd-session[2237]: pam_unix(sshd:session): session closed for user core Jul 7 06:08:20.574348 systemd-logind[1873]: Session 2 logged out. Waiting for processes to exit. Jul 7 06:08:20.575769 systemd[1]: sshd@1-172.31.29.6:22-139.178.89.65:34662.service: Deactivated successfully. Jul 7 06:08:20.578355 systemd[1]: session-2.scope: Deactivated successfully. Jul 7 06:08:20.580781 systemd-logind[1873]: Removed session 2. Jul 7 06:08:20.600206 systemd[1]: Started sshd@2-172.31.29.6:22-139.178.89.65:34670.service - OpenSSH per-connection server daemon (139.178.89.65:34670). 
Jul 7 06:08:20.778410 sshd[2255]: Accepted publickey for core from 139.178.89.65 port 34670 ssh2: RSA SHA256:1kMr/NVCVvXsWyefEhF1pHl3N2KP5iBIRut6ncABJco Jul 7 06:08:20.779717 sshd-session[2255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:08:20.785483 systemd-logind[1873]: New session 3 of user core. Jul 7 06:08:20.788316 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 7 06:08:20.909476 sshd[2261]: Connection closed by 139.178.89.65 port 34670 Jul 7 06:08:20.911048 sshd-session[2255]: pam_unix(sshd:session): session closed for user core Jul 7 06:08:20.915899 systemd-logind[1873]: Session 3 logged out. Waiting for processes to exit. Jul 7 06:08:20.916962 systemd[1]: sshd@2-172.31.29.6:22-139.178.89.65:34670.service: Deactivated successfully. Jul 7 06:08:20.919836 systemd[1]: session-3.scope: Deactivated successfully. Jul 7 06:08:20.923303 systemd-logind[1873]: Removed session 3. Jul 7 06:08:20.944480 systemd[1]: Started sshd@3-172.31.29.6:22-139.178.89.65:34682.service - OpenSSH per-connection server daemon (139.178.89.65:34682). Jul 7 06:08:21.001057 ntpd[1868]: Listen normally on 7 eth0 [fe80::413:23ff:fe1a:443d%2]:123 Jul 7 06:08:21.001621 ntpd[1868]: 7 Jul 06:08:20 ntpd[1868]: Listen normally on 7 eth0 [fe80::413:23ff:fe1a:443d%2]:123 Jul 7 06:08:21.125140 sshd[2267]: Accepted publickey for core from 139.178.89.65 port 34682 ssh2: RSA SHA256:1kMr/NVCVvXsWyefEhF1pHl3N2KP5iBIRut6ncABJco Jul 7 06:08:21.126843 sshd-session[2267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:08:21.136524 systemd-logind[1873]: New session 4 of user core. Jul 7 06:08:21.144383 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 7 06:08:21.157765 amazon-ssm-agent[2172]: 2025-07-07 06:08:21.1576 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jul 7 06:08:21.258254 amazon-ssm-agent[2172]: 2025-07-07 06:08:21.1600 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2272) started Jul 7 06:08:21.273502 sshd[2271]: Connection closed by 139.178.89.65 port 34682 Jul 7 06:08:21.274108 sshd-session[2267]: pam_unix(sshd:session): session closed for user core Jul 7 06:08:21.280688 systemd[1]: sshd@3-172.31.29.6:22-139.178.89.65:34682.service: Deactivated successfully. Jul 7 06:08:21.280928 systemd-logind[1873]: Session 4 logged out. Waiting for processes to exit. Jul 7 06:08:21.283783 systemd[1]: session-4.scope: Deactivated successfully. Jul 7 06:08:21.290783 systemd-logind[1873]: Removed session 4. Jul 7 06:08:21.307875 systemd[1]: Started sshd@4-172.31.29.6:22-139.178.89.65:34692.service - OpenSSH per-connection server daemon (139.178.89.65:34692). 
Jul 7 06:08:21.358528 amazon-ssm-agent[2172]: 2025-07-07 06:08:21.1602 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jul 7 06:08:21.415842 kubelet[2245]: E0707 06:08:21.415732 2245 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 06:08:21.418446 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 06:08:21.418594 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 06:08:21.419566 systemd[1]: kubelet.service: Consumed 1.105s CPU time, 266.8M memory peak. Jul 7 06:08:21.499636 sshd[2284]: Accepted publickey for core from 139.178.89.65 port 34692 ssh2: RSA SHA256:1kMr/NVCVvXsWyefEhF1pHl3N2KP5iBIRut6ncABJco Jul 7 06:08:21.501084 sshd-session[2284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:08:21.507318 systemd-logind[1873]: New session 5 of user core. Jul 7 06:08:21.516377 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 7 06:08:21.654276 sudo[2293]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 7 06:08:21.654562 sudo[2293]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 06:08:21.667784 sudo[2293]: pam_unix(sudo:session): session closed for user root Jul 7 06:08:21.690414 sshd[2292]: Connection closed by 139.178.89.65 port 34692 Jul 7 06:08:21.691329 sshd-session[2284]: pam_unix(sshd:session): session closed for user core Jul 7 06:08:21.696263 systemd[1]: sshd@4-172.31.29.6:22-139.178.89.65:34692.service: Deactivated successfully. Jul 7 06:08:21.698284 systemd[1]: session-5.scope: Deactivated successfully. Jul 7 06:08:21.699203 systemd-logind[1873]: Session 5 logged out. Waiting for processes to exit. Jul 7 06:08:21.701124 systemd-logind[1873]: Removed session 5. Jul 7 06:08:21.722290 systemd[1]: Started sshd@5-172.31.29.6:22-139.178.89.65:34698.service - OpenSSH per-connection server daemon (139.178.89.65:34698). Jul 7 06:08:21.901635 sshd[2299]: Accepted publickey for core from 139.178.89.65 port 34698 ssh2: RSA SHA256:1kMr/NVCVvXsWyefEhF1pHl3N2KP5iBIRut6ncABJco Jul 7 06:08:21.903083 sshd-session[2299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:08:21.908667 systemd-logind[1873]: New session 6 of user core. Jul 7 06:08:21.915355 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 7 06:08:22.016952 sudo[2303]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 7 06:08:22.017254 sudo[2303]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 06:08:22.024347 sudo[2303]: pam_unix(sudo:session): session closed for user root Jul 7 06:08:22.030345 sudo[2302]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 7 06:08:22.030620 sudo[2302]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 06:08:22.048655 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 7 06:08:22.094676 augenrules[2325]: No rules Jul 7 06:08:22.095826 systemd[1]: audit-rules.service: Deactivated successfully. 
Jul 7 06:08:22.096030 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 7 06:08:22.096971 sudo[2302]: pam_unix(sudo:session): session closed for user root Jul 7 06:08:22.120218 sshd[2301]: Connection closed by 139.178.89.65 port 34698 Jul 7 06:08:22.120732 sshd-session[2299]: pam_unix(sshd:session): session closed for user core Jul 7 06:08:22.124890 systemd[1]: sshd@5-172.31.29.6:22-139.178.89.65:34698.service: Deactivated successfully. Jul 7 06:08:22.126654 systemd[1]: session-6.scope: Deactivated successfully. Jul 7 06:08:22.127439 systemd-logind[1873]: Session 6 logged out. Waiting for processes to exit. Jul 7 06:08:22.128755 systemd-logind[1873]: Removed session 6. Jul 7 06:08:22.155132 systemd[1]: Started sshd@6-172.31.29.6:22-139.178.89.65:34702.service - OpenSSH per-connection server daemon (139.178.89.65:34702). Jul 7 06:08:22.329314 sshd[2334]: Accepted publickey for core from 139.178.89.65 port 34702 ssh2: RSA SHA256:1kMr/NVCVvXsWyefEhF1pHl3N2KP5iBIRut6ncABJco Jul 7 06:08:22.331092 sshd-session[2334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:08:22.336913 systemd-logind[1873]: New session 7 of user core. Jul 7 06:08:22.345365 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 7 06:08:22.444830 sudo[2337]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 7 06:08:22.445115 sudo[2337]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 06:08:23.049540 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 7 06:08:23.060620 (dockerd)[2356]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 7 06:08:23.505941 dockerd[2356]: time="2025-07-07T06:08:23.504934105Z" level=info msg="Starting up" Jul 7 06:08:23.507226 dockerd[2356]: time="2025-07-07T06:08:23.507197290Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 7 06:08:23.744777 dockerd[2356]: time="2025-07-07T06:08:23.744736286Z" level=info msg="Loading containers: start." Jul 7 06:08:23.757191 kernel: Initializing XFRM netlink socket Jul 7 06:08:25.620967 systemd-resolved[1771]: Clock change detected. Flushing caches. Jul 7 06:08:25.625392 (udev-worker)[2377]: Network interface NamePolicy= disabled on kernel command line. Jul 7 06:08:25.669332 systemd-networkd[1825]: docker0: Link UP Jul 7 06:08:25.675575 dockerd[2356]: time="2025-07-07T06:08:25.675505332Z" level=info msg="Loading containers: done." 
Jul 7 06:08:25.695208 dockerd[2356]: time="2025-07-07T06:08:25.695150298Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 7 06:08:25.695362 dockerd[2356]: time="2025-07-07T06:08:25.695252117Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 7 06:08:25.695362 dockerd[2356]: time="2025-07-07T06:08:25.695354600Z" level=info msg="Initializing buildkit" Jul 7 06:08:25.723090 dockerd[2356]: time="2025-07-07T06:08:25.722877092Z" level=info msg="Completed buildkit initialization" Jul 7 06:08:25.729810 dockerd[2356]: time="2025-07-07T06:08:25.729761951Z" level=info msg="Daemon has completed initialization" Jul 7 06:08:25.730141 dockerd[2356]: time="2025-07-07T06:08:25.729944959Z" level=info msg="API listen on /run/docker.sock" Jul 7 06:08:25.730036 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 7 06:08:26.715615 containerd[1934]: time="2025-07-07T06:08:26.715493279Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 7 06:08:27.291343 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount544127397.mount: Deactivated successfully. Jul 7 06:08:28.485074 containerd[1934]: time="2025-07-07T06:08:28.485017351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:28.486103 containerd[1934]: time="2025-07-07T06:08:28.485992306Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799045" Jul 7 06:08:28.487202 containerd[1934]: time="2025-07-07T06:08:28.487174359Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:28.491527 containerd[1934]: time="2025-07-07T06:08:28.489979249Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:28.491527 containerd[1934]: time="2025-07-07T06:08:28.491065344Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 1.775512332s" Jul 7 06:08:28.491527 containerd[1934]: time="2025-07-07T06:08:28.491104141Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\"" Jul 7 06:08:28.492046 containerd[1934]: time="2025-07-07T06:08:28.492023488Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 7 06:08:29.927099 containerd[1934]: time="2025-07-07T06:08:29.927040741Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:29.928164 containerd[1934]: time="2025-07-07T06:08:29.928126811Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active 
requests=0, bytes read=24783912" Jul 7 06:08:29.929688 containerd[1934]: time="2025-07-07T06:08:29.929562654Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:29.932677 containerd[1934]: time="2025-07-07T06:08:29.932487546Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:29.933673 containerd[1934]: time="2025-07-07T06:08:29.933440811Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 1.44127804s" Jul 7 06:08:29.933673 containerd[1934]: time="2025-07-07T06:08:29.933477781Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\"" Jul 7 06:08:29.934353 containerd[1934]: time="2025-07-07T06:08:29.934321946Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 7 06:08:31.074661 containerd[1934]: time="2025-07-07T06:08:31.074611397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:31.078897 containerd[1934]: time="2025-07-07T06:08:31.078841738Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176916" Jul 7 06:08:31.084687 containerd[1934]: time="2025-07-07T06:08:31.084191473Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:31.092233 containerd[1934]: time="2025-07-07T06:08:31.092195282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:31.093191 containerd[1934]: time="2025-07-07T06:08:31.093160152Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 1.158808978s" Jul 7 06:08:31.093310 containerd[1934]: time="2025-07-07T06:08:31.093297017Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\"" Jul 7 06:08:31.094054 containerd[1934]: time="2025-07-07T06:08:31.094028879Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 7 06:08:32.097194 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2187924441.mount: Deactivated successfully. 
Jul 7 06:08:32.667554 containerd[1934]: time="2025-07-07T06:08:32.667504041Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:32.668686 containerd[1934]: time="2025-07-07T06:08:32.668582758Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895363" Jul 7 06:08:32.670049 containerd[1934]: time="2025-07-07T06:08:32.669993328Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:32.672247 containerd[1934]: time="2025-07-07T06:08:32.672193581Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:32.672760 containerd[1934]: time="2025-07-07T06:08:32.672737270Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 1.57868222s" Jul 7 06:08:32.672848 containerd[1934]: time="2025-07-07T06:08:32.672836413Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\"" Jul 7 06:08:32.673299 containerd[1934]: time="2025-07-07T06:08:32.673267858Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 7 06:08:33.150809 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 7 06:08:33.152079 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:08:33.180330 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2871171806.mount: Deactivated successfully. Jul 7 06:08:33.415882 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:08:33.425026 (kubelet)[2651]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 06:08:33.509665 kubelet[2651]: E0707 06:08:33.509151 2651 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 06:08:33.515748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 06:08:33.516111 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 06:08:33.516980 systemd[1]: kubelet.service: Consumed 173ms CPU time, 109.6M memory peak. 
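The failure above is the normal pre-bootstrap state: kubelet.service keeps restarting until something, typically `kubeadm init` or `kubeadm join`, writes /var/lib/kubelet/config.yaml. A small watcher sketch under that assumption; `systemctl is-active` is the only external command it relies on, and the 5-second poll interval is arbitrary:

```python
import os
import subprocess
import time

CONFIG = "/var/lib/kubelet/config.yaml"  # path taken from the error above

def kubelet_state() -> str:
    # `systemctl is-active` prints e.g. "active", "activating" or "failed".
    out = subprocess.run(["systemctl", "is-active", "kubelet"],
                         capture_output=True, text=True)
    return out.stdout.strip() or "unknown"

def wait_for_bootstrap(poll_seconds: float = 5.0) -> None:
    while True:
        present = os.path.exists(CONFIG)
        state = kubelet_state()
        print(f"config present={present} kubelet={state}")
        if present and state == "active":
            return
        time.sleep(poll_seconds)

if __name__ == "__main__":
    wait_for_bootstrap()
```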
Jul 7 06:08:34.200880 containerd[1934]: time="2025-07-07T06:08:34.200834015Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:34.202018 containerd[1934]: time="2025-07-07T06:08:34.201836293Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 7 06:08:34.203092 containerd[1934]: time="2025-07-07T06:08:34.203044629Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:34.206858 containerd[1934]: time="2025-07-07T06:08:34.205944973Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:34.206858 containerd[1934]: time="2025-07-07T06:08:34.206730799Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.533433751s" Jul 7 06:08:34.206858 containerd[1934]: time="2025-07-07T06:08:34.206757805Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 7 06:08:34.207583 containerd[1934]: time="2025-07-07T06:08:34.207556697Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 7 06:08:34.631337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3393693273.mount: Deactivated successfully. 
Jul 7 06:08:34.640747 containerd[1934]: time="2025-07-07T06:08:34.640697964Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:08:34.641914 containerd[1934]: time="2025-07-07T06:08:34.641883383Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 7 06:08:34.644661 containerd[1934]: time="2025-07-07T06:08:34.644427293Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:08:34.647597 containerd[1934]: time="2025-07-07T06:08:34.647549851Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:08:34.648657 containerd[1934]: time="2025-07-07T06:08:34.648617070Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 440.945104ms" Jul 7 06:08:34.648657 containerd[1934]: time="2025-07-07T06:08:34.648657923Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 7 06:08:34.649155 containerd[1934]: time="2025-07-07T06:08:34.649138780Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 7 06:08:35.112096 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3131958056.mount: Deactivated successfully. 
Jul 7 06:08:37.506548 containerd[1934]: time="2025-07-07T06:08:37.506496050Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:37.507680 containerd[1934]: time="2025-07-07T06:08:37.507480594Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" Jul 7 06:08:37.508801 containerd[1934]: time="2025-07-07T06:08:37.508747592Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:37.511789 containerd[1934]: time="2025-07-07T06:08:37.511745902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:37.512742 containerd[1934]: time="2025-07-07T06:08:37.512598834Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.863354937s" Jul 7 06:08:37.512742 containerd[1934]: time="2025-07-07T06:08:37.512631293Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jul 7 06:08:40.519895 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:08:40.520151 systemd[1]: kubelet.service: Consumed 173ms CPU time, 109.6M memory peak. Jul 7 06:08:40.522984 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:08:40.555546 systemd[1]: Reload requested from client PID 2783 ('systemctl') (unit session-7.scope)... Jul 7 06:08:40.555744 systemd[1]: Reloading... Jul 7 06:08:40.651669 zram_generator::config[2828]: No configuration found. Jul 7 06:08:40.798879 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 06:08:40.934632 systemd[1]: Reloading finished in 378 ms. Jul 7 06:08:40.991640 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 7 06:08:40.991771 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 7 06:08:40.992125 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:08:40.992299 systemd[1]: kubelet.service: Consumed 127ms CPU time, 98.1M memory peak. Jul 7 06:08:40.994306 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:08:41.240826 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:08:41.251084 (kubelet)[2891]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 06:08:41.301212 kubelet[2891]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 06:08:41.301212 kubelet[2891]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Jul 7 06:08:41.301212 kubelet[2891]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 06:08:41.301642 kubelet[2891]: I0707 06:08:41.301315 2891 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 06:08:41.674626 kubelet[2891]: I0707 06:08:41.674582 2891 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 7 06:08:41.674626 kubelet[2891]: I0707 06:08:41.674614 2891 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 06:08:41.674937 kubelet[2891]: I0707 06:08:41.674900 2891 server.go:954] "Client rotation is on, will bootstrap in background" Jul 7 06:08:41.725790 kubelet[2891]: E0707 06:08:41.725704 2891 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.29.6:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.29.6:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:08:41.729108 kubelet[2891]: I0707 06:08:41.728939 2891 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 06:08:41.749264 kubelet[2891]: I0707 06:08:41.749226 2891 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 7 06:08:41.755371 kubelet[2891]: I0707 06:08:41.755335 2891 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 06:08:41.757786 kubelet[2891]: I0707 06:08:41.757720 2891 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 06:08:41.757961 kubelet[2891]: I0707 06:08:41.757774 2891 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-29-6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 06:08:41.761142 kubelet[2891]: I0707 06:08:41.761101 2891 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 06:08:41.761142 kubelet[2891]: I0707 06:08:41.761139 2891 container_manager_linux.go:304] "Creating device plugin manager" Jul 7 06:08:41.762872 kubelet[2891]: I0707 06:08:41.762834 2891 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:08:41.768860 kubelet[2891]: I0707 06:08:41.768722 2891 kubelet.go:446] "Attempting to sync node with API server" Jul 7 06:08:41.768860 kubelet[2891]: I0707 06:08:41.768769 2891 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 06:08:41.771435 kubelet[2891]: I0707 06:08:41.771394 2891 kubelet.go:352] "Adding apiserver pod source" Jul 7 06:08:41.771435 kubelet[2891]: I0707 06:08:41.771431 2891 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 06:08:41.775255 kubelet[2891]: W0707 06:08:41.774052 2891 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.29.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-6&limit=500&resourceVersion=0": dial tcp 172.31.29.6:6443: connect: connection refused Jul 7 06:08:41.775255 kubelet[2891]: E0707 06:08:41.774113 2891 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.29.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-6&limit=500&resourceVersion=0\": dial tcp 172.31.29.6:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:08:41.775255 kubelet[2891]: W0707 06:08:41.775040 2891 
reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.29.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.29.6:6443: connect: connection refused Jul 7 06:08:41.775255 kubelet[2891]: E0707 06:08:41.775072 2891 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.29.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.29.6:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:08:41.777091 kubelet[2891]: I0707 06:08:41.777069 2891 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 7 06:08:41.782537 kubelet[2891]: I0707 06:08:41.782511 2891 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 06:08:41.782725 kubelet[2891]: W0707 06:08:41.782715 2891 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 7 06:08:41.783636 kubelet[2891]: I0707 06:08:41.783601 2891 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 06:08:41.783636 kubelet[2891]: I0707 06:08:41.783638 2891 server.go:1287] "Started kubelet" Jul 7 06:08:41.788004 kubelet[2891]: I0707 06:08:41.787284 2891 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 06:08:41.789688 kubelet[2891]: I0707 06:08:41.789670 2891 server.go:479] "Adding debug handlers to kubelet server" Jul 7 06:08:41.792009 kubelet[2891]: I0707 06:08:41.791687 2891 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 06:08:41.792009 kubelet[2891]: I0707 06:08:41.791938 2891 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 06:08:41.793338 kubelet[2891]: I0707 06:08:41.793317 2891 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 06:08:41.798005 kubelet[2891]: E0707 06:08:41.793665 2891 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.29.6:6443/api/v1/namespaces/default/events\": dial tcp 172.31.29.6:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-29-6.184fe3238619f2b6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-29-6,UID:ip-172-31-29-6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-29-6,},FirstTimestamp:2025-07-07 06:08:41.783620278 +0000 UTC m=+0.528648738,LastTimestamp:2025-07-07 06:08:41.783620278 +0000 UTC m=+0.528648738,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-29-6,}" Jul 7 06:08:41.801669 kubelet[2891]: I0707 06:08:41.800390 2891 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 06:08:41.803125 kubelet[2891]: E0707 06:08:41.803079 2891 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-29-6\" not found" Jul 7 06:08:41.803125 kubelet[2891]: I0707 06:08:41.803107 2891 volume_manager.go:297] "Starting Kubelet Volume 
Manager" Jul 7 06:08:41.803388 kubelet[2891]: I0707 06:08:41.803341 2891 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 06:08:41.803388 kubelet[2891]: I0707 06:08:41.803387 2891 reconciler.go:26] "Reconciler: start to sync state" Jul 7 06:08:41.803767 kubelet[2891]: W0707 06:08:41.803732 2891 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.29.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.6:6443: connect: connection refused Jul 7 06:08:41.803855 kubelet[2891]: E0707 06:08:41.803780 2891 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.29.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.29.6:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:08:41.806165 kubelet[2891]: E0707 06:08:41.803956 2891 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-6?timeout=10s\": dial tcp 172.31.29.6:6443: connect: connection refused" interval="200ms" Jul 7 06:08:41.814805 kubelet[2891]: I0707 06:08:41.814778 2891 factory.go:221] Registration of the systemd container factory successfully Jul 7 06:08:41.815042 kubelet[2891]: I0707 06:08:41.815024 2891 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 06:08:41.818002 kubelet[2891]: E0707 06:08:41.817978 2891 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 06:08:41.818998 kubelet[2891]: I0707 06:08:41.818982 2891 factory.go:221] Registration of the containerd container factory successfully Jul 7 06:08:41.828493 kubelet[2891]: I0707 06:08:41.828442 2891 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 06:08:41.833239 kubelet[2891]: I0707 06:08:41.833208 2891 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 7 06:08:41.833378 kubelet[2891]: I0707 06:08:41.833371 2891 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 7 06:08:41.833433 kubelet[2891]: I0707 06:08:41.833427 2891 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 7 06:08:41.833477 kubelet[2891]: I0707 06:08:41.833472 2891 kubelet.go:2382] "Starting kubelet main sync loop" Jul 7 06:08:41.833685 kubelet[2891]: E0707 06:08:41.833640 2891 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 06:08:41.846078 kubelet[2891]: W0707 06:08:41.845995 2891 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.29.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.6:6443: connect: connection refused Jul 7 06:08:41.846078 kubelet[2891]: E0707 06:08:41.846071 2891 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.29.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.29.6:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:08:41.852229 kubelet[2891]: I0707 06:08:41.852208 2891 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 06:08:41.852548 kubelet[2891]: I0707 06:08:41.852360 2891 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 06:08:41.852548 kubelet[2891]: I0707 06:08:41.852378 2891 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:08:41.855014 kubelet[2891]: I0707 06:08:41.854991 2891 policy_none.go:49] "None policy: Start" Jul 7 06:08:41.855159 kubelet[2891]: I0707 06:08:41.855148 2891 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 7 06:08:41.855222 kubelet[2891]: I0707 06:08:41.855214 2891 state_mem.go:35] "Initializing new in-memory state store" Jul 7 06:08:41.864504 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 7 06:08:41.884112 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 7 06:08:41.889023 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 7 06:08:41.899784 kubelet[2891]: I0707 06:08:41.899039 2891 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 06:08:41.899784 kubelet[2891]: I0707 06:08:41.899235 2891 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 06:08:41.899784 kubelet[2891]: I0707 06:08:41.899246 2891 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 06:08:41.899784 kubelet[2891]: I0707 06:08:41.899676 2891 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 06:08:41.901061 kubelet[2891]: E0707 06:08:41.901043 2891 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 7 06:08:41.901203 kubelet[2891]: E0707 06:08:41.901194 2891 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-29-6\" not found" Jul 7 06:08:41.944517 systemd[1]: Created slice kubepods-burstable-pod1eb967225fbb40dea13d77b0453e3365.slice - libcontainer container kubepods-burstable-pod1eb967225fbb40dea13d77b0453e3365.slice. 
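Every "connection refused" from the reflectors and the certificate-signing request in this stretch is the same symptom: nothing is listening on 172.31.29.6:6443 yet, because this kubelet has to start the kube-apiserver static pod itself before it can register. A quick probe of that endpoint (address and port taken from the log, the 2-second sleep is arbitrary) shows when the port opens:

```python
import socket
import time

APISERVER = ("172.31.29.6", 6443)  # endpoint the kubelet is dialing in the log

def port_open(addr, timeout: float = 1.0) -> bool:
    try:
        with socket.create_connection(addr, timeout=timeout):
            return True
    except OSError:  # connection refused, timeout, no route, ...
        return False

if __name__ == "__main__":
    while not port_open(APISERVER):
        print("kube-apiserver not reachable yet (connection refused is expected here)")
        time.sleep(2)
    print("kube-apiserver port is open; the reflector errors should stop shortly")
```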
Jul 7 06:08:41.963358 kubelet[2891]: E0707 06:08:41.963334 2891 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-6\" not found" node="ip-172-31-29-6" Jul 7 06:08:41.966811 systemd[1]: Created slice kubepods-burstable-pod1c32a938a125606d0922d1da754b0757.slice - libcontainer container kubepods-burstable-pod1c32a938a125606d0922d1da754b0757.slice. Jul 7 06:08:41.968930 kubelet[2891]: E0707 06:08:41.968731 2891 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-6\" not found" node="ip-172-31-29-6" Jul 7 06:08:41.971592 systemd[1]: Created slice kubepods-burstable-pod82a31fdaea126e82d5863908aacc8820.slice - libcontainer container kubepods-burstable-pod82a31fdaea126e82d5863908aacc8820.slice. Jul 7 06:08:41.973254 kubelet[2891]: E0707 06:08:41.973228 2891 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-6\" not found" node="ip-172-31-29-6" Jul 7 06:08:42.002086 kubelet[2891]: I0707 06:08:42.002051 2891 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-6" Jul 7 06:08:42.002483 kubelet[2891]: E0707 06:08:42.002437 2891 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.29.6:6443/api/v1/nodes\": dial tcp 172.31.29.6:6443: connect: connection refused" node="ip-172-31-29-6" Jul 7 06:08:42.005013 kubelet[2891]: I0707 06:08:42.004635 2891 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1eb967225fbb40dea13d77b0453e3365-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-6\" (UID: \"1eb967225fbb40dea13d77b0453e3365\") " pod="kube-system/kube-apiserver-ip-172-31-29-6" Jul 7 06:08:42.005013 kubelet[2891]: I0707 06:08:42.004720 2891 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1eb967225fbb40dea13d77b0453e3365-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-6\" (UID: \"1eb967225fbb40dea13d77b0453e3365\") " pod="kube-system/kube-apiserver-ip-172-31-29-6" Jul 7 06:08:42.005013 kubelet[2891]: I0707 06:08:42.004748 2891 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1c32a938a125606d0922d1da754b0757-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-6\" (UID: \"1c32a938a125606d0922d1da754b0757\") " pod="kube-system/kube-controller-manager-ip-172-31-29-6" Jul 7 06:08:42.005013 kubelet[2891]: I0707 06:08:42.004781 2891 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1c32a938a125606d0922d1da754b0757-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-6\" (UID: \"1c32a938a125606d0922d1da754b0757\") " pod="kube-system/kube-controller-manager-ip-172-31-29-6" Jul 7 06:08:42.005013 kubelet[2891]: I0707 06:08:42.004805 2891 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/82a31fdaea126e82d5863908aacc8820-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-6\" (UID: \"82a31fdaea126e82d5863908aacc8820\") " pod="kube-system/kube-scheduler-ip-172-31-29-6" Jul 7 06:08:42.005222 
kubelet[2891]: I0707 06:08:42.004826 2891 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1eb967225fbb40dea13d77b0453e3365-ca-certs\") pod \"kube-apiserver-ip-172-31-29-6\" (UID: \"1eb967225fbb40dea13d77b0453e3365\") " pod="kube-system/kube-apiserver-ip-172-31-29-6" Jul 7 06:08:42.005222 kubelet[2891]: I0707 06:08:42.004875 2891 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1c32a938a125606d0922d1da754b0757-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-6\" (UID: \"1c32a938a125606d0922d1da754b0757\") " pod="kube-system/kube-controller-manager-ip-172-31-29-6" Jul 7 06:08:42.005222 kubelet[2891]: I0707 06:08:42.004900 2891 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1c32a938a125606d0922d1da754b0757-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-6\" (UID: \"1c32a938a125606d0922d1da754b0757\") " pod="kube-system/kube-controller-manager-ip-172-31-29-6" Jul 7 06:08:42.005222 kubelet[2891]: I0707 06:08:42.004923 2891 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1c32a938a125606d0922d1da754b0757-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-6\" (UID: \"1c32a938a125606d0922d1da754b0757\") " pod="kube-system/kube-controller-manager-ip-172-31-29-6" Jul 7 06:08:42.005222 kubelet[2891]: E0707 06:08:42.004976 2891 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-6?timeout=10s\": dial tcp 172.31.29.6:6443: connect: connection refused" interval="400ms" Jul 7 06:08:42.204418 kubelet[2891]: I0707 06:08:42.204332 2891 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-6" Jul 7 06:08:42.204942 kubelet[2891]: E0707 06:08:42.204628 2891 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.29.6:6443/api/v1/nodes\": dial tcp 172.31.29.6:6443: connect: connection refused" node="ip-172-31-29-6" Jul 7 06:08:42.265311 containerd[1934]: time="2025-07-07T06:08:42.265265309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-6,Uid:1eb967225fbb40dea13d77b0453e3365,Namespace:kube-system,Attempt:0,}" Jul 7 06:08:42.275551 containerd[1934]: time="2025-07-07T06:08:42.275495197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-6,Uid:1c32a938a125606d0922d1da754b0757,Namespace:kube-system,Attempt:0,}" Jul 7 06:08:42.275867 containerd[1934]: time="2025-07-07T06:08:42.275805111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-6,Uid:82a31fdaea126e82d5863908aacc8820,Namespace:kube-system,Attempt:0,}" Jul 7 06:08:42.405966 kubelet[2891]: E0707 06:08:42.405913 2891 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-6?timeout=10s\": dial tcp 172.31.29.6:6443: connect: connection refused" interval="800ms" Jul 7 06:08:42.443635 containerd[1934]: time="2025-07-07T06:08:42.443395444Z" level=info msg="connecting to shim 
43f6306f1cf075994e491f66743fb8a3b76fec33a39d32e17a0ef16e2781aa12" address="unix:///run/containerd/s/72f6e3d1477e456b73f7a3d0e144e6c5f0d853999a4cbcf87b84014876874884" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:08:42.444561 containerd[1934]: time="2025-07-07T06:08:42.444519240Z" level=info msg="connecting to shim c6df9579e12d59c11e057d1f2e1af9580b027fdca8cb3ecbf844c42c3024014e" address="unix:///run/containerd/s/523e0e1a703c4500ffa3772b32707649dbd429cdcb53aacd8c663cbaab6e9e9f" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:08:42.453079 containerd[1934]: time="2025-07-07T06:08:42.453023063Z" level=info msg="connecting to shim 9b095737d0b1462dccd94469be1ac9fe580bfb16d4996f18c60c168ffc74881d" address="unix:///run/containerd/s/0a112c52364ea4bdc53812fac36a8b13c8891a4b5c818af94c99bef5de4d844d" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:08:42.582880 systemd[1]: Started cri-containerd-43f6306f1cf075994e491f66743fb8a3b76fec33a39d32e17a0ef16e2781aa12.scope - libcontainer container 43f6306f1cf075994e491f66743fb8a3b76fec33a39d32e17a0ef16e2781aa12. Jul 7 06:08:42.584154 systemd[1]: Started cri-containerd-9b095737d0b1462dccd94469be1ac9fe580bfb16d4996f18c60c168ffc74881d.scope - libcontainer container 9b095737d0b1462dccd94469be1ac9fe580bfb16d4996f18c60c168ffc74881d. Jul 7 06:08:42.585267 systemd[1]: Started cri-containerd-c6df9579e12d59c11e057d1f2e1af9580b027fdca8cb3ecbf844c42c3024014e.scope - libcontainer container c6df9579e12d59c11e057d1f2e1af9580b027fdca8cb3ecbf844c42c3024014e. Jul 7 06:08:42.620900 kubelet[2891]: I0707 06:08:42.620862 2891 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-6" Jul 7 06:08:42.622435 kubelet[2891]: E0707 06:08:42.622393 2891 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.29.6:6443/api/v1/nodes\": dial tcp 172.31.29.6:6443: connect: connection refused" node="ip-172-31-29-6" Jul 7 06:08:42.667860 containerd[1934]: time="2025-07-07T06:08:42.667799739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-6,Uid:1c32a938a125606d0922d1da754b0757,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b095737d0b1462dccd94469be1ac9fe580bfb16d4996f18c60c168ffc74881d\"" Jul 7 06:08:42.673379 containerd[1934]: time="2025-07-07T06:08:42.673339992Z" level=info msg="CreateContainer within sandbox \"9b095737d0b1462dccd94469be1ac9fe580bfb16d4996f18c60c168ffc74881d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 7 06:08:42.677099 containerd[1934]: time="2025-07-07T06:08:42.677045245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-6,Uid:1eb967225fbb40dea13d77b0453e3365,Namespace:kube-system,Attempt:0,} returns sandbox id \"c6df9579e12d59c11e057d1f2e1af9580b027fdca8cb3ecbf844c42c3024014e\"" Jul 7 06:08:42.683004 containerd[1934]: time="2025-07-07T06:08:42.682936890Z" level=info msg="CreateContainer within sandbox \"c6df9579e12d59c11e057d1f2e1af9580b027fdca8cb3ecbf844c42c3024014e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 7 06:08:42.684150 kubelet[2891]: W0707 06:08:42.684041 2891 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.29.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.6:6443: connect: connection refused Jul 7 06:08:42.684150 kubelet[2891]: E0707 06:08:42.684100 2891 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.29.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.29.6:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:08:42.689667 containerd[1934]: time="2025-07-07T06:08:42.689080833Z" level=info msg="Container ca36ed21793c8a6866c4fb8b16e6d62fdf3694d99e4ad28f5cddc7400737f846: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:08:42.701905 containerd[1934]: time="2025-07-07T06:08:42.701867553Z" level=info msg="Container ec1004ca61e06df505300b5c41a47cf86ec065413a118a069dc09da70596136c: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:08:42.702684 containerd[1934]: time="2025-07-07T06:08:42.702626801Z" level=info msg="CreateContainer within sandbox \"9b095737d0b1462dccd94469be1ac9fe580bfb16d4996f18c60c168ffc74881d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ca36ed21793c8a6866c4fb8b16e6d62fdf3694d99e4ad28f5cddc7400737f846\"" Jul 7 06:08:42.703261 containerd[1934]: time="2025-07-07T06:08:42.703132620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-6,Uid:82a31fdaea126e82d5863908aacc8820,Namespace:kube-system,Attempt:0,} returns sandbox id \"43f6306f1cf075994e491f66743fb8a3b76fec33a39d32e17a0ef16e2781aa12\"" Jul 7 06:08:42.704534 containerd[1934]: time="2025-07-07T06:08:42.704504130Z" level=info msg="StartContainer for \"ca36ed21793c8a6866c4fb8b16e6d62fdf3694d99e4ad28f5cddc7400737f846\"" Jul 7 06:08:42.707510 containerd[1934]: time="2025-07-07T06:08:42.707463308Z" level=info msg="connecting to shim ca36ed21793c8a6866c4fb8b16e6d62fdf3694d99e4ad28f5cddc7400737f846" address="unix:///run/containerd/s/0a112c52364ea4bdc53812fac36a8b13c8891a4b5c818af94c99bef5de4d844d" protocol=ttrpc version=3 Jul 7 06:08:42.708133 containerd[1934]: time="2025-07-07T06:08:42.708102714Z" level=info msg="CreateContainer within sandbox \"43f6306f1cf075994e491f66743fb8a3b76fec33a39d32e17a0ef16e2781aa12\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 7 06:08:42.714684 containerd[1934]: time="2025-07-07T06:08:42.714384477Z" level=info msg="CreateContainer within sandbox \"c6df9579e12d59c11e057d1f2e1af9580b027fdca8cb3ecbf844c42c3024014e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ec1004ca61e06df505300b5c41a47cf86ec065413a118a069dc09da70596136c\"" Jul 7 06:08:42.720276 containerd[1934]: time="2025-07-07T06:08:42.720240777Z" level=info msg="StartContainer for \"ec1004ca61e06df505300b5c41a47cf86ec065413a118a069dc09da70596136c\"" Jul 7 06:08:42.726353 kubelet[2891]: W0707 06:08:42.726262 2891 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.29.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-6&limit=500&resourceVersion=0": dial tcp 172.31.29.6:6443: connect: connection refused Jul 7 06:08:42.726353 kubelet[2891]: E0707 06:08:42.726319 2891 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.29.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-6&limit=500&resourceVersion=0\": dial tcp 172.31.29.6:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:08:42.726824 containerd[1934]: time="2025-07-07T06:08:42.726781146Z" level=info msg="connecting to shim 
ec1004ca61e06df505300b5c41a47cf86ec065413a118a069dc09da70596136c" address="unix:///run/containerd/s/523e0e1a703c4500ffa3772b32707649dbd429cdcb53aacd8c663cbaab6e9e9f" protocol=ttrpc version=3 Jul 7 06:08:42.731085 containerd[1934]: time="2025-07-07T06:08:42.731048392Z" level=info msg="Container 8fd83dc479375e4207a6aacb3b4d087e98a8d4f5613e90a38e33c5db80b84977: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:08:42.745573 containerd[1934]: time="2025-07-07T06:08:42.745537504Z" level=info msg="CreateContainer within sandbox \"43f6306f1cf075994e491f66743fb8a3b76fec33a39d32e17a0ef16e2781aa12\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8fd83dc479375e4207a6aacb3b4d087e98a8d4f5613e90a38e33c5db80b84977\"" Jul 7 06:08:42.746241 containerd[1934]: time="2025-07-07T06:08:42.746172179Z" level=info msg="StartContainer for \"8fd83dc479375e4207a6aacb3b4d087e98a8d4f5613e90a38e33c5db80b84977\"" Jul 7 06:08:42.746862 systemd[1]: Started cri-containerd-ca36ed21793c8a6866c4fb8b16e6d62fdf3694d99e4ad28f5cddc7400737f846.scope - libcontainer container ca36ed21793c8a6866c4fb8b16e6d62fdf3694d99e4ad28f5cddc7400737f846. Jul 7 06:08:42.755110 containerd[1934]: time="2025-07-07T06:08:42.754073369Z" level=info msg="connecting to shim 8fd83dc479375e4207a6aacb3b4d087e98a8d4f5613e90a38e33c5db80b84977" address="unix:///run/containerd/s/72f6e3d1477e456b73f7a3d0e144e6c5f0d853999a4cbcf87b84014876874884" protocol=ttrpc version=3 Jul 7 06:08:42.759039 systemd[1]: Started cri-containerd-ec1004ca61e06df505300b5c41a47cf86ec065413a118a069dc09da70596136c.scope - libcontainer container ec1004ca61e06df505300b5c41a47cf86ec065413a118a069dc09da70596136c. Jul 7 06:08:42.791519 systemd[1]: Started cri-containerd-8fd83dc479375e4207a6aacb3b4d087e98a8d4f5613e90a38e33c5db80b84977.scope - libcontainer container 8fd83dc479375e4207a6aacb3b4d087e98a8d4f5613e90a38e33c5db80b84977. 
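Interleaved with the container startup above, the node-lease controller keeps retrying against the still-unreachable API server, and its interval doubles each attempt: 200ms, 400ms, 800ms and, a little further down, 1.6s. A sketch of that schedule, assuming a plain doubling backoff; the 7-second cap is an assumption rather than something visible in this log:

```python
def lease_retry_intervals(initial: float = 0.2, factor: float = 2.0, cap: float = 7.0):
    """Yield the doubling retry schedule seen in the log: 0.2s, 0.4s, 0.8s, 1.6s, ..."""
    interval = initial
    while True:
        yield min(interval, cap)
        interval *= factor

if __name__ == "__main__":
    schedule = lease_retry_intervals()
    for _ in range(6):
        print(f"retry in {next(schedule):.1f}s")
```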
Jul 7 06:08:42.817034 containerd[1934]: time="2025-07-07T06:08:42.816994666Z" level=info msg="StartContainer for \"ca36ed21793c8a6866c4fb8b16e6d62fdf3694d99e4ad28f5cddc7400737f846\" returns successfully" Jul 7 06:08:42.827078 kubelet[2891]: W0707 06:08:42.826970 2891 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.29.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.29.6:6443: connect: connection refused Jul 7 06:08:42.827078 kubelet[2891]: E0707 06:08:42.827033 2891 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.29.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.29.6:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:08:42.861675 containerd[1934]: time="2025-07-07T06:08:42.861392517Z" level=info msg="StartContainer for \"ec1004ca61e06df505300b5c41a47cf86ec065413a118a069dc09da70596136c\" returns successfully" Jul 7 06:08:42.875757 kubelet[2891]: E0707 06:08:42.875725 2891 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-6\" not found" node="ip-172-31-29-6" Jul 7 06:08:42.923042 containerd[1934]: time="2025-07-07T06:08:42.922956551Z" level=info msg="StartContainer for \"8fd83dc479375e4207a6aacb3b4d087e98a8d4f5613e90a38e33c5db80b84977\" returns successfully" Jul 7 06:08:43.207675 kubelet[2891]: E0707 06:08:43.206903 2891 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-6?timeout=10s\": dial tcp 172.31.29.6:6443: connect: connection refused" interval="1.6s" Jul 7 06:08:43.232981 kubelet[2891]: W0707 06:08:43.232865 2891 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.29.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.6:6443: connect: connection refused Jul 7 06:08:43.232981 kubelet[2891]: E0707 06:08:43.232947 2891 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.29.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.29.6:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:08:43.425362 kubelet[2891]: I0707 06:08:43.425013 2891 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-6" Jul 7 06:08:43.426926 kubelet[2891]: E0707 06:08:43.426896 2891 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.29.6:6443/api/v1/nodes\": dial tcp 172.31.29.6:6443: connect: connection refused" node="ip-172-31-29-6" Jul 7 06:08:43.879711 kubelet[2891]: E0707 06:08:43.879428 2891 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-6\" not found" node="ip-172-31-29-6" Jul 7 06:08:43.881028 kubelet[2891]: E0707 06:08:43.880887 2891 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-6\" not found" node="ip-172-31-29-6" Jul 7 06:08:44.884674 kubelet[2891]: E0707 06:08:44.884118 2891 kubelet.go:3190] "No need to create a mirror 
pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-6\" not found" node="ip-172-31-29-6" Jul 7 06:08:44.884674 kubelet[2891]: E0707 06:08:44.884528 2891 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-6\" not found" node="ip-172-31-29-6" Jul 7 06:08:45.031449 kubelet[2891]: I0707 06:08:45.030881 2891 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-6" Jul 7 06:08:45.807976 kubelet[2891]: E0707 06:08:45.807679 2891 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-29-6\" not found" node="ip-172-31-29-6" Jul 7 06:08:45.837290 kubelet[2891]: E0707 06:08:45.837177 2891 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-29-6.184fe3238619f2b6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-29-6,UID:ip-172-31-29-6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-29-6,},FirstTimestamp:2025-07-07 06:08:41.783620278 +0000 UTC m=+0.528648738,LastTimestamp:2025-07-07 06:08:41.783620278 +0000 UTC m=+0.528648738,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-29-6,}" Jul 7 06:08:45.886867 kubelet[2891]: E0707 06:08:45.885418 2891 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-6\" not found" node="ip-172-31-29-6" Jul 7 06:08:45.894015 kubelet[2891]: I0707 06:08:45.893855 2891 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-29-6" Jul 7 06:08:45.894015 kubelet[2891]: E0707 06:08:45.893890 2891 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-29-6\": node \"ip-172-31-29-6\" not found" Jul 7 06:08:45.903909 kubelet[2891]: I0707 06:08:45.903857 2891 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-29-6" Jul 7 06:08:45.914254 kubelet[2891]: E0707 06:08:45.913316 2891 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-29-6\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-29-6" Jul 7 06:08:45.914254 kubelet[2891]: I0707 06:08:45.913341 2891 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-29-6" Jul 7 06:08:45.916488 kubelet[2891]: E0707 06:08:45.916462 2891 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-29-6\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-29-6" Jul 7 06:08:45.917322 kubelet[2891]: I0707 06:08:45.917303 2891 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-29-6" Jul 7 06:08:45.918859 kubelet[2891]: E0707 06:08:45.918842 2891 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-29-6\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-29-6" Jul 7 06:08:46.778866 kubelet[2891]: I0707 06:08:46.778828 2891 apiserver.go:52] "Watching apiserver" Jul 7 
06:08:46.803618 kubelet[2891]: I0707 06:08:46.803575 2891 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 7 06:08:46.888171 kubelet[2891]: I0707 06:08:46.887987 2891 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-29-6" Jul 7 06:08:46.915164 kubelet[2891]: I0707 06:08:46.915120 2891 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-29-6" Jul 7 06:08:47.710245 systemd[1]: Reload requested from client PID 3162 ('systemctl') (unit session-7.scope)... Jul 7 06:08:47.710263 systemd[1]: Reloading... Jul 7 06:08:47.878709 zram_generator::config[3207]: No configuration found. Jul 7 06:08:48.035902 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 06:08:48.194328 systemd[1]: Reloading finished in 483 ms. Jul 7 06:08:48.224452 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:08:48.247401 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 06:08:48.247614 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:08:48.247683 systemd[1]: kubelet.service: Consumed 950ms CPU time, 128.1M memory peak. Jul 7 06:08:48.250435 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:08:48.578834 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:08:48.591057 (kubelet)[3267]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 06:08:48.660591 kubelet[3267]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 06:08:48.661389 kubelet[3267]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 7 06:08:48.661389 kubelet[3267]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 06:08:48.661389 kubelet[3267]: I0707 06:08:48.661248 3267 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 06:08:48.669820 kubelet[3267]: I0707 06:08:48.669788 3267 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 7 06:08:48.669971 kubelet[3267]: I0707 06:08:48.669962 3267 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 06:08:48.670277 kubelet[3267]: I0707 06:08:48.670266 3267 server.go:954] "Client rotation is on, will bootstrap in background" Jul 7 06:08:48.671563 kubelet[3267]: I0707 06:08:48.671544 3267 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
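Unlike the first kubelet start, this one finds /var/lib/kubelet/pki/kubelet-client-current.pem, the bundle that client certificate rotation keeps pointed at the newest credential. One way to inspect it is to hand the file to openssl, which reads the first certificate block in the PEM; the path comes from the log line above, and the subject shown in the comment is only the typical shape of a kubelet client certificate:

```python
import subprocess

CERT = "/var/lib/kubelet/pki/kubelet-client-current.pem"  # path from the log above

def describe_client_cert(path: str = CERT) -> str:
    # -noout suppresses the PEM dump; -subject and -enddate print identity and expiry.
    out = subprocess.run(
        ["openssl", "x509", "-in", path, "-noout", "-subject", "-enddate"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

if __name__ == "__main__":
    # Typically prints something like:
    #   subject=O=system:nodes, CN=system:node:<hostname>
    #   notAfter=<expiry date>
    print(describe_client_cert())
```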
Jul 7 06:08:48.679795 kubelet[3267]: I0707 06:08:48.679760 3267 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 06:08:48.685767 kubelet[3267]: I0707 06:08:48.685710 3267 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 7 06:08:48.695411 kubelet[3267]: I0707 06:08:48.695378 3267 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 7 06:08:48.697819 kubelet[3267]: I0707 06:08:48.695628 3267 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 06:08:48.699087 kubelet[3267]: I0707 06:08:48.698556 3267 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-29-6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 06:08:48.699087 kubelet[3267]: I0707 06:08:48.698955 3267 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 06:08:48.699087 kubelet[3267]: I0707 06:08:48.698971 3267 container_manager_linux.go:304] "Creating device plugin manager" Jul 7 06:08:48.699087 kubelet[3267]: I0707 06:08:48.699032 3267 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:08:48.699573 kubelet[3267]: I0707 06:08:48.699559 3267 kubelet.go:446] "Attempting to sync node with API server" Jul 7 06:08:48.700208 kubelet[3267]: I0707 06:08:48.700184 3267 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 06:08:48.700353 kubelet[3267]: I0707 06:08:48.700342 3267 kubelet.go:352] "Adding apiserver pod source" Jul 7 06:08:48.700521 kubelet[3267]: I0707 06:08:48.700407 3267 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 06:08:48.706986 kubelet[3267]: I0707 06:08:48.706750 3267 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 7 06:08:48.707304 kubelet[3267]: I0707 06:08:48.707279 3267 kubelet.go:890] "Not starting ClusterTrustBundle informer because we 
are in static kubelet mode" Jul 7 06:08:48.709823 kubelet[3267]: I0707 06:08:48.709799 3267 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 06:08:48.709922 kubelet[3267]: I0707 06:08:48.709843 3267 server.go:1287] "Started kubelet" Jul 7 06:08:48.714597 kubelet[3267]: I0707 06:08:48.714570 3267 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 06:08:48.729025 kubelet[3267]: I0707 06:08:48.728939 3267 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 06:08:48.730673 kubelet[3267]: I0707 06:08:48.730212 3267 server.go:479] "Adding debug handlers to kubelet server" Jul 7 06:08:48.733761 kubelet[3267]: I0707 06:08:48.730877 3267 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 06:08:48.733761 kubelet[3267]: I0707 06:08:48.731606 3267 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 7 06:08:48.733761 kubelet[3267]: E0707 06:08:48.731990 3267 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-29-6\" not found" Jul 7 06:08:48.734165 kubelet[3267]: I0707 06:08:48.733986 3267 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 06:08:48.734165 kubelet[3267]: I0707 06:08:48.734128 3267 reconciler.go:26] "Reconciler: start to sync state" Jul 7 06:08:48.738488 kubelet[3267]: I0707 06:08:48.736624 3267 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 06:08:48.738488 kubelet[3267]: I0707 06:08:48.738488 3267 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 7 06:08:48.738685 kubelet[3267]: I0707 06:08:48.738517 3267 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 7 06:08:48.738685 kubelet[3267]: I0707 06:08:48.738539 3267 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 7 06:08:48.738685 kubelet[3267]: I0707 06:08:48.738546 3267 kubelet.go:2382] "Starting kubelet main sync loop" Jul 7 06:08:48.738685 kubelet[3267]: E0707 06:08:48.738599 3267 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 06:08:48.739114 kubelet[3267]: I0707 06:08:48.739095 3267 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 06:08:48.741774 kubelet[3267]: I0707 06:08:48.740270 3267 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 06:08:48.750212 kubelet[3267]: I0707 06:08:48.750146 3267 factory.go:221] Registration of the systemd container factory successfully Jul 7 06:08:48.750354 kubelet[3267]: I0707 06:08:48.750264 3267 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 06:08:48.755690 kubelet[3267]: E0707 06:08:48.751093 3267 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 06:08:48.755690 kubelet[3267]: I0707 06:08:48.752885 3267 factory.go:221] Registration of the containerd container factory successfully Jul 7 06:08:48.765252 sudo[3287]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 7 06:08:48.765748 sudo[3287]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 7 06:08:48.838874 kubelet[3267]: E0707 06:08:48.838765 3267 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 7 06:08:48.850238 kubelet[3267]: I0707 06:08:48.850214 3267 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 06:08:48.850238 kubelet[3267]: I0707 06:08:48.850232 3267 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 06:08:48.850448 kubelet[3267]: I0707 06:08:48.850253 3267 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:08:48.850494 kubelet[3267]: I0707 06:08:48.850458 3267 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 7 06:08:48.850494 kubelet[3267]: I0707 06:08:48.850472 3267 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 7 06:08:48.850570 kubelet[3267]: I0707 06:08:48.850497 3267 policy_none.go:49] "None policy: Start" Jul 7 06:08:48.850570 kubelet[3267]: I0707 06:08:48.850511 3267 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 7 06:08:48.850570 kubelet[3267]: I0707 06:08:48.850524 3267 state_mem.go:35] "Initializing new in-memory state store" Jul 7 06:08:48.850705 kubelet[3267]: I0707 06:08:48.850686 3267 state_mem.go:75] "Updated machine memory state" Jul 7 06:08:48.861667 kubelet[3267]: I0707 06:08:48.859899 3267 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 06:08:48.862504 kubelet[3267]: I0707 06:08:48.862309 3267 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 06:08:48.862504 kubelet[3267]: I0707 06:08:48.862328 3267 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 06:08:48.862808 kubelet[3267]: I0707 06:08:48.862796 3267 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 06:08:48.865053 kubelet[3267]: E0707 06:08:48.865025 3267 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 7 06:08:48.983487 kubelet[3267]: I0707 06:08:48.983397 3267 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-6" Jul 7 06:08:48.996130 kubelet[3267]: I0707 06:08:48.996071 3267 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-29-6" Jul 7 06:08:48.996560 kubelet[3267]: I0707 06:08:48.996453 3267 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-29-6" Jul 7 06:08:49.040663 kubelet[3267]: I0707 06:08:49.040617 3267 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-29-6" Jul 7 06:08:49.044830 kubelet[3267]: I0707 06:08:49.044761 3267 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-29-6" Jul 7 06:08:49.046702 kubelet[3267]: I0707 06:08:49.046680 3267 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-29-6" Jul 7 06:08:49.051517 kubelet[3267]: E0707 06:08:49.051487 3267 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-29-6\" already exists" pod="kube-system/kube-apiserver-ip-172-31-29-6" Jul 7 06:08:49.060660 kubelet[3267]: E0707 06:08:49.060559 3267 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-29-6\" already exists" pod="kube-system/kube-scheduler-ip-172-31-29-6" Jul 7 06:08:49.136322 kubelet[3267]: I0707 06:08:49.135626 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1eb967225fbb40dea13d77b0453e3365-ca-certs\") pod \"kube-apiserver-ip-172-31-29-6\" (UID: \"1eb967225fbb40dea13d77b0453e3365\") " pod="kube-system/kube-apiserver-ip-172-31-29-6" Jul 7 06:08:49.136322 kubelet[3267]: I0707 06:08:49.135774 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1eb967225fbb40dea13d77b0453e3365-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-6\" (UID: \"1eb967225fbb40dea13d77b0453e3365\") " pod="kube-system/kube-apiserver-ip-172-31-29-6" Jul 7 06:08:49.136322 kubelet[3267]: I0707 06:08:49.135811 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1c32a938a125606d0922d1da754b0757-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-6\" (UID: \"1c32a938a125606d0922d1da754b0757\") " pod="kube-system/kube-controller-manager-ip-172-31-29-6" Jul 7 06:08:49.136322 kubelet[3267]: I0707 06:08:49.135833 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1c32a938a125606d0922d1da754b0757-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-6\" (UID: \"1c32a938a125606d0922d1da754b0757\") " pod="kube-system/kube-controller-manager-ip-172-31-29-6" Jul 7 06:08:49.136322 kubelet[3267]: I0707 06:08:49.135857 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/82a31fdaea126e82d5863908aacc8820-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-6\" (UID: \"82a31fdaea126e82d5863908aacc8820\") " pod="kube-system/kube-scheduler-ip-172-31-29-6" Jul 7 06:08:49.136522 kubelet[3267]: I0707 06:08:49.135879 3267 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1eb967225fbb40dea13d77b0453e3365-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-6\" (UID: \"1eb967225fbb40dea13d77b0453e3365\") " pod="kube-system/kube-apiserver-ip-172-31-29-6" Jul 7 06:08:49.136522 kubelet[3267]: I0707 06:08:49.135898 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1c32a938a125606d0922d1da754b0757-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-6\" (UID: \"1c32a938a125606d0922d1da754b0757\") " pod="kube-system/kube-controller-manager-ip-172-31-29-6" Jul 7 06:08:49.136522 kubelet[3267]: I0707 06:08:49.135920 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1c32a938a125606d0922d1da754b0757-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-6\" (UID: \"1c32a938a125606d0922d1da754b0757\") " pod="kube-system/kube-controller-manager-ip-172-31-29-6" Jul 7 06:08:49.136522 kubelet[3267]: I0707 06:08:49.135945 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1c32a938a125606d0922d1da754b0757-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-6\" (UID: \"1c32a938a125606d0922d1da754b0757\") " pod="kube-system/kube-controller-manager-ip-172-31-29-6" Jul 7 06:08:49.467697 sudo[3287]: pam_unix(sudo:session): session closed for user root Jul 7 06:08:49.713256 kubelet[3267]: I0707 06:08:49.713154 3267 apiserver.go:52] "Watching apiserver" Jul 7 06:08:49.734413 kubelet[3267]: I0707 06:08:49.734280 3267 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 7 06:08:49.782501 kubelet[3267]: I0707 06:08:49.782299 3267 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-29-6" podStartSLOduration=3.782258267 podStartE2EDuration="3.782258267s" podCreationTimestamp="2025-07-07 06:08:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:08:49.782057445 +0000 UTC m=+1.182198215" watchObservedRunningTime="2025-07-07 06:08:49.782258267 +0000 UTC m=+1.182399032" Jul 7 06:08:49.809274 kubelet[3267]: I0707 06:08:49.808836 3267 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-29-6" podStartSLOduration=0.808814782 podStartE2EDuration="808.814782ms" podCreationTimestamp="2025-07-07 06:08:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:08:49.797362678 +0000 UTC m=+1.197503440" watchObservedRunningTime="2025-07-07 06:08:49.808814782 +0000 UTC m=+1.208955554" Jul 7 06:08:49.816553 kubelet[3267]: I0707 06:08:49.816493 3267 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-29-6" Jul 7 06:08:49.818752 kubelet[3267]: I0707 06:08:49.818686 3267 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-29-6" Jul 7 06:08:49.819289 kubelet[3267]: I0707 06:08:49.819195 3267 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-29-6" Jul 
7 06:08:49.837673 kubelet[3267]: I0707 06:08:49.837368 3267 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-29-6" podStartSLOduration=3.837346763 podStartE2EDuration="3.837346763s" podCreationTimestamp="2025-07-07 06:08:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:08:49.810069163 +0000 UTC m=+1.210209933" watchObservedRunningTime="2025-07-07 06:08:49.837346763 +0000 UTC m=+1.237487536" Jul 7 06:08:49.838724 kubelet[3267]: E0707 06:08:49.838376 3267 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-29-6\" already exists" pod="kube-system/kube-apiserver-ip-172-31-29-6" Jul 7 06:08:49.838724 kubelet[3267]: E0707 06:08:49.838632 3267 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-29-6\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-29-6" Jul 7 06:08:49.839165 kubelet[3267]: E0707 06:08:49.839139 3267 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-29-6\" already exists" pod="kube-system/kube-scheduler-ip-172-31-29-6" Jul 7 06:08:50.454759 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jul 7 06:08:51.175009 sudo[2337]: pam_unix(sudo:session): session closed for user root Jul 7 06:08:51.199126 sshd[2336]: Connection closed by 139.178.89.65 port 34702 Jul 7 06:08:51.200154 sshd-session[2334]: pam_unix(sshd:session): session closed for user core Jul 7 06:08:51.204029 systemd[1]: sshd@6-172.31.29.6:22-139.178.89.65:34702.service: Deactivated successfully. Jul 7 06:08:51.206941 systemd[1]: session-7.scope: Deactivated successfully. Jul 7 06:08:51.207154 systemd[1]: session-7.scope: Consumed 5.011s CPU time, 204.8M memory peak. Jul 7 06:08:51.208406 systemd-logind[1873]: Session 7 logged out. Waiting for processes to exit. Jul 7 06:08:51.209991 systemd-logind[1873]: Removed session 7. Jul 7 06:08:52.444722 kubelet[3267]: I0707 06:08:52.444692 3267 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 7 06:08:52.445888 containerd[1934]: time="2025-07-07T06:08:52.445736034Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 7 06:08:52.447235 kubelet[3267]: I0707 06:08:52.446929 3267 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 7 06:08:53.121054 systemd[1]: Created slice kubepods-besteffort-pod37710666_4349_4f30_b29f_fb9cd1e07a74.slice - libcontainer container kubepods-besteffort-pod37710666_4349_4f30_b29f_fb9cd1e07a74.slice. Jul 7 06:08:53.136977 systemd[1]: Created slice kubepods-burstable-podef4765c6_5272_4ffe_a183_f010e64a4d12.slice - libcontainer container kubepods-burstable-podef4765c6_5272_4ffe_a183_f010e64a4d12.slice. 
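The kubelet entries above push the node's newly assigned PodCIDR (192.168.0.0/24) to the runtime over CRI, after which containerd waits for a CNI config to appear. As a minimal illustration of what that range means for pod addressing, the following standard-library sketch checks whether candidate addresses fall inside the advertised prefix (the addresses themselves are made up for the example):

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// PodCIDR advertised in the kubelet log above.
	podCIDR := netip.MustParsePrefix("192.168.0.0/24")

	// Illustrative addresses; only the first falls inside the node's range.
	for _, s := range []string{"192.168.0.79", "192.168.1.10"} {
		ip := netip.MustParseAddr(s)
		fmt.Printf("%s inside %s: %v\n", ip, podCIDR, podCIDR.Contains(ip))
	}
}
```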
Jul 7 06:08:53.164403 kubelet[3267]: I0707 06:08:53.164359 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ef4765c6-5272-4ffe-a183-f010e64a4d12-clustermesh-secrets\") pod \"cilium-4p8s4\" (UID: \"ef4765c6-5272-4ffe-a183-f010e64a4d12\") " pod="kube-system/cilium-4p8s4" Jul 7 06:08:53.164403 kubelet[3267]: I0707 06:08:53.164400 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ef4765c6-5272-4ffe-a183-f010e64a4d12-cilium-run\") pod \"cilium-4p8s4\" (UID: \"ef4765c6-5272-4ffe-a183-f010e64a4d12\") " pod="kube-system/cilium-4p8s4" Jul 7 06:08:53.164585 kubelet[3267]: I0707 06:08:53.164425 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef4765c6-5272-4ffe-a183-f010e64a4d12-lib-modules\") pod \"cilium-4p8s4\" (UID: \"ef4765c6-5272-4ffe-a183-f010e64a4d12\") " pod="kube-system/cilium-4p8s4" Jul 7 06:08:53.164585 kubelet[3267]: I0707 06:08:53.164440 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ef4765c6-5272-4ffe-a183-f010e64a4d12-host-proc-sys-kernel\") pod \"cilium-4p8s4\" (UID: \"ef4765c6-5272-4ffe-a183-f010e64a4d12\") " pod="kube-system/cilium-4p8s4" Jul 7 06:08:53.164585 kubelet[3267]: I0707 06:08:53.164458 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/37710666-4349-4f30-b29f-fb9cd1e07a74-lib-modules\") pod \"kube-proxy-pklng\" (UID: \"37710666-4349-4f30-b29f-fb9cd1e07a74\") " pod="kube-system/kube-proxy-pklng" Jul 7 06:08:53.164585 kubelet[3267]: I0707 06:08:53.164473 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ef4765c6-5272-4ffe-a183-f010e64a4d12-etc-cni-netd\") pod \"cilium-4p8s4\" (UID: \"ef4765c6-5272-4ffe-a183-f010e64a4d12\") " pod="kube-system/cilium-4p8s4" Jul 7 06:08:53.164585 kubelet[3267]: I0707 06:08:53.164489 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/37710666-4349-4f30-b29f-fb9cd1e07a74-kube-proxy\") pod \"kube-proxy-pklng\" (UID: \"37710666-4349-4f30-b29f-fb9cd1e07a74\") " pod="kube-system/kube-proxy-pklng" Jul 7 06:08:53.164736 kubelet[3267]: I0707 06:08:53.164503 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv6qj\" (UniqueName: \"kubernetes.io/projected/ef4765c6-5272-4ffe-a183-f010e64a4d12-kube-api-access-qv6qj\") pod \"cilium-4p8s4\" (UID: \"ef4765c6-5272-4ffe-a183-f010e64a4d12\") " pod="kube-system/cilium-4p8s4" Jul 7 06:08:53.164736 kubelet[3267]: I0707 06:08:53.164520 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ef4765c6-5272-4ffe-a183-f010e64a4d12-bpf-maps\") pod \"cilium-4p8s4\" (UID: \"ef4765c6-5272-4ffe-a183-f010e64a4d12\") " pod="kube-system/cilium-4p8s4" Jul 7 06:08:53.164736 kubelet[3267]: I0707 06:08:53.164535 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" 
(UniqueName: \"kubernetes.io/host-path/ef4765c6-5272-4ffe-a183-f010e64a4d12-xtables-lock\") pod \"cilium-4p8s4\" (UID: \"ef4765c6-5272-4ffe-a183-f010e64a4d12\") " pod="kube-system/cilium-4p8s4" Jul 7 06:08:53.164736 kubelet[3267]: I0707 06:08:53.164551 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ef4765c6-5272-4ffe-a183-f010e64a4d12-cilium-config-path\") pod \"cilium-4p8s4\" (UID: \"ef4765c6-5272-4ffe-a183-f010e64a4d12\") " pod="kube-system/cilium-4p8s4" Jul 7 06:08:53.164736 kubelet[3267]: I0707 06:08:53.164565 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ef4765c6-5272-4ffe-a183-f010e64a4d12-hostproc\") pod \"cilium-4p8s4\" (UID: \"ef4765c6-5272-4ffe-a183-f010e64a4d12\") " pod="kube-system/cilium-4p8s4" Jul 7 06:08:53.164736 kubelet[3267]: I0707 06:08:53.164581 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ef4765c6-5272-4ffe-a183-f010e64a4d12-cni-path\") pod \"cilium-4p8s4\" (UID: \"ef4765c6-5272-4ffe-a183-f010e64a4d12\") " pod="kube-system/cilium-4p8s4" Jul 7 06:08:53.164885 kubelet[3267]: I0707 06:08:53.164595 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ef4765c6-5272-4ffe-a183-f010e64a4d12-hubble-tls\") pod \"cilium-4p8s4\" (UID: \"ef4765c6-5272-4ffe-a183-f010e64a4d12\") " pod="kube-system/cilium-4p8s4" Jul 7 06:08:53.164885 kubelet[3267]: I0707 06:08:53.164619 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ef4765c6-5272-4ffe-a183-f010e64a4d12-host-proc-sys-net\") pod \"cilium-4p8s4\" (UID: \"ef4765c6-5272-4ffe-a183-f010e64a4d12\") " pod="kube-system/cilium-4p8s4" Jul 7 06:08:53.164885 kubelet[3267]: I0707 06:08:53.164639 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/37710666-4349-4f30-b29f-fb9cd1e07a74-xtables-lock\") pod \"kube-proxy-pklng\" (UID: \"37710666-4349-4f30-b29f-fb9cd1e07a74\") " pod="kube-system/kube-proxy-pklng" Jul 7 06:08:53.164885 kubelet[3267]: I0707 06:08:53.164673 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkmt6\" (UniqueName: \"kubernetes.io/projected/37710666-4349-4f30-b29f-fb9cd1e07a74-kube-api-access-fkmt6\") pod \"kube-proxy-pklng\" (UID: \"37710666-4349-4f30-b29f-fb9cd1e07a74\") " pod="kube-system/kube-proxy-pklng" Jul 7 06:08:53.164885 kubelet[3267]: I0707 06:08:53.164696 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ef4765c6-5272-4ffe-a183-f010e64a4d12-cilium-cgroup\") pod \"cilium-4p8s4\" (UID: \"ef4765c6-5272-4ffe-a183-f010e64a4d12\") " pod="kube-system/cilium-4p8s4" Jul 7 06:08:53.432228 containerd[1934]: time="2025-07-07T06:08:53.432134004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pklng,Uid:37710666-4349-4f30-b29f-fb9cd1e07a74,Namespace:kube-system,Attempt:0,}" Jul 7 06:08:53.444640 containerd[1934]: time="2025-07-07T06:08:53.444595866Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-4p8s4,Uid:ef4765c6-5272-4ffe-a183-f010e64a4d12,Namespace:kube-system,Attempt:0,}" Jul 7 06:08:53.481612 containerd[1934]: time="2025-07-07T06:08:53.479490843Z" level=info msg="connecting to shim c82e57dd93a49e53a5c5dab5a284f68d6d4341dec87b2fe263e6c8d6eb5e6a6f" address="unix:///run/containerd/s/9af18cd43cc21f6f7309c6be5a76447df2c6de46cb1f14a3fcb9a545080e91bf" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:08:53.527513 containerd[1934]: time="2025-07-07T06:08:53.527465879Z" level=info msg="connecting to shim 823d3a04276dcf5c1d6f2c25962f5a8e3553ef2a00a54f6f21fc5f655d5b6693" address="unix:///run/containerd/s/c8a8c293adf01afd181e64d31dba1425182cc181a5b03bdfac062e2e01c203c2" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:08:53.537115 systemd[1]: Started cri-containerd-c82e57dd93a49e53a5c5dab5a284f68d6d4341dec87b2fe263e6c8d6eb5e6a6f.scope - libcontainer container c82e57dd93a49e53a5c5dab5a284f68d6d4341dec87b2fe263e6c8d6eb5e6a6f. Jul 7 06:08:53.560576 systemd[1]: Created slice kubepods-besteffort-pod8ade2bf0_23bc_4777_a8c6_8e99c030d492.slice - libcontainer container kubepods-besteffort-pod8ade2bf0_23bc_4777_a8c6_8e99c030d492.slice. Jul 7 06:08:53.567362 kubelet[3267]: I0707 06:08:53.567285 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng98q\" (UniqueName: \"kubernetes.io/projected/8ade2bf0-23bc-4777-a8c6-8e99c030d492-kube-api-access-ng98q\") pod \"cilium-operator-6c4d7847fc-65bsl\" (UID: \"8ade2bf0-23bc-4777-a8c6-8e99c030d492\") " pod="kube-system/cilium-operator-6c4d7847fc-65bsl" Jul 7 06:08:53.567362 kubelet[3267]: I0707 06:08:53.567320 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8ade2bf0-23bc-4777-a8c6-8e99c030d492-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-65bsl\" (UID: \"8ade2bf0-23bc-4777-a8c6-8e99c030d492\") " pod="kube-system/cilium-operator-6c4d7847fc-65bsl" Jul 7 06:08:53.579863 systemd[1]: Started cri-containerd-823d3a04276dcf5c1d6f2c25962f5a8e3553ef2a00a54f6f21fc5f655d5b6693.scope - libcontainer container 823d3a04276dcf5c1d6f2c25962f5a8e3553ef2a00a54f6f21fc5f655d5b6693. 
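The "connecting to shim" entries show containerd reaching each sandbox's shim over a per-task unix socket under /run/containerd/s/, and the kubelet talks CRI to containerd over a unix socket as well. A rough sketch of probing such an endpoint; the socket path below is the stock containerd CRI socket, an assumption rather than a value taken from this log:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Stock containerd CRI socket path (assumption; adjust for your host).
	const sock = "/run/containerd/containerd.sock"

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// A missing socket fails the same way the crio factory did earlier in
		// this boot: "dial unix /var/run/crio/crio.sock: connect: no such file or directory".
		fmt.Println("socket not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("connected to", sock)
}
```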
Jul 7 06:08:53.633097 containerd[1934]: time="2025-07-07T06:08:53.633050772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pklng,Uid:37710666-4349-4f30-b29f-fb9cd1e07a74,Namespace:kube-system,Attempt:0,} returns sandbox id \"c82e57dd93a49e53a5c5dab5a284f68d6d4341dec87b2fe263e6c8d6eb5e6a6f\"" Jul 7 06:08:53.635061 containerd[1934]: time="2025-07-07T06:08:53.635020043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4p8s4,Uid:ef4765c6-5272-4ffe-a183-f010e64a4d12,Namespace:kube-system,Attempt:0,} returns sandbox id \"823d3a04276dcf5c1d6f2c25962f5a8e3553ef2a00a54f6f21fc5f655d5b6693\"" Jul 7 06:08:53.637148 containerd[1934]: time="2025-07-07T06:08:53.637114208Z" level=info msg="CreateContainer within sandbox \"c82e57dd93a49e53a5c5dab5a284f68d6d4341dec87b2fe263e6c8d6eb5e6a6f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 7 06:08:53.641999 containerd[1934]: time="2025-07-07T06:08:53.641950113Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 7 06:08:53.659734 containerd[1934]: time="2025-07-07T06:08:53.659691337Z" level=info msg="Container 80c3c9e5964b62b6c683854cadd9bc0f07e3893f13f9816170a4de54d867fc23: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:08:53.674666 containerd[1934]: time="2025-07-07T06:08:53.673248196Z" level=info msg="CreateContainer within sandbox \"c82e57dd93a49e53a5c5dab5a284f68d6d4341dec87b2fe263e6c8d6eb5e6a6f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"80c3c9e5964b62b6c683854cadd9bc0f07e3893f13f9816170a4de54d867fc23\"" Jul 7 06:08:53.675718 containerd[1934]: time="2025-07-07T06:08:53.675373574Z" level=info msg="StartContainer for \"80c3c9e5964b62b6c683854cadd9bc0f07e3893f13f9816170a4de54d867fc23\"" Jul 7 06:08:53.678669 containerd[1934]: time="2025-07-07T06:08:53.678468831Z" level=info msg="connecting to shim 80c3c9e5964b62b6c683854cadd9bc0f07e3893f13f9816170a4de54d867fc23" address="unix:///run/containerd/s/9af18cd43cc21f6f7309c6be5a76447df2c6de46cb1f14a3fcb9a545080e91bf" protocol=ttrpc version=3 Jul 7 06:08:53.700874 systemd[1]: Started cri-containerd-80c3c9e5964b62b6c683854cadd9bc0f07e3893f13f9816170a4de54d867fc23.scope - libcontainer container 80c3c9e5964b62b6c683854cadd9bc0f07e3893f13f9816170a4de54d867fc23. Jul 7 06:08:53.743390 containerd[1934]: time="2025-07-07T06:08:53.743340912Z" level=info msg="StartContainer for \"80c3c9e5964b62b6c683854cadd9bc0f07e3893f13f9816170a4de54d867fc23\" returns successfully" Jul 7 06:08:53.867136 containerd[1934]: time="2025-07-07T06:08:53.867083061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-65bsl,Uid:8ade2bf0-23bc-4777-a8c6-8e99c030d492,Namespace:kube-system,Attempt:0,}" Jul 7 06:08:53.902349 containerd[1934]: time="2025-07-07T06:08:53.902273431Z" level=info msg="connecting to shim 41d54ad6cdca769a61c29206a4d06ee79f04fd52ef97b7ca50915abf74795189" address="unix:///run/containerd/s/72f95167175be4cd3f1fb6ebaff9a0d5434afbbbdbb65c338a3e038583478a5b" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:08:53.925820 systemd[1]: Started cri-containerd-41d54ad6cdca769a61c29206a4d06ee79f04fd52ef97b7ca50915abf74795189.scope - libcontainer container 41d54ad6cdca769a61c29206a4d06ee79f04fd52ef97b7ca50915abf74795189. 
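The PullImage entry above requests the cilium image by tag plus sha256 digest (quay.io/cilium/cilium:v1.12.5@sha256:06ce2b...). A small sketch of splitting such a reference into repository, tag, and digest with plain string handling; real clients use a dedicated reference parser, so this is only illustrative:

```go
package main

import (
	"fmt"
	"strings"
)

// splitRef breaks an image reference of the form repo[:tag][@digest]
// into its parts. This is a simplification of real reference parsing.
func splitRef(ref string) (repo, tag, digest string) {
	if i := strings.Index(ref, "@"); i >= 0 {
		ref, digest = ref[:i], ref[i+1:]
	}
	if i := strings.LastIndex(ref, ":"); i > strings.LastIndex(ref, "/") {
		ref, tag = ref[:i], ref[i+1:]
	}
	return ref, tag, digest
}

func main() {
	ref := "quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5"
	repo, tag, digest := splitRef(ref)
	fmt.Println("repo:  ", repo)
	fmt.Println("tag:   ", tag)
	fmt.Println("digest:", digest)
}
```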
Jul 7 06:08:53.981026 containerd[1934]: time="2025-07-07T06:08:53.980503248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-65bsl,Uid:8ade2bf0-23bc-4777-a8c6-8e99c030d492,Namespace:kube-system,Attempt:0,} returns sandbox id \"41d54ad6cdca769a61c29206a4d06ee79f04fd52ef97b7ca50915abf74795189\"" Jul 7 06:08:54.954374 kubelet[3267]: I0707 06:08:54.954325 3267 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pklng" podStartSLOduration=1.954309602 podStartE2EDuration="1.954309602s" podCreationTimestamp="2025-07-07 06:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:08:53.838318667 +0000 UTC m=+5.238459439" watchObservedRunningTime="2025-07-07 06:08:54.954309602 +0000 UTC m=+6.354450373" Jul 7 06:09:02.857835 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3178502122.mount: Deactivated successfully. Jul 7 06:09:04.483366 update_engine[1883]: I20250707 06:09:04.482769 1883 update_attempter.cc:509] Updating boot flags... Jul 7 06:09:05.746224 containerd[1934]: time="2025-07-07T06:09:05.746167352Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:05.749256 containerd[1934]: time="2025-07-07T06:09:05.749005044Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jul 7 06:09:05.749256 containerd[1934]: time="2025-07-07T06:09:05.749207694Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:05.751075 containerd[1934]: time="2025-07-07T06:09:05.751035982Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 12.109038632s" Jul 7 06:09:05.751295 containerd[1934]: time="2025-07-07T06:09:05.751081675Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 7 06:09:05.752995 containerd[1934]: time="2025-07-07T06:09:05.752887765Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 7 06:09:05.755492 containerd[1934]: time="2025-07-07T06:09:05.755384816Z" level=info msg="CreateContainer within sandbox \"823d3a04276dcf5c1d6f2c25962f5a8e3553ef2a00a54f6f21fc5f655d5b6693\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 7 06:09:05.830555 containerd[1934]: time="2025-07-07T06:09:05.828819657Z" level=info msg="Container 79842cb231c46399ac5cb8c8e2f6e78a833a6f22335b63a68405afe5986eae6d: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:09:05.834262 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3079941602.mount: Deactivated successfully. 
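The pull above completes with 166,730,503 bytes read in 12.109038632s. A quick back-of-the-envelope check of the implied transfer rate, using the figures exactly as logged (roughly 13 MiB/s):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Figures taken from the "stop pulling image" and "Pulled image" entries above.
	bytesRead := 166730503.0
	elapsed := 12109038632 * time.Nanosecond

	mib := bytesRead / (1 << 20)
	fmt.Printf("%.1f MiB in %v => %.1f MiB/s\n", mib, elapsed, mib/elapsed.Seconds())
}
```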
Jul 7 06:09:05.848030 containerd[1934]: time="2025-07-07T06:09:05.847991314Z" level=info msg="CreateContainer within sandbox \"823d3a04276dcf5c1d6f2c25962f5a8e3553ef2a00a54f6f21fc5f655d5b6693\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"79842cb231c46399ac5cb8c8e2f6e78a833a6f22335b63a68405afe5986eae6d\"" Jul 7 06:09:05.848741 containerd[1934]: time="2025-07-07T06:09:05.848712125Z" level=info msg="StartContainer for \"79842cb231c46399ac5cb8c8e2f6e78a833a6f22335b63a68405afe5986eae6d\"" Jul 7 06:09:05.849946 containerd[1934]: time="2025-07-07T06:09:05.849895739Z" level=info msg="connecting to shim 79842cb231c46399ac5cb8c8e2f6e78a833a6f22335b63a68405afe5986eae6d" address="unix:///run/containerd/s/c8a8c293adf01afd181e64d31dba1425182cc181a5b03bdfac062e2e01c203c2" protocol=ttrpc version=3 Jul 7 06:09:05.953849 systemd[1]: Started cri-containerd-79842cb231c46399ac5cb8c8e2f6e78a833a6f22335b63a68405afe5986eae6d.scope - libcontainer container 79842cb231c46399ac5cb8c8e2f6e78a833a6f22335b63a68405afe5986eae6d. Jul 7 06:09:05.987489 containerd[1934]: time="2025-07-07T06:09:05.987447184Z" level=info msg="StartContainer for \"79842cb231c46399ac5cb8c8e2f6e78a833a6f22335b63a68405afe5986eae6d\" returns successfully" Jul 7 06:09:05.998414 systemd[1]: cri-containerd-79842cb231c46399ac5cb8c8e2f6e78a833a6f22335b63a68405afe5986eae6d.scope: Deactivated successfully. Jul 7 06:09:06.033231 containerd[1934]: time="2025-07-07T06:09:06.033178136Z" level=info msg="TaskExit event in podsandbox handler container_id:\"79842cb231c46399ac5cb8c8e2f6e78a833a6f22335b63a68405afe5986eae6d\" id:\"79842cb231c46399ac5cb8c8e2f6e78a833a6f22335b63a68405afe5986eae6d\" pid:3868 exited_at:{seconds:1751868545 nanos:999851217}" Jul 7 06:09:06.043807 containerd[1934]: time="2025-07-07T06:09:06.043755009Z" level=info msg="received exit event container_id:\"79842cb231c46399ac5cb8c8e2f6e78a833a6f22335b63a68405afe5986eae6d\" id:\"79842cb231c46399ac5cb8c8e2f6e78a833a6f22335b63a68405afe5986eae6d\" pid:3868 exited_at:{seconds:1751868545 nanos:999851217}" Jul 7 06:09:06.071780 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-79842cb231c46399ac5cb8c8e2f6e78a833a6f22335b63a68405afe5986eae6d-rootfs.mount: Deactivated successfully. Jul 7 06:09:06.874179 containerd[1934]: time="2025-07-07T06:09:06.874131726Z" level=info msg="CreateContainer within sandbox \"823d3a04276dcf5c1d6f2c25962f5a8e3553ef2a00a54f6f21fc5f655d5b6693\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 7 06:09:06.895381 containerd[1934]: time="2025-07-07T06:09:06.895325695Z" level=info msg="Container 1de8e9136a73f1e61836919032162a7feae04fce204f094fc5e0d48d7145d5a0: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:09:06.899859 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount979471000.mount: Deactivated successfully. 
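The TaskExit events above report exited_at as a {seconds, nanos} pair (a protobuf timestamp). Converting the value from the mount-cgroup exit back into wall-clock time shows it lines up with the surrounding journal timestamps:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// exited_at from the TaskExit event for the mount-cgroup container above.
	exited := time.Unix(1751868545, 999851217).UTC()
	fmt.Println(exited.Format(time.RFC3339Nano)) // 2025-07-07T06:09:05.999851217Z
}
```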
Jul 7 06:09:06.907253 containerd[1934]: time="2025-07-07T06:09:06.907213980Z" level=info msg="CreateContainer within sandbox \"823d3a04276dcf5c1d6f2c25962f5a8e3553ef2a00a54f6f21fc5f655d5b6693\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1de8e9136a73f1e61836919032162a7feae04fce204f094fc5e0d48d7145d5a0\"" Jul 7 06:09:06.909773 containerd[1934]: time="2025-07-07T06:09:06.908753961Z" level=info msg="StartContainer for \"1de8e9136a73f1e61836919032162a7feae04fce204f094fc5e0d48d7145d5a0\"" Jul 7 06:09:06.914511 containerd[1934]: time="2025-07-07T06:09:06.913724624Z" level=info msg="connecting to shim 1de8e9136a73f1e61836919032162a7feae04fce204f094fc5e0d48d7145d5a0" address="unix:///run/containerd/s/c8a8c293adf01afd181e64d31dba1425182cc181a5b03bdfac062e2e01c203c2" protocol=ttrpc version=3 Jul 7 06:09:06.943271 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount320110009.mount: Deactivated successfully. Jul 7 06:09:06.955851 systemd[1]: Started cri-containerd-1de8e9136a73f1e61836919032162a7feae04fce204f094fc5e0d48d7145d5a0.scope - libcontainer container 1de8e9136a73f1e61836919032162a7feae04fce204f094fc5e0d48d7145d5a0. Jul 7 06:09:06.999330 containerd[1934]: time="2025-07-07T06:09:06.999301056Z" level=info msg="StartContainer for \"1de8e9136a73f1e61836919032162a7feae04fce204f094fc5e0d48d7145d5a0\" returns successfully" Jul 7 06:09:07.011712 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 7 06:09:07.011930 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 7 06:09:07.012465 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 7 06:09:07.015103 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 06:09:07.018768 systemd[1]: cri-containerd-1de8e9136a73f1e61836919032162a7feae04fce204f094fc5e0d48d7145d5a0.scope: Deactivated successfully. Jul 7 06:09:07.021492 containerd[1934]: time="2025-07-07T06:09:07.021318200Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1de8e9136a73f1e61836919032162a7feae04fce204f094fc5e0d48d7145d5a0\" id:\"1de8e9136a73f1e61836919032162a7feae04fce204f094fc5e0d48d7145d5a0\" pid:3915 exited_at:{seconds:1751868547 nanos:20213699}" Jul 7 06:09:07.021492 containerd[1934]: time="2025-07-07T06:09:07.021387230Z" level=info msg="received exit event container_id:\"1de8e9136a73f1e61836919032162a7feae04fce204f094fc5e0d48d7145d5a0\" id:\"1de8e9136a73f1e61836919032162a7feae04fce204f094fc5e0d48d7145d5a0\" pid:3915 exited_at:{seconds:1751868547 nanos:20213699}" Jul 7 06:09:07.047020 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 06:09:07.877780 containerd[1934]: time="2025-07-07T06:09:07.877736769Z" level=info msg="CreateContainer within sandbox \"823d3a04276dcf5c1d6f2c25962f5a8e3553ef2a00a54f6f21fc5f655d5b6693\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 7 06:09:07.894728 containerd[1934]: time="2025-07-07T06:09:07.894688796Z" level=info msg="Container fa9622ec963b96af389efad35b10bd380f57bc62f9e89c2588d380b128363bcc: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:09:07.896864 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1de8e9136a73f1e61836919032162a7feae04fce204f094fc5e0d48d7145d5a0-rootfs.mount: Deactivated successfully. 
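The apply-sysctl-overwrites init step, and the systemd-sysctl restart logged around it, adjust kernel parameters that the CNI relies on. Sysctls are exposed as files under /proc/sys; a minimal read-only sketch (the parameter is chosen for illustration, not taken from this log):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// net.ipv4.ip_forward maps to this path; reading it needs no privileges.
	data, err := os.ReadFile("/proc/sys/net/ipv4/ip_forward")
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	fmt.Println("net.ipv4.ip_forward =", strings.TrimSpace(string(data)))
}
```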
Jul 7 06:09:07.905126 containerd[1934]: time="2025-07-07T06:09:07.905077102Z" level=info msg="CreateContainer within sandbox \"823d3a04276dcf5c1d6f2c25962f5a8e3553ef2a00a54f6f21fc5f655d5b6693\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fa9622ec963b96af389efad35b10bd380f57bc62f9e89c2588d380b128363bcc\"" Jul 7 06:09:07.905892 containerd[1934]: time="2025-07-07T06:09:07.905791347Z" level=info msg="StartContainer for \"fa9622ec963b96af389efad35b10bd380f57bc62f9e89c2588d380b128363bcc\"" Jul 7 06:09:07.907360 containerd[1934]: time="2025-07-07T06:09:07.907328280Z" level=info msg="connecting to shim fa9622ec963b96af389efad35b10bd380f57bc62f9e89c2588d380b128363bcc" address="unix:///run/containerd/s/c8a8c293adf01afd181e64d31dba1425182cc181a5b03bdfac062e2e01c203c2" protocol=ttrpc version=3 Jul 7 06:09:07.931965 systemd[1]: Started cri-containerd-fa9622ec963b96af389efad35b10bd380f57bc62f9e89c2588d380b128363bcc.scope - libcontainer container fa9622ec963b96af389efad35b10bd380f57bc62f9e89c2588d380b128363bcc. Jul 7 06:09:07.975348 containerd[1934]: time="2025-07-07T06:09:07.975302508Z" level=info msg="StartContainer for \"fa9622ec963b96af389efad35b10bd380f57bc62f9e89c2588d380b128363bcc\" returns successfully" Jul 7 06:09:07.983004 systemd[1]: cri-containerd-fa9622ec963b96af389efad35b10bd380f57bc62f9e89c2588d380b128363bcc.scope: Deactivated successfully. Jul 7 06:09:07.983415 systemd[1]: cri-containerd-fa9622ec963b96af389efad35b10bd380f57bc62f9e89c2588d380b128363bcc.scope: Consumed 24ms CPU time, 5.8M memory peak, 1M read from disk. Jul 7 06:09:07.984769 containerd[1934]: time="2025-07-07T06:09:07.984426272Z" level=info msg="received exit event container_id:\"fa9622ec963b96af389efad35b10bd380f57bc62f9e89c2588d380b128363bcc\" id:\"fa9622ec963b96af389efad35b10bd380f57bc62f9e89c2588d380b128363bcc\" pid:3966 exited_at:{seconds:1751868547 nanos:983857479}" Jul 7 06:09:07.984769 containerd[1934]: time="2025-07-07T06:09:07.984501678Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa9622ec963b96af389efad35b10bd380f57bc62f9e89c2588d380b128363bcc\" id:\"fa9622ec963b96af389efad35b10bd380f57bc62f9e89c2588d380b128363bcc\" pid:3966 exited_at:{seconds:1751868547 nanos:983857479}" Jul 7 06:09:08.010099 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa9622ec963b96af389efad35b10bd380f57bc62f9e89c2588d380b128363bcc-rootfs.mount: Deactivated successfully. Jul 7 06:09:08.884570 containerd[1934]: time="2025-07-07T06:09:08.884453993Z" level=info msg="CreateContainer within sandbox \"823d3a04276dcf5c1d6f2c25962f5a8e3553ef2a00a54f6f21fc5f655d5b6693\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 7 06:09:08.904807 containerd[1934]: time="2025-07-07T06:09:08.902763055Z" level=info msg="Container 4742900a7753e4733e957fec903697be6747b5d2ccb8aacac1d05bd5534fe00e: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:09:08.905798 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2406037927.mount: Deactivated successfully. 
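The mount-bpf-fs init step makes sure the BPF filesystem is available, conventionally at /sys/fs/bpf, before the agent container starts. A small sketch that checks for that mount by scanning /proc/self/mountinfo; the mount point is the conventional default and an assumption here:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/self/mountinfo")
	if err != nil {
		fmt.Println("open failed:", err)
		return
	}
	defer f.Close()

	found := false
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// Field 5 (index 4) of a mountinfo line is the mount point.
		fields := strings.Fields(sc.Text())
		if len(fields) > 4 && fields[4] == "/sys/fs/bpf" {
			found = true
			break
		}
	}
	fmt.Println("bpf filesystem mounted at /sys/fs/bpf:", found)
}
```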
Jul 7 06:09:08.916565 containerd[1934]: time="2025-07-07T06:09:08.916450423Z" level=info msg="CreateContainer within sandbox \"823d3a04276dcf5c1d6f2c25962f5a8e3553ef2a00a54f6f21fc5f655d5b6693\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4742900a7753e4733e957fec903697be6747b5d2ccb8aacac1d05bd5534fe00e\"" Jul 7 06:09:08.918056 containerd[1934]: time="2025-07-07T06:09:08.917777910Z" level=info msg="StartContainer for \"4742900a7753e4733e957fec903697be6747b5d2ccb8aacac1d05bd5534fe00e\"" Jul 7 06:09:08.921596 containerd[1934]: time="2025-07-07T06:09:08.921551682Z" level=info msg="connecting to shim 4742900a7753e4733e957fec903697be6747b5d2ccb8aacac1d05bd5534fe00e" address="unix:///run/containerd/s/c8a8c293adf01afd181e64d31dba1425182cc181a5b03bdfac062e2e01c203c2" protocol=ttrpc version=3 Jul 7 06:09:08.953862 systemd[1]: Started cri-containerd-4742900a7753e4733e957fec903697be6747b5d2ccb8aacac1d05bd5534fe00e.scope - libcontainer container 4742900a7753e4733e957fec903697be6747b5d2ccb8aacac1d05bd5534fe00e. Jul 7 06:09:08.990542 systemd[1]: cri-containerd-4742900a7753e4733e957fec903697be6747b5d2ccb8aacac1d05bd5534fe00e.scope: Deactivated successfully. Jul 7 06:09:08.994241 containerd[1934]: time="2025-07-07T06:09:08.993537610Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef4765c6_5272_4ffe_a183_f010e64a4d12.slice/cri-containerd-4742900a7753e4733e957fec903697be6747b5d2ccb8aacac1d05bd5534fe00e.scope/memory.events\": no such file or directory" Jul 7 06:09:08.998513 containerd[1934]: time="2025-07-07T06:09:08.997693295Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4742900a7753e4733e957fec903697be6747b5d2ccb8aacac1d05bd5534fe00e\" id:\"4742900a7753e4733e957fec903697be6747b5d2ccb8aacac1d05bd5534fe00e\" pid:4007 exited_at:{seconds:1751868548 nanos:993382959}" Jul 7 06:09:08.998513 containerd[1934]: time="2025-07-07T06:09:08.998376334Z" level=info msg="received exit event container_id:\"4742900a7753e4733e957fec903697be6747b5d2ccb8aacac1d05bd5534fe00e\" id:\"4742900a7753e4733e957fec903697be6747b5d2ccb8aacac1d05bd5534fe00e\" pid:4007 exited_at:{seconds:1751868548 nanos:993382959}" Jul 7 06:09:09.017800 containerd[1934]: time="2025-07-07T06:09:09.017763400Z" level=info msg="StartContainer for \"4742900a7753e4733e957fec903697be6747b5d2ccb8aacac1d05bd5534fe00e\" returns successfully" Jul 7 06:09:09.039396 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4742900a7753e4733e957fec903697be6747b5d2ccb8aacac1d05bd5534fe00e-rootfs.mount: Deactivated successfully. 
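The cgroupsv2 warning above comes from containerd trying to add an inotify watch on the container's memory.events file, most likely after the short-lived init container's cgroup had already been removed. For a cgroup that still exists the file is plain text; a minimal read, with the path left as a placeholder:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Placeholder path: substitute the cgroup of interest on a cgroup v2 host.
	const path = "/sys/fs/cgroup/system.slice/memory.events"

	data, err := os.ReadFile(path)
	if err != nil {
		// For an already-deleted cgroup this is the same ENOENT the log shows.
		fmt.Println("read failed:", err)
		return
	}
	// Lines look like "low 0", "high 0", "max 0", "oom 0", "oom_kill 0".
	fmt.Print(string(data))
}
```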
Jul 7 06:09:09.750458 containerd[1934]: time="2025-07-07T06:09:09.750414904Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:09.752579 containerd[1934]: time="2025-07-07T06:09:09.752396021Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jul 7 06:09:09.753768 containerd[1934]: time="2025-07-07T06:09:09.753722811Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:09.754796 containerd[1934]: time="2025-07-07T06:09:09.754692567Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.001772007s" Jul 7 06:09:09.754796 containerd[1934]: time="2025-07-07T06:09:09.754724387Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 7 06:09:09.757475 containerd[1934]: time="2025-07-07T06:09:09.757442805Z" level=info msg="CreateContainer within sandbox \"41d54ad6cdca769a61c29206a4d06ee79f04fd52ef97b7ca50915abf74795189\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 7 06:09:09.767317 containerd[1934]: time="2025-07-07T06:09:09.767279114Z" level=info msg="Container 72390321a0ac76c0c942853a30fd3b51349d6b5ef46b94c1b1ed5d00adb0062f: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:09:09.773982 containerd[1934]: time="2025-07-07T06:09:09.773928077Z" level=info msg="CreateContainer within sandbox \"41d54ad6cdca769a61c29206a4d06ee79f04fd52ef97b7ca50915abf74795189\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"72390321a0ac76c0c942853a30fd3b51349d6b5ef46b94c1b1ed5d00adb0062f\"" Jul 7 06:09:09.774486 containerd[1934]: time="2025-07-07T06:09:09.774453881Z" level=info msg="StartContainer for \"72390321a0ac76c0c942853a30fd3b51349d6b5ef46b94c1b1ed5d00adb0062f\"" Jul 7 06:09:09.775497 containerd[1934]: time="2025-07-07T06:09:09.775467189Z" level=info msg="connecting to shim 72390321a0ac76c0c942853a30fd3b51349d6b5ef46b94c1b1ed5d00adb0062f" address="unix:///run/containerd/s/72f95167175be4cd3f1fb6ebaff9a0d5434afbbbdbb65c338a3e038583478a5b" protocol=ttrpc version=3 Jul 7 06:09:09.799895 systemd[1]: Started cri-containerd-72390321a0ac76c0c942853a30fd3b51349d6b5ef46b94c1b1ed5d00adb0062f.scope - libcontainer container 72390321a0ac76c0c942853a30fd3b51349d6b5ef46b94c1b1ed5d00adb0062f. 
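Like the agent image, the operator image above is pulled by sha256 digest, so the fetched content can be verified independently of its tag. A toy illustration of what digest pinning guarantees, hashing a stand-in blob with crypto/sha256 and comparing it against an expected "sha256:<hex>" string (the blob is not real image data):

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"strings"
)

// verify reports whether blob hashes to the digest given in "sha256:<hex>" form.
func verify(blob []byte, digest string) bool {
	sum := sha256.Sum256(blob)
	return strings.TrimPrefix(digest, "sha256:") == fmt.Sprintf("%x", sum)
}

func main() {
	blob := []byte("stand-in layer content")
	want := fmt.Sprintf("sha256:%x", sha256.Sum256(blob))
	fmt.Println("digest matches:", verify(blob, want)) // true
}
```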
Jul 7 06:09:09.831158 containerd[1934]: time="2025-07-07T06:09:09.831078291Z" level=info msg="StartContainer for \"72390321a0ac76c0c942853a30fd3b51349d6b5ef46b94c1b1ed5d00adb0062f\" returns successfully" Jul 7 06:09:09.900198 containerd[1934]: time="2025-07-07T06:09:09.899434991Z" level=info msg="CreateContainer within sandbox \"823d3a04276dcf5c1d6f2c25962f5a8e3553ef2a00a54f6f21fc5f655d5b6693\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 7 06:09:09.918328 containerd[1934]: time="2025-07-07T06:09:09.917027401Z" level=info msg="Container 1f93a0e8a6dcdc55943949a5ed426142293a0d8ba7950e2046facec95682b03d: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:09:09.919838 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1112695637.mount: Deactivated successfully. Jul 7 06:09:09.930156 containerd[1934]: time="2025-07-07T06:09:09.930104671Z" level=info msg="CreateContainer within sandbox \"823d3a04276dcf5c1d6f2c25962f5a8e3553ef2a00a54f6f21fc5f655d5b6693\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1f93a0e8a6dcdc55943949a5ed426142293a0d8ba7950e2046facec95682b03d\"" Jul 7 06:09:09.932184 containerd[1934]: time="2025-07-07T06:09:09.932160547Z" level=info msg="StartContainer for \"1f93a0e8a6dcdc55943949a5ed426142293a0d8ba7950e2046facec95682b03d\"" Jul 7 06:09:09.933993 containerd[1934]: time="2025-07-07T06:09:09.933912001Z" level=info msg="connecting to shim 1f93a0e8a6dcdc55943949a5ed426142293a0d8ba7950e2046facec95682b03d" address="unix:///run/containerd/s/c8a8c293adf01afd181e64d31dba1425182cc181a5b03bdfac062e2e01c203c2" protocol=ttrpc version=3 Jul 7 06:09:09.960836 systemd[1]: Started cri-containerd-1f93a0e8a6dcdc55943949a5ed426142293a0d8ba7950e2046facec95682b03d.scope - libcontainer container 1f93a0e8a6dcdc55943949a5ed426142293a0d8ba7950e2046facec95682b03d. 
Jul 7 06:09:10.043476 containerd[1934]: time="2025-07-07T06:09:10.043423341Z" level=info msg="StartContainer for \"1f93a0e8a6dcdc55943949a5ed426142293a0d8ba7950e2046facec95682b03d\" returns successfully" Jul 7 06:09:10.265604 containerd[1934]: time="2025-07-07T06:09:10.265539502Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1f93a0e8a6dcdc55943949a5ed426142293a0d8ba7950e2046facec95682b03d\" id:\"71259710a658490a33b37376f5261e8d96e042def2df52e711d919b44c113727\" pid:4113 exited_at:{seconds:1751868550 nanos:264150897}" Jul 7 06:09:10.303691 kubelet[3267]: I0707 06:09:10.303314 3267 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 7 06:09:10.565857 kubelet[3267]: I0707 06:09:10.565248 3267 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-65bsl" podStartSLOduration=1.791993881 podStartE2EDuration="17.565221046s" podCreationTimestamp="2025-07-07 06:08:53 +0000 UTC" firstStartedPulling="2025-07-07 06:08:53.982163608 +0000 UTC m=+5.382304359" lastFinishedPulling="2025-07-07 06:09:09.755390775 +0000 UTC m=+21.155531524" observedRunningTime="2025-07-07 06:09:09.963230728 +0000 UTC m=+21.363371499" watchObservedRunningTime="2025-07-07 06:09:10.565221046 +0000 UTC m=+21.965361818" Jul 7 06:09:10.585862 kubelet[3267]: W0707 06:09:10.583748 3267 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-29-6" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-29-6' and this object Jul 7 06:09:10.585862 kubelet[3267]: E0707 06:09:10.583822 3267 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ip-172-31-29-6\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-29-6' and this object" logger="UnhandledError" Jul 7 06:09:10.584308 systemd[1]: Created slice kubepods-burstable-pod237bbdd4_cd68_437e_81e8_84fdc5182ca9.slice - libcontainer container kubepods-burstable-pod237bbdd4_cd68_437e_81e8_84fdc5182ca9.slice. 
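The pod_startup_latency_tracker entry above reports podStartE2EDuration=17.565221046s for cilium-operator, which is exactly the gap between the logged podCreationTimestamp and watchObservedRunningTime. Re-doing that subtraction with the timestamps as they appear in the log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching the "2025-07-07 06:08:53 +0000 UTC" form used in the log.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, err := time.Parse(layout, "2025-07-07 06:08:53 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2025-07-07 06:09:10.565221046 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println(running.Sub(created)) // 17.565221046s, the logged podStartE2EDuration
}
```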
Jul 7 06:09:10.590284 kubelet[3267]: I0707 06:09:10.590249 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/237bbdd4-cd68-437e-81e8-84fdc5182ca9-config-volume\") pod \"coredns-668d6bf9bc-4lsq9\" (UID: \"237bbdd4-cd68-437e-81e8-84fdc5182ca9\") " pod="kube-system/coredns-668d6bf9bc-4lsq9" Jul 7 06:09:10.590582 kubelet[3267]: I0707 06:09:10.590564 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9367e0d8-d668-49c9-a822-54d38724a036-config-volume\") pod \"coredns-668d6bf9bc-wz248\" (UID: \"9367e0d8-d668-49c9-a822-54d38724a036\") " pod="kube-system/coredns-668d6bf9bc-wz248" Jul 7 06:09:10.590804 kubelet[3267]: I0707 06:09:10.590707 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5mc6\" (UniqueName: \"kubernetes.io/projected/9367e0d8-d668-49c9-a822-54d38724a036-kube-api-access-r5mc6\") pod \"coredns-668d6bf9bc-wz248\" (UID: \"9367e0d8-d668-49c9-a822-54d38724a036\") " pod="kube-system/coredns-668d6bf9bc-wz248" Jul 7 06:09:10.590804 kubelet[3267]: I0707 06:09:10.590737 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pp4x\" (UniqueName: \"kubernetes.io/projected/237bbdd4-cd68-437e-81e8-84fdc5182ca9-kube-api-access-9pp4x\") pod \"coredns-668d6bf9bc-4lsq9\" (UID: \"237bbdd4-cd68-437e-81e8-84fdc5182ca9\") " pod="kube-system/coredns-668d6bf9bc-4lsq9" Jul 7 06:09:10.590804 kubelet[3267]: I0707 06:09:10.590583 3267 status_manager.go:890] "Failed to get status for pod" podUID="237bbdd4-cd68-437e-81e8-84fdc5182ca9" pod="kube-system/coredns-668d6bf9bc-4lsq9" err="pods \"coredns-668d6bf9bc-4lsq9\" is forbidden: User \"system:node:ip-172-31-29-6\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-29-6' and this object" Jul 7 06:09:10.601255 systemd[1]: Created slice kubepods-burstable-pod9367e0d8_d668_49c9_a822_54d38724a036.slice - libcontainer container kubepods-burstable-pod9367e0d8_d668_49c9_a822_54d38724a036.slice. Jul 7 06:09:11.695965 kubelet[3267]: E0707 06:09:11.695918 3267 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jul 7 06:09:11.696344 kubelet[3267]: E0707 06:09:11.695979 3267 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jul 7 06:09:11.707265 kubelet[3267]: E0707 06:09:11.707054 3267 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9367e0d8-d668-49c9-a822-54d38724a036-config-volume podName:9367e0d8-d668-49c9-a822-54d38724a036 nodeName:}" failed. No retries permitted until 2025-07-07 06:09:12.198689392 +0000 UTC m=+23.598830154 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9367e0d8-d668-49c9-a822-54d38724a036-config-volume") pod "coredns-668d6bf9bc-wz248" (UID: "9367e0d8-d668-49c9-a822-54d38724a036") : failed to sync configmap cache: timed out waiting for the condition Jul 7 06:09:11.707265 kubelet[3267]: E0707 06:09:11.707125 3267 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/237bbdd4-cd68-437e-81e8-84fdc5182ca9-config-volume podName:237bbdd4-cd68-437e-81e8-84fdc5182ca9 nodeName:}" failed. No retries permitted until 2025-07-07 06:09:12.207099723 +0000 UTC m=+23.607240474 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/237bbdd4-cd68-437e-81e8-84fdc5182ca9-config-volume") pod "coredns-668d6bf9bc-4lsq9" (UID: "237bbdd4-cd68-437e-81e8-84fdc5182ca9") : failed to sync configmap cache: timed out waiting for the condition Jul 7 06:09:12.401527 containerd[1934]: time="2025-07-07T06:09:12.401470705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4lsq9,Uid:237bbdd4-cd68-437e-81e8-84fdc5182ca9,Namespace:kube-system,Attempt:0,}" Jul 7 06:09:12.408008 containerd[1934]: time="2025-07-07T06:09:12.407928576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wz248,Uid:9367e0d8-d668-49c9-a822-54d38724a036,Namespace:kube-system,Attempt:0,}" Jul 7 06:09:13.955952 systemd-networkd[1825]: cilium_host: Link UP Jul 7 06:09:13.956907 (udev-worker)[4207]: Network interface NamePolicy= disabled on kernel command line. Jul 7 06:09:13.959118 systemd-networkd[1825]: cilium_net: Link UP Jul 7 06:09:13.959187 (udev-worker)[4148]: Network interface NamePolicy= disabled on kernel command line. Jul 7 06:09:13.959283 systemd-networkd[1825]: cilium_net: Gained carrier Jul 7 06:09:13.959417 systemd-networkd[1825]: cilium_host: Gained carrier Jul 7 06:09:14.079885 (udev-worker)[4217]: Network interface NamePolicy= disabled on kernel command line. Jul 7 06:09:14.086373 systemd-networkd[1825]: cilium_vxlan: Link UP Jul 7 06:09:14.086406 systemd-networkd[1825]: cilium_vxlan: Gained carrier Jul 7 06:09:14.510863 systemd-networkd[1825]: cilium_net: Gained IPv6LL Jul 7 06:09:14.638829 systemd-networkd[1825]: cilium_host: Gained IPv6LL Jul 7 06:09:15.036688 kernel: NET: Registered PF_ALG protocol family Jul 7 06:09:15.728298 (udev-worker)[4218]: Network interface NamePolicy= disabled on kernel command line. 
Jul 7 06:09:15.747368 systemd-networkd[1825]: lxc_health: Link UP Jul 7 06:09:15.749483 systemd-networkd[1825]: lxc_health: Gained carrier Jul 7 06:09:16.047806 kernel: eth0: renamed from tmp0a61d Jul 7 06:09:16.063948 kernel: eth0: renamed from tmpd560b Jul 7 06:09:16.064920 systemd-networkd[1825]: lxc0f029426d4cf: Link UP Jul 7 06:09:16.067094 systemd-networkd[1825]: lxc0f029426d4cf: Gained carrier Jul 7 06:09:16.067305 systemd-networkd[1825]: lxc9153ae21d4ba: Link UP Jul 7 06:09:16.067627 systemd-networkd[1825]: lxc9153ae21d4ba: Gained carrier Jul 7 06:09:16.110853 systemd-networkd[1825]: cilium_vxlan: Gained IPv6LL Jul 7 06:09:17.070882 systemd-networkd[1825]: lxc0f029426d4cf: Gained IPv6LL Jul 7 06:09:17.390951 systemd-networkd[1825]: lxc9153ae21d4ba: Gained IPv6LL Jul 7 06:09:17.488160 kubelet[3267]: I0707 06:09:17.487254 3267 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4p8s4" podStartSLOduration=12.372726521 podStartE2EDuration="24.486217619s" podCreationTimestamp="2025-07-07 06:08:53 +0000 UTC" firstStartedPulling="2025-07-07 06:08:53.638848805 +0000 UTC m=+5.038989568" lastFinishedPulling="2025-07-07 06:09:05.752339902 +0000 UTC m=+17.152480666" observedRunningTime="2025-07-07 06:09:11.070444821 +0000 UTC m=+22.470585593" watchObservedRunningTime="2025-07-07 06:09:17.486217619 +0000 UTC m=+28.886358392" Jul 7 06:09:17.582952 systemd-networkd[1825]: lxc_health: Gained IPv6LL Jul 7 06:09:19.620778 ntpd[1868]: Listen normally on 8 cilium_host 192.168.0.79:123 Jul 7 06:09:19.621958 ntpd[1868]: 7 Jul 06:09:19 ntpd[1868]: Listen normally on 8 cilium_host 192.168.0.79:123 Jul 7 06:09:19.621958 ntpd[1868]: 7 Jul 06:09:19 ntpd[1868]: Listen normally on 9 cilium_net [fe80::e860:efff:fe8c:b395%4]:123 Jul 7 06:09:19.621958 ntpd[1868]: 7 Jul 06:09:19 ntpd[1868]: Listen normally on 10 cilium_host [fe80::8cc3:45ff:fe09:de08%5]:123 Jul 7 06:09:19.621958 ntpd[1868]: 7 Jul 06:09:19 ntpd[1868]: Listen normally on 11 cilium_vxlan [fe80::1834:32ff:fe5c:8ad0%6]:123 Jul 7 06:09:19.621958 ntpd[1868]: 7 Jul 06:09:19 ntpd[1868]: Listen normally on 12 lxc_health [fe80::484:72ff:fe15:692c%8]:123 Jul 7 06:09:19.621958 ntpd[1868]: 7 Jul 06:09:19 ntpd[1868]: Listen normally on 13 lxc0f029426d4cf [fe80::9413:5eff:fee2:5594%10]:123 Jul 7 06:09:19.621958 ntpd[1868]: 7 Jul 06:09:19 ntpd[1868]: Listen normally on 14 lxc9153ae21d4ba [fe80::fc80:2fff:fe49:f5b3%12]:123 Jul 7 06:09:19.621416 ntpd[1868]: Listen normally on 9 cilium_net [fe80::e860:efff:fe8c:b395%4]:123 Jul 7 06:09:19.621478 ntpd[1868]: Listen normally on 10 cilium_host [fe80::8cc3:45ff:fe09:de08%5]:123 Jul 7 06:09:19.621521 ntpd[1868]: Listen normally on 11 cilium_vxlan [fe80::1834:32ff:fe5c:8ad0%6]:123 Jul 7 06:09:19.621561 ntpd[1868]: Listen normally on 12 lxc_health [fe80::484:72ff:fe15:692c%8]:123 Jul 7 06:09:19.621601 ntpd[1868]: Listen normally on 13 lxc0f029426d4cf [fe80::9413:5eff:fee2:5594%10]:123 Jul 7 06:09:19.621660 ntpd[1868]: Listen normally on 14 lxc9153ae21d4ba [fe80::fc80:2fff:fe49:f5b3%12]:123 Jul 7 06:09:20.483374 containerd[1934]: time="2025-07-07T06:09:20.483276511Z" level=info msg="connecting to shim 0a61dec682f5523671f7539a60af2ee6a813d3a78ce84bdce413199ac854e1f8" address="unix:///run/containerd/s/5363a333e889149500baf0bf612517e6d351bfaf757cb2a15eedde52223d89d9" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:09:20.489485 containerd[1934]: time="2025-07-07T06:09:20.488810817Z" level=info msg="connecting to shim d560b5626131f9104c84ad4b8bb28dfdd64e09528eeb39fc1a17e3557871760c" 
address="unix:///run/containerd/s/3086f0fb48ee18fdf2d560313fafb3efb8e1d64d810dddebd1e7ee39482d7123" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:09:20.588092 systemd[1]: Started cri-containerd-d560b5626131f9104c84ad4b8bb28dfdd64e09528eeb39fc1a17e3557871760c.scope - libcontainer container d560b5626131f9104c84ad4b8bb28dfdd64e09528eeb39fc1a17e3557871760c. Jul 7 06:09:20.603174 systemd[1]: Started cri-containerd-0a61dec682f5523671f7539a60af2ee6a813d3a78ce84bdce413199ac854e1f8.scope - libcontainer container 0a61dec682f5523671f7539a60af2ee6a813d3a78ce84bdce413199ac854e1f8. Jul 7 06:09:20.706993 containerd[1934]: time="2025-07-07T06:09:20.706865905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wz248,Uid:9367e0d8-d668-49c9-a822-54d38724a036,Namespace:kube-system,Attempt:0,} returns sandbox id \"d560b5626131f9104c84ad4b8bb28dfdd64e09528eeb39fc1a17e3557871760c\"" Jul 7 06:09:20.713065 containerd[1934]: time="2025-07-07T06:09:20.713026893Z" level=info msg="CreateContainer within sandbox \"d560b5626131f9104c84ad4b8bb28dfdd64e09528eeb39fc1a17e3557871760c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 06:09:20.715136 containerd[1934]: time="2025-07-07T06:09:20.715104525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4lsq9,Uid:237bbdd4-cd68-437e-81e8-84fdc5182ca9,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a61dec682f5523671f7539a60af2ee6a813d3a78ce84bdce413199ac854e1f8\"" Jul 7 06:09:20.719682 containerd[1934]: time="2025-07-07T06:09:20.719021089Z" level=info msg="CreateContainer within sandbox \"0a61dec682f5523671f7539a60af2ee6a813d3a78ce84bdce413199ac854e1f8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 06:09:20.743219 containerd[1934]: time="2025-07-07T06:09:20.743111763Z" level=info msg="Container d9116aea5fc52b2c31c7e271ffee332e7f7c1602d1cdd723f0bf5a675505f82e: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:09:20.743882 containerd[1934]: time="2025-07-07T06:09:20.743847433Z" level=info msg="Container 285f027075328e72d896931eb79e339a60c217f6caf80ed9cbf3a3bbe5b8df1c: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:09:20.750128 containerd[1934]: time="2025-07-07T06:09:20.750070455Z" level=info msg="CreateContainer within sandbox \"0a61dec682f5523671f7539a60af2ee6a813d3a78ce84bdce413199ac854e1f8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d9116aea5fc52b2c31c7e271ffee332e7f7c1602d1cdd723f0bf5a675505f82e\"" Jul 7 06:09:20.750877 containerd[1934]: time="2025-07-07T06:09:20.750844053Z" level=info msg="StartContainer for \"d9116aea5fc52b2c31c7e271ffee332e7f7c1602d1cdd723f0bf5a675505f82e\"" Jul 7 06:09:20.753242 containerd[1934]: time="2025-07-07T06:09:20.753208196Z" level=info msg="connecting to shim d9116aea5fc52b2c31c7e271ffee332e7f7c1602d1cdd723f0bf5a675505f82e" address="unix:///run/containerd/s/5363a333e889149500baf0bf612517e6d351bfaf757cb2a15eedde52223d89d9" protocol=ttrpc version=3 Jul 7 06:09:20.758509 containerd[1934]: time="2025-07-07T06:09:20.758453931Z" level=info msg="CreateContainer within sandbox \"d560b5626131f9104c84ad4b8bb28dfdd64e09528eeb39fc1a17e3557871760c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"285f027075328e72d896931eb79e339a60c217f6caf80ed9cbf3a3bbe5b8df1c\"" Jul 7 06:09:20.759440 containerd[1934]: time="2025-07-07T06:09:20.759386360Z" level=info msg="StartContainer for \"285f027075328e72d896931eb79e339a60c217f6caf80ed9cbf3a3bbe5b8df1c\"" Jul 7 06:09:20.762050 containerd[1934]: 
time="2025-07-07T06:09:20.761987986Z" level=info msg="connecting to shim 285f027075328e72d896931eb79e339a60c217f6caf80ed9cbf3a3bbe5b8df1c" address="unix:///run/containerd/s/3086f0fb48ee18fdf2d560313fafb3efb8e1d64d810dddebd1e7ee39482d7123" protocol=ttrpc version=3 Jul 7 06:09:20.787877 systemd[1]: Started cri-containerd-d9116aea5fc52b2c31c7e271ffee332e7f7c1602d1cdd723f0bf5a675505f82e.scope - libcontainer container d9116aea5fc52b2c31c7e271ffee332e7f7c1602d1cdd723f0bf5a675505f82e. Jul 7 06:09:20.791723 systemd[1]: Started cri-containerd-285f027075328e72d896931eb79e339a60c217f6caf80ed9cbf3a3bbe5b8df1c.scope - libcontainer container 285f027075328e72d896931eb79e339a60c217f6caf80ed9cbf3a3bbe5b8df1c. Jul 7 06:09:20.848933 containerd[1934]: time="2025-07-07T06:09:20.848887117Z" level=info msg="StartContainer for \"d9116aea5fc52b2c31c7e271ffee332e7f7c1602d1cdd723f0bf5a675505f82e\" returns successfully" Jul 7 06:09:20.850332 containerd[1934]: time="2025-07-07T06:09:20.850300090Z" level=info msg="StartContainer for \"285f027075328e72d896931eb79e339a60c217f6caf80ed9cbf3a3bbe5b8df1c\" returns successfully" Jul 7 06:09:20.984260 kubelet[3267]: I0707 06:09:20.984117 3267 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-4lsq9" podStartSLOduration=27.984096671 podStartE2EDuration="27.984096671s" podCreationTimestamp="2025-07-07 06:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:09:20.983098059 +0000 UTC m=+32.383238860" watchObservedRunningTime="2025-07-07 06:09:20.984096671 +0000 UTC m=+32.384237441" Jul 7 06:09:21.000083 kubelet[3267]: I0707 06:09:20.999954 3267 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-wz248" podStartSLOduration=27.999936476 podStartE2EDuration="27.999936476s" podCreationTimestamp="2025-07-07 06:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:09:20.999507013 +0000 UTC m=+32.399647784" watchObservedRunningTime="2025-07-07 06:09:20.999936476 +0000 UTC m=+32.400077248" Jul 7 06:09:21.452160 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2521798068.mount: Deactivated successfully. Jul 7 06:09:22.404793 kubelet[3267]: I0707 06:09:22.404606 3267 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:09:24.957816 systemd[1]: Started sshd@7-172.31.29.6:22-139.178.89.65:51208.service - OpenSSH per-connection server daemon (139.178.89.65:51208). Jul 7 06:09:25.163181 sshd[4752]: Accepted publickey for core from 139.178.89.65 port 51208 ssh2: RSA SHA256:1kMr/NVCVvXsWyefEhF1pHl3N2KP5iBIRut6ncABJco Jul 7 06:09:25.165192 sshd-session[4752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:09:25.170706 systemd-logind[1873]: New session 8 of user core. Jul 7 06:09:25.174839 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 7 06:09:25.968946 sshd[4754]: Connection closed by 139.178.89.65 port 51208 Jul 7 06:09:25.969867 sshd-session[4752]: pam_unix(sshd:session): session closed for user core Jul 7 06:09:25.975251 systemd[1]: sshd@7-172.31.29.6:22-139.178.89.65:51208.service: Deactivated successfully. Jul 7 06:09:25.978516 systemd[1]: session-8.scope: Deactivated successfully. Jul 7 06:09:25.980491 systemd-logind[1873]: Session 8 logged out. Waiting for processes to exit. 
Jul 7 06:09:25.982486 systemd-logind[1873]: Removed session 8. Jul 7 06:09:31.003854 systemd[1]: Started sshd@8-172.31.29.6:22-139.178.89.65:60358.service - OpenSSH per-connection server daemon (139.178.89.65:60358). Jul 7 06:09:31.178381 sshd[4771]: Accepted publickey for core from 139.178.89.65 port 60358 ssh2: RSA SHA256:1kMr/NVCVvXsWyefEhF1pHl3N2KP5iBIRut6ncABJco Jul 7 06:09:31.179819 sshd-session[4771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:09:31.185267 systemd-logind[1873]: New session 9 of user core. Jul 7 06:09:31.189861 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 7 06:09:31.398311 sshd[4774]: Connection closed by 139.178.89.65 port 60358 Jul 7 06:09:31.398854 sshd-session[4771]: pam_unix(sshd:session): session closed for user core Jul 7 06:09:31.402699 systemd[1]: sshd@8-172.31.29.6:22-139.178.89.65:60358.service: Deactivated successfully. Jul 7 06:09:31.404611 systemd[1]: session-9.scope: Deactivated successfully. Jul 7 06:09:31.405590 systemd-logind[1873]: Session 9 logged out. Waiting for processes to exit. Jul 7 06:09:31.407783 systemd-logind[1873]: Removed session 9. Jul 7 06:09:36.433161 systemd[1]: Started sshd@9-172.31.29.6:22-139.178.89.65:60360.service - OpenSSH per-connection server daemon (139.178.89.65:60360). Jul 7 06:09:36.621979 sshd[4788]: Accepted publickey for core from 139.178.89.65 port 60360 ssh2: RSA SHA256:1kMr/NVCVvXsWyefEhF1pHl3N2KP5iBIRut6ncABJco Jul 7 06:09:36.623480 sshd-session[4788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:09:36.628496 systemd-logind[1873]: New session 10 of user core. Jul 7 06:09:36.632860 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 7 06:09:36.837622 sshd[4790]: Connection closed by 139.178.89.65 port 60360 Jul 7 06:09:36.838589 sshd-session[4788]: pam_unix(sshd:session): session closed for user core Jul 7 06:09:36.847694 systemd[1]: sshd@9-172.31.29.6:22-139.178.89.65:60360.service: Deactivated successfully. Jul 7 06:09:36.847749 systemd-logind[1873]: Session 10 logged out. Waiting for processes to exit. Jul 7 06:09:36.850183 systemd[1]: session-10.scope: Deactivated successfully. Jul 7 06:09:36.851902 systemd-logind[1873]: Removed session 10. Jul 7 06:09:41.872794 systemd[1]: Started sshd@10-172.31.29.6:22-139.178.89.65:46192.service - OpenSSH per-connection server daemon (139.178.89.65:46192). Jul 7 06:09:42.061588 sshd[4803]: Accepted publickey for core from 139.178.89.65 port 46192 ssh2: RSA SHA256:1kMr/NVCVvXsWyefEhF1pHl3N2KP5iBIRut6ncABJco Jul 7 06:09:42.063063 sshd-session[4803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:09:42.067959 systemd-logind[1873]: New session 11 of user core. Jul 7 06:09:42.080867 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 7 06:09:42.273539 sshd[4805]: Connection closed by 139.178.89.65 port 46192 Jul 7 06:09:42.274285 sshd-session[4803]: pam_unix(sshd:session): session closed for user core Jul 7 06:09:42.278173 systemd[1]: sshd@10-172.31.29.6:22-139.178.89.65:46192.service: Deactivated successfully. Jul 7 06:09:42.280573 systemd[1]: session-11.scope: Deactivated successfully. Jul 7 06:09:42.282689 systemd-logind[1873]: Session 11 logged out. Waiting for processes to exit. Jul 7 06:09:42.285141 systemd-logind[1873]: Removed session 11. Jul 7 06:09:42.306181 systemd[1]: Started sshd@11-172.31.29.6:22-139.178.89.65:46202.service - OpenSSH per-connection server daemon (139.178.89.65:46202). 
Jul 7 06:09:42.478811 sshd[4818]: Accepted publickey for core from 139.178.89.65 port 46202 ssh2: RSA SHA256:1kMr/NVCVvXsWyefEhF1pHl3N2KP5iBIRut6ncABJco Jul 7 06:09:42.480168 sshd-session[4818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:09:42.485627 systemd-logind[1873]: New session 12 of user core. Jul 7 06:09:42.488831 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 7 06:09:42.721831 sshd[4820]: Connection closed by 139.178.89.65 port 46202 Jul 7 06:09:42.722929 sshd-session[4818]: pam_unix(sshd:session): session closed for user core Jul 7 06:09:42.730352 systemd-logind[1873]: Session 12 logged out. Waiting for processes to exit. Jul 7 06:09:42.731140 systemd[1]: sshd@11-172.31.29.6:22-139.178.89.65:46202.service: Deactivated successfully. Jul 7 06:09:42.735242 systemd[1]: session-12.scope: Deactivated successfully. Jul 7 06:09:42.737827 systemd-logind[1873]: Removed session 12. Jul 7 06:09:42.757886 systemd[1]: Started sshd@12-172.31.29.6:22-139.178.89.65:46216.service - OpenSSH per-connection server daemon (139.178.89.65:46216). Jul 7 06:09:42.942733 sshd[4830]: Accepted publickey for core from 139.178.89.65 port 46216 ssh2: RSA SHA256:1kMr/NVCVvXsWyefEhF1pHl3N2KP5iBIRut6ncABJco Jul 7 06:09:42.944131 sshd-session[4830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:09:42.949154 systemd-logind[1873]: New session 13 of user core. Jul 7 06:09:42.959884 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 7 06:09:43.152603 sshd[4832]: Connection closed by 139.178.89.65 port 46216 Jul 7 06:09:43.153168 sshd-session[4830]: pam_unix(sshd:session): session closed for user core Jul 7 06:09:43.156680 systemd[1]: sshd@12-172.31.29.6:22-139.178.89.65:46216.service: Deactivated successfully. Jul 7 06:09:43.158703 systemd[1]: session-13.scope: Deactivated successfully. Jul 7 06:09:43.159804 systemd-logind[1873]: Session 13 logged out. Waiting for processes to exit. Jul 7 06:09:43.161784 systemd-logind[1873]: Removed session 13. Jul 7 06:09:48.186670 systemd[1]: Started sshd@13-172.31.29.6:22-139.178.89.65:46228.service - OpenSSH per-connection server daemon (139.178.89.65:46228). Jul 7 06:09:48.357047 sshd[4845]: Accepted publickey for core from 139.178.89.65 port 46228 ssh2: RSA SHA256:1kMr/NVCVvXsWyefEhF1pHl3N2KP5iBIRut6ncABJco Jul 7 06:09:48.358561 sshd-session[4845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:09:48.363730 systemd-logind[1873]: New session 14 of user core. Jul 7 06:09:48.372872 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 7 06:09:48.571303 sshd[4847]: Connection closed by 139.178.89.65 port 46228 Jul 7 06:09:48.571917 sshd-session[4845]: pam_unix(sshd:session): session closed for user core Jul 7 06:09:48.575701 systemd[1]: sshd@13-172.31.29.6:22-139.178.89.65:46228.service: Deactivated successfully. Jul 7 06:09:48.577521 systemd[1]: session-14.scope: Deactivated successfully. Jul 7 06:09:48.578298 systemd-logind[1873]: Session 14 logged out. Waiting for processes to exit. Jul 7 06:09:48.579926 systemd-logind[1873]: Removed session 14. Jul 7 06:09:53.608884 systemd[1]: Started sshd@14-172.31.29.6:22-139.178.89.65:46522.service - OpenSSH per-connection server daemon (139.178.89.65:46522). 
Jul 7 06:09:53.781762 sshd[4860]: Accepted publickey for core from 139.178.89.65 port 46522 ssh2: RSA SHA256:1kMr/NVCVvXsWyefEhF1pHl3N2KP5iBIRut6ncABJco Jul 7 06:09:53.783136 sshd-session[4860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:09:53.788101 systemd-logind[1873]: New session 15 of user core. Jul 7 06:09:53.799884 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 7 06:09:53.982737 sshd[4862]: Connection closed by 139.178.89.65 port 46522 Jul 7 06:09:53.983532 sshd-session[4860]: pam_unix(sshd:session): session closed for user core Jul 7 06:09:53.987555 systemd[1]: sshd@14-172.31.29.6:22-139.178.89.65:46522.service: Deactivated successfully. Jul 7 06:09:53.989434 systemd[1]: session-15.scope: Deactivated successfully. Jul 7 06:09:53.990939 systemd-logind[1873]: Session 15 logged out. Waiting for processes to exit. Jul 7 06:09:53.992526 systemd-logind[1873]: Removed session 15. Jul 7 06:09:54.019906 systemd[1]: Started sshd@15-172.31.29.6:22-139.178.89.65:46528.service - OpenSSH per-connection server daemon (139.178.89.65:46528). Jul 7 06:09:54.197051 sshd[4874]: Accepted publickey for core from 139.178.89.65 port 46528 ssh2: RSA SHA256:1kMr/NVCVvXsWyefEhF1pHl3N2KP5iBIRut6ncABJco Jul 7 06:09:54.198497 sshd-session[4874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:09:54.203885 systemd-logind[1873]: New session 16 of user core. Jul 7 06:09:54.209869 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 7 06:09:54.856970 sshd[4876]: Connection closed by 139.178.89.65 port 46528 Jul 7 06:09:54.857641 sshd-session[4874]: pam_unix(sshd:session): session closed for user core Jul 7 06:09:54.861640 systemd[1]: sshd@15-172.31.29.6:22-139.178.89.65:46528.service: Deactivated successfully. Jul 7 06:09:54.863540 systemd[1]: session-16.scope: Deactivated successfully. Jul 7 06:09:54.866154 systemd-logind[1873]: Session 16 logged out. Waiting for processes to exit. Jul 7 06:09:54.867253 systemd-logind[1873]: Removed session 16. Jul 7 06:09:54.894245 systemd[1]: Started sshd@16-172.31.29.6:22-139.178.89.65:46540.service - OpenSSH per-connection server daemon (139.178.89.65:46540). Jul 7 06:09:55.098583 sshd[4888]: Accepted publickey for core from 139.178.89.65 port 46540 ssh2: RSA SHA256:1kMr/NVCVvXsWyefEhF1pHl3N2KP5iBIRut6ncABJco Jul 7 06:09:55.099999 sshd-session[4888]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:09:55.106017 systemd-logind[1873]: New session 17 of user core. Jul 7 06:09:55.108830 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 7 06:09:56.058086 sshd[4890]: Connection closed by 139.178.89.65 port 46540 Jul 7 06:09:56.059242 sshd-session[4888]: pam_unix(sshd:session): session closed for user core Jul 7 06:09:56.069547 systemd[1]: sshd@16-172.31.29.6:22-139.178.89.65:46540.service: Deactivated successfully. Jul 7 06:09:56.071813 systemd[1]: session-17.scope: Deactivated successfully. Jul 7 06:09:56.075169 systemd-logind[1873]: Session 17 logged out. Waiting for processes to exit. Jul 7 06:09:56.089306 systemd[1]: Started sshd@17-172.31.29.6:22-139.178.89.65:46552.service - OpenSSH per-connection server daemon (139.178.89.65:46552). Jul 7 06:09:56.090295 systemd-logind[1873]: Removed session 17. 
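From session 8 onward this stretch is a steady rhythm of short SSH sessions from 139.178.89.65: sshd accepts the key, pam_unix opens the session, the client disconnects moments later, and systemd deactivates the per-session scope. If the journal has already been reduced to (timestamp, message) pairs, pairing the logind open/close lines gives per-session durations; the helper below is an illustrative sketch, not part of any existing tool:

```python
import re

OPEN = re.compile(r"New session (\d+) of user (\w+)")
CLOSE = re.compile(r"Session (\d+) logged out")

def session_durations(entries):
    """entries: iterable of (datetime, message) tuples from systemd-logind lines."""
    opened, durations = {}, {}
    for when, msg in entries:
        if m := OPEN.search(msg):
            opened[m.group(1)] = when
        elif (m := CLOSE.search(msg)) and m.group(1) in opened:
            durations[m.group(1)] = (when - opened.pop(m.group(1))).total_seconds()
    return durations

# e.g. session 8 above: opened 06:09:25.170706, logged out 06:09:25.980491 -> ~0.81 s
```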
Jul 7 06:09:56.263597 sshd[4907]: Accepted publickey for core from 139.178.89.65 port 46552 ssh2: RSA SHA256:1kMr/NVCVvXsWyefEhF1pHl3N2KP5iBIRut6ncABJco Jul 7 06:09:56.265079 sshd-session[4907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:09:56.270905 systemd-logind[1873]: New session 18 of user core. Jul 7 06:09:56.275835 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 7 06:09:56.601846 sshd[4909]: Connection closed by 139.178.89.65 port 46552 Jul 7 06:09:56.602611 sshd-session[4907]: pam_unix(sshd:session): session closed for user core Jul 7 06:09:56.607959 systemd-logind[1873]: Session 18 logged out. Waiting for processes to exit. Jul 7 06:09:56.609069 systemd[1]: sshd@17-172.31.29.6:22-139.178.89.65:46552.service: Deactivated successfully. Jul 7 06:09:56.611583 systemd[1]: session-18.scope: Deactivated successfully. Jul 7 06:09:56.613973 systemd-logind[1873]: Removed session 18. Jul 7 06:09:56.638097 systemd[1]: Started sshd@18-172.31.29.6:22-139.178.89.65:46556.service - OpenSSH per-connection server daemon (139.178.89.65:46556). Jul 7 06:09:56.804677 sshd[4919]: Accepted publickey for core from 139.178.89.65 port 46556 ssh2: RSA SHA256:1kMr/NVCVvXsWyefEhF1pHl3N2KP5iBIRut6ncABJco Jul 7 06:09:56.806179 sshd-session[4919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:09:56.811280 systemd-logind[1873]: New session 19 of user core. Jul 7 06:09:56.818934 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 7 06:09:56.992115 sshd[4921]: Connection closed by 139.178.89.65 port 46556 Jul 7 06:09:56.992826 sshd-session[4919]: pam_unix(sshd:session): session closed for user core Jul 7 06:09:56.997752 systemd[1]: sshd@18-172.31.29.6:22-139.178.89.65:46556.service: Deactivated successfully. Jul 7 06:09:57.000163 systemd[1]: session-19.scope: Deactivated successfully. Jul 7 06:09:57.001336 systemd-logind[1873]: Session 19 logged out. Waiting for processes to exit. Jul 7 06:09:57.003802 systemd-logind[1873]: Removed session 19. Jul 7 06:10:02.027793 systemd[1]: Started sshd@19-172.31.29.6:22-139.178.89.65:46804.service - OpenSSH per-connection server daemon (139.178.89.65:46804). Jul 7 06:10:02.237074 sshd[4935]: Accepted publickey for core from 139.178.89.65 port 46804 ssh2: RSA SHA256:1kMr/NVCVvXsWyefEhF1pHl3N2KP5iBIRut6ncABJco Jul 7 06:10:02.239456 sshd-session[4935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:10:02.247938 systemd-logind[1873]: New session 20 of user core. Jul 7 06:10:02.249879 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 7 06:10:02.444796 sshd[4937]: Connection closed by 139.178.89.65 port 46804 Jul 7 06:10:02.446005 sshd-session[4935]: pam_unix(sshd:session): session closed for user core Jul 7 06:10:02.451172 systemd-logind[1873]: Session 20 logged out. Waiting for processes to exit. Jul 7 06:10:02.452178 systemd[1]: sshd@19-172.31.29.6:22-139.178.89.65:46804.service: Deactivated successfully. Jul 7 06:10:02.454930 systemd[1]: session-20.scope: Deactivated successfully. Jul 7 06:10:02.458289 systemd-logind[1873]: Removed session 20. Jul 7 06:10:07.479806 systemd[1]: Started sshd@20-172.31.29.6:22-139.178.89.65:46810.service - OpenSSH per-connection server daemon (139.178.89.65:46810). 
Jul 7 06:10:07.658518 sshd[4949]: Accepted publickey for core from 139.178.89.65 port 46810 ssh2: RSA SHA256:1kMr/NVCVvXsWyefEhF1pHl3N2KP5iBIRut6ncABJco Jul 7 06:10:07.659957 sshd-session[4949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:10:07.666699 systemd-logind[1873]: New session 21 of user core. Jul 7 06:10:07.670867 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 7 06:10:07.864825 sshd[4951]: Connection closed by 139.178.89.65 port 46810 Jul 7 06:10:07.865530 sshd-session[4949]: pam_unix(sshd:session): session closed for user core Jul 7 06:10:07.869197 systemd[1]: sshd@20-172.31.29.6:22-139.178.89.65:46810.service: Deactivated successfully. Jul 7 06:10:07.871012 systemd[1]: session-21.scope: Deactivated successfully. Jul 7 06:10:07.872250 systemd-logind[1873]: Session 21 logged out. Waiting for processes to exit. Jul 7 06:10:07.874108 systemd-logind[1873]: Removed session 21. Jul 7 06:10:12.900037 systemd[1]: Started sshd@21-172.31.29.6:22-139.178.89.65:54658.service - OpenSSH per-connection server daemon (139.178.89.65:54658). Jul 7 06:10:13.081575 sshd[4964]: Accepted publickey for core from 139.178.89.65 port 54658 ssh2: RSA SHA256:1kMr/NVCVvXsWyefEhF1pHl3N2KP5iBIRut6ncABJco Jul 7 06:10:13.083044 sshd-session[4964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:10:13.089675 systemd-logind[1873]: New session 22 of user core. Jul 7 06:10:13.096823 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 7 06:10:13.282616 sshd[4966]: Connection closed by 139.178.89.65 port 54658 Jul 7 06:10:13.283748 sshd-session[4964]: pam_unix(sshd:session): session closed for user core Jul 7 06:10:13.287537 systemd[1]: sshd@21-172.31.29.6:22-139.178.89.65:54658.service: Deactivated successfully. Jul 7 06:10:13.289577 systemd[1]: session-22.scope: Deactivated successfully. Jul 7 06:10:13.290728 systemd-logind[1873]: Session 22 logged out. Waiting for processes to exit. Jul 7 06:10:13.291836 systemd-logind[1873]: Removed session 22. Jul 7 06:10:13.318635 systemd[1]: Started sshd@22-172.31.29.6:22-139.178.89.65:54670.service - OpenSSH per-connection server daemon (139.178.89.65:54670). Jul 7 06:10:13.509908 sshd[4978]: Accepted publickey for core from 139.178.89.65 port 54670 ssh2: RSA SHA256:1kMr/NVCVvXsWyefEhF1pHl3N2KP5iBIRut6ncABJco Jul 7 06:10:13.511305 sshd-session[4978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:10:13.516706 systemd-logind[1873]: New session 23 of user core. Jul 7 06:10:13.523856 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 7 06:10:14.949559 containerd[1934]: time="2025-07-07T06:10:14.949070450Z" level=info msg="StopContainer for \"72390321a0ac76c0c942853a30fd3b51349d6b5ef46b94c1b1ed5d00adb0062f\" with timeout 30 (s)" Jul 7 06:10:14.965037 containerd[1934]: time="2025-07-07T06:10:14.964953942Z" level=info msg="Stop container \"72390321a0ac76c0c942853a30fd3b51349d6b5ef46b94c1b1ed5d00adb0062f\" with signal terminated" Jul 7 06:10:14.978205 systemd[1]: cri-containerd-72390321a0ac76c0c942853a30fd3b51349d6b5ef46b94c1b1ed5d00adb0062f.scope: Deactivated successfully. 
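The shutdown sequence that begins here follows the usual graceful-stop shape: StopContainer is issued with a grace period (30 s here, 2 s for the second container a few lines later), the task receives SIGTERM ("with signal terminated"), and only a task still alive after the timeout would be killed outright. A generic sketch of that pattern for a local child process, not containerd's actual code path:

```python
import signal
import subprocess

def stop_gracefully(proc: subprocess.Popen, timeout_s: float = 30.0) -> int:
    """Send SIGTERM, wait up to timeout_s, escalate to SIGKILL if still running."""
    proc.send_signal(signal.SIGTERM)         # "Stop container ... with signal terminated"
    try:
        return proc.wait(timeout=timeout_s)  # normal case: exits within the grace period
    except subprocess.TimeoutExpired:
        proc.kill()                          # escalation; surfaces as exit status 137
        return proc.wait()
```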
Jul 7 06:10:14.981693 containerd[1934]: time="2025-07-07T06:10:14.981516661Z" level=info msg="received exit event container_id:\"72390321a0ac76c0c942853a30fd3b51349d6b5ef46b94c1b1ed5d00adb0062f\" id:\"72390321a0ac76c0c942853a30fd3b51349d6b5ef46b94c1b1ed5d00adb0062f\" pid:4052 exited_at:{seconds:1751868614 nanos:981144991}" Jul 7 06:10:14.982068 containerd[1934]: time="2025-07-07T06:10:14.982020145Z" level=info msg="TaskExit event in podsandbox handler container_id:\"72390321a0ac76c0c942853a30fd3b51349d6b5ef46b94c1b1ed5d00adb0062f\" id:\"72390321a0ac76c0c942853a30fd3b51349d6b5ef46b94c1b1ed5d00adb0062f\" pid:4052 exited_at:{seconds:1751868614 nanos:981144991}" Jul 7 06:10:14.998732 containerd[1934]: time="2025-07-07T06:10:14.998687584Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 06:10:15.001592 containerd[1934]: time="2025-07-07T06:10:15.001171494Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1f93a0e8a6dcdc55943949a5ed426142293a0d8ba7950e2046facec95682b03d\" id:\"45ae75d38788729c06d1e490b5947752d73fa996f562ee36e84496fc1ebc3a02\" pid:5000 exited_at:{seconds:1751868614 nanos:999255090}" Jul 7 06:10:15.005031 containerd[1934]: time="2025-07-07T06:10:15.004990122Z" level=info msg="StopContainer for \"1f93a0e8a6dcdc55943949a5ed426142293a0d8ba7950e2046facec95682b03d\" with timeout 2 (s)" Jul 7 06:10:15.005820 containerd[1934]: time="2025-07-07T06:10:15.005782122Z" level=info msg="Stop container \"1f93a0e8a6dcdc55943949a5ed426142293a0d8ba7950e2046facec95682b03d\" with signal terminated" Jul 7 06:10:15.018761 systemd-networkd[1825]: lxc_health: Link DOWN Jul 7 06:10:15.018773 systemd-networkd[1825]: lxc_health: Lost carrier Jul 7 06:10:15.032192 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-72390321a0ac76c0c942853a30fd3b51349d6b5ef46b94c1b1ed5d00adb0062f-rootfs.mount: Deactivated successfully. Jul 7 06:10:15.040752 systemd[1]: cri-containerd-1f93a0e8a6dcdc55943949a5ed426142293a0d8ba7950e2046facec95682b03d.scope: Deactivated successfully. Jul 7 06:10:15.041225 systemd[1]: cri-containerd-1f93a0e8a6dcdc55943949a5ed426142293a0d8ba7950e2046facec95682b03d.scope: Consumed 7.802s CPU time, 234.7M memory peak, 124.5M read from disk, 13.3M written to disk. 
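The exited_at:{seconds:... nanos:...} fields in these TaskExit events are plain Unix epoch values, so they can be cross-checked against the journal timestamps; the event above (seconds:1751868614 nanos:981144991) lands at 06:10:14.981 UTC, matching the surrounding lines. A quick conversion:

```python
from datetime import datetime, timezone

def exited_at(seconds: int, nanos: int) -> datetime:
    return datetime.fromtimestamp(seconds + nanos / 1e9, tz=timezone.utc)

print(exited_at(1751868614, 981144991))  # 2025-07-07 06:10:14.981145+00:00
```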
Jul 7 06:10:15.043423 containerd[1934]: time="2025-07-07T06:10:15.043138288Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1f93a0e8a6dcdc55943949a5ed426142293a0d8ba7950e2046facec95682b03d\" id:\"1f93a0e8a6dcdc55943949a5ed426142293a0d8ba7950e2046facec95682b03d\" pid:4084 exited_at:{seconds:1751868615 nanos:42769830}" Jul 7 06:10:15.043423 containerd[1934]: time="2025-07-07T06:10:15.043294901Z" level=info msg="received exit event container_id:\"1f93a0e8a6dcdc55943949a5ed426142293a0d8ba7950e2046facec95682b03d\" id:\"1f93a0e8a6dcdc55943949a5ed426142293a0d8ba7950e2046facec95682b03d\" pid:4084 exited_at:{seconds:1751868615 nanos:42769830}" Jul 7 06:10:15.057668 containerd[1934]: time="2025-07-07T06:10:15.057474779Z" level=info msg="StopContainer for \"72390321a0ac76c0c942853a30fd3b51349d6b5ef46b94c1b1ed5d00adb0062f\" returns successfully" Jul 7 06:10:15.058887 containerd[1934]: time="2025-07-07T06:10:15.058738313Z" level=info msg="StopPodSandbox for \"41d54ad6cdca769a61c29206a4d06ee79f04fd52ef97b7ca50915abf74795189\"" Jul 7 06:10:15.067947 containerd[1934]: time="2025-07-07T06:10:15.067641941Z" level=info msg="Container to stop \"72390321a0ac76c0c942853a30fd3b51349d6b5ef46b94c1b1ed5d00adb0062f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 06:10:15.078797 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f93a0e8a6dcdc55943949a5ed426142293a0d8ba7950e2046facec95682b03d-rootfs.mount: Deactivated successfully. Jul 7 06:10:15.091317 containerd[1934]: time="2025-07-07T06:10:15.091234038Z" level=info msg="StopContainer for \"1f93a0e8a6dcdc55943949a5ed426142293a0d8ba7950e2046facec95682b03d\" returns successfully" Jul 7 06:10:15.092605 containerd[1934]: time="2025-07-07T06:10:15.092571955Z" level=info msg="StopPodSandbox for \"823d3a04276dcf5c1d6f2c25962f5a8e3553ef2a00a54f6f21fc5f655d5b6693\"" Jul 7 06:10:15.094373 containerd[1934]: time="2025-07-07T06:10:15.094110446Z" level=info msg="Container to stop \"1f93a0e8a6dcdc55943949a5ed426142293a0d8ba7950e2046facec95682b03d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 06:10:15.094373 containerd[1934]: time="2025-07-07T06:10:15.094151634Z" level=info msg="Container to stop \"79842cb231c46399ac5cb8c8e2f6e78a833a6f22335b63a68405afe5986eae6d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 06:10:15.094373 containerd[1934]: time="2025-07-07T06:10:15.094165279Z" level=info msg="Container to stop \"fa9622ec963b96af389efad35b10bd380f57bc62f9e89c2588d380b128363bcc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 06:10:15.094373 containerd[1934]: time="2025-07-07T06:10:15.094173307Z" level=info msg="Container to stop \"4742900a7753e4733e957fec903697be6747b5d2ccb8aacac1d05bd5534fe00e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 06:10:15.094373 containerd[1934]: time="2025-07-07T06:10:15.094182366Z" level=info msg="Container to stop \"1de8e9136a73f1e61836919032162a7feae04fce204f094fc5e0d48d7145d5a0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 06:10:15.095896 systemd[1]: cri-containerd-41d54ad6cdca769a61c29206a4d06ee79f04fd52ef97b7ca50915abf74795189.scope: Deactivated successfully. 
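The containerd entries throughout this section are space-separated key=value pairs (time=, level=, msg=, id=) with quoted, backslash-escaped values, so they can be split mechanically when chasing a single container ID through the teardown. A rough splitter written against these lines only, not a general logfmt parser:

```python
import re

# key="quoted value with \" escapes" or key=bareword
FIELD = re.compile(r'(\w+)=("(?:[^"\\]|\\.)*"|\S+)')

def parse_containerd(line: str) -> dict:
    fields = {}
    for key, val in FIELD.findall(line):
        if val.startswith('"') and val.endswith('"'):
            val = val[1:-1].replace('\\"', '"')
        fields[key] = val
    return fields

sample = 'time="2025-07-07T06:10:15.092571955Z" level=info msg="StopPodSandbox for \\"823d3a04276dcf5c1d6f2c25962f5a8e3553ef2a00a54f6f21fc5f655d5b6693\\""'
print(parse_containerd(sample)["level"])  # info
```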
Jul 7 06:10:15.106845 containerd[1934]: time="2025-07-07T06:10:15.106787188Z" level=info msg="TaskExit event in podsandbox handler container_id:\"41d54ad6cdca769a61c29206a4d06ee79f04fd52ef97b7ca50915abf74795189\" id:\"41d54ad6cdca769a61c29206a4d06ee79f04fd52ef97b7ca50915abf74795189\" pid:3497 exit_status:137 exited_at:{seconds:1751868615 nanos:106038688}" Jul 7 06:10:15.110692 systemd[1]: cri-containerd-823d3a04276dcf5c1d6f2c25962f5a8e3553ef2a00a54f6f21fc5f655d5b6693.scope: Deactivated successfully. Jul 7 06:10:15.154045 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41d54ad6cdca769a61c29206a4d06ee79f04fd52ef97b7ca50915abf74795189-rootfs.mount: Deactivated successfully. Jul 7 06:10:15.154208 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-823d3a04276dcf5c1d6f2c25962f5a8e3553ef2a00a54f6f21fc5f655d5b6693-rootfs.mount: Deactivated successfully. Jul 7 06:10:15.159118 containerd[1934]: time="2025-07-07T06:10:15.158890046Z" level=info msg="shim disconnected" id=823d3a04276dcf5c1d6f2c25962f5a8e3553ef2a00a54f6f21fc5f655d5b6693 namespace=k8s.io Jul 7 06:10:15.159118 containerd[1934]: time="2025-07-07T06:10:15.158923826Z" level=warning msg="cleaning up after shim disconnected" id=823d3a04276dcf5c1d6f2c25962f5a8e3553ef2a00a54f6f21fc5f655d5b6693 namespace=k8s.io Jul 7 06:10:15.159118 containerd[1934]: time="2025-07-07T06:10:15.158934037Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 06:10:15.159465 containerd[1934]: time="2025-07-07T06:10:15.159440865Z" level=info msg="shim disconnected" id=41d54ad6cdca769a61c29206a4d06ee79f04fd52ef97b7ca50915abf74795189 namespace=k8s.io Jul 7 06:10:15.159563 containerd[1934]: time="2025-07-07T06:10:15.159546082Z" level=warning msg="cleaning up after shim disconnected" id=41d54ad6cdca769a61c29206a4d06ee79f04fd52ef97b7ca50915abf74795189 namespace=k8s.io Jul 7 06:10:15.159680 containerd[1934]: time="2025-07-07T06:10:15.159620681Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 06:10:15.211742 containerd[1934]: time="2025-07-07T06:10:15.208186309Z" level=info msg="TaskExit event in podsandbox handler container_id:\"823d3a04276dcf5c1d6f2c25962f5a8e3553ef2a00a54f6f21fc5f655d5b6693\" id:\"823d3a04276dcf5c1d6f2c25962f5a8e3553ef2a00a54f6f21fc5f655d5b6693\" pid:3419 exit_status:137 exited_at:{seconds:1751868615 nanos:111599811}" Jul 7 06:10:15.213843 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-41d54ad6cdca769a61c29206a4d06ee79f04fd52ef97b7ca50915abf74795189-shm.mount: Deactivated successfully. 
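Both TaskExit events for the pod sandboxes report exit_status:137. By the usual wait-status convention that is 128 plus the signal number, i.e. SIGKILL (9), consistent with the pause containers being torn down by the runtime rather than exiting on their own:

```python
import signal

status = 137
print(signal.Signals(status - 128) if status > 128 else None)  # Signals.SIGKILL
```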
Jul 7 06:10:15.223383 containerd[1934]: time="2025-07-07T06:10:15.223037396Z" level=info msg="received exit event sandbox_id:\"823d3a04276dcf5c1d6f2c25962f5a8e3553ef2a00a54f6f21fc5f655d5b6693\" exit_status:137 exited_at:{seconds:1751868615 nanos:111599811}" Jul 7 06:10:15.226262 containerd[1934]: time="2025-07-07T06:10:15.226221477Z" level=info msg="TearDown network for sandbox \"41d54ad6cdca769a61c29206a4d06ee79f04fd52ef97b7ca50915abf74795189\" successfully" Jul 7 06:10:15.226262 containerd[1934]: time="2025-07-07T06:10:15.226262908Z" level=info msg="StopPodSandbox for \"41d54ad6cdca769a61c29206a4d06ee79f04fd52ef97b7ca50915abf74795189\" returns successfully" Jul 7 06:10:15.226436 containerd[1934]: time="2025-07-07T06:10:15.226415998Z" level=info msg="received exit event sandbox_id:\"41d54ad6cdca769a61c29206a4d06ee79f04fd52ef97b7ca50915abf74795189\" exit_status:137 exited_at:{seconds:1751868615 nanos:106038688}" Jul 7 06:10:15.226562 containerd[1934]: time="2025-07-07T06:10:15.226543538Z" level=info msg="TearDown network for sandbox \"823d3a04276dcf5c1d6f2c25962f5a8e3553ef2a00a54f6f21fc5f655d5b6693\" successfully" Jul 7 06:10:15.226638 containerd[1934]: time="2025-07-07T06:10:15.226625042Z" level=info msg="StopPodSandbox for \"823d3a04276dcf5c1d6f2c25962f5a8e3553ef2a00a54f6f21fc5f655d5b6693\" returns successfully" Jul 7 06:10:15.335003 kubelet[3267]: I0707 06:10:15.334953 3267 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ef4765c6-5272-4ffe-a183-f010e64a4d12-cilium-run\") pod \"ef4765c6-5272-4ffe-a183-f010e64a4d12\" (UID: \"ef4765c6-5272-4ffe-a183-f010e64a4d12\") " Jul 7 06:10:15.335003 kubelet[3267]: I0707 06:10:15.334991 3267 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ef4765c6-5272-4ffe-a183-f010e64a4d12-cni-path\") pod \"ef4765c6-5272-4ffe-a183-f010e64a4d12\" (UID: \"ef4765c6-5272-4ffe-a183-f010e64a4d12\") " Jul 7 06:10:15.335427 kubelet[3267]: I0707 06:10:15.335015 3267 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8ade2bf0-23bc-4777-a8c6-8e99c030d492-cilium-config-path\") pod \"8ade2bf0-23bc-4777-a8c6-8e99c030d492\" (UID: \"8ade2bf0-23bc-4777-a8c6-8e99c030d492\") " Jul 7 06:10:15.335427 kubelet[3267]: I0707 06:10:15.335036 3267 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ef4765c6-5272-4ffe-a183-f010e64a4d12-xtables-lock\") pod \"ef4765c6-5272-4ffe-a183-f010e64a4d12\" (UID: \"ef4765c6-5272-4ffe-a183-f010e64a4d12\") " Jul 7 06:10:15.335427 kubelet[3267]: I0707 06:10:15.335054 3267 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ef4765c6-5272-4ffe-a183-f010e64a4d12-hubble-tls\") pod \"ef4765c6-5272-4ffe-a183-f010e64a4d12\" (UID: \"ef4765c6-5272-4ffe-a183-f010e64a4d12\") " Jul 7 06:10:15.335427 kubelet[3267]: I0707 06:10:15.335067 3267 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ef4765c6-5272-4ffe-a183-f010e64a4d12-host-proc-sys-kernel\") pod \"ef4765c6-5272-4ffe-a183-f010e64a4d12\" (UID: \"ef4765c6-5272-4ffe-a183-f010e64a4d12\") " Jul 7 06:10:15.335427 kubelet[3267]: I0707 06:10:15.335082 3267 reconciler_common.go:162] "operationExecutor.UnmountVolume started 
for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ef4765c6-5272-4ffe-a183-f010e64a4d12-etc-cni-netd\") pod \"ef4765c6-5272-4ffe-a183-f010e64a4d12\" (UID: \"ef4765c6-5272-4ffe-a183-f010e64a4d12\") " Jul 7 06:10:15.335427 kubelet[3267]: I0707 06:10:15.335098 3267 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qv6qj\" (UniqueName: \"kubernetes.io/projected/ef4765c6-5272-4ffe-a183-f010e64a4d12-kube-api-access-qv6qj\") pod \"ef4765c6-5272-4ffe-a183-f010e64a4d12\" (UID: \"ef4765c6-5272-4ffe-a183-f010e64a4d12\") " Jul 7 06:10:15.335584 kubelet[3267]: I0707 06:10:15.335113 3267 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ef4765c6-5272-4ffe-a183-f010e64a4d12-hostproc\") pod \"ef4765c6-5272-4ffe-a183-f010e64a4d12\" (UID: \"ef4765c6-5272-4ffe-a183-f010e64a4d12\") " Jul 7 06:10:15.335584 kubelet[3267]: I0707 06:10:15.335130 3267 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ef4765c6-5272-4ffe-a183-f010e64a4d12-cilium-cgroup\") pod \"ef4765c6-5272-4ffe-a183-f010e64a4d12\" (UID: \"ef4765c6-5272-4ffe-a183-f010e64a4d12\") " Jul 7 06:10:15.335584 kubelet[3267]: I0707 06:10:15.335144 3267 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ef4765c6-5272-4ffe-a183-f010e64a4d12-bpf-maps\") pod \"ef4765c6-5272-4ffe-a183-f010e64a4d12\" (UID: \"ef4765c6-5272-4ffe-a183-f010e64a4d12\") " Jul 7 06:10:15.335584 kubelet[3267]: I0707 06:10:15.335158 3267 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ef4765c6-5272-4ffe-a183-f010e64a4d12-host-proc-sys-net\") pod \"ef4765c6-5272-4ffe-a183-f010e64a4d12\" (UID: \"ef4765c6-5272-4ffe-a183-f010e64a4d12\") " Jul 7 06:10:15.335584 kubelet[3267]: I0707 06:10:15.335176 3267 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ng98q\" (UniqueName: \"kubernetes.io/projected/8ade2bf0-23bc-4777-a8c6-8e99c030d492-kube-api-access-ng98q\") pod \"8ade2bf0-23bc-4777-a8c6-8e99c030d492\" (UID: \"8ade2bf0-23bc-4777-a8c6-8e99c030d492\") " Jul 7 06:10:15.335584 kubelet[3267]: I0707 06:10:15.335192 3267 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ef4765c6-5272-4ffe-a183-f010e64a4d12-clustermesh-secrets\") pod \"ef4765c6-5272-4ffe-a183-f010e64a4d12\" (UID: \"ef4765c6-5272-4ffe-a183-f010e64a4d12\") " Jul 7 06:10:15.335761 kubelet[3267]: I0707 06:10:15.335205 3267 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef4765c6-5272-4ffe-a183-f010e64a4d12-lib-modules\") pod \"ef4765c6-5272-4ffe-a183-f010e64a4d12\" (UID: \"ef4765c6-5272-4ffe-a183-f010e64a4d12\") " Jul 7 06:10:15.335761 kubelet[3267]: I0707 06:10:15.335222 3267 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ef4765c6-5272-4ffe-a183-f010e64a4d12-cilium-config-path\") pod \"ef4765c6-5272-4ffe-a183-f010e64a4d12\" (UID: \"ef4765c6-5272-4ffe-a183-f010e64a4d12\") " Jul 7 06:10:15.337208 kubelet[3267]: I0707 06:10:15.337176 3267 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/ef4765c6-5272-4ffe-a183-f010e64a4d12-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ef4765c6-5272-4ffe-a183-f010e64a4d12" (UID: "ef4765c6-5272-4ffe-a183-f010e64a4d12"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 7 06:10:15.337721 kubelet[3267]: I0707 06:10:15.337234 3267 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef4765c6-5272-4ffe-a183-f010e64a4d12-hostproc" (OuterVolumeSpecName: "hostproc") pod "ef4765c6-5272-4ffe-a183-f010e64a4d12" (UID: "ef4765c6-5272-4ffe-a183-f010e64a4d12"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 06:10:15.337721 kubelet[3267]: I0707 06:10:15.337253 3267 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef4765c6-5272-4ffe-a183-f010e64a4d12-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ef4765c6-5272-4ffe-a183-f010e64a4d12" (UID: "ef4765c6-5272-4ffe-a183-f010e64a4d12"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 06:10:15.337721 kubelet[3267]: I0707 06:10:15.337267 3267 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef4765c6-5272-4ffe-a183-f010e64a4d12-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ef4765c6-5272-4ffe-a183-f010e64a4d12" (UID: "ef4765c6-5272-4ffe-a183-f010e64a4d12"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 06:10:15.337721 kubelet[3267]: I0707 06:10:15.337279 3267 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef4765c6-5272-4ffe-a183-f010e64a4d12-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ef4765c6-5272-4ffe-a183-f010e64a4d12" (UID: "ef4765c6-5272-4ffe-a183-f010e64a4d12"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 06:10:15.339774 kubelet[3267]: I0707 06:10:15.339738 3267 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef4765c6-5272-4ffe-a183-f010e64a4d12-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ef4765c6-5272-4ffe-a183-f010e64a4d12" (UID: "ef4765c6-5272-4ffe-a183-f010e64a4d12"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 06:10:15.339852 kubelet[3267]: I0707 06:10:15.339782 3267 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef4765c6-5272-4ffe-a183-f010e64a4d12-cni-path" (OuterVolumeSpecName: "cni-path") pod "ef4765c6-5272-4ffe-a183-f010e64a4d12" (UID: "ef4765c6-5272-4ffe-a183-f010e64a4d12"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 06:10:15.341539 kubelet[3267]: I0707 06:10:15.341492 3267 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef4765c6-5272-4ffe-a183-f010e64a4d12-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ef4765c6-5272-4ffe-a183-f010e64a4d12" (UID: "ef4765c6-5272-4ffe-a183-f010e64a4d12"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 06:10:15.341737 kubelet[3267]: I0707 06:10:15.341689 3267 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ade2bf0-23bc-4777-a8c6-8e99c030d492-kube-api-access-ng98q" (OuterVolumeSpecName: "kube-api-access-ng98q") pod "8ade2bf0-23bc-4777-a8c6-8e99c030d492" (UID: "8ade2bf0-23bc-4777-a8c6-8e99c030d492"). InnerVolumeSpecName "kube-api-access-ng98q". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 7 06:10:15.342101 kubelet[3267]: I0707 06:10:15.342063 3267 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef4765c6-5272-4ffe-a183-f010e64a4d12-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ef4765c6-5272-4ffe-a183-f010e64a4d12" (UID: "ef4765c6-5272-4ffe-a183-f010e64a4d12"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 06:10:15.342101 kubelet[3267]: I0707 06:10:15.342094 3267 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef4765c6-5272-4ffe-a183-f010e64a4d12-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ef4765c6-5272-4ffe-a183-f010e64a4d12" (UID: "ef4765c6-5272-4ffe-a183-f010e64a4d12"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 06:10:15.343160 kubelet[3267]: I0707 06:10:15.342943 3267 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef4765c6-5272-4ffe-a183-f010e64a4d12-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ef4765c6-5272-4ffe-a183-f010e64a4d12" (UID: "ef4765c6-5272-4ffe-a183-f010e64a4d12"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 06:10:15.343160 kubelet[3267]: I0707 06:10:15.343036 3267 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef4765c6-5272-4ffe-a183-f010e64a4d12-kube-api-access-qv6qj" (OuterVolumeSpecName: "kube-api-access-qv6qj") pod "ef4765c6-5272-4ffe-a183-f010e64a4d12" (UID: "ef4765c6-5272-4ffe-a183-f010e64a4d12"). InnerVolumeSpecName "kube-api-access-qv6qj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 7 06:10:15.343160 kubelet[3267]: I0707 06:10:15.343113 3267 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ade2bf0-23bc-4777-a8c6-8e99c030d492-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8ade2bf0-23bc-4777-a8c6-8e99c030d492" (UID: "8ade2bf0-23bc-4777-a8c6-8e99c030d492"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 7 06:10:15.344060 kubelet[3267]: I0707 06:10:15.344026 3267 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef4765c6-5272-4ffe-a183-f010e64a4d12-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ef4765c6-5272-4ffe-a183-f010e64a4d12" (UID: "ef4765c6-5272-4ffe-a183-f010e64a4d12"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 7 06:10:15.345435 kubelet[3267]: I0707 06:10:15.345408 3267 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef4765c6-5272-4ffe-a183-f010e64a4d12-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ef4765c6-5272-4ffe-a183-f010e64a4d12" (UID: "ef4765c6-5272-4ffe-a183-f010e64a4d12"). 
InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 7 06:10:15.438566 kubelet[3267]: I0707 06:10:15.438526 3267 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ef4765c6-5272-4ffe-a183-f010e64a4d12-cilium-cgroup\") on node \"ip-172-31-29-6\" DevicePath \"\"" Jul 7 06:10:15.438905 kubelet[3267]: I0707 06:10:15.438579 3267 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ef4765c6-5272-4ffe-a183-f010e64a4d12-bpf-maps\") on node \"ip-172-31-29-6\" DevicePath \"\"" Jul 7 06:10:15.438905 kubelet[3267]: I0707 06:10:15.438591 3267 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ef4765c6-5272-4ffe-a183-f010e64a4d12-host-proc-sys-net\") on node \"ip-172-31-29-6\" DevicePath \"\"" Jul 7 06:10:15.438905 kubelet[3267]: I0707 06:10:15.438605 3267 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ng98q\" (UniqueName: \"kubernetes.io/projected/8ade2bf0-23bc-4777-a8c6-8e99c030d492-kube-api-access-ng98q\") on node \"ip-172-31-29-6\" DevicePath \"\"" Jul 7 06:10:15.438905 kubelet[3267]: I0707 06:10:15.438619 3267 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ef4765c6-5272-4ffe-a183-f010e64a4d12-clustermesh-secrets\") on node \"ip-172-31-29-6\" DevicePath \"\"" Jul 7 06:10:15.438905 kubelet[3267]: I0707 06:10:15.438628 3267 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef4765c6-5272-4ffe-a183-f010e64a4d12-lib-modules\") on node \"ip-172-31-29-6\" DevicePath \"\"" Jul 7 06:10:15.438905 kubelet[3267]: I0707 06:10:15.438637 3267 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ef4765c6-5272-4ffe-a183-f010e64a4d12-cilium-config-path\") on node \"ip-172-31-29-6\" DevicePath \"\"" Jul 7 06:10:15.438905 kubelet[3267]: I0707 06:10:15.438659 3267 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ef4765c6-5272-4ffe-a183-f010e64a4d12-cilium-run\") on node \"ip-172-31-29-6\" DevicePath \"\"" Jul 7 06:10:15.438905 kubelet[3267]: I0707 06:10:15.438667 3267 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ef4765c6-5272-4ffe-a183-f010e64a4d12-cni-path\") on node \"ip-172-31-29-6\" DevicePath \"\"" Jul 7 06:10:15.439118 kubelet[3267]: I0707 06:10:15.438678 3267 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8ade2bf0-23bc-4777-a8c6-8e99c030d492-cilium-config-path\") on node \"ip-172-31-29-6\" DevicePath \"\"" Jul 7 06:10:15.439118 kubelet[3267]: I0707 06:10:15.438686 3267 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ef4765c6-5272-4ffe-a183-f010e64a4d12-xtables-lock\") on node \"ip-172-31-29-6\" DevicePath \"\"" Jul 7 06:10:15.439118 kubelet[3267]: I0707 06:10:15.438697 3267 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ef4765c6-5272-4ffe-a183-f010e64a4d12-hubble-tls\") on node \"ip-172-31-29-6\" DevicePath \"\"" Jul 7 06:10:15.439118 kubelet[3267]: I0707 06:10:15.438705 3267 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/ef4765c6-5272-4ffe-a183-f010e64a4d12-host-proc-sys-kernel\") on node \"ip-172-31-29-6\" DevicePath \"\"" Jul 7 06:10:15.439118 kubelet[3267]: I0707 06:10:15.438713 3267 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ef4765c6-5272-4ffe-a183-f010e64a4d12-etc-cni-netd\") on node \"ip-172-31-29-6\" DevicePath \"\"" Jul 7 06:10:15.439118 kubelet[3267]: I0707 06:10:15.438722 3267 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qv6qj\" (UniqueName: \"kubernetes.io/projected/ef4765c6-5272-4ffe-a183-f010e64a4d12-kube-api-access-qv6qj\") on node \"ip-172-31-29-6\" DevicePath \"\"" Jul 7 06:10:15.439118 kubelet[3267]: I0707 06:10:15.438732 3267 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ef4765c6-5272-4ffe-a183-f010e64a4d12-hostproc\") on node \"ip-172-31-29-6\" DevicePath \"\"" Jul 7 06:10:16.029941 systemd[1]: var-lib-kubelet-pods-8ade2bf0\x2d23bc\x2d4777\x2da8c6\x2d8e99c030d492-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dng98q.mount: Deactivated successfully. Jul 7 06:10:16.030069 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-823d3a04276dcf5c1d6f2c25962f5a8e3553ef2a00a54f6f21fc5f655d5b6693-shm.mount: Deactivated successfully. Jul 7 06:10:16.030131 systemd[1]: var-lib-kubelet-pods-ef4765c6\x2d5272\x2d4ffe\x2da183\x2df010e64a4d12-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqv6qj.mount: Deactivated successfully. Jul 7 06:10:16.030190 systemd[1]: var-lib-kubelet-pods-ef4765c6\x2d5272\x2d4ffe\x2da183\x2df010e64a4d12-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 7 06:10:16.030253 systemd[1]: var-lib-kubelet-pods-ef4765c6\x2d5272\x2d4ffe\x2da183\x2df010e64a4d12-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 7 06:10:16.101459 kubelet[3267]: I0707 06:10:16.101429 3267 scope.go:117] "RemoveContainer" containerID="1f93a0e8a6dcdc55943949a5ed426142293a0d8ba7950e2046facec95682b03d" Jul 7 06:10:16.105188 containerd[1934]: time="2025-07-07T06:10:16.104981904Z" level=info msg="RemoveContainer for \"1f93a0e8a6dcdc55943949a5ed426142293a0d8ba7950e2046facec95682b03d\"" Jul 7 06:10:16.110607 systemd[1]: Removed slice kubepods-burstable-podef4765c6_5272_4ffe_a183_f010e64a4d12.slice - libcontainer container kubepods-burstable-podef4765c6_5272_4ffe_a183_f010e64a4d12.slice. Jul 7 06:10:16.110905 systemd[1]: kubepods-burstable-podef4765c6_5272_4ffe_a183_f010e64a4d12.slice: Consumed 7.891s CPU time, 235.1M memory peak, 125.6M read from disk, 13.3M written to disk. Jul 7 06:10:16.117213 systemd[1]: Removed slice kubepods-besteffort-pod8ade2bf0_23bc_4777_a8c6_8e99c030d492.slice - libcontainer container kubepods-besteffort-pod8ade2bf0_23bc_4777_a8c6_8e99c030d492.slice. 
Jul 7 06:10:16.123084 containerd[1934]: time="2025-07-07T06:10:16.122991863Z" level=info msg="RemoveContainer for \"1f93a0e8a6dcdc55943949a5ed426142293a0d8ba7950e2046facec95682b03d\" returns successfully" Jul 7 06:10:16.123367 kubelet[3267]: I0707 06:10:16.123304 3267 scope.go:117] "RemoveContainer" containerID="4742900a7753e4733e957fec903697be6747b5d2ccb8aacac1d05bd5534fe00e" Jul 7 06:10:16.125037 containerd[1934]: time="2025-07-07T06:10:16.125003636Z" level=info msg="RemoveContainer for \"4742900a7753e4733e957fec903697be6747b5d2ccb8aacac1d05bd5534fe00e\"" Jul 7 06:10:16.139150 containerd[1934]: time="2025-07-07T06:10:16.139092782Z" level=info msg="RemoveContainer for \"4742900a7753e4733e957fec903697be6747b5d2ccb8aacac1d05bd5534fe00e\" returns successfully" Jul 7 06:10:16.139373 kubelet[3267]: I0707 06:10:16.139351 3267 scope.go:117] "RemoveContainer" containerID="fa9622ec963b96af389efad35b10bd380f57bc62f9e89c2588d380b128363bcc" Jul 7 06:10:16.142896 containerd[1934]: time="2025-07-07T06:10:16.142796863Z" level=info msg="RemoveContainer for \"fa9622ec963b96af389efad35b10bd380f57bc62f9e89c2588d380b128363bcc\"" Jul 7 06:10:16.150114 containerd[1934]: time="2025-07-07T06:10:16.150072554Z" level=info msg="RemoveContainer for \"fa9622ec963b96af389efad35b10bd380f57bc62f9e89c2588d380b128363bcc\" returns successfully" Jul 7 06:10:16.150500 kubelet[3267]: I0707 06:10:16.150476 3267 scope.go:117] "RemoveContainer" containerID="1de8e9136a73f1e61836919032162a7feae04fce204f094fc5e0d48d7145d5a0" Jul 7 06:10:16.152415 containerd[1934]: time="2025-07-07T06:10:16.152384766Z" level=info msg="RemoveContainer for \"1de8e9136a73f1e61836919032162a7feae04fce204f094fc5e0d48d7145d5a0\"" Jul 7 06:10:16.157967 containerd[1934]: time="2025-07-07T06:10:16.157932243Z" level=info msg="RemoveContainer for \"1de8e9136a73f1e61836919032162a7feae04fce204f094fc5e0d48d7145d5a0\" returns successfully" Jul 7 06:10:16.158196 kubelet[3267]: I0707 06:10:16.158155 3267 scope.go:117] "RemoveContainer" containerID="79842cb231c46399ac5cb8c8e2f6e78a833a6f22335b63a68405afe5986eae6d" Jul 7 06:10:16.160113 containerd[1934]: time="2025-07-07T06:10:16.159621373Z" level=info msg="RemoveContainer for \"79842cb231c46399ac5cb8c8e2f6e78a833a6f22335b63a68405afe5986eae6d\"" Jul 7 06:10:16.164773 containerd[1934]: time="2025-07-07T06:10:16.164737220Z" level=info msg="RemoveContainer for \"79842cb231c46399ac5cb8c8e2f6e78a833a6f22335b63a68405afe5986eae6d\" returns successfully" Jul 7 06:10:16.165023 kubelet[3267]: I0707 06:10:16.164984 3267 scope.go:117] "RemoveContainer" containerID="1f93a0e8a6dcdc55943949a5ed426142293a0d8ba7950e2046facec95682b03d" Jul 7 06:10:16.165468 containerd[1934]: time="2025-07-07T06:10:16.165409328Z" level=error msg="ContainerStatus for \"1f93a0e8a6dcdc55943949a5ed426142293a0d8ba7950e2046facec95682b03d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1f93a0e8a6dcdc55943949a5ed426142293a0d8ba7950e2046facec95682b03d\": not found" Jul 7 06:10:16.167931 kubelet[3267]: E0707 06:10:16.167867 3267 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1f93a0e8a6dcdc55943949a5ed426142293a0d8ba7950e2046facec95682b03d\": not found" containerID="1f93a0e8a6dcdc55943949a5ed426142293a0d8ba7950e2046facec95682b03d" Jul 7 06:10:16.168023 kubelet[3267]: I0707 06:10:16.167937 3267 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"1f93a0e8a6dcdc55943949a5ed426142293a0d8ba7950e2046facec95682b03d"} err="failed to get container status \"1f93a0e8a6dcdc55943949a5ed426142293a0d8ba7950e2046facec95682b03d\": rpc error: code = NotFound desc = an error occurred when try to find container \"1f93a0e8a6dcdc55943949a5ed426142293a0d8ba7950e2046facec95682b03d\": not found" Jul 7 06:10:16.168023 kubelet[3267]: I0707 06:10:16.168014 3267 scope.go:117] "RemoveContainer" containerID="4742900a7753e4733e957fec903697be6747b5d2ccb8aacac1d05bd5534fe00e" Jul 7 06:10:16.168290 containerd[1934]: time="2025-07-07T06:10:16.168229922Z" level=error msg="ContainerStatus for \"4742900a7753e4733e957fec903697be6747b5d2ccb8aacac1d05bd5534fe00e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4742900a7753e4733e957fec903697be6747b5d2ccb8aacac1d05bd5534fe00e\": not found" Jul 7 06:10:16.168391 kubelet[3267]: E0707 06:10:16.168354 3267 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4742900a7753e4733e957fec903697be6747b5d2ccb8aacac1d05bd5534fe00e\": not found" containerID="4742900a7753e4733e957fec903697be6747b5d2ccb8aacac1d05bd5534fe00e" Jul 7 06:10:16.168391 kubelet[3267]: I0707 06:10:16.168375 3267 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4742900a7753e4733e957fec903697be6747b5d2ccb8aacac1d05bd5534fe00e"} err="failed to get container status \"4742900a7753e4733e957fec903697be6747b5d2ccb8aacac1d05bd5534fe00e\": rpc error: code = NotFound desc = an error occurred when try to find container \"4742900a7753e4733e957fec903697be6747b5d2ccb8aacac1d05bd5534fe00e\": not found" Jul 7 06:10:16.168482 kubelet[3267]: I0707 06:10:16.168397 3267 scope.go:117] "RemoveContainer" containerID="fa9622ec963b96af389efad35b10bd380f57bc62f9e89c2588d380b128363bcc" Jul 7 06:10:16.168579 containerd[1934]: time="2025-07-07T06:10:16.168549518Z" level=error msg="ContainerStatus for \"fa9622ec963b96af389efad35b10bd380f57bc62f9e89c2588d380b128363bcc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fa9622ec963b96af389efad35b10bd380f57bc62f9e89c2588d380b128363bcc\": not found" Jul 7 06:10:16.168728 kubelet[3267]: E0707 06:10:16.168704 3267 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fa9622ec963b96af389efad35b10bd380f57bc62f9e89c2588d380b128363bcc\": not found" containerID="fa9622ec963b96af389efad35b10bd380f57bc62f9e89c2588d380b128363bcc" Jul 7 06:10:16.168778 kubelet[3267]: I0707 06:10:16.168732 3267 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fa9622ec963b96af389efad35b10bd380f57bc62f9e89c2588d380b128363bcc"} err="failed to get container status \"fa9622ec963b96af389efad35b10bd380f57bc62f9e89c2588d380b128363bcc\": rpc error: code = NotFound desc = an error occurred when try to find container \"fa9622ec963b96af389efad35b10bd380f57bc62f9e89c2588d380b128363bcc\": not found" Jul 7 06:10:16.168778 kubelet[3267]: I0707 06:10:16.168747 3267 scope.go:117] "RemoveContainer" containerID="1de8e9136a73f1e61836919032162a7feae04fce204f094fc5e0d48d7145d5a0" Jul 7 06:10:16.168940 containerd[1934]: time="2025-07-07T06:10:16.168915003Z" level=error msg="ContainerStatus for \"1de8e9136a73f1e61836919032162a7feae04fce204f094fc5e0d48d7145d5a0\" failed" error="rpc error: 
code = NotFound desc = an error occurred when try to find container \"1de8e9136a73f1e61836919032162a7feae04fce204f094fc5e0d48d7145d5a0\": not found" Jul 7 06:10:16.169031 kubelet[3267]: E0707 06:10:16.169001 3267 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1de8e9136a73f1e61836919032162a7feae04fce204f094fc5e0d48d7145d5a0\": not found" containerID="1de8e9136a73f1e61836919032162a7feae04fce204f094fc5e0d48d7145d5a0" Jul 7 06:10:16.169031 kubelet[3267]: I0707 06:10:16.169016 3267 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1de8e9136a73f1e61836919032162a7feae04fce204f094fc5e0d48d7145d5a0"} err="failed to get container status \"1de8e9136a73f1e61836919032162a7feae04fce204f094fc5e0d48d7145d5a0\": rpc error: code = NotFound desc = an error occurred when try to find container \"1de8e9136a73f1e61836919032162a7feae04fce204f094fc5e0d48d7145d5a0\": not found" Jul 7 06:10:16.169031 kubelet[3267]: I0707 06:10:16.169029 3267 scope.go:117] "RemoveContainer" containerID="79842cb231c46399ac5cb8c8e2f6e78a833a6f22335b63a68405afe5986eae6d" Jul 7 06:10:16.169431 containerd[1934]: time="2025-07-07T06:10:16.169300402Z" level=error msg="ContainerStatus for \"79842cb231c46399ac5cb8c8e2f6e78a833a6f22335b63a68405afe5986eae6d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"79842cb231c46399ac5cb8c8e2f6e78a833a6f22335b63a68405afe5986eae6d\": not found" Jul 7 06:10:16.169570 kubelet[3267]: E0707 06:10:16.169540 3267 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"79842cb231c46399ac5cb8c8e2f6e78a833a6f22335b63a68405afe5986eae6d\": not found" containerID="79842cb231c46399ac5cb8c8e2f6e78a833a6f22335b63a68405afe5986eae6d" Jul 7 06:10:16.169641 kubelet[3267]: I0707 06:10:16.169579 3267 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"79842cb231c46399ac5cb8c8e2f6e78a833a6f22335b63a68405afe5986eae6d"} err="failed to get container status \"79842cb231c46399ac5cb8c8e2f6e78a833a6f22335b63a68405afe5986eae6d\": rpc error: code = NotFound desc = an error occurred when try to find container \"79842cb231c46399ac5cb8c8e2f6e78a833a6f22335b63a68405afe5986eae6d\": not found" Jul 7 06:10:16.169641 kubelet[3267]: I0707 06:10:16.169594 3267 scope.go:117] "RemoveContainer" containerID="72390321a0ac76c0c942853a30fd3b51349d6b5ef46b94c1b1ed5d00adb0062f" Jul 7 06:10:16.173541 containerd[1934]: time="2025-07-07T06:10:16.173516014Z" level=info msg="RemoveContainer for \"72390321a0ac76c0c942853a30fd3b51349d6b5ef46b94c1b1ed5d00adb0062f\"" Jul 7 06:10:16.179504 containerd[1934]: time="2025-07-07T06:10:16.179389781Z" level=info msg="RemoveContainer for \"72390321a0ac76c0c942853a30fd3b51349d6b5ef46b94c1b1ed5d00adb0062f\" returns successfully" Jul 7 06:10:16.741001 kubelet[3267]: I0707 06:10:16.740907 3267 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ade2bf0-23bc-4777-a8c6-8e99c030d492" path="/var/lib/kubelet/pods/8ade2bf0-23bc-4777-a8c6-8e99c030d492/volumes" Jul 7 06:10:16.741865 kubelet[3267]: I0707 06:10:16.741836 3267 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef4765c6-5272-4ffe-a183-f010e64a4d12" path="/var/lib/kubelet/pods/ef4765c6-5272-4ffe-a183-f010e64a4d12/volumes" Jul 7 06:10:16.905900 sshd[4980]: Connection closed by 139.178.89.65 port 54670 Jul 7 
06:10:16.906836 sshd-session[4978]: pam_unix(sshd:session): session closed for user core Jul 7 06:10:16.910819 systemd[1]: sshd@22-172.31.29.6:22-139.178.89.65:54670.service: Deactivated successfully. Jul 7 06:10:16.913546 systemd[1]: session-23.scope: Deactivated successfully. Jul 7 06:10:16.914454 systemd-logind[1873]: Session 23 logged out. Waiting for processes to exit. Jul 7 06:10:16.916178 systemd-logind[1873]: Removed session 23. Jul 7 06:10:16.940082 systemd[1]: Started sshd@23-172.31.29.6:22-139.178.89.65:54674.service - OpenSSH per-connection server daemon (139.178.89.65:54674). Jul 7 06:10:17.128117 sshd[5135]: Accepted publickey for core from 139.178.89.65 port 54674 ssh2: RSA SHA256:1kMr/NVCVvXsWyefEhF1pHl3N2KP5iBIRut6ncABJco Jul 7 06:10:17.129859 sshd-session[5135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:10:17.136695 systemd-logind[1873]: New session 24 of user core. Jul 7 06:10:17.143883 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 7 06:10:17.620624 ntpd[1868]: Deleting interface #12 lxc_health, fe80::484:72ff:fe15:692c%8#123, interface stats: received=0, sent=0, dropped=0, active_time=58 secs Jul 7 06:10:17.621146 ntpd[1868]: 7 Jul 06:10:17 ntpd[1868]: Deleting interface #12 lxc_health, fe80::484:72ff:fe15:692c%8#123, interface stats: received=0, sent=0, dropped=0, active_time=58 secs Jul 7 06:10:18.003422 sshd[5137]: Connection closed by 139.178.89.65 port 54674 Jul 7 06:10:18.003928 sshd-session[5135]: pam_unix(sshd:session): session closed for user core Jul 7 06:10:18.008555 systemd-logind[1873]: Session 24 logged out. Waiting for processes to exit. Jul 7 06:10:18.009518 systemd[1]: sshd@23-172.31.29.6:22-139.178.89.65:54674.service: Deactivated successfully. Jul 7 06:10:18.012472 systemd[1]: session-24.scope: Deactivated successfully. Jul 7 06:10:18.015556 systemd-logind[1873]: Removed session 24. Jul 7 06:10:18.021228 kubelet[3267]: I0707 06:10:18.020565 3267 memory_manager.go:355] "RemoveStaleState removing state" podUID="ef4765c6-5272-4ffe-a183-f010e64a4d12" containerName="cilium-agent" Jul 7 06:10:18.021228 kubelet[3267]: I0707 06:10:18.020588 3267 memory_manager.go:355] "RemoveStaleState removing state" podUID="8ade2bf0-23bc-4777-a8c6-8e99c030d492" containerName="cilium-operator" Jul 7 06:10:18.042005 systemd[1]: Created slice kubepods-burstable-pod1a82dd18_1b27_43d0_be06_cf2720eeecce.slice - libcontainer container kubepods-burstable-pod1a82dd18_1b27_43d0_be06_cf2720eeecce.slice. Jul 7 06:10:18.044985 systemd[1]: Started sshd@24-172.31.29.6:22-139.178.89.65:54690.service - OpenSSH per-connection server daemon (139.178.89.65:54690). 
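The RemoveContainer / ContainerStatus exchange above shows the kubelet asking the runtime about container IDs it has already deleted and receiving `NotFound` RPC errors, which it logs and then ignores. As an illustrative, non-authoritative sketch of that same pattern against containerd's Go client (assumptions: the default containerd socket path, and the `k8s.io` namespace that the CRI plugin uses, as seen later in this log):

```go
package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/errdefs"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Assumption: containerd listens on its default socket path.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// One of the IDs the kubelet queried after it had already been removed.
	id := "1f93a0e8a6dcdc55943949a5ed426142293a0d8ba7950e2046facec95682b03d"

	if _, err := client.LoadContainer(ctx, id); err != nil {
		if errdefs.IsNotFound(err) {
			// Same condition as the "rpc error: code = NotFound" lines above:
			// the container is already gone, so there is nothing left to clean up.
			fmt.Printf("container %s not found (already removed)\n", id)
			return
		}
		log.Fatal(err)
	}
	fmt.Println("container still present")
}
```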
Jul 7 06:10:18.155919 kubelet[3267]: I0707 06:10:18.155811 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1a82dd18-1b27-43d0-be06-cf2720eeecce-hostproc\") pod \"cilium-dmcjw\" (UID: \"1a82dd18-1b27-43d0-be06-cf2720eeecce\") " pod="kube-system/cilium-dmcjw" Jul 7 06:10:18.155919 kubelet[3267]: I0707 06:10:18.155895 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1a82dd18-1b27-43d0-be06-cf2720eeecce-cilium-run\") pod \"cilium-dmcjw\" (UID: \"1a82dd18-1b27-43d0-be06-cf2720eeecce\") " pod="kube-system/cilium-dmcjw" Jul 7 06:10:18.155919 kubelet[3267]: I0707 06:10:18.155925 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1a82dd18-1b27-43d0-be06-cf2720eeecce-host-proc-sys-net\") pod \"cilium-dmcjw\" (UID: \"1a82dd18-1b27-43d0-be06-cf2720eeecce\") " pod="kube-system/cilium-dmcjw" Jul 7 06:10:18.156178 kubelet[3267]: I0707 06:10:18.155962 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1a82dd18-1b27-43d0-be06-cf2720eeecce-etc-cni-netd\") pod \"cilium-dmcjw\" (UID: \"1a82dd18-1b27-43d0-be06-cf2720eeecce\") " pod="kube-system/cilium-dmcjw" Jul 7 06:10:18.156178 kubelet[3267]: I0707 06:10:18.155989 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1a82dd18-1b27-43d0-be06-cf2720eeecce-lib-modules\") pod \"cilium-dmcjw\" (UID: \"1a82dd18-1b27-43d0-be06-cf2720eeecce\") " pod="kube-system/cilium-dmcjw" Jul 7 06:10:18.156178 kubelet[3267]: I0707 06:10:18.156012 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1a82dd18-1b27-43d0-be06-cf2720eeecce-xtables-lock\") pod \"cilium-dmcjw\" (UID: \"1a82dd18-1b27-43d0-be06-cf2720eeecce\") " pod="kube-system/cilium-dmcjw" Jul 7 06:10:18.156178 kubelet[3267]: I0707 06:10:18.156035 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6dqb\" (UniqueName: \"kubernetes.io/projected/1a82dd18-1b27-43d0-be06-cf2720eeecce-kube-api-access-k6dqb\") pod \"cilium-dmcjw\" (UID: \"1a82dd18-1b27-43d0-be06-cf2720eeecce\") " pod="kube-system/cilium-dmcjw" Jul 7 06:10:18.156178 kubelet[3267]: I0707 06:10:18.156059 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1a82dd18-1b27-43d0-be06-cf2720eeecce-host-proc-sys-kernel\") pod \"cilium-dmcjw\" (UID: \"1a82dd18-1b27-43d0-be06-cf2720eeecce\") " pod="kube-system/cilium-dmcjw" Jul 7 06:10:18.156178 kubelet[3267]: I0707 06:10:18.156084 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1a82dd18-1b27-43d0-be06-cf2720eeecce-bpf-maps\") pod \"cilium-dmcjw\" (UID: \"1a82dd18-1b27-43d0-be06-cf2720eeecce\") " pod="kube-system/cilium-dmcjw" Jul 7 06:10:18.156417 kubelet[3267]: I0707 06:10:18.156106 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/1a82dd18-1b27-43d0-be06-cf2720eeecce-cilium-config-path\") pod \"cilium-dmcjw\" (UID: \"1a82dd18-1b27-43d0-be06-cf2720eeecce\") " pod="kube-system/cilium-dmcjw" Jul 7 06:10:18.156417 kubelet[3267]: I0707 06:10:18.156133 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1a82dd18-1b27-43d0-be06-cf2720eeecce-hubble-tls\") pod \"cilium-dmcjw\" (UID: \"1a82dd18-1b27-43d0-be06-cf2720eeecce\") " pod="kube-system/cilium-dmcjw" Jul 7 06:10:18.156417 kubelet[3267]: I0707 06:10:18.156166 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1a82dd18-1b27-43d0-be06-cf2720eeecce-cilium-cgroup\") pod \"cilium-dmcjw\" (UID: \"1a82dd18-1b27-43d0-be06-cf2720eeecce\") " pod="kube-system/cilium-dmcjw" Jul 7 06:10:18.156417 kubelet[3267]: I0707 06:10:18.156203 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1a82dd18-1b27-43d0-be06-cf2720eeecce-clustermesh-secrets\") pod \"cilium-dmcjw\" (UID: \"1a82dd18-1b27-43d0-be06-cf2720eeecce\") " pod="kube-system/cilium-dmcjw" Jul 7 06:10:18.156417 kubelet[3267]: I0707 06:10:18.156234 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1a82dd18-1b27-43d0-be06-cf2720eeecce-cni-path\") pod \"cilium-dmcjw\" (UID: \"1a82dd18-1b27-43d0-be06-cf2720eeecce\") " pod="kube-system/cilium-dmcjw" Jul 7 06:10:18.156417 kubelet[3267]: I0707 06:10:18.156258 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1a82dd18-1b27-43d0-be06-cf2720eeecce-cilium-ipsec-secrets\") pod \"cilium-dmcjw\" (UID: \"1a82dd18-1b27-43d0-be06-cf2720eeecce\") " pod="kube-system/cilium-dmcjw" Jul 7 06:10:18.231790 sshd[5149]: Accepted publickey for core from 139.178.89.65 port 54690 ssh2: RSA SHA256:1kMr/NVCVvXsWyefEhF1pHl3N2KP5iBIRut6ncABJco Jul 7 06:10:18.233481 sshd-session[5149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:10:18.239208 systemd-logind[1873]: New session 25 of user core. Jul 7 06:10:18.243828 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 7 06:10:18.349973 containerd[1934]: time="2025-07-07T06:10:18.349868255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dmcjw,Uid:1a82dd18-1b27-43d0-be06-cf2720eeecce,Namespace:kube-system,Attempt:0,}" Jul 7 06:10:18.364789 sshd[5151]: Connection closed by 139.178.89.65 port 54690 Jul 7 06:10:18.366554 sshd-session[5149]: pam_unix(sshd:session): session closed for user core Jul 7 06:10:18.374110 systemd[1]: sshd@24-172.31.29.6:22-139.178.89.65:54690.service: Deactivated successfully. Jul 7 06:10:18.376902 systemd[1]: session-25.scope: Deactivated successfully. Jul 7 06:10:18.380819 systemd-logind[1873]: Session 25 logged out. Waiting for processes to exit. 
Jul 7 06:10:18.387402 containerd[1934]: time="2025-07-07T06:10:18.387318628Z" level=info msg="connecting to shim 4ab89ff70a49f7d4da699a64e41b279412d1e01db92b0bae813ff88c5750e4bb" address="unix:///run/containerd/s/11a4185080d14b859cb84aa3bac6e87607de7589658072ff41a3b2d0c8088176" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:10:18.400193 systemd[1]: Started sshd@25-172.31.29.6:22-139.178.89.65:54704.service - OpenSSH per-connection server daemon (139.178.89.65:54704). Jul 7 06:10:18.401236 systemd-logind[1873]: Removed session 25. Jul 7 06:10:18.439833 systemd[1]: Started cri-containerd-4ab89ff70a49f7d4da699a64e41b279412d1e01db92b0bae813ff88c5750e4bb.scope - libcontainer container 4ab89ff70a49f7d4da699a64e41b279412d1e01db92b0bae813ff88c5750e4bb. Jul 7 06:10:18.471431 containerd[1934]: time="2025-07-07T06:10:18.471368410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dmcjw,Uid:1a82dd18-1b27-43d0-be06-cf2720eeecce,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ab89ff70a49f7d4da699a64e41b279412d1e01db92b0bae813ff88c5750e4bb\"" Jul 7 06:10:18.477016 containerd[1934]: time="2025-07-07T06:10:18.476986092Z" level=info msg="CreateContainer within sandbox \"4ab89ff70a49f7d4da699a64e41b279412d1e01db92b0bae813ff88c5750e4bb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 7 06:10:18.492804 containerd[1934]: time="2025-07-07T06:10:18.492169570Z" level=info msg="Container b88e9343770fdb4b0150f5fe7284744b23a552757fcbcd6df5577a51c702fab9: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:10:18.503557 containerd[1934]: time="2025-07-07T06:10:18.503512565Z" level=info msg="CreateContainer within sandbox \"4ab89ff70a49f7d4da699a64e41b279412d1e01db92b0bae813ff88c5750e4bb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b88e9343770fdb4b0150f5fe7284744b23a552757fcbcd6df5577a51c702fab9\"" Jul 7 06:10:18.505227 containerd[1934]: time="2025-07-07T06:10:18.504375526Z" level=info msg="StartContainer for \"b88e9343770fdb4b0150f5fe7284744b23a552757fcbcd6df5577a51c702fab9\"" Jul 7 06:10:18.506020 containerd[1934]: time="2025-07-07T06:10:18.505994865Z" level=info msg="connecting to shim b88e9343770fdb4b0150f5fe7284744b23a552757fcbcd6df5577a51c702fab9" address="unix:///run/containerd/s/11a4185080d14b859cb84aa3bac6e87607de7589658072ff41a3b2d0c8088176" protocol=ttrpc version=3 Jul 7 06:10:18.529921 systemd[1]: Started cri-containerd-b88e9343770fdb4b0150f5fe7284744b23a552757fcbcd6df5577a51c702fab9.scope - libcontainer container b88e9343770fdb4b0150f5fe7284744b23a552757fcbcd6df5577a51c702fab9. Jul 7 06:10:18.574129 containerd[1934]: time="2025-07-07T06:10:18.574092582Z" level=info msg="StartContainer for \"b88e9343770fdb4b0150f5fe7284744b23a552757fcbcd6df5577a51c702fab9\" returns successfully" Jul 7 06:10:18.582514 sshd[5180]: Accepted publickey for core from 139.178.89.65 port 54704 ssh2: RSA SHA256:1kMr/NVCVvXsWyefEhF1pHl3N2KP5iBIRut6ncABJco Jul 7 06:10:18.584087 sshd-session[5180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:10:18.590883 systemd-logind[1873]: New session 26 of user core. 
Jul 7 06:10:18.596282 containerd[1934]: time="2025-07-07T06:10:18.596253286Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b88e9343770fdb4b0150f5fe7284744b23a552757fcbcd6df5577a51c702fab9\" id:\"b88e9343770fdb4b0150f5fe7284744b23a552757fcbcd6df5577a51c702fab9\" pid:5223 exited_at:{seconds:1751868618 nanos:595945153}" Jul 7 06:10:18.596479 containerd[1934]: time="2025-07-07T06:10:18.596254183Z" level=info msg="received exit event container_id:\"b88e9343770fdb4b0150f5fe7284744b23a552757fcbcd6df5577a51c702fab9\" id:\"b88e9343770fdb4b0150f5fe7284744b23a552757fcbcd6df5577a51c702fab9\" pid:5223 exited_at:{seconds:1751868618 nanos:595945153}" Jul 7 06:10:18.596900 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 7 06:10:18.597349 systemd[1]: cri-containerd-b88e9343770fdb4b0150f5fe7284744b23a552757fcbcd6df5577a51c702fab9.scope: Deactivated successfully. Jul 7 06:10:18.598322 systemd[1]: cri-containerd-b88e9343770fdb4b0150f5fe7284744b23a552757fcbcd6df5577a51c702fab9.scope: Consumed 24ms CPU time, 9.4M memory peak, 2.9M read from disk. Jul 7 06:10:18.891093 kubelet[3267]: E0707 06:10:18.891052 3267 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 7 06:10:19.117337 containerd[1934]: time="2025-07-07T06:10:19.117021811Z" level=info msg="CreateContainer within sandbox \"4ab89ff70a49f7d4da699a64e41b279412d1e01db92b0bae813ff88c5750e4bb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 7 06:10:19.128822 containerd[1934]: time="2025-07-07T06:10:19.128770942Z" level=info msg="Container 5653acb8d0ac2d68c7b1445786d2f251f577a77245e1b76daf01592e1d668135: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:10:19.139841 containerd[1934]: time="2025-07-07T06:10:19.139734345Z" level=info msg="CreateContainer within sandbox \"4ab89ff70a49f7d4da699a64e41b279412d1e01db92b0bae813ff88c5750e4bb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5653acb8d0ac2d68c7b1445786d2f251f577a77245e1b76daf01592e1d668135\"" Jul 7 06:10:19.140318 containerd[1934]: time="2025-07-07T06:10:19.140294380Z" level=info msg="StartContainer for \"5653acb8d0ac2d68c7b1445786d2f251f577a77245e1b76daf01592e1d668135\"" Jul 7 06:10:19.142715 containerd[1934]: time="2025-07-07T06:10:19.142425720Z" level=info msg="connecting to shim 5653acb8d0ac2d68c7b1445786d2f251f577a77245e1b76daf01592e1d668135" address="unix:///run/containerd/s/11a4185080d14b859cb84aa3bac6e87607de7589658072ff41a3b2d0c8088176" protocol=ttrpc version=3 Jul 7 06:10:19.166859 systemd[1]: Started cri-containerd-5653acb8d0ac2d68c7b1445786d2f251f577a77245e1b76daf01592e1d668135.scope - libcontainer container 5653acb8d0ac2d68c7b1445786d2f251f577a77245e1b76daf01592e1d668135. Jul 7 06:10:19.203806 containerd[1934]: time="2025-07-07T06:10:19.203760768Z" level=info msg="StartContainer for \"5653acb8d0ac2d68c7b1445786d2f251f577a77245e1b76daf01592e1d668135\" returns successfully" Jul 7 06:10:19.212701 systemd[1]: cri-containerd-5653acb8d0ac2d68c7b1445786d2f251f577a77245e1b76daf01592e1d668135.scope: Deactivated successfully. Jul 7 06:10:19.213344 systemd[1]: cri-containerd-5653acb8d0ac2d68c7b1445786d2f251f577a77245e1b76daf01592e1d668135.scope: Consumed 19ms CPU time, 7.5M memory peak, 2.1M read from disk. 
Jul 7 06:10:19.213462 containerd[1934]: time="2025-07-07T06:10:19.213361193Z" level=info msg="received exit event container_id:\"5653acb8d0ac2d68c7b1445786d2f251f577a77245e1b76daf01592e1d668135\" id:\"5653acb8d0ac2d68c7b1445786d2f251f577a77245e1b76daf01592e1d668135\" pid:5275 exited_at:{seconds:1751868619 nanos:212870080}" Jul 7 06:10:19.214781 containerd[1934]: time="2025-07-07T06:10:19.214717230Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5653acb8d0ac2d68c7b1445786d2f251f577a77245e1b76daf01592e1d668135\" id:\"5653acb8d0ac2d68c7b1445786d2f251f577a77245e1b76daf01592e1d668135\" pid:5275 exited_at:{seconds:1751868619 nanos:212870080}" Jul 7 06:10:20.123674 containerd[1934]: time="2025-07-07T06:10:20.122729945Z" level=info msg="CreateContainer within sandbox \"4ab89ff70a49f7d4da699a64e41b279412d1e01db92b0bae813ff88c5750e4bb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 7 06:10:20.136780 containerd[1934]: time="2025-07-07T06:10:20.136739245Z" level=info msg="Container 9b951097b6017893f046bd82d85f95cf7704c8d9c4ba1d50a790f255165150da: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:10:20.152955 containerd[1934]: time="2025-07-07T06:10:20.152911793Z" level=info msg="CreateContainer within sandbox \"4ab89ff70a49f7d4da699a64e41b279412d1e01db92b0bae813ff88c5750e4bb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9b951097b6017893f046bd82d85f95cf7704c8d9c4ba1d50a790f255165150da\"" Jul 7 06:10:20.153818 containerd[1934]: time="2025-07-07T06:10:20.153791262Z" level=info msg="StartContainer for \"9b951097b6017893f046bd82d85f95cf7704c8d9c4ba1d50a790f255165150da\"" Jul 7 06:10:20.155247 containerd[1934]: time="2025-07-07T06:10:20.155192433Z" level=info msg="connecting to shim 9b951097b6017893f046bd82d85f95cf7704c8d9c4ba1d50a790f255165150da" address="unix:///run/containerd/s/11a4185080d14b859cb84aa3bac6e87607de7589658072ff41a3b2d0c8088176" protocol=ttrpc version=3 Jul 7 06:10:20.181861 systemd[1]: Started cri-containerd-9b951097b6017893f046bd82d85f95cf7704c8d9c4ba1d50a790f255165150da.scope - libcontainer container 9b951097b6017893f046bd82d85f95cf7704c8d9c4ba1d50a790f255165150da. Jul 7 06:10:20.226049 containerd[1934]: time="2025-07-07T06:10:20.226013402Z" level=info msg="StartContainer for \"9b951097b6017893f046bd82d85f95cf7704c8d9c4ba1d50a790f255165150da\" returns successfully" Jul 7 06:10:20.231327 systemd[1]: cri-containerd-9b951097b6017893f046bd82d85f95cf7704c8d9c4ba1d50a790f255165150da.scope: Deactivated successfully. Jul 7 06:10:20.231571 systemd[1]: cri-containerd-9b951097b6017893f046bd82d85f95cf7704c8d9c4ba1d50a790f255165150da.scope: Consumed 24ms CPU time, 5.8M memory peak, 1.1M read from disk. 
Jul 7 06:10:20.233319 containerd[1934]: time="2025-07-07T06:10:20.233283466Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9b951097b6017893f046bd82d85f95cf7704c8d9c4ba1d50a790f255165150da\" id:\"9b951097b6017893f046bd82d85f95cf7704c8d9c4ba1d50a790f255165150da\" pid:5319 exited_at:{seconds:1751868620 nanos:232678828}" Jul 7 06:10:20.233543 containerd[1934]: time="2025-07-07T06:10:20.233435386Z" level=info msg="received exit event container_id:\"9b951097b6017893f046bd82d85f95cf7704c8d9c4ba1d50a790f255165150da\" id:\"9b951097b6017893f046bd82d85f95cf7704c8d9c4ba1d50a790f255165150da\" pid:5319 exited_at:{seconds:1751868620 nanos:232678828}" Jul 7 06:10:20.255300 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9b951097b6017893f046bd82d85f95cf7704c8d9c4ba1d50a790f255165150da-rootfs.mount: Deactivated successfully. Jul 7 06:10:20.379148 kubelet[3267]: I0707 06:10:20.379043 3267 setters.go:602] "Node became not ready" node="ip-172-31-29-6" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-07T06:10:20Z","lastTransitionTime":"2025-07-07T06:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 7 06:10:21.129217 containerd[1934]: time="2025-07-07T06:10:21.129094818Z" level=info msg="CreateContainer within sandbox \"4ab89ff70a49f7d4da699a64e41b279412d1e01db92b0bae813ff88c5750e4bb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 7 06:10:21.148196 containerd[1934]: time="2025-07-07T06:10:21.145308352Z" level=info msg="Container ed5039b63b1e9d579a1a9179754b5e534b920e1744c3baf72411f6ddebd522ba: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:10:21.161122 containerd[1934]: time="2025-07-07T06:10:21.160759159Z" level=info msg="CreateContainer within sandbox \"4ab89ff70a49f7d4da699a64e41b279412d1e01db92b0bae813ff88c5750e4bb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ed5039b63b1e9d579a1a9179754b5e534b920e1744c3baf72411f6ddebd522ba\"" Jul 7 06:10:21.162762 containerd[1934]: time="2025-07-07T06:10:21.162708329Z" level=info msg="StartContainer for \"ed5039b63b1e9d579a1a9179754b5e534b920e1744c3baf72411f6ddebd522ba\"" Jul 7 06:10:21.163721 containerd[1934]: time="2025-07-07T06:10:21.163683190Z" level=info msg="connecting to shim ed5039b63b1e9d579a1a9179754b5e534b920e1744c3baf72411f6ddebd522ba" address="unix:///run/containerd/s/11a4185080d14b859cb84aa3bac6e87607de7589658072ff41a3b2d0c8088176" protocol=ttrpc version=3 Jul 7 06:10:21.195924 systemd[1]: Started cri-containerd-ed5039b63b1e9d579a1a9179754b5e534b920e1744c3baf72411f6ddebd522ba.scope - libcontainer container ed5039b63b1e9d579a1a9179754b5e534b920e1744c3baf72411f6ddebd522ba. Jul 7 06:10:21.224325 systemd[1]: cri-containerd-ed5039b63b1e9d579a1a9179754b5e534b920e1744c3baf72411f6ddebd522ba.scope: Deactivated successfully. 
Jul 7 06:10:21.225962 containerd[1934]: time="2025-07-07T06:10:21.225830647Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ed5039b63b1e9d579a1a9179754b5e534b920e1744c3baf72411f6ddebd522ba\" id:\"ed5039b63b1e9d579a1a9179754b5e534b920e1744c3baf72411f6ddebd522ba\" pid:5358 exited_at:{seconds:1751868621 nanos:225562852}" Jul 7 06:10:21.228905 containerd[1934]: time="2025-07-07T06:10:21.228807356Z" level=info msg="received exit event container_id:\"ed5039b63b1e9d579a1a9179754b5e534b920e1744c3baf72411f6ddebd522ba\" id:\"ed5039b63b1e9d579a1a9179754b5e534b920e1744c3baf72411f6ddebd522ba\" pid:5358 exited_at:{seconds:1751868621 nanos:225562852}" Jul 7 06:10:21.230581 containerd[1934]: time="2025-07-07T06:10:21.230520801Z" level=info msg="StartContainer for \"ed5039b63b1e9d579a1a9179754b5e534b920e1744c3baf72411f6ddebd522ba\" returns successfully" Jul 7 06:10:21.253235 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ed5039b63b1e9d579a1a9179754b5e534b920e1744c3baf72411f6ddebd522ba-rootfs.mount: Deactivated successfully. Jul 7 06:10:22.158668 containerd[1934]: time="2025-07-07T06:10:22.158356424Z" level=info msg="CreateContainer within sandbox \"4ab89ff70a49f7d4da699a64e41b279412d1e01db92b0bae813ff88c5750e4bb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 7 06:10:22.177038 containerd[1934]: time="2025-07-07T06:10:22.176573513Z" level=info msg="Container 480df53df09e80a390e4ad65ef8e6ee2e41027b4bbb1352c6b7ac7790f223982: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:10:22.196609 containerd[1934]: time="2025-07-07T06:10:22.196554074Z" level=info msg="CreateContainer within sandbox \"4ab89ff70a49f7d4da699a64e41b279412d1e01db92b0bae813ff88c5750e4bb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"480df53df09e80a390e4ad65ef8e6ee2e41027b4bbb1352c6b7ac7790f223982\"" Jul 7 06:10:22.198628 containerd[1934]: time="2025-07-07T06:10:22.197510926Z" level=info msg="StartContainer for \"480df53df09e80a390e4ad65ef8e6ee2e41027b4bbb1352c6b7ac7790f223982\"" Jul 7 06:10:22.198830 containerd[1934]: time="2025-07-07T06:10:22.198796012Z" level=info msg="connecting to shim 480df53df09e80a390e4ad65ef8e6ee2e41027b4bbb1352c6b7ac7790f223982" address="unix:///run/containerd/s/11a4185080d14b859cb84aa3bac6e87607de7589658072ff41a3b2d0c8088176" protocol=ttrpc version=3 Jul 7 06:10:22.224870 systemd[1]: Started cri-containerd-480df53df09e80a390e4ad65ef8e6ee2e41027b4bbb1352c6b7ac7790f223982.scope - libcontainer container 480df53df09e80a390e4ad65ef8e6ee2e41027b4bbb1352c6b7ac7790f223982. 
Jul 7 06:10:22.268575 containerd[1934]: time="2025-07-07T06:10:22.268513631Z" level=info msg="StartContainer for \"480df53df09e80a390e4ad65ef8e6ee2e41027b4bbb1352c6b7ac7790f223982\" returns successfully" Jul 7 06:10:22.406048 containerd[1934]: time="2025-07-07T06:10:22.405922588Z" level=info msg="TaskExit event in podsandbox handler container_id:\"480df53df09e80a390e4ad65ef8e6ee2e41027b4bbb1352c6b7ac7790f223982\" id:\"843002bd68484ab9f853a8ca112e9b57a018bd55d538885ad35db5abb30acdd7\" pid:5427 exited_at:{seconds:1751868622 nanos:405306217}" Jul 7 06:10:22.968680 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Jul 7 06:10:25.233384 containerd[1934]: time="2025-07-07T06:10:25.233259391Z" level=info msg="TaskExit event in podsandbox handler container_id:\"480df53df09e80a390e4ad65ef8e6ee2e41027b4bbb1352c6b7ac7790f223982\" id:\"415dd91fa462d935663f975f02f1ab0da6ee0cc48f00a30861ab8089c1160a0f\" pid:5639 exit_status:1 exited_at:{seconds:1751868625 nanos:231227133}" Jul 7 06:10:26.043914 systemd-networkd[1825]: lxc_health: Link UP Jul 7 06:10:26.055229 (udev-worker)[5915]: Network interface NamePolicy= disabled on kernel command line. Jul 7 06:10:26.062805 systemd-networkd[1825]: lxc_health: Gained carrier Jul 7 06:10:26.372596 kubelet[3267]: I0707 06:10:26.372470 3267 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dmcjw" podStartSLOduration=9.37245223 podStartE2EDuration="9.37245223s" podCreationTimestamp="2025-07-07 06:10:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:10:23.180034072 +0000 UTC m=+94.580174844" watchObservedRunningTime="2025-07-07 06:10:26.37245223 +0000 UTC m=+97.772593002" Jul 7 06:10:27.406864 systemd-networkd[1825]: lxc_health: Gained IPv6LL Jul 7 06:10:27.520449 containerd[1934]: time="2025-07-07T06:10:27.520410199Z" level=info msg="TaskExit event in podsandbox handler container_id:\"480df53df09e80a390e4ad65ef8e6ee2e41027b4bbb1352c6b7ac7790f223982\" id:\"313277f8bdeec859908b17dd14ab21d9fd023ae45892dcd7736b2d4917940ddb\" pid:5951 exited_at:{seconds:1751868627 nanos:519907476}" Jul 7 06:10:29.620682 ntpd[1868]: Listen normally on 15 lxc_health [fe80::2cd2:16ff:fe27:cfbb%14]:123 Jul 7 06:10:29.622082 ntpd[1868]: 7 Jul 06:10:29 ntpd[1868]: Listen normally on 15 lxc_health [fe80::2cd2:16ff:fe27:cfbb%14]:123 Jul 7 06:10:29.699773 containerd[1934]: time="2025-07-07T06:10:29.699028655Z" level=info msg="TaskExit event in podsandbox handler container_id:\"480df53df09e80a390e4ad65ef8e6ee2e41027b4bbb1352c6b7ac7790f223982\" id:\"a64fcc02bf9af637dd8b0f36966779f076036494a88f6740f324b3b7e53cb259\" pid:5981 exited_at:{seconds:1751868629 nanos:698060423}" Jul 7 06:10:31.828256 containerd[1934]: time="2025-07-07T06:10:31.828214241Z" level=info msg="TaskExit event in podsandbox handler container_id:\"480df53df09e80a390e4ad65ef8e6ee2e41027b4bbb1352c6b7ac7790f223982\" id:\"cc111918b99dc81508f02f64edd08b9f090abb28613c6185e1499ea8806f0876\" pid:6010 exited_at:{seconds:1751868631 nanos:827715537}" Jul 7 06:10:33.956007 containerd[1934]: time="2025-07-07T06:10:33.955735181Z" level=info msg="TaskExit event in podsandbox handler container_id:\"480df53df09e80a390e4ad65ef8e6ee2e41027b4bbb1352c6b7ac7790f223982\" id:\"7eed11d89bfdf25281bd26594a454d1ac20be0acc4162f5153a25e0c05f945e9\" pid:6034 exited_at:{seconds:1751868633 nanos:954748621}" Jul 7 06:10:33.983137 sshd[5244]: Connection closed by 139.178.89.65 port 54704 Jul 7 
06:10:33.984303 sshd-session[5180]: pam_unix(sshd:session): session closed for user core Jul 7 06:10:33.988252 systemd[1]: sshd@25-172.31.29.6:22-139.178.89.65:54704.service: Deactivated successfully. Jul 7 06:10:33.990193 systemd[1]: session-26.scope: Deactivated successfully. Jul 7 06:10:33.991417 systemd-logind[1873]: Session 26 logged out. Waiting for processes to exit. Jul 7 06:10:33.992917 systemd-logind[1873]: Removed session 26. Jul 7 06:10:48.139866 systemd[1]: cri-containerd-ca36ed21793c8a6866c4fb8b16e6d62fdf3694d99e4ad28f5cddc7400737f846.scope: Deactivated successfully. Jul 7 06:10:48.140365 systemd[1]: cri-containerd-ca36ed21793c8a6866c4fb8b16e6d62fdf3694d99e4ad28f5cddc7400737f846.scope: Consumed 3.301s CPU time, 76.8M memory peak, 26M read from disk. Jul 7 06:10:48.142634 containerd[1934]: time="2025-07-07T06:10:48.142583015Z" level=info msg="received exit event container_id:\"ca36ed21793c8a6866c4fb8b16e6d62fdf3694d99e4ad28f5cddc7400737f846\" id:\"ca36ed21793c8a6866c4fb8b16e6d62fdf3694d99e4ad28f5cddc7400737f846\" pid:3086 exit_status:1 exited_at:{seconds:1751868648 nanos:142271060}" Jul 7 06:10:48.143193 containerd[1934]: time="2025-07-07T06:10:48.143109684Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca36ed21793c8a6866c4fb8b16e6d62fdf3694d99e4ad28f5cddc7400737f846\" id:\"ca36ed21793c8a6866c4fb8b16e6d62fdf3694d99e4ad28f5cddc7400737f846\" pid:3086 exit_status:1 exited_at:{seconds:1751868648 nanos:142271060}" Jul 7 06:10:48.168740 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca36ed21793c8a6866c4fb8b16e6d62fdf3694d99e4ad28f5cddc7400737f846-rootfs.mount: Deactivated successfully. Jul 7 06:10:48.213955 kubelet[3267]: I0707 06:10:48.213933 3267 scope.go:117] "RemoveContainer" containerID="ca36ed21793c8a6866c4fb8b16e6d62fdf3694d99e4ad28f5cddc7400737f846" Jul 7 06:10:48.217311 containerd[1934]: time="2025-07-07T06:10:48.217274822Z" level=info msg="CreateContainer within sandbox \"9b095737d0b1462dccd94469be1ac9fe580bfb16d4996f18c60c168ffc74881d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jul 7 06:10:48.238665 containerd[1934]: time="2025-07-07T06:10:48.235882337Z" level=info msg="Container 0caa701a976b8e8f7acda7327a4a8614de6f0c5534a8f1e07a98eafa97333959: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:10:48.243035 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2723066194.mount: Deactivated successfully. Jul 7 06:10:48.254256 containerd[1934]: time="2025-07-07T06:10:48.254204321Z" level=info msg="CreateContainer within sandbox \"9b095737d0b1462dccd94469be1ac9fe580bfb16d4996f18c60c168ffc74881d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"0caa701a976b8e8f7acda7327a4a8614de6f0c5534a8f1e07a98eafa97333959\"" Jul 7 06:10:48.254794 containerd[1934]: time="2025-07-07T06:10:48.254763173Z" level=info msg="StartContainer for \"0caa701a976b8e8f7acda7327a4a8614de6f0c5534a8f1e07a98eafa97333959\"" Jul 7 06:10:48.255709 containerd[1934]: time="2025-07-07T06:10:48.255680161Z" level=info msg="connecting to shim 0caa701a976b8e8f7acda7327a4a8614de6f0c5534a8f1e07a98eafa97333959" address="unix:///run/containerd/s/0a112c52364ea4bdc53812fac36a8b13c8891a4b5c818af94c99bef5de4d844d" protocol=ttrpc version=3 Jul 7 06:10:48.277823 systemd[1]: Started cri-containerd-0caa701a976b8e8f7acda7327a4a8614de6f0c5534a8f1e07a98eafa97333959.scope - libcontainer container 0caa701a976b8e8f7acda7327a4a8614de6f0c5534a8f1e07a98eafa97333959. 
Jul 7 06:10:48.336420 containerd[1934]: time="2025-07-07T06:10:48.336386110Z" level=info msg="StartContainer for \"0caa701a976b8e8f7acda7327a4a8614de6f0c5534a8f1e07a98eafa97333959\" returns successfully" Jul 7 06:10:48.762244 containerd[1934]: time="2025-07-07T06:10:48.762188773Z" level=info msg="StopPodSandbox for \"823d3a04276dcf5c1d6f2c25962f5a8e3553ef2a00a54f6f21fc5f655d5b6693\"" Jul 7 06:10:48.762405 containerd[1934]: time="2025-07-07T06:10:48.762384488Z" level=info msg="TearDown network for sandbox \"823d3a04276dcf5c1d6f2c25962f5a8e3553ef2a00a54f6f21fc5f655d5b6693\" successfully" Jul 7 06:10:48.763185 containerd[1934]: time="2025-07-07T06:10:48.762405849Z" level=info msg="StopPodSandbox for \"823d3a04276dcf5c1d6f2c25962f5a8e3553ef2a00a54f6f21fc5f655d5b6693\" returns successfully" Jul 7 06:10:48.763185 containerd[1934]: time="2025-07-07T06:10:48.763115160Z" level=info msg="RemovePodSandbox for \"823d3a04276dcf5c1d6f2c25962f5a8e3553ef2a00a54f6f21fc5f655d5b6693\"" Jul 7 06:10:48.763185 containerd[1934]: time="2025-07-07T06:10:48.763140875Z" level=info msg="Forcibly stopping sandbox \"823d3a04276dcf5c1d6f2c25962f5a8e3553ef2a00a54f6f21fc5f655d5b6693\"" Jul 7 06:10:48.763321 containerd[1934]: time="2025-07-07T06:10:48.763253389Z" level=info msg="TearDown network for sandbox \"823d3a04276dcf5c1d6f2c25962f5a8e3553ef2a00a54f6f21fc5f655d5b6693\" successfully" Jul 7 06:10:48.773424 containerd[1934]: time="2025-07-07T06:10:48.773374000Z" level=info msg="Ensure that sandbox 823d3a04276dcf5c1d6f2c25962f5a8e3553ef2a00a54f6f21fc5f655d5b6693 in task-service has been cleanup successfully" Jul 7 06:10:48.780579 containerd[1934]: time="2025-07-07T06:10:48.780514397Z" level=info msg="RemovePodSandbox \"823d3a04276dcf5c1d6f2c25962f5a8e3553ef2a00a54f6f21fc5f655d5b6693\" returns successfully" Jul 7 06:10:48.781209 containerd[1934]: time="2025-07-07T06:10:48.781177193Z" level=info msg="StopPodSandbox for \"41d54ad6cdca769a61c29206a4d06ee79f04fd52ef97b7ca50915abf74795189\"" Jul 7 06:10:48.781340 containerd[1934]: time="2025-07-07T06:10:48.781319026Z" level=info msg="TearDown network for sandbox \"41d54ad6cdca769a61c29206a4d06ee79f04fd52ef97b7ca50915abf74795189\" successfully" Jul 7 06:10:48.781392 containerd[1934]: time="2025-07-07T06:10:48.781340796Z" level=info msg="StopPodSandbox for \"41d54ad6cdca769a61c29206a4d06ee79f04fd52ef97b7ca50915abf74795189\" returns successfully" Jul 7 06:10:48.781862 containerd[1934]: time="2025-07-07T06:10:48.781837088Z" level=info msg="RemovePodSandbox for \"41d54ad6cdca769a61c29206a4d06ee79f04fd52ef97b7ca50915abf74795189\"" Jul 7 06:10:48.781951 containerd[1934]: time="2025-07-07T06:10:48.781866655Z" level=info msg="Forcibly stopping sandbox \"41d54ad6cdca769a61c29206a4d06ee79f04fd52ef97b7ca50915abf74795189\"" Jul 7 06:10:48.781994 containerd[1934]: time="2025-07-07T06:10:48.781957111Z" level=info msg="TearDown network for sandbox \"41d54ad6cdca769a61c29206a4d06ee79f04fd52ef97b7ca50915abf74795189\" successfully" Jul 7 06:10:48.784178 containerd[1934]: time="2025-07-07T06:10:48.784148756Z" level=info msg="Ensure that sandbox 41d54ad6cdca769a61c29206a4d06ee79f04fd52ef97b7ca50915abf74795189 in task-service has been cleanup successfully" Jul 7 06:10:48.791588 containerd[1934]: time="2025-07-07T06:10:48.790943844Z" level=info msg="RemovePodSandbox \"41d54ad6cdca769a61c29206a4d06ee79f04fd52ef97b7ca50915abf74795189\" returns successfully" Jul 7 06:10:50.968564 kubelet[3267]: E0707 06:10:50.968515 3267 controller.go:195] "Failed to update lease" err="Put 
\"https://172.31.29.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-6?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jul 7 06:10:52.798443 systemd[1]: cri-containerd-8fd83dc479375e4207a6aacb3b4d087e98a8d4f5613e90a38e33c5db80b84977.scope: Deactivated successfully. Jul 7 06:10:52.798819 systemd[1]: cri-containerd-8fd83dc479375e4207a6aacb3b4d087e98a8d4f5613e90a38e33c5db80b84977.scope: Consumed 2.463s CPU time, 36.9M memory peak, 18.2M read from disk. Jul 7 06:10:52.802860 containerd[1934]: time="2025-07-07T06:10:52.802802831Z" level=info msg="received exit event container_id:\"8fd83dc479375e4207a6aacb3b4d087e98a8d4f5613e90a38e33c5db80b84977\" id:\"8fd83dc479375e4207a6aacb3b4d087e98a8d4f5613e90a38e33c5db80b84977\" pid:3115 exit_status:1 exited_at:{seconds:1751868652 nanos:802432247}" Jul 7 06:10:52.803450 containerd[1934]: time="2025-07-07T06:10:52.803396388Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8fd83dc479375e4207a6aacb3b4d087e98a8d4f5613e90a38e33c5db80b84977\" id:\"8fd83dc479375e4207a6aacb3b4d087e98a8d4f5613e90a38e33c5db80b84977\" pid:3115 exit_status:1 exited_at:{seconds:1751868652 nanos:802432247}" Jul 7 06:10:52.829540 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8fd83dc479375e4207a6aacb3b4d087e98a8d4f5613e90a38e33c5db80b84977-rootfs.mount: Deactivated successfully. Jul 7 06:10:53.228399 kubelet[3267]: I0707 06:10:53.228290 3267 scope.go:117] "RemoveContainer" containerID="8fd83dc479375e4207a6aacb3b4d087e98a8d4f5613e90a38e33c5db80b84977" Jul 7 06:10:53.230941 containerd[1934]: time="2025-07-07T06:10:53.230905787Z" level=info msg="CreateContainer within sandbox \"43f6306f1cf075994e491f66743fb8a3b76fec33a39d32e17a0ef16e2781aa12\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jul 7 06:10:53.254114 containerd[1934]: time="2025-07-07T06:10:53.254069920Z" level=info msg="Container 16d092894938da12d925c0ef8a99c3034fc65d6b1144735403be7144789ddc36: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:10:53.264491 containerd[1934]: time="2025-07-07T06:10:53.264443800Z" level=info msg="CreateContainer within sandbox \"43f6306f1cf075994e491f66743fb8a3b76fec33a39d32e17a0ef16e2781aa12\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"16d092894938da12d925c0ef8a99c3034fc65d6b1144735403be7144789ddc36\"" Jul 7 06:10:53.265126 containerd[1934]: time="2025-07-07T06:10:53.265084308Z" level=info msg="StartContainer for \"16d092894938da12d925c0ef8a99c3034fc65d6b1144735403be7144789ddc36\"" Jul 7 06:10:53.266115 containerd[1934]: time="2025-07-07T06:10:53.266084527Z" level=info msg="connecting to shim 16d092894938da12d925c0ef8a99c3034fc65d6b1144735403be7144789ddc36" address="unix:///run/containerd/s/72f6e3d1477e456b73f7a3d0e144e6c5f0d853999a4cbcf87b84014876874884" protocol=ttrpc version=3 Jul 7 06:10:53.285837 systemd[1]: Started cri-containerd-16d092894938da12d925c0ef8a99c3034fc65d6b1144735403be7144789ddc36.scope - libcontainer container 16d092894938da12d925c0ef8a99c3034fc65d6b1144735403be7144789ddc36. 
Jul 7 06:10:53.346090 containerd[1934]: time="2025-07-07T06:10:53.346053765Z" level=info msg="StartContainer for \"16d092894938da12d925c0ef8a99c3034fc65d6b1144735403be7144789ddc36\" returns successfully" Jul 7 06:11:00.969352 kubelet[3267]: E0707 06:11:00.969305 3267 controller.go:195] "Failed to update lease" err="Put \"https://172.31.29.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-6?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
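The two `Failed to update lease` errors above (a client-side timeout, then a context deadline) mean the kubelet could not renew its node lease against the API server at 172.31.29.6:6443 within 10 s, consistent with the kube-controller-manager and kube-scheduler containers having just been restarted. A hedged client-go sketch (hypothetical diagnostic, same `KUBECONFIG` assumption as the earlier example; node name and lease namespace taken from the failing PUT above) that reads the node's lease and reports how stale its renew time is:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Node leases live in the kube-node-lease namespace, one Lease per node.
	lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(context.TODO(), "ip-172-31-29-6", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}

	if rt := lease.Spec.RenewTime; rt != nil {
		// A renew time much older than the lease duration indicates failed renewals
		// like the ones logged above.
		fmt.Printf("lease last renewed %s ago\n", time.Since(rt.Time).Round(time.Second))
	} else {
		fmt.Println("lease has never been renewed")
	}
}
```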