Aug 13 00:00:50.964569 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 21:47:31 -00 2025
Aug 13 00:00:50.964614 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=ca71ea747c3f0d1de8a5ffcd0cfb9d0a1a4c4755719a09093b0248fa3902b433
Aug 13 00:00:50.964633 kernel: BIOS-provided physical RAM map:
Aug 13 00:00:50.964644 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Aug 13 00:00:50.964654 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Aug 13 00:00:50.964665 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Aug 13 00:00:50.964679 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Aug 13 00:00:50.964691 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Aug 13 00:00:50.964702 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Aug 13 00:00:50.964713 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Aug 13 00:00:50.964728 kernel: NX (Execute Disable) protection: active
Aug 13 00:00:50.964739 kernel: APIC: Static calls initialized
Aug 13 00:00:50.964751 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable
Aug 13 00:00:50.964763 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable
Aug 13 00:00:50.964778 kernel: extended physical RAM map:
Aug 13 00:00:50.964790 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Aug 13 00:00:50.964806 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable
Aug 13 00:00:50.964819 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable
Aug 13 00:00:50.964831 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable
Aug 13 00:00:50.964844 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Aug 13 00:00:50.964857 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Aug 13 00:00:50.964869 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Aug 13 00:00:50.964882 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable
Aug 13 00:00:50.964895 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Aug 13 00:00:50.964907 kernel: efi: EFI v2.7 by EDK II
Aug 13 00:00:50.964920 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518
Aug 13 00:00:50.964935 kernel: secureboot: Secure boot disabled
Aug 13 00:00:50.964948 kernel: SMBIOS 2.7 present.
Aug 13 00:00:50.964960 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Aug 13 00:00:50.964972 kernel: Hypervisor detected: KVM
Aug 13 00:00:50.964985 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Aug 13 00:00:50.964998 kernel: kvm-clock: using sched offset of 3940734478 cycles
Aug 13 00:00:50.965010 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Aug 13 00:00:50.965024 kernel: tsc: Detected 2499.996 MHz processor
Aug 13 00:00:50.965037 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 13 00:00:50.965050 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 13 00:00:50.965062 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Aug 13 00:00:50.965078 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Aug 13 00:00:50.965091 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 13 00:00:50.965105 kernel: Using GB pages for direct mapping
Aug 13 00:00:50.965123 kernel: ACPI: Early table checksum verification disabled
Aug 13 00:00:50.965137 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Aug 13 00:00:50.965150 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Aug 13 00:00:50.965167 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Aug 13 00:00:50.965181 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Aug 13 00:00:50.965195 kernel: ACPI: FACS 0x00000000789D0000 000040
Aug 13 00:00:50.965208 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Aug 13 00:00:50.965222 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Aug 13 00:00:50.965236 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Aug 13 00:00:50.965249 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Aug 13 00:00:50.965263 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Aug 13 00:00:50.965280 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Aug 13 00:00:50.965294 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Aug 13 00:00:50.965307 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Aug 13 00:00:50.965321 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Aug 13 00:00:50.965335 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Aug 13 00:00:50.965348 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Aug 13 00:00:50.965362 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Aug 13 00:00:50.965376 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Aug 13 00:00:50.965976 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Aug 13 00:00:50.966005 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Aug 13 00:00:50.966019 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Aug 13 00:00:50.966034 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Aug 13 00:00:50.966049 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Aug 13 00:00:50.966064 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Aug 13 00:00:50.966078 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Aug 13 00:00:50.966093 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Aug 13 00:00:50.966108 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Aug 13 00:00:50.966122 kernel: NUMA: Initialized distance table, cnt=1
Aug 13 00:00:50.966140 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff]
Aug 13 00:00:50.966155 kernel: Zone ranges:
Aug 13 00:00:50.966169 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 13 00:00:50.966184 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Aug 13 00:00:50.966199 kernel: Normal empty
Aug 13 00:00:50.966214 kernel: Movable zone start for each node
Aug 13 00:00:50.966228 kernel: Early memory node ranges
Aug 13 00:00:50.966242 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Aug 13 00:00:50.966255 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Aug 13 00:00:50.966272 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Aug 13 00:00:50.966286 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Aug 13 00:00:50.966300 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 00:00:50.966313 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Aug 13 00:00:50.966327 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Aug 13 00:00:50.966341 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Aug 13 00:00:50.966355 kernel: ACPI: PM-Timer IO Port: 0xb008
Aug 13 00:00:50.966368 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Aug 13 00:00:50.966514 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Aug 13 00:00:50.966536 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Aug 13 00:00:50.966551 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 13 00:00:50.966566 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Aug 13 00:00:50.966581 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Aug 13 00:00:50.966596 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 13 00:00:50.966611 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Aug 13 00:00:50.966626 kernel: TSC deadline timer available
Aug 13 00:00:50.966641 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Aug 13 00:00:50.966656 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Aug 13 00:00:50.966671 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Aug 13 00:00:50.966689 kernel: Booting paravirtualized kernel on KVM
Aug 13 00:00:50.966704 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 13 00:00:50.966719 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Aug 13 00:00:50.966734 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Aug 13 00:00:50.966748 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Aug 13 00:00:50.966763 kernel: pcpu-alloc: [0] 0 1
Aug 13 00:00:50.966778 kernel: kvm-guest: PV spinlocks enabled
Aug 13 00:00:50.966793 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Aug 13 00:00:50.966814 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=ca71ea747c3f0d1de8a5ffcd0cfb9d0a1a4c4755719a09093b0248fa3902b433
Aug 13 00:00:50.966829 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 00:00:50.966844 kernel: random: crng init done
Aug 13 00:00:50.966858 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 13 00:00:50.966873 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Aug 13 00:00:50.966888 kernel: Fallback order for Node 0: 0
Aug 13 00:00:50.966902 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Aug 13 00:00:50.966917 kernel: Policy zone: DMA32
Aug 13 00:00:50.966943 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 00:00:50.966959 kernel: Memory: 1872532K/2037804K available (14336K kernel code, 2295K rwdata, 22872K rodata, 43504K init, 1572K bss, 165016K reserved, 0K cma-reserved)
Aug 13 00:00:50.966974 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Aug 13 00:00:50.966988 kernel: Kernel/User page tables isolation: enabled
Aug 13 00:00:50.967004 kernel: ftrace: allocating 37942 entries in 149 pages
Aug 13 00:00:50.967031 kernel: ftrace: allocated 149 pages with 4 groups
Aug 13 00:00:50.967049 kernel: Dynamic Preempt: voluntary
Aug 13 00:00:50.967065 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 00:00:50.967081 kernel: rcu: RCU event tracing is enabled.
Aug 13 00:00:50.967097 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Aug 13 00:00:50.967113 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 00:00:50.967129 kernel: Rude variant of Tasks RCU enabled.
Aug 13 00:00:50.967148 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 00:00:50.967164 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 00:00:50.967179 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Aug 13 00:00:50.967195 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Aug 13 00:00:50.967211 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 13 00:00:50.967229 kernel: Console: colour dummy device 80x25
Aug 13 00:00:50.967245 kernel: printk: console [tty0] enabled
Aug 13 00:00:50.967260 kernel: printk: console [ttyS0] enabled
Aug 13 00:00:50.967276 kernel: ACPI: Core revision 20230628
Aug 13 00:00:50.967292 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Aug 13 00:00:50.967308 kernel: APIC: Switch to symmetric I/O mode setup
Aug 13 00:00:50.967324 kernel: x2apic enabled
Aug 13 00:00:50.967339 kernel: APIC: Switched APIC routing to: physical x2apic
Aug 13 00:00:50.967355 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Aug 13 00:00:50.967374 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Aug 13 00:00:50.967406 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Aug 13 00:00:50.967422 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Aug 13 00:00:50.967438 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 13 00:00:50.967453 kernel: Spectre V2 : Mitigation: Retpolines
Aug 13 00:00:50.967469 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Aug 13 00:00:50.967484 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Aug 13 00:00:50.967501 kernel: RETBleed: Vulnerable
Aug 13 00:00:50.967516 kernel: Speculative Store Bypass: Vulnerable
Aug 13 00:00:50.967531 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Aug 13 00:00:50.967550 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Aug 13 00:00:50.967565 kernel: GDS: Unknown: Dependent on hypervisor status
Aug 13 00:00:50.967581 kernel: ITS: Mitigation: Aligned branch/return thunks
Aug 13 00:00:50.967596 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 13 00:00:50.967611 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 13 00:00:50.967627 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 13 00:00:50.967643 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Aug 13 00:00:50.967658 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Aug 13 00:00:50.967674 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Aug 13 00:00:50.967689 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Aug 13 00:00:50.967705 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Aug 13 00:00:50.967724 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Aug 13 00:00:50.967739 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 13 00:00:50.967755 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Aug 13 00:00:50.967770 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Aug 13 00:00:50.967786 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Aug 13 00:00:50.967801 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Aug 13 00:00:50.967817 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Aug 13 00:00:50.967832 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Aug 13 00:00:50.967848 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Aug 13 00:00:50.967863 kernel: Freeing SMP alternatives memory: 32K
Aug 13 00:00:50.967879 kernel: pid_max: default: 32768 minimum: 301
Aug 13 00:00:50.967894 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Aug 13 00:00:50.967913 kernel: landlock: Up and running.
Aug 13 00:00:50.967928 kernel: SELinux: Initializing.
Aug 13 00:00:50.967944 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Aug 13 00:00:50.967958 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Aug 13 00:00:50.967974 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Aug 13 00:00:50.967990 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 00:00:50.968006 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 00:00:50.968022 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 00:00:50.968038 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Aug 13 00:00:50.968054 kernel: signal: max sigframe size: 3632
Aug 13 00:00:50.968072 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 00:00:50.968088 kernel: rcu: Max phase no-delay instances is 400.
Aug 13 00:00:50.968104 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Aug 13 00:00:50.968120 kernel: smp: Bringing up secondary CPUs ...
Aug 13 00:00:50.968135 kernel: smpboot: x86: Booting SMP configuration:
Aug 13 00:00:50.968151 kernel: .... node #0, CPUs: #1
Aug 13 00:00:50.968167 kernel: Transient Scheduler Attacks: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Aug 13 00:00:50.968184 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Aug 13 00:00:50.968202 kernel: smp: Brought up 1 node, 2 CPUs
Aug 13 00:00:50.968218 kernel: smpboot: Max logical packages: 1
Aug 13 00:00:50.968234 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Aug 13 00:00:50.968249 kernel: devtmpfs: initialized
Aug 13 00:00:50.968265 kernel: x86/mm: Memory block size: 128MB
Aug 13 00:00:50.968281 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Aug 13 00:00:50.968296 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 00:00:50.968312 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Aug 13 00:00:50.968328 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 00:00:50.968346 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 00:00:50.968362 kernel: audit: initializing netlink subsys (disabled)
Aug 13 00:00:50.968377 kernel: audit: type=2000 audit(1755043250.392:1): state=initialized audit_enabled=0 res=1
Aug 13 00:00:50.968451 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 00:00:50.968467 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 13 00:00:50.968483 kernel: cpuidle: using governor menu
Aug 13 00:00:50.968499 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 00:00:50.968515 kernel: dca service started, version 1.12.1
Aug 13 00:00:50.968531 kernel: PCI: Using configuration type 1 for base access
Aug 13 00:00:50.968551 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 13 00:00:50.968567 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 00:00:50.968583 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Aug 13 00:00:50.968598 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 00:00:50.968614 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Aug 13 00:00:50.968630 kernel: ACPI: Added _OSI(Module Device)
Aug 13 00:00:50.968646 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 00:00:50.968662 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 00:00:50.968678 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Aug 13 00:00:50.968697 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Aug 13 00:00:50.968713 kernel: ACPI: Interpreter enabled
Aug 13 00:00:50.968729 kernel: ACPI: PM: (supports S0 S5)
Aug 13 00:00:50.968745 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 13 00:00:50.968761 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 13 00:00:50.968777 kernel: PCI: Using E820 reservations for host bridge windows
Aug 13 00:00:50.968792 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Aug 13 00:00:50.968808 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 13 00:00:50.970662 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 00:00:50.970840 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Aug 13 00:00:50.971000 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Aug 13 00:00:50.971021 kernel: acpiphp: Slot [3] registered
Aug 13 00:00:50.971038 kernel: acpiphp: Slot [4] registered
Aug 13 00:00:50.971053 kernel: acpiphp: Slot [5] registered
Aug 13 00:00:50.971069 kernel: acpiphp: Slot [6] registered
Aug 13 00:00:50.971083 kernel: acpiphp: Slot [7] registered
Aug 13 00:00:50.971101 kernel: acpiphp: Slot [8] registered
Aug 13 00:00:50.971115 kernel: acpiphp: Slot [9] registered
Aug 13 00:00:50.971128 kernel: acpiphp: Slot [10] registered
Aug 13 00:00:50.971141 kernel: acpiphp: Slot [11] registered
Aug 13 00:00:50.971155 kernel: acpiphp: Slot [12] registered
Aug 13 00:00:50.971169 kernel: acpiphp: Slot [13] registered
Aug 13 00:00:50.971182 kernel: acpiphp: Slot [14] registered
Aug 13 00:00:50.971196 kernel: acpiphp: Slot [15] registered
Aug 13 00:00:50.971209 kernel: acpiphp: Slot [16] registered
Aug 13 00:00:50.971223 kernel: acpiphp: Slot [17] registered
Aug 13 00:00:50.971240 kernel: acpiphp: Slot [18] registered
Aug 13 00:00:50.971254 kernel: acpiphp: Slot [19] registered
Aug 13 00:00:50.971267 kernel: acpiphp: Slot [20] registered
Aug 13 00:00:50.971281 kernel: acpiphp: Slot [21] registered
Aug 13 00:00:50.971295 kernel: acpiphp: Slot [22] registered
Aug 13 00:00:50.971310 kernel: acpiphp: Slot [23] registered
Aug 13 00:00:50.971323 kernel: acpiphp: Slot [24] registered
Aug 13 00:00:50.971336 kernel: acpiphp: Slot [25] registered
Aug 13 00:00:50.971349 kernel: acpiphp: Slot [26] registered
Aug 13 00:00:50.971366 kernel: acpiphp: Slot [27] registered
Aug 13 00:00:50.971394 kernel: acpiphp: Slot [28] registered
Aug 13 00:00:50.971408 kernel: acpiphp: Slot [29] registered
Aug 13 00:00:50.971422 kernel: acpiphp: Slot [30] registered
Aug 13 00:00:50.971435 kernel: acpiphp: Slot [31] registered
Aug 13 00:00:50.971450 kernel: PCI host bridge to bus 0000:00
Aug 13 00:00:50.971602 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 13 00:00:50.971725 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 13 00:00:50.971851 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 13 00:00:50.971970 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Aug 13 00:00:50.972089 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Aug 13 00:00:50.972216 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 13 00:00:50.972375 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Aug 13 00:00:50.974634 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Aug 13 00:00:50.974802 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Aug 13 00:00:50.975059 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Aug 13 00:00:50.975211 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Aug 13 00:00:50.975350 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Aug 13 00:00:50.977565 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Aug 13 00:00:50.977726 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Aug 13 00:00:50.977871 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Aug 13 00:00:50.978016 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Aug 13 00:00:50.978181 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Aug 13 00:00:50.978326 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Aug 13 00:00:50.978512 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Aug 13 00:00:50.978657 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Aug 13 00:00:50.978801 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Aug 13 00:00:50.978963 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Aug 13 00:00:50.979115 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Aug 13 00:00:50.979266 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Aug 13 00:00:50.980817 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Aug 13 00:00:50.980850 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 13 00:00:50.980867 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 13 00:00:50.980884 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 13 00:00:50.980901 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 13 00:00:50.980917 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Aug 13 00:00:50.980940 kernel: iommu: Default domain type: Translated
Aug 13 00:00:50.980957 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 13 00:00:50.980973 kernel: efivars: Registered efivars operations
Aug 13 00:00:50.980990 kernel: PCI: Using ACPI for IRQ routing
Aug 13 00:00:50.981006 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 13 00:00:50.981022 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff]
Aug 13 00:00:50.981038 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Aug 13 00:00:50.981054 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Aug 13 00:00:50.981216 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Aug 13 00:00:50.981367 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Aug 13 00:00:50.982297 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Aug 13 00:00:50.982322 kernel: vgaarb: loaded
Aug 13 00:00:50.982340 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Aug 13 00:00:50.982357 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Aug 13 00:00:50.982374 kernel: clocksource: Switched to clocksource kvm-clock
Aug 13 00:00:50.982412 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 00:00:50.982430 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 00:00:50.982452 kernel: pnp: PnP ACPI init
Aug 13 00:00:50.982468 kernel: pnp: PnP ACPI: found 5 devices
Aug 13 00:00:50.982485 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 13 00:00:50.982502 kernel: NET: Registered PF_INET protocol family
Aug 13 00:00:50.982518 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 13 00:00:50.982535 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Aug 13 00:00:50.982550 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 00:00:50.982567 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Aug 13 00:00:50.982584 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Aug 13 00:00:50.982604 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Aug 13 00:00:50.982619 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Aug 13 00:00:50.982636 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Aug 13 00:00:50.982651 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 00:00:50.982666 kernel: NET: Registered PF_XDP protocol family
Aug 13 00:00:50.982813 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 13 00:00:50.982960 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 13 00:00:50.983092 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 13 00:00:50.983219 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Aug 13 00:00:50.983351 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Aug 13 00:00:50.983519 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Aug 13 00:00:50.983542 kernel: PCI: CLS 0 bytes, default 64
Aug 13 00:00:50.983559 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Aug 13 00:00:50.983576 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Aug 13 00:00:50.983592 kernel: clocksource: Switched to clocksource tsc
Aug 13 00:00:50.983608 kernel: Initialise system trusted keyrings
Aug 13 00:00:50.983624 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Aug 13 00:00:50.983644 kernel: Key type asymmetric registered
Aug 13 00:00:50.983661 kernel: Asymmetric key parser 'x509' registered
Aug 13 00:00:50.983677 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Aug 13 00:00:50.983693 kernel: io scheduler mq-deadline registered
Aug 13 00:00:50.983709 kernel: io scheduler kyber registered
Aug 13 00:00:50.983725 kernel: io scheduler bfq registered
Aug 13 00:00:50.983741 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 13 00:00:50.983757 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 00:00:50.983773 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 13 00:00:50.983793 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 13 00:00:50.983808 kernel: i8042: Warning: Keylock active
Aug 13 00:00:50.983824 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 13 00:00:50.983841 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 13 00:00:50.983996 kernel: rtc_cmos 00:00: RTC can wake from S4
Aug 13 00:00:50.984134 kernel: rtc_cmos 00:00: registered as rtc0
Aug 13 00:00:50.984268 kernel: rtc_cmos 00:00: setting system clock to 2025-08-13T00:00:50 UTC (1755043250)
Aug 13 00:00:50.984424 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Aug 13 00:00:50.984446 kernel: intel_pstate: CPU model not supported
Aug 13 00:00:50.984460 kernel: efifb: probing for efifb
Aug 13 00:00:50.984475 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k
Aug 13 00:00:50.984491 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Aug 13 00:00:50.984533 kernel: efifb: scrolling: redraw
Aug 13 00:00:50.984553 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Aug 13 00:00:50.984573 kernel: Console: switching to colour frame buffer device 100x37
Aug 13 00:00:50.984588 kernel: fb0: EFI VGA frame buffer device
Aug 13 00:00:50.984605 kernel: pstore: Using crash dump compression: deflate
Aug 13 00:00:50.984625 kernel: pstore: Registered efi_pstore as persistent store backend
Aug 13 00:00:50.984640 kernel: NET: Registered PF_INET6 protocol family
Aug 13 00:00:50.984657 kernel: Segment Routing with IPv6
Aug 13 00:00:50.984673 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 00:00:50.984689 kernel: NET: Registered PF_PACKET protocol family
Aug 13 00:00:50.984707 kernel: Key type dns_resolver registered
Aug 13 00:00:50.984723 kernel: IPI shorthand broadcast: enabled
Aug 13 00:00:50.984741 kernel: sched_clock: Marking stable (462003810, 142709305)->(698907733, -94194618)
Aug 13 00:00:50.984758 kernel: registered taskstats version 1
Aug 13 00:00:50.984780 kernel: Loading compiled-in X.509 certificates
Aug 13 00:00:50.984797 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: dfd2b306eb54324ea79eea0261f8d493924aeeeb'
Aug 13 00:00:50.984815 kernel: Key type .fscrypt registered
Aug 13 00:00:50.984832 kernel: Key type fscrypt-provisioning registered
Aug 13 00:00:50.984849 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 13 00:00:50.984867 kernel: ima: Allocated hash algorithm: sha1
Aug 13 00:00:50.984884 kernel: ima: No architecture policies found
Aug 13 00:00:50.984902 kernel: clk: Disabling unused clocks
Aug 13 00:00:50.984919 kernel: Freeing unused kernel image (initmem) memory: 43504K
Aug 13 00:00:50.984941 kernel: Write protecting the kernel read-only data: 38912k
Aug 13 00:00:50.984959 kernel: Freeing unused kernel image (rodata/data gap) memory: 1704K
Aug 13 00:00:50.984976 kernel: Run /init as init process
Aug 13 00:00:50.984993 kernel: with arguments:
Aug 13 00:00:50.985011 kernel: /init
Aug 13 00:00:50.985029 kernel: with environment:
Aug 13 00:00:50.985046 kernel: HOME=/
Aug 13 00:00:50.985064 kernel: TERM=linux
Aug 13 00:00:50.985082 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 00:00:50.985107 systemd[1]: Successfully made /usr/ read-only.
Aug 13 00:00:50.985130 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 13 00:00:50.985149 systemd[1]: Detected virtualization amazon.
Aug 13 00:00:50.985167 systemd[1]: Detected architecture x86-64.
Aug 13 00:00:50.985185 systemd[1]: Running in initrd.
Aug 13 00:00:50.985207 systemd[1]: No hostname configured, using default hostname.
Aug 13 00:00:50.985226 systemd[1]: Hostname set to .
Aug 13 00:00:50.985244 systemd[1]: Initializing machine ID from VM UUID.
Aug 13 00:00:50.985263 systemd[1]: Queued start job for default target initrd.target.
Aug 13 00:00:50.985282 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 00:00:50.985300 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 00:00:50.985321 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 13 00:00:50.985344 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 00:00:50.985363 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 13 00:00:50.985449 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 13 00:00:50.985469 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 13 00:00:50.985486 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 13 00:00:50.985502 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 00:00:50.985519 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 00:00:50.985542 systemd[1]: Reached target paths.target - Path Units.
Aug 13 00:00:50.985560 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 00:00:50.985578 systemd[1]: Reached target swap.target - Swaps.
Aug 13 00:00:50.985597 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 00:00:50.985616 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 00:00:50.985635 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 00:00:50.985653 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 13 00:00:50.985672 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Aug 13 00:00:50.985695 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 00:00:50.985714 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 00:00:50.985733 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 00:00:50.985752 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 00:00:50.985771 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 13 00:00:50.985791 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 00:00:50.985812 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 13 00:00:50.985832 systemd[1]: Starting systemd-fsck-usr.service...
Aug 13 00:00:50.985851 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 00:00:50.985873 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 00:00:50.985895 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:00:50.985913 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 13 00:00:50.985971 systemd-journald[179]: Collecting audit messages is disabled.
Aug 13 00:00:50.986012 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 00:00:50.986030 systemd[1]: Finished systemd-fsck-usr.service.
Aug 13 00:00:50.986047 systemd-journald[179]: Journal started
Aug 13 00:00:50.986086 systemd-journald[179]: Runtime Journal (/run/log/journal/ec20c2734b0a237156c506a190d4f97b) is 4.7M, max 38.2M, 33.4M free.
Aug 13 00:00:50.991443 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 00:00:50.969604 systemd-modules-load[180]: Inserted module 'overlay'
Aug 13 00:00:51.003418 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 00:00:51.004472 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:00:51.007049 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Aug 13 00:00:51.009478 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 00:00:51.020522 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 13 00:00:51.023213 kernel: Bridge firewalling registered
Aug 13 00:00:51.022470 systemd-modules-load[180]: Inserted module 'br_netfilter'
Aug 13 00:00:51.022676 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 00:00:51.024839 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 00:00:51.032595 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 00:00:51.037457 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 00:00:51.049268 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 00:00:51.052845 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 00:00:51.055890 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:00:51.061653 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 00:00:51.065636 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 13 00:00:51.075703 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 00:00:51.079608 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 00:00:51.091974 dracut-cmdline[212]: dracut-dracut-053
Aug 13 00:00:51.095857 dracut-cmdline[212]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=ca71ea747c3f0d1de8a5ffcd0cfb9d0a1a4c4755719a09093b0248fa3902b433
Aug 13 00:00:51.144037 systemd-resolved[215]: Positive Trust Anchors:
Aug 13 00:00:51.144060 systemd-resolved[215]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 00:00:51.144127 systemd-resolved[215]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 00:00:51.152727 systemd-resolved[215]: Defaulting to hostname 'linux'.
Aug 13 00:00:51.154226 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 00:00:51.156643 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 00:00:51.188425 kernel: SCSI subsystem initialized
Aug 13 00:00:51.198419 kernel: Loading iSCSI transport class v2.0-870.
Aug 13 00:00:51.209408 kernel: iscsi: registered transport (tcp)
Aug 13 00:00:51.231937 kernel: iscsi: registered transport (qla4xxx)
Aug 13 00:00:51.232026 kernel: QLogic iSCSI HBA Driver
Aug 13 00:00:51.270497 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 13 00:00:51.277560 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 13 00:00:51.308648 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 13 00:00:51.308729 kernel: device-mapper: uevent: version 1.0.3
Aug 13 00:00:51.311410 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 13 00:00:51.352429 kernel: raid6: avx512x4 gen() 17742 MB/s
Aug 13 00:00:51.370414 kernel: raid6: avx512x2 gen() 17450 MB/s
Aug 13 00:00:51.388428 kernel: raid6: avx512x1 gen() 17540 MB/s
Aug 13 00:00:51.406409 kernel: raid6: avx2x4 gen() 17583 MB/s
Aug 13 00:00:51.424413 kernel: raid6: avx2x2 gen() 17622 MB/s
Aug 13 00:00:51.442629 kernel: raid6: avx2x1 gen() 13634 MB/s
Aug 13 00:00:51.442683 kernel: raid6: using algorithm avx512x4 gen() 17742 MB/s
Aug 13 00:00:51.461620 kernel: raid6: .... xor() 7583 MB/s, rmw enabled
Aug 13 00:00:51.461692 kernel: raid6: using avx512x2 recovery algorithm
Aug 13 00:00:51.483426 kernel: xor: automatically using best checksumming function avx
Aug 13 00:00:51.640415 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 13 00:00:51.650602 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 00:00:51.656620 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 00:00:51.672731 systemd-udevd[398]: Using default interface naming scheme 'v255'.
Aug 13 00:00:51.678777 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 00:00:51.686239 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 13 00:00:51.707796 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation
Aug 13 00:00:51.739544 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 00:00:51.744599 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 00:00:51.799286 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 00:00:51.809326 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 13 00:00:51.834692 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 13 00:00:51.840970 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 00:00:51.843419 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 00:00:51.845907 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 00:00:51.853603 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 13 00:00:51.878448 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 00:00:51.905439 kernel: cryptd: max_cpu_qlen set to 1000
Aug 13 00:00:51.927403 kernel: ena 0000:00:05.0: ENA device version: 0.10
Aug 13 00:00:51.927737 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Aug 13 00:00:51.935401 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Aug 13 00:00:51.935118 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 00:00:51.935345 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:00:51.938453 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 00:00:51.939062 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 00:00:51.939276 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:00:51.942533 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:00:51.952713 kernel: AVX2 version of gcm_enc/dec engaged.
Aug 13 00:00:51.953134 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:00:51.956064 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Aug 13 00:00:51.963647 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:d2:0f:28:40:f1
Aug 13 00:00:51.963941 kernel: AES CTR mode by8 optimization enabled
Aug 13 00:00:51.967148 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 00:00:51.967787 (udev-worker)[447]: Network interface NamePolicy= disabled on kernel command line.
Aug 13 00:00:51.969547 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:00:51.975441 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Aug 13 00:00:51.987484 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:00:51.998413 kernel: nvme nvme0: pci function 0000:00:04.0
Aug 13 00:00:52.001913 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Aug 13 00:00:52.017398 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Aug 13 00:00:52.019133 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:00:52.025509 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 00:00:52.031906 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 13 00:00:52.031942 kernel: GPT:9289727 != 16777215
Aug 13 00:00:52.031959 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 13 00:00:52.031975 kernel: GPT:9289727 != 16777215
Aug 13 00:00:52.031991 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 13 00:00:52.032007 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Aug 13 00:00:52.051294 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:00:52.098408 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by (udev-worker) (456)
Aug 13 00:00:52.127977 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Aug 13 00:00:52.137120 kernel: BTRFS: device fsid 88a9bed3-d26b-40c9-82ba-dbb7d44acae7 devid 1 transid 45 /dev/nvme0n1p3 scanned by (udev-worker) (449)
Aug 13 00:00:52.171857 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Aug 13 00:00:52.190816 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Aug 13 00:00:52.199932 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Aug 13 00:00:52.200496 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Aug 13 00:00:52.205657 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 13 00:00:52.213338 disk-uuid[633]: Primary Header is updated.
Aug 13 00:00:52.213338 disk-uuid[633]: Secondary Entries is updated.
Aug 13 00:00:52.213338 disk-uuid[633]: Secondary Header is updated.
Aug 13 00:00:52.219926 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Aug 13 00:00:52.225413 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Aug 13 00:00:53.229053 disk-uuid[634]: The operation has completed successfully.
Aug 13 00:00:53.230110 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Aug 13 00:00:53.347969 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 13 00:00:53.348105 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 13 00:00:53.387635 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 13 00:00:53.391411 sh[892]: Success
Aug 13 00:00:53.411413 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Aug 13 00:00:53.523501 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 13 00:00:53.540591 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 13 00:00:53.542559 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 13 00:00:53.579682 kernel: BTRFS info (device dm-0): first mount of filesystem 88a9bed3-d26b-40c9-82ba-dbb7d44acae7
Aug 13 00:00:53.579751 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Aug 13 00:00:53.579765 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug 13 00:00:53.581670 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug 13 00:00:53.583066 kernel: BTRFS info (device dm-0): using free space tree
Aug 13 00:00:53.657537 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Aug 13 00:00:53.679451 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 13 00:00:53.680781 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 13 00:00:53.685624 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 13 00:00:53.689611 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 13 00:00:53.726407 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517
Aug 13 00:00:53.726489 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 00:00:53.726512 kernel: BTRFS info (device nvme0n1p6): using free space tree
Aug 13 00:00:53.733488 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Aug 13 00:00:53.740477 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517
Aug 13 00:00:53.743516 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 13 00:00:53.749627 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 13 00:00:53.794308 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 00:00:53.800665 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 00:00:53.834103 systemd-networkd[1081]: lo: Link UP
Aug 13 00:00:53.834116 systemd-networkd[1081]: lo: Gained carrier
Aug 13 00:00:53.836183 systemd-networkd[1081]: Enumeration completed
Aug 13 00:00:53.836545 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 00:00:53.837087 systemd-networkd[1081]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 00:00:53.837092 systemd-networkd[1081]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 00:00:53.843450 systemd[1]: Reached target network.target - Network.
Aug 13 00:00:53.845425 systemd-networkd[1081]: eth0: Link UP
Aug 13 00:00:53.845431 systemd-networkd[1081]: eth0: Gained carrier
Aug 13 00:00:53.845448 systemd-networkd[1081]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 00:00:53.856497 systemd-networkd[1081]: eth0: DHCPv4 address 172.31.18.46/20, gateway 172.31.16.1 acquired from 172.31.16.1
Aug 13 00:00:54.177583 ignition[1020]: Ignition 2.20.0
Aug 13 00:00:54.177595 ignition[1020]: Stage: fetch-offline
Aug 13 00:00:54.177774 ignition[1020]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:00:54.177783 ignition[1020]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 13 00:00:54.178220 ignition[1020]: Ignition finished successfully
Aug 13 00:00:54.180851 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 00:00:54.185634 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Aug 13 00:00:54.201813 ignition[1091]: Ignition 2.20.0
Aug 13 00:00:54.201828 ignition[1091]: Stage: fetch
Aug 13 00:00:54.202307 ignition[1091]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:00:54.202322 ignition[1091]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 13 00:00:54.202482 ignition[1091]: PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 13 00:00:54.260112 ignition[1091]: PUT result: OK
Aug 13 00:00:54.265668 ignition[1091]: parsed url from cmdline: ""
Aug 13 00:00:54.265679 ignition[1091]: no config URL provided
Aug 13 00:00:54.265691 ignition[1091]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 00:00:54.265708 ignition[1091]: no config at "/usr/lib/ignition/user.ign"
Aug 13 00:00:54.265738 ignition[1091]: PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 13 00:00:54.266747 ignition[1091]: PUT result: OK
Aug 13 00:00:54.266833 ignition[1091]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Aug 13 00:00:54.269747 ignition[1091]: GET result: OK
Aug 13 00:00:54.269853 ignition[1091]: parsing config with SHA512: 14bba01ded85bb56196d70145fba17a003786c4838419b9a8846700d6e29b8b114898c7d27beef19d7cc4ee3e8467814d1bc27fb6a014c792048cb810a8d7545
Aug 13 00:00:54.275037 unknown[1091]: fetched base config from "system"
Aug 13 00:00:54.275052 unknown[1091]: fetched base config from "system"
Aug 13 00:00:54.275722 ignition[1091]: fetch: fetch complete
Aug 13 00:00:54.275059 unknown[1091]: fetched user config from "aws"
Aug 13 00:00:54.275730 ignition[1091]: fetch: fetch passed
Aug 13 00:00:54.275823 ignition[1091]: Ignition finished successfully
Aug 13 00:00:54.278186 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Aug 13 00:00:54.288707 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 13 00:00:54.307873 ignition[1097]: Ignition 2.20.0
Aug 13 00:00:54.307890 ignition[1097]: Stage: kargs
Aug 13 00:00:54.308336 ignition[1097]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:00:54.308350 ignition[1097]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 13 00:00:54.308508 ignition[1097]: PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 13 00:00:54.309434 ignition[1097]: PUT result: OK
Aug 13 00:00:54.312163 ignition[1097]: kargs: kargs passed
Aug 13 00:00:54.312247 ignition[1097]: Ignition finished successfully
Aug 13 00:00:54.314067 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 13 00:00:54.321728 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 13 00:00:54.335686 ignition[1104]: Ignition 2.20.0
Aug 13 00:00:54.335702 ignition[1104]: Stage: disks
Aug 13 00:00:54.336156 ignition[1104]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:00:54.336171 ignition[1104]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 13 00:00:54.336300 ignition[1104]: PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 13 00:00:54.337180 ignition[1104]: PUT result: OK
Aug 13 00:00:54.339968 ignition[1104]: disks: disks passed
Aug 13 00:00:54.340048 ignition[1104]: Ignition finished successfully
Aug 13 00:00:54.341739 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 13 00:00:54.342422 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 13 00:00:54.342820 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 13 00:00:54.343502 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 00:00:54.344064 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 00:00:54.344633 systemd[1]: Reached target basic.target - Basic System.
Aug 13 00:00:54.348632 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 13 00:00:54.383859 systemd-fsck[1112]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Aug 13 00:00:54.386598 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 13 00:00:54.392528 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 13 00:00:54.496413 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 27db109b-2440-48a3-909e-fd8973275523 r/w with ordered data mode. Quota mode: none.
Aug 13 00:00:54.497299 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 13 00:00:54.498549 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 13 00:00:54.504536 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 00:00:54.508599 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 13 00:00:54.509746 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Aug 13 00:00:54.509823 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 13 00:00:54.509859 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 00:00:54.524662 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 13 00:00:54.531628 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by mount (1131)
Aug 13 00:00:54.537443 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517
Aug 13 00:00:54.537524 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 00:00:54.537544 kernel: BTRFS info (device nvme0n1p6): using free space tree
Aug 13 00:00:54.538755 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 13 00:00:54.550417 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Aug 13 00:00:54.552449 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 00:00:54.901259 initrd-setup-root[1155]: cut: /sysroot/etc/passwd: No such file or directory
Aug 13 00:00:54.907923 initrd-setup-root[1162]: cut: /sysroot/etc/group: No such file or directory
Aug 13 00:00:54.913763 initrd-setup-root[1169]: cut: /sysroot/etc/shadow: No such file or directory
Aug 13 00:00:54.917927 initrd-setup-root[1176]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 13 00:00:55.217247 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 13 00:00:55.228555 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 13 00:00:55.231607 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 13 00:00:55.240335 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 13 00:00:55.242429 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517
Aug 13 00:00:55.275110 ignition[1243]: INFO : Ignition 2.20.0
Aug 13 00:00:55.276122 ignition[1243]: INFO : Stage: mount
Aug 13 00:00:55.276122 ignition[1243]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 00:00:55.276122 ignition[1243]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 13 00:00:55.277988 ignition[1243]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 13 00:00:55.278920 ignition[1243]: INFO : PUT result: OK
Aug 13 00:00:55.282421 ignition[1243]: INFO : mount: mount passed
Aug 13 00:00:55.282421 ignition[1243]: INFO : Ignition finished successfully
Aug 13 00:00:55.284308 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 13 00:00:55.287489 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 13 00:00:55.292639 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 13 00:00:55.302274 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 00:00:55.329539 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1255)
Aug 13 00:00:55.329595 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517
Aug 13 00:00:55.333022 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 00:00:55.333083 kernel: BTRFS info (device nvme0n1p6): using free space tree
Aug 13 00:00:55.339414 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Aug 13 00:00:55.341543 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 00:00:55.368857 ignition[1272]: INFO : Ignition 2.20.0
Aug 13 00:00:55.368857 ignition[1272]: INFO : Stage: files
Aug 13 00:00:55.369895 ignition[1272]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 00:00:55.369895 ignition[1272]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 13 00:00:55.369895 ignition[1272]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 13 00:00:55.370839 ignition[1272]: INFO : PUT result: OK
Aug 13 00:00:55.372577 ignition[1272]: DEBUG : files: compiled without relabeling support, skipping
Aug 13 00:00:55.384703 ignition[1272]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 13 00:00:55.384703 ignition[1272]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 13 00:00:55.400604 ignition[1272]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 13 00:00:55.401742 ignition[1272]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 13 00:00:55.401742 ignition[1272]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 13 00:00:55.401222 unknown[1272]: wrote ssh authorized keys file for user: core
Aug 13 00:00:55.412971 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Aug 13 00:00:55.413985 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Aug 13 00:00:55.463362 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 13 00:00:55.821548 systemd-networkd[1081]: eth0: Gained IPv6LL
Aug 13 00:00:55.833143 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Aug 13 00:00:55.834008 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Aug 13 00:00:55.834008 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Aug 13 00:00:56.020103 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Aug 13 00:00:56.143758 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Aug 13 00:00:56.143758 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Aug 13 00:00:56.143758 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Aug 13 00:00:56.143758 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 00:00:56.143758 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 00:00:56.143758 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 00:00:56.143758 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 00:00:56.143758 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 00:00:56.152207 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 00:00:56.152207 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 00:00:56.152207 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 00:00:56.152207 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Aug 13 00:00:56.152207 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Aug 13 00:00:56.152207 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Aug 13 00:00:56.152207 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Aug 13 00:00:56.598829 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Aug 13 00:00:59.925035 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Aug 13 00:00:59.925035 ignition[1272]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Aug 13 00:00:59.927074 ignition[1272]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 00:00:59.927074 ignition[1272]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 00:00:59.927074 ignition[1272]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Aug 13 00:00:59.927074 ignition[1272]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Aug 13 00:00:59.927074 ignition[1272]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Aug 13 00:00:59.927074 ignition[1272]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 00:00:59.927074 ignition[1272]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 00:00:59.927074 ignition[1272]: INFO : files: files passed
Aug 13 00:00:59.932473 ignition[1272]: INFO : Ignition finished successfully
Aug 13 00:00:59.928150 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 13 00:00:59.939684 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 13 00:00:59.941850 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 13 00:00:59.945641 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 13 00:00:59.946354 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 13 00:00:59.964280 initrd-setup-root-after-ignition[1301]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 00:00:59.964280 initrd-setup-root-after-ignition[1301]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 00:00:59.967245 initrd-setup-root-after-ignition[1305]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 00:00:59.967916 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 00:00:59.968867 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 13 00:00:59.979620 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 13 00:01:00.004585 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 13 00:01:00.004740 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 13 00:01:00.005939 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 13 00:01:00.007086 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 13 00:01:00.007933 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 13 00:01:00.013588 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 13 00:01:00.026793 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 00:01:00.032641 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 13 00:01:00.046360 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 13 00:01:00.047200 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 00:01:00.048223 systemd[1]: Stopped target timers.target - Timer Units.
Aug 13 00:01:00.049103 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 13 00:01:00.049300 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 00:01:00.050475 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 13 00:01:00.051444 systemd[1]: Stopped target basic.target - Basic System.
Aug 13 00:01:00.052217 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 13 00:01:00.052984 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 00:01:00.053758 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 13 00:01:00.054538 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 13 00:01:00.055428 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 00:01:00.056208 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 13 00:01:00.057346 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 13 00:01:00.058110 systemd[1]: Stopped target swap.target - Swaps.
Aug 13 00:01:00.058833 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 13 00:01:00.059167 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 00:01:00.060235 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 13 00:01:00.061073 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 00:01:00.061748 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 13 00:01:00.061886 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 00:01:00.062551 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 13 00:01:00.062786 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 13 00:01:00.064177 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 13 00:01:00.064454 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 00:01:00.065121 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 13 00:01:00.065284 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 13 00:01:00.078041 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 13 00:01:00.082702 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 13 00:01:00.084237 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 13 00:01:00.084728 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 00:01:00.086915 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 13 00:01:00.087124 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 00:01:00.099775 ignition[1325]: INFO : Ignition 2.20.0
Aug 13 00:01:00.099775 ignition[1325]: INFO : Stage: umount
Aug 13 00:01:00.103894 ignition[1325]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 00:01:00.103894 ignition[1325]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 13 00:01:00.103894 ignition[1325]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 13 00:01:00.103894 ignition[1325]: INFO : PUT result: OK
Aug 13 00:01:00.101299 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 13 00:01:00.108650 ignition[1325]: INFO : umount: umount passed
Aug 13 00:01:00.108650 ignition[1325]: INFO : Ignition finished successfully
Aug 13 00:01:00.101711 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 13 00:01:00.109245 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 13 00:01:00.109449 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 13 00:01:00.111623 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 13 00:01:00.111703 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 13 00:01:00.113185 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 13 00:01:00.113257 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 13 00:01:00.113757 systemd[1]: ignition-fetch.service: Deactivated successfully.
Aug 13 00:01:00.113819 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Aug 13 00:01:00.114744 systemd[1]: Stopped target network.target - Network.
Aug 13 00:01:00.115609 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 13 00:01:00.115680 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 00:01:00.118927 systemd[1]: Stopped target paths.target - Path Units.
Aug 13 00:01:00.119693 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 13 00:01:00.120171 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 00:01:00.121468 systemd[1]: Stopped target slices.target - Slice Units.
Aug 13 00:01:00.122528 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 13 00:01:00.123162 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 13 00:01:00.123223 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 00:01:00.125507 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 13 00:01:00.125566 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 00:01:00.126139 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 13 00:01:00.126218 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 13 00:01:00.126839 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 13 00:01:00.126982 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 13 00:01:00.128438 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 13 00:01:00.128985 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 13 00:01:00.131736 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 13 00:01:00.134829 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 13 00:01:00.135114 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 13 00:01:00.138828 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Aug 13 00:01:00.139539 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 13 00:01:00.139637 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 13 00:01:00.140530 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 13 00:01:00.140626 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 13 00:01:00.142114 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Aug 13 00:01:00.143540 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 13 00:01:00.143599 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 00:01:00.144187 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 13 00:01:00.144238 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 13 00:01:00.150527 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 13 00:01:00.152008 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 13 00:01:00.152117 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 00:01:00.152573 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 00:01:00.152618 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 13 00:01:00.153329 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 13 00:01:00.153392 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 13 00:01:00.154161 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 13 00:01:00.154206 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 00:01:00.155183 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 00:01:00.156851 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Aug 13 00:01:00.156917 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Aug 13 00:01:00.164566 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 13 00:01:00.164713 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 00:01:00.166199 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 13 00:01:00.166253 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 13 00:01:00.166835 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 13 00:01:00.166960 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 00:01:00.167324 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 13 00:01:00.167371 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 00:01:00.168587 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 13 00:01:00.168642 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 13 00:01:00.170639 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 00:01:00.170718 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:01:00.183672 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 13 00:01:00.184341 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 13 00:01:00.184449 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 00:01:00.185249 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Aug 13 00:01:00.185316 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 00:01:00.185993 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 13 00:01:00.186055 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 00:01:00.189131 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 00:01:00.189201 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:01:00.191171 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Aug 13 00:01:00.191264 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Aug 13 00:01:00.193820 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 13 00:01:00.193959 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 13 00:01:00.194687 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 13 00:01:00.194817 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 13 00:01:00.196906 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 13 00:01:00.207668 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 13 00:01:00.216628 systemd[1]: Switching root.
Aug 13 00:01:00.269514 systemd-journald[179]: Journal stopped
Aug 13 00:01:04.281652 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
Aug 13 00:01:04.281762 kernel: SELinux: policy capability network_peer_controls=1
Aug 13 00:01:04.281791 kernel: SELinux: policy capability open_perms=1
Aug 13 00:01:04.281815 kernel: SELinux: policy capability extended_socket_class=1
Aug 13 00:01:04.281838 kernel: SELinux: policy capability always_check_network=0
Aug 13 00:01:04.281861 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 13 00:01:04.281886 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 13 00:01:04.281921 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 13 00:01:04.281944 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 13 00:01:04.281979 kernel: audit: type=1403 audit(1755043260.936:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 13 00:01:04.282004 systemd[1]: Successfully loaded SELinux policy in 234.711ms.
Aug 13 00:01:04.282058 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 48.290ms.
Aug 13 00:01:04.282084 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 13 00:01:04.282110 systemd[1]: Detected virtualization amazon.
Aug 13 00:01:04.282134 systemd[1]: Detected architecture x86-64.
Aug 13 00:01:04.282157 systemd[1]: Detected first boot.
Aug 13 00:01:04.282186 systemd[1]: Initializing machine ID from VM UUID.
Aug 13 00:01:04.282212 zram_generator::config[1370]: No configuration found.
Aug 13 00:01:04.282238 kernel: Guest personality initialized and is inactive
Aug 13 00:01:04.282260 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Aug 13 00:01:04.282283 kernel: Initialized host personality
Aug 13 00:01:04.282306 kernel: NET: Registered PF_VSOCK protocol family
Aug 13 00:01:04.282328 systemd[1]: Populated /etc with preset unit settings.
Aug 13 00:01:04.282356 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Aug 13 00:01:04.293153 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Aug 13 00:01:04.293237 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Aug 13 00:01:04.293266 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug 13 00:01:04.293295 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 13 00:01:04.293320 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 13 00:01:04.293345 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 13 00:01:04.293371 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 13 00:01:04.293434 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 13 00:01:04.293462 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 13 00:01:04.293493 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 13 00:01:04.293524 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 13 00:01:04.293551 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 00:01:04.293578 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 00:01:04.293603 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 13 00:01:04.293628 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 13 00:01:04.293654 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 13 00:01:04.293681 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 00:01:04.293711 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Aug 13 00:01:04.293738 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 00:01:04.293763 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Aug 13 00:01:04.293787 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Aug 13 00:01:04.293812 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Aug 13 00:01:04.293837 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 13 00:01:04.293862 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 00:01:04.293887 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 00:01:04.293912 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 00:01:04.293941 systemd[1]: Reached target swap.target - Swaps.
Aug 13 00:01:04.293967 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 13 00:01:04.293990 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 13 00:01:04.294015 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Aug 13 00:01:04.294041 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 00:01:04.294067 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 00:01:04.294091 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 00:01:04.294116 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 13 00:01:04.294140 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 13 00:01:04.294169 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 13 00:01:04.294194 systemd[1]: Mounting media.mount - External Media Directory...
Aug 13 00:01:04.294220 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 00:01:04.294246 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 13 00:01:04.294270 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 13 00:01:04.294294 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 13 00:01:04.294318 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 13 00:01:04.294345 systemd[1]: Reached target machines.target - Containers.
Aug 13 00:01:04.294375 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 13 00:01:04.296245 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 00:01:04.296284 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 00:01:04.296312 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 13 00:01:04.296337 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 00:01:04.296360 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 00:01:04.296439 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 00:01:04.296460 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 13 00:01:04.296478 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 00:01:04.296506 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 13 00:01:04.296525 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Aug 13 00:01:04.296545 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Aug 13 00:01:04.296563 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Aug 13 00:01:04.296582 systemd[1]: Stopped systemd-fsck-usr.service.
Aug 13 00:01:04.296604 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 13 00:01:04.296629 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 00:01:04.296653 kernel: fuse: init (API version 7.39)
Aug 13 00:01:04.296684 kernel: loop: module loaded
Aug 13 00:01:04.296706 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 00:01:04.296731 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 13 00:01:04.296757 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 13 00:01:04.296780 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Aug 13 00:01:04.296807 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 00:01:04.296832 systemd[1]: verity-setup.service: Deactivated successfully.
Aug 13 00:01:04.296858 systemd[1]: Stopped verity-setup.service.
Aug 13 00:01:04.296890 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 00:01:04.296916 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 13 00:01:04.296945 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 13 00:01:04.296972 systemd[1]: Mounted media.mount - External Media Directory.
Aug 13 00:01:04.296997 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 13 00:01:04.297023 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 13 00:01:04.297049 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 13 00:01:04.297073 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 00:01:04.297097 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 13 00:01:04.297123 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 13 00:01:04.297149 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 00:01:04.297178 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 00:01:04.297203 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 00:01:04.297227 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 00:01:04.297252 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 13 00:01:04.297280 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 13 00:01:04.297310 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 00:01:04.297338 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 00:01:04.297365 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 00:01:04.302957 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 13 00:01:04.303018 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 13 00:01:04.303043 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Aug 13 00:01:04.303069 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 13 00:01:04.303096 kernel: ACPI: bus type drm_connector registered
Aug 13 00:01:04.303172 systemd-journald[1449]: Collecting audit messages is disabled.
Aug 13 00:01:04.303218 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 13 00:01:04.303245 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 13 00:01:04.303277 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 13 00:01:04.303303 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 00:01:04.303329 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Aug 13 00:01:04.303353 systemd-journald[1449]: Journal started
Aug 13 00:01:04.303429 systemd-journald[1449]: Runtime Journal (/run/log/journal/ec20c2734b0a237156c506a190d4f97b) is 4.7M, max 38.2M, 33.4M free.
Aug 13 00:01:03.788261 systemd[1]: Queued start job for default target multi-user.target.
Aug 13 00:01:03.801436 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Aug 13 00:01:03.803369 systemd[1]: systemd-journald.service: Deactivated successfully.
Aug 13 00:01:04.322106 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Aug 13 00:01:04.328401 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 13 00:01:04.335960 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 00:01:04.345474 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 13 00:01:04.345536 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 00:01:04.354453 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 13 00:01:04.363412 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 00:01:04.376072 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 00:01:04.389222 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug 13 00:01:04.405420 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 00:01:04.412451 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 00:01:04.416216 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 13 00:01:04.418959 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 00:01:04.419213 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 00:01:04.420432 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 00:01:04.421374 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 13 00:01:04.422219 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 13 00:01:04.423279 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Aug 13 00:01:04.424349 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 13 00:01:04.425756 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 00:01:04.465586 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 13 00:01:04.471549 kernel: loop0: detected capacity change from 0 to 229808
Aug 13 00:01:04.480680 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 13 00:01:04.489464 systemd-journald[1449]: Time spent on flushing to /var/log/journal/ec20c2734b0a237156c506a190d4f97b is 97.667ms for 1021 entries.
Aug 13 00:01:04.489464 systemd-journald[1449]: System Journal (/var/log/journal/ec20c2734b0a237156c506a190d4f97b) is 8M, max 195.6M, 187.6M free.
Aug 13 00:01:04.610100 systemd-journald[1449]: Received client request to flush runtime journal.
Aug 13 00:01:04.610165 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 13 00:01:04.498222 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Aug 13 00:01:04.501503 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Aug 13 00:01:04.519902 systemd-tmpfiles[1488]: ACLs are not supported, ignoring.
Aug 13 00:01:04.519927 systemd-tmpfiles[1488]: ACLs are not supported, ignoring.
Aug 13 00:01:04.545922 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 00:01:04.564022 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 13 00:01:04.576903 udevadm[1517]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Aug 13 00:01:04.583319 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Aug 13 00:01:04.611786 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 13 00:01:04.621628 kernel: loop1: detected capacity change from 0 to 62832
Aug 13 00:01:04.687346 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 13 00:01:04.696437 kernel: loop2: detected capacity change from 0 to 147912
Aug 13 00:01:04.697511 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 00:01:04.720482 systemd-tmpfiles[1529]: ACLs are not supported, ignoring.
Aug 13 00:01:04.720511 systemd-tmpfiles[1529]: ACLs are not supported, ignoring.
Aug 13 00:01:04.728940 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 00:01:04.811224 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 13 00:01:04.841059 kernel: loop3: detected capacity change from 0 to 138176
Aug 13 00:01:05.017421 kernel: loop4: detected capacity change from 0 to 229808
Aug 13 00:01:05.086010 kernel: loop5: detected capacity change from 0 to 62832
Aug 13 00:01:05.155418 kernel: loop6: detected capacity change from 0 to 147912
Aug 13 00:01:05.187419 kernel: loop7: detected capacity change from 0 to 138176
Aug 13 00:01:05.244341 (sd-merge)[1536]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Aug 13 00:01:05.245043 (sd-merge)[1536]: Merged extensions into '/usr'.
Aug 13 00:01:05.253166 systemd[1]: Reload requested from client PID 1486 ('systemd-sysext') (unit systemd-sysext.service)...
Aug 13 00:01:05.253185 systemd[1]: Reloading...
Aug 13 00:01:05.370474 zram_generator::config[1564]: No configuration found.
Aug 13 00:01:05.644556 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 00:01:05.782861 systemd[1]: Reloading finished in 528 ms.
Aug 13 00:01:05.804018 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Aug 13 00:01:05.816655 systemd[1]: Starting ensure-sysext.service...
Aug 13 00:01:05.826643 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 00:01:05.878165 systemd[1]: Reload requested from client PID 1615 ('systemctl') (unit ensure-sysext.service)...
Aug 13 00:01:05.878309 systemd[1]: Reloading...
Aug 13 00:01:05.881666 systemd-tmpfiles[1616]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 13 00:01:05.882028 systemd-tmpfiles[1616]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Aug 13 00:01:05.887654 systemd-tmpfiles[1616]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 13 00:01:05.888959 systemd-tmpfiles[1616]: ACLs are not supported, ignoring.
Aug 13 00:01:05.890749 systemd-tmpfiles[1616]: ACLs are not supported, ignoring.
Aug 13 00:01:05.902044 systemd-tmpfiles[1616]: Detected autofs mount point /boot during canonicalization of boot.
Aug 13 00:01:05.902272 systemd-tmpfiles[1616]: Skipping /boot
Aug 13 00:01:05.926916 systemd-tmpfiles[1616]: Detected autofs mount point /boot during canonicalization of boot.
Aug 13 00:01:05.926931 systemd-tmpfiles[1616]: Skipping /boot
Aug 13 00:01:06.000406 zram_generator::config[1643]: No configuration found.
Aug 13 00:01:06.105655 ldconfig[1482]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 13 00:01:06.166449 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 00:01:06.241952 systemd[1]: Reloading finished in 362 ms.
Aug 13 00:01:06.253937 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Aug 13 00:01:06.254792 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Aug 13 00:01:06.263564 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 00:01:06.275714 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Aug 13 00:01:06.280500 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Aug 13 00:01:06.282502 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Aug 13 00:01:06.291473 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 00:01:06.294658 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 00:01:06.304665 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Aug 13 00:01:06.309728 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 00:01:06.309932 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 00:01:06.318475 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 00:01:06.326278 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 00:01:06.329655 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 00:01:06.330182 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 00:01:06.330312 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 13 00:01:06.334761 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Aug 13 00:01:06.336367 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:01:06.345865 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:01:06.346131 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:01:06.346352 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:01:06.346515 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 00:01:06.346663 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:01:06.349418 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 13 00:01:06.360434 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:01:06.360641 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:01:06.367349 systemd-udevd[1706]: Using default interface naming scheme 'v255'. Aug 13 00:01:06.367943 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:01:06.368119 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:01:06.377229 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:01:06.377489 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Aug 13 00:01:06.382658 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 00:01:06.384101 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:01:06.384147 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 00:01:06.384224 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 00:01:06.384265 systemd[1]: Reached target time-set.target - System Time Set. Aug 13 00:01:06.384688 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:01:06.385141 systemd[1]: Finished ensure-sysext.service. Aug 13 00:01:06.386327 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 13 00:01:06.387497 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:01:06.387791 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:01:06.391208 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:01:06.396444 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 13 00:01:06.401766 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:01:06.402153 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 00:01:06.433703 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 13 00:01:06.443334 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Aug 13 00:01:06.455941 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:01:06.463616 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 00:01:06.488786 augenrules[1753]: No rules Aug 13 00:01:06.492162 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 00:01:06.493039 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 13 00:01:06.559192 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 13 00:01:06.561098 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:01:06.599987 (udev-worker)[1749]: Network interface NamePolicy= disabled on kernel command line. Aug 13 00:01:06.644910 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Aug 13 00:01:06.679793 systemd-networkd[1744]: lo: Link UP Aug 13 00:01:06.679805 systemd-networkd[1744]: lo: Gained carrier Aug 13 00:01:06.680939 systemd-networkd[1744]: Enumeration completed Aug 13 00:01:06.681090 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 00:01:06.688661 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Aug 13 00:01:06.692616 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 13 00:01:06.696151 systemd-resolved[1705]: Positive Trust Anchors: Aug 13 00:01:06.698796 systemd-resolved[1705]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:01:06.698987 systemd-resolved[1705]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 00:01:06.709033 systemd-resolved[1705]: Defaulting to hostname 'linux'. Aug 13 00:01:06.712564 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 00:01:06.713168 systemd[1]: Reached target network.target - Network. Aug 13 00:01:06.714468 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 00:01:06.734609 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Aug 13 00:01:06.771579 systemd-networkd[1744]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:01:06.771593 systemd-networkd[1744]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:01:06.778835 systemd-networkd[1744]: eth0: Link UP Aug 13 00:01:06.781021 systemd-networkd[1744]: eth0: Gained carrier Aug 13 00:01:06.781059 systemd-networkd[1744]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Aug 13 00:01:06.781452 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2 Aug 13 00:01:06.785432 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Aug 13 00:01:06.790502 systemd-networkd[1744]: eth0: DHCPv4 address 172.31.18.46/20, gateway 172.31.16.1 acquired from 172.31.16.1 Aug 13 00:01:06.800226 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Aug 13 00:01:06.800626 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 45 scanned by (udev-worker) (1765) Aug 13 00:01:06.817446 kernel: ACPI: button: Power Button [PWRF] Aug 13 00:01:06.820448 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Aug 13 00:01:06.822451 kernel: ACPI: button: Sleep Button [SLPF] Aug 13 00:01:06.974785 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 00:01:06.982696 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:01:07.053415 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Aug 13 00:01:07.054585 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Aug 13 00:01:07.062754 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Aug 13 00:01:07.065145 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 13 00:01:07.100413 lvm[1870]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 00:01:07.104549 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 13 00:01:07.138237 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Aug 13 00:01:07.139026 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Aug 13 00:01:07.147646 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Aug 13 00:01:07.151188 lvm[1876]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 00:01:07.162333 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:01:07.164074 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 00:01:07.164839 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 13 00:01:07.165315 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 13 00:01:07.165955 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 13 00:01:07.166465 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 13 00:01:07.167153 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 13 00:01:07.167538 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 00:01:07.167572 systemd[1]: Reached target paths.target - Path Units. Aug 13 00:01:07.167904 systemd[1]: Reached target timers.target - Timer Units. Aug 13 00:01:07.170192 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 13 00:01:07.172316 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 13 00:01:07.175777 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Aug 13 00:01:07.176746 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Aug 13 00:01:07.177624 systemd[1]: Reached target ssh-access.target - SSH Access Available. Aug 13 00:01:07.180687 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
Aug 13 00:01:07.181775 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Aug 13 00:01:07.183716 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Aug 13 00:01:07.184485 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 13 00:01:07.185683 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 00:01:07.186262 systemd[1]: Reached target basic.target - Basic System. Aug 13 00:01:07.186791 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 13 00:01:07.186838 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 13 00:01:07.193803 systemd[1]: Starting containerd.service - containerd container runtime... Aug 13 00:01:07.196586 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Aug 13 00:01:07.199635 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 13 00:01:07.204074 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 13 00:01:07.208628 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 13 00:01:07.210666 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 13 00:01:07.212640 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 13 00:01:07.221636 systemd[1]: Started ntpd.service - Network Time Service. Aug 13 00:01:07.230533 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 13 00:01:07.247121 systemd[1]: Starting setup-oem.service - Setup OEM... Aug 13 00:01:07.253289 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Aug 13 00:01:07.281270 jq[1885]: false Aug 13 00:01:07.282668 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 13 00:01:07.293011 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 13 00:01:07.295798 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 00:01:07.297029 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 13 00:01:07.301049 systemd[1]: Starting update-engine.service - Update Engine... Aug 13 00:01:07.305495 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 13 00:01:07.317954 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 00:01:07.319472 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 13 00:01:07.331799 jq[1905]: true Aug 13 00:01:07.351107 dbus-daemon[1884]: [system] SELinux support is enabled Aug 13 00:01:07.355591 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 13 00:01:07.362921 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 00:01:07.363715 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Aug 13 00:01:07.373749 extend-filesystems[1886]: Found loop4 Aug 13 00:01:07.373749 extend-filesystems[1886]: Found loop5 Aug 13 00:01:07.373749 extend-filesystems[1886]: Found loop6 Aug 13 00:01:07.373749 extend-filesystems[1886]: Found loop7 Aug 13 00:01:07.373749 extend-filesystems[1886]: Found nvme0n1 Aug 13 00:01:07.373749 extend-filesystems[1886]: Found nvme0n1p1 Aug 13 00:01:07.373749 extend-filesystems[1886]: Found nvme0n1p2 Aug 13 00:01:07.373749 extend-filesystems[1886]: Found nvme0n1p3 Aug 13 00:01:07.373749 extend-filesystems[1886]: Found usr Aug 13 00:01:07.373749 extend-filesystems[1886]: Found nvme0n1p4 Aug 13 00:01:07.373749 extend-filesystems[1886]: Found nvme0n1p6 Aug 13 00:01:07.373749 extend-filesystems[1886]: Found nvme0n1p7 Aug 13 00:01:07.373749 extend-filesystems[1886]: Found nvme0n1p9 Aug 13 00:01:07.373749 extend-filesystems[1886]: Checking size of /dev/nvme0n1p9 Aug 13 00:01:07.374829 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 00:01:07.382283 dbus-daemon[1884]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1744 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Aug 13 00:01:07.376094 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 13 00:01:07.416184 dbus-daemon[1884]: [system] Successfully activated service 'org.freedesktop.systemd1' Aug 13 00:01:07.405763 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 13 00:01:07.407680 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 00:01:07.407721 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Aug 13 00:01:07.409733 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 00:01:07.409762 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 13 00:01:07.433710 update_engine[1903]: I20250813 00:01:07.433593 1903 main.cc:92] Flatcar Update Engine starting Aug 13 00:01:07.439607 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Aug 13 00:01:07.442236 ntpd[1888]: ntpd 4.2.8p17@1.4004-o Tue Aug 12 20:59:43 UTC 2025 (1): Starting Aug 13 00:01:07.444771 ntpd[1888]: 13 Aug 00:01:07 ntpd[1888]: ntpd 4.2.8p17@1.4004-o Tue Aug 12 20:59:43 UTC 2025 (1): Starting Aug 13 00:01:07.444771 ntpd[1888]: 13 Aug 00:01:07 ntpd[1888]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Aug 13 00:01:07.444771 ntpd[1888]: 13 Aug 00:01:07 ntpd[1888]: ---------------------------------------------------- Aug 13 00:01:07.444771 ntpd[1888]: 13 Aug 00:01:07 ntpd[1888]: ntp-4 is maintained by Network Time Foundation, Aug 13 00:01:07.444771 ntpd[1888]: 13 Aug 00:01:07 ntpd[1888]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Aug 13 00:01:07.444771 ntpd[1888]: 13 Aug 00:01:07 ntpd[1888]: corporation. Support and training for ntp-4 are Aug 13 00:01:07.444771 ntpd[1888]: 13 Aug 00:01:07 ntpd[1888]: available at https://www.nwtime.org/support Aug 13 00:01:07.444771 ntpd[1888]: 13 Aug 00:01:07 ntpd[1888]: ---------------------------------------------------- Aug 13 00:01:07.442274 ntpd[1888]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Aug 13 00:01:07.442286 ntpd[1888]: ---------------------------------------------------- Aug 13 00:01:07.442297 ntpd[1888]: ntp-4 is maintained by Network Time Foundation, Aug 13 00:01:07.442308 ntpd[1888]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Aug 13 00:01:07.442320 ntpd[1888]: corporation. 
Support and training for ntp-4 are Aug 13 00:01:07.442330 ntpd[1888]: available at https://www.nwtime.org/support Aug 13 00:01:07.442339 ntpd[1888]: ---------------------------------------------------- Aug 13 00:01:07.452063 (ntainerd)[1914]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 13 00:01:07.452995 jq[1913]: true Aug 13 00:01:07.464135 systemd[1]: Started update-engine.service - Update Engine. Aug 13 00:01:07.467257 ntpd[1888]: proto: precision = 0.106 usec (-23) Aug 13 00:01:07.472765 extend-filesystems[1886]: Resized partition /dev/nvme0n1p9 Aug 13 00:01:07.476022 ntpd[1888]: 13 Aug 00:01:07 ntpd[1888]: proto: precision = 0.106 usec (-23) Aug 13 00:01:07.476022 ntpd[1888]: 13 Aug 00:01:07 ntpd[1888]: basedate set to 2025-07-31 Aug 13 00:01:07.476022 ntpd[1888]: 13 Aug 00:01:07 ntpd[1888]: gps base set to 2025-08-03 (week 2378) Aug 13 00:01:07.476145 update_engine[1903]: I20250813 00:01:07.468919 1903 update_check_scheduler.cc:74] Next update check in 10m16s Aug 13 00:01:07.476188 tar[1907]: linux-amd64/LICENSE Aug 13 00:01:07.476188 tar[1907]: linux-amd64/helm Aug 13 00:01:07.473838 ntpd[1888]: basedate set to 2025-07-31 Aug 13 00:01:07.473229 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Aug 13 00:01:07.473862 ntpd[1888]: gps base set to 2025-08-03 (week 2378) Aug 13 00:01:07.494568 extend-filesystems[1935]: resize2fs 1.47.1 (20-May-2024) Aug 13 00:01:07.506867 ntpd[1888]: 13 Aug 00:01:07 ntpd[1888]: Listen and drop on 0 v6wildcard [::]:123 Aug 13 00:01:07.506867 ntpd[1888]: 13 Aug 00:01:07 ntpd[1888]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Aug 13 00:01:07.506867 ntpd[1888]: 13 Aug 00:01:07 ntpd[1888]: Listen normally on 2 lo 127.0.0.1:123 Aug 13 00:01:07.506867 ntpd[1888]: 13 Aug 00:01:07 ntpd[1888]: Listen normally on 3 eth0 172.31.18.46:123 Aug 13 00:01:07.506867 ntpd[1888]: 13 Aug 00:01:07 ntpd[1888]: Listen normally on 4 lo [::1]:123 Aug 13 00:01:07.506867 ntpd[1888]: 13 Aug 00:01:07 ntpd[1888]: bind(21) AF_INET6 fe80::4d2:fff:fe28:40f1%2#123 flags 0x11 failed: Cannot assign requested address Aug 13 00:01:07.506867 ntpd[1888]: 13 Aug 00:01:07 ntpd[1888]: unable to create socket on eth0 (5) for fe80::4d2:fff:fe28:40f1%2#123 Aug 13 00:01:07.506867 ntpd[1888]: 13 Aug 00:01:07 ntpd[1888]: failed to init interface for address fe80::4d2:fff:fe28:40f1%2 Aug 13 00:01:07.500338 ntpd[1888]: Listen and drop on 0 v6wildcard [::]:123 Aug 13 00:01:07.500427 ntpd[1888]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Aug 13 00:01:07.511678 ntpd[1888]: 13 Aug 00:01:07 ntpd[1888]: Listening on routing socket on fd #21 for interface updates Aug 13 00:01:07.500627 ntpd[1888]: Listen normally on 2 lo 127.0.0.1:123 Aug 13 00:01:07.500666 ntpd[1888]: Listen normally on 3 eth0 172.31.18.46:123 Aug 13 00:01:07.500708 ntpd[1888]: Listen normally on 4 lo [::1]:123 Aug 13 00:01:07.500762 ntpd[1888]: bind(21) AF_INET6 fe80::4d2:fff:fe28:40f1%2#123 flags 0x11 failed: Cannot assign requested address Aug 13 00:01:07.500786 ntpd[1888]: unable to create socket on eth0 (5) for fe80::4d2:fff:fe28:40f1%2#123 Aug 13 00:01:07.500801 ntpd[1888]: failed to init interface for address fe80::4d2:fff:fe28:40f1%2 Aug 13 00:01:07.508477 ntpd[1888]: Listening on routing socket on fd #21 
for interface updates Aug 13 00:01:07.514545 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Aug 13 00:01:07.514642 coreos-metadata[1883]: Aug 13 00:01:07.512 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Aug 13 00:01:07.519935 coreos-metadata[1883]: Aug 13 00:01:07.516 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Aug 13 00:01:07.518931 systemd[1]: Finished setup-oem.service - Setup OEM. Aug 13 00:01:07.524530 coreos-metadata[1883]: Aug 13 00:01:07.520 INFO Fetch successful Aug 13 00:01:07.524530 coreos-metadata[1883]: Aug 13 00:01:07.521 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Aug 13 00:01:07.524693 ntpd[1888]: 13 Aug 00:01:07 ntpd[1888]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 13 00:01:07.524693 ntpd[1888]: 13 Aug 00:01:07 ntpd[1888]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 13 00:01:07.523565 ntpd[1888]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 13 00:01:07.523602 ntpd[1888]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 13 00:01:07.525198 coreos-metadata[1883]: Aug 13 00:01:07.524 INFO Fetch successful Aug 13 00:01:07.525198 coreos-metadata[1883]: Aug 13 00:01:07.525 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Aug 13 00:01:07.533709 coreos-metadata[1883]: Aug 13 00:01:07.528 INFO Fetch successful Aug 13 00:01:07.533709 coreos-metadata[1883]: Aug 13 00:01:07.528 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Aug 13 00:01:07.533709 coreos-metadata[1883]: Aug 13 00:01:07.533 INFO Fetch successful Aug 13 00:01:07.533709 coreos-metadata[1883]: Aug 13 00:01:07.533 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Aug 13 00:01:07.534545 coreos-metadata[1883]: Aug 13 00:01:07.534 INFO Fetch failed with 404: resource not found Aug 13 00:01:07.534545 coreos-metadata[1883]: Aug 13 
00:01:07.534 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Aug 13 00:01:07.542093 coreos-metadata[1883]: Aug 13 00:01:07.536 INFO Fetch successful Aug 13 00:01:07.542093 coreos-metadata[1883]: Aug 13 00:01:07.536 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Aug 13 00:01:07.542093 coreos-metadata[1883]: Aug 13 00:01:07.536 INFO Fetch successful Aug 13 00:01:07.542093 coreos-metadata[1883]: Aug 13 00:01:07.536 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Aug 13 00:01:07.542093 coreos-metadata[1883]: Aug 13 00:01:07.538 INFO Fetch successful Aug 13 00:01:07.542093 coreos-metadata[1883]: Aug 13 00:01:07.538 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Aug 13 00:01:07.543190 coreos-metadata[1883]: Aug 13 00:01:07.543 INFO Fetch successful Aug 13 00:01:07.543190 coreos-metadata[1883]: Aug 13 00:01:07.543 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Aug 13 00:01:07.553211 coreos-metadata[1883]: Aug 13 00:01:07.551 INFO Fetch successful Aug 13 00:01:07.634580 systemd-logind[1898]: Watching system buttons on /dev/input/event1 (Power Button) Aug 13 00:01:07.634610 systemd-logind[1898]: Watching system buttons on /dev/input/event3 (Sleep Button) Aug 13 00:01:07.634635 systemd-logind[1898]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 13 00:01:07.642374 systemd-logind[1898]: New seat seat0. Aug 13 00:01:07.647426 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Aug 13 00:01:07.655188 systemd[1]: Started systemd-logind.service - User Login Management. 
Aug 13 00:01:07.670433 extend-filesystems[1935]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Aug 13 00:01:07.670433 extend-filesystems[1935]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 13 00:01:07.670433 extend-filesystems[1935]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Aug 13 00:01:07.664587 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 00:01:07.684595 extend-filesystems[1886]: Resized filesystem in /dev/nvme0n1p9 Aug 13 00:01:07.664872 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 13 00:01:07.697665 bash[1959]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:01:07.693558 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 13 00:01:07.713767 systemd[1]: Starting sshkeys.service... Aug 13 00:01:07.715293 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Aug 13 00:01:07.718240 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 13 00:01:07.764341 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 45 scanned by (udev-worker) (1747) Aug 13 00:01:07.776712 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Aug 13 00:01:07.788957 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Aug 13 00:01:07.884229 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
Aug 13 00:01:07.895019 dbus-daemon[1884]: [system] Successfully activated service 'org.freedesktop.hostname1' Aug 13 00:01:07.900749 dbus-daemon[1884]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1928 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Aug 13 00:01:07.913835 systemd[1]: Starting polkit.service - Authorization Manager... Aug 13 00:01:07.983933 polkitd[1994]: Started polkitd version 121 Aug 13 00:01:08.024910 polkitd[1994]: Loading rules from directory /etc/polkit-1/rules.d Aug 13 00:01:08.024999 polkitd[1994]: Loading rules from directory /usr/share/polkit-1/rules.d Aug 13 00:01:08.026363 polkitd[1994]: Finished loading, compiling and executing 2 rules Aug 13 00:01:08.034137 dbus-daemon[1884]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Aug 13 00:01:08.034344 systemd[1]: Started polkit.service - Authorization Manager. Aug 13 00:01:08.036004 polkitd[1994]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Aug 13 00:01:08.092065 locksmithd[1933]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 00:01:08.100196 systemd-hostnamed[1928]: Hostname set to (transient) Aug 13 00:01:08.101489 systemd-resolved[1705]: System hostname changed to 'ip-172-31-18-46'. 
Aug 13 00:01:08.103921 coreos-metadata[1970]: Aug 13 00:01:08.103 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Aug 13 00:01:08.108708 coreos-metadata[1970]: Aug 13 00:01:08.108 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Aug 13 00:01:08.111595 coreos-metadata[1970]: Aug 13 00:01:08.111 INFO Fetch successful
Aug 13 00:01:08.111595 coreos-metadata[1970]: Aug 13 00:01:08.111 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Aug 13 00:01:08.114429 coreos-metadata[1970]: Aug 13 00:01:08.113 INFO Fetch successful
Aug 13 00:01:08.116792 unknown[1970]: wrote ssh authorized keys file for user: core
Aug 13 00:01:08.165775 update-ssh-keys[2049]: Updated "/home/core/.ssh/authorized_keys"
Aug 13 00:01:08.167884 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Aug 13 00:01:08.171838 systemd[1]: Finished sshkeys.service.
Aug 13 00:01:08.301535 systemd-networkd[1744]: eth0: Gained IPv6LL
Aug 13 00:01:08.306338 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Aug 13 00:01:08.309755 systemd[1]: Reached target network-online.target - Network is Online.
Aug 13 00:01:08.317873 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Aug 13 00:01:08.329623 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 00:01:08.335863 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Aug 13 00:01:08.389806 containerd[1914]: time="2025-08-13T00:01:08.389710774Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Aug 13 00:01:08.394517 sshd_keygen[1925]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Aug 13 00:01:08.461427 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Aug 13 00:01:08.464141 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Aug 13 00:01:08.475866 systemd[1]: Starting issuegen.service - Generate /run/issue...
Aug 13 00:01:08.480798 systemd[1]: Started sshd@0-172.31.18.46:22-139.178.68.195:35778.service - OpenSSH per-connection server daemon (139.178.68.195:35778).
Aug 13 00:01:08.493095 systemd[1]: issuegen.service: Deactivated successfully.
Aug 13 00:01:08.493454 systemd[1]: Finished issuegen.service - Generate /run/issue.
Aug 13 00:01:08.505168 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Aug 13 00:01:08.553987 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Aug 13 00:01:08.562831 systemd[1]: Started getty@tty1.service - Getty on tty1.
Aug 13 00:01:08.573865 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Aug 13 00:01:08.574911 systemd[1]: Reached target getty.target - Login Prompts.
Aug 13 00:01:08.595511 containerd[1914]: time="2025-08-13T00:01:08.595451301Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Aug 13 00:01:08.600945 containerd[1914]: time="2025-08-13T00:01:08.600885314Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Aug 13 00:01:08.600945 containerd[1914]: time="2025-08-13T00:01:08.600941391Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Aug 13 00:01:08.601102 containerd[1914]: time="2025-08-13T00:01:08.600965979Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Aug 13 00:01:08.601187 containerd[1914]: time="2025-08-13T00:01:08.601165238Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Aug 13 00:01:08.601248 containerd[1914]: time="2025-08-13T00:01:08.601206007Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Aug 13 00:01:08.601308 containerd[1914]: time="2025-08-13T00:01:08.601285455Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 00:01:08.601347 containerd[1914]: time="2025-08-13T00:01:08.601313693Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Aug 13 00:01:08.603519 containerd[1914]: time="2025-08-13T00:01:08.603468671Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 00:01:08.603519 containerd[1914]: time="2025-08-13T00:01:08.603515043Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Aug 13 00:01:08.603667 containerd[1914]: time="2025-08-13T00:01:08.603537305Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 00:01:08.603667 containerd[1914]: time="2025-08-13T00:01:08.603551757Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Aug 13 00:01:08.603760 containerd[1914]: time="2025-08-13T00:01:08.603691053Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Aug 13 00:01:08.604013 containerd[1914]: time="2025-08-13T00:01:08.603980440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Aug 13 00:01:08.604253 containerd[1914]: time="2025-08-13T00:01:08.604226311Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 00:01:08.604310 containerd[1914]: time="2025-08-13T00:01:08.604265122Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Aug 13 00:01:08.605584 containerd[1914]: time="2025-08-13T00:01:08.605546667Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Aug 13 00:01:08.605769 containerd[1914]: time="2025-08-13T00:01:08.605655654Z" level=info msg="metadata content store policy set" policy=shared
Aug 13 00:01:08.607916 amazon-ssm-agent[2085]: Initializing new seelog logger
Aug 13 00:01:08.608352 amazon-ssm-agent[2085]: New Seelog Logger Creation Complete
Aug 13 00:01:08.608476 amazon-ssm-agent[2085]: 2025/08/13 00:01:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Aug 13 00:01:08.608476 amazon-ssm-agent[2085]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Aug 13 00:01:08.609893 amazon-ssm-agent[2085]: 2025/08/13 00:01:08 processing appconfig overrides
Aug 13 00:01:08.613988 containerd[1914]: time="2025-08-13T00:01:08.612179867Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Aug 13 00:01:08.613988 containerd[1914]: time="2025-08-13T00:01:08.612257543Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Aug 13 00:01:08.613988 containerd[1914]: time="2025-08-13T00:01:08.612279354Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Aug 13 00:01:08.613988 containerd[1914]: time="2025-08-13T00:01:08.612304536Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Aug 13 00:01:08.613988 containerd[1914]: time="2025-08-13T00:01:08.612337172Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Aug 13 00:01:08.613988 containerd[1914]: time="2025-08-13T00:01:08.612573690Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Aug 13 00:01:08.613988 containerd[1914]: time="2025-08-13T00:01:08.612905468Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Aug 13 00:01:08.613988 containerd[1914]: time="2025-08-13T00:01:08.613049652Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Aug 13 00:01:08.613988 containerd[1914]: time="2025-08-13T00:01:08.613072182Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Aug 13 00:01:08.613988 containerd[1914]: time="2025-08-13T00:01:08.613091914Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Aug 13 00:01:08.613988 containerd[1914]: time="2025-08-13T00:01:08.613140834Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Aug 13 00:01:08.613988 containerd[1914]: time="2025-08-13T00:01:08.613159551Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Aug 13 00:01:08.613988 containerd[1914]: time="2025-08-13T00:01:08.613176631Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Aug 13 00:01:08.613988 containerd[1914]: time="2025-08-13T00:01:08.613194746Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Aug 13 00:01:08.614568 amazon-ssm-agent[2085]: 2025/08/13 00:01:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Aug 13 00:01:08.614568 amazon-ssm-agent[2085]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Aug 13 00:01:08.614568 amazon-ssm-agent[2085]: 2025/08/13 00:01:08 processing appconfig overrides
Aug 13 00:01:08.614568 amazon-ssm-agent[2085]: 2025/08/13 00:01:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Aug 13 00:01:08.614568 amazon-ssm-agent[2085]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Aug 13 00:01:08.614568 amazon-ssm-agent[2085]: 2025/08/13 00:01:08 processing appconfig overrides
Aug 13 00:01:08.614568 amazon-ssm-agent[2085]: 2025-08-13 00:01:08 INFO Proxy environment variables:
Aug 13 00:01:08.614814 containerd[1914]: time="2025-08-13T00:01:08.613279439Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Aug 13 00:01:08.614814 containerd[1914]: time="2025-08-13T00:01:08.613301335Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Aug 13 00:01:08.614814 containerd[1914]: time="2025-08-13T00:01:08.613323123Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Aug 13 00:01:08.614814 containerd[1914]: time="2025-08-13T00:01:08.613341437Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Aug 13 00:01:08.614814 containerd[1914]: time="2025-08-13T00:01:08.613371997Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Aug 13 00:01:08.614814 containerd[1914]: time="2025-08-13T00:01:08.613405915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Aug 13 00:01:08.614814 containerd[1914]: time="2025-08-13T00:01:08.613424788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Aug 13 00:01:08.614814 containerd[1914]: time="2025-08-13T00:01:08.613445955Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Aug 13 00:01:08.614814 containerd[1914]: time="2025-08-13T00:01:08.613465458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Aug 13 00:01:08.614814 containerd[1914]: time="2025-08-13T00:01:08.613501537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Aug 13 00:01:08.614814 containerd[1914]: time="2025-08-13T00:01:08.613522575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Aug 13 00:01:08.614814 containerd[1914]: time="2025-08-13T00:01:08.613543997Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Aug 13 00:01:08.614814 containerd[1914]: time="2025-08-13T00:01:08.613565785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Aug 13 00:01:08.614814 containerd[1914]: time="2025-08-13T00:01:08.613587925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Aug 13 00:01:08.615311 containerd[1914]: time="2025-08-13T00:01:08.613605538Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Aug 13 00:01:08.615311 containerd[1914]: time="2025-08-13T00:01:08.613624562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Aug 13 00:01:08.615311 containerd[1914]: time="2025-08-13T00:01:08.613649902Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Aug 13 00:01:08.615311 containerd[1914]: time="2025-08-13T00:01:08.613673648Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Aug 13 00:01:08.615311 containerd[1914]: time="2025-08-13T00:01:08.613705942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Aug 13 00:01:08.615311 containerd[1914]: time="2025-08-13T00:01:08.613727395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Aug 13 00:01:08.615311 containerd[1914]: time="2025-08-13T00:01:08.613744495Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Aug 13 00:01:08.615311 containerd[1914]: time="2025-08-13T00:01:08.613801376Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Aug 13 00:01:08.615311 containerd[1914]: time="2025-08-13T00:01:08.613851748Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Aug 13 00:01:08.615311 containerd[1914]: time="2025-08-13T00:01:08.613865629Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Aug 13 00:01:08.615311 containerd[1914]: time="2025-08-13T00:01:08.613881800Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Aug 13 00:01:08.615311 containerd[1914]: time="2025-08-13T00:01:08.613894755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Aug 13 00:01:08.615311 containerd[1914]: time="2025-08-13T00:01:08.613912912Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Aug 13 00:01:08.615311 containerd[1914]: time="2025-08-13T00:01:08.613927584Z" level=info msg="NRI interface is disabled by configuration."
Aug 13 00:01:08.616917 containerd[1914]: time="2025-08-13T00:01:08.613941061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Aug 13 00:01:08.617490 containerd[1914]: time="2025-08-13T00:01:08.614336180Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Aug 13 00:01:08.617490 containerd[1914]: time="2025-08-13T00:01:08.615570606Z" level=info msg="Connect containerd service"
Aug 13 00:01:08.617490 containerd[1914]: time="2025-08-13T00:01:08.615716748Z" level=info msg="using legacy CRI server"
Aug 13 00:01:08.617490 containerd[1914]: time="2025-08-13T00:01:08.615730351Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Aug 13 00:01:08.617490 containerd[1914]: time="2025-08-13T00:01:08.615931459Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Aug 13 00:01:08.618375 containerd[1914]: time="2025-08-13T00:01:08.617825052Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 13 00:01:08.618828 containerd[1914]: time="2025-08-13T00:01:08.618583929Z" level=info msg="Start subscribing containerd event"
Aug 13 00:01:08.618828 containerd[1914]: time="2025-08-13T00:01:08.618738322Z" level=info msg="Start recovering state"
Aug 13 00:01:08.619580 containerd[1914]: time="2025-08-13T00:01:08.619472024Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Aug 13 00:01:08.619580 containerd[1914]: time="2025-08-13T00:01:08.619536028Z" level=info msg=serving... address=/run/containerd/containerd.sock
Aug 13 00:01:08.624667 containerd[1914]: time="2025-08-13T00:01:08.624512630Z" level=info msg="Start event monitor"
Aug 13 00:01:08.625054 containerd[1914]: time="2025-08-13T00:01:08.624893277Z" level=info msg="Start snapshots syncer"
Aug 13 00:01:08.625054 containerd[1914]: time="2025-08-13T00:01:08.624921163Z" level=info msg="Start cni network conf syncer for default"
Aug 13 00:01:08.625054 containerd[1914]: time="2025-08-13T00:01:08.624970589Z" level=info msg="Start streaming server"
Aug 13 00:01:08.625920 systemd[1]: Started containerd.service - containerd container runtime.
Aug 13 00:01:08.627327 containerd[1914]: time="2025-08-13T00:01:08.626815759Z" level=info msg="containerd successfully booted in 0.239638s"
Aug 13 00:01:08.630378 amazon-ssm-agent[2085]: 2025/08/13 00:01:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Aug 13 00:01:08.630378 amazon-ssm-agent[2085]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Aug 13 00:01:08.630378 amazon-ssm-agent[2085]: 2025/08/13 00:01:08 processing appconfig overrides
Aug 13 00:01:08.713454 amazon-ssm-agent[2085]: 2025-08-13 00:01:08 INFO https_proxy:
Aug 13 00:01:08.788128 sshd[2108]: Accepted publickey for core from 139.178.68.195 port 35778 ssh2: RSA SHA256:tE+UAy7Iby4Ug1y4oHlPrc3nQAXYFKVNjRvFeG8iCz8
Aug 13 00:01:08.789196 sshd-session[2108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:01:08.805138 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Aug 13 00:01:08.813818 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Aug 13 00:01:08.818912 amazon-ssm-agent[2085]: 2025-08-13 00:01:08 INFO http_proxy:
Aug 13 00:01:08.845478 systemd-logind[1898]: New session 1 of user core.
Aug 13 00:01:08.864318 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Aug 13 00:01:08.878999 systemd[1]: Starting user@500.service - User Manager for UID 500...
Aug 13 00:01:08.892497 (systemd)[2128]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:01:08.900029 systemd-logind[1898]: New session c1 of user core.
Aug 13 00:01:08.917069 amazon-ssm-agent[2085]: 2025-08-13 00:01:08 INFO no_proxy:
Aug 13 00:01:09.014614 amazon-ssm-agent[2085]: 2025-08-13 00:01:08 INFO Checking if agent identity type OnPrem can be assumed
Aug 13 00:01:09.033028 amazon-ssm-agent[2085]: 2025-08-13 00:01:08 INFO Checking if agent identity type EC2 can be assumed
Aug 13 00:01:09.033028 amazon-ssm-agent[2085]: 2025-08-13 00:01:08 INFO Agent will take identity from EC2
Aug 13 00:01:09.033028 amazon-ssm-agent[2085]: 2025-08-13 00:01:08 INFO [amazon-ssm-agent] using named pipe channel for IPC
Aug 13 00:01:09.033224 amazon-ssm-agent[2085]: 2025-08-13 00:01:08 INFO [amazon-ssm-agent] using named pipe channel for IPC
Aug 13 00:01:09.033224 amazon-ssm-agent[2085]: 2025-08-13 00:01:08 INFO [amazon-ssm-agent] using named pipe channel for IPC
Aug 13 00:01:09.033224 amazon-ssm-agent[2085]: 2025-08-13 00:01:08 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Aug 13 00:01:09.033224 amazon-ssm-agent[2085]: 2025-08-13 00:01:08 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Aug 13 00:01:09.033224 amazon-ssm-agent[2085]: 2025-08-13 00:01:08 INFO [amazon-ssm-agent] Starting Core Agent
Aug 13 00:01:09.033224 amazon-ssm-agent[2085]: 2025-08-13 00:01:08 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Aug 13 00:01:09.033224 amazon-ssm-agent[2085]: 2025-08-13 00:01:08 INFO [Registrar] Starting registrar module
Aug 13 00:01:09.033224 amazon-ssm-agent[2085]: 2025-08-13 00:01:08 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Aug 13 00:01:09.033224 amazon-ssm-agent[2085]: 2025-08-13 00:01:08 INFO [EC2Identity] EC2 registration was successful.
Aug 13 00:01:09.033224 amazon-ssm-agent[2085]: 2025-08-13 00:01:08 INFO [CredentialRefresher] credentialRefresher has started
Aug 13 00:01:09.033224 amazon-ssm-agent[2085]: 2025-08-13 00:01:08 INFO [CredentialRefresher] Starting credentials refresher loop
Aug 13 00:01:09.033224 amazon-ssm-agent[2085]: 2025-08-13 00:01:09 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Aug 13 00:01:09.114110 amazon-ssm-agent[2085]: 2025-08-13 00:01:09 INFO [CredentialRefresher] Next credential rotation will be in 30.8833265504 minutes
Aug 13 00:01:09.166516 tar[1907]: linux-amd64/README.md
Aug 13 00:01:09.185436 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Aug 13 00:01:09.195709 systemd[2128]: Queued start job for default target default.target.
Aug 13 00:01:09.199809 systemd[2128]: Created slice app.slice - User Application Slice.
Aug 13 00:01:09.200304 systemd[2128]: Reached target paths.target - Paths.
Aug 13 00:01:09.200372 systemd[2128]: Reached target timers.target - Timers.
Aug 13 00:01:09.202614 systemd[2128]: Starting dbus.socket - D-Bus User Message Bus Socket...
Aug 13 00:01:09.217540 systemd[2128]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Aug 13 00:01:09.217709 systemd[2128]: Reached target sockets.target - Sockets.
Aug 13 00:01:09.217776 systemd[2128]: Reached target basic.target - Basic System.
Aug 13 00:01:09.217828 systemd[2128]: Reached target default.target - Main User Target.
Aug 13 00:01:09.217868 systemd[2128]: Startup finished in 296ms.
Aug 13 00:01:09.218097 systemd[1]: Started user@500.service - User Manager for UID 500.
Aug 13 00:01:09.225623 systemd[1]: Started session-1.scope - Session 1 of User core.
Aug 13 00:01:09.373683 systemd[1]: Started sshd@1-172.31.18.46:22-139.178.68.195:35788.service - OpenSSH per-connection server daemon (139.178.68.195:35788).
Aug 13 00:01:09.538495 sshd[2143]: Accepted publickey for core from 139.178.68.195 port 35788 ssh2: RSA SHA256:tE+UAy7Iby4Ug1y4oHlPrc3nQAXYFKVNjRvFeG8iCz8
Aug 13 00:01:09.541126 sshd-session[2143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:01:09.548013 systemd-logind[1898]: New session 2 of user core.
Aug 13 00:01:09.552586 systemd[1]: Started session-2.scope - Session 2 of User core.
Aug 13 00:01:09.673280 sshd[2145]: Connection closed by 139.178.68.195 port 35788
Aug 13 00:01:09.673988 sshd-session[2143]: pam_unix(sshd:session): session closed for user core
Aug 13 00:01:09.679946 systemd[1]: sshd@1-172.31.18.46:22-139.178.68.195:35788.service: Deactivated successfully.
Aug 13 00:01:09.682377 systemd[1]: session-2.scope: Deactivated successfully.
Aug 13 00:01:09.683904 systemd-logind[1898]: Session 2 logged out. Waiting for processes to exit.
Aug 13 00:01:09.686041 systemd-logind[1898]: Removed session 2.
Aug 13 00:01:09.713407 systemd[1]: Started sshd@2-172.31.18.46:22-139.178.68.195:46426.service - OpenSSH per-connection server daemon (139.178.68.195:46426).
Aug 13 00:01:09.878272 sshd[2151]: Accepted publickey for core from 139.178.68.195 port 46426 ssh2: RSA SHA256:tE+UAy7Iby4Ug1y4oHlPrc3nQAXYFKVNjRvFeG8iCz8
Aug 13 00:01:09.880042 sshd-session[2151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:01:09.887493 systemd-logind[1898]: New session 3 of user core.
Aug 13 00:01:09.891622 systemd[1]: Started session-3.scope - Session 3 of User core.
Aug 13 00:01:10.015181 sshd[2153]: Connection closed by 139.178.68.195 port 46426
Aug 13 00:01:10.017022 sshd-session[2151]: pam_unix(sshd:session): session closed for user core
Aug 13 00:01:10.021868 systemd[1]: sshd@2-172.31.18.46:22-139.178.68.195:46426.service: Deactivated successfully.
Aug 13 00:01:10.024409 systemd[1]: session-3.scope: Deactivated successfully.
Aug 13 00:01:10.027476 systemd-logind[1898]: Session 3 logged out. Waiting for processes to exit.
Aug 13 00:01:10.028641 systemd-logind[1898]: Removed session 3.
Aug 13 00:01:10.047617 amazon-ssm-agent[2085]: 2025-08-13 00:01:10 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Aug 13 00:01:10.149892 amazon-ssm-agent[2085]: 2025-08-13 00:01:10 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2159) started
Aug 13 00:01:10.250417 amazon-ssm-agent[2085]: 2025-08-13 00:01:10 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Aug 13 00:01:10.283489 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 00:01:10.284803 systemd[1]: Reached target multi-user.target - Multi-User System.
Aug 13 00:01:10.285928 systemd[1]: Startup finished in 595ms (kernel) + 10.043s (initrd) + 9.581s (userspace) = 20.219s.
Aug 13 00:01:10.290935 (kubelet)[2175]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 00:01:10.442762 ntpd[1888]: Listen normally on 6 eth0 [fe80::4d2:fff:fe28:40f1%2]:123
Aug 13 00:01:10.443276 ntpd[1888]: 13 Aug 00:01:10 ntpd[1888]: Listen normally on 6 eth0 [fe80::4d2:fff:fe28:40f1%2]:123
Aug 13 00:01:11.111700 kubelet[2175]: E0813 00:01:11.111584 2175 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 00:01:11.114023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 00:01:11.114186 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 00:01:11.114879 systemd[1]: kubelet.service: Consumed 1.053s CPU time, 268.3M memory peak.
Aug 13 00:01:14.954847 systemd-resolved[1705]: Clock change detected. Flushing caches.
Aug 13 00:01:20.565768 systemd[1]: Started sshd@3-172.31.18.46:22-139.178.68.195:37716.service - OpenSSH per-connection server daemon (139.178.68.195:37716).
Aug 13 00:01:20.730975 sshd[2187]: Accepted publickey for core from 139.178.68.195 port 37716 ssh2: RSA SHA256:tE+UAy7Iby4Ug1y4oHlPrc3nQAXYFKVNjRvFeG8iCz8
Aug 13 00:01:20.732504 sshd-session[2187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:01:20.738179 systemd-logind[1898]: New session 4 of user core.
Aug 13 00:01:20.745318 systemd[1]: Started session-4.scope - Session 4 of User core.
Aug 13 00:01:20.867437 sshd[2189]: Connection closed by 139.178.68.195 port 37716
Aug 13 00:01:20.868149 sshd-session[2187]: pam_unix(sshd:session): session closed for user core
Aug 13 00:01:20.872135 systemd[1]: sshd@3-172.31.18.46:22-139.178.68.195:37716.service: Deactivated successfully.
Aug 13 00:01:20.874216 systemd[1]: session-4.scope: Deactivated successfully.
Aug 13 00:01:20.875139 systemd-logind[1898]: Session 4 logged out. Waiting for processes to exit.
Aug 13 00:01:20.876412 systemd-logind[1898]: Removed session 4.
Aug 13 00:01:20.911429 systemd[1]: Started sshd@4-172.31.18.46:22-139.178.68.195:37722.service - OpenSSH per-connection server daemon (139.178.68.195:37722).
Aug 13 00:01:21.069604 sshd[2195]: Accepted publickey for core from 139.178.68.195 port 37722 ssh2: RSA SHA256:tE+UAy7Iby4Ug1y4oHlPrc3nQAXYFKVNjRvFeG8iCz8
Aug 13 00:01:21.071188 sshd-session[2195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:01:21.076040 systemd-logind[1898]: New session 5 of user core.
Aug 13 00:01:21.083228 systemd[1]: Started session-5.scope - Session 5 of User core.
Aug 13 00:01:21.198041 sshd[2197]: Connection closed by 139.178.68.195 port 37722
Aug 13 00:01:21.199312 sshd-session[2195]: pam_unix(sshd:session): session closed for user core
Aug 13 00:01:21.209915 systemd[1]: sshd@4-172.31.18.46:22-139.178.68.195:37722.service: Deactivated successfully.
Aug 13 00:01:21.212257 systemd[1]: session-5.scope: Deactivated successfully.
Aug 13 00:01:21.213970 systemd-logind[1898]: Session 5 logged out. Waiting for processes to exit.
Aug 13 00:01:21.215597 systemd-logind[1898]: Removed session 5.
Aug 13 00:01:21.234059 systemd[1]: Started sshd@5-172.31.18.46:22-139.178.68.195:37732.service - OpenSSH per-connection server daemon (139.178.68.195:37732).
Aug 13 00:01:21.397775 sshd[2203]: Accepted publickey for core from 139.178.68.195 port 37732 ssh2: RSA SHA256:tE+UAy7Iby4Ug1y4oHlPrc3nQAXYFKVNjRvFeG8iCz8
Aug 13 00:01:21.399300 sshd-session[2203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:01:21.404240 systemd-logind[1898]: New session 6 of user core.
Aug 13 00:01:21.412245 systemd[1]: Started session-6.scope - Session 6 of User core.
Aug 13 00:01:21.529747 sshd[2205]: Connection closed by 139.178.68.195 port 37732
Aug 13 00:01:21.530627 sshd-session[2203]: pam_unix(sshd:session): session closed for user core
Aug 13 00:01:21.533731 systemd[1]: sshd@5-172.31.18.46:22-139.178.68.195:37732.service: Deactivated successfully.
Aug 13 00:01:21.535698 systemd[1]: session-6.scope: Deactivated successfully.
Aug 13 00:01:21.537046 systemd-logind[1898]: Session 6 logged out. Waiting for processes to exit.
Aug 13 00:01:21.538240 systemd-logind[1898]: Removed session 6.
Aug 13 00:01:21.563916 systemd[1]: Started sshd@6-172.31.18.46:22-139.178.68.195:37738.service - OpenSSH per-connection server daemon (139.178.68.195:37738).
Aug 13 00:01:21.693913 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Aug 13 00:01:21.702664 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:01:21.725501 sshd[2211]: Accepted publickey for core from 139.178.68.195 port 37738 ssh2: RSA SHA256:tE+UAy7Iby4Ug1y4oHlPrc3nQAXYFKVNjRvFeG8iCz8 Aug 13 00:01:21.726271 sshd-session[2211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:01:21.734062 systemd-logind[1898]: New session 7 of user core. Aug 13 00:01:21.737202 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 13 00:01:21.886613 sudo[2217]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 00:01:21.888414 sudo[2217]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:01:21.901001 sudo[2217]: pam_unix(sudo:session): session closed for user root Aug 13 00:01:21.923950 sshd[2216]: Connection closed by 139.178.68.195 port 37738 Aug 13 00:01:21.924552 sshd-session[2211]: pam_unix(sshd:session): session closed for user core Aug 13 00:01:21.937440 systemd[1]: sshd@6-172.31.18.46:22-139.178.68.195:37738.service: Deactivated successfully. Aug 13 00:01:21.941261 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:01:21.942092 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 00:01:21.945824 systemd-logind[1898]: Session 7 logged out. Waiting for processes to exit. Aug 13 00:01:21.947551 (kubelet)[2225]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:01:21.962408 systemd[1]: Started sshd@7-172.31.18.46:22-139.178.68.195:37744.service - OpenSSH per-connection server daemon (139.178.68.195:37744). Aug 13 00:01:21.964474 systemd-logind[1898]: Removed session 7. 
Aug 13 00:01:22.007849 kubelet[2225]: E0813 00:01:22.007792 2225 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:01:22.012213 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:01:22.012423 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:01:22.012949 systemd[1]: kubelet.service: Consumed 181ms CPU time, 107.9M memory peak. Aug 13 00:01:22.134798 sshd[2232]: Accepted publickey for core from 139.178.68.195 port 37744 ssh2: RSA SHA256:tE+UAy7Iby4Ug1y4oHlPrc3nQAXYFKVNjRvFeG8iCz8 Aug 13 00:01:22.136391 sshd-session[2232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:01:22.142083 systemd-logind[1898]: New session 8 of user core. Aug 13 00:01:22.148225 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 13 00:01:22.247488 sudo[2239]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 00:01:22.247786 sudo[2239]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:01:22.252204 sudo[2239]: pam_unix(sudo:session): session closed for user root Aug 13 00:01:22.258487 sudo[2238]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Aug 13 00:01:22.258878 sudo[2238]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:01:22.274528 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 13 00:01:22.319605 augenrules[2261]: No rules Aug 13 00:01:22.321029 systemd[1]: audit-rules.service: Deactivated successfully. 
Aug 13 00:01:22.321269 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 13 00:01:22.322445 sudo[2238]: pam_unix(sudo:session): session closed for user root Aug 13 00:01:22.345251 sshd[2237]: Connection closed by 139.178.68.195 port 37744 Aug 13 00:01:22.345954 sshd-session[2232]: pam_unix(sshd:session): session closed for user core Aug 13 00:01:22.349691 systemd[1]: sshd@7-172.31.18.46:22-139.178.68.195:37744.service: Deactivated successfully. Aug 13 00:01:22.352269 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 00:01:22.353749 systemd-logind[1898]: Session 8 logged out. Waiting for processes to exit. Aug 13 00:01:22.355092 systemd-logind[1898]: Removed session 8. Aug 13 00:01:22.377055 systemd[1]: Started sshd@8-172.31.18.46:22-139.178.68.195:37752.service - OpenSSH per-connection server daemon (139.178.68.195:37752). Aug 13 00:01:22.539145 sshd[2270]: Accepted publickey for core from 139.178.68.195 port 37752 ssh2: RSA SHA256:tE+UAy7Iby4Ug1y4oHlPrc3nQAXYFKVNjRvFeG8iCz8 Aug 13 00:01:22.540537 sshd-session[2270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:01:22.546119 systemd-logind[1898]: New session 9 of user core. Aug 13 00:01:22.552222 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 13 00:01:22.650052 sudo[2273]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 00:01:22.650641 sudo[2273]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:01:23.264592 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Aug 13 00:01:23.266587 (dockerd)[2291]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 13 00:01:23.896001 dockerd[2291]: time="2025-08-13T00:01:23.895937413Z" level=info msg="Starting up" Aug 13 00:01:24.079560 dockerd[2291]: time="2025-08-13T00:01:24.079337571Z" level=info msg="Loading containers: start." Aug 13 00:01:24.281012 kernel: Initializing XFRM netlink socket Aug 13 00:01:24.336191 (udev-worker)[2400]: Network interface NamePolicy= disabled on kernel command line. Aug 13 00:01:24.402580 systemd-networkd[1744]: docker0: Link UP Aug 13 00:01:24.434592 dockerd[2291]: time="2025-08-13T00:01:24.434528355Z" level=info msg="Loading containers: done." Aug 13 00:01:24.451602 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1750267520-merged.mount: Deactivated successfully. Aug 13 00:01:24.455340 dockerd[2291]: time="2025-08-13T00:01:24.454912980Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 00:01:24.455340 dockerd[2291]: time="2025-08-13T00:01:24.455045231Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Aug 13 00:01:24.455340 dockerd[2291]: time="2025-08-13T00:01:24.455159158Z" level=info msg="Daemon has completed initialization" Aug 13 00:01:24.491997 dockerd[2291]: time="2025-08-13T00:01:24.491911685Z" level=info msg="API listen on /run/docker.sock" Aug 13 00:01:24.492377 systemd[1]: Started docker.service - Docker Application Container Engine. 
Aug 13 00:01:25.389943 containerd[1914]: time="2025-08-13T00:01:25.389904708Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.3\"" Aug 13 00:01:26.009096 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1292080688.mount: Deactivated successfully. Aug 13 00:01:27.684737 containerd[1914]: time="2025-08-13T00:01:27.684682035Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:01:27.685660 containerd[1914]: time="2025-08-13T00:01:27.685628844Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.3: active requests=0, bytes read=30078237" Aug 13 00:01:27.688019 containerd[1914]: time="2025-08-13T00:01:27.687327007Z" level=info msg="ImageCreate event name:\"sha256:a92b4b92a991677d355596cc4aa9b0b12cbc38e8cbdc1e476548518ae045bc4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:01:27.690761 containerd[1914]: time="2025-08-13T00:01:27.690203732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:125a8b488def5ea24e2de5682ab1abf063163aae4d89ce21811a45f3ecf23816\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:01:27.691586 containerd[1914]: time="2025-08-13T00:01:27.691546067Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.3\" with image id \"sha256:a92b4b92a991677d355596cc4aa9b0b12cbc38e8cbdc1e476548518ae045bc4a\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:125a8b488def5ea24e2de5682ab1abf063163aae4d89ce21811a45f3ecf23816\", size \"30075037\" in 2.301600939s" Aug 13 00:01:27.691671 containerd[1914]: time="2025-08-13T00:01:27.691594866Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.3\" returns image reference \"sha256:a92b4b92a991677d355596cc4aa9b0b12cbc38e8cbdc1e476548518ae045bc4a\""
Aug 13 00:01:27.692395 containerd[1914]: time="2025-08-13T00:01:27.692358817Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.3\"" Aug 13 00:01:29.606637 containerd[1914]: time="2025-08-13T00:01:29.606578428Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:01:29.608004 containerd[1914]: time="2025-08-13T00:01:29.607807863Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.3: active requests=0, bytes read=26019361" Aug 13 00:01:29.610514 containerd[1914]: time="2025-08-13T00:01:29.610469507Z" level=info msg="ImageCreate event name:\"sha256:bf97fadcef43049604abcf0caf4f35229fbee25bd0cdb6fdc1d2bbb4f03d9660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:01:29.615540 containerd[1914]: time="2025-08-13T00:01:29.614497220Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:96091626e37c5d5920ee6c3203b783cc01a08f287ec0713aeb7809bb62ccea90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:01:29.615540 containerd[1914]: time="2025-08-13T00:01:29.615396908Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.3\" with image id \"sha256:bf97fadcef43049604abcf0caf4f35229fbee25bd0cdb6fdc1d2bbb4f03d9660\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:96091626e37c5d5920ee6c3203b783cc01a08f287ec0713aeb7809bb62ccea90\", size \"27646922\" in 1.922890976s" Aug 13 00:01:29.615540 containerd[1914]: time="2025-08-13T00:01:29.615426243Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.3\" returns image reference \"sha256:bf97fadcef43049604abcf0caf4f35229fbee25bd0cdb6fdc1d2bbb4f03d9660\"" Aug 13 00:01:29.616046 containerd[1914]: time="2025-08-13T00:01:29.616013536Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.3\""
Aug 13 00:01:31.326654 containerd[1914]: time="2025-08-13T00:01:31.326585207Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:01:31.327704 containerd[1914]: time="2025-08-13T00:01:31.327584394Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.3: active requests=0, bytes read=20155013" Aug 13 00:01:31.328709 containerd[1914]: time="2025-08-13T00:01:31.328674213Z" level=info msg="ImageCreate event name:\"sha256:41376797d5122e388663ab6d0ad583e58cff63e1a0f1eebfb31d615d8f1c1c87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:01:31.332934 containerd[1914]: time="2025-08-13T00:01:31.332519343Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f3a2ffdd7483168205236f7762e9a1933f17dd733bc0188b52bddab9c0762868\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:01:31.333709 containerd[1914]: time="2025-08-13T00:01:31.333671832Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.3\" with image id \"sha256:41376797d5122e388663ab6d0ad583e58cff63e1a0f1eebfb31d615d8f1c1c87\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f3a2ffdd7483168205236f7762e9a1933f17dd733bc0188b52bddab9c0762868\", size \"21782592\" in 1.717543189s" Aug 13 00:01:31.333801 containerd[1914]: time="2025-08-13T00:01:31.333715194Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.3\" returns image reference \"sha256:41376797d5122e388663ab6d0ad583e58cff63e1a0f1eebfb31d615d8f1c1c87\"" Aug 13 00:01:31.334304 containerd[1914]: time="2025-08-13T00:01:31.334176121Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\"" Aug 13 00:01:32.025702 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 13 00:01:32.034294 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 00:01:32.377950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:01:32.391819 (kubelet)[2555]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:01:32.474909 kubelet[2555]: E0813 00:01:32.474621 2555 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:01:32.478856 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:01:32.479081 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:01:32.480428 systemd[1]: kubelet.service: Consumed 204ms CPU time, 108.2M memory peak. Aug 13 00:01:32.636076 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3921269739.mount: Deactivated successfully. 
Aug 13 00:01:33.304065 containerd[1914]: time="2025-08-13T00:01:33.304003151Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:01:33.309065 containerd[1914]: time="2025-08-13T00:01:33.308788196Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.3: active requests=0, bytes read=31892666" Aug 13 00:01:33.311860 containerd[1914]: time="2025-08-13T00:01:33.311793298Z" level=info msg="ImageCreate event name:\"sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:01:33.321710 containerd[1914]: time="2025-08-13T00:01:33.321657146Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c69929cfba9e38305eb1e20ca859aeb90e0d2a7326eab9bb1e8298882fe626cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:01:33.322526 containerd[1914]: time="2025-08-13T00:01:33.322465352Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.3\" with image id \"sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234\", repo tag \"registry.k8s.io/kube-proxy:v1.33.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:c69929cfba9e38305eb1e20ca859aeb90e0d2a7326eab9bb1e8298882fe626cd\", size \"31891685\" in 1.988088989s" Aug 13 00:01:33.322526 containerd[1914]: time="2025-08-13T00:01:33.322498903Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\" returns image reference \"sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234\"" Aug 13 00:01:33.323178 containerd[1914]: time="2025-08-13T00:01:33.323137681Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Aug 13 00:01:33.844834 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2011917171.mount: Deactivated successfully. 
Aug 13 00:01:35.065306 containerd[1914]: time="2025-08-13T00:01:35.065232663Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:01:35.066697 containerd[1914]: time="2025-08-13T00:01:35.066456338Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Aug 13 00:01:35.069013 containerd[1914]: time="2025-08-13T00:01:35.067698396Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:01:35.072211 containerd[1914]: time="2025-08-13T00:01:35.072089511Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:01:35.074090 containerd[1914]: time="2025-08-13T00:01:35.074050812Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.750881026s" Aug 13 00:01:35.074090 containerd[1914]: time="2025-08-13T00:01:35.074087389Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Aug 13 00:01:35.074618 containerd[1914]: time="2025-08-13T00:01:35.074493367Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 00:01:35.510849 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount512604610.mount: Deactivated successfully. 
Aug 13 00:01:35.517127 containerd[1914]: time="2025-08-13T00:01:35.517059217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:01:35.517971 containerd[1914]: time="2025-08-13T00:01:35.517909343Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Aug 13 00:01:35.519953 containerd[1914]: time="2025-08-13T00:01:35.518668840Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:01:35.523164 containerd[1914]: time="2025-08-13T00:01:35.521100812Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:01:35.523164 containerd[1914]: time="2025-08-13T00:01:35.521903479Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 447.387687ms" Aug 13 00:01:35.523164 containerd[1914]: time="2025-08-13T00:01:35.521937985Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 00:01:35.523531 containerd[1914]: time="2025-08-13T00:01:35.523440447Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Aug 13 00:01:35.981310 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3699464453.mount: Deactivated successfully. 
Aug 13 00:01:38.049113 containerd[1914]: time="2025-08-13T00:01:38.049011957Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:01:38.052200 containerd[1914]: time="2025-08-13T00:01:38.052135619Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58247175" Aug 13 00:01:38.053930 containerd[1914]: time="2025-08-13T00:01:38.053873271Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:01:38.057376 containerd[1914]: time="2025-08-13T00:01:38.057323119Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:01:38.058612 containerd[1914]: time="2025-08-13T00:01:38.058374608Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.534902543s" Aug 13 00:01:38.058612 containerd[1914]: time="2025-08-13T00:01:38.058413911Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Aug 13 00:01:38.647375 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Aug 13 00:01:40.759687 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:01:40.760057 systemd[1]: kubelet.service: Consumed 204ms CPU time, 108.2M memory peak. Aug 13 00:01:40.766363 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Aug 13 00:01:40.806671 systemd[1]: Reload requested from client PID 2709 ('systemctl') (unit session-9.scope)... Aug 13 00:01:40.806689 systemd[1]: Reloading... Aug 13 00:01:40.927077 zram_generator::config[2754]: No configuration found. Aug 13 00:01:41.113678 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:01:41.232621 systemd[1]: Reloading finished in 425 ms. Aug 13 00:01:41.295297 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:01:41.296511 (kubelet)[2808]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:01:41.303417 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:01:41.305179 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:01:41.305553 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:01:41.305634 systemd[1]: kubelet.service: Consumed 136ms CPU time, 99.3M memory peak. Aug 13 00:01:41.311439 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:01:42.000260 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:01:42.007361 (kubelet)[2820]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:01:42.078773 kubelet[2820]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:01:42.079415 kubelet[2820]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Aug 13 00:01:42.079415 kubelet[2820]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:01:42.086597 kubelet[2820]: I0813 00:01:42.085547 2820 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:01:43.094714 kubelet[2820]: I0813 00:01:43.094662 2820 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Aug 13 00:01:43.094714 kubelet[2820]: I0813 00:01:43.094697 2820 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:01:43.095142 kubelet[2820]: I0813 00:01:43.094938 2820 server.go:956] "Client rotation is on, will bootstrap in background" Aug 13 00:01:43.142573 kubelet[2820]: I0813 00:01:43.142436 2820 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:01:43.144547 kubelet[2820]: E0813 00:01:43.144433 2820 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.18.46:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.18.46:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Aug 13 00:01:43.175208 kubelet[2820]: E0813 00:01:43.175139 2820 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:01:43.175208 kubelet[2820]: I0813 00:01:43.175199 2820 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Aug 13 00:01:43.182533 kubelet[2820]: I0813 00:01:43.182475 2820 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 13 00:01:43.186089 kubelet[2820]: I0813 00:01:43.186026 2820 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:01:43.190890 kubelet[2820]: I0813 00:01:43.186076 2820 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-46","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Aug 13 00:01:43.190890 kubelet[2820]: I0813 00:01:43.190883 2820 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:01:43.190890 kubelet[2820]: I0813 00:01:43.190897 2820 container_manager_linux.go:303] "Creating device plugin manager" Aug 13 00:01:43.192155 kubelet[2820]: I0813 00:01:43.192129 2820 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:01:43.195440 kubelet[2820]: I0813 00:01:43.195335 2820 kubelet.go:480] "Attempting to sync node with API server" Aug 13 00:01:43.195440 kubelet[2820]: I0813 00:01:43.195375 2820 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:01:43.195440 kubelet[2820]: I0813 00:01:43.195400 2820 kubelet.go:386] "Adding apiserver pod source" Aug 13 00:01:43.198019 kubelet[2820]: I0813 00:01:43.197618 2820 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:01:43.206003 kubelet[2820]: E0813 00:01:43.204863 2820 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.18.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-46&limit=500&resourceVersion=0\": dial tcp 172.31.18.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Aug 13 00:01:43.209617 kubelet[2820]: I0813 00:01:43.208808 2820 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Aug 13 00:01:43.209617 kubelet[2820]: I0813 00:01:43.209424 2820 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Aug 13 00:01:43.211141 kubelet[2820]: W0813 00:01:43.210458 2820 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Aug 13 00:01:43.216503 kubelet[2820]: I0813 00:01:43.216390 2820 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 00:01:43.216503 kubelet[2820]: I0813 00:01:43.216475 2820 server.go:1289] "Started kubelet" Aug 13 00:01:43.219721 kubelet[2820]: E0813 00:01:43.219678 2820 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.18.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.18.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Aug 13 00:01:43.222634 kubelet[2820]: I0813 00:01:43.221822 2820 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:01:43.225819 kubelet[2820]: I0813 00:01:43.225771 2820 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:01:43.227081 kubelet[2820]: I0813 00:01:43.227021 2820 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:01:43.228008 kubelet[2820]: I0813 00:01:43.227485 2820 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 13 00:01:43.237272 kubelet[2820]: E0813 00:01:43.230832 2820 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.18.46:6443/api/v1/namespaces/default/events\": dial tcp 172.31.18.46:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-18-46.185b2a95eda4a74b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-46,UID:ip-172-31-18-46,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-46,},FirstTimestamp:2025-08-13 00:01:43.216424779 +0000 UTC m=+1.203756665,LastTimestamp:2025-08-13 00:01:43.216424779 +0000 UTC m=+1.203756665,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-46,}" Aug 13 00:01:43.237932 kubelet[2820]: I0813 00:01:43.237909 2820 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 00:01:43.238203 kubelet[2820]: I0813 00:01:43.238134 2820 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:01:43.243359 kubelet[2820]: I0813 00:01:43.243330 2820 server.go:317] "Adding debug handlers to kubelet server" Aug 13 00:01:43.246486 kubelet[2820]: E0813 00:01:43.245148 2820 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.18.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.18.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Aug 13 00:01:43.246486 kubelet[2820]: I0813 00:01:43.243536 2820 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:01:43.249027 kubelet[2820]: E0813 00:01:43.248249 2820 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-18-46\" not found" Aug 13 00:01:43.249027 kubelet[2820]: E0813 00:01:43.248500 2820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-46?timeout=10s\": dial tcp 172.31.18.46:6443: connect: connection refused" interval="200ms" Aug 13 00:01:43.249816 kubelet[2820]: I0813 00:01:43.243470 2820 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 00:01:43.250728 kubelet[2820]: I0813 00:01:43.250710 2820 factory.go:223] Registration of the systemd container factory successfully
Aug 13 00:01:43.250934 kubelet[2820]: I0813 00:01:43.250914 2820 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:01:43.252889 kubelet[2820]: E0813 00:01:43.252868 2820 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:01:43.253161 kubelet[2820]: I0813 00:01:43.253145 2820 factory.go:223] Registration of the containerd container factory successfully Aug 13 00:01:43.272215 kubelet[2820]: I0813 00:01:43.272002 2820 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Aug 13 00:01:43.276193 kubelet[2820]: I0813 00:01:43.276153 2820 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Aug 13 00:01:43.276193 kubelet[2820]: I0813 00:01:43.276185 2820 status_manager.go:230] "Starting to sync pod status with apiserver" Aug 13 00:01:43.276367 kubelet[2820]: I0813 00:01:43.276212 2820 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Aug 13 00:01:43.276367 kubelet[2820]: I0813 00:01:43.276221 2820 kubelet.go:2436] "Starting kubelet main sync loop" Aug 13 00:01:43.276367 kubelet[2820]: E0813 00:01:43.276274 2820 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:01:43.287358 kubelet[2820]: E0813 00:01:43.287315 2820 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.18.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.18.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Aug 13 00:01:43.290938 kubelet[2820]: I0813 00:01:43.290635 2820 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 00:01:43.290938 kubelet[2820]: I0813 00:01:43.290701 2820 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 00:01:43.290938 kubelet[2820]: I0813 00:01:43.290720 2820 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:01:43.295352 kubelet[2820]: I0813 00:01:43.295075 2820 policy_none.go:49] "None policy: Start" Aug 13 00:01:43.295352 kubelet[2820]: I0813 00:01:43.295106 2820 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 00:01:43.295352 kubelet[2820]: I0813 00:01:43.295118 2820 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:01:43.306325 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 13 00:01:43.320430 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 13 00:01:43.324430 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Aug 13 00:01:43.332842 kubelet[2820]: E0813 00:01:43.332146 2820 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Aug 13 00:01:43.332842 kubelet[2820]: I0813 00:01:43.332408 2820 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:01:43.332842 kubelet[2820]: I0813 00:01:43.332423 2820 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:01:43.332842 kubelet[2820]: I0813 00:01:43.332738 2820 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:01:43.335127 kubelet[2820]: E0813 00:01:43.335100 2820 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 13 00:01:43.335349 kubelet[2820]: E0813 00:01:43.335331 2820 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-18-46\" not found" Aug 13 00:01:43.391764 systemd[1]: Created slice kubepods-burstable-pod1f21bdafb8aa830ec060231be74e8c16.slice - libcontainer container kubepods-burstable-pod1f21bdafb8aa830ec060231be74e8c16.slice. Aug 13 00:01:43.414645 kubelet[2820]: E0813 00:01:43.414609 2820 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-46\" not found" node="ip-172-31-18-46" Aug 13 00:01:43.419135 systemd[1]: Created slice kubepods-burstable-pod6ce839bbf081e170fc3b80e39acd9382.slice - libcontainer container kubepods-burstable-pod6ce839bbf081e170fc3b80e39acd9382.slice. 
Aug 13 00:01:43.421028 kubelet[2820]: E0813 00:01:43.420843 2820 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-46\" not found" node="ip-172-31-18-46" Aug 13 00:01:43.429884 systemd[1]: Created slice kubepods-burstable-pod9d32ae1f6ed99fc5c0306d389cdbaabf.slice - libcontainer container kubepods-burstable-pod9d32ae1f6ed99fc5c0306d389cdbaabf.slice. Aug 13 00:01:43.432061 kubelet[2820]: E0813 00:01:43.432014 2820 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-46\" not found" node="ip-172-31-18-46" Aug 13 00:01:43.434550 kubelet[2820]: I0813 00:01:43.434520 2820 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-46" Aug 13 00:01:43.434937 kubelet[2820]: E0813 00:01:43.434907 2820 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.18.46:6443/api/v1/nodes\": dial tcp 172.31.18.46:6443: connect: connection refused" node="ip-172-31-18-46" Aug 13 00:01:43.449002 kubelet[2820]: E0813 00:01:43.448934 2820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-46?timeout=10s\": dial tcp 172.31.18.46:6443: connect: connection refused" interval="400ms" Aug 13 00:01:43.546760 kubelet[2820]: I0813 00:01:43.546654 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6ce839bbf081e170fc3b80e39acd9382-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-46\" (UID: \"6ce839bbf081e170fc3b80e39acd9382\") " pod="kube-system/kube-controller-manager-ip-172-31-18-46" Aug 13 00:01:43.546760 kubelet[2820]: I0813 00:01:43.546706 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/6ce839bbf081e170fc3b80e39acd9382-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-46\" (UID: \"6ce839bbf081e170fc3b80e39acd9382\") " pod="kube-system/kube-controller-manager-ip-172-31-18-46" Aug 13 00:01:43.546760 kubelet[2820]: I0813 00:01:43.546741 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1f21bdafb8aa830ec060231be74e8c16-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-46\" (UID: \"1f21bdafb8aa830ec060231be74e8c16\") " pod="kube-system/kube-apiserver-ip-172-31-18-46" Aug 13 00:01:43.546760 kubelet[2820]: I0813 00:01:43.546765 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6ce839bbf081e170fc3b80e39acd9382-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-46\" (UID: \"6ce839bbf081e170fc3b80e39acd9382\") " pod="kube-system/kube-controller-manager-ip-172-31-18-46" Aug 13 00:01:43.546760 kubelet[2820]: I0813 00:01:43.546788 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6ce839bbf081e170fc3b80e39acd9382-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-46\" (UID: \"6ce839bbf081e170fc3b80e39acd9382\") " pod="kube-system/kube-controller-manager-ip-172-31-18-46" Aug 13 00:01:43.547754 kubelet[2820]: I0813 00:01:43.546812 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ce839bbf081e170fc3b80e39acd9382-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-46\" (UID: \"6ce839bbf081e170fc3b80e39acd9382\") " pod="kube-system/kube-controller-manager-ip-172-31-18-46" Aug 13 00:01:43.547754 kubelet[2820]: I0813 00:01:43.546832 2820 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9d32ae1f6ed99fc5c0306d389cdbaabf-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-46\" (UID: \"9d32ae1f6ed99fc5c0306d389cdbaabf\") " pod="kube-system/kube-scheduler-ip-172-31-18-46" Aug 13 00:01:43.547754 kubelet[2820]: I0813 00:01:43.546853 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1f21bdafb8aa830ec060231be74e8c16-ca-certs\") pod \"kube-apiserver-ip-172-31-18-46\" (UID: \"1f21bdafb8aa830ec060231be74e8c16\") " pod="kube-system/kube-apiserver-ip-172-31-18-46" Aug 13 00:01:43.547754 kubelet[2820]: I0813 00:01:43.546884 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1f21bdafb8aa830ec060231be74e8c16-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-46\" (UID: \"1f21bdafb8aa830ec060231be74e8c16\") " pod="kube-system/kube-apiserver-ip-172-31-18-46" Aug 13 00:01:43.636937 kubelet[2820]: I0813 00:01:43.636894 2820 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-46" Aug 13 00:01:43.637339 kubelet[2820]: E0813 00:01:43.637308 2820 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.18.46:6443/api/v1/nodes\": dial tcp 172.31.18.46:6443: connect: connection refused" node="ip-172-31-18-46" Aug 13 00:01:43.716433 containerd[1914]: time="2025-08-13T00:01:43.716305961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-46,Uid:1f21bdafb8aa830ec060231be74e8c16,Namespace:kube-system,Attempt:0,}" Aug 13 00:01:43.722158 containerd[1914]: time="2025-08-13T00:01:43.722066299Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-46,Uid:6ce839bbf081e170fc3b80e39acd9382,Namespace:kube-system,Attempt:0,}" Aug 13 00:01:43.733340 containerd[1914]: time="2025-08-13T00:01:43.733297385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-46,Uid:9d32ae1f6ed99fc5c0306d389cdbaabf,Namespace:kube-system,Attempt:0,}" Aug 13 00:01:43.849812 kubelet[2820]: E0813 00:01:43.849751 2820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-46?timeout=10s\": dial tcp 172.31.18.46:6443: connect: connection refused" interval="800ms" Aug 13 00:01:44.039673 kubelet[2820]: I0813 00:01:44.039575 2820 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-46" Aug 13 00:01:44.040116 kubelet[2820]: E0813 00:01:44.039943 2820 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.18.46:6443/api/v1/nodes\": dial tcp 172.31.18.46:6443: connect: connection refused" node="ip-172-31-18-46" Aug 13 00:01:44.210911 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3987727489.mount: Deactivated successfully. 
Aug 13 00:01:44.230527 containerd[1914]: time="2025-08-13T00:01:44.230393829Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:01:44.231113 kubelet[2820]: E0813 00:01:44.231073 2820 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.18.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.18.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Aug 13 00:01:44.239391 containerd[1914]: time="2025-08-13T00:01:44.239316636Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Aug 13 00:01:44.240015 kubelet[2820]: E0813 00:01:44.239958 2820 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.18.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-46&limit=500&resourceVersion=0\": dial tcp 172.31.18.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Aug 13 00:01:44.241399 containerd[1914]: time="2025-08-13T00:01:44.241355339Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:01:44.243859 containerd[1914]: time="2025-08-13T00:01:44.243801502Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:01:44.248627 containerd[1914]: time="2025-08-13T00:01:44.247411594Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, 
bytes read=0" Aug 13 00:01:44.249551 containerd[1914]: time="2025-08-13T00:01:44.249515204Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:01:44.252299 containerd[1914]: time="2025-08-13T00:01:44.252054971Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:01:44.253043 containerd[1914]: time="2025-08-13T00:01:44.252890089Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 536.472266ms" Aug 13 00:01:44.253752 containerd[1914]: time="2025-08-13T00:01:44.253701117Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 00:01:44.257014 containerd[1914]: time="2025-08-13T00:01:44.256717822Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 523.330208ms" Aug 13 00:01:44.272510 containerd[1914]: time="2025-08-13T00:01:44.267225458Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", 
size \"311286\" in 544.973703ms" Aug 13 00:01:44.458612 containerd[1914]: time="2025-08-13T00:01:44.455521032Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:01:44.458833 containerd[1914]: time="2025-08-13T00:01:44.457605901Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:01:44.458833 containerd[1914]: time="2025-08-13T00:01:44.457636737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:01:44.458833 containerd[1914]: time="2025-08-13T00:01:44.457756945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:01:44.465694 containerd[1914]: time="2025-08-13T00:01:44.464799434Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:01:44.465694 containerd[1914]: time="2025-08-13T00:01:44.465340725Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:01:44.465694 containerd[1914]: time="2025-08-13T00:01:44.465378553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:01:44.465694 containerd[1914]: time="2025-08-13T00:01:44.465528052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:01:44.468574 kubelet[2820]: E0813 00:01:44.468294 2820 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.18.46:6443/api/v1/namespaces/default/events\": dial tcp 172.31.18.46:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-18-46.185b2a95eda4a74b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-46,UID:ip-172-31-18-46,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-46,},FirstTimestamp:2025-08-13 00:01:43.216424779 +0000 UTC m=+1.203756665,LastTimestamp:2025-08-13 00:01:43.216424779 +0000 UTC m=+1.203756665,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-46,}" Aug 13 00:01:44.474080 containerd[1914]: time="2025-08-13T00:01:44.473588555Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:01:44.474080 containerd[1914]: time="2025-08-13T00:01:44.473664260Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:01:44.474080 containerd[1914]: time="2025-08-13T00:01:44.473691304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:01:44.474080 containerd[1914]: time="2025-08-13T00:01:44.473812615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:01:44.499268 systemd[1]: Started cri-containerd-c1627cdce28c1534ab9654f9909bb66f1a1a4434a94aac657ac786184d0d04b8.scope - libcontainer container c1627cdce28c1534ab9654f9909bb66f1a1a4434a94aac657ac786184d0d04b8. Aug 13 00:01:44.525288 systemd[1]: Started cri-containerd-918396813d2c5364a483d8577fa221ac2dca1abd590ad74ff56665321beae140.scope - libcontainer container 918396813d2c5364a483d8577fa221ac2dca1abd590ad74ff56665321beae140. Aug 13 00:01:44.532698 systemd[1]: Started cri-containerd-9c197661ead5d408039a9c7d30f7da59a4c168634d4a403f92396cb9c6125200.scope - libcontainer container 9c197661ead5d408039a9c7d30f7da59a4c168634d4a403f92396cb9c6125200. Aug 13 00:01:44.597833 containerd[1914]: time="2025-08-13T00:01:44.597789488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-46,Uid:6ce839bbf081e170fc3b80e39acd9382,Namespace:kube-system,Attempt:0,} returns sandbox id \"c1627cdce28c1534ab9654f9909bb66f1a1a4434a94aac657ac786184d0d04b8\"" Aug 13 00:01:44.607511 containerd[1914]: time="2025-08-13T00:01:44.607466206Z" level=info msg="CreateContainer within sandbox \"c1627cdce28c1534ab9654f9909bb66f1a1a4434a94aac657ac786184d0d04b8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 00:01:44.640706 containerd[1914]: time="2025-08-13T00:01:44.640655880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-46,Uid:9d32ae1f6ed99fc5c0306d389cdbaabf,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c197661ead5d408039a9c7d30f7da59a4c168634d4a403f92396cb9c6125200\"" Aug 13 00:01:44.649022 containerd[1914]: time="2025-08-13T00:01:44.648806539Z" level=info msg="CreateContainer within sandbox \"9c197661ead5d408039a9c7d30f7da59a4c168634d4a403f92396cb9c6125200\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 00:01:44.649811 containerd[1914]: time="2025-08-13T00:01:44.649641231Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-46,Uid:1f21bdafb8aa830ec060231be74e8c16,Namespace:kube-system,Attempt:0,} returns sandbox id \"918396813d2c5364a483d8577fa221ac2dca1abd590ad74ff56665321beae140\"" Aug 13 00:01:44.650342 containerd[1914]: time="2025-08-13T00:01:44.650082034Z" level=info msg="CreateContainer within sandbox \"c1627cdce28c1534ab9654f9909bb66f1a1a4434a94aac657ac786184d0d04b8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5fbb3ddb2da45683cde437a328e415b0c65764f6f103d2c574fd9affc9cbc1ef\"" Aug 13 00:01:44.650632 kubelet[2820]: E0813 00:01:44.650509 2820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-46?timeout=10s\": dial tcp 172.31.18.46:6443: connect: connection refused" interval="1.6s" Aug 13 00:01:44.651771 containerd[1914]: time="2025-08-13T00:01:44.651731389Z" level=info msg="StartContainer for \"5fbb3ddb2da45683cde437a328e415b0c65764f6f103d2c574fd9affc9cbc1ef\"" Aug 13 00:01:44.661837 containerd[1914]: time="2025-08-13T00:01:44.661782560Z" level=info msg="CreateContainer within sandbox \"918396813d2c5364a483d8577fa221ac2dca1abd590ad74ff56665321beae140\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 00:01:44.686657 containerd[1914]: time="2025-08-13T00:01:44.685695999Z" level=info msg="CreateContainer within sandbox \"9c197661ead5d408039a9c7d30f7da59a4c168634d4a403f92396cb9c6125200\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"86988cebe81cd1117453fe499901cafe96667ab9a92d93f3a9e52a7286f4c61e\"" Aug 13 00:01:44.689162 containerd[1914]: time="2025-08-13T00:01:44.689118825Z" level=info msg="StartContainer for \"86988cebe81cd1117453fe499901cafe96667ab9a92d93f3a9e52a7286f4c61e\"" Aug 13 00:01:44.695338 systemd[1]: Started 
cri-containerd-5fbb3ddb2da45683cde437a328e415b0c65764f6f103d2c574fd9affc9cbc1ef.scope - libcontainer container 5fbb3ddb2da45683cde437a328e415b0c65764f6f103d2c574fd9affc9cbc1ef. Aug 13 00:01:44.701629 containerd[1914]: time="2025-08-13T00:01:44.701584977Z" level=info msg="CreateContainer within sandbox \"918396813d2c5364a483d8577fa221ac2dca1abd590ad74ff56665321beae140\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e087e9f7e9c9c9897ed980b241831f12d2be25ed6dc190eb9d364e68627910b3\"" Aug 13 00:01:44.706751 containerd[1914]: time="2025-08-13T00:01:44.706671931Z" level=info msg="StartContainer for \"e087e9f7e9c9c9897ed980b241831f12d2be25ed6dc190eb9d364e68627910b3\"" Aug 13 00:01:44.747659 systemd[1]: Started cri-containerd-86988cebe81cd1117453fe499901cafe96667ab9a92d93f3a9e52a7286f4c61e.scope - libcontainer container 86988cebe81cd1117453fe499901cafe96667ab9a92d93f3a9e52a7286f4c61e. Aug 13 00:01:44.760318 systemd[1]: Started cri-containerd-e087e9f7e9c9c9897ed980b241831f12d2be25ed6dc190eb9d364e68627910b3.scope - libcontainer container e087e9f7e9c9c9897ed980b241831f12d2be25ed6dc190eb9d364e68627910b3. 
Aug 13 00:01:44.771586 kubelet[2820]: E0813 00:01:44.769399 2820 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.18.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.18.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Aug 13 00:01:44.823146 containerd[1914]: time="2025-08-13T00:01:44.823098403Z" level=info msg="StartContainer for \"5fbb3ddb2da45683cde437a328e415b0c65764f6f103d2c574fd9affc9cbc1ef\" returns successfully" Aug 13 00:01:44.828109 kubelet[2820]: E0813 00:01:44.827182 2820 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.18.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.18.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Aug 13 00:01:44.844543 kubelet[2820]: I0813 00:01:44.844506 2820 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-46" Aug 13 00:01:44.845578 kubelet[2820]: E0813 00:01:44.845542 2820 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.18.46:6443/api/v1/nodes\": dial tcp 172.31.18.46:6443: connect: connection refused" node="ip-172-31-18-46" Aug 13 00:01:44.853772 containerd[1914]: time="2025-08-13T00:01:44.853728746Z" level=info msg="StartContainer for \"86988cebe81cd1117453fe499901cafe96667ab9a92d93f3a9e52a7286f4c61e\" returns successfully" Aug 13 00:01:44.869703 containerd[1914]: time="2025-08-13T00:01:44.869411329Z" level=info msg="StartContainer for \"e087e9f7e9c9c9897ed980b241831f12d2be25ed6dc190eb9d364e68627910b3\" returns successfully" Aug 13 00:01:45.216633 kubelet[2820]: E0813 00:01:45.216510 2820 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create 
certificate signing request: Post \"https://172.31.18.46:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.18.46:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Aug 13 00:01:45.306365 kubelet[2820]: E0813 00:01:45.306318 2820 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-46\" not found" node="ip-172-31-18-46" Aug 13 00:01:45.313544 kubelet[2820]: E0813 00:01:45.313513 2820 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-46\" not found" node="ip-172-31-18-46" Aug 13 00:01:45.315974 kubelet[2820]: E0813 00:01:45.315945 2820 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-46\" not found" node="ip-172-31-18-46" Aug 13 00:01:46.317454 kubelet[2820]: E0813 00:01:46.317415 2820 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-46\" not found" node="ip-172-31-18-46" Aug 13 00:01:46.318182 kubelet[2820]: E0813 00:01:46.318159 2820 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-46\" not found" node="ip-172-31-18-46" Aug 13 00:01:46.318542 kubelet[2820]: E0813 00:01:46.318523 2820 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-46\" not found" node="ip-172-31-18-46" Aug 13 00:01:46.447551 kubelet[2820]: I0813 00:01:46.447519 2820 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-46" Aug 13 00:01:48.691202 kubelet[2820]: E0813 00:01:48.691145 2820 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-18-46\" not found" node="ip-172-31-18-46" Aug 13 00:01:48.809139 
kubelet[2820]: I0813 00:01:48.807681 2820 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-18-46" Aug 13 00:01:48.809139 kubelet[2820]: E0813 00:01:48.807740 2820 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-18-46\": node \"ip-172-31-18-46\" not found" Aug 13 00:01:48.849686 kubelet[2820]: I0813 00:01:48.849492 2820 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-18-46" Aug 13 00:01:48.856948 kubelet[2820]: E0813 00:01:48.856755 2820 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-18-46\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-18-46" Aug 13 00:01:48.856948 kubelet[2820]: I0813 00:01:48.856790 2820 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-18-46" Aug 13 00:01:48.859276 kubelet[2820]: E0813 00:01:48.859234 2820 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-18-46\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-18-46" Aug 13 00:01:48.859276 kubelet[2820]: I0813 00:01:48.859275 2820 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-18-46" Aug 13 00:01:48.861421 kubelet[2820]: E0813 00:01:48.861382 2820 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-18-46\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-18-46" Aug 13 00:01:49.212551 kubelet[2820]: I0813 00:01:49.212479 2820 apiserver.go:52] "Watching apiserver" Aug 13 00:01:49.250665 kubelet[2820]: I0813 00:01:49.250624 2820 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 00:01:50.997682 
systemd[1]: Reload requested from client PID 3109 ('systemctl') (unit session-9.scope)... Aug 13 00:01:50.997701 systemd[1]: Reloading... Aug 13 00:01:51.133015 zram_generator::config[3157]: No configuration found. Aug 13 00:01:51.297553 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:01:51.482455 systemd[1]: Reloading finished in 484 ms. Aug 13 00:01:51.513307 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:01:51.525075 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:01:51.525317 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:01:51.525373 systemd[1]: kubelet.service: Consumed 1.618s CPU time, 129.5M memory peak. Aug 13 00:01:51.533448 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:01:51.831773 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:01:51.841443 (kubelet)[3214]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:01:51.927375 kubelet[3214]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:01:51.927375 kubelet[3214]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 00:01:51.927375 kubelet[3214]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 13 00:01:51.927375 kubelet[3214]: I0813 00:01:51.926705 3214 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 13 00:01:51.934463 kubelet[3214]: I0813 00:01:51.934435 3214 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Aug 13 00:01:51.934617 kubelet[3214]: I0813 00:01:51.934607 3214 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 13 00:01:51.934932 kubelet[3214]: I0813 00:01:51.934919 3214 server.go:956] "Client rotation is on, will bootstrap in background"
Aug 13 00:01:51.938259 kubelet[3214]: I0813 00:01:51.938235 3214 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Aug 13 00:01:51.947477 kubelet[3214]: I0813 00:01:51.946228 3214 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 13 00:01:51.974234 kubelet[3214]: E0813 00:01:51.974173 3214 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Aug 13 00:01:51.974450 kubelet[3214]: I0813 00:01:51.974424 3214 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Aug 13 00:01:51.984893 kubelet[3214]: I0813 00:01:51.984861 3214 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 13 00:01:51.985514 kubelet[3214]: I0813 00:01:51.985476 3214 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 13 00:01:51.986142 kubelet[3214]: I0813 00:01:51.985857 3214 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-46","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Aug 13 00:01:51.986341 kubelet[3214]: I0813 00:01:51.986325 3214 topology_manager.go:138] "Creating topology manager with none policy"
Aug 13 00:01:51.986454 kubelet[3214]: I0813 00:01:51.986438 3214 container_manager_linux.go:303] "Creating device plugin manager"
Aug 13 00:01:51.986594 kubelet[3214]: I0813 00:01:51.986583 3214 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 00:01:51.986848 kubelet[3214]: I0813 00:01:51.986835 3214 kubelet.go:480] "Attempting to sync node with API server"
Aug 13 00:01:51.991554 kubelet[3214]: I0813 00:01:51.988271 3214 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 13 00:01:51.991554 kubelet[3214]: I0813 00:01:51.988316 3214 kubelet.go:386] "Adding apiserver pod source"
Aug 13 00:01:51.991554 kubelet[3214]: I0813 00:01:51.988336 3214 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 13 00:01:51.993873 kubelet[3214]: I0813 00:01:51.993834 3214 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Aug 13 00:01:51.996472 kubelet[3214]: I0813 00:01:51.996425 3214 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Aug 13 00:01:52.009432 kubelet[3214]: I0813 00:01:52.009372 3214 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Aug 13 00:01:52.009697 kubelet[3214]: I0813 00:01:52.009535 3214 server.go:1289] "Started kubelet"
Aug 13 00:01:52.038025 kubelet[3214]: I0813 00:01:52.037968 3214 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 13 00:01:52.038801 kubelet[3214]: I0813 00:01:52.038755 3214 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Aug 13 00:01:52.039486 sudo[3229]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Aug 13 00:01:52.040318 sudo[3229]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Aug 13 00:01:52.052375 kubelet[3214]: I0813 00:01:52.052307 3214 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 13 00:01:52.056712 kubelet[3214]: I0813 00:01:52.056302 3214 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 13 00:01:52.077023 kubelet[3214]: I0813 00:01:52.076702 3214 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Aug 13 00:01:52.078621 kubelet[3214]: I0813 00:01:52.078424 3214 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Aug 13 00:01:52.080129 kubelet[3214]: I0813 00:01:52.080105 3214 volume_manager.go:297] "Starting Kubelet Volume Manager"
Aug 13 00:01:52.084503 kubelet[3214]: I0813 00:01:52.082614 3214 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Aug 13 00:01:52.084503 kubelet[3214]: I0813 00:01:52.082759 3214 reconciler.go:26] "Reconciler: start to sync state"
Aug 13 00:01:52.084503 kubelet[3214]: I0813 00:01:52.083309 3214 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Aug 13 00:01:52.084503 kubelet[3214]: I0813 00:01:52.083339 3214 status_manager.go:230] "Starting to sync pod status with apiserver"
Aug 13 00:01:52.084503 kubelet[3214]: I0813 00:01:52.083361 3214 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Aug 13 00:01:52.084503 kubelet[3214]: I0813 00:01:52.083371 3214 kubelet.go:2436] "Starting kubelet main sync loop"
Aug 13 00:01:52.084503 kubelet[3214]: E0813 00:01:52.083420 3214 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 13 00:01:52.086278 kubelet[3214]: I0813 00:01:52.085315 3214 server.go:317] "Adding debug handlers to kubelet server"
Aug 13 00:01:52.089217 kubelet[3214]: E0813 00:01:52.089186 3214 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 13 00:01:52.095700 kubelet[3214]: I0813 00:01:52.094716 3214 factory.go:223] Registration of the systemd container factory successfully
Aug 13 00:01:52.095700 kubelet[3214]: I0813 00:01:52.094826 3214 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 13 00:01:52.100637 kubelet[3214]: I0813 00:01:52.100431 3214 factory.go:223] Registration of the containerd container factory successfully
Aug 13 00:01:52.189099 kubelet[3214]: E0813 00:01:52.189053 3214 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Aug 13 00:01:52.192876 kubelet[3214]: I0813 00:01:52.192474 3214 cpu_manager.go:221] "Starting CPU manager" policy="none"
Aug 13 00:01:52.192876 kubelet[3214]: I0813 00:01:52.192496 3214 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Aug 13 00:01:52.192876 kubelet[3214]: I0813 00:01:52.192524 3214 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 00:01:52.192876 kubelet[3214]: I0813 00:01:52.192757 3214 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Aug 13 00:01:52.192876 kubelet[3214]: I0813 00:01:52.192771 3214 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Aug 13 00:01:52.192876 kubelet[3214]: I0813 00:01:52.192803 3214 policy_none.go:49] "None policy: Start"
Aug 13 00:01:52.192876 kubelet[3214]: I0813 00:01:52.192817 3214 memory_manager.go:186] "Starting memorymanager" policy="None"
Aug 13 00:01:52.192876 kubelet[3214]: I0813 00:01:52.192830 3214 state_mem.go:35] "Initializing new in-memory state store"
Aug 13 00:01:52.196001 kubelet[3214]: I0813 00:01:52.195268 3214 state_mem.go:75] "Updated machine memory state"
Aug 13 00:01:52.215872 kubelet[3214]: E0813 00:01:52.215424 3214 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Aug 13 00:01:52.217537 kubelet[3214]: I0813 00:01:52.217372 3214 eviction_manager.go:189] "Eviction manager: starting control loop"
Aug 13 00:01:52.217537 kubelet[3214]: I0813 00:01:52.217507 3214 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Aug 13 00:01:52.218968 kubelet[3214]: I0813 00:01:52.218563 3214 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 13 00:01:52.224318 kubelet[3214]: E0813 00:01:52.224082 3214 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Aug 13 00:01:52.342935 kubelet[3214]: I0813 00:01:52.340246 3214 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-46"
Aug 13 00:01:52.357148 kubelet[3214]: I0813 00:01:52.357110 3214 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-18-46"
Aug 13 00:01:52.357319 kubelet[3214]: I0813 00:01:52.357205 3214 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-18-46"
Aug 13 00:01:52.391178 kubelet[3214]: I0813 00:01:52.390672 3214 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-18-46"
Aug 13 00:01:52.391178 kubelet[3214]: I0813 00:01:52.390878 3214 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-18-46"
Aug 13 00:01:52.391178 kubelet[3214]: I0813 00:01:52.390684 3214 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-18-46"
Aug 13 00:01:52.491568 kubelet[3214]: I0813 00:01:52.491522 3214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1f21bdafb8aa830ec060231be74e8c16-ca-certs\") pod \"kube-apiserver-ip-172-31-18-46\" (UID: \"1f21bdafb8aa830ec060231be74e8c16\") " pod="kube-system/kube-apiserver-ip-172-31-18-46"
Aug 13 00:01:52.491568 kubelet[3214]: I0813 00:01:52.491570 3214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1f21bdafb8aa830ec060231be74e8c16-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-46\" (UID: \"1f21bdafb8aa830ec060231be74e8c16\") " pod="kube-system/kube-apiserver-ip-172-31-18-46"
Aug 13 00:01:52.491847 kubelet[3214]: I0813 00:01:52.491593 3214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1f21bdafb8aa830ec060231be74e8c16-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-46\" (UID: \"1f21bdafb8aa830ec060231be74e8c16\") " pod="kube-system/kube-apiserver-ip-172-31-18-46"
Aug 13 00:01:52.491847 kubelet[3214]: I0813 00:01:52.491619 3214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6ce839bbf081e170fc3b80e39acd9382-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-46\" (UID: \"6ce839bbf081e170fc3b80e39acd9382\") " pod="kube-system/kube-controller-manager-ip-172-31-18-46"
Aug 13 00:01:52.491847 kubelet[3214]: I0813 00:01:52.491643 3214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6ce839bbf081e170fc3b80e39acd9382-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-46\" (UID: \"6ce839bbf081e170fc3b80e39acd9382\") " pod="kube-system/kube-controller-manager-ip-172-31-18-46"
Aug 13 00:01:52.491847 kubelet[3214]: I0813 00:01:52.491667 3214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6ce839bbf081e170fc3b80e39acd9382-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-46\" (UID: \"6ce839bbf081e170fc3b80e39acd9382\") " pod="kube-system/kube-controller-manager-ip-172-31-18-46"
Aug 13 00:01:52.491847 kubelet[3214]: I0813 00:01:52.491695 3214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9d32ae1f6ed99fc5c0306d389cdbaabf-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-46\" (UID: \"9d32ae1f6ed99fc5c0306d389cdbaabf\") " pod="kube-system/kube-scheduler-ip-172-31-18-46"
Aug 13 00:01:52.493094 kubelet[3214]: I0813 00:01:52.493054 3214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6ce839bbf081e170fc3b80e39acd9382-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-46\" (UID: \"6ce839bbf081e170fc3b80e39acd9382\") " pod="kube-system/kube-controller-manager-ip-172-31-18-46"
Aug 13 00:01:52.493222 kubelet[3214]: I0813 00:01:52.493100 3214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ce839bbf081e170fc3b80e39acd9382-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-46\" (UID: \"6ce839bbf081e170fc3b80e39acd9382\") " pod="kube-system/kube-controller-manager-ip-172-31-18-46"
Aug 13 00:01:52.761137 sudo[3229]: pam_unix(sudo:session): session closed for user root
Aug 13 00:01:52.990888 kubelet[3214]: I0813 00:01:52.990622 3214 apiserver.go:52] "Watching apiserver"
Aug 13 00:01:53.083832 kubelet[3214]: I0813 00:01:53.083757 3214 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Aug 13 00:01:53.144923 kubelet[3214]: I0813 00:01:53.143591 3214 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-18-46"
Aug 13 00:01:53.151360 kubelet[3214]: E0813 00:01:53.151313 3214 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-18-46\" already exists" pod="kube-system/kube-apiserver-ip-172-31-18-46"
Aug 13 00:01:53.183725 kubelet[3214]: I0813 00:01:53.183596 3214 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-18-46" podStartSLOduration=1.183576804 podStartE2EDuration="1.183576804s" podCreationTimestamp="2025-08-13 00:01:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:01:53.170900793 +0000 UTC m=+1.319997981" watchObservedRunningTime="2025-08-13 00:01:53.183576804 +0000 UTC m=+1.332673995"
Aug 13 00:01:53.195821 kubelet[3214]: I0813 00:01:53.194893 3214 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-18-46" podStartSLOduration=1.194873269 podStartE2EDuration="1.194873269s" podCreationTimestamp="2025-08-13 00:01:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:01:53.185039951 +0000 UTC m=+1.334137134" watchObservedRunningTime="2025-08-13 00:01:53.194873269 +0000 UTC m=+1.343970459"
Aug 13 00:01:53.580011 update_engine[1903]: I20250813 00:01:53.579041 1903 update_attempter.cc:509] Updating boot flags...
Aug 13 00:01:53.662152 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 45 scanned by (udev-worker) (3286)
Aug 13 00:01:53.928026 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 45 scanned by (udev-worker) (3285)
Aug 13 00:01:54.242257 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 45 scanned by (udev-worker) (3285)
Aug 13 00:01:54.764674 sudo[2273]: pam_unix(sudo:session): session closed for user root
Aug 13 00:01:54.777130 kubelet[3214]: I0813 00:01:54.777076 3214 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-18-46" podStartSLOduration=2.777052227 podStartE2EDuration="2.777052227s" podCreationTimestamp="2025-08-13 00:01:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:01:53.195707891 +0000 UTC m=+1.344805092" watchObservedRunningTime="2025-08-13 00:01:54.777052227 +0000 UTC m=+2.926149413"
Aug 13 00:01:54.786750 sshd[2272]: Connection closed by 139.178.68.195 port 37752
Aug 13 00:01:54.787789 sshd-session[2270]: pam_unix(sshd:session): session closed for user core
Aug 13 00:01:54.792709 systemd-logind[1898]: Session 9 logged out. Waiting for processes to exit.
Aug 13 00:01:54.793501 systemd[1]: sshd@8-172.31.18.46:22-139.178.68.195:37752.service: Deactivated successfully.
Aug 13 00:01:54.796888 systemd[1]: session-9.scope: Deactivated successfully.
Aug 13 00:01:54.797732 systemd[1]: session-9.scope: Consumed 4.817s CPU time, 209.2M memory peak.
Aug 13 00:01:54.800781 systemd-logind[1898]: Removed session 9.
Aug 13 00:01:56.142369 kubelet[3214]: I0813 00:01:56.142343 3214 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Aug 13 00:01:56.143404 containerd[1914]: time="2025-08-13T00:01:56.143191301Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Aug 13 00:01:56.143822 kubelet[3214]: I0813 00:01:56.143786 3214 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Aug 13 00:01:57.045587 systemd[1]: Created slice kubepods-besteffort-podf0b79a2f_df94_4474_aaa2_2e46e073e0b0.slice - libcontainer container kubepods-besteffort-podf0b79a2f_df94_4474_aaa2_2e46e073e0b0.slice.
Aug 13 00:01:57.064498 systemd[1]: Created slice kubepods-burstable-poda1d2e28c_61c7_4f7d_a985_b8b27ef2c1e0.slice - libcontainer container kubepods-burstable-poda1d2e28c_61c7_4f7d_a985_b8b27ef2c1e0.slice.
Aug 13 00:01:57.126282 kubelet[3214]: I0813 00:01:57.126231 3214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f0b79a2f-df94-4474-aaa2-2e46e073e0b0-kube-proxy\") pod \"kube-proxy-nj5hh\" (UID: \"f0b79a2f-df94-4474-aaa2-2e46e073e0b0\") " pod="kube-system/kube-proxy-nj5hh"
Aug 13 00:01:57.126282 kubelet[3214]: I0813 00:01:57.126276 3214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f0b79a2f-df94-4474-aaa2-2e46e073e0b0-xtables-lock\") pod \"kube-proxy-nj5hh\" (UID: \"f0b79a2f-df94-4474-aaa2-2e46e073e0b0\") " pod="kube-system/kube-proxy-nj5hh"
Aug 13 00:01:57.127066 kubelet[3214]: I0813 00:01:57.126305 3214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0-cilium-run\") pod \"cilium-ct59d\" (UID: \"a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0\") " pod="kube-system/cilium-ct59d"
Aug 13 00:01:57.127066 kubelet[3214]: I0813 00:01:57.126327 3214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0-hostproc\") pod \"cilium-ct59d\" (UID: \"a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0\") " pod="kube-system/cilium-ct59d"
Aug 13 00:01:57.127066 kubelet[3214]: I0813 00:01:57.126350 3214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0-xtables-lock\") pod \"cilium-ct59d\" (UID: \"a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0\") " pod="kube-system/cilium-ct59d"
Aug 13 00:01:57.127066 kubelet[3214]: I0813 00:01:57.126369 3214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0-host-proc-sys-net\") pod \"cilium-ct59d\" (UID: \"a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0\") " pod="kube-system/cilium-ct59d"
Aug 13 00:01:57.127066 kubelet[3214]: I0813 00:01:57.126393 3214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f0b79a2f-df94-4474-aaa2-2e46e073e0b0-lib-modules\") pod \"kube-proxy-nj5hh\" (UID: \"f0b79a2f-df94-4474-aaa2-2e46e073e0b0\") " pod="kube-system/kube-proxy-nj5hh"
Aug 13 00:01:57.127066 kubelet[3214]: I0813 00:01:57.126412 3214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0-bpf-maps\") pod \"cilium-ct59d\" (UID: \"a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0\") " pod="kube-system/cilium-ct59d"
Aug 13 00:01:57.127378 kubelet[3214]: I0813 00:01:57.126433 3214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0-etc-cni-netd\") pod \"cilium-ct59d\" (UID: \"a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0\") " pod="kube-system/cilium-ct59d"
Aug 13 00:01:57.127378 kubelet[3214]: I0813 00:01:57.126454 3214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0-clustermesh-secrets\") pod \"cilium-ct59d\" (UID: \"a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0\") " pod="kube-system/cilium-ct59d"
Aug 13 00:01:57.127378 kubelet[3214]: I0813 00:01:57.126475 3214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0-cilium-config-path\") pod \"cilium-ct59d\" (UID: \"a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0\") " pod="kube-system/cilium-ct59d"
Aug 13 00:01:57.127378 kubelet[3214]: I0813 00:01:57.126499 3214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0-host-proc-sys-kernel\") pod \"cilium-ct59d\" (UID: \"a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0\") " pod="kube-system/cilium-ct59d"
Aug 13 00:01:57.127378 kubelet[3214]: I0813 00:01:57.126523 3214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0-hubble-tls\") pod \"cilium-ct59d\" (UID: \"a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0\") " pod="kube-system/cilium-ct59d"
Aug 13 00:01:57.127579 kubelet[3214]: I0813 00:01:57.126546 3214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5vhv\" (UniqueName: \"kubernetes.io/projected/a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0-kube-api-access-f5vhv\") pod \"cilium-ct59d\" (UID: \"a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0\") " pod="kube-system/cilium-ct59d"
Aug 13 00:01:57.127579 kubelet[3214]: I0813 00:01:57.126575 3214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmdc8\" (UniqueName: \"kubernetes.io/projected/f0b79a2f-df94-4474-aaa2-2e46e073e0b0-kube-api-access-xmdc8\") pod \"kube-proxy-nj5hh\" (UID: \"f0b79a2f-df94-4474-aaa2-2e46e073e0b0\") " pod="kube-system/kube-proxy-nj5hh"
Aug 13 00:01:57.127579 kubelet[3214]: I0813 00:01:57.126596 3214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0-cilium-cgroup\") pod \"cilium-ct59d\" (UID: \"a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0\") " pod="kube-system/cilium-ct59d"
Aug 13 00:01:57.127579 kubelet[3214]: I0813 00:01:57.126616 3214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0-cni-path\") pod \"cilium-ct59d\" (UID: \"a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0\") " pod="kube-system/cilium-ct59d"
Aug 13 00:01:57.127579 kubelet[3214]: I0813 00:01:57.126638 3214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0-lib-modules\") pod \"cilium-ct59d\" (UID: \"a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0\") " pod="kube-system/cilium-ct59d"
Aug 13 00:01:57.231333 systemd[1]: Created slice kubepods-besteffort-pode4f01dc3_731d_46f0_956f_49f12d76b5b1.slice - libcontainer container kubepods-besteffort-pode4f01dc3_731d_46f0_956f_49f12d76b5b1.slice.
Aug 13 00:01:57.330197 kubelet[3214]: I0813 00:01:57.329847 3214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e4f01dc3-731d-46f0-956f-49f12d76b5b1-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-f5nzf\" (UID: \"e4f01dc3-731d-46f0-956f-49f12d76b5b1\") " pod="kube-system/cilium-operator-6c4d7847fc-f5nzf"
Aug 13 00:01:57.330197 kubelet[3214]: I0813 00:01:57.329893 3214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fm6w\" (UniqueName: \"kubernetes.io/projected/e4f01dc3-731d-46f0-956f-49f12d76b5b1-kube-api-access-6fm6w\") pod \"cilium-operator-6c4d7847fc-f5nzf\" (UID: \"e4f01dc3-731d-46f0-956f-49f12d76b5b1\") " pod="kube-system/cilium-operator-6c4d7847fc-f5nzf"
Aug 13 00:01:57.359246 containerd[1914]: time="2025-08-13T00:01:57.359194855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nj5hh,Uid:f0b79a2f-df94-4474-aaa2-2e46e073e0b0,Namespace:kube-system,Attempt:0,}"
Aug 13 00:01:57.371465 containerd[1914]: time="2025-08-13T00:01:57.371423550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ct59d,Uid:a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0,Namespace:kube-system,Attempt:0,}"
Aug 13 00:01:57.411049 containerd[1914]: time="2025-08-13T00:01:57.410743956Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:01:57.411049 containerd[1914]: time="2025-08-13T00:01:57.410809083Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:01:57.411049 containerd[1914]: time="2025-08-13T00:01:57.410824229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:01:57.411049 containerd[1914]: time="2025-08-13T00:01:57.410940830Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:01:57.424517 containerd[1914]: time="2025-08-13T00:01:57.424224847Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:01:57.424517 containerd[1914]: time="2025-08-13T00:01:57.424285879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:01:57.424517 containerd[1914]: time="2025-08-13T00:01:57.424304951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:01:57.424517 containerd[1914]: time="2025-08-13T00:01:57.424423394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:01:57.434202 systemd[1]: Started cri-containerd-bbed68b54d00d73eca664f05bb17e4e6cfb1ae08b04438ecbc4ca5cd4b0f8584.scope - libcontainer container bbed68b54d00d73eca664f05bb17e4e6cfb1ae08b04438ecbc4ca5cd4b0f8584.
Aug 13 00:01:57.462323 systemd[1]: Started cri-containerd-867ba09f97b830be860f7ef5ef47e0918ee6d373bf575ea83acc18215fd7d459.scope - libcontainer container 867ba09f97b830be860f7ef5ef47e0918ee6d373bf575ea83acc18215fd7d459.
Aug 13 00:01:57.493624 containerd[1914]: time="2025-08-13T00:01:57.493564518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nj5hh,Uid:f0b79a2f-df94-4474-aaa2-2e46e073e0b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"bbed68b54d00d73eca664f05bb17e4e6cfb1ae08b04438ecbc4ca5cd4b0f8584\""
Aug 13 00:01:57.505374 containerd[1914]: time="2025-08-13T00:01:57.505327039Z" level=info msg="CreateContainer within sandbox \"bbed68b54d00d73eca664f05bb17e4e6cfb1ae08b04438ecbc4ca5cd4b0f8584\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Aug 13 00:01:57.508059 containerd[1914]: time="2025-08-13T00:01:57.508012830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ct59d,Uid:a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"867ba09f97b830be860f7ef5ef47e0918ee6d373bf575ea83acc18215fd7d459\""
Aug 13 00:01:57.512015 containerd[1914]: time="2025-08-13T00:01:57.509848157Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Aug 13 00:01:57.547552 containerd[1914]: time="2025-08-13T00:01:57.547493974Z" level=info msg="CreateContainer within sandbox \"bbed68b54d00d73eca664f05bb17e4e6cfb1ae08b04438ecbc4ca5cd4b0f8584\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e58a528e6a8197fd38dd1c2945503edb3a7c9a3642ac1211e6ab884bc7f70d1a\""
Aug 13 00:01:57.549037 containerd[1914]: time="2025-08-13T00:01:57.548475968Z" level=info msg="StartContainer for \"e58a528e6a8197fd38dd1c2945503edb3a7c9a3642ac1211e6ab884bc7f70d1a\""
Aug 13 00:01:57.561513 containerd[1914]: time="2025-08-13T00:01:57.561468357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-f5nzf,Uid:e4f01dc3-731d-46f0-956f-49f12d76b5b1,Namespace:kube-system,Attempt:0,}"
Aug 13 00:01:57.584170 systemd[1]: Started cri-containerd-e58a528e6a8197fd38dd1c2945503edb3a7c9a3642ac1211e6ab884bc7f70d1a.scope - libcontainer container e58a528e6a8197fd38dd1c2945503edb3a7c9a3642ac1211e6ab884bc7f70d1a.
Aug 13 00:01:57.604837 containerd[1914]: time="2025-08-13T00:01:57.604587531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:01:57.605032 containerd[1914]: time="2025-08-13T00:01:57.604693265Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:01:57.605032 containerd[1914]: time="2025-08-13T00:01:57.604710808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:01:57.607909 containerd[1914]: time="2025-08-13T00:01:57.607802504Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:01:57.627523 containerd[1914]: time="2025-08-13T00:01:57.627409599Z" level=info msg="StartContainer for \"e58a528e6a8197fd38dd1c2945503edb3a7c9a3642ac1211e6ab884bc7f70d1a\" returns successfully"
Aug 13 00:01:57.636253 systemd[1]: Started cri-containerd-a629e7e0e78e6a11d7c91272200eae39dab89923ec195c848ed27ce25ad02160.scope - libcontainer container a629e7e0e78e6a11d7c91272200eae39dab89923ec195c848ed27ce25ad02160.
Aug 13 00:01:57.684829 containerd[1914]: time="2025-08-13T00:01:57.683890506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-f5nzf,Uid:e4f01dc3-731d-46f0-956f-49f12d76b5b1,Namespace:kube-system,Attempt:0,} returns sandbox id \"a629e7e0e78e6a11d7c91272200eae39dab89923ec195c848ed27ce25ad02160\""
Aug 13 00:01:58.187701 kubelet[3214]: I0813 00:01:58.187646 3214 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nj5hh" podStartSLOduration=1.187629304 podStartE2EDuration="1.187629304s" podCreationTimestamp="2025-08-13 00:01:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:01:58.185475629 +0000 UTC m=+6.334572834" watchObservedRunningTime="2025-08-13 00:01:58.187629304 +0000 UTC m=+6.336726493"
Aug 13 00:02:03.924371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2292235301.mount: Deactivated successfully.
Aug 13 00:02:06.660332 containerd[1914]: time="2025-08-13T00:02:06.660284268Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:02:06.662558 containerd[1914]: time="2025-08-13T00:02:06.662478803Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Aug 13 00:02:06.665689 containerd[1914]: time="2025-08-13T00:02:06.664718129Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:02:06.740594 containerd[1914]: time="2025-08-13T00:02:06.740544916Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.230658358s"
Aug 13 00:02:06.740594 containerd[1914]: time="2025-08-13T00:02:06.740589738Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Aug 13 00:02:06.743099 containerd[1914]: time="2025-08-13T00:02:06.742376242Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Aug 13 00:02:06.747519 containerd[1914]: time="2025-08-13T00:02:06.747438340Z" level=info msg="CreateContainer within sandbox \"867ba09f97b830be860f7ef5ef47e0918ee6d373bf575ea83acc18215fd7d459\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Aug 13 00:02:06.835145 containerd[1914]: time="2025-08-13T00:02:06.835096757Z" level=info msg="CreateContainer within sandbox \"867ba09f97b830be860f7ef5ef47e0918ee6d373bf575ea83acc18215fd7d459\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b8fc1ca6a4f95f0dfb1a4062ea3fb005171939b8baf91d60ac61f70e22eaf615\""
Aug 13 00:02:06.835909 containerd[1914]: time="2025-08-13T00:02:06.835874105Z" level=info msg="StartContainer for \"b8fc1ca6a4f95f0dfb1a4062ea3fb005171939b8baf91d60ac61f70e22eaf615\""
Aug 13 00:02:06.951510 systemd[1]: run-containerd-runc-k8s.io-b8fc1ca6a4f95f0dfb1a4062ea3fb005171939b8baf91d60ac61f70e22eaf615-runc.HFIgIk.mount: Deactivated successfully.
Aug 13 00:02:06.966302 systemd[1]: Started cri-containerd-b8fc1ca6a4f95f0dfb1a4062ea3fb005171939b8baf91d60ac61f70e22eaf615.scope - libcontainer container b8fc1ca6a4f95f0dfb1a4062ea3fb005171939b8baf91d60ac61f70e22eaf615.
Aug 13 00:02:07.000594 containerd[1914]: time="2025-08-13T00:02:07.000432800Z" level=info msg="StartContainer for \"b8fc1ca6a4f95f0dfb1a4062ea3fb005171939b8baf91d60ac61f70e22eaf615\" returns successfully"
Aug 13 00:02:07.013649 systemd[1]: cri-containerd-b8fc1ca6a4f95f0dfb1a4062ea3fb005171939b8baf91d60ac61f70e22eaf615.scope: Deactivated successfully.
Aug 13 00:02:07.263265 containerd[1914]: time="2025-08-13T00:02:07.245398565Z" level=info msg="shim disconnected" id=b8fc1ca6a4f95f0dfb1a4062ea3fb005171939b8baf91d60ac61f70e22eaf615 namespace=k8s.io
Aug 13 00:02:07.263265 containerd[1914]: time="2025-08-13T00:02:07.262992523Z" level=warning msg="cleaning up after shim disconnected" id=b8fc1ca6a4f95f0dfb1a4062ea3fb005171939b8baf91d60ac61f70e22eaf615 namespace=k8s.io
Aug 13 00:02:07.263265 containerd[1914]: time="2025-08-13T00:02:07.263009760Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:02:07.826038 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b8fc1ca6a4f95f0dfb1a4062ea3fb005171939b8baf91d60ac61f70e22eaf615-rootfs.mount: Deactivated successfully.
Aug 13 00:02:08.199739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3147215317.mount: Deactivated successfully.
Aug 13 00:02:08.243387 containerd[1914]: time="2025-08-13T00:02:08.243340096Z" level=info msg="CreateContainer within sandbox \"867ba09f97b830be860f7ef5ef47e0918ee6d373bf575ea83acc18215fd7d459\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Aug 13 00:02:08.277007 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3664574021.mount: Deactivated successfully.
Aug 13 00:02:08.296252 containerd[1914]: time="2025-08-13T00:02:08.296195253Z" level=info msg="CreateContainer within sandbox \"867ba09f97b830be860f7ef5ef47e0918ee6d373bf575ea83acc18215fd7d459\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"53aa4d4b755e42080b579a049d11f16158d733188b400f4609c93ee529e73c90\""
Aug 13 00:02:08.298859 containerd[1914]: time="2025-08-13T00:02:08.298043183Z" level=info msg="StartContainer for \"53aa4d4b755e42080b579a049d11f16158d733188b400f4609c93ee529e73c90\""
Aug 13 00:02:08.355220 systemd[1]: Started cri-containerd-53aa4d4b755e42080b579a049d11f16158d733188b400f4609c93ee529e73c90.scope - libcontainer container 53aa4d4b755e42080b579a049d11f16158d733188b400f4609c93ee529e73c90.
Aug 13 00:02:08.403004 containerd[1914]: time="2025-08-13T00:02:08.402948965Z" level=info msg="StartContainer for \"53aa4d4b755e42080b579a049d11f16158d733188b400f4609c93ee529e73c90\" returns successfully"
Aug 13 00:02:08.424705 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 00:02:08.425657 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 13 00:02:08.426511 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Aug 13 00:02:08.437286 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 00:02:08.437597 systemd[1]: cri-containerd-53aa4d4b755e42080b579a049d11f16158d733188b400f4609c93ee529e73c90.scope: Deactivated successfully.
Aug 13 00:02:08.491089 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 00:02:08.555717 containerd[1914]: time="2025-08-13T00:02:08.555657966Z" level=info msg="shim disconnected" id=53aa4d4b755e42080b579a049d11f16158d733188b400f4609c93ee529e73c90 namespace=k8s.io
Aug 13 00:02:08.555717 containerd[1914]: time="2025-08-13T00:02:08.555713993Z" level=warning msg="cleaning up after shim disconnected" id=53aa4d4b755e42080b579a049d11f16158d733188b400f4609c93ee529e73c90 namespace=k8s.io
Aug 13 00:02:08.556341 containerd[1914]: time="2025-08-13T00:02:08.555727119Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:02:09.132928 containerd[1914]: time="2025-08-13T00:02:09.132877681Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:02:09.142286 containerd[1914]: time="2025-08-13T00:02:09.139616990Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Aug 13 00:02:09.160009 containerd[1914]: time="2025-08-13T00:02:09.158508499Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:02:09.160009 containerd[1914]: time="2025-08-13T00:02:09.159910643Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.41749689s"
Aug 13 00:02:09.160009 containerd[1914]: time="2025-08-13T00:02:09.159941791Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Aug 13 00:02:09.168582 containerd[1914]: time="2025-08-13T00:02:09.168420053Z" level=info msg="CreateContainer within sandbox \"a629e7e0e78e6a11d7c91272200eae39dab89923ec195c848ed27ce25ad02160\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Aug 13 00:02:09.193800 containerd[1914]: time="2025-08-13T00:02:09.193756682Z" level=info msg="CreateContainer within sandbox \"a629e7e0e78e6a11d7c91272200eae39dab89923ec195c848ed27ce25ad02160\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8342e4b9c83923ffd6d8c15fdbbce185a00f44c4de7efe6e6ced92fb4deb8c45\""
Aug 13 00:02:09.195034 containerd[1914]: time="2025-08-13T00:02:09.194608736Z" level=info msg="StartContainer for \"8342e4b9c83923ffd6d8c15fdbbce185a00f44c4de7efe6e6ced92fb4deb8c45\""
Aug 13 00:02:09.243285 systemd[1]: Started cri-containerd-8342e4b9c83923ffd6d8c15fdbbce185a00f44c4de7efe6e6ced92fb4deb8c45.scope - libcontainer container 8342e4b9c83923ffd6d8c15fdbbce185a00f44c4de7efe6e6ced92fb4deb8c45.
Aug 13 00:02:09.250764 containerd[1914]: time="2025-08-13T00:02:09.250571406Z" level=info msg="CreateContainer within sandbox \"867ba09f97b830be860f7ef5ef47e0918ee6d373bf575ea83acc18215fd7d459\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Aug 13 00:02:09.319915 containerd[1914]: time="2025-08-13T00:02:09.319552836Z" level=info msg="StartContainer for \"8342e4b9c83923ffd6d8c15fdbbce185a00f44c4de7efe6e6ced92fb4deb8c45\" returns successfully"
Aug 13 00:02:09.319915 containerd[1914]: time="2025-08-13T00:02:09.319585587Z" level=info msg="CreateContainer within sandbox \"867ba09f97b830be860f7ef5ef47e0918ee6d373bf575ea83acc18215fd7d459\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ac7234c1e9f07705f6856b9f32eff176272d5d58584536c31fe7934b4cfbb897\""
Aug 13 00:02:09.322140 containerd[1914]: time="2025-08-13T00:02:09.321203848Z" level=info msg="StartContainer for \"ac7234c1e9f07705f6856b9f32eff176272d5d58584536c31fe7934b4cfbb897\""
Aug 13 00:02:09.372898 systemd[1]: Started cri-containerd-ac7234c1e9f07705f6856b9f32eff176272d5d58584536c31fe7934b4cfbb897.scope - libcontainer container ac7234c1e9f07705f6856b9f32eff176272d5d58584536c31fe7934b4cfbb897.
Aug 13 00:02:09.436752 containerd[1914]: time="2025-08-13T00:02:09.436121508Z" level=info msg="StartContainer for \"ac7234c1e9f07705f6856b9f32eff176272d5d58584536c31fe7934b4cfbb897\" returns successfully"
Aug 13 00:02:09.447189 systemd[1]: cri-containerd-ac7234c1e9f07705f6856b9f32eff176272d5d58584536c31fe7934b4cfbb897.scope: Deactivated successfully.
Aug 13 00:02:09.447550 systemd[1]: cri-containerd-ac7234c1e9f07705f6856b9f32eff176272d5d58584536c31fe7934b4cfbb897.scope: Consumed 26ms CPU time, 2.8M memory peak, 1M read from disk.
Aug 13 00:02:09.587795 containerd[1914]: time="2025-08-13T00:02:09.587641486Z" level=info msg="shim disconnected" id=ac7234c1e9f07705f6856b9f32eff176272d5d58584536c31fe7934b4cfbb897 namespace=k8s.io
Aug 13 00:02:09.588236 containerd[1914]: time="2025-08-13T00:02:09.588023244Z" level=warning msg="cleaning up after shim disconnected" id=ac7234c1e9f07705f6856b9f32eff176272d5d58584536c31fe7934b4cfbb897 namespace=k8s.io
Aug 13 00:02:09.588236 containerd[1914]: time="2025-08-13T00:02:09.588043081Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:02:09.613746 containerd[1914]: time="2025-08-13T00:02:09.611120120Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:02:09Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Aug 13 00:02:09.829505 systemd[1]: run-containerd-runc-k8s.io-8342e4b9c83923ffd6d8c15fdbbce185a00f44c4de7efe6e6ced92fb4deb8c45-runc.lfHKBo.mount: Deactivated successfully.
Aug 13 00:02:10.258165 containerd[1914]: time="2025-08-13T00:02:10.257975614Z" level=info msg="CreateContainer within sandbox \"867ba09f97b830be860f7ef5ef47e0918ee6d373bf575ea83acc18215fd7d459\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Aug 13 00:02:10.300411 containerd[1914]: time="2025-08-13T00:02:10.300183714Z" level=info msg="CreateContainer within sandbox \"867ba09f97b830be860f7ef5ef47e0918ee6d373bf575ea83acc18215fd7d459\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"38bf2beccadac31b0aadb781b0a8c8dc1192f8b8fa4fc015fd07a750349bb004\""
Aug 13 00:02:10.301004 containerd[1914]: time="2025-08-13T00:02:10.300824043Z" level=info msg="StartContainer for \"38bf2beccadac31b0aadb781b0a8c8dc1192f8b8fa4fc015fd07a750349bb004\""
Aug 13 00:02:10.362988 systemd[1]: Started cri-containerd-38bf2beccadac31b0aadb781b0a8c8dc1192f8b8fa4fc015fd07a750349bb004.scope - libcontainer container 38bf2beccadac31b0aadb781b0a8c8dc1192f8b8fa4fc015fd07a750349bb004.
Aug 13 00:02:10.417789 kubelet[3214]: I0813 00:02:10.414274 3214 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-f5nzf" podStartSLOduration=1.939601266 podStartE2EDuration="13.414251709s" podCreationTimestamp="2025-08-13 00:01:57 +0000 UTC" firstStartedPulling="2025-08-13 00:01:57.686311556 +0000 UTC m=+5.835408723" lastFinishedPulling="2025-08-13 00:02:09.160961999 +0000 UTC m=+17.310059166" observedRunningTime="2025-08-13 00:02:10.372195335 +0000 UTC m=+18.521292525" watchObservedRunningTime="2025-08-13 00:02:10.414251709 +0000 UTC m=+18.563348898"
Aug 13 00:02:10.452334 systemd[1]: cri-containerd-38bf2beccadac31b0aadb781b0a8c8dc1192f8b8fa4fc015fd07a750349bb004.scope: Deactivated successfully.
Aug 13 00:02:10.462792 containerd[1914]: time="2025-08-13T00:02:10.462626655Z" level=info msg="StartContainer for \"38bf2beccadac31b0aadb781b0a8c8dc1192f8b8fa4fc015fd07a750349bb004\" returns successfully"
Aug 13 00:02:10.522269 containerd[1914]: time="2025-08-13T00:02:10.522027392Z" level=info msg="shim disconnected" id=38bf2beccadac31b0aadb781b0a8c8dc1192f8b8fa4fc015fd07a750349bb004 namespace=k8s.io
Aug 13 00:02:10.522269 containerd[1914]: time="2025-08-13T00:02:10.522089928Z" level=warning msg="cleaning up after shim disconnected" id=38bf2beccadac31b0aadb781b0a8c8dc1192f8b8fa4fc015fd07a750349bb004 namespace=k8s.io
Aug 13 00:02:10.522269 containerd[1914]: time="2025-08-13T00:02:10.522103287Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:02:10.826053 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-38bf2beccadac31b0aadb781b0a8c8dc1192f8b8fa4fc015fd07a750349bb004-rootfs.mount: Deactivated successfully.
Aug 13 00:02:11.283158 containerd[1914]: time="2025-08-13T00:02:11.281687847Z" level=info msg="CreateContainer within sandbox \"867ba09f97b830be860f7ef5ef47e0918ee6d373bf575ea83acc18215fd7d459\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Aug 13 00:02:11.327078 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2943789610.mount: Deactivated successfully.
Aug 13 00:02:11.327749 containerd[1914]: time="2025-08-13T00:02:11.327699563Z" level=info msg="CreateContainer within sandbox \"867ba09f97b830be860f7ef5ef47e0918ee6d373bf575ea83acc18215fd7d459\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"34cc2454baace42501f8ea5751bc282fc59ef31a138d44f82161c37e8bf52b8f\""
Aug 13 00:02:11.330100 containerd[1914]: time="2025-08-13T00:02:11.330043801Z" level=info msg="StartContainer for \"34cc2454baace42501f8ea5751bc282fc59ef31a138d44f82161c37e8bf52b8f\""
Aug 13 00:02:11.382252 systemd[1]: Started cri-containerd-34cc2454baace42501f8ea5751bc282fc59ef31a138d44f82161c37e8bf52b8f.scope - libcontainer container 34cc2454baace42501f8ea5751bc282fc59ef31a138d44f82161c37e8bf52b8f.
Aug 13 00:02:11.418952 containerd[1914]: time="2025-08-13T00:02:11.418892228Z" level=info msg="StartContainer for \"34cc2454baace42501f8ea5751bc282fc59ef31a138d44f82161c37e8bf52b8f\" returns successfully"
Aug 13 00:02:11.841580 kubelet[3214]: I0813 00:02:11.841534 3214 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Aug 13 00:02:11.912640 systemd[1]: Created slice kubepods-burstable-podef4f819e_d663_4634_a38c_18127e35f648.slice - libcontainer container kubepods-burstable-podef4f819e_d663_4634_a38c_18127e35f648.slice.
Aug 13 00:02:11.924428 systemd[1]: Created slice kubepods-burstable-pod18ae6206_24d9_4f8d_8d44_72794e85cdfe.slice - libcontainer container kubepods-burstable-pod18ae6206_24d9_4f8d_8d44_72794e85cdfe.slice.
Aug 13 00:02:11.970848 kubelet[3214]: I0813 00:02:11.970797 3214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnv5f\" (UniqueName: \"kubernetes.io/projected/18ae6206-24d9-4f8d-8d44-72794e85cdfe-kube-api-access-gnv5f\") pod \"coredns-674b8bbfcf-nxnhd\" (UID: \"18ae6206-24d9-4f8d-8d44-72794e85cdfe\") " pod="kube-system/coredns-674b8bbfcf-nxnhd"
Aug 13 00:02:11.970848 kubelet[3214]: I0813 00:02:11.970866 3214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef4f819e-d663-4634-a38c-18127e35f648-config-volume\") pod \"coredns-674b8bbfcf-qzp59\" (UID: \"ef4f819e-d663-4634-a38c-18127e35f648\") " pod="kube-system/coredns-674b8bbfcf-qzp59"
Aug 13 00:02:11.971167 kubelet[3214]: I0813 00:02:11.970894 3214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpcm6\" (UniqueName: \"kubernetes.io/projected/ef4f819e-d663-4634-a38c-18127e35f648-kube-api-access-qpcm6\") pod \"coredns-674b8bbfcf-qzp59\" (UID: \"ef4f819e-d663-4634-a38c-18127e35f648\") " pod="kube-system/coredns-674b8bbfcf-qzp59"
Aug 13 00:02:11.971167 kubelet[3214]: I0813 00:02:11.970939 3214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18ae6206-24d9-4f8d-8d44-72794e85cdfe-config-volume\") pod \"coredns-674b8bbfcf-nxnhd\" (UID: \"18ae6206-24d9-4f8d-8d44-72794e85cdfe\") " pod="kube-system/coredns-674b8bbfcf-nxnhd"
Aug 13 00:02:12.221998 containerd[1914]: time="2025-08-13T00:02:12.221218475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qzp59,Uid:ef4f819e-d663-4634-a38c-18127e35f648,Namespace:kube-system,Attempt:0,}"
Aug 13 00:02:12.231015 containerd[1914]: time="2025-08-13T00:02:12.230657550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nxnhd,Uid:18ae6206-24d9-4f8d-8d44-72794e85cdfe,Namespace:kube-system,Attempt:0,}"
Aug 13 00:02:14.267175 systemd-networkd[1744]: cilium_host: Link UP
Aug 13 00:02:14.267768 (udev-worker)[4286]: Network interface NamePolicy= disabled on kernel command line.
Aug 13 00:02:14.269390 (udev-worker)[4288]: Network interface NamePolicy= disabled on kernel command line.
Aug 13 00:02:14.270105 systemd-networkd[1744]: cilium_net: Link UP
Aug 13 00:02:14.270311 systemd-networkd[1744]: cilium_net: Gained carrier
Aug 13 00:02:14.270441 systemd-networkd[1744]: cilium_host: Gained carrier
Aug 13 00:02:14.395043 (udev-worker)[4331]: Network interface NamePolicy= disabled on kernel command line.
Aug 13 00:02:14.411686 systemd-networkd[1744]: cilium_vxlan: Link UP
Aug 13 00:02:14.411912 systemd-networkd[1744]: cilium_vxlan: Gained carrier
Aug 13 00:02:14.861296 systemd-networkd[1744]: cilium_net: Gained IPv6LL
Aug 13 00:02:14.984205 kernel: NET: Registered PF_ALG protocol family
Aug 13 00:02:15.053311 systemd-networkd[1744]: cilium_host: Gained IPv6LL
Aug 13 00:02:15.768753 (udev-worker)[4330]: Network interface NamePolicy= disabled on kernel command line.
Aug 13 00:02:15.771220 systemd-networkd[1744]: lxc_health: Link UP
Aug 13 00:02:15.775358 systemd-networkd[1744]: lxc_health: Gained carrier
Aug 13 00:02:15.821148 systemd-networkd[1744]: cilium_vxlan: Gained IPv6LL
Aug 13 00:02:16.417478 systemd-networkd[1744]: lxcaf67a9d87ee4: Link UP
Aug 13 00:02:16.427381 kernel: eth0: renamed from tmp7dd5f
Aug 13 00:02:16.429763 (udev-worker)[4652]: Network interface NamePolicy= disabled on kernel command line.
Aug 13 00:02:16.429943 systemd-networkd[1744]: lxcaf67a9d87ee4: Gained carrier
Aug 13 00:02:16.464722 systemd-networkd[1744]: lxcd6aa85800f12: Link UP
Aug 13 00:02:16.474137 kernel: eth0: renamed from tmpc9bce
Aug 13 00:02:16.484342 systemd-networkd[1744]: lxcd6aa85800f12: Gained carrier
Aug 13 00:02:16.781149 systemd-networkd[1744]: lxc_health: Gained IPv6LL
Aug 13 00:02:17.409784 kubelet[3214]: I0813 00:02:17.409598 3214 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ct59d" podStartSLOduration=11.177356518 podStartE2EDuration="20.40936113s" podCreationTimestamp="2025-08-13 00:01:57 +0000 UTC" firstStartedPulling="2025-08-13 00:01:57.509465568 +0000 UTC m=+5.658562752" lastFinishedPulling="2025-08-13 00:02:06.741470197 +0000 UTC m=+14.890567364" observedRunningTime="2025-08-13 00:02:12.355722305 +0000 UTC m=+20.504819498" watchObservedRunningTime="2025-08-13 00:02:17.40936113 +0000 UTC m=+25.558458320"
Aug 13 00:02:17.549269 systemd-networkd[1744]: lxcaf67a9d87ee4: Gained IPv6LL
Aug 13 00:02:18.510383 systemd-networkd[1744]: lxcd6aa85800f12: Gained IPv6LL
Aug 13 00:02:20.112886 kubelet[3214]: I0813 00:02:20.112028 3214 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug 13 00:02:20.889335 containerd[1914]: time="2025-08-13T00:02:20.888722323Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:02:20.889335 containerd[1914]: time="2025-08-13T00:02:20.888795481Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:02:20.889335 containerd[1914]: time="2025-08-13T00:02:20.888815165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:02:20.889335 containerd[1914]: time="2025-08-13T00:02:20.888922889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:02:20.925740 containerd[1914]: time="2025-08-13T00:02:20.925200894Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:02:20.925740 containerd[1914]: time="2025-08-13T00:02:20.925375411Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:02:20.925740 containerd[1914]: time="2025-08-13T00:02:20.925436950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:02:20.926237 containerd[1914]: time="2025-08-13T00:02:20.925936074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:02:21.006240 systemd[1]: Started cri-containerd-c9bce617b3cef09798b1c76c3b69f53bbd0a00516fdce2a5585370b24185ea9a.scope - libcontainer container c9bce617b3cef09798b1c76c3b69f53bbd0a00516fdce2a5585370b24185ea9a.
Aug 13 00:02:21.020399 systemd[1]: Started cri-containerd-7dd5fa8be87049aa08bd3d090966fe32f1cda1d6e91cba516b4d20819bde9fa1.scope - libcontainer container 7dd5fa8be87049aa08bd3d090966fe32f1cda1d6e91cba516b4d20819bde9fa1.
Aug 13 00:02:21.129084 containerd[1914]: time="2025-08-13T00:02:21.129027716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nxnhd,Uid:18ae6206-24d9-4f8d-8d44-72794e85cdfe,Namespace:kube-system,Attempt:0,} returns sandbox id \"c9bce617b3cef09798b1c76c3b69f53bbd0a00516fdce2a5585370b24185ea9a\""
Aug 13 00:02:21.156874 containerd[1914]: time="2025-08-13T00:02:21.156754160Z" level=info msg="CreateContainer within sandbox \"c9bce617b3cef09798b1c76c3b69f53bbd0a00516fdce2a5585370b24185ea9a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Aug 13 00:02:21.183970 containerd[1914]: time="2025-08-13T00:02:21.183928312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qzp59,Uid:ef4f819e-d663-4634-a38c-18127e35f648,Namespace:kube-system,Attempt:0,} returns sandbox id \"7dd5fa8be87049aa08bd3d090966fe32f1cda1d6e91cba516b4d20819bde9fa1\""
Aug 13 00:02:21.198513 containerd[1914]: time="2025-08-13T00:02:21.198472229Z" level=info msg="CreateContainer within sandbox \"7dd5fa8be87049aa08bd3d090966fe32f1cda1d6e91cba516b4d20819bde9fa1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Aug 13 00:02:21.205364 containerd[1914]: time="2025-08-13T00:02:21.205311620Z" level=info msg="CreateContainer within sandbox \"c9bce617b3cef09798b1c76c3b69f53bbd0a00516fdce2a5585370b24185ea9a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e2cc1d799d6e24ffb4e4c2fc256d899af3d59d0a3520cc35eb0663d2eee25c00\""
Aug 13 00:02:21.207773 containerd[1914]: time="2025-08-13T00:02:21.205834471Z" level=info msg="StartContainer for \"e2cc1d799d6e24ffb4e4c2fc256d899af3d59d0a3520cc35eb0663d2eee25c00\""
Aug 13 00:02:21.228628 containerd[1914]: time="2025-08-13T00:02:21.228589031Z" level=info msg="CreateContainer within sandbox \"7dd5fa8be87049aa08bd3d090966fe32f1cda1d6e91cba516b4d20819bde9fa1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6d45ebfafb2088452794258d0fbdc76f199f88d35a771b9ff8546f4f287c535b\""
Aug 13 00:02:21.237578 containerd[1914]: time="2025-08-13T00:02:21.236301718Z" level=info msg="StartContainer for \"6d45ebfafb2088452794258d0fbdc76f199f88d35a771b9ff8546f4f287c535b\""
Aug 13 00:02:21.246204 systemd[1]: Started cri-containerd-e2cc1d799d6e24ffb4e4c2fc256d899af3d59d0a3520cc35eb0663d2eee25c00.scope - libcontainer container e2cc1d799d6e24ffb4e4c2fc256d899af3d59d0a3520cc35eb0663d2eee25c00.
Aug 13 00:02:21.292272 systemd[1]: Started cri-containerd-6d45ebfafb2088452794258d0fbdc76f199f88d35a771b9ff8546f4f287c535b.scope - libcontainer container 6d45ebfafb2088452794258d0fbdc76f199f88d35a771b9ff8546f4f287c535b.
Aug 13 00:02:21.299222 containerd[1914]: time="2025-08-13T00:02:21.299148761Z" level=info msg="StartContainer for \"e2cc1d799d6e24ffb4e4c2fc256d899af3d59d0a3520cc35eb0663d2eee25c00\" returns successfully"
Aug 13 00:02:21.348706 containerd[1914]: time="2025-08-13T00:02:21.348649265Z" level=info msg="StartContainer for \"6d45ebfafb2088452794258d0fbdc76f199f88d35a771b9ff8546f4f287c535b\" returns successfully"
Aug 13 00:02:21.900273 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3055093789.mount: Deactivated successfully.
Aug 13 00:02:22.359624 kubelet[3214]: I0813 00:02:22.358939 3214 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-qzp59" podStartSLOduration=25.358923271 podStartE2EDuration="25.358923271s" podCreationTimestamp="2025-08-13 00:01:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:02:22.358685105 +0000 UTC m=+30.507782295" watchObservedRunningTime="2025-08-13 00:02:22.358923271 +0000 UTC m=+30.508020459"
Aug 13 00:02:22.359624 kubelet[3214]: I0813 00:02:22.359048 3214 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-nxnhd" podStartSLOduration=25.359043126 podStartE2EDuration="25.359043126s" podCreationTimestamp="2025-08-13 00:01:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:02:21.364870856 +0000 UTC m=+29.513968058" watchObservedRunningTime="2025-08-13 00:02:22.359043126 +0000 UTC m=+30.508140315"
Aug 13 00:02:23.353451 systemd[1]: Started sshd@9-172.31.18.46:22-139.178.68.195:50832.service - OpenSSH per-connection server daemon (139.178.68.195:50832).
Aug 13 00:02:23.570575 sshd[4863]: Accepted publickey for core from 139.178.68.195 port 50832 ssh2: RSA SHA256:tE+UAy7Iby4Ug1y4oHlPrc3nQAXYFKVNjRvFeG8iCz8
Aug 13 00:02:23.572593 sshd-session[4863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:02:23.579240 systemd-logind[1898]: New session 10 of user core.
Aug 13 00:02:23.587241 systemd[1]: Started session-10.scope - Session 10 of User core.
Aug 13 00:02:23.958172 ntpd[1888]: Listen normally on 7 cilium_host 192.168.0.199:123
Aug 13 00:02:23.959418 ntpd[1888]: 13 Aug 00:02:23 ntpd[1888]: Listen normally on 7 cilium_host 192.168.0.199:123
Aug 13 00:02:23.959418 ntpd[1888]: 13 Aug 00:02:23 ntpd[1888]: Listen normally on 8 cilium_net [fe80::10bd:e1ff:fea8:3e54%4]:123
Aug 13 00:02:23.959418 ntpd[1888]: 13 Aug 00:02:23 ntpd[1888]: Listen normally on 9 cilium_host [fe80::10ff:6bff:fe3b:4ff0%5]:123
Aug 13 00:02:23.959418 ntpd[1888]: 13 Aug 00:02:23 ntpd[1888]: Listen normally on 10 cilium_vxlan [fe80::905f:7bff:fe14:fc5c%6]:123
Aug 13 00:02:23.959418 ntpd[1888]: 13 Aug 00:02:23 ntpd[1888]: Listen normally on 11 lxc_health [fe80::838:4aff:fed0:915%8]:123
Aug 13 00:02:23.959418 ntpd[1888]: 13 Aug 00:02:23 ntpd[1888]: Listen normally on 12 lxcaf67a9d87ee4 [fe80::dc7e:cfff:fe9d:604b%10]:123
Aug 13 00:02:23.959418 ntpd[1888]: 13 Aug 00:02:23 ntpd[1888]: Listen normally on 13 lxcd6aa85800f12 [fe80::785d:fff:fe32:f06c%12]:123
Aug 13 00:02:23.958247 ntpd[1888]: Listen normally on 8 cilium_net [fe80::10bd:e1ff:fea8:3e54%4]:123
Aug 13 00:02:23.958293 ntpd[1888]: Listen normally on 9 cilium_host [fe80::10ff:6bff:fe3b:4ff0%5]:123
Aug 13 00:02:23.958322 ntpd[1888]: Listen normally on 10 cilium_vxlan [fe80::905f:7bff:fe14:fc5c%6]:123
Aug 13 00:02:23.958350 ntpd[1888]: Listen normally on 11 lxc_health [fe80::838:4aff:fed0:915%8]:123
Aug 13 00:02:23.958379 ntpd[1888]: Listen normally on 12 lxcaf67a9d87ee4 [fe80::dc7e:cfff:fe9d:604b%10]:123
Aug 13 00:02:23.958413 ntpd[1888]: Listen normally on 13 lxcd6aa85800f12 [fe80::785d:fff:fe32:f06c%12]:123
Aug 13 00:02:24.417052 sshd[4869]: Connection closed by 139.178.68.195 port 50832
Aug 13 00:02:24.418897 sshd-session[4863]: pam_unix(sshd:session): session closed for user core
Aug 13 00:02:24.422016 systemd[1]: sshd@9-172.31.18.46:22-139.178.68.195:50832.service: Deactivated successfully.
Aug 13 00:02:24.424094 systemd[1]: session-10.scope: Deactivated successfully.
Aug 13 00:02:24.426327 systemd-logind[1898]: Session 10 logged out. Waiting for processes to exit.
Aug 13 00:02:24.427495 systemd-logind[1898]: Removed session 10.
Aug 13 00:02:29.456463 systemd[1]: Started sshd@10-172.31.18.46:22-139.178.68.195:50844.service - OpenSSH per-connection server daemon (139.178.68.195:50844).
Aug 13 00:02:29.622674 sshd[4887]: Accepted publickey for core from 139.178.68.195 port 50844 ssh2: RSA SHA256:tE+UAy7Iby4Ug1y4oHlPrc3nQAXYFKVNjRvFeG8iCz8
Aug 13 00:02:29.624292 sshd-session[4887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:02:29.629562 systemd-logind[1898]: New session 11 of user core.
Aug 13 00:02:29.636253 systemd[1]: Started session-11.scope - Session 11 of User core.
Aug 13 00:02:29.844503 sshd[4889]: Connection closed by 139.178.68.195 port 50844
Aug 13 00:02:29.846291 sshd-session[4887]: pam_unix(sshd:session): session closed for user core
Aug 13 00:02:29.852197 systemd-logind[1898]: Session 11 logged out. Waiting for processes to exit.
Aug 13 00:02:29.852944 systemd[1]: sshd@10-172.31.18.46:22-139.178.68.195:50844.service: Deactivated successfully.
Aug 13 00:02:29.855634 systemd[1]: session-11.scope: Deactivated successfully.
Aug 13 00:02:29.857199 systemd-logind[1898]: Removed session 11.
Aug 13 00:02:34.878621 systemd[1]: Started sshd@11-172.31.18.46:22-139.178.68.195:49500.service - OpenSSH per-connection server daemon (139.178.68.195:49500).
Aug 13 00:02:35.071746 sshd[4902]: Accepted publickey for core from 139.178.68.195 port 49500 ssh2: RSA SHA256:tE+UAy7Iby4Ug1y4oHlPrc3nQAXYFKVNjRvFeG8iCz8
Aug 13 00:02:35.073273 sshd-session[4902]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:02:35.079747 systemd-logind[1898]: New session 12 of user core.
Aug 13 00:02:35.087216 systemd[1]: Started session-12.scope - Session 12 of User core.
Aug 13 00:02:35.312061 sshd[4904]: Connection closed by 139.178.68.195 port 49500 Aug 13 00:02:35.313875 sshd-session[4902]: pam_unix(sshd:session): session closed for user core Aug 13 00:02:35.318220 systemd[1]: sshd@11-172.31.18.46:22-139.178.68.195:49500.service: Deactivated successfully. Aug 13 00:02:35.320736 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 00:02:35.322108 systemd-logind[1898]: Session 12 logged out. Waiting for processes to exit. Aug 13 00:02:35.323439 systemd-logind[1898]: Removed session 12. Aug 13 00:02:40.352362 systemd[1]: Started sshd@12-172.31.18.46:22-139.178.68.195:41838.service - OpenSSH per-connection server daemon (139.178.68.195:41838). Aug 13 00:02:40.514483 sshd[4917]: Accepted publickey for core from 139.178.68.195 port 41838 ssh2: RSA SHA256:tE+UAy7Iby4Ug1y4oHlPrc3nQAXYFKVNjRvFeG8iCz8 Aug 13 00:02:40.515945 sshd-session[4917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:02:40.520501 systemd-logind[1898]: New session 13 of user core. Aug 13 00:02:40.528233 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 13 00:02:40.715062 sshd[4919]: Connection closed by 139.178.68.195 port 41838 Aug 13 00:02:40.715881 sshd-session[4917]: pam_unix(sshd:session): session closed for user core Aug 13 00:02:40.719819 systemd-logind[1898]: Session 13 logged out. Waiting for processes to exit. Aug 13 00:02:40.720568 systemd[1]: sshd@12-172.31.18.46:22-139.178.68.195:41838.service: Deactivated successfully. Aug 13 00:02:40.722678 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 00:02:40.724005 systemd-logind[1898]: Removed session 13. Aug 13 00:02:40.753576 systemd[1]: Started sshd@13-172.31.18.46:22-139.178.68.195:41854.service - OpenSSH per-connection server daemon (139.178.68.195:41854). 
Aug 13 00:02:40.912808 sshd[4932]: Accepted publickey for core from 139.178.68.195 port 41854 ssh2: RSA SHA256:tE+UAy7Iby4Ug1y4oHlPrc3nQAXYFKVNjRvFeG8iCz8 Aug 13 00:02:40.914347 sshd-session[4932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:02:40.920024 systemd-logind[1898]: New session 14 of user core. Aug 13 00:02:40.925420 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 13 00:02:41.169311 sshd[4934]: Connection closed by 139.178.68.195 port 41854 Aug 13 00:02:41.171741 sshd-session[4932]: pam_unix(sshd:session): session closed for user core Aug 13 00:02:41.179728 systemd-logind[1898]: Session 14 logged out. Waiting for processes to exit. Aug 13 00:02:41.182378 systemd[1]: sshd@13-172.31.18.46:22-139.178.68.195:41854.service: Deactivated successfully. Aug 13 00:02:41.187667 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 00:02:41.191615 systemd-logind[1898]: Removed session 14. Aug 13 00:02:41.216112 systemd[1]: Started sshd@14-172.31.18.46:22-139.178.68.195:41856.service - OpenSSH per-connection server daemon (139.178.68.195:41856). Aug 13 00:02:41.399491 sshd[4944]: Accepted publickey for core from 139.178.68.195 port 41856 ssh2: RSA SHA256:tE+UAy7Iby4Ug1y4oHlPrc3nQAXYFKVNjRvFeG8iCz8 Aug 13 00:02:41.400136 sshd-session[4944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:02:41.406130 systemd-logind[1898]: New session 15 of user core. Aug 13 00:02:41.412209 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 13 00:02:41.611796 sshd[4947]: Connection closed by 139.178.68.195 port 41856 Aug 13 00:02:41.613531 sshd-session[4944]: pam_unix(sshd:session): session closed for user core Aug 13 00:02:41.618027 systemd[1]: sshd@14-172.31.18.46:22-139.178.68.195:41856.service: Deactivated successfully. Aug 13 00:02:41.620581 systemd[1]: session-15.scope: Deactivated successfully. 
Aug 13 00:02:41.622039 systemd-logind[1898]: Session 15 logged out. Waiting for processes to exit. Aug 13 00:02:41.623467 systemd-logind[1898]: Removed session 15. Aug 13 00:02:46.644124 systemd[1]: Started sshd@15-172.31.18.46:22-139.178.68.195:41866.service - OpenSSH per-connection server daemon (139.178.68.195:41866). Aug 13 00:02:46.807949 sshd[4959]: Accepted publickey for core from 139.178.68.195 port 41866 ssh2: RSA SHA256:tE+UAy7Iby4Ug1y4oHlPrc3nQAXYFKVNjRvFeG8iCz8 Aug 13 00:02:46.809523 sshd-session[4959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:02:46.815504 systemd-logind[1898]: New session 16 of user core. Aug 13 00:02:46.823263 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 13 00:02:47.005278 sshd[4961]: Connection closed by 139.178.68.195 port 41866 Aug 13 00:02:47.006784 sshd-session[4959]: pam_unix(sshd:session): session closed for user core Aug 13 00:02:47.010159 systemd[1]: sshd@15-172.31.18.46:22-139.178.68.195:41866.service: Deactivated successfully. Aug 13 00:02:47.012359 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 00:02:47.013572 systemd-logind[1898]: Session 16 logged out. Waiting for processes to exit. Aug 13 00:02:47.014791 systemd-logind[1898]: Removed session 16. Aug 13 00:02:52.048179 systemd[1]: Started sshd@16-172.31.18.46:22-139.178.68.195:52514.service - OpenSSH per-connection server daemon (139.178.68.195:52514). Aug 13 00:02:52.215572 sshd[4974]: Accepted publickey for core from 139.178.68.195 port 52514 ssh2: RSA SHA256:tE+UAy7Iby4Ug1y4oHlPrc3nQAXYFKVNjRvFeG8iCz8 Aug 13 00:02:52.217128 sshd-session[4974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:02:52.222393 systemd-logind[1898]: New session 17 of user core. Aug 13 00:02:52.229229 systemd[1]: Started session-17.scope - Session 17 of User core. 
Aug 13 00:02:52.418722 sshd[4978]: Connection closed by 139.178.68.195 port 52514 Aug 13 00:02:52.419450 sshd-session[4974]: pam_unix(sshd:session): session closed for user core Aug 13 00:02:52.422713 systemd[1]: sshd@16-172.31.18.46:22-139.178.68.195:52514.service: Deactivated successfully. Aug 13 00:02:52.424803 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 00:02:52.426425 systemd-logind[1898]: Session 17 logged out. Waiting for processes to exit. Aug 13 00:02:52.428058 systemd-logind[1898]: Removed session 17. Aug 13 00:02:52.452324 systemd[1]: Started sshd@17-172.31.18.46:22-139.178.68.195:52526.service - OpenSSH per-connection server daemon (139.178.68.195:52526). Aug 13 00:02:52.616051 sshd[4990]: Accepted publickey for core from 139.178.68.195 port 52526 ssh2: RSA SHA256:tE+UAy7Iby4Ug1y4oHlPrc3nQAXYFKVNjRvFeG8iCz8 Aug 13 00:02:52.617533 sshd-session[4990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:02:52.623621 systemd-logind[1898]: New session 18 of user core. Aug 13 00:02:52.628224 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 13 00:02:53.293898 sshd[4992]: Connection closed by 139.178.68.195 port 52526 Aug 13 00:02:53.295279 sshd-session[4990]: pam_unix(sshd:session): session closed for user core Aug 13 00:02:53.299210 systemd-logind[1898]: Session 18 logged out. Waiting for processes to exit. Aug 13 00:02:53.299475 systemd[1]: sshd@17-172.31.18.46:22-139.178.68.195:52526.service: Deactivated successfully. Aug 13 00:02:53.301858 systemd[1]: session-18.scope: Deactivated successfully. Aug 13 00:02:53.305009 systemd-logind[1898]: Removed session 18. Aug 13 00:02:53.330075 systemd[1]: Started sshd@18-172.31.18.46:22-139.178.68.195:52528.service - OpenSSH per-connection server daemon (139.178.68.195:52528). 
Aug 13 00:02:53.508008 sshd[5002]: Accepted publickey for core from 139.178.68.195 port 52528 ssh2: RSA SHA256:tE+UAy7Iby4Ug1y4oHlPrc3nQAXYFKVNjRvFeG8iCz8 Aug 13 00:02:53.508927 sshd-session[5002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:02:53.514685 systemd-logind[1898]: New session 19 of user core. Aug 13 00:02:53.523248 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 13 00:02:54.315161 sshd[5004]: Connection closed by 139.178.68.195 port 52528 Aug 13 00:02:54.316205 sshd-session[5002]: pam_unix(sshd:session): session closed for user core Aug 13 00:02:54.323424 systemd-logind[1898]: Session 19 logged out. Waiting for processes to exit. Aug 13 00:02:54.324384 systemd[1]: sshd@18-172.31.18.46:22-139.178.68.195:52528.service: Deactivated successfully. Aug 13 00:02:54.329768 systemd[1]: session-19.scope: Deactivated successfully. Aug 13 00:02:54.331499 systemd-logind[1898]: Removed session 19. Aug 13 00:02:54.350584 systemd[1]: Started sshd@19-172.31.18.46:22-139.178.68.195:52530.service - OpenSSH per-connection server daemon (139.178.68.195:52530). Aug 13 00:02:54.529362 sshd[5021]: Accepted publickey for core from 139.178.68.195 port 52530 ssh2: RSA SHA256:tE+UAy7Iby4Ug1y4oHlPrc3nQAXYFKVNjRvFeG8iCz8 Aug 13 00:02:54.531667 sshd-session[5021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:02:54.537417 systemd-logind[1898]: New session 20 of user core. Aug 13 00:02:54.550248 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 13 00:02:54.905726 sshd[5023]: Connection closed by 139.178.68.195 port 52530 Aug 13 00:02:54.910937 sshd-session[5021]: pam_unix(sshd:session): session closed for user core Aug 13 00:02:54.915667 systemd-logind[1898]: Session 20 logged out. Waiting for processes to exit. Aug 13 00:02:54.916680 systemd[1]: sshd@19-172.31.18.46:22-139.178.68.195:52530.service: Deactivated successfully. 
Aug 13 00:02:54.919506 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 00:02:54.920734 systemd-logind[1898]: Removed session 20. Aug 13 00:02:54.943339 systemd[1]: Started sshd@20-172.31.18.46:22-139.178.68.195:52544.service - OpenSSH per-connection server daemon (139.178.68.195:52544). Aug 13 00:02:55.129449 sshd[5033]: Accepted publickey for core from 139.178.68.195 port 52544 ssh2: RSA SHA256:tE+UAy7Iby4Ug1y4oHlPrc3nQAXYFKVNjRvFeG8iCz8 Aug 13 00:02:55.131086 sshd-session[5033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:02:55.136095 systemd-logind[1898]: New session 21 of user core. Aug 13 00:02:55.148248 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 13 00:02:55.333820 sshd[5035]: Connection closed by 139.178.68.195 port 52544 Aug 13 00:02:55.334808 sshd-session[5033]: pam_unix(sshd:session): session closed for user core Aug 13 00:02:55.339780 systemd[1]: sshd@20-172.31.18.46:22-139.178.68.195:52544.service: Deactivated successfully. Aug 13 00:02:55.342871 systemd[1]: session-21.scope: Deactivated successfully. Aug 13 00:02:55.343838 systemd-logind[1898]: Session 21 logged out. Waiting for processes to exit. Aug 13 00:02:55.345403 systemd-logind[1898]: Removed session 21. Aug 13 00:03:00.374382 systemd[1]: Started sshd@21-172.31.18.46:22-139.178.68.195:60592.service - OpenSSH per-connection server daemon (139.178.68.195:60592). Aug 13 00:03:00.541110 sshd[5051]: Accepted publickey for core from 139.178.68.195 port 60592 ssh2: RSA SHA256:tE+UAy7Iby4Ug1y4oHlPrc3nQAXYFKVNjRvFeG8iCz8 Aug 13 00:03:00.542726 sshd-session[5051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:03:00.548133 systemd-logind[1898]: New session 22 of user core. Aug 13 00:03:00.553247 systemd[1]: Started session-22.scope - Session 22 of User core. 
Aug 13 00:03:00.749349 sshd[5053]: Connection closed by 139.178.68.195 port 60592 Aug 13 00:03:00.751127 sshd-session[5051]: pam_unix(sshd:session): session closed for user core Aug 13 00:03:00.756066 systemd[1]: sshd@21-172.31.18.46:22-139.178.68.195:60592.service: Deactivated successfully. Aug 13 00:03:00.756270 systemd-logind[1898]: Session 22 logged out. Waiting for processes to exit. Aug 13 00:03:00.759582 systemd[1]: session-22.scope: Deactivated successfully. Aug 13 00:03:00.760852 systemd-logind[1898]: Removed session 22. Aug 13 00:03:05.787325 systemd[1]: Started sshd@22-172.31.18.46:22-139.178.68.195:60606.service - OpenSSH per-connection server daemon (139.178.68.195:60606). Aug 13 00:03:05.949653 sshd[5065]: Accepted publickey for core from 139.178.68.195 port 60606 ssh2: RSA SHA256:tE+UAy7Iby4Ug1y4oHlPrc3nQAXYFKVNjRvFeG8iCz8 Aug 13 00:03:05.951134 sshd-session[5065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:03:05.956087 systemd-logind[1898]: New session 23 of user core. Aug 13 00:03:05.968292 systemd[1]: Started session-23.scope - Session 23 of User core. Aug 13 00:03:06.158731 sshd[5067]: Connection closed by 139.178.68.195 port 60606 Aug 13 00:03:06.160428 sshd-session[5065]: pam_unix(sshd:session): session closed for user core Aug 13 00:03:06.165881 systemd[1]: sshd@22-172.31.18.46:22-139.178.68.195:60606.service: Deactivated successfully. Aug 13 00:03:06.170098 systemd[1]: session-23.scope: Deactivated successfully. Aug 13 00:03:06.171811 systemd-logind[1898]: Session 23 logged out. Waiting for processes to exit. Aug 13 00:03:06.173054 systemd-logind[1898]: Removed session 23. Aug 13 00:03:11.206335 systemd[1]: Started sshd@23-172.31.18.46:22-139.178.68.195:54408.service - OpenSSH per-connection server daemon (139.178.68.195:54408). 
Aug 13 00:03:11.380027 sshd[5079]: Accepted publickey for core from 139.178.68.195 port 54408 ssh2: RSA SHA256:tE+UAy7Iby4Ug1y4oHlPrc3nQAXYFKVNjRvFeG8iCz8 Aug 13 00:03:11.381089 sshd-session[5079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:03:11.386521 systemd-logind[1898]: New session 24 of user core. Aug 13 00:03:11.398283 systemd[1]: Started session-24.scope - Session 24 of User core. Aug 13 00:03:11.587860 sshd[5081]: Connection closed by 139.178.68.195 port 54408 Aug 13 00:03:11.589491 sshd-session[5079]: pam_unix(sshd:session): session closed for user core Aug 13 00:03:11.596299 systemd[1]: sshd@23-172.31.18.46:22-139.178.68.195:54408.service: Deactivated successfully. Aug 13 00:03:11.599493 systemd[1]: session-24.scope: Deactivated successfully. Aug 13 00:03:11.612265 systemd-logind[1898]: Session 24 logged out. Waiting for processes to exit. Aug 13 00:03:11.629348 systemd[1]: Started sshd@24-172.31.18.46:22-139.178.68.195:54420.service - OpenSSH per-connection server daemon (139.178.68.195:54420). Aug 13 00:03:11.630973 systemd-logind[1898]: Removed session 24. Aug 13 00:03:11.799841 sshd[5092]: Accepted publickey for core from 139.178.68.195 port 54420 ssh2: RSA SHA256:tE+UAy7Iby4Ug1y4oHlPrc3nQAXYFKVNjRvFeG8iCz8 Aug 13 00:03:11.801484 sshd-session[5092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:03:11.807049 systemd-logind[1898]: New session 25 of user core. Aug 13 00:03:11.812212 systemd[1]: Started session-25.scope - Session 25 of User core. 
Aug 13 00:03:13.329197 containerd[1914]: time="2025-08-13T00:03:13.329149979Z" level=info msg="StopContainer for \"8342e4b9c83923ffd6d8c15fdbbce185a00f44c4de7efe6e6ced92fb4deb8c45\" with timeout 30 (s)" Aug 13 00:03:13.337232 containerd[1914]: time="2025-08-13T00:03:13.337196403Z" level=info msg="Stop container \"8342e4b9c83923ffd6d8c15fdbbce185a00f44c4de7efe6e6ced92fb4deb8c45\" with signal terminated" Aug 13 00:03:13.390491 systemd[1]: cri-containerd-8342e4b9c83923ffd6d8c15fdbbce185a00f44c4de7efe6e6ced92fb4deb8c45.scope: Deactivated successfully. Aug 13 00:03:13.414520 containerd[1914]: time="2025-08-13T00:03:13.414471582Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:03:13.434644 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8342e4b9c83923ffd6d8c15fdbbce185a00f44c4de7efe6e6ced92fb4deb8c45-rootfs.mount: Deactivated successfully. 
Aug 13 00:03:13.444547 containerd[1914]: time="2025-08-13T00:03:13.444311854Z" level=info msg="StopContainer for \"34cc2454baace42501f8ea5751bc282fc59ef31a138d44f82161c37e8bf52b8f\" with timeout 2 (s)" Aug 13 00:03:13.446338 containerd[1914]: time="2025-08-13T00:03:13.445123843Z" level=info msg="Stop container \"34cc2454baace42501f8ea5751bc282fc59ef31a138d44f82161c37e8bf52b8f\" with signal terminated" Aug 13 00:03:13.452899 containerd[1914]: time="2025-08-13T00:03:13.452822569Z" level=info msg="shim disconnected" id=8342e4b9c83923ffd6d8c15fdbbce185a00f44c4de7efe6e6ced92fb4deb8c45 namespace=k8s.io Aug 13 00:03:13.452899 containerd[1914]: time="2025-08-13T00:03:13.452892903Z" level=warning msg="cleaning up after shim disconnected" id=8342e4b9c83923ffd6d8c15fdbbce185a00f44c4de7efe6e6ced92fb4deb8c45 namespace=k8s.io Aug 13 00:03:13.452899 containerd[1914]: time="2025-08-13T00:03:13.452903910Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:03:13.459377 systemd-networkd[1744]: lxc_health: Link DOWN Aug 13 00:03:13.459386 systemd-networkd[1744]: lxc_health: Lost carrier Aug 13 00:03:13.484540 systemd[1]: cri-containerd-34cc2454baace42501f8ea5751bc282fc59ef31a138d44f82161c37e8bf52b8f.scope: Deactivated successfully. Aug 13 00:03:13.484892 systemd[1]: cri-containerd-34cc2454baace42501f8ea5751bc282fc59ef31a138d44f82161c37e8bf52b8f.scope: Consumed 8.071s CPU time, 195.3M memory peak, 75.7M read from disk, 13.3M written to disk. 
Aug 13 00:03:13.504023 containerd[1914]: time="2025-08-13T00:03:13.503562239Z" level=info msg="StopContainer for \"8342e4b9c83923ffd6d8c15fdbbce185a00f44c4de7efe6e6ced92fb4deb8c45\" returns successfully" Aug 13 00:03:13.505251 containerd[1914]: time="2025-08-13T00:03:13.505096477Z" level=info msg="StopPodSandbox for \"a629e7e0e78e6a11d7c91272200eae39dab89923ec195c848ed27ce25ad02160\"" Aug 13 00:03:13.509121 containerd[1914]: time="2025-08-13T00:03:13.509033974Z" level=info msg="Container to stop \"8342e4b9c83923ffd6d8c15fdbbce185a00f44c4de7efe6e6ced92fb4deb8c45\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:03:13.518317 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a629e7e0e78e6a11d7c91272200eae39dab89923ec195c848ed27ce25ad02160-shm.mount: Deactivated successfully. Aug 13 00:03:13.534058 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34cc2454baace42501f8ea5751bc282fc59ef31a138d44f82161c37e8bf52b8f-rootfs.mount: Deactivated successfully. Aug 13 00:03:13.535529 systemd[1]: cri-containerd-a629e7e0e78e6a11d7c91272200eae39dab89923ec195c848ed27ce25ad02160.scope: Deactivated successfully. Aug 13 00:03:13.551901 containerd[1914]: time="2025-08-13T00:03:13.551378281Z" level=info msg="shim disconnected" id=34cc2454baace42501f8ea5751bc282fc59ef31a138d44f82161c37e8bf52b8f namespace=k8s.io Aug 13 00:03:13.551901 containerd[1914]: time="2025-08-13T00:03:13.551463837Z" level=warning msg="cleaning up after shim disconnected" id=34cc2454baace42501f8ea5751bc282fc59ef31a138d44f82161c37e8bf52b8f namespace=k8s.io Aug 13 00:03:13.551901 containerd[1914]: time="2025-08-13T00:03:13.551476565Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:03:13.568684 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a629e7e0e78e6a11d7c91272200eae39dab89923ec195c848ed27ce25ad02160-rootfs.mount: Deactivated successfully. 
Aug 13 00:03:13.579027 containerd[1914]: time="2025-08-13T00:03:13.577595001Z" level=info msg="shim disconnected" id=a629e7e0e78e6a11d7c91272200eae39dab89923ec195c848ed27ce25ad02160 namespace=k8s.io Aug 13 00:03:13.579027 containerd[1914]: time="2025-08-13T00:03:13.577657543Z" level=warning msg="cleaning up after shim disconnected" id=a629e7e0e78e6a11d7c91272200eae39dab89923ec195c848ed27ce25ad02160 namespace=k8s.io Aug 13 00:03:13.579027 containerd[1914]: time="2025-08-13T00:03:13.577669195Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:03:13.601205 containerd[1914]: time="2025-08-13T00:03:13.600368504Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:03:13Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Aug 13 00:03:13.602359 containerd[1914]: time="2025-08-13T00:03:13.601958366Z" level=info msg="TearDown network for sandbox \"a629e7e0e78e6a11d7c91272200eae39dab89923ec195c848ed27ce25ad02160\" successfully" Aug 13 00:03:13.602521 containerd[1914]: time="2025-08-13T00:03:13.602496459Z" level=info msg="StopPodSandbox for \"a629e7e0e78e6a11d7c91272200eae39dab89923ec195c848ed27ce25ad02160\" returns successfully" Aug 13 00:03:13.602688 containerd[1914]: time="2025-08-13T00:03:13.602415262Z" level=info msg="StopContainer for \"34cc2454baace42501f8ea5751bc282fc59ef31a138d44f82161c37e8bf52b8f\" returns successfully" Aug 13 00:03:13.603685 containerd[1914]: time="2025-08-13T00:03:13.603605361Z" level=info msg="StopPodSandbox for \"867ba09f97b830be860f7ef5ef47e0918ee6d373bf575ea83acc18215fd7d459\"" Aug 13 00:03:13.603685 containerd[1914]: time="2025-08-13T00:03:13.603649236Z" level=info msg="Container to stop \"ac7234c1e9f07705f6856b9f32eff176272d5d58584536c31fe7934b4cfbb897\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:03:13.603815 containerd[1914]: 
time="2025-08-13T00:03:13.603694538Z" level=info msg="Container to stop \"38bf2beccadac31b0aadb781b0a8c8dc1192f8b8fa4fc015fd07a750349bb004\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:03:13.603815 containerd[1914]: time="2025-08-13T00:03:13.603708180Z" level=info msg="Container to stop \"b8fc1ca6a4f95f0dfb1a4062ea3fb005171939b8baf91d60ac61f70e22eaf615\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:03:13.603815 containerd[1914]: time="2025-08-13T00:03:13.603721246Z" level=info msg="Container to stop \"53aa4d4b755e42080b579a049d11f16158d733188b400f4609c93ee529e73c90\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:03:13.603815 containerd[1914]: time="2025-08-13T00:03:13.603734016Z" level=info msg="Container to stop \"34cc2454baace42501f8ea5751bc282fc59ef31a138d44f82161c37e8bf52b8f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:03:13.614071 systemd[1]: cri-containerd-867ba09f97b830be860f7ef5ef47e0918ee6d373bf575ea83acc18215fd7d459.scope: Deactivated successfully. 
Aug 13 00:03:13.650620 kubelet[3214]: I0813 00:03:13.650574 3214 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fm6w\" (UniqueName: \"kubernetes.io/projected/e4f01dc3-731d-46f0-956f-49f12d76b5b1-kube-api-access-6fm6w\") pod \"e4f01dc3-731d-46f0-956f-49f12d76b5b1\" (UID: \"e4f01dc3-731d-46f0-956f-49f12d76b5b1\") " Aug 13 00:03:13.652613 kubelet[3214]: I0813 00:03:13.650665 3214 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e4f01dc3-731d-46f0-956f-49f12d76b5b1-cilium-config-path\") pod \"e4f01dc3-731d-46f0-956f-49f12d76b5b1\" (UID: \"e4f01dc3-731d-46f0-956f-49f12d76b5b1\") " Aug 13 00:03:13.666384 containerd[1914]: time="2025-08-13T00:03:13.665899281Z" level=info msg="shim disconnected" id=867ba09f97b830be860f7ef5ef47e0918ee6d373bf575ea83acc18215fd7d459 namespace=k8s.io Aug 13 00:03:13.669437 containerd[1914]: time="2025-08-13T00:03:13.669134748Z" level=warning msg="cleaning up after shim disconnected" id=867ba09f97b830be860f7ef5ef47e0918ee6d373bf575ea83acc18215fd7d459 namespace=k8s.io Aug 13 00:03:13.669437 containerd[1914]: time="2025-08-13T00:03:13.669201167Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:03:13.680378 kubelet[3214]: I0813 00:03:13.679404 3214 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4f01dc3-731d-46f0-956f-49f12d76b5b1-kube-api-access-6fm6w" (OuterVolumeSpecName: "kube-api-access-6fm6w") pod "e4f01dc3-731d-46f0-956f-49f12d76b5b1" (UID: "e4f01dc3-731d-46f0-956f-49f12d76b5b1"). InnerVolumeSpecName "kube-api-access-6fm6w". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 00:03:13.684736 kubelet[3214]: I0813 00:03:13.677925 3214 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4f01dc3-731d-46f0-956f-49f12d76b5b1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e4f01dc3-731d-46f0-956f-49f12d76b5b1" (UID: "e4f01dc3-731d-46f0-956f-49f12d76b5b1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 00:03:13.707748 containerd[1914]: time="2025-08-13T00:03:13.707603678Z" level=info msg="TearDown network for sandbox \"867ba09f97b830be860f7ef5ef47e0918ee6d373bf575ea83acc18215fd7d459\" successfully" Aug 13 00:03:13.707748 containerd[1914]: time="2025-08-13T00:03:13.707639579Z" level=info msg="StopPodSandbox for \"867ba09f97b830be860f7ef5ef47e0918ee6d373bf575ea83acc18215fd7d459\" returns successfully" Aug 13 00:03:13.757005 kubelet[3214]: I0813 00:03:13.755488 3214 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0-cni-path\") pod \"a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0\" (UID: \"a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0\") " Aug 13 00:03:13.757005 kubelet[3214]: I0813 00:03:13.755551 3214 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0-cilium-run\") pod \"a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0\" (UID: \"a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0\") " Aug 13 00:03:13.757005 kubelet[3214]: I0813 00:03:13.755572 3214 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0-xtables-lock\") pod \"a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0\" (UID: \"a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0\") " Aug 13 00:03:13.757005 kubelet[3214]: I0813 
00:03:13.755594 3214 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0-host-proc-sys-net\") pod \"a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0\" (UID: \"a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0\") " Aug 13 00:03:13.757005 kubelet[3214]: I0813 00:03:13.755620 3214 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0-cilium-cgroup\") pod \"a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0\" (UID: \"a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0\") " Aug 13 00:03:13.757005 kubelet[3214]: I0813 00:03:13.755644 3214 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0-lib-modules\") pod \"a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0\" (UID: \"a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0\") " Aug 13 00:03:13.757402 kubelet[3214]: I0813 00:03:13.755672 3214 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5vhv\" (UniqueName: \"kubernetes.io/projected/a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0-kube-api-access-f5vhv\") pod \"a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0\" (UID: \"a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0\") " Aug 13 00:03:13.757402 kubelet[3214]: I0813 00:03:13.755693 3214 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0-hostproc\") pod \"a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0\" (UID: \"a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0\") " Aug 13 00:03:13.757402 kubelet[3214]: I0813 00:03:13.755718 3214 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0-clustermesh-secrets\") pod 
\"a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0\" (UID: \"a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0\") " Aug 13 00:03:13.757402 kubelet[3214]: I0813 00:03:13.755746 3214 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0-cilium-config-path\") pod \"a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0\" (UID: \"a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0\") " Aug 13 00:03:13.757402 kubelet[3214]: I0813 00:03:13.755769 3214 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0-hubble-tls\") pod \"a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0\" (UID: \"a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0\") " Aug 13 00:03:13.757402 kubelet[3214]: I0813 00:03:13.755791 3214 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0-etc-cni-netd\") pod \"a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0\" (UID: \"a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0\") " Aug 13 00:03:13.757655 kubelet[3214]: I0813 00:03:13.755812 3214 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0-host-proc-sys-kernel\") pod \"a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0\" (UID: \"a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0\") " Aug 13 00:03:13.757655 kubelet[3214]: I0813 00:03:13.755837 3214 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0-bpf-maps\") pod \"a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0\" (UID: \"a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0\") " Aug 13 00:03:13.757655 kubelet[3214]: I0813 00:03:13.755893 3214 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6fm6w\" (UniqueName: 
\"kubernetes.io/projected/e4f01dc3-731d-46f0-956f-49f12d76b5b1-kube-api-access-6fm6w\") on node \"ip-172-31-18-46\" DevicePath \"\"" Aug 13 00:03:13.757655 kubelet[3214]: I0813 00:03:13.755909 3214 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e4f01dc3-731d-46f0-956f-49f12d76b5b1-cilium-config-path\") on node \"ip-172-31-18-46\" DevicePath \"\"" Aug 13 00:03:13.760781 kubelet[3214]: I0813 00:03:13.755971 3214 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0" (UID: "a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:03:13.760781 kubelet[3214]: I0813 00:03:13.760506 3214 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0-hostproc" (OuterVolumeSpecName: "hostproc") pod "a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0" (UID: "a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:03:13.766605 kubelet[3214]: I0813 00:03:13.765883 3214 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0" (UID: "a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 00:03:13.766605 kubelet[3214]: I0813 00:03:13.766113 3214 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0-kube-api-access-f5vhv" (OuterVolumeSpecName: "kube-api-access-f5vhv") pod "a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0" (UID: "a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0"). InnerVolumeSpecName "kube-api-access-f5vhv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 00:03:13.766605 kubelet[3214]: I0813 00:03:13.766152 3214 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0-cni-path" (OuterVolumeSpecName: "cni-path") pod "a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0" (UID: "a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:03:13.766605 kubelet[3214]: I0813 00:03:13.766177 3214 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0" (UID: "a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:03:13.766605 kubelet[3214]: I0813 00:03:13.766196 3214 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0" (UID: "a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:03:13.766929 kubelet[3214]: I0813 00:03:13.766226 3214 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0" (UID: "a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:03:13.766929 kubelet[3214]: I0813 00:03:13.766243 3214 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0" (UID: "a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:03:13.766929 kubelet[3214]: I0813 00:03:13.766261 3214 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0" (UID: "a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:03:13.766929 kubelet[3214]: I0813 00:03:13.766282 3214 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0" (UID: "a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:03:13.773209 kubelet[3214]: I0813 00:03:13.773169 3214 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0" (UID: "a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 00:03:13.773424 kubelet[3214]: I0813 00:03:13.773405 3214 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0" (UID: "a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:03:13.773964 kubelet[3214]: I0813 00:03:13.773917 3214 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0" (UID: "a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 00:03:13.857213 kubelet[3214]: I0813 00:03:13.857069 3214 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0-host-proc-sys-kernel\") on node \"ip-172-31-18-46\" DevicePath \"\"" Aug 13 00:03:13.857213 kubelet[3214]: I0813 00:03:13.857110 3214 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0-bpf-maps\") on node \"ip-172-31-18-46\" DevicePath \"\"" Aug 13 00:03:13.857213 kubelet[3214]: I0813 00:03:13.857123 3214 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0-cni-path\") on node \"ip-172-31-18-46\" DevicePath \"\"" Aug 13 00:03:13.857213 kubelet[3214]: I0813 00:03:13.857134 3214 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0-cilium-run\") on node \"ip-172-31-18-46\" DevicePath \"\"" Aug 13 00:03:13.857213 kubelet[3214]: I0813 00:03:13.857146 3214 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0-xtables-lock\") on node \"ip-172-31-18-46\" DevicePath \"\"" Aug 13 00:03:13.857213 kubelet[3214]: I0813 00:03:13.857157 3214 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0-host-proc-sys-net\") on node \"ip-172-31-18-46\" DevicePath \"\"" Aug 13 00:03:13.857213 kubelet[3214]: I0813 00:03:13.857168 3214 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0-cilium-cgroup\") on node \"ip-172-31-18-46\" DevicePath \"\"" Aug 13 00:03:13.857213 
kubelet[3214]: I0813 00:03:13.857179 3214 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0-lib-modules\") on node \"ip-172-31-18-46\" DevicePath \"\"" Aug 13 00:03:13.857768 kubelet[3214]: I0813 00:03:13.857190 3214 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-f5vhv\" (UniqueName: \"kubernetes.io/projected/a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0-kube-api-access-f5vhv\") on node \"ip-172-31-18-46\" DevicePath \"\"" Aug 13 00:03:13.857768 kubelet[3214]: I0813 00:03:13.857201 3214 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0-hostproc\") on node \"ip-172-31-18-46\" DevicePath \"\"" Aug 13 00:03:13.857768 kubelet[3214]: I0813 00:03:13.857214 3214 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0-clustermesh-secrets\") on node \"ip-172-31-18-46\" DevicePath \"\"" Aug 13 00:03:13.857768 kubelet[3214]: I0813 00:03:13.857230 3214 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0-cilium-config-path\") on node \"ip-172-31-18-46\" DevicePath \"\"" Aug 13 00:03:13.857768 kubelet[3214]: I0813 00:03:13.857241 3214 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0-hubble-tls\") on node \"ip-172-31-18-46\" DevicePath \"\"" Aug 13 00:03:13.857768 kubelet[3214]: I0813 00:03:13.857251 3214 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0-etc-cni-netd\") on node \"ip-172-31-18-46\" DevicePath \"\"" Aug 13 00:03:14.092668 systemd[1]: Removed slice 
kubepods-besteffort-pode4f01dc3_731d_46f0_956f_49f12d76b5b1.slice - libcontainer container kubepods-besteffort-pode4f01dc3_731d_46f0_956f_49f12d76b5b1.slice. Aug 13 00:03:14.094615 systemd[1]: Removed slice kubepods-burstable-poda1d2e28c_61c7_4f7d_a985_b8b27ef2c1e0.slice - libcontainer container kubepods-burstable-poda1d2e28c_61c7_4f7d_a985_b8b27ef2c1e0.slice. Aug 13 00:03:14.094725 systemd[1]: kubepods-burstable-poda1d2e28c_61c7_4f7d_a985_b8b27ef2c1e0.slice: Consumed 8.177s CPU time, 195.6M memory peak, 76.8M read from disk, 13.3M written to disk. Aug 13 00:03:14.383826 systemd[1]: var-lib-kubelet-pods-e4f01dc3\x2d731d\x2d46f0\x2d956f\x2d49f12d76b5b1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6fm6w.mount: Deactivated successfully. Aug 13 00:03:14.384232 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-867ba09f97b830be860f7ef5ef47e0918ee6d373bf575ea83acc18215fd7d459-rootfs.mount: Deactivated successfully. Aug 13 00:03:14.384367 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-867ba09f97b830be860f7ef5ef47e0918ee6d373bf575ea83acc18215fd7d459-shm.mount: Deactivated successfully. Aug 13 00:03:14.384478 systemd[1]: var-lib-kubelet-pods-a1d2e28c\x2d61c7\x2d4f7d\x2da985\x2db8b27ef2c1e0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2df5vhv.mount: Deactivated successfully. Aug 13 00:03:14.384590 systemd[1]: var-lib-kubelet-pods-a1d2e28c\x2d61c7\x2d4f7d\x2da985\x2db8b27ef2c1e0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 13 00:03:14.384697 systemd[1]: var-lib-kubelet-pods-a1d2e28c\x2d61c7\x2d4f7d\x2da985\x2db8b27ef2c1e0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Aug 13 00:03:14.476061 kubelet[3214]: I0813 00:03:14.474401 3214 scope.go:117] "RemoveContainer" containerID="8342e4b9c83923ffd6d8c15fdbbce185a00f44c4de7efe6e6ced92fb4deb8c45" Aug 13 00:03:14.507017 containerd[1914]: time="2025-08-13T00:03:14.506949203Z" level=info msg="RemoveContainer for \"8342e4b9c83923ffd6d8c15fdbbce185a00f44c4de7efe6e6ced92fb4deb8c45\"" Aug 13 00:03:14.516583 containerd[1914]: time="2025-08-13T00:03:14.516318139Z" level=info msg="RemoveContainer for \"8342e4b9c83923ffd6d8c15fdbbce185a00f44c4de7efe6e6ced92fb4deb8c45\" returns successfully" Aug 13 00:03:14.525878 kubelet[3214]: I0813 00:03:14.525295 3214 scope.go:117] "RemoveContainer" containerID="8342e4b9c83923ffd6d8c15fdbbce185a00f44c4de7efe6e6ced92fb4deb8c45" Aug 13 00:03:14.527384 containerd[1914]: time="2025-08-13T00:03:14.527340180Z" level=error msg="ContainerStatus for \"8342e4b9c83923ffd6d8c15fdbbce185a00f44c4de7efe6e6ced92fb4deb8c45\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8342e4b9c83923ffd6d8c15fdbbce185a00f44c4de7efe6e6ced92fb4deb8c45\": not found" Aug 13 00:03:14.538426 kubelet[3214]: E0813 00:03:14.538359 3214 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8342e4b9c83923ffd6d8c15fdbbce185a00f44c4de7efe6e6ced92fb4deb8c45\": not found" containerID="8342e4b9c83923ffd6d8c15fdbbce185a00f44c4de7efe6e6ced92fb4deb8c45" Aug 13 00:03:14.549668 kubelet[3214]: I0813 00:03:14.538708 3214 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8342e4b9c83923ffd6d8c15fdbbce185a00f44c4de7efe6e6ced92fb4deb8c45"} err="failed to get container status \"8342e4b9c83923ffd6d8c15fdbbce185a00f44c4de7efe6e6ced92fb4deb8c45\": rpc error: code = NotFound desc = an error occurred when try to find container \"8342e4b9c83923ffd6d8c15fdbbce185a00f44c4de7efe6e6ced92fb4deb8c45\": not found" Aug 13 00:03:14.549668 
kubelet[3214]: I0813 00:03:14.549289 3214 scope.go:117] "RemoveContainer" containerID="34cc2454baace42501f8ea5751bc282fc59ef31a138d44f82161c37e8bf52b8f" Aug 13 00:03:14.551287 containerd[1914]: time="2025-08-13T00:03:14.550976320Z" level=info msg="RemoveContainer for \"34cc2454baace42501f8ea5751bc282fc59ef31a138d44f82161c37e8bf52b8f\"" Aug 13 00:03:14.556103 containerd[1914]: time="2025-08-13T00:03:14.556061865Z" level=info msg="RemoveContainer for \"34cc2454baace42501f8ea5751bc282fc59ef31a138d44f82161c37e8bf52b8f\" returns successfully" Aug 13 00:03:14.556586 kubelet[3214]: I0813 00:03:14.556432 3214 scope.go:117] "RemoveContainer" containerID="38bf2beccadac31b0aadb781b0a8c8dc1192f8b8fa4fc015fd07a750349bb004" Aug 13 00:03:14.557747 containerd[1914]: time="2025-08-13T00:03:14.557655537Z" level=info msg="RemoveContainer for \"38bf2beccadac31b0aadb781b0a8c8dc1192f8b8fa4fc015fd07a750349bb004\"" Aug 13 00:03:14.563232 containerd[1914]: time="2025-08-13T00:03:14.563192564Z" level=info msg="RemoveContainer for \"38bf2beccadac31b0aadb781b0a8c8dc1192f8b8fa4fc015fd07a750349bb004\" returns successfully" Aug 13 00:03:14.563424 kubelet[3214]: I0813 00:03:14.563399 3214 scope.go:117] "RemoveContainer" containerID="ac7234c1e9f07705f6856b9f32eff176272d5d58584536c31fe7934b4cfbb897" Aug 13 00:03:14.564390 containerd[1914]: time="2025-08-13T00:03:14.564342645Z" level=info msg="RemoveContainer for \"ac7234c1e9f07705f6856b9f32eff176272d5d58584536c31fe7934b4cfbb897\"" Aug 13 00:03:14.569843 containerd[1914]: time="2025-08-13T00:03:14.569792665Z" level=info msg="RemoveContainer for \"ac7234c1e9f07705f6856b9f32eff176272d5d58584536c31fe7934b4cfbb897\" returns successfully" Aug 13 00:03:14.570684 kubelet[3214]: I0813 00:03:14.570657 3214 scope.go:117] "RemoveContainer" containerID="53aa4d4b755e42080b579a049d11f16158d733188b400f4609c93ee529e73c90" Aug 13 00:03:14.575438 containerd[1914]: time="2025-08-13T00:03:14.575325555Z" level=info msg="RemoveContainer for 
\"53aa4d4b755e42080b579a049d11f16158d733188b400f4609c93ee529e73c90\"" Aug 13 00:03:14.595837 containerd[1914]: time="2025-08-13T00:03:14.595732661Z" level=info msg="RemoveContainer for \"53aa4d4b755e42080b579a049d11f16158d733188b400f4609c93ee529e73c90\" returns successfully" Aug 13 00:03:14.596444 kubelet[3214]: I0813 00:03:14.596316 3214 scope.go:117] "RemoveContainer" containerID="b8fc1ca6a4f95f0dfb1a4062ea3fb005171939b8baf91d60ac61f70e22eaf615" Aug 13 00:03:14.598125 containerd[1914]: time="2025-08-13T00:03:14.597933031Z" level=info msg="RemoveContainer for \"b8fc1ca6a4f95f0dfb1a4062ea3fb005171939b8baf91d60ac61f70e22eaf615\"" Aug 13 00:03:14.615468 containerd[1914]: time="2025-08-13T00:03:14.615416685Z" level=info msg="RemoveContainer for \"b8fc1ca6a4f95f0dfb1a4062ea3fb005171939b8baf91d60ac61f70e22eaf615\" returns successfully" Aug 13 00:03:14.615772 kubelet[3214]: I0813 00:03:14.615723 3214 scope.go:117] "RemoveContainer" containerID="34cc2454baace42501f8ea5751bc282fc59ef31a138d44f82161c37e8bf52b8f" Aug 13 00:03:14.616064 containerd[1914]: time="2025-08-13T00:03:14.616017353Z" level=error msg="ContainerStatus for \"34cc2454baace42501f8ea5751bc282fc59ef31a138d44f82161c37e8bf52b8f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"34cc2454baace42501f8ea5751bc282fc59ef31a138d44f82161c37e8bf52b8f\": not found" Aug 13 00:03:14.616275 kubelet[3214]: E0813 00:03:14.616237 3214 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"34cc2454baace42501f8ea5751bc282fc59ef31a138d44f82161c37e8bf52b8f\": not found" containerID="34cc2454baace42501f8ea5751bc282fc59ef31a138d44f82161c37e8bf52b8f" Aug 13 00:03:14.616417 kubelet[3214]: I0813 00:03:14.616274 3214 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"34cc2454baace42501f8ea5751bc282fc59ef31a138d44f82161c37e8bf52b8f"} err="failed to get 
container status \"34cc2454baace42501f8ea5751bc282fc59ef31a138d44f82161c37e8bf52b8f\": rpc error: code = NotFound desc = an error occurred when try to find container \"34cc2454baace42501f8ea5751bc282fc59ef31a138d44f82161c37e8bf52b8f\": not found" Aug 13 00:03:14.616417 kubelet[3214]: I0813 00:03:14.616301 3214 scope.go:117] "RemoveContainer" containerID="38bf2beccadac31b0aadb781b0a8c8dc1192f8b8fa4fc015fd07a750349bb004" Aug 13 00:03:14.616610 containerd[1914]: time="2025-08-13T00:03:14.616574884Z" level=error msg="ContainerStatus for \"38bf2beccadac31b0aadb781b0a8c8dc1192f8b8fa4fc015fd07a750349bb004\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"38bf2beccadac31b0aadb781b0a8c8dc1192f8b8fa4fc015fd07a750349bb004\": not found" Aug 13 00:03:14.616885 kubelet[3214]: E0813 00:03:14.616841 3214 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"38bf2beccadac31b0aadb781b0a8c8dc1192f8b8fa4fc015fd07a750349bb004\": not found" containerID="38bf2beccadac31b0aadb781b0a8c8dc1192f8b8fa4fc015fd07a750349bb004" Aug 13 00:03:14.616885 kubelet[3214]: I0813 00:03:14.616873 3214 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"38bf2beccadac31b0aadb781b0a8c8dc1192f8b8fa4fc015fd07a750349bb004"} err="failed to get container status \"38bf2beccadac31b0aadb781b0a8c8dc1192f8b8fa4fc015fd07a750349bb004\": rpc error: code = NotFound desc = an error occurred when try to find container \"38bf2beccadac31b0aadb781b0a8c8dc1192f8b8fa4fc015fd07a750349bb004\": not found" Aug 13 00:03:14.617039 kubelet[3214]: I0813 00:03:14.616895 3214 scope.go:117] "RemoveContainer" containerID="ac7234c1e9f07705f6856b9f32eff176272d5d58584536c31fe7934b4cfbb897" Aug 13 00:03:14.617188 containerd[1914]: time="2025-08-13T00:03:14.617149978Z" level=error msg="ContainerStatus for 
\"ac7234c1e9f07705f6856b9f32eff176272d5d58584536c31fe7934b4cfbb897\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ac7234c1e9f07705f6856b9f32eff176272d5d58584536c31fe7934b4cfbb897\": not found" Aug 13 00:03:14.617545 kubelet[3214]: E0813 00:03:14.617508 3214 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ac7234c1e9f07705f6856b9f32eff176272d5d58584536c31fe7934b4cfbb897\": not found" containerID="ac7234c1e9f07705f6856b9f32eff176272d5d58584536c31fe7934b4cfbb897" Aug 13 00:03:14.617629 kubelet[3214]: I0813 00:03:14.617536 3214 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ac7234c1e9f07705f6856b9f32eff176272d5d58584536c31fe7934b4cfbb897"} err="failed to get container status \"ac7234c1e9f07705f6856b9f32eff176272d5d58584536c31fe7934b4cfbb897\": rpc error: code = NotFound desc = an error occurred when try to find container \"ac7234c1e9f07705f6856b9f32eff176272d5d58584536c31fe7934b4cfbb897\": not found" Aug 13 00:03:14.617629 kubelet[3214]: I0813 00:03:14.617559 3214 scope.go:117] "RemoveContainer" containerID="53aa4d4b755e42080b579a049d11f16158d733188b400f4609c93ee529e73c90" Aug 13 00:03:14.618457 kubelet[3214]: E0813 00:03:14.617894 3214 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"53aa4d4b755e42080b579a049d11f16158d733188b400f4609c93ee529e73c90\": not found" containerID="53aa4d4b755e42080b579a049d11f16158d733188b400f4609c93ee529e73c90" Aug 13 00:03:14.618457 kubelet[3214]: I0813 00:03:14.617923 3214 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"53aa4d4b755e42080b579a049d11f16158d733188b400f4609c93ee529e73c90"} err="failed to get container status \"53aa4d4b755e42080b579a049d11f16158d733188b400f4609c93ee529e73c90\": rpc 
error: code = NotFound desc = an error occurred when try to find container \"53aa4d4b755e42080b579a049d11f16158d733188b400f4609c93ee529e73c90\": not found" Aug 13 00:03:14.618457 kubelet[3214]: I0813 00:03:14.617946 3214 scope.go:117] "RemoveContainer" containerID="b8fc1ca6a4f95f0dfb1a4062ea3fb005171939b8baf91d60ac61f70e22eaf615" Aug 13 00:03:14.619149 containerd[1914]: time="2025-08-13T00:03:14.617737811Z" level=error msg="ContainerStatus for \"53aa4d4b755e42080b579a049d11f16158d733188b400f4609c93ee529e73c90\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"53aa4d4b755e42080b579a049d11f16158d733188b400f4609c93ee529e73c90\": not found" Aug 13 00:03:14.619149 containerd[1914]: time="2025-08-13T00:03:14.618158894Z" level=error msg="ContainerStatus for \"b8fc1ca6a4f95f0dfb1a4062ea3fb005171939b8baf91d60ac61f70e22eaf615\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b8fc1ca6a4f95f0dfb1a4062ea3fb005171939b8baf91d60ac61f70e22eaf615\": not found" Aug 13 00:03:14.619222 kubelet[3214]: E0813 00:03:14.618547 3214 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b8fc1ca6a4f95f0dfb1a4062ea3fb005171939b8baf91d60ac61f70e22eaf615\": not found" containerID="b8fc1ca6a4f95f0dfb1a4062ea3fb005171939b8baf91d60ac61f70e22eaf615" Aug 13 00:03:14.619222 kubelet[3214]: I0813 00:03:14.618570 3214 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b8fc1ca6a4f95f0dfb1a4062ea3fb005171939b8baf91d60ac61f70e22eaf615"} err="failed to get container status \"b8fc1ca6a4f95f0dfb1a4062ea3fb005171939b8baf91d60ac61f70e22eaf615\": rpc error: code = NotFound desc = an error occurred when try to find container \"b8fc1ca6a4f95f0dfb1a4062ea3fb005171939b8baf91d60ac61f70e22eaf615\": not found" Aug 13 00:03:15.264737 sshd[5095]: Connection closed by 139.178.68.195 port 54420 
Aug 13 00:03:15.265764 sshd-session[5092]: pam_unix(sshd:session): session closed for user core Aug 13 00:03:15.270098 systemd[1]: sshd@24-172.31.18.46:22-139.178.68.195:54420.service: Deactivated successfully. Aug 13 00:03:15.272412 systemd[1]: session-25.scope: Deactivated successfully. Aug 13 00:03:15.273937 systemd-logind[1898]: Session 25 logged out. Waiting for processes to exit. Aug 13 00:03:15.275030 systemd-logind[1898]: Removed session 25. Aug 13 00:03:15.302829 systemd[1]: Started sshd@25-172.31.18.46:22-139.178.68.195:54422.service - OpenSSH per-connection server daemon (139.178.68.195:54422). Aug 13 00:03:15.478265 sshd[5256]: Accepted publickey for core from 139.178.68.195 port 54422 ssh2: RSA SHA256:tE+UAy7Iby4Ug1y4oHlPrc3nQAXYFKVNjRvFeG8iCz8 Aug 13 00:03:15.479804 sshd-session[5256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:03:15.485960 systemd-logind[1898]: New session 26 of user core. Aug 13 00:03:15.495192 systemd[1]: Started session-26.scope - Session 26 of User core. 
Aug 13 00:03:15.954419 ntpd[1888]: Deleting interface #11 lxc_health, fe80::838:4aff:fed0:915%8#123, interface stats: received=0, sent=0, dropped=0, active_time=52 secs Aug 13 00:03:15.954904 ntpd[1888]: 13 Aug 00:03:15 ntpd[1888]: Deleting interface #11 lxc_health, fe80::838:4aff:fed0:915%8#123, interface stats: received=0, sent=0, dropped=0, active_time=52 secs Aug 13 00:03:16.087993 kubelet[3214]: I0813 00:03:16.087915 3214 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0" path="/var/lib/kubelet/pods/a1d2e28c-61c7-4f7d-a985-b8b27ef2c1e0/volumes" Aug 13 00:03:16.090358 kubelet[3214]: I0813 00:03:16.089131 3214 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4f01dc3-731d-46f0-956f-49f12d76b5b1" path="/var/lib/kubelet/pods/e4f01dc3-731d-46f0-956f-49f12d76b5b1/volumes" Aug 13 00:03:16.109939 sshd[5258]: Connection closed by 139.178.68.195 port 54422 Aug 13 00:03:16.111132 sshd-session[5256]: pam_unix(sshd:session): session closed for user core Aug 13 00:03:16.119093 systemd[1]: sshd@25-172.31.18.46:22-139.178.68.195:54422.service: Deactivated successfully. Aug 13 00:03:16.119404 systemd-logind[1898]: Session 26 logged out. Waiting for processes to exit. Aug 13 00:03:16.122859 systemd[1]: session-26.scope: Deactivated successfully. Aug 13 00:03:16.129632 systemd-logind[1898]: Removed session 26. Aug 13 00:03:16.157466 systemd[1]: Started sshd@26-172.31.18.46:22-139.178.68.195:54428.service - OpenSSH per-connection server daemon (139.178.68.195:54428). Aug 13 00:03:16.179975 systemd[1]: Created slice kubepods-burstable-pod7111bb26_96d0_4fdd_9123_66e4dbb9c5d8.slice - libcontainer container kubepods-burstable-pod7111bb26_96d0_4fdd_9123_66e4dbb9c5d8.slice. 
Aug 13 00:03:16.193026 kubelet[3214]: I0813 00:03:16.192821 3214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7111bb26-96d0-4fdd-9123-66e4dbb9c5d8-cni-path\") pod \"cilium-qmclm\" (UID: \"7111bb26-96d0-4fdd-9123-66e4dbb9c5d8\") " pod="kube-system/cilium-qmclm" Aug 13 00:03:16.193026 kubelet[3214]: I0813 00:03:16.192865 3214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7111bb26-96d0-4fdd-9123-66e4dbb9c5d8-cilium-ipsec-secrets\") pod \"cilium-qmclm\" (UID: \"7111bb26-96d0-4fdd-9123-66e4dbb9c5d8\") " pod="kube-system/cilium-qmclm" Aug 13 00:03:16.193026 kubelet[3214]: I0813 00:03:16.192883 3214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7111bb26-96d0-4fdd-9123-66e4dbb9c5d8-host-proc-sys-kernel\") pod \"cilium-qmclm\" (UID: \"7111bb26-96d0-4fdd-9123-66e4dbb9c5d8\") " pod="kube-system/cilium-qmclm" Aug 13 00:03:16.193026 kubelet[3214]: I0813 00:03:16.192899 3214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7111bb26-96d0-4fdd-9123-66e4dbb9c5d8-cilium-run\") pod \"cilium-qmclm\" (UID: \"7111bb26-96d0-4fdd-9123-66e4dbb9c5d8\") " pod="kube-system/cilium-qmclm" Aug 13 00:03:16.193026 kubelet[3214]: I0813 00:03:16.192956 3214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7111bb26-96d0-4fdd-9123-66e4dbb9c5d8-bpf-maps\") pod \"cilium-qmclm\" (UID: \"7111bb26-96d0-4fdd-9123-66e4dbb9c5d8\") " pod="kube-system/cilium-qmclm" Aug 13 00:03:16.193026 kubelet[3214]: I0813 00:03:16.192971 3214 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7111bb26-96d0-4fdd-9123-66e4dbb9c5d8-lib-modules\") pod \"cilium-qmclm\" (UID: \"7111bb26-96d0-4fdd-9123-66e4dbb9c5d8\") " pod="kube-system/cilium-qmclm" Aug 13 00:03:16.193305 kubelet[3214]: I0813 00:03:16.193025 3214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7111bb26-96d0-4fdd-9123-66e4dbb9c5d8-xtables-lock\") pod \"cilium-qmclm\" (UID: \"7111bb26-96d0-4fdd-9123-66e4dbb9c5d8\") " pod="kube-system/cilium-qmclm" Aug 13 00:03:16.193305 kubelet[3214]: I0813 00:03:16.193041 3214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7111bb26-96d0-4fdd-9123-66e4dbb9c5d8-cilium-config-path\") pod \"cilium-qmclm\" (UID: \"7111bb26-96d0-4fdd-9123-66e4dbb9c5d8\") " pod="kube-system/cilium-qmclm" Aug 13 00:03:16.193521 kubelet[3214]: I0813 00:03:16.193500 3214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7111bb26-96d0-4fdd-9123-66e4dbb9c5d8-cilium-cgroup\") pod \"cilium-qmclm\" (UID: \"7111bb26-96d0-4fdd-9123-66e4dbb9c5d8\") " pod="kube-system/cilium-qmclm" Aug 13 00:03:16.193600 kubelet[3214]: I0813 00:03:16.193528 3214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7111bb26-96d0-4fdd-9123-66e4dbb9c5d8-host-proc-sys-net\") pod \"cilium-qmclm\" (UID: \"7111bb26-96d0-4fdd-9123-66e4dbb9c5d8\") " pod="kube-system/cilium-qmclm" Aug 13 00:03:16.193600 kubelet[3214]: I0813 00:03:16.193570 3214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/7111bb26-96d0-4fdd-9123-66e4dbb9c5d8-hubble-tls\") pod \"cilium-qmclm\" (UID: \"7111bb26-96d0-4fdd-9123-66e4dbb9c5d8\") " pod="kube-system/cilium-qmclm" Aug 13 00:03:16.193600 kubelet[3214]: I0813 00:03:16.193587 3214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl2tk\" (UniqueName: \"kubernetes.io/projected/7111bb26-96d0-4fdd-9123-66e4dbb9c5d8-kube-api-access-sl2tk\") pod \"cilium-qmclm\" (UID: \"7111bb26-96d0-4fdd-9123-66e4dbb9c5d8\") " pod="kube-system/cilium-qmclm" Aug 13 00:03:16.193735 kubelet[3214]: I0813 00:03:16.193604 3214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7111bb26-96d0-4fdd-9123-66e4dbb9c5d8-etc-cni-netd\") pod \"cilium-qmclm\" (UID: \"7111bb26-96d0-4fdd-9123-66e4dbb9c5d8\") " pod="kube-system/cilium-qmclm" Aug 13 00:03:16.193735 kubelet[3214]: I0813 00:03:16.193619 3214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7111bb26-96d0-4fdd-9123-66e4dbb9c5d8-hostproc\") pod \"cilium-qmclm\" (UID: \"7111bb26-96d0-4fdd-9123-66e4dbb9c5d8\") " pod="kube-system/cilium-qmclm" Aug 13 00:03:16.193735 kubelet[3214]: I0813 00:03:16.193636 3214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7111bb26-96d0-4fdd-9123-66e4dbb9c5d8-clustermesh-secrets\") pod \"cilium-qmclm\" (UID: \"7111bb26-96d0-4fdd-9123-66e4dbb9c5d8\") " pod="kube-system/cilium-qmclm" Aug 13 00:03:16.335357 sshd[5268]: Accepted publickey for core from 139.178.68.195 port 54428 ssh2: RSA SHA256:tE+UAy7Iby4Ug1y4oHlPrc3nQAXYFKVNjRvFeG8iCz8 Aug 13 00:03:16.337123 sshd-session[5268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:03:16.342258 
systemd-logind[1898]: New session 27 of user core. Aug 13 00:03:16.347201 systemd[1]: Started session-27.scope - Session 27 of User core. Aug 13 00:03:16.471249 sshd[5275]: Connection closed by 139.178.68.195 port 54428 Aug 13 00:03:16.471837 sshd-session[5268]: pam_unix(sshd:session): session closed for user core Aug 13 00:03:16.476176 systemd-logind[1898]: Session 27 logged out. Waiting for processes to exit. Aug 13 00:03:16.476358 systemd[1]: sshd@26-172.31.18.46:22-139.178.68.195:54428.service: Deactivated successfully. Aug 13 00:03:16.478553 systemd[1]: session-27.scope: Deactivated successfully. Aug 13 00:03:16.479783 systemd-logind[1898]: Removed session 27. Aug 13 00:03:16.495035 containerd[1914]: time="2025-08-13T00:03:16.494721874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qmclm,Uid:7111bb26-96d0-4fdd-9123-66e4dbb9c5d8,Namespace:kube-system,Attempt:0,}" Aug 13 00:03:16.522338 systemd[1]: Started sshd@27-172.31.18.46:22-139.178.68.195:54434.service - OpenSSH per-connection server daemon (139.178.68.195:54434). Aug 13 00:03:16.564322 containerd[1914]: time="2025-08-13T00:03:16.564214944Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:03:16.564322 containerd[1914]: time="2025-08-13T00:03:16.564292743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:03:16.564601 containerd[1914]: time="2025-08-13T00:03:16.564548427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:03:16.565572 containerd[1914]: time="2025-08-13T00:03:16.565306991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:03:16.594005 systemd[1]: Started cri-containerd-787113f291acb4ee88a7fff368b7f54a217fb38f433fa17fef5c6dc4a0361f9d.scope - libcontainer container 787113f291acb4ee88a7fff368b7f54a217fb38f433fa17fef5c6dc4a0361f9d. Aug 13 00:03:16.626470 containerd[1914]: time="2025-08-13T00:03:16.626182732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qmclm,Uid:7111bb26-96d0-4fdd-9123-66e4dbb9c5d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"787113f291acb4ee88a7fff368b7f54a217fb38f433fa17fef5c6dc4a0361f9d\"" Aug 13 00:03:16.636607 containerd[1914]: time="2025-08-13T00:03:16.636564862Z" level=info msg="CreateContainer within sandbox \"787113f291acb4ee88a7fff368b7f54a217fb38f433fa17fef5c6dc4a0361f9d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 00:03:16.658019 containerd[1914]: time="2025-08-13T00:03:16.657958631Z" level=info msg="CreateContainer within sandbox \"787113f291acb4ee88a7fff368b7f54a217fb38f433fa17fef5c6dc4a0361f9d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"845562be820d3f6813ac32ad0b6c5a7774805c050ce722af8f9f423eea608840\"" Aug 13 00:03:16.659196 containerd[1914]: time="2025-08-13T00:03:16.659139280Z" level=info msg="StartContainer for \"845562be820d3f6813ac32ad0b6c5a7774805c050ce722af8f9f423eea608840\"" Aug 13 00:03:16.692449 sshd[5282]: Accepted publickey for core from 139.178.68.195 port 54434 ssh2: RSA SHA256:tE+UAy7Iby4Ug1y4oHlPrc3nQAXYFKVNjRvFeG8iCz8 Aug 13 00:03:16.694590 systemd[1]: Started cri-containerd-845562be820d3f6813ac32ad0b6c5a7774805c050ce722af8f9f423eea608840.scope - libcontainer container 845562be820d3f6813ac32ad0b6c5a7774805c050ce722af8f9f423eea608840. Aug 13 00:03:16.696116 sshd-session[5282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:03:16.711269 systemd-logind[1898]: New session 28 of user core. 
Aug 13 00:03:16.713930 systemd[1]: Started session-28.scope - Session 28 of User core.
Aug 13 00:03:16.751306 containerd[1914]: time="2025-08-13T00:03:16.751250272Z" level=info msg="StartContainer for \"845562be820d3f6813ac32ad0b6c5a7774805c050ce722af8f9f423eea608840\" returns successfully"
Aug 13 00:03:16.769568 systemd[1]: cri-containerd-845562be820d3f6813ac32ad0b6c5a7774805c050ce722af8f9f423eea608840.scope: Deactivated successfully.
Aug 13 00:03:16.769924 systemd[1]: cri-containerd-845562be820d3f6813ac32ad0b6c5a7774805c050ce722af8f9f423eea608840.scope: Consumed 26ms CPU time, 9.9M memory peak, 3.2M read from disk.
Aug 13 00:03:16.829437 containerd[1914]: time="2025-08-13T00:03:16.829371689Z" level=info msg="shim disconnected" id=845562be820d3f6813ac32ad0b6c5a7774805c050ce722af8f9f423eea608840 namespace=k8s.io
Aug 13 00:03:16.829437 containerd[1914]: time="2025-08-13T00:03:16.829426492Z" level=warning msg="cleaning up after shim disconnected" id=845562be820d3f6813ac32ad0b6c5a7774805c050ce722af8f9f423eea608840 namespace=k8s.io
Aug 13 00:03:16.829437 containerd[1914]: time="2025-08-13T00:03:16.829437791Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:03:17.320099 kubelet[3214]: E0813 00:03:17.320038 3214 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Aug 13 00:03:17.521639 containerd[1914]: time="2025-08-13T00:03:17.521598444Z" level=info msg="CreateContainer within sandbox \"787113f291acb4ee88a7fff368b7f54a217fb38f433fa17fef5c6dc4a0361f9d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Aug 13 00:03:17.549282 containerd[1914]: time="2025-08-13T00:03:17.549229611Z" level=info msg="CreateContainer within sandbox \"787113f291acb4ee88a7fff368b7f54a217fb38f433fa17fef5c6dc4a0361f9d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2111fe735d2555ed661455663e21843a81e92a7486eb09a01b89feb1453239fd\""
Aug 13 00:03:17.550108 containerd[1914]: time="2025-08-13T00:03:17.549974079Z" level=info msg="StartContainer for \"2111fe735d2555ed661455663e21843a81e92a7486eb09a01b89feb1453239fd\""
Aug 13 00:03:17.587204 systemd[1]: Started cri-containerd-2111fe735d2555ed661455663e21843a81e92a7486eb09a01b89feb1453239fd.scope - libcontainer container 2111fe735d2555ed661455663e21843a81e92a7486eb09a01b89feb1453239fd.
Aug 13 00:03:17.620872 containerd[1914]: time="2025-08-13T00:03:17.620715829Z" level=info msg="StartContainer for \"2111fe735d2555ed661455663e21843a81e92a7486eb09a01b89feb1453239fd\" returns successfully"
Aug 13 00:03:17.631260 systemd[1]: cri-containerd-2111fe735d2555ed661455663e21843a81e92a7486eb09a01b89feb1453239fd.scope: Deactivated successfully.
Aug 13 00:03:17.631538 systemd[1]: cri-containerd-2111fe735d2555ed661455663e21843a81e92a7486eb09a01b89feb1453239fd.scope: Consumed 19ms CPU time, 7.2M memory peak, 2M read from disk.
Aug 13 00:03:17.669747 containerd[1914]: time="2025-08-13T00:03:17.669522443Z" level=info msg="shim disconnected" id=2111fe735d2555ed661455663e21843a81e92a7486eb09a01b89feb1453239fd namespace=k8s.io
Aug 13 00:03:17.669747 containerd[1914]: time="2025-08-13T00:03:17.669577014Z" level=warning msg="cleaning up after shim disconnected" id=2111fe735d2555ed661455663e21843a81e92a7486eb09a01b89feb1453239fd namespace=k8s.io
Aug 13 00:03:17.669747 containerd[1914]: time="2025-08-13T00:03:17.669585556Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:03:18.310594 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2111fe735d2555ed661455663e21843a81e92a7486eb09a01b89feb1453239fd-rootfs.mount: Deactivated successfully.
Aug 13 00:03:18.528418 containerd[1914]: time="2025-08-13T00:03:18.528385190Z" level=info msg="CreateContainer within sandbox \"787113f291acb4ee88a7fff368b7f54a217fb38f433fa17fef5c6dc4a0361f9d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Aug 13 00:03:18.553551 containerd[1914]: time="2025-08-13T00:03:18.553294234Z" level=info msg="CreateContainer within sandbox \"787113f291acb4ee88a7fff368b7f54a217fb38f433fa17fef5c6dc4a0361f9d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3299021c51e33590c1979b9cdcaa2d1434c304e8bb4789a5ee1397fda4e73dd9\""
Aug 13 00:03:18.554483 containerd[1914]: time="2025-08-13T00:03:18.554204039Z" level=info msg="StartContainer for \"3299021c51e33590c1979b9cdcaa2d1434c304e8bb4789a5ee1397fda4e73dd9\""
Aug 13 00:03:18.606254 systemd[1]: Started cri-containerd-3299021c51e33590c1979b9cdcaa2d1434c304e8bb4789a5ee1397fda4e73dd9.scope - libcontainer container 3299021c51e33590c1979b9cdcaa2d1434c304e8bb4789a5ee1397fda4e73dd9.
Aug 13 00:03:18.645290 containerd[1914]: time="2025-08-13T00:03:18.645223850Z" level=info msg="StartContainer for \"3299021c51e33590c1979b9cdcaa2d1434c304e8bb4789a5ee1397fda4e73dd9\" returns successfully"
Aug 13 00:03:18.653793 systemd[1]: cri-containerd-3299021c51e33590c1979b9cdcaa2d1434c304e8bb4789a5ee1397fda4e73dd9.scope: Deactivated successfully.
Aug 13 00:03:18.690607 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3299021c51e33590c1979b9cdcaa2d1434c304e8bb4789a5ee1397fda4e73dd9-rootfs.mount: Deactivated successfully.
Aug 13 00:03:18.706860 containerd[1914]: time="2025-08-13T00:03:18.706778104Z" level=info msg="shim disconnected" id=3299021c51e33590c1979b9cdcaa2d1434c304e8bb4789a5ee1397fda4e73dd9 namespace=k8s.io
Aug 13 00:03:18.706860 containerd[1914]: time="2025-08-13T00:03:18.706831793Z" level=warning msg="cleaning up after shim disconnected" id=3299021c51e33590c1979b9cdcaa2d1434c304e8bb4789a5ee1397fda4e73dd9 namespace=k8s.io
Aug 13 00:03:18.706860 containerd[1914]: time="2025-08-13T00:03:18.706842768Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:03:19.565927 containerd[1914]: time="2025-08-13T00:03:19.565878963Z" level=info msg="CreateContainer within sandbox \"787113f291acb4ee88a7fff368b7f54a217fb38f433fa17fef5c6dc4a0361f9d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Aug 13 00:03:19.585873 containerd[1914]: time="2025-08-13T00:03:19.585824942Z" level=info msg="CreateContainer within sandbox \"787113f291acb4ee88a7fff368b7f54a217fb38f433fa17fef5c6dc4a0361f9d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"710a691670d8db9b2601d3d3b0aba99b487c5660e39ff7bc2d42aa66f47b6eab\""
Aug 13 00:03:19.588352 containerd[1914]: time="2025-08-13T00:03:19.586550780Z" level=info msg="StartContainer for \"710a691670d8db9b2601d3d3b0aba99b487c5660e39ff7bc2d42aa66f47b6eab\""
Aug 13 00:03:19.625877 systemd[1]: run-containerd-runc-k8s.io-710a691670d8db9b2601d3d3b0aba99b487c5660e39ff7bc2d42aa66f47b6eab-runc.8jxJwu.mount: Deactivated successfully.
Aug 13 00:03:19.642251 systemd[1]: Started cri-containerd-710a691670d8db9b2601d3d3b0aba99b487c5660e39ff7bc2d42aa66f47b6eab.scope - libcontainer container 710a691670d8db9b2601d3d3b0aba99b487c5660e39ff7bc2d42aa66f47b6eab.
Aug 13 00:03:19.673765 systemd[1]: cri-containerd-710a691670d8db9b2601d3d3b0aba99b487c5660e39ff7bc2d42aa66f47b6eab.scope: Deactivated successfully.
Aug 13 00:03:19.679817 containerd[1914]: time="2025-08-13T00:03:19.679740864Z" level=info msg="StartContainer for \"710a691670d8db9b2601d3d3b0aba99b487c5660e39ff7bc2d42aa66f47b6eab\" returns successfully"
Aug 13 00:03:19.707653 containerd[1914]: time="2025-08-13T00:03:19.707590883Z" level=info msg="shim disconnected" id=710a691670d8db9b2601d3d3b0aba99b487c5660e39ff7bc2d42aa66f47b6eab namespace=k8s.io
Aug 13 00:03:19.707653 containerd[1914]: time="2025-08-13T00:03:19.707636853Z" level=warning msg="cleaning up after shim disconnected" id=710a691670d8db9b2601d3d3b0aba99b487c5660e39ff7bc2d42aa66f47b6eab namespace=k8s.io
Aug 13 00:03:19.707653 containerd[1914]: time="2025-08-13T00:03:19.707644691Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:03:20.539154 containerd[1914]: time="2025-08-13T00:03:20.539100157Z" level=info msg="CreateContainer within sandbox \"787113f291acb4ee88a7fff368b7f54a217fb38f433fa17fef5c6dc4a0361f9d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Aug 13 00:03:20.563806 containerd[1914]: time="2025-08-13T00:03:20.563756582Z" level=info msg="CreateContainer within sandbox \"787113f291acb4ee88a7fff368b7f54a217fb38f433fa17fef5c6dc4a0361f9d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f8544dd5c73cc3b0bce6471716de3933374442e5ee0fc36259fde8a66f817439\""
Aug 13 00:03:20.564330 containerd[1914]: time="2025-08-13T00:03:20.564304322Z" level=info msg="StartContainer for \"f8544dd5c73cc3b0bce6471716de3933374442e5ee0fc36259fde8a66f817439\""
Aug 13 00:03:20.584122 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-710a691670d8db9b2601d3d3b0aba99b487c5660e39ff7bc2d42aa66f47b6eab-rootfs.mount: Deactivated successfully.
Aug 13 00:03:20.595163 systemd[1]: Started cri-containerd-f8544dd5c73cc3b0bce6471716de3933374442e5ee0fc36259fde8a66f817439.scope - libcontainer container f8544dd5c73cc3b0bce6471716de3933374442e5ee0fc36259fde8a66f817439.
Aug 13 00:03:20.629065 containerd[1914]: time="2025-08-13T00:03:20.628807531Z" level=info msg="StartContainer for \"f8544dd5c73cc3b0bce6471716de3933374442e5ee0fc36259fde8a66f817439\" returns successfully"
Aug 13 00:03:21.382081 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Aug 13 00:03:21.557429 kubelet[3214]: I0813 00:03:21.557064 3214 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qmclm" podStartSLOduration=5.55704096 podStartE2EDuration="5.55704096s" podCreationTimestamp="2025-08-13 00:03:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:03:21.555565615 +0000 UTC m=+89.704662840" watchObservedRunningTime="2025-08-13 00:03:21.55704096 +0000 UTC m=+89.706138150"
Aug 13 00:03:23.379111 systemd[1]: run-containerd-runc-k8s.io-f8544dd5c73cc3b0bce6471716de3933374442e5ee0fc36259fde8a66f817439-runc.Ki9BG4.mount: Deactivated successfully.
Aug 13 00:03:24.519527 (udev-worker)[6133]: Network interface NamePolicy= disabled on kernel command line.
Aug 13 00:03:24.521414 systemd-networkd[1744]: lxc_health: Link UP
Aug 13 00:03:24.551893 systemd-networkd[1744]: lxc_health: Gained carrier
Aug 13 00:03:26.093170 systemd-networkd[1744]: lxc_health: Gained IPv6LL
Aug 13 00:03:30.040701 sshd[5352]: Connection closed by 139.178.68.195 port 54434
Aug 13 00:03:30.042474 sshd-session[5282]: pam_unix(sshd:session): session closed for user core
Aug 13 00:03:30.048445 systemd-logind[1898]: Session 28 logged out. Waiting for processes to exit.
Aug 13 00:03:30.049419 systemd[1]: sshd@27-172.31.18.46:22-139.178.68.195:54434.service: Deactivated successfully.
Aug 13 00:03:30.052326 systemd[1]: session-28.scope: Deactivated successfully.
Aug 13 00:03:30.053573 systemd-logind[1898]: Removed session 28.
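The containerd entries above pair each init container's "CreateContainer … returns container id" record with a later "StartContainer … returns successfully" record. A minimal Python sketch of reconstructing the start order by container name from journal text like this (the regexes are assumptions inferred from the sample lines here, not a documented containerd log format):

```python
import re

# "CreateContainer ... returns container id" maps a ContainerMetadata name to its id;
# "StartContainer ... returns successfully" records the actual start order.
# Quotes inside msg= appear escaped as \" in the journal text, so the patterns
# match a literal backslash before each quote.
CREATE_RE = re.compile(r'ContainerMetadata\{Name:([^,]+),Attempt:\d+,\} '
                       r'returns container id \\"([0-9a-f]+)\\"')
START_RE = re.compile(r'StartContainer for \\"([0-9a-f]+)\\" returns successfully')

def init_container_order(lines):
    """Return container names in the order their StartContainer succeeded."""
    id_to_name = {}
    order = []
    for line in lines:
        m = CREATE_RE.search(line)
        if m:
            id_to_name[m.group(2)] = m.group(1)
            continue
        m = START_RE.search(line)
        if m:
            # Fall back to the raw id if the create record was not seen.
            order.append(id_to_name.get(m.group(1), m.group(1)))
    return order
```

Run over the lines above, this would yield the cilium init sequence mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state, cilium-agent.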
Aug 13 00:03:30.954393 ntpd[1888]: Listen normally on 14 lxc_health [fe80::8051:11ff:fe90:11de%14]:123
Aug 13 00:03:30.954857 ntpd[1888]: 13 Aug 00:03:30 ntpd[1888]: Listen normally on 14 lxc_health [fe80::8051:11ff:fe90:11de%14]:123
Aug 13 00:03:43.567377 systemd[1]: cri-containerd-5fbb3ddb2da45683cde437a328e415b0c65764f6f103d2c574fd9affc9cbc1ef.scope: Deactivated successfully.
Aug 13 00:03:43.567672 systemd[1]: cri-containerd-5fbb3ddb2da45683cde437a328e415b0c65764f6f103d2c574fd9affc9cbc1ef.scope: Consumed 4.117s CPU time, 85.8M memory peak, 43.4M read from disk.
Aug 13 00:03:43.592793 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5fbb3ddb2da45683cde437a328e415b0c65764f6f103d2c574fd9affc9cbc1ef-rootfs.mount: Deactivated successfully.
Aug 13 00:03:43.617945 containerd[1914]: time="2025-08-13T00:03:43.617742306Z" level=info msg="shim disconnected" id=5fbb3ddb2da45683cde437a328e415b0c65764f6f103d2c574fd9affc9cbc1ef namespace=k8s.io
Aug 13 00:03:43.617945 containerd[1914]: time="2025-08-13T00:03:43.617789849Z" level=warning msg="cleaning up after shim disconnected" id=5fbb3ddb2da45683cde437a328e415b0c65764f6f103d2c574fd9affc9cbc1ef namespace=k8s.io
Aug 13 00:03:43.617945 containerd[1914]: time="2025-08-13T00:03:43.617797561Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:03:44.571696 kubelet[3214]: E0813 00:03:44.571622 3214 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ip-172-31-18-46)"
Aug 13 00:03:44.590705 kubelet[3214]: I0813 00:03:44.588661 3214 scope.go:117] "RemoveContainer" containerID="5fbb3ddb2da45683cde437a328e415b0c65764f6f103d2c574fd9affc9cbc1ef"
Aug 13 00:03:44.596750 containerd[1914]: time="2025-08-13T00:03:44.596696992Z" level=info msg="CreateContainer within sandbox \"c1627cdce28c1534ab9654f9909bb66f1a1a4434a94aac657ac786184d0d04b8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Aug 13 00:03:44.622385 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4158235631.mount: Deactivated successfully.
Aug 13 00:03:44.629649 containerd[1914]: time="2025-08-13T00:03:44.629586796Z" level=info msg="CreateContainer within sandbox \"c1627cdce28c1534ab9654f9909bb66f1a1a4434a94aac657ac786184d0d04b8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"e25f619dcea51b0f9e30bcda1e956075636c977f81043ab96c82e69b09b0615f\""
Aug 13 00:03:44.630336 containerd[1914]: time="2025-08-13T00:03:44.630163197Z" level=info msg="StartContainer for \"e25f619dcea51b0f9e30bcda1e956075636c977f81043ab96c82e69b09b0615f\""
Aug 13 00:03:44.663189 systemd[1]: Started cri-containerd-e25f619dcea51b0f9e30bcda1e956075636c977f81043ab96c82e69b09b0615f.scope - libcontainer container e25f619dcea51b0f9e30bcda1e956075636c977f81043ab96c82e69b09b0615f.
Aug 13 00:03:44.714511 containerd[1914]: time="2025-08-13T00:03:44.714464033Z" level=info msg="StartContainer for \"e25f619dcea51b0f9e30bcda1e956075636c977f81043ab96c82e69b09b0615f\" returns successfully"
Aug 13 00:03:48.373068 systemd[1]: cri-containerd-86988cebe81cd1117453fe499901cafe96667ab9a92d93f3a9e52a7286f4c61e.scope: Deactivated successfully.
Aug 13 00:03:48.373561 systemd[1]: cri-containerd-86988cebe81cd1117453fe499901cafe96667ab9a92d93f3a9e52a7286f4c61e.scope: Consumed 2.791s CPU time, 30.6M memory peak, 16.2M read from disk.
Aug 13 00:03:48.399518 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-86988cebe81cd1117453fe499901cafe96667ab9a92d93f3a9e52a7286f4c61e-rootfs.mount: Deactivated successfully.
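When a CRI scope stops, systemd logs a resource accounting summary of the form "cri-containerd-<id>.scope: Consumed 4.117s CPU time, 85.8M memory peak, 43.4M read from disk." A small sketch of extracting those figures from journal lines like the ones above (the pattern is inferred from these samples, not a documented systemd format):

```python
import re

# systemd's cgroup accounting summary for a stopped CRI scope; CPU time may be
# reported in ms or s in the lines above, so both units are handled.
CONSUMED_RE = re.compile(
    r'cri-containerd-([0-9a-f]+)\.scope: Consumed ([\d.]+)(ms|s) CPU time, '
    r'([\d.]+)M memory peak')

def scope_resource_usage(lines):
    """Map each container id (shortened to 12 chars) to CPU seconds and peak memory (M)."""
    usage = {}
    for line in lines:
        m = CONSUMED_RE.search(line)
        if m:
            cid, cpu, unit, mem = m.groups()
            cpu_s = float(cpu) / 1000.0 if unit == 'ms' else float(cpu)
            usage[cid[:12]] = {'cpu_s': cpu_s, 'mem_peak_mb': float(mem)}
    return usage
```

Applied to this section, it would surface the kube-controller-manager container (4.117s CPU, 85.8M peak) and the kube-scheduler container (2.791s CPU, 30.6M peak) as the heaviest scopes before their restarts.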
Aug 13 00:03:48.421450 containerd[1914]: time="2025-08-13T00:03:48.421358134Z" level=info msg="shim disconnected" id=86988cebe81cd1117453fe499901cafe96667ab9a92d93f3a9e52a7286f4c61e namespace=k8s.io
Aug 13 00:03:48.421450 containerd[1914]: time="2025-08-13T00:03:48.421409775Z" level=warning msg="cleaning up after shim disconnected" id=86988cebe81cd1117453fe499901cafe96667ab9a92d93f3a9e52a7286f4c61e namespace=k8s.io
Aug 13 00:03:48.421450 containerd[1914]: time="2025-08-13T00:03:48.421417992Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:03:48.604502 kubelet[3214]: I0813 00:03:48.604472 3214 scope.go:117] "RemoveContainer" containerID="86988cebe81cd1117453fe499901cafe96667ab9a92d93f3a9e52a7286f4c61e"
Aug 13 00:03:48.607401 containerd[1914]: time="2025-08-13T00:03:48.607363295Z" level=info msg="CreateContainer within sandbox \"9c197661ead5d408039a9c7d30f7da59a4c168634d4a403f92396cb9c6125200\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Aug 13 00:03:48.637724 containerd[1914]: time="2025-08-13T00:03:48.637580310Z" level=info msg="CreateContainer within sandbox \"9c197661ead5d408039a9c7d30f7da59a4c168634d4a403f92396cb9c6125200\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"7fbce59dc9624637ed21c37d843e7bdf52a2d5d4ba118c1bc8251c5be5f544f4\""
Aug 13 00:03:48.640030 containerd[1914]: time="2025-08-13T00:03:48.638330758Z" level=info msg="StartContainer for \"7fbce59dc9624637ed21c37d843e7bdf52a2d5d4ba118c1bc8251c5be5f544f4\""
Aug 13 00:03:48.679325 systemd[1]: Started cri-containerd-7fbce59dc9624637ed21c37d843e7bdf52a2d5d4ba118c1bc8251c5be5f544f4.scope - libcontainer container 7fbce59dc9624637ed21c37d843e7bdf52a2d5d4ba118c1bc8251c5be5f544f4.
Aug 13 00:03:48.727031 containerd[1914]: time="2025-08-13T00:03:48.726958605Z" level=info msg="StartContainer for \"7fbce59dc9624637ed21c37d843e7bdf52a2d5d4ba118c1bc8251c5be5f544f4\" returns successfully"
Aug 13 00:03:52.077247 containerd[1914]: time="2025-08-13T00:03:52.077157511Z" level=info msg="StopPodSandbox for \"a629e7e0e78e6a11d7c91272200eae39dab89923ec195c848ed27ce25ad02160\""
Aug 13 00:03:52.077694 containerd[1914]: time="2025-08-13T00:03:52.077641500Z" level=info msg="TearDown network for sandbox \"a629e7e0e78e6a11d7c91272200eae39dab89923ec195c848ed27ce25ad02160\" successfully"
Aug 13 00:03:52.077694 containerd[1914]: time="2025-08-13T00:03:52.077664770Z" level=info msg="StopPodSandbox for \"a629e7e0e78e6a11d7c91272200eae39dab89923ec195c848ed27ce25ad02160\" returns successfully"
Aug 13 00:03:52.078129 containerd[1914]: time="2025-08-13T00:03:52.078086201Z" level=info msg="RemovePodSandbox for \"a629e7e0e78e6a11d7c91272200eae39dab89923ec195c848ed27ce25ad02160\""
Aug 13 00:03:52.078129 containerd[1914]: time="2025-08-13T00:03:52.078115263Z" level=info msg="Forcibly stopping sandbox \"a629e7e0e78e6a11d7c91272200eae39dab89923ec195c848ed27ce25ad02160\""
Aug 13 00:03:52.078260 containerd[1914]: time="2025-08-13T00:03:52.078163674Z" level=info msg="TearDown network for sandbox \"a629e7e0e78e6a11d7c91272200eae39dab89923ec195c848ed27ce25ad02160\" successfully"
Aug 13 00:03:52.085929 containerd[1914]: time="2025-08-13T00:03:52.085754342Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a629e7e0e78e6a11d7c91272200eae39dab89923ec195c848ed27ce25ad02160\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug 13 00:03:52.085929 containerd[1914]: time="2025-08-13T00:03:52.085825849Z" level=info msg="RemovePodSandbox \"a629e7e0e78e6a11d7c91272200eae39dab89923ec195c848ed27ce25ad02160\" returns successfully"
Aug 13 00:03:52.086250 containerd[1914]: time="2025-08-13T00:03:52.086220542Z" level=info msg="StopPodSandbox for \"867ba09f97b830be860f7ef5ef47e0918ee6d373bf575ea83acc18215fd7d459\""
Aug 13 00:03:52.086316 containerd[1914]: time="2025-08-13T00:03:52.086299868Z" level=info msg="TearDown network for sandbox \"867ba09f97b830be860f7ef5ef47e0918ee6d373bf575ea83acc18215fd7d459\" successfully"
Aug 13 00:03:52.086316 containerd[1914]: time="2025-08-13T00:03:52.086313871Z" level=info msg="StopPodSandbox for \"867ba09f97b830be860f7ef5ef47e0918ee6d373bf575ea83acc18215fd7d459\" returns successfully"
Aug 13 00:03:52.086659 containerd[1914]: time="2025-08-13T00:03:52.086616344Z" level=info msg="RemovePodSandbox for \"867ba09f97b830be860f7ef5ef47e0918ee6d373bf575ea83acc18215fd7d459\""
Aug 13 00:03:52.086659 containerd[1914]: time="2025-08-13T00:03:52.086642565Z" level=info msg="Forcibly stopping sandbox \"867ba09f97b830be860f7ef5ef47e0918ee6d373bf575ea83acc18215fd7d459\""
Aug 13 00:03:52.086829 containerd[1914]: time="2025-08-13T00:03:52.086697649Z" level=info msg="TearDown network for sandbox \"867ba09f97b830be860f7ef5ef47e0918ee6d373bf575ea83acc18215fd7d459\" successfully"
Aug 13 00:03:52.091942 containerd[1914]: time="2025-08-13T00:03:52.091885671Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"867ba09f97b830be860f7ef5ef47e0918ee6d373bf575ea83acc18215fd7d459\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug 13 00:03:52.092058 containerd[1914]: time="2025-08-13T00:03:52.091955589Z" level=info msg="RemovePodSandbox \"867ba09f97b830be860f7ef5ef47e0918ee6d373bf575ea83acc18215fd7d459\" returns successfully"
Aug 13 00:03:54.572279 kubelet[3214]: E0813 00:03:54.572222 3214 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-46?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Aug 13 00:04:04.583184 kubelet[3214]: E0813 00:04:04.583126 3214 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-46?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
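The repeated "Failed to update lease" records above follow klog's error-line shape: an E-prefixed header with a timestamp, a PID, and a file:line source location before the quoted message. A sketch of grouping such kubelet errors by source location and message to spot recurring failures (the pattern is inferred from the lines in this log, not klog's formal grammar):

```python
import re

# klog error header as it appears in these journal lines, e.g.
#   kubelet[3214]: E0813 00:03:54.572222 3214 controller.go:195] "Failed to update lease" err="..."
KUBELET_ERR_RE = re.compile(
    r'kubelet\[\d+\]: E\d+ [\d:.]+ +\d+ ([\w.]+:\d+)\] "([^"]+)"')

def count_kubelet_errors(lines):
    """Count kubelet E-level records, keyed by (file:line, message)."""
    counts = {}
    for line in lines:
        m = KUBELET_ERR_RE.search(line)
        if m:
            key = (m.group(1), m.group(2))
            counts[key] = counts.get(key, 0) + 1
    return counts
```

On this section it would show controller.go:195 emitting "Failed to update lease" repeatedly, the pattern that typically precedes a node being marked NotReady when the apiserver stays unreachable.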