Jun 20 18:53:47.917663 kernel: Linux version 6.6.94-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Fri Jun 20 17:12:40 -00 2025 Jun 20 18:53:47.917691 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c5ce7ee72c13e935b8a741ba19830125b417ea1672f46b6a215da9317cee8e17 Jun 20 18:53:47.917704 kernel: BIOS-provided physical RAM map: Jun 20 18:53:47.917712 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jun 20 18:53:47.917718 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable Jun 20 18:53:47.917725 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved Jun 20 18:53:47.917734 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Jun 20 18:53:47.917741 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Jun 20 18:53:47.917748 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable Jun 20 18:53:47.917755 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Jun 20 18:53:47.917765 kernel: NX (Execute Disable) protection: active Jun 20 18:53:47.917773 kernel: APIC: Static calls initialized Jun 20 18:53:47.917780 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable Jun 20 18:53:47.917787 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable Jun 20 18:53:47.917796 kernel: extended physical RAM map: Jun 20 18:53:47.917804 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Jun 20 18:53:47.917815 kernel: reserve setup_data: [mem 
0x0000000000100000-0x00000000768c0017] usable Jun 20 18:53:47.917823 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable Jun 20 18:53:47.917831 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable Jun 20 18:53:47.917839 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved Jun 20 18:53:47.917847 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Jun 20 18:53:47.917854 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Jun 20 18:53:47.917862 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable Jun 20 18:53:47.917870 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Jun 20 18:53:47.917877 kernel: efi: EFI v2.7 by EDK II Jun 20 18:53:47.917885 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518 Jun 20 18:53:47.917896 kernel: secureboot: Secure boot disabled Jun 20 18:53:47.917904 kernel: SMBIOS 2.7 present. 
Jun 20 18:53:47.917912 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Jun 20 18:53:47.917919 kernel: Hypervisor detected: KVM Jun 20 18:53:47.917927 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jun 20 18:53:47.917935 kernel: kvm-clock: using sched offset of 3952668278 cycles Jun 20 18:53:47.917943 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jun 20 18:53:47.917951 kernel: tsc: Detected 2500.004 MHz processor Jun 20 18:53:47.917960 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jun 20 18:53:47.917968 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jun 20 18:53:47.917976 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Jun 20 18:53:47.917987 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jun 20 18:53:47.917995 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jun 20 18:53:47.918004 kernel: Using GB pages for direct mapping Jun 20 18:53:47.918015 kernel: ACPI: Early table checksum verification disabled Jun 20 18:53:47.918024 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON) Jun 20 18:53:47.918032 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013) Jun 20 18:53:47.918043 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Jun 20 18:53:47.918051 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Jun 20 18:53:47.918061 kernel: ACPI: FACS 0x00000000789D0000 000040 Jun 20 18:53:47.918069 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Jun 20 18:53:47.918077 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jun 20 18:53:47.918086 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jun 20 18:53:47.918094 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 
00000001 AMZN 00000001) Jun 20 18:53:47.918103 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Jun 20 18:53:47.918115 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jun 20 18:53:47.918124 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jun 20 18:53:47.918132 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013) Jun 20 18:53:47.918140 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113] Jun 20 18:53:47.918149 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159] Jun 20 18:53:47.918157 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f] Jun 20 18:53:47.918166 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027] Jun 20 18:53:47.918174 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b] Jun 20 18:53:47.918182 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075] Jun 20 18:53:47.918193 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f] Jun 20 18:53:47.918202 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037] Jun 20 18:53:47.918210 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758] Jun 20 18:53:47.918219 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e] Jun 20 18:53:47.918227 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037] Jun 20 18:53:47.919299 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jun 20 18:53:47.919313 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jun 20 18:53:47.919321 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Jun 20 18:53:47.919330 kernel: NUMA: Initialized distance table, cnt=1 Jun 20 18:53:47.919343 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff] Jun 20 18:53:47.919352 kernel: Zone ranges: Jun 20 18:53:47.919361 kernel: DMA [mem 
0x0000000000001000-0x0000000000ffffff] Jun 20 18:53:47.919370 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff] Jun 20 18:53:47.919378 kernel: Normal empty Jun 20 18:53:47.919387 kernel: Movable zone start for each node Jun 20 18:53:47.919395 kernel: Early memory node ranges Jun 20 18:53:47.919404 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jun 20 18:53:47.919412 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff] Jun 20 18:53:47.919423 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff] Jun 20 18:53:47.919431 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff] Jun 20 18:53:47.919440 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jun 20 18:53:47.919448 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jun 20 18:53:47.919457 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Jun 20 18:53:47.919466 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges Jun 20 18:53:47.919474 kernel: ACPI: PM-Timer IO Port: 0xb008 Jun 20 18:53:47.919483 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jun 20 18:53:47.919491 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Jun 20 18:53:47.919502 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jun 20 18:53:47.919511 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jun 20 18:53:47.919520 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jun 20 18:53:47.919528 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jun 20 18:53:47.919537 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jun 20 18:53:47.919545 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jun 20 18:53:47.919554 kernel: TSC deadline timer available Jun 20 18:53:47.919562 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jun 20 18:53:47.919571 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jun 20 18:53:47.919579 kernel: [mem 
0x7ca00000-0xffffffff] available for PCI devices Jun 20 18:53:47.919590 kernel: Booting paravirtualized kernel on KVM Jun 20 18:53:47.919598 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jun 20 18:53:47.919607 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jun 20 18:53:47.919616 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576 Jun 20 18:53:47.919624 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152 Jun 20 18:53:47.919632 kernel: pcpu-alloc: [0] 0 1 Jun 20 18:53:47.919641 kernel: kvm-guest: PV spinlocks enabled Jun 20 18:53:47.919650 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jun 20 18:53:47.919662 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c5ce7ee72c13e935b8a741ba19830125b417ea1672f46b6a215da9317cee8e17 Jun 20 18:53:47.919671 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jun 20 18:53:47.919679 kernel: random: crng init done Jun 20 18:53:47.919688 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 20 18:53:47.919696 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jun 20 18:53:47.919705 kernel: Fallback order for Node 0: 0 Jun 20 18:53:47.919713 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 501318 Jun 20 18:53:47.919721 kernel: Policy zone: DMA32 Jun 20 18:53:47.919732 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 20 18:53:47.919741 kernel: Memory: 1872536K/2037804K available (14336K kernel code, 2295K rwdata, 22872K rodata, 43488K init, 1588K bss, 165012K reserved, 0K cma-reserved) Jun 20 18:53:47.919749 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jun 20 18:53:47.919758 kernel: Kernel/User page tables isolation: enabled Jun 20 18:53:47.919767 kernel: ftrace: allocating 37938 entries in 149 pages Jun 20 18:53:47.919783 kernel: ftrace: allocated 149 pages with 4 groups Jun 20 18:53:47.919795 kernel: Dynamic Preempt: voluntary Jun 20 18:53:47.919804 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 20 18:53:47.919814 kernel: rcu: RCU event tracing is enabled. Jun 20 18:53:47.919823 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jun 20 18:53:47.919832 kernel: Trampoline variant of Tasks RCU enabled. Jun 20 18:53:47.919841 kernel: Rude variant of Tasks RCU enabled. Jun 20 18:53:47.919852 kernel: Tracing variant of Tasks RCU enabled. Jun 20 18:53:47.919861 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 20 18:53:47.919870 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jun 20 18:53:47.919879 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jun 20 18:53:47.919888 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jun 20 18:53:47.919900 kernel: Console: colour dummy device 80x25 Jun 20 18:53:47.919909 kernel: printk: console [tty0] enabled Jun 20 18:53:47.919918 kernel: printk: console [ttyS0] enabled Jun 20 18:53:47.919926 kernel: ACPI: Core revision 20230628 Jun 20 18:53:47.919936 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Jun 20 18:53:47.919945 kernel: APIC: Switch to symmetric I/O mode setup Jun 20 18:53:47.919954 kernel: x2apic enabled Jun 20 18:53:47.919963 kernel: APIC: Switched APIC routing to: physical x2apic Jun 20 18:53:47.919972 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns Jun 20 18:53:47.919983 kernel: Calibrating delay loop (skipped) preset value.. 5000.00 BogoMIPS (lpj=2500004) Jun 20 18:53:47.919992 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jun 20 18:53:47.920001 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Jun 20 18:53:47.920010 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jun 20 18:53:47.920019 kernel: Spectre V2 : Mitigation: Retpolines Jun 20 18:53:47.920027 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jun 20 18:53:47.920036 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Jun 20 18:53:47.920045 kernel: RETBleed: Vulnerable Jun 20 18:53:47.920054 kernel: Speculative Store Bypass: Vulnerable Jun 20 18:53:47.920062 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Jun 20 18:53:47.920073 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jun 20 18:53:47.920082 kernel: GDS: Unknown: Dependent on hypervisor status Jun 20 18:53:47.920091 kernel: ITS: Mitigation: Aligned branch/return thunks Jun 20 18:53:47.920100 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jun 20 18:53:47.920108 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jun 20 18:53:47.920117 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jun 20 18:53:47.921822 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Jun 20 18:53:47.921835 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Jun 20 18:53:47.921845 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jun 20 18:53:47.921855 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jun 20 18:53:47.921864 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jun 20 18:53:47.921878 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Jun 20 18:53:47.921887 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jun 20 18:53:47.921896 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Jun 20 18:53:47.921905 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Jun 20 18:53:47.921914 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Jun 20 18:53:47.921922 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Jun 20 18:53:47.921931 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Jun 20 18:53:47.921940 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Jun 20 18:53:47.921949 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' 
format. Jun 20 18:53:47.921958 kernel: Freeing SMP alternatives memory: 32K Jun 20 18:53:47.921966 kernel: pid_max: default: 32768 minimum: 301 Jun 20 18:53:47.921975 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jun 20 18:53:47.921987 kernel: landlock: Up and running. Jun 20 18:53:47.921995 kernel: SELinux: Initializing. Jun 20 18:53:47.922004 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jun 20 18:53:47.922013 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jun 20 18:53:47.922022 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Jun 20 18:53:47.922031 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jun 20 18:53:47.922040 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jun 20 18:53:47.922050 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jun 20 18:53:47.922059 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jun 20 18:53:47.922068 kernel: signal: max sigframe size: 3632 Jun 20 18:53:47.922079 kernel: rcu: Hierarchical SRCU implementation. Jun 20 18:53:47.922089 kernel: rcu: Max phase no-delay instances is 400. Jun 20 18:53:47.922098 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jun 20 18:53:47.922107 kernel: smp: Bringing up secondary CPUs ... Jun 20 18:53:47.922116 kernel: smpboot: x86: Booting SMP configuration: Jun 20 18:53:47.922125 kernel: .... node #0, CPUs: #1 Jun 20 18:53:47.922135 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jun 20 18:53:47.922145 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. 
See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jun 20 18:53:47.922157 kernel: smp: Brought up 1 node, 2 CPUs Jun 20 18:53:47.922166 kernel: smpboot: Max logical packages: 1 Jun 20 18:53:47.922175 kernel: smpboot: Total of 2 processors activated (10000.01 BogoMIPS) Jun 20 18:53:47.922184 kernel: devtmpfs: initialized Jun 20 18:53:47.922193 kernel: x86/mm: Memory block size: 128MB Jun 20 18:53:47.922202 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes) Jun 20 18:53:47.922211 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 20 18:53:47.922220 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jun 20 18:53:47.922229 kernel: pinctrl core: initialized pinctrl subsystem Jun 20 18:53:47.923310 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 20 18:53:47.923325 kernel: audit: initializing netlink subsys (disabled) Jun 20 18:53:47.923335 kernel: audit: type=2000 audit(1750445626.988:1): state=initialized audit_enabled=0 res=1 Jun 20 18:53:47.923344 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 20 18:53:47.923353 kernel: thermal_sys: Registered thermal governor 'user_space' Jun 20 18:53:47.923363 kernel: cpuidle: using governor menu Jun 20 18:53:47.923372 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 20 18:53:47.923382 kernel: dca service started, version 1.12.1 Jun 20 18:53:47.923391 kernel: PCI: Using configuration type 1 for base access Jun 20 18:53:47.923404 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jun 20 18:53:47.923414 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 20 18:53:47.923423 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jun 20 18:53:47.923432 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 20 18:53:47.923441 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jun 20 18:53:47.923450 kernel: ACPI: Added _OSI(Module Device) Jun 20 18:53:47.923459 kernel: ACPI: Added _OSI(Processor Device) Jun 20 18:53:47.923468 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 20 18:53:47.923478 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jun 20 18:53:47.923489 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jun 20 18:53:47.923498 kernel: ACPI: Interpreter enabled Jun 20 18:53:47.923507 kernel: ACPI: PM: (supports S0 S5) Jun 20 18:53:47.923516 kernel: ACPI: Using IOAPIC for interrupt routing Jun 20 18:53:47.923525 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jun 20 18:53:47.923534 kernel: PCI: Using E820 reservations for host bridge windows Jun 20 18:53:47.923543 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jun 20 18:53:47.923552 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jun 20 18:53:47.923722 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jun 20 18:53:47.923828 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jun 20 18:53:47.923922 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jun 20 18:53:47.923933 kernel: acpiphp: Slot [3] registered Jun 20 18:53:47.923943 kernel: acpiphp: Slot [4] registered Jun 20 18:53:47.923952 kernel: acpiphp: Slot [5] registered Jun 20 18:53:47.923961 kernel: acpiphp: Slot [6] registered Jun 20 18:53:47.923970 kernel: acpiphp: Slot [7] registered Jun 20 18:53:47.923979 kernel: 
acpiphp: Slot [8] registered Jun 20 18:53:47.923990 kernel: acpiphp: Slot [9] registered Jun 20 18:53:47.924000 kernel: acpiphp: Slot [10] registered Jun 20 18:53:47.924008 kernel: acpiphp: Slot [11] registered Jun 20 18:53:47.924017 kernel: acpiphp: Slot [12] registered Jun 20 18:53:47.924026 kernel: acpiphp: Slot [13] registered Jun 20 18:53:47.924035 kernel: acpiphp: Slot [14] registered Jun 20 18:53:47.924044 kernel: acpiphp: Slot [15] registered Jun 20 18:53:47.924053 kernel: acpiphp: Slot [16] registered Jun 20 18:53:47.924062 kernel: acpiphp: Slot [17] registered Jun 20 18:53:47.924073 kernel: acpiphp: Slot [18] registered Jun 20 18:53:47.924082 kernel: acpiphp: Slot [19] registered Jun 20 18:53:47.924091 kernel: acpiphp: Slot [20] registered Jun 20 18:53:47.924100 kernel: acpiphp: Slot [21] registered Jun 20 18:53:47.924109 kernel: acpiphp: Slot [22] registered Jun 20 18:53:47.924117 kernel: acpiphp: Slot [23] registered Jun 20 18:53:47.924126 kernel: acpiphp: Slot [24] registered Jun 20 18:53:47.924135 kernel: acpiphp: Slot [25] registered Jun 20 18:53:47.924144 kernel: acpiphp: Slot [26] registered Jun 20 18:53:47.924153 kernel: acpiphp: Slot [27] registered Jun 20 18:53:47.924164 kernel: acpiphp: Slot [28] registered Jun 20 18:53:47.924172 kernel: acpiphp: Slot [29] registered Jun 20 18:53:47.924181 kernel: acpiphp: Slot [30] registered Jun 20 18:53:47.924190 kernel: acpiphp: Slot [31] registered Jun 20 18:53:47.924199 kernel: PCI host bridge to bus 0000:00 Jun 20 18:53:47.926378 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jun 20 18:53:47.926486 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jun 20 18:53:47.926573 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jun 20 18:53:47.926665 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jun 20 18:53:47.926748 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window] Jun 20 
18:53:47.926830 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jun 20 18:53:47.926941 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jun 20 18:53:47.927051 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jun 20 18:53:47.927153 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Jun 20 18:53:47.927267 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jun 20 18:53:47.927363 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Jun 20 18:53:47.927457 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Jun 20 18:53:47.927553 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Jun 20 18:53:47.927648 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Jun 20 18:53:47.927742 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Jun 20 18:53:47.927835 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Jun 20 18:53:47.927943 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Jun 20 18:53:47.928039 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref] Jun 20 18:53:47.928135 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Jun 20 18:53:47.928229 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb Jun 20 18:53:47.930564 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jun 20 18:53:47.930680 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Jun 20 18:53:47.930778 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff] Jun 20 18:53:47.930889 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Jun 20 18:53:47.930987 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff] Jun 20 18:53:47.930999 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jun 20 18:53:47.931009 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jun 20 18:53:47.931019 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jun 20 18:53:47.931028 
kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jun 20 18:53:47.931037 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jun 20 18:53:47.931050 kernel: iommu: Default domain type: Translated Jun 20 18:53:47.931059 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jun 20 18:53:47.931069 kernel: efivars: Registered efivars operations Jun 20 18:53:47.931078 kernel: PCI: Using ACPI for IRQ routing Jun 20 18:53:47.931087 kernel: PCI: pci_cache_line_size set to 64 bytes Jun 20 18:53:47.931096 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff] Jun 20 18:53:47.931105 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff] Jun 20 18:53:47.931113 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff] Jun 20 18:53:47.931206 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Jun 20 18:53:47.931338 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Jun 20 18:53:47.931435 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jun 20 18:53:47.931447 kernel: vgaarb: loaded Jun 20 18:53:47.931456 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Jun 20 18:53:47.931465 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Jun 20 18:53:47.931475 kernel: clocksource: Switched to clocksource kvm-clock Jun 20 18:53:47.931484 kernel: VFS: Disk quotas dquot_6.6.0 Jun 20 18:53:47.931493 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 20 18:53:47.931502 kernel: pnp: PnP ACPI init Jun 20 18:53:47.931516 kernel: pnp: PnP ACPI: found 5 devices Jun 20 18:53:47.931525 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jun 20 18:53:47.931535 kernel: NET: Registered PF_INET protocol family Jun 20 18:53:47.931544 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jun 20 18:53:47.931553 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jun 20 
18:53:47.931562 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 20 18:53:47.931571 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jun 20 18:53:47.931580 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jun 20 18:53:47.931589 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jun 20 18:53:47.931601 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jun 20 18:53:47.931610 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jun 20 18:53:47.931620 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 20 18:53:47.931629 kernel: NET: Registered PF_XDP protocol family Jun 20 18:53:47.931721 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jun 20 18:53:47.931807 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jun 20 18:53:47.931891 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jun 20 18:53:47.931976 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jun 20 18:53:47.932065 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window] Jun 20 18:53:47.932165 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jun 20 18:53:47.932177 kernel: PCI: CLS 0 bytes, default 64 Jun 20 18:53:47.932187 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jun 20 18:53:47.932196 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns Jun 20 18:53:47.932205 kernel: clocksource: Switched to clocksource tsc Jun 20 18:53:47.932215 kernel: Initialise system trusted keyrings Jun 20 18:53:47.932224 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jun 20 18:53:47.932272 kernel: Key type asymmetric registered Jun 20 18:53:47.932285 kernel: Asymmetric key parser 'x509' registered Jun 20 18:53:47.932294 kernel: Block layer SCSI generic (bsg) 
driver version 0.4 loaded (major 251) Jun 20 18:53:47.932303 kernel: io scheduler mq-deadline registered Jun 20 18:53:47.932312 kernel: io scheduler kyber registered Jun 20 18:53:47.932321 kernel: io scheduler bfq registered Jun 20 18:53:47.932330 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jun 20 18:53:47.932339 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 20 18:53:47.932349 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jun 20 18:53:47.932358 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jun 20 18:53:47.932370 kernel: i8042: Warning: Keylock active Jun 20 18:53:47.932380 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jun 20 18:53:47.932389 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jun 20 18:53:47.932498 kernel: rtc_cmos 00:00: RTC can wake from S4 Jun 20 18:53:47.932593 kernel: rtc_cmos 00:00: registered as rtc0 Jun 20 18:53:47.932681 kernel: rtc_cmos 00:00: setting system clock to 2025-06-20T18:53:47 UTC (1750445627) Jun 20 18:53:47.932768 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jun 20 18:53:47.932783 kernel: intel_pstate: CPU model not supported Jun 20 18:53:47.932793 kernel: efifb: probing for efifb Jun 20 18:53:47.932802 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k Jun 20 18:53:47.932812 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Jun 20 18:53:47.932839 kernel: efifb: scrolling: redraw Jun 20 18:53:47.932851 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jun 20 18:53:47.932860 kernel: Console: switching to colour frame buffer device 100x37 Jun 20 18:53:47.932870 kernel: fb0: EFI VGA frame buffer device Jun 20 18:53:47.932880 kernel: pstore: Using crash dump compression: deflate Jun 20 18:53:47.932892 kernel: pstore: Registered efi_pstore as persistent store backend Jun 20 18:53:47.932901 kernel: NET: Registered PF_INET6 protocol family Jun 20 18:53:47.932911 kernel: Segment 
Routing with IPv6 Jun 20 18:53:47.932920 kernel: In-situ OAM (IOAM) with IPv6 Jun 20 18:53:47.932930 kernel: NET: Registered PF_PACKET protocol family Jun 20 18:53:47.932939 kernel: Key type dns_resolver registered Jun 20 18:53:47.932949 kernel: IPI shorthand broadcast: enabled Jun 20 18:53:47.932958 kernel: sched_clock: Marking stable (464003914, 144072966)->(694733741, -86656861) Jun 20 18:53:47.932968 kernel: registered taskstats version 1 Jun 20 18:53:47.932977 kernel: Loading compiled-in X.509 certificates Jun 20 18:53:47.932990 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.94-flatcar: 583832681762bbd3c2cbcca308896cbba88c4497' Jun 20 18:53:47.933002 kernel: Key type .fscrypt registered Jun 20 18:53:47.933011 kernel: Key type fscrypt-provisioning registered Jun 20 18:53:47.933021 kernel: ima: No TPM chip found, activating TPM-bypass! Jun 20 18:53:47.933031 kernel: ima: Allocated hash algorithm: sha1 Jun 20 18:53:47.933040 kernel: ima: No architecture policies found Jun 20 18:53:47.933049 kernel: clk: Disabling unused clocks Jun 20 18:53:47.933059 kernel: Freeing unused kernel image (initmem) memory: 43488K Jun 20 18:53:47.933071 kernel: Write protecting the kernel read-only data: 38912k Jun 20 18:53:47.933080 kernel: Freeing unused kernel image (rodata/data gap) memory: 1704K Jun 20 18:53:47.933090 kernel: Run /init as init process Jun 20 18:53:47.933099 kernel: with arguments: Jun 20 18:53:47.933108 kernel: /init Jun 20 18:53:47.933118 kernel: with environment: Jun 20 18:53:47.933127 kernel: HOME=/ Jun 20 18:53:47.933136 kernel: TERM=linux Jun 20 18:53:47.933146 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 20 18:53:47.933159 systemd[1]: Successfully made /usr/ read-only. 
Jun 20 18:53:47.933172 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jun 20 18:53:47.933183 systemd[1]: Detected virtualization amazon.
Jun 20 18:53:47.933193 systemd[1]: Detected architecture x86-64.
Jun 20 18:53:47.933203 systemd[1]: Running in initrd.
Jun 20 18:53:47.933215 systemd[1]: No hostname configured, using default hostname.
Jun 20 18:53:47.933226 systemd[1]: Hostname set to .
Jun 20 18:53:47.933247 systemd[1]: Initializing machine ID from VM UUID.
Jun 20 18:53:47.933257 systemd[1]: Queued start job for default target initrd.target.
Jun 20 18:53:47.933266 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 20 18:53:47.933276 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 20 18:53:47.933288 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jun 20 18:53:47.933301 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 20 18:53:47.933311 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jun 20 18:53:47.933322 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jun 20 18:53:47.933334 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jun 20 18:53:47.933344 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jun 20 18:53:47.933354 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 20 18:53:47.933365 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 20 18:53:47.933377 systemd[1]: Reached target paths.target - Path Units.
Jun 20 18:53:47.933387 systemd[1]: Reached target slices.target - Slice Units.
Jun 20 18:53:47.933397 systemd[1]: Reached target swap.target - Swaps.
Jun 20 18:53:47.933407 systemd[1]: Reached target timers.target - Timer Units.
Jun 20 18:53:47.933417 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jun 20 18:53:47.933427 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 20 18:53:47.933437 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jun 20 18:53:47.933447 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jun 20 18:53:47.933458 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 20 18:53:47.933470 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 20 18:53:47.933480 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 20 18:53:47.933491 systemd[1]: Reached target sockets.target - Socket Units.
Jun 20 18:53:47.933501 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jun 20 18:53:47.933511 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 20 18:53:47.933521 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jun 20 18:53:47.933531 systemd[1]: Starting systemd-fsck-usr.service...
Jun 20 18:53:47.933541 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 20 18:53:47.933554 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 20 18:53:47.933564 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 18:53:47.933574 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jun 20 18:53:47.933585 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 20 18:53:47.933596 systemd[1]: Finished systemd-fsck-usr.service.
Jun 20 18:53:47.933633 systemd-journald[179]: Collecting audit messages is disabled.
Jun 20 18:53:47.933658 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jun 20 18:53:47.933668 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 18:53:47.933679 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 20 18:53:47.933692 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 20 18:53:47.933703 systemd-journald[179]: Journal started
Jun 20 18:53:47.933726 systemd-journald[179]: Runtime Journal (/run/log/journal/ec244650b2849d4d65070ad0612c44f6) is 4.7M, max 38.1M, 33.4M free.
Jun 20 18:53:47.918285 systemd-modules-load[180]: Inserted module 'overlay'
Jun 20 18:53:47.938967 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 20 18:53:47.946443 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 20 18:53:47.950906 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jun 20 18:53:47.957893 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 20 18:53:47.969794 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jun 20 18:53:47.969821 kernel: Bridge firewalling registered
Jun 20 18:53:47.961531 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 20 18:53:47.967454 systemd-modules-load[180]: Inserted module 'br_netfilter'
Jun 20 18:53:47.972810 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jun 20 18:53:47.976901 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 20 18:53:47.983450 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 20 18:53:47.985021 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jun 20 18:53:47.988589 dracut-cmdline[208]: dracut-dracut-053
Jun 20 18:53:47.991999 dracut-cmdline[208]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c5ce7ee72c13e935b8a741ba19830125b417ea1672f46b6a215da9317cee8e17
Jun 20 18:53:47.993762 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 20 18:53:48.002100 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jun 20 18:53:48.008354 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jun 20 18:53:48.032734 systemd-resolved[229]: Positive Trust Anchors:
Jun 20 18:53:48.033372 systemd-resolved[229]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jun 20 18:53:48.033413 systemd-resolved[229]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jun 20 18:53:48.036286 systemd-resolved[229]: Defaulting to hostname 'linux'.
Jun 20 18:53:48.037265 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jun 20 18:53:48.038835 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jun 20 18:53:48.072274 kernel: SCSI subsystem initialized
Jun 20 18:53:48.082263 kernel: Loading iSCSI transport class v2.0-870.
Jun 20 18:53:48.094262 kernel: iscsi: registered transport (tcp)
Jun 20 18:53:48.119464 kernel: iscsi: registered transport (qla4xxx)
Jun 20 18:53:48.119545 kernel: QLogic iSCSI HBA Driver
Jun 20 18:53:48.159073 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jun 20 18:53:48.163479 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jun 20 18:53:48.190444 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jun 20 18:53:48.190549 kernel: device-mapper: uevent: version 1.0.3
Jun 20 18:53:48.193260 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jun 20 18:53:48.234266 kernel: raid6: avx512x4 gen() 18045 MB/s
Jun 20 18:53:48.252266 kernel: raid6: avx512x2 gen() 18253 MB/s
Jun 20 18:53:48.270261 kernel: raid6: avx512x1 gen() 18257 MB/s
Jun 20 18:53:48.288258 kernel: raid6: avx2x4 gen() 18251 MB/s
Jun 20 18:53:48.306258 kernel: raid6: avx2x2 gen() 18231 MB/s
Jun 20 18:53:48.324589 kernel: raid6: avx2x1 gen() 13924 MB/s
Jun 20 18:53:48.324659 kernel: raid6: using algorithm avx512x1 gen() 18257 MB/s
Jun 20 18:53:48.343562 kernel: raid6: .... xor() 21590 MB/s, rmw enabled
Jun 20 18:53:48.343626 kernel: raid6: using avx512x2 recovery algorithm
Jun 20 18:53:48.365274 kernel: xor: automatically using best checksumming function avx
Jun 20 18:53:48.521272 kernel: Btrfs loaded, zoned=no, fsverity=no
Jun 20 18:53:48.531821 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jun 20 18:53:48.539446 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 20 18:53:48.553667 systemd-udevd[400]: Using default interface naming scheme 'v255'.
Jun 20 18:53:48.559745 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 20 18:53:48.569248 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jun 20 18:53:48.586840 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation
Jun 20 18:53:48.617516 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jun 20 18:53:48.622499 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jun 20 18:53:48.677535 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 20 18:53:48.686521 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jun 20 18:53:48.713370 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jun 20 18:53:48.716018 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jun 20 18:53:48.717382 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 20 18:53:48.718519 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jun 20 18:53:48.725466 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jun 20 18:53:48.751514 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jun 20 18:53:48.781256 kernel: cryptd: max_cpu_qlen set to 1000
Jun 20 18:53:48.798261 kernel: AVX2 version of gcm_enc/dec engaged.
Jun 20 18:53:48.803258 kernel: AES CTR mode by8 optimization enabled
Jun 20 18:53:48.807493 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jun 20 18:53:48.808501 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 20 18:53:48.809290 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 20 18:53:48.809852 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 20 18:53:48.810033 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 18:53:48.812440 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 18:53:48.822410 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 18:53:48.828608 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jun 20 18:53:48.828874 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jun 20 18:53:48.833261 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Jun 20 18:53:48.839036 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 20 18:53:48.845368 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:4f:09:d8:39:e7
Jun 20 18:53:48.839164 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 18:53:48.845193 (udev-worker)[449]: Network interface NamePolicy= disabled on kernel command line.
Jun 20 18:53:48.853943 kernel: nvme nvme0: pci function 0000:00:04.0
Jun 20 18:53:48.854263 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jun 20 18:53:48.867473 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 18:53:48.878270 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jun 20 18:53:48.886573 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jun 20 18:53:48.886643 kernel: GPT:9289727 != 16777215
Jun 20 18:53:48.886663 kernel: GPT:Alternate GPT header not at the end of the disk.
Jun 20 18:53:48.886681 kernel: GPT:9289727 != 16777215
Jun 20 18:53:48.886699 kernel: GPT: Use GNU Parted to correct GPT errors.
Jun 20 18:53:48.886717 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jun 20 18:53:48.894672 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 18:53:48.903445 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 20 18:53:48.924813 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 20 18:53:48.991276 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (445)
Jun 20 18:53:48.998266 kernel: BTRFS: device fsid 5ff786f3-14e2-4689-ad32-ff903cf13f91 devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (455)
Jun 20 18:53:49.013784 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jun 20 18:53:49.049205 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jun 20 18:53:49.049771 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jun 20 18:53:49.077812 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jun 20 18:53:49.089421 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jun 20 18:53:49.095431 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jun 20 18:53:49.102286 disk-uuid[632]: Primary Header is updated.
Jun 20 18:53:49.102286 disk-uuid[632]: Secondary Entries is updated.
Jun 20 18:53:49.102286 disk-uuid[632]: Secondary Header is updated.
Jun 20 18:53:49.108261 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jun 20 18:53:49.115262 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jun 20 18:53:50.117359 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jun 20 18:53:50.118635 disk-uuid[633]: The operation has completed successfully.
Jun 20 18:53:50.250583 systemd[1]: disk-uuid.service: Deactivated successfully.
Jun 20 18:53:50.250726 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jun 20 18:53:50.307518 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jun 20 18:53:50.311330 sh[891]: Success
Jun 20 18:53:50.333254 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jun 20 18:53:50.436956 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jun 20 18:53:50.444345 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jun 20 18:53:50.445477 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jun 20 18:53:50.475168 kernel: BTRFS info (device dm-0): first mount of filesystem 5ff786f3-14e2-4689-ad32-ff903cf13f91
Jun 20 18:53:50.475260 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jun 20 18:53:50.475276 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jun 20 18:53:50.477511 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jun 20 18:53:50.479790 kernel: BTRFS info (device dm-0): using free space tree
Jun 20 18:53:50.600261 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jun 20 18:53:50.613810 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jun 20 18:53:50.615730 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jun 20 18:53:50.626546 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jun 20 18:53:50.630431 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jun 20 18:53:50.659800 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 0d4ae0d2-6537-4cbd-8c37-7b929dcf3a9f
Jun 20 18:53:50.659871 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jun 20 18:53:50.662230 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jun 20 18:53:50.669255 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jun 20 18:53:50.676282 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 0d4ae0d2-6537-4cbd-8c37-7b929dcf3a9f
Jun 20 18:53:50.678662 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jun 20 18:53:50.686520 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jun 20 18:53:50.722688 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jun 20 18:53:50.728454 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jun 20 18:53:50.756073 systemd-networkd[1080]: lo: Link UP
Jun 20 18:53:50.756086 systemd-networkd[1080]: lo: Gained carrier
Jun 20 18:53:50.757949 systemd-networkd[1080]: Enumeration completed
Jun 20 18:53:50.758460 systemd-networkd[1080]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 18:53:50.758467 systemd-networkd[1080]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jun 20 18:53:50.759447 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jun 20 18:53:50.760864 systemd[1]: Reached target network.target - Network.
Jun 20 18:53:50.762722 systemd-networkd[1080]: eth0: Link UP
Jun 20 18:53:50.762727 systemd-networkd[1080]: eth0: Gained carrier
Jun 20 18:53:50.762742 systemd-networkd[1080]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 18:53:50.778689 systemd-networkd[1080]: eth0: DHCPv4 address 172.31.22.222/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jun 20 18:53:51.089713 ignition[1025]: Ignition 2.20.0
Jun 20 18:53:51.089724 ignition[1025]: Stage: fetch-offline
Jun 20 18:53:51.089916 ignition[1025]: no configs at "/usr/lib/ignition/base.d"
Jun 20 18:53:51.089925 ignition[1025]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jun 20 18:53:51.090671 ignition[1025]: Ignition finished successfully
Jun 20 18:53:51.092125 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jun 20 18:53:51.099480 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jun 20 18:53:51.113088 ignition[1090]: Ignition 2.20.0
Jun 20 18:53:51.113102 ignition[1090]: Stage: fetch
Jun 20 18:53:51.113572 ignition[1090]: no configs at "/usr/lib/ignition/base.d"
Jun 20 18:53:51.113586 ignition[1090]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jun 20 18:53:51.113719 ignition[1090]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jun 20 18:53:51.133712 ignition[1090]: PUT result: OK
Jun 20 18:53:51.144547 ignition[1090]: parsed url from cmdline: ""
Jun 20 18:53:51.144558 ignition[1090]: no config URL provided
Jun 20 18:53:51.144567 ignition[1090]: reading system config file "/usr/lib/ignition/user.ign"
Jun 20 18:53:51.144580 ignition[1090]: no config at "/usr/lib/ignition/user.ign"
Jun 20 18:53:51.144600 ignition[1090]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jun 20 18:53:51.145908 ignition[1090]: PUT result: OK
Jun 20 18:53:51.145970 ignition[1090]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jun 20 18:53:51.147975 ignition[1090]: GET result: OK
Jun 20 18:53:51.148083 ignition[1090]: parsing config with SHA512: 91652b56660a1c377a1517c88a5718b9eca82abbaf3bf6122a45c48f349661eb031f04386e7273db46dc86953463d85b12eabe6d7bbb51203a7000ac5c4e7218
Jun 20 18:53:51.152286 unknown[1090]: fetched base config from "system"
Jun 20 18:53:51.152624 ignition[1090]: fetch: fetch complete
Jun 20 18:53:51.152296 unknown[1090]: fetched base config from "system"
Jun 20 18:53:51.152628 ignition[1090]: fetch: fetch passed
Jun 20 18:53:51.152301 unknown[1090]: fetched user config from "aws"
Jun 20 18:53:51.152667 ignition[1090]: Ignition finished successfully
Jun 20 18:53:51.154824 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jun 20 18:53:51.160445 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jun 20 18:53:51.174745 ignition[1096]: Ignition 2.20.0
Jun 20 18:53:51.174757 ignition[1096]: Stage: kargs
Jun 20 18:53:51.175078 ignition[1096]: no configs at "/usr/lib/ignition/base.d"
Jun 20 18:53:51.175088 ignition[1096]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jun 20 18:53:51.175179 ignition[1096]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jun 20 18:53:51.176207 ignition[1096]: PUT result: OK
Jun 20 18:53:51.179069 ignition[1096]: kargs: kargs passed
Jun 20 18:53:51.179127 ignition[1096]: Ignition finished successfully
Jun 20 18:53:51.180152 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jun 20 18:53:51.187525 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jun 20 18:53:51.201845 ignition[1102]: Ignition 2.20.0
Jun 20 18:53:51.201860 ignition[1102]: Stage: disks
Jun 20 18:53:51.202313 ignition[1102]: no configs at "/usr/lib/ignition/base.d"
Jun 20 18:53:51.202327 ignition[1102]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jun 20 18:53:51.202533 ignition[1102]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jun 20 18:53:51.203697 ignition[1102]: PUT result: OK
Jun 20 18:53:51.206738 ignition[1102]: disks: disks passed
Jun 20 18:53:51.206805 ignition[1102]: Ignition finished successfully
Jun 20 18:53:51.207781 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jun 20 18:53:51.208585 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jun 20 18:53:51.208958 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jun 20 18:53:51.209581 systemd[1]: Reached target local-fs.target - Local File Systems.
Jun 20 18:53:51.210224 systemd[1]: Reached target sysinit.target - System Initialization.
Jun 20 18:53:51.210796 systemd[1]: Reached target basic.target - Basic System.
Jun 20 18:53:51.215413 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jun 20 18:53:51.253013 systemd-fsck[1110]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jun 20 18:53:51.255913 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jun 20 18:53:51.261375 systemd[1]: Mounting sysroot.mount - /sysroot...
Jun 20 18:53:51.361278 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 943f8432-3dc9-4e22-b9bd-c29bf6a1f5e1 r/w with ordered data mode. Quota mode: none.
Jun 20 18:53:51.361988 systemd[1]: Mounted sysroot.mount - /sysroot.
Jun 20 18:53:51.363437 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jun 20 18:53:51.379397 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jun 20 18:53:51.382297 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jun 20 18:53:51.383520 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jun 20 18:53:51.383595 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jun 20 18:53:51.383629 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jun 20 18:53:51.391513 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jun 20 18:53:51.393714 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jun 20 18:53:51.404849 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1129)
Jun 20 18:53:51.404913 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 0d4ae0d2-6537-4cbd-8c37-7b929dcf3a9f
Jun 20 18:53:51.408165 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jun 20 18:53:51.408219 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jun 20 18:53:51.416261 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jun 20 18:53:51.418475 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jun 20 18:53:51.701779 initrd-setup-root[1153]: cut: /sysroot/etc/passwd: No such file or directory
Jun 20 18:53:51.719392 initrd-setup-root[1160]: cut: /sysroot/etc/group: No such file or directory
Jun 20 18:53:51.723839 initrd-setup-root[1167]: cut: /sysroot/etc/shadow: No such file or directory
Jun 20 18:53:51.740760 initrd-setup-root[1174]: cut: /sysroot/etc/gshadow: No such file or directory
Jun 20 18:53:52.004096 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jun 20 18:53:52.009388 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jun 20 18:53:52.013192 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jun 20 18:53:52.017106 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jun 20 18:53:52.019387 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 0d4ae0d2-6537-4cbd-8c37-7b929dcf3a9f
Jun 20 18:53:52.044607 ignition[1241]: INFO : Ignition 2.20.0
Jun 20 18:53:52.046387 ignition[1241]: INFO : Stage: mount
Jun 20 18:53:52.046387 ignition[1241]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 20 18:53:52.046387 ignition[1241]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jun 20 18:53:52.046387 ignition[1241]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jun 20 18:53:52.050327 ignition[1241]: INFO : PUT result: OK
Jun 20 18:53:52.052692 ignition[1241]: INFO : mount: mount passed
Jun 20 18:53:52.054427 ignition[1241]: INFO : Ignition finished successfully
Jun 20 18:53:52.055731 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jun 20 18:53:52.060349 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jun 20 18:53:52.064276 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jun 20 18:53:52.081585 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jun 20 18:53:52.103271 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1254) Jun 20 18:53:52.107450 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 0d4ae0d2-6537-4cbd-8c37-7b929dcf3a9f Jun 20 18:53:52.107526 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jun 20 18:53:52.107541 kernel: BTRFS info (device nvme0n1p6): using free space tree Jun 20 18:53:52.114792 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jun 20 18:53:52.114485 systemd-networkd[1080]: eth0: Gained IPv6LL Jun 20 18:53:52.116353 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 20 18:53:52.133301 ignition[1271]: INFO : Ignition 2.20.0 Jun 20 18:53:52.133301 ignition[1271]: INFO : Stage: files Jun 20 18:53:52.135007 ignition[1271]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 18:53:52.135007 ignition[1271]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 20 18:53:52.135007 ignition[1271]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 20 18:53:52.135966 ignition[1271]: INFO : PUT result: OK Jun 20 18:53:52.138148 ignition[1271]: DEBUG : files: compiled without relabeling support, skipping Jun 20 18:53:52.160723 ignition[1271]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 20 18:53:52.160723 ignition[1271]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 20 18:53:52.200322 ignition[1271]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 20 18:53:52.201057 ignition[1271]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 20 18:53:52.201057 ignition[1271]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 20 18:53:52.200714 unknown[1271]: wrote ssh authorized keys file for user: core Jun 20 18:53:52.202885 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: 
op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jun 20 18:53:52.202885 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jun 20 18:53:52.291127 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jun 20 18:53:52.505768 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jun 20 18:53:52.505768 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jun 20 18:53:52.507927 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jun 20 18:54:00.186022 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jun 20 18:54:00.337337 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jun 20 18:54:00.338784 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jun 20 18:54:00.338784 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jun 20 18:54:00.338784 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jun 20 18:54:00.338784 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jun 20 18:54:00.338784 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jun 20 18:54:00.338784 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jun 20 18:54:00.338784 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jun 20 18:54:00.338784 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jun 20 18:54:00.338784 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jun 20 18:54:00.338784 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jun 20 18:54:00.338784 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jun 20 18:54:00.338784 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jun 20 18:54:00.338784 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jun 20 18:54:00.338784 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Jun 20 18:54:01.038682 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jun 20 18:54:01.942756 ignition[1271]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jun 20 18:54:01.953444 ignition[1271]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jun 20 18:54:01.964379 ignition[1271]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jun 20 18:54:01.964379 ignition[1271]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jun 20 18:54:01.964379 ignition[1271]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jun 20 18:54:01.964379 ignition[1271]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jun 20 18:54:01.964379 ignition[1271]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jun 20 18:54:01.964379 ignition[1271]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jun 20 18:54:01.964379 ignition[1271]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jun 20 18:54:01.964379 ignition[1271]: INFO : files: files passed
Jun 20 18:54:02.012325 ignition[1271]: INFO : Ignition finished successfully
Jun 20 18:54:01.968368 systemd[1]: Finished ignition-files.service - Ignition (files).
Jun 20 18:54:02.001577 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jun 20 18:54:02.025786 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jun 20 18:54:02.042758 systemd[1]: ignition-quench.service: Deactivated successfully.
Jun 20 18:54:02.043458 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
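The Ignition "files" stage above carries enough timing information to measure each download: every op has a `GET ... attempt #1` entry and a `[finished] writing file` entry with journald-style timestamps. A minimal sketch of recovering a duration from two such timestamps (the sample values are copied from the op(3) helm entries above; the parsing format assumes journald's short-precise style, which omits the year, so both timestamps must fall in the same year):

```python
from datetime import datetime

# Timestamps copied from the op(3) entries in the log above.
start = "Jun 20 18:53:52.202885"  # GET ... attempt #1
end = "Jun 20 18:53:52.505768"    # [finished] writing file

# journald short timestamp: month, day, time with microseconds, no year.
FMT = "%b %d %H:%M:%S.%f"

def elapsed(a: str, b: str) -> float:
    """Seconds between two journal timestamps (same year assumed)."""
    return (datetime.strptime(b, FMT) - datetime.strptime(a, FMT)).total_seconds()

print(f"helm fetch+write took {elapsed(start, end):.3f}s")
```

The same helper applied to the op(4) cilium entries (18:53:52.507927 to 18:54:00.337337) shows that download dominating the stage at roughly 7.8 seconds.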
Jun 20 18:54:02.074616 initrd-setup-root-after-ignition[1299]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jun 20 18:54:02.074616 initrd-setup-root-after-ignition[1299]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jun 20 18:54:02.082428 initrd-setup-root-after-ignition[1303]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jun 20 18:54:02.089796 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 20 18:54:02.091139 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jun 20 18:54:02.102566 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jun 20 18:54:02.189403 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jun 20 18:54:02.189551 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jun 20 18:54:02.190948 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jun 20 18:54:02.192694 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jun 20 18:54:02.193696 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jun 20 18:54:02.199590 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jun 20 18:54:02.225366 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jun 20 18:54:02.231570 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jun 20 18:54:02.249927 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jun 20 18:54:02.251115 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 20 18:54:02.252850 systemd[1]: Stopped target timers.target - Timer Units.
Jun 20 18:54:02.253884 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jun 20 18:54:02.255722 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jun 20 18:54:02.256927 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jun 20 18:54:02.258134 systemd[1]: Stopped target basic.target - Basic System.
Jun 20 18:54:02.262694 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jun 20 18:54:02.263834 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jun 20 18:54:02.264701 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jun 20 18:54:02.265620 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jun 20 18:54:02.266605 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jun 20 18:54:02.267595 systemd[1]: Stopped target sysinit.target - System Initialization.
Jun 20 18:54:02.270939 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jun 20 18:54:02.271792 systemd[1]: Stopped target swap.target - Swaps.
Jun 20 18:54:02.272580 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jun 20 18:54:02.272799 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jun 20 18:54:02.274042 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jun 20 18:54:02.275062 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 20 18:54:02.275848 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jun 20 18:54:02.276604 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 20 18:54:02.277086 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jun 20 18:54:02.277287 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jun 20 18:54:02.278944 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jun 20 18:54:02.282907 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 20 18:54:02.287411 systemd[1]: ignition-files.service: Deactivated successfully.
Jun 20 18:54:02.287647 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jun 20 18:54:02.304199 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jun 20 18:54:02.305824 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jun 20 18:54:02.306077 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 20 18:54:02.319185 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jun 20 18:54:02.320847 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jun 20 18:54:02.321527 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 20 18:54:02.323695 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jun 20 18:54:02.323897 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jun 20 18:54:02.334306 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jun 20 18:54:02.335202 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jun 20 18:54:02.338992 ignition[1323]: INFO : Ignition 2.20.0
Jun 20 18:54:02.338992 ignition[1323]: INFO : Stage: umount
Jun 20 18:54:02.341567 ignition[1323]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 20 18:54:02.341567 ignition[1323]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jun 20 18:54:02.343040 ignition[1323]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jun 20 18:54:02.343998 ignition[1323]: INFO : PUT result: OK
Jun 20 18:54:02.348659 ignition[1323]: INFO : umount: umount passed
Jun 20 18:54:02.348659 ignition[1323]: INFO : Ignition finished successfully
Jun 20 18:54:02.349263 systemd[1]: ignition-mount.service: Deactivated successfully.
Jun 20 18:54:02.349408 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jun 20 18:54:02.350830 systemd[1]: ignition-disks.service: Deactivated successfully.
Jun 20 18:54:02.350964 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jun 20 18:54:02.351936 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jun 20 18:54:02.352005 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jun 20 18:54:02.353391 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jun 20 18:54:02.353458 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jun 20 18:54:02.354857 systemd[1]: Stopped target network.target - Network.
Jun 20 18:54:02.355305 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jun 20 18:54:02.355380 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jun 20 18:54:02.355864 systemd[1]: Stopped target paths.target - Path Units.
Jun 20 18:54:02.358349 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jun 20 18:54:02.362397 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 20 18:54:02.362835 systemd[1]: Stopped target slices.target - Slice Units.
Jun 20 18:54:02.363742 systemd[1]: Stopped target sockets.target - Socket Units.
Jun 20 18:54:02.364699 systemd[1]: iscsid.socket: Deactivated successfully.
Jun 20 18:54:02.364773 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jun 20 18:54:02.365448 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jun 20 18:54:02.365503 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 20 18:54:02.366114 systemd[1]: ignition-setup.service: Deactivated successfully.
Jun 20 18:54:02.366192 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jun 20 18:54:02.368484 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jun 20 18:54:02.368564 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jun 20 18:54:02.369342 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jun 20 18:54:02.370040 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jun 20 18:54:02.372474 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jun 20 18:54:02.373599 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jun 20 18:54:02.373732 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jun 20 18:54:02.375907 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jun 20 18:54:02.376050 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jun 20 18:54:02.379706 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jun 20 18:54:02.380007 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jun 20 18:54:02.380157 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jun 20 18:54:02.382534 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jun 20 18:54:02.384365 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jun 20 18:54:02.384438 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jun 20 18:54:02.385205 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jun 20 18:54:02.385345 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jun 20 18:54:02.390507 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jun 20 18:54:02.391339 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jun 20 18:54:02.391434 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jun 20 18:54:02.392802 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jun 20 18:54:02.392872 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jun 20 18:54:02.394493 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jun 20 18:54:02.394564 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jun 20 18:54:02.395449 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jun 20 18:54:02.395517 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jun 20 18:54:02.396301 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 20 18:54:02.398741 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jun 20 18:54:02.398840 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jun 20 18:54:02.406222 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jun 20 18:54:02.406532 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 20 18:54:02.410018 systemd[1]: network-cleanup.service: Deactivated successfully.
Jun 20 18:54:02.410130 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jun 20 18:54:02.411555 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jun 20 18:54:02.411636 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jun 20 18:54:02.412532 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jun 20 18:54:02.412580 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 20 18:54:02.413330 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jun 20 18:54:02.413408 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jun 20 18:54:02.414563 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jun 20 18:54:02.414632 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jun 20 18:54:02.415713 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jun 20 18:54:02.415781 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 20 18:54:02.423516 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jun 20 18:54:02.424250 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jun 20 18:54:02.424337 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 20 18:54:02.429445 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 20 18:54:02.429529 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 18:54:02.434871 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jun 20 18:54:02.434979 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jun 20 18:54:02.436356 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jun 20 18:54:02.436505 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jun 20 18:54:02.437824 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jun 20 18:54:02.447580 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jun 20 18:54:02.457610 systemd[1]: Switching root.
Jun 20 18:54:02.502898 systemd-journald[179]: Journal stopped
Jun 20 18:54:04.767499 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
Jun 20 18:54:04.767595 kernel: SELinux: policy capability network_peer_controls=1
Jun 20 18:54:04.767619 kernel: SELinux: policy capability open_perms=1
Jun 20 18:54:04.767640 kernel: SELinux: policy capability extended_socket_class=1
Jun 20 18:54:04.767659 kernel: SELinux: policy capability always_check_network=0
Jun 20 18:54:04.767682 kernel: SELinux: policy capability cgroup_seclabel=1
Jun 20 18:54:04.767708 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jun 20 18:54:04.767727 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jun 20 18:54:04.767747 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jun 20 18:54:04.767767 kernel: audit: type=1403 audit(1750445643.276:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jun 20 18:54:04.767789 systemd[1]: Successfully loaded SELinux policy in 68.248ms.
Jun 20 18:54:04.767821 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.128ms.
Jun 20 18:54:04.767844 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jun 20 18:54:04.767866 systemd[1]: Detected virtualization amazon.
Jun 20 18:54:04.767886 systemd[1]: Detected architecture x86-64.
Jun 20 18:54:04.767914 systemd[1]: Detected first boot.
Jun 20 18:54:04.767934 systemd[1]: Initializing machine ID from VM UUID.
Jun 20 18:54:04.767955 zram_generator::config[1367]: No configuration found.
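The long parenthesized string in the "systemd 256.8 running in system mode" line above encodes compile-time options as `+NAME` (enabled) and `-NAME` (disabled) tokens. A small sketch of splitting it into the two sets (the feature string is copied verbatim from the log; the helper name is our own):

```python
# Feature string copied from the "systemd 256.8 running in system mode" line above.
features = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT "
            "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN "
            "+IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK "
            "+PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ "
            "+ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE")

def split_features(s: str) -> tuple[set, set]:
    """Return (enabled, disabled) feature-name sets from a systemd feature string."""
    tokens = s.split()
    enabled = {t[1:] for t in tokens if t.startswith("+")}
    disabled = {t[1:] for t in tokens if t.startswith("-")}
    return enabled, disabled

on, off = split_features(features)
print("SELINUX" in on, "APPARMOR" in off)  # → True True
```

This makes checks like "was this build compiled with SELinux but without AppArmor?" one-liners, which matches the SELinux policy load visible in the kernel lines above.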
Jun 20 18:54:04.767982 kernel: Guest personality initialized and is inactive
Jun 20 18:54:04.768002 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jun 20 18:54:04.768021 kernel: Initialized host personality
Jun 20 18:54:04.768041 kernel: NET: Registered PF_VSOCK protocol family
Jun 20 18:54:04.768061 systemd[1]: Populated /etc with preset unit settings.
Jun 20 18:54:04.768087 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jun 20 18:54:04.768108 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jun 20 18:54:04.768126 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jun 20 18:54:04.768145 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jun 20 18:54:04.768164 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jun 20 18:54:04.768183 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jun 20 18:54:04.768202 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jun 20 18:54:04.768222 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jun 20 18:54:04.768280 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jun 20 18:54:04.768304 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jun 20 18:54:04.768323 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jun 20 18:54:04.768342 systemd[1]: Created slice user.slice - User and Session Slice.
Jun 20 18:54:04.768361 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 20 18:54:04.768381 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 20 18:54:04.768400 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jun 20 18:54:04.768420 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jun 20 18:54:04.768439 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jun 20 18:54:04.768461 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 20 18:54:04.768480 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jun 20 18:54:04.768500 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 20 18:54:04.768518 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jun 20 18:54:04.768537 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jun 20 18:54:04.768556 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jun 20 18:54:04.768575 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jun 20 18:54:04.768594 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 20 18:54:04.768616 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jun 20 18:54:04.768636 systemd[1]: Reached target slices.target - Slice Units.
Jun 20 18:54:04.768656 systemd[1]: Reached target swap.target - Swaps.
Jun 20 18:54:04.768677 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jun 20 18:54:04.768695 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jun 20 18:54:04.768714 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jun 20 18:54:04.768734 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 20 18:54:04.768752 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 20 18:54:04.768770 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 20 18:54:04.768792 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jun 20 18:54:04.768811 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jun 20 18:54:04.768830 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jun 20 18:54:04.768853 systemd[1]: Mounting media.mount - External Media Directory...
Jun 20 18:54:04.768872 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 18:54:04.768891 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jun 20 18:54:04.768911 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jun 20 18:54:04.768934 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jun 20 18:54:04.768953 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jun 20 18:54:04.768976 systemd[1]: Reached target machines.target - Containers.
Jun 20 18:54:04.768995 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jun 20 18:54:04.769014 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 20 18:54:04.769031 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 20 18:54:04.769049 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jun 20 18:54:04.769069 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 20 18:54:04.769092 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jun 20 18:54:04.769114 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 20 18:54:04.769139 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jun 20 18:54:04.769160 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 20 18:54:04.769182 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jun 20 18:54:04.769204 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jun 20 18:54:04.769225 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jun 20 18:54:04.771302 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jun 20 18:54:04.771331 systemd[1]: Stopped systemd-fsck-usr.service.
Jun 20 18:54:04.771351 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jun 20 18:54:04.771378 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 20 18:54:04.771398 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 20 18:54:04.771420 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jun 20 18:54:04.771440 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jun 20 18:54:04.771461 kernel: loop: module loaded
Jun 20 18:54:04.771481 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jun 20 18:54:04.771500 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jun 20 18:54:04.771526 systemd[1]: verity-setup.service: Deactivated successfully.
Jun 20 18:54:04.771546 systemd[1]: Stopped verity-setup.service.
Jun 20 18:54:04.771567 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 18:54:04.771588 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jun 20 18:54:04.771613 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jun 20 18:54:04.771633 systemd[1]: Mounted media.mount - External Media Directory.
Jun 20 18:54:04.771655 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jun 20 18:54:04.771722 systemd-journald[1450]: Collecting audit messages is disabled.
Jun 20 18:54:04.771766 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jun 20 18:54:04.771788 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jun 20 18:54:04.771809 systemd-journald[1450]: Journal started
Jun 20 18:54:04.771854 systemd-journald[1450]: Runtime Journal (/run/log/journal/ec244650b2849d4d65070ad0612c44f6) is 4.7M, max 38.1M, 33.4M free.
Jun 20 18:54:04.447349 systemd[1]: Queued start job for default target multi-user.target.
Jun 20 18:54:04.458754 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jun 20 18:54:04.459208 systemd[1]: systemd-journald.service: Deactivated successfully.
Jun 20 18:54:04.799125 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 20 18:54:04.799211 kernel: fuse: init (API version 7.39)
Jun 20 18:54:04.781573 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 20 18:54:04.782686 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jun 20 18:54:04.784338 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jun 20 18:54:04.785442 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 20 18:54:04.787316 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 20 18:54:04.788593 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 20 18:54:04.789435 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 20 18:54:04.790513 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 20 18:54:04.792358 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 20 18:54:04.793418 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jun 20 18:54:04.793626 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jun 20 18:54:04.802287 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 20 18:54:04.814827 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jun 20 18:54:04.816368 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jun 20 18:54:04.823907 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jun 20 18:54:04.839498 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jun 20 18:54:04.844263 kernel: ACPI: bus type drm_connector registered
Jun 20 18:54:04.845364 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jun 20 18:54:04.846055 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jun 20 18:54:04.846106 systemd[1]: Reached target local-fs.target - Local File Systems.
Jun 20 18:54:04.853033 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jun 20 18:54:04.870518 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jun 20 18:54:04.878532 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jun 20 18:54:04.879374 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 20 18:54:04.882469 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jun 20 18:54:04.891018 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jun 20 18:54:04.891697 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jun 20 18:54:04.894509 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jun 20 18:54:04.895293 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jun 20 18:54:04.901501 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 20 18:54:04.909468 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jun 20 18:54:04.916931 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jun 20 18:54:04.918312 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jun 20 18:54:04.921010 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jun 20 18:54:04.923281 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jun 20 18:54:04.924621 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jun 20 18:54:04.927085 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jun 20 18:54:04.970278 kernel: loop0: detected capacity change from 0 to 229808
Jun 20 18:54:04.966585 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jun 20 18:54:04.977026 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jun 20 18:54:04.987034 systemd-journald[1450]: Time spent on flushing to /var/log/journal/ec244650b2849d4d65070ad0612c44f6 is 79.737ms for 1013 entries.
Jun 20 18:54:04.987034 systemd-journald[1450]: System Journal (/var/log/journal/ec244650b2849d4d65070ad0612c44f6) is 8M, max 195.6M, 187.6M free.
Jun 20 18:54:05.105940 systemd-journald[1450]: Received client request to flush runtime journal.
Jun 20 18:54:05.106082 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jun 20 18:54:05.106120 kernel: loop1: detected capacity change from 0 to 138176
Jun 20 18:54:04.980744 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 20 18:54:04.996055 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jun 20 18:54:04.999478 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jun 20 18:54:05.001860 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jun 20 18:54:05.012804 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jun 20 18:54:05.036996 udevadm[1510]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jun 20 18:54:05.082014 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 20 18:54:05.111960 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jun 20 18:54:05.120343 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jun 20 18:54:05.127430 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 20 18:54:05.152917 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jun 20 18:54:05.185735 systemd-tmpfiles[1522]: ACLs are not supported, ignoring.
Jun 20 18:54:05.185764 systemd-tmpfiles[1522]: ACLs are not supported, ignoring.
Jun 20 18:54:05.197308 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 20 18:54:05.280379 kernel: loop2: detected capacity change from 0 to 147912
Jun 20 18:54:05.435635 kernel: loop3: detected capacity change from 0 to 62832
Jun 20 18:54:05.460876 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jun 20 18:54:05.573274 kernel: loop4: detected capacity change from 0 to 229808
Jun 20 18:54:05.612302 kernel: loop5: detected capacity change from 0 to 138176
Jun 20 18:54:05.649754 kernel: loop6: detected capacity change from 0 to 147912
Jun 20 18:54:05.688284 kernel: loop7: detected capacity change from 0 to 62832
Jun 20 18:54:05.699975 (sd-merge)[1529]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Jun 20 18:54:05.700748 (sd-merge)[1529]: Merged extensions into '/usr'.
Jun 20 18:54:05.707209 systemd[1]: Reload requested from client PID 1500 ('systemd-sysext') (unit systemd-sysext.service)...
Jun 20 18:54:05.707421 systemd[1]: Reloading...
Jun 20 18:54:05.843567 zram_generator::config[1557]: No configuration found.
Jun 20 18:54:06.039914 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 20 18:54:06.124398 systemd[1]: Reloading finished in 416 ms.
Jun 20 18:54:06.145957 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jun 20 18:54:06.146924 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jun 20 18:54:06.155055 systemd[1]: Starting ensure-sysext.service...
Jun 20 18:54:06.158485 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jun 20 18:54:06.168536 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 20 18:54:06.189337 systemd[1]: Reload requested from client PID 1609 ('systemctl') (unit ensure-sysext.service)...
Jun 20 18:54:06.189366 systemd[1]: Reloading...
Jun 20 18:54:06.227025 systemd-tmpfiles[1610]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jun 20 18:54:06.229550 systemd-udevd[1611]: Using default interface naming scheme 'v255'.
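Each sysext image that sd-merge attaches shows up as a kernel "detected capacity change" line for a loop device, which is why the loop0-loop3 sizes from earlier repeat here as loop4-loop7. A sketch of scraping those lines into a device-to-size map (the sample lines are copied from the kernel output above; the capacity unit is the kernel's raw count and is left uninterpreted here):

```python
import re

# Kernel lines copied from the sysext merge above.
log = """\
loop4: detected capacity change from 0 to 229808
loop5: detected capacity change from 0 to 138176
loop6: detected capacity change from 0 to 147912
loop7: detected capacity change from 0 to 62832"""

# Device name and new capacity from each "detected capacity change" line.
PAT = re.compile(r"(loop\d+): detected capacity change from 0 to (\d+)")

sizes = {dev: int(n) for dev, n in PAT.findall(log)}
print(sizes["loop4"])  # → 229808
```

With the map in hand, correlating loop devices against the four extensions named in the `(sd-merge)` line ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami') becomes a matter of matching counts and attach order, though the log alone does not prove which image landed on which device.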
Jun 20 18:54:06.230104 systemd-tmpfiles[1610]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jun 20 18:54:06.232221 systemd-tmpfiles[1610]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jun 20 18:54:06.232738 systemd-tmpfiles[1610]: ACLs are not supported, ignoring.
Jun 20 18:54:06.232841 systemd-tmpfiles[1610]: ACLs are not supported, ignoring.
Jun 20 18:54:06.238198 systemd-tmpfiles[1610]: Detected autofs mount point /boot during canonicalization of boot.
Jun 20 18:54:06.238212 systemd-tmpfiles[1610]: Skipping /boot
Jun 20 18:54:06.263170 systemd-tmpfiles[1610]: Detected autofs mount point /boot during canonicalization of boot.
Jun 20 18:54:06.263195 systemd-tmpfiles[1610]: Skipping /boot
Jun 20 18:54:06.331141 zram_generator::config[1640]: No configuration found.
Jun 20 18:54:06.468748 (udev-worker)[1646]: Network interface NamePolicy= disabled on kernel command line.
Jun 20 18:54:06.598290 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jun 20 18:54:06.607706 kernel: ACPI: button: Power Button [PWRF]
Jun 20 18:54:06.607786 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Jun 20 18:54:06.609060 kernel: ACPI: button: Sleep Button [SLPF]
Jun 20 18:54:06.617262 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Jun 20 18:54:06.651322 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 20 18:54:06.686320 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5
Jun 20 18:54:06.738275 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1664)
Jun 20 18:54:06.806949 ldconfig[1492]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jun 20 18:54:06.860699 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jun 20 18:54:06.862689 systemd[1]: Reloading finished in 672 ms.
Jun 20 18:54:06.878380 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 20 18:54:06.879791 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jun 20 18:54:06.903452 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jun 20 18:54:06.906437 kernel: mousedev: PS/2 mouse device common for all mice
Jun 20 18:54:06.966109 systemd[1]: Finished ensure-sysext.service.
Jun 20 18:54:06.988195 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jun 20 18:54:07.005874 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jun 20 18:54:07.006703 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 18:54:07.011557 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jun 20 18:54:07.016454 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jun 20 18:54:07.019601 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 20 18:54:07.024050 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jun 20 18:54:07.028494 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 20 18:54:07.031892 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jun 20 18:54:07.051296 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 20 18:54:07.058352 lvm[1807]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jun 20 18:54:07.056583 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 20 18:54:07.057462 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 20 18:54:07.062465 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jun 20 18:54:07.063737 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jun 20 18:54:07.074461 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jun 20 18:54:07.083414 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jun 20 18:54:07.093498 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jun 20 18:54:07.094787 systemd[1]: Reached target time-set.target - System Time Set.
Jun 20 18:54:07.099528 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jun 20 18:54:07.102720 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 18:54:07.104305 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 18:54:07.107790 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 20 18:54:07.108053 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 20 18:54:07.109172 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jun 20 18:54:07.109434 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jun 20 18:54:07.112627 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 20 18:54:07.112876 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 20 18:54:07.116580 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 20 18:54:07.116811 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 20 18:54:07.124749 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jun 20 18:54:07.124848 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jun 20 18:54:07.143435 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jun 20 18:54:07.178035 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jun 20 18:54:07.180330 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jun 20 18:54:07.184554 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jun 20 18:54:07.193812 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jun 20 18:54:07.195554 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 20 18:54:07.204767 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jun 20 18:54:07.205488 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jun 20 18:54:07.209316 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jun 20 18:54:07.218494 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jun 20 18:54:07.239275 lvm[1845]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jun 20 18:54:07.242809 augenrules[1851]: No rules
Jun 20 18:54:07.248185 systemd[1]: audit-rules.service: Deactivated successfully.
Jun 20 18:54:07.249379 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jun 20 18:54:07.255168 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jun 20 18:54:07.271604 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jun 20 18:54:07.287299 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jun 20 18:54:07.303319 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 18:54:07.374974 systemd-networkd[1820]: lo: Link UP
Jun 20 18:54:07.375456 systemd-networkd[1820]: lo: Gained carrier
Jun 20 18:54:07.377601 systemd-networkd[1820]: Enumeration completed
Jun 20 18:54:07.378057 systemd-networkd[1820]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 18:54:07.378063 systemd-networkd[1820]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jun 20 18:54:07.378399 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jun 20 18:54:07.385679 systemd-networkd[1820]: eth0: Link UP
Jun 20 18:54:07.385858 systemd-networkd[1820]: eth0: Gained carrier
Jun 20 18:54:07.385890 systemd-networkd[1820]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 18:54:07.386864 systemd-resolved[1821]: Positive Trust Anchors:
Jun 20 18:54:07.386876 systemd-resolved[1821]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jun 20 18:54:07.386938 systemd-resolved[1821]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jun 20 18:54:07.389574 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jun 20 18:54:07.392877 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jun 20 18:54:07.398545 systemd-networkd[1820]: eth0: DHCPv4 address 172.31.22.222/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jun 20 18:54:07.408747 systemd-resolved[1821]: Defaulting to hostname 'linux'.
Jun 20 18:54:07.410742 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jun 20 18:54:07.412384 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jun 20 18:54:07.412906 systemd[1]: Reached target network.target - Network.
Jun 20 18:54:07.413350 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jun 20 18:54:07.413757 systemd[1]: Reached target sysinit.target - System Initialization.
Jun 20 18:54:07.414317 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jun 20 18:54:07.414743 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jun 20 18:54:07.415375 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jun 20 18:54:07.415874 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jun 20 18:54:07.416274 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jun 20 18:54:07.416639 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jun 20 18:54:07.416684 systemd[1]: Reached target paths.target - Path Units.
Jun 20 18:54:07.417047 systemd[1]: Reached target timers.target - Timer Units.
Jun 20 18:54:07.418642 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jun 20 18:54:07.420486 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jun 20 18:54:07.423538 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jun 20 18:54:07.424072 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jun 20 18:54:07.424604 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jun 20 18:54:07.427077 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jun 20 18:54:07.427921 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jun 20 18:54:07.429015 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jun 20 18:54:07.429520 systemd[1]: Reached target sockets.target - Socket Units.
Jun 20 18:54:07.429895 systemd[1]: Reached target basic.target - Basic System.
Jun 20 18:54:07.430360 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jun 20 18:54:07.430399 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jun 20 18:54:07.431458 systemd[1]: Starting containerd.service - containerd container runtime...
Jun 20 18:54:07.435467 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jun 20 18:54:07.441488 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jun 20 18:54:07.446185 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jun 20 18:54:07.450500 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jun 20 18:54:07.452125 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jun 20 18:54:07.455553 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jun 20 18:54:07.458493 systemd[1]: Started ntpd.service - Network Time Service.
Jun 20 18:54:07.465524 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jun 20 18:54:07.477189 jq[1878]: false
Jun 20 18:54:07.476643 systemd[1]: Starting setup-oem.service - Setup OEM...
Jun 20 18:54:07.487551 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jun 20 18:54:07.514415 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jun 20 18:54:07.528469 systemd[1]: Starting systemd-logind.service - User Login Management...
Jun 20 18:54:07.532289 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jun 20 18:54:07.533086 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jun 20 18:54:07.540463 systemd[1]: Starting update-engine.service - Update Engine...
Jun 20 18:54:07.543950 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jun 20 18:54:07.558226 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jun 20 18:54:07.558830 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jun 20 18:54:07.569697 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jun 20 18:54:07.570501 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jun 20 18:54:07.597690 jq[1893]: true
Jun 20 18:54:07.604878 dbus-daemon[1877]: [system] SELinux support is enabled
Jun 20 18:54:07.613164 extend-filesystems[1879]: Found loop4
Jun 20 18:54:07.613164 extend-filesystems[1879]: Found loop5
Jun 20 18:54:07.613164 extend-filesystems[1879]: Found loop6
Jun 20 18:54:07.613164 extend-filesystems[1879]: Found loop7
Jun 20 18:54:07.613164 extend-filesystems[1879]: Found nvme0n1
Jun 20 18:54:07.613164 extend-filesystems[1879]: Found nvme0n1p1
Jun 20 18:54:07.613164 extend-filesystems[1879]: Found nvme0n1p2
Jun 20 18:54:07.613164 extend-filesystems[1879]: Found nvme0n1p3
Jun 20 18:54:07.613164 extend-filesystems[1879]: Found usr
Jun 20 18:54:07.613164 extend-filesystems[1879]: Found nvme0n1p4
Jun 20 18:54:07.613164 extend-filesystems[1879]: Found nvme0n1p6
Jun 20 18:54:07.613164 extend-filesystems[1879]: Found nvme0n1p7
Jun 20 18:54:07.613164 extend-filesystems[1879]: Found nvme0n1p9
Jun 20 18:54:07.613164 extend-filesystems[1879]: Checking size of /dev/nvme0n1p9
Jun 20 18:54:07.612111 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jun 20 18:54:07.677099 tar[1900]: linux-amd64/LICENSE
Jun 20 18:54:07.677099 tar[1900]: linux-amd64/helm
Jun 20 18:54:07.687506 ntpd[1881]: 20 Jun 18:54:07 ntpd[1881]: ntpd 4.2.8p17@1.4004-o Fri Jun 20 16:33:02 UTC 2025 (1): Starting
Jun 20 18:54:07.687506 ntpd[1881]: 20 Jun 18:54:07 ntpd[1881]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jun 20 18:54:07.687506 ntpd[1881]: 20 Jun 18:54:07 ntpd[1881]: ----------------------------------------------------
Jun 20 18:54:07.687506 ntpd[1881]: 20 Jun 18:54:07 ntpd[1881]: ntp-4 is maintained by Network Time Foundation,
Jun 20 18:54:07.687506 ntpd[1881]: 20 Jun 18:54:07 ntpd[1881]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jun 20 18:54:07.687506 ntpd[1881]: 20 Jun 18:54:07 ntpd[1881]: corporation. Support and training for ntp-4 are
Jun 20 18:54:07.687506 ntpd[1881]: 20 Jun 18:54:07 ntpd[1881]: available at https://www.nwtime.org/support
Jun 20 18:54:07.687506 ntpd[1881]: 20 Jun 18:54:07 ntpd[1881]: ----------------------------------------------------
Jun 20 18:54:07.687506 ntpd[1881]: 20 Jun 18:54:07 ntpd[1881]: proto: precision = 0.064 usec (-24)
Jun 20 18:54:07.687506 ntpd[1881]: 20 Jun 18:54:07 ntpd[1881]: basedate set to 2025-06-08
Jun 20 18:54:07.687506 ntpd[1881]: 20 Jun 18:54:07 ntpd[1881]: gps base set to 2025-06-08 (week 2370)
Jun 20 18:54:07.687506 ntpd[1881]: 20 Jun 18:54:07 ntpd[1881]: Listen and drop on 0 v6wildcard [::]:123
Jun 20 18:54:07.687506 ntpd[1881]: 20 Jun 18:54:07 ntpd[1881]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jun 20 18:54:07.687506 ntpd[1881]: 20 Jun 18:54:07 ntpd[1881]: Listen normally on 2 lo 127.0.0.1:123
Jun 20 18:54:07.687506 ntpd[1881]: 20 Jun 18:54:07 ntpd[1881]: Listen normally on 3 eth0 172.31.22.222:123
Jun 20 18:54:07.687506 ntpd[1881]: 20 Jun 18:54:07 ntpd[1881]: Listen normally on 4 lo [::1]:123
Jun 20 18:54:07.687506 ntpd[1881]: 20 Jun 18:54:07 ntpd[1881]: bind(21) AF_INET6 fe80::44f:9ff:fed8:39e7%2#123 flags 0x11 failed: Cannot assign requested address
Jun 20 18:54:07.687506 ntpd[1881]: 20 Jun 18:54:07 ntpd[1881]: unable to create socket on eth0 (5) for fe80::44f:9ff:fed8:39e7%2#123
Jun 20 18:54:07.687506 ntpd[1881]: 20 Jun 18:54:07 ntpd[1881]: failed to init interface for address fe80::44f:9ff:fed8:39e7%2
Jun 20 18:54:07.687506 ntpd[1881]: 20 Jun 18:54:07 ntpd[1881]: Listening on routing socket on fd #21 for interface updates
Jun 20 18:54:07.615142 dbus-daemon[1877]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1820 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jun 20 18:54:07.619750 systemd[1]: motdgen.service: Deactivated successfully.
Jun 20 18:54:07.628975 dbus-daemon[1877]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jun 20 18:54:07.621081 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jun 20 18:54:07.642377 ntpd[1881]: ntpd 4.2.8p17@1.4004-o Fri Jun 20 16:33:02 UTC 2025 (1): Starting
Jun 20 18:54:07.627686 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jun 20 18:54:07.642404 ntpd[1881]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jun 20 18:54:07.627734 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jun 20 18:54:07.642415 ntpd[1881]: ----------------------------------------------------
Jun 20 18:54:07.695030 jq[1912]: true
Jun 20 18:54:07.628515 (ntainerd)[1909]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jun 20 18:54:07.642426 ntpd[1881]: ntp-4 is maintained by Network Time Foundation,
Jun 20 18:54:07.629928 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jun 20 18:54:07.642436 ntpd[1881]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jun 20 18:54:07.629956 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jun 20 18:54:07.642446 ntpd[1881]: corporation. Support and training for ntp-4 are
Jun 20 18:54:07.708948 ntpd[1881]: 20 Jun 18:54:07 ntpd[1881]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jun 20 18:54:07.708948 ntpd[1881]: 20 Jun 18:54:07 ntpd[1881]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jun 20 18:54:07.656220 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jun 20 18:54:07.642457 ntpd[1881]: available at https://www.nwtime.org/support
Jun 20 18:54:07.642466 ntpd[1881]: ----------------------------------------------------
Jun 20 18:54:07.657777 ntpd[1881]: proto: precision = 0.064 usec (-24)
Jun 20 18:54:07.658110 ntpd[1881]: basedate set to 2025-06-08
Jun 20 18:54:07.658126 ntpd[1881]: gps base set to 2025-06-08 (week 2370)
Jun 20 18:54:07.671089 ntpd[1881]: Listen and drop on 0 v6wildcard [::]:123
Jun 20 18:54:07.671155 ntpd[1881]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jun 20 18:54:07.672507 ntpd[1881]: Listen normally on 2 lo 127.0.0.1:123
Jun 20 18:54:07.672553 ntpd[1881]: Listen normally on 3 eth0 172.31.22.222:123
Jun 20 18:54:07.672596 ntpd[1881]: Listen normally on 4 lo [::1]:123
Jun 20 18:54:07.672647 ntpd[1881]: bind(21) AF_INET6 fe80::44f:9ff:fed8:39e7%2#123 flags 0x11 failed: Cannot assign requested address
Jun 20 18:54:07.672670 ntpd[1881]: unable to create socket on eth0 (5) for fe80::44f:9ff:fed8:39e7%2#123
Jun 20 18:54:07.672686 ntpd[1881]: failed to init interface for address fe80::44f:9ff:fed8:39e7%2
Jun 20 18:54:07.672719 ntpd[1881]: Listening on routing socket on fd #21 for interface updates
Jun 20 18:54:07.708327 ntpd[1881]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jun 20 18:54:07.708374 ntpd[1881]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jun 20 18:54:07.723347 systemd[1]: Finished setup-oem.service - Setup OEM.
Jun 20 18:54:07.731046 update_engine[1891]: I20250620 18:54:07.730937 1891 main.cc:92] Flatcar Update Engine starting
Jun 20 18:54:07.743516 extend-filesystems[1879]: Resized partition /dev/nvme0n1p9
Jun 20 18:54:07.750280 systemd[1]: Started update-engine.service - Update Engine.
Jun 20 18:54:07.758765 update_engine[1891]: I20250620 18:54:07.758284 1891 update_check_scheduler.cc:74] Next update check in 8m2s
Jun 20 18:54:07.761266 extend-filesystems[1930]: resize2fs 1.47.1 (20-May-2024)
Jun 20 18:54:07.763176 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jun 20 18:54:07.782274 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Jun 20 18:54:07.877558 coreos-metadata[1876]: Jun 20 18:54:07.877 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jun 20 18:54:07.893712 coreos-metadata[1876]: Jun 20 18:54:07.892 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Jun 20 18:54:07.896153 coreos-metadata[1876]: Jun 20 18:54:07.895 INFO Fetch successful
Jun 20 18:54:07.896153 coreos-metadata[1876]: Jun 20 18:54:07.895 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Jun 20 18:54:07.898418 coreos-metadata[1876]: Jun 20 18:54:07.897 INFO Fetch successful
Jun 20 18:54:07.906558 coreos-metadata[1876]: Jun 20 18:54:07.905 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Jun 20 18:54:07.907563 coreos-metadata[1876]: Jun 20 18:54:07.907 INFO Fetch successful
Jun 20 18:54:07.908687 coreos-metadata[1876]: Jun 20 18:54:07.907 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Jun 20 18:54:07.908687 coreos-metadata[1876]: Jun 20 18:54:07.908 INFO Fetch successful
Jun 20 18:54:07.908687 coreos-metadata[1876]: Jun 20 18:54:07.908 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Jun 20 18:54:07.910725 coreos-metadata[1876]: Jun 20 18:54:07.909 INFO Fetch failed with 404: resource not found
Jun 20 18:54:07.910725 coreos-metadata[1876]: Jun 20 18:54:07.909 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Jun 20 18:54:07.910725 coreos-metadata[1876]: Jun 20 18:54:07.909 INFO Fetch successful
Jun 20 18:54:07.910725 coreos-metadata[1876]: Jun 20 18:54:07.910 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Jun 20 18:54:07.915971 coreos-metadata[1876]: Jun 20 18:54:07.913 INFO Fetch successful
Jun 20 18:54:07.915971 coreos-metadata[1876]: Jun 20 18:54:07.914 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Jun 20 18:54:07.921089 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Jun 20 18:54:07.921552 coreos-metadata[1876]: Jun 20 18:54:07.921 INFO Fetch successful
Jun 20 18:54:07.921552 coreos-metadata[1876]: Jun 20 18:54:07.921 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Jun 20 18:54:07.925324 coreos-metadata[1876]: Jun 20 18:54:07.925 INFO Fetch successful
Jun 20 18:54:07.925324 coreos-metadata[1876]: Jun 20 18:54:07.925 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Jun 20 18:54:07.930493 coreos-metadata[1876]: Jun 20 18:54:07.925 INFO Fetch successful
Jun 20 18:54:07.939726 extend-filesystems[1930]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Jun 20 18:54:07.939726 extend-filesystems[1930]: old_desc_blocks = 1, new_desc_blocks = 1
Jun 20 18:54:07.939726 extend-filesystems[1930]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Jun 20 18:54:07.949563 extend-filesystems[1879]: Resized filesystem in /dev/nvme0n1p9
Jun 20 18:54:07.949903 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jun 20 18:54:07.950187 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jun 20 18:54:07.957987 systemd-logind[1889]: Watching system buttons on /dev/input/event1 (Power Button)
Jun 20 18:54:07.958020 systemd-logind[1889]: Watching system buttons on /dev/input/event2 (Sleep Button)
Jun 20 18:54:07.958046 systemd-logind[1889]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jun 20 18:54:07.959839 systemd-logind[1889]: New seat seat0.
Jun 20 18:54:08.005978 systemd[1]: Started systemd-logind.service - User Login Management.
Jun 20 18:54:08.016706 bash[1953]: Updated "/home/core/.ssh/authorized_keys"
Jun 20 18:54:08.021067 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jun 20 18:54:08.031618 systemd[1]: Starting sshkeys.service...
Jun 20 18:54:08.037439 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jun 20 18:54:08.039538 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jun 20 18:54:08.051453 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1664)
Jun 20 18:54:08.073735 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jun 20 18:54:08.081686 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jun 20 18:54:08.172144 coreos-metadata[1977]: Jun 20 18:54:08.172 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jun 20 18:54:08.174190 coreos-metadata[1977]: Jun 20 18:54:08.174 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Jun 20 18:54:08.174945 coreos-metadata[1977]: Jun 20 18:54:08.174 INFO Fetch successful
Jun 20 18:54:08.174945 coreos-metadata[1977]: Jun 20 18:54:08.174 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Jun 20 18:54:08.175962 coreos-metadata[1977]: Jun 20 18:54:08.175 INFO Fetch successful
Jun 20 18:54:08.177419 unknown[1977]: wrote ssh authorized keys file for user: core
Jun 20 18:54:08.215269 update-ssh-keys[2014]: Updated "/home/core/.ssh/authorized_keys"
Jun 20 18:54:08.216421 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jun 20 18:54:08.227137 systemd[1]: Finished sshkeys.service.
Jun 20 18:54:08.236371 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Jun 20 18:54:08.238754 dbus-daemon[1877]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jun 20 18:54:08.243941 dbus-daemon[1877]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1918 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jun 20 18:54:08.257609 systemd[1]: Starting polkit.service - Authorization Manager...
Jun 20 18:54:08.289853 locksmithd[1932]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jun 20 18:54:08.312979 polkitd[2026]: Started polkitd version 121
Jun 20 18:54:08.356373 polkitd[2026]: Loading rules from directory /etc/polkit-1/rules.d
Jun 20 18:54:08.375941 polkitd[2026]: Loading rules from directory /usr/share/polkit-1/rules.d
Jun 20 18:54:08.376652 polkitd[2026]: Finished loading, compiling and executing 2 rules
Jun 20 18:54:08.391648 dbus-daemon[1877]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jun 20 18:54:08.392986 systemd[1]: Started polkit.service - Authorization Manager.
Jun 20 18:54:08.396467 polkitd[2026]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jun 20 18:54:08.490635 systemd-hostnamed[1918]: Hostname set to (transient)
Jun 20 18:54:08.491151 systemd-resolved[1821]: System hostname changed to 'ip-172-31-22-222'.
Jun 20 18:54:08.532275 containerd[1909]: time="2025-06-20T18:54:08.531718865Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jun 20 18:54:08.642882 ntpd[1881]: bind(24) AF_INET6 fe80::44f:9ff:fed8:39e7%2#123 flags 0x11 failed: Cannot assign requested address
Jun 20 18:54:08.645321 ntpd[1881]: unable to create socket on eth0 (6) for fe80::44f:9ff:fed8:39e7%2#123
Jun 20 18:54:08.645611 ntpd[1881]: 20 Jun 18:54:08 ntpd[1881]: bind(24) AF_INET6 fe80::44f:9ff:fed8:39e7%2#123 flags 0x11 failed: Cannot assign requested address
Jun 20 18:54:08.645611 ntpd[1881]: 20 Jun 18:54:08 ntpd[1881]: unable to create socket on eth0 (6) for fe80::44f:9ff:fed8:39e7%2#123
Jun 20 18:54:08.645611 ntpd[1881]: 20 Jun 18:54:08 ntpd[1881]: failed to init interface for address fe80::44f:9ff:fed8:39e7%2
Jun 20 18:54:08.645337 ntpd[1881]: failed to init interface for address fe80::44f:9ff:fed8:39e7%2
Jun 20 18:54:08.651601 containerd[1909]: time="2025-06-20T18:54:08.651524299Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jun 20 18:54:08.655477 containerd[1909]: time="2025-06-20T18:54:08.655423634Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.94-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jun 20 18:54:08.655477 containerd[1909]: time="2025-06-20T18:54:08.655474705Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jun 20 18:54:08.655633 containerd[1909]: time="2025-06-20T18:54:08.655497038Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jun 20 18:54:08.655709 containerd[1909]: time="2025-06-20T18:54:08.655689040Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jun 20 18:54:08.655749 containerd[1909]: time="2025-06-20T18:54:08.655719100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jun 20 18:54:08.655818 containerd[1909]: time="2025-06-20T18:54:08.655795516Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jun 20 18:54:08.655866 containerd[1909]: time="2025-06-20T18:54:08.655820253Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jun 20 18:54:08.656134 containerd[1909]: time="2025-06-20T18:54:08.656109903Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jun 20 18:54:08.656180 containerd[1909]: time="2025-06-20T18:54:08.656136753Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jun 20 18:54:08.656180 containerd[1909]: time="2025-06-20T18:54:08.656157129Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jun 20 18:54:08.656180 containerd[1909]: time="2025-06-20T18:54:08.656172046Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jun 20 18:54:08.656311 containerd[1909]: time="2025-06-20T18:54:08.656289568Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jun 20 18:54:08.656567 containerd[1909]: time="2025-06-20T18:54:08.656543516Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jun 20 18:54:08.656782 containerd[1909]: time="2025-06-20T18:54:08.656759495Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jun 20 18:54:08.656826 containerd[1909]: time="2025-06-20T18:54:08.656787265Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jun 20 18:54:08.656907 containerd[1909]: time="2025-06-20T18:54:08.656888864Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jun 20 18:54:08.656970 containerd[1909]: time="2025-06-20T18:54:08.656954874Z" level=info msg="metadata content store policy set" policy=shared
Jun 20 18:54:08.664342 containerd[1909]: time="2025-06-20T18:54:08.664291706Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jun 20 18:54:08.664457 containerd[1909]: time="2025-06-20T18:54:08.664387423Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jun 20 18:54:08.664568 containerd[1909]: time="2025-06-20T18:54:08.664548116Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jun 20 18:54:08.664611 containerd[1909]: time="2025-06-20T18:54:08.664582863Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jun 20 18:54:08.664780 containerd[1909]: time="2025-06-20T18:54:08.664761820Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jun 20 18:54:08.664980 containerd[1909]: time="2025-06-20T18:54:08.664960683Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jun 20 18:54:08.666507 containerd[1909]: time="2025-06-20T18:54:08.666478877Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jun 20 18:54:08.667042 containerd[1909]: time="2025-06-20T18:54:08.667015274Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jun 20 18:54:08.667105 containerd[1909]: time="2025-06-20T18:54:08.667051502Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jun 20 18:54:08.667105 containerd[1909]: time="2025-06-20T18:54:08.667091974Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..."
type=io.containerd.sandbox.controller.v1 Jun 20 18:54:08.667192 containerd[1909]: time="2025-06-20T18:54:08.667114192Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jun 20 18:54:08.667192 containerd[1909]: time="2025-06-20T18:54:08.667134976Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jun 20 18:54:08.668303 containerd[1909]: time="2025-06-20T18:54:08.667506532Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jun 20 18:54:08.668303 containerd[1909]: time="2025-06-20T18:54:08.667535015Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jun 20 18:54:08.668303 containerd[1909]: time="2025-06-20T18:54:08.667557593Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jun 20 18:54:08.668303 containerd[1909]: time="2025-06-20T18:54:08.667592615Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jun 20 18:54:08.668303 containerd[1909]: time="2025-06-20T18:54:08.667612002Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jun 20 18:54:08.668303 containerd[1909]: time="2025-06-20T18:54:08.667629521Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jun 20 18:54:08.668303 containerd[1909]: time="2025-06-20T18:54:08.667714452Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jun 20 18:54:08.668303 containerd[1909]: time="2025-06-20T18:54:08.667737155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Jun 20 18:54:08.668303 containerd[1909]: time="2025-06-20T18:54:08.667963872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jun 20 18:54:08.668303 containerd[1909]: time="2025-06-20T18:54:08.667985581Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jun 20 18:54:08.668303 containerd[1909]: time="2025-06-20T18:54:08.668003582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jun 20 18:54:08.668303 containerd[1909]: time="2025-06-20T18:54:08.668022654Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jun 20 18:54:08.668303 containerd[1909]: time="2025-06-20T18:54:08.668105281Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jun 20 18:54:08.668303 containerd[1909]: time="2025-06-20T18:54:08.668124384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jun 20 18:54:08.668778 containerd[1909]: time="2025-06-20T18:54:08.668407703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jun 20 18:54:08.668778 containerd[1909]: time="2025-06-20T18:54:08.668437483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jun 20 18:54:08.668778 containerd[1909]: time="2025-06-20T18:54:08.668458104Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jun 20 18:54:08.668778 containerd[1909]: time="2025-06-20T18:54:08.668494191Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jun 20 18:54:08.668778 containerd[1909]: time="2025-06-20T18:54:08.668513542Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Jun 20 18:54:08.668778 containerd[1909]: time="2025-06-20T18:54:08.668537481Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jun 20 18:54:08.668981 containerd[1909]: time="2025-06-20T18:54:08.668811593Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jun 20 18:54:08.668981 containerd[1909]: time="2025-06-20T18:54:08.668834280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jun 20 18:54:08.668981 containerd[1909]: time="2025-06-20T18:54:08.668869440Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jun 20 18:54:08.669343 containerd[1909]: time="2025-06-20T18:54:08.669320754Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jun 20 18:54:08.669395 containerd[1909]: time="2025-06-20T18:54:08.669357913Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jun 20 18:54:08.671261 containerd[1909]: time="2025-06-20T18:54:08.669654790Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jun 20 18:54:08.671261 containerd[1909]: time="2025-06-20T18:54:08.669683552Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jun 20 18:54:08.671261 containerd[1909]: time="2025-06-20T18:54:08.669701409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jun 20 18:54:08.671261 containerd[1909]: time="2025-06-20T18:54:08.669737052Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Jun 20 18:54:08.671261 containerd[1909]: time="2025-06-20T18:54:08.669753071Z" level=info msg="NRI interface is disabled by configuration." Jun 20 18:54:08.671261 containerd[1909]: time="2025-06-20T18:54:08.669768227Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jun 20 18:54:08.671533 containerd[1909]: time="2025-06-20T18:54:08.670181502Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jun 20 18:54:08.671533 containerd[1909]: time="2025-06-20T18:54:08.670269391Z" level=info msg="Connect containerd service" Jun 20 18:54:08.671533 containerd[1909]: time="2025-06-20T18:54:08.670308431Z" level=info msg="using legacy CRI server" Jun 20 18:54:08.671533 containerd[1909]: time="2025-06-20T18:54:08.670317181Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 20 18:54:08.671533 containerd[1909]: time="2025-06-20T18:54:08.670495109Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jun 20 18:54:08.673608 containerd[1909]: time="2025-06-20T18:54:08.673572874Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 20 18:54:08.674216 containerd[1909]: time="2025-06-20T18:54:08.674176914Z" level=info msg="Start subscribing containerd event" Jun 20 
18:54:08.674288 containerd[1909]: time="2025-06-20T18:54:08.674264102Z" level=info msg="Start recovering state" Jun 20 18:54:08.674368 containerd[1909]: time="2025-06-20T18:54:08.674352234Z" level=info msg="Start event monitor" Jun 20 18:54:08.674408 containerd[1909]: time="2025-06-20T18:54:08.674373702Z" level=info msg="Start snapshots syncer" Jun 20 18:54:08.674408 containerd[1909]: time="2025-06-20T18:54:08.674388581Z" level=info msg="Start cni network conf syncer for default" Jun 20 18:54:08.674408 containerd[1909]: time="2025-06-20T18:54:08.674401294Z" level=info msg="Start streaming server" Jun 20 18:54:08.675636 containerd[1909]: time="2025-06-20T18:54:08.675609959Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 20 18:54:08.675699 containerd[1909]: time="2025-06-20T18:54:08.675676909Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 20 18:54:08.677387 containerd[1909]: time="2025-06-20T18:54:08.677364552Z" level=info msg="containerd successfully booted in 0.147859s" Jun 20 18:54:08.677645 systemd[1]: Started containerd.service - containerd container runtime. Jun 20 18:54:09.024818 sshd_keygen[1917]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 20 18:54:09.030276 tar[1900]: linux-amd64/README.md Jun 20 18:54:09.045816 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 20 18:54:09.057949 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 20 18:54:09.064647 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 20 18:54:09.072693 systemd[1]: issuegen.service: Deactivated successfully. Jun 20 18:54:09.072987 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 20 18:54:09.079624 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 20 18:54:09.092013 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 20 18:54:09.098771 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Jun 20 18:54:09.101201 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jun 20 18:54:09.102518 systemd[1]: Reached target getty.target - Login Prompts.
Jun 20 18:54:09.202513 systemd-networkd[1820]: eth0: Gained IPv6LL
Jun 20 18:54:09.204370 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jun 20 18:54:09.206150 systemd[1]: Reached target network-online.target - Network is Online.
Jun 20 18:54:09.211652 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Jun 20 18:54:09.215488 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 18:54:09.226733 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jun 20 18:54:09.276213 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jun 20 18:54:09.293385 amazon-ssm-agent[2099]: Initializing new seelog logger
Jun 20 18:54:09.293774 amazon-ssm-agent[2099]: New Seelog Logger Creation Complete
Jun 20 18:54:09.293774 amazon-ssm-agent[2099]: 2025/06/20 18:54:09 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jun 20 18:54:09.293774 amazon-ssm-agent[2099]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jun 20 18:54:09.294017 amazon-ssm-agent[2099]: 2025/06/20 18:54:09 processing appconfig overrides
Jun 20 18:54:09.294499 amazon-ssm-agent[2099]: 2025/06/20 18:54:09 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jun 20 18:54:09.294499 amazon-ssm-agent[2099]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jun 20 18:54:09.294619 amazon-ssm-agent[2099]: 2025/06/20 18:54:09 processing appconfig overrides
Jun 20 18:54:09.300091 amazon-ssm-agent[2099]: 2025/06/20 18:54:09 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jun 20 18:54:09.300091 amazon-ssm-agent[2099]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jun 20 18:54:09.300091 amazon-ssm-agent[2099]: 2025/06/20 18:54:09 processing appconfig overrides
Jun 20 18:54:09.300091 amazon-ssm-agent[2099]: 2025-06-20 18:54:09 INFO Proxy environment variables:
Jun 20 18:54:09.302916 amazon-ssm-agent[2099]: 2025/06/20 18:54:09 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jun 20 18:54:09.302916 amazon-ssm-agent[2099]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jun 20 18:54:09.302916 amazon-ssm-agent[2099]: 2025/06/20 18:54:09 processing appconfig overrides
Jun 20 18:54:09.401528 amazon-ssm-agent[2099]: 2025-06-20 18:54:09 INFO https_proxy:
Jun 20 18:54:09.500674 amazon-ssm-agent[2099]: 2025-06-20 18:54:09 INFO http_proxy:
Jun 20 18:54:09.598749 amazon-ssm-agent[2099]: 2025-06-20 18:54:09 INFO no_proxy:
Jun 20 18:54:09.647163 amazon-ssm-agent[2099]: 2025-06-20 18:54:09 INFO Checking if agent identity type OnPrem can be assumed
Jun 20 18:54:09.647163 amazon-ssm-agent[2099]: 2025-06-20 18:54:09 INFO Checking if agent identity type EC2 can be assumed
Jun 20 18:54:09.647163 amazon-ssm-agent[2099]: 2025-06-20 18:54:09 INFO Agent will take identity from EC2
Jun 20 18:54:09.647163 amazon-ssm-agent[2099]: 2025-06-20 18:54:09 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jun 20 18:54:09.647163 amazon-ssm-agent[2099]: 2025-06-20 18:54:09 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jun 20 18:54:09.647163 amazon-ssm-agent[2099]: 2025-06-20 18:54:09 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jun 20 18:54:09.647163 amazon-ssm-agent[2099]: 2025-06-20 18:54:09 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Jun 20 18:54:09.647163 amazon-ssm-agent[2099]: 2025-06-20 18:54:09 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Jun 20 18:54:09.647163 amazon-ssm-agent[2099]: 2025-06-20 18:54:09 INFO [amazon-ssm-agent] Starting Core Agent
Jun 20 18:54:09.647163 amazon-ssm-agent[2099]: 2025-06-20 18:54:09 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Jun 20 18:54:09.647163 amazon-ssm-agent[2099]: 2025-06-20 18:54:09 INFO [Registrar] Starting registrar module
Jun 20 18:54:09.647163 amazon-ssm-agent[2099]: 2025-06-20 18:54:09 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Jun 20 18:54:09.647163 amazon-ssm-agent[2099]: 2025-06-20 18:54:09 INFO [EC2Identity] EC2 registration was successful.
Jun 20 18:54:09.647163 amazon-ssm-agent[2099]: 2025-06-20 18:54:09 INFO [CredentialRefresher] credentialRefresher has started
Jun 20 18:54:09.647163 amazon-ssm-agent[2099]: 2025-06-20 18:54:09 INFO [CredentialRefresher] Starting credentials refresher loop
Jun 20 18:54:09.647163 amazon-ssm-agent[2099]: 2025-06-20 18:54:09 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Jun 20 18:54:09.697256 amazon-ssm-agent[2099]: 2025-06-20 18:54:09 INFO [CredentialRefresher] Next credential rotation will be in 30.42499355951667 minutes
Jun 20 18:54:10.663317 amazon-ssm-agent[2099]: 2025-06-20 18:54:10 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Jun 20 18:54:10.687543 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 18:54:10.688870 systemd[1]: Reached target multi-user.target - Multi-User System.
Jun 20 18:54:10.691729 systemd[1]: Startup finished in 594ms (kernel) + 15.552s (initrd) + 7.482s (userspace) = 23.628s.
Jun 20 18:54:10.693342 (kubelet)[2126]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 18:54:10.764272 amazon-ssm-agent[2099]: 2025-06-20 18:54:10 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2121) started
Jun 20 18:54:10.865685 amazon-ssm-agent[2099]: 2025-06-20 18:54:10 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Jun 20 18:54:11.408698 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jun 20 18:54:11.425628 systemd[1]: Started sshd@0-172.31.22.222:22-139.178.68.195:59474.service - OpenSSH per-connection server daemon (139.178.68.195:59474).
Jun 20 18:54:11.569597 kubelet[2126]: E0620 18:54:11.569515 2126 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 18:54:11.572185 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 18:54:11.572344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 18:54:11.572637 systemd[1]: kubelet.service: Consumed 1.065s CPU time, 268M memory peak.
Jun 20 18:54:11.625131 sshd[2146]: Accepted publickey for core from 139.178.68.195 port 59474 ssh2: RSA SHA256:sF0tjKSFADzF6g6JG756y/3bgw4kb0C1NHj6dI7T2go
Jun 20 18:54:11.627952 sshd-session[2146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:54:11.639060 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jun 20 18:54:11.642816 ntpd[1881]: Listen normally on 7 eth0 [fe80::44f:9ff:fed8:39e7%2]:123
Jun 20 18:54:11.643798 ntpd[1881]: 20 Jun 18:54:11 ntpd[1881]: Listen normally on 7 eth0 [fe80::44f:9ff:fed8:39e7%2]:123
Jun 20 18:54:11.643750 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jun 20 18:54:11.646796 systemd-logind[1889]: New session 1 of user core.
Jun 20 18:54:11.661258 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jun 20 18:54:11.667153 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jun 20 18:54:11.676357 (systemd)[2152]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jun 20 18:54:11.679286 systemd-logind[1889]: New session c1 of user core.
Jun 20 18:54:11.836761 systemd[2152]: Queued start job for default target default.target.
Jun 20 18:54:11.844572 systemd[2152]: Created slice app.slice - User Application Slice.
Jun 20 18:54:11.844627 systemd[2152]: Reached target paths.target - Paths.
Jun 20 18:54:11.844688 systemd[2152]: Reached target timers.target - Timers.
Jun 20 18:54:11.846180 systemd[2152]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jun 20 18:54:11.858386 systemd[2152]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jun 20 18:54:11.858552 systemd[2152]: Reached target sockets.target - Sockets.
Jun 20 18:54:11.858607 systemd[2152]: Reached target basic.target - Basic System.
Jun 20 18:54:11.858661 systemd[2152]: Reached target default.target - Main User Target.
Jun 20 18:54:11.858700 systemd[2152]: Startup finished in 172ms.
Jun 20 18:54:11.859072 systemd[1]: Started user@500.service - User Manager for UID 500.
Jun 20 18:54:11.869513 systemd[1]: Started session-1.scope - Session 1 of User core.
Jun 20 18:54:12.014750 systemd[1]: Started sshd@1-172.31.22.222:22-139.178.68.195:59476.service - OpenSSH per-connection server daemon (139.178.68.195:59476).
Jun 20 18:54:12.171947 sshd[2163]: Accepted publickey for core from 139.178.68.195 port 59476 ssh2: RSA SHA256:sF0tjKSFADzF6g6JG756y/3bgw4kb0C1NHj6dI7T2go
Jun 20 18:54:12.173285 sshd-session[2163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:54:12.179483 systemd-logind[1889]: New session 2 of user core.
Jun 20 18:54:12.185505 systemd[1]: Started session-2.scope - Session 2 of User core.
Jun 20 18:54:12.301971 sshd[2165]: Connection closed by 139.178.68.195 port 59476
Jun 20 18:54:12.302922 sshd-session[2163]: pam_unix(sshd:session): session closed for user core
Jun 20 18:54:12.306707 systemd[1]: sshd@1-172.31.22.222:22-139.178.68.195:59476.service: Deactivated successfully.
Jun 20 18:54:12.308318 systemd[1]: session-2.scope: Deactivated successfully.
Jun 20 18:54:12.308974 systemd-logind[1889]: Session 2 logged out. Waiting for processes to exit.
Jun 20 18:54:12.310092 systemd-logind[1889]: Removed session 2.
Jun 20 18:54:12.335571 systemd[1]: Started sshd@2-172.31.22.222:22-139.178.68.195:59486.service - OpenSSH per-connection server daemon (139.178.68.195:59486).
Jun 20 18:54:12.501272 sshd[2171]: Accepted publickey for core from 139.178.68.195 port 59486 ssh2: RSA SHA256:sF0tjKSFADzF6g6JG756y/3bgw4kb0C1NHj6dI7T2go
Jun 20 18:54:12.502904 sshd-session[2171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:54:12.507750 systemd-logind[1889]: New session 3 of user core.
Jun 20 18:54:12.514761 systemd[1]: Started session-3.scope - Session 3 of User core.
Jun 20 18:54:12.630147 sshd[2173]: Connection closed by 139.178.68.195 port 59486
Jun 20 18:54:12.630790 sshd-session[2171]: pam_unix(sshd:session): session closed for user core
Jun 20 18:54:12.633791 systemd[1]: sshd@2-172.31.22.222:22-139.178.68.195:59486.service: Deactivated successfully.
Jun 20 18:54:12.635521 systemd[1]: session-3.scope: Deactivated successfully.
Jun 20 18:54:12.637121 systemd-logind[1889]: Session 3 logged out. Waiting for processes to exit.
Jun 20 18:54:12.638719 systemd-logind[1889]: Removed session 3.
Jun 20 18:54:12.665503 systemd[1]: Started sshd@3-172.31.22.222:22-139.178.68.195:59500.service - OpenSSH per-connection server daemon (139.178.68.195:59500).
Jun 20 18:54:12.828527 sshd[2179]: Accepted publickey for core from 139.178.68.195 port 59500 ssh2: RSA SHA256:sF0tjKSFADzF6g6JG756y/3bgw4kb0C1NHj6dI7T2go
Jun 20 18:54:12.829861 sshd-session[2179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:54:12.834514 systemd-logind[1889]: New session 4 of user core.
Jun 20 18:54:12.844473 systemd[1]: Started session-4.scope - Session 4 of User core.
Jun 20 18:54:12.960464 sshd[2181]: Connection closed by 139.178.68.195 port 59500
Jun 20 18:54:12.961368 sshd-session[2179]: pam_unix(sshd:session): session closed for user core
Jun 20 18:54:12.964389 systemd[1]: sshd@3-172.31.22.222:22-139.178.68.195:59500.service: Deactivated successfully.
Jun 20 18:54:12.966140 systemd[1]: session-4.scope: Deactivated successfully.
Jun 20 18:54:12.967708 systemd-logind[1889]: Session 4 logged out. Waiting for processes to exit.
Jun 20 18:54:12.969266 systemd-logind[1889]: Removed session 4.
Jun 20 18:54:12.998744 systemd[1]: Started sshd@4-172.31.22.222:22-139.178.68.195:59514.service - OpenSSH per-connection server daemon (139.178.68.195:59514).
Jun 20 18:54:13.161921 sshd[2187]: Accepted publickey for core from 139.178.68.195 port 59514 ssh2: RSA SHA256:sF0tjKSFADzF6g6JG756y/3bgw4kb0C1NHj6dI7T2go
Jun 20 18:54:13.163599 sshd-session[2187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:54:13.169451 systemd-logind[1889]: New session 5 of user core.
Jun 20 18:54:13.172443 systemd[1]: Started session-5.scope - Session 5 of User core.
Jun 20 18:54:13.320443 sudo[2190]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jun 20 18:54:13.320843 sudo[2190]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 20 18:54:13.336307 sudo[2190]: pam_unix(sudo:session): session closed for user root
Jun 20 18:54:13.359139 sshd[2189]: Connection closed by 139.178.68.195 port 59514
Jun 20 18:54:13.359872 sshd-session[2187]: pam_unix(sshd:session): session closed for user core
Jun 20 18:54:13.363685 systemd[1]: sshd@4-172.31.22.222:22-139.178.68.195:59514.service: Deactivated successfully.
Jun 20 18:54:13.365697 systemd[1]: session-5.scope: Deactivated successfully.
Jun 20 18:54:13.367428 systemd-logind[1889]: Session 5 logged out. Waiting for processes to exit.
Jun 20 18:54:13.368839 systemd-logind[1889]: Removed session 5.
Jun 20 18:54:13.392476 systemd[1]: Started sshd@5-172.31.22.222:22-139.178.68.195:59518.service - OpenSSH per-connection server daemon (139.178.68.195:59518).
Jun 20 18:54:13.563771 sshd[2196]: Accepted publickey for core from 139.178.68.195 port 59518 ssh2: RSA SHA256:sF0tjKSFADzF6g6JG756y/3bgw4kb0C1NHj6dI7T2go
Jun 20 18:54:13.565523 sshd-session[2196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:54:13.572491 systemd-logind[1889]: New session 6 of user core.
Jun 20 18:54:13.575461 systemd[1]: Started session-6.scope - Session 6 of User core.
Jun 20 18:54:13.677042 sudo[2200]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jun 20 18:54:13.677609 sudo[2200]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 20 18:54:13.681656 sudo[2200]: pam_unix(sudo:session): session closed for user root
Jun 20 18:54:13.687770 sudo[2199]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jun 20 18:54:13.688159 sudo[2199]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 20 18:54:13.709087 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jun 20 18:54:13.741179 augenrules[2222]: No rules
Jun 20 18:54:13.742761 systemd[1]: audit-rules.service: Deactivated successfully.
Jun 20 18:54:13.743050 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jun 20 18:54:13.744797 sudo[2199]: pam_unix(sudo:session): session closed for user root
Jun 20 18:54:13.767796 sshd[2198]: Connection closed by 139.178.68.195 port 59518
Jun 20 18:54:13.768435 sshd-session[2196]: pam_unix(sshd:session): session closed for user core
Jun 20 18:54:13.771466 systemd[1]: sshd@5-172.31.22.222:22-139.178.68.195:59518.service: Deactivated successfully.
Jun 20 18:54:13.773598 systemd[1]: session-6.scope: Deactivated successfully.
Jun 20 18:54:13.775292 systemd-logind[1889]: Session 6 logged out. Waiting for processes to exit.
Jun 20 18:54:13.776480 systemd-logind[1889]: Removed session 6.
Jun 20 18:54:13.803653 systemd[1]: Started sshd@6-172.31.22.222:22-139.178.68.195:52316.service - OpenSSH per-connection server daemon (139.178.68.195:52316).
Jun 20 18:54:13.967179 sshd[2231]: Accepted publickey for core from 139.178.68.195 port 52316 ssh2: RSA SHA256:sF0tjKSFADzF6g6JG756y/3bgw4kb0C1NHj6dI7T2go
Jun 20 18:54:13.968561 sshd-session[2231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:54:13.973915 systemd-logind[1889]: New session 7 of user core.
Jun 20 18:54:13.989489 systemd[1]: Started session-7.scope - Session 7 of User core.
Jun 20 18:54:14.085356 sudo[2234]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jun 20 18:54:14.085636 sudo[2234]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 20 18:54:14.728573 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jun 20 18:54:14.730182 (dockerd)[2254]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jun 20 18:54:15.421111 dockerd[2254]: time="2025-06-20T18:54:15.421036569Z" level=info msg="Starting up"
Jun 20 18:54:15.645954 dockerd[2254]: time="2025-06-20T18:54:15.645704728Z" level=info msg="Loading containers: start."
Jun 20 18:54:15.858273 kernel: Initializing XFRM netlink socket
Jun 20 18:54:15.888913 (udev-worker)[2279]: Network interface NamePolicy= disabled on kernel command line.
Jun 20 18:54:15.954013 systemd-networkd[1820]: docker0: Link UP
Jun 20 18:54:15.993037 dockerd[2254]: time="2025-06-20T18:54:15.992990302Z" level=info msg="Loading containers: done."
Jun 20 18:54:16.014893 dockerd[2254]: time="2025-06-20T18:54:16.014819957Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jun 20 18:54:16.015078 dockerd[2254]: time="2025-06-20T18:54:16.014941551Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Jun 20 18:54:16.015078 dockerd[2254]: time="2025-06-20T18:54:16.015062605Z" level=info msg="Daemon has completed initialization"
Jun 20 18:54:16.064011 dockerd[2254]: time="2025-06-20T18:54:16.063937319Z" level=info msg="API listen on /run/docker.sock"
Jun 20 18:54:16.065432 systemd[1]: Started docker.service - Docker Application Container Engine.
Jun 20 18:54:16.924719 containerd[1909]: time="2025-06-20T18:54:16.924659235Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\""
Jun 20 18:54:17.535381 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2897589940.mount: Deactivated successfully.
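The dockerd entries above are logfmt records: space-separated `key=value` pairs whose values are either bare tokens or double-quoted strings. A minimal sketch of pulling those fields apart in Python; it assumes no escaped quotes inside quoted values, which holds for the lines in this log:

```python
import re

# Match one logfmt pair: key=bare-token or key="quoted string".
# Sketch only; does not handle escaped quotes inside quoted values.
LOGFMT_PAIR = re.compile(r'([\w.-]+)=(?:"([^"]*)"|(\S+))')

def parse_logfmt(line: str) -> dict:
    """Return one logfmt line's key=value pairs as a dict."""
    return {key: quoted or bare
            for key, quoted, bare in LOGFMT_PAIR.findall(line)}

line = 'time="2025-06-20T18:54:16.063937319Z" level=info msg="API listen on /run/docker.sock"'
rec = parse_logfmt(line)  # {'time': '2025-06-20T18:54:16.063937319Z', 'level': 'info', 'msg': 'API listen on /run/docker.sock'}
```

With `findall`, exactly one of the two value groups matches per pair, so `quoted or bare` picks whichever is non-empty.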
Jun 20 18:54:19.324073 containerd[1909]: time="2025-06-20T18:54:19.324018238Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:54:19.325547 containerd[1909]: time="2025-06-20T18:54:19.325497460Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=30079099"
Jun 20 18:54:19.327073 containerd[1909]: time="2025-06-20T18:54:19.327028440Z" level=info msg="ImageCreate event name:\"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:54:19.329986 containerd[1909]: time="2025-06-20T18:54:19.329520988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:54:19.330797 containerd[1909]: time="2025-06-20T18:54:19.330763931Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"30075899\" in 2.40606546s"
Jun 20 18:54:19.330927 containerd[1909]: time="2025-06-20T18:54:19.330908676Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\""
Jun 20 18:54:19.331649 containerd[1909]: time="2025-06-20T18:54:19.331618584Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\""
Jun 20 18:54:21.490682 containerd[1909]: time="2025-06-20T18:54:21.490629621Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:54:21.491952 containerd[1909]: time="2025-06-20T18:54:21.491902752Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=26018946"
Jun 20 18:54:21.492863 containerd[1909]: time="2025-06-20T18:54:21.492800921Z" level=info msg="ImageCreate event name:\"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:54:21.497266 containerd[1909]: time="2025-06-20T18:54:21.495695448Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:54:21.497916 containerd[1909]: time="2025-06-20T18:54:21.497875353Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"27646507\" in 2.166210201s"
Jun 20 18:54:21.498012 containerd[1909]: time="2025-06-20T18:54:21.497935599Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\""
Jun 20 18:54:21.500832 containerd[1909]: time="2025-06-20T18:54:21.500702944Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\""
Jun 20 18:54:21.822901 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jun 20 18:54:21.828499 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 18:54:22.018938 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 18:54:22.027716 (kubelet)[2510]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 18:54:22.077672 kubelet[2510]: E0620 18:54:22.077511 2510 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 18:54:22.082197 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 18:54:22.082425 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 18:54:22.082797 systemd[1]: kubelet.service: Consumed 164ms CPU time, 111M memory peak.
Jun 20 18:54:23.246889 containerd[1909]: time="2025-06-20T18:54:23.246829122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:54:23.248594 containerd[1909]: time="2025-06-20T18:54:23.248348348Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=20155055"
Jun 20 18:54:23.251280 containerd[1909]: time="2025-06-20T18:54:23.250476808Z" level=info msg="ImageCreate event name:\"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:54:23.255053 containerd[1909]: time="2025-06-20T18:54:23.254721718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:54:23.256132 containerd[1909]: time="2025-06-20T18:54:23.256089571Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"21782634\" in 1.755146088s"
Jun 20 18:54:23.256262 containerd[1909]: time="2025-06-20T18:54:23.256136815Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\""
Jun 20 18:54:23.257227 containerd[1909]: time="2025-06-20T18:54:23.257194792Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\""
Jun 20 18:54:24.385433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount812970708.mount: Deactivated successfully.
Jun 20 18:54:25.020500 containerd[1909]: time="2025-06-20T18:54:25.020443075Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:54:25.021485 containerd[1909]: time="2025-06-20T18:54:25.021434504Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=31892746"
Jun 20 18:54:25.022757 containerd[1909]: time="2025-06-20T18:54:25.022705206Z" level=info msg="ImageCreate event name:\"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:54:25.024981 containerd[1909]: time="2025-06-20T18:54:25.024925143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:54:25.025449 containerd[1909]: time="2025-06-20T18:54:25.025420188Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"31891765\" in 1.768190943s"
Jun 20 18:54:25.025515 containerd[1909]: time="2025-06-20T18:54:25.025454310Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\""
Jun 20 18:54:25.026045 containerd[1909]: time="2025-06-20T18:54:25.025930093Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Jun 20 18:54:25.533617 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3141838360.mount: Deactivated successfully.
Jun 20 18:54:27.045442 containerd[1909]: time="2025-06-20T18:54:27.045394648Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:54:27.047975 containerd[1909]: time="2025-06-20T18:54:27.047913851Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Jun 20 18:54:27.053567 containerd[1909]: time="2025-06-20T18:54:27.053522463Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:54:27.059636 containerd[1909]: time="2025-06-20T18:54:27.059571158Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:54:27.061342 containerd[1909]: time="2025-06-20T18:54:27.060963395Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.035000494s"
Jun 20 18:54:27.061342 containerd[1909]: time="2025-06-20T18:54:27.061004549Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Jun 20 18:54:27.061854 containerd[1909]: time="2025-06-20T18:54:27.061827575Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jun 20 18:54:27.557715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2538103657.mount: Deactivated successfully.
Jun 20 18:54:27.571252 containerd[1909]: time="2025-06-20T18:54:27.571177257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:54:27.573182 containerd[1909]: time="2025-06-20T18:54:27.573110933Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Jun 20 18:54:27.575633 containerd[1909]: time="2025-06-20T18:54:27.575566514Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:54:27.579340 containerd[1909]: time="2025-06-20T18:54:27.579275982Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:54:27.580195 containerd[1909]: time="2025-06-20T18:54:27.580029894Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 518.172991ms"
Jun 20 18:54:27.580195 containerd[1909]: time="2025-06-20T18:54:27.580071963Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jun 20 18:54:27.581074 containerd[1909]: time="2025-06-20T18:54:27.580945686Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Jun 20 18:54:28.129156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4293050194.mount: Deactivated successfully.
Jun 20 18:54:31.074753 containerd[1909]: time="2025-06-20T18:54:31.074695298Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:54:31.079830 containerd[1909]: time="2025-06-20T18:54:31.079772680Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58247175"
Jun 20 18:54:31.081106 containerd[1909]: time="2025-06-20T18:54:31.081048581Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:54:31.084429 containerd[1909]: time="2025-06-20T18:54:31.084368158Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:54:31.086075 containerd[1909]: time="2025-06-20T18:54:31.085876742Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.504896172s"
Jun 20 18:54:31.086075 containerd[1909]: time="2025-06-20T18:54:31.085917688Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Jun 20 18:54:32.283893 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jun 20 18:54:32.292923 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 18:54:32.789463 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 18:54:32.791410 (kubelet)[2669]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 18:54:32.862260 kubelet[2669]: E0620 18:54:32.860692 2669 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 18:54:32.863739 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 18:54:32.864647 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 18:54:32.865376 systemd[1]: kubelet.service: Consumed 189ms CPU time, 110.5M memory peak.
Jun 20 18:54:35.150039 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 18:54:35.150333 systemd[1]: kubelet.service: Consumed 189ms CPU time, 110.5M memory peak.
Jun 20 18:54:35.156652 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 18:54:35.194700 systemd[1]: Reload requested from client PID 2683 ('systemctl') (unit session-7.scope)...
Jun 20 18:54:35.194719 systemd[1]: Reloading...
Jun 20 18:54:35.327261 zram_generator::config[2730]: No configuration found.
Jun 20 18:54:35.475599 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 20 18:54:35.616150 systemd[1]: Reloading finished in 420 ms.
Jun 20 18:54:35.678611 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 18:54:35.682745 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 18:54:35.684371 systemd[1]: kubelet.service: Deactivated successfully.
Jun 20 18:54:35.684743 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 18:54:35.684801 systemd[1]: kubelet.service: Consumed 134ms CPU time, 98.2M memory peak.
Jun 20 18:54:35.687627 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 18:54:35.895887 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 18:54:35.907810 (kubelet)[2794]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jun 20 18:54:35.961731 kubelet[2794]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 20 18:54:35.961731 kubelet[2794]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jun 20 18:54:35.961731 kubelet[2794]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
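The three deprecation warnings above say the flags should move into the file passed via `--config`. A hedged sketch of the corresponding KubeletConfiguration fields; the endpoint value is an illustrative assumption (a typical containerd socket path), not read from this host, while the plugin directory matches the path the kubelet itself reports later in this log:

```yaml
# Sketch only: config-file equivalents for two of the deprecated flags.
# --pod-infra-container-image has no KubeletConfiguration field; per the
# warning above it is removed in 1.35 and the sandbox image comes from CRI.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock   # replaces --container-runtime-endpoint
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/   # replaces --volume-plugin-dir
```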
Jun 20 18:54:35.962214 kubelet[2794]: I0620 18:54:35.961809 2794 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jun 20 18:54:36.718758 kubelet[2794]: I0620 18:54:36.718713 2794 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jun 20 18:54:36.718758 kubelet[2794]: I0620 18:54:36.718745 2794 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jun 20 18:54:36.719035 kubelet[2794]: I0620 18:54:36.719013 2794 server.go:956] "Client rotation is on, will bootstrap in background"
Jun 20 18:54:36.766211 kubelet[2794]: I0620 18:54:36.766129 2794 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jun 20 18:54:36.769105 kubelet[2794]: E0620 18:54:36.768680 2794 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.22.222:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.22.222:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jun 20 18:54:36.785055 kubelet[2794]: E0620 18:54:36.784948 2794 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jun 20 18:54:36.785560 kubelet[2794]: I0620 18:54:36.785386 2794 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jun 20 18:54:36.793735 kubelet[2794]: I0620 18:54:36.793698 2794 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jun 20 18:54:36.797854 kubelet[2794]: I0620 18:54:36.797795 2794 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jun 20 18:54:36.801899 kubelet[2794]: I0620 18:54:36.797852 2794 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-22-222","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jun 20 18:54:36.801899 kubelet[2794]: I0620 18:54:36.801904 2794 topology_manager.go:138] "Creating topology manager with none policy"
Jun 20 18:54:36.802180 kubelet[2794]: I0620 18:54:36.801924 2794 container_manager_linux.go:303] "Creating device plugin manager"
Jun 20 18:54:36.802180 kubelet[2794]: I0620 18:54:36.802102 2794 state_mem.go:36] "Initialized new in-memory state store"
Jun 20 18:54:36.806651 kubelet[2794]: I0620 18:54:36.806389 2794 kubelet.go:480] "Attempting to sync node with API server"
Jun 20 18:54:36.806651 kubelet[2794]: I0620 18:54:36.806429 2794 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jun 20 18:54:36.806651 kubelet[2794]: I0620 18:54:36.806462 2794 kubelet.go:386] "Adding apiserver pod source"
Jun 20 18:54:36.806651 kubelet[2794]: I0620 18:54:36.806478 2794 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jun 20 18:54:36.820021 kubelet[2794]: E0620 18:54:36.819752 2794 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.22.222:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-222&limit=500&resourceVersion=0\": dial tcp 172.31.22.222:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jun 20 18:54:36.820532 kubelet[2794]: E0620 18:54:36.820495 2794 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.22.222:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.22.222:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jun 20 18:54:36.820627 kubelet[2794]: I0620 18:54:36.820604 2794 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jun 20 18:54:36.821271 kubelet[2794]: I0620 18:54:36.821184 2794 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jun 20 18:54:36.822210 kubelet[2794]: W0620 18:54:36.822174 2794 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jun 20 18:54:36.827335 kubelet[2794]: I0620 18:54:36.827168 2794 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jun 20 18:54:36.827335 kubelet[2794]: I0620 18:54:36.827252 2794 server.go:1289] "Started kubelet"
Jun 20 18:54:36.831212 kubelet[2794]: I0620 18:54:36.830613 2794 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jun 20 18:54:36.843398 kubelet[2794]: I0620 18:54:36.842852 2794 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jun 20 18:54:36.843398 kubelet[2794]: I0620 18:54:36.843282 2794 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jun 20 18:54:36.845983 kubelet[2794]: I0620 18:54:36.845817 2794 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jun 20 18:54:36.847188 kubelet[2794]: E0620 18:54:36.837667 2794 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.22.222:6443/api/v1/namespaces/default/events\": dial tcp 172.31.22.222:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-22-222.184ad5119de73587 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-22-222,UID:ip-172-31-22-222,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-22-222,},FirstTimestamp:2025-06-20 18:54:36.827194759 +0000 UTC m=+0.914355831,LastTimestamp:2025-06-20 18:54:36.827194759 +0000 UTC m=+0.914355831,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-22-222,}"
Jun 20 18:54:36.847734 kubelet[2794]: I0620 18:54:36.847698 2794 server.go:317] "Adding debug handlers to kubelet server"
Jun 20 18:54:36.854347 kubelet[2794]: I0620 18:54:36.854100 2794 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jun 20 18:54:36.858310 kubelet[2794]: I0620 18:54:36.858290 2794 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jun 20 18:54:36.858888 kubelet[2794]: E0620 18:54:36.858866 2794 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-22-222\" not found"
Jun 20 18:54:36.859227 kubelet[2794]: I0620 18:54:36.859156 2794 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jun 20 18:54:36.859346 kubelet[2794]: I0620 18:54:36.859266 2794 reconciler.go:26] "Reconciler: start to sync state"
Jun 20 18:54:36.859813 kubelet[2794]: E0620 18:54:36.859780 2794 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.22.222:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.22.222:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jun 20 18:54:36.866623 kubelet[2794]: I0620 18:54:36.866596 2794 factory.go:223] Registration of the systemd container factory successfully
Jun 20 18:54:36.867330 kubelet[2794]: I0620 18:54:36.866910 2794 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jun 20 18:54:36.868853 kubelet[2794]: E0620 18:54:36.867596 2794 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-222?timeout=10s\": dial tcp 172.31.22.222:6443: connect: connection refused" interval="200ms"
Jun 20 18:54:36.882271 kubelet[2794]: E0620 18:54:36.880404 2794 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jun 20 18:54:36.882271 kubelet[2794]: I0620 18:54:36.882021 2794 factory.go:223] Registration of the containerd container factory successfully
Jun 20 18:54:36.889364 kubelet[2794]: I0620 18:54:36.889056 2794 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jun 20 18:54:36.901552 kubelet[2794]: I0620 18:54:36.901509 2794 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jun 20 18:54:36.901552 kubelet[2794]: I0620 18:54:36.901541 2794 status_manager.go:230] "Starting to sync pod status with apiserver"
Jun 20 18:54:36.901728 kubelet[2794]: I0620 18:54:36.901565 2794 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jun 20 18:54:36.901728 kubelet[2794]: I0620 18:54:36.901574 2794 kubelet.go:2436] "Starting kubelet main sync loop"
Jun 20 18:54:36.901728 kubelet[2794]: E0620 18:54:36.901626 2794 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jun 20 18:54:36.908920 kubelet[2794]: E0620 18:54:36.908880 2794 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.22.222:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.22.222:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jun 20 18:54:36.926966 kubelet[2794]: I0620 18:54:36.926671 2794 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jun 20 18:54:36.926966 kubelet[2794]: I0620 18:54:36.926687 2794 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jun 20 18:54:36.926966 kubelet[2794]: I0620 18:54:36.926703 2794 state_mem.go:36] "Initialized new in-memory state store"
Jun 20 18:54:36.930292 kubelet[2794]: I0620 18:54:36.930182 2794 policy_none.go:49] "None policy: Start"
Jun 20 18:54:36.930292 kubelet[2794]: I0620 18:54:36.930219 2794 memory_manager.go:186] "Starting memorymanager" policy="None"
Jun 20 18:54:36.930292 kubelet[2794]: I0620 18:54:36.930258 2794 state_mem.go:35] "Initializing new in-memory state store"
Jun 20 18:54:36.938375 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jun 20 18:54:36.950889 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jun 20 18:54:36.954450 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jun 20 18:54:36.959518 kubelet[2794]: E0620 18:54:36.959472 2794 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-22-222\" not found"
Jun 20 18:54:36.962378 kubelet[2794]: E0620 18:54:36.962355 2794 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jun 20 18:54:36.962691 kubelet[2794]: I0620 18:54:36.962604 2794 eviction_manager.go:189] "Eviction manager: starting control loop"
Jun 20 18:54:36.962691 kubelet[2794]: I0620 18:54:36.962617 2794 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jun 20 18:54:36.965188 kubelet[2794]: I0620 18:54:36.965053 2794 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jun 20 18:54:36.966550 kubelet[2794]: E0620 18:54:36.966524 2794 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jun 20 18:54:36.966660 kubelet[2794]: E0620 18:54:36.966579 2794 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-22-222\" not found"
Jun 20 18:54:37.024398 systemd[1]: Created slice kubepods-burstable-pod2eb36b26fd57c6889289736f05962d58.slice - libcontainer container kubepods-burstable-pod2eb36b26fd57c6889289736f05962d58.slice.
Jun 20 18:54:37.034271 kubelet[2794]: E0620 18:54:37.034078 2794 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-222\" not found" node="ip-172-31-22-222"
Jun 20 18:54:37.037832 systemd[1]: Created slice kubepods-burstable-pod587dee9b7cde046c3dce45203a0fa026.slice - libcontainer container kubepods-burstable-pod587dee9b7cde046c3dce45203a0fa026.slice.
Jun 20 18:54:37.045840 kubelet[2794]: E0620 18:54:37.045808 2794 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-222\" not found" node="ip-172-31-22-222"
Jun 20 18:54:37.046999 systemd[1]: Created slice kubepods-burstable-pod8f24d875eebd10cea92b2eabf414d5c1.slice - libcontainer container kubepods-burstable-pod8f24d875eebd10cea92b2eabf414d5c1.slice.
Jun 20 18:54:37.048728 kubelet[2794]: E0620 18:54:37.048706 2794 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-222\" not found" node="ip-172-31-22-222" Jun 20 18:54:37.068527 kubelet[2794]: I0620 18:54:37.068500 2794 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-22-222" Jun 20 18:54:37.069589 kubelet[2794]: E0620 18:54:37.068817 2794 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.22.222:6443/api/v1/nodes\": dial tcp 172.31.22.222:6443: connect: connection refused" node="ip-172-31-22-222" Jun 20 18:54:37.086094 kubelet[2794]: E0620 18:54:37.086052 2794 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-222?timeout=10s\": dial tcp 172.31.22.222:6443: connect: connection refused" interval="400ms" Jun 20 18:54:37.160402 kubelet[2794]: I0620 18:54:37.160341 2794 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8f24d875eebd10cea92b2eabf414d5c1-kubeconfig\") pod \"kube-scheduler-ip-172-31-22-222\" (UID: \"8f24d875eebd10cea92b2eabf414d5c1\") " pod="kube-system/kube-scheduler-ip-172-31-22-222" Jun 20 18:54:37.160402 kubelet[2794]: I0620 18:54:37.160386 2794 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2eb36b26fd57c6889289736f05962d58-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-22-222\" (UID: \"2eb36b26fd57c6889289736f05962d58\") " pod="kube-system/kube-apiserver-ip-172-31-22-222" Jun 20 18:54:37.160402 kubelet[2794]: I0620 18:54:37.160405 2794 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/587dee9b7cde046c3dce45203a0fa026-ca-certs\") pod \"kube-controller-manager-ip-172-31-22-222\" (UID: \"587dee9b7cde046c3dce45203a0fa026\") " pod="kube-system/kube-controller-manager-ip-172-31-22-222" Jun 20 18:54:37.160616 kubelet[2794]: I0620 18:54:37.160421 2794 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/587dee9b7cde046c3dce45203a0fa026-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-22-222\" (UID: \"587dee9b7cde046c3dce45203a0fa026\") " pod="kube-system/kube-controller-manager-ip-172-31-22-222" Jun 20 18:54:37.160616 kubelet[2794]: I0620 18:54:37.160438 2794 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2eb36b26fd57c6889289736f05962d58-ca-certs\") pod \"kube-apiserver-ip-172-31-22-222\" (UID: \"2eb36b26fd57c6889289736f05962d58\") " pod="kube-system/kube-apiserver-ip-172-31-22-222" Jun 20 18:54:37.160616 kubelet[2794]: I0620 18:54:37.160452 2794 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2eb36b26fd57c6889289736f05962d58-k8s-certs\") pod \"kube-apiserver-ip-172-31-22-222\" (UID: \"2eb36b26fd57c6889289736f05962d58\") " pod="kube-system/kube-apiserver-ip-172-31-22-222" Jun 20 18:54:37.160616 kubelet[2794]: I0620 18:54:37.160480 2794 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/587dee9b7cde046c3dce45203a0fa026-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-22-222\" (UID: \"587dee9b7cde046c3dce45203a0fa026\") " pod="kube-system/kube-controller-manager-ip-172-31-22-222" Jun 20 18:54:37.160616 kubelet[2794]: I0620 18:54:37.160495 2794 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/587dee9b7cde046c3dce45203a0fa026-k8s-certs\") pod \"kube-controller-manager-ip-172-31-22-222\" (UID: \"587dee9b7cde046c3dce45203a0fa026\") " pod="kube-system/kube-controller-manager-ip-172-31-22-222" Jun 20 18:54:37.160750 kubelet[2794]: I0620 18:54:37.160514 2794 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/587dee9b7cde046c3dce45203a0fa026-kubeconfig\") pod \"kube-controller-manager-ip-172-31-22-222\" (UID: \"587dee9b7cde046c3dce45203a0fa026\") " pod="kube-system/kube-controller-manager-ip-172-31-22-222" Jun 20 18:54:37.270923 kubelet[2794]: I0620 18:54:37.270890 2794 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-22-222" Jun 20 18:54:37.271257 kubelet[2794]: E0620 18:54:37.271217 2794 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.22.222:6443/api/v1/nodes\": dial tcp 172.31.22.222:6443: connect: connection refused" node="ip-172-31-22-222" Jun 20 18:54:37.335692 containerd[1909]: time="2025-06-20T18:54:37.335646721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-22-222,Uid:2eb36b26fd57c6889289736f05962d58,Namespace:kube-system,Attempt:0,}" Jun 20 18:54:37.347421 containerd[1909]: time="2025-06-20T18:54:37.347370700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-22-222,Uid:587dee9b7cde046c3dce45203a0fa026,Namespace:kube-system,Attempt:0,}" Jun 20 18:54:37.350171 containerd[1909]: time="2025-06-20T18:54:37.350134725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-22-222,Uid:8f24d875eebd10cea92b2eabf414d5c1,Namespace:kube-system,Attempt:0,}" Jun 20 18:54:37.487141 kubelet[2794]: E0620 18:54:37.487097 2794 controller.go:145] "Failed to ensure lease 
exists, will retry" err="Get \"https://172.31.22.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-222?timeout=10s\": dial tcp 172.31.22.222:6443: connect: connection refused" interval="800ms" Jun 20 18:54:37.673414 kubelet[2794]: I0620 18:54:37.673152 2794 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-22-222" Jun 20 18:54:37.673823 kubelet[2794]: E0620 18:54:37.673632 2794 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.22.222:6443/api/v1/nodes\": dial tcp 172.31.22.222:6443: connect: connection refused" node="ip-172-31-22-222" Jun 20 18:54:37.811892 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2082177620.mount: Deactivated successfully. Jun 20 18:54:37.820043 containerd[1909]: time="2025-06-20T18:54:37.819967630Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 18:54:37.824156 containerd[1909]: time="2025-06-20T18:54:37.824076601Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jun 20 18:54:37.825034 containerd[1909]: time="2025-06-20T18:54:37.824998996Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 18:54:37.825868 containerd[1909]: time="2025-06-20T18:54:37.825825917Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 18:54:37.828218 containerd[1909]: time="2025-06-20T18:54:37.828175397Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 18:54:37.829066 containerd[1909]: time="2025-06-20T18:54:37.829018223Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 20 18:54:37.829764 containerd[1909]: time="2025-06-20T18:54:37.829718987Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 20 18:54:37.831638 containerd[1909]: time="2025-06-20T18:54:37.830429442Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 18:54:37.831638 containerd[1909]: time="2025-06-20T18:54:37.831230133Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 481.000192ms" Jun 20 18:54:37.836612 containerd[1909]: time="2025-06-20T18:54:37.836566962Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 489.091533ms" Jun 20 18:54:37.837270 containerd[1909]: time="2025-06-20T18:54:37.837217788Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 
496.550217ms" Jun 20 18:54:37.879075 kubelet[2794]: E0620 18:54:37.879030 2794 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.22.222:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.22.222:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jun 20 18:54:37.898056 kubelet[2794]: E0620 18:54:37.897993 2794 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.22.222:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-222&limit=500&resourceVersion=0\": dial tcp 172.31.22.222:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jun 20 18:54:38.148444 containerd[1909]: time="2025-06-20T18:54:38.146181727Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:54:38.148444 containerd[1909]: time="2025-06-20T18:54:38.148208100Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:54:38.148444 containerd[1909]: time="2025-06-20T18:54:38.148252196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:54:38.148444 containerd[1909]: time="2025-06-20T18:54:38.148356290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:54:38.158451 containerd[1909]: time="2025-06-20T18:54:38.156304589Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:54:38.158451 containerd[1909]: time="2025-06-20T18:54:38.156367059Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:54:38.158451 containerd[1909]: time="2025-06-20T18:54:38.156384225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:54:38.158451 containerd[1909]: time="2025-06-20T18:54:38.156472531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:54:38.167663 containerd[1909]: time="2025-06-20T18:54:38.165500461Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:54:38.167663 containerd[1909]: time="2025-06-20T18:54:38.167428823Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:54:38.167663 containerd[1909]: time="2025-06-20T18:54:38.167449792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:54:38.167663 containerd[1909]: time="2025-06-20T18:54:38.167551844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:54:38.183754 kubelet[2794]: E0620 18:54:38.183351 2794 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.22.222:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.22.222:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jun 20 18:54:38.194451 systemd[1]: Started cri-containerd-c3fc3788afe53956e09aa9539142ecdef2ae30dc5c45d605287719116d71eddd.scope - libcontainer container c3fc3788afe53956e09aa9539142ecdef2ae30dc5c45d605287719116d71eddd. Jun 20 18:54:38.205974 systemd[1]: Started cri-containerd-d14b85f0fa1e603756ab926edebc60b96cf04cdfb59f00cdc7c8784ac8bb42d2.scope - libcontainer container d14b85f0fa1e603756ab926edebc60b96cf04cdfb59f00cdc7c8784ac8bb42d2. Jun 20 18:54:38.209034 systemd[1]: Started cri-containerd-e9dc5aa8a59147febabfb07b26e613f3df756fee168a136817b50cad9b3c4a71.scope - libcontainer container e9dc5aa8a59147febabfb07b26e613f3df756fee168a136817b50cad9b3c4a71. 
Jun 20 18:54:38.288217 kubelet[2794]: E0620 18:54:38.288137 2794 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-222?timeout=10s\": dial tcp 172.31.22.222:6443: connect: connection refused" interval="1.6s" Jun 20 18:54:38.290293 containerd[1909]: time="2025-06-20T18:54:38.290000011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-22-222,Uid:2eb36b26fd57c6889289736f05962d58,Namespace:kube-system,Attempt:0,} returns sandbox id \"c3fc3788afe53956e09aa9539142ecdef2ae30dc5c45d605287719116d71eddd\"" Jun 20 18:54:38.306062 kubelet[2794]: E0620 18:54:38.306016 2794 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.22.222:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.22.222:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jun 20 18:54:38.311898 containerd[1909]: time="2025-06-20T18:54:38.311842754Z" level=info msg="CreateContainer within sandbox \"c3fc3788afe53956e09aa9539142ecdef2ae30dc5c45d605287719116d71eddd\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 20 18:54:38.313725 containerd[1909]: time="2025-06-20T18:54:38.313436631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-22-222,Uid:587dee9b7cde046c3dce45203a0fa026,Namespace:kube-system,Attempt:0,} returns sandbox id \"e9dc5aa8a59147febabfb07b26e613f3df756fee168a136817b50cad9b3c4a71\"" Jun 20 18:54:38.334785 containerd[1909]: time="2025-06-20T18:54:38.334613921Z" level=info msg="CreateContainer within sandbox \"e9dc5aa8a59147febabfb07b26e613f3df756fee168a136817b50cad9b3c4a71\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 20 18:54:38.335851 containerd[1909]: 
time="2025-06-20T18:54:38.335787496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-22-222,Uid:8f24d875eebd10cea92b2eabf414d5c1,Namespace:kube-system,Attempt:0,} returns sandbox id \"d14b85f0fa1e603756ab926edebc60b96cf04cdfb59f00cdc7c8784ac8bb42d2\"" Jun 20 18:54:38.341538 containerd[1909]: time="2025-06-20T18:54:38.341506251Z" level=info msg="CreateContainer within sandbox \"d14b85f0fa1e603756ab926edebc60b96cf04cdfb59f00cdc7c8784ac8bb42d2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 20 18:54:38.347618 containerd[1909]: time="2025-06-20T18:54:38.347561762Z" level=info msg="CreateContainer within sandbox \"c3fc3788afe53956e09aa9539142ecdef2ae30dc5c45d605287719116d71eddd\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"af3244abcc62272950b242aaf9e3ae521fb678b62726742dbaa8518da5b3f762\"" Jun 20 18:54:38.348385 containerd[1909]: time="2025-06-20T18:54:38.348350699Z" level=info msg="StartContainer for \"af3244abcc62272950b242aaf9e3ae521fb678b62726742dbaa8518da5b3f762\"" Jun 20 18:54:38.358168 containerd[1909]: time="2025-06-20T18:54:38.358110831Z" level=info msg="CreateContainer within sandbox \"e9dc5aa8a59147febabfb07b26e613f3df756fee168a136817b50cad9b3c4a71\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3e952f6056298ca27fd50a331be39cee56d46e85a347fa5c04f036b07d226082\"" Jun 20 18:54:38.358738 containerd[1909]: time="2025-06-20T18:54:38.358706634Z" level=info msg="StartContainer for \"3e952f6056298ca27fd50a331be39cee56d46e85a347fa5c04f036b07d226082\"" Jun 20 18:54:38.372041 containerd[1909]: time="2025-06-20T18:54:38.371985938Z" level=info msg="CreateContainer within sandbox \"d14b85f0fa1e603756ab926edebc60b96cf04cdfb59f00cdc7c8784ac8bb42d2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bda610c5481325e3af2435337cde09014a0c082a2614677d2269bd17dc0719d8\"" Jun 20 18:54:38.373544 containerd[1909]: 
time="2025-06-20T18:54:38.372579059Z" level=info msg="StartContainer for \"bda610c5481325e3af2435337cde09014a0c082a2614677d2269bd17dc0719d8\"" Jun 20 18:54:38.400251 systemd[1]: Started cri-containerd-af3244abcc62272950b242aaf9e3ae521fb678b62726742dbaa8518da5b3f762.scope - libcontainer container af3244abcc62272950b242aaf9e3ae521fb678b62726742dbaa8518da5b3f762. Jun 20 18:54:38.411471 systemd[1]: Started cri-containerd-3e952f6056298ca27fd50a331be39cee56d46e85a347fa5c04f036b07d226082.scope - libcontainer container 3e952f6056298ca27fd50a331be39cee56d46e85a347fa5c04f036b07d226082. Jun 20 18:54:38.439477 systemd[1]: Started cri-containerd-bda610c5481325e3af2435337cde09014a0c082a2614677d2269bd17dc0719d8.scope - libcontainer container bda610c5481325e3af2435337cde09014a0c082a2614677d2269bd17dc0719d8. Jun 20 18:54:38.476571 kubelet[2794]: I0620 18:54:38.476485 2794 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-22-222" Jun 20 18:54:38.476821 kubelet[2794]: E0620 18:54:38.476780 2794 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.22.222:6443/api/v1/nodes\": dial tcp 172.31.22.222:6443: connect: connection refused" node="ip-172-31-22-222" Jun 20 18:54:38.499204 containerd[1909]: time="2025-06-20T18:54:38.498777585Z" level=info msg="StartContainer for \"3e952f6056298ca27fd50a331be39cee56d46e85a347fa5c04f036b07d226082\" returns successfully" Jun 20 18:54:38.505204 containerd[1909]: time="2025-06-20T18:54:38.505093417Z" level=info msg="StartContainer for \"af3244abcc62272950b242aaf9e3ae521fb678b62726742dbaa8518da5b3f762\" returns successfully" Jun 20 18:54:38.514943 containerd[1909]: time="2025-06-20T18:54:38.514773183Z" level=info msg="StartContainer for \"bda610c5481325e3af2435337cde09014a0c082a2614677d2269bd17dc0719d8\" returns successfully" Jun 20 18:54:38.525777 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Jun 20 18:54:38.811433 kubelet[2794]: E0620 18:54:38.811383 2794 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.22.222:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.22.222:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jun 20 18:54:38.937327 kubelet[2794]: E0620 18:54:38.937060 2794 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-222\" not found" node="ip-172-31-22-222" Jun 20 18:54:38.940611 kubelet[2794]: E0620 18:54:38.940571 2794 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-222\" not found" node="ip-172-31-22-222" Jun 20 18:54:38.942601 kubelet[2794]: E0620 18:54:38.942361 2794 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-222\" not found" node="ip-172-31-22-222" Jun 20 18:54:39.882012 kubelet[2794]: E0620 18:54:39.881889 2794 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.22.222:6443/api/v1/namespaces/default/events\": dial tcp 172.31.22.222:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-22-222.184ad5119de73587 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-22-222,UID:ip-172-31-22-222,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-22-222,},FirstTimestamp:2025-06-20 18:54:36.827194759 +0000 UTC m=+0.914355831,LastTimestamp:2025-06-20 18:54:36.827194759 +0000 UTC m=+0.914355831,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-22-222,}" Jun 20 18:54:39.890027 kubelet[2794]: E0620 18:54:39.889657 2794 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-222?timeout=10s\": dial tcp 172.31.22.222:6443: connect: connection refused" interval="3.2s" Jun 20 18:54:39.946301 kubelet[2794]: E0620 18:54:39.945665 2794 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-222\" not found" node="ip-172-31-22-222" Jun 20 18:54:39.946301 kubelet[2794]: E0620 18:54:39.946070 2794 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-222\" not found" node="ip-172-31-22-222" Jun 20 18:54:40.079667 kubelet[2794]: I0620 18:54:40.079640 2794 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-22-222" Jun 20 18:54:40.080252 kubelet[2794]: E0620 18:54:40.080128 2794 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.22.222:6443/api/v1/nodes\": dial tcp 172.31.22.222:6443: connect: connection refused" node="ip-172-31-22-222" Jun 20 18:54:42.824149 kubelet[2794]: I0620 18:54:42.824100 2794 apiserver.go:52] "Watching apiserver" Jun 20 18:54:42.859815 kubelet[2794]: I0620 18:54:42.859746 2794 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 20 18:54:42.862065 kubelet[2794]: E0620 18:54:42.862026 2794 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-22-222" not found Jun 20 18:54:43.094306 kubelet[2794]: E0620 18:54:43.094169 2794 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-22-222\" not found" 
node="ip-172-31-22-222" Jun 20 18:54:43.230761 kubelet[2794]: E0620 18:54:43.230715 2794 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-22-222" not found Jun 20 18:54:43.281934 kubelet[2794]: I0620 18:54:43.281900 2794 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-22-222" Jun 20 18:54:43.292162 kubelet[2794]: I0620 18:54:43.292121 2794 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-22-222" Jun 20 18:54:43.366146 kubelet[2794]: I0620 18:54:43.365774 2794 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-22-222" Jun 20 18:54:43.380974 kubelet[2794]: I0620 18:54:43.380177 2794 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-22-222" Jun 20 18:54:43.385341 kubelet[2794]: I0620 18:54:43.385299 2794 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-22-222" Jun 20 18:54:44.384122 systemd[1]: Reload requested from client PID 3077 ('systemctl') (unit session-7.scope)... Jun 20 18:54:44.384141 systemd[1]: Reloading... Jun 20 18:54:44.537270 zram_generator::config[3128]: No configuration found. Jun 20 18:54:44.663381 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 18:54:44.801599 systemd[1]: Reloading finished in 416 ms. Jun 20 18:54:44.831514 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:54:44.847786 systemd[1]: kubelet.service: Deactivated successfully. Jun 20 18:54:44.848014 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:54:44.848072 systemd[1]: kubelet.service: Consumed 1.324s CPU time, 129.9M memory peak. 
Jun 20 18:54:44.854595 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:54:45.093956 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:54:45.105803 (kubelet)[3182]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 18:54:45.178231 kubelet[3182]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 18:54:45.178231 kubelet[3182]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jun 20 18:54:45.178231 kubelet[3182]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jun 20 18:54:45.178231 kubelet[3182]: I0620 18:54:45.178291 3182 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 18:54:45.186841 kubelet[3182]: I0620 18:54:45.186796 3182 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jun 20 18:54:45.186841 kubelet[3182]: I0620 18:54:45.186826 3182 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 18:54:45.187095 kubelet[3182]: I0620 18:54:45.187060 3182 server.go:956] "Client rotation is on, will bootstrap in background" Jun 20 18:54:45.188229 kubelet[3182]: I0620 18:54:45.188208 3182 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jun 20 18:54:45.199008 kubelet[3182]: I0620 18:54:45.198960 3182 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 18:54:45.238365 kubelet[3182]: E0620 18:54:45.238075 3182 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jun 20 18:54:45.240341 kubelet[3182]: I0620 18:54:45.238440 3182 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jun 20 18:54:45.245143 kubelet[3182]: I0620 18:54:45.245091 3182 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 20 18:54:45.245580 kubelet[3182]: I0620 18:54:45.245518 3182 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 18:54:45.245721 kubelet[3182]: I0620 18:54:45.245561 3182 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-22-222","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 20 18:54:45.245808 kubelet[3182]: I0620 18:54:45.245723 3182 topology_manager.go:138] "Creating topology manager with none policy" Jun 20 
18:54:45.245808 kubelet[3182]: I0620 18:54:45.245733 3182 container_manager_linux.go:303] "Creating device plugin manager" Jun 20 18:54:45.246964 kubelet[3182]: I0620 18:54:45.246931 3182 state_mem.go:36] "Initialized new in-memory state store" Jun 20 18:54:45.247154 kubelet[3182]: I0620 18:54:45.247131 3182 kubelet.go:480] "Attempting to sync node with API server" Jun 20 18:54:45.247154 kubelet[3182]: I0620 18:54:45.247151 3182 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 18:54:45.248865 kubelet[3182]: I0620 18:54:45.248825 3182 kubelet.go:386] "Adding apiserver pod source" Jun 20 18:54:45.248865 kubelet[3182]: I0620 18:54:45.248859 3182 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 18:54:45.262574 kubelet[3182]: I0620 18:54:45.262464 3182 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jun 20 18:54:45.263557 kubelet[3182]: I0620 18:54:45.263535 3182 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jun 20 18:54:45.271816 kubelet[3182]: I0620 18:54:45.271791 3182 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 20 18:54:45.272007 kubelet[3182]: I0620 18:54:45.271998 3182 server.go:1289] "Started kubelet" Jun 20 18:54:45.274188 kubelet[3182]: I0620 18:54:45.274118 3182 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 18:54:45.277966 kubelet[3182]: I0620 18:54:45.277923 3182 server.go:317] "Adding debug handlers to kubelet server" Jun 20 18:54:45.279795 kubelet[3182]: I0620 18:54:45.279637 3182 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 18:54:45.288478 kubelet[3182]: I0620 18:54:45.288379 3182 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 
18:54:45.289555 kubelet[3182]: I0620 18:54:45.289165 3182 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 18:54:45.290929 kubelet[3182]: I0620 18:54:45.289779 3182 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 18:54:45.295313 kubelet[3182]: I0620 18:54:45.295283 3182 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 20 18:54:45.297287 kubelet[3182]: I0620 18:54:45.297223 3182 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 20 18:54:45.308364 kubelet[3182]: I0620 18:54:45.305426 3182 reconciler.go:26] "Reconciler: start to sync state" Jun 20 18:54:45.312746 kubelet[3182]: I0620 18:54:45.312716 3182 factory.go:223] Registration of the systemd container factory successfully Jun 20 18:54:45.312884 kubelet[3182]: I0620 18:54:45.312843 3182 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 18:54:45.316467 kubelet[3182]: I0620 18:54:45.316276 3182 factory.go:223] Registration of the containerd container factory successfully Jun 20 18:54:45.316955 kubelet[3182]: E0620 18:54:45.316878 3182 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 20 18:54:45.344729 kubelet[3182]: I0620 18:54:45.344155 3182 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jun 20 18:54:45.347858 kubelet[3182]: I0620 18:54:45.347155 3182 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Jun 20 18:54:45.347858 kubelet[3182]: I0620 18:54:45.347181 3182 status_manager.go:230] "Starting to sync pod status with apiserver" Jun 20 18:54:45.347858 kubelet[3182]: I0620 18:54:45.347210 3182 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jun 20 18:54:45.347858 kubelet[3182]: I0620 18:54:45.347221 3182 kubelet.go:2436] "Starting kubelet main sync loop" Jun 20 18:54:45.347858 kubelet[3182]: E0620 18:54:45.347288 3182 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 18:54:45.382758 kubelet[3182]: I0620 18:54:45.382726 3182 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 20 18:54:45.382758 kubelet[3182]: I0620 18:54:45.382751 3182 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 20 18:54:45.382950 kubelet[3182]: I0620 18:54:45.382775 3182 state_mem.go:36] "Initialized new in-memory state store" Jun 20 18:54:45.382950 kubelet[3182]: I0620 18:54:45.382930 3182 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 20 18:54:45.383036 kubelet[3182]: I0620 18:54:45.382943 3182 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 20 18:54:45.383036 kubelet[3182]: I0620 18:54:45.382964 3182 policy_none.go:49] "None policy: Start" Jun 20 18:54:45.383036 kubelet[3182]: I0620 18:54:45.382978 3182 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 20 18:54:45.383036 kubelet[3182]: I0620 18:54:45.382991 3182 state_mem.go:35] "Initializing new in-memory state store" Jun 20 18:54:45.383192 kubelet[3182]: I0620 18:54:45.383114 3182 state_mem.go:75] "Updated machine memory state" Jun 20 18:54:45.389196 kubelet[3182]: E0620 18:54:45.388628 3182 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jun 20 18:54:45.389196 kubelet[3182]: I0620 
18:54:45.388885 3182 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 18:54:45.389196 kubelet[3182]: I0620 18:54:45.388909 3182 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 18:54:45.390372 kubelet[3182]: I0620 18:54:45.389391 3182 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 18:54:45.393406 kubelet[3182]: E0620 18:54:45.393384 3182 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jun 20 18:54:45.418640 sudo[3216]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jun 20 18:54:45.419082 sudo[3216]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jun 20 18:54:45.449679 kubelet[3182]: I0620 18:54:45.448707 3182 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-22-222" Jun 20 18:54:45.449679 kubelet[3182]: I0620 18:54:45.449486 3182 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-22-222" Jun 20 18:54:45.449943 kubelet[3182]: I0620 18:54:45.449708 3182 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-22-222" Jun 20 18:54:45.463727 kubelet[3182]: E0620 18:54:45.463596 3182 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-22-222\" already exists" pod="kube-system/kube-apiserver-ip-172-31-22-222" Jun 20 18:54:45.463727 kubelet[3182]: E0620 18:54:45.463688 3182 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-22-222\" already exists" pod="kube-system/kube-scheduler-ip-172-31-22-222" Jun 20 18:54:45.464015 kubelet[3182]: E0620 18:54:45.463928 3182 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-22-222\" already 
exists" pod="kube-system/kube-controller-manager-ip-172-31-22-222" Jun 20 18:54:45.498144 kubelet[3182]: I0620 18:54:45.497809 3182 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-22-222" Jun 20 18:54:45.510908 kubelet[3182]: I0620 18:54:45.510843 3182 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-22-222" Jun 20 18:54:45.511175 kubelet[3182]: I0620 18:54:45.511051 3182 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-22-222" Jun 20 18:54:45.512634 kubelet[3182]: I0620 18:54:45.512083 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/587dee9b7cde046c3dce45203a0fa026-ca-certs\") pod \"kube-controller-manager-ip-172-31-22-222\" (UID: \"587dee9b7cde046c3dce45203a0fa026\") " pod="kube-system/kube-controller-manager-ip-172-31-22-222" Jun 20 18:54:45.512634 kubelet[3182]: I0620 18:54:45.512113 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/587dee9b7cde046c3dce45203a0fa026-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-22-222\" (UID: \"587dee9b7cde046c3dce45203a0fa026\") " pod="kube-system/kube-controller-manager-ip-172-31-22-222" Jun 20 18:54:45.512634 kubelet[3182]: I0620 18:54:45.512132 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/587dee9b7cde046c3dce45203a0fa026-k8s-certs\") pod \"kube-controller-manager-ip-172-31-22-222\" (UID: \"587dee9b7cde046c3dce45203a0fa026\") " pod="kube-system/kube-controller-manager-ip-172-31-22-222" Jun 20 18:54:45.512634 kubelet[3182]: I0620 18:54:45.512149 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/587dee9b7cde046c3dce45203a0fa026-kubeconfig\") pod \"kube-controller-manager-ip-172-31-22-222\" (UID: \"587dee9b7cde046c3dce45203a0fa026\") " pod="kube-system/kube-controller-manager-ip-172-31-22-222" Jun 20 18:54:45.512634 kubelet[3182]: I0620 18:54:45.512167 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8f24d875eebd10cea92b2eabf414d5c1-kubeconfig\") pod \"kube-scheduler-ip-172-31-22-222\" (UID: \"8f24d875eebd10cea92b2eabf414d5c1\") " pod="kube-system/kube-scheduler-ip-172-31-22-222" Jun 20 18:54:45.512849 kubelet[3182]: I0620 18:54:45.512181 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2eb36b26fd57c6889289736f05962d58-ca-certs\") pod \"kube-apiserver-ip-172-31-22-222\" (UID: \"2eb36b26fd57c6889289736f05962d58\") " pod="kube-system/kube-apiserver-ip-172-31-22-222" Jun 20 18:54:45.512849 kubelet[3182]: I0620 18:54:45.512196 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2eb36b26fd57c6889289736f05962d58-k8s-certs\") pod \"kube-apiserver-ip-172-31-22-222\" (UID: \"2eb36b26fd57c6889289736f05962d58\") " pod="kube-system/kube-apiserver-ip-172-31-22-222" Jun 20 18:54:45.512849 kubelet[3182]: I0620 18:54:45.512211 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2eb36b26fd57c6889289736f05962d58-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-22-222\" (UID: \"2eb36b26fd57c6889289736f05962d58\") " pod="kube-system/kube-apiserver-ip-172-31-22-222" Jun 20 18:54:45.512849 kubelet[3182]: I0620 18:54:45.512228 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/587dee9b7cde046c3dce45203a0fa026-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-22-222\" (UID: \"587dee9b7cde046c3dce45203a0fa026\") " pod="kube-system/kube-controller-manager-ip-172-31-22-222" Jun 20 18:54:46.103964 sudo[3216]: pam_unix(sudo:session): session closed for user root Jun 20 18:54:46.253414 kubelet[3182]: I0620 18:54:46.253364 3182 apiserver.go:52] "Watching apiserver" Jun 20 18:54:46.308771 kubelet[3182]: I0620 18:54:46.308724 3182 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 20 18:54:46.367035 kubelet[3182]: I0620 18:54:46.366715 3182 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-22-222" Jun 20 18:54:46.368072 kubelet[3182]: I0620 18:54:46.367573 3182 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-22-222" Jun 20 18:54:46.369320 kubelet[3182]: I0620 18:54:46.369115 3182 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-22-222" Jun 20 18:54:46.387035 kubelet[3182]: E0620 18:54:46.386840 3182 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-22-222\" already exists" pod="kube-system/kube-apiserver-ip-172-31-22-222" Jun 20 18:54:46.388023 kubelet[3182]: E0620 18:54:46.387526 3182 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-22-222\" already exists" pod="kube-system/kube-scheduler-ip-172-31-22-222" Jun 20 18:54:46.389667 kubelet[3182]: E0620 18:54:46.389289 3182 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-22-222\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-22-222" Jun 20 18:54:46.433343 kubelet[3182]: I0620 18:54:46.433158 3182 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-ip-172-31-22-222" podStartSLOduration=3.433136732 podStartE2EDuration="3.433136732s" podCreationTimestamp="2025-06-20 18:54:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:54:46.415866101 +0000 UTC m=+1.301593587" watchObservedRunningTime="2025-06-20 18:54:46.433136732 +0000 UTC m=+1.318864216" Jun 20 18:54:46.450594 kubelet[3182]: I0620 18:54:46.450431 3182 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-22-222" podStartSLOduration=3.450411002 podStartE2EDuration="3.450411002s" podCreationTimestamp="2025-06-20 18:54:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:54:46.434480845 +0000 UTC m=+1.320208331" watchObservedRunningTime="2025-06-20 18:54:46.450411002 +0000 UTC m=+1.336138491" Jun 20 18:54:48.069565 sudo[2234]: pam_unix(sudo:session): session closed for user root Jun 20 18:54:48.091844 sshd[2233]: Connection closed by 139.178.68.195 port 52316 Jun 20 18:54:48.092917 sshd-session[2231]: pam_unix(sshd:session): session closed for user core Jun 20 18:54:48.096579 systemd[1]: sshd@6-172.31.22.222:22-139.178.68.195:52316.service: Deactivated successfully. Jun 20 18:54:48.099605 systemd[1]: session-7.scope: Deactivated successfully. Jun 20 18:54:48.099870 systemd[1]: session-7.scope: Consumed 6.327s CPU time, 209.6M memory peak. Jun 20 18:54:48.101488 systemd-logind[1889]: Session 7 logged out. Waiting for processes to exit. Jun 20 18:54:48.103054 systemd-logind[1889]: Removed session 7. 
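The pod_startup_latency_tracker entries above carry two useful numbers per pod: podStartE2EDuration, and the kubelet monotonic offset inside watchObservedRunningTime (the `m=+N` suffix is seconds since this kubelet process started). A minimal Python sketch pulling both out of one of these journal lines; the helper name and returned dict shape are ours, not kubelet API:

```python
import re

# Extract duration and monotonic-offset fields from a kubelet
# pod_startup_latency_tracker journal line (sketch; field names are
# from the log above, the helper and dict keys are our own).
def parse_startup_line(line: str) -> dict:
    fields = {}
    m = re.search(r'podStartE2EDuration="([0-9.]+)s"', line)
    if m:
        fields["e2e_seconds"] = float(m.group(1))
    m = re.search(r'watchObservedRunningTime="[^"]* m=\+([0-9.]+)"', line)
    if m:
        # m=+N is seconds since this kubelet process started
        fields["kubelet_uptime_at_watch"] = float(m.group(1))
    return fields

line = ('pod="kube-system/kube-apiserver-ip-172-31-22-222" '
        'podStartE2EDuration="3.433136732s" '
        'watchObservedRunningTime="2025-06-20 18:54:46.433136732 +0000 UTC m=+1.318864216"')
print(parse_startup_line(line))
```

The `0001-01-01 00:00:00` values in firstStartedPulling/lastFinishedPulling appear to be zero-time sentinels, i.e. no image pull was recorded for these static-pod containers.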
Jun 20 18:54:49.524537 kubelet[3182]: I0620 18:54:49.523997 3182 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-22-222" podStartSLOduration=6.523968213 podStartE2EDuration="6.523968213s" podCreationTimestamp="2025-06-20 18:54:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:54:46.451608065 +0000 UTC m=+1.337335551" watchObservedRunningTime="2025-06-20 18:54:49.523968213 +0000 UTC m=+4.409695698" Jun 20 18:54:50.389823 kubelet[3182]: I0620 18:54:50.389644 3182 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 20 18:54:50.390864 containerd[1909]: time="2025-06-20T18:54:50.390145450Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 20 18:54:50.391314 kubelet[3182]: I0620 18:54:50.390651 3182 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 20 18:54:51.360907 systemd[1]: Created slice kubepods-besteffort-pod62f90734_0597_4283_af18_c005502b9f8c.slice - libcontainer container kubepods-besteffort-pod62f90734_0597_4283_af18_c005502b9f8c.slice. Jun 20 18:54:51.373658 systemd[1]: Created slice kubepods-burstable-podf44253b7_fe0a_4654_9a51_dcae2fd2b846.slice - libcontainer container kubepods-burstable-podf44253b7_fe0a_4654_9a51_dcae2fd2b846.slice. 
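The `Created slice kubepods-besteffort-pod62f90734_0597_4283_af18_c005502b9f8c.slice` lines above encode each pod's QoS class and UID in the systemd unit name, with the UID's hyphens swapped for underscores (since `-` is systemd's slice-hierarchy separator). A sketch decoding the slice names seen above; the helper name is ours, and only the two QoS classes that actually appear in this log are handled:

```python
import re

# Recover the QoS class and pod UID from a kubepods cgroup slice name
# as seen in the "Created slice" lines above (sketch; helper name ours).
def parse_pod_slice(unit: str):
    m = re.match(r"kubepods-(besteffort|burstable)-pod([0-9a-f_]+)\.slice$", unit)
    if not m:
        return None
    qos, raw_uid = m.groups()
    # systemd unit names reserve '-', so the kubelet writes '_' instead
    return qos, raw_uid.replace("_", "-")

print(parse_pod_slice("kubepods-besteffort-pod62f90734_0597_4283_af18_c005502b9f8c.slice"))
```

The recovered UID matches the one the later reconciler_common lines use for the kube-proxy-j2ptm volumes.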
Jun 20 18:54:51.455462 kubelet[3182]: I0620 18:54:51.455358 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f44253b7-fe0a-4654-9a51-dcae2fd2b846-cilium-config-path\") pod \"cilium-8vf8d\" (UID: \"f44253b7-fe0a-4654-9a51-dcae2fd2b846\") " pod="kube-system/cilium-8vf8d" Jun 20 18:54:51.455462 kubelet[3182]: I0620 18:54:51.455394 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f44253b7-fe0a-4654-9a51-dcae2fd2b846-host-proc-sys-net\") pod \"cilium-8vf8d\" (UID: \"f44253b7-fe0a-4654-9a51-dcae2fd2b846\") " pod="kube-system/cilium-8vf8d" Jun 20 18:54:51.455462 kubelet[3182]: I0620 18:54:51.455409 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f44253b7-fe0a-4654-9a51-dcae2fd2b846-cilium-run\") pod \"cilium-8vf8d\" (UID: \"f44253b7-fe0a-4654-9a51-dcae2fd2b846\") " pod="kube-system/cilium-8vf8d" Jun 20 18:54:51.455462 kubelet[3182]: I0620 18:54:51.455452 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f44253b7-fe0a-4654-9a51-dcae2fd2b846-bpf-maps\") pod \"cilium-8vf8d\" (UID: \"f44253b7-fe0a-4654-9a51-dcae2fd2b846\") " pod="kube-system/cilium-8vf8d" Jun 20 18:54:51.456199 kubelet[3182]: I0620 18:54:51.455483 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f44253b7-fe0a-4654-9a51-dcae2fd2b846-hubble-tls\") pod \"cilium-8vf8d\" (UID: \"f44253b7-fe0a-4654-9a51-dcae2fd2b846\") " pod="kube-system/cilium-8vf8d" Jun 20 18:54:51.456199 kubelet[3182]: I0620 18:54:51.455502 3182 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7zsn\" (UniqueName: \"kubernetes.io/projected/f44253b7-fe0a-4654-9a51-dcae2fd2b846-kube-api-access-r7zsn\") pod \"cilium-8vf8d\" (UID: \"f44253b7-fe0a-4654-9a51-dcae2fd2b846\") " pod="kube-system/cilium-8vf8d" Jun 20 18:54:51.456199 kubelet[3182]: I0620 18:54:51.455524 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stwl8\" (UniqueName: \"kubernetes.io/projected/62f90734-0597-4283-af18-c005502b9f8c-kube-api-access-stwl8\") pod \"kube-proxy-j2ptm\" (UID: \"62f90734-0597-4283-af18-c005502b9f8c\") " pod="kube-system/kube-proxy-j2ptm" Jun 20 18:54:51.456199 kubelet[3182]: I0620 18:54:51.455551 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f44253b7-fe0a-4654-9a51-dcae2fd2b846-etc-cni-netd\") pod \"cilium-8vf8d\" (UID: \"f44253b7-fe0a-4654-9a51-dcae2fd2b846\") " pod="kube-system/cilium-8vf8d" Jun 20 18:54:51.456199 kubelet[3182]: I0620 18:54:51.455572 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f44253b7-fe0a-4654-9a51-dcae2fd2b846-lib-modules\") pod \"cilium-8vf8d\" (UID: \"f44253b7-fe0a-4654-9a51-dcae2fd2b846\") " pod="kube-system/cilium-8vf8d" Jun 20 18:54:51.456357 kubelet[3182]: I0620 18:54:51.455588 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f44253b7-fe0a-4654-9a51-dcae2fd2b846-clustermesh-secrets\") pod \"cilium-8vf8d\" (UID: \"f44253b7-fe0a-4654-9a51-dcae2fd2b846\") " pod="kube-system/cilium-8vf8d" Jun 20 18:54:51.456357 kubelet[3182]: I0620 18:54:51.455609 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f44253b7-fe0a-4654-9a51-dcae2fd2b846-host-proc-sys-kernel\") pod \"cilium-8vf8d\" (UID: \"f44253b7-fe0a-4654-9a51-dcae2fd2b846\") " pod="kube-system/cilium-8vf8d" Jun 20 18:54:51.456357 kubelet[3182]: I0620 18:54:51.455627 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f44253b7-fe0a-4654-9a51-dcae2fd2b846-hostproc\") pod \"cilium-8vf8d\" (UID: \"f44253b7-fe0a-4654-9a51-dcae2fd2b846\") " pod="kube-system/cilium-8vf8d" Jun 20 18:54:51.456357 kubelet[3182]: I0620 18:54:51.455642 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f44253b7-fe0a-4654-9a51-dcae2fd2b846-xtables-lock\") pod \"cilium-8vf8d\" (UID: \"f44253b7-fe0a-4654-9a51-dcae2fd2b846\") " pod="kube-system/cilium-8vf8d" Jun 20 18:54:51.456357 kubelet[3182]: I0620 18:54:51.455666 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/62f90734-0597-4283-af18-c005502b9f8c-kube-proxy\") pod \"kube-proxy-j2ptm\" (UID: \"62f90734-0597-4283-af18-c005502b9f8c\") " pod="kube-system/kube-proxy-j2ptm" Jun 20 18:54:51.456357 kubelet[3182]: I0620 18:54:51.455679 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/62f90734-0597-4283-af18-c005502b9f8c-xtables-lock\") pod \"kube-proxy-j2ptm\" (UID: \"62f90734-0597-4283-af18-c005502b9f8c\") " pod="kube-system/kube-proxy-j2ptm" Jun 20 18:54:51.456500 kubelet[3182]: I0620 18:54:51.455704 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/62f90734-0597-4283-af18-c005502b9f8c-lib-modules\") pod 
\"kube-proxy-j2ptm\" (UID: \"62f90734-0597-4283-af18-c005502b9f8c\") " pod="kube-system/kube-proxy-j2ptm" Jun 20 18:54:51.456500 kubelet[3182]: I0620 18:54:51.455739 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f44253b7-fe0a-4654-9a51-dcae2fd2b846-cilium-cgroup\") pod \"cilium-8vf8d\" (UID: \"f44253b7-fe0a-4654-9a51-dcae2fd2b846\") " pod="kube-system/cilium-8vf8d" Jun 20 18:54:51.456500 kubelet[3182]: I0620 18:54:51.455755 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f44253b7-fe0a-4654-9a51-dcae2fd2b846-cni-path\") pod \"cilium-8vf8d\" (UID: \"f44253b7-fe0a-4654-9a51-dcae2fd2b846\") " pod="kube-system/cilium-8vf8d" Jun 20 18:54:51.645293 systemd[1]: Created slice kubepods-besteffort-pod45a57080_2381_4156_a161_c9089a596171.slice - libcontainer container kubepods-besteffort-pod45a57080_2381_4156_a161_c9089a596171.slice. 
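Every reconciler_common line above carries a volume UniqueName of the form `<plugin>/<pod-uid>-<volume-name>`; because the UID is always the fixed 36-character form, the two parts can be split positionally. A sketch grouping attached volumes by pod, using two UniqueNames taken from the log (the helper name is ours):

```python
# Group VerifyControllerAttachedVolume UniqueNames (as logged above) by
# pod UID. Assumes the standard 8-4-4-4-12 UID layout; helper name ours.
def split_unique_name(unique_name: str):
    plugin, rest = unique_name.rsplit("/", 1)
    uid, volume = rest[:36], rest[37:]   # UID is 36 chars, then a '-'
    return plugin, uid, volume

names = [
    "kubernetes.io/host-path/f44253b7-fe0a-4654-9a51-dcae2fd2b846-cilium-run",
    "kubernetes.io/host-path/62f90734-0597-4283-af18-c005502b9f8c-xtables-lock",
]
by_pod = {}
for n in names:
    _, uid, vol = split_unique_name(n)
    by_pod.setdefault(uid, []).append(vol)
print(by_pod)
```

The same positional split also works for the projected volumes above (e.g. `kubernetes.io/projected/...-kube-api-access-r7zsn`).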
Jun 20 18:54:51.656707 kubelet[3182]: I0620 18:54:51.656640 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/45a57080-2381-4156-a161-c9089a596171-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-f54mx\" (UID: \"45a57080-2381-4156-a161-c9089a596171\") " pod="kube-system/cilium-operator-6c4d7847fc-f54mx" Jun 20 18:54:51.656707 kubelet[3182]: I0620 18:54:51.656701 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk8sh\" (UniqueName: \"kubernetes.io/projected/45a57080-2381-4156-a161-c9089a596171-kube-api-access-jk8sh\") pod \"cilium-operator-6c4d7847fc-f54mx\" (UID: \"45a57080-2381-4156-a161-c9089a596171\") " pod="kube-system/cilium-operator-6c4d7847fc-f54mx" Jun 20 18:54:51.669558 containerd[1909]: time="2025-06-20T18:54:51.669114169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j2ptm,Uid:62f90734-0597-4283-af18-c005502b9f8c,Namespace:kube-system,Attempt:0,}" Jun 20 18:54:51.678869 containerd[1909]: time="2025-06-20T18:54:51.678830846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8vf8d,Uid:f44253b7-fe0a-4654-9a51-dcae2fd2b846,Namespace:kube-system,Attempt:0,}" Jun 20 18:54:51.718974 containerd[1909]: time="2025-06-20T18:54:51.718860113Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:54:51.719994 containerd[1909]: time="2025-06-20T18:54:51.718918314Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:54:51.719994 containerd[1909]: time="2025-06-20T18:54:51.719952135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:54:51.721602 containerd[1909]: time="2025-06-20T18:54:51.721529446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:54:51.723881 containerd[1909]: time="2025-06-20T18:54:51.723778161Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:54:51.723881 containerd[1909]: time="2025-06-20T18:54:51.723857315Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:54:51.725409 containerd[1909]: time="2025-06-20T18:54:51.723872670Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:54:51.725409 containerd[1909]: time="2025-06-20T18:54:51.723970335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:54:51.764974 systemd[1]: Started cri-containerd-3c04309e24a0e95311a61878825039691e625a29e530f732124a5673c465cd43.scope - libcontainer container 3c04309e24a0e95311a61878825039691e625a29e530f732124a5673c465cd43. Jun 20 18:54:51.771577 systemd[1]: Started cri-containerd-3cb319c09014817fb3bd85dc35187281443eceffaeeb37cb7f5aa0b2771a98aa.scope - libcontainer container 3cb319c09014817fb3bd85dc35187281443eceffaeeb37cb7f5aa0b2771a98aa. 
Jun 20 18:54:51.820592 containerd[1909]: time="2025-06-20T18:54:51.820461350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8vf8d,Uid:f44253b7-fe0a-4654-9a51-dcae2fd2b846,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c04309e24a0e95311a61878825039691e625a29e530f732124a5673c465cd43\"" Jun 20 18:54:51.828358 containerd[1909]: time="2025-06-20T18:54:51.828308468Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jun 20 18:54:51.834041 containerd[1909]: time="2025-06-20T18:54:51.834000308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j2ptm,Uid:62f90734-0597-4283-af18-c005502b9f8c,Namespace:kube-system,Attempt:0,} returns sandbox id \"3cb319c09014817fb3bd85dc35187281443eceffaeeb37cb7f5aa0b2771a98aa\"" Jun 20 18:54:51.847924 containerd[1909]: time="2025-06-20T18:54:51.847778885Z" level=info msg="CreateContainer within sandbox \"3cb319c09014817fb3bd85dc35187281443eceffaeeb37cb7f5aa0b2771a98aa\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 20 18:54:51.884993 containerd[1909]: time="2025-06-20T18:54:51.884944110Z" level=info msg="CreateContainer within sandbox \"3cb319c09014817fb3bd85dc35187281443eceffaeeb37cb7f5aa0b2771a98aa\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"954becddc6f385513881cf45ae2b2999b745ba05b34aca5e142ede043ae1745e\"" Jun 20 18:54:51.888139 containerd[1909]: time="2025-06-20T18:54:51.885561155Z" level=info msg="StartContainer for \"954becddc6f385513881cf45ae2b2999b745ba05b34aca5e142ede043ae1745e\"" Jun 20 18:54:51.918463 systemd[1]: Started cri-containerd-954becddc6f385513881cf45ae2b2999b745ba05b34aca5e142ede043ae1745e.scope - libcontainer container 954becddc6f385513881cf45ae2b2999b745ba05b34aca5e142ede043ae1745e. 
Jun 20 18:54:51.952771 containerd[1909]: time="2025-06-20T18:54:51.952740053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-f54mx,Uid:45a57080-2381-4156-a161-c9089a596171,Namespace:kube-system,Attempt:0,}" Jun 20 18:54:51.953447 containerd[1909]: time="2025-06-20T18:54:51.953423873Z" level=info msg="StartContainer for \"954becddc6f385513881cf45ae2b2999b745ba05b34aca5e142ede043ae1745e\" returns successfully" Jun 20 18:54:51.988104 containerd[1909]: time="2025-06-20T18:54:51.987767408Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:54:51.988104 containerd[1909]: time="2025-06-20T18:54:51.987820015Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:54:51.988104 containerd[1909]: time="2025-06-20T18:54:51.987830433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:54:51.988104 containerd[1909]: time="2025-06-20T18:54:51.987911244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:54:52.010460 systemd[1]: Started cri-containerd-0fa940997afd53d01e427a0ef732a99dd2e1097641bc2b983d4502b3738d5082.scope - libcontainer container 0fa940997afd53d01e427a0ef732a99dd2e1097641bc2b983d4502b3738d5082. 
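Each containerd `returns sandbox id` line above is followed by systemd starting a matching `cri-containerd-<id>.scope` unit. A sketch pairing the two across journal lines; the shortened sample lines and the pairing logic are ours, while the 64-hex-digit id and the scope-name pattern come straight from the log:

```python
import re

# Pair "returns sandbox id" entries with the "Started cri-containerd-
# <id>.scope" units systemd logs for them (sketch over sample lines).
log = [
    'RunPodSandbox ... returns sandbox id "0fa940997afd53d01e427a0ef732a99dd2e1097641bc2b983d4502b3738d5082"',
    'Started cri-containerd-0fa940997afd53d01e427a0ef732a99dd2e1097641bc2b983d4502b3738d5082.scope - libcontainer container ...',
]
sandboxes, scopes = set(), set()
for line in log:
    m = re.search(r'returns sandbox id "([0-9a-f]{64})"', line)
    if m:
        sandboxes.add(m.group(1))
    m = re.search(r'cri-containerd-([0-9a-f]{64})\.scope', line)
    if m:
        scopes.add(m.group(1))
print(sandboxes & scopes)  # sandbox ids that already have a scope unit
```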
Jun 20 18:54:52.058147 containerd[1909]: time="2025-06-20T18:54:52.058117409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-f54mx,Uid:45a57080-2381-4156-a161-c9089a596171,Namespace:kube-system,Attempt:0,} returns sandbox id \"0fa940997afd53d01e427a0ef732a99dd2e1097641bc2b983d4502b3738d5082\"" Jun 20 18:54:52.398641 kubelet[3182]: I0620 18:54:52.398583 3182 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-j2ptm" podStartSLOduration=1.3985674559999999 podStartE2EDuration="1.398567456s" podCreationTimestamp="2025-06-20 18:54:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:54:52.398422608 +0000 UTC m=+7.284150094" watchObservedRunningTime="2025-06-20 18:54:52.398567456 +0000 UTC m=+7.284294941" Jun 20 18:54:53.064392 update_engine[1891]: I20250620 18:54:53.064281 1891 update_attempter.cc:509] Updating boot flags... Jun 20 18:54:53.225299 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3434) Jun 20 18:54:53.695351 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3425) Jun 20 18:54:54.145334 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3425) Jun 20 18:54:57.631555 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3028832515.mount: Deactivated successfully. 
Jun 20 18:55:00.210891 containerd[1909]: time="2025-06-20T18:55:00.210803662Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:55:00.212477 containerd[1909]: time="2025-06-20T18:55:00.212382333Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Jun 20 18:55:00.214130 containerd[1909]: time="2025-06-20T18:55:00.213836264Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:55:00.215684 containerd[1909]: time="2025-06-20T18:55:00.215634528Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.387241109s"
Jun 20 18:55:00.215949 containerd[1909]: time="2025-06-20T18:55:00.215827525Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Jun 20 18:55:00.217564 containerd[1909]: time="2025-06-20T18:55:00.217398357Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jun 20 18:55:00.222659 containerd[1909]: time="2025-06-20T18:55:00.221703158Z" level=info msg="CreateContainer within sandbox \"3c04309e24a0e95311a61878825039691e625a29e530f732124a5673c465cd43\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jun 20 18:55:00.353000 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount218387445.mount: Deactivated successfully.
Jun 20 18:55:00.359695 containerd[1909]: time="2025-06-20T18:55:00.359650167Z" level=info msg="CreateContainer within sandbox \"3c04309e24a0e95311a61878825039691e625a29e530f732124a5673c465cd43\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8d914835cde1489c1ff4efd482514d532568f029bd819b7df36f7ac1aee8076b\""
Jun 20 18:55:00.361156 containerd[1909]: time="2025-06-20T18:55:00.360264569Z" level=info msg="StartContainer for \"8d914835cde1489c1ff4efd482514d532568f029bd819b7df36f7ac1aee8076b\""
Jun 20 18:55:00.496525 systemd[1]: Started cri-containerd-8d914835cde1489c1ff4efd482514d532568f029bd819b7df36f7ac1aee8076b.scope - libcontainer container 8d914835cde1489c1ff4efd482514d532568f029bd819b7df36f7ac1aee8076b.
Jun 20 18:55:00.528955 containerd[1909]: time="2025-06-20T18:55:00.528915041Z" level=info msg="StartContainer for \"8d914835cde1489c1ff4efd482514d532568f029bd819b7df36f7ac1aee8076b\" returns successfully"
Jun 20 18:55:00.544701 systemd[1]: cri-containerd-8d914835cde1489c1ff4efd482514d532568f029bd819b7df36f7ac1aee8076b.scope: Deactivated successfully.
Jun 20 18:55:00.829180 containerd[1909]: time="2025-06-20T18:55:00.821672548Z" level=info msg="shim disconnected" id=8d914835cde1489c1ff4efd482514d532568f029bd819b7df36f7ac1aee8076b namespace=k8s.io
Jun 20 18:55:00.829180 containerd[1909]: time="2025-06-20T18:55:00.829179379Z" level=warning msg="cleaning up after shim disconnected" id=8d914835cde1489c1ff4efd482514d532568f029bd819b7df36f7ac1aee8076b namespace=k8s.io
Jun 20 18:55:00.829441 containerd[1909]: time="2025-06-20T18:55:00.829196053Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 18:55:01.344515 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d914835cde1489c1ff4efd482514d532568f029bd819b7df36f7ac1aee8076b-rootfs.mount: Deactivated successfully.
Jun 20 18:55:01.686326 containerd[1909]: time="2025-06-20T18:55:01.685785068Z" level=info msg="CreateContainer within sandbox \"3c04309e24a0e95311a61878825039691e625a29e530f732124a5673c465cd43\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jun 20 18:55:01.714007 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2506946705.mount: Deactivated successfully.
Jun 20 18:55:01.744515 containerd[1909]: time="2025-06-20T18:55:01.744261245Z" level=info msg="CreateContainer within sandbox \"3c04309e24a0e95311a61878825039691e625a29e530f732124a5673c465cd43\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3f7b4fb523b8407a4c2f3ace34c95a4710ff0a231785b5f3c15ed448e182c677\""
Jun 20 18:55:01.747501 containerd[1909]: time="2025-06-20T18:55:01.746327644Z" level=info msg="StartContainer for \"3f7b4fb523b8407a4c2f3ace34c95a4710ff0a231785b5f3c15ed448e182c677\""
Jun 20 18:55:01.766128 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount789031446.mount: Deactivated successfully.
Jun 20 18:55:01.805081 systemd[1]: Started cri-containerd-3f7b4fb523b8407a4c2f3ace34c95a4710ff0a231785b5f3c15ed448e182c677.scope - libcontainer container 3f7b4fb523b8407a4c2f3ace34c95a4710ff0a231785b5f3c15ed448e182c677.
Jun 20 18:55:01.907295 containerd[1909]: time="2025-06-20T18:55:01.905868201Z" level=info msg="StartContainer for \"3f7b4fb523b8407a4c2f3ace34c95a4710ff0a231785b5f3c15ed448e182c677\" returns successfully"
Jun 20 18:55:01.988578 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jun 20 18:55:01.988955 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jun 20 18:55:01.989169 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jun 20 18:55:01.999073 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 20 18:55:02.004231 systemd[1]: cri-containerd-3f7b4fb523b8407a4c2f3ace34c95a4710ff0a231785b5f3c15ed448e182c677.scope: Deactivated successfully.
Jun 20 18:55:02.006043 systemd[1]: cri-containerd-3f7b4fb523b8407a4c2f3ace34c95a4710ff0a231785b5f3c15ed448e182c677.scope: Consumed 31ms CPU time, 5.8M memory peak, 16K read from disk, 2.2M written to disk.
Jun 20 18:55:02.062484 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 20 18:55:02.089120 containerd[1909]: time="2025-06-20T18:55:02.089039250Z" level=info msg="shim disconnected" id=3f7b4fb523b8407a4c2f3ace34c95a4710ff0a231785b5f3c15ed448e182c677 namespace=k8s.io
Jun 20 18:55:02.089120 containerd[1909]: time="2025-06-20T18:55:02.089120519Z" level=warning msg="cleaning up after shim disconnected" id=3f7b4fb523b8407a4c2f3ace34c95a4710ff0a231785b5f3c15ed448e182c677 namespace=k8s.io
Jun 20 18:55:02.090006 containerd[1909]: time="2025-06-20T18:55:02.089131732Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 18:55:02.116939 containerd[1909]: time="2025-06-20T18:55:02.116614646Z" level=warning msg="cleanup warnings time=\"2025-06-20T18:55:02Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jun 20 18:55:02.685103 containerd[1909]: time="2025-06-20T18:55:02.685046682Z" level=info msg="CreateContainer within sandbox \"3c04309e24a0e95311a61878825039691e625a29e530f732124a5673c465cd43\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jun 20 18:55:02.785405 containerd[1909]: time="2025-06-20T18:55:02.784497809Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:55:02.789119 containerd[1909]: time="2025-06-20T18:55:02.789049915Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Jun 20 18:55:02.838973 containerd[1909]: time="2025-06-20T18:55:02.838895452Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:55:02.859319 containerd[1909]: time="2025-06-20T18:55:02.859260663Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.641793196s"
Jun 20 18:55:02.859319 containerd[1909]: time="2025-06-20T18:55:02.859307352Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jun 20 18:55:02.869488 containerd[1909]: time="2025-06-20T18:55:02.869443711Z" level=info msg="CreateContainer within sandbox \"3c04309e24a0e95311a61878825039691e625a29e530f732124a5673c465cd43\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b74cac0e19239fcb4c8f182aabfc4f3d1d3a6ba40fefef1104f50b042f422843\""
Jun 20 18:55:02.870825 containerd[1909]: time="2025-06-20T18:55:02.870717278Z" level=info msg="StartContainer for \"b74cac0e19239fcb4c8f182aabfc4f3d1d3a6ba40fefef1104f50b042f422843\""
Jun 20 18:55:02.880076 containerd[1909]: time="2025-06-20T18:55:02.880024900Z" level=info msg="CreateContainer within sandbox \"0fa940997afd53d01e427a0ef732a99dd2e1097641bc2b983d4502b3738d5082\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jun 20 18:55:02.927504 systemd[1]: Started cri-containerd-b74cac0e19239fcb4c8f182aabfc4f3d1d3a6ba40fefef1104f50b042f422843.scope - libcontainer container b74cac0e19239fcb4c8f182aabfc4f3d1d3a6ba40fefef1104f50b042f422843.
Jun 20 18:55:02.943994 containerd[1909]: time="2025-06-20T18:55:02.943803552Z" level=info msg="CreateContainer within sandbox \"0fa940997afd53d01e427a0ef732a99dd2e1097641bc2b983d4502b3738d5082\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e2ada8034a6b6245701aff21724055b339364236eae300be31b134b836808520\""
Jun 20 18:55:02.947613 containerd[1909]: time="2025-06-20T18:55:02.945120048Z" level=info msg="StartContainer for \"e2ada8034a6b6245701aff21724055b339364236eae300be31b134b836808520\""
Jun 20 18:55:03.005372 systemd[1]: Started cri-containerd-e2ada8034a6b6245701aff21724055b339364236eae300be31b134b836808520.scope - libcontainer container e2ada8034a6b6245701aff21724055b339364236eae300be31b134b836808520.
Jun 20 18:55:03.029750 containerd[1909]: time="2025-06-20T18:55:03.027759427Z" level=info msg="StartContainer for \"b74cac0e19239fcb4c8f182aabfc4f3d1d3a6ba40fefef1104f50b042f422843\" returns successfully"
Jun 20 18:55:03.037308 systemd[1]: cri-containerd-b74cac0e19239fcb4c8f182aabfc4f3d1d3a6ba40fefef1104f50b042f422843.scope: Deactivated successfully.
Jun 20 18:55:03.087937 containerd[1909]: time="2025-06-20T18:55:03.087825810Z" level=info msg="shim disconnected" id=b74cac0e19239fcb4c8f182aabfc4f3d1d3a6ba40fefef1104f50b042f422843 namespace=k8s.io
Jun 20 18:55:03.087937 containerd[1909]: time="2025-06-20T18:55:03.087908534Z" level=warning msg="cleaning up after shim disconnected" id=b74cac0e19239fcb4c8f182aabfc4f3d1d3a6ba40fefef1104f50b042f422843 namespace=k8s.io
Jun 20 18:55:03.087937 containerd[1909]: time="2025-06-20T18:55:03.087921194Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 18:55:03.091516 containerd[1909]: time="2025-06-20T18:55:03.091125928Z" level=info msg="StartContainer for \"e2ada8034a6b6245701aff21724055b339364236eae300be31b134b836808520\" returns successfully"
Jun 20 18:55:03.351355 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b74cac0e19239fcb4c8f182aabfc4f3d1d3a6ba40fefef1104f50b042f422843-rootfs.mount: Deactivated successfully.
Jun 20 18:55:03.699283 containerd[1909]: time="2025-06-20T18:55:03.697577287Z" level=info msg="CreateContainer within sandbox \"3c04309e24a0e95311a61878825039691e625a29e530f732124a5673c465cd43\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jun 20 18:55:03.718858 containerd[1909]: time="2025-06-20T18:55:03.718804809Z" level=info msg="CreateContainer within sandbox \"3c04309e24a0e95311a61878825039691e625a29e530f732124a5673c465cd43\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"04516003e9a457295ad410a29fc7837b734f5c42bbb50e7fb6f810d196f16124\""
Jun 20 18:55:03.719675 containerd[1909]: time="2025-06-20T18:55:03.719641152Z" level=info msg="StartContainer for \"04516003e9a457295ad410a29fc7837b734f5c42bbb50e7fb6f810d196f16124\""
Jun 20 18:55:03.790549 systemd[1]: Started cri-containerd-04516003e9a457295ad410a29fc7837b734f5c42bbb50e7fb6f810d196f16124.scope - libcontainer container 04516003e9a457295ad410a29fc7837b734f5c42bbb50e7fb6f810d196f16124.
Jun 20 18:55:03.894270 containerd[1909]: time="2025-06-20T18:55:03.892978505Z" level=info msg="StartContainer for \"04516003e9a457295ad410a29fc7837b734f5c42bbb50e7fb6f810d196f16124\" returns successfully"
Jun 20 18:55:03.896698 systemd[1]: cri-containerd-04516003e9a457295ad410a29fc7837b734f5c42bbb50e7fb6f810d196f16124.scope: Deactivated successfully.
Jun 20 18:55:03.943817 containerd[1909]: time="2025-06-20T18:55:03.943399467Z" level=info msg="shim disconnected" id=04516003e9a457295ad410a29fc7837b734f5c42bbb50e7fb6f810d196f16124 namespace=k8s.io
Jun 20 18:55:03.943817 containerd[1909]: time="2025-06-20T18:55:03.943459922Z" level=warning msg="cleaning up after shim disconnected" id=04516003e9a457295ad410a29fc7837b734f5c42bbb50e7fb6f810d196f16124 namespace=k8s.io
Jun 20 18:55:03.943817 containerd[1909]: time="2025-06-20T18:55:03.943471161Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 18:55:03.973099 containerd[1909]: time="2025-06-20T18:55:03.972946061Z" level=warning msg="cleanup warnings time=\"2025-06-20T18:55:03Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jun 20 18:55:04.033517 kubelet[3182]: I0620 18:55:04.031372 3182 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-f54mx" podStartSLOduration=2.230648854 podStartE2EDuration="13.031349114s" podCreationTimestamp="2025-06-20 18:54:51 +0000 UTC" firstStartedPulling="2025-06-20 18:54:52.059555905 +0000 UTC m=+6.945283378" lastFinishedPulling="2025-06-20 18:55:02.860256163 +0000 UTC m=+17.745983638" observedRunningTime="2025-06-20 18:55:03.821596012 +0000 UTC m=+18.707323514" watchObservedRunningTime="2025-06-20 18:55:04.031349114 +0000 UTC m=+18.917076599"
Jun 20 18:55:04.348725 systemd[1]: run-containerd-runc-k8s.io-04516003e9a457295ad410a29fc7837b734f5c42bbb50e7fb6f810d196f16124-runc.R12CgZ.mount: Deactivated successfully.
Jun 20 18:55:04.348880 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-04516003e9a457295ad410a29fc7837b734f5c42bbb50e7fb6f810d196f16124-rootfs.mount: Deactivated successfully.
Jun 20 18:55:04.690732 containerd[1909]: time="2025-06-20T18:55:04.690587411Z" level=info msg="CreateContainer within sandbox \"3c04309e24a0e95311a61878825039691e625a29e530f732124a5673c465cd43\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jun 20 18:55:04.715680 containerd[1909]: time="2025-06-20T18:55:04.715626352Z" level=info msg="CreateContainer within sandbox \"3c04309e24a0e95311a61878825039691e625a29e530f732124a5673c465cd43\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e162df7544ecbcc702a04a5f0585b036ab2141af9a5d324031c4d9686cdd9e6a\""
Jun 20 18:55:04.717019 containerd[1909]: time="2025-06-20T18:55:04.716980060Z" level=info msg="StartContainer for \"e162df7544ecbcc702a04a5f0585b036ab2141af9a5d324031c4d9686cdd9e6a\""
Jun 20 18:55:04.766448 systemd[1]: Started cri-containerd-e162df7544ecbcc702a04a5f0585b036ab2141af9a5d324031c4d9686cdd9e6a.scope - libcontainer container e162df7544ecbcc702a04a5f0585b036ab2141af9a5d324031c4d9686cdd9e6a.
Jun 20 18:55:04.808869 containerd[1909]: time="2025-06-20T18:55:04.808723109Z" level=info msg="StartContainer for \"e162df7544ecbcc702a04a5f0585b036ab2141af9a5d324031c4d9686cdd9e6a\" returns successfully"
Jun 20 18:55:05.096820 kubelet[3182]: I0620 18:55:05.096546 3182 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jun 20 18:55:05.181858 systemd[1]: Created slice kubepods-burstable-pod6090e820_2d58_4a4e_b52b_ac069c4c80d1.slice - libcontainer container kubepods-burstable-pod6090e820_2d58_4a4e_b52b_ac069c4c80d1.slice.
Jun 20 18:55:05.194658 systemd[1]: Created slice kubepods-burstable-podfe60299c_5295_4a93_b557_ffc4e0c7ace7.slice - libcontainer container kubepods-burstable-podfe60299c_5295_4a93_b557_ffc4e0c7ace7.slice.
Jun 20 18:55:05.296978 kubelet[3182]: I0620 18:55:05.296882 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe60299c-5295-4a93-b557-ffc4e0c7ace7-config-volume\") pod \"coredns-674b8bbfcf-9q6q8\" (UID: \"fe60299c-5295-4a93-b557-ffc4e0c7ace7\") " pod="kube-system/coredns-674b8bbfcf-9q6q8"
Jun 20 18:55:05.296978 kubelet[3182]: I0620 18:55:05.296926 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6090e820-2d58-4a4e-b52b-ac069c4c80d1-config-volume\") pod \"coredns-674b8bbfcf-g9v2p\" (UID: \"6090e820-2d58-4a4e-b52b-ac069c4c80d1\") " pod="kube-system/coredns-674b8bbfcf-g9v2p"
Jun 20 18:55:05.296978 kubelet[3182]: I0620 18:55:05.296953 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sftpz\" (UniqueName: \"kubernetes.io/projected/6090e820-2d58-4a4e-b52b-ac069c4c80d1-kube-api-access-sftpz\") pod \"coredns-674b8bbfcf-g9v2p\" (UID: \"6090e820-2d58-4a4e-b52b-ac069c4c80d1\") " pod="kube-system/coredns-674b8bbfcf-g9v2p"
Jun 20 18:55:05.296978 kubelet[3182]: I0620 18:55:05.296970 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh2gn\" (UniqueName: \"kubernetes.io/projected/fe60299c-5295-4a93-b557-ffc4e0c7ace7-kube-api-access-vh2gn\") pod \"coredns-674b8bbfcf-9q6q8\" (UID: \"fe60299c-5295-4a93-b557-ffc4e0c7ace7\") " pod="kube-system/coredns-674b8bbfcf-9q6q8"
Jun 20 18:55:05.345197 systemd[1]: run-containerd-runc-k8s.io-e162df7544ecbcc702a04a5f0585b036ab2141af9a5d324031c4d9686cdd9e6a-runc.Z6NNe6.mount: Deactivated successfully.
Jun 20 18:55:05.492995 containerd[1909]: time="2025-06-20T18:55:05.492892254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-g9v2p,Uid:6090e820-2d58-4a4e-b52b-ac069c4c80d1,Namespace:kube-system,Attempt:0,}"
Jun 20 18:55:05.501612 containerd[1909]: time="2025-06-20T18:55:05.501573011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9q6q8,Uid:fe60299c-5295-4a93-b557-ffc4e0c7ace7,Namespace:kube-system,Attempt:0,}"
Jun 20 18:55:07.696533 systemd-networkd[1820]: cilium_host: Link UP
Jun 20 18:55:07.696655 systemd-networkd[1820]: cilium_net: Link UP
Jun 20 18:55:07.696659 systemd-networkd[1820]: cilium_net: Gained carrier
Jun 20 18:55:07.696821 systemd-networkd[1820]: cilium_host: Gained carrier
Jun 20 18:55:07.698027 (udev-worker)[4257]: Network interface NamePolicy= disabled on kernel command line.
Jun 20 18:55:07.699152 (udev-worker)[4292]: Network interface NamePolicy= disabled on kernel command line.
Jun 20 18:55:07.862453 (udev-worker)[4299]: Network interface NamePolicy= disabled on kernel command line.
Jun 20 18:55:07.873540 systemd-networkd[1820]: cilium_vxlan: Link UP
Jun 20 18:55:07.873551 systemd-networkd[1820]: cilium_vxlan: Gained carrier
Jun 20 18:55:08.186407 systemd-networkd[1820]: cilium_net: Gained IPv6LL
Jun 20 18:55:08.274786 systemd-networkd[1820]: cilium_host: Gained IPv6LL
Jun 20 18:55:08.527421 kernel: NET: Registered PF_ALG protocol family
Jun 20 18:55:08.978769 systemd-networkd[1820]: cilium_vxlan: Gained IPv6LL
Jun 20 18:55:09.370945 systemd-networkd[1820]: lxc_health: Link UP
Jun 20 18:55:09.379178 systemd-networkd[1820]: lxc_health: Gained carrier
Jun 20 18:55:09.613142 systemd-networkd[1820]: lxc368108088595: Link UP
Jun 20 18:55:09.620069 kernel: eth0: renamed from tmp0a80c
Jun 20 18:55:09.630212 systemd-networkd[1820]: lxc368108088595: Gained carrier
Jun 20 18:55:09.667975 systemd-networkd[1820]: lxc9643d312887f: Link UP
Jun 20 18:55:09.671258 kernel: eth0: renamed from tmp03242
Jun 20 18:55:09.677451 systemd-networkd[1820]: lxc9643d312887f: Gained carrier
Jun 20 18:55:09.736418 kubelet[3182]: I0620 18:55:09.735789 3182 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8vf8d" podStartSLOduration=10.342627723 podStartE2EDuration="18.735763756s" podCreationTimestamp="2025-06-20 18:54:51 +0000 UTC" firstStartedPulling="2025-06-20 18:54:51.823984948 +0000 UTC m=+6.709712425" lastFinishedPulling="2025-06-20 18:55:00.217120994 +0000 UTC m=+15.102848458" observedRunningTime="2025-06-20 18:55:05.743932668 +0000 UTC m=+20.629660156" watchObservedRunningTime="2025-06-20 18:55:09.735763756 +0000 UTC m=+24.621491246"
Jun 20 18:55:10.450391 systemd-networkd[1820]: lxc_health: Gained IPv6LL
Jun 20 18:55:10.962472 systemd-networkd[1820]: lxc9643d312887f: Gained IPv6LL
Jun 20 18:55:11.346692 systemd-networkd[1820]: lxc368108088595: Gained IPv6LL
Jun 20 18:55:12.743466 kubelet[3182]: I0620 18:55:12.743413 3182 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jun 20 18:55:13.642991 ntpd[1881]: Listen normally on 8 cilium_host 192.168.0.79:123
Jun 20 18:55:13.644398 ntpd[1881]: 20 Jun 18:55:13 ntpd[1881]: Listen normally on 8 cilium_host 192.168.0.79:123
Jun 20 18:55:13.644398 ntpd[1881]: 20 Jun 18:55:13 ntpd[1881]: Listen normally on 9 cilium_net [fe80::984c:8aff:fe2f:e514%4]:123
Jun 20 18:55:13.644398 ntpd[1881]: 20 Jun 18:55:13 ntpd[1881]: Listen normally on 10 cilium_host [fe80::80e0:80ff:fe60:b746%5]:123
Jun 20 18:55:13.644398 ntpd[1881]: 20 Jun 18:55:13 ntpd[1881]: Listen normally on 11 cilium_vxlan [fe80::5002:88ff:fe3f:1b9c%6]:123
Jun 20 18:55:13.644398 ntpd[1881]: 20 Jun 18:55:13 ntpd[1881]: Listen normally on 12 lxc_health [fe80::845b:d3ff:fea4:7430%8]:123
Jun 20 18:55:13.644398 ntpd[1881]: 20 Jun 18:55:13 ntpd[1881]: Listen normally on 13 lxc368108088595 [fe80::e0b6:30ff:fe1a:503f%10]:123
Jun 20 18:55:13.644398 ntpd[1881]: 20 Jun 18:55:13 ntpd[1881]: Listen normally on 14 lxc9643d312887f [fe80::c032:16ff:fe42:2392%12]:123
Jun 20 18:55:13.643088 ntpd[1881]: Listen normally on 9 cilium_net [fe80::984c:8aff:fe2f:e514%4]:123
Jun 20 18:55:13.643146 ntpd[1881]: Listen normally on 10 cilium_host [fe80::80e0:80ff:fe60:b746%5]:123
Jun 20 18:55:13.643189 ntpd[1881]: Listen normally on 11 cilium_vxlan [fe80::5002:88ff:fe3f:1b9c%6]:123
Jun 20 18:55:13.643269 ntpd[1881]: Listen normally on 12 lxc_health [fe80::845b:d3ff:fea4:7430%8]:123
Jun 20 18:55:13.643312 ntpd[1881]: Listen normally on 13 lxc368108088595 [fe80::e0b6:30ff:fe1a:503f%10]:123
Jun 20 18:55:13.643355 ntpd[1881]: Listen normally on 14 lxc9643d312887f [fe80::c032:16ff:fe42:2392%12]:123
Jun 20 18:55:14.351425 containerd[1909]: time="2025-06-20T18:55:14.350921542Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 20 18:55:14.351425 containerd[1909]: time="2025-06-20T18:55:14.351029634Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 20 18:55:14.351425 containerd[1909]: time="2025-06-20T18:55:14.351053233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 20 18:55:14.351425 containerd[1909]: time="2025-06-20T18:55:14.351182777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 20 18:55:14.424502 systemd[1]: Started cri-containerd-0a80ce9140f1f8794602b21a8275e8b70b0cb967f526d10815268f2503adbaf9.scope - libcontainer container 0a80ce9140f1f8794602b21a8275e8b70b0cb967f526d10815268f2503adbaf9.
Jun 20 18:55:14.465636 containerd[1909]: time="2025-06-20T18:55:14.465459815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 20 18:55:14.465636 containerd[1909]: time="2025-06-20T18:55:14.465544946Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 20 18:55:14.465636 containerd[1909]: time="2025-06-20T18:55:14.465562919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 20 18:55:14.467100 containerd[1909]: time="2025-06-20T18:55:14.465674346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 20 18:55:14.527863 systemd[1]: Started cri-containerd-03242c8e2ca349389de5f412b316d11cbd2eec4fdaac75ba5bd0f4b22e9c531b.scope - libcontainer container 03242c8e2ca349389de5f412b316d11cbd2eec4fdaac75ba5bd0f4b22e9c531b.
Jun 20 18:55:14.579626 containerd[1909]: time="2025-06-20T18:55:14.579582871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9q6q8,Uid:fe60299c-5295-4a93-b557-ffc4e0c7ace7,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a80ce9140f1f8794602b21a8275e8b70b0cb967f526d10815268f2503adbaf9\""
Jun 20 18:55:14.586827 containerd[1909]: time="2025-06-20T18:55:14.586750469Z" level=info msg="CreateContainer within sandbox \"0a80ce9140f1f8794602b21a8275e8b70b0cb967f526d10815268f2503adbaf9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jun 20 18:55:14.634405 containerd[1909]: time="2025-06-20T18:55:14.633424568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-g9v2p,Uid:6090e820-2d58-4a4e-b52b-ac069c4c80d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"03242c8e2ca349389de5f412b316d11cbd2eec4fdaac75ba5bd0f4b22e9c531b\""
Jun 20 18:55:14.641344 containerd[1909]: time="2025-06-20T18:55:14.641264949Z" level=info msg="CreateContainer within sandbox \"03242c8e2ca349389de5f412b316d11cbd2eec4fdaac75ba5bd0f4b22e9c531b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jun 20 18:55:14.759626 containerd[1909]: time="2025-06-20T18:55:14.759572447Z" level=info msg="CreateContainer within sandbox \"03242c8e2ca349389de5f412b316d11cbd2eec4fdaac75ba5bd0f4b22e9c531b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a1dd5153c735af4e7e46e04ce3b1931915a482146924b18d16f0a3f2fbcace9a\""
Jun 20 18:55:14.761447 containerd[1909]: time="2025-06-20T18:55:14.760614385Z" level=info msg="StartContainer for \"a1dd5153c735af4e7e46e04ce3b1931915a482146924b18d16f0a3f2fbcace9a\""
Jun 20 18:55:14.761811 containerd[1909]: time="2025-06-20T18:55:14.761785709Z" level=info msg="CreateContainer within sandbox \"0a80ce9140f1f8794602b21a8275e8b70b0cb967f526d10815268f2503adbaf9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e4462a9160505ac7df528377b2785337b9cae4019c04471f3c66a900cc0da853\""
Jun 20 18:55:14.762262 containerd[1909]: time="2025-06-20T18:55:14.762226985Z" level=info msg="StartContainer for \"e4462a9160505ac7df528377b2785337b9cae4019c04471f3c66a900cc0da853\""
Jun 20 18:55:14.809266 systemd[1]: Started cri-containerd-a1dd5153c735af4e7e46e04ce3b1931915a482146924b18d16f0a3f2fbcace9a.scope - libcontainer container a1dd5153c735af4e7e46e04ce3b1931915a482146924b18d16f0a3f2fbcace9a.
Jun 20 18:55:14.813601 systemd[1]: Started cri-containerd-e4462a9160505ac7df528377b2785337b9cae4019c04471f3c66a900cc0da853.scope - libcontainer container e4462a9160505ac7df528377b2785337b9cae4019c04471f3c66a900cc0da853.
Jun 20 18:55:14.870405 containerd[1909]: time="2025-06-20T18:55:14.870356101Z" level=info msg="StartContainer for \"a1dd5153c735af4e7e46e04ce3b1931915a482146924b18d16f0a3f2fbcace9a\" returns successfully"
Jun 20 18:55:14.879961 containerd[1909]: time="2025-06-20T18:55:14.879916115Z" level=info msg="StartContainer for \"e4462a9160505ac7df528377b2785337b9cae4019c04471f3c66a900cc0da853\" returns successfully"
Jun 20 18:55:15.767512 kubelet[3182]: I0620 18:55:15.766700 3182 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-9q6q8" podStartSLOduration=24.76668246 podStartE2EDuration="24.76668246s" podCreationTimestamp="2025-06-20 18:54:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:55:15.765371123 +0000 UTC m=+30.651098623" watchObservedRunningTime="2025-06-20 18:55:15.76668246 +0000 UTC m=+30.652409957"
Jun 20 18:55:17.794570 systemd[1]: Started sshd@7-172.31.22.222:22-139.178.68.195:32978.service - OpenSSH per-connection server daemon (139.178.68.195:32978).
Jun 20 18:55:18.007297 sshd[4820]: Accepted publickey for core from 139.178.68.195 port 32978 ssh2: RSA SHA256:sF0tjKSFADzF6g6JG756y/3bgw4kb0C1NHj6dI7T2go
Jun 20 18:55:18.010962 sshd-session[4820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:55:18.018519 systemd-logind[1889]: New session 8 of user core.
Jun 20 18:55:18.031494 systemd[1]: Started session-8.scope - Session 8 of User core.
Jun 20 18:55:18.942196 sshd[4822]: Connection closed by 139.178.68.195 port 32978
Jun 20 18:55:18.942891 sshd-session[4820]: pam_unix(sshd:session): session closed for user core
Jun 20 18:55:18.947292 systemd[1]: sshd@7-172.31.22.222:22-139.178.68.195:32978.service: Deactivated successfully.
Jun 20 18:55:18.949341 systemd[1]: session-8.scope: Deactivated successfully.
Jun 20 18:55:18.950679 systemd-logind[1889]: Session 8 logged out. Waiting for processes to exit.
Jun 20 18:55:18.951857 systemd-logind[1889]: Removed session 8.
Jun 20 18:55:23.983667 systemd[1]: Started sshd@8-172.31.22.222:22-139.178.68.195:46890.service - OpenSSH per-connection server daemon (139.178.68.195:46890).
Jun 20 18:55:24.181187 sshd[4844]: Accepted publickey for core from 139.178.68.195 port 46890 ssh2: RSA SHA256:sF0tjKSFADzF6g6JG756y/3bgw4kb0C1NHj6dI7T2go
Jun 20 18:55:24.182655 sshd-session[4844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:55:24.189177 systemd-logind[1889]: New session 9 of user core.
Jun 20 18:55:24.195477 systemd[1]: Started session-9.scope - Session 9 of User core.
Jun 20 18:55:24.436163 sshd[4846]: Connection closed by 139.178.68.195 port 46890
Jun 20 18:55:24.437225 sshd-session[4844]: pam_unix(sshd:session): session closed for user core
Jun 20 18:55:24.443329 systemd[1]: sshd@8-172.31.22.222:22-139.178.68.195:46890.service: Deactivated successfully.
Jun 20 18:55:24.445805 systemd[1]: session-9.scope: Deactivated successfully.
Jun 20 18:55:24.447772 systemd-logind[1889]: Session 9 logged out. Waiting for processes to exit.
Jun 20 18:55:24.449671 systemd-logind[1889]: Removed session 9.
Jun 20 18:55:27.790916 kubelet[3182]: I0620 18:55:27.790738 3182 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-g9v2p" podStartSLOduration=36.790716781 podStartE2EDuration="36.790716781s" podCreationTimestamp="2025-06-20 18:54:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:55:15.785756663 +0000 UTC m=+30.671484149" watchObservedRunningTime="2025-06-20 18:55:27.790716781 +0000 UTC m=+42.676444267"
Jun 20 18:55:29.482670 systemd[1]: Started sshd@9-172.31.22.222:22-139.178.68.195:46904.service - OpenSSH per-connection server daemon (139.178.68.195:46904).
Jun 20 18:55:29.650127 sshd[4865]: Accepted publickey for core from 139.178.68.195 port 46904 ssh2: RSA SHA256:sF0tjKSFADzF6g6JG756y/3bgw4kb0C1NHj6dI7T2go
Jun 20 18:55:29.651728 sshd-session[4865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:55:29.658330 systemd-logind[1889]: New session 10 of user core.
Jun 20 18:55:29.669528 systemd[1]: Started session-10.scope - Session 10 of User core.
Jun 20 18:55:29.878199 sshd[4867]: Connection closed by 139.178.68.195 port 46904
Jun 20 18:55:29.878896 sshd-session[4865]: pam_unix(sshd:session): session closed for user core
Jun 20 18:55:29.881718 systemd[1]: sshd@9-172.31.22.222:22-139.178.68.195:46904.service: Deactivated successfully.
Jun 20 18:55:29.883792 systemd[1]: session-10.scope: Deactivated successfully.
Jun 20 18:55:29.885451 systemd-logind[1889]: Session 10 logged out. Waiting for processes to exit.
Jun 20 18:55:29.886928 systemd-logind[1889]: Removed session 10.
Jun 20 18:55:34.925655 systemd[1]: Started sshd@10-172.31.22.222:22-139.178.68.195:48130.service - OpenSSH per-connection server daemon (139.178.68.195:48130).
Jun 20 18:55:35.108341 sshd[4880]: Accepted publickey for core from 139.178.68.195 port 48130 ssh2: RSA SHA256:sF0tjKSFADzF6g6JG756y/3bgw4kb0C1NHj6dI7T2go
Jun 20 18:55:35.109707 sshd-session[4880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:55:35.114637 systemd-logind[1889]: New session 11 of user core.
Jun 20 18:55:35.122479 systemd[1]: Started session-11.scope - Session 11 of User core.
Jun 20 18:55:35.328883 sshd[4882]: Connection closed by 139.178.68.195 port 48130
Jun 20 18:55:35.329531 sshd-session[4880]: pam_unix(sshd:session): session closed for user core
Jun 20 18:55:35.333792 systemd-logind[1889]: Session 11 logged out. Waiting for processes to exit.
Jun 20 18:55:35.334551 systemd[1]: sshd@10-172.31.22.222:22-139.178.68.195:48130.service: Deactivated successfully.
Jun 20 18:55:35.339003 systemd[1]: session-11.scope: Deactivated successfully.
Jun 20 18:55:35.340871 systemd-logind[1889]: Removed session 11.
Jun 20 18:55:35.370638 systemd[1]: Started sshd@11-172.31.22.222:22-139.178.68.195:48146.service - OpenSSH per-connection server daemon (139.178.68.195:48146).
Jun 20 18:55:35.540851 sshd[4895]: Accepted publickey for core from 139.178.68.195 port 48146 ssh2: RSA SHA256:sF0tjKSFADzF6g6JG756y/3bgw4kb0C1NHj6dI7T2go
Jun 20 18:55:35.542550 sshd-session[4895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:55:35.552324 systemd-logind[1889]: New session 12 of user core.
Jun 20 18:55:35.557465 systemd[1]: Started session-12.scope - Session 12 of User core.
Jun 20 18:55:35.891008 sshd[4897]: Connection closed by 139.178.68.195 port 48146
Jun 20 18:55:35.891455 sshd-session[4895]: pam_unix(sshd:session): session closed for user core
Jun 20 18:55:35.897468 systemd-logind[1889]: Session 12 logged out. Waiting for processes to exit.
Jun 20 18:55:35.898164 systemd[1]: sshd@11-172.31.22.222:22-139.178.68.195:48146.service: Deactivated successfully.
Jun 20 18:55:35.901945 systemd[1]: session-12.scope: Deactivated successfully.
Jun 20 18:55:35.903990 systemd-logind[1889]: Removed session 12.
Jun 20 18:55:35.933589 systemd[1]: Started sshd@12-172.31.22.222:22-139.178.68.195:48152.service - OpenSSH per-connection server daemon (139.178.68.195:48152).
Jun 20 18:55:36.094437 sshd[4908]: Accepted publickey for core from 139.178.68.195 port 48152 ssh2: RSA SHA256:sF0tjKSFADzF6g6JG756y/3bgw4kb0C1NHj6dI7T2go
Jun 20 18:55:36.096027 sshd-session[4908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:55:36.100843 systemd-logind[1889]: New session 13 of user core.
Jun 20 18:55:36.108484 systemd[1]: Started session-13.scope - Session 13 of User core.
Jun 20 18:55:36.297078 sshd[4910]: Connection closed by 139.178.68.195 port 48152
Jun 20 18:55:36.297710 sshd-session[4908]: pam_unix(sshd:session): session closed for user core
Jun 20 18:55:36.301109 systemd[1]: sshd@12-172.31.22.222:22-139.178.68.195:48152.service: Deactivated successfully.
Jun 20 18:55:36.303120 systemd[1]: session-13.scope: Deactivated successfully.
Jun 20 18:55:36.304608 systemd-logind[1889]: Session 13 logged out. Waiting for processes to exit.
Jun 20 18:55:36.306007 systemd-logind[1889]: Removed session 13.
Jun 20 18:55:41.339605 systemd[1]: Started sshd@13-172.31.22.222:22-139.178.68.195:48158.service - OpenSSH per-connection server daemon (139.178.68.195:48158).
Jun 20 18:55:41.500388 sshd[4922]: Accepted publickey for core from 139.178.68.195 port 48158 ssh2: RSA SHA256:sF0tjKSFADzF6g6JG756y/3bgw4kb0C1NHj6dI7T2go
Jun 20 18:55:41.502025 sshd-session[4922]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:55:41.507170 systemd-logind[1889]: New session 14 of user core.
Jun 20 18:55:41.510528 systemd[1]: Started session-14.scope - Session 14 of User core.
Jun 20 18:55:41.699616 sshd[4924]: Connection closed by 139.178.68.195 port 48158
Jun 20 18:55:41.699987 sshd-session[4922]: pam_unix(sshd:session): session closed for user core
Jun 20 18:55:41.704938 systemd[1]: sshd@13-172.31.22.222:22-139.178.68.195:48158.service: Deactivated successfully.
Jun 20 18:55:41.707969 systemd[1]: session-14.scope: Deactivated successfully.
Jun 20 18:55:41.708935 systemd-logind[1889]: Session 14 logged out. Waiting for processes to exit.
Jun 20 18:55:41.710205 systemd-logind[1889]: Removed session 14.
Jun 20 18:55:46.740579 systemd[1]: Started sshd@14-172.31.22.222:22-139.178.68.195:38994.service - OpenSSH per-connection server daemon (139.178.68.195:38994).
Jun 20 18:55:46.907418 sshd[4938]: Accepted publickey for core from 139.178.68.195 port 38994 ssh2: RSA SHA256:sF0tjKSFADzF6g6JG756y/3bgw4kb0C1NHj6dI7T2go
Jun 20 18:55:46.908984 sshd-session[4938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:55:46.913966 systemd-logind[1889]: New session 15 of user core.
Jun 20 18:55:46.919438 systemd[1]: Started session-15.scope - Session 15 of User core.
Jun 20 18:55:47.113277 sshd[4940]: Connection closed by 139.178.68.195 port 38994
Jun 20 18:55:47.113841 sshd-session[4938]: pam_unix(sshd:session): session closed for user core
Jun 20 18:55:47.116854 systemd[1]: sshd@14-172.31.22.222:22-139.178.68.195:38994.service: Deactivated successfully.
Jun 20 18:55:47.118800 systemd[1]: session-15.scope: Deactivated successfully.
Jun 20 18:55:47.120134 systemd-logind[1889]: Session 15 logged out. Waiting for processes to exit.
Jun 20 18:55:47.121732 systemd-logind[1889]: Removed session 15.
Jun 20 18:55:47.152550 systemd[1]: Started sshd@15-172.31.22.222:22-139.178.68.195:39006.service - OpenSSH per-connection server daemon (139.178.68.195:39006).
Jun 20 18:55:47.319966 sshd[4952]: Accepted publickey for core from 139.178.68.195 port 39006 ssh2: RSA SHA256:sF0tjKSFADzF6g6JG756y/3bgw4kb0C1NHj6dI7T2go
Jun 20 18:55:47.321701 sshd-session[4952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:55:47.326602 systemd-logind[1889]: New session 16 of user core.
Jun 20 18:55:47.335476 systemd[1]: Started session-16.scope - Session 16 of User core.
Jun 20 18:55:48.011217 sshd[4954]: Connection closed by 139.178.68.195 port 39006
Jun 20 18:55:48.012043 sshd-session[4952]: pam_unix(sshd:session): session closed for user core
Jun 20 18:55:48.018730 systemd[1]: sshd@15-172.31.22.222:22-139.178.68.195:39006.service: Deactivated successfully.
Jun 20 18:55:48.021456 systemd[1]: session-16.scope: Deactivated successfully.
Jun 20 18:55:48.022494 systemd-logind[1889]: Session 16 logged out. Waiting for processes to exit.
Jun 20 18:55:48.023737 systemd-logind[1889]: Removed session 16.
Jun 20 18:55:48.055575 systemd[1]: Started sshd@16-172.31.22.222:22-139.178.68.195:39020.service - OpenSSH per-connection server daemon (139.178.68.195:39020).
Jun 20 18:55:48.242210 sshd[4964]: Accepted publickey for core from 139.178.68.195 port 39020 ssh2: RSA SHA256:sF0tjKSFADzF6g6JG756y/3bgw4kb0C1NHj6dI7T2go
Jun 20 18:55:48.243587 sshd-session[4964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:55:48.248757 systemd-logind[1889]: New session 17 of user core.
Jun 20 18:55:48.254496 systemd[1]: Started session-17.scope - Session 17 of User core.
Jun 20 18:55:49.343365 sshd[4966]: Connection closed by 139.178.68.195 port 39020
Jun 20 18:55:49.344040 sshd-session[4964]: pam_unix(sshd:session): session closed for user core
Jun 20 18:55:49.357921 systemd-logind[1889]: Session 17 logged out. Waiting for processes to exit.
Jun 20 18:55:49.358886 systemd[1]: sshd@16-172.31.22.222:22-139.178.68.195:39020.service: Deactivated successfully.
Jun 20 18:55:49.362716 systemd[1]: session-17.scope: Deactivated successfully.
Jun 20 18:55:49.379271 systemd-logind[1889]: Removed session 17.
Jun 20 18:55:49.390858 systemd[1]: Started sshd@17-172.31.22.222:22-139.178.68.195:39030.service - OpenSSH per-connection server daemon (139.178.68.195:39030).
Jun 20 18:55:49.569186 sshd[4982]: Accepted publickey for core from 139.178.68.195 port 39030 ssh2: RSA SHA256:sF0tjKSFADzF6g6JG756y/3bgw4kb0C1NHj6dI7T2go
Jun 20 18:55:49.570716 sshd-session[4982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:55:49.576080 systemd-logind[1889]: New session 18 of user core.
Jun 20 18:55:49.588491 systemd[1]: Started session-18.scope - Session 18 of User core.
Jun 20 18:55:49.966850 sshd[4985]: Connection closed by 139.178.68.195 port 39030
Jun 20 18:55:49.967430 sshd-session[4982]: pam_unix(sshd:session): session closed for user core
Jun 20 18:55:49.970821 systemd[1]: sshd@17-172.31.22.222:22-139.178.68.195:39030.service: Deactivated successfully.
Jun 20 18:55:49.974626 systemd[1]: session-18.scope: Deactivated successfully.
Jun 20 18:55:49.977451 systemd-logind[1889]: Session 18 logged out. Waiting for processes to exit.
Jun 20 18:55:49.978791 systemd-logind[1889]: Removed session 18.
Jun 20 18:55:50.007594 systemd[1]: Started sshd@18-172.31.22.222:22-139.178.68.195:39046.service - OpenSSH per-connection server daemon (139.178.68.195:39046).
Jun 20 18:55:50.176806 sshd[4995]: Accepted publickey for core from 139.178.68.195 port 39046 ssh2: RSA SHA256:sF0tjKSFADzF6g6JG756y/3bgw4kb0C1NHj6dI7T2go
Jun 20 18:55:50.178685 sshd-session[4995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:55:50.183037 systemd-logind[1889]: New session 19 of user core.
Jun 20 18:55:50.190466 systemd[1]: Started session-19.scope - Session 19 of User core.
Jun 20 18:55:50.380129 sshd[4997]: Connection closed by 139.178.68.195 port 39046
Jun 20 18:55:50.380764 sshd-session[4995]: pam_unix(sshd:session): session closed for user core
Jun 20 18:55:50.384425 systemd-logind[1889]: Session 19 logged out. Waiting for processes to exit.
Jun 20 18:55:50.385018 systemd[1]: sshd@18-172.31.22.222:22-139.178.68.195:39046.service: Deactivated successfully.
Jun 20 18:55:50.387860 systemd[1]: session-19.scope: Deactivated successfully.
Jun 20 18:55:50.390531 systemd-logind[1889]: Removed session 19.
Jun 20 18:55:55.424535 systemd[1]: Started sshd@19-172.31.22.222:22-139.178.68.195:60340.service - OpenSSH per-connection server daemon (139.178.68.195:60340).
Jun 20 18:55:55.593093 sshd[5013]: Accepted publickey for core from 139.178.68.195 port 60340 ssh2: RSA SHA256:sF0tjKSFADzF6g6JG756y/3bgw4kb0C1NHj6dI7T2go
Jun 20 18:55:55.594581 sshd-session[5013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:55:55.599151 systemd-logind[1889]: New session 20 of user core.
Jun 20 18:55:55.604900 systemd[1]: Started session-20.scope - Session 20 of User core.
Jun 20 18:55:55.793190 sshd[5017]: Connection closed by 139.178.68.195 port 60340
Jun 20 18:55:55.793779 sshd-session[5013]: pam_unix(sshd:session): session closed for user core
Jun 20 18:55:55.797036 systemd[1]: sshd@19-172.31.22.222:22-139.178.68.195:60340.service: Deactivated successfully.
Jun 20 18:55:55.799603 systemd[1]: session-20.scope: Deactivated successfully.
Jun 20 18:55:55.800842 systemd-logind[1889]: Session 20 logged out. Waiting for processes to exit.
Jun 20 18:55:55.802005 systemd-logind[1889]: Removed session 20.
Jun 20 18:56:00.835851 systemd[1]: Started sshd@20-172.31.22.222:22-139.178.68.195:60342.service - OpenSSH per-connection server daemon (139.178.68.195:60342).
Jun 20 18:56:01.000372 sshd[5030]: Accepted publickey for core from 139.178.68.195 port 60342 ssh2: RSA SHA256:sF0tjKSFADzF6g6JG756y/3bgw4kb0C1NHj6dI7T2go
Jun 20 18:56:01.001870 sshd-session[5030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:56:01.007074 systemd-logind[1889]: New session 21 of user core.
Jun 20 18:56:01.012453 systemd[1]: Started session-21.scope - Session 21 of User core.
Jun 20 18:56:01.340815 sshd[5032]: Connection closed by 139.178.68.195 port 60342
Jun 20 18:56:01.339451 sshd-session[5030]: pam_unix(sshd:session): session closed for user core
Jun 20 18:56:01.356544 systemd[1]: sshd@20-172.31.22.222:22-139.178.68.195:60342.service: Deactivated successfully.
Jun 20 18:56:01.372277 systemd[1]: session-21.scope: Deactivated successfully.
Jun 20 18:56:01.382216 systemd-logind[1889]: Session 21 logged out. Waiting for processes to exit.
Jun 20 18:56:01.389690 systemd-logind[1889]: Removed session 21.
Jun 20 18:56:06.374538 systemd[1]: Started sshd@21-172.31.22.222:22-139.178.68.195:45950.service - OpenSSH per-connection server daemon (139.178.68.195:45950).
Jun 20 18:56:06.535123 sshd[5044]: Accepted publickey for core from 139.178.68.195 port 45950 ssh2: RSA SHA256:sF0tjKSFADzF6g6JG756y/3bgw4kb0C1NHj6dI7T2go
Jun 20 18:56:06.536643 sshd-session[5044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:56:06.542478 systemd-logind[1889]: New session 22 of user core.
Jun 20 18:56:06.548436 systemd[1]: Started session-22.scope - Session 22 of User core.
Jun 20 18:56:06.728299 sshd[5046]: Connection closed by 139.178.68.195 port 45950
Jun 20 18:56:06.729416 sshd-session[5044]: pam_unix(sshd:session): session closed for user core
Jun 20 18:56:06.734220 systemd[1]: sshd@21-172.31.22.222:22-139.178.68.195:45950.service: Deactivated successfully.
Jun 20 18:56:06.736781 systemd[1]: session-22.scope: Deactivated successfully.
Jun 20 18:56:06.737829 systemd-logind[1889]: Session 22 logged out. Waiting for processes to exit.
Jun 20 18:56:06.739037 systemd-logind[1889]: Removed session 22.
Jun 20 18:56:06.767741 systemd[1]: Started sshd@22-172.31.22.222:22-139.178.68.195:45956.service - OpenSSH per-connection server daemon (139.178.68.195:45956).
Jun 20 18:56:06.929967 sshd[5058]: Accepted publickey for core from 139.178.68.195 port 45956 ssh2: RSA SHA256:sF0tjKSFADzF6g6JG756y/3bgw4kb0C1NHj6dI7T2go
Jun 20 18:56:06.931411 sshd-session[5058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:56:06.939177 systemd-logind[1889]: New session 23 of user core.
Jun 20 18:56:06.940467 systemd[1]: Started session-23.scope - Session 23 of User core.
Jun 20 18:56:09.598473 containerd[1909]: time="2025-06-20T18:56:09.598371453Z" level=info msg="StopContainer for \"e2ada8034a6b6245701aff21724055b339364236eae300be31b134b836808520\" with timeout 30 (s)"
Jun 20 18:56:09.599989 containerd[1909]: time="2025-06-20T18:56:09.598930113Z" level=info msg="Stop container \"e2ada8034a6b6245701aff21724055b339364236eae300be31b134b836808520\" with signal terminated"
Jun 20 18:56:09.645994 systemd[1]: cri-containerd-e2ada8034a6b6245701aff21724055b339364236eae300be31b134b836808520.scope: Deactivated successfully.
Jun 20 18:56:09.697500 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e2ada8034a6b6245701aff21724055b339364236eae300be31b134b836808520-rootfs.mount: Deactivated successfully.
Jun 20 18:56:09.704329 containerd[1909]: time="2025-06-20T18:56:09.704098414Z" level=info msg="shim disconnected" id=e2ada8034a6b6245701aff21724055b339364236eae300be31b134b836808520 namespace=k8s.io
Jun 20 18:56:09.704329 containerd[1909]: time="2025-06-20T18:56:09.704151409Z" level=warning msg="cleaning up after shim disconnected" id=e2ada8034a6b6245701aff21724055b339364236eae300be31b134b836808520 namespace=k8s.io
Jun 20 18:56:09.704329 containerd[1909]: time="2025-06-20T18:56:09.704159826Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 18:56:09.720081 containerd[1909]: time="2025-06-20T18:56:09.720034308Z" level=info msg="StopContainer for \"e2ada8034a6b6245701aff21724055b339364236eae300be31b134b836808520\" returns successfully"
Jun 20 18:56:09.720733 containerd[1909]: time="2025-06-20T18:56:09.720702410Z" level=info msg="StopPodSandbox for \"0fa940997afd53d01e427a0ef732a99dd2e1097641bc2b983d4502b3738d5082\""
Jun 20 18:56:09.732352 containerd[1909]: time="2025-06-20T18:56:09.732305876Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 20 18:56:09.735680 containerd[1909]: time="2025-06-20T18:56:09.725765368Z" level=info msg="Container to stop \"e2ada8034a6b6245701aff21724055b339364236eae300be31b134b836808520\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 20 18:56:09.738264 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0fa940997afd53d01e427a0ef732a99dd2e1097641bc2b983d4502b3738d5082-shm.mount: Deactivated successfully.
Jun 20 18:56:09.751496 systemd[1]: cri-containerd-0fa940997afd53d01e427a0ef732a99dd2e1097641bc2b983d4502b3738d5082.scope: Deactivated successfully.
Jun 20 18:56:09.761871 containerd[1909]: time="2025-06-20T18:56:09.761714658Z" level=info msg="StopContainer for \"e162df7544ecbcc702a04a5f0585b036ab2141af9a5d324031c4d9686cdd9e6a\" with timeout 2 (s)"
Jun 20 18:56:09.762444 containerd[1909]: time="2025-06-20T18:56:09.762404533Z" level=info msg="Stop container \"e162df7544ecbcc702a04a5f0585b036ab2141af9a5d324031c4d9686cdd9e6a\" with signal terminated"
Jun 20 18:56:09.775476 systemd-networkd[1820]: lxc_health: Link DOWN
Jun 20 18:56:09.775491 systemd-networkd[1820]: lxc_health: Lost carrier
Jun 20 18:56:09.799957 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0fa940997afd53d01e427a0ef732a99dd2e1097641bc2b983d4502b3738d5082-rootfs.mount: Deactivated successfully.
Jun 20 18:56:09.802571 systemd[1]: cri-containerd-e162df7544ecbcc702a04a5f0585b036ab2141af9a5d324031c4d9686cdd9e6a.scope: Deactivated successfully.
Jun 20 18:56:09.803439 systemd[1]: cri-containerd-e162df7544ecbcc702a04a5f0585b036ab2141af9a5d324031c4d9686cdd9e6a.scope: Consumed 8.404s CPU time, 193.1M memory peak, 70.7M read from disk, 13.3M written to disk.
Jun 20 18:56:09.813326 containerd[1909]: time="2025-06-20T18:56:09.812650787Z" level=info msg="shim disconnected" id=0fa940997afd53d01e427a0ef732a99dd2e1097641bc2b983d4502b3738d5082 namespace=k8s.io
Jun 20 18:56:09.813326 containerd[1909]: time="2025-06-20T18:56:09.812707951Z" level=warning msg="cleaning up after shim disconnected" id=0fa940997afd53d01e427a0ef732a99dd2e1097641bc2b983d4502b3738d5082 namespace=k8s.io
Jun 20 18:56:09.813326 containerd[1909]: time="2025-06-20T18:56:09.812717677Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 18:56:09.837992 containerd[1909]: time="2025-06-20T18:56:09.837943911Z" level=info msg="TearDown network for sandbox \"0fa940997afd53d01e427a0ef732a99dd2e1097641bc2b983d4502b3738d5082\" successfully"
Jun 20 18:56:09.838133 containerd[1909]: time="2025-06-20T18:56:09.838003456Z" level=info msg="StopPodSandbox for \"0fa940997afd53d01e427a0ef732a99dd2e1097641bc2b983d4502b3738d5082\" returns successfully"
Jun 20 18:56:09.851689 containerd[1909]: time="2025-06-20T18:56:09.851490446Z" level=info msg="shim disconnected" id=e162df7544ecbcc702a04a5f0585b036ab2141af9a5d324031c4d9686cdd9e6a namespace=k8s.io
Jun 20 18:56:09.851689 containerd[1909]: time="2025-06-20T18:56:09.851547136Z" level=warning msg="cleaning up after shim disconnected" id=e162df7544ecbcc702a04a5f0585b036ab2141af9a5d324031c4d9686cdd9e6a namespace=k8s.io
Jun 20 18:56:09.851689 containerd[1909]: time="2025-06-20T18:56:09.851558364Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 18:56:09.879420 containerd[1909]: time="2025-06-20T18:56:09.879289217Z" level=info msg="StopContainer for \"e162df7544ecbcc702a04a5f0585b036ab2141af9a5d324031c4d9686cdd9e6a\" returns successfully"
Jun 20 18:56:09.879946 containerd[1909]: time="2025-06-20T18:56:09.879771388Z" level=info msg="StopPodSandbox for \"3c04309e24a0e95311a61878825039691e625a29e530f732124a5673c465cd43\""
Jun 20 18:56:09.879946 containerd[1909]: time="2025-06-20T18:56:09.879801689Z" level=info msg="Container to stop \"b74cac0e19239fcb4c8f182aabfc4f3d1d3a6ba40fefef1104f50b042f422843\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 20 18:56:09.879946 containerd[1909]: time="2025-06-20T18:56:09.879830978Z" level=info msg="Container to stop \"04516003e9a457295ad410a29fc7837b734f5c42bbb50e7fb6f810d196f16124\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 20 18:56:09.879946 containerd[1909]: time="2025-06-20T18:56:09.879839267Z" level=info msg="Container to stop \"3f7b4fb523b8407a4c2f3ace34c95a4710ff0a231785b5f3c15ed448e182c677\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 20 18:56:09.879946 containerd[1909]: time="2025-06-20T18:56:09.879848172Z" level=info msg="Container to stop \"e162df7544ecbcc702a04a5f0585b036ab2141af9a5d324031c4d9686cdd9e6a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 20 18:56:09.879946 containerd[1909]: time="2025-06-20T18:56:09.879855751Z" level=info msg="Container to stop \"8d914835cde1489c1ff4efd482514d532568f029bd819b7df36f7ac1aee8076b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 20 18:56:09.887496 systemd[1]: cri-containerd-3c04309e24a0e95311a61878825039691e625a29e530f732124a5673c465cd43.scope: Deactivated successfully.
Jun 20 18:56:09.904051 kubelet[3182]: I0620 18:56:09.904004 3182 scope.go:117] "RemoveContainer" containerID="e2ada8034a6b6245701aff21724055b339364236eae300be31b134b836808520"
Jun 20 18:56:09.927613 containerd[1909]: time="2025-06-20T18:56:09.927556253Z" level=info msg="shim disconnected" id=3c04309e24a0e95311a61878825039691e625a29e530f732124a5673c465cd43 namespace=k8s.io
Jun 20 18:56:09.927613 containerd[1909]: time="2025-06-20T18:56:09.927606732Z" level=warning msg="cleaning up after shim disconnected" id=3c04309e24a0e95311a61878825039691e625a29e530f732124a5673c465cd43 namespace=k8s.io
Jun 20 18:56:09.927613 containerd[1909]: time="2025-06-20T18:56:09.927614574Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 18:56:09.932216 containerd[1909]: time="2025-06-20T18:56:09.932090569Z" level=info msg="RemoveContainer for \"e2ada8034a6b6245701aff21724055b339364236eae300be31b134b836808520\""
Jun 20 18:56:09.937941 containerd[1909]: time="2025-06-20T18:56:09.937795216Z" level=info msg="RemoveContainer for \"e2ada8034a6b6245701aff21724055b339364236eae300be31b134b836808520\" returns successfully"
Jun 20 18:56:09.938610 kubelet[3182]: I0620 18:56:09.938572 3182 scope.go:117] "RemoveContainer" containerID="e2ada8034a6b6245701aff21724055b339364236eae300be31b134b836808520"
Jun 20 18:56:09.940283 containerd[1909]: time="2025-06-20T18:56:09.939777698Z" level=error msg="ContainerStatus for \"e2ada8034a6b6245701aff21724055b339364236eae300be31b134b836808520\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e2ada8034a6b6245701aff21724055b339364236eae300be31b134b836808520\": not found"
Jun 20 18:56:09.948493 containerd[1909]: time="2025-06-20T18:56:09.948413699Z" level=info msg="TearDown network for sandbox \"3c04309e24a0e95311a61878825039691e625a29e530f732124a5673c465cd43\" successfully"
Jun 20 18:56:09.948493 containerd[1909]: time="2025-06-20T18:56:09.948450730Z" level=info msg="StopPodSandbox for \"3c04309e24a0e95311a61878825039691e625a29e530f732124a5673c465cd43\" returns successfully"
Jun 20 18:56:09.960936 kubelet[3182]: E0620 18:56:09.960880 3182 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e2ada8034a6b6245701aff21724055b339364236eae300be31b134b836808520\": not found" containerID="e2ada8034a6b6245701aff21724055b339364236eae300be31b134b836808520"
Jun 20 18:56:09.970193 kubelet[3182]: I0620 18:56:09.969798 3182 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jk8sh\" (UniqueName: \"kubernetes.io/projected/45a57080-2381-4156-a161-c9089a596171-kube-api-access-jk8sh\") pod \"45a57080-2381-4156-a161-c9089a596171\" (UID: \"45a57080-2381-4156-a161-c9089a596171\") "
Jun 20 18:56:09.970193 kubelet[3182]: I0620 18:56:09.969866 3182 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/45a57080-2381-4156-a161-c9089a596171-cilium-config-path\") pod \"45a57080-2381-4156-a161-c9089a596171\" (UID: \"45a57080-2381-4156-a161-c9089a596171\") "
Jun 20 18:56:09.971506 kubelet[3182]: I0620 18:56:09.960948 3182 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e2ada8034a6b6245701aff21724055b339364236eae300be31b134b836808520"} err="failed to get container status \"e2ada8034a6b6245701aff21724055b339364236eae300be31b134b836808520\": rpc error: code = NotFound desc = an error occurred when try to find container \"e2ada8034a6b6245701aff21724055b339364236eae300be31b134b836808520\": not found"
Jun 20 18:56:09.994985 kubelet[3182]: I0620 18:56:09.992364 3182 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45a57080-2381-4156-a161-c9089a596171-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "45a57080-2381-4156-a161-c9089a596171" (UID: "45a57080-2381-4156-a161-c9089a596171"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jun 20 18:56:09.995210 kubelet[3182]: I0620 18:56:09.995100 3182 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45a57080-2381-4156-a161-c9089a596171-kube-api-access-jk8sh" (OuterVolumeSpecName: "kube-api-access-jk8sh") pod "45a57080-2381-4156-a161-c9089a596171" (UID: "45a57080-2381-4156-a161-c9089a596171"). InnerVolumeSpecName "kube-api-access-jk8sh". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jun 20 18:56:10.071342 kubelet[3182]: I0620 18:56:10.070698 3182 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f44253b7-fe0a-4654-9a51-dcae2fd2b846-hubble-tls\") pod \"f44253b7-fe0a-4654-9a51-dcae2fd2b846\" (UID: \"f44253b7-fe0a-4654-9a51-dcae2fd2b846\") "
Jun 20 18:56:10.071342 kubelet[3182]: I0620 18:56:10.070735 3182 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f44253b7-fe0a-4654-9a51-dcae2fd2b846-hostproc\") pod \"f44253b7-fe0a-4654-9a51-dcae2fd2b846\" (UID: \"f44253b7-fe0a-4654-9a51-dcae2fd2b846\") "
Jun 20 18:56:10.071342 kubelet[3182]: I0620 18:56:10.070752 3182 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f44253b7-fe0a-4654-9a51-dcae2fd2b846-xtables-lock\") pod \"f44253b7-fe0a-4654-9a51-dcae2fd2b846\" (UID: \"f44253b7-fe0a-4654-9a51-dcae2fd2b846\") "
Jun 20 18:56:10.071342 kubelet[3182]: I0620 18:56:10.070778 3182 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f44253b7-fe0a-4654-9a51-dcae2fd2b846-cilium-config-path\") pod \"f44253b7-fe0a-4654-9a51-dcae2fd2b846\" (UID: \"f44253b7-fe0a-4654-9a51-dcae2fd2b846\") "
Jun 20 18:56:10.071342 kubelet[3182]: I0620 18:56:10.070793 3182 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f44253b7-fe0a-4654-9a51-dcae2fd2b846-host-proc-sys-net\") pod \"f44253b7-fe0a-4654-9a51-dcae2fd2b846\" (UID: \"f44253b7-fe0a-4654-9a51-dcae2fd2b846\") "
Jun 20 18:56:10.071342 kubelet[3182]: I0620 18:56:10.070809 3182 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f44253b7-fe0a-4654-9a51-dcae2fd2b846-clustermesh-secrets\") pod \"f44253b7-fe0a-4654-9a51-dcae2fd2b846\" (UID: \"f44253b7-fe0a-4654-9a51-dcae2fd2b846\") "
Jun 20 18:56:10.071645 kubelet[3182]: I0620 18:56:10.070823 3182 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f44253b7-fe0a-4654-9a51-dcae2fd2b846-cni-path\") pod \"f44253b7-fe0a-4654-9a51-dcae2fd2b846\" (UID: \"f44253b7-fe0a-4654-9a51-dcae2fd2b846\") "
Jun 20 18:56:10.071645 kubelet[3182]: I0620 18:56:10.070838 3182 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f44253b7-fe0a-4654-9a51-dcae2fd2b846-cilium-run\") pod \"f44253b7-fe0a-4654-9a51-dcae2fd2b846\" (UID: \"f44253b7-fe0a-4654-9a51-dcae2fd2b846\") "
Jun 20 18:56:10.071645 kubelet[3182]: I0620 18:56:10.070853 3182 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f44253b7-fe0a-4654-9a51-dcae2fd2b846-bpf-maps\") pod \"f44253b7-fe0a-4654-9a51-dcae2fd2b846\" (UID: \"f44253b7-fe0a-4654-9a51-dcae2fd2b846\") "
Jun 20 18:56:10.071645 kubelet[3182]: I0620 18:56:10.070870 3182 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7zsn\" (UniqueName: \"kubernetes.io/projected/f44253b7-fe0a-4654-9a51-dcae2fd2b846-kube-api-access-r7zsn\") pod \"f44253b7-fe0a-4654-9a51-dcae2fd2b846\" (UID: \"f44253b7-fe0a-4654-9a51-dcae2fd2b846\") "
Jun 20 18:56:10.071645 kubelet[3182]: I0620 18:56:10.070885 3182 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f44253b7-fe0a-4654-9a51-dcae2fd2b846-lib-modules\") pod \"f44253b7-fe0a-4654-9a51-dcae2fd2b846\" (UID: \"f44253b7-fe0a-4654-9a51-dcae2fd2b846\") "
Jun 20 18:56:10.071645 kubelet[3182]: I0620 18:56:10.070903 3182 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f44253b7-fe0a-4654-9a51-dcae2fd2b846-cilium-cgroup\") pod \"f44253b7-fe0a-4654-9a51-dcae2fd2b846\" (UID: \"f44253b7-fe0a-4654-9a51-dcae2fd2b846\") "
Jun 20 18:56:10.071799 kubelet[3182]: I0620 18:56:10.070920 3182 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f44253b7-fe0a-4654-9a51-dcae2fd2b846-etc-cni-netd\") pod \"f44253b7-fe0a-4654-9a51-dcae2fd2b846\" (UID: \"f44253b7-fe0a-4654-9a51-dcae2fd2b846\") "
Jun 20 18:56:10.071799 kubelet[3182]: I0620 18:56:10.070933 3182 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f44253b7-fe0a-4654-9a51-dcae2fd2b846-host-proc-sys-kernel\") pod \"f44253b7-fe0a-4654-9a51-dcae2fd2b846\" (UID: \"f44253b7-fe0a-4654-9a51-dcae2fd2b846\") "
Jun 20 18:56:10.071799 kubelet[3182]: I0620 18:56:10.071066 3182 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f44253b7-fe0a-4654-9a51-dcae2fd2b846-cni-path" (OuterVolumeSpecName: "cni-path") pod "f44253b7-fe0a-4654-9a51-dcae2fd2b846" (UID: "f44253b7-fe0a-4654-9a51-dcae2fd2b846"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 18:56:10.071799 kubelet[3182]: I0620 18:56:10.071642 3182 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/45a57080-2381-4156-a161-c9089a596171-cilium-config-path\") on node \"ip-172-31-22-222\" DevicePath \"\""
Jun 20 18:56:10.071799 kubelet[3182]: I0620 18:56:10.071657 3182 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jk8sh\" (UniqueName: \"kubernetes.io/projected/45a57080-2381-4156-a161-c9089a596171-kube-api-access-jk8sh\") on node \"ip-172-31-22-222\" DevicePath \"\""
Jun 20 18:56:10.071799 kubelet[3182]: I0620 18:56:10.071724 3182 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f44253b7-fe0a-4654-9a51-dcae2fd2b846-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f44253b7-fe0a-4654-9a51-dcae2fd2b846" (UID: "f44253b7-fe0a-4654-9a51-dcae2fd2b846"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 18:56:10.071948 kubelet[3182]: I0620 18:56:10.071749 3182 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f44253b7-fe0a-4654-9a51-dcae2fd2b846-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f44253b7-fe0a-4654-9a51-dcae2fd2b846" (UID: "f44253b7-fe0a-4654-9a51-dcae2fd2b846"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 18:56:10.076413 kubelet[3182]: I0620 18:56:10.076325 3182 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f44253b7-fe0a-4654-9a51-dcae2fd2b846-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f44253b7-fe0a-4654-9a51-dcae2fd2b846" (UID: "f44253b7-fe0a-4654-9a51-dcae2fd2b846"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jun 20 18:56:10.077419 kubelet[3182]: I0620 18:56:10.076489 3182 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f44253b7-fe0a-4654-9a51-dcae2fd2b846-hostproc" (OuterVolumeSpecName: "hostproc") pod "f44253b7-fe0a-4654-9a51-dcae2fd2b846" (UID: "f44253b7-fe0a-4654-9a51-dcae2fd2b846"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 18:56:10.077419 kubelet[3182]: I0620 18:56:10.076523 3182 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f44253b7-fe0a-4654-9a51-dcae2fd2b846-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f44253b7-fe0a-4654-9a51-dcae2fd2b846" (UID: "f44253b7-fe0a-4654-9a51-dcae2fd2b846"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 18:56:10.078714 kubelet[3182]: I0620 18:56:10.078684 3182 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f44253b7-fe0a-4654-9a51-dcae2fd2b846-kube-api-access-r7zsn" (OuterVolumeSpecName: "kube-api-access-r7zsn") pod "f44253b7-fe0a-4654-9a51-dcae2fd2b846" (UID: "f44253b7-fe0a-4654-9a51-dcae2fd2b846"). InnerVolumeSpecName "kube-api-access-r7zsn". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jun 20 18:56:10.078831 kubelet[3182]: I0620 18:56:10.078819 3182 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f44253b7-fe0a-4654-9a51-dcae2fd2b846-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f44253b7-fe0a-4654-9a51-dcae2fd2b846" (UID: "f44253b7-fe0a-4654-9a51-dcae2fd2b846"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 18:56:10.078899 kubelet[3182]: I0620 18:56:10.078890 3182 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f44253b7-fe0a-4654-9a51-dcae2fd2b846-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f44253b7-fe0a-4654-9a51-dcae2fd2b846" (UID: "f44253b7-fe0a-4654-9a51-dcae2fd2b846"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 18:56:10.078972 kubelet[3182]: I0620 18:56:10.078962 3182 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f44253b7-fe0a-4654-9a51-dcae2fd2b846-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f44253b7-fe0a-4654-9a51-dcae2fd2b846" (UID: "f44253b7-fe0a-4654-9a51-dcae2fd2b846"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 18:56:10.079035 kubelet[3182]: I0620 18:56:10.078980 3182 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f44253b7-fe0a-4654-9a51-dcae2fd2b846-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f44253b7-fe0a-4654-9a51-dcae2fd2b846" (UID: "f44253b7-fe0a-4654-9a51-dcae2fd2b846"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jun 20 18:56:10.079077 kubelet[3182]: I0620 18:56:10.079011 3182 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f44253b7-fe0a-4654-9a51-dcae2fd2b846-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f44253b7-fe0a-4654-9a51-dcae2fd2b846" (UID: "f44253b7-fe0a-4654-9a51-dcae2fd2b846"). InnerVolumeSpecName "host-proc-sys-net".
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 18:56:10.079140 kubelet[3182]: I0620 18:56:10.079121 3182 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f44253b7-fe0a-4654-9a51-dcae2fd2b846-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f44253b7-fe0a-4654-9a51-dcae2fd2b846" (UID: "f44253b7-fe0a-4654-9a51-dcae2fd2b846"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 18:56:10.081668 kubelet[3182]: I0620 18:56:10.081600 3182 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f44253b7-fe0a-4654-9a51-dcae2fd2b846-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f44253b7-fe0a-4654-9a51-dcae2fd2b846" (UID: "f44253b7-fe0a-4654-9a51-dcae2fd2b846"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jun 20 18:56:10.172428 kubelet[3182]: I0620 18:56:10.172226 3182 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f44253b7-fe0a-4654-9a51-dcae2fd2b846-cilium-run\") on node \"ip-172-31-22-222\" DevicePath \"\"" Jun 20 18:56:10.172428 kubelet[3182]: I0620 18:56:10.172275 3182 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f44253b7-fe0a-4654-9a51-dcae2fd2b846-bpf-maps\") on node \"ip-172-31-22-222\" DevicePath \"\"" Jun 20 18:56:10.172428 kubelet[3182]: I0620 18:56:10.172285 3182 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r7zsn\" (UniqueName: \"kubernetes.io/projected/f44253b7-fe0a-4654-9a51-dcae2fd2b846-kube-api-access-r7zsn\") on node \"ip-172-31-22-222\" DevicePath \"\"" Jun 20 18:56:10.172428 kubelet[3182]: I0620 18:56:10.172297 3182 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/f44253b7-fe0a-4654-9a51-dcae2fd2b846-lib-modules\") on node \"ip-172-31-22-222\" DevicePath \"\"" Jun 20 18:56:10.172428 kubelet[3182]: I0620 18:56:10.172307 3182 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f44253b7-fe0a-4654-9a51-dcae2fd2b846-cilium-cgroup\") on node \"ip-172-31-22-222\" DevicePath \"\"" Jun 20 18:56:10.172428 kubelet[3182]: I0620 18:56:10.172315 3182 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f44253b7-fe0a-4654-9a51-dcae2fd2b846-etc-cni-netd\") on node \"ip-172-31-22-222\" DevicePath \"\"" Jun 20 18:56:10.172428 kubelet[3182]: I0620 18:56:10.172323 3182 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f44253b7-fe0a-4654-9a51-dcae2fd2b846-host-proc-sys-kernel\") on node \"ip-172-31-22-222\" DevicePath \"\"" Jun 20 18:56:10.172428 kubelet[3182]: I0620 18:56:10.172331 3182 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f44253b7-fe0a-4654-9a51-dcae2fd2b846-hubble-tls\") on node \"ip-172-31-22-222\" DevicePath \"\"" Jun 20 18:56:10.172709 kubelet[3182]: I0620 18:56:10.172340 3182 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f44253b7-fe0a-4654-9a51-dcae2fd2b846-hostproc\") on node \"ip-172-31-22-222\" DevicePath \"\"" Jun 20 18:56:10.172709 kubelet[3182]: I0620 18:56:10.172347 3182 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f44253b7-fe0a-4654-9a51-dcae2fd2b846-xtables-lock\") on node \"ip-172-31-22-222\" DevicePath \"\"" Jun 20 18:56:10.172709 kubelet[3182]: I0620 18:56:10.172356 3182 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/f44253b7-fe0a-4654-9a51-dcae2fd2b846-cilium-config-path\") on node \"ip-172-31-22-222\" DevicePath \"\"" Jun 20 18:56:10.172709 kubelet[3182]: I0620 18:56:10.172364 3182 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f44253b7-fe0a-4654-9a51-dcae2fd2b846-host-proc-sys-net\") on node \"ip-172-31-22-222\" DevicePath \"\"" Jun 20 18:56:10.172709 kubelet[3182]: I0620 18:56:10.172371 3182 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f44253b7-fe0a-4654-9a51-dcae2fd2b846-clustermesh-secrets\") on node \"ip-172-31-22-222\" DevicePath \"\"" Jun 20 18:56:10.172709 kubelet[3182]: I0620 18:56:10.172379 3182 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f44253b7-fe0a-4654-9a51-dcae2fd2b846-cni-path\") on node \"ip-172-31-22-222\" DevicePath \"\"" Jun 20 18:56:10.205990 systemd[1]: Removed slice kubepods-besteffort-pod45a57080_2381_4156_a161_c9089a596171.slice - libcontainer container kubepods-besteffort-pod45a57080_2381_4156_a161_c9089a596171.slice. Jun 20 18:56:10.447025 kubelet[3182]: E0620 18:56:10.446858 3182 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jun 20 18:56:10.652586 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e162df7544ecbcc702a04a5f0585b036ab2141af9a5d324031c4d9686cdd9e6a-rootfs.mount: Deactivated successfully. Jun 20 18:56:10.652704 systemd[1]: var-lib-kubelet-pods-45a57080\x2d2381\x2d4156\x2da161\x2dc9089a596171-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djk8sh.mount: Deactivated successfully. 
Jun 20 18:56:10.652771 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c04309e24a0e95311a61878825039691e625a29e530f732124a5673c465cd43-rootfs.mount: Deactivated successfully. Jun 20 18:56:10.652833 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3c04309e24a0e95311a61878825039691e625a29e530f732124a5673c465cd43-shm.mount: Deactivated successfully. Jun 20 18:56:10.652896 systemd[1]: var-lib-kubelet-pods-f44253b7\x2dfe0a\x2d4654\x2d9a51\x2ddcae2fd2b846-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr7zsn.mount: Deactivated successfully. Jun 20 18:56:10.652957 systemd[1]: var-lib-kubelet-pods-f44253b7\x2dfe0a\x2d4654\x2d9a51\x2ddcae2fd2b846-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jun 20 18:56:10.653024 systemd[1]: var-lib-kubelet-pods-f44253b7\x2dfe0a\x2d4654\x2d9a51\x2ddcae2fd2b846-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jun 20 18:56:10.912343 kubelet[3182]: I0620 18:56:10.911480 3182 scope.go:117] "RemoveContainer" containerID="e162df7544ecbcc702a04a5f0585b036ab2141af9a5d324031c4d9686cdd9e6a" Jun 20 18:56:10.918432 systemd[1]: Removed slice kubepods-burstable-podf44253b7_fe0a_4654_9a51_dcae2fd2b846.slice - libcontainer container kubepods-burstable-podf44253b7_fe0a_4654_9a51_dcae2fd2b846.slice. Jun 20 18:56:10.918766 systemd[1]: kubepods-burstable-podf44253b7_fe0a_4654_9a51_dcae2fd2b846.slice: Consumed 8.516s CPU time, 193.5M memory peak, 71.1M read from disk, 15.6M written to disk. 
Jun 20 18:56:10.940570 containerd[1909]: time="2025-06-20T18:56:10.939857345Z" level=info msg="RemoveContainer for \"e162df7544ecbcc702a04a5f0585b036ab2141af9a5d324031c4d9686cdd9e6a\"" Jun 20 18:56:10.945262 containerd[1909]: time="2025-06-20T18:56:10.944377805Z" level=info msg="RemoveContainer for \"e162df7544ecbcc702a04a5f0585b036ab2141af9a5d324031c4d9686cdd9e6a\" returns successfully" Jun 20 18:56:10.945350 kubelet[3182]: I0620 18:56:10.944796 3182 scope.go:117] "RemoveContainer" containerID="04516003e9a457295ad410a29fc7837b734f5c42bbb50e7fb6f810d196f16124" Jun 20 18:56:10.946417 containerd[1909]: time="2025-06-20T18:56:10.946398944Z" level=info msg="RemoveContainer for \"04516003e9a457295ad410a29fc7837b734f5c42bbb50e7fb6f810d196f16124\"" Jun 20 18:56:10.950384 containerd[1909]: time="2025-06-20T18:56:10.950358836Z" level=info msg="RemoveContainer for \"04516003e9a457295ad410a29fc7837b734f5c42bbb50e7fb6f810d196f16124\" returns successfully" Jun 20 18:56:10.950788 kubelet[3182]: I0620 18:56:10.950747 3182 scope.go:117] "RemoveContainer" containerID="b74cac0e19239fcb4c8f182aabfc4f3d1d3a6ba40fefef1104f50b042f422843" Jun 20 18:56:10.951783 containerd[1909]: time="2025-06-20T18:56:10.951755477Z" level=info msg="RemoveContainer for \"b74cac0e19239fcb4c8f182aabfc4f3d1d3a6ba40fefef1104f50b042f422843\"" Jun 20 18:56:10.955221 containerd[1909]: time="2025-06-20T18:56:10.955082057Z" level=info msg="RemoveContainer for \"b74cac0e19239fcb4c8f182aabfc4f3d1d3a6ba40fefef1104f50b042f422843\" returns successfully" Jun 20 18:56:10.955499 kubelet[3182]: I0620 18:56:10.955387 3182 scope.go:117] "RemoveContainer" containerID="3f7b4fb523b8407a4c2f3ace34c95a4710ff0a231785b5f3c15ed448e182c677" Jun 20 18:56:10.958604 containerd[1909]: time="2025-06-20T18:56:10.957420419Z" level=info msg="RemoveContainer for \"3f7b4fb523b8407a4c2f3ace34c95a4710ff0a231785b5f3c15ed448e182c677\"" Jun 20 18:56:10.962340 containerd[1909]: time="2025-06-20T18:56:10.962296729Z" level=info msg="RemoveContainer 
for \"3f7b4fb523b8407a4c2f3ace34c95a4710ff0a231785b5f3c15ed448e182c677\" returns successfully" Jun 20 18:56:10.962540 kubelet[3182]: I0620 18:56:10.962513 3182 scope.go:117] "RemoveContainer" containerID="8d914835cde1489c1ff4efd482514d532568f029bd819b7df36f7ac1aee8076b" Jun 20 18:56:10.964014 containerd[1909]: time="2025-06-20T18:56:10.963725160Z" level=info msg="RemoveContainer for \"8d914835cde1489c1ff4efd482514d532568f029bd819b7df36f7ac1aee8076b\"" Jun 20 18:56:10.966719 containerd[1909]: time="2025-06-20T18:56:10.966687057Z" level=info msg="RemoveContainer for \"8d914835cde1489c1ff4efd482514d532568f029bd819b7df36f7ac1aee8076b\" returns successfully" Jun 20 18:56:11.350514 kubelet[3182]: I0620 18:56:11.350466 3182 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45a57080-2381-4156-a161-c9089a596171" path="/var/lib/kubelet/pods/45a57080-2381-4156-a161-c9089a596171/volumes" Jun 20 18:56:11.350989 kubelet[3182]: I0620 18:56:11.350956 3182 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f44253b7-fe0a-4654-9a51-dcae2fd2b846" path="/var/lib/kubelet/pods/f44253b7-fe0a-4654-9a51-dcae2fd2b846/volumes" Jun 20 18:56:11.499966 sshd[5060]: Connection closed by 139.178.68.195 port 45956 Jun 20 18:56:11.538675 systemd[1]: Started sshd@23-172.31.22.222:22-139.178.68.195:45962.service - OpenSSH per-connection server daemon (139.178.68.195:45962). Jun 20 18:56:11.571939 sshd-session[5058]: pam_unix(sshd:session): session closed for user core Jun 20 18:56:11.579135 systemd-logind[1889]: Session 23 logged out. Waiting for processes to exit. Jun 20 18:56:11.580090 systemd[1]: sshd@22-172.31.22.222:22-139.178.68.195:45956.service: Deactivated successfully. Jun 20 18:56:11.585114 systemd[1]: session-23.scope: Deactivated successfully. Jun 20 18:56:11.586845 systemd-logind[1889]: Removed session 23. 
Jun 20 18:56:11.738746 sshd[5218]: Accepted publickey for core from 139.178.68.195 port 45962 ssh2: RSA SHA256:sF0tjKSFADzF6g6JG756y/3bgw4kb0C1NHj6dI7T2go Jun 20 18:56:11.740286 sshd-session[5218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:56:11.746596 systemd-logind[1889]: New session 24 of user core. Jun 20 18:56:11.759506 systemd[1]: Started session-24.scope - Session 24 of User core. Jun 20 18:56:12.642924 ntpd[1881]: Deleting interface #12 lxc_health, fe80::845b:d3ff:fea4:7430%8#123, interface stats: received=0, sent=0, dropped=0, active_time=59 secs Jun 20 18:56:12.643443 ntpd[1881]: 20 Jun 18:56:12 ntpd[1881]: Deleting interface #12 lxc_health, fe80::845b:d3ff:fea4:7430%8#123, interface stats: received=0, sent=0, dropped=0, active_time=59 secs Jun 20 18:56:12.969465 sshd[5223]: Connection closed by 139.178.68.195 port 45962 Jun 20 18:56:12.970834 sshd-session[5218]: pam_unix(sshd:session): session closed for user core Jun 20 18:56:12.975530 systemd-logind[1889]: Session 24 logged out. Waiting for processes to exit. Jun 20 18:56:12.977645 systemd[1]: sshd@23-172.31.22.222:22-139.178.68.195:45962.service: Deactivated successfully. Jun 20 18:56:12.982320 systemd[1]: session-24.scope: Deactivated successfully. Jun 20 18:56:12.988596 systemd-logind[1889]: Removed session 24. Jun 20 18:56:13.011013 systemd[1]: Started sshd@24-172.31.22.222:22-139.178.68.195:45966.service - OpenSSH per-connection server daemon (139.178.68.195:45966). Jun 20 18:56:13.156223 systemd[1]: Created slice kubepods-burstable-pod7e0bbb05_30c4_4b8a_a776_1694a8de3fe3.slice - libcontainer container kubepods-burstable-pod7e0bbb05_30c4_4b8a_a776_1694a8de3fe3.slice. 
Jun 20 18:56:13.179062 sshd[5235]: Accepted publickey for core from 139.178.68.195 port 45966 ssh2: RSA SHA256:sF0tjKSFADzF6g6JG756y/3bgw4kb0C1NHj6dI7T2go Jun 20 18:56:13.180538 sshd-session[5235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:56:13.186852 systemd-logind[1889]: New session 25 of user core. Jun 20 18:56:13.193742 kubelet[3182]: I0620 18:56:13.193691 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7e0bbb05-30c4-4b8a-a776-1694a8de3fe3-lib-modules\") pod \"cilium-5hf6h\" (UID: \"7e0bbb05-30c4-4b8a-a776-1694a8de3fe3\") " pod="kube-system/cilium-5hf6h" Jun 20 18:56:13.193742 kubelet[3182]: I0620 18:56:13.193732 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7e0bbb05-30c4-4b8a-a776-1694a8de3fe3-host-proc-sys-kernel\") pod \"cilium-5hf6h\" (UID: \"7e0bbb05-30c4-4b8a-a776-1694a8de3fe3\") " pod="kube-system/cilium-5hf6h" Jun 20 18:56:13.193742 kubelet[3182]: I0620 18:56:13.193757 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7e0bbb05-30c4-4b8a-a776-1694a8de3fe3-cilium-ipsec-secrets\") pod \"cilium-5hf6h\" (UID: \"7e0bbb05-30c4-4b8a-a776-1694a8de3fe3\") " pod="kube-system/cilium-5hf6h" Jun 20 18:56:13.195632 kubelet[3182]: I0620 18:56:13.193775 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7e0bbb05-30c4-4b8a-a776-1694a8de3fe3-hostproc\") pod \"cilium-5hf6h\" (UID: \"7e0bbb05-30c4-4b8a-a776-1694a8de3fe3\") " pod="kube-system/cilium-5hf6h" Jun 20 18:56:13.195632 kubelet[3182]: I0620 18:56:13.193791 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7e0bbb05-30c4-4b8a-a776-1694a8de3fe3-host-proc-sys-net\") pod \"cilium-5hf6h\" (UID: \"7e0bbb05-30c4-4b8a-a776-1694a8de3fe3\") " pod="kube-system/cilium-5hf6h" Jun 20 18:56:13.195632 kubelet[3182]: I0620 18:56:13.193806 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgxsl\" (UniqueName: \"kubernetes.io/projected/7e0bbb05-30c4-4b8a-a776-1694a8de3fe3-kube-api-access-xgxsl\") pod \"cilium-5hf6h\" (UID: \"7e0bbb05-30c4-4b8a-a776-1694a8de3fe3\") " pod="kube-system/cilium-5hf6h" Jun 20 18:56:13.195632 kubelet[3182]: I0620 18:56:13.193823 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7e0bbb05-30c4-4b8a-a776-1694a8de3fe3-cilium-cgroup\") pod \"cilium-5hf6h\" (UID: \"7e0bbb05-30c4-4b8a-a776-1694a8de3fe3\") " pod="kube-system/cilium-5hf6h" Jun 20 18:56:13.195632 kubelet[3182]: I0620 18:56:13.193839 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7e0bbb05-30c4-4b8a-a776-1694a8de3fe3-xtables-lock\") pod \"cilium-5hf6h\" (UID: \"7e0bbb05-30c4-4b8a-a776-1694a8de3fe3\") " pod="kube-system/cilium-5hf6h" Jun 20 18:56:13.195632 kubelet[3182]: I0620 18:56:13.193866 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7e0bbb05-30c4-4b8a-a776-1694a8de3fe3-hubble-tls\") pod \"cilium-5hf6h\" (UID: \"7e0bbb05-30c4-4b8a-a776-1694a8de3fe3\") " pod="kube-system/cilium-5hf6h" Jun 20 18:56:13.195512 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jun 20 18:56:13.195852 kubelet[3182]: I0620 18:56:13.193881 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7e0bbb05-30c4-4b8a-a776-1694a8de3fe3-bpf-maps\") pod \"cilium-5hf6h\" (UID: \"7e0bbb05-30c4-4b8a-a776-1694a8de3fe3\") " pod="kube-system/cilium-5hf6h" Jun 20 18:56:13.195852 kubelet[3182]: I0620 18:56:13.193896 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7e0bbb05-30c4-4b8a-a776-1694a8de3fe3-cni-path\") pod \"cilium-5hf6h\" (UID: \"7e0bbb05-30c4-4b8a-a776-1694a8de3fe3\") " pod="kube-system/cilium-5hf6h" Jun 20 18:56:13.195852 kubelet[3182]: I0620 18:56:13.193910 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7e0bbb05-30c4-4b8a-a776-1694a8de3fe3-etc-cni-netd\") pod \"cilium-5hf6h\" (UID: \"7e0bbb05-30c4-4b8a-a776-1694a8de3fe3\") " pod="kube-system/cilium-5hf6h" Jun 20 18:56:13.195852 kubelet[3182]: I0620 18:56:13.193924 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7e0bbb05-30c4-4b8a-a776-1694a8de3fe3-clustermesh-secrets\") pod \"cilium-5hf6h\" (UID: \"7e0bbb05-30c4-4b8a-a776-1694a8de3fe3\") " pod="kube-system/cilium-5hf6h" Jun 20 18:56:13.195852 kubelet[3182]: I0620 18:56:13.193940 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7e0bbb05-30c4-4b8a-a776-1694a8de3fe3-cilium-config-path\") pod \"cilium-5hf6h\" (UID: \"7e0bbb05-30c4-4b8a-a776-1694a8de3fe3\") " pod="kube-system/cilium-5hf6h" Jun 20 18:56:13.195852 kubelet[3182]: I0620 18:56:13.193954 3182 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7e0bbb05-30c4-4b8a-a776-1694a8de3fe3-cilium-run\") pod \"cilium-5hf6h\" (UID: \"7e0bbb05-30c4-4b8a-a776-1694a8de3fe3\") " pod="kube-system/cilium-5hf6h" Jun 20 18:56:13.313664 sshd[5237]: Connection closed by 139.178.68.195 port 45966 Jun 20 18:56:13.315536 sshd-session[5235]: pam_unix(sshd:session): session closed for user core Jun 20 18:56:13.340594 systemd[1]: sshd@24-172.31.22.222:22-139.178.68.195:45966.service: Deactivated successfully. Jun 20 18:56:13.344181 systemd[1]: session-25.scope: Deactivated successfully. Jun 20 18:56:13.346603 systemd-logind[1889]: Session 25 logged out. Waiting for processes to exit. Jun 20 18:56:13.358333 systemd[1]: Started sshd@25-172.31.22.222:22-139.178.68.195:45970.service - OpenSSH per-connection server daemon (139.178.68.195:45970). Jun 20 18:56:13.359805 systemd-logind[1889]: Removed session 25. Jun 20 18:56:13.466182 containerd[1909]: time="2025-06-20T18:56:13.466128431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5hf6h,Uid:7e0bbb05-30c4-4b8a-a776-1694a8de3fe3,Namespace:kube-system,Attempt:0,}" Jun 20 18:56:13.496273 containerd[1909]: time="2025-06-20T18:56:13.496129720Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:56:13.496273 containerd[1909]: time="2025-06-20T18:56:13.496196489Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:56:13.496273 containerd[1909]: time="2025-06-20T18:56:13.496212037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:56:13.497094 containerd[1909]: time="2025-06-20T18:56:13.496408911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:56:13.529528 systemd[1]: Started cri-containerd-b06030244b4062ceadd22abfe23fc6b0e35b793a860e4e0df4ea4773cfc92d53.scope - libcontainer container b06030244b4062ceadd22abfe23fc6b0e35b793a860e4e0df4ea4773cfc92d53. Jun 20 18:56:13.541431 sshd[5248]: Accepted publickey for core from 139.178.68.195 port 45970 ssh2: RSA SHA256:sF0tjKSFADzF6g6JG756y/3bgw4kb0C1NHj6dI7T2go Jun 20 18:56:13.543181 sshd-session[5248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:56:13.552506 systemd-logind[1889]: New session 26 of user core. Jun 20 18:56:13.558446 systemd[1]: Started session-26.scope - Session 26 of User core. Jun 20 18:56:13.571401 containerd[1909]: time="2025-06-20T18:56:13.570937227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5hf6h,Uid:7e0bbb05-30c4-4b8a-a776-1694a8de3fe3,Namespace:kube-system,Attempt:0,} returns sandbox id \"b06030244b4062ceadd22abfe23fc6b0e35b793a860e4e0df4ea4773cfc92d53\"" Jun 20 18:56:13.587505 containerd[1909]: time="2025-06-20T18:56:13.587464497Z" level=info msg="CreateContainer within sandbox \"b06030244b4062ceadd22abfe23fc6b0e35b793a860e4e0df4ea4773cfc92d53\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 20 18:56:13.599322 containerd[1909]: time="2025-06-20T18:56:13.599275765Z" level=info msg="CreateContainer within sandbox \"b06030244b4062ceadd22abfe23fc6b0e35b793a860e4e0df4ea4773cfc92d53\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f1f06e00de4bbb00918f4695cb40f30d3417d4920912fa1d59814ef366233a9f\"" Jun 20 18:56:13.601044 containerd[1909]: time="2025-06-20T18:56:13.600016286Z" level=info msg="StartContainer for \"f1f06e00de4bbb00918f4695cb40f30d3417d4920912fa1d59814ef366233a9f\"" Jun 20 18:56:13.628438 systemd[1]: Started cri-containerd-f1f06e00de4bbb00918f4695cb40f30d3417d4920912fa1d59814ef366233a9f.scope - libcontainer container 
f1f06e00de4bbb00918f4695cb40f30d3417d4920912fa1d59814ef366233a9f. Jun 20 18:56:13.661211 containerd[1909]: time="2025-06-20T18:56:13.661164570Z" level=info msg="StartContainer for \"f1f06e00de4bbb00918f4695cb40f30d3417d4920912fa1d59814ef366233a9f\" returns successfully" Jun 20 18:56:13.742175 systemd[1]: cri-containerd-f1f06e00de4bbb00918f4695cb40f30d3417d4920912fa1d59814ef366233a9f.scope: Deactivated successfully. Jun 20 18:56:13.742861 systemd[1]: cri-containerd-f1f06e00de4bbb00918f4695cb40f30d3417d4920912fa1d59814ef366233a9f.scope: Consumed 23ms CPU time, 9.5M memory peak, 3M read from disk. Jun 20 18:56:13.787898 containerd[1909]: time="2025-06-20T18:56:13.787803909Z" level=info msg="shim disconnected" id=f1f06e00de4bbb00918f4695cb40f30d3417d4920912fa1d59814ef366233a9f namespace=k8s.io Jun 20 18:56:13.787898 containerd[1909]: time="2025-06-20T18:56:13.787881068Z" level=warning msg="cleaning up after shim disconnected" id=f1f06e00de4bbb00918f4695cb40f30d3417d4920912fa1d59814ef366233a9f namespace=k8s.io Jun 20 18:56:13.787898 containerd[1909]: time="2025-06-20T18:56:13.787890544Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:56:13.927849 containerd[1909]: time="2025-06-20T18:56:13.927703789Z" level=info msg="CreateContainer within sandbox \"b06030244b4062ceadd22abfe23fc6b0e35b793a860e4e0df4ea4773cfc92d53\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 20 18:56:13.946841 containerd[1909]: time="2025-06-20T18:56:13.946697372Z" level=info msg="CreateContainer within sandbox \"b06030244b4062ceadd22abfe23fc6b0e35b793a860e4e0df4ea4773cfc92d53\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d19123899641679e3d38bb3f41542f5c2a0b28c487a7e8504e654125f6258eba\"" Jun 20 18:56:13.947721 containerd[1909]: time="2025-06-20T18:56:13.947640680Z" level=info msg="StartContainer for \"d19123899641679e3d38bb3f41542f5c2a0b28c487a7e8504e654125f6258eba\"" Jun 20 18:56:13.979440 systemd[1]: 
Started cri-containerd-d19123899641679e3d38bb3f41542f5c2a0b28c487a7e8504e654125f6258eba.scope - libcontainer container d19123899641679e3d38bb3f41542f5c2a0b28c487a7e8504e654125f6258eba. Jun 20 18:56:14.011124 containerd[1909]: time="2025-06-20T18:56:14.011075115Z" level=info msg="StartContainer for \"d19123899641679e3d38bb3f41542f5c2a0b28c487a7e8504e654125f6258eba\" returns successfully" Jun 20 18:56:14.024743 systemd[1]: cri-containerd-d19123899641679e3d38bb3f41542f5c2a0b28c487a7e8504e654125f6258eba.scope: Deactivated successfully. Jun 20 18:56:14.025973 systemd[1]: cri-containerd-d19123899641679e3d38bb3f41542f5c2a0b28c487a7e8504e654125f6258eba.scope: Consumed 20ms CPU time, 7.3M memory peak, 2.1M read from disk. Jun 20 18:56:14.081322 containerd[1909]: time="2025-06-20T18:56:14.081142988Z" level=info msg="shim disconnected" id=d19123899641679e3d38bb3f41542f5c2a0b28c487a7e8504e654125f6258eba namespace=k8s.io Jun 20 18:56:14.081322 containerd[1909]: time="2025-06-20T18:56:14.081211649Z" level=warning msg="cleaning up after shim disconnected" id=d19123899641679e3d38bb3f41542f5c2a0b28c487a7e8504e654125f6258eba namespace=k8s.io Jun 20 18:56:14.081322 containerd[1909]: time="2025-06-20T18:56:14.081221629Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:56:14.931263 containerd[1909]: time="2025-06-20T18:56:14.931053983Z" level=info msg="CreateContainer within sandbox \"b06030244b4062ceadd22abfe23fc6b0e35b793a860e4e0df4ea4773cfc92d53\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 20 18:56:14.959529 containerd[1909]: time="2025-06-20T18:56:14.959459952Z" level=info msg="CreateContainer within sandbox \"b06030244b4062ceadd22abfe23fc6b0e35b793a860e4e0df4ea4773cfc92d53\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"603b0f516d98157b6de38975c3c802d9e9405f67a934c62ae2060bdcd2c01828\"" Jun 20 18:56:14.960205 containerd[1909]: time="2025-06-20T18:56:14.960181019Z" level=info msg="StartContainer for 
\"603b0f516d98157b6de38975c3c802d9e9405f67a934c62ae2060bdcd2c01828\"" Jun 20 18:56:14.998734 systemd[1]: Started cri-containerd-603b0f516d98157b6de38975c3c802d9e9405f67a934c62ae2060bdcd2c01828.scope - libcontainer container 603b0f516d98157b6de38975c3c802d9e9405f67a934c62ae2060bdcd2c01828. Jun 20 18:56:15.048431 containerd[1909]: time="2025-06-20T18:56:15.048292403Z" level=info msg="StartContainer for \"603b0f516d98157b6de38975c3c802d9e9405f67a934c62ae2060bdcd2c01828\" returns successfully" Jun 20 18:56:15.200903 systemd[1]: cri-containerd-603b0f516d98157b6de38975c3c802d9e9405f67a934c62ae2060bdcd2c01828.scope: Deactivated successfully. Jun 20 18:56:15.238264 containerd[1909]: time="2025-06-20T18:56:15.238004971Z" level=info msg="shim disconnected" id=603b0f516d98157b6de38975c3c802d9e9405f67a934c62ae2060bdcd2c01828 namespace=k8s.io Jun 20 18:56:15.238264 containerd[1909]: time="2025-06-20T18:56:15.238068799Z" level=warning msg="cleaning up after shim disconnected" id=603b0f516d98157b6de38975c3c802d9e9405f67a934c62ae2060bdcd2c01828 namespace=k8s.io Jun 20 18:56:15.238264 containerd[1909]: time="2025-06-20T18:56:15.238079440Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:56:15.300937 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-603b0f516d98157b6de38975c3c802d9e9405f67a934c62ae2060bdcd2c01828-rootfs.mount: Deactivated successfully. 
Jun 20 18:56:15.447724 kubelet[3182]: E0620 18:56:15.447649 3182 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jun 20 18:56:15.936263 containerd[1909]: time="2025-06-20T18:56:15.934545070Z" level=info msg="CreateContainer within sandbox \"b06030244b4062ceadd22abfe23fc6b0e35b793a860e4e0df4ea4773cfc92d53\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 20 18:56:15.957275 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3200911992.mount: Deactivated successfully. Jun 20 18:56:15.961902 containerd[1909]: time="2025-06-20T18:56:15.961828737Z" level=info msg="CreateContainer within sandbox \"b06030244b4062ceadd22abfe23fc6b0e35b793a860e4e0df4ea4773cfc92d53\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bd241c19833093a5d735f4795ca2f29351c1fe36958dae6e381e4265503e8a77\"" Jun 20 18:56:15.962689 containerd[1909]: time="2025-06-20T18:56:15.962661887Z" level=info msg="StartContainer for \"bd241c19833093a5d735f4795ca2f29351c1fe36958dae6e381e4265503e8a77\"" Jun 20 18:56:15.998515 systemd[1]: Started cri-containerd-bd241c19833093a5d735f4795ca2f29351c1fe36958dae6e381e4265503e8a77.scope - libcontainer container bd241c19833093a5d735f4795ca2f29351c1fe36958dae6e381e4265503e8a77. Jun 20 18:56:16.025203 systemd[1]: cri-containerd-bd241c19833093a5d735f4795ca2f29351c1fe36958dae6e381e4265503e8a77.scope: Deactivated successfully. 
Jun 20 18:56:16.029676 containerd[1909]: time="2025-06-20T18:56:16.029574675Z" level=info msg="StartContainer for \"bd241c19833093a5d735f4795ca2f29351c1fe36958dae6e381e4265503e8a77\" returns successfully"
Jun 20 18:56:16.059030 containerd[1909]: time="2025-06-20T18:56:16.058968156Z" level=info msg="shim disconnected" id=bd241c19833093a5d735f4795ca2f29351c1fe36958dae6e381e4265503e8a77 namespace=k8s.io
Jun 20 18:56:16.059030 containerd[1909]: time="2025-06-20T18:56:16.059016362Z" level=warning msg="cleaning up after shim disconnected" id=bd241c19833093a5d735f4795ca2f29351c1fe36958dae6e381e4265503e8a77 namespace=k8s.io
Jun 20 18:56:16.059030 containerd[1909]: time="2025-06-20T18:56:16.059025344Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 18:56:16.301022 systemd[1]: run-containerd-runc-k8s.io-bd241c19833093a5d735f4795ca2f29351c1fe36958dae6e381e4265503e8a77-runc.bYU4fk.mount: Deactivated successfully.
Jun 20 18:56:16.301137 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd241c19833093a5d735f4795ca2f29351c1fe36958dae6e381e4265503e8a77-rootfs.mount: Deactivated successfully.
Jun 20 18:56:16.940471 containerd[1909]: time="2025-06-20T18:56:16.940202241Z" level=info msg="CreateContainer within sandbox \"b06030244b4062ceadd22abfe23fc6b0e35b793a860e4e0df4ea4773cfc92d53\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jun 20 18:56:16.969502 containerd[1909]: time="2025-06-20T18:56:16.969453987Z" level=info msg="CreateContainer within sandbox \"b06030244b4062ceadd22abfe23fc6b0e35b793a860e4e0df4ea4773cfc92d53\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d34e0a80155d5c06fb8b3c86b7e53c70eb7443b71cd7afb87a23c3e5baeddbe3\""
Jun 20 18:56:16.970220 containerd[1909]: time="2025-06-20T18:56:16.970193409Z" level=info msg="StartContainer for \"d34e0a80155d5c06fb8b3c86b7e53c70eb7443b71cd7afb87a23c3e5baeddbe3\""
Jun 20 18:56:17.005462 systemd[1]: Started cri-containerd-d34e0a80155d5c06fb8b3c86b7e53c70eb7443b71cd7afb87a23c3e5baeddbe3.scope - libcontainer container d34e0a80155d5c06fb8b3c86b7e53c70eb7443b71cd7afb87a23c3e5baeddbe3.
Jun 20 18:56:17.044453 containerd[1909]: time="2025-06-20T18:56:17.044407139Z" level=info msg="StartContainer for \"d34e0a80155d5c06fb8b3c86b7e53c70eb7443b71cd7afb87a23c3e5baeddbe3\" returns successfully"
Jun 20 18:56:17.301291 systemd[1]: run-containerd-runc-k8s.io-d34e0a80155d5c06fb8b3c86b7e53c70eb7443b71cd7afb87a23c3e5baeddbe3-runc.Tx3lRq.mount: Deactivated successfully.
Jun 20 18:56:17.458469 kubelet[3182]: I0620 18:56:17.456873 3182 setters.go:618] "Node became not ready" node="ip-172-31-22-222" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-06-20T18:56:17Z","lastTransitionTime":"2025-06-20T18:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jun 20 18:56:17.898266 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jun 20 18:56:20.726288 systemd[1]: run-containerd-runc-k8s.io-d34e0a80155d5c06fb8b3c86b7e53c70eb7443b71cd7afb87a23c3e5baeddbe3-runc.UjI454.mount: Deactivated successfully.
Jun 20 18:56:21.213029 (udev-worker)[6085]: Network interface NamePolicy= disabled on kernel command line.
Jun 20 18:56:21.220482 (udev-worker)[6088]: Network interface NamePolicy= disabled on kernel command line.
Jun 20 18:56:21.227814 systemd-networkd[1820]: lxc_health: Link UP
Jun 20 18:56:21.254417 systemd-networkd[1820]: lxc_health: Gained carrier
Jun 20 18:56:21.511037 kubelet[3182]: I0620 18:56:21.510163 3182 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5hf6h" podStartSLOduration=9.510143703 podStartE2EDuration="9.510143703s" podCreationTimestamp="2025-06-20 18:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:56:17.979777652 +0000 UTC m=+92.865505162" watchObservedRunningTime="2025-06-20 18:56:21.510143703 +0000 UTC m=+96.395871190"
Jun 20 18:56:22.643412 systemd-networkd[1820]: lxc_health: Gained IPv6LL
Jun 20 18:56:24.643102 ntpd[1881]: Listen normally on 15 lxc_health [fe80::241e:21ff:fea0:2b39%14]:123
Jun 20 18:56:24.643669 ntpd[1881]: 20 Jun 18:56:24 ntpd[1881]: Listen normally on 15 lxc_health [fe80::241e:21ff:fea0:2b39%14]:123
Jun 20 18:56:27.615152 sshd[5289]: Connection closed by 139.178.68.195 port 45970
Jun 20 18:56:27.617756 sshd-session[5248]: pam_unix(sshd:session): session closed for user core
Jun 20 18:56:27.621298 systemd[1]: sshd@25-172.31.22.222:22-139.178.68.195:45970.service: Deactivated successfully.
Jun 20 18:56:27.624319 systemd[1]: session-26.scope: Deactivated successfully.
Jun 20 18:56:27.626055 systemd-logind[1889]: Session 26 logged out. Waiting for processes to exit.
Jun 20 18:56:27.628468 systemd-logind[1889]: Removed session 26.
Jun 20 18:56:45.322364 containerd[1909]: time="2025-06-20T18:56:45.322189570Z" level=info msg="StopPodSandbox for \"0fa940997afd53d01e427a0ef732a99dd2e1097641bc2b983d4502b3738d5082\""
Jun 20 18:56:45.322364 containerd[1909]: time="2025-06-20T18:56:45.322299811Z" level=info msg="TearDown network for sandbox \"0fa940997afd53d01e427a0ef732a99dd2e1097641bc2b983d4502b3738d5082\" successfully"
Jun 20 18:56:45.322364 containerd[1909]: time="2025-06-20T18:56:45.322311273Z" level=info msg="StopPodSandbox for \"0fa940997afd53d01e427a0ef732a99dd2e1097641bc2b983d4502b3738d5082\" returns successfully"
Jun 20 18:56:45.322868 containerd[1909]: time="2025-06-20T18:56:45.322712474Z" level=info msg="RemovePodSandbox for \"0fa940997afd53d01e427a0ef732a99dd2e1097641bc2b983d4502b3738d5082\""
Jun 20 18:56:45.322868 containerd[1909]: time="2025-06-20T18:56:45.322753960Z" level=info msg="Forcibly stopping sandbox \"0fa940997afd53d01e427a0ef732a99dd2e1097641bc2b983d4502b3738d5082\""
Jun 20 18:56:45.322868 containerd[1909]: time="2025-06-20T18:56:45.322811417Z" level=info msg="TearDown network for sandbox \"0fa940997afd53d01e427a0ef732a99dd2e1097641bc2b983d4502b3738d5082\" successfully"
Jun 20 18:56:45.328036 containerd[1909]: time="2025-06-20T18:56:45.327976170Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0fa940997afd53d01e427a0ef732a99dd2e1097641bc2b983d4502b3738d5082\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jun 20 18:56:45.328036 containerd[1909]: time="2025-06-20T18:56:45.328037342Z" level=info msg="RemovePodSandbox \"0fa940997afd53d01e427a0ef732a99dd2e1097641bc2b983d4502b3738d5082\" returns successfully"
Jun 20 18:56:45.329091 containerd[1909]: time="2025-06-20T18:56:45.328820420Z" level=info msg="StopPodSandbox for \"3c04309e24a0e95311a61878825039691e625a29e530f732124a5673c465cd43\""
Jun 20 18:56:45.329091 containerd[1909]: time="2025-06-20T18:56:45.328898330Z" level=info msg="TearDown network for sandbox \"3c04309e24a0e95311a61878825039691e625a29e530f732124a5673c465cd43\" successfully"
Jun 20 18:56:45.329091 containerd[1909]: time="2025-06-20T18:56:45.328908281Z" level=info msg="StopPodSandbox for \"3c04309e24a0e95311a61878825039691e625a29e530f732124a5673c465cd43\" returns successfully"
Jun 20 18:56:45.329815 containerd[1909]: time="2025-06-20T18:56:45.329440204Z" level=info msg="RemovePodSandbox for \"3c04309e24a0e95311a61878825039691e625a29e530f732124a5673c465cd43\""
Jun 20 18:56:45.329815 containerd[1909]: time="2025-06-20T18:56:45.329460841Z" level=info msg="Forcibly stopping sandbox \"3c04309e24a0e95311a61878825039691e625a29e530f732124a5673c465cd43\""
Jun 20 18:56:45.329815 containerd[1909]: time="2025-06-20T18:56:45.329527604Z" level=info msg="TearDown network for sandbox \"3c04309e24a0e95311a61878825039691e625a29e530f732124a5673c465cd43\" successfully"
Jun 20 18:56:45.334916 containerd[1909]: time="2025-06-20T18:56:45.334864067Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3c04309e24a0e95311a61878825039691e625a29e530f732124a5673c465cd43\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jun 20 18:56:45.335068 containerd[1909]: time="2025-06-20T18:56:45.334932422Z" level=info msg="RemovePodSandbox \"3c04309e24a0e95311a61878825039691e625a29e530f732124a5673c465cd43\" returns successfully"
Jun 20 18:57:06.180256 systemd[1]: cri-containerd-3e952f6056298ca27fd50a331be39cee56d46e85a347fa5c04f036b07d226082.scope: Deactivated successfully.
Jun 20 18:57:06.180558 systemd[1]: cri-containerd-3e952f6056298ca27fd50a331be39cee56d46e85a347fa5c04f036b07d226082.scope: Consumed 4.710s CPU time, 89M memory peak, 51.2M read from disk.
Jun 20 18:57:06.210073 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e952f6056298ca27fd50a331be39cee56d46e85a347fa5c04f036b07d226082-rootfs.mount: Deactivated successfully.
Jun 20 18:57:06.230363 containerd[1909]: time="2025-06-20T18:57:06.230307387Z" level=info msg="shim disconnected" id=3e952f6056298ca27fd50a331be39cee56d46e85a347fa5c04f036b07d226082 namespace=k8s.io
Jun 20 18:57:06.230817 containerd[1909]: time="2025-06-20T18:57:06.230406668Z" level=warning msg="cleaning up after shim disconnected" id=3e952f6056298ca27fd50a331be39cee56d46e85a347fa5c04f036b07d226082 namespace=k8s.io
Jun 20 18:57:06.230817 containerd[1909]: time="2025-06-20T18:57:06.230429472Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 18:57:07.085620 kubelet[3182]: I0620 18:57:07.085375 3182 scope.go:117] "RemoveContainer" containerID="3e952f6056298ca27fd50a331be39cee56d46e85a347fa5c04f036b07d226082"
Jun 20 18:57:07.088712 containerd[1909]: time="2025-06-20T18:57:07.088663919Z" level=info msg="CreateContainer within sandbox \"e9dc5aa8a59147febabfb07b26e613f3df756fee168a136817b50cad9b3c4a71\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jun 20 18:57:07.153867 containerd[1909]: time="2025-06-20T18:57:07.153823883Z" level=info msg="CreateContainer within sandbox \"e9dc5aa8a59147febabfb07b26e613f3df756fee168a136817b50cad9b3c4a71\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"1da2aa5de4e190dd43b9874c3f6fd19630e2e590981c246cdd966b0d73aef5ee\""
Jun 20 18:57:07.154449 containerd[1909]: time="2025-06-20T18:57:07.154406504Z" level=info msg="StartContainer for \"1da2aa5de4e190dd43b9874c3f6fd19630e2e590981c246cdd966b0d73aef5ee\""
Jun 20 18:57:07.194467 systemd[1]: Started cri-containerd-1da2aa5de4e190dd43b9874c3f6fd19630e2e590981c246cdd966b0d73aef5ee.scope - libcontainer container 1da2aa5de4e190dd43b9874c3f6fd19630e2e590981c246cdd966b0d73aef5ee.
Jun 20 18:57:07.211030 systemd[1]: run-containerd-runc-k8s.io-1da2aa5de4e190dd43b9874c3f6fd19630e2e590981c246cdd966b0d73aef5ee-runc.1mT6O6.mount: Deactivated successfully.
Jun 20 18:57:07.251212 containerd[1909]: time="2025-06-20T18:57:07.251158720Z" level=info msg="StartContainer for \"1da2aa5de4e190dd43b9874c3f6fd19630e2e590981c246cdd966b0d73aef5ee\" returns successfully"
Jun 20 18:57:08.565124 kubelet[3182]: E0620 18:57:08.565054 3182 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-222?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jun 20 18:57:10.239220 systemd[1]: cri-containerd-bda610c5481325e3af2435337cde09014a0c082a2614677d2269bd17dc0719d8.scope: Deactivated successfully.
Jun 20 18:57:10.239515 systemd[1]: cri-containerd-bda610c5481325e3af2435337cde09014a0c082a2614677d2269bd17dc0719d8.scope: Consumed 2.612s CPU time, 30.4M memory peak, 13.8M read from disk.
Jun 20 18:57:10.269554 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bda610c5481325e3af2435337cde09014a0c082a2614677d2269bd17dc0719d8-rootfs.mount: Deactivated successfully.
Jun 20 18:57:10.294749 containerd[1909]: time="2025-06-20T18:57:10.294644994Z" level=info msg="shim disconnected" id=bda610c5481325e3af2435337cde09014a0c082a2614677d2269bd17dc0719d8 namespace=k8s.io
Jun 20 18:57:10.294749 containerd[1909]: time="2025-06-20T18:57:10.294698741Z" level=warning msg="cleaning up after shim disconnected" id=bda610c5481325e3af2435337cde09014a0c082a2614677d2269bd17dc0719d8 namespace=k8s.io
Jun 20 18:57:10.294749 containerd[1909]: time="2025-06-20T18:57:10.294707098Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 18:57:11.111401 kubelet[3182]: I0620 18:57:11.111366 3182 scope.go:117] "RemoveContainer" containerID="bda610c5481325e3af2435337cde09014a0c082a2614677d2269bd17dc0719d8"
Jun 20 18:57:11.114015 containerd[1909]: time="2025-06-20T18:57:11.113979664Z" level=info msg="CreateContainer within sandbox \"d14b85f0fa1e603756ab926edebc60b96cf04cdfb59f00cdc7c8784ac8bb42d2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jun 20 18:57:11.134619 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1254357388.mount: Deactivated successfully.
Jun 20 18:57:11.147371 containerd[1909]: time="2025-06-20T18:57:11.147314321Z" level=info msg="CreateContainer within sandbox \"d14b85f0fa1e603756ab926edebc60b96cf04cdfb59f00cdc7c8784ac8bb42d2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"4a4ed3577e1d7cdf69fd7a50d9bd826fafbf57831f8b31e499d2be3fc074b892\""
Jun 20 18:57:11.149967 containerd[1909]: time="2025-06-20T18:57:11.149930107Z" level=info msg="StartContainer for \"4a4ed3577e1d7cdf69fd7a50d9bd826fafbf57831f8b31e499d2be3fc074b892\""
Jun 20 18:57:11.188465 systemd[1]: Started cri-containerd-4a4ed3577e1d7cdf69fd7a50d9bd826fafbf57831f8b31e499d2be3fc074b892.scope - libcontainer container 4a4ed3577e1d7cdf69fd7a50d9bd826fafbf57831f8b31e499d2be3fc074b892.
Jun 20 18:57:11.237229 containerd[1909]: time="2025-06-20T18:57:11.237178509Z" level=info msg="StartContainer for \"4a4ed3577e1d7cdf69fd7a50d9bd826fafbf57831f8b31e499d2be3fc074b892\" returns successfully"
Jun 20 18:57:18.565840 kubelet[3182]: E0620 18:57:18.565673 3182 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-222?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"