Apr 30 12:49:43.939038 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Tue Apr 29 22:26:36 -00 2025
Apr 30 12:49:43.939073 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=95dd3de5eb34971546a976dc51c66bc73cf59b888896e27767c0cbf245cb98fe
Apr 30 12:49:43.939092 kernel: BIOS-provided physical RAM map:
Apr 30 12:49:43.939104 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 30 12:49:43.939115 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Apr 30 12:49:43.939126 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Apr 30 12:49:43.940740 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Apr 30 12:49:43.940771 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Apr 30 12:49:43.940784 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Apr 30 12:49:43.940797 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Apr 30 12:49:43.940816 kernel: NX (Execute Disable) protection: active
Apr 30 12:49:43.940828 kernel: APIC: Static calls initialized
Apr 30 12:49:43.940841 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable
Apr 30 12:49:43.940855 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable
Apr 30 12:49:43.940871 kernel: extended physical RAM map:
Apr 30 12:49:43.940885 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 30 12:49:43.940902 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable
Apr 30 12:49:43.940917 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable
Apr 30 12:49:43.940931 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable
Apr 30 12:49:43.940945 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Apr 30 12:49:43.940959 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Apr 30 12:49:43.940973 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Apr 30 12:49:43.940987 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable
Apr 30 12:49:43.941001 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Apr 30 12:49:43.941014 kernel: efi: EFI v2.7 by EDK II
Apr 30 12:49:43.941040 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518
Apr 30 12:49:43.941058 kernel: secureboot: Secure boot disabled
Apr 30 12:49:43.941071 kernel: SMBIOS 2.7 present.
Apr 30 12:49:43.941085 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Apr 30 12:49:43.941099 kernel: Hypervisor detected: KVM
Apr 30 12:49:43.941113 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 30 12:49:43.941127 kernel: kvm-clock: using sched offset of 3897441029 cycles
Apr 30 12:49:43.941190 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 30 12:49:43.941207 kernel: tsc: Detected 2499.996 MHz processor
Apr 30 12:49:43.941224 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 30 12:49:43.941237 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 30 12:49:43.941249 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Apr 30 12:49:43.941265 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 30 12:49:43.941278 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 30 12:49:43.941290 kernel: Using GB pages for direct mapping
Apr 30 12:49:43.941309 kernel: ACPI: Early table checksum verification disabled
Apr 30 12:49:43.941322 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Apr 30 12:49:43.941336 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Apr 30 12:49:43.941353 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Apr 30 12:49:43.941366 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Apr 30 12:49:43.941380 kernel: ACPI: FACS 0x00000000789D0000 000040
Apr 30 12:49:43.941394 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Apr 30 12:49:43.941408 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Apr 30 12:49:43.941422 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Apr 30 12:49:43.941435 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Apr 30 12:49:43.941452 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Apr 30 12:49:43.941466 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Apr 30 12:49:43.941480 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Apr 30 12:49:43.941495 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Apr 30 12:49:43.941509 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Apr 30 12:49:43.941523 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Apr 30 12:49:43.941537 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Apr 30 12:49:43.941551 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Apr 30 12:49:43.941565 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Apr 30 12:49:43.941581 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Apr 30 12:49:43.941596 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Apr 30 12:49:43.941610 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Apr 30 12:49:43.941624 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Apr 30 12:49:43.941638 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Apr 30 12:49:43.941652 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Apr 30 12:49:43.941666 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Apr 30 12:49:43.941680 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Apr 30 12:49:43.941694 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Apr 30 12:49:43.941708 kernel: NUMA: Initialized distance table, cnt=1
Apr 30 12:49:43.941725 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff]
Apr 30 12:49:43.941738 kernel: Zone ranges:
Apr 30 12:49:43.941753 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 30 12:49:43.941767 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Apr 30 12:49:43.941781 kernel: Normal empty
Apr 30 12:49:43.941795 kernel: Movable zone start for each node
Apr 30 12:49:43.941809 kernel: Early memory node ranges
Apr 30 12:49:43.941823 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 30 12:49:43.941837 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Apr 30 12:49:43.941855 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Apr 30 12:49:43.941870 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Apr 30 12:49:43.941884 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 30 12:49:43.941898 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 30 12:49:43.941913 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Apr 30 12:49:43.941928 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Apr 30 12:49:43.941941 kernel: ACPI: PM-Timer IO Port: 0xb008
Apr 30 12:49:43.941955 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 30 12:49:43.941969 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Apr 30 12:49:43.941986 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 30 12:49:43.942000 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 30 12:49:43.942015 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 30 12:49:43.942029 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 30 12:49:43.942044 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 30 12:49:43.942058 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 30 12:49:43.942073 kernel: TSC deadline timer available
Apr 30 12:49:43.942087 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 30 12:49:43.942102 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 30 12:49:43.942117 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Apr 30 12:49:43.942134 kernel: Booting paravirtualized kernel on KVM
Apr 30 12:49:43.942163 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 30 12:49:43.942178 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 30 12:49:43.942192 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Apr 30 12:49:43.942207 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Apr 30 12:49:43.942222 kernel: pcpu-alloc: [0] 0 1
Apr 30 12:49:43.942237 kernel: kvm-guest: PV spinlocks enabled
Apr 30 12:49:43.942252 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 30 12:49:43.942274 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=95dd3de5eb34971546a976dc51c66bc73cf59b888896e27767c0cbf245cb98fe
Apr 30 12:49:43.942290 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 30 12:49:43.942305 kernel: random: crng init done
Apr 30 12:49:43.942318 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 30 12:49:43.942334 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 30 12:49:43.942349 kernel: Fallback order for Node 0: 0
Apr 30 12:49:43.942363 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Apr 30 12:49:43.942378 kernel: Policy zone: DMA32
Apr 30 12:49:43.942396 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 30 12:49:43.942412 kernel: Memory: 1872536K/2037804K available (14336K kernel code, 2295K rwdata, 22864K rodata, 43484K init, 1592K bss, 165012K reserved, 0K cma-reserved)
Apr 30 12:49:43.942428 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 30 12:49:43.942443 kernel: Kernel/User page tables isolation: enabled
Apr 30 12:49:43.942458 kernel: ftrace: allocating 37918 entries in 149 pages
Apr 30 12:49:43.942485 kernel: ftrace: allocated 149 pages with 4 groups
Apr 30 12:49:43.942503 kernel: Dynamic Preempt: voluntary
Apr 30 12:49:43.942519 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 30 12:49:43.942536 kernel: rcu: RCU event tracing is enabled.
Apr 30 12:49:43.942552 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 30 12:49:43.942568 kernel: Trampoline variant of Tasks RCU enabled.
Apr 30 12:49:43.942584 kernel: Rude variant of Tasks RCU enabled.
Apr 30 12:49:43.942603 kernel: Tracing variant of Tasks RCU enabled.
Apr 30 12:49:43.942618 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 30 12:49:43.942634 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 30 12:49:43.942650 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 30 12:49:43.942667 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 30 12:49:43.942686 kernel: Console: colour dummy device 80x25
Apr 30 12:49:43.942701 kernel: printk: console [tty0] enabled
Apr 30 12:49:43.942717 kernel: printk: console [ttyS0] enabled
Apr 30 12:49:43.942733 kernel: ACPI: Core revision 20230628
Apr 30 12:49:43.942749 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Apr 30 12:49:43.942765 kernel: APIC: Switch to symmetric I/O mode setup
Apr 30 12:49:43.942781 kernel: x2apic enabled
Apr 30 12:49:43.942797 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 30 12:49:43.942813 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Apr 30 12:49:43.942833 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Apr 30 12:49:43.942849 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Apr 30 12:49:43.942865 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Apr 30 12:49:43.942881 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 30 12:49:43.942897 kernel: Spectre V2 : Mitigation: Retpolines
Apr 30 12:49:43.942913 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Apr 30 12:49:43.942928 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Apr 30 12:49:43.942944 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 30 12:49:43.942960 kernel: RETBleed: Vulnerable
Apr 30 12:49:43.942975 kernel: Speculative Store Bypass: Vulnerable
Apr 30 12:49:43.942994 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 30 12:49:43.943009 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 30 12:49:43.943025 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 30 12:49:43.943040 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 30 12:49:43.943056 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 30 12:49:43.943072 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 30 12:49:43.943087 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Apr 30 12:49:43.943103 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Apr 30 12:49:43.943119 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 30 12:49:43.943134 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 30 12:49:43.945194 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 30 12:49:43.945222 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Apr 30 12:49:43.945239 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 30 12:49:43.945254 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Apr 30 12:49:43.945270 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Apr 30 12:49:43.945285 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Apr 30 12:49:43.945301 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Apr 30 12:49:43.945317 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Apr 30 12:49:43.945332 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Apr 30 12:49:43.945347 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Apr 30 12:49:43.945363 kernel: Freeing SMP alternatives memory: 32K
Apr 30 12:49:43.945378 kernel: pid_max: default: 32768 minimum: 301
Apr 30 12:49:43.945394 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 30 12:49:43.945413 kernel: landlock: Up and running.
Apr 30 12:49:43.945428 kernel: SELinux: Initializing.
Apr 30 12:49:43.945444 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 30 12:49:43.945460 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 30 12:49:43.945476 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Apr 30 12:49:43.945492 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 12:49:43.945509 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 12:49:43.945525 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 12:49:43.945541 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Apr 30 12:49:43.945560 kernel: signal: max sigframe size: 3632
Apr 30 12:49:43.945576 kernel: rcu: Hierarchical SRCU implementation.
Apr 30 12:49:43.945593 kernel: rcu: Max phase no-delay instances is 400.
Apr 30 12:49:43.945608 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 30 12:49:43.945624 kernel: smp: Bringing up secondary CPUs ...
Apr 30 12:49:43.945640 kernel: smpboot: x86: Booting SMP configuration:
Apr 30 12:49:43.945656 kernel: .... node #0, CPUs: #1
Apr 30 12:49:43.945673 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Apr 30 12:49:43.945690 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Apr 30 12:49:43.945709 kernel: smp: Brought up 1 node, 2 CPUs
Apr 30 12:49:43.945724 kernel: smpboot: Max logical packages: 1
Apr 30 12:49:43.945740 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Apr 30 12:49:43.945756 kernel: devtmpfs: initialized
Apr 30 12:49:43.945772 kernel: x86/mm: Memory block size: 128MB
Apr 30 12:49:43.945788 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Apr 30 12:49:43.945804 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 30 12:49:43.945820 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 30 12:49:43.945836 kernel: pinctrl core: initialized pinctrl subsystem
Apr 30 12:49:43.945855 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 30 12:49:43.945871 kernel: audit: initializing netlink subsys (disabled)
Apr 30 12:49:43.945887 kernel: audit: type=2000 audit(1746017383.224:1): state=initialized audit_enabled=0 res=1
Apr 30 12:49:43.945903 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 30 12:49:43.945919 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 30 12:49:43.945934 kernel: cpuidle: using governor menu
Apr 30 12:49:43.945951 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 30 12:49:43.945967 kernel: dca service started, version 1.12.1
Apr 30 12:49:43.945983 kernel: PCI: Using configuration type 1 for base access
Apr 30 12:49:43.946002 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 30 12:49:43.946019 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 30 12:49:43.946035 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 30 12:49:43.946051 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 30 12:49:43.946067 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 30 12:49:43.946082 kernel: ACPI: Added _OSI(Module Device)
Apr 30 12:49:43.946098 kernel: ACPI: Added _OSI(Processor Device)
Apr 30 12:49:43.946114 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 30 12:49:43.946131 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 30 12:49:43.946167 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Apr 30 12:49:43.946182 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 30 12:49:43.946197 kernel: ACPI: Interpreter enabled
Apr 30 12:49:43.946213 kernel: ACPI: PM: (supports S0 S5)
Apr 30 12:49:43.946229 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 30 12:49:43.946245 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 30 12:49:43.946262 kernel: PCI: Using E820 reservations for host bridge windows
Apr 30 12:49:43.946279 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Apr 30 12:49:43.946296 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 30 12:49:43.946527 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Apr 30 12:49:43.946689 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Apr 30 12:49:43.946826 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Apr 30 12:49:43.946845 kernel: acpiphp: Slot [3] registered
Apr 30 12:49:43.946862 kernel: acpiphp: Slot [4] registered
Apr 30 12:49:43.946878 kernel: acpiphp: Slot [5] registered
Apr 30 12:49:43.946894 kernel: acpiphp: Slot [6] registered
Apr 30 12:49:43.946914 kernel: acpiphp: Slot [7] registered
Apr 30 12:49:43.946930 kernel: acpiphp: Slot [8] registered
Apr 30 12:49:43.946946 kernel: acpiphp: Slot [9] registered
Apr 30 12:49:43.946962 kernel: acpiphp: Slot [10] registered
Apr 30 12:49:43.946978 kernel: acpiphp: Slot [11] registered
Apr 30 12:49:43.946994 kernel: acpiphp: Slot [12] registered
Apr 30 12:49:43.947011 kernel: acpiphp: Slot [13] registered
Apr 30 12:49:43.947026 kernel: acpiphp: Slot [14] registered
Apr 30 12:49:43.947042 kernel: acpiphp: Slot [15] registered
Apr 30 12:49:43.947058 kernel: acpiphp: Slot [16] registered
Apr 30 12:49:43.947077 kernel: acpiphp: Slot [17] registered
Apr 30 12:49:43.947093 kernel: acpiphp: Slot [18] registered
Apr 30 12:49:43.947109 kernel: acpiphp: Slot [19] registered
Apr 30 12:49:43.947125 kernel: acpiphp: Slot [20] registered
Apr 30 12:49:43.950219 kernel: acpiphp: Slot [21] registered
Apr 30 12:49:43.950252 kernel: acpiphp: Slot [22] registered
Apr 30 12:49:43.950269 kernel: acpiphp: Slot [23] registered
Apr 30 12:49:43.950286 kernel: acpiphp: Slot [24] registered
Apr 30 12:49:43.950301 kernel: acpiphp: Slot [25] registered
Apr 30 12:49:43.950323 kernel: acpiphp: Slot [26] registered
Apr 30 12:49:43.950339 kernel: acpiphp: Slot [27] registered
Apr 30 12:49:43.950354 kernel: acpiphp: Slot [28] registered
Apr 30 12:49:43.950370 kernel: acpiphp: Slot [29] registered
Apr 30 12:49:43.950386 kernel: acpiphp: Slot [30] registered
Apr 30 12:49:43.950401 kernel: acpiphp: Slot [31] registered
Apr 30 12:49:43.950417 kernel: PCI host bridge to bus 0000:00
Apr 30 12:49:43.950601 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 30 12:49:43.950730 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 30 12:49:43.950859 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 30 12:49:43.950983 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Apr 30 12:49:43.951107 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Apr 30 12:49:43.951251 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 30 12:49:43.951407 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Apr 30 12:49:43.951555 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Apr 30 12:49:43.951706 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Apr 30 12:49:43.951840 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Apr 30 12:49:43.951985 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Apr 30 12:49:43.952122 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Apr 30 12:49:43.954336 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Apr 30 12:49:43.954489 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Apr 30 12:49:43.954632 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Apr 30 12:49:43.954776 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Apr 30 12:49:43.954927 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Apr 30 12:49:43.955070 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Apr 30 12:49:43.955239 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Apr 30 12:49:43.955379 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Apr 30 12:49:43.955517 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 30 12:49:43.955667 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Apr 30 12:49:43.955812 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Apr 30 12:49:43.955959 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Apr 30 12:49:43.956100 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Apr 30 12:49:43.956121 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 30 12:49:43.956139 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 30 12:49:43.957254 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 30 12:49:43.957274 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 30 12:49:43.957299 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Apr 30 12:49:43.957317 kernel: iommu: Default domain type: Translated
Apr 30 12:49:43.957335 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 30 12:49:43.957352 kernel: efivars: Registered efivars operations
Apr 30 12:49:43.957370 kernel: PCI: Using ACPI for IRQ routing
Apr 30 12:49:43.957388 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 30 12:49:43.957405 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff]
Apr 30 12:49:43.957422 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Apr 30 12:49:43.957439 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Apr 30 12:49:43.957639 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Apr 30 12:49:43.957787 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Apr 30 12:49:43.957928 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 30 12:49:43.957948 kernel: vgaarb: loaded
Apr 30 12:49:43.957963 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Apr 30 12:49:43.957980 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Apr 30 12:49:43.957996 kernel: clocksource: Switched to clocksource kvm-clock
Apr 30 12:49:43.958012 kernel: VFS: Disk quotas dquot_6.6.0
Apr 30 12:49:43.958029 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 30 12:49:43.958050 kernel: pnp: PnP ACPI init
Apr 30 12:49:43.958066 kernel: pnp: PnP ACPI: found 5 devices
Apr 30 12:49:43.958082 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 30 12:49:43.958097 kernel: NET: Registered PF_INET protocol family
Apr 30 12:49:43.958110 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 30 12:49:43.958124 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Apr 30 12:49:43.958139 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 30 12:49:43.958191 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 30 12:49:43.958211 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Apr 30 12:49:43.958225 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Apr 30 12:49:43.958238 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 30 12:49:43.958252 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 30 12:49:43.958266 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 30 12:49:43.958282 kernel: NET: Registered PF_XDP protocol family
Apr 30 12:49:43.958430 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 30 12:49:43.958555 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 30 12:49:43.958679 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 30 12:49:43.958805 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Apr 30 12:49:43.958928 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Apr 30 12:49:43.959073 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Apr 30 12:49:43.959094 kernel: PCI: CLS 0 bytes, default 64
Apr 30 12:49:43.959111 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 30 12:49:43.959127 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Apr 30 12:49:43.962837 kernel: clocksource: Switched to clocksource tsc
Apr 30 12:49:43.962860 kernel: Initialise system trusted keyrings
Apr 30 12:49:43.962883 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Apr 30 12:49:43.962899 kernel: Key type asymmetric registered
Apr 30 12:49:43.962914 kernel: Asymmetric key parser 'x509' registered
Apr 30 12:49:43.962930 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 30 12:49:43.962946 kernel: io scheduler mq-deadline registered
Apr 30 12:49:43.962961 kernel: io scheduler kyber registered
Apr 30 12:49:43.962977 kernel: io scheduler bfq registered
Apr 30 12:49:43.962993 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 30 12:49:43.963008 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 30 12:49:43.963024 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 30 12:49:43.963043 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 30 12:49:43.963059 kernel: i8042: Warning: Keylock active
Apr 30 12:49:43.963074 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 30 12:49:43.963090 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 30 12:49:43.963282 kernel: rtc_cmos 00:00: RTC can wake from S4
Apr 30 12:49:43.963415 kernel: rtc_cmos 00:00: registered as rtc0
Apr 30 12:49:43.963541 kernel: rtc_cmos 00:00: setting system clock to 2025-04-30T12:49:43 UTC (1746017383)
Apr 30 12:49:43.963670 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Apr 30 12:49:43.963689 kernel: intel_pstate: CPU model not supported
Apr 30 12:49:43.963704 kernel: efifb: probing for efifb
Apr 30 12:49:43.963720 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k
Apr 30 12:49:43.963758 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Apr 30 12:49:43.963778 kernel: efifb: scrolling: redraw
Apr 30 12:49:43.963794 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 30 12:49:43.963810 kernel: Console: switching to colour frame buffer device 100x37
Apr 30 12:49:43.963827 kernel: fb0: EFI VGA frame buffer device
Apr 30 12:49:43.963846 kernel: pstore: Using crash dump compression: deflate
Apr 30 12:49:43.963862 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 30 12:49:43.963879 kernel: NET: Registered PF_INET6 protocol family
Apr 30 12:49:43.963895 kernel: Segment Routing with IPv6
Apr 30 12:49:43.963914 kernel: In-situ OAM (IOAM) with IPv6
Apr 30 12:49:43.963931 kernel: NET: Registered PF_PACKET protocol family
Apr 30 12:49:43.963947 kernel: Key type dns_resolver registered
Apr 30 12:49:43.963963 kernel: IPI shorthand broadcast: enabled
Apr 30 12:49:43.963980 kernel: sched_clock: Marking stable (467002549, 138553445)->(674531749, -68975755)
Apr 30 12:49:43.963999 kernel: registered taskstats version 1
Apr 30 12:49:43.964016 kernel: Loading compiled-in X.509 certificates
Apr 30 12:49:43.964032 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 10d2d341d26c1df942e743344427c053ef3a2a5f'
Apr 30 12:49:43.964049 kernel: Key type .fscrypt registered
Apr 30 12:49:43.964064 kernel: Key type fscrypt-provisioning registered
Apr 30 12:49:43.964081 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 30 12:49:43.964097 kernel: ima: Allocated hash algorithm: sha1
Apr 30 12:49:43.964113 kernel: ima: No architecture policies found
Apr 30 12:49:43.964130 kernel: clk: Disabling unused clocks
Apr 30 12:49:43.964160 kernel: Freeing unused kernel image (initmem) memory: 43484K
Apr 30 12:49:43.964177 kernel: Write protecting the kernel read-only data: 38912k
Apr 30 12:49:43.964194 kernel: Freeing unused kernel image (rodata/data gap) memory: 1712K
Apr 30 12:49:43.964211 kernel: Run /init as init process
Apr 30 12:49:43.964227 kernel: with arguments:
Apr 30 12:49:43.964243 kernel: /init
Apr 30 12:49:43.964259 kernel: with environment:
Apr 30 12:49:43.964275 kernel: HOME=/
Apr 30 12:49:43.964291 kernel: TERM=linux
Apr 30 12:49:43.964312 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Apr 30 12:49:43.964330 systemd[1]: Successfully made /usr/ read-only.
Apr 30 12:49:43.964351 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 30 12:49:43.964369 systemd[1]: Detected virtualization amazon.
Apr 30 12:49:43.964386 systemd[1]: Detected architecture x86-64.
Apr 30 12:49:43.964406 systemd[1]: Running in initrd.
Apr 30 12:49:43.964424 systemd[1]: No hostname configured, using default hostname.
Apr 30 12:49:43.964442 systemd[1]: Hostname set to .
Apr 30 12:49:43.964458 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 12:49:43.964476 systemd[1]: Queued start job for default target initrd.target.
Apr 30 12:49:43.964493 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 12:49:43.964510 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 12:49:43.964531 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 30 12:49:43.964549 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 12:49:43.964566 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 30 12:49:43.964585 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 30 12:49:43.964604 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 30 12:49:43.964622 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 30 12:49:43.964640 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 12:49:43.964660 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 12:49:43.964677 systemd[1]: Reached target paths.target - Path Units.
Apr 30 12:49:43.964695 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 12:49:43.964712 systemd[1]: Reached target swap.target - Swaps.
Apr 30 12:49:43.964730 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 12:49:43.964747 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 12:49:43.964765 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 12:49:43.964783 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 30 12:49:43.964800 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Apr 30 12:49:43.964821 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 12:49:43.964838 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 12:49:43.964855 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 12:49:43.964873 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 12:49:43.964891 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 30 12:49:43.964908 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 12:49:43.964925 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 30 12:49:43.964943 systemd[1]: Starting systemd-fsck-usr.service...
Apr 30 12:49:43.964963 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 12:49:43.964981 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 12:49:43.964998 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 12:49:43.965016 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 30 12:49:43.965097 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 12:49:43.965677 systemd-journald[179]: Collecting audit messages is disabled. Apr 30 12:49:43.965727 systemd[1]: Finished systemd-fsck-usr.service. Apr 30 12:49:43.965746 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 30 12:49:43.965767 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 12:49:43.965788 systemd-journald[179]: Journal started Apr 30 12:49:43.965822 systemd-journald[179]: Runtime Journal (/run/log/journal/ec243e0a1bb539fd6563f43dbdd3c248) is 4.7M, max 38.1M, 33.4M free. Apr 30 12:49:43.933886 systemd-modules-load[180]: Inserted module 'overlay' Apr 30 12:49:43.973162 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 12:49:43.989176 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 30 12:49:43.989482 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 12:49:43.994935 kernel: Bridge firewalling registered Apr 30 12:49:43.992250 systemd-modules-load[180]: Inserted module 'br_netfilter' Apr 30 12:49:43.993158 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 12:49:43.995781 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 12:49:43.999757 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 12:49:44.004381 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 30 12:49:44.012287 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 12:49:44.018047 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Apr 30 12:49:44.022221 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 12:49:44.029197 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 12:49:44.032356 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 12:49:44.035940 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 30 12:49:44.043356 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 12:49:44.046466 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 12:49:44.062003 dracut-cmdline[214]: dracut-dracut-053 Apr 30 12:49:44.065428 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=95dd3de5eb34971546a976dc51c66bc73cf59b888896e27767c0cbf245cb98fe Apr 30 12:49:44.094726 systemd-resolved[215]: Positive Trust Anchors: Apr 30 12:49:44.095764 systemd-resolved[215]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 12:49:44.095833 systemd-resolved[215]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 12:49:44.102381 systemd-resolved[215]: Defaulting to hostname 'linux'. Apr 30 12:49:44.105818 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 12:49:44.106822 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 12:49:44.150182 kernel: SCSI subsystem initialized Apr 30 12:49:44.160169 kernel: Loading iSCSI transport class v2.0-870. Apr 30 12:49:44.172180 kernel: iscsi: registered transport (tcp) Apr 30 12:49:44.193548 kernel: iscsi: registered transport (qla4xxx) Apr 30 12:49:44.193632 kernel: QLogic iSCSI HBA Driver Apr 30 12:49:44.230715 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 30 12:49:44.235374 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 30 12:49:44.262263 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Apr 30 12:49:44.262339 kernel: device-mapper: uevent: version 1.0.3 Apr 30 12:49:44.262363 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 30 12:49:44.304205 kernel: raid6: avx512x4 gen() 18357 MB/s Apr 30 12:49:44.322190 kernel: raid6: avx512x2 gen() 18202 MB/s Apr 30 12:49:44.340191 kernel: raid6: avx512x1 gen() 18181 MB/s Apr 30 12:49:44.358170 kernel: raid6: avx2x4 gen() 18039 MB/s Apr 30 12:49:44.376170 kernel: raid6: avx2x2 gen() 18060 MB/s Apr 30 12:49:44.393400 kernel: raid6: avx2x1 gen() 13797 MB/s Apr 30 12:49:44.393445 kernel: raid6: using algorithm avx512x4 gen() 18357 MB/s Apr 30 12:49:44.413259 kernel: raid6: .... xor() 8032 MB/s, rmw enabled Apr 30 12:49:44.413307 kernel: raid6: using avx512x2 recovery algorithm Apr 30 12:49:44.435181 kernel: xor: automatically using best checksumming function avx Apr 30 12:49:44.589175 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 30 12:49:44.599025 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 30 12:49:44.609365 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 12:49:44.624125 systemd-udevd[399]: Using default interface naming scheme 'v255'. Apr 30 12:49:44.630019 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 12:49:44.641331 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 30 12:49:44.656515 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation Apr 30 12:49:44.685295 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 12:49:44.690860 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 12:49:44.741691 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 12:49:44.751865 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Apr 30 12:49:44.772742 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 30 12:49:44.777605 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 12:49:44.780093 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 12:49:44.780609 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 12:49:44.788449 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 30 12:49:44.815949 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 30 12:49:44.834334 kernel: ena 0000:00:05.0: ENA device version: 0.10 Apr 30 12:49:44.860563 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Apr 30 12:49:44.860786 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Apr 30 12:49:44.860974 kernel: cryptd: max_cpu_qlen set to 1000 Apr 30 12:49:44.860997 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:8d:73:45:a6:f3 Apr 30 12:49:44.870534 (udev-worker)[452]: Network interface NamePolicy= disabled on kernel command line. Apr 30 12:49:44.878240 kernel: nvme nvme0: pci function 0000:00:04.0 Apr 30 12:49:44.878513 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Apr 30 12:49:44.882995 kernel: AVX2 version of gcm_enc/dec engaged. Apr 30 12:49:44.883058 kernel: AES CTR mode by8 optimization enabled Apr 30 12:49:44.896173 kernel: nvme nvme0: 2/0/0 default/read/poll queues Apr 30 12:49:44.910637 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 30 12:49:44.910711 kernel: GPT:9289727 != 16777215 Apr 30 12:49:44.910732 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 30 12:49:44.910749 kernel: GPT:9289727 != 16777215 Apr 30 12:49:44.910765 kernel: GPT: Use GNU Parted to correct GPT errors. 
Apr 30 12:49:44.910783 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 30 12:49:44.910756 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 12:49:44.910947 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 12:49:44.913308 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 12:49:44.913871 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 12:49:44.914027 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 12:49:44.917017 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 12:49:44.923340 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 12:49:44.924933 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Apr 30 12:49:44.943315 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 12:49:44.951359 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 12:49:44.975188 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 12:49:45.014176 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (446) Apr 30 12:49:45.019165 kernel: BTRFS: device fsid 0778af4c-f6f8-4118-a0d2-fb24d73f5df4 devid 1 transid 40 /dev/nvme0n1p3 scanned by (udev-worker) (445) Apr 30 12:49:45.043753 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Apr 30 12:49:45.067934 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Apr 30 12:49:45.093468 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. 
Apr 30 12:49:45.094062 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Apr 30 12:49:45.106188 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Apr 30 12:49:45.117414 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 30 12:49:45.124085 disk-uuid[628]: Primary Header is updated. Apr 30 12:49:45.124085 disk-uuid[628]: Secondary Entries is updated. Apr 30 12:49:45.124085 disk-uuid[628]: Secondary Header is updated. Apr 30 12:49:45.129170 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 30 12:49:45.148172 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 30 12:49:46.143121 disk-uuid[629]: The operation has completed successfully. Apr 30 12:49:46.143786 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 30 12:49:46.267768 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 30 12:49:46.267906 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 30 12:49:46.326402 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 30 12:49:46.330093 sh[887]: Success Apr 30 12:49:46.350206 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Apr 30 12:49:46.457858 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 30 12:49:46.471275 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 30 12:49:46.473098 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 30 12:49:46.512777 kernel: BTRFS info (device dm-0): first mount of filesystem 0778af4c-f6f8-4118-a0d2-fb24d73f5df4 Apr 30 12:49:46.512839 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 30 12:49:46.512853 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 30 12:49:46.516134 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 30 12:49:46.516213 kernel: BTRFS info (device dm-0): using free space tree Apr 30 12:49:46.653215 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 30 12:49:46.692352 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 30 12:49:46.693466 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 30 12:49:46.697312 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 30 12:49:46.699481 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 30 12:49:46.729596 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 70902d85-577c-4d48-8616-61ed6d6784d1 Apr 30 12:49:46.729662 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Apr 30 12:49:46.729675 kernel: BTRFS info (device nvme0n1p6): using free space tree Apr 30 12:49:46.735164 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Apr 30 12:49:46.741189 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 70902d85-577c-4d48-8616-61ed6d6784d1 Apr 30 12:49:46.742748 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 30 12:49:46.747330 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 30 12:49:46.786359 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Apr 30 12:49:46.791334 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 12:49:46.816037 systemd-networkd[1076]: lo: Link UP Apr 30 12:49:46.816048 systemd-networkd[1076]: lo: Gained carrier Apr 30 12:49:46.817354 systemd-networkd[1076]: Enumeration completed Apr 30 12:49:46.817651 systemd-networkd[1076]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 12:49:46.817655 systemd-networkd[1076]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 12:49:46.818493 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 12:49:46.819269 systemd[1]: Reached target network.target - Network. Apr 30 12:49:46.820499 systemd-networkd[1076]: eth0: Link UP Apr 30 12:49:46.820503 systemd-networkd[1076]: eth0: Gained carrier Apr 30 12:49:46.820512 systemd-networkd[1076]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 12:49:46.833251 systemd-networkd[1076]: eth0: DHCPv4 address 172.31.21.92/20, gateway 172.31.16.1 acquired from 172.31.16.1 Apr 30 12:49:47.127542 ignition[1019]: Ignition 2.20.0 Apr 30 12:49:47.127554 ignition[1019]: Stage: fetch-offline Apr 30 12:49:47.127729 ignition[1019]: no configs at "/usr/lib/ignition/base.d" Apr 30 12:49:47.127738 ignition[1019]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 30 12:49:47.129078 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 12:49:47.127967 ignition[1019]: Ignition finished successfully Apr 30 12:49:47.135330 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Apr 30 12:49:47.147632 ignition[1087]: Ignition 2.20.0 Apr 30 12:49:47.147642 ignition[1087]: Stage: fetch Apr 30 12:49:47.147929 ignition[1087]: no configs at "/usr/lib/ignition/base.d" Apr 30 12:49:47.147938 ignition[1087]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 30 12:49:47.148014 ignition[1087]: PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 30 12:49:47.156354 ignition[1087]: PUT result: OK Apr 30 12:49:47.158960 ignition[1087]: parsed url from cmdline: "" Apr 30 12:49:47.158971 ignition[1087]: no config URL provided Apr 30 12:49:47.158980 ignition[1087]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 12:49:47.158996 ignition[1087]: no config at "/usr/lib/ignition/user.ign" Apr 30 12:49:47.159037 ignition[1087]: PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 30 12:49:47.160066 ignition[1087]: PUT result: OK Apr 30 12:49:47.160126 ignition[1087]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Apr 30 12:49:47.161107 ignition[1087]: GET result: OK Apr 30 12:49:47.161223 ignition[1087]: parsing config with SHA512: 81e28fa2d9a624e6ca65a67e2ccac8db851ba2ff0dfb8794d26728c64d22fe1d24cdb2785f1c90993f1e7daf2eae87191417950b084d929179ff3532411d4c1f Apr 30 12:49:47.166170 unknown[1087]: fetched base config from "system" Apr 30 12:49:47.166184 unknown[1087]: fetched base config from "system" Apr 30 12:49:47.166194 unknown[1087]: fetched user config from "aws" Apr 30 12:49:47.166924 ignition[1087]: fetch: fetch complete Apr 30 12:49:47.166931 ignition[1087]: fetch: fetch passed Apr 30 12:49:47.166988 ignition[1087]: Ignition finished successfully Apr 30 12:49:47.169536 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 30 12:49:47.174359 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Apr 30 12:49:47.190292 ignition[1093]: Ignition 2.20.0 Apr 30 12:49:47.190306 ignition[1093]: Stage: kargs Apr 30 12:49:47.190732 ignition[1093]: no configs at "/usr/lib/ignition/base.d" Apr 30 12:49:47.190746 ignition[1093]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 30 12:49:47.190865 ignition[1093]: PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 30 12:49:47.191801 ignition[1093]: PUT result: OK Apr 30 12:49:47.194806 ignition[1093]: kargs: kargs passed Apr 30 12:49:47.194875 ignition[1093]: Ignition finished successfully Apr 30 12:49:47.196355 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 30 12:49:47.201360 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 30 12:49:47.215572 ignition[1099]: Ignition 2.20.0 Apr 30 12:49:47.215585 ignition[1099]: Stage: disks Apr 30 12:49:47.216037 ignition[1099]: no configs at "/usr/lib/ignition/base.d" Apr 30 12:49:47.216051 ignition[1099]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 30 12:49:47.216199 ignition[1099]: PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 30 12:49:47.217226 ignition[1099]: PUT result: OK Apr 30 12:49:47.220446 ignition[1099]: disks: disks passed Apr 30 12:49:47.220520 ignition[1099]: Ignition finished successfully Apr 30 12:49:47.221865 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 30 12:49:47.222821 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 30 12:49:47.223217 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 30 12:49:47.223747 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 12:49:47.224288 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 12:49:47.224835 systemd[1]: Reached target basic.target - Basic System. Apr 30 12:49:47.237388 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Apr 30 12:49:47.275528 systemd-fsck[1107]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 30 12:49:47.278130 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 30 12:49:47.283247 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 30 12:49:47.378175 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 59d16236-967d-47d1-a9bd-4b055a17ab77 r/w with ordered data mode. Quota mode: none. Apr 30 12:49:47.378683 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 30 12:49:47.379732 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 30 12:49:47.397322 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 12:49:47.399437 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 30 12:49:47.400643 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 30 12:49:47.400695 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 30 12:49:47.400722 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 12:49:47.418170 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1126) Apr 30 12:49:47.418463 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 30 12:49:47.424268 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 70902d85-577c-4d48-8616-61ed6d6784d1 Apr 30 12:49:47.424301 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Apr 30 12:49:47.424322 kernel: BTRFS info (device nvme0n1p6): using free space tree Apr 30 12:49:47.431761 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Apr 30 12:49:47.430930 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 30 12:49:47.434729 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 30 12:49:47.840687 initrd-setup-root[1150]: cut: /sysroot/etc/passwd: No such file or directory Apr 30 12:49:47.855166 initrd-setup-root[1157]: cut: /sysroot/etc/group: No such file or directory Apr 30 12:49:47.872884 initrd-setup-root[1164]: cut: /sysroot/etc/shadow: No such file or directory Apr 30 12:49:47.877298 initrd-setup-root[1171]: cut: /sysroot/etc/gshadow: No such file or directory Apr 30 12:49:48.122665 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 30 12:49:48.126314 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 30 12:49:48.130302 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 30 12:49:48.135951 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 30 12:49:48.138221 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 70902d85-577c-4d48-8616-61ed6d6784d1 Apr 30 12:49:48.158504 ignition[1239]: INFO : Ignition 2.20.0 Apr 30 12:49:48.158504 ignition[1239]: INFO : Stage: mount Apr 30 12:49:48.158504 ignition[1239]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 12:49:48.158504 ignition[1239]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 30 12:49:48.158504 ignition[1239]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 30 12:49:48.161540 ignition[1239]: INFO : PUT result: OK Apr 30 12:49:48.164664 ignition[1239]: INFO : mount: mount passed Apr 30 12:49:48.165821 ignition[1239]: INFO : Ignition finished successfully Apr 30 12:49:48.166683 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 30 12:49:48.171310 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 30 12:49:48.175312 systemd-networkd[1076]: eth0: Gained IPv6LL Apr 30 12:49:48.175508 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 30 12:49:48.191390 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Apr 30 12:49:48.212180 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1251) Apr 30 12:49:48.215216 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 70902d85-577c-4d48-8616-61ed6d6784d1 Apr 30 12:49:48.215270 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Apr 30 12:49:48.217666 kernel: BTRFS info (device nvme0n1p6): using free space tree Apr 30 12:49:48.222169 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Apr 30 12:49:48.224454 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 30 12:49:48.244139 ignition[1268]: INFO : Ignition 2.20.0 Apr 30 12:49:48.244139 ignition[1268]: INFO : Stage: files Apr 30 12:49:48.245577 ignition[1268]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 12:49:48.245577 ignition[1268]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 30 12:49:48.245577 ignition[1268]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 30 12:49:48.246868 ignition[1268]: INFO : PUT result: OK Apr 30 12:49:48.248368 ignition[1268]: DEBUG : files: compiled without relabeling support, skipping Apr 30 12:49:48.249221 ignition[1268]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 30 12:49:48.249221 ignition[1268]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 30 12:49:48.282818 ignition[1268]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 30 12:49:48.283674 ignition[1268]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 30 12:49:48.283674 ignition[1268]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 30 12:49:48.283434 unknown[1268]: wrote ssh authorized keys file for user: core Apr 30 12:49:48.285555 ignition[1268]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file 
"/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Apr 30 12:49:48.285555 ignition[1268]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Apr 30 12:49:48.367502 ignition[1268]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 30 12:49:48.545664 ignition[1268]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Apr 30 12:49:48.545664 ignition[1268]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 30 12:49:48.547422 ignition[1268]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Apr 30 12:49:48.990259 ignition[1268]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 30 12:49:49.128502 ignition[1268]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 30 12:49:49.129531 ignition[1268]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 30 12:49:49.129531 ignition[1268]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 30 12:49:49.129531 ignition[1268]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 12:49:49.129531 ignition[1268]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 12:49:49.129531 ignition[1268]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 12:49:49.129531 ignition[1268]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 12:49:49.129531 ignition[1268]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 12:49:49.129531 ignition[1268]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 12:49:49.129531 ignition[1268]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 12:49:49.129531 ignition[1268]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 12:49:49.129531 ignition[1268]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Apr 30 12:49:49.137542 ignition[1268]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Apr 30 12:49:49.137542 ignition[1268]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Apr 30 12:49:49.137542 ignition[1268]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
Apr 30 12:49:49.609953 ignition[1268]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 30 12:49:50.447618 ignition[1268]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Apr 30 12:49:50.447618 ignition[1268]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Apr 30 12:49:50.459277 ignition[1268]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 12:49:50.460223 ignition[1268]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 12:49:50.460223 ignition[1268]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Apr 30 12:49:50.460223 ignition[1268]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Apr 30 12:49:50.460223 ignition[1268]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Apr 30 12:49:50.460223 ignition[1268]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 12:49:50.460223 ignition[1268]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 12:49:50.460223 ignition[1268]: INFO : files: files passed
Apr 30 12:49:50.460223 ignition[1268]: INFO : Ignition finished successfully
Apr 30 12:49:50.461224 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 30 12:49:50.467455 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 30 12:49:50.471967 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 30 12:49:50.476385 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 30 12:49:50.476540 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 30 12:49:50.498084 initrd-setup-root-after-ignition[1297]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 12:49:50.499869 initrd-setup-root-after-ignition[1297]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 12:49:50.502126 initrd-setup-root-after-ignition[1301]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 12:49:50.502418 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 12:49:50.504464 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 30 12:49:50.507351 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 30 12:49:50.533584 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 30 12:49:50.533728 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 30 12:49:50.535472 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 30 12:49:50.536090 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 30 12:49:50.536916 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 30 12:49:50.543344 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 30 12:49:50.555961 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 12:49:50.561348 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 30 12:49:50.573671 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 30 12:49:50.574334 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 12:49:50.575315 systemd[1]: Stopped target timers.target - Timer Units.
Apr 30 12:49:50.576138 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 30 12:49:50.576344 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 12:49:50.577560 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 30 12:49:50.578422 systemd[1]: Stopped target basic.target - Basic System.
Apr 30 12:49:50.579232 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 30 12:49:50.580007 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 12:49:50.580789 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 30 12:49:50.581655 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 30 12:49:50.582429 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 12:49:50.583223 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 30 12:49:50.584337 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 30 12:49:50.585139 systemd[1]: Stopped target swap.target - Swaps.
Apr 30 12:49:50.585888 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 30 12:49:50.586069 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 12:49:50.587172 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 30 12:49:50.587954 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 12:49:50.588650 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 30 12:49:50.588790 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 12:49:50.589553 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 30 12:49:50.589762 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 30 12:49:50.590773 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 30 12:49:50.590953 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 12:49:50.591603 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 30 12:49:50.591752 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 30 12:49:50.598434 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 30 12:49:50.599019 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 30 12:49:50.599796 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 12:49:50.603042 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 30 12:49:50.604322 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 30 12:49:50.605433 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 12:49:50.607125 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 30 12:49:50.607768 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 12:49:50.617496 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 30 12:49:50.618185 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 30 12:49:50.626019 ignition[1322]: INFO : Ignition 2.20.0
Apr 30 12:49:50.626019 ignition[1322]: INFO : Stage: umount
Apr 30 12:49:50.626019 ignition[1322]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 12:49:50.626019 ignition[1322]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 30 12:49:50.626019 ignition[1322]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 30 12:49:50.629457 ignition[1322]: INFO : PUT result: OK
Apr 30 12:49:50.634743 ignition[1322]: INFO : umount: umount passed
Apr 30 12:49:50.634743 ignition[1322]: INFO : Ignition finished successfully
Apr 30 12:49:50.637114 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 30 12:49:50.637687 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 30 12:49:50.638621 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 30 12:49:50.638669 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 30 12:49:50.639036 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 30 12:49:50.639077 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 30 12:49:50.640686 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 30 12:49:50.640732 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 30 12:49:50.641452 systemd[1]: Stopped target network.target - Network.
Apr 30 12:49:50.641980 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 30 12:49:50.642029 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 12:49:50.642586 systemd[1]: Stopped target paths.target - Path Units.
Apr 30 12:49:50.642876 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 30 12:49:50.646240 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 12:49:50.646566 systemd[1]: Stopped target slices.target - Slice Units.
Apr 30 12:49:50.647381 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 30 12:49:50.647933 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 30 12:49:50.647974 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 12:49:50.648513 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 30 12:49:50.648551 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 12:49:50.649130 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 30 12:49:50.649192 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 30 12:49:50.649699 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 30 12:49:50.649738 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 30 12:49:50.650468 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 30 12:49:50.650968 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 30 12:49:50.652488 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 30 12:49:50.653095 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 30 12:49:50.653198 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 30 12:49:50.654167 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 30 12:49:50.654261 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 30 12:49:50.654824 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 30 12:49:50.654908 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 30 12:49:50.658042 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Apr 30 12:49:50.658414 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 30 12:49:50.658512 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 30 12:49:50.660073 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Apr 30 12:49:50.660951 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 30 12:49:50.661014 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 12:49:50.666251 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 30 12:49:50.666614 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 30 12:49:50.666673 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 12:49:50.668274 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 30 12:49:50.668326 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 30 12:49:50.669016 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 30 12:49:50.669206 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 30 12:49:50.669590 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 30 12:49:50.669633 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 12:49:50.670287 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 12:49:50.672314 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Apr 30 12:49:50.672375 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Apr 30 12:49:50.681468 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 30 12:49:50.681573 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 30 12:49:50.683474 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 30 12:49:50.683611 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 12:49:50.684752 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 30 12:49:50.684822 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 30 12:49:50.685746 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 30 12:49:50.685778 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 12:49:50.686463 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 30 12:49:50.686509 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 12:49:50.687474 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 30 12:49:50.687515 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 30 12:49:50.688504 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 12:49:50.688548 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 12:49:50.694342 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 30 12:49:50.694809 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 30 12:49:50.694875 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 12:49:50.696996 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 12:49:50.697123 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 12:49:50.698641 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Apr 30 12:49:50.698704 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Apr 30 12:49:50.701282 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 30 12:49:50.701374 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 30 12:49:50.702361 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 30 12:49:50.707309 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 30 12:49:50.715612 systemd[1]: Switching root.
Apr 30 12:49:50.767644 systemd-journald[179]: Journal stopped
Apr 30 12:49:52.753717 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
Apr 30 12:49:52.753797 kernel: SELinux: policy capability network_peer_controls=1
Apr 30 12:49:52.753828 kernel: SELinux: policy capability open_perms=1
Apr 30 12:49:52.753850 kernel: SELinux: policy capability extended_socket_class=1
Apr 30 12:49:52.753867 kernel: SELinux: policy capability always_check_network=0
Apr 30 12:49:52.753883 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 30 12:49:52.753901 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 30 12:49:52.753919 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 30 12:49:52.753937 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 30 12:49:52.753955 kernel: audit: type=1403 audit(1746017391.170:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 30 12:49:52.753980 systemd[1]: Successfully loaded SELinux policy in 75.506ms.
Apr 30 12:49:52.754013 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.599ms.
Apr 30 12:49:52.754034 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 30 12:49:52.754054 systemd[1]: Detected virtualization amazon.
Apr 30 12:49:52.754073 systemd[1]: Detected architecture x86-64.
Apr 30 12:49:52.754095 systemd[1]: Detected first boot.
Apr 30 12:49:52.754114 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 12:49:52.754132 zram_generator::config[1367]: No configuration found.
Apr 30 12:49:52.754164 kernel: Guest personality initialized and is inactive
Apr 30 12:49:52.754181 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Apr 30 12:49:52.754202 kernel: Initialized host personality
Apr 30 12:49:52.754219 kernel: NET: Registered PF_VSOCK protocol family
Apr 30 12:49:52.754237 systemd[1]: Populated /etc with preset unit settings.
Apr 30 12:49:52.754257 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Apr 30 12:49:52.754276 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 30 12:49:52.754294 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 30 12:49:52.754322 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 30 12:49:52.754341 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 30 12:49:52.754360 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 30 12:49:52.754381 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 30 12:49:52.754400 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 30 12:49:52.754419 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 30 12:49:52.754447 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 30 12:49:52.754467 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 30 12:49:52.754486 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 30 12:49:52.754506 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 12:49:52.754525 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 12:49:52.754546 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 30 12:49:52.754566 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 30 12:49:52.754587 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 30 12:49:52.754609 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 12:49:52.754631 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 30 12:49:52.754652 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 12:49:52.754671 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 30 12:49:52.754691 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 30 12:49:52.754715 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 30 12:49:52.754733 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 30 12:49:52.754753 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 12:49:52.754773 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 12:49:52.754792 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 12:49:52.754812 systemd[1]: Reached target swap.target - Swaps.
Apr 30 12:49:52.754832 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 30 12:49:52.754851 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 30 12:49:52.754870 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Apr 30 12:49:52.754893 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 12:49:52.754913 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 12:49:52.754932 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 12:49:52.754953 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 30 12:49:52.754971 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 30 12:49:52.754989 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 30 12:49:52.755006 systemd[1]: Mounting media.mount - External Media Directory...
Apr 30 12:49:52.755027 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 12:49:52.755052 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 30 12:49:52.755079 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 30 12:49:52.755102 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 30 12:49:52.755123 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 30 12:49:52.755141 systemd[1]: Reached target machines.target - Containers.
Apr 30 12:49:52.755172 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 30 12:49:52.755190 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 12:49:52.755210 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 12:49:52.755229 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 30 12:49:52.755254 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 12:49:52.755274 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 12:49:52.755294 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 12:49:52.755312 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 30 12:49:52.755333 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 12:49:52.755354 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 30 12:49:52.755373 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 30 12:49:52.755394 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 30 12:49:52.755414 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 30 12:49:52.755438 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 30 12:49:52.755458 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 30 12:49:52.755478 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 12:49:52.755502 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 12:49:52.755526 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 30 12:49:52.755551 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 30 12:49:52.755575 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Apr 30 12:49:52.755595 kernel: fuse: init (API version 7.39)
Apr 30 12:49:52.755618 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 12:49:52.755637 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 30 12:49:52.755656 systemd[1]: Stopped verity-setup.service.
Apr 30 12:49:52.755675 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 12:49:52.755694 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 30 12:49:52.755716 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 30 12:49:52.755737 systemd[1]: Mounted media.mount - External Media Directory.
Apr 30 12:49:52.755756 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 30 12:49:52.755775 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 30 12:49:52.755792 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 30 12:49:52.755817 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 12:49:52.755837 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 30 12:49:52.755858 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 30 12:49:52.755880 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 12:49:52.755900 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 12:49:52.755921 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 12:49:52.755943 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 12:49:52.755964 kernel: ACPI: bus type drm_connector registered
Apr 30 12:49:52.755985 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 12:49:52.756009 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 30 12:49:52.756030 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 30 12:49:52.756051 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 30 12:49:52.756072 kernel: loop: module loaded
Apr 30 12:49:52.756092 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 12:49:52.756113 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 12:49:52.756134 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 12:49:52.756209 systemd-journald[1450]: Collecting audit messages is disabled.
Apr 30 12:49:52.756253 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 30 12:49:52.756273 systemd-journald[1450]: Journal started
Apr 30 12:49:52.756311 systemd-journald[1450]: Runtime Journal (/run/log/journal/ec243e0a1bb539fd6563f43dbdd3c248) is 4.7M, max 38.1M, 33.4M free.
Apr 30 12:49:52.386966 systemd[1]: Queued start job for default target multi-user.target.
Apr 30 12:49:52.396413 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Apr 30 12:49:52.396910 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 30 12:49:52.759200 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 12:49:52.779826 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Apr 30 12:49:52.793500 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 30 12:49:52.804066 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 30 12:49:52.805371 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 30 12:49:52.806324 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 12:49:52.808766 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Apr 30 12:49:52.815370 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 30 12:49:52.826344 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 30 12:49:52.827327 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 12:49:52.835406 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 30 12:49:52.839275 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 30 12:49:52.840505 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 12:49:52.848483 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 30 12:49:52.849259 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 12:49:52.852452 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 12:49:52.861444 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 30 12:49:52.870890 systemd-journald[1450]: Time spent on flushing to /var/log/journal/ec243e0a1bb539fd6563f43dbdd3c248 is 85.121ms for 1004 entries.
Apr 30 12:49:52.870890 systemd-journald[1450]: System Journal (/var/log/journal/ec243e0a1bb539fd6563f43dbdd3c248) is 8M, max 195.6M, 187.6M free.
Apr 30 12:49:53.000914 systemd-journald[1450]: Received client request to flush runtime journal.
Apr 30 12:49:53.002267 kernel: loop0: detected capacity change from 0 to 62832
Apr 30 12:49:52.871062 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 30 12:49:52.873641 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 30 12:49:52.875408 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 30 12:49:52.876475 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 30 12:49:52.878197 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 30 12:49:52.886743 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 30 12:49:52.897424 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 30 12:49:52.904930 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 30 12:49:52.905815 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 30 12:49:52.916003 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Apr 30 12:49:52.989469 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 12:49:52.997459 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 30 12:49:53.001431 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 12:49:53.013562 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 30 12:49:53.031674 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Apr 30 12:49:53.033601 udevadm[1513]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 30 12:49:53.046630 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 30 12:49:53.055958 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 12:49:53.092086 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 30 12:49:53.114175 kernel: loop1: detected capacity change from 0 to 218376
Apr 30 12:49:53.126064 systemd-tmpfiles[1520]: ACLs are not supported, ignoring.
Apr 30 12:49:53.126854 systemd-tmpfiles[1520]: ACLs are not supported, ignoring.
Apr 30 12:49:53.135981 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 12:49:53.185759 kernel: loop2: detected capacity change from 0 to 147912
Apr 30 12:49:53.342167 kernel: loop3: detected capacity change from 0 to 138176
Apr 30 12:49:53.398622 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 30 12:49:53.485566 kernel: loop4: detected capacity change from 0 to 62832
Apr 30 12:49:53.503218 kernel: loop5: detected capacity change from 0 to 218376
Apr 30 12:49:53.538340 kernel: loop6: detected capacity change from 0 to 147912
Apr 30 12:49:53.556288 kernel: loop7: detected capacity change from 0 to 138176
Apr 30 12:49:53.572699 (sd-merge)[1528]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Apr 30 12:49:53.573334 (sd-merge)[1528]: Merged extensions into '/usr'.
Apr 30 12:49:53.579180 systemd[1]: Reload requested from client PID 1500 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 30 12:49:53.579196 systemd[1]: Reloading...
Apr 30 12:49:53.667176 zram_generator::config[1555]: No configuration found.
Apr 30 12:49:53.858544 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 12:49:53.957804 systemd[1]: Reloading finished in 378 ms.
Apr 30 12:49:53.980859 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 30 12:49:53.981949 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 30 12:49:53.998161 systemd[1]: Starting ensure-sysext.service...
Apr 30 12:49:54.001993 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 12:49:54.013749 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 12:49:54.043448 systemd[1]: Reload requested from client PID 1608 ('systemctl') (unit ensure-sysext.service)...
Apr 30 12:49:54.043470 systemd[1]: Reloading...
Apr 30 12:49:54.067451 systemd-tmpfiles[1609]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 30 12:49:54.067860 systemd-tmpfiles[1609]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 30 12:49:54.071770 systemd-tmpfiles[1609]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 30 12:49:54.074460 systemd-tmpfiles[1609]: ACLs are not supported, ignoring.
Apr 30 12:49:54.074562 systemd-tmpfiles[1609]: ACLs are not supported, ignoring.
Apr 30 12:49:54.088957 systemd-udevd[1610]: Using default interface naming scheme 'v255'.
Apr 30 12:49:54.094794 systemd-tmpfiles[1609]: Detected autofs mount point /boot during canonicalization of boot.
Apr 30 12:49:54.094808 systemd-tmpfiles[1609]: Skipping /boot
Apr 30 12:49:54.117878 systemd-tmpfiles[1609]: Detected autofs mount point /boot during canonicalization of boot.
Apr 30 12:49:54.120190 systemd-tmpfiles[1609]: Skipping /boot
Apr 30 12:49:54.188198 zram_generator::config[1640]: No configuration found.
Apr 30 12:49:54.310310 (udev-worker)[1650]: Network interface NamePolicy= disabled on kernel command line.
Apr 30 12:49:54.439217 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Apr 30 12:49:54.485451 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Apr 30 12:49:54.494590 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 12:49:54.503216 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3
Apr 30 12:49:54.510195 kernel: ACPI: button: Power Button [PWRF]
Apr 30 12:49:54.524596 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Apr 30 12:49:54.527662 kernel: ACPI: button: Sleep Button [SLPF]
Apr 30 12:49:54.553174 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1650)
Apr 30 12:49:54.685834 ldconfig[1495]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 30 12:49:54.703244 kernel: mousedev: PS/2 mouse device common for all mice
Apr 30 12:49:54.714343 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 30 12:49:54.715038 systemd[1]: Reloading finished in 671 ms.
Apr 30 12:49:54.728103 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 12:49:54.732201 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 30 12:49:54.733743 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 12:49:54.805686 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 30 12:49:54.806720 systemd[1]: Finished ensure-sysext.service.
Apr 30 12:49:54.832498 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 30 12:49:54.836801 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 12:49:54.841459 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 30 12:49:54.844170 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 30 12:49:54.845071 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 12:49:54.848440 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 30 12:49:54.860408 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 12:49:54.866350 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 12:49:54.871369 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 12:49:54.874714 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 12:49:54.875736 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 12:49:54.879348 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 30 12:49:54.882134 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 30 12:49:54.887167 lvm[1806]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 30 12:49:54.887371 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 30 12:49:54.898395 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 12:49:54.913662 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 12:49:54.914318 systemd[1]: Reached target time-set.target - System Time Set.
Apr 30 12:49:54.920062 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 30 12:49:54.932946 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 12:49:54.934207 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 12:49:54.937253 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 12:49:54.937523 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 12:49:54.939654 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 12:49:54.939884 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 30 12:49:54.949828 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 12:49:54.950064 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 12:49:54.951987 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 30 12:49:54.963073 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 12:49:54.965504 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 12:49:54.968556 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 12:49:54.978136 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 30 12:49:54.978825 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 12:49:54.978914 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 12:49:54.988214 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 30 12:49:54.997962 lvm[1840]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 30 12:49:54.999432 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 30 12:49:55.012279 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 30 12:49:55.045627 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 30 12:49:55.051013 augenrules[1852]: No rules
Apr 30 12:49:55.055281 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 30 12:49:55.055742 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 30 12:49:55.066442 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 30 12:49:55.079333 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 30 12:49:55.089101 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 12:49:55.097063 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 30 12:49:55.104878 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 30 12:49:55.120238 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 30 12:49:55.127461 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 30 12:49:55.190733 systemd-networkd[1820]: lo: Link UP
Apr 30 12:49:55.190747 systemd-networkd[1820]: lo: Gained carrier
Apr 30 12:49:55.192585 systemd-networkd[1820]: Enumeration completed
Apr 30 12:49:55.192711 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 12:49:55.193717 systemd-networkd[1820]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 12:49:55.193729 systemd-networkd[1820]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 12:49:55.196794 systemd-networkd[1820]: eth0: Link UP
Apr 30 12:49:55.198858 systemd-networkd[1820]: eth0: Gained carrier
Apr 30 12:49:55.198889 systemd-networkd[1820]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 12:49:55.204342 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Apr 30 12:49:55.207908 systemd-resolved[1821]: Positive Trust Anchors:
Apr 30 12:49:55.207924 systemd-resolved[1821]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 12:49:55.207977 systemd-resolved[1821]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 12:49:55.209219 systemd-networkd[1820]: eth0: DHCPv4 address 172.31.21.92/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 30 12:49:55.214959 systemd-resolved[1821]: Defaulting to hostname 'linux'.
Apr 30 12:49:55.215027 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 30 12:49:55.217792 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 12:49:55.220521 systemd[1]: Reached target network.target - Network.
Apr 30 12:49:55.221130 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 12:49:55.221709 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 12:49:55.222368 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 30 12:49:55.222890 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 30 12:49:55.223619 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 30 12:49:55.224417 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 30 12:49:55.224927 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 30 12:49:55.225719 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 30 12:49:55.225768 systemd[1]: Reached target paths.target - Path Units.
Apr 30 12:49:55.227008 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 12:49:55.230817 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 30 12:49:55.233548 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 30 12:49:55.236492 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Apr 30 12:49:55.237084 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Apr 30 12:49:55.237506 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Apr 30 12:49:55.244322 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 30 12:49:55.245424 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Apr 30 12:49:55.246829 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Apr 30 12:49:55.247452 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 30 12:49:55.248514 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 12:49:55.249050 systemd[1]: Reached target basic.target - Basic System.
Apr 30 12:49:55.249520 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 30 12:49:55.249560 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 30 12:49:55.257271 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 30 12:49:55.259768 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 30 12:49:55.264359 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 30 12:49:55.268434 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 30 12:49:55.274852 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 30 12:49:55.275991 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 30 12:49:55.279407 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 30 12:49:55.287098 jq[1880]: false
Apr 30 12:49:55.287410 systemd[1]: Started ntpd.service - Network Time Service.
Apr 30 12:49:55.291335 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 30 12:49:55.295290 systemd[1]: Starting setup-oem.service - Setup OEM...
Apr 30 12:49:55.299343 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 30 12:49:55.302849 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 30 12:49:55.314489 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 30 12:49:55.321562 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 30 12:49:55.322301 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 30 12:49:55.352361 systemd[1]: Starting update-engine.service - Update Engine...
Apr 30 12:49:55.357429 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 30 12:49:55.365886 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 30 12:49:55.366162 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 30 12:49:55.382762 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 30 12:49:55.383069 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 30 12:49:55.398169 jq[1892]: true
Apr 30 12:49:55.424280 update_engine[1889]: I20250430 12:49:55.420603  1889 main.cc:92] Flatcar Update Engine starting
Apr 30 12:49:55.431044 dbus-daemon[1879]: [system] SELinux support is enabled
Apr 30 12:49:55.431262 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 30 12:49:55.436815 dbus-daemon[1879]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1820 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Apr 30 12:49:55.437766 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 30 12:49:55.438617 update_engine[1889]: I20250430 12:49:55.437007  1889 update_check_scheduler.cc:74] Next update check in 8m9s
Apr 30 12:49:55.437808 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 30 12:49:55.438837 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 30 12:49:55.439340 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 30 12:49:55.448458 dbus-daemon[1879]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 30 12:49:55.448529 systemd[1]: Started update-engine.service - Update Engine.
Apr 30 12:49:55.449586 (ntainerd)[1910]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 30 12:49:55.458799 jq[1905]: true
Apr 30 12:49:55.457043 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 30 12:49:55.474418 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Apr 30 12:49:55.506549 extend-filesystems[1881]: Found loop4
Apr 30 12:49:55.508273 extend-filesystems[1881]: Found loop5
Apr 30 12:49:55.508273 extend-filesystems[1881]: Found loop6
Apr 30 12:49:55.508273 extend-filesystems[1881]: Found loop7
Apr 30 12:49:55.508273 extend-filesystems[1881]: Found nvme0n1
Apr 30 12:49:55.508273 extend-filesystems[1881]: Found nvme0n1p1
Apr 30 12:49:55.508273 extend-filesystems[1881]: Found nvme0n1p2
Apr 30 12:49:55.508273 extend-filesystems[1881]: Found nvme0n1p3
Apr 30 12:49:55.508273 extend-filesystems[1881]: Found usr
Apr 30 12:49:55.508273 extend-filesystems[1881]: Found nvme0n1p4
Apr 30 12:49:55.508273 extend-filesystems[1881]: Found nvme0n1p6
Apr 30 12:49:55.508273 extend-filesystems[1881]: Found nvme0n1p7
Apr 30 12:49:55.508273 extend-filesystems[1881]: Found nvme0n1p9
Apr 30 12:49:55.508273 extend-filesystems[1881]: Checking size of /dev/nvme0n1p9
Apr 30 12:49:55.555319 tar[1897]: linux-amd64/LICENSE
Apr 30 12:49:55.555319 tar[1897]: linux-amd64/helm
Apr 30 12:49:55.514494 systemd[1]: motdgen.service: Deactivated successfully.
Apr 30 12:49:55.514778 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 30 12:49:55.550194 systemd[1]: Finished setup-oem.service - Setup OEM.
Apr 30 12:49:55.587495 ntpd[1883]: ntpd 4.2.8p17@1.4004-o Tue Apr 29 21:38:46 UTC 2025 (1): Starting
Apr 30 12:49:55.591542 ntpd[1883]: 30 Apr 12:49:55 ntpd[1883]: ntpd 4.2.8p17@1.4004-o Tue Apr 29 21:38:46 UTC 2025 (1): Starting
Apr 30 12:49:55.591542 ntpd[1883]: 30 Apr 12:49:55 ntpd[1883]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 30 12:49:55.591542 ntpd[1883]: 30 Apr 12:49:55 ntpd[1883]: ----------------------------------------------------
Apr 30 12:49:55.591542 ntpd[1883]: 30 Apr 12:49:55 ntpd[1883]: ntp-4 is maintained by Network Time Foundation,
Apr 30 12:49:55.591542 ntpd[1883]: 30 Apr 12:49:55 ntpd[1883]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 30 12:49:55.591542 ntpd[1883]: 30 Apr 12:49:55 ntpd[1883]: corporation. Support and training for ntp-4 are
Apr 30 12:49:55.591542 ntpd[1883]: 30 Apr 12:49:55 ntpd[1883]: available at https://www.nwtime.org/support
Apr 30 12:49:55.591542 ntpd[1883]: 30 Apr 12:49:55 ntpd[1883]: ----------------------------------------------------
Apr 30 12:49:55.587536 ntpd[1883]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 30 12:49:55.587549 ntpd[1883]: ----------------------------------------------------
Apr 30 12:49:55.587560 ntpd[1883]: ntp-4 is maintained by Network Time Foundation,
Apr 30 12:49:55.587569 ntpd[1883]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 30 12:49:55.587578 ntpd[1883]: corporation. Support and training for ntp-4 are
Apr 30 12:49:55.587588 ntpd[1883]: available at https://www.nwtime.org/support
Apr 30 12:49:55.587597 ntpd[1883]: ----------------------------------------------------
Apr 30 12:49:55.598914 systemd-logind[1888]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 30 12:49:55.605407 ntpd[1883]: 30 Apr 12:49:55 ntpd[1883]: proto: precision = 0.103 usec (-23)
Apr 30 12:49:55.605407 ntpd[1883]: 30 Apr 12:49:55 ntpd[1883]: basedate set to 2025-04-17
Apr 30 12:49:55.605407 ntpd[1883]: 30 Apr 12:49:55 ntpd[1883]: gps base set to 2025-04-20 (week 2363)
Apr 30 12:49:55.599758 ntpd[1883]: proto: precision = 0.103 usec (-23)
Apr 30 12:49:55.598943 systemd-logind[1888]: Watching system buttons on /dev/input/event3 (Sleep Button)
Apr 30 12:49:55.603315 ntpd[1883]: basedate set to 2025-04-17
Apr 30 12:49:55.598966 systemd-logind[1888]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 30 12:49:55.603339 ntpd[1883]: gps base set to 2025-04-20 (week 2363)
Apr 30 12:49:55.600325 systemd-logind[1888]: New seat seat0.
Apr 30 12:49:55.601536 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 30 12:49:55.615614 ntpd[1883]: Listen and drop on 0 v6wildcard [::]:123
Apr 30 12:49:55.623212 ntpd[1883]: 30 Apr 12:49:55 ntpd[1883]: Listen and drop on 0 v6wildcard [::]:123
Apr 30 12:49:55.623212 ntpd[1883]: 30 Apr 12:49:55 ntpd[1883]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 30 12:49:55.623212 ntpd[1883]: 30 Apr 12:49:55 ntpd[1883]: Listen normally on 2 lo 127.0.0.1:123
Apr 30 12:49:55.623212 ntpd[1883]: 30 Apr 12:49:55 ntpd[1883]: Listen normally on 3 eth0 172.31.21.92:123
Apr 30 12:49:55.623212 ntpd[1883]: 30 Apr 12:49:55 ntpd[1883]: Listen normally on 4 lo [::1]:123
Apr 30 12:49:55.623212 ntpd[1883]: 30 Apr 12:49:55 ntpd[1883]: bind(21) AF_INET6 fe80::48d:73ff:fe45:a6f3%2#123 flags 0x11 failed: Cannot assign requested address
Apr 30 12:49:55.623212 ntpd[1883]: 30 Apr 12:49:55 ntpd[1883]: unable to create socket on eth0 (5) for fe80::48d:73ff:fe45:a6f3%2#123
Apr 30 12:49:55.623212 ntpd[1883]: 30 Apr 12:49:55 ntpd[1883]: failed to init interface for address fe80::48d:73ff:fe45:a6f3%2
Apr 30 12:49:55.623212 ntpd[1883]: 30 Apr 12:49:55 ntpd[1883]: Listening on routing socket on fd #21 for interface updates
Apr 30 12:49:55.615686 ntpd[1883]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 30 12:49:55.615888 ntpd[1883]: Listen normally on 2 lo 127.0.0.1:123
Apr 30 12:49:55.615928 ntpd[1883]: Listen normally on 3 eth0 172.31.21.92:123
Apr 30 12:49:55.615972 ntpd[1883]: Listen normally on 4 lo [::1]:123
Apr 30 12:49:55.616021 ntpd[1883]: bind(21) AF_INET6 fe80::48d:73ff:fe45:a6f3%2#123 flags 0x11 failed: Cannot assign requested address
Apr 30 12:49:55.616045 ntpd[1883]: unable to create socket on eth0 (5) for fe80::48d:73ff:fe45:a6f3%2#123
Apr 30 12:49:55.616061 ntpd[1883]: failed to init interface for address fe80::48d:73ff:fe45:a6f3%2
Apr 30 12:49:55.616092 ntpd[1883]: Listening on routing socket on fd #21 for interface updates
Apr 30 12:49:55.644231 coreos-metadata[1878]: Apr 30 12:49:55.631 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Apr 30 12:49:55.644231 coreos-metadata[1878]: Apr 30 12:49:55.631 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Apr 30 12:49:55.644231 coreos-metadata[1878]: Apr 30 12:49:55.631 INFO Fetch successful
Apr 30 12:49:55.644231 coreos-metadata[1878]: Apr 30 12:49:55.631 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Apr 30 12:49:55.644231 coreos-metadata[1878]: Apr 30 12:49:55.631 INFO Fetch successful
Apr 30 12:49:55.644231 coreos-metadata[1878]: Apr 30 12:49:55.631 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Apr 30 12:49:55.644231 coreos-metadata[1878]: Apr 30 12:49:55.631 INFO Fetch successful
Apr 30 12:49:55.644231 coreos-metadata[1878]: Apr 30 12:49:55.631 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Apr 30 12:49:55.644231 coreos-metadata[1878]: Apr 30 12:49:55.631 INFO Fetch successful
Apr 30 12:49:55.644231 coreos-metadata[1878]: Apr 30 12:49:55.631 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Apr 30 12:49:55.644231 coreos-metadata[1878]: Apr 30 12:49:55.631 INFO Fetch failed with 404: resource not found
Apr 30 12:49:55.644231 coreos-metadata[1878]: Apr 30 12:49:55.631 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Apr 30 12:49:55.644231 coreos-metadata[1878]: Apr 30 12:49:55.631 INFO Fetch successful
Apr 30 12:49:55.644231 coreos-metadata[1878]: Apr 30 12:49:55.631 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Apr 30 12:49:55.644231 coreos-metadata[1878]: Apr 30 12:49:55.631 INFO Fetch successful
Apr 30 12:49:55.644231 coreos-metadata[1878]: Apr 30 12:49:55.631 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Apr 30 12:49:55.644231 coreos-metadata[1878]: Apr 30 12:49:55.631 INFO Fetch successful
Apr 30 12:49:55.644231 coreos-metadata[1878]: Apr 30 12:49:55.631 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Apr 30 12:49:55.644231 coreos-metadata[1878]: Apr 30 12:49:55.631 INFO Fetch successful
Apr 30 12:49:55.644231 coreos-metadata[1878]: Apr 30 12:49:55.631 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Apr 30 12:49:55.644231 coreos-metadata[1878]: Apr 30 12:49:55.631 INFO Fetch successful
Apr 30 12:49:55.656011 extend-filesystems[1881]: Resized partition /dev/nvme0n1p9
Apr 30 12:49:55.661303 ntpd[1883]: 30 Apr 12:49:55 ntpd[1883]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 30 12:49:55.661303 ntpd[1883]: 30 Apr 12:49:55 ntpd[1883]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 30 12:49:55.644632 ntpd[1883]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 30 12:49:55.644666 ntpd[1883]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 30 12:49:55.685448 extend-filesystems[1950]: resize2fs 1.47.1 (20-May-2024)
Apr 30 12:49:55.711273 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Apr 30 12:49:55.713831 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 30 12:49:55.722434 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 30 12:49:55.795231 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Apr 30 12:49:55.813640 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1650)
Apr 30 12:49:55.815904 extend-filesystems[1950]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Apr 30 12:49:55.815904 extend-filesystems[1950]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 30 12:49:55.815904 extend-filesystems[1950]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Apr 30 12:49:55.817529 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 30 12:49:55.847581 bash[1952]: Updated "/home/core/.ssh/authorized_keys"
Apr 30 12:49:55.847713 extend-filesystems[1881]: Resized filesystem in /dev/nvme0n1p9
Apr 30 12:49:55.819674 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 30 12:49:55.828340 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 30 12:49:55.846538 systemd[1]: Starting sshkeys.service...
Apr 30 12:49:55.865528 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 30 12:49:55.876253 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 30 12:49:55.884855 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Apr 30 12:49:55.898600 dbus-daemon[1879]: [system] Successfully activated service 'org.freedesktop.hostname1'
Apr 30 12:49:55.899710 dbus-daemon[1879]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1918 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Apr 30 12:49:55.913896 systemd[1]: Starting polkit.service - Authorization Manager...
Apr 30 12:49:55.952772 polkitd[1983]: Started polkitd version 121
Apr 30 12:49:55.963607 polkitd[1983]: Loading rules from directory /etc/polkit-1/rules.d
Apr 30 12:49:55.964281 polkitd[1983]: Loading rules from directory /usr/share/polkit-1/rules.d
Apr 30 12:49:55.965552 polkitd[1983]: Finished loading, compiling and executing 2 rules
Apr 30 12:49:55.966526 dbus-daemon[1879]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Apr 30 12:49:55.966729 systemd[1]: Started polkit.service - Authorization Manager.
Apr 30 12:49:55.969302 polkitd[1983]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Apr 30 12:49:55.997099 systemd-hostnamed[1918]: Hostname set to (transient)
Apr 30 12:49:55.998209 systemd-resolved[1821]: System hostname changed to 'ip-172-31-21-92'.
Apr 30 12:49:56.109099 locksmithd[1915]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 30 12:49:56.133619 coreos-metadata[1982]: Apr 30 12:49:56.133 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Apr 30 12:49:56.136203 coreos-metadata[1982]: Apr 30 12:49:56.135 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Apr 30 12:49:56.137352 coreos-metadata[1982]: Apr 30 12:49:56.137 INFO Fetch successful
Apr 30 12:49:56.137352 coreos-metadata[1982]: Apr 30 12:49:56.137 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Apr 30 12:49:56.140063 coreos-metadata[1982]: Apr 30 12:49:56.139 INFO Fetch successful
Apr 30 12:49:56.141364 unknown[1982]: wrote ssh authorized keys file for user: core
Apr 30 12:49:56.191469 update-ssh-keys[2046]: Updated "/home/core/.ssh/authorized_keys"
Apr 30 12:49:56.193072 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Apr 30 12:49:56.197727 systemd[1]: Finished sshkeys.service.
Apr 30 12:49:56.367277 systemd-networkd[1820]: eth0: Gained IPv6LL
Apr 30 12:49:56.375846 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 30 12:49:56.380076 systemd[1]: Reached target network-online.target - Network is Online.
Apr 30 12:49:56.385746 containerd[1910]: time="2025-04-30T12:49:56.385643877Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Apr 30 12:49:56.393537 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Apr 30 12:49:56.400296 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 12:49:56.406652 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 30 12:49:56.488437 containerd[1910]: time="2025-04-30T12:49:56.488196182Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 30 12:49:56.497225 containerd[1910]: time="2025-04-30T12:49:56.496849495Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 30 12:49:56.497225 containerd[1910]: time="2025-04-30T12:49:56.496895444Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 30 12:49:56.497225 containerd[1910]: time="2025-04-30T12:49:56.496922918Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 30 12:49:56.497225 containerd[1910]: time="2025-04-30T12:49:56.497191711Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 30 12:49:56.497658 containerd[1910]: time="2025-04-30T12:49:56.497229214Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 30 12:49:56.497658 containerd[1910]: time="2025-04-30T12:49:56.497307519Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 12:49:56.497658 containerd[1910]: time="2025-04-30T12:49:56.497325947Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 30 12:49:56.497658 containerd[1910]: time="2025-04-30T12:49:56.497608079Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 12:49:56.497658 containerd[1910]: time="2025-04-30T12:49:56.497634543Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 30 12:49:56.497658 containerd[1910]: time="2025-04-30T12:49:56.497655319Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 12:49:56.497889 containerd[1910]: time="2025-04-30T12:49:56.497670981Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 30 12:49:56.497889 containerd[1910]: time="2025-04-30T12:49:56.497775995Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 30 12:49:56.501799 containerd[1910]: time="2025-04-30T12:49:56.499707085Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 30 12:49:56.501799 containerd[1910]: time="2025-04-30T12:49:56.499963576Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 12:49:56.501799 containerd[1910]: time="2025-04-30T12:49:56.499985560Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 30 12:49:56.501799 containerd[1910]: time="2025-04-30T12:49:56.500096045Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 30 12:49:56.501799 containerd[1910]: time="2025-04-30T12:49:56.500762445Z" level=info msg="metadata content store policy set" policy=shared
Apr 30 12:49:56.512873 containerd[1910]: time="2025-04-30T12:49:56.512778535Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 30 12:49:56.514574 containerd[1910]: time="2025-04-30T12:49:56.513064886Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 30 12:49:56.514574 containerd[1910]: time="2025-04-30T12:49:56.513095295Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 30 12:49:56.514574 containerd[1910]: time="2025-04-30T12:49:56.513120016Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 30 12:49:56.514574 containerd[1910]: time="2025-04-30T12:49:56.513153508Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 30 12:49:56.514574 containerd[1910]: time="2025-04-30T12:49:56.513348914Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 30 12:49:56.514574 containerd[1910]: time="2025-04-30T12:49:56.513756919Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 30 12:49:56.514574 containerd[1910]: time="2025-04-30T12:49:56.513872987Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 30 12:49:56.514574 containerd[1910]: time="2025-04-30T12:49:56.513893697Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..."
type=io.containerd.sandbox.store.v1 Apr 30 12:49:56.514574 containerd[1910]: time="2025-04-30T12:49:56.513915673Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 30 12:49:56.514574 containerd[1910]: time="2025-04-30T12:49:56.513939337Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 30 12:49:56.514574 containerd[1910]: time="2025-04-30T12:49:56.513963114Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 30 12:49:56.514574 containerd[1910]: time="2025-04-30T12:49:56.513982287Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 30 12:49:56.514574 containerd[1910]: time="2025-04-30T12:49:56.514003579Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 30 12:49:56.514574 containerd[1910]: time="2025-04-30T12:49:56.514025471Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 30 12:49:56.515137 containerd[1910]: time="2025-04-30T12:49:56.514050991Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 30 12:49:56.515137 containerd[1910]: time="2025-04-30T12:49:56.514068755Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 30 12:49:56.515137 containerd[1910]: time="2025-04-30T12:49:56.514085366Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 30 12:49:56.515137 containerd[1910]: time="2025-04-30T12:49:56.514114372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Apr 30 12:49:56.515137 containerd[1910]: time="2025-04-30T12:49:56.514135156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 30 12:49:56.515137 containerd[1910]: time="2025-04-30T12:49:56.514164372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 30 12:49:56.515137 containerd[1910]: time="2025-04-30T12:49:56.514184843Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 30 12:49:56.515137 containerd[1910]: time="2025-04-30T12:49:56.514202561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 30 12:49:56.515137 containerd[1910]: time="2025-04-30T12:49:56.514220759Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 30 12:49:56.515137 containerd[1910]: time="2025-04-30T12:49:56.514239557Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 30 12:49:56.515137 containerd[1910]: time="2025-04-30T12:49:56.514259413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 30 12:49:56.515137 containerd[1910]: time="2025-04-30T12:49:56.514278351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 30 12:49:56.515137 containerd[1910]: time="2025-04-30T12:49:56.514298408Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 30 12:49:56.515137 containerd[1910]: time="2025-04-30T12:49:56.514315118Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 30 12:49:56.515667 containerd[1910]: time="2025-04-30T12:49:56.514332634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Apr 30 12:49:56.515667 containerd[1910]: time="2025-04-30T12:49:56.514375378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 30 12:49:56.515667 containerd[1910]: time="2025-04-30T12:49:56.514396564Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 30 12:49:56.515667 containerd[1910]: time="2025-04-30T12:49:56.514426630Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 30 12:49:56.515667 containerd[1910]: time="2025-04-30T12:49:56.514446449Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 30 12:49:56.515667 containerd[1910]: time="2025-04-30T12:49:56.514462708Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 30 12:49:56.521802 containerd[1910]: time="2025-04-30T12:49:56.519568067Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 30 12:49:56.521802 containerd[1910]: time="2025-04-30T12:49:56.519623144Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 30 12:49:56.521802 containerd[1910]: time="2025-04-30T12:49:56.519643010Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 30 12:49:56.521802 containerd[1910]: time="2025-04-30T12:49:56.519661456Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 30 12:49:56.521802 containerd[1910]: time="2025-04-30T12:49:56.519675429Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Apr 30 12:49:56.521802 containerd[1910]: time="2025-04-30T12:49:56.519702456Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 30 12:49:56.521802 containerd[1910]: time="2025-04-30T12:49:56.519719231Z" level=info msg="NRI interface is disabled by configuration." Apr 30 12:49:56.521802 containerd[1910]: time="2025-04-30T12:49:56.519734809Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 30 12:49:56.522189 containerd[1910]: time="2025-04-30T12:49:56.520117931Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] 
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 30 12:49:56.522189 containerd[1910]: time="2025-04-30T12:49:56.520196284Z" level=info msg="Connect containerd service" Apr 30 12:49:56.522189 containerd[1910]: time="2025-04-30T12:49:56.520245917Z" level=info msg="using legacy CRI server" Apr 30 12:49:56.522189 containerd[1910]: time="2025-04-30T12:49:56.520255597Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 30 12:49:56.522189 containerd[1910]: time="2025-04-30T12:49:56.520440971Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 30 12:49:56.530384 containerd[1910]: time="2025-04-30T12:49:56.529157403Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: 
failed to load cni config" Apr 30 12:49:56.530384 containerd[1910]: time="2025-04-30T12:49:56.529684343Z" level=info msg="Start subscribing containerd event" Apr 30 12:49:56.530384 containerd[1910]: time="2025-04-30T12:49:56.529744893Z" level=info msg="Start recovering state" Apr 30 12:49:56.530384 containerd[1910]: time="2025-04-30T12:49:56.529835753Z" level=info msg="Start event monitor" Apr 30 12:49:56.530384 containerd[1910]: time="2025-04-30T12:49:56.529849412Z" level=info msg="Start snapshots syncer" Apr 30 12:49:56.530384 containerd[1910]: time="2025-04-30T12:49:56.529863311Z" level=info msg="Start cni network conf syncer for default" Apr 30 12:49:56.530384 containerd[1910]: time="2025-04-30T12:49:56.529873814Z" level=info msg="Start streaming server" Apr 30 12:49:56.531521 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 30 12:49:56.532407 containerd[1910]: time="2025-04-30T12:49:56.532183423Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 30 12:49:56.536623 containerd[1910]: time="2025-04-30T12:49:56.532260190Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 30 12:49:56.541966 containerd[1910]: time="2025-04-30T12:49:56.541928251Z" level=info msg="containerd successfully booted in 0.159487s" Apr 30 12:49:56.542037 systemd[1]: Started containerd.service - containerd container runtime. Apr 30 12:49:56.620224 amazon-ssm-agent[2081]: Initializing new seelog logger Apr 30 12:49:56.620224 amazon-ssm-agent[2081]: New Seelog Logger Creation Complete Apr 30 12:49:56.620932 amazon-ssm-agent[2081]: 2025/04/30 12:49:56 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 12:49:56.621066 amazon-ssm-agent[2081]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Apr 30 12:49:56.622044 amazon-ssm-agent[2081]: 2025/04/30 12:49:56 processing appconfig overrides Apr 30 12:49:56.624591 amazon-ssm-agent[2081]: 2025/04/30 12:49:56 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 12:49:56.624678 amazon-ssm-agent[2081]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 12:49:56.624825 amazon-ssm-agent[2081]: 2025/04/30 12:49:56 processing appconfig overrides Apr 30 12:49:56.626063 amazon-ssm-agent[2081]: 2025/04/30 12:49:56 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 12:49:56.626158 amazon-ssm-agent[2081]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 12:49:56.626339 amazon-ssm-agent[2081]: 2025/04/30 12:49:56 processing appconfig overrides Apr 30 12:49:56.627093 amazon-ssm-agent[2081]: 2025-04-30 12:49:56 INFO Proxy environment variables: Apr 30 12:49:56.632452 amazon-ssm-agent[2081]: 2025/04/30 12:49:56 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 12:49:56.632452 amazon-ssm-agent[2081]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 12:49:56.632452 amazon-ssm-agent[2081]: 2025/04/30 12:49:56 processing appconfig overrides Apr 30 12:49:56.729927 amazon-ssm-agent[2081]: 2025-04-30 12:49:56 INFO https_proxy: Apr 30 12:49:56.827850 amazon-ssm-agent[2081]: 2025-04-30 12:49:56 INFO http_proxy: Apr 30 12:49:56.926233 amazon-ssm-agent[2081]: 2025-04-30 12:49:56 INFO no_proxy: Apr 30 12:49:56.928938 sshd_keygen[1911]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 30 12:49:57.016440 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 30 12:49:57.024328 amazon-ssm-agent[2081]: 2025-04-30 12:49:56 INFO Checking if agent identity type OnPrem can be assumed Apr 30 12:49:57.030478 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 30 12:49:57.038933 systemd[1]: issuegen.service: Deactivated successfully. 
Apr 30 12:49:57.039525 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 30 12:49:57.051709 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 30 12:49:57.071055 tar[1897]: linux-amd64/README.md Apr 30 12:49:57.085301 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 30 12:49:57.097751 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 30 12:49:57.109572 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 30 12:49:57.111413 systemd[1]: Reached target getty.target - Login Prompts. Apr 30 12:49:57.120224 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 30 12:49:57.122666 amazon-ssm-agent[2081]: 2025-04-30 12:49:56 INFO Checking if agent identity type EC2 can be assumed Apr 30 12:49:57.186370 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 30 12:49:57.195605 systemd[1]: Started sshd@0-172.31.21.92:22-147.75.109.163:48120.service - OpenSSH per-connection server daemon (147.75.109.163:48120). Apr 30 12:49:57.221218 amazon-ssm-agent[2081]: 2025-04-30 12:49:56 INFO Agent will take identity from EC2 Apr 30 12:49:57.320605 amazon-ssm-agent[2081]: 2025-04-30 12:49:56 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 30 12:49:57.420411 amazon-ssm-agent[2081]: 2025-04-30 12:49:56 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 30 12:49:57.519468 amazon-ssm-agent[2081]: 2025-04-30 12:49:56 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 30 12:49:57.534175 sshd[2122]: Accepted publickey for core from 147.75.109.163 port 48120 ssh2: RSA SHA256:hWFbLZfpyLdN3yuB7pBWwnRO+bXJlsyzaawWuSZBTyk Apr 30 12:49:57.536787 sshd-session[2122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:49:57.550778 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Apr 30 12:49:57.559510 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 30 12:49:57.582858 systemd-logind[1888]: New session 1 of user core. Apr 30 12:49:57.593266 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 30 12:49:57.606537 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 30 12:49:57.620475 amazon-ssm-agent[2081]: 2025-04-30 12:49:56 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Apr 30 12:49:57.619371 (systemd)[2126]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 30 12:49:57.624664 systemd-logind[1888]: New session c1 of user core. Apr 30 12:49:57.719173 amazon-ssm-agent[2081]: 2025-04-30 12:49:56 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Apr 30 12:49:57.818558 amazon-ssm-agent[2081]: 2025-04-30 12:49:56 INFO [amazon-ssm-agent] Starting Core Agent Apr 30 12:49:57.821859 amazon-ssm-agent[2081]: 2025-04-30 12:49:56 INFO [amazon-ssm-agent] registrar detected. Attempting registration Apr 30 12:49:57.821859 amazon-ssm-agent[2081]: 2025-04-30 12:49:56 INFO [Registrar] Starting registrar module Apr 30 12:49:57.821859 amazon-ssm-agent[2081]: 2025-04-30 12:49:56 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Apr 30 12:49:57.821859 amazon-ssm-agent[2081]: 2025-04-30 12:49:57 INFO [EC2Identity] EC2 registration was successful. Apr 30 12:49:57.821859 amazon-ssm-agent[2081]: 2025-04-30 12:49:57 INFO [CredentialRefresher] credentialRefresher has started Apr 30 12:49:57.821859 amazon-ssm-agent[2081]: 2025-04-30 12:49:57 INFO [CredentialRefresher] Starting credentials refresher loop Apr 30 12:49:57.821859 amazon-ssm-agent[2081]: 2025-04-30 12:49:57 INFO EC2RoleProvider Successfully connected with instance profile role credentials Apr 30 12:49:57.850455 systemd[2126]: Queued start job for default target default.target. 
Apr 30 12:49:57.856252 systemd[2126]: Created slice app.slice - User Application Slice. Apr 30 12:49:57.856285 systemd[2126]: Reached target paths.target - Paths. Apr 30 12:49:57.856646 systemd[2126]: Reached target timers.target - Timers. Apr 30 12:49:57.857995 systemd[2126]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 30 12:49:57.870279 systemd[2126]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 30 12:49:57.870395 systemd[2126]: Reached target sockets.target - Sockets. Apr 30 12:49:57.870563 systemd[2126]: Reached target basic.target - Basic System. Apr 30 12:49:57.870648 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 30 12:49:57.871091 systemd[2126]: Reached target default.target - Main User Target. Apr 30 12:49:57.871128 systemd[2126]: Startup finished in 235ms. Apr 30 12:49:57.877352 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 30 12:49:57.918862 amazon-ssm-agent[2081]: 2025-04-30 12:49:57 INFO [CredentialRefresher] Next credential rotation will be in 30.24166140955 minutes Apr 30 12:49:58.091704 systemd[1]: Started sshd@1-172.31.21.92:22-147.75.109.163:48128.service - OpenSSH per-connection server daemon (147.75.109.163:48128). Apr 30 12:49:58.295018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:49:58.296699 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 30 12:49:58.298118 systemd[1]: Startup finished in 596ms (kernel) + 7.441s (initrd) + 7.200s (userspace) = 15.237s. 
Apr 30 12:49:58.298435 (kubelet)[2144]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 12:49:58.345755 sshd[2137]: Accepted publickey for core from 147.75.109.163 port 48128 ssh2: RSA SHA256:hWFbLZfpyLdN3yuB7pBWwnRO+bXJlsyzaawWuSZBTyk Apr 30 12:49:58.346329 sshd-session[2137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:49:58.353559 systemd-logind[1888]: New session 2 of user core. Apr 30 12:49:58.359327 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 30 12:49:58.537985 sshd[2149]: Connection closed by 147.75.109.163 port 48128 Apr 30 12:49:58.538580 sshd-session[2137]: pam_unix(sshd:session): session closed for user core Apr 30 12:49:58.542053 systemd-logind[1888]: Session 2 logged out. Waiting for processes to exit. Apr 30 12:49:58.542760 systemd[1]: sshd@1-172.31.21.92:22-147.75.109.163:48128.service: Deactivated successfully. Apr 30 12:49:58.544846 systemd[1]: session-2.scope: Deactivated successfully. Apr 30 12:49:58.547029 systemd-logind[1888]: Removed session 2. Apr 30 12:49:58.587974 ntpd[1883]: Listen normally on 6 eth0 [fe80::48d:73ff:fe45:a6f3%2]:123 Apr 30 12:49:58.594168 ntpd[1883]: 30 Apr 12:49:58 ntpd[1883]: Listen normally on 6 eth0 [fe80::48d:73ff:fe45:a6f3%2]:123 Apr 30 12:49:58.592507 systemd[1]: Started sshd@2-172.31.21.92:22-147.75.109.163:48142.service - OpenSSH per-connection server daemon (147.75.109.163:48142). 
Apr 30 12:49:58.832138 amazon-ssm-agent[2081]: 2025-04-30 12:49:58 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Apr 30 12:49:58.845706 sshd[2159]: Accepted publickey for core from 147.75.109.163 port 48142 ssh2: RSA SHA256:hWFbLZfpyLdN3yuB7pBWwnRO+bXJlsyzaawWuSZBTyk Apr 30 12:49:58.847706 sshd-session[2159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:49:58.853102 systemd-logind[1888]: New session 3 of user core. Apr 30 12:49:58.858344 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 30 12:49:58.933159 amazon-ssm-agent[2081]: 2025-04-30 12:49:58 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2162) started Apr 30 12:49:59.033457 amazon-ssm-agent[2081]: 2025-04-30 12:49:58 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Apr 30 12:49:59.035438 sshd[2167]: Connection closed by 147.75.109.163 port 48142 Apr 30 12:49:59.036698 sshd-session[2159]: pam_unix(sshd:session): session closed for user core Apr 30 12:49:59.039929 systemd[1]: sshd@2-172.31.21.92:22-147.75.109.163:48142.service: Deactivated successfully. Apr 30 12:49:59.041674 systemd[1]: session-3.scope: Deactivated successfully. Apr 30 12:49:59.043853 systemd-logind[1888]: Session 3 logged out. Waiting for processes to exit. Apr 30 12:49:59.044883 systemd-logind[1888]: Removed session 3. Apr 30 12:49:59.081347 systemd[1]: Started sshd@3-172.31.21.92:22-147.75.109.163:48158.service - OpenSSH per-connection server daemon (147.75.109.163:48158). 
Apr 30 12:49:59.135877 kubelet[2144]: E0430 12:49:59.135788 2144 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 12:49:59.138310 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 12:49:59.138514 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 12:49:59.138911 systemd[1]: kubelet.service: Consumed 1.034s CPU time, 251.4M memory peak. Apr 30 12:49:59.333231 sshd[2179]: Accepted publickey for core from 147.75.109.163 port 48158 ssh2: RSA SHA256:hWFbLZfpyLdN3yuB7pBWwnRO+bXJlsyzaawWuSZBTyk Apr 30 12:49:59.334396 sshd-session[2179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:49:59.339671 systemd-logind[1888]: New session 4 of user core. Apr 30 12:49:59.346359 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 30 12:49:59.523311 sshd[2182]: Connection closed by 147.75.109.163 port 48158 Apr 30 12:49:59.523857 sshd-session[2179]: pam_unix(sshd:session): session closed for user core Apr 30 12:49:59.526810 systemd[1]: sshd@3-172.31.21.92:22-147.75.109.163:48158.service: Deactivated successfully. Apr 30 12:49:59.528497 systemd[1]: session-4.scope: Deactivated successfully. Apr 30 12:49:59.529765 systemd-logind[1888]: Session 4 logged out. Waiting for processes to exit. Apr 30 12:49:59.530858 systemd-logind[1888]: Removed session 4. Apr 30 12:49:59.577445 systemd[1]: Started sshd@4-172.31.21.92:22-147.75.109.163:48162.service - OpenSSH per-connection server daemon (147.75.109.163:48162). 
Apr 30 12:49:59.828097 sshd[2188]: Accepted publickey for core from 147.75.109.163 port 48162 ssh2: RSA SHA256:hWFbLZfpyLdN3yuB7pBWwnRO+bXJlsyzaawWuSZBTyk Apr 30 12:49:59.830060 sshd-session[2188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:49:59.835959 systemd-logind[1888]: New session 5 of user core. Apr 30 12:49:59.845583 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 30 12:50:00.026265 sudo[2191]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 30 12:50:00.026666 sudo[2191]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 12:50:00.038952 sudo[2191]: pam_unix(sudo:session): session closed for user root Apr 30 12:50:00.078540 sshd[2190]: Connection closed by 147.75.109.163 port 48162 Apr 30 12:50:00.080541 sshd-session[2188]: pam_unix(sshd:session): session closed for user core Apr 30 12:50:00.086077 systemd[1]: sshd@4-172.31.21.92:22-147.75.109.163:48162.service: Deactivated successfully. Apr 30 12:50:00.088465 systemd[1]: session-5.scope: Deactivated successfully. Apr 30 12:50:00.090336 systemd-logind[1888]: Session 5 logged out. Waiting for processes to exit. Apr 30 12:50:00.091685 systemd-logind[1888]: Removed session 5. Apr 30 12:50:00.144597 systemd[1]: Started sshd@5-172.31.21.92:22-147.75.109.163:48168.service - OpenSSH per-connection server daemon (147.75.109.163:48168). Apr 30 12:50:00.393307 sshd[2197]: Accepted publickey for core from 147.75.109.163 port 48168 ssh2: RSA SHA256:hWFbLZfpyLdN3yuB7pBWwnRO+bXJlsyzaawWuSZBTyk Apr 30 12:50:00.394518 sshd-session[2197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:50:00.399415 systemd-logind[1888]: New session 6 of user core. Apr 30 12:50:00.408384 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 30 12:50:00.548747 sudo[2201]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 30 12:50:00.549036 sudo[2201]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 12:50:00.552478 sudo[2201]: pam_unix(sudo:session): session closed for user root Apr 30 12:50:00.558161 sudo[2200]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Apr 30 12:50:00.558451 sudo[2200]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 12:50:00.572596 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 30 12:50:00.603334 augenrules[2223]: No rules Apr 30 12:50:00.604719 systemd[1]: audit-rules.service: Deactivated successfully. Apr 30 12:50:00.605011 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 30 12:50:00.606375 sudo[2200]: pam_unix(sudo:session): session closed for user root Apr 30 12:50:00.643713 sshd[2199]: Connection closed by 147.75.109.163 port 48168 Apr 30 12:50:00.644575 sshd-session[2197]: pam_unix(sshd:session): session closed for user core Apr 30 12:50:00.647188 systemd[1]: sshd@5-172.31.21.92:22-147.75.109.163:48168.service: Deactivated successfully. Apr 30 12:50:00.648824 systemd[1]: session-6.scope: Deactivated successfully. Apr 30 12:50:00.650320 systemd-logind[1888]: Session 6 logged out. Waiting for processes to exit. Apr 30 12:50:00.651349 systemd-logind[1888]: Removed session 6. Apr 30 12:50:00.700459 systemd[1]: Started sshd@6-172.31.21.92:22-147.75.109.163:48174.service - OpenSSH per-connection server daemon (147.75.109.163:48174). 
Apr 30 12:50:00.952000 sshd[2232]: Accepted publickey for core from 147.75.109.163 port 48174 ssh2: RSA SHA256:hWFbLZfpyLdN3yuB7pBWwnRO+bXJlsyzaawWuSZBTyk Apr 30 12:50:00.955017 sshd-session[2232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:50:00.961924 systemd-logind[1888]: New session 7 of user core. Apr 30 12:50:00.967428 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 30 12:50:01.108217 sudo[2235]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 30 12:50:01.108509 sudo[2235]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 12:50:02.193826 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 30 12:50:02.194882 (dockerd)[2252]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 30 12:50:02.911802 systemd-resolved[1821]: Clock change detected. Flushing caches. Apr 30 12:50:03.152927 dockerd[2252]: time="2025-04-30T12:50:03.151886037Z" level=info msg="Starting up" Apr 30 12:50:03.432134 dockerd[2252]: time="2025-04-30T12:50:03.431914972Z" level=info msg="Loading containers: start." Apr 30 12:50:03.642884 kernel: Initializing XFRM netlink socket Apr 30 12:50:03.692902 (udev-worker)[2276]: Network interface NamePolicy= disabled on kernel command line. Apr 30 12:50:03.757918 systemd-networkd[1820]: docker0: Link UP Apr 30 12:50:03.787106 dockerd[2252]: time="2025-04-30T12:50:03.787059184Z" level=info msg="Loading containers: done." 
Apr 30 12:50:03.805388 dockerd[2252]: time="2025-04-30T12:50:03.805320593Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 30 12:50:03.805561 dockerd[2252]: time="2025-04-30T12:50:03.805435298Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Apr 30 12:50:03.805561 dockerd[2252]: time="2025-04-30T12:50:03.805549338Z" level=info msg="Daemon has completed initialization" Apr 30 12:50:03.839853 dockerd[2252]: time="2025-04-30T12:50:03.839713494Z" level=info msg="API listen on /run/docker.sock" Apr 30 12:50:03.839977 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 30 12:50:04.943197 containerd[1910]: time="2025-04-30T12:50:04.943154397Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" Apr 30 12:50:05.582707 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount927897901.mount: Deactivated successfully. 
Apr 30 12:50:07.384168 containerd[1910]: time="2025-04-30T12:50:07.384108343Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:50:07.385293 containerd[1910]: time="2025-04-30T12:50:07.385239130Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=28682879" Apr 30 12:50:07.386457 containerd[1910]: time="2025-04-30T12:50:07.386406619Z" level=info msg="ImageCreate event name:\"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:50:07.388896 containerd[1910]: time="2025-04-30T12:50:07.388866244Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:50:07.390077 containerd[1910]: time="2025-04-30T12:50:07.389885398Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"28679679\" in 2.446691527s" Apr 30 12:50:07.390077 containerd[1910]: time="2025-04-30T12:50:07.389917508Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\"" Apr 30 12:50:07.390449 containerd[1910]: time="2025-04-30T12:50:07.390430051Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" Apr 30 12:50:09.620356 containerd[1910]: time="2025-04-30T12:50:09.620303529Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:50:09.621644 containerd[1910]: time="2025-04-30T12:50:09.621587467Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=24779589" Apr 30 12:50:09.622686 containerd[1910]: time="2025-04-30T12:50:09.622624995Z" level=info msg="ImageCreate event name:\"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:50:09.625613 containerd[1910]: time="2025-04-30T12:50:09.625551092Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:50:09.626799 containerd[1910]: time="2025-04-30T12:50:09.626622486Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"26267962\" in 2.236105716s" Apr 30 12:50:09.626799 containerd[1910]: time="2025-04-30T12:50:09.626661742Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\"" Apr 30 12:50:09.627473 containerd[1910]: time="2025-04-30T12:50:09.627273529Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" Apr 30 12:50:09.649798 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 30 12:50:09.655587 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:50:09.859987 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 30 12:50:09.863820 (kubelet)[2507]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 12:50:09.915623 kubelet[2507]: E0430 12:50:09.915536 2507 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 12:50:09.919418 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 12:50:09.919620 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 12:50:09.920209 systemd[1]: kubelet.service: Consumed 158ms CPU time, 102.6M memory peak. Apr 30 12:50:11.392303 containerd[1910]: time="2025-04-30T12:50:11.392232855Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:50:11.393327 containerd[1910]: time="2025-04-30T12:50:11.393276461Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=19169938" Apr 30 12:50:11.394441 containerd[1910]: time="2025-04-30T12:50:11.394360487Z" level=info msg="ImageCreate event name:\"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:50:11.397631 containerd[1910]: time="2025-04-30T12:50:11.397590322Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:50:11.399849 containerd[1910]: time="2025-04-30T12:50:11.398915191Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id 
\"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"20658329\" in 1.771605662s" Apr 30 12:50:11.399849 containerd[1910]: time="2025-04-30T12:50:11.398958968Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\"" Apr 30 12:50:11.399849 containerd[1910]: time="2025-04-30T12:50:11.399716775Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" Apr 30 12:50:12.659916 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2579035816.mount: Deactivated successfully. Apr 30 12:50:13.210876 containerd[1910]: time="2025-04-30T12:50:13.210811810Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:50:13.211741 containerd[1910]: time="2025-04-30T12:50:13.211689632Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=30917856" Apr 30 12:50:13.212810 containerd[1910]: time="2025-04-30T12:50:13.212763063Z" level=info msg="ImageCreate event name:\"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:50:13.214548 containerd[1910]: time="2025-04-30T12:50:13.214496049Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:50:13.215199 containerd[1910]: time="2025-04-30T12:50:13.215062086Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\", repo tag 
\"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"30916875\" in 1.81531322s" Apr 30 12:50:13.215199 containerd[1910]: time="2025-04-30T12:50:13.215094387Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\"" Apr 30 12:50:13.215685 containerd[1910]: time="2025-04-30T12:50:13.215590674Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Apr 30 12:50:13.766402 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2927662224.mount: Deactivated successfully. Apr 30 12:50:14.947273 containerd[1910]: time="2025-04-30T12:50:14.947203165Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:50:14.948620 containerd[1910]: time="2025-04-30T12:50:14.948561799Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Apr 30 12:50:14.949972 containerd[1910]: time="2025-04-30T12:50:14.949917989Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:50:14.952933 containerd[1910]: time="2025-04-30T12:50:14.952788121Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:50:14.954844 containerd[1910]: time="2025-04-30T12:50:14.954790099Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.739168739s" Apr 30 12:50:14.954844 containerd[1910]: time="2025-04-30T12:50:14.954847655Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Apr 30 12:50:14.955351 containerd[1910]: time="2025-04-30T12:50:14.955317718Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 30 12:50:15.413616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1953559145.mount: Deactivated successfully. Apr 30 12:50:15.418898 containerd[1910]: time="2025-04-30T12:50:15.418848700Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:50:15.419673 containerd[1910]: time="2025-04-30T12:50:15.419623931Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Apr 30 12:50:15.420679 containerd[1910]: time="2025-04-30T12:50:15.420634171Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:50:15.422944 containerd[1910]: time="2025-04-30T12:50:15.422895728Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:50:15.423787 containerd[1910]: time="2025-04-30T12:50:15.423481480Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 468.084764ms" Apr 30 
12:50:15.423787 containerd[1910]: time="2025-04-30T12:50:15.423511657Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 30 12:50:15.424209 containerd[1910]: time="2025-04-30T12:50:15.424153019Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Apr 30 12:50:16.432609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3225108753.mount: Deactivated successfully. Apr 30 12:50:18.871343 containerd[1910]: time="2025-04-30T12:50:18.871273883Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:50:18.875081 containerd[1910]: time="2025-04-30T12:50:18.875012812Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" Apr 30 12:50:18.880003 containerd[1910]: time="2025-04-30T12:50:18.879928950Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:50:18.886584 containerd[1910]: time="2025-04-30T12:50:18.886506614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:50:18.887967 containerd[1910]: time="2025-04-30T12:50:18.887656625Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.463436622s" Apr 30 12:50:18.887967 containerd[1910]: time="2025-04-30T12:50:18.887688715Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image 
reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Apr 30 12:50:20.150103 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 30 12:50:20.157441 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:50:20.441029 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:50:20.450254 (kubelet)[2664]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 12:50:20.526820 kubelet[2664]: E0430 12:50:20.526766 2664 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 12:50:20.530825 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 12:50:20.531194 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 12:50:20.531661 systemd[1]: kubelet.service: Consumed 194ms CPU time, 103.9M memory peak. Apr 30 12:50:22.239824 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:50:22.240095 systemd[1]: kubelet.service: Consumed 194ms CPU time, 103.9M memory peak. Apr 30 12:50:22.246157 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:50:22.286511 systemd[1]: Reload requested from client PID 2678 ('systemctl') (unit session-7.scope)... Apr 30 12:50:22.286530 systemd[1]: Reloading... Apr 30 12:50:22.388882 zram_generator::config[2721]: No configuration found. Apr 30 12:50:22.539928 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Apr 30 12:50:22.660535 systemd[1]: Reloading finished in 373 ms. Apr 30 12:50:22.711142 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:50:22.724620 (kubelet)[2778]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 12:50:22.728697 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:50:22.729511 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 12:50:22.729890 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:50:22.729951 systemd[1]: kubelet.service: Consumed 103ms CPU time, 92.6M memory peak. Apr 30 12:50:22.736245 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:50:22.919546 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:50:22.924122 (kubelet)[2789]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 12:50:22.977793 kubelet[2789]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 12:50:22.977793 kubelet[2789]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 30 12:50:22.977793 kubelet[2789]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 30 12:50:22.978275 kubelet[2789]: I0430 12:50:22.977910 2789 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 12:50:23.610398 kubelet[2789]: I0430 12:50:23.610330 2789 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Apr 30 12:50:23.610398 kubelet[2789]: I0430 12:50:23.610368 2789 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 12:50:23.610669 kubelet[2789]: I0430 12:50:23.610647 2789 server.go:954] "Client rotation is on, will bootstrap in background" Apr 30 12:50:23.658897 kubelet[2789]: I0430 12:50:23.658645 2789 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 12:50:23.660769 kubelet[2789]: E0430 12:50:23.660641 2789 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.21.92:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.21.92:6443: connect: connection refused" logger="UnhandledError" Apr 30 12:50:23.678629 kubelet[2789]: E0430 12:50:23.678576 2789 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 30 12:50:23.678629 kubelet[2789]: I0430 12:50:23.678624 2789 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 30 12:50:23.685717 kubelet[2789]: I0430 12:50:23.685493 2789 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 12:50:23.689147 kubelet[2789]: I0430 12:50:23.689090 2789 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 12:50:23.689316 kubelet[2789]: I0430 12:50:23.689137 2789 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-21-92","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 30 12:50:23.691850 kubelet[2789]: I0430 12:50:23.691811 2789 topology_manager.go:138] "Creating topology manager with none 
policy" Apr 30 12:50:23.691850 kubelet[2789]: I0430 12:50:23.691840 2789 container_manager_linux.go:304] "Creating device plugin manager" Apr 30 12:50:23.691982 kubelet[2789]: I0430 12:50:23.691960 2789 state_mem.go:36] "Initialized new in-memory state store" Apr 30 12:50:23.698184 kubelet[2789]: I0430 12:50:23.698154 2789 kubelet.go:446] "Attempting to sync node with API server" Apr 30 12:50:23.698184 kubelet[2789]: I0430 12:50:23.698181 2789 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 12:50:23.698184 kubelet[2789]: I0430 12:50:23.698202 2789 kubelet.go:352] "Adding apiserver pod source" Apr 30 12:50:23.699325 kubelet[2789]: I0430 12:50:23.698215 2789 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 12:50:23.711146 kubelet[2789]: W0430 12:50:23.711101 2789 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.21.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.21.92:6443: connect: connection refused Apr 30 12:50:23.711341 kubelet[2789]: E0430 12:50:23.711313 2789 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.21.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.21.92:6443: connect: connection refused" logger="UnhandledError" Apr 30 12:50:23.711442 kubelet[2789]: W0430 12:50:23.711408 2789 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.21.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-92&limit=500&resourceVersion=0": dial tcp 172.31.21.92:6443: connect: connection refused Apr 30 12:50:23.711478 kubelet[2789]: E0430 12:50:23.711448 2789 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.Node: failed to list *v1.Node: Get \"https://172.31.21.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-92&limit=500&resourceVersion=0\": dial tcp 172.31.21.92:6443: connect: connection refused" logger="UnhandledError" Apr 30 12:50:23.713345 kubelet[2789]: I0430 12:50:23.713318 2789 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Apr 30 12:50:23.717075 kubelet[2789]: I0430 12:50:23.717051 2789 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 12:50:23.719168 kubelet[2789]: W0430 12:50:23.719124 2789 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 30 12:50:23.719729 kubelet[2789]: I0430 12:50:23.719687 2789 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 30 12:50:23.719729 kubelet[2789]: I0430 12:50:23.719720 2789 server.go:1287] "Started kubelet" Apr 30 12:50:23.721081 kubelet[2789]: I0430 12:50:23.720867 2789 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 12:50:23.725959 kubelet[2789]: I0430 12:50:23.725897 2789 server.go:490] "Adding debug handlers to kubelet server" Apr 30 12:50:23.729135 kubelet[2789]: I0430 12:50:23.728754 2789 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 12:50:23.729135 kubelet[2789]: I0430 12:50:23.729053 2789 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 12:50:23.731173 kubelet[2789]: I0430 12:50:23.730556 2789 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 12:50:23.734124 kubelet[2789]: E0430 12:50:23.730439 2789 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.21.92:6443/api/v1/namespaces/default/events\": dial tcp 172.31.21.92:6443: connect: connection refused" 
event="&Event{ObjectMeta:{ip-172-31-21-92.183b19989874a0a8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-21-92,UID:ip-172-31-21-92,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-21-92,},FirstTimestamp:2025-04-30 12:50:23.719702696 +0000 UTC m=+0.791820297,LastTimestamp:2025-04-30 12:50:23.719702696 +0000 UTC m=+0.791820297,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-21-92,}" Apr 30 12:50:23.735168 kubelet[2789]: I0430 12:50:23.735051 2789 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 30 12:50:23.737358 kubelet[2789]: E0430 12:50:23.737297 2789 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-21-92\" not found" Apr 30 12:50:23.737358 kubelet[2789]: I0430 12:50:23.737331 2789 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 30 12:50:23.739138 kubelet[2789]: I0430 12:50:23.739116 2789 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 12:50:23.739214 kubelet[2789]: I0430 12:50:23.739172 2789 reconciler.go:26] "Reconciler: start to sync state" Apr 30 12:50:23.739903 kubelet[2789]: W0430 12:50:23.739798 2789 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.21.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.92:6443: connect: connection refused Apr 30 12:50:23.739903 kubelet[2789]: E0430 12:50:23.739866 2789 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://172.31.21.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.21.92:6443: connect: connection refused" logger="UnhandledError" Apr 30 12:50:23.741465 kubelet[2789]: E0430 12:50:23.740187 2789 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-92?timeout=10s\": dial tcp 172.31.21.92:6443: connect: connection refused" interval="200ms" Apr 30 12:50:23.741688 kubelet[2789]: I0430 12:50:23.741666 2789 factory.go:221] Registration of the systemd container factory successfully Apr 30 12:50:23.741915 kubelet[2789]: I0430 12:50:23.741784 2789 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 12:50:23.749858 kubelet[2789]: I0430 12:50:23.745729 2789 factory.go:221] Registration of the containerd container factory successfully Apr 30 12:50:23.755907 kubelet[2789]: I0430 12:50:23.755865 2789 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 12:50:23.757263 kubelet[2789]: I0430 12:50:23.757246 2789 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 30 12:50:23.757398 kubelet[2789]: I0430 12:50:23.757388 2789 status_manager.go:227] "Starting to sync pod status with apiserver" Apr 30 12:50:23.757457 kubelet[2789]: I0430 12:50:23.757450 2789 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 30 12:50:23.757498 kubelet[2789]: I0430 12:50:23.757493 2789 kubelet.go:2388] "Starting kubelet main sync loop" Apr 30 12:50:23.757584 kubelet[2789]: E0430 12:50:23.757568 2789 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 12:50:23.765329 kubelet[2789]: E0430 12:50:23.765301 2789 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 12:50:23.765552 kubelet[2789]: W0430 12:50:23.765512 2789 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.21.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.92:6443: connect: connection refused Apr 30 12:50:23.765714 kubelet[2789]: E0430 12:50:23.765563 2789 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.21.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.21.92:6443: connect: connection refused" logger="UnhandledError" Apr 30 12:50:23.777444 kubelet[2789]: I0430 12:50:23.777411 2789 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 30 12:50:23.777444 kubelet[2789]: I0430 12:50:23.777433 2789 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 30 12:50:23.777587 kubelet[2789]: I0430 12:50:23.777454 2789 state_mem.go:36] "Initialized new in-memory state store" Apr 30 12:50:23.782326 kubelet[2789]: I0430 12:50:23.782292 2789 policy_none.go:49] "None policy: Start" Apr 30 12:50:23.782326 kubelet[2789]: I0430 12:50:23.782322 2789 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 30 12:50:23.782326 kubelet[2789]: I0430 12:50:23.782334 2789 state_mem.go:35] "Initializing new in-memory state store" Apr 30 
12:50:23.791875 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 30 12:50:23.806119 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 30 12:50:23.809269 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 30 12:50:23.814703 kubelet[2789]: I0430 12:50:23.814676 2789 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 12:50:23.815381 kubelet[2789]: I0430 12:50:23.814885 2789 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 30 12:50:23.815381 kubelet[2789]: I0430 12:50:23.814897 2789 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 12:50:23.815381 kubelet[2789]: I0430 12:50:23.815115 2789 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 12:50:23.816854 kubelet[2789]: E0430 12:50:23.816669 2789 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 30 12:50:23.817114 kubelet[2789]: E0430 12:50:23.817089 2789 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-21-92\" not found" Apr 30 12:50:23.867202 systemd[1]: Created slice kubepods-burstable-podb41e18afc3efad0cbb2625e8b964903a.slice - libcontainer container kubepods-burstable-podb41e18afc3efad0cbb2625e8b964903a.slice. 
Apr 30 12:50:23.880081 kubelet[2789]: E0430 12:50:23.879800 2789 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-92\" not found" node="ip-172-31-21-92" Apr 30 12:50:23.882554 systemd[1]: Created slice kubepods-burstable-pod740e420c02418a6617c89d61a6cc6cc8.slice - libcontainer container kubepods-burstable-pod740e420c02418a6617c89d61a6cc6cc8.slice. Apr 30 12:50:23.884947 kubelet[2789]: E0430 12:50:23.884921 2789 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-92\" not found" node="ip-172-31-21-92" Apr 30 12:50:23.887578 systemd[1]: Created slice kubepods-burstable-pod43cae4d31bdf06691b63c9b54206c5ad.slice - libcontainer container kubepods-burstable-pod43cae4d31bdf06691b63c9b54206c5ad.slice. Apr 30 12:50:23.889272 kubelet[2789]: E0430 12:50:23.889242 2789 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-92\" not found" node="ip-172-31-21-92" Apr 30 12:50:23.917225 kubelet[2789]: I0430 12:50:23.917185 2789 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-21-92" Apr 30 12:50:23.917551 kubelet[2789]: E0430 12:50:23.917511 2789 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.21.92:6443/api/v1/nodes\": dial tcp 172.31.21.92:6443: connect: connection refused" node="ip-172-31-21-92" Apr 30 12:50:23.941376 kubelet[2789]: I0430 12:50:23.941024 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/740e420c02418a6617c89d61a6cc6cc8-kubeconfig\") pod \"kube-controller-manager-ip-172-31-21-92\" (UID: \"740e420c02418a6617c89d61a6cc6cc8\") " pod="kube-system/kube-controller-manager-ip-172-31-21-92" Apr 30 12:50:23.941376 kubelet[2789]: I0430 12:50:23.941075 2789 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/740e420c02418a6617c89d61a6cc6cc8-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-21-92\" (UID: \"740e420c02418a6617c89d61a6cc6cc8\") " pod="kube-system/kube-controller-manager-ip-172-31-21-92" Apr 30 12:50:23.941376 kubelet[2789]: I0430 12:50:23.941109 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b41e18afc3efad0cbb2625e8b964903a-ca-certs\") pod \"kube-apiserver-ip-172-31-21-92\" (UID: \"b41e18afc3efad0cbb2625e8b964903a\") " pod="kube-system/kube-apiserver-ip-172-31-21-92" Apr 30 12:50:23.941376 kubelet[2789]: I0430 12:50:23.941152 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/740e420c02418a6617c89d61a6cc6cc8-ca-certs\") pod \"kube-controller-manager-ip-172-31-21-92\" (UID: \"740e420c02418a6617c89d61a6cc6cc8\") " pod="kube-system/kube-controller-manager-ip-172-31-21-92" Apr 30 12:50:23.941376 kubelet[2789]: I0430 12:50:23.941178 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/740e420c02418a6617c89d61a6cc6cc8-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-21-92\" (UID: \"740e420c02418a6617c89d61a6cc6cc8\") " pod="kube-system/kube-controller-manager-ip-172-31-21-92" Apr 30 12:50:23.941626 kubelet[2789]: I0430 12:50:23.941200 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/740e420c02418a6617c89d61a6cc6cc8-k8s-certs\") pod \"kube-controller-manager-ip-172-31-21-92\" (UID: \"740e420c02418a6617c89d61a6cc6cc8\") " pod="kube-system/kube-controller-manager-ip-172-31-21-92" Apr 30 
12:50:23.941626 kubelet[2789]: I0430 12:50:23.941225 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/43cae4d31bdf06691b63c9b54206c5ad-kubeconfig\") pod \"kube-scheduler-ip-172-31-21-92\" (UID: \"43cae4d31bdf06691b63c9b54206c5ad\") " pod="kube-system/kube-scheduler-ip-172-31-21-92" Apr 30 12:50:23.941626 kubelet[2789]: I0430 12:50:23.941254 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b41e18afc3efad0cbb2625e8b964903a-k8s-certs\") pod \"kube-apiserver-ip-172-31-21-92\" (UID: \"b41e18afc3efad0cbb2625e8b964903a\") " pod="kube-system/kube-apiserver-ip-172-31-21-92" Apr 30 12:50:23.941626 kubelet[2789]: I0430 12:50:23.941282 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b41e18afc3efad0cbb2625e8b964903a-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-21-92\" (UID: \"b41e18afc3efad0cbb2625e8b964903a\") " pod="kube-system/kube-apiserver-ip-172-31-21-92" Apr 30 12:50:23.941626 kubelet[2789]: E0430 12:50:23.941291 2789 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-92?timeout=10s\": dial tcp 172.31.21.92:6443: connect: connection refused" interval="400ms" Apr 30 12:50:24.120042 kubelet[2789]: I0430 12:50:24.119946 2789 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-21-92" Apr 30 12:50:24.120374 kubelet[2789]: E0430 12:50:24.120210 2789 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.21.92:6443/api/v1/nodes\": dial tcp 172.31.21.92:6443: connect: connection refused" node="ip-172-31-21-92" Apr 30 12:50:24.181105 containerd[1910]: 
time="2025-04-30T12:50:24.181065068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-21-92,Uid:b41e18afc3efad0cbb2625e8b964903a,Namespace:kube-system,Attempt:0,}" Apr 30 12:50:24.191779 containerd[1910]: time="2025-04-30T12:50:24.191456543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-21-92,Uid:43cae4d31bdf06691b63c9b54206c5ad,Namespace:kube-system,Attempt:0,}" Apr 30 12:50:24.191779 containerd[1910]: time="2025-04-30T12:50:24.191457625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-21-92,Uid:740e420c02418a6617c89d61a6cc6cc8,Namespace:kube-system,Attempt:0,}" Apr 30 12:50:24.342617 kubelet[2789]: E0430 12:50:24.342575 2789 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-92?timeout=10s\": dial tcp 172.31.21.92:6443: connect: connection refused" interval="800ms" Apr 30 12:50:24.521943 kubelet[2789]: I0430 12:50:24.521918 2789 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-21-92" Apr 30 12:50:24.522540 kubelet[2789]: E0430 12:50:24.522476 2789 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.21.92:6443/api/v1/nodes\": dial tcp 172.31.21.92:6443: connect: connection refused" node="ip-172-31-21-92" Apr 30 12:50:24.654511 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount718621897.mount: Deactivated successfully. 
Apr 30 12:50:24.666192 containerd[1910]: time="2025-04-30T12:50:24.666138483Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 12:50:24.674911 containerd[1910]: time="2025-04-30T12:50:24.674848039Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Apr 30 12:50:24.676900 containerd[1910]: time="2025-04-30T12:50:24.676852987Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 12:50:24.679280 containerd[1910]: time="2025-04-30T12:50:24.679237034Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 12:50:24.682915 containerd[1910]: time="2025-04-30T12:50:24.682853603Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 12:50:24.685356 containerd[1910]: time="2025-04-30T12:50:24.685316806Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 12:50:24.687661 containerd[1910]: time="2025-04-30T12:50:24.687618507Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 12:50:24.688562 containerd[1910]: time="2025-04-30T12:50:24.688527380Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 507.367388ms" Apr 30 12:50:24.689449 containerd[1910]: time="2025-04-30T12:50:24.689395176Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 12:50:24.691401 containerd[1910]: time="2025-04-30T12:50:24.691300138Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 499.549765ms" Apr 30 12:50:24.698012 kubelet[2789]: W0430 12:50:24.697956 2789 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.21.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.92:6443: connect: connection refused Apr 30 12:50:24.698122 kubelet[2789]: E0430 12:50:24.698019 2789 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.21.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.21.92:6443: connect: connection refused" logger="UnhandledError" Apr 30 12:50:24.703627 containerd[1910]: time="2025-04-30T12:50:24.703516666Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 511.967268ms" Apr 30 12:50:24.955197 containerd[1910]: 
time="2025-04-30T12:50:24.955130050Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:50:24.955845 containerd[1910]: time="2025-04-30T12:50:24.955719704Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:50:24.955845 containerd[1910]: time="2025-04-30T12:50:24.955747071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:50:24.956749 containerd[1910]: time="2025-04-30T12:50:24.955898018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:50:24.956922 containerd[1910]: time="2025-04-30T12:50:24.956470507Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:50:24.956922 containerd[1910]: time="2025-04-30T12:50:24.956517640Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:50:24.956922 containerd[1910]: time="2025-04-30T12:50:24.956531897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:50:24.956922 containerd[1910]: time="2025-04-30T12:50:24.956597309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:50:24.965126 containerd[1910]: time="2025-04-30T12:50:24.964814881Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:50:24.965126 containerd[1910]: time="2025-04-30T12:50:24.964890964Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:50:24.965126 containerd[1910]: time="2025-04-30T12:50:24.964906767Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:50:24.965126 containerd[1910]: time="2025-04-30T12:50:24.964981152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:50:24.982962 kubelet[2789]: W0430 12:50:24.982703 2789 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.21.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.21.92:6443: connect: connection refused Apr 30 12:50:24.982962 kubelet[2789]: E0430 12:50:24.982763 2789 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.21.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.21.92:6443: connect: connection refused" logger="UnhandledError" Apr 30 12:50:24.988030 systemd[1]: Started cri-containerd-fbfb6ab233f8870d4eea8607e60153a5393df8bf79d7db389265a36ee5c581c0.scope - libcontainer container fbfb6ab233f8870d4eea8607e60153a5393df8bf79d7db389265a36ee5c581c0. Apr 30 12:50:24.994372 systemd[1]: Started cri-containerd-67365fcdb95a26bd4e22a84923d18ea9c1c9eabe0decf1256838a0f7af82e339.scope - libcontainer container 67365fcdb95a26bd4e22a84923d18ea9c1c9eabe0decf1256838a0f7af82e339. Apr 30 12:50:24.996478 systemd[1]: Started cri-containerd-d0f691d2fb0f15b43e0fe7d43a6f77a5e151db0600f2acde5962fa0863bd42cb.scope - libcontainer container d0f691d2fb0f15b43e0fe7d43a6f77a5e151db0600f2acde5962fa0863bd42cb. 
Apr 30 12:50:25.053205 containerd[1910]: time="2025-04-30T12:50:25.053132091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-21-92,Uid:740e420c02418a6617c89d61a6cc6cc8,Namespace:kube-system,Attempt:0,} returns sandbox id \"fbfb6ab233f8870d4eea8607e60153a5393df8bf79d7db389265a36ee5c581c0\"" Apr 30 12:50:25.058590 containerd[1910]: time="2025-04-30T12:50:25.058426999Z" level=info msg="CreateContainer within sandbox \"fbfb6ab233f8870d4eea8607e60153a5393df8bf79d7db389265a36ee5c581c0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 30 12:50:25.074951 containerd[1910]: time="2025-04-30T12:50:25.073992832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-21-92,Uid:b41e18afc3efad0cbb2625e8b964903a,Namespace:kube-system,Attempt:0,} returns sandbox id \"d0f691d2fb0f15b43e0fe7d43a6f77a5e151db0600f2acde5962fa0863bd42cb\"" Apr 30 12:50:25.081255 containerd[1910]: time="2025-04-30T12:50:25.081216432Z" level=info msg="CreateContainer within sandbox \"d0f691d2fb0f15b43e0fe7d43a6f77a5e151db0600f2acde5962fa0863bd42cb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 30 12:50:25.085426 containerd[1910]: time="2025-04-30T12:50:25.085385773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-21-92,Uid:43cae4d31bdf06691b63c9b54206c5ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"67365fcdb95a26bd4e22a84923d18ea9c1c9eabe0decf1256838a0f7af82e339\"" Apr 30 12:50:25.089746 containerd[1910]: time="2025-04-30T12:50:25.089574891Z" level=info msg="CreateContainer within sandbox \"67365fcdb95a26bd4e22a84923d18ea9c1c9eabe0decf1256838a0f7af82e339\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 30 12:50:25.127154 containerd[1910]: time="2025-04-30T12:50:25.127098400Z" level=info msg="CreateContainer within sandbox \"67365fcdb95a26bd4e22a84923d18ea9c1c9eabe0decf1256838a0f7af82e339\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"417daa56ed09a27109b09b1ba1729daec7edc10920e40d9bb623c33ede6dacda\"" Apr 30 12:50:25.127763 containerd[1910]: time="2025-04-30T12:50:25.127683724Z" level=info msg="StartContainer for \"417daa56ed09a27109b09b1ba1729daec7edc10920e40d9bb623c33ede6dacda\"" Apr 30 12:50:25.141881 containerd[1910]: time="2025-04-30T12:50:25.141745598Z" level=info msg="CreateContainer within sandbox \"d0f691d2fb0f15b43e0fe7d43a6f77a5e151db0600f2acde5962fa0863bd42cb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a0fa1f69d75d6b716973fc1639e2809f2bedb0f8a3174a0f739207a7325f7f00\"" Apr 30 12:50:25.142349 containerd[1910]: time="2025-04-30T12:50:25.142320576Z" level=info msg="StartContainer for \"a0fa1f69d75d6b716973fc1639e2809f2bedb0f8a3174a0f739207a7325f7f00\"" Apr 30 12:50:25.142609 containerd[1910]: time="2025-04-30T12:50:25.142582314Z" level=info msg="CreateContainer within sandbox \"fbfb6ab233f8870d4eea8607e60153a5393df8bf79d7db389265a36ee5c581c0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0f303915eaf9586ef7ffa6f0813e482e5708c162edb3098e140c06bf67406195\"" Apr 30 12:50:25.143612 kubelet[2789]: E0430 12:50:25.143563 2789 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-92?timeout=10s\": dial tcp 172.31.21.92:6443: connect: connection refused" interval="1.6s" Apr 30 12:50:25.143916 containerd[1910]: time="2025-04-30T12:50:25.143646639Z" level=info msg="StartContainer for \"0f303915eaf9586ef7ffa6f0813e482e5708c162edb3098e140c06bf67406195\"" Apr 30 12:50:25.160023 systemd[1]: Started cri-containerd-417daa56ed09a27109b09b1ba1729daec7edc10920e40d9bb623c33ede6dacda.scope - libcontainer container 417daa56ed09a27109b09b1ba1729daec7edc10920e40d9bb623c33ede6dacda. 
Apr 30 12:50:25.192082 systemd[1]: Started cri-containerd-0f303915eaf9586ef7ffa6f0813e482e5708c162edb3098e140c06bf67406195.scope - libcontainer container 0f303915eaf9586ef7ffa6f0813e482e5708c162edb3098e140c06bf67406195. Apr 30 12:50:25.193640 systemd[1]: Started cri-containerd-a0fa1f69d75d6b716973fc1639e2809f2bedb0f8a3174a0f739207a7325f7f00.scope - libcontainer container a0fa1f69d75d6b716973fc1639e2809f2bedb0f8a3174a0f739207a7325f7f00. Apr 30 12:50:25.231512 containerd[1910]: time="2025-04-30T12:50:25.230846690Z" level=info msg="StartContainer for \"417daa56ed09a27109b09b1ba1729daec7edc10920e40d9bb623c33ede6dacda\" returns successfully" Apr 30 12:50:25.263618 kubelet[2789]: W0430 12:50:25.263277 2789 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.21.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-92&limit=500&resourceVersion=0": dial tcp 172.31.21.92:6443: connect: connection refused Apr 30 12:50:25.263618 kubelet[2789]: E0430 12:50:25.263468 2789 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.21.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-92&limit=500&resourceVersion=0\": dial tcp 172.31.21.92:6443: connect: connection refused" logger="UnhandledError" Apr 30 12:50:25.269427 containerd[1910]: time="2025-04-30T12:50:25.269294855Z" level=info msg="StartContainer for \"a0fa1f69d75d6b716973fc1639e2809f2bedb0f8a3174a0f739207a7325f7f00\" returns successfully" Apr 30 12:50:25.286682 containerd[1910]: time="2025-04-30T12:50:25.286631009Z" level=info msg="StartContainer for \"0f303915eaf9586ef7ffa6f0813e482e5708c162edb3098e140c06bf67406195\" returns successfully" Apr 30 12:50:25.307957 kubelet[2789]: W0430 12:50:25.307732 2789 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://172.31.21.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.92:6443: connect: connection refused Apr 30 12:50:25.308181 kubelet[2789]: E0430 12:50:25.307813 2789 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.21.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.21.92:6443: connect: connection refused" logger="UnhandledError" Apr 30 12:50:25.325897 kubelet[2789]: I0430 12:50:25.325593 2789 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-21-92" Apr 30 12:50:25.326476 kubelet[2789]: E0430 12:50:25.326437 2789 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.21.92:6443/api/v1/nodes\": dial tcp 172.31.21.92:6443: connect: connection refused" node="ip-172-31-21-92" Apr 30 12:50:25.781622 kubelet[2789]: E0430 12:50:25.781249 2789 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-92\" not found" node="ip-172-31-21-92" Apr 30 12:50:25.784149 kubelet[2789]: E0430 12:50:25.783938 2789 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-92\" not found" node="ip-172-31-21-92" Apr 30 12:50:25.788606 kubelet[2789]: E0430 12:50:25.788426 2789 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-92\" not found" node="ip-172-31-21-92" Apr 30 12:50:25.841473 kubelet[2789]: E0430 12:50:25.841420 2789 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://172.31.21.92:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.21.92:6443: connect: connection refused" logger="UnhandledError" Apr 30 12:50:26.357342 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Apr 30 12:50:26.745044 kubelet[2789]: E0430 12:50:26.744983 2789 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-92?timeout=10s\": dial tcp 172.31.21.92:6443: connect: connection refused" interval="3.2s" Apr 30 12:50:26.788235 kubelet[2789]: E0430 12:50:26.788087 2789 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-92\" not found" node="ip-172-31-21-92" Apr 30 12:50:26.788235 kubelet[2789]: E0430 12:50:26.788172 2789 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-92\" not found" node="ip-172-31-21-92" Apr 30 12:50:26.820346 kubelet[2789]: W0430 12:50:26.820287 2789 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.21.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.92:6443: connect: connection refused Apr 30 12:50:26.820346 kubelet[2789]: E0430 12:50:26.820336 2789 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.21.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.21.92:6443: connect: connection refused" logger="UnhandledError" Apr 30 12:50:26.928908 kubelet[2789]: I0430 12:50:26.928874 2789 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-21-92" Apr 30 12:50:26.929340 kubelet[2789]: E0430 12:50:26.929308 2789 kubelet_node_status.go:108] "Unable to register node 
with API server" err="Post \"https://172.31.21.92:6443/api/v1/nodes\": dial tcp 172.31.21.92:6443: connect: connection refused" node="ip-172-31-21-92" Apr 30 12:50:27.949848 kubelet[2789]: W0430 12:50:27.949741 2789 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.21.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.21.92:6443: connect: connection refused Apr 30 12:50:27.949848 kubelet[2789]: E0430 12:50:27.949797 2789 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.21.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.21.92:6443: connect: connection refused" logger="UnhandledError" Apr 30 12:50:27.970769 kubelet[2789]: W0430 12:50:27.970729 2789 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.21.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-92&limit=500&resourceVersion=0": dial tcp 172.31.21.92:6443: connect: connection refused Apr 30 12:50:27.970934 kubelet[2789]: E0430 12:50:27.970780 2789 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.21.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-92&limit=500&resourceVersion=0\": dial tcp 172.31.21.92:6443: connect: connection refused" logger="UnhandledError" Apr 30 12:50:28.337650 kubelet[2789]: W0430 12:50:28.337515 2789 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.21.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.92:6443: connect: connection refused Apr 30 12:50:28.337650 kubelet[2789]: E0430 12:50:28.337576 2789 
reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.21.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.21.92:6443: connect: connection refused" logger="UnhandledError" Apr 30 12:50:28.514674 kubelet[2789]: E0430 12:50:28.514565 2789 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.21.92:6443/api/v1/namespaces/default/events\": dial tcp 172.31.21.92:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-21-92.183b19989874a0a8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-21-92,UID:ip-172-31-21-92,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-21-92,},FirstTimestamp:2025-04-30 12:50:23.719702696 +0000 UTC m=+0.791820297,LastTimestamp:2025-04-30 12:50:23.719702696 +0000 UTC m=+0.791820297,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-21-92,}" Apr 30 12:50:30.132484 kubelet[2789]: I0430 12:50:30.132445 2789 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-21-92" Apr 30 12:50:31.006417 kubelet[2789]: E0430 12:50:31.006359 2789 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-21-92\" not found" node="ip-172-31-21-92" Apr 30 12:50:31.166904 kubelet[2789]: I0430 12:50:31.166867 2789 kubelet_node_status.go:79] "Successfully registered node" node="ip-172-31-21-92" Apr 30 12:50:31.166904 kubelet[2789]: E0430 12:50:31.166907 2789 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"ip-172-31-21-92\": node \"ip-172-31-21-92\" not found" Apr 30 12:50:31.186286 
kubelet[2789]: E0430 12:50:31.186252 2789 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-21-92\" not found" Apr 30 12:50:31.287095 kubelet[2789]: E0430 12:50:31.286942 2789 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-21-92\" not found" Apr 30 12:50:31.387564 kubelet[2789]: E0430 12:50:31.387492 2789 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-21-92\" not found" Apr 30 12:50:31.488245 kubelet[2789]: E0430 12:50:31.488195 2789 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-21-92\" not found" Apr 30 12:50:31.589212 kubelet[2789]: E0430 12:50:31.589081 2789 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-21-92\" not found" Apr 30 12:50:31.690205 kubelet[2789]: E0430 12:50:31.690156 2789 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-21-92\" not found" Apr 30 12:50:31.710846 kubelet[2789]: E0430 12:50:31.710812 2789 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-92\" not found" node="ip-172-31-21-92" Apr 30 12:50:31.841278 kubelet[2789]: I0430 12:50:31.840178 2789 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-21-92" Apr 30 12:50:31.852920 kubelet[2789]: I0430 12:50:31.852878 2789 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-21-92" Apr 30 12:50:31.857518 kubelet[2789]: I0430 12:50:31.857488 2789 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-21-92" Apr 30 12:50:32.706789 kubelet[2789]: I0430 12:50:32.706737 2789 apiserver.go:52] "Watching apiserver" Apr 30 12:50:32.740206 kubelet[2789]: I0430 12:50:32.740154 2789 
desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 12:50:32.892299 systemd[1]: Reload requested from client PID 3065 ('systemctl') (unit session-7.scope)... Apr 30 12:50:32.892316 systemd[1]: Reloading... Apr 30 12:50:33.015105 zram_generator::config[3113]: No configuration found. Apr 30 12:50:33.147148 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 12:50:33.281946 systemd[1]: Reloading finished in 389 ms. Apr 30 12:50:33.308665 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:50:33.330245 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 12:50:33.330591 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:50:33.330684 systemd[1]: kubelet.service: Consumed 1.134s CPU time, 123.5M memory peak. Apr 30 12:50:33.340201 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:50:33.579459 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:50:33.590559 (kubelet)[3170]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 12:50:33.652875 kubelet[3170]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 12:50:33.653266 kubelet[3170]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 30 12:50:33.653266 kubelet[3170]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 12:50:33.653266 kubelet[3170]: I0430 12:50:33.653096 3170 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 12:50:33.660857 kubelet[3170]: I0430 12:50:33.659902 3170 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Apr 30 12:50:33.660857 kubelet[3170]: I0430 12:50:33.659927 3170 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 12:50:33.660857 kubelet[3170]: I0430 12:50:33.660167 3170 server.go:954] "Client rotation is on, will bootstrap in background" Apr 30 12:50:33.661579 kubelet[3170]: I0430 12:50:33.661560 3170 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Apr 30 12:50:33.663708 kubelet[3170]: I0430 12:50:33.663684 3170 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 12:50:33.671118 kubelet[3170]: E0430 12:50:33.671076 3170 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 30 12:50:33.671272 kubelet[3170]: I0430 12:50:33.671194 3170 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 30 12:50:33.674751 kubelet[3170]: I0430 12:50:33.674727 3170 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 12:50:33.675026 kubelet[3170]: I0430 12:50:33.674990 3170 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 12:50:33.675222 kubelet[3170]: I0430 12:50:33.675021 3170 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-21-92","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 30 12:50:33.675449 kubelet[3170]: I0430 12:50:33.675226 3170 topology_manager.go:138] "Creating topology manager with none 
policy" Apr 30 12:50:33.675449 kubelet[3170]: I0430 12:50:33.675240 3170 container_manager_linux.go:304] "Creating device plugin manager" Apr 30 12:50:33.675449 kubelet[3170]: I0430 12:50:33.675290 3170 state_mem.go:36] "Initialized new in-memory state store" Apr 30 12:50:33.676974 kubelet[3170]: I0430 12:50:33.675504 3170 kubelet.go:446] "Attempting to sync node with API server" Apr 30 12:50:33.676974 kubelet[3170]: I0430 12:50:33.675521 3170 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 12:50:33.676974 kubelet[3170]: I0430 12:50:33.675544 3170 kubelet.go:352] "Adding apiserver pod source" Apr 30 12:50:33.676974 kubelet[3170]: I0430 12:50:33.675557 3170 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 12:50:33.677585 kubelet[3170]: I0430 12:50:33.677566 3170 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Apr 30 12:50:33.678280 kubelet[3170]: I0430 12:50:33.678261 3170 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 12:50:33.678915 kubelet[3170]: I0430 12:50:33.678900 3170 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 30 12:50:33.679025 kubelet[3170]: I0430 12:50:33.679016 3170 server.go:1287] "Started kubelet" Apr 30 12:50:33.685536 kubelet[3170]: I0430 12:50:33.685309 3170 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 12:50:33.699642 kubelet[3170]: I0430 12:50:33.698484 3170 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 12:50:33.701058 kubelet[3170]: I0430 12:50:33.700454 3170 server.go:490] "Adding debug handlers to kubelet server" Apr 30 12:50:33.701973 kubelet[3170]: I0430 12:50:33.701904 3170 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 12:50:33.702200 kubelet[3170]: I0430 12:50:33.702184 3170 server.go:243] "Starting to serve the podresources 
API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 12:50:33.702487 kubelet[3170]: I0430 12:50:33.702448 3170 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 30 12:50:33.704427 kubelet[3170]: E0430 12:50:33.704388 3170 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-21-92\" not found" Apr 30 12:50:33.704618 kubelet[3170]: I0430 12:50:33.704606 3170 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 30 12:50:33.705026 kubelet[3170]: I0430 12:50:33.705010 3170 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 12:50:33.705353 kubelet[3170]: I0430 12:50:33.705341 3170 reconciler.go:26] "Reconciler: start to sync state" Apr 30 12:50:33.710895 kubelet[3170]: I0430 12:50:33.710808 3170 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 12:50:33.716422 kubelet[3170]: I0430 12:50:33.715733 3170 factory.go:221] Registration of the systemd container factory successfully Apr 30 12:50:33.716641 kubelet[3170]: I0430 12:50:33.716612 3170 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 12:50:33.718597 kubelet[3170]: E0430 12:50:33.718568 3170 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 12:50:33.724903 kubelet[3170]: I0430 12:50:33.722706 3170 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 30 12:50:33.724903 kubelet[3170]: I0430 12:50:33.722763 3170 status_manager.go:227] "Starting to sync pod status with apiserver" Apr 30 12:50:33.724903 kubelet[3170]: I0430 12:50:33.722792 3170 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 30 12:50:33.724903 kubelet[3170]: I0430 12:50:33.722810 3170 kubelet.go:2388] "Starting kubelet main sync loop" Apr 30 12:50:33.724903 kubelet[3170]: E0430 12:50:33.722896 3170 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 12:50:33.728168 kubelet[3170]: I0430 12:50:33.728138 3170 factory.go:221] Registration of the containerd container factory successfully Apr 30 12:50:33.785677 kubelet[3170]: I0430 12:50:33.785642 3170 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 30 12:50:33.785677 kubelet[3170]: I0430 12:50:33.785660 3170 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 30 12:50:33.785677 kubelet[3170]: I0430 12:50:33.785681 3170 state_mem.go:36] "Initialized new in-memory state store" Apr 30 12:50:33.786051 kubelet[3170]: I0430 12:50:33.785915 3170 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 30 12:50:33.786051 kubelet[3170]: I0430 12:50:33.785931 3170 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 30 12:50:33.786051 kubelet[3170]: I0430 12:50:33.785957 3170 policy_none.go:49] "None policy: Start" Apr 30 12:50:33.786051 kubelet[3170]: I0430 12:50:33.785970 3170 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 30 12:50:33.786051 kubelet[3170]: I0430 12:50:33.785983 3170 state_mem.go:35] "Initializing new in-memory state store" Apr 30 12:50:33.786623 kubelet[3170]: I0430 12:50:33.786212 3170 state_mem.go:75] "Updated machine memory state" Apr 30 12:50:33.791786 kubelet[3170]: I0430 12:50:33.791220 3170 manager.go:519] "Failed to 
read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 12:50:33.791786 kubelet[3170]: I0430 12:50:33.791399 3170 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 30 12:50:33.791786 kubelet[3170]: I0430 12:50:33.791410 3170 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 12:50:33.792744 kubelet[3170]: I0430 12:50:33.792602 3170 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 12:50:33.794855 kubelet[3170]: E0430 12:50:33.793403 3170 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 30 12:50:33.823396 kubelet[3170]: I0430 12:50:33.823366 3170 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-21-92" Apr 30 12:50:33.823967 kubelet[3170]: I0430 12:50:33.823728 3170 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-21-92" Apr 30 12:50:33.824397 kubelet[3170]: I0430 12:50:33.823809 3170 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-21-92" Apr 30 12:50:33.831004 kubelet[3170]: E0430 12:50:33.829991 3170 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-21-92\" already exists" pod="kube-system/kube-scheduler-ip-172-31-21-92" Apr 30 12:50:33.831004 kubelet[3170]: E0430 12:50:33.830644 3170 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-21-92\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-21-92" Apr 30 12:50:33.831310 kubelet[3170]: E0430 12:50:33.831293 3170 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-21-92\" already exists" pod="kube-system/kube-apiserver-ip-172-31-21-92" Apr 30 12:50:33.894960 kubelet[3170]: I0430 
12:50:33.894930 3170 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-21-92" Apr 30 12:50:33.905918 kubelet[3170]: I0430 12:50:33.905681 3170 kubelet_node_status.go:125] "Node was previously registered" node="ip-172-31-21-92" Apr 30 12:50:33.905918 kubelet[3170]: I0430 12:50:33.905750 3170 kubelet_node_status.go:79] "Successfully registered node" node="ip-172-31-21-92" Apr 30 12:50:33.906296 kubelet[3170]: I0430 12:50:33.906201 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/740e420c02418a6617c89d61a6cc6cc8-ca-certs\") pod \"kube-controller-manager-ip-172-31-21-92\" (UID: \"740e420c02418a6617c89d61a6cc6cc8\") " pod="kube-system/kube-controller-manager-ip-172-31-21-92" Apr 30 12:50:33.906296 kubelet[3170]: I0430 12:50:33.906244 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/740e420c02418a6617c89d61a6cc6cc8-k8s-certs\") pod \"kube-controller-manager-ip-172-31-21-92\" (UID: \"740e420c02418a6617c89d61a6cc6cc8\") " pod="kube-system/kube-controller-manager-ip-172-31-21-92" Apr 30 12:50:33.906296 kubelet[3170]: I0430 12:50:33.906263 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/740e420c02418a6617c89d61a6cc6cc8-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-21-92\" (UID: \"740e420c02418a6617c89d61a6cc6cc8\") " pod="kube-system/kube-controller-manager-ip-172-31-21-92" Apr 30 12:50:33.906490 kubelet[3170]: I0430 12:50:33.906375 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b41e18afc3efad0cbb2625e8b964903a-ca-certs\") pod \"kube-apiserver-ip-172-31-21-92\" (UID: \"b41e18afc3efad0cbb2625e8b964903a\") 
" pod="kube-system/kube-apiserver-ip-172-31-21-92" Apr 30 12:50:33.906490 kubelet[3170]: I0430 12:50:33.906395 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b41e18afc3efad0cbb2625e8b964903a-k8s-certs\") pod \"kube-apiserver-ip-172-31-21-92\" (UID: \"b41e18afc3efad0cbb2625e8b964903a\") " pod="kube-system/kube-apiserver-ip-172-31-21-92" Apr 30 12:50:33.906490 kubelet[3170]: I0430 12:50:33.906411 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b41e18afc3efad0cbb2625e8b964903a-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-21-92\" (UID: \"b41e18afc3efad0cbb2625e8b964903a\") " pod="kube-system/kube-apiserver-ip-172-31-21-92" Apr 30 12:50:33.906490 kubelet[3170]: I0430 12:50:33.906427 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/740e420c02418a6617c89d61a6cc6cc8-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-21-92\" (UID: \"740e420c02418a6617c89d61a6cc6cc8\") " pod="kube-system/kube-controller-manager-ip-172-31-21-92" Apr 30 12:50:33.906804 kubelet[3170]: I0430 12:50:33.906537 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/740e420c02418a6617c89d61a6cc6cc8-kubeconfig\") pod \"kube-controller-manager-ip-172-31-21-92\" (UID: \"740e420c02418a6617c89d61a6cc6cc8\") " pod="kube-system/kube-controller-manager-ip-172-31-21-92" Apr 30 12:50:33.906804 kubelet[3170]: I0430 12:50:33.906556 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/43cae4d31bdf06691b63c9b54206c5ad-kubeconfig\") pod 
\"kube-scheduler-ip-172-31-21-92\" (UID: \"43cae4d31bdf06691b63c9b54206c5ad\") " pod="kube-system/kube-scheduler-ip-172-31-21-92" Apr 30 12:50:33.908989 sudo[3202]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 30 12:50:33.909793 sudo[3202]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 30 12:50:34.505924 sudo[3202]: pam_unix(sudo:session): session closed for user root Apr 30 12:50:34.676711 kubelet[3170]: I0430 12:50:34.676399 3170 apiserver.go:52] "Watching apiserver" Apr 30 12:50:34.706017 kubelet[3170]: I0430 12:50:34.705942 3170 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 12:50:34.765986 kubelet[3170]: I0430 12:50:34.765137 3170 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-21-92" Apr 30 12:50:34.777294 kubelet[3170]: E0430 12:50:34.777043 3170 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-21-92\" already exists" pod="kube-system/kube-scheduler-ip-172-31-21-92" Apr 30 12:50:34.782954 kubelet[3170]: I0430 12:50:34.782562 3170 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-21-92" podStartSLOduration=3.7823446389999997 podStartE2EDuration="3.782344639s" podCreationTimestamp="2025-04-30 12:50:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:50:34.768596016 +0000 UTC m=+1.169743608" watchObservedRunningTime="2025-04-30 12:50:34.782344639 +0000 UTC m=+1.183492235" Apr 30 12:50:34.797299 kubelet[3170]: I0430 12:50:34.797146 3170 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-21-92" podStartSLOduration=3.7971245639999998 podStartE2EDuration="3.797124564s" podCreationTimestamp="2025-04-30 12:50:31 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:50:34.796372617 +0000 UTC m=+1.197520218" watchObservedRunningTime="2025-04-30 12:50:34.797124564 +0000 UTC m=+1.198272158" Apr 30 12:50:34.798878 kubelet[3170]: I0430 12:50:34.798679 3170 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-21-92" podStartSLOduration=3.798662998 podStartE2EDuration="3.798662998s" podCreationTimestamp="2025-04-30 12:50:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:50:34.783340243 +0000 UTC m=+1.184487849" watchObservedRunningTime="2025-04-30 12:50:34.798662998 +0000 UTC m=+1.199810603" Apr 30 12:50:36.289372 sudo[2235]: pam_unix(sudo:session): session closed for user root Apr 30 12:50:36.326751 sshd[2234]: Connection closed by 147.75.109.163 port 48174 Apr 30 12:50:36.327886 sshd-session[2232]: pam_unix(sshd:session): session closed for user core Apr 30 12:50:36.331935 systemd[1]: sshd@6-172.31.21.92:22-147.75.109.163:48174.service: Deactivated successfully. Apr 30 12:50:36.338113 systemd[1]: session-7.scope: Deactivated successfully. Apr 30 12:50:36.338426 systemd[1]: session-7.scope: Consumed 5.375s CPU time, 208.3M memory peak. Apr 30 12:50:36.342817 systemd-logind[1888]: Session 7 logged out. Waiting for processes to exit. Apr 30 12:50:36.344147 systemd-logind[1888]: Removed session 7. Apr 30 12:50:39.800014 kubelet[3170]: I0430 12:50:39.799970 3170 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 30 12:50:39.800962 containerd[1910]: time="2025-04-30T12:50:39.800917782Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 30 12:50:39.801300 kubelet[3170]: I0430 12:50:39.801236 3170 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 30 12:50:40.811685 systemd[1]: Created slice kubepods-besteffort-pod095d9ffc_150c_460c_a58c_f7d28ac91240.slice - libcontainer container kubepods-besteffort-pod095d9ffc_150c_460c_a58c_f7d28ac91240.slice. Apr 30 12:50:40.830764 systemd[1]: Created slice kubepods-burstable-pod6ba950d1_6d6f_4a50_af6b_d895c5c1b512.slice - libcontainer container kubepods-burstable-pod6ba950d1_6d6f_4a50_af6b_d895c5c1b512.slice. Apr 30 12:50:40.847163 kubelet[3170]: I0430 12:50:40.847123 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6ba950d1-6d6f-4a50-af6b-d895c5c1b512-cilium-cgroup\") pod \"cilium-vjbnb\" (UID: \"6ba950d1-6d6f-4a50-af6b-d895c5c1b512\") " pod="kube-system/cilium-vjbnb" Apr 30 12:50:40.847647 kubelet[3170]: I0430 12:50:40.847181 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6ba950d1-6d6f-4a50-af6b-d895c5c1b512-etc-cni-netd\") pod \"cilium-vjbnb\" (UID: \"6ba950d1-6d6f-4a50-af6b-d895c5c1b512\") " pod="kube-system/cilium-vjbnb" Apr 30 12:50:40.847647 kubelet[3170]: I0430 12:50:40.847208 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6ba950d1-6d6f-4a50-af6b-d895c5c1b512-host-proc-sys-net\") pod \"cilium-vjbnb\" (UID: \"6ba950d1-6d6f-4a50-af6b-d895c5c1b512\") " pod="kube-system/cilium-vjbnb" Apr 30 12:50:40.847647 kubelet[3170]: I0430 12:50:40.847238 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/095d9ffc-150c-460c-a58c-f7d28ac91240-lib-modules\") pod \"kube-proxy-qcsfh\" (UID: 
\"095d9ffc-150c-460c-a58c-f7d28ac91240\") " pod="kube-system/kube-proxy-qcsfh" Apr 30 12:50:40.847647 kubelet[3170]: I0430 12:50:40.847265 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/095d9ffc-150c-460c-a58c-f7d28ac91240-kube-proxy\") pod \"kube-proxy-qcsfh\" (UID: \"095d9ffc-150c-460c-a58c-f7d28ac91240\") " pod="kube-system/kube-proxy-qcsfh" Apr 30 12:50:40.847647 kubelet[3170]: I0430 12:50:40.847298 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5bpk\" (UniqueName: \"kubernetes.io/projected/095d9ffc-150c-460c-a58c-f7d28ac91240-kube-api-access-p5bpk\") pod \"kube-proxy-qcsfh\" (UID: \"095d9ffc-150c-460c-a58c-f7d28ac91240\") " pod="kube-system/kube-proxy-qcsfh" Apr 30 12:50:40.847647 kubelet[3170]: I0430 12:50:40.847334 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6ba950d1-6d6f-4a50-af6b-d895c5c1b512-cni-path\") pod \"cilium-vjbnb\" (UID: \"6ba950d1-6d6f-4a50-af6b-d895c5c1b512\") " pod="kube-system/cilium-vjbnb" Apr 30 12:50:40.849478 kubelet[3170]: I0430 12:50:40.847363 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6ba950d1-6d6f-4a50-af6b-d895c5c1b512-hubble-tls\") pod \"cilium-vjbnb\" (UID: \"6ba950d1-6d6f-4a50-af6b-d895c5c1b512\") " pod="kube-system/cilium-vjbnb" Apr 30 12:50:40.849478 kubelet[3170]: I0430 12:50:40.847392 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ba950d1-6d6f-4a50-af6b-d895c5c1b512-lib-modules\") pod \"cilium-vjbnb\" (UID: \"6ba950d1-6d6f-4a50-af6b-d895c5c1b512\") " pod="kube-system/cilium-vjbnb" Apr 30 12:50:40.849478 kubelet[3170]: I0430 
12:50:40.847422 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8v4xx\" (UniqueName: \"kubernetes.io/projected/6ba950d1-6d6f-4a50-af6b-d895c5c1b512-kube-api-access-8v4xx\") pod \"cilium-vjbnb\" (UID: \"6ba950d1-6d6f-4a50-af6b-d895c5c1b512\") " pod="kube-system/cilium-vjbnb" Apr 30 12:50:40.849478 kubelet[3170]: I0430 12:50:40.847454 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6ba950d1-6d6f-4a50-af6b-d895c5c1b512-cilium-run\") pod \"cilium-vjbnb\" (UID: \"6ba950d1-6d6f-4a50-af6b-d895c5c1b512\") " pod="kube-system/cilium-vjbnb" Apr 30 12:50:40.849478 kubelet[3170]: I0430 12:50:40.847489 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6ba950d1-6d6f-4a50-af6b-d895c5c1b512-bpf-maps\") pod \"cilium-vjbnb\" (UID: \"6ba950d1-6d6f-4a50-af6b-d895c5c1b512\") " pod="kube-system/cilium-vjbnb" Apr 30 12:50:40.849478 kubelet[3170]: I0430 12:50:40.847512 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6ba950d1-6d6f-4a50-af6b-d895c5c1b512-hostproc\") pod \"cilium-vjbnb\" (UID: \"6ba950d1-6d6f-4a50-af6b-d895c5c1b512\") " pod="kube-system/cilium-vjbnb" Apr 30 12:50:40.849861 kubelet[3170]: I0430 12:50:40.847541 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/095d9ffc-150c-460c-a58c-f7d28ac91240-xtables-lock\") pod \"kube-proxy-qcsfh\" (UID: \"095d9ffc-150c-460c-a58c-f7d28ac91240\") " pod="kube-system/kube-proxy-qcsfh" Apr 30 12:50:40.849861 kubelet[3170]: I0430 12:50:40.847571 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" 
(UniqueName: \"kubernetes.io/configmap/6ba950d1-6d6f-4a50-af6b-d895c5c1b512-cilium-config-path\") pod \"cilium-vjbnb\" (UID: \"6ba950d1-6d6f-4a50-af6b-d895c5c1b512\") " pod="kube-system/cilium-vjbnb" Apr 30 12:50:40.849861 kubelet[3170]: I0430 12:50:40.847600 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6ba950d1-6d6f-4a50-af6b-d895c5c1b512-xtables-lock\") pod \"cilium-vjbnb\" (UID: \"6ba950d1-6d6f-4a50-af6b-d895c5c1b512\") " pod="kube-system/cilium-vjbnb" Apr 30 12:50:40.849861 kubelet[3170]: I0430 12:50:40.847631 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6ba950d1-6d6f-4a50-af6b-d895c5c1b512-clustermesh-secrets\") pod \"cilium-vjbnb\" (UID: \"6ba950d1-6d6f-4a50-af6b-d895c5c1b512\") " pod="kube-system/cilium-vjbnb" Apr 30 12:50:40.849861 kubelet[3170]: I0430 12:50:40.847659 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6ba950d1-6d6f-4a50-af6b-d895c5c1b512-host-proc-sys-kernel\") pod \"cilium-vjbnb\" (UID: \"6ba950d1-6d6f-4a50-af6b-d895c5c1b512\") " pod="kube-system/cilium-vjbnb" Apr 30 12:50:40.891785 systemd[1]: Created slice kubepods-besteffort-pod8c481210_f294_4f75_a569_92bb965a1839.slice - libcontainer container kubepods-besteffort-pod8c481210_f294_4f75_a569_92bb965a1839.slice. 
Apr 30 12:50:40.949233 kubelet[3170]: I0430 12:50:40.948781 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8c481210-f294-4f75-a569-92bb965a1839-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-jhmqr\" (UID: \"8c481210-f294-4f75-a569-92bb965a1839\") " pod="kube-system/cilium-operator-6c4d7847fc-jhmqr" Apr 30 12:50:40.949233 kubelet[3170]: I0430 12:50:40.949007 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbhw2\" (UniqueName: \"kubernetes.io/projected/8c481210-f294-4f75-a569-92bb965a1839-kube-api-access-xbhw2\") pod \"cilium-operator-6c4d7847fc-jhmqr\" (UID: \"8c481210-f294-4f75-a569-92bb965a1839\") " pod="kube-system/cilium-operator-6c4d7847fc-jhmqr" Apr 30 12:50:41.126879 containerd[1910]: time="2025-04-30T12:50:41.126742964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qcsfh,Uid:095d9ffc-150c-460c-a58c-f7d28ac91240,Namespace:kube-system,Attempt:0,}" Apr 30 12:50:41.138664 containerd[1910]: time="2025-04-30T12:50:41.138615302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vjbnb,Uid:6ba950d1-6d6f-4a50-af6b-d895c5c1b512,Namespace:kube-system,Attempt:0,}" Apr 30 12:50:41.174084 containerd[1910]: time="2025-04-30T12:50:41.173856715Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:50:41.174084 containerd[1910]: time="2025-04-30T12:50:41.174025962Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:50:41.174546 containerd[1910]: time="2025-04-30T12:50:41.174405888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:50:41.174826 containerd[1910]: time="2025-04-30T12:50:41.174667322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:50:41.190338 containerd[1910]: time="2025-04-30T12:50:41.190229364Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:50:41.190622 containerd[1910]: time="2025-04-30T12:50:41.190318579Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:50:41.190622 containerd[1910]: time="2025-04-30T12:50:41.190339758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:50:41.190622 containerd[1910]: time="2025-04-30T12:50:41.190453682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:50:41.201334 containerd[1910]: time="2025-04-30T12:50:41.201140090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-jhmqr,Uid:8c481210-f294-4f75-a569-92bb965a1839,Namespace:kube-system,Attempt:0,}" Apr 30 12:50:41.205061 systemd[1]: Started cri-containerd-f16cee13093c68c0f6a2726999d2c40450ee25c41f4e46d9f00af30487e79abe.scope - libcontainer container f16cee13093c68c0f6a2726999d2c40450ee25c41f4e46d9f00af30487e79abe. Apr 30 12:50:41.228086 systemd[1]: Started cri-containerd-db61eea685e686d2a398d957d69bd256527f02f5dd1e55130753e57f18de5d5e.scope - libcontainer container db61eea685e686d2a398d957d69bd256527f02f5dd1e55130753e57f18de5d5e. 
Apr 30 12:50:41.255862 containerd[1910]: time="2025-04-30T12:50:41.255670816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qcsfh,Uid:095d9ffc-150c-460c-a58c-f7d28ac91240,Namespace:kube-system,Attempt:0,} returns sandbox id \"f16cee13093c68c0f6a2726999d2c40450ee25c41f4e46d9f00af30487e79abe\""
Apr 30 12:50:41.263563 containerd[1910]: time="2025-04-30T12:50:41.263447506Z" level=info msg="CreateContainer within sandbox \"f16cee13093c68c0f6a2726999d2c40450ee25c41f4e46d9f00af30487e79abe\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 30 12:50:41.279615 containerd[1910]: time="2025-04-30T12:50:41.270458673Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 12:50:41.279615 containerd[1910]: time="2025-04-30T12:50:41.270533781Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 12:50:41.279615 containerd[1910]: time="2025-04-30T12:50:41.270556600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 12:50:41.279615 containerd[1910]: time="2025-04-30T12:50:41.270638707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 12:50:41.282455 containerd[1910]: time="2025-04-30T12:50:41.282075125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vjbnb,Uid:6ba950d1-6d6f-4a50-af6b-d895c5c1b512,Namespace:kube-system,Attempt:0,} returns sandbox id \"db61eea685e686d2a398d957d69bd256527f02f5dd1e55130753e57f18de5d5e\""
Apr 30 12:50:41.286874 containerd[1910]: time="2025-04-30T12:50:41.286842536Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Apr 30 12:50:41.294239 systemd[1]: Started cri-containerd-720480e2e85c68912bb2331a0412cf549614cf107c82040fd7066fb4a273b213.scope - libcontainer container 720480e2e85c68912bb2331a0412cf549614cf107c82040fd7066fb4a273b213.
Apr 30 12:50:41.324131 containerd[1910]: time="2025-04-30T12:50:41.324013344Z" level=info msg="CreateContainer within sandbox \"f16cee13093c68c0f6a2726999d2c40450ee25c41f4e46d9f00af30487e79abe\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"dcf184aad5f3449221279274ae57ebfc79756fd195304259b5b33366e3260603\""
Apr 30 12:50:41.326817 containerd[1910]: time="2025-04-30T12:50:41.326695471Z" level=info msg="StartContainer for \"dcf184aad5f3449221279274ae57ebfc79756fd195304259b5b33366e3260603\""
Apr 30 12:50:41.339399 containerd[1910]: time="2025-04-30T12:50:41.339059026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-jhmqr,Uid:8c481210-f294-4f75-a569-92bb965a1839,Namespace:kube-system,Attempt:0,} returns sandbox id \"720480e2e85c68912bb2331a0412cf549614cf107c82040fd7066fb4a273b213\""
Apr 30 12:50:41.361117 systemd[1]: Started cri-containerd-dcf184aad5f3449221279274ae57ebfc79756fd195304259b5b33366e3260603.scope - libcontainer container dcf184aad5f3449221279274ae57ebfc79756fd195304259b5b33366e3260603.
Apr 30 12:50:41.385474 update_engine[1889]: I20250430 12:50:41.384243 1889 update_attempter.cc:509] Updating boot flags...
Apr 30 12:50:41.393239 containerd[1910]: time="2025-04-30T12:50:41.393158389Z" level=info msg="StartContainer for \"dcf184aad5f3449221279274ae57ebfc79756fd195304259b5b33366e3260603\" returns successfully"
Apr 30 12:50:41.468970 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3412)
Apr 30 12:50:41.745915 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3411)
Apr 30 12:50:41.820547 kubelet[3170]: I0430 12:50:41.820419 3170 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qcsfh" podStartSLOduration=1.820395865 podStartE2EDuration="1.820395865s" podCreationTimestamp="2025-04-30 12:50:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:50:41.816652573 +0000 UTC m=+8.217800189" watchObservedRunningTime="2025-04-30 12:50:41.820395865 +0000 UTC m=+8.221543470"
Apr 30 12:50:42.038499 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3411)
Apr 30 12:50:48.217024 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2648719698.mount: Deactivated successfully.
Apr 30 12:50:50.613789 containerd[1910]: time="2025-04-30T12:50:50.613736903Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:50:50.615305 containerd[1910]: time="2025-04-30T12:50:50.615254560Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Apr 30 12:50:50.618139 containerd[1910]: time="2025-04-30T12:50:50.617773768Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:50:50.619934 containerd[1910]: time="2025-04-30T12:50:50.619895188Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.333015524s"
Apr 30 12:50:50.620014 containerd[1910]: time="2025-04-30T12:50:50.619939695Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Apr 30 12:50:50.621120 containerd[1910]: time="2025-04-30T12:50:50.621093894Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Apr 30 12:50:50.624241 containerd[1910]: time="2025-04-30T12:50:50.624187330Z" level=info msg="CreateContainer within sandbox \"db61eea685e686d2a398d957d69bd256527f02f5dd1e55130753e57f18de5d5e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 30 12:50:50.689330 containerd[1910]: time="2025-04-30T12:50:50.689267078Z" level=info msg="CreateContainer within sandbox \"db61eea685e686d2a398d957d69bd256527f02f5dd1e55130753e57f18de5d5e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f6969884a7c93edfa358e31c91244bda14bfb7a86f92613d6ac8141ec67e61dc\""
Apr 30 12:50:50.690061 containerd[1910]: time="2025-04-30T12:50:50.690031682Z" level=info msg="StartContainer for \"f6969884a7c93edfa358e31c91244bda14bfb7a86f92613d6ac8141ec67e61dc\""
Apr 30 12:50:50.865806 systemd[1]: run-containerd-runc-k8s.io-f6969884a7c93edfa358e31c91244bda14bfb7a86f92613d6ac8141ec67e61dc-runc.6ogVby.mount: Deactivated successfully.
Apr 30 12:50:50.872026 systemd[1]: Started cri-containerd-f6969884a7c93edfa358e31c91244bda14bfb7a86f92613d6ac8141ec67e61dc.scope - libcontainer container f6969884a7c93edfa358e31c91244bda14bfb7a86f92613d6ac8141ec67e61dc.
Apr 30 12:50:50.901611 containerd[1910]: time="2025-04-30T12:50:50.901256979Z" level=info msg="StartContainer for \"f6969884a7c93edfa358e31c91244bda14bfb7a86f92613d6ac8141ec67e61dc\" returns successfully"
Apr 30 12:50:50.909316 systemd[1]: cri-containerd-f6969884a7c93edfa358e31c91244bda14bfb7a86f92613d6ac8141ec67e61dc.scope: Deactivated successfully.
Apr 30 12:50:51.064146 containerd[1910]: time="2025-04-30T12:50:51.053167844Z" level=info msg="shim disconnected" id=f6969884a7c93edfa358e31c91244bda14bfb7a86f92613d6ac8141ec67e61dc namespace=k8s.io
Apr 30 12:50:51.064146 containerd[1910]: time="2025-04-30T12:50:51.064138760Z" level=warning msg="cleaning up after shim disconnected" id=f6969884a7c93edfa358e31c91244bda14bfb7a86f92613d6ac8141ec67e61dc namespace=k8s.io
Apr 30 12:50:51.064146 containerd[1910]: time="2025-04-30T12:50:51.064155256Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 12:50:51.680425 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f6969884a7c93edfa358e31c91244bda14bfb7a86f92613d6ac8141ec67e61dc-rootfs.mount: Deactivated successfully.
Apr 30 12:50:51.851302 containerd[1910]: time="2025-04-30T12:50:51.851176727Z" level=info msg="CreateContainer within sandbox \"db61eea685e686d2a398d957d69bd256527f02f5dd1e55130753e57f18de5d5e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 30 12:50:51.902848 containerd[1910]: time="2025-04-30T12:50:51.901772035Z" level=info msg="CreateContainer within sandbox \"db61eea685e686d2a398d957d69bd256527f02f5dd1e55130753e57f18de5d5e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3fa76a743671736e479cdcca2ca9546ce38f6da0094a9b7f03e97eec558f5c8c\""
Apr 30 12:50:51.915619 containerd[1910]: time="2025-04-30T12:50:51.915577312Z" level=info msg="StartContainer for \"3fa76a743671736e479cdcca2ca9546ce38f6da0094a9b7f03e97eec558f5c8c\""
Apr 30 12:50:51.970870 systemd[1]: Started cri-containerd-3fa76a743671736e479cdcca2ca9546ce38f6da0094a9b7f03e97eec558f5c8c.scope - libcontainer container 3fa76a743671736e479cdcca2ca9546ce38f6da0094a9b7f03e97eec558f5c8c.
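Editorial aside: each of Cilium's init containers above exits with the same three-line containerd pattern ("shim disconnected" / "cleaning up after shim disconnected" / "cleaning up dead shim"). A minimal sketch for pulling the container id out of such entries; the helper name is hypothetical and not part of the log:

```python
import re

# containerd records an init-container exit as, e.g.:
#   ... level=info msg="shim disconnected" id=<64-hex-id> namespace=k8s.io
SHIM_RE = re.compile(r'msg="shim disconnected" id=([0-9a-f]{64}) namespace=(\S+)')

def parse_shim_disconnect(line: str):
    """Return (container_id, namespace), or None if the line is not a match."""
    m = SHIM_RE.search(line)
    return m.groups() if m else None

sample = ('time="2025-04-30T12:50:51.053167844Z" level=info '
          'msg="shim disconnected" '
          'id=f6969884a7c93edfa358e31c91244bda14bfb7a86f92613d6ac8141ec67e61dc '
          'namespace=k8s.io')
cid, ns = parse_shim_disconnect(sample)
print(cid[:12], ns)  # short container id plus its namespace
```

Grouping these ids lets you correlate each "Deactivated successfully" scope with the init step (mount-cgroup, apply-sysctl-overwrites, ...) that produced it.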
Apr 30 12:50:52.032216 containerd[1910]: time="2025-04-30T12:50:52.031718162Z" level=info msg="StartContainer for \"3fa76a743671736e479cdcca2ca9546ce38f6da0094a9b7f03e97eec558f5c8c\" returns successfully"
Apr 30 12:50:52.050340 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 30 12:50:52.050967 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 30 12:50:52.051196 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Apr 30 12:50:52.059032 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 12:50:52.059272 systemd[1]: cri-containerd-3fa76a743671736e479cdcca2ca9546ce38f6da0094a9b7f03e97eec558f5c8c.scope: Deactivated successfully.
Apr 30 12:50:52.113074 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 12:50:52.125286 containerd[1910]: time="2025-04-30T12:50:52.125180470Z" level=info msg="shim disconnected" id=3fa76a743671736e479cdcca2ca9546ce38f6da0094a9b7f03e97eec558f5c8c namespace=k8s.io
Apr 30 12:50:52.125286 containerd[1910]: time="2025-04-30T12:50:52.125238357Z" level=warning msg="cleaning up after shim disconnected" id=3fa76a743671736e479cdcca2ca9546ce38f6da0094a9b7f03e97eec558f5c8c namespace=k8s.io
Apr 30 12:50:52.125286 containerd[1910]: time="2025-04-30T12:50:52.125251631Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 12:50:52.678051 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3fa76a743671736e479cdcca2ca9546ce38f6da0094a9b7f03e97eec558f5c8c-rootfs.mount: Deactivated successfully.
Apr 30 12:50:52.853082 containerd[1910]: time="2025-04-30T12:50:52.853041145Z" level=info msg="CreateContainer within sandbox \"db61eea685e686d2a398d957d69bd256527f02f5dd1e55130753e57f18de5d5e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 30 12:50:52.901210 containerd[1910]: time="2025-04-30T12:50:52.901157530Z" level=info msg="CreateContainer within sandbox \"db61eea685e686d2a398d957d69bd256527f02f5dd1e55130753e57f18de5d5e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2c2603999ea01d254f85b34bd7ef7bd791c03481189ccb9c3ab1c9296575dba6\""
Apr 30 12:50:52.902638 containerd[1910]: time="2025-04-30T12:50:52.901803902Z" level=info msg="StartContainer for \"2c2603999ea01d254f85b34bd7ef7bd791c03481189ccb9c3ab1c9296575dba6\""
Apr 30 12:50:52.943036 systemd[1]: Started cri-containerd-2c2603999ea01d254f85b34bd7ef7bd791c03481189ccb9c3ab1c9296575dba6.scope - libcontainer container 2c2603999ea01d254f85b34bd7ef7bd791c03481189ccb9c3ab1c9296575dba6.
Apr 30 12:50:52.976162 containerd[1910]: time="2025-04-30T12:50:52.976118548Z" level=info msg="StartContainer for \"2c2603999ea01d254f85b34bd7ef7bd791c03481189ccb9c3ab1c9296575dba6\" returns successfully"
Apr 30 12:50:52.976549 systemd[1]: cri-containerd-2c2603999ea01d254f85b34bd7ef7bd791c03481189ccb9c3ab1c9296575dba6.scope: Deactivated successfully.
Apr 30 12:50:53.010937 containerd[1910]: time="2025-04-30T12:50:53.010862060Z" level=info msg="shim disconnected" id=2c2603999ea01d254f85b34bd7ef7bd791c03481189ccb9c3ab1c9296575dba6 namespace=k8s.io
Apr 30 12:50:53.010937 containerd[1910]: time="2025-04-30T12:50:53.010909983Z" level=warning msg="cleaning up after shim disconnected" id=2c2603999ea01d254f85b34bd7ef7bd791c03481189ccb9c3ab1c9296575dba6 namespace=k8s.io
Apr 30 12:50:53.010937 containerd[1910]: time="2025-04-30T12:50:53.010919084Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 12:50:53.677192 systemd[1]: run-containerd-runc-k8s.io-2c2603999ea01d254f85b34bd7ef7bd791c03481189ccb9c3ab1c9296575dba6-runc.XuKVm9.mount: Deactivated successfully.
Apr 30 12:50:53.677308 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2c2603999ea01d254f85b34bd7ef7bd791c03481189ccb9c3ab1c9296575dba6-rootfs.mount: Deactivated successfully.
Apr 30 12:50:53.862436 containerd[1910]: time="2025-04-30T12:50:53.862257532Z" level=info msg="CreateContainer within sandbox \"db61eea685e686d2a398d957d69bd256527f02f5dd1e55130753e57f18de5d5e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 30 12:50:53.896734 containerd[1910]: time="2025-04-30T12:50:53.896629072Z" level=info msg="CreateContainer within sandbox \"db61eea685e686d2a398d957d69bd256527f02f5dd1e55130753e57f18de5d5e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4f739f1e346864e71c595d5586e40b80e0205b93595e42abb6b4f9bff0506ba1\""
Apr 30 12:50:53.897907 containerd[1910]: time="2025-04-30T12:50:53.897245275Z" level=info msg="StartContainer for \"4f739f1e346864e71c595d5586e40b80e0205b93595e42abb6b4f9bff0506ba1\""
Apr 30 12:50:53.932045 systemd[1]: Started cri-containerd-4f739f1e346864e71c595d5586e40b80e0205b93595e42abb6b4f9bff0506ba1.scope - libcontainer container 4f739f1e346864e71c595d5586e40b80e0205b93595e42abb6b4f9bff0506ba1.
Apr 30 12:50:53.962582 systemd[1]: cri-containerd-4f739f1e346864e71c595d5586e40b80e0205b93595e42abb6b4f9bff0506ba1.scope: Deactivated successfully.
Apr 30 12:50:53.966877 containerd[1910]: time="2025-04-30T12:50:53.966819642Z" level=info msg="StartContainer for \"4f739f1e346864e71c595d5586e40b80e0205b93595e42abb6b4f9bff0506ba1\" returns successfully"
Apr 30 12:50:53.995376 containerd[1910]: time="2025-04-30T12:50:53.995324251Z" level=info msg="shim disconnected" id=4f739f1e346864e71c595d5586e40b80e0205b93595e42abb6b4f9bff0506ba1 namespace=k8s.io
Apr 30 12:50:53.995561 containerd[1910]: time="2025-04-30T12:50:53.995413555Z" level=warning msg="cleaning up after shim disconnected" id=4f739f1e346864e71c595d5586e40b80e0205b93595e42abb6b4f9bff0506ba1 namespace=k8s.io
Apr 30 12:50:53.995561 containerd[1910]: time="2025-04-30T12:50:53.995431310Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 12:50:54.677125 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f739f1e346864e71c595d5586e40b80e0205b93595e42abb6b4f9bff0506ba1-rootfs.mount: Deactivated successfully.
Apr 30 12:50:54.859999 containerd[1910]: time="2025-04-30T12:50:54.859963130Z" level=info msg="CreateContainer within sandbox \"db61eea685e686d2a398d957d69bd256527f02f5dd1e55130753e57f18de5d5e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 30 12:50:54.886805 containerd[1910]: time="2025-04-30T12:50:54.886751722Z" level=info msg="CreateContainer within sandbox \"db61eea685e686d2a398d957d69bd256527f02f5dd1e55130753e57f18de5d5e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"dbeec105bf2c91dda6ce440a9ccf5d81d64aea1672a106b11a758f30043ecc3b\""
Apr 30 12:50:54.887362 containerd[1910]: time="2025-04-30T12:50:54.887340534Z" level=info msg="StartContainer for \"dbeec105bf2c91dda6ce440a9ccf5d81d64aea1672a106b11a758f30043ecc3b\""
Apr 30 12:50:54.917039 systemd[1]: Started cri-containerd-dbeec105bf2c91dda6ce440a9ccf5d81d64aea1672a106b11a758f30043ecc3b.scope - libcontainer container dbeec105bf2c91dda6ce440a9ccf5d81d64aea1672a106b11a758f30043ecc3b.
Apr 30 12:50:54.953401 containerd[1910]: time="2025-04-30T12:50:54.953202937Z" level=info msg="StartContainer for \"dbeec105bf2c91dda6ce440a9ccf5d81d64aea1672a106b11a758f30043ecc3b\" returns successfully"
Apr 30 12:50:55.208071 kubelet[3170]: I0430 12:50:55.207132 3170 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
Apr 30 12:50:55.265631 kubelet[3170]: I0430 12:50:55.263256 3170 status_manager.go:890] "Failed to get status for pod" podUID="0e8a237f-d382-428d-9dff-02ede21a69af" pod="kube-system/coredns-668d6bf9bc-f7mn8" err="pods \"coredns-668d6bf9bc-f7mn8\" is forbidden: User \"system:node:ip-172-31-21-92\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-21-92' and this object"
Apr 30 12:50:55.266123 systemd[1]: Created slice kubepods-burstable-pod0e8a237f_d382_428d_9dff_02ede21a69af.slice - libcontainer container kubepods-burstable-pod0e8a237f_d382_428d_9dff_02ede21a69af.slice.
Apr 30 12:50:55.271374 kubelet[3170]: I0430 12:50:55.271335 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e8a237f-d382-428d-9dff-02ede21a69af-config-volume\") pod \"coredns-668d6bf9bc-f7mn8\" (UID: \"0e8a237f-d382-428d-9dff-02ede21a69af\") " pod="kube-system/coredns-668d6bf9bc-f7mn8"
Apr 30 12:50:55.271513 kubelet[3170]: I0430 12:50:55.271385 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b9bca37c-59c8-4d73-b43e-cf3e0affc1c3-config-volume\") pod \"coredns-668d6bf9bc-7vg9m\" (UID: \"b9bca37c-59c8-4d73-b43e-cf3e0affc1c3\") " pod="kube-system/coredns-668d6bf9bc-7vg9m"
Apr 30 12:50:55.271513 kubelet[3170]: I0430 12:50:55.271415 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkb9k\" (UniqueName: \"kubernetes.io/projected/b9bca37c-59c8-4d73-b43e-cf3e0affc1c3-kube-api-access-vkb9k\") pod \"coredns-668d6bf9bc-7vg9m\" (UID: \"b9bca37c-59c8-4d73-b43e-cf3e0affc1c3\") " pod="kube-system/coredns-668d6bf9bc-7vg9m"
Apr 30 12:50:55.271513 kubelet[3170]: I0430 12:50:55.271450 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6645\" (UniqueName: \"kubernetes.io/projected/0e8a237f-d382-428d-9dff-02ede21a69af-kube-api-access-c6645\") pod \"coredns-668d6bf9bc-f7mn8\" (UID: \"0e8a237f-d382-428d-9dff-02ede21a69af\") " pod="kube-system/coredns-668d6bf9bc-f7mn8"
Apr 30 12:50:55.277677 systemd[1]: Created slice kubepods-burstable-podb9bca37c_59c8_4d73_b43e_cf3e0affc1c3.slice - libcontainer container kubepods-burstable-podb9bca37c_59c8_4d73_b43e_cf3e0affc1c3.slice.
Apr 30 12:50:55.577319 containerd[1910]: time="2025-04-30T12:50:55.576172354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-f7mn8,Uid:0e8a237f-d382-428d-9dff-02ede21a69af,Namespace:kube-system,Attempt:0,}"
Apr 30 12:50:55.583468 containerd[1910]: time="2025-04-30T12:50:55.582927569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7vg9m,Uid:b9bca37c-59c8-4d73-b43e-cf3e0affc1c3,Namespace:kube-system,Attempt:0,}"
Apr 30 12:50:55.895752 kubelet[3170]: I0430 12:50:55.895592 3170 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vjbnb" podStartSLOduration=6.560387764 podStartE2EDuration="15.895571928s" podCreationTimestamp="2025-04-30 12:50:40 +0000 UTC" firstStartedPulling="2025-04-30 12:50:41.285744992 +0000 UTC m=+7.686892573" lastFinishedPulling="2025-04-30 12:50:50.62092914 +0000 UTC m=+17.022076737" observedRunningTime="2025-04-30 12:50:55.895024348 +0000 UTC m=+22.296171953" watchObservedRunningTime="2025-04-30 12:50:55.895571928 +0000 UTC m=+22.296719538"
Apr 30 12:50:56.117640 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1196782288.mount: Deactivated successfully.
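Editorial aside: the kubelet's pod-startup-latency entries above carry `firstStartedPulling` / `lastFinishedPulling` timestamps with nanosecond fractions. As a hedged sketch (not part of the log), the image-pull interval can be recomputed from those two fields; note that Python's `%f` directive accepts at most six fractional digits, so the nanoseconds must be trimmed:

```python
from datetime import datetime

def parse_kubelet_ts(ts: str) -> datetime:
    """Parse kubelet timestamps like '2025-04-30 12:50:41.285744992 +0000 UTC'.

    strptime's %f handles at most 6 fractional digits, so the nanosecond
    fraction is truncated to microseconds; the trailing 'UTC' token is dropped
    and the numeric '+0000' offset is parsed with %z.
    """
    head, rest = ts.split(".", 1)
    frac, tz = rest.split(" ", 1)
    offset = tz.split(" ")[0]          # "+0000"
    return datetime.strptime(f"{head}.{frac[:6]} {offset}",
                             "%Y-%m-%d %H:%M:%S.%f %z")

started = parse_kubelet_ts("2025-04-30 12:50:41.285744992 +0000 UTC")
finished = parse_kubelet_ts("2025-04-30 12:50:50.62092914 +0000 UTC")
pull_seconds = (finished - started).total_seconds()
print(f"{pull_seconds:.3f}s")  # close to the ~9.3 s pull duration containerd reported
```

The small difference from containerd's own "9.333015524s" figure is expected: the kubelet and containerd measure the pull window at slightly different points.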
Apr 30 12:50:56.715952 containerd[1910]: time="2025-04-30T12:50:56.715904231Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:50:56.717596 containerd[1910]: time="2025-04-30T12:50:56.717533876Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Apr 30 12:50:56.719647 containerd[1910]: time="2025-04-30T12:50:56.719594852Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:50:56.721307 containerd[1910]: time="2025-04-30T12:50:56.721268286Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 6.10014099s"
Apr 30 12:50:56.721414 containerd[1910]: time="2025-04-30T12:50:56.721323653Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Apr 30 12:50:56.724154 containerd[1910]: time="2025-04-30T12:50:56.723912163Z" level=info msg="CreateContainer within sandbox \"720480e2e85c68912bb2331a0412cf549614cf107c82040fd7066fb4a273b213\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Apr 30 12:50:56.751843 containerd[1910]: time="2025-04-30T12:50:56.751793763Z" level=info msg="CreateContainer within sandbox \"720480e2e85c68912bb2331a0412cf549614cf107c82040fd7066fb4a273b213\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"28e381a168a21f9f83c6728811656f485bdb8df91c717967b7141de47aaefb6b\""
Apr 30 12:50:56.752531 containerd[1910]: time="2025-04-30T12:50:56.752506147Z" level=info msg="StartContainer for \"28e381a168a21f9f83c6728811656f485bdb8df91c717967b7141de47aaefb6b\""
Apr 30 12:50:56.797043 systemd[1]: Started cri-containerd-28e381a168a21f9f83c6728811656f485bdb8df91c717967b7141de47aaefb6b.scope - libcontainer container 28e381a168a21f9f83c6728811656f485bdb8df91c717967b7141de47aaefb6b.
Apr 30 12:50:56.838375 containerd[1910]: time="2025-04-30T12:50:56.838259021Z" level=info msg="StartContainer for \"28e381a168a21f9f83c6728811656f485bdb8df91c717967b7141de47aaefb6b\" returns successfully"
Apr 30 12:50:57.742683 systemd[1]: run-containerd-runc-k8s.io-28e381a168a21f9f83c6728811656f485bdb8df91c717967b7141de47aaefb6b-runc.ONgSol.mount: Deactivated successfully.
Apr 30 12:51:00.194139 systemd-networkd[1820]: cilium_host: Link UP
Apr 30 12:51:00.194139 systemd-networkd[1820]: cilium_net: Link UP
Apr 30 12:51:00.194139 systemd-networkd[1820]: cilium_net: Gained carrier
Apr 30 12:51:00.194139 systemd-networkd[1820]: cilium_host: Gained carrier
Apr 30 12:51:00.200573 (udev-worker)[4264]: Network interface NamePolicy= disabled on kernel command line.
Apr 30 12:51:00.202607 (udev-worker)[4263]: Network interface NamePolicy= disabled on kernel command line.
Apr 30 12:51:00.353604 (udev-worker)[4262]: Network interface NamePolicy= disabled on kernel command line.
Apr 30 12:51:00.358763 systemd-networkd[1820]: cilium_vxlan: Link UP
Apr 30 12:51:00.358771 systemd-networkd[1820]: cilium_vxlan: Gained carrier
Apr 30 12:51:00.491261 systemd-networkd[1820]: cilium_net: Gained IPv6LL
Apr 30 12:51:01.139147 systemd-networkd[1820]: cilium_host: Gained IPv6LL
Apr 30 12:51:01.787941 systemd-networkd[1820]: cilium_vxlan: Gained IPv6LL
Apr 30 12:51:02.097997 kernel: NET: Registered PF_ALG protocol family
Apr 30 12:51:02.833732 systemd-networkd[1820]: lxc_health: Link UP
Apr 30 12:51:02.841201 systemd-networkd[1820]: lxc_health: Gained carrier
Apr 30 12:51:03.175217 kubelet[3170]: I0430 12:51:03.174885 3170 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-jhmqr" podStartSLOduration=7.793081563 podStartE2EDuration="23.17485826s" podCreationTimestamp="2025-04-30 12:50:40 +0000 UTC" firstStartedPulling="2025-04-30 12:50:41.340501699 +0000 UTC m=+7.741649292" lastFinishedPulling="2025-04-30 12:50:56.722278382 +0000 UTC m=+23.123425989" observedRunningTime="2025-04-30 12:50:56.902492798 +0000 UTC m=+23.303640406" watchObservedRunningTime="2025-04-30 12:51:03.17485826 +0000 UTC m=+29.576005865"
Apr 30 12:51:03.226560 systemd-networkd[1820]: lxc5f2849cb52d8: Link UP
Apr 30 12:51:03.233904 kernel: eth0: renamed from tmpb55d0
Apr 30 12:51:03.240681 (udev-worker)[4272]: Network interface NamePolicy= disabled on kernel command line.
Apr 30 12:51:03.247261 systemd-networkd[1820]: lxc5f2849cb52d8: Gained carrier
Apr 30 12:51:03.247484 systemd-networkd[1820]: lxcc84c0ba17021: Link UP
Apr 30 12:51:03.258909 kernel: eth0: renamed from tmp7b47d
Apr 30 12:51:03.268975 systemd-networkd[1820]: lxcc84c0ba17021: Gained carrier
Apr 30 12:51:04.211040 systemd-networkd[1820]: lxc_health: Gained IPv6LL
Apr 30 12:51:04.531053 systemd-networkd[1820]: lxcc84c0ba17021: Gained IPv6LL
Apr 30 12:51:05.108929 systemd-networkd[1820]: lxc5f2849cb52d8: Gained IPv6LL
Apr 30 12:51:06.229178 systemd[1]: Started sshd@7-172.31.21.92:22-147.75.109.163:42276.service - OpenSSH per-connection server daemon (147.75.109.163:42276).
Apr 30 12:51:06.540932 sshd[4629]: Accepted publickey for core from 147.75.109.163 port 42276 ssh2: RSA SHA256:hWFbLZfpyLdN3yuB7pBWwnRO+bXJlsyzaawWuSZBTyk
Apr 30 12:51:06.543204 sshd-session[4629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:51:06.563933 systemd-logind[1888]: New session 8 of user core.
Apr 30 12:51:06.576955 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 30 12:51:07.574163 sshd[4633]: Connection closed by 147.75.109.163 port 42276
Apr 30 12:51:07.579142 sshd-session[4629]: pam_unix(sshd:session): session closed for user core
Apr 30 12:51:07.591428 systemd[1]: sshd@7-172.31.21.92:22-147.75.109.163:42276.service: Deactivated successfully.
Apr 30 12:51:07.602713 systemd[1]: session-8.scope: Deactivated successfully.
Apr 30 12:51:07.605251 systemd-logind[1888]: Session 8 logged out. Waiting for processes to exit.
Apr 30 12:51:07.608663 systemd-logind[1888]: Removed session 8.
Apr 30 12:51:07.955121 containerd[1910]: time="2025-04-30T12:51:07.955008677Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 12:51:07.957231 containerd[1910]: time="2025-04-30T12:51:07.956582739Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 12:51:07.957231 containerd[1910]: time="2025-04-30T12:51:07.956617498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 12:51:07.957864 containerd[1910]: time="2025-04-30T12:51:07.957654192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 12:51:07.991896 containerd[1910]: time="2025-04-30T12:51:07.988302580Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 12:51:07.991896 containerd[1910]: time="2025-04-30T12:51:07.991461797Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 12:51:07.991896 containerd[1910]: time="2025-04-30T12:51:07.991504441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 12:51:07.991896 containerd[1910]: time="2025-04-30T12:51:07.991654182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 12:51:08.018065 systemd[1]: Started cri-containerd-b55d00d832fa7a01b519bda0229bd406b3c1f1ea69c57e680fe68beb0b866fb1.scope - libcontainer container b55d00d832fa7a01b519bda0229bd406b3c1f1ea69c57e680fe68beb0b866fb1.
Apr 30 12:51:08.066307 systemd[1]: Started cri-containerd-7b47dcfd2b46017b9f1657f8b4b2d36aed70449b29284c73c4e6c22991745cbf.scope - libcontainer container 7b47dcfd2b46017b9f1657f8b4b2d36aed70449b29284c73c4e6c22991745cbf.
Apr 30 12:51:08.150094 containerd[1910]: time="2025-04-30T12:51:08.149953633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7vg9m,Uid:b9bca37c-59c8-4d73-b43e-cf3e0affc1c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"b55d00d832fa7a01b519bda0229bd406b3c1f1ea69c57e680fe68beb0b866fb1\""
Apr 30 12:51:08.154294 containerd[1910]: time="2025-04-30T12:51:08.154144623Z" level=info msg="CreateContainer within sandbox \"b55d00d832fa7a01b519bda0229bd406b3c1f1ea69c57e680fe68beb0b866fb1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 30 12:51:08.191380 containerd[1910]: time="2025-04-30T12:51:08.190764761Z" level=info msg="CreateContainer within sandbox \"b55d00d832fa7a01b519bda0229bd406b3c1f1ea69c57e680fe68beb0b866fb1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"593a623bf905f1029af80508c266851fcc86c767f716c51a9ede3770d3ac2861\""
Apr 30 12:51:08.194057 containerd[1910]: time="2025-04-30T12:51:08.193404839Z" level=info msg="StartContainer for \"593a623bf905f1029af80508c266851fcc86c767f716c51a9ede3770d3ac2861\""
Apr 30 12:51:08.233300 containerd[1910]: time="2025-04-30T12:51:08.233152959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-f7mn8,Uid:0e8a237f-d382-428d-9dff-02ede21a69af,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b47dcfd2b46017b9f1657f8b4b2d36aed70449b29284c73c4e6c22991745cbf\""
Apr 30 12:51:08.248209 containerd[1910]: time="2025-04-30T12:51:08.247930519Z" level=info msg="CreateContainer within sandbox \"7b47dcfd2b46017b9f1657f8b4b2d36aed70449b29284c73c4e6c22991745cbf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 30 12:51:08.254324 systemd[1]: Started cri-containerd-593a623bf905f1029af80508c266851fcc86c767f716c51a9ede3770d3ac2861.scope - libcontainer container 593a623bf905f1029af80508c266851fcc86c767f716c51a9ede3770d3ac2861.
Apr 30 12:51:08.279005 containerd[1910]: time="2025-04-30T12:51:08.278967417Z" level=info msg="CreateContainer within sandbox \"7b47dcfd2b46017b9f1657f8b4b2d36aed70449b29284c73c4e6c22991745cbf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f4715723360703b1e7622543561db9cde53d30450cce63187ac5a90e685e25b2\""
Apr 30 12:51:08.283634 containerd[1910]: time="2025-04-30T12:51:08.283122732Z" level=info msg="StartContainer for \"f4715723360703b1e7622543561db9cde53d30450cce63187ac5a90e685e25b2\""
Apr 30 12:51:08.297100 containerd[1910]: time="2025-04-30T12:51:08.296961080Z" level=info msg="StartContainer for \"593a623bf905f1029af80508c266851fcc86c767f716c51a9ede3770d3ac2861\" returns successfully"
Apr 30 12:51:08.325040 systemd[1]: Started cri-containerd-f4715723360703b1e7622543561db9cde53d30450cce63187ac5a90e685e25b2.scope - libcontainer container f4715723360703b1e7622543561db9cde53d30450cce63187ac5a90e685e25b2.
Apr 30 12:51:08.351931 containerd[1910]: time="2025-04-30T12:51:08.351888865Z" level=info msg="StartContainer for \"f4715723360703b1e7622543561db9cde53d30450cce63187ac5a90e685e25b2\" returns successfully"
Apr 30 12:51:08.971561 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3407096382.mount: Deactivated successfully.
Apr 30 12:51:08.978668 kubelet[3170]: I0430 12:51:08.978598 3170 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-f7mn8" podStartSLOduration=28.978574129 podStartE2EDuration="28.978574129s" podCreationTimestamp="2025-04-30 12:50:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:51:08.954942187 +0000 UTC m=+35.356089792" watchObservedRunningTime="2025-04-30 12:51:08.978574129 +0000 UTC m=+35.379721734"
Apr 30 12:51:10.911706 ntpd[1883]: Listen normally on 7 cilium_host 192.168.0.84:123
Apr 30 12:51:10.911787 ntpd[1883]: Listen normally on 8 cilium_net [fe80::b8d3:8ff:feb7:ad52%4]:123
Apr 30 12:51:10.911857 ntpd[1883]: Listen normally on 9 cilium_host [fe80::48e5:35ff:fe29:bdad%5]:123
Apr 30 12:51:10.911890 ntpd[1883]: Listen normally on 10 cilium_vxlan [fe80::88d:10ff:fe8e:d716%6]:123
Apr 30 12:51:10.911933 ntpd[1883]: Listen normally on 11 lxc_health [fe80::b050:2fff:feb6:fe77%8]:123
Apr 30 12:51:10.911963 ntpd[1883]: Listen normally on 12 lxc5f2849cb52d8 [fe80::90f7:26ff:fe21:7228%10]:123
Apr 30 12:51:10.911991 ntpd[1883]: Listen normally on 13 lxcc84c0ba17021 [fe80::c0ff:45ff:fe86:30ec%12]:123
Apr 30 12:51:12.633212 systemd[1]: Started sshd@8-172.31.21.92:22-147.75.109.163:53480.service - OpenSSH per-connection server daemon (147.75.109.163:53480).
Apr 30 12:51:12.918461 sshd[4813]: Accepted publickey for core from 147.75.109.163 port 53480 ssh2: RSA SHA256:hWFbLZfpyLdN3yuB7pBWwnRO+bXJlsyzaawWuSZBTyk
Apr 30 12:51:12.920423 sshd-session[4813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:51:12.925993 systemd-logind[1888]: New session 9 of user core.
Apr 30 12:51:12.936049 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 30 12:51:13.339551 sshd[4819]: Connection closed by 147.75.109.163 port 53480
Apr 30 12:51:13.340456 sshd-session[4813]: pam_unix(sshd:session): session closed for user core
Apr 30 12:51:13.343524 systemd[1]: sshd@8-172.31.21.92:22-147.75.109.163:53480.service: Deactivated successfully.
Apr 30 12:51:13.345911 systemd[1]: session-9.scope: Deactivated successfully.
Apr 30 12:51:13.347687 systemd-logind[1888]: Session 9 logged out. Waiting for processes to exit.
Apr 30 12:51:13.349171 systemd-logind[1888]: Removed session 9.
Apr 30 12:51:18.398227 systemd[1]: Started sshd@9-172.31.21.92:22-147.75.109.163:44808.service - OpenSSH per-connection server daemon (147.75.109.163:44808).
Apr 30 12:51:18.650780 sshd[4833]: Accepted publickey for core from 147.75.109.163 port 44808 ssh2: RSA SHA256:hWFbLZfpyLdN3yuB7pBWwnRO+bXJlsyzaawWuSZBTyk
Apr 30 12:51:18.652077 sshd-session[4833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:51:18.656793 systemd-logind[1888]: New session 10 of user core.
Apr 30 12:51:18.661037 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 30 12:51:18.917362 sshd[4835]: Connection closed by 147.75.109.163 port 44808
Apr 30 12:51:18.918104 sshd-session[4833]: pam_unix(sshd:session): session closed for user core
Apr 30 12:51:18.921388 systemd[1]: sshd@9-172.31.21.92:22-147.75.109.163:44808.service: Deactivated successfully.
Apr 30 12:51:18.923238 systemd[1]: session-10.scope: Deactivated successfully.
Apr 30 12:51:18.924864 systemd-logind[1888]: Session 10 logged out. Waiting for processes to exit.
Apr 30 12:51:18.926215 systemd-logind[1888]: Removed session 10.
Apr 30 12:51:23.972359 systemd[1]: Started sshd@10-172.31.21.92:22-147.75.109.163:44822.service - OpenSSH per-connection server daemon (147.75.109.163:44822).
Apr 30 12:51:24.220038 sshd[4848]: Accepted publickey for core from 147.75.109.163 port 44822 ssh2: RSA SHA256:hWFbLZfpyLdN3yuB7pBWwnRO+bXJlsyzaawWuSZBTyk
Apr 30 12:51:24.221376 sshd-session[4848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:51:24.225677 systemd-logind[1888]: New session 11 of user core.
Apr 30 12:51:24.236024 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 30 12:51:24.486663 sshd[4850]: Connection closed by 147.75.109.163 port 44822
Apr 30 12:51:24.486800 sshd-session[4848]: pam_unix(sshd:session): session closed for user core
Apr 30 12:51:24.490146 systemd[1]: sshd@10-172.31.21.92:22-147.75.109.163:44822.service: Deactivated successfully.
Apr 30 12:51:24.492033 systemd[1]: session-11.scope: Deactivated successfully.
Apr 30 12:51:24.494126 systemd-logind[1888]: Session 11 logged out. Waiting for processes to exit.
Apr 30 12:51:24.495168 systemd-logind[1888]: Removed session 11.
Apr 30 12:51:24.536197 systemd[1]: Started sshd@11-172.31.21.92:22-147.75.109.163:44836.service - OpenSSH per-connection server daemon (147.75.109.163:44836).
Apr 30 12:51:24.791575 sshd[4863]: Accepted publickey for core from 147.75.109.163 port 44836 ssh2: RSA SHA256:hWFbLZfpyLdN3yuB7pBWwnRO+bXJlsyzaawWuSZBTyk
Apr 30 12:51:24.793001 sshd-session[4863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:51:24.798178 systemd-logind[1888]: New session 12 of user core.
Apr 30 12:51:24.805088 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 30 12:51:25.125359 sshd[4865]: Connection closed by 147.75.109.163 port 44836
Apr 30 12:51:25.127022 sshd-session[4863]: pam_unix(sshd:session): session closed for user core
Apr 30 12:51:25.131694 systemd[1]: sshd@11-172.31.21.92:22-147.75.109.163:44836.service: Deactivated successfully.
Apr 30 12:51:25.136016 systemd-logind[1888]: Session 12 logged out. Waiting for processes to exit.
Apr 30 12:51:25.137073 systemd[1]: session-12.scope: Deactivated successfully.
Apr 30 12:51:25.141151 systemd-logind[1888]: Removed session 12.
Apr 30 12:51:25.182201 systemd[1]: Started sshd@12-172.31.21.92:22-147.75.109.163:44842.service - OpenSSH per-connection server daemon (147.75.109.163:44842).
Apr 30 12:51:25.449137 sshd[4876]: Accepted publickey for core from 147.75.109.163 port 44842 ssh2: RSA SHA256:hWFbLZfpyLdN3yuB7pBWwnRO+bXJlsyzaawWuSZBTyk
Apr 30 12:51:25.450567 sshd-session[4876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:51:25.455356 systemd-logind[1888]: New session 13 of user core.
Apr 30 12:51:25.461038 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 30 12:51:25.721240 sshd[4878]: Connection closed by 147.75.109.163 port 44842
Apr 30 12:51:25.722043 sshd-session[4876]: pam_unix(sshd:session): session closed for user core
Apr 30 12:51:25.725032 systemd[1]: sshd@12-172.31.21.92:22-147.75.109.163:44842.service: Deactivated successfully.
Apr 30 12:51:25.726899 systemd[1]: session-13.scope: Deactivated successfully.
Apr 30 12:51:25.728596 systemd-logind[1888]: Session 13 logged out. Waiting for processes to exit.
Apr 30 12:51:25.729823 systemd-logind[1888]: Removed session 13.
Apr 30 12:51:30.779167 systemd[1]: Started sshd@13-172.31.21.92:22-147.75.109.163:40954.service - OpenSSH per-connection server daemon (147.75.109.163:40954).
Apr 30 12:51:31.028289 sshd[4891]: Accepted publickey for core from 147.75.109.163 port 40954 ssh2: RSA SHA256:hWFbLZfpyLdN3yuB7pBWwnRO+bXJlsyzaawWuSZBTyk
Apr 30 12:51:31.029775 sshd-session[4891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:51:31.035313 systemd-logind[1888]: New session 14 of user core.
Apr 30 12:51:31.039058 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 30 12:51:31.282471 sshd[4893]: Connection closed by 147.75.109.163 port 40954
Apr 30 12:51:31.283048 sshd-session[4891]: pam_unix(sshd:session): session closed for user core
Apr 30 12:51:31.286651 systemd[1]: sshd@13-172.31.21.92:22-147.75.109.163:40954.service: Deactivated successfully.
Apr 30 12:51:31.289024 systemd[1]: session-14.scope: Deactivated successfully.
Apr 30 12:51:31.289794 systemd-logind[1888]: Session 14 logged out. Waiting for processes to exit.
Apr 30 12:51:31.290735 systemd-logind[1888]: Removed session 14.
Apr 30 12:51:36.334154 systemd[1]: Started sshd@14-172.31.21.92:22-147.75.109.163:40958.service - OpenSSH per-connection server daemon (147.75.109.163:40958).
Apr 30 12:51:36.585267 sshd[4907]: Accepted publickey for core from 147.75.109.163 port 40958 ssh2: RSA SHA256:hWFbLZfpyLdN3yuB7pBWwnRO+bXJlsyzaawWuSZBTyk
Apr 30 12:51:36.586786 sshd-session[4907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:51:36.591894 systemd-logind[1888]: New session 15 of user core.
Apr 30 12:51:36.602063 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 30 12:51:36.835926 sshd[4909]: Connection closed by 147.75.109.163 port 40958
Apr 30 12:51:36.836774 sshd-session[4907]: pam_unix(sshd:session): session closed for user core
Apr 30 12:51:36.839505 systemd[1]: sshd@14-172.31.21.92:22-147.75.109.163:40958.service: Deactivated successfully.
Apr 30 12:51:36.841315 systemd[1]: session-15.scope: Deactivated successfully.
Apr 30 12:51:36.842680 systemd-logind[1888]: Session 15 logged out. Waiting for processes to exit.
Apr 30 12:51:36.843806 systemd-logind[1888]: Removed session 15.
Apr 30 12:51:36.888471 systemd[1]: Started sshd@15-172.31.21.92:22-147.75.109.163:58504.service - OpenSSH per-connection server daemon (147.75.109.163:58504).
Apr 30 12:51:37.143088 sshd[4921]: Accepted publickey for core from 147.75.109.163 port 58504 ssh2: RSA SHA256:hWFbLZfpyLdN3yuB7pBWwnRO+bXJlsyzaawWuSZBTyk
Apr 30 12:51:37.144433 sshd-session[4921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:51:37.149309 systemd-logind[1888]: New session 16 of user core.
Apr 30 12:51:37.158061 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 30 12:51:37.889955 sshd[4923]: Connection closed by 147.75.109.163 port 58504
Apr 30 12:51:37.890990 sshd-session[4921]: pam_unix(sshd:session): session closed for user core
Apr 30 12:51:37.894884 systemd[1]: sshd@15-172.31.21.92:22-147.75.109.163:58504.service: Deactivated successfully.
Apr 30 12:51:37.897350 systemd[1]: session-16.scope: Deactivated successfully.
Apr 30 12:51:37.898440 systemd-logind[1888]: Session 16 logged out. Waiting for processes to exit.
Apr 30 12:51:37.899419 systemd-logind[1888]: Removed session 16.
Apr 30 12:51:37.941186 systemd[1]: Started sshd@16-172.31.21.92:22-147.75.109.163:58514.service - OpenSSH per-connection server daemon (147.75.109.163:58514).
Apr 30 12:51:38.210534 sshd[4933]: Accepted publickey for core from 147.75.109.163 port 58514 ssh2: RSA SHA256:hWFbLZfpyLdN3yuB7pBWwnRO+bXJlsyzaawWuSZBTyk
Apr 30 12:51:38.211973 sshd-session[4933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:51:38.218043 systemd-logind[1888]: New session 17 of user core.
Apr 30 12:51:38.226053 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 30 12:51:39.311582 sshd[4935]: Connection closed by 147.75.109.163 port 58514
Apr 30 12:51:39.313361 sshd-session[4933]: pam_unix(sshd:session): session closed for user core
Apr 30 12:51:39.316530 systemd[1]: sshd@16-172.31.21.92:22-147.75.109.163:58514.service: Deactivated successfully.
Apr 30 12:51:39.318822 systemd[1]: session-17.scope: Deactivated successfully.
Apr 30 12:51:39.320239 systemd-logind[1888]: Session 17 logged out. Waiting for processes to exit.
Apr 30 12:51:39.321537 systemd-logind[1888]: Removed session 17.
Apr 30 12:51:39.362166 systemd[1]: Started sshd@17-172.31.21.92:22-147.75.109.163:58526.service - OpenSSH per-connection server daemon (147.75.109.163:58526).
Apr 30 12:51:39.612001 sshd[4951]: Accepted publickey for core from 147.75.109.163 port 58526 ssh2: RSA SHA256:hWFbLZfpyLdN3yuB7pBWwnRO+bXJlsyzaawWuSZBTyk
Apr 30 12:51:39.613379 sshd-session[4951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:51:39.618290 systemd-logind[1888]: New session 18 of user core.
Apr 30 12:51:39.623004 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 30 12:51:40.032113 sshd[4953]: Connection closed by 147.75.109.163 port 58526
Apr 30 12:51:40.032934 sshd-session[4951]: pam_unix(sshd:session): session closed for user core
Apr 30 12:51:40.035716 systemd[1]: sshd@17-172.31.21.92:22-147.75.109.163:58526.service: Deactivated successfully.
Apr 30 12:51:40.037482 systemd[1]: session-18.scope: Deactivated successfully.
Apr 30 12:51:40.038876 systemd-logind[1888]: Session 18 logged out. Waiting for processes to exit.
Apr 30 12:51:40.039896 systemd-logind[1888]: Removed session 18.
Apr 30 12:51:40.091129 systemd[1]: Started sshd@18-172.31.21.92:22-147.75.109.163:58538.service - OpenSSH per-connection server daemon (147.75.109.163:58538).
Apr 30 12:51:40.345497 sshd[4963]: Accepted publickey for core from 147.75.109.163 port 58538 ssh2: RSA SHA256:hWFbLZfpyLdN3yuB7pBWwnRO+bXJlsyzaawWuSZBTyk
Apr 30 12:51:40.346864 sshd-session[4963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:51:40.351937 systemd-logind[1888]: New session 19 of user core.
Apr 30 12:51:40.359037 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 30 12:51:40.599304 sshd[4965]: Connection closed by 147.75.109.163 port 58538
Apr 30 12:51:40.600213 sshd-session[4963]: pam_unix(sshd:session): session closed for user core
Apr 30 12:51:40.604071 systemd[1]: sshd@18-172.31.21.92:22-147.75.109.163:58538.service: Deactivated successfully.
Apr 30 12:51:40.605891 systemd[1]: session-19.scope: Deactivated successfully.
Apr 30 12:51:40.606680 systemd-logind[1888]: Session 19 logged out. Waiting for processes to exit.
Apr 30 12:51:40.607989 systemd-logind[1888]: Removed session 19.
Apr 30 12:51:45.654212 systemd[1]: Started sshd@19-172.31.21.92:22-147.75.109.163:58544.service - OpenSSH per-connection server daemon (147.75.109.163:58544).
Apr 30 12:51:45.912437 sshd[4981]: Accepted publickey for core from 147.75.109.163 port 58544 ssh2: RSA SHA256:hWFbLZfpyLdN3yuB7pBWwnRO+bXJlsyzaawWuSZBTyk
Apr 30 12:51:45.913654 sshd-session[4981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:51:45.918251 systemd-logind[1888]: New session 20 of user core.
Apr 30 12:51:45.925026 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 30 12:51:46.170933 sshd[4983]: Connection closed by 147.75.109.163 port 58544
Apr 30 12:51:46.171708 sshd-session[4981]: pam_unix(sshd:session): session closed for user core
Apr 30 12:51:46.175626 systemd[1]: sshd@19-172.31.21.92:22-147.75.109.163:58544.service: Deactivated successfully.
Apr 30 12:51:46.177574 systemd[1]: session-20.scope: Deactivated successfully.
Apr 30 12:51:46.178411 systemd-logind[1888]: Session 20 logged out. Waiting for processes to exit.
Apr 30 12:51:46.179434 systemd-logind[1888]: Removed session 20.
Apr 30 12:51:51.221166 systemd[1]: Started sshd@20-172.31.21.92:22-147.75.109.163:39886.service - OpenSSH per-connection server daemon (147.75.109.163:39886).
Apr 30 12:51:51.471357 sshd[4995]: Accepted publickey for core from 147.75.109.163 port 39886 ssh2: RSA SHA256:hWFbLZfpyLdN3yuB7pBWwnRO+bXJlsyzaawWuSZBTyk
Apr 30 12:51:51.473007 sshd-session[4995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:51:51.477853 systemd-logind[1888]: New session 21 of user core.
Apr 30 12:51:51.483060 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 30 12:51:51.716602 sshd[4997]: Connection closed by 147.75.109.163 port 39886
Apr 30 12:51:51.717201 sshd-session[4995]: pam_unix(sshd:session): session closed for user core
Apr 30 12:51:51.720169 systemd[1]: sshd@20-172.31.21.92:22-147.75.109.163:39886.service: Deactivated successfully.
Apr 30 12:51:51.721998 systemd[1]: session-21.scope: Deactivated successfully.
Apr 30 12:51:51.723706 systemd-logind[1888]: Session 21 logged out. Waiting for processes to exit.
Apr 30 12:51:51.725721 systemd-logind[1888]: Removed session 21.
Apr 30 12:51:56.767143 systemd[1]: Started sshd@21-172.31.21.92:22-147.75.109.163:39902.service - OpenSSH per-connection server daemon (147.75.109.163:39902).
Apr 30 12:51:57.019394 sshd[5009]: Accepted publickey for core from 147.75.109.163 port 39902 ssh2: RSA SHA256:hWFbLZfpyLdN3yuB7pBWwnRO+bXJlsyzaawWuSZBTyk
Apr 30 12:51:57.020792 sshd-session[5009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:51:57.025749 systemd-logind[1888]: New session 22 of user core.
Apr 30 12:51:57.035105 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 30 12:51:57.277176 sshd[5011]: Connection closed by 147.75.109.163 port 39902
Apr 30 12:51:57.277675 sshd-session[5009]: pam_unix(sshd:session): session closed for user core
Apr 30 12:51:57.280677 systemd[1]: sshd@21-172.31.21.92:22-147.75.109.163:39902.service: Deactivated successfully.
Apr 30 12:51:57.282634 systemd[1]: session-22.scope: Deactivated successfully.
Apr 30 12:51:57.284057 systemd-logind[1888]: Session 22 logged out. Waiting for processes to exit.
Apr 30 12:51:57.285166 systemd-logind[1888]: Removed session 22.
Apr 30 12:51:57.328383 systemd[1]: Started sshd@22-172.31.21.92:22-147.75.109.163:37552.service - OpenSSH per-connection server daemon (147.75.109.163:37552).
Apr 30 12:51:57.579502 sshd[5023]: Accepted publickey for core from 147.75.109.163 port 37552 ssh2: RSA SHA256:hWFbLZfpyLdN3yuB7pBWwnRO+bXJlsyzaawWuSZBTyk
Apr 30 12:51:57.579944 sshd-session[5023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:51:57.585440 systemd-logind[1888]: New session 23 of user core.
Apr 30 12:51:57.588995 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 30 12:51:59.960931 kubelet[3170]: I0430 12:51:59.960269 3170 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-7vg9m" podStartSLOduration=79.960246437 podStartE2EDuration="1m19.960246437s" podCreationTimestamp="2025-04-30 12:50:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:51:09.007070489 +0000 UTC m=+35.408218104" watchObservedRunningTime="2025-04-30 12:51:59.960246437 +0000 UTC m=+86.361394040"
Apr 30 12:52:00.134246 containerd[1910]: time="2025-04-30T12:52:00.134189684Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 30 12:52:00.148902 containerd[1910]: time="2025-04-30T12:52:00.148806313Z" level=info msg="StopContainer for \"28e381a168a21f9f83c6728811656f485bdb8df91c717967b7141de47aaefb6b\" with timeout 30 (s)"
Apr 30 12:52:00.148902 containerd[1910]: time="2025-04-30T12:52:00.148860738Z" level=info msg="StopContainer for \"dbeec105bf2c91dda6ce440a9ccf5d81d64aea1672a106b11a758f30043ecc3b\" with timeout 2 (s)"
Apr 30 12:52:00.150425 containerd[1910]: time="2025-04-30T12:52:00.150402120Z" level=info msg="Stop container \"dbeec105bf2c91dda6ce440a9ccf5d81d64aea1672a106b11a758f30043ecc3b\" with signal terminated"
Apr 30 12:52:00.150949 containerd[1910]: time="2025-04-30T12:52:00.150927867Z" level=info msg="Stop container \"28e381a168a21f9f83c6728811656f485bdb8df91c717967b7141de47aaefb6b\" with signal terminated"
Apr 30 12:52:00.179463 systemd-networkd[1820]: lxc_health: Link DOWN
Apr 30 12:52:00.179473 systemd-networkd[1820]: lxc_health: Lost carrier
Apr 30 12:52:00.195883 systemd[1]: cri-containerd-28e381a168a21f9f83c6728811656f485bdb8df91c717967b7141de47aaefb6b.scope: Deactivated successfully.
Apr 30 12:52:00.206029 systemd[1]: cri-containerd-dbeec105bf2c91dda6ce440a9ccf5d81d64aea1672a106b11a758f30043ecc3b.scope: Deactivated successfully. Apr 30 12:52:00.206377 systemd[1]: cri-containerd-dbeec105bf2c91dda6ce440a9ccf5d81d64aea1672a106b11a758f30043ecc3b.scope: Consumed 8.113s CPU time, 192M memory peak, 70.3M read from disk, 13.3M written to disk. Apr 30 12:52:00.234426 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-28e381a168a21f9f83c6728811656f485bdb8df91c717967b7141de47aaefb6b-rootfs.mount: Deactivated successfully. Apr 30 12:52:00.242544 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dbeec105bf2c91dda6ce440a9ccf5d81d64aea1672a106b11a758f30043ecc3b-rootfs.mount: Deactivated successfully. Apr 30 12:52:00.256898 containerd[1910]: time="2025-04-30T12:52:00.256841505Z" level=info msg="shim disconnected" id=dbeec105bf2c91dda6ce440a9ccf5d81d64aea1672a106b11a758f30043ecc3b namespace=k8s.io Apr 30 12:52:00.256898 containerd[1910]: time="2025-04-30T12:52:00.256899300Z" level=warning msg="cleaning up after shim disconnected" id=dbeec105bf2c91dda6ce440a9ccf5d81d64aea1672a106b11a758f30043ecc3b namespace=k8s.io Apr 30 12:52:00.257198 containerd[1910]: time="2025-04-30T12:52:00.256907901Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:52:00.257198 containerd[1910]: time="2025-04-30T12:52:00.256846021Z" level=info msg="shim disconnected" id=28e381a168a21f9f83c6728811656f485bdb8df91c717967b7141de47aaefb6b namespace=k8s.io Apr 30 12:52:00.257198 containerd[1910]: time="2025-04-30T12:52:00.257145417Z" level=warning msg="cleaning up after shim disconnected" id=28e381a168a21f9f83c6728811656f485bdb8df91c717967b7141de47aaefb6b namespace=k8s.io Apr 30 12:52:00.257198 containerd[1910]: time="2025-04-30T12:52:00.257151971Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:52:00.282907 containerd[1910]: time="2025-04-30T12:52:00.282786329Z" level=info msg="StopContainer for 
\"28e381a168a21f9f83c6728811656f485bdb8df91c717967b7141de47aaefb6b\" returns successfully" Apr 30 12:52:00.285449 containerd[1910]: time="2025-04-30T12:52:00.285408686Z" level=info msg="StopContainer for \"dbeec105bf2c91dda6ce440a9ccf5d81d64aea1672a106b11a758f30043ecc3b\" returns successfully" Apr 30 12:52:00.292526 containerd[1910]: time="2025-04-30T12:52:00.292452291Z" level=info msg="StopPodSandbox for \"720480e2e85c68912bb2331a0412cf549614cf107c82040fd7066fb4a273b213\"" Apr 30 12:52:00.295293 containerd[1910]: time="2025-04-30T12:52:00.294181985Z" level=info msg="StopPodSandbox for \"db61eea685e686d2a398d957d69bd256527f02f5dd1e55130753e57f18de5d5e\"" Apr 30 12:52:00.296686 containerd[1910]: time="2025-04-30T12:52:00.295427657Z" level=info msg="Container to stop \"f6969884a7c93edfa358e31c91244bda14bfb7a86f92613d6ac8141ec67e61dc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 12:52:00.296785 containerd[1910]: time="2025-04-30T12:52:00.296772536Z" level=info msg="Container to stop \"3fa76a743671736e479cdcca2ca9546ce38f6da0094a9b7f03e97eec558f5c8c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 12:52:00.296945 containerd[1910]: time="2025-04-30T12:52:00.296826271Z" level=info msg="Container to stop \"2c2603999ea01d254f85b34bd7ef7bd791c03481189ccb9c3ab1c9296575dba6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 12:52:00.297023 containerd[1910]: time="2025-04-30T12:52:00.297012138Z" level=info msg="Container to stop \"4f739f1e346864e71c595d5586e40b80e0205b93595e42abb6b4f9bff0506ba1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 12:52:00.297064 containerd[1910]: time="2025-04-30T12:52:00.297055898Z" level=info msg="Container to stop \"dbeec105bf2c91dda6ce440a9ccf5d81d64aea1672a106b11a758f30043ecc3b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 12:52:00.297708 containerd[1910]: 
time="2025-04-30T12:52:00.294013526Z" level=info msg="Container to stop \"28e381a168a21f9f83c6728811656f485bdb8df91c717967b7141de47aaefb6b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 12:52:00.299730 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-db61eea685e686d2a398d957d69bd256527f02f5dd1e55130753e57f18de5d5e-shm.mount: Deactivated successfully. Apr 30 12:52:00.305215 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-720480e2e85c68912bb2331a0412cf549614cf107c82040fd7066fb4a273b213-shm.mount: Deactivated successfully. Apr 30 12:52:00.311649 systemd[1]: cri-containerd-db61eea685e686d2a398d957d69bd256527f02f5dd1e55130753e57f18de5d5e.scope: Deactivated successfully. Apr 30 12:52:00.318244 systemd[1]: cri-containerd-720480e2e85c68912bb2331a0412cf549614cf107c82040fd7066fb4a273b213.scope: Deactivated successfully. Apr 30 12:52:00.355141 containerd[1910]: time="2025-04-30T12:52:00.355091634Z" level=info msg="shim disconnected" id=db61eea685e686d2a398d957d69bd256527f02f5dd1e55130753e57f18de5d5e namespace=k8s.io Apr 30 12:52:00.355635 containerd[1910]: time="2025-04-30T12:52:00.355390887Z" level=warning msg="cleaning up after shim disconnected" id=db61eea685e686d2a398d957d69bd256527f02f5dd1e55130753e57f18de5d5e namespace=k8s.io Apr 30 12:52:00.355635 containerd[1910]: time="2025-04-30T12:52:00.355408357Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:52:00.356049 containerd[1910]: time="2025-04-30T12:52:00.355109522Z" level=info msg="shim disconnected" id=720480e2e85c68912bb2331a0412cf549614cf107c82040fd7066fb4a273b213 namespace=k8s.io Apr 30 12:52:00.356049 containerd[1910]: time="2025-04-30T12:52:00.355939943Z" level=warning msg="cleaning up after shim disconnected" id=720480e2e85c68912bb2331a0412cf549614cf107c82040fd7066fb4a273b213 namespace=k8s.io Apr 30 12:52:00.356049 containerd[1910]: time="2025-04-30T12:52:00.355946751Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 
30 12:52:00.372779 containerd[1910]: time="2025-04-30T12:52:00.372428133Z" level=info msg="TearDown network for sandbox \"db61eea685e686d2a398d957d69bd256527f02f5dd1e55130753e57f18de5d5e\" successfully" Apr 30 12:52:00.372779 containerd[1910]: time="2025-04-30T12:52:00.372458905Z" level=info msg="StopPodSandbox for \"db61eea685e686d2a398d957d69bd256527f02f5dd1e55130753e57f18de5d5e\" returns successfully" Apr 30 12:52:00.372779 containerd[1910]: time="2025-04-30T12:52:00.372622660Z" level=info msg="TearDown network for sandbox \"720480e2e85c68912bb2331a0412cf549614cf107c82040fd7066fb4a273b213\" successfully" Apr 30 12:52:00.372779 containerd[1910]: time="2025-04-30T12:52:00.372644339Z" level=info msg="StopPodSandbox for \"720480e2e85c68912bb2331a0412cf549614cf107c82040fd7066fb4a273b213\" returns successfully" Apr 30 12:52:00.411473 kubelet[3170]: I0430 12:52:00.410990 3170 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ba950d1-6d6f-4a50-af6b-d895c5c1b512-lib-modules\") pod \"6ba950d1-6d6f-4a50-af6b-d895c5c1b512\" (UID: \"6ba950d1-6d6f-4a50-af6b-d895c5c1b512\") " Apr 30 12:52:00.411473 kubelet[3170]: I0430 12:52:00.411063 3170 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6ba950d1-6d6f-4a50-af6b-d895c5c1b512-cilium-config-path\") pod \"6ba950d1-6d6f-4a50-af6b-d895c5c1b512\" (UID: \"6ba950d1-6d6f-4a50-af6b-d895c5c1b512\") " Apr 30 12:52:00.411473 kubelet[3170]: I0430 12:52:00.411083 3170 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbhw2\" (UniqueName: \"kubernetes.io/projected/8c481210-f294-4f75-a569-92bb965a1839-kube-api-access-xbhw2\") pod \"8c481210-f294-4f75-a569-92bb965a1839\" (UID: \"8c481210-f294-4f75-a569-92bb965a1839\") " Apr 30 12:52:00.411473 kubelet[3170]: I0430 12:52:00.411107 3170 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6ba950d1-6d6f-4a50-af6b-d895c5c1b512-hubble-tls\") pod \"6ba950d1-6d6f-4a50-af6b-d895c5c1b512\" (UID: \"6ba950d1-6d6f-4a50-af6b-d895c5c1b512\") " Apr 30 12:52:00.411473 kubelet[3170]: I0430 12:52:00.411124 3170 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6ba950d1-6d6f-4a50-af6b-d895c5c1b512-bpf-maps\") pod \"6ba950d1-6d6f-4a50-af6b-d895c5c1b512\" (UID: \"6ba950d1-6d6f-4a50-af6b-d895c5c1b512\") " Apr 30 12:52:00.411473 kubelet[3170]: I0430 12:52:00.411147 3170 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6ba950d1-6d6f-4a50-af6b-d895c5c1b512-cilium-cgroup\") pod \"6ba950d1-6d6f-4a50-af6b-d895c5c1b512\" (UID: \"6ba950d1-6d6f-4a50-af6b-d895c5c1b512\") " Apr 30 12:52:00.411776 kubelet[3170]: I0430 12:52:00.411163 3170 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6ba950d1-6d6f-4a50-af6b-d895c5c1b512-host-proc-sys-kernel\") pod \"6ba950d1-6d6f-4a50-af6b-d895c5c1b512\" (UID: \"6ba950d1-6d6f-4a50-af6b-d895c5c1b512\") " Apr 30 12:52:00.411776 kubelet[3170]: I0430 12:52:00.411177 3170 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6ba950d1-6d6f-4a50-af6b-d895c5c1b512-host-proc-sys-net\") pod \"6ba950d1-6d6f-4a50-af6b-d895c5c1b512\" (UID: \"6ba950d1-6d6f-4a50-af6b-d895c5c1b512\") " Apr 30 12:52:00.411776 kubelet[3170]: I0430 12:52:00.411190 3170 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6ba950d1-6d6f-4a50-af6b-d895c5c1b512-hostproc\") pod \"6ba950d1-6d6f-4a50-af6b-d895c5c1b512\" (UID: \"6ba950d1-6d6f-4a50-af6b-d895c5c1b512\") " 
Apr 30 12:52:00.411776 kubelet[3170]: I0430 12:52:00.411205 3170 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6ba950d1-6d6f-4a50-af6b-d895c5c1b512-etc-cni-netd\") pod \"6ba950d1-6d6f-4a50-af6b-d895c5c1b512\" (UID: \"6ba950d1-6d6f-4a50-af6b-d895c5c1b512\") "
Apr 30 12:52:00.411776 kubelet[3170]: I0430 12:52:00.411220 3170 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6ba950d1-6d6f-4a50-af6b-d895c5c1b512-xtables-lock\") pod \"6ba950d1-6d6f-4a50-af6b-d895c5c1b512\" (UID: \"6ba950d1-6d6f-4a50-af6b-d895c5c1b512\") "
Apr 30 12:52:00.411776 kubelet[3170]: I0430 12:52:00.411236 3170 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8v4xx\" (UniqueName: \"kubernetes.io/projected/6ba950d1-6d6f-4a50-af6b-d895c5c1b512-kube-api-access-8v4xx\") pod \"6ba950d1-6d6f-4a50-af6b-d895c5c1b512\" (UID: \"6ba950d1-6d6f-4a50-af6b-d895c5c1b512\") "
Apr 30 12:52:00.411951 kubelet[3170]: I0430 12:52:00.411251 3170 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6ba950d1-6d6f-4a50-af6b-d895c5c1b512-cni-path\") pod \"6ba950d1-6d6f-4a50-af6b-d895c5c1b512\" (UID: \"6ba950d1-6d6f-4a50-af6b-d895c5c1b512\") "
Apr 30 12:52:00.411951 kubelet[3170]: I0430 12:52:00.411266 3170 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6ba950d1-6d6f-4a50-af6b-d895c5c1b512-cilium-run\") pod \"6ba950d1-6d6f-4a50-af6b-d895c5c1b512\" (UID: \"6ba950d1-6d6f-4a50-af6b-d895c5c1b512\") "
Apr 30 12:52:00.411951 kubelet[3170]: I0430 12:52:00.411283 3170 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6ba950d1-6d6f-4a50-af6b-d895c5c1b512-clustermesh-secrets\") pod \"6ba950d1-6d6f-4a50-af6b-d895c5c1b512\" (UID: \"6ba950d1-6d6f-4a50-af6b-d895c5c1b512\") "
Apr 30 12:52:00.411951 kubelet[3170]: I0430 12:52:00.411300 3170 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8c481210-f294-4f75-a569-92bb965a1839-cilium-config-path\") pod \"8c481210-f294-4f75-a569-92bb965a1839\" (UID: \"8c481210-f294-4f75-a569-92bb965a1839\") "
Apr 30 12:52:00.417393 kubelet[3170]: I0430 12:52:00.415314 3170 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c481210-f294-4f75-a569-92bb965a1839-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8c481210-f294-4f75-a569-92bb965a1839" (UID: "8c481210-f294-4f75-a569-92bb965a1839"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 30 12:52:00.419001 kubelet[3170]: I0430 12:52:00.418967 3170 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ba950d1-6d6f-4a50-af6b-d895c5c1b512-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6ba950d1-6d6f-4a50-af6b-d895c5c1b512" (UID: "6ba950d1-6d6f-4a50-af6b-d895c5c1b512"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 30 12:52:00.419496 kubelet[3170]: I0430 12:52:00.419453 3170 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ba950d1-6d6f-4a50-af6b-d895c5c1b512-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6ba950d1-6d6f-4a50-af6b-d895c5c1b512" (UID: "6ba950d1-6d6f-4a50-af6b-d895c5c1b512"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 30 12:52:00.420488 kubelet[3170]: I0430 12:52:00.420390 3170 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ba950d1-6d6f-4a50-af6b-d895c5c1b512-hostproc" (OuterVolumeSpecName: "hostproc") pod "6ba950d1-6d6f-4a50-af6b-d895c5c1b512" (UID: "6ba950d1-6d6f-4a50-af6b-d895c5c1b512"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 30 12:52:00.422785 kubelet[3170]: I0430 12:52:00.422552 3170 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c481210-f294-4f75-a569-92bb965a1839-kube-api-access-xbhw2" (OuterVolumeSpecName: "kube-api-access-xbhw2") pod "8c481210-f294-4f75-a569-92bb965a1839" (UID: "8c481210-f294-4f75-a569-92bb965a1839"). InnerVolumeSpecName "kube-api-access-xbhw2". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 30 12:52:00.422785 kubelet[3170]: I0430 12:52:00.422596 3170 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ba950d1-6d6f-4a50-af6b-d895c5c1b512-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6ba950d1-6d6f-4a50-af6b-d895c5c1b512" (UID: "6ba950d1-6d6f-4a50-af6b-d895c5c1b512"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 30 12:52:00.422785 kubelet[3170]: I0430 12:52:00.422611 3170 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ba950d1-6d6f-4a50-af6b-d895c5c1b512-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6ba950d1-6d6f-4a50-af6b-d895c5c1b512" (UID: "6ba950d1-6d6f-4a50-af6b-d895c5c1b512"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 30 12:52:00.423312 kubelet[3170]: I0430 12:52:00.423293 3170 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ba950d1-6d6f-4a50-af6b-d895c5c1b512-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6ba950d1-6d6f-4a50-af6b-d895c5c1b512" (UID: "6ba950d1-6d6f-4a50-af6b-d895c5c1b512"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 30 12:52:00.423428 kubelet[3170]: I0430 12:52:00.423416 3170 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ba950d1-6d6f-4a50-af6b-d895c5c1b512-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6ba950d1-6d6f-4a50-af6b-d895c5c1b512" (UID: "6ba950d1-6d6f-4a50-af6b-d895c5c1b512"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 30 12:52:00.423508 kubelet[3170]: I0430 12:52:00.423498 3170 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ba950d1-6d6f-4a50-af6b-d895c5c1b512-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6ba950d1-6d6f-4a50-af6b-d895c5c1b512" (UID: "6ba950d1-6d6f-4a50-af6b-d895c5c1b512"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 30 12:52:00.423625 kubelet[3170]: I0430 12:52:00.423611 3170 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ba950d1-6d6f-4a50-af6b-d895c5c1b512-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6ba950d1-6d6f-4a50-af6b-d895c5c1b512" (UID: "6ba950d1-6d6f-4a50-af6b-d895c5c1b512"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 30 12:52:00.423702 kubelet[3170]: I0430 12:52:00.423692 3170 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ba950d1-6d6f-4a50-af6b-d895c5c1b512-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6ba950d1-6d6f-4a50-af6b-d895c5c1b512" (UID: "6ba950d1-6d6f-4a50-af6b-d895c5c1b512"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 30 12:52:00.423879 kubelet[3170]: I0430 12:52:00.423755 3170 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ba950d1-6d6f-4a50-af6b-d895c5c1b512-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6ba950d1-6d6f-4a50-af6b-d895c5c1b512" (UID: "6ba950d1-6d6f-4a50-af6b-d895c5c1b512"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 30 12:52:00.423879 kubelet[3170]: I0430 12:52:00.423770 3170 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ba950d1-6d6f-4a50-af6b-d895c5c1b512-cni-path" (OuterVolumeSpecName: "cni-path") pod "6ba950d1-6d6f-4a50-af6b-d895c5c1b512" (UID: "6ba950d1-6d6f-4a50-af6b-d895c5c1b512"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 30 12:52:00.424721 kubelet[3170]: I0430 12:52:00.424657 3170 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ba950d1-6d6f-4a50-af6b-d895c5c1b512-kube-api-access-8v4xx" (OuterVolumeSpecName: "kube-api-access-8v4xx") pod "6ba950d1-6d6f-4a50-af6b-d895c5c1b512" (UID: "6ba950d1-6d6f-4a50-af6b-d895c5c1b512"). InnerVolumeSpecName "kube-api-access-8v4xx". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 30 12:52:00.426270 kubelet[3170]: I0430 12:52:00.426245 3170 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ba950d1-6d6f-4a50-af6b-d895c5c1b512-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6ba950d1-6d6f-4a50-af6b-d895c5c1b512" (UID: "6ba950d1-6d6f-4a50-af6b-d895c5c1b512"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 30 12:52:00.511795 kubelet[3170]: I0430 12:52:00.511651 3170 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6ba950d1-6d6f-4a50-af6b-d895c5c1b512-etc-cni-netd\") on node \"ip-172-31-21-92\" DevicePath \"\""
Apr 30 12:52:00.511795 kubelet[3170]: I0430 12:52:00.511699 3170 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6ba950d1-6d6f-4a50-af6b-d895c5c1b512-host-proc-sys-net\") on node \"ip-172-31-21-92\" DevicePath \"\""
Apr 30 12:52:00.511795 kubelet[3170]: I0430 12:52:00.511717 3170 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6ba950d1-6d6f-4a50-af6b-d895c5c1b512-hostproc\") on node \"ip-172-31-21-92\" DevicePath \"\""
Apr 30 12:52:00.511795 kubelet[3170]: I0430 12:52:00.511728 3170 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8v4xx\" (UniqueName: \"kubernetes.io/projected/6ba950d1-6d6f-4a50-af6b-d895c5c1b512-kube-api-access-8v4xx\") on node \"ip-172-31-21-92\" DevicePath \"\""
Apr 30 12:52:00.511795 kubelet[3170]: I0430 12:52:00.511741 3170 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6ba950d1-6d6f-4a50-af6b-d895c5c1b512-xtables-lock\") on node \"ip-172-31-21-92\" DevicePath \"\""
Apr 30 12:52:00.511795 kubelet[3170]: I0430 12:52:00.511756 3170 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6ba950d1-6d6f-4a50-af6b-d895c5c1b512-cni-path\") on node \"ip-172-31-21-92\" DevicePath \"\""
Apr 30 12:52:00.511795 kubelet[3170]: I0430 12:52:00.511767 3170 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6ba950d1-6d6f-4a50-af6b-d895c5c1b512-cilium-run\") on node \"ip-172-31-21-92\" DevicePath \"\""
Apr 30 12:52:00.511795 kubelet[3170]: I0430 12:52:00.511779 3170 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6ba950d1-6d6f-4a50-af6b-d895c5c1b512-clustermesh-secrets\") on node \"ip-172-31-21-92\" DevicePath \"\""
Apr 30 12:52:00.512332 kubelet[3170]: I0430 12:52:00.511792 3170 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8c481210-f294-4f75-a569-92bb965a1839-cilium-config-path\") on node \"ip-172-31-21-92\" DevicePath \"\""
Apr 30 12:52:00.512332 kubelet[3170]: I0430 12:52:00.511803 3170 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ba950d1-6d6f-4a50-af6b-d895c5c1b512-lib-modules\") on node \"ip-172-31-21-92\" DevicePath \"\""
Apr 30 12:52:00.512332 kubelet[3170]: I0430 12:52:00.511816 3170 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6ba950d1-6d6f-4a50-af6b-d895c5c1b512-cilium-config-path\") on node \"ip-172-31-21-92\" DevicePath \"\""
Apr 30 12:52:00.512332 kubelet[3170]: I0430 12:52:00.511857 3170 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xbhw2\" (UniqueName: \"kubernetes.io/projected/8c481210-f294-4f75-a569-92bb965a1839-kube-api-access-xbhw2\") on node \"ip-172-31-21-92\" DevicePath \"\""
Apr 30 12:52:00.512332 kubelet[3170]: I0430 12:52:00.511869 3170 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6ba950d1-6d6f-4a50-af6b-d895c5c1b512-cilium-cgroup\") on node \"ip-172-31-21-92\" DevicePath \"\""
Apr 30 12:52:00.512332 kubelet[3170]: I0430 12:52:00.511881 3170 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6ba950d1-6d6f-4a50-af6b-d895c5c1b512-hubble-tls\") on node \"ip-172-31-21-92\" DevicePath \"\""
Apr 30 12:52:00.512332 kubelet[3170]: I0430 12:52:00.511892 3170 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6ba950d1-6d6f-4a50-af6b-d895c5c1b512-bpf-maps\") on node \"ip-172-31-21-92\" DevicePath \"\""
Apr 30 12:52:00.512332 kubelet[3170]: I0430 12:52:00.511902 3170 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6ba950d1-6d6f-4a50-af6b-d895c5c1b512-host-proc-sys-kernel\") on node \"ip-172-31-21-92\" DevicePath \"\""
Apr 30 12:52:01.105235 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-720480e2e85c68912bb2331a0412cf549614cf107c82040fd7066fb4a273b213-rootfs.mount: Deactivated successfully.
Apr 30 12:52:01.105382 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db61eea685e686d2a398d957d69bd256527f02f5dd1e55130753e57f18de5d5e-rootfs.mount: Deactivated successfully.
Apr 30 12:52:01.105471 systemd[1]: var-lib-kubelet-pods-8c481210\x2df294\x2d4f75\x2da569\x2d92bb965a1839-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxbhw2.mount: Deactivated successfully.
Apr 30 12:52:01.105567 systemd[1]: var-lib-kubelet-pods-6ba950d1\x2d6d6f\x2d4a50\x2daf6b\x2dd895c5c1b512-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Apr 30 12:52:01.105660 systemd[1]: var-lib-kubelet-pods-6ba950d1\x2d6d6f\x2d4a50\x2daf6b\x2dd895c5c1b512-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8v4xx.mount: Deactivated successfully.
Apr 30 12:52:01.105765 systemd[1]: var-lib-kubelet-pods-6ba950d1\x2d6d6f\x2d4a50\x2daf6b\x2dd895c5c1b512-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Apr 30 12:52:01.113671 systemd[1]: Removed slice kubepods-besteffort-pod8c481210_f294_4f75_a569_92bb965a1839.slice - libcontainer container kubepods-besteffort-pod8c481210_f294_4f75_a569_92bb965a1839.slice.
Apr 30 12:52:01.122952 kubelet[3170]: I0430 12:52:01.122903 3170 scope.go:117] "RemoveContainer" containerID="28e381a168a21f9f83c6728811656f485bdb8df91c717967b7141de47aaefb6b"
Apr 30 12:52:01.134692 systemd[1]: Removed slice kubepods-burstable-pod6ba950d1_6d6f_4a50_af6b_d895c5c1b512.slice - libcontainer container kubepods-burstable-pod6ba950d1_6d6f_4a50_af6b_d895c5c1b512.slice.
Apr 30 12:52:01.135066 systemd[1]: kubepods-burstable-pod6ba950d1_6d6f_4a50_af6b_d895c5c1b512.slice: Consumed 8.202s CPU time, 192.3M memory peak, 70.3M read from disk, 13.3M written to disk.
Apr 30 12:52:01.135627 containerd[1910]: time="2025-04-30T12:52:01.135594272Z" level=info msg="RemoveContainer for \"28e381a168a21f9f83c6728811656f485bdb8df91c717967b7141de47aaefb6b\""
Apr 30 12:52:01.147345 containerd[1910]: time="2025-04-30T12:52:01.147084780Z" level=info msg="RemoveContainer for \"28e381a168a21f9f83c6728811656f485bdb8df91c717967b7141de47aaefb6b\" returns successfully"
Apr 30 12:52:01.147731 kubelet[3170]: I0430 12:52:01.147606 3170 scope.go:117] "RemoveContainer" containerID="28e381a168a21f9f83c6728811656f485bdb8df91c717967b7141de47aaefb6b"
Apr 30 12:52:01.148448 containerd[1910]: time="2025-04-30T12:52:01.148340668Z" level=error msg="ContainerStatus for \"28e381a168a21f9f83c6728811656f485bdb8df91c717967b7141de47aaefb6b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"28e381a168a21f9f83c6728811656f485bdb8df91c717967b7141de47aaefb6b\": not found"
Apr 30 12:52:01.148994 kubelet[3170]: E0430 12:52:01.148741 3170 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"28e381a168a21f9f83c6728811656f485bdb8df91c717967b7141de47aaefb6b\": not found" containerID="28e381a168a21f9f83c6728811656f485bdb8df91c717967b7141de47aaefb6b"
Apr 30 12:52:01.157308 kubelet[3170]: I0430 12:52:01.150423 3170 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"28e381a168a21f9f83c6728811656f485bdb8df91c717967b7141de47aaefb6b"} err="failed to get container status \"28e381a168a21f9f83c6728811656f485bdb8df91c717967b7141de47aaefb6b\": rpc error: code = NotFound desc = an error occurred when try to find container \"28e381a168a21f9f83c6728811656f485bdb8df91c717967b7141de47aaefb6b\": not found"
Apr 30 12:52:01.157308 kubelet[3170]: I0430 12:52:01.157155 3170 scope.go:117] "RemoveContainer" containerID="dbeec105bf2c91dda6ce440a9ccf5d81d64aea1672a106b11a758f30043ecc3b"
Apr 30 12:52:01.158890 containerd[1910]: time="2025-04-30T12:52:01.158842674Z" level=info msg="RemoveContainer for \"dbeec105bf2c91dda6ce440a9ccf5d81d64aea1672a106b11a758f30043ecc3b\""
Apr 30 12:52:01.164303 containerd[1910]: time="2025-04-30T12:52:01.164260857Z" level=info msg="RemoveContainer for \"dbeec105bf2c91dda6ce440a9ccf5d81d64aea1672a106b11a758f30043ecc3b\" returns successfully"
Apr 30 12:52:01.164480 kubelet[3170]: I0430 12:52:01.164461 3170 scope.go:117] "RemoveContainer" containerID="4f739f1e346864e71c595d5586e40b80e0205b93595e42abb6b4f9bff0506ba1"
Apr 30 12:52:01.166599 containerd[1910]: time="2025-04-30T12:52:01.166558234Z" level=info msg="RemoveContainer for \"4f739f1e346864e71c595d5586e40b80e0205b93595e42abb6b4f9bff0506ba1\""
Apr 30 12:52:01.173153 containerd[1910]: time="2025-04-30T12:52:01.173082869Z" level=info msg="RemoveContainer for \"4f739f1e346864e71c595d5586e40b80e0205b93595e42abb6b4f9bff0506ba1\" returns successfully"
Apr 30 12:52:01.173435 kubelet[3170]: I0430 12:52:01.173325 3170 scope.go:117] "RemoveContainer" containerID="2c2603999ea01d254f85b34bd7ef7bd791c03481189ccb9c3ab1c9296575dba6"
Apr 30 12:52:01.174534 containerd[1910]: time="2025-04-30T12:52:01.174501861Z" level=info msg="RemoveContainer for \"2c2603999ea01d254f85b34bd7ef7bd791c03481189ccb9c3ab1c9296575dba6\""
Apr 30 12:52:01.179601 containerd[1910]: time="2025-04-30T12:52:01.179555128Z" level=info msg="RemoveContainer for \"2c2603999ea01d254f85b34bd7ef7bd791c03481189ccb9c3ab1c9296575dba6\" returns successfully"
Apr 30 12:52:01.179802 kubelet[3170]: I0430 12:52:01.179771 3170 scope.go:117] "RemoveContainer" containerID="3fa76a743671736e479cdcca2ca9546ce38f6da0094a9b7f03e97eec558f5c8c"
Apr 30 12:52:01.181213 containerd[1910]: time="2025-04-30T12:52:01.180929731Z" level=info msg="RemoveContainer for \"3fa76a743671736e479cdcca2ca9546ce38f6da0094a9b7f03e97eec558f5c8c\""
Apr 30 12:52:01.186342 containerd[1910]: time="2025-04-30T12:52:01.186287235Z" level=info msg="RemoveContainer for \"3fa76a743671736e479cdcca2ca9546ce38f6da0094a9b7f03e97eec558f5c8c\" returns successfully"
Apr 30 12:52:01.186582 kubelet[3170]: I0430 12:52:01.186512 3170 scope.go:117] "RemoveContainer" containerID="f6969884a7c93edfa358e31c91244bda14bfb7a86f92613d6ac8141ec67e61dc"
Apr 30 12:52:01.187904 containerd[1910]: time="2025-04-30T12:52:01.187816892Z" level=info msg="RemoveContainer for \"f6969884a7c93edfa358e31c91244bda14bfb7a86f92613d6ac8141ec67e61dc\""
Apr 30 12:52:01.194651 containerd[1910]: time="2025-04-30T12:52:01.194608014Z" level=info msg="RemoveContainer for \"f6969884a7c93edfa358e31c91244bda14bfb7a86f92613d6ac8141ec67e61dc\" returns successfully"
Apr 30 12:52:01.194939 kubelet[3170]: I0430 12:52:01.194880 3170 scope.go:117] "RemoveContainer" containerID="dbeec105bf2c91dda6ce440a9ccf5d81d64aea1672a106b11a758f30043ecc3b"
Apr 30 12:52:01.195165 containerd[1910]: time="2025-04-30T12:52:01.195121312Z" level=error msg="ContainerStatus for \"dbeec105bf2c91dda6ce440a9ccf5d81d64aea1672a106b11a758f30043ecc3b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dbeec105bf2c91dda6ce440a9ccf5d81d64aea1672a106b11a758f30043ecc3b\": not found"
Apr 30 12:52:01.195288 kubelet[3170]: E0430 12:52:01.195257 3170 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dbeec105bf2c91dda6ce440a9ccf5d81d64aea1672a106b11a758f30043ecc3b\": not found" containerID="dbeec105bf2c91dda6ce440a9ccf5d81d64aea1672a106b11a758f30043ecc3b"
Apr 30 12:52:01.195343 kubelet[3170]: I0430 12:52:01.195286 3170 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dbeec105bf2c91dda6ce440a9ccf5d81d64aea1672a106b11a758f30043ecc3b"} err="failed to get container status \"dbeec105bf2c91dda6ce440a9ccf5d81d64aea1672a106b11a758f30043ecc3b\": rpc error: code = NotFound desc = an error occurred when try to find container \"dbeec105bf2c91dda6ce440a9ccf5d81d64aea1672a106b11a758f30043ecc3b\": not found"
Apr 30 12:52:01.195343 kubelet[3170]: I0430 12:52:01.195309 3170 scope.go:117] "RemoveContainer" containerID="4f739f1e346864e71c595d5586e40b80e0205b93595e42abb6b4f9bff0506ba1"
Apr 30 12:52:01.195540 containerd[1910]: time="2025-04-30T12:52:01.195460806Z" level=error msg="ContainerStatus for \"4f739f1e346864e71c595d5586e40b80e0205b93595e42abb6b4f9bff0506ba1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4f739f1e346864e71c595d5586e40b80e0205b93595e42abb6b4f9bff0506ba1\": not found"
Apr 30 12:52:01.195607 kubelet[3170]: E0430 12:52:01.195560 3170 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4f739f1e346864e71c595d5586e40b80e0205b93595e42abb6b4f9bff0506ba1\": not found" containerID="4f739f1e346864e71c595d5586e40b80e0205b93595e42abb6b4f9bff0506ba1"
Apr 30 12:52:01.195607 kubelet[3170]: I0430 12:52:01.195579 3170 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4f739f1e346864e71c595d5586e40b80e0205b93595e42abb6b4f9bff0506ba1"} err="failed to get container status \"4f739f1e346864e71c595d5586e40b80e0205b93595e42abb6b4f9bff0506ba1\": rpc error: code = NotFound desc = an error occurred when try to find container \"4f739f1e346864e71c595d5586e40b80e0205b93595e42abb6b4f9bff0506ba1\": not found"
Apr 30 12:52:01.195607 kubelet[3170]: I0430 12:52:01.195593 3170 scope.go:117] "RemoveContainer" containerID="2c2603999ea01d254f85b34bd7ef7bd791c03481189ccb9c3ab1c9296575dba6"
Apr 30 12:52:01.195804 containerd[1910]: time="2025-04-30T12:52:01.195770763Z" level=error msg="ContainerStatus for \"2c2603999ea01d254f85b34bd7ef7bd791c03481189ccb9c3ab1c9296575dba6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2c2603999ea01d254f85b34bd7ef7bd791c03481189ccb9c3ab1c9296575dba6\": not found"
Apr 30 12:52:01.195926 kubelet[3170]: E0430 12:52:01.195911 3170 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2c2603999ea01d254f85b34bd7ef7bd791c03481189ccb9c3ab1c9296575dba6\": not found" containerID="2c2603999ea01d254f85b34bd7ef7bd791c03481189ccb9c3ab1c9296575dba6"
Apr 30 12:52:01.195961 kubelet[3170]: I0430 12:52:01.195932 3170 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2c2603999ea01d254f85b34bd7ef7bd791c03481189ccb9c3ab1c9296575dba6"} err="failed to get container status \"2c2603999ea01d254f85b34bd7ef7bd791c03481189ccb9c3ab1c9296575dba6\": rpc error: code = NotFound desc = an error occurred when try to find container \"2c2603999ea01d254f85b34bd7ef7bd791c03481189ccb9c3ab1c9296575dba6\": not found"
Apr 30 12:52:01.195961 kubelet[3170]: I0430 12:52:01.195953 3170 scope.go:117] "RemoveContainer" containerID="3fa76a743671736e479cdcca2ca9546ce38f6da0094a9b7f03e97eec558f5c8c"
Apr 30 12:52:01.196163 containerd[1910]: time="2025-04-30T12:52:01.196097730Z" level=error msg="ContainerStatus for \"3fa76a743671736e479cdcca2ca9546ce38f6da0094a9b7f03e97eec558f5c8c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3fa76a743671736e479cdcca2ca9546ce38f6da0094a9b7f03e97eec558f5c8c\": not found"
Apr 30 12:52:01.196248 kubelet[3170]: E0430 12:52:01.196220 3170 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3fa76a743671736e479cdcca2ca9546ce38f6da0094a9b7f03e97eec558f5c8c\": not found" containerID="3fa76a743671736e479cdcca2ca9546ce38f6da0094a9b7f03e97eec558f5c8c"
Apr 30 12:52:01.196248 kubelet[3170]: I0430 12:52:01.196240 3170 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3fa76a743671736e479cdcca2ca9546ce38f6da0094a9b7f03e97eec558f5c8c"} err="failed to get container status \"3fa76a743671736e479cdcca2ca9546ce38f6da0094a9b7f03e97eec558f5c8c\": rpc error: code = NotFound desc = an error occurred when try to find container \"3fa76a743671736e479cdcca2ca9546ce38f6da0094a9b7f03e97eec558f5c8c\": not found"
Apr 30 12:52:01.196306 kubelet[3170]: I0430 12:52:01.196256 3170 scope.go:117] "RemoveContainer" containerID="f6969884a7c93edfa358e31c91244bda14bfb7a86f92613d6ac8141ec67e61dc"
Apr 30 12:52:01.196433 containerd[1910]: time="2025-04-30T12:52:01.196405038Z" level=error msg="ContainerStatus for \"f6969884a7c93edfa358e31c91244bda14bfb7a86f92613d6ac8141ec67e61dc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f6969884a7c93edfa358e31c91244bda14bfb7a86f92613d6ac8141ec67e61dc\": not found"
Apr 30 12:52:01.196528 kubelet[3170]: E0430 12:52:01.196500 3170 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f6969884a7c93edfa358e31c91244bda14bfb7a86f92613d6ac8141ec67e61dc\": not found" containerID="f6969884a7c93edfa358e31c91244bda14bfb7a86f92613d6ac8141ec67e61dc"
Apr 30 12:52:01.196528 kubelet[3170]: I0430 12:52:01.196520 3170 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f6969884a7c93edfa358e31c91244bda14bfb7a86f92613d6ac8141ec67e61dc"} err="failed to get container status \"f6969884a7c93edfa358e31c91244bda14bfb7a86f92613d6ac8141ec67e61dc\": rpc error: code = NotFound desc = an error occurred when try to find container \"f6969884a7c93edfa358e31c91244bda14bfb7a86f92613d6ac8141ec67e61dc\": not found"
Apr 30 12:52:01.730372 kubelet[3170]: I0430 12:52:01.730335 3170 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ba950d1-6d6f-4a50-af6b-d895c5c1b512" path="/var/lib/kubelet/pods/6ba950d1-6d6f-4a50-af6b-d895c5c1b512/volumes"
Apr 30 12:52:01.734399 kubelet[3170]: I0430 12:52:01.734349 3170 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c481210-f294-4f75-a569-92bb965a1839" path="/var/lib/kubelet/pods/8c481210-f294-4f75-a569-92bb965a1839/volumes"
Apr 30 12:52:01.935630 sshd[5025]: Connection closed by 147.75.109.163 port 37552
Apr 30 12:52:01.938707 sshd-session[5023]: pam_unix(sshd:session): session closed for user core
Apr 30 12:52:01.953551 systemd[1]: sshd@22-172.31.21.92:22-147.75.109.163:37552.service: Deactivated successfully.
Apr 30 12:52:01.974304 systemd[1]: session-23.scope: Deactivated successfully.
Apr 30 12:52:01.983256 systemd-logind[1888]: Session 23 logged out. Waiting for processes to exit.
Apr 30 12:52:02.007518 systemd[1]: Started sshd@23-172.31.21.92:22-147.75.109.163:37566.service - OpenSSH per-connection server daemon (147.75.109.163:37566).
Apr 30 12:52:02.009352 systemd-logind[1888]: Removed session 23.
Apr 30 12:52:02.350677 sshd[5188]: Accepted publickey for core from 147.75.109.163 port 37566 ssh2: RSA SHA256:hWFbLZfpyLdN3yuB7pBWwnRO+bXJlsyzaawWuSZBTyk
Apr 30 12:52:02.358731 sshd-session[5188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:52:02.371931 systemd-logind[1888]: New session 24 of user core.
Apr 30 12:52:02.379112 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 30 12:52:02.911795 ntpd[1883]: Deleting interface #11 lxc_health, fe80::b050:2fff:feb6:fe77%8#123, interface stats: received=0, sent=0, dropped=0, active_time=52 secs
Apr 30 12:52:02.913084 ntpd[1883]: 30 Apr 12:52:02 ntpd[1883]: Deleting interface #11 lxc_health, fe80::b050:2fff:feb6:fe77%8#123, interface stats: received=0, sent=0, dropped=0, active_time=52 secs
Apr 30 12:52:03.289604 sshd[5191]: Connection closed by 147.75.109.163 port 37566
Apr 30 12:52:03.291862 sshd-session[5188]: pam_unix(sshd:session): session closed for user core
Apr 30 12:52:03.296478 systemd-logind[1888]: Session 24 logged out. Waiting for processes to exit.
Apr 30 12:52:03.298372 systemd[1]: sshd@23-172.31.21.92:22-147.75.109.163:37566.service: Deactivated successfully.
Apr 30 12:52:03.302258 systemd[1]: session-24.scope: Deactivated successfully.
Apr 30 12:52:03.313905 systemd-logind[1888]: Removed session 24.
Apr 30 12:52:03.319619 kubelet[3170]: I0430 12:52:03.310011 3170 memory_manager.go:355] "RemoveStaleState removing state" podUID="6ba950d1-6d6f-4a50-af6b-d895c5c1b512" containerName="cilium-agent"
Apr 30 12:52:03.322064 kubelet[3170]: I0430 12:52:03.319935 3170 memory_manager.go:355] "RemoveStaleState removing state" podUID="8c481210-f294-4f75-a569-92bb965a1839" containerName="cilium-operator"
Apr 30 12:52:03.348276 systemd[1]: Started sshd@24-172.31.21.92:22-147.75.109.163:37570.service - OpenSSH per-connection server daemon (147.75.109.163:37570).
Apr 30 12:52:03.413309 systemd[1]: Created slice kubepods-burstable-pod3026b8f8_8e4c_43f4_9a7a_8dafb11c59c3.slice - libcontainer container kubepods-burstable-pod3026b8f8_8e4c_43f4_9a7a_8dafb11c59c3.slice.
Apr 30 12:52:03.456165 kubelet[3170]: I0430 12:52:03.456026 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3026b8f8-8e4c-43f4-9a7a-8dafb11c59c3-hostproc\") pod \"cilium-dnjgh\" (UID: \"3026b8f8-8e4c-43f4-9a7a-8dafb11c59c3\") " pod="kube-system/cilium-dnjgh"
Apr 30 12:52:03.456165 kubelet[3170]: I0430 12:52:03.456138 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3026b8f8-8e4c-43f4-9a7a-8dafb11c59c3-cilium-cgroup\") pod \"cilium-dnjgh\" (UID: \"3026b8f8-8e4c-43f4-9a7a-8dafb11c59c3\") " pod="kube-system/cilium-dnjgh"
Apr 30 12:52:03.456165 kubelet[3170]: I0430 12:52:03.456173 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3026b8f8-8e4c-43f4-9a7a-8dafb11c59c3-lib-modules\") pod \"cilium-dnjgh\" (UID: \"3026b8f8-8e4c-43f4-9a7a-8dafb11c59c3\") " pod="kube-system/cilium-dnjgh"
Apr 30 12:52:03.456431 kubelet[3170]: I0430 12:52:03.456200 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czt8m\" (UniqueName: \"kubernetes.io/projected/3026b8f8-8e4c-43f4-9a7a-8dafb11c59c3-kube-api-access-czt8m\") pod \"cilium-dnjgh\" (UID: \"3026b8f8-8e4c-43f4-9a7a-8dafb11c59c3\") " pod="kube-system/cilium-dnjgh"
Apr 30 12:52:03.456431 kubelet[3170]: I0430 12:52:03.456234 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3026b8f8-8e4c-43f4-9a7a-8dafb11c59c3-cilium-run\") pod \"cilium-dnjgh\" (UID: \"3026b8f8-8e4c-43f4-9a7a-8dafb11c59c3\") " pod="kube-system/cilium-dnjgh"
Apr 30 12:52:03.456431 kubelet[3170]: I0430 12:52:03.456256 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3026b8f8-8e4c-43f4-9a7a-8dafb11c59c3-cilium-ipsec-secrets\") pod \"cilium-dnjgh\" (UID: \"3026b8f8-8e4c-43f4-9a7a-8dafb11c59c3\") " pod="kube-system/cilium-dnjgh"
Apr 30 12:52:03.456431 kubelet[3170]: I0430 12:52:03.456291 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3026b8f8-8e4c-43f4-9a7a-8dafb11c59c3-host-proc-sys-kernel\") pod \"cilium-dnjgh\" (UID: \"3026b8f8-8e4c-43f4-9a7a-8dafb11c59c3\") " pod="kube-system/cilium-dnjgh"
Apr 30 12:52:03.456431 kubelet[3170]: I0430 12:52:03.456311 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3026b8f8-8e4c-43f4-9a7a-8dafb11c59c3-hubble-tls\") pod \"cilium-dnjgh\" (UID: \"3026b8f8-8e4c-43f4-9a7a-8dafb11c59c3\") " pod="kube-system/cilium-dnjgh"
Apr 30 12:52:03.456431 kubelet[3170]: I0430 12:52:03.456336 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3026b8f8-8e4c-43f4-9a7a-8dafb11c59c3-cni-path\") pod \"cilium-dnjgh\" (UID: \"3026b8f8-8e4c-43f4-9a7a-8dafb11c59c3\") " pod="kube-system/cilium-dnjgh"
Apr 30 12:52:03.456780 kubelet[3170]: I0430 12:52:03.456358 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3026b8f8-8e4c-43f4-9a7a-8dafb11c59c3-clustermesh-secrets\") pod \"cilium-dnjgh\" (UID: \"3026b8f8-8e4c-43f4-9a7a-8dafb11c59c3\") " pod="kube-system/cilium-dnjgh"
Apr 30 12:52:03.456780 kubelet[3170]: I0430 12:52:03.456386 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3026b8f8-8e4c-43f4-9a7a-8dafb11c59c3-host-proc-sys-net\") pod \"cilium-dnjgh\" (UID: \"3026b8f8-8e4c-43f4-9a7a-8dafb11c59c3\") " pod="kube-system/cilium-dnjgh"
Apr 30 12:52:03.456780 kubelet[3170]: I0430 12:52:03.456412 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3026b8f8-8e4c-43f4-9a7a-8dafb11c59c3-xtables-lock\") pod \"cilium-dnjgh\" (UID: \"3026b8f8-8e4c-43f4-9a7a-8dafb11c59c3\") " pod="kube-system/cilium-dnjgh"
Apr 30 12:52:03.456780 kubelet[3170]: I0430 12:52:03.456437 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3026b8f8-8e4c-43f4-9a7a-8dafb11c59c3-bpf-maps\") pod \"cilium-dnjgh\" (UID: \"3026b8f8-8e4c-43f4-9a7a-8dafb11c59c3\") " pod="kube-system/cilium-dnjgh"
Apr 30 12:52:03.456780 kubelet[3170]: I0430 12:52:03.456460 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3026b8f8-8e4c-43f4-9a7a-8dafb11c59c3-etc-cni-netd\") pod \"cilium-dnjgh\" (UID: \"3026b8f8-8e4c-43f4-9a7a-8dafb11c59c3\") " pod="kube-system/cilium-dnjgh"
Apr 30 12:52:03.456780 kubelet[3170]: I0430 12:52:03.456485 3170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3026b8f8-8e4c-43f4-9a7a-8dafb11c59c3-cilium-config-path\") pod \"cilium-dnjgh\" (UID: \"3026b8f8-8e4c-43f4-9a7a-8dafb11c59c3\") " pod="kube-system/cilium-dnjgh"
Apr 30 12:52:03.634387 sshd[5202]: Accepted publickey for core from 147.75.109.163 port 37570 ssh2: RSA SHA256:hWFbLZfpyLdN3yuB7pBWwnRO+bXJlsyzaawWuSZBTyk
Apr 30 12:52:03.636164 sshd-session[5202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:52:03.640635 systemd-logind[1888]: New session 25 of user core.
Apr 30 12:52:03.650090 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 30 12:52:03.720330 containerd[1910]: time="2025-04-30T12:52:03.720282343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dnjgh,Uid:3026b8f8-8e4c-43f4-9a7a-8dafb11c59c3,Namespace:kube-system,Attempt:0,}"
Apr 30 12:52:03.768922 containerd[1910]: time="2025-04-30T12:52:03.768395568Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 12:52:03.768922 containerd[1910]: time="2025-04-30T12:52:03.768527378Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 12:52:03.768922 containerd[1910]: time="2025-04-30T12:52:03.768571965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 12:52:03.769256 containerd[1910]: time="2025-04-30T12:52:03.768928357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 12:52:03.805150 systemd[1]: Started cri-containerd-3a2f423be340f3e176eabd76978f06c22abae1d30a5efe117c58203958983f9a.scope - libcontainer container 3a2f423be340f3e176eabd76978f06c22abae1d30a5efe117c58203958983f9a.
Apr 30 12:52:03.819725 kubelet[3170]: E0430 12:52:03.819677 3170 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 30 12:52:03.830486 sshd[5208]: Connection closed by 147.75.109.163 port 37570 Apr 30 12:52:03.829326 sshd-session[5202]: pam_unix(sshd:session): session closed for user core Apr 30 12:52:03.834385 systemd-logind[1888]: Session 25 logged out. Waiting for processes to exit. Apr 30 12:52:03.836494 systemd[1]: sshd@24-172.31.21.92:22-147.75.109.163:37570.service: Deactivated successfully. Apr 30 12:52:03.840855 systemd[1]: session-25.scope: Deactivated successfully. Apr 30 12:52:03.847531 systemd-logind[1888]: Removed session 25. Apr 30 12:52:03.857350 containerd[1910]: time="2025-04-30T12:52:03.856465929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dnjgh,Uid:3026b8f8-8e4c-43f4-9a7a-8dafb11c59c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a2f423be340f3e176eabd76978f06c22abae1d30a5efe117c58203958983f9a\"" Apr 30 12:52:03.863900 containerd[1910]: time="2025-04-30T12:52:03.863847279Z" level=info msg="CreateContainer within sandbox \"3a2f423be340f3e176eabd76978f06c22abae1d30a5efe117c58203958983f9a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 30 12:52:03.886187 systemd[1]: Started sshd@25-172.31.21.92:22-147.75.109.163:37572.service - OpenSSH per-connection server daemon (147.75.109.163:37572). 
Apr 30 12:52:03.923504 containerd[1910]: time="2025-04-30T12:52:03.923456390Z" level=info msg="CreateContainer within sandbox \"3a2f423be340f3e176eabd76978f06c22abae1d30a5efe117c58203958983f9a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9bca9605b53cd72bfe9ead1de145e97e42a9b930336e3fa738834f4893018c92\"" Apr 30 12:52:03.925977 containerd[1910]: time="2025-04-30T12:52:03.924969388Z" level=info msg="StartContainer for \"9bca9605b53cd72bfe9ead1de145e97e42a9b930336e3fa738834f4893018c92\"" Apr 30 12:52:03.993887 systemd[1]: Started cri-containerd-9bca9605b53cd72bfe9ead1de145e97e42a9b930336e3fa738834f4893018c92.scope - libcontainer container 9bca9605b53cd72bfe9ead1de145e97e42a9b930336e3fa738834f4893018c92. Apr 30 12:52:04.041258 containerd[1910]: time="2025-04-30T12:52:04.041133283Z" level=info msg="StartContainer for \"9bca9605b53cd72bfe9ead1de145e97e42a9b930336e3fa738834f4893018c92\" returns successfully" Apr 30 12:52:04.064433 systemd[1]: cri-containerd-9bca9605b53cd72bfe9ead1de145e97e42a9b930336e3fa738834f4893018c92.scope: Deactivated successfully. Apr 30 12:52:04.064950 systemd[1]: cri-containerd-9bca9605b53cd72bfe9ead1de145e97e42a9b930336e3fa738834f4893018c92.scope: Consumed 24ms CPU time, 9.5M memory peak, 3.2M read from disk. 
Apr 30 12:52:04.119940 containerd[1910]: time="2025-04-30T12:52:04.119821757Z" level=info msg="shim disconnected" id=9bca9605b53cd72bfe9ead1de145e97e42a9b930336e3fa738834f4893018c92 namespace=k8s.io Apr 30 12:52:04.119940 containerd[1910]: time="2025-04-30T12:52:04.119934342Z" level=warning msg="cleaning up after shim disconnected" id=9bca9605b53cd72bfe9ead1de145e97e42a9b930336e3fa738834f4893018c92 namespace=k8s.io Apr 30 12:52:04.119940 containerd[1910]: time="2025-04-30T12:52:04.119945722Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:52:04.148750 containerd[1910]: time="2025-04-30T12:52:04.148027159Z" level=info msg="CreateContainer within sandbox \"3a2f423be340f3e176eabd76978f06c22abae1d30a5efe117c58203958983f9a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 30 12:52:04.167865 sshd[5255]: Accepted publickey for core from 147.75.109.163 port 37572 ssh2: RSA SHA256:hWFbLZfpyLdN3yuB7pBWwnRO+bXJlsyzaawWuSZBTyk Apr 30 12:52:04.171654 sshd-session[5255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:52:04.182245 systemd-logind[1888]: New session 26 of user core. Apr 30 12:52:04.188240 systemd[1]: Started session-26.scope - Session 26 of User core. 
Apr 30 12:52:04.191623 containerd[1910]: time="2025-04-30T12:52:04.191488025Z" level=info msg="CreateContainer within sandbox \"3a2f423be340f3e176eabd76978f06c22abae1d30a5efe117c58203958983f9a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e10b88fc66fa592bf62406035f7e0ffca9bd4191e5eebe743adbc126f1a5ce17\"" Apr 30 12:52:04.193648 containerd[1910]: time="2025-04-30T12:52:04.193611636Z" level=info msg="StartContainer for \"e10b88fc66fa592bf62406035f7e0ffca9bd4191e5eebe743adbc126f1a5ce17\"" Apr 30 12:52:04.227119 systemd[1]: Started cri-containerd-e10b88fc66fa592bf62406035f7e0ffca9bd4191e5eebe743adbc126f1a5ce17.scope - libcontainer container e10b88fc66fa592bf62406035f7e0ffca9bd4191e5eebe743adbc126f1a5ce17. Apr 30 12:52:04.257056 containerd[1910]: time="2025-04-30T12:52:04.257002472Z" level=info msg="StartContainer for \"e10b88fc66fa592bf62406035f7e0ffca9bd4191e5eebe743adbc126f1a5ce17\" returns successfully" Apr 30 12:52:04.269075 systemd[1]: cri-containerd-e10b88fc66fa592bf62406035f7e0ffca9bd4191e5eebe743adbc126f1a5ce17.scope: Deactivated successfully. Apr 30 12:52:04.269822 systemd[1]: cri-containerd-e10b88fc66fa592bf62406035f7e0ffca9bd4191e5eebe743adbc126f1a5ce17.scope: Consumed 19ms CPU time, 7.2M memory peak, 2.1M read from disk. 
Apr 30 12:52:04.307072 containerd[1910]: time="2025-04-30T12:52:04.307002726Z" level=info msg="shim disconnected" id=e10b88fc66fa592bf62406035f7e0ffca9bd4191e5eebe743adbc126f1a5ce17 namespace=k8s.io Apr 30 12:52:04.307072 containerd[1910]: time="2025-04-30T12:52:04.307066404Z" level=warning msg="cleaning up after shim disconnected" id=e10b88fc66fa592bf62406035f7e0ffca9bd4191e5eebe743adbc126f1a5ce17 namespace=k8s.io Apr 30 12:52:04.307072 containerd[1910]: time="2025-04-30T12:52:04.307074653Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:52:05.151707 containerd[1910]: time="2025-04-30T12:52:05.151670334Z" level=info msg="CreateContainer within sandbox \"3a2f423be340f3e176eabd76978f06c22abae1d30a5efe117c58203958983f9a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 30 12:52:05.181191 containerd[1910]: time="2025-04-30T12:52:05.181131753Z" level=info msg="CreateContainer within sandbox \"3a2f423be340f3e176eabd76978f06c22abae1d30a5efe117c58203958983f9a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c439683b1ecb7490f73dd2ea5d22ae608f3946ebf5d5248ead0293ed8f790e3a\"" Apr 30 12:52:05.182751 containerd[1910]: time="2025-04-30T12:52:05.181582674Z" level=info msg="StartContainer for \"c439683b1ecb7490f73dd2ea5d22ae608f3946ebf5d5248ead0293ed8f790e3a\"" Apr 30 12:52:05.217076 systemd[1]: Started cri-containerd-c439683b1ecb7490f73dd2ea5d22ae608f3946ebf5d5248ead0293ed8f790e3a.scope - libcontainer container c439683b1ecb7490f73dd2ea5d22ae608f3946ebf5d5248ead0293ed8f790e3a. Apr 30 12:52:05.247931 containerd[1910]: time="2025-04-30T12:52:05.247595572Z" level=info msg="StartContainer for \"c439683b1ecb7490f73dd2ea5d22ae608f3946ebf5d5248ead0293ed8f790e3a\" returns successfully" Apr 30 12:52:05.255153 systemd[1]: cri-containerd-c439683b1ecb7490f73dd2ea5d22ae608f3946ebf5d5248ead0293ed8f790e3a.scope: Deactivated successfully. 
Apr 30 12:52:05.292293 containerd[1910]: time="2025-04-30T12:52:05.292233371Z" level=info msg="shim disconnected" id=c439683b1ecb7490f73dd2ea5d22ae608f3946ebf5d5248ead0293ed8f790e3a namespace=k8s.io Apr 30 12:52:05.292477 containerd[1910]: time="2025-04-30T12:52:05.292301345Z" level=warning msg="cleaning up after shim disconnected" id=c439683b1ecb7490f73dd2ea5d22ae608f3946ebf5d5248ead0293ed8f790e3a namespace=k8s.io Apr 30 12:52:05.292477 containerd[1910]: time="2025-04-30T12:52:05.292311941Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:52:05.566854 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c439683b1ecb7490f73dd2ea5d22ae608f3946ebf5d5248ead0293ed8f790e3a-rootfs.mount: Deactivated successfully. Apr 30 12:52:06.133768 kubelet[3170]: I0430 12:52:06.133699 3170 setters.go:602] "Node became not ready" node="ip-172-31-21-92" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-04-30T12:52:06Z","lastTransitionTime":"2025-04-30T12:52:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Apr 30 12:52:06.156086 containerd[1910]: time="2025-04-30T12:52:06.155715040Z" level=info msg="CreateContainer within sandbox \"3a2f423be340f3e176eabd76978f06c22abae1d30a5efe117c58203958983f9a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 30 12:52:06.178738 containerd[1910]: time="2025-04-30T12:52:06.178690473Z" level=info msg="CreateContainer within sandbox \"3a2f423be340f3e176eabd76978f06c22abae1d30a5efe117c58203958983f9a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"30ff109d49fe7401684de4002fbf4d851ff5c6e988b50bdb3a582b808656524a\"" Apr 30 12:52:06.179323 containerd[1910]: time="2025-04-30T12:52:06.179286634Z" level=info msg="StartContainer for \"30ff109d49fe7401684de4002fbf4d851ff5c6e988b50bdb3a582b808656524a\"" Apr 30 
12:52:06.225282 systemd[1]: Started cri-containerd-30ff109d49fe7401684de4002fbf4d851ff5c6e988b50bdb3a582b808656524a.scope - libcontainer container 30ff109d49fe7401684de4002fbf4d851ff5c6e988b50bdb3a582b808656524a. Apr 30 12:52:06.252556 systemd[1]: cri-containerd-30ff109d49fe7401684de4002fbf4d851ff5c6e988b50bdb3a582b808656524a.scope: Deactivated successfully. Apr 30 12:52:06.258348 containerd[1910]: time="2025-04-30T12:52:06.258300347Z" level=info msg="StartContainer for \"30ff109d49fe7401684de4002fbf4d851ff5c6e988b50bdb3a582b808656524a\" returns successfully" Apr 30 12:52:06.283480 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30ff109d49fe7401684de4002fbf4d851ff5c6e988b50bdb3a582b808656524a-rootfs.mount: Deactivated successfully. Apr 30 12:52:06.297803 containerd[1910]: time="2025-04-30T12:52:06.297733973Z" level=info msg="shim disconnected" id=30ff109d49fe7401684de4002fbf4d851ff5c6e988b50bdb3a582b808656524a namespace=k8s.io Apr 30 12:52:06.297803 containerd[1910]: time="2025-04-30T12:52:06.297781311Z" level=warning msg="cleaning up after shim disconnected" id=30ff109d49fe7401684de4002fbf4d851ff5c6e988b50bdb3a582b808656524a namespace=k8s.io Apr 30 12:52:06.297803 containerd[1910]: time="2025-04-30T12:52:06.297789253Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:52:07.160213 containerd[1910]: time="2025-04-30T12:52:07.160015810Z" level=info msg="CreateContainer within sandbox \"3a2f423be340f3e176eabd76978f06c22abae1d30a5efe117c58203958983f9a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 30 12:52:07.186528 containerd[1910]: time="2025-04-30T12:52:07.186487406Z" level=info msg="CreateContainer within sandbox \"3a2f423be340f3e176eabd76978f06c22abae1d30a5efe117c58203958983f9a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1335a2071f584d168008f1b1aa1f674152d05a23340e3d3f69fbbf3a0b049bee\"" Apr 30 12:52:07.187947 containerd[1910]: time="2025-04-30T12:52:07.187172160Z" level=info 
msg="StartContainer for \"1335a2071f584d168008f1b1aa1f674152d05a23340e3d3f69fbbf3a0b049bee\"" Apr 30 12:52:07.226062 systemd[1]: Started cri-containerd-1335a2071f584d168008f1b1aa1f674152d05a23340e3d3f69fbbf3a0b049bee.scope - libcontainer container 1335a2071f584d168008f1b1aa1f674152d05a23340e3d3f69fbbf3a0b049bee. Apr 30 12:52:07.261712 containerd[1910]: time="2025-04-30T12:52:07.261667585Z" level=info msg="StartContainer for \"1335a2071f584d168008f1b1aa1f674152d05a23340e3d3f69fbbf3a0b049bee\" returns successfully" Apr 30 12:52:07.911865 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Apr 30 12:52:08.175346 systemd[1]: run-containerd-runc-k8s.io-1335a2071f584d168008f1b1aa1f674152d05a23340e3d3f69fbbf3a0b049bee-runc.Z2kbzh.mount: Deactivated successfully. Apr 30 12:52:08.189628 kubelet[3170]: I0430 12:52:08.183020 3170 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dnjgh" podStartSLOduration=5.182999881 podStartE2EDuration="5.182999881s" podCreationTimestamp="2025-04-30 12:52:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:52:08.179218011 +0000 UTC m=+94.580365613" watchObservedRunningTime="2025-04-30 12:52:08.182999881 +0000 UTC m=+94.584147484" Apr 30 12:52:10.791604 (udev-worker)[5577]: Network interface NamePolicy= disabled on kernel command line. Apr 30 12:52:10.791604 (udev-worker)[6062]: Network interface NamePolicy= disabled on kernel command line. Apr 30 12:52:10.808099 systemd-networkd[1820]: lxc_health: Link UP Apr 30 12:52:10.820023 systemd-networkd[1820]: lxc_health: Gained carrier Apr 30 12:52:11.112045 systemd[1]: run-containerd-runc-k8s.io-1335a2071f584d168008f1b1aa1f674152d05a23340e3d3f69fbbf3a0b049bee-runc.9LnVpZ.mount: Deactivated successfully. 
Apr 30 12:52:12.436047 systemd-networkd[1820]: lxc_health: Gained IPv6LL Apr 30 12:52:13.457219 kubelet[3170]: E0430 12:52:13.457157 3170 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:50264->127.0.0.1:34517: write tcp 127.0.0.1:50264->127.0.0.1:34517: write: broken pipe Apr 30 12:52:14.911785 ntpd[1883]: Listen normally on 14 lxc_health [fe80::24c2:87ff:fe02:d1cb%14]:123 Apr 30 12:52:14.912336 ntpd[1883]: 30 Apr 12:52:14 ntpd[1883]: Listen normally on 14 lxc_health [fe80::24c2:87ff:fe02:d1cb%14]:123 Apr 30 12:52:17.783997 sshd[5321]: Connection closed by 147.75.109.163 port 37572 Apr 30 12:52:17.786373 sshd-session[5255]: pam_unix(sshd:session): session closed for user core Apr 30 12:52:17.790098 systemd[1]: sshd@25-172.31.21.92:22-147.75.109.163:37572.service: Deactivated successfully. Apr 30 12:52:17.792082 systemd[1]: session-26.scope: Deactivated successfully. Apr 30 12:52:17.793514 systemd-logind[1888]: Session 26 logged out. Waiting for processes to exit. Apr 30 12:52:17.794573 systemd-logind[1888]: Removed session 26. Apr 30 12:52:31.450217 systemd[1]: cri-containerd-0f303915eaf9586ef7ffa6f0813e482e5708c162edb3098e140c06bf67406195.scope: Deactivated successfully. Apr 30 12:52:31.451030 systemd[1]: cri-containerd-0f303915eaf9586ef7ffa6f0813e482e5708c162edb3098e140c06bf67406195.scope: Consumed 2.879s CPU time, 75.9M memory peak, 31.6M read from disk. Apr 30 12:52:31.473527 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f303915eaf9586ef7ffa6f0813e482e5708c162edb3098e140c06bf67406195-rootfs.mount: Deactivated successfully. 
Apr 30 12:52:31.502428 containerd[1910]: time="2025-04-30T12:52:31.502367306Z" level=info msg="shim disconnected" id=0f303915eaf9586ef7ffa6f0813e482e5708c162edb3098e140c06bf67406195 namespace=k8s.io Apr 30 12:52:31.502428 containerd[1910]: time="2025-04-30T12:52:31.502422674Z" level=warning msg="cleaning up after shim disconnected" id=0f303915eaf9586ef7ffa6f0813e482e5708c162edb3098e140c06bf67406195 namespace=k8s.io Apr 30 12:52:31.502428 containerd[1910]: time="2025-04-30T12:52:31.502431367Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:52:32.210987 kubelet[3170]: I0430 12:52:32.210946 3170 scope.go:117] "RemoveContainer" containerID="0f303915eaf9586ef7ffa6f0813e482e5708c162edb3098e140c06bf67406195" Apr 30 12:52:32.215860 containerd[1910]: time="2025-04-30T12:52:32.215800574Z" level=info msg="CreateContainer within sandbox \"fbfb6ab233f8870d4eea8607e60153a5393df8bf79d7db389265a36ee5c581c0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Apr 30 12:52:32.238575 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount718041450.mount: Deactivated successfully. Apr 30 12:52:32.246495 containerd[1910]: time="2025-04-30T12:52:32.246455372Z" level=info msg="CreateContainer within sandbox \"fbfb6ab233f8870d4eea8607e60153a5393df8bf79d7db389265a36ee5c581c0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"5e2559aad61eb23ad07478fc49a05d202ddb6285c3b493b53cc5036f52579163\"" Apr 30 12:52:32.247965 containerd[1910]: time="2025-04-30T12:52:32.246923535Z" level=info msg="StartContainer for \"5e2559aad61eb23ad07478fc49a05d202ddb6285c3b493b53cc5036f52579163\"" Apr 30 12:52:32.275978 systemd[1]: Started cri-containerd-5e2559aad61eb23ad07478fc49a05d202ddb6285c3b493b53cc5036f52579163.scope - libcontainer container 5e2559aad61eb23ad07478fc49a05d202ddb6285c3b493b53cc5036f52579163. 
Apr 30 12:52:32.321274 containerd[1910]: time="2025-04-30T12:52:32.321224730Z" level=info msg="StartContainer for \"5e2559aad61eb23ad07478fc49a05d202ddb6285c3b493b53cc5036f52579163\" returns successfully" Apr 30 12:52:33.733540 containerd[1910]: time="2025-04-30T12:52:33.733171071Z" level=info msg="StopPodSandbox for \"720480e2e85c68912bb2331a0412cf549614cf107c82040fd7066fb4a273b213\"" Apr 30 12:52:33.733540 containerd[1910]: time="2025-04-30T12:52:33.733273872Z" level=info msg="TearDown network for sandbox \"720480e2e85c68912bb2331a0412cf549614cf107c82040fd7066fb4a273b213\" successfully" Apr 30 12:52:33.733540 containerd[1910]: time="2025-04-30T12:52:33.733289376Z" level=info msg="StopPodSandbox for \"720480e2e85c68912bb2331a0412cf549614cf107c82040fd7066fb4a273b213\" returns successfully" Apr 30 12:52:33.735578 containerd[1910]: time="2025-04-30T12:52:33.734635047Z" level=info msg="RemovePodSandbox for \"720480e2e85c68912bb2331a0412cf549614cf107c82040fd7066fb4a273b213\"" Apr 30 12:52:33.735578 containerd[1910]: time="2025-04-30T12:52:33.734672175Z" level=info msg="Forcibly stopping sandbox \"720480e2e85c68912bb2331a0412cf549614cf107c82040fd7066fb4a273b213\"" Apr 30 12:52:33.735578 containerd[1910]: time="2025-04-30T12:52:33.734740480Z" level=info msg="TearDown network for sandbox \"720480e2e85c68912bb2331a0412cf549614cf107c82040fd7066fb4a273b213\" successfully" Apr 30 12:52:33.744267 containerd[1910]: time="2025-04-30T12:52:33.744219062Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"720480e2e85c68912bb2331a0412cf549614cf107c82040fd7066fb4a273b213\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 12:52:33.744555 containerd[1910]: time="2025-04-30T12:52:33.744534141Z" level=info msg="RemovePodSandbox \"720480e2e85c68912bb2331a0412cf549614cf107c82040fd7066fb4a273b213\" returns successfully" Apr 30 12:52:33.745323 containerd[1910]: time="2025-04-30T12:52:33.745299517Z" level=info msg="StopPodSandbox for \"db61eea685e686d2a398d957d69bd256527f02f5dd1e55130753e57f18de5d5e\"" Apr 30 12:52:33.745728 containerd[1910]: time="2025-04-30T12:52:33.745498117Z" level=info msg="TearDown network for sandbox \"db61eea685e686d2a398d957d69bd256527f02f5dd1e55130753e57f18de5d5e\" successfully" Apr 30 12:52:33.745728 containerd[1910]: time="2025-04-30T12:52:33.745516392Z" level=info msg="StopPodSandbox for \"db61eea685e686d2a398d957d69bd256527f02f5dd1e55130753e57f18de5d5e\" returns successfully" Apr 30 12:52:33.747870 containerd[1910]: time="2025-04-30T12:52:33.746085501Z" level=info msg="RemovePodSandbox for \"db61eea685e686d2a398d957d69bd256527f02f5dd1e55130753e57f18de5d5e\"" Apr 30 12:52:33.747870 containerd[1910]: time="2025-04-30T12:52:33.746114159Z" level=info msg="Forcibly stopping sandbox \"db61eea685e686d2a398d957d69bd256527f02f5dd1e55130753e57f18de5d5e\"" Apr 30 12:52:33.747870 containerd[1910]: time="2025-04-30T12:52:33.746179081Z" level=info msg="TearDown network for sandbox \"db61eea685e686d2a398d957d69bd256527f02f5dd1e55130753e57f18de5d5e\" successfully" Apr 30 12:52:33.751751 containerd[1910]: time="2025-04-30T12:52:33.751720287Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"db61eea685e686d2a398d957d69bd256527f02f5dd1e55130753e57f18de5d5e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 12:52:33.751961 containerd[1910]: time="2025-04-30T12:52:33.751942213Z" level=info msg="RemovePodSandbox \"db61eea685e686d2a398d957d69bd256527f02f5dd1e55130753e57f18de5d5e\" returns successfully" Apr 30 12:52:36.069044 kubelet[3170]: E0430 12:52:36.068765 3170 controller.go:195] "Failed to update lease" err="Put \"https://172.31.21.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-92?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 30 12:52:36.546363 systemd[1]: cri-containerd-417daa56ed09a27109b09b1ba1729daec7edc10920e40d9bb623c33ede6dacda.scope: Deactivated successfully. Apr 30 12:52:36.546940 systemd[1]: cri-containerd-417daa56ed09a27109b09b1ba1729daec7edc10920e40d9bb623c33ede6dacda.scope: Consumed 2.123s CPU time, 30.8M memory peak, 13.2M read from disk. Apr 30 12:52:36.571983 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-417daa56ed09a27109b09b1ba1729daec7edc10920e40d9bb623c33ede6dacda-rootfs.mount: Deactivated successfully. 
Apr 30 12:52:36.599729 containerd[1910]: time="2025-04-30T12:52:36.599666490Z" level=info msg="shim disconnected" id=417daa56ed09a27109b09b1ba1729daec7edc10920e40d9bb623c33ede6dacda namespace=k8s.io Apr 30 12:52:36.599729 containerd[1910]: time="2025-04-30T12:52:36.599714021Z" level=warning msg="cleaning up after shim disconnected" id=417daa56ed09a27109b09b1ba1729daec7edc10920e40d9bb623c33ede6dacda namespace=k8s.io Apr 30 12:52:36.599729 containerd[1910]: time="2025-04-30T12:52:36.599722126Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:52:37.232755 kubelet[3170]: I0430 12:52:37.232724 3170 scope.go:117] "RemoveContainer" containerID="417daa56ed09a27109b09b1ba1729daec7edc10920e40d9bb623c33ede6dacda" Apr 30 12:52:37.234660 containerd[1910]: time="2025-04-30T12:52:37.234621796Z" level=info msg="CreateContainer within sandbox \"67365fcdb95a26bd4e22a84923d18ea9c1c9eabe0decf1256838a0f7af82e339\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Apr 30 12:52:37.255933 containerd[1910]: time="2025-04-30T12:52:37.255889447Z" level=info msg="CreateContainer within sandbox \"67365fcdb95a26bd4e22a84923d18ea9c1c9eabe0decf1256838a0f7af82e339\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"76e7ec5534fa5c279020b55e3754679b04d4e34514083d9656ddb8efca810b4a\"" Apr 30 12:52:37.256415 containerd[1910]: time="2025-04-30T12:52:37.256386406Z" level=info msg="StartContainer for \"76e7ec5534fa5c279020b55e3754679b04d4e34514083d9656ddb8efca810b4a\"" Apr 30 12:52:37.291070 systemd[1]: Started cri-containerd-76e7ec5534fa5c279020b55e3754679b04d4e34514083d9656ddb8efca810b4a.scope - libcontainer container 76e7ec5534fa5c279020b55e3754679b04d4e34514083d9656ddb8efca810b4a. 
Apr 30 12:52:37.348432 containerd[1910]: time="2025-04-30T12:52:37.348387367Z" level=info msg="StartContainer for \"76e7ec5534fa5c279020b55e3754679b04d4e34514083d9656ddb8efca810b4a\" returns successfully" Apr 30 12:52:37.570887 systemd[1]: run-containerd-runc-k8s.io-76e7ec5534fa5c279020b55e3754679b04d4e34514083d9656ddb8efca810b4a-runc.AJGp3Q.mount: Deactivated successfully.