Apr 30 12:49:47.909912 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Tue Apr 29 22:26:36 -00 2025 Apr 30 12:49:47.909946 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=95dd3de5eb34971546a976dc51c66bc73cf59b888896e27767c0cbf245cb98fe Apr 30 12:49:47.909963 kernel: BIOS-provided physical RAM map: Apr 30 12:49:47.909974 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Apr 30 12:49:47.909984 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable Apr 30 12:49:47.909994 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved Apr 30 12:49:47.910007 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Apr 30 12:49:47.910018 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Apr 30 12:49:47.910029 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable Apr 30 12:49:47.910039 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Apr 30 12:49:47.910053 kernel: NX (Execute Disable) protection: active Apr 30 12:49:47.910064 kernel: APIC: Static calls initialized Apr 30 12:49:47.910075 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable Apr 30 12:49:47.910086 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable Apr 30 12:49:47.910100 kernel: extended physical RAM map: Apr 30 12:49:47.910111 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Apr 30 12:49:47.910126 kernel: reserve setup_data: [mem 
0x0000000000100000-0x00000000768c0017] usable Apr 30 12:49:47.910138 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable Apr 30 12:49:47.910150 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable Apr 30 12:49:47.910162 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved Apr 30 12:49:47.910174 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Apr 30 12:49:47.910185 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Apr 30 12:49:47.910197 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable Apr 30 12:49:47.910209 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Apr 30 12:49:47.910220 kernel: efi: EFI v2.7 by EDK II Apr 30 12:49:47.910232 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518 Apr 30 12:49:47.910247 kernel: secureboot: Secure boot disabled Apr 30 12:49:47.910258 kernel: SMBIOS 2.7 present. 
Apr 30 12:49:47.910270 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Apr 30 12:49:47.910282 kernel: Hypervisor detected: KVM Apr 30 12:49:47.910294 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Apr 30 12:49:47.910305 kernel: kvm-clock: using sched offset of 3793664496 cycles Apr 30 12:49:47.910317 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Apr 30 12:49:47.910330 kernel: tsc: Detected 2499.996 MHz processor Apr 30 12:49:47.910343 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Apr 30 12:49:47.910355 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Apr 30 12:49:47.910367 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Apr 30 12:49:47.910382 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Apr 30 12:49:47.910394 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Apr 30 12:49:47.910407 kernel: Using GB pages for direct mapping Apr 30 12:49:47.910424 kernel: ACPI: Early table checksum verification disabled Apr 30 12:49:47.910438 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON) Apr 30 12:49:47.910451 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013) Apr 30 12:49:47.910479 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Apr 30 12:49:47.910492 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Apr 30 12:49:47.910505 kernel: ACPI: FACS 0x00000000789D0000 000040 Apr 30 12:49:47.910518 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Apr 30 12:49:47.910531 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Apr 30 12:49:47.910544 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Apr 30 12:49:47.910557 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 
00000001 AMZN 00000001) Apr 30 12:49:47.910570 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Apr 30 12:49:47.910586 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Apr 30 12:49:47.910599 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Apr 30 12:49:47.910612 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013) Apr 30 12:49:47.910625 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113] Apr 30 12:49:47.910638 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159] Apr 30 12:49:47.910651 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f] Apr 30 12:49:47.910663 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027] Apr 30 12:49:47.910676 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b] Apr 30 12:49:47.910691 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075] Apr 30 12:49:47.910705 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f] Apr 30 12:49:47.910718 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037] Apr 30 12:49:47.910731 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758] Apr 30 12:49:47.910744 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e] Apr 30 12:49:47.910757 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037] Apr 30 12:49:47.910770 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Apr 30 12:49:47.910782 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Apr 30 12:49:47.910795 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Apr 30 12:49:47.910809 kernel: NUMA: Initialized distance table, cnt=1 Apr 30 12:49:47.910832 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff] Apr 30 12:49:47.910846 kernel: Zone ranges: Apr 30 12:49:47.910861 kernel: DMA [mem 
0x0000000000001000-0x0000000000ffffff] Apr 30 12:49:47.910876 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff] Apr 30 12:49:47.910891 kernel: Normal empty Apr 30 12:49:47.910906 kernel: Movable zone start for each node Apr 30 12:49:47.910921 kernel: Early memory node ranges Apr 30 12:49:47.910936 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Apr 30 12:49:47.910950 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff] Apr 30 12:49:47.910969 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff] Apr 30 12:49:47.910984 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff] Apr 30 12:49:47.910999 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 30 12:49:47.911014 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Apr 30 12:49:47.911029 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Apr 30 12:49:47.911044 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges Apr 30 12:49:47.911059 kernel: ACPI: PM-Timer IO Port: 0xb008 Apr 30 12:49:47.911074 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Apr 30 12:49:47.911089 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Apr 30 12:49:47.911107 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Apr 30 12:49:47.911122 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 30 12:49:47.911137 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Apr 30 12:49:47.911152 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Apr 30 12:49:47.911167 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 30 12:49:47.911182 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Apr 30 12:49:47.911196 kernel: TSC deadline timer available Apr 30 12:49:47.911211 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Apr 30 12:49:47.911226 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Apr 30 12:49:47.911241 kernel: [mem 
0x7ca00000-0xffffffff] available for PCI devices Apr 30 12:49:47.911258 kernel: Booting paravirtualized kernel on KVM Apr 30 12:49:47.911273 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 30 12:49:47.911288 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Apr 30 12:49:47.911303 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576 Apr 30 12:49:47.911318 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152 Apr 30 12:49:47.911333 kernel: pcpu-alloc: [0] 0 1 Apr 30 12:49:47.911348 kernel: kvm-guest: PV spinlocks enabled Apr 30 12:49:47.911363 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Apr 30 12:49:47.911384 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=95dd3de5eb34971546a976dc51c66bc73cf59b888896e27767c0cbf245cb98fe Apr 30 12:49:47.911399 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Apr 30 12:49:47.911414 kernel: random: crng init done Apr 30 12:49:47.911429 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 30 12:49:47.911444 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Apr 30 12:49:47.911471 kernel: Fallback order for Node 0: 0 Apr 30 12:49:47.911487 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 501318 Apr 30 12:49:47.911502 kernel: Policy zone: DMA32 Apr 30 12:49:47.911520 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 30 12:49:47.911536 kernel: Memory: 1872536K/2037804K available (14336K kernel code, 2295K rwdata, 22864K rodata, 43484K init, 1592K bss, 165012K reserved, 0K cma-reserved) Apr 30 12:49:47.911551 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Apr 30 12:49:47.911566 kernel: Kernel/User page tables isolation: enabled Apr 30 12:49:47.911581 kernel: ftrace: allocating 37918 entries in 149 pages Apr 30 12:49:47.911608 kernel: ftrace: allocated 149 pages with 4 groups Apr 30 12:49:47.911626 kernel: Dynamic Preempt: voluntary Apr 30 12:49:47.911642 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 30 12:49:47.911660 kernel: rcu: RCU event tracing is enabled. Apr 30 12:49:47.911676 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Apr 30 12:49:47.911692 kernel: Trampoline variant of Tasks RCU enabled. Apr 30 12:49:47.911708 kernel: Rude variant of Tasks RCU enabled. Apr 30 12:49:47.911728 kernel: Tracing variant of Tasks RCU enabled. Apr 30 12:49:47.911744 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 30 12:49:47.911761 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Apr 30 12:49:47.911777 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Apr 30 12:49:47.911793 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Apr 30 12:49:47.911813 kernel: Console: colour dummy device 80x25 Apr 30 12:49:47.911830 kernel: printk: console [tty0] enabled Apr 30 12:49:47.911846 kernel: printk: console [ttyS0] enabled Apr 30 12:49:47.911862 kernel: ACPI: Core revision 20230628 Apr 30 12:49:47.911879 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Apr 30 12:49:47.911895 kernel: APIC: Switch to symmetric I/O mode setup Apr 30 12:49:47.911911 kernel: x2apic enabled Apr 30 12:49:47.911928 kernel: APIC: Switched APIC routing to: physical x2apic Apr 30 12:49:47.911944 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Apr 30 12:49:47.911964 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996) Apr 30 12:49:47.911980 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Apr 30 12:49:47.911996 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Apr 30 12:49:47.912012 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 30 12:49:47.912028 kernel: Spectre V2 : Mitigation: Retpolines Apr 30 12:49:47.912044 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Apr 30 12:49:47.912060 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Apr 30 12:49:47.912076 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Apr 30 12:49:47.912092 kernel: RETBleed: Vulnerable Apr 30 12:49:47.912108 kernel: Speculative Store Bypass: Vulnerable Apr 30 12:49:47.912127 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Apr 30 12:49:47.912143 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Apr 30 12:49:47.912159 kernel: GDS: Unknown: Dependent on hypervisor status Apr 30 12:49:47.912175 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 30 12:49:47.912190 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 30 12:49:47.912207 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 30 12:49:47.912223 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Apr 30 12:49:47.912239 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Apr 30 12:49:47.912256 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Apr 30 12:49:47.912271 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Apr 30 12:49:47.912287 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Apr 30 12:49:47.912306 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Apr 30 12:49:47.912322 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 30 12:49:47.912338 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Apr 30 12:49:47.912354 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Apr 30 12:49:47.912370 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Apr 30 12:49:47.912385 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Apr 30 12:49:47.912401 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Apr 30 12:49:47.912417 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Apr 30 12:49:47.912433 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. 
Apr 30 12:49:47.912449 kernel: Freeing SMP alternatives memory: 32K Apr 30 12:49:47.912483 kernel: pid_max: default: 32768 minimum: 301 Apr 30 12:49:47.912496 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 30 12:49:47.912514 kernel: landlock: Up and running. Apr 30 12:49:47.912527 kernel: SELinux: Initializing. Apr 30 12:49:47.912542 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Apr 30 12:49:47.912558 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Apr 30 12:49:47.912573 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Apr 30 12:49:47.912588 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 30 12:49:47.912604 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 30 12:49:47.912620 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 30 12:49:47.912636 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Apr 30 12:49:47.912654 kernel: signal: max sigframe size: 3632 Apr 30 12:49:47.912670 kernel: rcu: Hierarchical SRCU implementation. Apr 30 12:49:47.912686 kernel: rcu: Max phase no-delay instances is 400. Apr 30 12:49:47.912701 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Apr 30 12:49:47.912718 kernel: smp: Bringing up secondary CPUs ... Apr 30 12:49:47.912733 kernel: smpboot: x86: Booting SMP configuration: Apr 30 12:49:47.912749 kernel: .... node #0, CPUs: #1 Apr 30 12:49:47.912765 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Apr 30 12:49:47.912782 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. 
See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Apr 30 12:49:47.912800 kernel: smp: Brought up 1 node, 2 CPUs Apr 30 12:49:47.912816 kernel: smpboot: Max logical packages: 1 Apr 30 12:49:47.912831 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS) Apr 30 12:49:47.912846 kernel: devtmpfs: initialized Apr 30 12:49:47.912861 kernel: x86/mm: Memory block size: 128MB Apr 30 12:49:47.912876 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes) Apr 30 12:49:47.912892 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 30 12:49:47.912908 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Apr 30 12:49:47.912923 kernel: pinctrl core: initialized pinctrl subsystem Apr 30 12:49:47.912942 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 30 12:49:47.912957 kernel: audit: initializing netlink subsys (disabled) Apr 30 12:49:47.912973 kernel: audit: type=2000 audit(1746017388.266:1): state=initialized audit_enabled=0 res=1 Apr 30 12:49:47.912988 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 30 12:49:47.913004 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 30 12:49:47.913019 kernel: cpuidle: using governor menu Apr 30 12:49:47.913035 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 30 12:49:47.913050 kernel: dca service started, version 1.12.1 Apr 30 12:49:47.913066 kernel: PCI: Using configuration type 1 for base access Apr 30 12:49:47.913086 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Apr 30 12:49:47.913101 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 30 12:49:47.913116 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Apr 30 12:49:47.913130 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 30 12:49:47.913143 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 30 12:49:47.913158 kernel: ACPI: Added _OSI(Module Device) Apr 30 12:49:47.913175 kernel: ACPI: Added _OSI(Processor Device) Apr 30 12:49:47.913191 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Apr 30 12:49:47.913207 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 30 12:49:47.913227 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Apr 30 12:49:47.913244 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Apr 30 12:49:47.913260 kernel: ACPI: Interpreter enabled Apr 30 12:49:47.913276 kernel: ACPI: PM: (supports S0 S5) Apr 30 12:49:47.913292 kernel: ACPI: Using IOAPIC for interrupt routing Apr 30 12:49:47.913309 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 30 12:49:47.913326 kernel: PCI: Using E820 reservations for host bridge windows Apr 30 12:49:47.913342 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Apr 30 12:49:47.913358 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Apr 30 12:49:47.913615 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Apr 30 12:49:47.913777 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Apr 30 12:49:47.913915 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Apr 30 12:49:47.913935 kernel: acpiphp: Slot [3] registered Apr 30 12:49:47.913952 kernel: acpiphp: Slot [4] registered Apr 30 12:49:47.913968 kernel: acpiphp: Slot [5] registered Apr 30 12:49:47.913984 kernel: acpiphp: Slot [6] registered Apr 30 12:49:47.914005 
kernel: acpiphp: Slot [7] registered Apr 30 12:49:47.914021 kernel: acpiphp: Slot [8] registered Apr 30 12:49:47.914036 kernel: acpiphp: Slot [9] registered Apr 30 12:49:47.914052 kernel: acpiphp: Slot [10] registered Apr 30 12:49:47.914067 kernel: acpiphp: Slot [11] registered Apr 30 12:49:47.914081 kernel: acpiphp: Slot [12] registered Apr 30 12:49:47.914095 kernel: acpiphp: Slot [13] registered Apr 30 12:49:47.914110 kernel: acpiphp: Slot [14] registered Apr 30 12:49:47.914123 kernel: acpiphp: Slot [15] registered Apr 30 12:49:47.914137 kernel: acpiphp: Slot [16] registered Apr 30 12:49:47.914164 kernel: acpiphp: Slot [17] registered Apr 30 12:49:47.914184 kernel: acpiphp: Slot [18] registered Apr 30 12:49:47.914203 kernel: acpiphp: Slot [19] registered Apr 30 12:49:47.914222 kernel: acpiphp: Slot [20] registered Apr 30 12:49:47.914237 kernel: acpiphp: Slot [21] registered Apr 30 12:49:47.914252 kernel: acpiphp: Slot [22] registered Apr 30 12:49:47.914268 kernel: acpiphp: Slot [23] registered Apr 30 12:49:47.914283 kernel: acpiphp: Slot [24] registered Apr 30 12:49:47.914299 kernel: acpiphp: Slot [25] registered Apr 30 12:49:47.914318 kernel: acpiphp: Slot [26] registered Apr 30 12:49:47.914333 kernel: acpiphp: Slot [27] registered Apr 30 12:49:47.914349 kernel: acpiphp: Slot [28] registered Apr 30 12:49:47.914364 kernel: acpiphp: Slot [29] registered Apr 30 12:49:47.914379 kernel: acpiphp: Slot [30] registered Apr 30 12:49:47.914395 kernel: acpiphp: Slot [31] registered Apr 30 12:49:47.914410 kernel: PCI host bridge to bus 0000:00 Apr 30 12:49:47.915624 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Apr 30 12:49:47.915779 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Apr 30 12:49:47.915910 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Apr 30 12:49:47.916032 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Apr 30 12:49:47.916153 kernel: pci_bus 0000:00: root 
bus resource [mem 0x100000000-0x2000ffffffff window] Apr 30 12:49:47.916274 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Apr 30 12:49:47.916431 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Apr 30 12:49:47.917649 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Apr 30 12:49:47.917824 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Apr 30 12:49:47.917957 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Apr 30 12:49:47.918089 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Apr 30 12:49:47.918225 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Apr 30 12:49:47.918362 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Apr 30 12:49:47.919555 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Apr 30 12:49:47.919705 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Apr 30 12:49:47.919844 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Apr 30 12:49:47.919986 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Apr 30 12:49:47.920122 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref] Apr 30 12:49:47.920258 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Apr 30 12:49:47.920393 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb Apr 30 12:49:47.921643 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Apr 30 12:49:47.922524 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Apr 30 12:49:47.922693 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff] Apr 30 12:49:47.922838 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Apr 30 12:49:47.922971 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff] Apr 30 12:49:47.922991 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Apr 30 12:49:47.923008 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Apr 30 12:49:47.923023 kernel: ACPI: PCI: Interrupt 
link LNKC configured for IRQ 11 Apr 30 12:49:47.923038 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Apr 30 12:49:47.923059 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Apr 30 12:49:47.923074 kernel: iommu: Default domain type: Translated Apr 30 12:49:47.923090 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 30 12:49:47.923105 kernel: efivars: Registered efivars operations Apr 30 12:49:47.923120 kernel: PCI: Using ACPI for IRQ routing Apr 30 12:49:47.923135 kernel: PCI: pci_cache_line_size set to 64 bytes Apr 30 12:49:47.923151 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff] Apr 30 12:49:47.923166 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff] Apr 30 12:49:47.923181 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff] Apr 30 12:49:47.923316 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Apr 30 12:49:47.923452 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Apr 30 12:49:47.924639 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Apr 30 12:49:47.924660 kernel: vgaarb: loaded Apr 30 12:49:47.924676 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Apr 30 12:49:47.924691 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Apr 30 12:49:47.924706 kernel: clocksource: Switched to clocksource kvm-clock Apr 30 12:49:47.924721 kernel: VFS: Disk quotas dquot_6.6.0 Apr 30 12:49:47.924737 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 30 12:49:47.924756 kernel: pnp: PnP ACPI init Apr 30 12:49:47.924771 kernel: pnp: PnP ACPI: found 5 devices Apr 30 12:49:47.924784 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 30 12:49:47.924798 kernel: NET: Registered PF_INET protocol family Apr 30 12:49:47.924811 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 30 12:49:47.924824 kernel: tcp_listen_portaddr_hash hash 
table entries: 1024 (order: 2, 16384 bytes, linear) Apr 30 12:49:47.924839 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 30 12:49:47.924853 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Apr 30 12:49:47.924868 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Apr 30 12:49:47.924887 kernel: TCP: Hash tables configured (established 16384 bind 16384) Apr 30 12:49:47.924902 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Apr 30 12:49:47.924917 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Apr 30 12:49:47.924932 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 30 12:49:47.924947 kernel: NET: Registered PF_XDP protocol family Apr 30 12:49:47.925115 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 30 12:49:47.925238 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 30 12:49:47.925358 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 30 12:49:47.926525 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Apr 30 12:49:47.926665 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window] Apr 30 12:49:47.926808 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Apr 30 12:49:47.926830 kernel: PCI: CLS 0 bytes, default 64 Apr 30 12:49:47.926848 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Apr 30 12:49:47.926864 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Apr 30 12:49:47.926880 kernel: clocksource: Switched to clocksource tsc Apr 30 12:49:47.926895 kernel: Initialise system trusted keyrings Apr 30 12:49:47.926936 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Apr 30 12:49:47.926969 kernel: Key type asymmetric registered Apr 30 12:49:47.926983 kernel: Asymmetric key parser 'x509' registered Apr 30 
12:49:47.926998 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 30 12:49:47.927014 kernel: io scheduler mq-deadline registered Apr 30 12:49:47.927030 kernel: io scheduler kyber registered Apr 30 12:49:47.927046 kernel: io scheduler bfq registered Apr 30 12:49:47.927059 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 30 12:49:47.927073 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 30 12:49:47.927088 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 30 12:49:47.927108 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 30 12:49:47.927124 kernel: i8042: Warning: Keylock active Apr 30 12:49:47.927138 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 30 12:49:47.927153 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 30 12:49:47.927321 kernel: rtc_cmos 00:00: RTC can wake from S4 Apr 30 12:49:47.927467 kernel: rtc_cmos 00:00: registered as rtc0 Apr 30 12:49:47.929650 kernel: rtc_cmos 00:00: setting system clock to 2025-04-30T12:49:47 UTC (1746017387) Apr 30 12:49:47.929806 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Apr 30 12:49:47.929828 kernel: intel_pstate: CPU model not supported Apr 30 12:49:47.929845 kernel: efifb: probing for efifb Apr 30 12:49:47.929861 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k Apr 30 12:49:47.929903 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Apr 30 12:49:47.929923 kernel: efifb: scrolling: redraw Apr 30 12:49:47.929941 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Apr 30 12:49:47.929959 kernel: Console: switching to colour frame buffer device 100x37 Apr 30 12:49:47.929976 kernel: fb0: EFI VGA frame buffer device Apr 30 12:49:47.929996 kernel: pstore: Using crash dump compression: deflate Apr 30 12:49:47.930014 kernel: pstore: Registered efi_pstore as persistent store backend Apr 30 12:49:47.930031 kernel: NET: Registered PF_INET6 
protocol family Apr 30 12:49:47.930048 kernel: Segment Routing with IPv6 Apr 30 12:49:47.930065 kernel: In-situ OAM (IOAM) with IPv6 Apr 30 12:49:47.930081 kernel: NET: Registered PF_PACKET protocol family Apr 30 12:49:47.930100 kernel: Key type dns_resolver registered Apr 30 12:49:47.930115 kernel: IPI shorthand broadcast: enabled Apr 30 12:49:47.930132 kernel: sched_clock: Marking stable (462003315, 132729672)->(664669157, -69936170) Apr 30 12:49:47.930149 kernel: registered taskstats version 1 Apr 30 12:49:47.930169 kernel: Loading compiled-in X.509 certificates Apr 30 12:49:47.930186 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 10d2d341d26c1df942e743344427c053ef3a2a5f' Apr 30 12:49:47.930202 kernel: Key type .fscrypt registered Apr 30 12:49:47.930219 kernel: Key type fscrypt-provisioning registered Apr 30 12:49:47.930235 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 30 12:49:47.930251 kernel: ima: Allocated hash algorithm: sha1 Apr 30 12:49:47.930268 kernel: ima: No architecture policies found Apr 30 12:49:47.930285 kernel: clk: Disabling unused clocks Apr 30 12:49:47.930303 kernel: Freeing unused kernel image (initmem) memory: 43484K Apr 30 12:49:47.930319 kernel: Write protecting the kernel read-only data: 38912k Apr 30 12:49:47.930335 kernel: Freeing unused kernel image (rodata/data gap) memory: 1712K Apr 30 12:49:47.930352 kernel: Run /init as init process Apr 30 12:49:47.930368 kernel: with arguments: Apr 30 12:49:47.930384 kernel: /init Apr 30 12:49:47.930400 kernel: with environment: Apr 30 12:49:47.930417 kernel: HOME=/ Apr 30 12:49:47.930432 kernel: TERM=linux Apr 30 12:49:47.930452 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Apr 30 12:49:47.930513 systemd[1]: Successfully made /usr/ read-only. 
Apr 30 12:49:47.930539 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 30 12:49:47.930554 systemd[1]: Detected virtualization amazon.
Apr 30 12:49:47.930569 systemd[1]: Detected architecture x86-64.
Apr 30 12:49:47.930587 systemd[1]: Running in initrd.
Apr 30 12:49:47.930610 systemd[1]: No hostname configured, using default hostname.
Apr 30 12:49:47.930628 systemd[1]: Hostname set to .
Apr 30 12:49:47.930645 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 12:49:47.930662 systemd[1]: Queued start job for default target initrd.target.
Apr 30 12:49:47.930679 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 12:49:47.930696 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 12:49:47.930718 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 30 12:49:47.930735 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 12:49:47.930752 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 30 12:49:47.930771 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 30 12:49:47.930789 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 30 12:49:47.930807 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 30 12:49:47.930824 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 12:49:47.930844 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 12:49:47.930861 systemd[1]: Reached target paths.target - Path Units.
Apr 30 12:49:47.930878 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 12:49:47.930895 systemd[1]: Reached target swap.target - Swaps.
Apr 30 12:49:47.930912 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 12:49:47.930929 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 12:49:47.930946 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 12:49:47.930963 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 30 12:49:47.930983 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Apr 30 12:49:47.931000 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 12:49:47.931018 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 12:49:47.931034 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 12:49:47.931052 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 12:49:47.931069 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 30 12:49:47.931087 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 12:49:47.931104 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 30 12:49:47.931121 systemd[1]: Starting systemd-fsck-usr.service...
Apr 30 12:49:47.931140 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 12:49:47.931157 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 12:49:47.931173 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 12:49:47.931190 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 30 12:49:47.931257 systemd-journald[179]: Collecting audit messages is disabled.
Apr 30 12:49:47.931307 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 12:49:47.931340 systemd[1]: Finished systemd-fsck-usr.service.
Apr 30 12:49:47.931356 systemd-journald[179]: Journal started
Apr 30 12:49:47.931393 systemd-journald[179]: Runtime Journal (/run/log/journal/ec2dc555d3310658fac251dd97ae33be) is 4.7M, max 38.1M, 33.4M free.
Apr 30 12:49:47.929518 systemd-modules-load[180]: Inserted module 'overlay'
Apr 30 12:49:47.945967 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 12:49:47.950972 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 12:49:47.952336 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 12:49:47.954824 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 12:49:47.970479 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 30 12:49:47.971747 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 12:49:47.974912 kernel: Bridge firewalling registered
Apr 30 12:49:47.973677 systemd-modules-load[180]: Inserted module 'br_netfilter'
Apr 30 12:49:47.976058 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 12:49:47.979664 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 12:49:47.982810 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 12:49:47.995593 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 12:49:48.002021 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 12:49:48.010247 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 30 12:49:48.008077 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 12:49:48.009351 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 12:49:48.013401 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 30 12:49:48.022688 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 12:49:48.031714 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 12:49:48.037435 dracut-cmdline[211]: dracut-dracut-053
Apr 30 12:49:48.041133 dracut-cmdline[211]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=95dd3de5eb34971546a976dc51c66bc73cf59b888896e27767c0cbf245cb98fe
Apr 30 12:49:48.083326 systemd-resolved[215]: Positive Trust Anchors:
Apr 30 12:49:48.083342 systemd-resolved[215]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 12:49:48.083411 systemd-resolved[215]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 12:49:48.091772 systemd-resolved[215]: Defaulting to hostname 'linux'.
Apr 30 12:49:48.095316 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 12:49:48.096073 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 12:49:48.132496 kernel: SCSI subsystem initialized
Apr 30 12:49:48.142495 kernel: Loading iSCSI transport class v2.0-870.
Apr 30 12:49:48.153482 kernel: iscsi: registered transport (tcp)
Apr 30 12:49:48.175706 kernel: iscsi: registered transport (qla4xxx)
Apr 30 12:49:48.175795 kernel: QLogic iSCSI HBA Driver
Apr 30 12:49:48.213381 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 30 12:49:48.217632 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 30 12:49:48.244553 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 30 12:49:48.244631 kernel: device-mapper: uevent: version 1.0.3
Apr 30 12:49:48.244654 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 30 12:49:48.287488 kernel: raid6: avx512x4 gen() 18191 MB/s
Apr 30 12:49:48.304485 kernel: raid6: avx512x2 gen() 17976 MB/s
Apr 30 12:49:48.323482 kernel: raid6: avx512x1 gen() 18017 MB/s
Apr 30 12:49:48.340492 kernel: raid6: avx2x4 gen() 17660 MB/s
Apr 30 12:49:48.358481 kernel: raid6: avx2x2 gen() 17915 MB/s
Apr 30 12:49:48.375696 kernel: raid6: avx2x1 gen() 13609 MB/s
Apr 30 12:49:48.375745 kernel: raid6: using algorithm avx512x4 gen() 18191 MB/s
Apr 30 12:49:48.395519 kernel: raid6: .... xor() 7855 MB/s, rmw enabled
Apr 30 12:49:48.395580 kernel: raid6: using avx512x2 recovery algorithm
Apr 30 12:49:48.416488 kernel: xor: automatically using best checksumming function avx
Apr 30 12:49:48.570491 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 30 12:49:48.581057 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 12:49:48.586699 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 12:49:48.603244 systemd-udevd[398]: Using default interface naming scheme 'v255'.
Apr 30 12:49:48.609107 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 12:49:48.617776 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 30 12:49:48.635913 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation
Apr 30 12:49:48.665693 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 12:49:48.672686 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 12:49:48.723279 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 12:49:48.734689 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 30 12:49:48.761581 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 30 12:49:48.764234 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 12:49:48.765550 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 12:49:48.767282 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 12:49:48.774208 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 30 12:49:48.804774 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 12:49:48.831486 kernel: cryptd: max_cpu_qlen set to 1000
Apr 30 12:49:48.843759 kernel: ena 0000:00:05.0: ENA device version: 0.10
Apr 30 12:49:48.879442 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Apr 30 12:49:48.879664 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 30 12:49:48.879686 kernel: AES CTR mode by8 optimization enabled
Apr 30 12:49:48.879706 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Apr 30 12:49:48.879871 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:0a:b3:75:f5:eb
Apr 30 12:49:48.870441 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 12:49:48.870703 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 12:49:48.872677 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 12:49:48.873253 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 12:49:48.873574 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 12:49:48.899569 kernel: nvme nvme0: pci function 0000:00:04.0
Apr 30 12:49:48.899848 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Apr 30 12:49:48.874312 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 12:49:48.881926 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 12:49:48.897152 (udev-worker)[455]: Network interface NamePolicy= disabled on kernel command line.
Apr 30 12:49:48.917055 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Apr 30 12:49:48.920467 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 12:49:48.927709 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 30 12:49:48.927741 kernel: GPT:9289727 != 16777215
Apr 30 12:49:48.927759 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 30 12:49:48.927777 kernel: GPT:9289727 != 16777215
Apr 30 12:49:48.927793 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 30 12:49:48.928165 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 30 12:49:48.929771 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 12:49:48.949432 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 12:49:48.991481 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (444)
Apr 30 12:49:49.034098 kernel: BTRFS: device fsid 0778af4c-f6f8-4118-a0d2-fb24d73f5df4 devid 1 transid 40 /dev/nvme0n1p3 scanned by (udev-worker) (446)
Apr 30 12:49:49.048143 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 30 12:49:49.068856 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Apr 30 12:49:49.079728 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Apr 30 12:49:49.096070 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Apr 30 12:49:49.096606 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Apr 30 12:49:49.100650 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 30 12:49:49.109184 disk-uuid[628]: Primary Header is updated.
Apr 30 12:49:49.109184 disk-uuid[628]: Secondary Entries is updated.
Apr 30 12:49:49.109184 disk-uuid[628]: Secondary Header is updated.
Apr 30 12:49:49.115481 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 30 12:49:50.126534 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 30 12:49:50.126881 disk-uuid[629]: The operation has completed successfully.
Apr 30 12:49:50.240116 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 30 12:49:50.240225 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 30 12:49:50.276631 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 30 12:49:50.279793 sh[889]: Success
Apr 30 12:49:50.293484 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Apr 30 12:49:50.382712 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 30 12:49:50.390565 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 30 12:49:50.392520 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 30 12:49:50.423370 kernel: BTRFS info (device dm-0): first mount of filesystem 0778af4c-f6f8-4118-a0d2-fb24d73f5df4
Apr 30 12:49:50.423429 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 30 12:49:50.423451 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 30 12:49:50.425578 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 30 12:49:50.427638 kernel: BTRFS info (device dm-0): using free space tree
Apr 30 12:49:50.531487 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 30 12:49:50.564810 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 30 12:49:50.565953 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 30 12:49:50.576743 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 30 12:49:50.578630 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 30 12:49:50.606194 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 70902d85-577c-4d48-8616-61ed6d6784d1
Apr 30 12:49:50.606261 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 12:49:50.606275 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 30 12:49:50.612490 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 30 12:49:50.617482 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 70902d85-577c-4d48-8616-61ed6d6784d1
Apr 30 12:49:50.620860 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 30 12:49:50.624689 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 30 12:49:50.663984 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 12:49:50.672633 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 12:49:50.695091 systemd-networkd[1078]: lo: Link UP
Apr 30 12:49:50.695104 systemd-networkd[1078]: lo: Gained carrier
Apr 30 12:49:50.696396 systemd-networkd[1078]: Enumeration completed
Apr 30 12:49:50.696720 systemd-networkd[1078]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 12:49:50.696724 systemd-networkd[1078]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 12:49:50.697518 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 12:49:50.699000 systemd[1]: Reached target network.target - Network.
Apr 30 12:49:50.699593 systemd-networkd[1078]: eth0: Link UP
Apr 30 12:49:50.699600 systemd-networkd[1078]: eth0: Gained carrier
Apr 30 12:49:50.699610 systemd-networkd[1078]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 12:49:50.710402 systemd-networkd[1078]: eth0: DHCPv4 address 172.31.19.82/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 30 12:49:50.976304 ignition[1019]: Ignition 2.20.0
Apr 30 12:49:50.976410 ignition[1019]: Stage: fetch-offline
Apr 30 12:49:50.976643 ignition[1019]: no configs at "/usr/lib/ignition/base.d"
Apr 30 12:49:50.976651 ignition[1019]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 30 12:49:50.976990 ignition[1019]: Ignition finished successfully
Apr 30 12:49:50.978919 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 12:49:50.983667 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 30 12:49:50.996468 ignition[1087]: Ignition 2.20.0
Apr 30 12:49:50.996544 ignition[1087]: Stage: fetch
Apr 30 12:49:50.996879 ignition[1087]: no configs at "/usr/lib/ignition/base.d"
Apr 30 12:49:50.996893 ignition[1087]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 30 12:49:50.996986 ignition[1087]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 30 12:49:51.012560 ignition[1087]: PUT result: OK
Apr 30 12:49:51.015067 ignition[1087]: parsed url from cmdline: ""
Apr 30 12:49:51.015078 ignition[1087]: no config URL provided
Apr 30 12:49:51.015085 ignition[1087]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 12:49:51.015096 ignition[1087]: no config at "/usr/lib/ignition/user.ign"
Apr 30 12:49:51.015112 ignition[1087]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 30 12:49:51.016036 ignition[1087]: PUT result: OK
Apr 30 12:49:51.016068 ignition[1087]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Apr 30 12:49:51.017216 ignition[1087]: GET result: OK
Apr 30 12:49:51.017269 ignition[1087]: parsing config with SHA512: 5a8e457e0c42ee7cdf6b155a896eeb0034af83dc1eec6e4d3f93a2efa5d1930257f7b5d135cc6f90d2cdab746c52fdfe759ae5f9b33ac109890c547dcd4b3b18
Apr 30 12:49:51.021510 unknown[1087]: fetched base config from "system"
Apr 30 12:49:51.021520 unknown[1087]: fetched base config from "system"
Apr 30 12:49:51.022075 ignition[1087]: fetch: fetch complete
Apr 30 12:49:51.021525 unknown[1087]: fetched user config from "aws"
Apr 30 12:49:51.022080 ignition[1087]: fetch: fetch passed
Apr 30 12:49:51.022122 ignition[1087]: Ignition finished successfully
Apr 30 12:49:51.024050 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 30 12:49:51.028690 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 30 12:49:51.043939 ignition[1093]: Ignition 2.20.0
Apr 30 12:49:51.043950 ignition[1093]: Stage: kargs
Apr 30 12:49:51.044262 ignition[1093]: no configs at "/usr/lib/ignition/base.d"
Apr 30 12:49:51.044272 ignition[1093]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 30 12:49:51.044363 ignition[1093]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 30 12:49:51.045167 ignition[1093]: PUT result: OK
Apr 30 12:49:51.047660 ignition[1093]: kargs: kargs passed
Apr 30 12:49:51.047719 ignition[1093]: Ignition finished successfully
Apr 30 12:49:51.049138 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 30 12:49:51.054630 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 30 12:49:51.066991 ignition[1099]: Ignition 2.20.0
Apr 30 12:49:51.067005 ignition[1099]: Stage: disks
Apr 30 12:49:51.067437 ignition[1099]: no configs at "/usr/lib/ignition/base.d"
Apr 30 12:49:51.067452 ignition[1099]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 30 12:49:51.067608 ignition[1099]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 30 12:49:51.068426 ignition[1099]: PUT result: OK
Apr 30 12:49:51.070938 ignition[1099]: disks: disks passed
Apr 30 12:49:51.071013 ignition[1099]: Ignition finished successfully
Apr 30 12:49:51.072646 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 30 12:49:51.073264 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 30 12:49:51.073633 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 30 12:49:51.074252 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 12:49:51.074828 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 12:49:51.075365 systemd[1]: Reached target basic.target - Basic System.
Apr 30 12:49:51.086729 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 30 12:49:51.116241 systemd-fsck[1107]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 30 12:49:51.118672 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 30 12:49:51.124580 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 30 12:49:51.225485 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 59d16236-967d-47d1-a9bd-4b055a17ab77 r/w with ordered data mode. Quota mode: none.
Apr 30 12:49:51.225814 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 30 12:49:51.226963 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 30 12:49:51.232569 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 12:49:51.236163 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 30 12:49:51.237347 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 30 12:49:51.237420 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 30 12:49:51.237490 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 12:49:51.244893 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 30 12:49:51.246884 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 30 12:49:51.258496 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1126)
Apr 30 12:49:51.261684 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 70902d85-577c-4d48-8616-61ed6d6784d1
Apr 30 12:49:51.261785 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 12:49:51.264079 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 30 12:49:51.269487 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 30 12:49:51.271719 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 12:49:51.614410 initrd-setup-root[1150]: cut: /sysroot/etc/passwd: No such file or directory
Apr 30 12:49:51.626621 initrd-setup-root[1157]: cut: /sysroot/etc/group: No such file or directory
Apr 30 12:49:51.644130 initrd-setup-root[1164]: cut: /sysroot/etc/shadow: No such file or directory
Apr 30 12:49:51.648660 initrd-setup-root[1171]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 30 12:49:51.858725 systemd-networkd[1078]: eth0: Gained IPv6LL
Apr 30 12:49:51.880821 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 30 12:49:51.886599 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 30 12:49:51.888651 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 30 12:49:51.899471 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 30 12:49:51.901476 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 70902d85-577c-4d48-8616-61ed6d6784d1
Apr 30 12:49:51.928299 ignition[1239]: INFO : Ignition 2.20.0
Apr 30 12:49:51.928299 ignition[1239]: INFO : Stage: mount
Apr 30 12:49:51.929924 ignition[1239]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 12:49:51.929924 ignition[1239]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 30 12:49:51.929924 ignition[1239]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 30 12:49:51.932282 ignition[1239]: INFO : PUT result: OK
Apr 30 12:49:51.934810 ignition[1239]: INFO : mount: mount passed
Apr 30 12:49:51.935497 ignition[1239]: INFO : Ignition finished successfully
Apr 30 12:49:51.937340 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 30 12:49:51.940643 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 30 12:49:51.944179 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 30 12:49:51.959722 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 12:49:51.978482 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1251)
Apr 30 12:49:51.978535 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 70902d85-577c-4d48-8616-61ed6d6784d1
Apr 30 12:49:51.981482 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 12:49:51.981538 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 30 12:49:51.988512 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 30 12:49:51.990791 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 12:49:52.009444 ignition[1268]: INFO : Ignition 2.20.0
Apr 30 12:49:52.009444 ignition[1268]: INFO : Stage: files
Apr 30 12:49:52.010567 ignition[1268]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 12:49:52.010567 ignition[1268]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 30 12:49:52.010567 ignition[1268]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 30 12:49:52.011429 ignition[1268]: INFO : PUT result: OK
Apr 30 12:49:52.012847 ignition[1268]: DEBUG : files: compiled without relabeling support, skipping
Apr 30 12:49:52.013669 ignition[1268]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 30 12:49:52.013669 ignition[1268]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 30 12:49:52.045892 ignition[1268]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 30 12:49:52.046668 ignition[1268]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 30 12:49:52.046668 ignition[1268]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 30 12:49:52.046283 unknown[1268]: wrote ssh authorized keys file for user: core
Apr 30 12:49:52.048437 ignition[1268]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Apr 30 12:49:52.048437 ignition[1268]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Apr 30 12:49:52.132968 ignition[1268]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 30 12:49:52.338277 ignition[1268]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Apr 30 12:49:52.339413 ignition[1268]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 30 12:49:52.339413 ignition[1268]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Apr 30 12:49:52.823800 ignition[1268]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 30 12:49:52.952613 ignition[1268]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 30 12:49:52.954438 ignition[1268]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 30 12:49:52.954438 ignition[1268]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 30 12:49:52.954438 ignition[1268]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 12:49:52.954438 ignition[1268]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 12:49:52.954438 ignition[1268]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 12:49:52.954438 ignition[1268]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 12:49:52.954438 ignition[1268]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 12:49:52.954438 ignition[1268]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 12:49:52.954438 ignition[1268]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 12:49:52.954438 ignition[1268]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 12:49:52.954438 ignition[1268]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Apr 30 12:49:52.954438 ignition[1268]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Apr 30 12:49:52.954438 ignition[1268]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Apr 30 12:49:52.954438 ignition[1268]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Apr 30 12:49:53.438364 ignition[1268]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 30 12:49:54.265523 ignition[1268]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Apr 30 12:49:54.265523 ignition[1268]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Apr 30 12:49:54.274939 ignition[1268]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 12:49:54.276471 ignition[1268]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 12:49:54.276471 ignition[1268]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Apr 30 12:49:54.276471 ignition[1268]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Apr 30 12:49:54.276471 ignition[1268]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Apr 30 12:49:54.276471 ignition[1268]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 12:49:54.276471 ignition[1268]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 12:49:54.276471 ignition[1268]: INFO : files: files passed
Apr 30 12:49:54.276471 ignition[1268]: INFO : Ignition finished successfully
Apr 30 12:49:54.277492 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 30 12:49:54.283788 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 30 12:49:54.287965 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 30 12:49:54.296421 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 30 12:49:54.296602 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 30 12:49:54.305116 initrd-setup-root-after-ignition[1297]: grep:
Apr 30 12:49:54.306270 initrd-setup-root-after-ignition[1301]: grep:
Apr 30 12:49:54.306270 initrd-setup-root-after-ignition[1297]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 12:49:54.306270 initrd-setup-root-after-ignition[1297]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 12:49:54.309573 initrd-setup-root-after-ignition[1301]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 12:49:54.307901 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 12:49:54.309165 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 30 12:49:54.317756 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 30 12:49:54.343322 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 30 12:49:54.343471 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 30 12:49:54.344688 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 30 12:49:54.345856 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 30 12:49:54.346656 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 30 12:49:54.351658 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 30 12:49:54.365881 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 12:49:54.370693 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 30 12:49:54.384471 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 30 12:49:54.385678 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 12:49:54.386384 systemd[1]: Stopped target timers.target - Timer Units.
Apr 30 12:49:54.387223 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 30 12:49:54.387416 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 12:49:54.388517 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 30 12:49:54.389339 systemd[1]: Stopped target basic.target - Basic System.
Apr 30 12:49:54.390220 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 30 12:49:54.390989 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 12:49:54.391758 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 30 12:49:54.392537 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 30 12:49:54.393276 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 12:49:54.394143 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 30 12:49:54.395273 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 30 12:49:54.396025 systemd[1]: Stopped target swap.target - Swaps.
Apr 30 12:49:54.396744 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 30 12:49:54.396931 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 12:49:54.398096 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 30 12:49:54.398899 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 12:49:54.399580 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 30 12:49:54.399728 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 12:49:54.400354 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 30 12:49:54.400555 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 30 12:49:54.402006 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 30 12:49:54.402191 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 12:49:54.402920 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 30 12:49:54.403075 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 30 12:49:54.410768 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 30 12:49:54.412393 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 30 12:49:54.412652 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 12:49:54.420745 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 30 12:49:54.422161 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 30 12:49:54.423169 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 12:49:54.424711 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 30 12:49:54.425648 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 12:49:54.431421 ignition[1321]: INFO : Ignition 2.20.0
Apr 30 12:49:54.431421 ignition[1321]: INFO : Stage: umount
Apr 30 12:49:54.435892 ignition[1321]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 12:49:54.435892 ignition[1321]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 30 12:49:54.435892 ignition[1321]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 30 12:49:54.435892 ignition[1321]: INFO : PUT result: OK
Apr 30 12:49:54.435738 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 30 12:49:54.442740 ignition[1321]: INFO : umount: umount passed
Apr 30 12:49:54.442740 ignition[1321]: INFO : Ignition finished successfully
Apr 30 12:49:54.436274 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 30 12:49:54.444812 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 30 12:49:54.445441 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 30 12:49:54.447220 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 30 12:49:54.447341 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 30 12:49:54.449192 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 30 12:49:54.449729 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 30 12:49:54.450347 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 30 12:49:54.450405 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 30 12:49:54.450979 systemd[1]: Stopped target network.target - Network.
Apr 30 12:49:54.451412 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 30 12:49:54.452403 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 12:49:54.452880 systemd[1]: Stopped target paths.target - Path Units.
Apr 30 12:49:54.455597 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 30 12:49:54.461567 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 12:49:54.463334 systemd[1]: Stopped target slices.target - Slice Units.
Apr 30 12:49:54.463822 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 30 12:49:54.464521 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 30 12:49:54.464588 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 12:49:54.465206 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 30 12:49:54.465260 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 12:49:54.465917 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 30 12:49:54.465992 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 30 12:49:54.466596 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 30 12:49:54.466657 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 30 12:49:54.467388 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 30 12:49:54.467981 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 30 12:49:54.470406 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 30 12:49:54.471386 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 30 12:49:54.471529 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 30 12:49:54.472681 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 30 12:49:54.472786 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 30 12:49:54.474527 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 30 12:49:54.474661 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 30 12:49:54.478547 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Apr 30 12:49:54.479192 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 30 12:49:54.479325 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 30 12:49:54.481212 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Apr 30 12:49:54.482819 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 30 12:49:54.482883 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 12:49:54.488566 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 30 12:49:54.489145 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 30 12:49:54.489224 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 12:49:54.491864 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 30 12:49:54.491945 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 30 12:49:54.492724 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 30 12:49:54.492786 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 30 12:49:54.493311 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 30 12:49:54.493371 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 12:49:54.494200 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 12:49:54.497279 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Apr 30 12:49:54.497377 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Apr 30 12:49:54.508377 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 30 12:49:54.509159 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 30 12:49:54.510010 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 30 12:49:54.510143 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 12:49:54.511333 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 30 12:49:54.511392 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 30 12:49:54.512439 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 30 12:49:54.512492 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 12:49:54.513103 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 30 12:49:54.513155 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 12:49:54.514204 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 30 12:49:54.514257 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 30 12:49:54.515340 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 12:49:54.515389 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 12:49:54.521638 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 30 12:49:54.522631 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 30 12:49:54.522691 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 12:49:54.523504 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 12:49:54.523548 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 12:49:54.525471 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Apr 30 12:49:54.525535 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Apr 30 12:49:54.528087 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 30 12:49:54.528198 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 30 12:49:54.529246 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 30 12:49:54.539742 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 30 12:49:54.548524 systemd[1]: Switching root.
Apr 30 12:49:54.600293 systemd-journald[179]: Journal stopped
Apr 30 12:49:56.402577 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
Apr 30 12:49:56.402652 kernel: SELinux: policy capability network_peer_controls=1
Apr 30 12:49:56.402673 kernel: SELinux: policy capability open_perms=1
Apr 30 12:49:56.402685 kernel: SELinux: policy capability extended_socket_class=1
Apr 30 12:49:56.402697 kernel: SELinux: policy capability always_check_network=0
Apr 30 12:49:56.402709 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 30 12:49:56.402721 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 30 12:49:56.402733 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 30 12:49:56.402744 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 30 12:49:56.402756 kernel: audit: type=1403 audit(1746017394.965:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 30 12:49:56.402770 systemd[1]: Successfully loaded SELinux policy in 73.630ms.
Apr 30 12:49:56.402798 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.688ms.
Apr 30 12:49:56.402812 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 30 12:49:56.402825 systemd[1]: Detected virtualization amazon.
Apr 30 12:49:56.402838 systemd[1]: Detected architecture x86-64.
Apr 30 12:49:56.402851 systemd[1]: Detected first boot.
Apr 30 12:49:56.402867 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 12:49:56.402879 zram_generator::config[1366]: No configuration found.
Apr 30 12:49:56.402893 kernel: Guest personality initialized and is inactive
Apr 30 12:49:56.402907 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Apr 30 12:49:56.402921 kernel: Initialized host personality
Apr 30 12:49:56.402933 kernel: NET: Registered PF_VSOCK protocol family
Apr 30 12:49:56.402946 systemd[1]: Populated /etc with preset unit settings.
Apr 30 12:49:56.402959 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Apr 30 12:49:56.402972 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 30 12:49:56.402984 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 30 12:49:56.402996 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 30 12:49:56.403009 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 30 12:49:56.403024 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 30 12:49:56.403036 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 30 12:49:56.403049 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 30 12:49:56.403061 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 30 12:49:56.403074 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 30 12:49:56.403086 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 30 12:49:56.403099 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 30 12:49:56.403111 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 12:49:56.403126 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 12:49:56.403138 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 30 12:49:56.403153 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 30 12:49:56.403167 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 30 12:49:56.403179 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 12:49:56.403191 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 30 12:49:56.403204 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 12:49:56.403216 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 30 12:49:56.403231 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 30 12:49:56.403244 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 30 12:49:56.403256 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 30 12:49:56.403268 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 12:49:56.403280 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 12:49:56.403293 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 12:49:56.403305 systemd[1]: Reached target swap.target - Swaps.
Apr 30 12:49:56.403317 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 30 12:49:56.403329 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 30 12:49:56.403344 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Apr 30 12:49:56.403356 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 12:49:56.403369 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 12:49:56.403381 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 12:49:56.403394 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 30 12:49:56.403406 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 30 12:49:56.403418 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 30 12:49:56.403430 systemd[1]: Mounting media.mount - External Media Directory...
Apr 30 12:49:56.403443 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 12:49:56.403487 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 30 12:49:56.403501 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 30 12:49:56.403513 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 30 12:49:56.403526 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 30 12:49:56.403539 systemd[1]: Reached target machines.target - Containers.
Apr 30 12:49:56.403552 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 30 12:49:56.403565 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 12:49:56.403577 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 12:49:56.403592 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 30 12:49:56.403605 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 12:49:56.403617 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 12:49:56.403630 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 12:49:56.403642 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 30 12:49:56.403655 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 12:49:56.403668 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 30 12:49:56.403680 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 30 12:49:56.403692 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 30 12:49:56.403709 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 30 12:49:56.403721 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 30 12:49:56.403735 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 30 12:49:56.403747 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 12:49:56.403759 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 12:49:56.403772 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 30 12:49:56.403784 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 30 12:49:56.403797 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Apr 30 12:49:56.403813 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 12:49:56.403825 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 30 12:49:56.403838 systemd[1]: Stopped verity-setup.service.
Apr 30 12:49:56.403851 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 12:49:56.403863 kernel: loop: module loaded
Apr 30 12:49:56.403877 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 30 12:49:56.403890 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 30 12:49:56.403904 systemd[1]: Mounted media.mount - External Media Directory.
Apr 30 12:49:56.403916 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 30 12:49:56.403929 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 30 12:49:56.403942 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 30 12:49:56.403954 kernel: fuse: init (API version 7.39)
Apr 30 12:49:56.403969 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 12:49:56.403981 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 30 12:49:56.403994 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 30 12:49:56.404007 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 12:49:56.404019 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 12:49:56.404032 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 12:49:56.404045 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 12:49:56.404061 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 30 12:49:56.404074 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 30 12:49:56.404087 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 12:49:56.404099 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 12:49:56.404137 systemd-journald[1449]: Collecting audit messages is disabled.
Apr 30 12:49:56.404161 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 12:49:56.404180 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 30 12:49:56.405509 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 30 12:49:56.405537 systemd-journald[1449]: Journal started
Apr 30 12:49:56.405565 systemd-journald[1449]: Runtime Journal (/run/log/journal/ec2dc555d3310658fac251dd97ae33be) is 4.7M, max 38.1M, 33.4M free.
Apr 30 12:49:56.128816 systemd[1]: Queued start job for default target multi-user.target.
Apr 30 12:49:56.136663 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Apr 30 12:49:56.137164 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 30 12:49:56.413527 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 12:49:56.413581 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 30 12:49:56.422206 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 30 12:49:56.428586 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 30 12:49:56.429382 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 30 12:49:56.429427 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 12:49:56.431334 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Apr 30 12:49:56.436640 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 30 12:49:56.438544 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 30 12:49:56.439095 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 12:49:56.441759 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 30 12:49:56.445622 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 30 12:49:56.446170 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 12:49:56.447216 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 30 12:49:56.449374 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 12:49:56.450597 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 12:49:56.456695 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 30 12:49:56.459124 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Apr 30 12:49:56.459728 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 30 12:49:56.460644 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 30 12:49:56.466940 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 30 12:49:56.478183 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 30 12:49:56.504629 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 30 12:49:56.512870 kernel: ACPI: bus type drm_connector registered
Apr 30 12:49:56.512663 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 12:49:56.513412 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 30 12:49:56.516666 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 30 12:49:56.518737 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 30 12:49:56.525637 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Apr 30 12:49:56.526420 systemd-journald[1449]: Time spent on flushing to /var/log/journal/ec2dc555d3310658fac251dd97ae33be is 51.762ms for 1014 entries.
Apr 30 12:49:56.526420 systemd-journald[1449]: System Journal (/var/log/journal/ec2dc555d3310658fac251dd97ae33be) is 8M, max 195.6M, 187.6M free.
Apr 30 12:49:56.600535 systemd-journald[1449]: Received client request to flush runtime journal.
Apr 30 12:49:56.600603 kernel: loop0: detected capacity change from 0 to 210664
Apr 30 12:49:56.564094 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 12:49:56.573131 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 12:49:56.587696 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 30 12:49:56.602051 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 30 12:49:56.604678 udevadm[1514]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 30 12:49:56.608289 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Apr 30 12:49:56.614512 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 30 12:49:56.621385 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 12:49:56.655920 systemd-tmpfiles[1520]: ACLs are not supported, ignoring.
Apr 30 12:49:56.655949 systemd-tmpfiles[1520]: ACLs are not supported, ignoring.
Apr 30 12:49:56.663775 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 12:49:56.738488 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 30 12:49:56.776491 kernel: loop1: detected capacity change from 0 to 138176
Apr 30 12:49:56.903504 kernel: loop2: detected capacity change from 0 to 147912
Apr 30 12:49:57.045583 kernel: loop3: detected capacity change from 0 to 62832
Apr 30 12:49:57.090547 kernel: loop4: detected capacity change from 0 to 210664
Apr 30 12:49:57.128672 kernel: loop5: detected capacity change from 0 to 138176
Apr 30 12:49:57.142023 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 30 12:49:57.158523 kernel: loop6: detected capacity change from 0 to 147912
Apr 30 12:49:57.181488 kernel: loop7: detected capacity change from 0 to 62832
Apr 30 12:49:57.203033 (sd-merge)[1529]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Apr 30 12:49:57.203794 (sd-merge)[1529]: Merged extensions into '/usr'.
Apr 30 12:49:57.212907 systemd[1]: Reload requested from client PID 1489 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 30 12:49:57.213086 systemd[1]: Reloading...
Apr 30 12:49:57.317480 zram_generator::config[1556]: No configuration found.
Apr 30 12:49:57.470992 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 12:49:57.553394 systemd[1]: Reloading finished in 339 ms.
Apr 30 12:49:57.571086 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 30 12:49:57.571863 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 30 12:49:57.586849 systemd[1]: Starting ensure-sysext.service...
Apr 30 12:49:57.591675 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 12:49:57.604215 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 12:49:57.623160 systemd[1]: Reload requested from client PID 1609 ('systemctl') (unit ensure-sysext.service)...
Apr 30 12:49:57.623337 systemd[1]: Reloading...
Apr 30 12:49:57.661770 systemd-udevd[1611]: Using default interface naming scheme 'v255'.
Apr 30 12:49:57.673971 systemd-tmpfiles[1610]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 30 12:49:57.674366 systemd-tmpfiles[1610]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 30 12:49:57.682695 systemd-tmpfiles[1610]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 30 12:49:57.683162 systemd-tmpfiles[1610]: ACLs are not supported, ignoring.
Apr 30 12:49:57.683261 systemd-tmpfiles[1610]: ACLs are not supported, ignoring.
Apr 30 12:49:57.690487 zram_generator::config[1639]: No configuration found.
Apr 30 12:49:57.706584 systemd-tmpfiles[1610]: Detected autofs mount point /boot during canonicalization of boot.
Apr 30 12:49:57.706601 systemd-tmpfiles[1610]: Skipping /boot
Apr 30 12:49:57.737614 systemd-tmpfiles[1610]: Detected autofs mount point /boot during canonicalization of boot.
Apr 30 12:49:57.737814 systemd-tmpfiles[1610]: Skipping /boot
Apr 30 12:49:57.902039 (udev-worker)[1667]: Network interface NamePolicy= disabled on kernel command line.
Apr 30 12:49:58.017607 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Apr 30 12:49:58.027570 kernel: ACPI: button: Power Button [PWRF]
Apr 30 12:49:58.027670 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3
Apr 30 12:49:58.027696 kernel: ACPI: button: Sleep Button [SLPF]
Apr 30 12:49:58.077279 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1676)
Apr 30 12:49:58.077370 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Apr 30 12:49:58.105571 ldconfig[1484]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 30 12:49:58.123934 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 12:49:58.184502 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4
Apr 30 12:49:58.293001 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 30 12:49:58.293162 systemd[1]: Reloading finished in 669 ms.
Apr 30 12:49:58.308696 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 12:49:58.309761 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 30 12:49:58.326427 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 12:49:58.342534 kernel: mousedev: PS/2 mouse device common for all mice
Apr 30 12:49:58.377381 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 30 12:49:58.382863 systemd[1]: Finished ensure-sysext.service.
Apr 30 12:49:58.411898 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 30 12:49:58.412716 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 12:49:58.417763 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 30 12:49:58.420656 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 30 12:49:58.423769 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 12:49:58.426684 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 30 12:49:58.434757 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 12:49:58.438124 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 12:49:58.440649 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 12:49:58.450188 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 12:49:58.452205 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 12:49:58.460675 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 30 12:49:58.468623 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 30 12:49:58.476491 lvm[1808]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 30 12:49:58.477753 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 30 12:49:58.485842 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 12:49:58.498652 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 12:49:58.500018 systemd[1]: Reached target time-set.target - System Time Set.
Apr 30 12:49:58.508677 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 30 12:49:58.517232 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 12:49:58.519668 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 12:49:58.520965 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 12:49:58.522748 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 12:49:58.524858 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 12:49:58.525352 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 30 12:49:58.527162 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 12:49:58.527383 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 12:49:58.528352 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 12:49:58.530638 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 12:49:58.542869 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 30 12:49:58.551790 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 30 12:49:58.554422 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 12:49:58.563767 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 30 12:49:58.565571 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 12:49:58.565669 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 12:49:58.575714 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 30 12:49:58.589152 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 30 12:49:58.594985 lvm[1842]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 30 12:49:58.600890 augenrules[1849]: No rules
Apr 30 12:49:58.603312 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 30 12:49:58.603798 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 30 12:49:58.623813 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 30 12:49:58.631686 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 30 12:49:58.633525 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 30 12:49:58.660733 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 30 12:49:58.661677 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 30 12:49:58.662654 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 30 12:49:58.674309 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 30 12:49:58.712241 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 12:49:58.778343 systemd-networkd[1824]: lo: Link UP
Apr 30 12:49:58.778354 systemd-networkd[1824]: lo: Gained carrier
Apr 30 12:49:58.780583 systemd-networkd[1824]: Enumeration completed
Apr 30 12:49:58.780746 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 12:49:58.781942 systemd-networkd[1824]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 12:49:58.781958 systemd-networkd[1824]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 12:49:58.787833 systemd-networkd[1824]: eth0: Link UP
Apr 30 12:49:58.790048 systemd-networkd[1824]: eth0: Gained carrier
Apr 30 12:49:58.790210 systemd-networkd[1824]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 12:49:58.792798 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Apr 30 12:49:58.792942 systemd-resolved[1825]: Positive Trust Anchors:
Apr 30 12:49:58.792958 systemd-resolved[1825]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 12:49:58.793024 systemd-resolved[1825]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 12:49:58.799708 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 30 12:49:58.801912 systemd-networkd[1824]: eth0: DHCPv4 address 172.31.19.82/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 30 12:49:58.802123 systemd-resolved[1825]: Defaulting to hostname 'linux'.
Apr 30 12:49:58.810749 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 12:49:58.812684 systemd[1]: Reached target network.target - Network.
Apr 30 12:49:58.813539 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 12:49:58.814183 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 12:49:58.814956 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 30 12:49:58.815734 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 30 12:49:58.819556 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 30 12:49:58.820632 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 30 12:49:58.822491 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 30 12:49:58.823092 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 30 12:49:58.823132 systemd[1]: Reached target paths.target - Path Units.
Apr 30 12:49:58.823653 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 12:49:58.826130 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 30 12:49:58.828978 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 30 12:49:58.832697 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Apr 30 12:49:58.833360 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Apr 30 12:49:58.834017 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Apr 30 12:49:58.836978 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 30 12:49:58.838281 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Apr 30 12:49:58.840082 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Apr 30 12:49:58.840841 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 30 12:49:58.842056 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 12:49:58.842673 systemd[1]: Reached target basic.target - Basic System.
Apr 30 12:49:58.843178 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 30 12:49:58.843303 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 30 12:49:58.850633 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 30 12:49:58.855109 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 30 12:49:58.862427 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 30 12:49:58.865708 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 30 12:49:58.870620 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 30 12:49:58.871321 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 30 12:49:58.874539 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 30 12:49:58.879742 systemd[1]: Started ntpd.service - Network Time Service.
Apr 30 12:49:58.891724 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 30 12:49:58.898638 systemd[1]: Starting setup-oem.service - Setup OEM...
Apr 30 12:49:58.903684 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 30 12:49:58.925576 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 30 12:49:58.933409 jq[1879]: false
Apr 30 12:49:58.935642 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 30 12:49:58.937649 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 30 12:49:58.938405 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 30 12:49:58.944713 systemd[1]: Starting update-engine.service - Update Engine...
Apr 30 12:49:58.948623 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 30 12:49:58.976082 update_engine[1890]: I20250430 12:49:58.975960 1890 main.cc:92] Flatcar Update Engine starting
Apr 30 12:49:58.977738 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 30 12:49:58.979772 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 30 12:49:58.993056 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 30 12:49:58.993333 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 30 12:49:59.020973 extend-filesystems[1880]: Found loop4
Apr 30 12:49:59.022305 extend-filesystems[1880]: Found loop5
Apr 30 12:49:59.025890 extend-filesystems[1880]: Found loop6
Apr 30 12:49:59.025890 extend-filesystems[1880]: Found loop7
Apr 30 12:49:59.025890 extend-filesystems[1880]: Found nvme0n1
Apr 30 12:49:59.025890 extend-filesystems[1880]: Found nvme0n1p2
Apr 30 12:49:59.025890 extend-filesystems[1880]: Found nvme0n1p3
Apr 30 12:49:59.025890 extend-filesystems[1880]: Found usr
Apr 30 12:49:59.025890 extend-filesystems[1880]: Found nvme0n1p4
Apr 30 12:49:59.025890 extend-filesystems[1880]: Found nvme0n1p6
Apr 30 12:49:59.025890 extend-filesystems[1880]: Found nvme0n1p7
Apr 30 12:49:59.025890 extend-filesystems[1880]: Found nvme0n1p9
Apr 30 12:49:59.025890 extend-filesystems[1880]: Checking size of /dev/nvme0n1p9
Apr 30 12:49:59.038803 jq[1891]: true
Apr 30 12:49:59.037821 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 30 12:49:59.037055 dbus-daemon[1878]: [system] SELinux support is enabled
Apr 30 12:49:59.039997 dbus-daemon[1878]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1824 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Apr 30 12:49:59.050661 update_engine[1890]: I20250430 12:49:59.041406 1890 update_check_scheduler.cc:74] Next update check in 4m36s
Apr 30 12:49:59.066273 ntpd[1882]: 30 Apr 12:49:59 ntpd[1882]: ntpd 4.2.8p17@1.4004-o Tue Apr 29 21:38:46 UTC 2025 (1): Starting
Apr 30 12:49:59.066273 ntpd[1882]: 30 Apr 12:49:59 ntpd[1882]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 30 12:49:59.066273 ntpd[1882]: 30 Apr 12:49:59 ntpd[1882]: ----------------------------------------------------
Apr 30 12:49:59.066273 ntpd[1882]: 30 Apr 12:49:59 ntpd[1882]: ntp-4 is maintained by Network Time Foundation,
Apr 30 12:49:59.066273 ntpd[1882]: 30 Apr 12:49:59 ntpd[1882]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 30 12:49:59.066273 ntpd[1882]: 30 Apr 12:49:59 ntpd[1882]: corporation. Support and training for ntp-4 are
Apr 30 12:49:59.066273 ntpd[1882]: 30 Apr 12:49:59 ntpd[1882]: available at https://www.nwtime.org/support
Apr 30 12:49:59.066273 ntpd[1882]: 30 Apr 12:49:59 ntpd[1882]: ----------------------------------------------------
Apr 30 12:49:59.063089 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 30 12:49:59.060819 ntpd[1882]: ntpd 4.2.8p17@1.4004-o Tue Apr 29 21:38:46 UTC 2025 (1): Starting
Apr 30 12:49:59.063136 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 30 12:49:59.060849 ntpd[1882]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 30 12:49:59.065012 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 30 12:49:59.060859 ntpd[1882]: ----------------------------------------------------
Apr 30 12:49:59.065043 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 30 12:49:59.060868 ntpd[1882]: ntp-4 is maintained by Network Time Foundation,
Apr 30 12:49:59.060878 ntpd[1882]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 30 12:49:59.060887 ntpd[1882]: corporation. Support and training for ntp-4 are
Apr 30 12:49:59.060896 ntpd[1882]: available at https://www.nwtime.org/support
Apr 30 12:49:59.060908 ntpd[1882]: ----------------------------------------------------
Apr 30 12:49:59.072068 dbus-daemon[1878]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 30 12:49:59.087615 ntpd[1882]: 30 Apr 12:49:59 ntpd[1882]: proto: precision = 0.098 usec (-23)
Apr 30 12:49:59.072179 systemd[1]: Started update-engine.service - Update Engine.
Apr 30 12:49:59.080024 ntpd[1882]: proto: precision = 0.098 usec (-23)
Apr 30 12:49:59.086670 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 30 12:49:59.088261 ntpd[1882]: basedate set to 2025-04-17
Apr 30 12:49:59.092994 ntpd[1882]: 30 Apr 12:49:59 ntpd[1882]: basedate set to 2025-04-17
Apr 30 12:49:59.092994 ntpd[1882]: 30 Apr 12:49:59 ntpd[1882]: gps base set to 2025-04-20 (week 2363)
Apr 30 12:49:59.088286 ntpd[1882]: gps base set to 2025-04-20 (week 2363)
Apr 30 12:49:59.115548 ntpd[1882]: 30 Apr 12:49:59 ntpd[1882]: Listen and drop on 0 v6wildcard [::]:123
Apr 30 12:49:59.115548 ntpd[1882]: 30 Apr 12:49:59 ntpd[1882]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 30 12:49:59.115548 ntpd[1882]: 30 Apr 12:49:59 ntpd[1882]: Listen normally on 2 lo 127.0.0.1:123
Apr 30 12:49:59.115548 ntpd[1882]: 30 Apr 12:49:59 ntpd[1882]: Listen normally on 3 eth0 172.31.19.82:123
Apr 30 12:49:59.115548 ntpd[1882]: 30 Apr 12:49:59 ntpd[1882]: Listen normally on 4 lo [::1]:123
Apr 30 12:49:59.115548 ntpd[1882]: 30 Apr 12:49:59 ntpd[1882]: bind(21) AF_INET6 fe80::40a:b3ff:fe75:f5eb%2#123 flags 0x11 failed: Cannot assign requested address
Apr 30 12:49:59.115548 ntpd[1882]: 30 Apr 12:49:59 ntpd[1882]: unable to create socket on eth0 (5) for fe80::40a:b3ff:fe75:f5eb%2#123
Apr 30 12:49:59.115548 ntpd[1882]: 30 Apr 12:49:59 ntpd[1882]: failed to init interface for address fe80::40a:b3ff:fe75:f5eb%2
Apr 30 12:49:59.115548 ntpd[1882]: 30 Apr 12:49:59 ntpd[1882]: Listening on routing socket on fd #21 for interface updates
Apr 30 12:49:59.115548 ntpd[1882]: 30 Apr 12:49:59 ntpd[1882]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 30 12:49:59.115548 ntpd[1882]: 30 Apr 12:49:59 ntpd[1882]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 30 12:49:59.109988 (ntainerd)[1911]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 30 12:49:59.113329 ntpd[1882]: Listen and drop on 0 v6wildcard [::]:123
Apr 30 12:49:59.116366 jq[1907]: true
Apr 30 12:49:59.113381 ntpd[1882]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 30 12:49:59.113603 ntpd[1882]: Listen normally on 2 lo 127.0.0.1:123
Apr 30 12:49:59.113640 ntpd[1882]: Listen normally on 3 eth0 172.31.19.82:123
Apr 30 12:49:59.113691 ntpd[1882]: Listen normally on 4 lo [::1]:123
Apr 30 12:49:59.113740 ntpd[1882]: bind(21) AF_INET6 fe80::40a:b3ff:fe75:f5eb%2#123 flags 0x11 failed: Cannot assign requested address
Apr 30 12:49:59.113761 ntpd[1882]: unable to create socket on eth0 (5) for fe80::40a:b3ff:fe75:f5eb%2#123
Apr 30 12:49:59.113777 ntpd[1882]: failed to init interface for address fe80::40a:b3ff:fe75:f5eb%2
Apr 30 12:49:59.113807 ntpd[1882]: Listening on routing socket on fd #21 for interface updates
Apr 30 12:49:59.115246 ntpd[1882]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 30 12:49:59.115280 ntpd[1882]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 30 12:49:59.128853 extend-filesystems[1880]: Resized partition /dev/nvme0n1p9
Apr 30 12:49:59.146733 extend-filesystems[1927]: resize2fs 1.47.1 (20-May-2024)
Apr 30 12:49:59.153705 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Apr 30 12:49:59.158515 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Apr 30 12:49:59.156287 systemd[1]: motdgen.service: Deactivated successfully.
Apr 30 12:49:59.158231 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 30 12:49:59.167197 systemd[1]: Finished setup-oem.service - Setup OEM.
Apr 30 12:49:59.174825 tar[1895]: linux-amd64/helm
Apr 30 12:49:59.241480 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Apr 30 12:49:59.266862 extend-filesystems[1927]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Apr 30 12:49:59.266862 extend-filesystems[1927]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 30 12:49:59.266862 extend-filesystems[1927]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Apr 30 12:49:59.270710 extend-filesystems[1880]: Resized filesystem in /dev/nvme0n1p9
Apr 30 12:49:59.270710 extend-filesystems[1880]: Found nvme0n1p1
Apr 30 12:49:59.267899 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 30 12:49:59.268138 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 30 12:49:59.278480 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1680)
Apr 30 12:49:59.294877 coreos-metadata[1877]: Apr 30 12:49:59.294 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Apr 30 12:49:59.298274 systemd-logind[1889]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 30 12:49:59.298991 systemd-logind[1889]: Watching system buttons on /dev/input/event2 (Sleep Button)
Apr 30 12:49:59.299021 systemd-logind[1889]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 30 12:49:59.299291 systemd-logind[1889]: New seat seat0.
Apr 30 12:49:59.300340 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 30 12:49:59.311379 coreos-metadata[1877]: Apr 30 12:49:59.311 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Apr 30 12:49:59.323123 coreos-metadata[1877]: Apr 30 12:49:59.320 INFO Fetch successful
Apr 30 12:49:59.323123 coreos-metadata[1877]: Apr 30 12:49:59.320 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Apr 30 12:49:59.323123 coreos-metadata[1877]: Apr 30 12:49:59.322 INFO Fetch successful
Apr 30 12:49:59.323123 coreos-metadata[1877]: Apr 30 12:49:59.322 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Apr 30 12:49:59.324331 coreos-metadata[1877]: Apr 30 12:49:59.323 INFO Fetch successful
Apr 30 12:49:59.324331 coreos-metadata[1877]: Apr 30 12:49:59.323 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Apr 30 12:49:59.327705 coreos-metadata[1877]: Apr 30 12:49:59.325 INFO Fetch successful
Apr 30 12:49:59.327705 coreos-metadata[1877]: Apr 30 12:49:59.325 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Apr 30 12:49:59.333492 bash[1955]: Updated "/home/core/.ssh/authorized_keys"
Apr 30 12:49:59.332884 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 30 12:49:59.333788 coreos-metadata[1877]: Apr 30 12:49:59.330 INFO Fetch failed with 404: resource not found
Apr 30 12:49:59.333788 coreos-metadata[1877]: Apr 30 12:49:59.331 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Apr 30 12:49:59.339832 coreos-metadata[1877]: Apr 30 12:49:59.334 INFO Fetch successful
Apr 30 12:49:59.339832 coreos-metadata[1877]: Apr 30 12:49:59.334 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Apr 30 12:49:59.339832 coreos-metadata[1877]: Apr 30 12:49:59.339 INFO Fetch successful
Apr 30 12:49:59.339832 coreos-metadata[1877]: Apr 30 12:49:59.339 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Apr 30 12:49:59.347621 coreos-metadata[1877]: Apr 30 12:49:59.346 INFO Fetch successful
Apr 30 12:49:59.347621 coreos-metadata[1877]: Apr 30 12:49:59.347 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Apr 30 12:49:59.347166 systemd[1]: Starting sshkeys.service...
Apr 30 12:49:59.350081 coreos-metadata[1877]: Apr 30 12:49:59.349 INFO Fetch successful
Apr 30 12:49:59.350081 coreos-metadata[1877]: Apr 30 12:49:59.349 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Apr 30 12:49:59.366880 coreos-metadata[1877]: Apr 30 12:49:59.355 INFO Fetch successful
Apr 30 12:49:59.470221 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 30 12:49:59.479137 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 30 12:49:59.482921 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 30 12:49:59.499973 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 30 12:49:59.671060 locksmithd[1919]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 30 12:49:59.717236 coreos-metadata[1993]: Apr 30 12:49:59.717 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Apr 30 12:49:59.725948 coreos-metadata[1993]: Apr 30 12:49:59.725 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Apr 30 12:49:59.726680 coreos-metadata[1993]: Apr 30 12:49:59.726 INFO Fetch successful
Apr 30 12:49:59.726775 coreos-metadata[1993]: Apr 30 12:49:59.726 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Apr 30 12:49:59.734389 coreos-metadata[1993]: Apr 30 12:49:59.734 INFO Fetch successful
Apr 30 12:49:59.739776 unknown[1993]: wrote ssh authorized keys file for user: core
Apr 30 12:49:59.779275 update-ssh-keys[2066]: Updated "/home/core/.ssh/authorized_keys"
Apr 30 12:49:59.786942 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Apr 30 12:49:59.789747 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Apr 30 12:49:59.804041 systemd[1]: Finished sshkeys.service.
Apr 30 12:49:59.810685 dbus-daemon[1878]: [system] Successfully activated service 'org.freedesktop.hostname1'
Apr 30 12:49:59.821940 dbus-daemon[1878]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1929 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Apr 30 12:49:59.840850 systemd[1]: Starting polkit.service - Authorization Manager...
Apr 30 12:49:59.872775 polkitd[2070]: Started polkitd version 121
Apr 30 12:49:59.887102 polkitd[2070]: Loading rules from directory /etc/polkit-1/rules.d
Apr 30 12:49:59.890991 polkitd[2070]: Loading rules from directory /usr/share/polkit-1/rules.d
Apr 30 12:49:59.893407 polkitd[2070]: Finished loading, compiling and executing 2 rules
Apr 30 12:49:59.896079 dbus-daemon[1878]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Apr 30 12:49:59.896287 systemd[1]: Started polkit.service - Authorization Manager.
Apr 30 12:49:59.898541 polkitd[2070]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Apr 30 12:49:59.950204 systemd-resolved[1825]: System hostname changed to 'ip-172-31-19-82'.
Apr 30 12:49:59.950888 systemd-hostnamed[1929]: Hostname set to (transient)
Apr 30 12:49:59.977186 containerd[1911]: time="2025-04-30T12:49:59.976640460Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Apr 30 12:49:59.985984 sshd_keygen[1909]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 30 12:50:00.023743 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 30 12:50:00.038475 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 30 12:50:00.051375 systemd[1]: issuegen.service: Deactivated successfully.
Apr 30 12:50:00.053086 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 30 12:50:00.061324 ntpd[1882]: bind(24) AF_INET6 fe80::40a:b3ff:fe75:f5eb%2#123 flags 0x11 failed: Cannot assign requested address
Apr 30 12:50:00.061589 ntpd[1882]: unable to create socket on eth0 (6) for fe80::40a:b3ff:fe75:f5eb%2#123
Apr 30 12:50:00.064886 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 30 12:50:00.066148 ntpd[1882]: 30 Apr 12:50:00 ntpd[1882]: bind(24) AF_INET6 fe80::40a:b3ff:fe75:f5eb%2#123 flags 0x11 failed: Cannot assign requested address
Apr 30 12:50:00.066148 ntpd[1882]: 30 Apr 12:50:00 ntpd[1882]: unable to create socket on eth0 (6) for fe80::40a:b3ff:fe75:f5eb%2#123
Apr 30 12:50:00.066148 ntpd[1882]: 30 Apr 12:50:00 ntpd[1882]: failed to init interface for address fe80::40a:b3ff:fe75:f5eb%2
Apr 30 12:50:00.061609 ntpd[1882]: failed to init interface for address fe80::40a:b3ff:fe75:f5eb%2
Apr 30 12:50:00.068283 containerd[1911]: time="2025-04-30T12:50:00.068048705Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 30 12:50:00.074404 containerd[1911]: time="2025-04-30T12:50:00.073894560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 30 12:50:00.074404 containerd[1911]: time="2025-04-30T12:50:00.073942715Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 30 12:50:00.074404 containerd[1911]: time="2025-04-30T12:50:00.073967664Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 30 12:50:00.074404 containerd[1911]: time="2025-04-30T12:50:00.074157620Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 30 12:50:00.074404 containerd[1911]: time="2025-04-30T12:50:00.074177493Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 30 12:50:00.074404 containerd[1911]: time="2025-04-30T12:50:00.074245758Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 12:50:00.074404 containerd[1911]: time="2025-04-30T12:50:00.074262986Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 30 12:50:00.074739 containerd[1911]: time="2025-04-30T12:50:00.074648912Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 12:50:00.074739 containerd[1911]: time="2025-04-30T12:50:00.074677214Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 30 12:50:00.074739 containerd[1911]: time="2025-04-30T12:50:00.074698726Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 12:50:00.074739 containerd[1911]: time="2025-04-30T12:50:00.074714075Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 30 12:50:00.074879 containerd[1911]: time="2025-04-30T12:50:00.074822921Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 30 12:50:00.075439 containerd[1911]: time="2025-04-30T12:50:00.075093523Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 30 12:50:00.075439 containerd[1911]: time="2025-04-30T12:50:00.075297506Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 12:50:00.075439 containerd[1911]: time="2025-04-30T12:50:00.075318232Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 30 12:50:00.075439 containerd[1911]: time="2025-04-30T12:50:00.075416601Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 30 12:50:00.076920 containerd[1911]: time="2025-04-30T12:50:00.076540840Z" level=info msg="metadata content store policy set" policy=shared
Apr 30 12:50:00.081226 containerd[1911]: time="2025-04-30T12:50:00.081170805Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 30 12:50:00.082475 containerd[1911]: time="2025-04-30T12:50:00.081599494Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 30 12:50:00.082475 containerd[1911]: time="2025-04-30T12:50:00.081632907Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 30 12:50:00.082475 containerd[1911]: time="2025-04-30T12:50:00.081666585Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 30 12:50:00.082475 containerd[1911]: time="2025-04-30T12:50:00.081688955Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 30 12:50:00.082475 containerd[1911]: time="2025-04-30T12:50:00.081878056Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 30 12:50:00.082475 containerd[1911]: time="2025-04-30T12:50:00.082219526Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 30 12:50:00.082475 containerd[1911]: time="2025-04-30T12:50:00.082372014Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 30 12:50:00.082475 containerd[1911]: time="2025-04-30T12:50:00.082395053Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 30 12:50:00.082475 containerd[1911]: time="2025-04-30T12:50:00.082416340Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 30 12:50:00.082475 containerd[1911]: time="2025-04-30T12:50:00.082438392Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 30 12:50:00.082475 containerd[1911]: time="2025-04-30T12:50:00.082474165Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 30 12:50:00.082923 containerd[1911]: time="2025-04-30T12:50:00.082507069Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 30 12:50:00.082923 containerd[1911]: time="2025-04-30T12:50:00.082525126Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 30 12:50:00.082923 containerd[1911]: time="2025-04-30T12:50:00.082543944Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 30 12:50:00.082923 containerd[1911]: time="2025-04-30T12:50:00.082562364Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 30 12:50:00.082923 containerd[1911]: time="2025-04-30T12:50:00.082579230Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 30 12:50:00.082923 containerd[1911]: time="2025-04-30T12:50:00.082597728Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 30 12:50:00.082923 containerd[1911]: time="2025-04-30T12:50:00.082626457Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 30 12:50:00.082923 containerd[1911]: time="2025-04-30T12:50:00.082647128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 30 12:50:00.082923 containerd[1911]: time="2025-04-30T12:50:00.082665859Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 30 12:50:00.082923 containerd[1911]: time="2025-04-30T12:50:00.082685942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 30 12:50:00.082923 containerd[1911]: time="2025-04-30T12:50:00.082704027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 30 12:50:00.082923 containerd[1911]: time="2025-04-30T12:50:00.082723136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 30 12:50:00.082923 containerd[1911]: time="2025-04-30T12:50:00.082740874Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 30 12:50:00.082923 containerd[1911]: time="2025-04-30T12:50:00.082760897Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 30 12:50:00.083435 containerd[1911]: time="2025-04-30T12:50:00.082780465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 30 12:50:00.083435 containerd[1911]: time="2025-04-30T12:50:00.082802847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 30 12:50:00.083435 containerd[1911]: time="2025-04-30T12:50:00.082820694Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 30 12:50:00.083435 containerd[1911]: time="2025-04-30T12:50:00.082839810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 30 12:50:00.083435 containerd[1911]: time="2025-04-30T12:50:00.082860501Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 30 12:50:00.083435 containerd[1911]: time="2025-04-30T12:50:00.082882730Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 30 12:50:00.083435 containerd[1911]: time="2025-04-30T12:50:00.082914164Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 30 12:50:00.083435 containerd[1911]: time="2025-04-30T12:50:00.082934918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 30 12:50:00.083435 containerd[1911]: time="2025-04-30T12:50:00.082952291Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 30 12:50:00.083435 containerd[1911]: time="2025-04-30T12:50:00.083009387Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 30 12:50:00.083435 containerd[1911]: time="2025-04-30T12:50:00.083034993Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 30 12:50:00.083435 containerd[1911]: time="2025-04-30T12:50:00.083051414Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 30 12:50:00.083435 containerd[1911]: time="2025-04-30T12:50:00.083071084Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 30 12:50:00.083876 containerd[1911]: time="2025-04-30T12:50:00.083086200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 30 12:50:00.083876 containerd[1911]: time="2025-04-30T12:50:00.083112081Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 30 12:50:00.083876 containerd[1911]: time="2025-04-30T12:50:00.083128602Z" level=info msg="NRI interface is disabled by configuration."
Apr 30 12:50:00.083876 containerd[1911]: time="2025-04-30T12:50:00.083144194Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 30 12:50:00.085405 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 30 12:50:00.088162 containerd[1911]: time="2025-04-30T12:50:00.084519495Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 30 12:50:00.088162 containerd[1911]: time="2025-04-30T12:50:00.084599414Z" level=info msg="Connect containerd service"
Apr 30 12:50:00.088162 containerd[1911]: time="2025-04-30T12:50:00.084659834Z" level=info msg="using legacy CRI server"
Apr 30 12:50:00.088162 containerd[1911]: time="2025-04-30T12:50:00.084669287Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 30 12:50:00.088162 containerd[1911]: time="2025-04-30T12:50:00.084862034Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 30 12:50:00.088162 containerd[1911]: time="2025-04-30T12:50:00.085614810Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 30 12:50:00.088162 containerd[1911]: time="2025-04-30T12:50:00.086330886Z" level=info msg="Start subscribing containerd event"
Apr 30 12:50:00.088162 containerd[1911]: time="2025-04-30T12:50:00.086442106Z" level=info msg="Start recovering state"
Apr 30 12:50:00.088162 containerd[1911]: time="2025-04-30T12:50:00.086635990Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 30 12:50:00.088162 containerd[1911]: time="2025-04-30T12:50:00.086704581Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 30 12:50:00.088162 containerd[1911]: time="2025-04-30T12:50:00.087585479Z" level=info msg="Start event monitor"
Apr 30 12:50:00.088162 containerd[1911]: time="2025-04-30T12:50:00.087619239Z" level=info msg="Start snapshots syncer"
Apr 30 12:50:00.088162 containerd[1911]: time="2025-04-30T12:50:00.087633689Z" level=info msg="Start cni network conf syncer for default"
Apr 30 12:50:00.088162 containerd[1911]: time="2025-04-30T12:50:00.087651190Z" level=info msg="Start streaming server"
Apr 30 12:50:00.088162 containerd[1911]: time="2025-04-30T12:50:00.087807300Z" level=info msg="containerd successfully booted in 0.114224s"
Apr 30 12:50:00.089057 systemd[1]: Started containerd.service - containerd container runtime.
Apr 30 12:50:00.103041 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 30 12:50:00.105875 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 30 12:50:00.108195 systemd[1]: Reached target getty.target - Login Prompts.
Apr 30 12:50:00.369854 tar[1895]: linux-amd64/LICENSE
Apr 30 12:50:00.370272 tar[1895]: linux-amd64/README.md
Apr 30 12:50:00.398808 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 30 12:50:00.754687 systemd-networkd[1824]: eth0: Gained IPv6LL
Apr 30 12:50:00.757636 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 30 12:50:00.759124 systemd[1]: Reached target network-online.target - Network is Online.
Apr 30 12:50:00.764910 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Apr 30 12:50:00.768513 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 12:50:00.774832 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 30 12:50:00.819771 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 30 12:50:00.837505 amazon-ssm-agent[2102]: Initializing new seelog logger
Apr 30 12:50:00.837505 amazon-ssm-agent[2102]: New Seelog Logger Creation Complete
Apr 30 12:50:00.837505 amazon-ssm-agent[2102]: 2025/04/30 12:50:00 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 30 12:50:00.837505 amazon-ssm-agent[2102]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 30 12:50:00.838213 amazon-ssm-agent[2102]: 2025/04/30 12:50:00 processing appconfig overrides
Apr 30 12:50:00.838593 amazon-ssm-agent[2102]: 2025/04/30 12:50:00 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 30 12:50:00.838654 amazon-ssm-agent[2102]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 30 12:50:00.838748 amazon-ssm-agent[2102]: 2025/04/30 12:50:00 processing appconfig overrides
Apr 30 12:50:00.839064 amazon-ssm-agent[2102]: 2025/04/30 12:50:00 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 30 12:50:00.839129 amazon-ssm-agent[2102]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 30 12:50:00.839260 amazon-ssm-agent[2102]: 2025/04/30 12:50:00 processing appconfig overrides
Apr 30 12:50:00.839785 amazon-ssm-agent[2102]: 2025-04-30 12:50:00 INFO Proxy environment variables:
Apr 30 12:50:00.842834 amazon-ssm-agent[2102]: 2025/04/30 12:50:00 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 30 12:50:00.842939 amazon-ssm-agent[2102]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 30 12:50:00.843246 amazon-ssm-agent[2102]: 2025/04/30 12:50:00 processing appconfig overrides
Apr 30 12:50:00.939284 amazon-ssm-agent[2102]: 2025-04-30 12:50:00 INFO https_proxy:
Apr 30 12:50:01.037857 amazon-ssm-agent[2102]: 2025-04-30 12:50:00 INFO http_proxy:
Apr 30 12:50:01.136808 amazon-ssm-agent[2102]: 2025-04-30 12:50:00 INFO no_proxy:
Apr 30 12:50:01.234783 amazon-ssm-agent[2102]: 2025-04-30 12:50:00 INFO Checking if agent identity type OnPrem can be assumed
Apr 30 12:50:01.276629 amazon-ssm-agent[2102]: 2025-04-30 12:50:00 INFO Checking if agent identity type EC2 can be assumed
Apr 30 12:50:01.276629 amazon-ssm-agent[2102]: 2025-04-30 12:50:00 INFO Agent will take identity from EC2
Apr 30 12:50:01.276629 amazon-ssm-agent[2102]: 2025-04-30 12:50:00 INFO [amazon-ssm-agent] using named pipe channel for IPC
Apr 30 12:50:01.276629 amazon-ssm-agent[2102]: 2025-04-30 12:50:00 INFO [amazon-ssm-agent] using named pipe channel for IPC
Apr 30 12:50:01.276629 amazon-ssm-agent[2102]: 2025-04-30 12:50:00 INFO [amazon-ssm-agent] using named pipe channel for IPC
Apr 30 12:50:01.276629 amazon-ssm-agent[2102]: 2025-04-30 12:50:00 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Apr 30 12:50:01.276629 amazon-ssm-agent[2102]: 2025-04-30 12:50:00 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Apr 30 12:50:01.276629 amazon-ssm-agent[2102]: 2025-04-30 12:50:00 INFO [amazon-ssm-agent] Starting Core Agent
Apr 30 12:50:01.276629 amazon-ssm-agent[2102]: 2025-04-30 12:50:00 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Apr 30 12:50:01.276629 amazon-ssm-agent[2102]: 2025-04-30 12:50:00 INFO [Registrar] Starting registrar module
Apr 30 12:50:01.277055 amazon-ssm-agent[2102]: 2025-04-30 12:50:00 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Apr 30 12:50:01.277055 amazon-ssm-agent[2102]: 2025-04-30 12:50:01 INFO [EC2Identity] EC2 registration was successful.
Apr 30 12:50:01.277055 amazon-ssm-agent[2102]: 2025-04-30 12:50:01 INFO [CredentialRefresher] credentialRefresher has started
Apr 30 12:50:01.277055 amazon-ssm-agent[2102]: 2025-04-30 12:50:01 INFO [CredentialRefresher] Starting credentials refresher loop
Apr 30 12:50:01.277055 amazon-ssm-agent[2102]: 2025-04-30 12:50:01 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Apr 30 12:50:01.333539 amazon-ssm-agent[2102]: 2025-04-30 12:50:01 INFO [CredentialRefresher] Next credential rotation will be in 30.2083237041 minutes
Apr 30 12:50:01.671619 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 30 12:50:01.676849 systemd[1]: Started sshd@0-172.31.19.82:22-147.75.109.163:44120.service - OpenSSH per-connection server daemon (147.75.109.163:44120).
Apr 30 12:50:01.963809 sshd[2123]: Accepted publickey for core from 147.75.109.163 port 44120 ssh2: RSA SHA256:hWFbLZfpyLdN3yuB7pBWwnRO+bXJlsyzaawWuSZBTyk
Apr 30 12:50:01.966104 sshd-session[2123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:50:01.978278 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 30 12:50:01.992760 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 30 12:50:02.000902 systemd-logind[1889]: New session 1 of user core.
Apr 30 12:50:02.012020 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 30 12:50:02.020031 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 30 12:50:02.032974 (systemd)[2127]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 30 12:50:02.036863 systemd-logind[1889]: New session c1 of user core.
Apr 30 12:50:02.215509 systemd[2127]: Queued start job for default target default.target.
Apr 30 12:50:02.234977 systemd[2127]: Created slice app.slice - User Application Slice.
Apr 30 12:50:02.235026 systemd[2127]: Reached target paths.target - Paths.
Apr 30 12:50:02.235083 systemd[2127]: Reached target timers.target - Timers.
Apr 30 12:50:02.236516 systemd[2127]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 30 12:50:02.248989 systemd[2127]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 30 12:50:02.249069 systemd[2127]: Reached target sockets.target - Sockets.
Apr 30 12:50:02.249129 systemd[2127]: Reached target basic.target - Basic System.
Apr 30 12:50:02.249181 systemd[2127]: Reached target default.target - Main User Target.
Apr 30 12:50:02.249218 systemd[2127]: Startup finished in 204ms.
Apr 30 12:50:02.249376 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 30 12:50:02.256662 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 30 12:50:02.288744 amazon-ssm-agent[2102]: 2025-04-30 12:50:02 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Apr 30 12:50:02.389128 amazon-ssm-agent[2102]: 2025-04-30 12:50:02 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2137) started
Apr 30 12:50:02.466888 systemd[1]: Started sshd@1-172.31.19.82:22-147.75.109.163:44122.service - OpenSSH per-connection server daemon (147.75.109.163:44122).
Apr 30 12:50:02.489960 amazon-ssm-agent[2102]: 2025-04-30 12:50:02 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Apr 30 12:50:02.714153 sshd[2150]: Accepted publickey for core from 147.75.109.163 port 44122 ssh2: RSA SHA256:hWFbLZfpyLdN3yuB7pBWwnRO+bXJlsyzaawWuSZBTyk
Apr 30 12:50:02.715435 sshd-session[2150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:50:02.719957 systemd-logind[1889]: New session 2 of user core.
Apr 30 12:50:02.727725 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 30 12:50:02.904609 sshd[2152]: Connection closed by 147.75.109.163 port 44122
Apr 30 12:50:02.905917 sshd-session[2150]: pam_unix(sshd:session): session closed for user core
Apr 30 12:50:02.909811 systemd[1]: sshd@1-172.31.19.82:22-147.75.109.163:44122.service: Deactivated successfully.
Apr 30 12:50:02.911443 systemd[1]: session-2.scope: Deactivated successfully.
Apr 30 12:50:02.912140 systemd-logind[1889]: Session 2 logged out. Waiting for processes to exit.
Apr 30 12:50:02.913003 systemd-logind[1889]: Removed session 2.
Apr 30 12:50:02.955805 systemd[1]: Started sshd@2-172.31.19.82:22-147.75.109.163:44136.service - OpenSSH per-connection server daemon (147.75.109.163:44136).
Apr 30 12:50:03.061272 ntpd[1882]: Listen normally on 7 eth0 [fe80::40a:b3ff:fe75:f5eb%2]:123
Apr 30 12:50:03.061762 ntpd[1882]: 30 Apr 12:50:03 ntpd[1882]: Listen normally on 7 eth0 [fe80::40a:b3ff:fe75:f5eb%2]:123
Apr 30 12:50:03.097821 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 12:50:03.100257 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 30 12:50:03.101855 systemd[1]: Startup finished in 590ms (kernel) + 7.239s (initrd) + 8.207s (userspace) = 16.037s.
Apr 30 12:50:03.106897 (kubelet)[2165]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 12:50:03.210521 sshd[2158]: Accepted publickey for core from 147.75.109.163 port 44136 ssh2: RSA SHA256:hWFbLZfpyLdN3yuB7pBWwnRO+bXJlsyzaawWuSZBTyk
Apr 30 12:50:03.211899 sshd-session[2158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:50:03.217112 systemd-logind[1889]: New session 3 of user core.
Apr 30 12:50:03.221643 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 30 12:50:03.402401 sshd[2170]: Connection closed by 147.75.109.163 port 44136
Apr 30 12:50:03.403549 sshd-session[2158]: pam_unix(sshd:session): session closed for user core
Apr 30 12:50:03.406410 systemd[1]: sshd@2-172.31.19.82:22-147.75.109.163:44136.service: Deactivated successfully.
Apr 30 12:50:03.408740 systemd[1]: session-3.scope: Deactivated successfully.
Apr 30 12:50:03.409835 systemd-logind[1889]: Session 3 logged out. Waiting for processes to exit.
Apr 30 12:50:03.410969 systemd-logind[1889]: Removed session 3.
Apr 30 12:50:04.417658 kubelet[2165]: E0430 12:50:04.417570 2165 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 12:50:04.420310 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 12:50:04.420478 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 12:50:04.420784 systemd[1]: kubelet.service: Consumed 996ms CPU time, 246.2M memory peak.
Apr 30 12:50:07.418088 systemd-resolved[1825]: Clock change detected. Flushing caches.
Apr 30 12:50:14.809481 systemd[1]: Started sshd@3-172.31.19.82:22-147.75.109.163:51968.service - OpenSSH per-connection server daemon (147.75.109.163:51968).
Apr 30 12:50:15.056350 sshd[2183]: Accepted publickey for core from 147.75.109.163 port 51968 ssh2: RSA SHA256:hWFbLZfpyLdN3yuB7pBWwnRO+bXJlsyzaawWuSZBTyk
Apr 30 12:50:15.057721 sshd-session[2183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:50:15.062110 systemd-logind[1889]: New session 4 of user core.
Apr 30 12:50:15.065413 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 30 12:50:15.247723 sshd[2185]: Connection closed by 147.75.109.163 port 51968
Apr 30 12:50:15.248353 sshd-session[2183]: pam_unix(sshd:session): session closed for user core
Apr 30 12:50:15.251987 systemd[1]: sshd@3-172.31.19.82:22-147.75.109.163:51968.service: Deactivated successfully.
Apr 30 12:50:15.253786 systemd[1]: session-4.scope: Deactivated successfully.
Apr 30 12:50:15.254484 systemd-logind[1889]: Session 4 logged out. Waiting for processes to exit.
Apr 30 12:50:15.255456 systemd-logind[1889]: Removed session 4.
Apr 30 12:50:15.298653 systemd[1]: Started sshd@4-172.31.19.82:22-147.75.109.163:51984.service - OpenSSH per-connection server daemon (147.75.109.163:51984).
Apr 30 12:50:15.546324 sshd[2191]: Accepted publickey for core from 147.75.109.163 port 51984 ssh2: RSA SHA256:hWFbLZfpyLdN3yuB7pBWwnRO+bXJlsyzaawWuSZBTyk
Apr 30 12:50:15.547528 sshd-session[2191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:50:15.552036 systemd-logind[1889]: New session 5 of user core.
Apr 30 12:50:15.562456 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 30 12:50:15.733082 sshd[2193]: Connection closed by 147.75.109.163 port 51984
Apr 30 12:50:15.733632 sshd-session[2191]: pam_unix(sshd:session): session closed for user core
Apr 30 12:50:15.736404 systemd[1]: sshd@4-172.31.19.82:22-147.75.109.163:51984.service: Deactivated successfully.
Apr 30 12:50:15.738153 systemd[1]: session-5.scope: Deactivated successfully.
Apr 30 12:50:15.739581 systemd-logind[1889]: Session 5 logged out. Waiting for processes to exit.
Apr 30 12:50:15.740560 systemd-logind[1889]: Removed session 5.
Apr 30 12:50:15.779340 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 30 12:50:15.794500 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 12:50:15.796347 systemd[1]: Started sshd@5-172.31.19.82:22-147.75.109.163:51992.service - OpenSSH per-connection server daemon (147.75.109.163:51992).
Apr 30 12:50:15.960042 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 12:50:15.974627 (kubelet)[2209]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 12:50:16.023480 kubelet[2209]: E0430 12:50:16.023423 2209 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 12:50:16.027588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 12:50:16.027735 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 12:50:16.028179 systemd[1]: kubelet.service: Consumed 144ms CPU time, 97.3M memory peak.
Apr 30 12:50:16.046261 sshd[2200]: Accepted publickey for core from 147.75.109.163 port 51992 ssh2: RSA SHA256:hWFbLZfpyLdN3yuB7pBWwnRO+bXJlsyzaawWuSZBTyk
Apr 30 12:50:16.047683 sshd-session[2200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:50:16.053599 systemd-logind[1889]: New session 6 of user core.
Apr 30 12:50:16.063431 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 30 12:50:16.236567 sshd[2217]: Connection closed by 147.75.109.163 port 51992
Apr 30 12:50:16.237109 sshd-session[2200]: pam_unix(sshd:session): session closed for user core
Apr 30 12:50:16.239788 systemd[1]: sshd@5-172.31.19.82:22-147.75.109.163:51992.service: Deactivated successfully.
Apr 30 12:50:16.241484 systemd[1]: session-6.scope: Deactivated successfully.
Apr 30 12:50:16.242716 systemd-logind[1889]: Session 6 logged out. Waiting for processes to exit.
Apr 30 12:50:16.243775 systemd-logind[1889]: Removed session 6.
Apr 30 12:50:16.287464 systemd[1]: Started sshd@6-172.31.19.82:22-147.75.109.163:52008.service - OpenSSH per-connection server daemon (147.75.109.163:52008).
Apr 30 12:50:16.537192 sshd[2223]: Accepted publickey for core from 147.75.109.163 port 52008 ssh2: RSA SHA256:hWFbLZfpyLdN3yuB7pBWwnRO+bXJlsyzaawWuSZBTyk
Apr 30 12:50:16.538500 sshd-session[2223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:50:16.542855 systemd-logind[1889]: New session 7 of user core.
Apr 30 12:50:16.553436 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 30 12:50:16.707063 sudo[2226]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 30 12:50:16.707458 sudo[2226]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 12:50:16.722825 sudo[2226]: pam_unix(sudo:session): session closed for user root
Apr 30 12:50:16.760180 sshd[2225]: Connection closed by 147.75.109.163 port 52008
Apr 30 12:50:16.760920 sshd-session[2223]: pam_unix(sshd:session): session closed for user core
Apr 30 12:50:16.764251 systemd[1]: sshd@6-172.31.19.82:22-147.75.109.163:52008.service: Deactivated successfully.
Apr 30 12:50:16.766000 systemd[1]: session-7.scope: Deactivated successfully.
Apr 30 12:50:16.767365 systemd-logind[1889]: Session 7 logged out. Waiting for processes to exit.
Apr 30 12:50:16.768639 systemd-logind[1889]: Removed session 7.
Apr 30 12:50:16.814525 systemd[1]: Started sshd@7-172.31.19.82:22-147.75.109.163:49338.service - OpenSSH per-connection server daemon (147.75.109.163:49338).
Apr 30 12:50:17.063651 sshd[2232]: Accepted publickey for core from 147.75.109.163 port 49338 ssh2: RSA SHA256:hWFbLZfpyLdN3yuB7pBWwnRO+bXJlsyzaawWuSZBTyk
Apr 30 12:50:17.065029 sshd-session[2232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:50:17.069686 systemd-logind[1889]: New session 8 of user core.
Apr 30 12:50:17.076402 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 30 12:50:17.219923 sudo[2236]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 30 12:50:17.220325 sudo[2236]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 12:50:17.224094 sudo[2236]: pam_unix(sudo:session): session closed for user root
Apr 30 12:50:17.229468 sudo[2235]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Apr 30 12:50:17.229833 sudo[2235]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 12:50:17.243577 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 30 12:50:17.273252 augenrules[2258]: No rules
Apr 30 12:50:17.274735 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 30 12:50:17.275015 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 30 12:50:17.276286 sudo[2235]: pam_unix(sudo:session): session closed for user root
Apr 30 12:50:17.313887 sshd[2234]: Connection closed by 147.75.109.163 port 49338
Apr 30 12:50:17.314495 sshd-session[2232]: pam_unix(sshd:session): session closed for user core
Apr 30 12:50:17.317158 systemd[1]: sshd@7-172.31.19.82:22-147.75.109.163:49338.service: Deactivated successfully.
Apr 30 12:50:17.318897 systemd[1]: session-8.scope: Deactivated successfully.
Apr 30 12:50:17.320152 systemd-logind[1889]: Session 8 logged out. Waiting for processes to exit.
Apr 30 12:50:17.320976 systemd-logind[1889]: Removed session 8.
Apr 30 12:50:17.366480 systemd[1]: Started sshd@8-172.31.19.82:22-147.75.109.163:49340.service - OpenSSH per-connection server daemon (147.75.109.163:49340).
Apr 30 12:50:17.617855 sshd[2267]: Accepted publickey for core from 147.75.109.163 port 49340 ssh2: RSA SHA256:hWFbLZfpyLdN3yuB7pBWwnRO+bXJlsyzaawWuSZBTyk
Apr 30 12:50:17.618130 sshd-session[2267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:50:17.626397 systemd-logind[1889]: New session 9 of user core.
Apr 30 12:50:17.636402 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 30 12:50:17.772371 sudo[2270]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 30 12:50:17.772642 sudo[2270]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 12:50:18.131558 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 30 12:50:18.132489 (dockerd)[2288]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 30 12:50:18.515527 dockerd[2288]: time="2025-04-30T12:50:18.515388191Z" level=info msg="Starting up"
Apr 30 12:50:18.642263 dockerd[2288]: time="2025-04-30T12:50:18.638018411Z" level=info msg="Loading containers: start."
Apr 30 12:50:18.808205 kernel: Initializing XFRM netlink socket
Apr 30 12:50:18.839060 (udev-worker)[2311]: Network interface NamePolicy= disabled on kernel command line.
Apr 30 12:50:18.903539 systemd-networkd[1824]: docker0: Link UP
Apr 30 12:50:18.937878 dockerd[2288]: time="2025-04-30T12:50:18.937831103Z" level=info msg="Loading containers: done."
Apr 30 12:50:18.954707 dockerd[2288]: time="2025-04-30T12:50:18.954644999Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 30 12:50:18.954890 dockerd[2288]: time="2025-04-30T12:50:18.954747818Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Apr 30 12:50:18.954890 dockerd[2288]: time="2025-04-30T12:50:18.954861477Z" level=info msg="Daemon has completed initialization"
Apr 30 12:50:18.987572 dockerd[2288]: time="2025-04-30T12:50:18.987518903Z" level=info msg="API listen on /run/docker.sock"
Apr 30 12:50:18.989639 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 30 12:50:20.592977 containerd[1911]: time="2025-04-30T12:50:20.592714084Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\""
Apr 30 12:50:21.157622 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2090298337.mount: Deactivated successfully.
Apr 30 12:50:23.192429 containerd[1911]: time="2025-04-30T12:50:23.192361308Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:50:23.193390 containerd[1911]: time="2025-04-30T12:50:23.193346756Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674873"
Apr 30 12:50:23.194208 containerd[1911]: time="2025-04-30T12:50:23.194148677Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:50:23.197205 containerd[1911]: time="2025-04-30T12:50:23.197110833Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:50:23.198584 containerd[1911]: time="2025-04-30T12:50:23.198337863Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 2.605586453s"
Apr 30 12:50:23.198584 containerd[1911]: time="2025-04-30T12:50:23.198383012Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\""
Apr 30 12:50:23.222331 containerd[1911]: time="2025-04-30T12:50:23.222147906Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\""
Apr 30 12:50:25.458682 containerd[1911]: time="2025-04-30T12:50:25.458598931Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:50:25.459623 containerd[1911]: time="2025-04-30T12:50:25.459574020Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617534"
Apr 30 12:50:25.460636 containerd[1911]: time="2025-04-30T12:50:25.460592868Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:50:25.463058 containerd[1911]: time="2025-04-30T12:50:25.463027850Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:50:25.464217 containerd[1911]: time="2025-04-30T12:50:25.464084290Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 2.241592762s"
Apr 30 12:50:25.464217 containerd[1911]: time="2025-04-30T12:50:25.464115617Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\""
Apr 30 12:50:25.485211 containerd[1911]: time="2025-04-30T12:50:25.485158485Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\""
Apr 30 12:50:26.278424 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 30 12:50:26.283386 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 12:50:26.551318 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 12:50:26.558659 (kubelet)[2559]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 12:50:26.628548 kubelet[2559]: E0430 12:50:26.628503 2559 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 12:50:26.633096 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 12:50:26.633360 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 12:50:26.634277 systemd[1]: kubelet.service: Consumed 161ms CPU time, 93.7M memory peak.
Apr 30 12:50:27.035017 containerd[1911]: time="2025-04-30T12:50:27.034889236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:50:27.036161 containerd[1911]: time="2025-04-30T12:50:27.036116025Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903682"
Apr 30 12:50:27.037347 containerd[1911]: time="2025-04-30T12:50:27.037306627Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:50:27.039917 containerd[1911]: time="2025-04-30T12:50:27.039869442Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:50:27.041103 containerd[1911]: time="2025-04-30T12:50:27.040991378Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 1.555781891s"
Apr 30 12:50:27.041103 containerd[1911]: time="2025-04-30T12:50:27.041022444Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\""
Apr 30 12:50:27.063988 containerd[1911]: time="2025-04-30T12:50:27.063942690Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\""
Apr 30 12:50:28.057499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount529941512.mount: Deactivated successfully.
Apr 30 12:50:28.508770 containerd[1911]: time="2025-04-30T12:50:28.508625253Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:50:28.509853 containerd[1911]: time="2025-04-30T12:50:28.509798450Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185817"
Apr 30 12:50:28.511043 containerd[1911]: time="2025-04-30T12:50:28.510993351Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:50:28.512824 containerd[1911]: time="2025-04-30T12:50:28.512777249Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:50:28.513689 containerd[1911]: time="2025-04-30T12:50:28.513318007Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 1.449259138s"
Apr 30 12:50:28.513689 containerd[1911]: time="2025-04-30T12:50:28.513350841Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\""
Apr 30 12:50:28.535294 containerd[1911]: time="2025-04-30T12:50:28.535259323Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Apr 30 12:50:29.108828 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount30666838.mount: Deactivated successfully.
Apr 30 12:50:30.103877 containerd[1911]: time="2025-04-30T12:50:30.103803571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:50:30.104844 containerd[1911]: time="2025-04-30T12:50:30.104796875Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Apr 30 12:50:30.105970 containerd[1911]: time="2025-04-30T12:50:30.105917911Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:50:30.108506 containerd[1911]: time="2025-04-30T12:50:30.108478432Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:50:30.109727 containerd[1911]: time="2025-04-30T12:50:30.109542971Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.574247852s"
Apr 30 12:50:30.109727 containerd[1911]: time="2025-04-30T12:50:30.109577130Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Apr 30 12:50:30.134676 containerd[1911]: time="2025-04-30T12:50:30.134639883Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Apr 30 12:50:30.641003 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount448484612.mount: Deactivated successfully.
Apr 30 12:50:30.652845 containerd[1911]: time="2025-04-30T12:50:30.652789261Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:50:30.654673 containerd[1911]: time="2025-04-30T12:50:30.654621474Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Apr 30 12:50:30.657025 containerd[1911]: time="2025-04-30T12:50:30.656962210Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:50:30.661068 containerd[1911]: time="2025-04-30T12:50:30.661009340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:50:30.662270 containerd[1911]: time="2025-04-30T12:50:30.661753674Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 527.074989ms"
Apr 30 12:50:30.662270 containerd[1911]: time="2025-04-30T12:50:30.661793770Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Apr 30 12:50:30.686361 containerd[1911]: time="2025-04-30T12:50:30.686314987Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Apr 30 12:50:31.316627 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Apr 30 12:50:31.324470 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount992595812.mount: Deactivated successfully.
Apr 30 12:50:34.095716 containerd[1911]: time="2025-04-30T12:50:34.095643065Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:50:34.097502 containerd[1911]: time="2025-04-30T12:50:34.097450414Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571"
Apr 30 12:50:34.099874 containerd[1911]: time="2025-04-30T12:50:34.099820520Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:50:34.103767 containerd[1911]: time="2025-04-30T12:50:34.103730440Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:50:34.104928 containerd[1911]: time="2025-04-30T12:50:34.104692495Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.418343368s"
Apr 30 12:50:34.104928 containerd[1911]: time="2025-04-30T12:50:34.104722252Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Apr 30 12:50:36.884096 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Apr 30 12:50:36.893295 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 12:50:37.147462 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 12:50:37.150778 (kubelet)[2756]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 12:50:37.228265 kubelet[2756]: E0430 12:50:37.228213 2756 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 12:50:37.231916 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 12:50:37.232116 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 12:50:37.233443 systemd[1]: kubelet.service: Consumed 183ms CPU time, 96.5M memory peak.
Apr 30 12:50:37.735509 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 12:50:37.735905 systemd[1]: kubelet.service: Consumed 183ms CPU time, 96.5M memory peak.
Apr 30 12:50:37.742552 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 12:50:37.769560 systemd[1]: Reload requested from client PID 2771 ('systemctl') (unit session-9.scope)...
Apr 30 12:50:37.769578 systemd[1]: Reloading...
Apr 30 12:50:37.878190 zram_generator::config[2816]: No configuration found.
Apr 30 12:50:38.025065 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 12:50:38.152456 systemd[1]: Reloading finished in 382 ms.
Apr 30 12:50:38.199046 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 12:50:38.204591 (kubelet)[2870]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 30 12:50:38.208806 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 12:50:38.209200 systemd[1]: kubelet.service: Deactivated successfully.
Apr 30 12:50:38.209408 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 12:50:38.209454 systemd[1]: kubelet.service: Consumed 111ms CPU time, 85M memory peak.
Apr 30 12:50:38.214526 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 12:50:38.801630 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 12:50:38.812649 (kubelet)[2886]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 30 12:50:38.867552 kubelet[2886]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 30 12:50:38.867552 kubelet[2886]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Apr 30 12:50:38.867552 kubelet[2886]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 30 12:50:38.869483 kubelet[2886]: I0430 12:50:38.869427 2886 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 30 12:50:39.162581 kubelet[2886]: I0430 12:50:39.162461 2886 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Apr 30 12:50:39.162581 kubelet[2886]: I0430 12:50:39.162489 2886 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 30 12:50:39.162844 kubelet[2886]: I0430 12:50:39.162773 2886 server.go:927] "Client rotation is on, will bootstrap in background"
Apr 30 12:50:39.197364 kubelet[2886]: I0430 12:50:39.196975 2886 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 30 12:50:39.202645 kubelet[2886]: E0430 12:50:39.202612 2886 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.19.82:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.19.82:6443: connect: connection refused
Apr 30 12:50:39.226390 kubelet[2886]: I0430 12:50:39.226349 2886 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 30 12:50:39.228308 kubelet[2886]: I0430 12:50:39.228255 2886 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 30 12:50:39.228490 kubelet[2886]: I0430 12:50:39.228307 2886 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-19-82","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Apr 30 12:50:39.229281 kubelet[2886]: I0430 12:50:39.229258 2886 topology_manager.go:138] "Creating topology manager with none policy"
Apr 30 12:50:39.229281 kubelet[2886]: I0430 12:50:39.229279 2886 container_manager_linux.go:301] "Creating device plugin manager"
Apr 30 12:50:39.231385 kubelet[2886]: I0430 12:50:39.231359 2886 state_mem.go:36] "Initialized new in-memory state store"
Apr 30 12:50:39.232437 kubelet[2886]: I0430 12:50:39.232417 2886 kubelet.go:400] "Attempting to sync node with API server"
Apr 30 12:50:39.232437 kubelet[2886]: I0430 12:50:39.232435 2886 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 30 12:50:39.232660 kubelet[2886]: I0430 12:50:39.232459 2886 kubelet.go:312] "Adding apiserver pod source"
Apr 30 12:50:39.232660 kubelet[2886]: I0430 12:50:39.232481 2886 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 30 12:50:39.239131 kubelet[2886]: W0430 12:50:39.238999 2886 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.19.82:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.19.82:6443: connect: connection refused
Apr 30 12:50:39.239131 kubelet[2886]: E0430 12:50:39.239055 2886 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.19.82:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.19.82:6443: connect: connection refused
Apr 30 12:50:39.240985 kubelet[2886]: W0430 12:50:39.240668 2886 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.19.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-82&limit=500&resourceVersion=0": dial tcp 172.31.19.82:6443: connect: connection refused
Apr 30 12:50:39.240985 kubelet[2886]: E0430 12:50:39.240717 2886 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.19.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-82&limit=500&resourceVersion=0": dial tcp 172.31.19.82:6443: connect: connection refused
Apr 30 12:50:39.240985 kubelet[2886]: I0430 12:50:39.240811 2886 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Apr 30 12:50:39.243212 kubelet[2886]: I0430 12:50:39.243189 2886 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Apr 30 12:50:39.243309 kubelet[2886]: W0430 12:50:39.243261 2886 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 30 12:50:39.243881 kubelet[2886]: I0430 12:50:39.243785 2886 server.go:1264] "Started kubelet"
Apr 30 12:50:39.244870 kubelet[2886]: I0430 12:50:39.244846 2886 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 30 12:50:39.250896 kubelet[2886]: I0430 12:50:39.250859 2886 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Apr 30 12:50:39.251800 kubelet[2886]: E0430 12:50:39.251367 2886 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.19.82:6443/api/v1/namespaces/default/events\": dial tcp 172.31.19.82:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-19-82.183b199c35c2ff90 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-19-82,UID:ip-172-31-19-82,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-19-82,},FirstTimestamp:2025-04-30 12:50:39.2437636 +0000 UTC m=+0.426842857,LastTimestamp:2025-04-30 12:50:39.2437636 +0000 UTC m=+0.426842857,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-19-82,}"
Apr 30 12:50:39.251800 kubelet[2886]: I0430 12:50:39.251493 2886 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 30 12:50:39.251800 kubelet[2886]: I0430 12:50:39.251762 2886 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 30 12:50:39.254108 kubelet[2886]: E0430 12:50:39.254083 2886 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-19-82\" not found"
Apr 30 12:50:39.254224 kubelet[2886]: I0430 12:50:39.254133 2886 volume_manager.go:291] "Starting Kubelet Volume Manager"
Apr 30 12:50:39.254256 kubelet[2886]: I0430 12:50:39.254241 2886 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Apr 30 12:50:39.254289 kubelet[2886]: I0430 12:50:39.254285 2886 reconciler.go:26] "Reconciler: start to sync state"
Apr 30 12:50:39.255030 kubelet[2886]: W0430 12:50:39.254559 2886 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.19.82:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.82:6443: connect: connection refused
Apr 30 12:50:39.255030 kubelet[2886]: E0430 12:50:39.254601 2886 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.19.82:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.82:6443: connect: connection refused
Apr 30 12:50:39.255810 kubelet[2886]: I0430 12:50:39.255452 2886 server.go:455] "Adding debug handlers to kubelet server"
Apr 30 12:50:39.259185 kubelet[2886]: E0430 12:50:39.258688 2886 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-82?timeout=10s\": dial tcp 172.31.19.82:6443: connect: connection refused" interval="200ms"
Apr 30 12:50:39.259185 kubelet[2886]: I0430 12:50:39.258932 2886 factory.go:221] Registration of the systemd container factory successfully
Apr 30 12:50:39.259185 kubelet[2886]: I0430 12:50:39.259016 2886 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 30 12:50:39.264564 kubelet[2886]: I0430 12:50:39.264539 2886 factory.go:221] Registration of the containerd container factory successfully
Apr 30 12:50:39.272975 kubelet[2886]: I0430 12:50:39.272866 2886 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Apr 30 12:50:39.274230 kubelet[2886]: I0430 12:50:39.274002 2886 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Apr 30 12:50:39.274230 kubelet[2886]: I0430 12:50:39.274025 2886 status_manager.go:217] "Starting to sync pod status with apiserver"
Apr 30 12:50:39.274230 kubelet[2886]: I0430 12:50:39.274043 2886 kubelet.go:2337] "Starting kubelet main sync loop"
Apr 30 12:50:39.274230 kubelet[2886]: E0430 12:50:39.274078 2886 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 30 12:50:39.282696 kubelet[2886]: E0430 12:50:39.282666 2886 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 30 12:50:39.283397 kubelet[2886]: W0430 12:50:39.283350 2886 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.19.82:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.82:6443: connect: connection refused
Apr 30 12:50:39.283485 kubelet[2886]: E0430 12:50:39.283403 2886 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.19.82:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.82:6443: connect: connection refused
Apr 30 12:50:39.291081 kubelet[2886]: I0430 12:50:39.291053 2886 cpu_manager.go:214] "Starting CPU manager" policy="none"
Apr 30 12:50:39.291081 kubelet[2886]: I0430 12:50:39.291071 2886 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Apr 30 12:50:39.291081 kubelet[2886]: I0430 12:50:39.291088 2886 state_mem.go:36] "Initialized new in-memory state store"
Apr 30 12:50:39.296750 kubelet[2886]: I0430 12:50:39.296719 2886 policy_none.go:49] "None policy: Start"
Apr 30 12:50:39.297371 kubelet[2886]: I0430 12:50:39.297352 2886 memory_manager.go:170] "Starting memorymanager" policy="None"
Apr 30 12:50:39.297445 kubelet[2886]: I0430 12:50:39.297385 2886 state_mem.go:35] "Initializing new in-memory state store"
Apr 30 12:50:39.306830 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 30 12:50:39.315451 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 30 12:50:39.319025 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Apr 30 12:50:39.330099 kubelet[2886]: I0430 12:50:39.330076 2886 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 12:50:39.331290 kubelet[2886]: I0430 12:50:39.330741 2886 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 12:50:39.331290 kubelet[2886]: I0430 12:50:39.330851 2886 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 12:50:39.332990 kubelet[2886]: E0430 12:50:39.332840 2886 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-19-82\" not found" Apr 30 12:50:39.358407 kubelet[2886]: I0430 12:50:39.358349 2886 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-82" Apr 30 12:50:39.358683 kubelet[2886]: E0430 12:50:39.358661 2886 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.19.82:6443/api/v1/nodes\": dial tcp 172.31.19.82:6443: connect: connection refused" node="ip-172-31-19-82" Apr 30 12:50:39.375384 kubelet[2886]: I0430 12:50:39.375238 2886 topology_manager.go:215] "Topology Admit Handler" podUID="c6dbd436fdaca72d621389c589501f9b" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-19-82" Apr 30 12:50:39.376823 kubelet[2886]: I0430 12:50:39.376795 2886 topology_manager.go:215] "Topology Admit Handler" podUID="b6290bee8f5c6eaaf6fd8803fb5752ea" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-19-82" Apr 30 12:50:39.378532 kubelet[2886]: I0430 12:50:39.378376 2886 topology_manager.go:215] "Topology Admit Handler" podUID="5d10e2aca4cd4775a20a4af8ac37ad8d" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-19-82" Apr 30 12:50:39.384919 systemd[1]: Created slice kubepods-burstable-podc6dbd436fdaca72d621389c589501f9b.slice - libcontainer container kubepods-burstable-podc6dbd436fdaca72d621389c589501f9b.slice. 
Apr 30 12:50:39.400493 systemd[1]: Created slice kubepods-burstable-podb6290bee8f5c6eaaf6fd8803fb5752ea.slice - libcontainer container kubepods-burstable-podb6290bee8f5c6eaaf6fd8803fb5752ea.slice. Apr 30 12:50:39.415385 systemd[1]: Created slice kubepods-burstable-pod5d10e2aca4cd4775a20a4af8ac37ad8d.slice - libcontainer container kubepods-burstable-pod5d10e2aca4cd4775a20a4af8ac37ad8d.slice. Apr 30 12:50:39.455485 kubelet[2886]: I0430 12:50:39.455449 2886 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6dbd436fdaca72d621389c589501f9b-ca-certs\") pod \"kube-apiserver-ip-172-31-19-82\" (UID: \"c6dbd436fdaca72d621389c589501f9b\") " pod="kube-system/kube-apiserver-ip-172-31-19-82" Apr 30 12:50:39.455485 kubelet[2886]: I0430 12:50:39.455485 2886 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b6290bee8f5c6eaaf6fd8803fb5752ea-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-82\" (UID: \"b6290bee8f5c6eaaf6fd8803fb5752ea\") " pod="kube-system/kube-controller-manager-ip-172-31-19-82" Apr 30 12:50:39.455485 kubelet[2886]: I0430 12:50:39.455511 2886 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6dbd436fdaca72d621389c589501f9b-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-82\" (UID: \"c6dbd436fdaca72d621389c589501f9b\") " pod="kube-system/kube-apiserver-ip-172-31-19-82" Apr 30 12:50:39.455485 kubelet[2886]: I0430 12:50:39.455527 2886 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6dbd436fdaca72d621389c589501f9b-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-82\" (UID: \"c6dbd436fdaca72d621389c589501f9b\") " 
pod="kube-system/kube-apiserver-ip-172-31-19-82" Apr 30 12:50:39.455485 kubelet[2886]: I0430 12:50:39.455546 2886 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b6290bee8f5c6eaaf6fd8803fb5752ea-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-82\" (UID: \"b6290bee8f5c6eaaf6fd8803fb5752ea\") " pod="kube-system/kube-controller-manager-ip-172-31-19-82" Apr 30 12:50:39.455821 kubelet[2886]: I0430 12:50:39.455566 2886 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b6290bee8f5c6eaaf6fd8803fb5752ea-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-82\" (UID: \"b6290bee8f5c6eaaf6fd8803fb5752ea\") " pod="kube-system/kube-controller-manager-ip-172-31-19-82" Apr 30 12:50:39.455821 kubelet[2886]: I0430 12:50:39.455580 2886 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b6290bee8f5c6eaaf6fd8803fb5752ea-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-82\" (UID: \"b6290bee8f5c6eaaf6fd8803fb5752ea\") " pod="kube-system/kube-controller-manager-ip-172-31-19-82" Apr 30 12:50:39.455821 kubelet[2886]: I0430 12:50:39.455594 2886 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b6290bee8f5c6eaaf6fd8803fb5752ea-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-82\" (UID: \"b6290bee8f5c6eaaf6fd8803fb5752ea\") " pod="kube-system/kube-controller-manager-ip-172-31-19-82" Apr 30 12:50:39.455821 kubelet[2886]: I0430 12:50:39.455610 2886 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5d10e2aca4cd4775a20a4af8ac37ad8d-kubeconfig\") pod 
\"kube-scheduler-ip-172-31-19-82\" (UID: \"5d10e2aca4cd4775a20a4af8ac37ad8d\") " pod="kube-system/kube-scheduler-ip-172-31-19-82" Apr 30 12:50:39.459942 kubelet[2886]: E0430 12:50:39.459900 2886 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-82?timeout=10s\": dial tcp 172.31.19.82:6443: connect: connection refused" interval="400ms" Apr 30 12:50:39.561047 kubelet[2886]: I0430 12:50:39.560995 2886 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-82" Apr 30 12:50:39.561490 kubelet[2886]: E0430 12:50:39.561452 2886 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.19.82:6443/api/v1/nodes\": dial tcp 172.31.19.82:6443: connect: connection refused" node="ip-172-31-19-82" Apr 30 12:50:39.700041 containerd[1911]: time="2025-04-30T12:50:39.699930507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-82,Uid:c6dbd436fdaca72d621389c589501f9b,Namespace:kube-system,Attempt:0,}" Apr 30 12:50:39.718350 containerd[1911]: time="2025-04-30T12:50:39.718292470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-82,Uid:b6290bee8f5c6eaaf6fd8803fb5752ea,Namespace:kube-system,Attempt:0,}" Apr 30 12:50:39.718549 containerd[1911]: time="2025-04-30T12:50:39.718315490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-82,Uid:5d10e2aca4cd4775a20a4af8ac37ad8d,Namespace:kube-system,Attempt:0,}" Apr 30 12:50:39.860626 kubelet[2886]: E0430 12:50:39.860573 2886 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-82?timeout=10s\": dial tcp 172.31.19.82:6443: connect: connection refused" interval="800ms" Apr 30 12:50:39.963871 kubelet[2886]: I0430 12:50:39.963766 
2886 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-82" Apr 30 12:50:39.964375 kubelet[2886]: E0430 12:50:39.964344 2886 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.19.82:6443/api/v1/nodes\": dial tcp 172.31.19.82:6443: connect: connection refused" node="ip-172-31-19-82" Apr 30 12:50:40.057331 kubelet[2886]: W0430 12:50:40.057271 2886 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.19.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-82&limit=500&resourceVersion=0": dial tcp 172.31.19.82:6443: connect: connection refused Apr 30 12:50:40.057331 kubelet[2886]: E0430 12:50:40.057332 2886 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.19.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-82&limit=500&resourceVersion=0": dial tcp 172.31.19.82:6443: connect: connection refused Apr 30 12:50:40.099145 kubelet[2886]: W0430 12:50:40.098996 2886 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.19.82:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.19.82:6443: connect: connection refused Apr 30 12:50:40.099145 kubelet[2886]: E0430 12:50:40.099094 2886 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.19.82:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.19.82:6443: connect: connection refused Apr 30 12:50:40.231923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount120820082.mount: Deactivated successfully. 
Apr 30 12:50:40.246630 containerd[1911]: time="2025-04-30T12:50:40.246576998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 12:50:40.254749 containerd[1911]: time="2025-04-30T12:50:40.254600814Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Apr 30 12:50:40.256769 containerd[1911]: time="2025-04-30T12:50:40.256725734Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 12:50:40.259240 containerd[1911]: time="2025-04-30T12:50:40.259184851Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 12:50:40.262935 containerd[1911]: time="2025-04-30T12:50:40.262632741Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 12:50:40.265331 containerd[1911]: time="2025-04-30T12:50:40.265231718Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 12:50:40.267300 containerd[1911]: time="2025-04-30T12:50:40.267251420Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 12:50:40.269298 containerd[1911]: time="2025-04-30T12:50:40.269250709Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 12:50:40.270422 
containerd[1911]: time="2025-04-30T12:50:40.269961367Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 569.939887ms" Apr 30 12:50:40.272197 containerd[1911]: time="2025-04-30T12:50:40.272083949Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 553.48048ms" Apr 30 12:50:40.285906 containerd[1911]: time="2025-04-30T12:50:40.285620775Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 567.239373ms" Apr 30 12:50:40.476686 containerd[1911]: time="2025-04-30T12:50:40.476598326Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:50:40.479414 containerd[1911]: time="2025-04-30T12:50:40.479217420Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:50:40.479414 containerd[1911]: time="2025-04-30T12:50:40.479256125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:50:40.480091 containerd[1911]: time="2025-04-30T12:50:40.479992031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:50:40.483776 containerd[1911]: time="2025-04-30T12:50:40.483611096Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:50:40.483776 containerd[1911]: time="2025-04-30T12:50:40.483735228Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:50:40.485503 containerd[1911]: time="2025-04-30T12:50:40.483779613Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:50:40.486051 containerd[1911]: time="2025-04-30T12:50:40.485953645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:50:40.486462 containerd[1911]: time="2025-04-30T12:50:40.486378326Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:50:40.486636 containerd[1911]: time="2025-04-30T12:50:40.486597789Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:50:40.488572 containerd[1911]: time="2025-04-30T12:50:40.487037379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:50:40.488572 containerd[1911]: time="2025-04-30T12:50:40.488492708Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:50:40.518398 systemd[1]: Started cri-containerd-3fa24a6eae822e6adc8dda3322fa201ddf1256d8e770c84f4916fb011458902a.scope - libcontainer container 3fa24a6eae822e6adc8dda3322fa201ddf1256d8e770c84f4916fb011458902a. 
Apr 30 12:50:40.534409 systemd[1]: Started cri-containerd-5ae65ec8e7f72452e82e916b0b168119fa1d7f51d6854e626009bdb8f3c19a81.scope - libcontainer container 5ae65ec8e7f72452e82e916b0b168119fa1d7f51d6854e626009bdb8f3c19a81. Apr 30 12:50:40.537141 systemd[1]: Started cri-containerd-7283bfede085c607eb7eb36359f59d46568c4fb6459b32d5ba2f566f2d6befab.scope - libcontainer container 7283bfede085c607eb7eb36359f59d46568c4fb6459b32d5ba2f566f2d6befab. Apr 30 12:50:40.554966 kubelet[2886]: W0430 12:50:40.554886 2886 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.19.82:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.82:6443: connect: connection refused Apr 30 12:50:40.554966 kubelet[2886]: E0430 12:50:40.554937 2886 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.19.82:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.82:6443: connect: connection refused Apr 30 12:50:40.608906 containerd[1911]: time="2025-04-30T12:50:40.608828318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-82,Uid:c6dbd436fdaca72d621389c589501f9b,Namespace:kube-system,Attempt:0,} returns sandbox id \"3fa24a6eae822e6adc8dda3322fa201ddf1256d8e770c84f4916fb011458902a\"" Apr 30 12:50:40.626015 containerd[1911]: time="2025-04-30T12:50:40.625599916Z" level=info msg="CreateContainer within sandbox \"3fa24a6eae822e6adc8dda3322fa201ddf1256d8e770c84f4916fb011458902a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 30 12:50:40.636298 containerd[1911]: time="2025-04-30T12:50:40.636233541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-82,Uid:5d10e2aca4cd4775a20a4af8ac37ad8d,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"5ae65ec8e7f72452e82e916b0b168119fa1d7f51d6854e626009bdb8f3c19a81\"" Apr 30 12:50:40.640221 containerd[1911]: time="2025-04-30T12:50:40.640143050Z" level=info msg="CreateContainer within sandbox \"5ae65ec8e7f72452e82e916b0b168119fa1d7f51d6854e626009bdb8f3c19a81\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 30 12:50:40.647351 containerd[1911]: time="2025-04-30T12:50:40.647313602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-82,Uid:b6290bee8f5c6eaaf6fd8803fb5752ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"7283bfede085c607eb7eb36359f59d46568c4fb6459b32d5ba2f566f2d6befab\"" Apr 30 12:50:40.649665 kubelet[2886]: W0430 12:50:40.649595 2886 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.19.82:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.82:6443: connect: connection refused Apr 30 12:50:40.649665 kubelet[2886]: E0430 12:50:40.649671 2886 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.19.82:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.82:6443: connect: connection refused Apr 30 12:50:40.651778 containerd[1911]: time="2025-04-30T12:50:40.651631285Z" level=info msg="CreateContainer within sandbox \"7283bfede085c607eb7eb36359f59d46568c4fb6459b32d5ba2f566f2d6befab\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 30 12:50:40.661852 kubelet[2886]: E0430 12:50:40.661791 2886 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-82?timeout=10s\": dial tcp 172.31.19.82:6443: connect: connection refused" interval="1.6s" Apr 30 12:50:40.676799 containerd[1911]: time="2025-04-30T12:50:40.676754528Z" 
level=info msg="CreateContainer within sandbox \"3fa24a6eae822e6adc8dda3322fa201ddf1256d8e770c84f4916fb011458902a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"41d6c3f180350d97857504710d32664c9c446b16f078fe8e0082f8789b052873\"" Apr 30 12:50:40.677487 containerd[1911]: time="2025-04-30T12:50:40.677460224Z" level=info msg="StartContainer for \"41d6c3f180350d97857504710d32664c9c446b16f078fe8e0082f8789b052873\"" Apr 30 12:50:40.683692 containerd[1911]: time="2025-04-30T12:50:40.683552909Z" level=info msg="CreateContainer within sandbox \"5ae65ec8e7f72452e82e916b0b168119fa1d7f51d6854e626009bdb8f3c19a81\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"33d89f22806facc5c1c9d777b95929c93c4240441b38697ecf0856b430b78235\"" Apr 30 12:50:40.685782 containerd[1911]: time="2025-04-30T12:50:40.684302563Z" level=info msg="StartContainer for \"33d89f22806facc5c1c9d777b95929c93c4240441b38697ecf0856b430b78235\"" Apr 30 12:50:40.700661 containerd[1911]: time="2025-04-30T12:50:40.700618626Z" level=info msg="CreateContainer within sandbox \"7283bfede085c607eb7eb36359f59d46568c4fb6459b32d5ba2f566f2d6befab\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"37b44401d79489211770663490f621f3c42cd3f77227d609fd7bce6fe9877249\"" Apr 30 12:50:40.701839 containerd[1911]: time="2025-04-30T12:50:40.701810702Z" level=info msg="StartContainer for \"37b44401d79489211770663490f621f3c42cd3f77227d609fd7bce6fe9877249\"" Apr 30 12:50:40.720407 systemd[1]: Started cri-containerd-41d6c3f180350d97857504710d32664c9c446b16f078fe8e0082f8789b052873.scope - libcontainer container 41d6c3f180350d97857504710d32664c9c446b16f078fe8e0082f8789b052873. Apr 30 12:50:40.742426 systemd[1]: Started cri-containerd-33d89f22806facc5c1c9d777b95929c93c4240441b38697ecf0856b430b78235.scope - libcontainer container 33d89f22806facc5c1c9d777b95929c93c4240441b38697ecf0856b430b78235. 
Apr 30 12:50:40.762799 systemd[1]: Started cri-containerd-37b44401d79489211770663490f621f3c42cd3f77227d609fd7bce6fe9877249.scope - libcontainer container 37b44401d79489211770663490f621f3c42cd3f77227d609fd7bce6fe9877249. Apr 30 12:50:40.768058 kubelet[2886]: I0430 12:50:40.768028 2886 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-82" Apr 30 12:50:40.768646 kubelet[2886]: E0430 12:50:40.768590 2886 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.19.82:6443/api/v1/nodes\": dial tcp 172.31.19.82:6443: connect: connection refused" node="ip-172-31-19-82" Apr 30 12:50:40.818698 containerd[1911]: time="2025-04-30T12:50:40.818653520Z" level=info msg="StartContainer for \"41d6c3f180350d97857504710d32664c9c446b16f078fe8e0082f8789b052873\" returns successfully" Apr 30 12:50:40.847127 containerd[1911]: time="2025-04-30T12:50:40.846560329Z" level=info msg="StartContainer for \"37b44401d79489211770663490f621f3c42cd3f77227d609fd7bce6fe9877249\" returns successfully" Apr 30 12:50:40.851973 containerd[1911]: time="2025-04-30T12:50:40.851925275Z" level=info msg="StartContainer for \"33d89f22806facc5c1c9d777b95929c93c4240441b38697ecf0856b430b78235\" returns successfully" Apr 30 12:50:41.235282 kubelet[2886]: E0430 12:50:41.235150 2886 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.19.82:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.19.82:6443: connect: connection refused Apr 30 12:50:41.468879 kubelet[2886]: E0430 12:50:41.468724 2886 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.19.82:6443/api/v1/namespaces/default/events\": dial tcp 172.31.19.82:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-19-82.183b199c35c2ff90 default 0 0001-01-01 00:00:00 +0000 
UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-19-82,UID:ip-172-31-19-82,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-19-82,},FirstTimestamp:2025-04-30 12:50:39.2437636 +0000 UTC m=+0.426842857,LastTimestamp:2025-04-30 12:50:39.2437636 +0000 UTC m=+0.426842857,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-19-82,}" Apr 30 12:50:41.729376 kubelet[2886]: W0430 12:50:41.729016 2886 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.19.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-82&limit=500&resourceVersion=0": dial tcp 172.31.19.82:6443: connect: connection refused Apr 30 12:50:41.729376 kubelet[2886]: E0430 12:50:41.729103 2886 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.19.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-82&limit=500&resourceVersion=0": dial tcp 172.31.19.82:6443: connect: connection refused Apr 30 12:50:42.019796 kubelet[2886]: W0430 12:50:42.019409 2886 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.19.82:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.19.82:6443: connect: connection refused Apr 30 12:50:42.019796 kubelet[2886]: E0430 12:50:42.019477 2886 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.19.82:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.19.82:6443: connect: connection refused Apr 30 12:50:42.263038 kubelet[2886]: E0430 12:50:42.262992 2886 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://172.31.19.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-82?timeout=10s\": dial tcp 172.31.19.82:6443: connect: connection refused" interval="3.2s" Apr 30 12:50:42.371105 kubelet[2886]: I0430 12:50:42.371023 2886 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-82" Apr 30 12:50:42.371459 kubelet[2886]: E0430 12:50:42.371430 2886 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.19.82:6443/api/v1/nodes\": dial tcp 172.31.19.82:6443: connect: connection refused" node="ip-172-31-19-82" Apr 30 12:50:44.344560 kubelet[2886]: E0430 12:50:44.344516 2886 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-19-82" not found Apr 30 12:50:44.712014 kubelet[2886]: E0430 12:50:44.711904 2886 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-19-82" not found Apr 30 12:50:45.137886 kubelet[2886]: E0430 12:50:45.137855 2886 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-19-82" not found Apr 30 12:50:45.426125 update_engine[1890]: I20250430 12:50:45.425928 1890 update_attempter.cc:509] Updating boot flags... 
Apr 30 12:50:45.473881 kubelet[2886]: E0430 12:50:45.472980 2886 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-19-82\" not found" node="ip-172-31-19-82" Apr 30 12:50:45.477191 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3174) Apr 30 12:50:45.585265 kubelet[2886]: I0430 12:50:45.584432 2886 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-82" Apr 30 12:50:45.608637 kubelet[2886]: I0430 12:50:45.608609 2886 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-19-82" Apr 30 12:50:45.622897 kubelet[2886]: E0430 12:50:45.622006 2886 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-19-82\" not found" Apr 30 12:50:45.629244 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3174) Apr 30 12:50:45.722677 kubelet[2886]: E0430 12:50:45.722600 2886 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-19-82\" not found" Apr 30 12:50:45.773299 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3174) Apr 30 12:50:45.824664 kubelet[2886]: E0430 12:50:45.824621 2886 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-19-82\" not found" Apr 30 12:50:45.924927 kubelet[2886]: E0430 12:50:45.924862 2886 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-19-82\" not found" Apr 30 12:50:46.025669 kubelet[2886]: E0430 12:50:46.025539 2886 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-19-82\" not found" Apr 30 12:50:46.126279 kubelet[2886]: E0430 12:50:46.126232 2886 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-19-82\" not found" Apr 30 12:50:46.227196 
kubelet[2886]: E0430 12:50:46.227123 2886 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-19-82\" not found" Apr 30 12:50:46.256616 systemd[1]: Reload requested from client PID 3428 ('systemctl') (unit session-9.scope)... Apr 30 12:50:46.256634 systemd[1]: Reloading... Apr 30 12:50:46.329088 kubelet[2886]: E0430 12:50:46.328100 2886 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-19-82\" not found" Apr 30 12:50:46.365193 zram_generator::config[3469]: No configuration found. Apr 30 12:50:46.428712 kubelet[2886]: E0430 12:50:46.428666 2886 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-19-82\" not found" Apr 30 12:50:46.515319 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 12:50:46.529305 kubelet[2886]: E0430 12:50:46.529250 2886 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-19-82\" not found" Apr 30 12:50:46.629506 kubelet[2886]: E0430 12:50:46.629336 2886 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-19-82\" not found" Apr 30 12:50:46.653155 systemd[1]: Reloading finished in 395 ms. Apr 30 12:50:46.682989 kubelet[2886]: I0430 12:50:46.682927 2886 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 12:50:46.683114 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:50:46.693291 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 12:50:46.693510 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:50:46.693578 systemd[1]: kubelet.service: Consumed 778ms CPU time, 110.7M memory peak. 
Apr 30 12:50:46.699447 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:50:46.967378 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:50:46.968449 (kubelet)[3533]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 12:50:47.027633 kubelet[3533]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 12:50:47.027633 kubelet[3533]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 12:50:47.027633 kubelet[3533]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 12:50:47.031163 kubelet[3533]: I0430 12:50:47.030805 3533 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 12:50:47.036950 kubelet[3533]: I0430 12:50:47.036917 3533 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 30 12:50:47.036950 kubelet[3533]: I0430 12:50:47.036943 3533 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 12:50:47.037158 kubelet[3533]: I0430 12:50:47.037144 3533 server.go:927] "Client rotation is on, will bootstrap in background" Apr 30 12:50:47.040344 kubelet[3533]: I0430 12:50:47.039645 3533 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Apr 30 12:50:47.041104 kubelet[3533]: I0430 12:50:47.041079 3533 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 12:50:47.049153 kubelet[3533]: I0430 12:50:47.049121 3533 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 30 12:50:47.049385 kubelet[3533]: I0430 12:50:47.049357 3533 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 12:50:47.049623 kubelet[3533]: I0430 12:50:47.049384 3533 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-19-82","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryM
anagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 30 12:50:47.049735 kubelet[3533]: I0430 12:50:47.049644 3533 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 12:50:47.049735 kubelet[3533]: I0430 12:50:47.049659 3533 container_manager_linux.go:301] "Creating device plugin manager" Apr 30 12:50:47.049735 kubelet[3533]: I0430 12:50:47.049699 3533 state_mem.go:36] "Initialized new in-memory state store" Apr 30 12:50:47.049814 kubelet[3533]: I0430 12:50:47.049799 3533 kubelet.go:400] "Attempting to sync node with API server" Apr 30 12:50:47.049840 kubelet[3533]: I0430 12:50:47.049815 3533 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 12:50:47.049840 kubelet[3533]: I0430 12:50:47.049839 3533 kubelet.go:312] "Adding apiserver pod source" Apr 30 12:50:47.049887 kubelet[3533]: I0430 12:50:47.049856 3533 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 12:50:47.052286 kubelet[3533]: I0430 12:50:47.052267 3533 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Apr 30 12:50:47.052629 kubelet[3533]: I0430 12:50:47.052602 3533 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 12:50:47.053225 kubelet[3533]: I0430 12:50:47.053211 3533 server.go:1264] "Started kubelet" Apr 30 12:50:47.055514 kubelet[3533]: I0430 12:50:47.055482 3533 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 12:50:47.056116 kubelet[3533]: I0430 12:50:47.056092 3533 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 12:50:47.057770 kubelet[3533]: I0430 12:50:47.057756 3533 server.go:455] "Adding debug handlers to kubelet server" Apr 30 12:50:47.058947 kubelet[3533]: I0430 12:50:47.058889 3533 ratelimit.go:55] "Setting rate limiting for 
endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 12:50:47.059252 kubelet[3533]: I0430 12:50:47.059195 3533 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 12:50:47.061187 kubelet[3533]: I0430 12:50:47.061019 3533 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 30 12:50:47.062596 kubelet[3533]: I0430 12:50:47.062582 3533 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 12:50:47.062777 kubelet[3533]: I0430 12:50:47.062768 3533 reconciler.go:26] "Reconciler: start to sync state" Apr 30 12:50:47.067370 kubelet[3533]: I0430 12:50:47.067344 3533 factory.go:221] Registration of the systemd container factory successfully Apr 30 12:50:47.067482 kubelet[3533]: I0430 12:50:47.067462 3533 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 12:50:47.079799 kubelet[3533]: I0430 12:50:47.079118 3533 factory.go:221] Registration of the containerd container factory successfully Apr 30 12:50:47.088136 kubelet[3533]: I0430 12:50:47.088087 3533 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 12:50:47.089591 kubelet[3533]: I0430 12:50:47.089287 3533 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 30 12:50:47.089591 kubelet[3533]: I0430 12:50:47.089317 3533 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 12:50:47.089591 kubelet[3533]: I0430 12:50:47.089334 3533 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 12:50:47.089591 kubelet[3533]: E0430 12:50:47.089379 3533 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 12:50:47.132894 kubelet[3533]: I0430 12:50:47.132865 3533 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 12:50:47.132894 kubelet[3533]: I0430 12:50:47.132884 3533 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 12:50:47.132894 kubelet[3533]: I0430 12:50:47.132903 3533 state_mem.go:36] "Initialized new in-memory state store" Apr 30 12:50:47.133094 kubelet[3533]: I0430 12:50:47.133076 3533 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 30 12:50:47.133123 kubelet[3533]: I0430 12:50:47.133086 3533 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 30 12:50:47.133123 kubelet[3533]: I0430 12:50:47.133104 3533 policy_none.go:49] "None policy: Start" Apr 30 12:50:47.133656 kubelet[3533]: I0430 12:50:47.133636 3533 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 12:50:47.133711 kubelet[3533]: I0430 12:50:47.133662 3533 state_mem.go:35] "Initializing new in-memory state store" Apr 30 12:50:47.133853 kubelet[3533]: I0430 12:50:47.133820 3533 state_mem.go:75] "Updated machine memory state" Apr 30 12:50:47.141780 kubelet[3533]: I0430 12:50:47.141758 3533 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 12:50:47.142490 kubelet[3533]: I0430 12:50:47.142140 3533 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 12:50:47.142490 kubelet[3533]: I0430 12:50:47.142251 3533 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 12:50:47.167835 kubelet[3533]: I0430 12:50:47.167771 3533 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-82" Apr 30 12:50:47.176146 kubelet[3533]: I0430 12:50:47.176062 3533 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-19-82" Apr 30 12:50:47.177238 kubelet[3533]: I0430 12:50:47.176405 3533 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-19-82" Apr 30 12:50:47.190753 kubelet[3533]: I0430 12:50:47.189593 3533 topology_manager.go:215] "Topology Admit Handler" podUID="c6dbd436fdaca72d621389c589501f9b" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-19-82" Apr 30 12:50:47.190753 kubelet[3533]: I0430 12:50:47.189813 3533 topology_manager.go:215] "Topology Admit Handler" podUID="b6290bee8f5c6eaaf6fd8803fb5752ea" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-19-82" Apr 30 12:50:47.190753 kubelet[3533]: I0430 12:50:47.190466 3533 topology_manager.go:215] "Topology Admit Handler" podUID="5d10e2aca4cd4775a20a4af8ac37ad8d" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-19-82" Apr 30 12:50:47.273223 sudo[3567]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 30 12:50:47.273667 sudo[3567]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 30 12:50:47.364252 kubelet[3533]: I0430 12:50:47.364197 3533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6dbd436fdaca72d621389c589501f9b-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-82\" (UID: \"c6dbd436fdaca72d621389c589501f9b\") " pod="kube-system/kube-apiserver-ip-172-31-19-82" Apr 30 12:50:47.364252 kubelet[3533]: I0430 12:50:47.364250 3533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6dbd436fdaca72d621389c589501f9b-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-82\" (UID: \"c6dbd436fdaca72d621389c589501f9b\") " pod="kube-system/kube-apiserver-ip-172-31-19-82" Apr 30 12:50:47.364724 kubelet[3533]: I0430 12:50:47.364278 3533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b6290bee8f5c6eaaf6fd8803fb5752ea-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-82\" (UID: \"b6290bee8f5c6eaaf6fd8803fb5752ea\") " pod="kube-system/kube-controller-manager-ip-172-31-19-82" Apr 30 12:50:47.364724 kubelet[3533]: I0430 12:50:47.364304 3533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b6290bee8f5c6eaaf6fd8803fb5752ea-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-82\" (UID: \"b6290bee8f5c6eaaf6fd8803fb5752ea\") " pod="kube-system/kube-controller-manager-ip-172-31-19-82" Apr 30 12:50:47.364724 kubelet[3533]: I0430 12:50:47.364332 3533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6dbd436fdaca72d621389c589501f9b-ca-certs\") pod \"kube-apiserver-ip-172-31-19-82\" (UID: \"c6dbd436fdaca72d621389c589501f9b\") " pod="kube-system/kube-apiserver-ip-172-31-19-82" Apr 30 12:50:47.364724 kubelet[3533]: I0430 12:50:47.364353 3533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b6290bee8f5c6eaaf6fd8803fb5752ea-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-82\" (UID: \"b6290bee8f5c6eaaf6fd8803fb5752ea\") " pod="kube-system/kube-controller-manager-ip-172-31-19-82" Apr 30 12:50:47.364724 kubelet[3533]: I0430 12:50:47.364378 3533 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b6290bee8f5c6eaaf6fd8803fb5752ea-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-82\" (UID: \"b6290bee8f5c6eaaf6fd8803fb5752ea\") " pod="kube-system/kube-controller-manager-ip-172-31-19-82" Apr 30 12:50:47.364940 kubelet[3533]: I0430 12:50:47.364403 3533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5d10e2aca4cd4775a20a4af8ac37ad8d-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-82\" (UID: \"5d10e2aca4cd4775a20a4af8ac37ad8d\") " pod="kube-system/kube-scheduler-ip-172-31-19-82" Apr 30 12:50:47.364940 kubelet[3533]: I0430 12:50:47.364430 3533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b6290bee8f5c6eaaf6fd8803fb5752ea-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-82\" (UID: \"b6290bee8f5c6eaaf6fd8803fb5752ea\") " pod="kube-system/kube-controller-manager-ip-172-31-19-82" Apr 30 12:50:47.929086 sudo[3567]: pam_unix(sudo:session): session closed for user root Apr 30 12:50:48.051349 kubelet[3533]: I0430 12:50:48.051310 3533 apiserver.go:52] "Watching apiserver" Apr 30 12:50:48.145710 kubelet[3533]: E0430 12:50:48.145536 3533 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-19-82\" already exists" pod="kube-system/kube-apiserver-ip-172-31-19-82" Apr 30 12:50:48.166178 kubelet[3533]: I0430 12:50:48.165906 3533 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 12:50:48.193446 kubelet[3533]: I0430 12:50:48.191237 3533 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-19-82" podStartSLOduration=1.19121433 podStartE2EDuration="1.19121433s" 
podCreationTimestamp="2025-04-30 12:50:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:50:48.180615675 +0000 UTC m=+1.205681363" watchObservedRunningTime="2025-04-30 12:50:48.19121433 +0000 UTC m=+1.216280008" Apr 30 12:50:48.195130 kubelet[3533]: I0430 12:50:48.194919 3533 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-19-82" podStartSLOduration=1.19488972 podStartE2EDuration="1.19488972s" podCreationTimestamp="2025-04-30 12:50:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:50:48.193793274 +0000 UTC m=+1.218858952" watchObservedRunningTime="2025-04-30 12:50:48.19488972 +0000 UTC m=+1.219955404" Apr 30 12:50:48.218815 kubelet[3533]: I0430 12:50:48.218756 3533 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-19-82" podStartSLOduration=1.218735908 podStartE2EDuration="1.218735908s" podCreationTimestamp="2025-04-30 12:50:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:50:48.206411965 +0000 UTC m=+1.231477654" watchObservedRunningTime="2025-04-30 12:50:48.218735908 +0000 UTC m=+1.243801600" Apr 30 12:50:49.573496 sudo[2270]: pam_unix(sudo:session): session closed for user root Apr 30 12:50:49.610864 sshd[2269]: Connection closed by 147.75.109.163 port 49340 Apr 30 12:50:49.612283 sshd-session[2267]: pam_unix(sshd:session): session closed for user core Apr 30 12:50:49.615434 systemd[1]: sshd@8-172.31.19.82:22-147.75.109.163:49340.service: Deactivated successfully. Apr 30 12:50:49.617826 systemd[1]: session-9.scope: Deactivated successfully. 
Apr 30 12:50:49.622221 systemd[1]: session-9.scope: Consumed 5.767s CPU time, 230.7M memory peak. Apr 30 12:50:49.625048 systemd-logind[1889]: Session 9 logged out. Waiting for processes to exit. Apr 30 12:50:49.626719 systemd-logind[1889]: Removed session 9. Apr 30 12:51:00.909181 kubelet[3533]: I0430 12:51:00.909132 3533 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 30 12:51:00.919477 containerd[1911]: time="2025-04-30T12:51:00.919434929Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 30 12:51:00.920219 kubelet[3533]: I0430 12:51:00.919658 3533 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 30 12:51:01.633052 kubelet[3533]: I0430 12:51:01.631703 3533 topology_manager.go:215] "Topology Admit Handler" podUID="ff077a52-3ba3-4cbd-b20a-d11fa184bc39" podNamespace="kube-system" podName="kube-proxy-k7m4c" Apr 30 12:51:01.650934 kubelet[3533]: I0430 12:51:01.650893 3533 topology_manager.go:215] "Topology Admit Handler" podUID="ea2903db-f092-4c14-859b-407746c8ad61" podNamespace="kube-system" podName="cilium-cbltt" Apr 30 12:51:01.669224 systemd[1]: Created slice kubepods-burstable-podea2903db_f092_4c14_859b_407746c8ad61.slice - libcontainer container kubepods-burstable-podea2903db_f092_4c14_859b_407746c8ad61.slice. Apr 30 12:51:01.690058 systemd[1]: Created slice kubepods-besteffort-podff077a52_3ba3_4cbd_b20a_d11fa184bc39.slice - libcontainer container kubepods-besteffort-podff077a52_3ba3_4cbd_b20a_d11fa184bc39.slice. 
Apr 30 12:51:01.755559 kubelet[3533]: I0430 12:51:01.755506 3533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ff077a52-3ba3-4cbd-b20a-d11fa184bc39-kube-proxy\") pod \"kube-proxy-k7m4c\" (UID: \"ff077a52-3ba3-4cbd-b20a-d11fa184bc39\") " pod="kube-system/kube-proxy-k7m4c" Apr 30 12:51:01.755559 kubelet[3533]: I0430 12:51:01.755572 3533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ea2903db-f092-4c14-859b-407746c8ad61-etc-cni-netd\") pod \"cilium-cbltt\" (UID: \"ea2903db-f092-4c14-859b-407746c8ad61\") " pod="kube-system/cilium-cbltt" Apr 30 12:51:01.755800 kubelet[3533]: I0430 12:51:01.755602 3533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ea2903db-f092-4c14-859b-407746c8ad61-lib-modules\") pod \"cilium-cbltt\" (UID: \"ea2903db-f092-4c14-859b-407746c8ad61\") " pod="kube-system/cilium-cbltt" Apr 30 12:51:01.755800 kubelet[3533]: I0430 12:51:01.755623 3533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ea2903db-f092-4c14-859b-407746c8ad61-xtables-lock\") pod \"cilium-cbltt\" (UID: \"ea2903db-f092-4c14-859b-407746c8ad61\") " pod="kube-system/cilium-cbltt" Apr 30 12:51:01.755800 kubelet[3533]: I0430 12:51:01.755644 3533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ea2903db-f092-4c14-859b-407746c8ad61-clustermesh-secrets\") pod \"cilium-cbltt\" (UID: \"ea2903db-f092-4c14-859b-407746c8ad61\") " pod="kube-system/cilium-cbltt" Apr 30 12:51:01.755800 kubelet[3533]: I0430 12:51:01.755664 3533 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ea2903db-f092-4c14-859b-407746c8ad61-hubble-tls\") pod \"cilium-cbltt\" (UID: \"ea2903db-f092-4c14-859b-407746c8ad61\") " pod="kube-system/cilium-cbltt" Apr 30 12:51:01.755800 kubelet[3533]: I0430 12:51:01.755709 3533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nxlc\" (UniqueName: \"kubernetes.io/projected/ff077a52-3ba3-4cbd-b20a-d11fa184bc39-kube-api-access-7nxlc\") pod \"kube-proxy-k7m4c\" (UID: \"ff077a52-3ba3-4cbd-b20a-d11fa184bc39\") " pod="kube-system/kube-proxy-k7m4c" Apr 30 12:51:01.755800 kubelet[3533]: I0430 12:51:01.755733 3533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ea2903db-f092-4c14-859b-407746c8ad61-hostproc\") pod \"cilium-cbltt\" (UID: \"ea2903db-f092-4c14-859b-407746c8ad61\") " pod="kube-system/cilium-cbltt" Apr 30 12:51:01.756045 kubelet[3533]: I0430 12:51:01.755755 3533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ea2903db-f092-4c14-859b-407746c8ad61-host-proc-sys-kernel\") pod \"cilium-cbltt\" (UID: \"ea2903db-f092-4c14-859b-407746c8ad61\") " pod="kube-system/cilium-cbltt" Apr 30 12:51:01.756045 kubelet[3533]: I0430 12:51:01.755779 3533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7m9s\" (UniqueName: \"kubernetes.io/projected/ea2903db-f092-4c14-859b-407746c8ad61-kube-api-access-v7m9s\") pod \"cilium-cbltt\" (UID: \"ea2903db-f092-4c14-859b-407746c8ad61\") " pod="kube-system/cilium-cbltt" Apr 30 12:51:01.756045 kubelet[3533]: I0430 12:51:01.755808 3533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" 
(UniqueName: \"kubernetes.io/host-path/ff077a52-3ba3-4cbd-b20a-d11fa184bc39-xtables-lock\") pod \"kube-proxy-k7m4c\" (UID: \"ff077a52-3ba3-4cbd-b20a-d11fa184bc39\") " pod="kube-system/kube-proxy-k7m4c" Apr 30 12:51:01.756045 kubelet[3533]: I0430 12:51:01.756000 3533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ea2903db-f092-4c14-859b-407746c8ad61-cilium-run\") pod \"cilium-cbltt\" (UID: \"ea2903db-f092-4c14-859b-407746c8ad61\") " pod="kube-system/cilium-cbltt" Apr 30 12:51:01.756246 kubelet[3533]: I0430 12:51:01.756030 3533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ea2903db-f092-4c14-859b-407746c8ad61-cilium-config-path\") pod \"cilium-cbltt\" (UID: \"ea2903db-f092-4c14-859b-407746c8ad61\") " pod="kube-system/cilium-cbltt" Apr 30 12:51:01.756246 kubelet[3533]: I0430 12:51:01.756084 3533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ea2903db-f092-4c14-859b-407746c8ad61-cilium-cgroup\") pod \"cilium-cbltt\" (UID: \"ea2903db-f092-4c14-859b-407746c8ad61\") " pod="kube-system/cilium-cbltt" Apr 30 12:51:01.756246 kubelet[3533]: I0430 12:51:01.756141 3533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ff077a52-3ba3-4cbd-b20a-d11fa184bc39-lib-modules\") pod \"kube-proxy-k7m4c\" (UID: \"ff077a52-3ba3-4cbd-b20a-d11fa184bc39\") " pod="kube-system/kube-proxy-k7m4c" Apr 30 12:51:01.756246 kubelet[3533]: I0430 12:51:01.756206 3533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ea2903db-f092-4c14-859b-407746c8ad61-bpf-maps\") pod \"cilium-cbltt\" (UID: 
\"ea2903db-f092-4c14-859b-407746c8ad61\") " pod="kube-system/cilium-cbltt" Apr 30 12:51:01.756246 kubelet[3533]: I0430 12:51:01.756232 3533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ea2903db-f092-4c14-859b-407746c8ad61-host-proc-sys-net\") pod \"cilium-cbltt\" (UID: \"ea2903db-f092-4c14-859b-407746c8ad61\") " pod="kube-system/cilium-cbltt" Apr 30 12:51:01.756605 kubelet[3533]: I0430 12:51:01.756285 3533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ea2903db-f092-4c14-859b-407746c8ad61-cni-path\") pod \"cilium-cbltt\" (UID: \"ea2903db-f092-4c14-859b-407746c8ad61\") " pod="kube-system/cilium-cbltt" Apr 30 12:51:01.945295 kubelet[3533]: I0430 12:51:01.943882 3533 topology_manager.go:215] "Topology Admit Handler" podUID="42444ad5-7189-4126-9d0d-c9e898dc3811" podNamespace="kube-system" podName="cilium-operator-599987898-pfdqs" Apr 30 12:51:01.958151 kubelet[3533]: I0430 12:51:01.957981 3533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/42444ad5-7189-4126-9d0d-c9e898dc3811-cilium-config-path\") pod \"cilium-operator-599987898-pfdqs\" (UID: \"42444ad5-7189-4126-9d0d-c9e898dc3811\") " pod="kube-system/cilium-operator-599987898-pfdqs" Apr 30 12:51:01.958151 kubelet[3533]: I0430 12:51:01.958092 3533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7fv2\" (UniqueName: \"kubernetes.io/projected/42444ad5-7189-4126-9d0d-c9e898dc3811-kube-api-access-k7fv2\") pod \"cilium-operator-599987898-pfdqs\" (UID: \"42444ad5-7189-4126-9d0d-c9e898dc3811\") " pod="kube-system/cilium-operator-599987898-pfdqs" Apr 30 12:51:01.962079 systemd[1]: Created slice 
kubepods-besteffort-pod42444ad5_7189_4126_9d0d_c9e898dc3811.slice - libcontainer container kubepods-besteffort-pod42444ad5_7189_4126_9d0d_c9e898dc3811.slice. Apr 30 12:51:01.989701 containerd[1911]: time="2025-04-30T12:51:01.989660992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cbltt,Uid:ea2903db-f092-4c14-859b-407746c8ad61,Namespace:kube-system,Attempt:0,}" Apr 30 12:51:02.010122 containerd[1911]: time="2025-04-30T12:51:02.009702943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k7m4c,Uid:ff077a52-3ba3-4cbd-b20a-d11fa184bc39,Namespace:kube-system,Attempt:0,}" Apr 30 12:51:02.073142 containerd[1911]: time="2025-04-30T12:51:02.070828190Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:51:02.073142 containerd[1911]: time="2025-04-30T12:51:02.070915965Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:51:02.073142 containerd[1911]: time="2025-04-30T12:51:02.070939787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:51:02.073142 containerd[1911]: time="2025-04-30T12:51:02.071042892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:51:02.090963 containerd[1911]: time="2025-04-30T12:51:02.090793198Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:51:02.091359 containerd[1911]: time="2025-04-30T12:51:02.091309078Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:51:02.091511 containerd[1911]: time="2025-04-30T12:51:02.091480422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:51:02.091768 containerd[1911]: time="2025-04-30T12:51:02.091731672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:51:02.108871 systemd[1]: Started cri-containerd-6cb4934b7510e498b99dd792f32a4593247af06272fb726db3c1de4cadb350ff.scope - libcontainer container 6cb4934b7510e498b99dd792f32a4593247af06272fb726db3c1de4cadb350ff. Apr 30 12:51:02.135447 systemd[1]: Started cri-containerd-59033fbc741d45b4e5268c581f45a3406e6e09504a3a0b9ff2f78d248e4fa682.scope - libcontainer container 59033fbc741d45b4e5268c581f45a3406e6e09504a3a0b9ff2f78d248e4fa682. Apr 30 12:51:02.154948 containerd[1911]: time="2025-04-30T12:51:02.154368102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cbltt,Uid:ea2903db-f092-4c14-859b-407746c8ad61,Namespace:kube-system,Attempt:0,} returns sandbox id \"6cb4934b7510e498b99dd792f32a4593247af06272fb726db3c1de4cadb350ff\"" Apr 30 12:51:02.161933 containerd[1911]: time="2025-04-30T12:51:02.161886273Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 30 12:51:02.177259 containerd[1911]: time="2025-04-30T12:51:02.177139771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k7m4c,Uid:ff077a52-3ba3-4cbd-b20a-d11fa184bc39,Namespace:kube-system,Attempt:0,} returns sandbox id \"59033fbc741d45b4e5268c581f45a3406e6e09504a3a0b9ff2f78d248e4fa682\"" Apr 30 12:51:02.181953 containerd[1911]: time="2025-04-30T12:51:02.181920643Z" level=info msg="CreateContainer within sandbox \"59033fbc741d45b4e5268c581f45a3406e6e09504a3a0b9ff2f78d248e4fa682\" for container 
&ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 30 12:51:02.217283 containerd[1911]: time="2025-04-30T12:51:02.217126005Z" level=info msg="CreateContainer within sandbox \"59033fbc741d45b4e5268c581f45a3406e6e09504a3a0b9ff2f78d248e4fa682\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6244e9f09945710caac7d7ba76ddd83fdf733c71f480f909dbee315532c52337\"" Apr 30 12:51:02.219763 containerd[1911]: time="2025-04-30T12:51:02.218395447Z" level=info msg="StartContainer for \"6244e9f09945710caac7d7ba76ddd83fdf733c71f480f909dbee315532c52337\"" Apr 30 12:51:02.247406 systemd[1]: Started cri-containerd-6244e9f09945710caac7d7ba76ddd83fdf733c71f480f909dbee315532c52337.scope - libcontainer container 6244e9f09945710caac7d7ba76ddd83fdf733c71f480f909dbee315532c52337. Apr 30 12:51:02.271882 containerd[1911]: time="2025-04-30T12:51:02.271775632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-pfdqs,Uid:42444ad5-7189-4126-9d0d-c9e898dc3811,Namespace:kube-system,Attempt:0,}" Apr 30 12:51:02.285926 containerd[1911]: time="2025-04-30T12:51:02.285860408Z" level=info msg="StartContainer for \"6244e9f09945710caac7d7ba76ddd83fdf733c71f480f909dbee315532c52337\" returns successfully" Apr 30 12:51:02.311684 containerd[1911]: time="2025-04-30T12:51:02.311537231Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:51:02.311684 containerd[1911]: time="2025-04-30T12:51:02.311612055Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:51:02.311684 containerd[1911]: time="2025-04-30T12:51:02.311633018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:51:02.312048 containerd[1911]: time="2025-04-30T12:51:02.311754207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:51:02.335400 systemd[1]: Started cri-containerd-c19d8054af02944b1e25e9f4d1043756ecdfe8129e3eae1b6452e577c7a0797c.scope - libcontainer container c19d8054af02944b1e25e9f4d1043756ecdfe8129e3eae1b6452e577c7a0797c. Apr 30 12:51:02.384606 containerd[1911]: time="2025-04-30T12:51:02.384472843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-pfdqs,Uid:42444ad5-7189-4126-9d0d-c9e898dc3811,Namespace:kube-system,Attempt:0,} returns sandbox id \"c19d8054af02944b1e25e9f4d1043756ecdfe8129e3eae1b6452e577c7a0797c\"" Apr 30 12:51:03.149000 kubelet[3533]: I0430 12:51:03.148895 3533 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-k7m4c" podStartSLOduration=2.148879788 podStartE2EDuration="2.148879788s" podCreationTimestamp="2025-04-30 12:51:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:51:03.148690784 +0000 UTC m=+16.173756472" watchObservedRunningTime="2025-04-30 12:51:03.148879788 +0000 UTC m=+16.173945476" Apr 30 12:51:09.782521 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount642156327.mount: Deactivated successfully. 
Apr 30 12:51:12.279960 containerd[1911]: time="2025-04-30T12:51:12.279903220Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:51:12.282130 containerd[1911]: time="2025-04-30T12:51:12.282095555Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Apr 30 12:51:12.284937 containerd[1911]: time="2025-04-30T12:51:12.283980945Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:51:12.285448 containerd[1911]: time="2025-04-30T12:51:12.285416424Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.123479914s" Apr 30 12:51:12.285448 containerd[1911]: time="2025-04-30T12:51:12.285450682Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Apr 30 12:51:12.287262 containerd[1911]: time="2025-04-30T12:51:12.287223305Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 30 12:51:12.288948 containerd[1911]: time="2025-04-30T12:51:12.288926629Z" level=info msg="CreateContainer within sandbox \"6cb4934b7510e498b99dd792f32a4593247af06272fb726db3c1de4cadb350ff\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 30 12:51:12.346083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3160627539.mount: Deactivated successfully. Apr 30 12:51:12.350589 containerd[1911]: time="2025-04-30T12:51:12.350541555Z" level=info msg="CreateContainer within sandbox \"6cb4934b7510e498b99dd792f32a4593247af06272fb726db3c1de4cadb350ff\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c6f7a681c9033378964cb6bf41200b7768e812631bf6cf64ea9e97e95cc9fe11\"" Apr 30 12:51:12.351850 containerd[1911]: time="2025-04-30T12:51:12.351302648Z" level=info msg="StartContainer for \"c6f7a681c9033378964cb6bf41200b7768e812631bf6cf64ea9e97e95cc9fe11\"" Apr 30 12:51:12.466570 systemd[1]: Started cri-containerd-c6f7a681c9033378964cb6bf41200b7768e812631bf6cf64ea9e97e95cc9fe11.scope - libcontainer container c6f7a681c9033378964cb6bf41200b7768e812631bf6cf64ea9e97e95cc9fe11. Apr 30 12:51:12.501983 containerd[1911]: time="2025-04-30T12:51:12.501915528Z" level=info msg="StartContainer for \"c6f7a681c9033378964cb6bf41200b7768e812631bf6cf64ea9e97e95cc9fe11\" returns successfully" Apr 30 12:51:12.510692 systemd[1]: cri-containerd-c6f7a681c9033378964cb6bf41200b7768e812631bf6cf64ea9e97e95cc9fe11.scope: Deactivated successfully. 
Apr 30 12:51:12.702901 containerd[1911]: time="2025-04-30T12:51:12.690694937Z" level=info msg="shim disconnected" id=c6f7a681c9033378964cb6bf41200b7768e812631bf6cf64ea9e97e95cc9fe11 namespace=k8s.io Apr 30 12:51:12.702901 containerd[1911]: time="2025-04-30T12:51:12.702902097Z" level=warning msg="cleaning up after shim disconnected" id=c6f7a681c9033378964cb6bf41200b7768e812631bf6cf64ea9e97e95cc9fe11 namespace=k8s.io Apr 30 12:51:12.703148 containerd[1911]: time="2025-04-30T12:51:12.702916484Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:51:13.202504 containerd[1911]: time="2025-04-30T12:51:13.202246098Z" level=info msg="CreateContainer within sandbox \"6cb4934b7510e498b99dd792f32a4593247af06272fb726db3c1de4cadb350ff\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 30 12:51:13.242998 containerd[1911]: time="2025-04-30T12:51:13.242334726Z" level=info msg="CreateContainer within sandbox \"6cb4934b7510e498b99dd792f32a4593247af06272fb726db3c1de4cadb350ff\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"06231ce81f2012075ee872f1811bb9d6c1e67cdc07536a63e87c1e18495834ff\"" Apr 30 12:51:13.243147 containerd[1911]: time="2025-04-30T12:51:13.243030605Z" level=info msg="StartContainer for \"06231ce81f2012075ee872f1811bb9d6c1e67cdc07536a63e87c1e18495834ff\"" Apr 30 12:51:13.278401 systemd[1]: Started cri-containerd-06231ce81f2012075ee872f1811bb9d6c1e67cdc07536a63e87c1e18495834ff.scope - libcontainer container 06231ce81f2012075ee872f1811bb9d6c1e67cdc07536a63e87c1e18495834ff. Apr 30 12:51:13.311459 containerd[1911]: time="2025-04-30T12:51:13.311341685Z" level=info msg="StartContainer for \"06231ce81f2012075ee872f1811bb9d6c1e67cdc07536a63e87c1e18495834ff\" returns successfully" Apr 30 12:51:13.323805 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 12:51:13.324303 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Apr 30 12:51:13.324859 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 30 12:51:13.332493 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 12:51:13.332696 systemd[1]: cri-containerd-06231ce81f2012075ee872f1811bb9d6c1e67cdc07536a63e87c1e18495834ff.scope: Deactivated successfully. Apr 30 12:51:13.343869 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c6f7a681c9033378964cb6bf41200b7768e812631bf6cf64ea9e97e95cc9fe11-rootfs.mount: Deactivated successfully. Apr 30 12:51:13.343975 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Apr 30 12:51:13.359758 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06231ce81f2012075ee872f1811bb9d6c1e67cdc07536a63e87c1e18495834ff-rootfs.mount: Deactivated successfully. Apr 30 12:51:13.381068 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 12:51:13.383385 containerd[1911]: time="2025-04-30T12:51:13.383127285Z" level=info msg="shim disconnected" id=06231ce81f2012075ee872f1811bb9d6c1e67cdc07536a63e87c1e18495834ff namespace=k8s.io Apr 30 12:51:13.383385 containerd[1911]: time="2025-04-30T12:51:13.383181326Z" level=warning msg="cleaning up after shim disconnected" id=06231ce81f2012075ee872f1811bb9d6c1e67cdc07536a63e87c1e18495834ff namespace=k8s.io Apr 30 12:51:13.383385 containerd[1911]: time="2025-04-30T12:51:13.383191347Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:51:13.730383 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1922477632.mount: Deactivated successfully. 
Apr 30 12:51:14.207519 containerd[1911]: time="2025-04-30T12:51:14.206243981Z" level=info msg="CreateContainer within sandbox \"6cb4934b7510e498b99dd792f32a4593247af06272fb726db3c1de4cadb350ff\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 30 12:51:14.242883 containerd[1911]: time="2025-04-30T12:51:14.240211615Z" level=info msg="CreateContainer within sandbox \"6cb4934b7510e498b99dd792f32a4593247af06272fb726db3c1de4cadb350ff\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"27a3ef6a1a118668a4469c63b4aff28ba73db44470d3df433a3ce45ef6d5b77b\"" Apr 30 12:51:14.242883 containerd[1911]: time="2025-04-30T12:51:14.241309411Z" level=info msg="StartContainer for \"27a3ef6a1a118668a4469c63b4aff28ba73db44470d3df433a3ce45ef6d5b77b\"" Apr 30 12:51:14.299386 systemd[1]: Started cri-containerd-27a3ef6a1a118668a4469c63b4aff28ba73db44470d3df433a3ce45ef6d5b77b.scope - libcontainer container 27a3ef6a1a118668a4469c63b4aff28ba73db44470d3df433a3ce45ef6d5b77b. Apr 30 12:51:14.363567 containerd[1911]: time="2025-04-30T12:51:14.363525454Z" level=info msg="StartContainer for \"27a3ef6a1a118668a4469c63b4aff28ba73db44470d3df433a3ce45ef6d5b77b\" returns successfully" Apr 30 12:51:14.364083 systemd[1]: cri-containerd-27a3ef6a1a118668a4469c63b4aff28ba73db44470d3df433a3ce45ef6d5b77b.scope: Deactivated successfully. Apr 30 12:51:14.408810 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-27a3ef6a1a118668a4469c63b4aff28ba73db44470d3df433a3ce45ef6d5b77b-rootfs.mount: Deactivated successfully. 
Apr 30 12:51:14.446547 containerd[1911]: time="2025-04-30T12:51:14.446490395Z" level=info msg="shim disconnected" id=27a3ef6a1a118668a4469c63b4aff28ba73db44470d3df433a3ce45ef6d5b77b namespace=k8s.io Apr 30 12:51:14.446547 containerd[1911]: time="2025-04-30T12:51:14.446540316Z" level=warning msg="cleaning up after shim disconnected" id=27a3ef6a1a118668a4469c63b4aff28ba73db44470d3df433a3ce45ef6d5b77b namespace=k8s.io Apr 30 12:51:14.446547 containerd[1911]: time="2025-04-30T12:51:14.446548192Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:51:14.875970 containerd[1911]: time="2025-04-30T12:51:14.875892044Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:51:14.877831 containerd[1911]: time="2025-04-30T12:51:14.877788788Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Apr 30 12:51:14.880087 containerd[1911]: time="2025-04-30T12:51:14.880059596Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:51:14.882733 containerd[1911]: time="2025-04-30T12:51:14.882695701Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.595437859s" Apr 30 12:51:14.882848 containerd[1911]: time="2025-04-30T12:51:14.882733863Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Apr 30 12:51:14.885269 containerd[1911]: time="2025-04-30T12:51:14.885097195Z" level=info msg="CreateContainer within sandbox \"c19d8054af02944b1e25e9f4d1043756ecdfe8129e3eae1b6452e577c7a0797c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 30 12:51:14.909561 containerd[1911]: time="2025-04-30T12:51:14.909426603Z" level=info msg="CreateContainer within sandbox \"c19d8054af02944b1e25e9f4d1043756ecdfe8129e3eae1b6452e577c7a0797c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"fbc22512b5cbc70bc7185815d8ce75fae793d660e231b103eb86b7c66c3117f7\"" Apr 30 12:51:14.911289 containerd[1911]: time="2025-04-30T12:51:14.911184965Z" level=info msg="StartContainer for \"fbc22512b5cbc70bc7185815d8ce75fae793d660e231b103eb86b7c66c3117f7\"" Apr 30 12:51:14.945371 systemd[1]: Started cri-containerd-fbc22512b5cbc70bc7185815d8ce75fae793d660e231b103eb86b7c66c3117f7.scope - libcontainer container fbc22512b5cbc70bc7185815d8ce75fae793d660e231b103eb86b7c66c3117f7. 
Apr 30 12:51:14.974017 containerd[1911]: time="2025-04-30T12:51:14.973976969Z" level=info msg="StartContainer for \"fbc22512b5cbc70bc7185815d8ce75fae793d660e231b103eb86b7c66c3117f7\" returns successfully" Apr 30 12:51:15.208140 containerd[1911]: time="2025-04-30T12:51:15.207643813Z" level=info msg="CreateContainer within sandbox \"6cb4934b7510e498b99dd792f32a4593247af06272fb726db3c1de4cadb350ff\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 30 12:51:15.233042 containerd[1911]: time="2025-04-30T12:51:15.232994934Z" level=info msg="CreateContainer within sandbox \"6cb4934b7510e498b99dd792f32a4593247af06272fb726db3c1de4cadb350ff\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"997f6290923b3ec615c2ea47018c4cdb06f599368f317435e0339d46e2adc284\"" Apr 30 12:51:15.234069 containerd[1911]: time="2025-04-30T12:51:15.234019649Z" level=info msg="StartContainer for \"997f6290923b3ec615c2ea47018c4cdb06f599368f317435e0339d46e2adc284\"" Apr 30 12:51:15.285408 systemd[1]: Started cri-containerd-997f6290923b3ec615c2ea47018c4cdb06f599368f317435e0339d46e2adc284.scope - libcontainer container 997f6290923b3ec615c2ea47018c4cdb06f599368f317435e0339d46e2adc284. Apr 30 12:51:15.347641 containerd[1911]: time="2025-04-30T12:51:15.347594133Z" level=info msg="StartContainer for \"997f6290923b3ec615c2ea47018c4cdb06f599368f317435e0339d46e2adc284\" returns successfully" Apr 30 12:51:15.351991 systemd[1]: cri-containerd-997f6290923b3ec615c2ea47018c4cdb06f599368f317435e0339d46e2adc284.scope: Deactivated successfully. Apr 30 12:51:15.398152 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-997f6290923b3ec615c2ea47018c4cdb06f599368f317435e0339d46e2adc284-rootfs.mount: Deactivated successfully. 
Apr 30 12:51:15.409940 containerd[1911]: time="2025-04-30T12:51:15.409294924Z" level=info msg="shim disconnected" id=997f6290923b3ec615c2ea47018c4cdb06f599368f317435e0339d46e2adc284 namespace=k8s.io Apr 30 12:51:15.409940 containerd[1911]: time="2025-04-30T12:51:15.409358064Z" level=warning msg="cleaning up after shim disconnected" id=997f6290923b3ec615c2ea47018c4cdb06f599368f317435e0339d46e2adc284 namespace=k8s.io Apr 30 12:51:15.409940 containerd[1911]: time="2025-04-30T12:51:15.409372035Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:51:16.219242 containerd[1911]: time="2025-04-30T12:51:16.217998986Z" level=info msg="CreateContainer within sandbox \"6cb4934b7510e498b99dd792f32a4593247af06272fb726db3c1de4cadb350ff\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 30 12:51:16.237020 kubelet[3533]: I0430 12:51:16.236957 3533 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-pfdqs" podStartSLOduration=2.7392153759999998 podStartE2EDuration="15.236933712s" podCreationTimestamp="2025-04-30 12:51:01 +0000 UTC" firstStartedPulling="2025-04-30 12:51:02.385908939 +0000 UTC m=+15.410974615" lastFinishedPulling="2025-04-30 12:51:14.883627268 +0000 UTC m=+27.908692951" observedRunningTime="2025-04-30 12:51:15.239743307 +0000 UTC m=+28.264808995" watchObservedRunningTime="2025-04-30 12:51:16.236933712 +0000 UTC m=+29.261999398" Apr 30 12:51:16.251629 containerd[1911]: time="2025-04-30T12:51:16.251591520Z" level=info msg="CreateContainer within sandbox \"6cb4934b7510e498b99dd792f32a4593247af06272fb726db3c1de4cadb350ff\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ad4cf358be058d46582f9f11bafdb5d60b5121160ea0a45fda1f0d6b216f9e9b\"" Apr 30 12:51:16.252366 containerd[1911]: time="2025-04-30T12:51:16.252248398Z" level=info msg="StartContainer for \"ad4cf358be058d46582f9f11bafdb5d60b5121160ea0a45fda1f0d6b216f9e9b\"" Apr 30 12:51:16.291423 systemd[1]: 
Started cri-containerd-ad4cf358be058d46582f9f11bafdb5d60b5121160ea0a45fda1f0d6b216f9e9b.scope - libcontainer container ad4cf358be058d46582f9f11bafdb5d60b5121160ea0a45fda1f0d6b216f9e9b. Apr 30 12:51:16.326378 containerd[1911]: time="2025-04-30T12:51:16.326305980Z" level=info msg="StartContainer for \"ad4cf358be058d46582f9f11bafdb5d60b5121160ea0a45fda1f0d6b216f9e9b\" returns successfully" Apr 30 12:51:16.343485 systemd[1]: run-containerd-runc-k8s.io-ad4cf358be058d46582f9f11bafdb5d60b5121160ea0a45fda1f0d6b216f9e9b-runc.wy5TYY.mount: Deactivated successfully. Apr 30 12:51:16.548294 kubelet[3533]: I0430 12:51:16.547562 3533 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Apr 30 12:51:16.576185 kubelet[3533]: I0430 12:51:16.575908 3533 topology_manager.go:215] "Topology Admit Handler" podUID="3b086cfd-8449-43f2-9a04-708e8231e294" podNamespace="kube-system" podName="coredns-7db6d8ff4d-fmq5d" Apr 30 12:51:16.580073 kubelet[3533]: I0430 12:51:16.579376 3533 topology_manager.go:215] "Topology Admit Handler" podUID="33c8cb89-33cc-4dad-ac7d-67e0eea56cfb" podNamespace="kube-system" podName="coredns-7db6d8ff4d-vhv8t" Apr 30 12:51:16.586813 systemd[1]: Created slice kubepods-burstable-pod3b086cfd_8449_43f2_9a04_708e8231e294.slice - libcontainer container kubepods-burstable-pod3b086cfd_8449_43f2_9a04_708e8231e294.slice. Apr 30 12:51:16.593117 systemd[1]: Created slice kubepods-burstable-pod33c8cb89_33cc_4dad_ac7d_67e0eea56cfb.slice - libcontainer container kubepods-burstable-pod33c8cb89_33cc_4dad_ac7d_67e0eea56cfb.slice. 
Apr 30 12:51:16.666697 kubelet[3533]: I0430 12:51:16.666550 3533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdlbh\" (UniqueName: \"kubernetes.io/projected/33c8cb89-33cc-4dad-ac7d-67e0eea56cfb-kube-api-access-pdlbh\") pod \"coredns-7db6d8ff4d-vhv8t\" (UID: \"33c8cb89-33cc-4dad-ac7d-67e0eea56cfb\") " pod="kube-system/coredns-7db6d8ff4d-vhv8t" Apr 30 12:51:16.666697 kubelet[3533]: I0430 12:51:16.666598 3533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b086cfd-8449-43f2-9a04-708e8231e294-config-volume\") pod \"coredns-7db6d8ff4d-fmq5d\" (UID: \"3b086cfd-8449-43f2-9a04-708e8231e294\") " pod="kube-system/coredns-7db6d8ff4d-fmq5d" Apr 30 12:51:16.666697 kubelet[3533]: I0430 12:51:16.666622 3533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqs6g\" (UniqueName: \"kubernetes.io/projected/3b086cfd-8449-43f2-9a04-708e8231e294-kube-api-access-nqs6g\") pod \"coredns-7db6d8ff4d-fmq5d\" (UID: \"3b086cfd-8449-43f2-9a04-708e8231e294\") " pod="kube-system/coredns-7db6d8ff4d-fmq5d" Apr 30 12:51:16.666697 kubelet[3533]: I0430 12:51:16.666642 3533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/33c8cb89-33cc-4dad-ac7d-67e0eea56cfb-config-volume\") pod \"coredns-7db6d8ff4d-vhv8t\" (UID: \"33c8cb89-33cc-4dad-ac7d-67e0eea56cfb\") " pod="kube-system/coredns-7db6d8ff4d-vhv8t" Apr 30 12:51:16.891430 containerd[1911]: time="2025-04-30T12:51:16.891389517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fmq5d,Uid:3b086cfd-8449-43f2-9a04-708e8231e294,Namespace:kube-system,Attempt:0,}" Apr 30 12:51:16.896390 containerd[1911]: time="2025-04-30T12:51:16.896210838Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-vhv8t,Uid:33c8cb89-33cc-4dad-ac7d-67e0eea56cfb,Namespace:kube-system,Attempt:0,}" Apr 30 12:51:18.939067 systemd-networkd[1824]: cilium_host: Link UP Apr 30 12:51:18.939522 systemd-networkd[1824]: cilium_net: Link UP Apr 30 12:51:18.940305 systemd-networkd[1824]: cilium_net: Gained carrier Apr 30 12:51:18.940544 systemd-networkd[1824]: cilium_host: Gained carrier Apr 30 12:51:18.941012 (udev-worker)[4357]: Network interface NamePolicy= disabled on kernel command line. Apr 30 12:51:18.942396 (udev-worker)[4317]: Network interface NamePolicy= disabled on kernel command line. Apr 30 12:51:19.084061 systemd-networkd[1824]: cilium_vxlan: Link UP Apr 30 12:51:19.084072 systemd-networkd[1824]: cilium_vxlan: Gained carrier Apr 30 12:51:19.199321 systemd-networkd[1824]: cilium_host: Gained IPv6LL Apr 30 12:51:19.616999 kernel: NET: Registered PF_ALG protocol family Apr 30 12:51:19.936381 systemd-networkd[1824]: cilium_net: Gained IPv6LL Apr 30 12:51:20.191454 systemd-networkd[1824]: cilium_vxlan: Gained IPv6LL Apr 30 12:51:20.321267 systemd-networkd[1824]: lxc_health: Link UP Apr 30 12:51:20.326608 (udev-worker)[4367]: Network interface NamePolicy= disabled on kernel command line. Apr 30 12:51:20.328074 systemd-networkd[1824]: lxc_health: Gained carrier Apr 30 12:51:20.540989 systemd-networkd[1824]: lxcd8819407308d: Link UP Apr 30 12:51:20.545265 kernel: eth0: renamed from tmpe9fb7 Apr 30 12:51:20.554854 (udev-worker)[4369]: Network interface NamePolicy= disabled on kernel command line. Apr 30 12:51:20.558477 systemd-networkd[1824]: lxcd8819407308d: Gained carrier Apr 30 12:51:20.558697 systemd-networkd[1824]: lxc7fe28e79f5bb: Link UP Apr 30 12:51:20.567291 kernel: eth0: renamed from tmpbac57 Apr 30 12:51:20.577784 systemd-networkd[1824]: lxc7fe28e79f5bb: Gained carrier Apr 30 12:51:20.887302 systemd[1]: Started sshd@9-172.31.19.82:22-147.75.109.163:57512.service - OpenSSH per-connection server daemon (147.75.109.163:57512). 
Apr 30 12:51:21.208477 sshd[4706]: Accepted publickey for core from 147.75.109.163 port 57512 ssh2: RSA SHA256:hWFbLZfpyLdN3yuB7pBWwnRO+bXJlsyzaawWuSZBTyk Apr 30 12:51:21.211504 sshd-session[4706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:51:21.242023 systemd-logind[1889]: New session 10 of user core. Apr 30 12:51:21.249323 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 30 12:51:21.599369 systemd-networkd[1824]: lxc7fe28e79f5bb: Gained IPv6LL Apr 30 12:51:21.920688 systemd-networkd[1824]: lxc_health: Gained IPv6LL Apr 30 12:51:22.023345 kubelet[3533]: I0430 12:51:22.023266 3533 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cbltt" podStartSLOduration=10.896290745 podStartE2EDuration="21.02324091s" podCreationTimestamp="2025-04-30 12:51:01 +0000 UTC" firstStartedPulling="2025-04-30 12:51:02.159934749 +0000 UTC m=+15.185000418" lastFinishedPulling="2025-04-30 12:51:12.286884899 +0000 UTC m=+25.311950583" observedRunningTime="2025-04-30 12:51:17.306059138 +0000 UTC m=+30.331124827" watchObservedRunningTime="2025-04-30 12:51:22.02324091 +0000 UTC m=+35.048306603" Apr 30 12:51:22.303406 systemd-networkd[1824]: lxcd8819407308d: Gained IPv6LL Apr 30 12:51:22.452191 sshd[4710]: Connection closed by 147.75.109.163 port 57512 Apr 30 12:51:22.453445 sshd-session[4706]: pam_unix(sshd:session): session closed for user core Apr 30 12:51:22.460942 systemd-logind[1889]: Session 10 logged out. Waiting for processes to exit. Apr 30 12:51:22.463486 systemd[1]: sshd@9-172.31.19.82:22-147.75.109.163:57512.service: Deactivated successfully. Apr 30 12:51:22.467345 systemd[1]: session-10.scope: Deactivated successfully. Apr 30 12:51:22.469655 systemd-logind[1889]: Removed session 10. 
Apr 30 12:51:24.418285 ntpd[1882]: Listen normally on 8 cilium_host 192.168.0.105:123 Apr 30 12:51:24.418388 ntpd[1882]: Listen normally on 9 cilium_net [fe80::e8b3:32ff:fe13:3b26%4]:123 Apr 30 12:51:24.418447 ntpd[1882]: Listen normally on 10 cilium_host [fe80::ac71:33ff:fe2a:6b70%5]:123 Apr 30 12:51:24.418489 ntpd[1882]: Listen normally on 11 cilium_vxlan [fe80::7c84:d9ff:fee1:688d%6]:123 Apr 30 12:51:24.418531 ntpd[1882]: Listen normally on 12 lxc_health [fe80::f038:4fff:fe2b:3fa5%8]:123 Apr 30 12:51:24.418580 ntpd[1882]: Listen normally on 13 lxcd8819407308d [fe80::ec02:7fff:fe15:56ea%10]:123 Apr 30 12:51:24.418619 ntpd[1882]: Listen normally on 14 lxc7fe28e79f5bb [fe80::849f:4fff:fe43:54bd%12]:123 Apr 30 12:51:25.668746 containerd[1911]: time="2025-04-30T12:51:25.668471051Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:51:25.668746 containerd[1911]: time="2025-04-30T12:51:25.668537188Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:51:25.668746 containerd[1911]: time="2025-04-30T12:51:25.668559876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:51:25.668746 containerd[1911]: time="2025-04-30T12:51:25.668647700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:51:25.724964 containerd[1911]: time="2025-04-30T12:51:25.724860264Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:51:25.728163 containerd[1911]: time="2025-04-30T12:51:25.724936839Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:51:25.728163 containerd[1911]: time="2025-04-30T12:51:25.724959300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:51:25.732289 containerd[1911]: time="2025-04-30T12:51:25.727119366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:51:25.736405 systemd[1]: Started cri-containerd-e9fb76d7ac06fe977ff8e86d355cb35deeba82b4eaacf8d4285c40e4e2ba99fe.scope - libcontainer container e9fb76d7ac06fe977ff8e86d355cb35deeba82b4eaacf8d4285c40e4e2ba99fe. Apr 30 12:51:25.776402 systemd[1]: Started cri-containerd-bac574503636c32408446397648a7fb83beddd94bf78c7b8332599e99ba8b7db.scope - libcontainer container bac574503636c32408446397648a7fb83beddd94bf78c7b8332599e99ba8b7db. 
Apr 30 12:51:25.870001 containerd[1911]: time="2025-04-30T12:51:25.869901625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fmq5d,Uid:3b086cfd-8449-43f2-9a04-708e8231e294,Namespace:kube-system,Attempt:0,} returns sandbox id \"bac574503636c32408446397648a7fb83beddd94bf78c7b8332599e99ba8b7db\"" Apr 30 12:51:25.879004 containerd[1911]: time="2025-04-30T12:51:25.878962549Z" level=info msg="CreateContainer within sandbox \"bac574503636c32408446397648a7fb83beddd94bf78c7b8332599e99ba8b7db\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 12:51:25.881277 containerd[1911]: time="2025-04-30T12:51:25.879568031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vhv8t,Uid:33c8cb89-33cc-4dad-ac7d-67e0eea56cfb,Namespace:kube-system,Attempt:0,} returns sandbox id \"e9fb76d7ac06fe977ff8e86d355cb35deeba82b4eaacf8d4285c40e4e2ba99fe\"" Apr 30 12:51:25.887069 containerd[1911]: time="2025-04-30T12:51:25.886898959Z" level=info msg="CreateContainer within sandbox \"e9fb76d7ac06fe977ff8e86d355cb35deeba82b4eaacf8d4285c40e4e2ba99fe\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 12:51:25.935048 containerd[1911]: time="2025-04-30T12:51:25.934945287Z" level=info msg="CreateContainer within sandbox \"e9fb76d7ac06fe977ff8e86d355cb35deeba82b4eaacf8d4285c40e4e2ba99fe\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c023c8268d56d1c13d8e64e215371ec434063dfb56051173431a7635752a1fba\"" Apr 30 12:51:25.935782 containerd[1911]: time="2025-04-30T12:51:25.935575729Z" level=info msg="CreateContainer within sandbox \"bac574503636c32408446397648a7fb83beddd94bf78c7b8332599e99ba8b7db\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"48168129c5570fdaa3a7791c1daddee415f50aa4c01dae2a2d01231aaba5976a\"" Apr 30 12:51:25.936436 containerd[1911]: time="2025-04-30T12:51:25.936309597Z" level=info msg="StartContainer for 
\"c023c8268d56d1c13d8e64e215371ec434063dfb56051173431a7635752a1fba\"" Apr 30 12:51:25.936889 containerd[1911]: time="2025-04-30T12:51:25.936833368Z" level=info msg="StartContainer for \"48168129c5570fdaa3a7791c1daddee415f50aa4c01dae2a2d01231aaba5976a\"" Apr 30 12:51:25.984446 systemd[1]: Started cri-containerd-48168129c5570fdaa3a7791c1daddee415f50aa4c01dae2a2d01231aaba5976a.scope - libcontainer container 48168129c5570fdaa3a7791c1daddee415f50aa4c01dae2a2d01231aaba5976a. Apr 30 12:51:25.993730 systemd[1]: Started cri-containerd-c023c8268d56d1c13d8e64e215371ec434063dfb56051173431a7635752a1fba.scope - libcontainer container c023c8268d56d1c13d8e64e215371ec434063dfb56051173431a7635752a1fba. Apr 30 12:51:26.050914 containerd[1911]: time="2025-04-30T12:51:26.050794398Z" level=info msg="StartContainer for \"48168129c5570fdaa3a7791c1daddee415f50aa4c01dae2a2d01231aaba5976a\" returns successfully" Apr 30 12:51:26.050914 containerd[1911]: time="2025-04-30T12:51:26.050865672Z" level=info msg="StartContainer for \"c023c8268d56d1c13d8e64e215371ec434063dfb56051173431a7635752a1fba\" returns successfully" Apr 30 12:51:26.289716 kubelet[3533]: I0430 12:51:26.289534 3533 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-vhv8t" podStartSLOduration=25.289519658 podStartE2EDuration="25.289519658s" podCreationTimestamp="2025-04-30 12:51:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:51:26.288435475 +0000 UTC m=+39.313501164" watchObservedRunningTime="2025-04-30 12:51:26.289519658 +0000 UTC m=+39.314585345" Apr 30 12:51:26.304553 kubelet[3533]: I0430 12:51:26.304221 3533 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-fmq5d" podStartSLOduration=25.304200446 podStartE2EDuration="25.304200446s" podCreationTimestamp="2025-04-30 12:51:01 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:51:26.300518935 +0000 UTC m=+39.325584657" watchObservedRunningTime="2025-04-30 12:51:26.304200446 +0000 UTC m=+39.329266136" Apr 30 12:51:26.679440 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3022338773.mount: Deactivated successfully. Apr 30 12:51:27.501615 systemd[1]: Started sshd@10-172.31.19.82:22-147.75.109.163:50196.service - OpenSSH per-connection server daemon (147.75.109.163:50196). Apr 30 12:51:27.794403 sshd[4909]: Accepted publickey for core from 147.75.109.163 port 50196 ssh2: RSA SHA256:hWFbLZfpyLdN3yuB7pBWwnRO+bXJlsyzaawWuSZBTyk Apr 30 12:51:27.795771 sshd-session[4909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:51:27.800265 systemd-logind[1889]: New session 11 of user core. Apr 30 12:51:27.809370 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 30 12:51:28.133499 sshd[4911]: Connection closed by 147.75.109.163 port 50196 Apr 30 12:51:28.134257 sshd-session[4909]: pam_unix(sshd:session): session closed for user core Apr 30 12:51:28.137857 systemd[1]: sshd@10-172.31.19.82:22-147.75.109.163:50196.service: Deactivated successfully. Apr 30 12:51:28.139627 systemd[1]: session-11.scope: Deactivated successfully. Apr 30 12:51:28.140405 systemd-logind[1889]: Session 11 logged out. Waiting for processes to exit. Apr 30 12:51:28.141513 systemd-logind[1889]: Removed session 11. Apr 30 12:51:33.185617 systemd[1]: Started sshd@11-172.31.19.82:22-147.75.109.163:50206.service - OpenSSH per-connection server daemon (147.75.109.163:50206). Apr 30 12:51:33.437364 sshd[4927]: Accepted publickey for core from 147.75.109.163 port 50206 ssh2: RSA SHA256:hWFbLZfpyLdN3yuB7pBWwnRO+bXJlsyzaawWuSZBTyk Apr 30 12:51:33.439085 sshd-session[4927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:51:33.444392 systemd-logind[1889]: New session 12 of user core. 
Apr 30 12:51:33.450369 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 30 12:51:33.707031 sshd[4929]: Connection closed by 147.75.109.163 port 50206
Apr 30 12:51:33.708786 sshd-session[4927]: pam_unix(sshd:session): session closed for user core
Apr 30 12:51:33.712104 systemd[1]: sshd@11-172.31.19.82:22-147.75.109.163:50206.service: Deactivated successfully.
Apr 30 12:51:33.714914 systemd[1]: session-12.scope: Deactivated successfully.
Apr 30 12:51:33.718568 systemd-logind[1889]: Session 12 logged out. Waiting for processes to exit.
Apr 30 12:51:33.720754 systemd-logind[1889]: Removed session 12.
Apr 30 12:51:38.758521 systemd[1]: Started sshd@12-172.31.19.82:22-147.75.109.163:60152.service - OpenSSH per-connection server daemon (147.75.109.163:60152).
Apr 30 12:51:39.028222 sshd[4941]: Accepted publickey for core from 147.75.109.163 port 60152 ssh2: RSA SHA256:hWFbLZfpyLdN3yuB7pBWwnRO+bXJlsyzaawWuSZBTyk
Apr 30 12:51:39.029375 sshd-session[4941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:51:39.033948 systemd-logind[1889]: New session 13 of user core.
Apr 30 12:51:39.041389 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 30 12:51:39.304695 sshd[4943]: Connection closed by 147.75.109.163 port 60152
Apr 30 12:51:39.305472 sshd-session[4941]: pam_unix(sshd:session): session closed for user core
Apr 30 12:51:39.308425 systemd[1]: sshd@12-172.31.19.82:22-147.75.109.163:60152.service: Deactivated successfully.
Apr 30 12:51:39.310843 systemd[1]: session-13.scope: Deactivated successfully.
Apr 30 12:51:39.312644 systemd-logind[1889]: Session 13 logged out. Waiting for processes to exit.
Apr 30 12:51:39.314494 systemd-logind[1889]: Removed session 13.
Apr 30 12:51:39.351269 systemd[1]: Started sshd@13-172.31.19.82:22-147.75.109.163:60160.service - OpenSSH per-connection server daemon (147.75.109.163:60160).
Apr 30 12:51:39.601263 sshd[4955]: Accepted publickey for core from 147.75.109.163 port 60160 ssh2: RSA SHA256:hWFbLZfpyLdN3yuB7pBWwnRO+bXJlsyzaawWuSZBTyk
Apr 30 12:51:39.602668 sshd-session[4955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:51:39.607739 systemd-logind[1889]: New session 14 of user core.
Apr 30 12:51:39.609396 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 30 12:51:39.906716 sshd[4957]: Connection closed by 147.75.109.163 port 60160
Apr 30 12:51:39.908196 sshd-session[4955]: pam_unix(sshd:session): session closed for user core
Apr 30 12:51:39.911111 systemd-logind[1889]: Session 14 logged out. Waiting for processes to exit.
Apr 30 12:51:39.913376 systemd[1]: sshd@13-172.31.19.82:22-147.75.109.163:60160.service: Deactivated successfully.
Apr 30 12:51:39.916250 systemd[1]: session-14.scope: Deactivated successfully.
Apr 30 12:51:39.917988 systemd-logind[1889]: Removed session 14.
Apr 30 12:51:39.960512 systemd[1]: Started sshd@14-172.31.19.82:22-147.75.109.163:60174.service - OpenSSH per-connection server daemon (147.75.109.163:60174).
Apr 30 12:51:40.211517 sshd[4967]: Accepted publickey for core from 147.75.109.163 port 60174 ssh2: RSA SHA256:hWFbLZfpyLdN3yuB7pBWwnRO+bXJlsyzaawWuSZBTyk
Apr 30 12:51:40.213115 sshd-session[4967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:51:40.219741 systemd-logind[1889]: New session 15 of user core.
Apr 30 12:51:40.225445 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 30 12:51:40.470333 sshd[4969]: Connection closed by 147.75.109.163 port 60174
Apr 30 12:51:40.471022 sshd-session[4967]: pam_unix(sshd:session): session closed for user core
Apr 30 12:51:40.475128 systemd[1]: sshd@14-172.31.19.82:22-147.75.109.163:60174.service: Deactivated successfully.
Apr 30 12:51:40.477184 systemd[1]: session-15.scope: Deactivated successfully.
Apr 30 12:51:40.478470 systemd-logind[1889]: Session 15 logged out. Waiting for processes to exit.
Apr 30 12:51:40.479937 systemd-logind[1889]: Removed session 15.
Apr 30 12:51:45.522519 systemd[1]: Started sshd@15-172.31.19.82:22-147.75.109.163:60180.service - OpenSSH per-connection server daemon (147.75.109.163:60180).
Apr 30 12:51:45.775449 sshd[4983]: Accepted publickey for core from 147.75.109.163 port 60180 ssh2: RSA SHA256:hWFbLZfpyLdN3yuB7pBWwnRO+bXJlsyzaawWuSZBTyk
Apr 30 12:51:45.776665 sshd-session[4983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:51:45.781580 systemd-logind[1889]: New session 16 of user core.
Apr 30 12:51:45.788420 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 30 12:51:46.025356 sshd[4985]: Connection closed by 147.75.109.163 port 60180
Apr 30 12:51:46.026017 sshd-session[4983]: pam_unix(sshd:session): session closed for user core
Apr 30 12:51:46.029139 systemd[1]: sshd@15-172.31.19.82:22-147.75.109.163:60180.service: Deactivated successfully.
Apr 30 12:51:46.031159 systemd[1]: session-16.scope: Deactivated successfully.
Apr 30 12:51:46.032760 systemd-logind[1889]: Session 16 logged out. Waiting for processes to exit.
Apr 30 12:51:46.033883 systemd-logind[1889]: Removed session 16.
Apr 30 12:51:51.076569 systemd[1]: Started sshd@16-172.31.19.82:22-147.75.109.163:43108.service - OpenSSH per-connection server daemon (147.75.109.163:43108).
Apr 30 12:51:51.338581 sshd[4998]: Accepted publickey for core from 147.75.109.163 port 43108 ssh2: RSA SHA256:hWFbLZfpyLdN3yuB7pBWwnRO+bXJlsyzaawWuSZBTyk
Apr 30 12:51:51.339949 sshd-session[4998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:51:51.344523 systemd-logind[1889]: New session 17 of user core.
Apr 30 12:51:51.349374 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 30 12:51:51.588849 sshd[5000]: Connection closed by 147.75.109.163 port 43108
Apr 30 12:51:51.589719 sshd-session[4998]: pam_unix(sshd:session): session closed for user core
Apr 30 12:51:51.592522 systemd[1]: sshd@16-172.31.19.82:22-147.75.109.163:43108.service: Deactivated successfully.
Apr 30 12:51:51.594652 systemd[1]: session-17.scope: Deactivated successfully.
Apr 30 12:51:51.596249 systemd-logind[1889]: Session 17 logged out. Waiting for processes to exit.
Apr 30 12:51:51.597488 systemd-logind[1889]: Removed session 17.
Apr 30 12:51:51.639492 systemd[1]: Started sshd@17-172.31.19.82:22-147.75.109.163:43112.service - OpenSSH per-connection server daemon (147.75.109.163:43112).
Apr 30 12:51:51.885831 sshd[5012]: Accepted publickey for core from 147.75.109.163 port 43112 ssh2: RSA SHA256:hWFbLZfpyLdN3yuB7pBWwnRO+bXJlsyzaawWuSZBTyk
Apr 30 12:51:51.887127 sshd-session[5012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:51:51.891516 systemd-logind[1889]: New session 18 of user core.
Apr 30 12:51:51.894445 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 30 12:51:52.556315 sshd[5014]: Connection closed by 147.75.109.163 port 43112
Apr 30 12:51:52.557238 sshd-session[5012]: pam_unix(sshd:session): session closed for user core
Apr 30 12:51:52.563559 systemd[1]: sshd@17-172.31.19.82:22-147.75.109.163:43112.service: Deactivated successfully.
Apr 30 12:51:52.565596 systemd[1]: session-18.scope: Deactivated successfully.
Apr 30 12:51:52.567010 systemd-logind[1889]: Session 18 logged out. Waiting for processes to exit.
Apr 30 12:51:52.568324 systemd-logind[1889]: Removed session 18.
Apr 30 12:51:52.608501 systemd[1]: Started sshd@18-172.31.19.82:22-147.75.109.163:43120.service - OpenSSH per-connection server daemon (147.75.109.163:43120).
Apr 30 12:51:52.877061 sshd[5024]: Accepted publickey for core from 147.75.109.163 port 43120 ssh2: RSA SHA256:hWFbLZfpyLdN3yuB7pBWwnRO+bXJlsyzaawWuSZBTyk
Apr 30 12:51:52.878539 sshd-session[5024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:51:52.882766 systemd-logind[1889]: New session 19 of user core.
Apr 30 12:51:52.889358 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 30 12:51:54.730391 sshd[5026]: Connection closed by 147.75.109.163 port 43120
Apr 30 12:51:54.731080 sshd-session[5024]: pam_unix(sshd:session): session closed for user core
Apr 30 12:51:54.736374 systemd[1]: sshd@18-172.31.19.82:22-147.75.109.163:43120.service: Deactivated successfully.
Apr 30 12:51:54.738890 systemd[1]: session-19.scope: Deactivated successfully.
Apr 30 12:51:54.740102 systemd-logind[1889]: Session 19 logged out. Waiting for processes to exit.
Apr 30 12:51:54.741472 systemd-logind[1889]: Removed session 19.
Apr 30 12:51:54.781485 systemd[1]: Started sshd@19-172.31.19.82:22-147.75.109.163:43130.service - OpenSSH per-connection server daemon (147.75.109.163:43130).
Apr 30 12:51:55.042853 sshd[5043]: Accepted publickey for core from 147.75.109.163 port 43130 ssh2: RSA SHA256:hWFbLZfpyLdN3yuB7pBWwnRO+bXJlsyzaawWuSZBTyk
Apr 30 12:51:55.044418 sshd-session[5043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:51:55.049232 systemd-logind[1889]: New session 20 of user core.
Apr 30 12:51:55.052322 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 30 12:51:55.558582 sshd[5045]: Connection closed by 147.75.109.163 port 43130
Apr 30 12:51:55.559257 sshd-session[5043]: pam_unix(sshd:session): session closed for user core
Apr 30 12:51:55.563080 systemd[1]: sshd@19-172.31.19.82:22-147.75.109.163:43130.service: Deactivated successfully.
Apr 30 12:51:55.565358 systemd[1]: session-20.scope: Deactivated successfully.
Apr 30 12:51:55.566455 systemd-logind[1889]: Session 20 logged out. Waiting for processes to exit.
Apr 30 12:51:55.567529 systemd-logind[1889]: Removed session 20.
Apr 30 12:51:55.612476 systemd[1]: Started sshd@20-172.31.19.82:22-147.75.109.163:43134.service - OpenSSH per-connection server daemon (147.75.109.163:43134).
Apr 30 12:51:55.863790 sshd[5054]: Accepted publickey for core from 147.75.109.163 port 43134 ssh2: RSA SHA256:hWFbLZfpyLdN3yuB7pBWwnRO+bXJlsyzaawWuSZBTyk
Apr 30 12:51:55.865426 sshd-session[5054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:51:55.871183 systemd-logind[1889]: New session 21 of user core.
Apr 30 12:51:55.877398 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 30 12:51:56.133271 sshd[5056]: Connection closed by 147.75.109.163 port 43134
Apr 30 12:51:56.134163 sshd-session[5054]: pam_unix(sshd:session): session closed for user core
Apr 30 12:51:56.137926 systemd[1]: sshd@20-172.31.19.82:22-147.75.109.163:43134.service: Deactivated successfully.
Apr 30 12:51:56.139899 systemd[1]: session-21.scope: Deactivated successfully.
Apr 30 12:51:56.140651 systemd-logind[1889]: Session 21 logged out. Waiting for processes to exit.
Apr 30 12:51:56.142001 systemd-logind[1889]: Removed session 21.
Apr 30 12:52:01.190654 systemd[1]: Started sshd@21-172.31.19.82:22-147.75.109.163:35654.service - OpenSSH per-connection server daemon (147.75.109.163:35654).
Apr 30 12:52:01.444354 sshd[5070]: Accepted publickey for core from 147.75.109.163 port 35654 ssh2: RSA SHA256:hWFbLZfpyLdN3yuB7pBWwnRO+bXJlsyzaawWuSZBTyk
Apr 30 12:52:01.446464 sshd-session[5070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:52:01.453681 systemd-logind[1889]: New session 22 of user core.
Apr 30 12:52:01.458397 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 30 12:52:01.728841 sshd[5072]: Connection closed by 147.75.109.163 port 35654
Apr 30 12:52:01.729880 sshd-session[5070]: pam_unix(sshd:session): session closed for user core
Apr 30 12:52:01.740086 systemd[1]: sshd@21-172.31.19.82:22-147.75.109.163:35654.service: Deactivated successfully.
Apr 30 12:52:01.750874 systemd[1]: session-22.scope: Deactivated successfully.
Apr 30 12:52:01.751916 systemd-logind[1889]: Session 22 logged out. Waiting for processes to exit.
Apr 30 12:52:01.753100 systemd-logind[1889]: Removed session 22.
Apr 30 12:52:06.776448 systemd[1]: Started sshd@22-172.31.19.82:22-147.75.109.163:35662.service - OpenSSH per-connection server daemon (147.75.109.163:35662).
Apr 30 12:52:07.041468 sshd[5087]: Accepted publickey for core from 147.75.109.163 port 35662 ssh2: RSA SHA256:hWFbLZfpyLdN3yuB7pBWwnRO+bXJlsyzaawWuSZBTyk
Apr 30 12:52:07.042804 sshd-session[5087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:52:07.047818 systemd-logind[1889]: New session 23 of user core.
Apr 30 12:52:07.052340 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 30 12:52:07.338092 sshd[5089]: Connection closed by 147.75.109.163 port 35662
Apr 30 12:52:07.338778 sshd-session[5087]: pam_unix(sshd:session): session closed for user core
Apr 30 12:52:07.342496 systemd[1]: sshd@22-172.31.19.82:22-147.75.109.163:35662.service: Deactivated successfully.
Apr 30 12:52:07.344375 systemd[1]: session-23.scope: Deactivated successfully.
Apr 30 12:52:07.345264 systemd-logind[1889]: Session 23 logged out. Waiting for processes to exit.
Apr 30 12:52:07.346611 systemd-logind[1889]: Removed session 23.
Apr 30 12:52:12.385273 systemd[1]: Started sshd@23-172.31.19.82:22-147.75.109.163:36304.service - OpenSSH per-connection server daemon (147.75.109.163:36304).
Apr 30 12:52:12.637493 sshd[5101]: Accepted publickey for core from 147.75.109.163 port 36304 ssh2: RSA SHA256:hWFbLZfpyLdN3yuB7pBWwnRO+bXJlsyzaawWuSZBTyk
Apr 30 12:52:12.639053 sshd-session[5101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:52:12.644129 systemd-logind[1889]: New session 24 of user core.
Apr 30 12:52:12.649442 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 30 12:52:12.885952 sshd[5103]: Connection closed by 147.75.109.163 port 36304
Apr 30 12:52:12.886387 sshd-session[5101]: pam_unix(sshd:session): session closed for user core
Apr 30 12:52:12.889966 systemd[1]: sshd@23-172.31.19.82:22-147.75.109.163:36304.service: Deactivated successfully.
Apr 30 12:52:12.891788 systemd[1]: session-24.scope: Deactivated successfully.
Apr 30 12:52:12.892449 systemd-logind[1889]: Session 24 logged out. Waiting for processes to exit.
Apr 30 12:52:12.893685 systemd-logind[1889]: Removed session 24.
Apr 30 12:52:12.934330 systemd[1]: Started sshd@24-172.31.19.82:22-147.75.109.163:36318.service - OpenSSH per-connection server daemon (147.75.109.163:36318).
Apr 30 12:52:13.186033 sshd[5114]: Accepted publickey for core from 147.75.109.163 port 36318 ssh2: RSA SHA256:hWFbLZfpyLdN3yuB7pBWwnRO+bXJlsyzaawWuSZBTyk
Apr 30 12:52:13.187366 sshd-session[5114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:52:13.191831 systemd-logind[1889]: New session 25 of user core.
Apr 30 12:52:13.198375 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 30 12:52:14.815855 systemd[1]: run-containerd-runc-k8s.io-ad4cf358be058d46582f9f11bafdb5d60b5121160ea0a45fda1f0d6b216f9e9b-runc.2n4gRM.mount: Deactivated successfully.
Apr 30 12:52:14.842812 containerd[1911]: time="2025-04-30T12:52:14.842021188Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 30 12:52:14.895405 containerd[1911]: time="2025-04-30T12:52:14.895033748Z" level=info msg="StopContainer for \"ad4cf358be058d46582f9f11bafdb5d60b5121160ea0a45fda1f0d6b216f9e9b\" with timeout 2 (s)"
Apr 30 12:52:14.895405 containerd[1911]: time="2025-04-30T12:52:14.895161816Z" level=info msg="StopContainer for \"fbc22512b5cbc70bc7185815d8ce75fae793d660e231b103eb86b7c66c3117f7\" with timeout 30 (s)"
Apr 30 12:52:14.896669 containerd[1911]: time="2025-04-30T12:52:14.896634244Z" level=info msg="Stop container \"ad4cf358be058d46582f9f11bafdb5d60b5121160ea0a45fda1f0d6b216f9e9b\" with signal terminated"
Apr 30 12:52:14.897236 containerd[1911]: time="2025-04-30T12:52:14.897016470Z" level=info msg="Stop container \"fbc22512b5cbc70bc7185815d8ce75fae793d660e231b103eb86b7c66c3117f7\" with signal terminated"
Apr 30 12:52:14.904830 systemd-networkd[1824]: lxc_health: Link DOWN
Apr 30 12:52:14.904838 systemd-networkd[1824]: lxc_health: Lost carrier
Apr 30 12:52:14.914092 systemd[1]: cri-containerd-fbc22512b5cbc70bc7185815d8ce75fae793d660e231b103eb86b7c66c3117f7.scope: Deactivated successfully.
Apr 30 12:52:14.931703 systemd[1]: cri-containerd-ad4cf358be058d46582f9f11bafdb5d60b5121160ea0a45fda1f0d6b216f9e9b.scope: Deactivated successfully.
Apr 30 12:52:14.932079 systemd[1]: cri-containerd-ad4cf358be058d46582f9f11bafdb5d60b5121160ea0a45fda1f0d6b216f9e9b.scope: Consumed 8.270s CPU time, 198.4M memory peak, 74.3M read from disk, 13.3M written to disk.
Apr 30 12:52:14.951414 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fbc22512b5cbc70bc7185815d8ce75fae793d660e231b103eb86b7c66c3117f7-rootfs.mount: Deactivated successfully.
Apr 30 12:52:14.965441 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad4cf358be058d46582f9f11bafdb5d60b5121160ea0a45fda1f0d6b216f9e9b-rootfs.mount: Deactivated successfully.
Apr 30 12:52:14.981699 containerd[1911]: time="2025-04-30T12:52:14.981625771Z" level=info msg="shim disconnected" id=ad4cf358be058d46582f9f11bafdb5d60b5121160ea0a45fda1f0d6b216f9e9b namespace=k8s.io
Apr 30 12:52:14.981957 containerd[1911]: time="2025-04-30T12:52:14.981762550Z" level=warning msg="cleaning up after shim disconnected" id=ad4cf358be058d46582f9f11bafdb5d60b5121160ea0a45fda1f0d6b216f9e9b namespace=k8s.io
Apr 30 12:52:14.981957 containerd[1911]: time="2025-04-30T12:52:14.981777894Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 12:52:14.982462 containerd[1911]: time="2025-04-30T12:52:14.982186536Z" level=info msg="shim disconnected" id=fbc22512b5cbc70bc7185815d8ce75fae793d660e231b103eb86b7c66c3117f7 namespace=k8s.io
Apr 30 12:52:14.982462 containerd[1911]: time="2025-04-30T12:52:14.982232027Z" level=warning msg="cleaning up after shim disconnected" id=fbc22512b5cbc70bc7185815d8ce75fae793d660e231b103eb86b7c66c3117f7 namespace=k8s.io
Apr 30 12:52:14.982462 containerd[1911]: time="2025-04-30T12:52:14.982241873Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 12:52:15.004205 containerd[1911]: time="2025-04-30T12:52:15.003425468Z" level=warning msg="cleanup warnings time=\"2025-04-30T12:52:15Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 30 12:52:15.008840 containerd[1911]: time="2025-04-30T12:52:15.008676024Z" level=info msg="StopContainer for \"fbc22512b5cbc70bc7185815d8ce75fae793d660e231b103eb86b7c66c3117f7\" returns successfully"
Apr 30 12:52:15.009301 containerd[1911]: time="2025-04-30T12:52:15.008801253Z" level=info msg="StopContainer for \"ad4cf358be058d46582f9f11bafdb5d60b5121160ea0a45fda1f0d6b216f9e9b\" returns successfully"
Apr 30 12:52:15.016335 containerd[1911]: time="2025-04-30T12:52:15.016195940Z" level=info msg="StopPodSandbox for \"6cb4934b7510e498b99dd792f32a4593247af06272fb726db3c1de4cadb350ff\""
Apr 30 12:52:15.017212 containerd[1911]: time="2025-04-30T12:52:15.016521850Z" level=info msg="StopPodSandbox for \"c19d8054af02944b1e25e9f4d1043756ecdfe8129e3eae1b6452e577c7a0797c\""
Apr 30 12:52:15.027680 containerd[1911]: time="2025-04-30T12:52:15.023098330Z" level=info msg="Container to stop \"fbc22512b5cbc70bc7185815d8ce75fae793d660e231b103eb86b7c66c3117f7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 12:52:15.028450 containerd[1911]: time="2025-04-30T12:52:15.018019758Z" level=info msg="Container to stop \"c6f7a681c9033378964cb6bf41200b7768e812631bf6cf64ea9e97e95cc9fe11\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 12:52:15.028528 containerd[1911]: time="2025-04-30T12:52:15.028457924Z" level=info msg="Container to stop \"27a3ef6a1a118668a4469c63b4aff28ba73db44470d3df433a3ce45ef6d5b77b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 12:52:15.028528 containerd[1911]: time="2025-04-30T12:52:15.028470073Z" level=info msg="Container to stop \"06231ce81f2012075ee872f1811bb9d6c1e67cdc07536a63e87c1e18495834ff\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 12:52:15.028528 containerd[1911]: time="2025-04-30T12:52:15.028478972Z" level=info msg="Container to stop \"997f6290923b3ec615c2ea47018c4cdb06f599368f317435e0339d46e2adc284\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 12:52:15.028528 containerd[1911]: time="2025-04-30T12:52:15.028487886Z" level=info msg="Container to stop \"ad4cf358be058d46582f9f11bafdb5d60b5121160ea0a45fda1f0d6b216f9e9b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 12:52:15.030582 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c19d8054af02944b1e25e9f4d1043756ecdfe8129e3eae1b6452e577c7a0797c-shm.mount: Deactivated successfully.
Apr 30 12:52:15.030864 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6cb4934b7510e498b99dd792f32a4593247af06272fb726db3c1de4cadb350ff-shm.mount: Deactivated successfully.
Apr 30 12:52:15.039505 systemd[1]: cri-containerd-6cb4934b7510e498b99dd792f32a4593247af06272fb726db3c1de4cadb350ff.scope: Deactivated successfully.
Apr 30 12:52:15.048898 systemd[1]: cri-containerd-c19d8054af02944b1e25e9f4d1043756ecdfe8129e3eae1b6452e577c7a0797c.scope: Deactivated successfully.
Apr 30 12:52:15.083954 containerd[1911]: time="2025-04-30T12:52:15.083461394Z" level=info msg="shim disconnected" id=6cb4934b7510e498b99dd792f32a4593247af06272fb726db3c1de4cadb350ff namespace=k8s.io
Apr 30 12:52:15.083954 containerd[1911]: time="2025-04-30T12:52:15.083512590Z" level=warning msg="cleaning up after shim disconnected" id=6cb4934b7510e498b99dd792f32a4593247af06272fb726db3c1de4cadb350ff namespace=k8s.io
Apr 30 12:52:15.083954 containerd[1911]: time="2025-04-30T12:52:15.083521465Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 12:52:15.083954 containerd[1911]: time="2025-04-30T12:52:15.083667899Z" level=info msg="shim disconnected" id=c19d8054af02944b1e25e9f4d1043756ecdfe8129e3eae1b6452e577c7a0797c namespace=k8s.io
Apr 30 12:52:15.083954 containerd[1911]: time="2025-04-30T12:52:15.083688744Z" level=warning msg="cleaning up after shim disconnected" id=c19d8054af02944b1e25e9f4d1043756ecdfe8129e3eae1b6452e577c7a0797c namespace=k8s.io
Apr 30 12:52:15.083954 containerd[1911]: time="2025-04-30T12:52:15.083695114Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 12:52:15.100192 containerd[1911]: time="2025-04-30T12:52:15.100119317Z" level=warning msg="cleanup warnings time=\"2025-04-30T12:52:15Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 30 12:52:15.101273 containerd[1911]: time="2025-04-30T12:52:15.101241553Z" level=info msg="TearDown network for sandbox \"6cb4934b7510e498b99dd792f32a4593247af06272fb726db3c1de4cadb350ff\" successfully"
Apr 30 12:52:15.101273 containerd[1911]: time="2025-04-30T12:52:15.101270882Z" level=info msg="StopPodSandbox for \"6cb4934b7510e498b99dd792f32a4593247af06272fb726db3c1de4cadb350ff\" returns successfully"
Apr 30 12:52:15.103450 containerd[1911]: time="2025-04-30T12:52:15.103339849Z" level=warning msg="cleanup warnings time=\"2025-04-30T12:52:15Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 30 12:52:15.104912 containerd[1911]: time="2025-04-30T12:52:15.104886631Z" level=info msg="TearDown network for sandbox \"c19d8054af02944b1e25e9f4d1043756ecdfe8129e3eae1b6452e577c7a0797c\" successfully"
Apr 30 12:52:15.104912 containerd[1911]: time="2025-04-30T12:52:15.104909112Z" level=info msg="StopPodSandbox for \"c19d8054af02944b1e25e9f4d1043756ecdfe8129e3eae1b6452e577c7a0797c\" returns successfully"
Apr 30 12:52:15.260302 kubelet[3533]: I0430 12:52:15.260249 3533 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ea2903db-f092-4c14-859b-407746c8ad61-xtables-lock\") pod \"ea2903db-f092-4c14-859b-407746c8ad61\" (UID: \"ea2903db-f092-4c14-859b-407746c8ad61\") "
Apr 30 12:52:15.260302 kubelet[3533]: I0430 12:52:15.260298 3533 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ea2903db-f092-4c14-859b-407746c8ad61-host-proc-sys-net\") pod \"ea2903db-f092-4c14-859b-407746c8ad61\" (UID: \"ea2903db-f092-4c14-859b-407746c8ad61\") "
Apr 30 12:52:15.260721 kubelet[3533]: I0430 12:52:15.260328 3533 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ea2903db-f092-4c14-859b-407746c8ad61-hubble-tls\") pod \"ea2903db-f092-4c14-859b-407746c8ad61\" (UID: \"ea2903db-f092-4c14-859b-407746c8ad61\") "
Apr 30 12:52:15.260721 kubelet[3533]: I0430 12:52:15.260342 3533 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ea2903db-f092-4c14-859b-407746c8ad61-lib-modules\") pod \"ea2903db-f092-4c14-859b-407746c8ad61\" (UID: \"ea2903db-f092-4c14-859b-407746c8ad61\") "
Apr 30 12:52:15.260721 kubelet[3533]: I0430 12:52:15.260360 3533 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7fv2\" (UniqueName: \"kubernetes.io/projected/42444ad5-7189-4126-9d0d-c9e898dc3811-kube-api-access-k7fv2\") pod \"42444ad5-7189-4126-9d0d-c9e898dc3811\" (UID: \"42444ad5-7189-4126-9d0d-c9e898dc3811\") "
Apr 30 12:52:15.260721 kubelet[3533]: I0430 12:52:15.260378 3533 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ea2903db-f092-4c14-859b-407746c8ad61-cilium-config-path\") pod \"ea2903db-f092-4c14-859b-407746c8ad61\" (UID: \"ea2903db-f092-4c14-859b-407746c8ad61\") "
Apr 30 12:52:15.260721 kubelet[3533]: I0430 12:52:15.260393 3533 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ea2903db-f092-4c14-859b-407746c8ad61-cni-path\") pod \"ea2903db-f092-4c14-859b-407746c8ad61\" (UID: \"ea2903db-f092-4c14-859b-407746c8ad61\") "
Apr 30 12:52:15.260721 kubelet[3533]: I0430 12:52:15.260408 3533 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ea2903db-f092-4c14-859b-407746c8ad61-host-proc-sys-kernel\") pod \"ea2903db-f092-4c14-859b-407746c8ad61\" (UID: \"ea2903db-f092-4c14-859b-407746c8ad61\") "
Apr 30 12:52:15.260876 kubelet[3533]: I0430 12:52:15.260422 3533 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ea2903db-f092-4c14-859b-407746c8ad61-cilium-run\") pod \"ea2903db-f092-4c14-859b-407746c8ad61\" (UID: \"ea2903db-f092-4c14-859b-407746c8ad61\") "
Apr 30 12:52:15.260876 kubelet[3533]: I0430 12:52:15.260443 3533 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ea2903db-f092-4c14-859b-407746c8ad61-clustermesh-secrets\") pod \"ea2903db-f092-4c14-859b-407746c8ad61\" (UID: \"ea2903db-f092-4c14-859b-407746c8ad61\") "
Apr 30 12:52:15.260876 kubelet[3533]: I0430 12:52:15.260460 3533 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7m9s\" (UniqueName: \"kubernetes.io/projected/ea2903db-f092-4c14-859b-407746c8ad61-kube-api-access-v7m9s\") pod \"ea2903db-f092-4c14-859b-407746c8ad61\" (UID: \"ea2903db-f092-4c14-859b-407746c8ad61\") "
Apr 30 12:52:15.260876 kubelet[3533]: I0430 12:52:15.260475 3533 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ea2903db-f092-4c14-859b-407746c8ad61-hostproc\") pod \"ea2903db-f092-4c14-859b-407746c8ad61\" (UID: \"ea2903db-f092-4c14-859b-407746c8ad61\") "
Apr 30 12:52:15.260876 kubelet[3533]: I0430 12:52:15.260489 3533 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ea2903db-f092-4c14-859b-407746c8ad61-bpf-maps\") pod \"ea2903db-f092-4c14-859b-407746c8ad61\" (UID: \"ea2903db-f092-4c14-859b-407746c8ad61\") "
Apr 30 12:52:15.260876 kubelet[3533]: I0430 12:52:15.260504 3533 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/42444ad5-7189-4126-9d0d-c9e898dc3811-cilium-config-path\") pod \"42444ad5-7189-4126-9d0d-c9e898dc3811\" (UID: \"42444ad5-7189-4126-9d0d-c9e898dc3811\") "
Apr 30 12:52:15.261037 kubelet[3533]: I0430 12:52:15.260518 3533 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ea2903db-f092-4c14-859b-407746c8ad61-etc-cni-netd\") pod \"ea2903db-f092-4c14-859b-407746c8ad61\" (UID: \"ea2903db-f092-4c14-859b-407746c8ad61\") "
Apr 30 12:52:15.261037 kubelet[3533]: I0430 12:52:15.260531 3533 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ea2903db-f092-4c14-859b-407746c8ad61-cilium-cgroup\") pod \"ea2903db-f092-4c14-859b-407746c8ad61\" (UID: \"ea2903db-f092-4c14-859b-407746c8ad61\") "
Apr 30 12:52:15.262275 kubelet[3533]: I0430 12:52:15.260611 3533 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea2903db-f092-4c14-859b-407746c8ad61-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ea2903db-f092-4c14-859b-407746c8ad61" (UID: "ea2903db-f092-4c14-859b-407746c8ad61"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 12:52:15.262275 kubelet[3533]: I0430 12:52:15.260699 3533 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea2903db-f092-4c14-859b-407746c8ad61-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ea2903db-f092-4c14-859b-407746c8ad61" (UID: "ea2903db-f092-4c14-859b-407746c8ad61"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 12:52:15.262275 kubelet[3533]: I0430 12:52:15.262029 3533 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea2903db-f092-4c14-859b-407746c8ad61-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ea2903db-f092-4c14-859b-407746c8ad61" (UID: "ea2903db-f092-4c14-859b-407746c8ad61"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 12:52:15.262275 kubelet[3533]: I0430 12:52:15.262056 3533 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea2903db-f092-4c14-859b-407746c8ad61-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ea2903db-f092-4c14-859b-407746c8ad61" (UID: "ea2903db-f092-4c14-859b-407746c8ad61"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 12:52:15.262275 kubelet[3533]: I0430 12:52:15.262065 3533 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea2903db-f092-4c14-859b-407746c8ad61-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ea2903db-f092-4c14-859b-407746c8ad61" (UID: "ea2903db-f092-4c14-859b-407746c8ad61"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 12:52:15.278427 kubelet[3533]: I0430 12:52:15.278140 3533 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea2903db-f092-4c14-859b-407746c8ad61-hostproc" (OuterVolumeSpecName: "hostproc") pod "ea2903db-f092-4c14-859b-407746c8ad61" (UID: "ea2903db-f092-4c14-859b-407746c8ad61"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 12:52:15.278427 kubelet[3533]: I0430 12:52:15.278244 3533 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea2903db-f092-4c14-859b-407746c8ad61-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ea2903db-f092-4c14-859b-407746c8ad61" (UID: "ea2903db-f092-4c14-859b-407746c8ad61"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 12:52:15.279056 kubelet[3533]: I0430 12:52:15.279028 3533 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea2903db-f092-4c14-859b-407746c8ad61-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ea2903db-f092-4c14-859b-407746c8ad61" (UID: "ea2903db-f092-4c14-859b-407746c8ad61"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 12:52:15.280350 kubelet[3533]: I0430 12:52:15.280323 3533 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea2903db-f092-4c14-859b-407746c8ad61-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ea2903db-f092-4c14-859b-407746c8ad61" (UID: "ea2903db-f092-4c14-859b-407746c8ad61"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 12:52:15.281187 kubelet[3533]: I0430 12:52:15.280524 3533 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea2903db-f092-4c14-859b-407746c8ad61-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ea2903db-f092-4c14-859b-407746c8ad61" (UID: "ea2903db-f092-4c14-859b-407746c8ad61"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Apr 30 12:52:15.283011 kubelet[3533]: I0430 12:52:15.282984 3533 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42444ad5-7189-4126-9d0d-c9e898dc3811-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "42444ad5-7189-4126-9d0d-c9e898dc3811" (UID: "42444ad5-7189-4126-9d0d-c9e898dc3811"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Apr 30 12:52:15.284795 kubelet[3533]: I0430 12:52:15.284764 3533 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea2903db-f092-4c14-859b-407746c8ad61-cni-path" (OuterVolumeSpecName: "cni-path") pod "ea2903db-f092-4c14-859b-407746c8ad61" (UID: "ea2903db-f092-4c14-859b-407746c8ad61"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 12:52:15.284881 kubelet[3533]: I0430 12:52:15.284809 3533 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea2903db-f092-4c14-859b-407746c8ad61-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ea2903db-f092-4c14-859b-407746c8ad61" (UID: "ea2903db-f092-4c14-859b-407746c8ad61"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 30 12:52:15.284881 kubelet[3533]: I0430 12:52:15.284835 3533 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea2903db-f092-4c14-859b-407746c8ad61-kube-api-access-v7m9s" (OuterVolumeSpecName: "kube-api-access-v7m9s") pod "ea2903db-f092-4c14-859b-407746c8ad61" (UID: "ea2903db-f092-4c14-859b-407746c8ad61"). InnerVolumeSpecName "kube-api-access-v7m9s".
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 30 12:52:15.284881 kubelet[3533]: I0430 12:52:15.284853 3533 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42444ad5-7189-4126-9d0d-c9e898dc3811-kube-api-access-k7fv2" (OuterVolumeSpecName: "kube-api-access-k7fv2") pod "42444ad5-7189-4126-9d0d-c9e898dc3811" (UID: "42444ad5-7189-4126-9d0d-c9e898dc3811"). InnerVolumeSpecName "kube-api-access-k7fv2". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 30 12:52:15.285897 kubelet[3533]: I0430 12:52:15.285861 3533 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea2903db-f092-4c14-859b-407746c8ad61-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ea2903db-f092-4c14-859b-407746c8ad61" (UID: "ea2903db-f092-4c14-859b-407746c8ad61"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 30 12:52:15.365481 kubelet[3533]: I0430 12:52:15.365358 3533 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ea2903db-f092-4c14-859b-407746c8ad61-host-proc-sys-kernel\") on node \"ip-172-31-19-82\" DevicePath \"\"" Apr 30 12:52:15.365481 kubelet[3533]: I0430 12:52:15.365404 3533 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ea2903db-f092-4c14-859b-407746c8ad61-cilium-config-path\") on node \"ip-172-31-19-82\" DevicePath \"\"" Apr 30 12:52:15.365481 kubelet[3533]: I0430 12:52:15.365418 3533 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ea2903db-f092-4c14-859b-407746c8ad61-cni-path\") on node \"ip-172-31-19-82\" DevicePath \"\"" Apr 30 12:52:15.365481 kubelet[3533]: I0430 12:52:15.365430 3533 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/ea2903db-f092-4c14-859b-407746c8ad61-clustermesh-secrets\") on node \"ip-172-31-19-82\" DevicePath \"\"" Apr 30 12:52:15.365481 kubelet[3533]: I0430 12:52:15.365442 3533 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-v7m9s\" (UniqueName: \"kubernetes.io/projected/ea2903db-f092-4c14-859b-407746c8ad61-kube-api-access-v7m9s\") on node \"ip-172-31-19-82\" DevicePath \"\"" Apr 30 12:52:15.365481 kubelet[3533]: I0430 12:52:15.365453 3533 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ea2903db-f092-4c14-859b-407746c8ad61-cilium-run\") on node \"ip-172-31-19-82\" DevicePath \"\"" Apr 30 12:52:15.365481 kubelet[3533]: I0430 12:52:15.365466 3533 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ea2903db-f092-4c14-859b-407746c8ad61-hostproc\") on node \"ip-172-31-19-82\" DevicePath \"\"" Apr 30 12:52:15.365481 kubelet[3533]: I0430 12:52:15.365477 3533 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ea2903db-f092-4c14-859b-407746c8ad61-etc-cni-netd\") on node \"ip-172-31-19-82\" DevicePath \"\"" Apr 30 12:52:15.366587 kubelet[3533]: I0430 12:52:15.365488 3533 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ea2903db-f092-4c14-859b-407746c8ad61-cilium-cgroup\") on node \"ip-172-31-19-82\" DevicePath \"\"" Apr 30 12:52:15.366587 kubelet[3533]: I0430 12:52:15.365575 3533 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ea2903db-f092-4c14-859b-407746c8ad61-bpf-maps\") on node \"ip-172-31-19-82\" DevicePath \"\"" Apr 30 12:52:15.366587 kubelet[3533]: I0430 12:52:15.365588 3533 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/42444ad5-7189-4126-9d0d-c9e898dc3811-cilium-config-path\") on node \"ip-172-31-19-82\" DevicePath \"\"" Apr 30 12:52:15.366587 kubelet[3533]: I0430 12:52:15.365606 3533 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ea2903db-f092-4c14-859b-407746c8ad61-hubble-tls\") on node \"ip-172-31-19-82\" DevicePath \"\"" Apr 30 12:52:15.366587 kubelet[3533]: I0430 12:52:15.365617 3533 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ea2903db-f092-4c14-859b-407746c8ad61-lib-modules\") on node \"ip-172-31-19-82\" DevicePath \"\"" Apr 30 12:52:15.366587 kubelet[3533]: I0430 12:52:15.365627 3533 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ea2903db-f092-4c14-859b-407746c8ad61-xtables-lock\") on node \"ip-172-31-19-82\" DevicePath \"\"" Apr 30 12:52:15.366587 kubelet[3533]: I0430 12:52:15.365638 3533 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ea2903db-f092-4c14-859b-407746c8ad61-host-proc-sys-net\") on node \"ip-172-31-19-82\" DevicePath \"\"" Apr 30 12:52:15.366587 kubelet[3533]: I0430 12:52:15.365650 3533 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-k7fv2\" (UniqueName: \"kubernetes.io/projected/42444ad5-7189-4126-9d0d-c9e898dc3811-kube-api-access-k7fv2\") on node \"ip-172-31-19-82\" DevicePath \"\"" Apr 30 12:52:15.389721 kubelet[3533]: I0430 12:52:15.389162 3533 scope.go:117] "RemoveContainer" containerID="fbc22512b5cbc70bc7185815d8ce75fae793d660e231b103eb86b7c66c3117f7" Apr 30 12:52:15.396676 containerd[1911]: time="2025-04-30T12:52:15.396362084Z" level=info msg="RemoveContainer for \"fbc22512b5cbc70bc7185815d8ce75fae793d660e231b103eb86b7c66c3117f7\"" Apr 30 12:52:15.397637 systemd[1]: Removed slice kubepods-besteffort-pod42444ad5_7189_4126_9d0d_c9e898dc3811.slice - 
libcontainer container kubepods-besteffort-pod42444ad5_7189_4126_9d0d_c9e898dc3811.slice. Apr 30 12:52:15.406195 systemd[1]: Removed slice kubepods-burstable-podea2903db_f092_4c14_859b_407746c8ad61.slice - libcontainer container kubepods-burstable-podea2903db_f092_4c14_859b_407746c8ad61.slice. Apr 30 12:52:15.406331 systemd[1]: kubepods-burstable-podea2903db_f092_4c14_859b_407746c8ad61.slice: Consumed 8.363s CPU time, 198.7M memory peak, 74.3M read from disk, 13.3M written to disk. Apr 30 12:52:15.408995 containerd[1911]: time="2025-04-30T12:52:15.408953353Z" level=info msg="RemoveContainer for \"fbc22512b5cbc70bc7185815d8ce75fae793d660e231b103eb86b7c66c3117f7\" returns successfully" Apr 30 12:52:15.415427 kubelet[3533]: I0430 12:52:15.415396 3533 scope.go:117] "RemoveContainer" containerID="fbc22512b5cbc70bc7185815d8ce75fae793d660e231b103eb86b7c66c3117f7" Apr 30 12:52:15.417262 containerd[1911]: time="2025-04-30T12:52:15.417202489Z" level=error msg="ContainerStatus for \"fbc22512b5cbc70bc7185815d8ce75fae793d660e231b103eb86b7c66c3117f7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fbc22512b5cbc70bc7185815d8ce75fae793d660e231b103eb86b7c66c3117f7\": not found" Apr 30 12:52:15.434549 kubelet[3533]: E0430 12:52:15.434485 3533 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fbc22512b5cbc70bc7185815d8ce75fae793d660e231b103eb86b7c66c3117f7\": not found" containerID="fbc22512b5cbc70bc7185815d8ce75fae793d660e231b103eb86b7c66c3117f7" Apr 30 12:52:15.438988 kubelet[3533]: I0430 12:52:15.438874 3533 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fbc22512b5cbc70bc7185815d8ce75fae793d660e231b103eb86b7c66c3117f7"} err="failed to get container status \"fbc22512b5cbc70bc7185815d8ce75fae793d660e231b103eb86b7c66c3117f7\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"fbc22512b5cbc70bc7185815d8ce75fae793d660e231b103eb86b7c66c3117f7\": not found" Apr 30 12:52:15.438988 kubelet[3533]: I0430 12:52:15.438991 3533 scope.go:117] "RemoveContainer" containerID="ad4cf358be058d46582f9f11bafdb5d60b5121160ea0a45fda1f0d6b216f9e9b" Apr 30 12:52:15.440462 containerd[1911]: time="2025-04-30T12:52:15.440402532Z" level=info msg="RemoveContainer for \"ad4cf358be058d46582f9f11bafdb5d60b5121160ea0a45fda1f0d6b216f9e9b\"" Apr 30 12:52:15.445742 containerd[1911]: time="2025-04-30T12:52:15.445587701Z" level=info msg="RemoveContainer for \"ad4cf358be058d46582f9f11bafdb5d60b5121160ea0a45fda1f0d6b216f9e9b\" returns successfully" Apr 30 12:52:15.446159 kubelet[3533]: I0430 12:52:15.445927 3533 scope.go:117] "RemoveContainer" containerID="997f6290923b3ec615c2ea47018c4cdb06f599368f317435e0339d46e2adc284" Apr 30 12:52:15.447402 containerd[1911]: time="2025-04-30T12:52:15.447366540Z" level=info msg="RemoveContainer for \"997f6290923b3ec615c2ea47018c4cdb06f599368f317435e0339d46e2adc284\"" Apr 30 12:52:15.452502 containerd[1911]: time="2025-04-30T12:52:15.452474682Z" level=info msg="RemoveContainer for \"997f6290923b3ec615c2ea47018c4cdb06f599368f317435e0339d46e2adc284\" returns successfully" Apr 30 12:52:15.453370 kubelet[3533]: I0430 12:52:15.453339 3533 scope.go:117] "RemoveContainer" containerID="27a3ef6a1a118668a4469c63b4aff28ba73db44470d3df433a3ce45ef6d5b77b" Apr 30 12:52:15.454365 containerd[1911]: time="2025-04-30T12:52:15.454343567Z" level=info msg="RemoveContainer for \"27a3ef6a1a118668a4469c63b4aff28ba73db44470d3df433a3ce45ef6d5b77b\"" Apr 30 12:52:15.459490 containerd[1911]: time="2025-04-30T12:52:15.459461979Z" level=info msg="RemoveContainer for \"27a3ef6a1a118668a4469c63b4aff28ba73db44470d3df433a3ce45ef6d5b77b\" returns successfully" Apr 30 12:52:15.459927 kubelet[3533]: I0430 12:52:15.459702 3533 scope.go:117] "RemoveContainer" containerID="06231ce81f2012075ee872f1811bb9d6c1e67cdc07536a63e87c1e18495834ff" Apr 30 
12:52:15.461269 containerd[1911]: time="2025-04-30T12:52:15.460903939Z" level=info msg="RemoveContainer for \"06231ce81f2012075ee872f1811bb9d6c1e67cdc07536a63e87c1e18495834ff\"" Apr 30 12:52:15.466262 containerd[1911]: time="2025-04-30T12:52:15.466120024Z" level=info msg="RemoveContainer for \"06231ce81f2012075ee872f1811bb9d6c1e67cdc07536a63e87c1e18495834ff\" returns successfully" Apr 30 12:52:15.466495 kubelet[3533]: I0430 12:52:15.466465 3533 scope.go:117] "RemoveContainer" containerID="c6f7a681c9033378964cb6bf41200b7768e812631bf6cf64ea9e97e95cc9fe11" Apr 30 12:52:15.472869 containerd[1911]: time="2025-04-30T12:52:15.472704412Z" level=info msg="RemoveContainer for \"c6f7a681c9033378964cb6bf41200b7768e812631bf6cf64ea9e97e95cc9fe11\"" Apr 30 12:52:15.479998 containerd[1911]: time="2025-04-30T12:52:15.479956626Z" level=info msg="RemoveContainer for \"c6f7a681c9033378964cb6bf41200b7768e812631bf6cf64ea9e97e95cc9fe11\" returns successfully" Apr 30 12:52:15.480515 containerd[1911]: time="2025-04-30T12:52:15.480423987Z" level=error msg="ContainerStatus for \"ad4cf358be058d46582f9f11bafdb5d60b5121160ea0a45fda1f0d6b216f9e9b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ad4cf358be058d46582f9f11bafdb5d60b5121160ea0a45fda1f0d6b216f9e9b\": not found" Apr 30 12:52:15.480582 kubelet[3533]: I0430 12:52:15.480218 3533 scope.go:117] "RemoveContainer" containerID="ad4cf358be058d46582f9f11bafdb5d60b5121160ea0a45fda1f0d6b216f9e9b" Apr 30 12:52:15.480582 kubelet[3533]: E0430 12:52:15.480538 3533 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ad4cf358be058d46582f9f11bafdb5d60b5121160ea0a45fda1f0d6b216f9e9b\": not found" containerID="ad4cf358be058d46582f9f11bafdb5d60b5121160ea0a45fda1f0d6b216f9e9b" Apr 30 12:52:15.480582 kubelet[3533]: I0430 12:52:15.480559 3533 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"ad4cf358be058d46582f9f11bafdb5d60b5121160ea0a45fda1f0d6b216f9e9b"} err="failed to get container status \"ad4cf358be058d46582f9f11bafdb5d60b5121160ea0a45fda1f0d6b216f9e9b\": rpc error: code = NotFound desc = an error occurred when try to find container \"ad4cf358be058d46582f9f11bafdb5d60b5121160ea0a45fda1f0d6b216f9e9b\": not found" Apr 30 12:52:15.480582 kubelet[3533]: I0430 12:52:15.480577 3533 scope.go:117] "RemoveContainer" containerID="997f6290923b3ec615c2ea47018c4cdb06f599368f317435e0339d46e2adc284" Apr 30 12:52:15.480970 containerd[1911]: time="2025-04-30T12:52:15.480697655Z" level=error msg="ContainerStatus for \"997f6290923b3ec615c2ea47018c4cdb06f599368f317435e0339d46e2adc284\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"997f6290923b3ec615c2ea47018c4cdb06f599368f317435e0339d46e2adc284\": not found" Apr 30 12:52:15.481127 kubelet[3533]: E0430 12:52:15.480765 3533 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"997f6290923b3ec615c2ea47018c4cdb06f599368f317435e0339d46e2adc284\": not found" containerID="997f6290923b3ec615c2ea47018c4cdb06f599368f317435e0339d46e2adc284" Apr 30 12:52:15.481127 kubelet[3533]: I0430 12:52:15.480779 3533 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"997f6290923b3ec615c2ea47018c4cdb06f599368f317435e0339d46e2adc284"} err="failed to get container status \"997f6290923b3ec615c2ea47018c4cdb06f599368f317435e0339d46e2adc284\": rpc error: code = NotFound desc = an error occurred when try to find container \"997f6290923b3ec615c2ea47018c4cdb06f599368f317435e0339d46e2adc284\": not found" Apr 30 12:52:15.481127 kubelet[3533]: I0430 12:52:15.480791 3533 scope.go:117] "RemoveContainer" containerID="27a3ef6a1a118668a4469c63b4aff28ba73db44470d3df433a3ce45ef6d5b77b" Apr 30 12:52:15.481127 kubelet[3533]: 
E0430 12:52:15.481047 3533 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"27a3ef6a1a118668a4469c63b4aff28ba73db44470d3df433a3ce45ef6d5b77b\": not found" containerID="27a3ef6a1a118668a4469c63b4aff28ba73db44470d3df433a3ce45ef6d5b77b" Apr 30 12:52:15.481127 kubelet[3533]: I0430 12:52:15.481062 3533 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"27a3ef6a1a118668a4469c63b4aff28ba73db44470d3df433a3ce45ef6d5b77b"} err="failed to get container status \"27a3ef6a1a118668a4469c63b4aff28ba73db44470d3df433a3ce45ef6d5b77b\": rpc error: code = NotFound desc = an error occurred when try to find container \"27a3ef6a1a118668a4469c63b4aff28ba73db44470d3df433a3ce45ef6d5b77b\": not found" Apr 30 12:52:15.481127 kubelet[3533]: I0430 12:52:15.481076 3533 scope.go:117] "RemoveContainer" containerID="06231ce81f2012075ee872f1811bb9d6c1e67cdc07536a63e87c1e18495834ff" Apr 30 12:52:15.481606 containerd[1911]: time="2025-04-30T12:52:15.480955101Z" level=error msg="ContainerStatus for \"27a3ef6a1a118668a4469c63b4aff28ba73db44470d3df433a3ce45ef6d5b77b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"27a3ef6a1a118668a4469c63b4aff28ba73db44470d3df433a3ce45ef6d5b77b\": not found" Apr 30 12:52:15.481606 containerd[1911]: time="2025-04-30T12:52:15.481202911Z" level=error msg="ContainerStatus for \"06231ce81f2012075ee872f1811bb9d6c1e67cdc07536a63e87c1e18495834ff\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"06231ce81f2012075ee872f1811bb9d6c1e67cdc07536a63e87c1e18495834ff\": not found" Apr 30 12:52:15.481606 containerd[1911]: time="2025-04-30T12:52:15.481427721Z" level=error msg="ContainerStatus for \"c6f7a681c9033378964cb6bf41200b7768e812631bf6cf64ea9e97e95cc9fe11\" failed" error="rpc error: code = NotFound desc = an error occurred when try to 
find container \"c6f7a681c9033378964cb6bf41200b7768e812631bf6cf64ea9e97e95cc9fe11\": not found" Apr 30 12:52:15.481726 kubelet[3533]: E0430 12:52:15.481286 3533 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"06231ce81f2012075ee872f1811bb9d6c1e67cdc07536a63e87c1e18495834ff\": not found" containerID="06231ce81f2012075ee872f1811bb9d6c1e67cdc07536a63e87c1e18495834ff" Apr 30 12:52:15.481726 kubelet[3533]: I0430 12:52:15.481301 3533 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"06231ce81f2012075ee872f1811bb9d6c1e67cdc07536a63e87c1e18495834ff"} err="failed to get container status \"06231ce81f2012075ee872f1811bb9d6c1e67cdc07536a63e87c1e18495834ff\": rpc error: code = NotFound desc = an error occurred when try to find container \"06231ce81f2012075ee872f1811bb9d6c1e67cdc07536a63e87c1e18495834ff\": not found" Apr 30 12:52:15.481726 kubelet[3533]: I0430 12:52:15.481313 3533 scope.go:117] "RemoveContainer" containerID="c6f7a681c9033378964cb6bf41200b7768e812631bf6cf64ea9e97e95cc9fe11" Apr 30 12:52:15.481726 kubelet[3533]: E0430 12:52:15.481555 3533 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c6f7a681c9033378964cb6bf41200b7768e812631bf6cf64ea9e97e95cc9fe11\": not found" containerID="c6f7a681c9033378964cb6bf41200b7768e812631bf6cf64ea9e97e95cc9fe11" Apr 30 12:52:15.481726 kubelet[3533]: I0430 12:52:15.481571 3533 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c6f7a681c9033378964cb6bf41200b7768e812631bf6cf64ea9e97e95cc9fe11"} err="failed to get container status \"c6f7a681c9033378964cb6bf41200b7768e812631bf6cf64ea9e97e95cc9fe11\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"c6f7a681c9033378964cb6bf41200b7768e812631bf6cf64ea9e97e95cc9fe11\": not found" Apr 30 12:52:15.810105 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c19d8054af02944b1e25e9f4d1043756ecdfe8129e3eae1b6452e577c7a0797c-rootfs.mount: Deactivated successfully. Apr 30 12:52:15.810249 systemd[1]: var-lib-kubelet-pods-42444ad5\x2d7189\x2d4126\x2d9d0d\x2dc9e898dc3811-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk7fv2.mount: Deactivated successfully. Apr 30 12:52:15.810807 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6cb4934b7510e498b99dd792f32a4593247af06272fb726db3c1de4cadb350ff-rootfs.mount: Deactivated successfully. Apr 30 12:52:15.810910 systemd[1]: var-lib-kubelet-pods-ea2903db\x2df092\x2d4c14\x2d859b\x2d407746c8ad61-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv7m9s.mount: Deactivated successfully. Apr 30 12:52:15.811002 systemd[1]: var-lib-kubelet-pods-ea2903db\x2df092\x2d4c14\x2d859b\x2d407746c8ad61-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 30 12:52:15.811095 systemd[1]: var-lib-kubelet-pods-ea2903db\x2df092\x2d4c14\x2d859b\x2d407746c8ad61-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 30 12:52:16.733669 sshd[5116]: Connection closed by 147.75.109.163 port 36318 Apr 30 12:52:16.735345 sshd-session[5114]: pam_unix(sshd:session): session closed for user core Apr 30 12:52:16.739604 systemd[1]: sshd@24-172.31.19.82:22-147.75.109.163:36318.service: Deactivated successfully. Apr 30 12:52:16.741963 systemd[1]: session-25.scope: Deactivated successfully. Apr 30 12:52:16.742944 systemd-logind[1889]: Session 25 logged out. Waiting for processes to exit. Apr 30 12:52:16.744152 systemd-logind[1889]: Removed session 25. Apr 30 12:52:16.788517 systemd[1]: Started sshd@25-172.31.19.82:22-147.75.109.163:36320.service - OpenSSH per-connection server daemon (147.75.109.163:36320). 
Apr 30 12:52:17.052695 sshd[5278]: Accepted publickey for core from 147.75.109.163 port 36320 ssh2: RSA SHA256:hWFbLZfpyLdN3yuB7pBWwnRO+bXJlsyzaawWuSZBTyk Apr 30 12:52:17.054404 sshd-session[5278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:52:17.060478 systemd-logind[1889]: New session 26 of user core. Apr 30 12:52:17.066381 systemd[1]: Started session-26.scope - Session 26 of User core. Apr 30 12:52:17.092538 kubelet[3533]: I0430 12:52:17.092494 3533 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42444ad5-7189-4126-9d0d-c9e898dc3811" path="/var/lib/kubelet/pods/42444ad5-7189-4126-9d0d-c9e898dc3811/volumes" Apr 30 12:52:17.093131 kubelet[3533]: I0430 12:52:17.093093 3533 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea2903db-f092-4c14-859b-407746c8ad61" path="/var/lib/kubelet/pods/ea2903db-f092-4c14-859b-407746c8ad61/volumes" Apr 30 12:52:17.166329 kubelet[3533]: E0430 12:52:17.166287 3533 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 30 12:52:17.418072 ntpd[1882]: Deleting interface #12 lxc_health, fe80::f038:4fff:fe2b:3fa5%8#123, interface stats: received=0, sent=0, dropped=0, active_time=53 secs Apr 30 12:52:17.418627 ntpd[1882]: 30 Apr 12:52:17 ntpd[1882]: Deleting interface #12 lxc_health, fe80::f038:4fff:fe2b:3fa5%8#123, interface stats: received=0, sent=0, dropped=0, active_time=53 secs Apr 30 12:52:17.648035 sshd[5281]: Connection closed by 147.75.109.163 port 36320 Apr 30 12:52:17.651436 sshd-session[5278]: pam_unix(sshd:session): session closed for user core Apr 30 12:52:17.655641 systemd[1]: sshd@25-172.31.19.82:22-147.75.109.163:36320.service: Deactivated successfully. Apr 30 12:52:17.662673 systemd[1]: session-26.scope: Deactivated successfully. Apr 30 12:52:17.665086 systemd-logind[1889]: Session 26 logged out. 
Waiting for processes to exit. Apr 30 12:52:17.669868 systemd-logind[1889]: Removed session 26. Apr 30 12:52:17.705619 systemd[1]: Started sshd@26-172.31.19.82:22-147.75.109.163:42678.service - OpenSSH per-connection server daemon (147.75.109.163:42678). Apr 30 12:52:17.710360 kubelet[3533]: I0430 12:52:17.704767 3533 topology_manager.go:215] "Topology Admit Handler" podUID="2f76ec96-7574-4bda-9a6b-74c64638cd6f" podNamespace="kube-system" podName="cilium-j5fn2" Apr 30 12:52:17.718133 kubelet[3533]: E0430 12:52:17.718085 3533 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ea2903db-f092-4c14-859b-407746c8ad61" containerName="mount-cgroup" Apr 30 12:52:17.718714 kubelet[3533]: E0430 12:52:17.718311 3533 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ea2903db-f092-4c14-859b-407746c8ad61" containerName="apply-sysctl-overwrites" Apr 30 12:52:17.718714 kubelet[3533]: E0430 12:52:17.718333 3533 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="42444ad5-7189-4126-9d0d-c9e898dc3811" containerName="cilium-operator" Apr 30 12:52:17.718714 kubelet[3533]: E0430 12:52:17.718342 3533 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ea2903db-f092-4c14-859b-407746c8ad61" containerName="clean-cilium-state" Apr 30 12:52:17.718714 kubelet[3533]: E0430 12:52:17.718354 3533 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ea2903db-f092-4c14-859b-407746c8ad61" containerName="mount-bpf-fs" Apr 30 12:52:17.718714 kubelet[3533]: E0430 12:52:17.718363 3533 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ea2903db-f092-4c14-859b-407746c8ad61" containerName="cilium-agent" Apr 30 12:52:17.718714 kubelet[3533]: I0430 12:52:17.718419 3533 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea2903db-f092-4c14-859b-407746c8ad61" containerName="cilium-agent" Apr 30 12:52:17.718714 kubelet[3533]: I0430 12:52:17.718429 3533 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="42444ad5-7189-4126-9d0d-c9e898dc3811" containerName="cilium-operator" Apr 30 12:52:17.776482 systemd[1]: Created slice kubepods-burstable-pod2f76ec96_7574_4bda_9a6b_74c64638cd6f.slice - libcontainer container kubepods-burstable-pod2f76ec96_7574_4bda_9a6b_74c64638cd6f.slice. Apr 30 12:52:17.783220 kubelet[3533]: I0430 12:52:17.782161 3533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2f76ec96-7574-4bda-9a6b-74c64638cd6f-cilium-ipsec-secrets\") pod \"cilium-j5fn2\" (UID: \"2f76ec96-7574-4bda-9a6b-74c64638cd6f\") " pod="kube-system/cilium-j5fn2" Apr 30 12:52:17.783220 kubelet[3533]: I0430 12:52:17.782249 3533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2f76ec96-7574-4bda-9a6b-74c64638cd6f-bpf-maps\") pod \"cilium-j5fn2\" (UID: \"2f76ec96-7574-4bda-9a6b-74c64638cd6f\") " pod="kube-system/cilium-j5fn2" Apr 30 12:52:17.783220 kubelet[3533]: I0430 12:52:17.782268 3533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2f76ec96-7574-4bda-9a6b-74c64638cd6f-cilium-cgroup\") pod \"cilium-j5fn2\" (UID: \"2f76ec96-7574-4bda-9a6b-74c64638cd6f\") " pod="kube-system/cilium-j5fn2" Apr 30 12:52:17.783220 kubelet[3533]: I0430 12:52:17.782285 3533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2f76ec96-7574-4bda-9a6b-74c64638cd6f-host-proc-sys-kernel\") pod \"cilium-j5fn2\" (UID: \"2f76ec96-7574-4bda-9a6b-74c64638cd6f\") " pod="kube-system/cilium-j5fn2" Apr 30 12:52:17.783220 kubelet[3533]: I0430 12:52:17.782299 3533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/2f76ec96-7574-4bda-9a6b-74c64638cd6f-lib-modules\") pod \"cilium-j5fn2\" (UID: \"2f76ec96-7574-4bda-9a6b-74c64638cd6f\") " pod="kube-system/cilium-j5fn2"
Apr 30 12:52:17.783220 kubelet[3533]: I0430 12:52:17.782314 3533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2f76ec96-7574-4bda-9a6b-74c64638cd6f-cilium-config-path\") pod \"cilium-j5fn2\" (UID: \"2f76ec96-7574-4bda-9a6b-74c64638cd6f\") " pod="kube-system/cilium-j5fn2"
Apr 30 12:52:17.783517 kubelet[3533]: I0430 12:52:17.782330 3533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-889n6\" (UniqueName: \"kubernetes.io/projected/2f76ec96-7574-4bda-9a6b-74c64638cd6f-kube-api-access-889n6\") pod \"cilium-j5fn2\" (UID: \"2f76ec96-7574-4bda-9a6b-74c64638cd6f\") " pod="kube-system/cilium-j5fn2"
Apr 30 12:52:17.783517 kubelet[3533]: I0430 12:52:17.782347 3533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2f76ec96-7574-4bda-9a6b-74c64638cd6f-hostproc\") pod \"cilium-j5fn2\" (UID: \"2f76ec96-7574-4bda-9a6b-74c64638cd6f\") " pod="kube-system/cilium-j5fn2"
Apr 30 12:52:17.783517 kubelet[3533]: I0430 12:52:17.782362 3533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f76ec96-7574-4bda-9a6b-74c64638cd6f-xtables-lock\") pod \"cilium-j5fn2\" (UID: \"2f76ec96-7574-4bda-9a6b-74c64638cd6f\") " pod="kube-system/cilium-j5fn2"
Apr 30 12:52:17.783517 kubelet[3533]: I0430 12:52:17.782377 3533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2f76ec96-7574-4bda-9a6b-74c64638cd6f-clustermesh-secrets\") pod \"cilium-j5fn2\" (UID: \"2f76ec96-7574-4bda-9a6b-74c64638cd6f\") " pod="kube-system/cilium-j5fn2"
Apr 30 12:52:17.783517 kubelet[3533]: I0430 12:52:17.782392 3533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2f76ec96-7574-4bda-9a6b-74c64638cd6f-host-proc-sys-net\") pod \"cilium-j5fn2\" (UID: \"2f76ec96-7574-4bda-9a6b-74c64638cd6f\") " pod="kube-system/cilium-j5fn2"
Apr 30 12:52:17.783517 kubelet[3533]: I0430 12:52:17.782408 3533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2f76ec96-7574-4bda-9a6b-74c64638cd6f-etc-cni-netd\") pod \"cilium-j5fn2\" (UID: \"2f76ec96-7574-4bda-9a6b-74c64638cd6f\") " pod="kube-system/cilium-j5fn2"
Apr 30 12:52:17.783659 kubelet[3533]: I0430 12:52:17.782424 3533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2f76ec96-7574-4bda-9a6b-74c64638cd6f-cilium-run\") pod \"cilium-j5fn2\" (UID: \"2f76ec96-7574-4bda-9a6b-74c64638cd6f\") " pod="kube-system/cilium-j5fn2"
Apr 30 12:52:17.783659 kubelet[3533]: I0430 12:52:17.782442 3533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2f76ec96-7574-4bda-9a6b-74c64638cd6f-cni-path\") pod \"cilium-j5fn2\" (UID: \"2f76ec96-7574-4bda-9a6b-74c64638cd6f\") " pod="kube-system/cilium-j5fn2"
Apr 30 12:52:17.783659 kubelet[3533]: I0430 12:52:17.782458 3533 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2f76ec96-7574-4bda-9a6b-74c64638cd6f-hubble-tls\") pod \"cilium-j5fn2\" (UID: \"2f76ec96-7574-4bda-9a6b-74c64638cd6f\") " pod="kube-system/cilium-j5fn2"
Apr 30 12:52:17.985079 sshd[5293]: Accepted publickey for core from 147.75.109.163 port 42678 ssh2: RSA SHA256:hWFbLZfpyLdN3yuB7pBWwnRO+bXJlsyzaawWuSZBTyk
Apr 30 12:52:17.987087 sshd-session[5293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:52:17.991999 systemd-logind[1889]: New session 27 of user core.
Apr 30 12:52:17.998344 systemd[1]: Started session-27.scope - Session 27 of User core.
Apr 30 12:52:18.086634 containerd[1911]: time="2025-04-30T12:52:18.086592320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j5fn2,Uid:2f76ec96-7574-4bda-9a6b-74c64638cd6f,Namespace:kube-system,Attempt:0,}"
Apr 30 12:52:18.117489 containerd[1911]: time="2025-04-30T12:52:18.117360571Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 12:52:18.117489 containerd[1911]: time="2025-04-30T12:52:18.117412729Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 12:52:18.117489 containerd[1911]: time="2025-04-30T12:52:18.117426946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 12:52:18.117692 containerd[1911]: time="2025-04-30T12:52:18.117602160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 12:52:18.138384 systemd[1]: Started cri-containerd-b0fbda9ea2e66c7683ed60d52f7b72c29451799469203b4abcc3f5b0a10dd862.scope - libcontainer container b0fbda9ea2e66c7683ed60d52f7b72c29451799469203b4abcc3f5b0a10dd862.
Apr 30 12:52:18.161033 containerd[1911]: time="2025-04-30T12:52:18.161003084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j5fn2,Uid:2f76ec96-7574-4bda-9a6b-74c64638cd6f,Namespace:kube-system,Attempt:0,} returns sandbox id \"b0fbda9ea2e66c7683ed60d52f7b72c29451799469203b4abcc3f5b0a10dd862\""
Apr 30 12:52:18.168427 containerd[1911]: time="2025-04-30T12:52:18.168385093Z" level=info msg="CreateContainer within sandbox \"b0fbda9ea2e66c7683ed60d52f7b72c29451799469203b4abcc3f5b0a10dd862\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 30 12:52:18.176999 sshd[5299]: Connection closed by 147.75.109.163 port 42678
Apr 30 12:52:18.177113 sshd-session[5293]: pam_unix(sshd:session): session closed for user core
Apr 30 12:52:18.179951 systemd[1]: sshd@26-172.31.19.82:22-147.75.109.163:42678.service: Deactivated successfully.
Apr 30 12:52:18.182310 systemd[1]: session-27.scope: Deactivated successfully.
Apr 30 12:52:18.184364 systemd-logind[1889]: Session 27 logged out. Waiting for processes to exit.
Apr 30 12:52:18.185887 systemd-logind[1889]: Removed session 27.
Apr 30 12:52:18.189647 containerd[1911]: time="2025-04-30T12:52:18.189462608Z" level=info msg="CreateContainer within sandbox \"b0fbda9ea2e66c7683ed60d52f7b72c29451799469203b4abcc3f5b0a10dd862\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f1c227416bfc863059a10fc2a4a109f84681d20fe2d15876691f4dbfaf8245ae\""
Apr 30 12:52:18.190347 containerd[1911]: time="2025-04-30T12:52:18.190252970Z" level=info msg="StartContainer for \"f1c227416bfc863059a10fc2a4a109f84681d20fe2d15876691f4dbfaf8245ae\""
Apr 30 12:52:18.228398 systemd[1]: Started cri-containerd-f1c227416bfc863059a10fc2a4a109f84681d20fe2d15876691f4dbfaf8245ae.scope - libcontainer container f1c227416bfc863059a10fc2a4a109f84681d20fe2d15876691f4dbfaf8245ae.
Apr 30 12:52:18.235552 systemd[1]: Started sshd@27-172.31.19.82:22-147.75.109.163:42692.service - OpenSSH per-connection server daemon (147.75.109.163:42692).
Apr 30 12:52:18.260843 containerd[1911]: time="2025-04-30T12:52:18.260805608Z" level=info msg="StartContainer for \"f1c227416bfc863059a10fc2a4a109f84681d20fe2d15876691f4dbfaf8245ae\" returns successfully"
Apr 30 12:52:18.277740 systemd[1]: cri-containerd-f1c227416bfc863059a10fc2a4a109f84681d20fe2d15876691f4dbfaf8245ae.scope: Deactivated successfully.
Apr 30 12:52:18.278448 systemd[1]: cri-containerd-f1c227416bfc863059a10fc2a4a109f84681d20fe2d15876691f4dbfaf8245ae.scope: Consumed 20ms CPU time, 9.7M memory peak, 3.2M read from disk.
Apr 30 12:52:18.328360 containerd[1911]: time="2025-04-30T12:52:18.328113675Z" level=info msg="shim disconnected" id=f1c227416bfc863059a10fc2a4a109f84681d20fe2d15876691f4dbfaf8245ae namespace=k8s.io
Apr 30 12:52:18.328360 containerd[1911]: time="2025-04-30T12:52:18.328199678Z" level=warning msg="cleaning up after shim disconnected" id=f1c227416bfc863059a10fc2a4a109f84681d20fe2d15876691f4dbfaf8245ae namespace=k8s.io
Apr 30 12:52:18.328360 containerd[1911]: time="2025-04-30T12:52:18.328209895Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 12:52:18.406500 containerd[1911]: time="2025-04-30T12:52:18.406338871Z" level=info msg="CreateContainer within sandbox \"b0fbda9ea2e66c7683ed60d52f7b72c29451799469203b4abcc3f5b0a10dd862\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 30 12:52:18.431135 containerd[1911]: time="2025-04-30T12:52:18.431088735Z" level=info msg="CreateContainer within sandbox \"b0fbda9ea2e66c7683ed60d52f7b72c29451799469203b4abcc3f5b0a10dd862\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"606470d5c0a25354dad0aa881f3d3f6d6eb49766499390b8bd9260656c80c267\""
Apr 30 12:52:18.431772 containerd[1911]: time="2025-04-30T12:52:18.431594468Z" level=info msg="StartContainer for \"606470d5c0a25354dad0aa881f3d3f6d6eb49766499390b8bd9260656c80c267\""
Apr 30 12:52:18.460409 systemd[1]: Started cri-containerd-606470d5c0a25354dad0aa881f3d3f6d6eb49766499390b8bd9260656c80c267.scope - libcontainer container 606470d5c0a25354dad0aa881f3d3f6d6eb49766499390b8bd9260656c80c267.
Apr 30 12:52:18.489583 containerd[1911]: time="2025-04-30T12:52:18.489022194Z" level=info msg="StartContainer for \"606470d5c0a25354dad0aa881f3d3f6d6eb49766499390b8bd9260656c80c267\" returns successfully"
Apr 30 12:52:18.491481 sshd[5362]: Accepted publickey for core from 147.75.109.163 port 42692 ssh2: RSA SHA256:hWFbLZfpyLdN3yuB7pBWwnRO+bXJlsyzaawWuSZBTyk
Apr 30 12:52:18.495664 sshd-session[5362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:52:18.502016 systemd-logind[1889]: New session 28 of user core.
Apr 30 12:52:18.505542 systemd[1]: Started session-28.scope - Session 28 of User core.
Apr 30 12:52:18.506652 systemd[1]: cri-containerd-606470d5c0a25354dad0aa881f3d3f6d6eb49766499390b8bd9260656c80c267.scope: Deactivated successfully.
Apr 30 12:52:18.506993 systemd[1]: cri-containerd-606470d5c0a25354dad0aa881f3d3f6d6eb49766499390b8bd9260656c80c267.scope: Consumed 18ms CPU time, 7.4M memory peak, 2.1M read from disk.
Apr 30 12:52:18.555862 containerd[1911]: time="2025-04-30T12:52:18.555779190Z" level=info msg="shim disconnected" id=606470d5c0a25354dad0aa881f3d3f6d6eb49766499390b8bd9260656c80c267 namespace=k8s.io
Apr 30 12:52:18.555862 containerd[1911]: time="2025-04-30T12:52:18.555853846Z" level=warning msg="cleaning up after shim disconnected" id=606470d5c0a25354dad0aa881f3d3f6d6eb49766499390b8bd9260656c80c267 namespace=k8s.io
Apr 30 12:52:18.555862 containerd[1911]: time="2025-04-30T12:52:18.555865042Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 12:52:18.802850 kubelet[3533]: I0430 12:52:18.802714 3533 setters.go:580] "Node became not ready" node="ip-172-31-19-82" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-04-30T12:52:18Z","lastTransitionTime":"2025-04-30T12:52:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Apr 30 12:52:19.413075 containerd[1911]: time="2025-04-30T12:52:19.412932181Z" level=info msg="CreateContainer within sandbox \"b0fbda9ea2e66c7683ed60d52f7b72c29451799469203b4abcc3f5b0a10dd862\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 30 12:52:19.437549 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1630203631.mount: Deactivated successfully.
Apr 30 12:52:19.441618 containerd[1911]: time="2025-04-30T12:52:19.441578049Z" level=info msg="CreateContainer within sandbox \"b0fbda9ea2e66c7683ed60d52f7b72c29451799469203b4abcc3f5b0a10dd862\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"305766b4d1a6f3504fe012d7fc20dd0d36a73e2529de9033b1850866d9b4d72f\""
Apr 30 12:52:19.442201 containerd[1911]: time="2025-04-30T12:52:19.442064807Z" level=info msg="StartContainer for \"305766b4d1a6f3504fe012d7fc20dd0d36a73e2529de9033b1850866d9b4d72f\""
Apr 30 12:52:19.473344 systemd[1]: Started cri-containerd-305766b4d1a6f3504fe012d7fc20dd0d36a73e2529de9033b1850866d9b4d72f.scope - libcontainer container 305766b4d1a6f3504fe012d7fc20dd0d36a73e2529de9033b1850866d9b4d72f.
Apr 30 12:52:19.506998 containerd[1911]: time="2025-04-30T12:52:19.506950185Z" level=info msg="StartContainer for \"305766b4d1a6f3504fe012d7fc20dd0d36a73e2529de9033b1850866d9b4d72f\" returns successfully"
Apr 30 12:52:19.513896 systemd[1]: cri-containerd-305766b4d1a6f3504fe012d7fc20dd0d36a73e2529de9033b1850866d9b4d72f.scope: Deactivated successfully.
Apr 30 12:52:19.552471 containerd[1911]: time="2025-04-30T12:52:19.552368101Z" level=info msg="shim disconnected" id=305766b4d1a6f3504fe012d7fc20dd0d36a73e2529de9033b1850866d9b4d72f namespace=k8s.io
Apr 30 12:52:19.552471 containerd[1911]: time="2025-04-30T12:52:19.552418236Z" level=warning msg="cleaning up after shim disconnected" id=305766b4d1a6f3504fe012d7fc20dd0d36a73e2529de9033b1850866d9b4d72f namespace=k8s.io
Apr 30 12:52:19.552471 containerd[1911]: time="2025-04-30T12:52:19.552425944Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 12:52:19.888453 systemd[1]: run-containerd-runc-k8s.io-305766b4d1a6f3504fe012d7fc20dd0d36a73e2529de9033b1850866d9b4d72f-runc.NbrdOO.mount: Deactivated successfully.
Apr 30 12:52:19.888556 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-305766b4d1a6f3504fe012d7fc20dd0d36a73e2529de9033b1850866d9b4d72f-rootfs.mount: Deactivated successfully.
Apr 30 12:52:20.416372 containerd[1911]: time="2025-04-30T12:52:20.416340803Z" level=info msg="CreateContainer within sandbox \"b0fbda9ea2e66c7683ed60d52f7b72c29451799469203b4abcc3f5b0a10dd862\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 30 12:52:20.436460 containerd[1911]: time="2025-04-30T12:52:20.436413849Z" level=info msg="CreateContainer within sandbox \"b0fbda9ea2e66c7683ed60d52f7b72c29451799469203b4abcc3f5b0a10dd862\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e0fc5549b4b7dd3fa30cbf8a5e820cf39812f53634f19374707e79af6291c0fd\""
Apr 30 12:52:20.437770 containerd[1911]: time="2025-04-30T12:52:20.437243432Z" level=info msg="StartContainer for \"e0fc5549b4b7dd3fa30cbf8a5e820cf39812f53634f19374707e79af6291c0fd\""
Apr 30 12:52:20.468095 systemd[1]: run-containerd-runc-k8s.io-e0fc5549b4b7dd3fa30cbf8a5e820cf39812f53634f19374707e79af6291c0fd-runc.PRSopN.mount: Deactivated successfully.
Apr 30 12:52:20.474395 systemd[1]: Started cri-containerd-e0fc5549b4b7dd3fa30cbf8a5e820cf39812f53634f19374707e79af6291c0fd.scope - libcontainer container e0fc5549b4b7dd3fa30cbf8a5e820cf39812f53634f19374707e79af6291c0fd.
Apr 30 12:52:20.500786 systemd[1]: cri-containerd-e0fc5549b4b7dd3fa30cbf8a5e820cf39812f53634f19374707e79af6291c0fd.scope: Deactivated successfully.
Apr 30 12:52:20.504718 containerd[1911]: time="2025-04-30T12:52:20.504596042Z" level=info msg="StartContainer for \"e0fc5549b4b7dd3fa30cbf8a5e820cf39812f53634f19374707e79af6291c0fd\" returns successfully"
Apr 30 12:52:20.534253 containerd[1911]: time="2025-04-30T12:52:20.534197369Z" level=info msg="shim disconnected" id=e0fc5549b4b7dd3fa30cbf8a5e820cf39812f53634f19374707e79af6291c0fd namespace=k8s.io
Apr 30 12:52:20.534253 containerd[1911]: time="2025-04-30T12:52:20.534244019Z" level=warning msg="cleaning up after shim disconnected" id=e0fc5549b4b7dd3fa30cbf8a5e820cf39812f53634f19374707e79af6291c0fd namespace=k8s.io
Apr 30 12:52:20.534253 containerd[1911]: time="2025-04-30T12:52:20.534252426Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 12:52:20.888325 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e0fc5549b4b7dd3fa30cbf8a5e820cf39812f53634f19374707e79af6291c0fd-rootfs.mount: Deactivated successfully.
Apr 30 12:52:21.422216 containerd[1911]: time="2025-04-30T12:52:21.422014023Z" level=info msg="CreateContainer within sandbox \"b0fbda9ea2e66c7683ed60d52f7b72c29451799469203b4abcc3f5b0a10dd862\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 30 12:52:21.446796 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1307475193.mount: Deactivated successfully.
Apr 30 12:52:21.460135 containerd[1911]: time="2025-04-30T12:52:21.460087646Z" level=info msg="CreateContainer within sandbox \"b0fbda9ea2e66c7683ed60d52f7b72c29451799469203b4abcc3f5b0a10dd862\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"052a0ae3aff5ee1a9a62ffa47bd58928d307e1ec9848770602888829f1f871f3\""
Apr 30 12:52:21.460864 containerd[1911]: time="2025-04-30T12:52:21.460837179Z" level=info msg="StartContainer for \"052a0ae3aff5ee1a9a62ffa47bd58928d307e1ec9848770602888829f1f871f3\""
Apr 30 12:52:21.498376 systemd[1]: Started cri-containerd-052a0ae3aff5ee1a9a62ffa47bd58928d307e1ec9848770602888829f1f871f3.scope - libcontainer container 052a0ae3aff5ee1a9a62ffa47bd58928d307e1ec9848770602888829f1f871f3.
Apr 30 12:52:21.540056 containerd[1911]: time="2025-04-30T12:52:21.539986586Z" level=info msg="StartContainer for \"052a0ae3aff5ee1a9a62ffa47bd58928d307e1ec9848770602888829f1f871f3\" returns successfully"
Apr 30 12:52:22.200203 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Apr 30 12:52:22.439422 kubelet[3533]: I0430 12:52:22.438831 3533 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-j5fn2" podStartSLOduration=5.438814192 podStartE2EDuration="5.438814192s" podCreationTimestamp="2025-04-30 12:52:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:52:22.437866382 +0000 UTC m=+95.462932066" watchObservedRunningTime="2025-04-30 12:52:22.438814192 +0000 UTC m=+95.463879882"
Apr 30 12:52:23.019863 systemd[1]: run-containerd-runc-k8s.io-052a0ae3aff5ee1a9a62ffa47bd58928d307e1ec9848770602888829f1f871f3-runc.cvdWWH.mount: Deactivated successfully.
Apr 30 12:52:25.070618 (udev-worker)[6147]: Network interface NamePolicy= disabled on kernel command line.
Apr 30 12:52:25.072559 systemd-networkd[1824]: lxc_health: Link UP
Apr 30 12:52:25.073491 systemd-networkd[1824]: lxc_health: Gained carrier
Apr 30 12:52:25.211197 systemd[1]: run-containerd-runc-k8s.io-052a0ae3aff5ee1a9a62ffa47bd58928d307e1ec9848770602888829f1f871f3-runc.EaxAcW.mount: Deactivated successfully.
Apr 30 12:52:25.330641 kubelet[3533]: E0430 12:52:25.330550 3533 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:47566->127.0.0.1:39777: write tcp 127.0.0.1:47566->127.0.0.1:39777: write: broken pipe
Apr 30 12:52:26.177257 systemd-networkd[1824]: lxc_health: Gained IPv6LL
Apr 30 12:52:28.418148 ntpd[1882]: Listen normally on 15 lxc_health [fe80::f0ae:14ff:feaf:8552%14]:123
Apr 30 12:52:28.419341 ntpd[1882]: 30 Apr 12:52:28 ntpd[1882]: Listen normally on 15 lxc_health [fe80::f0ae:14ff:feaf:8552%14]:123
Apr 30 12:52:32.065612 sshd[5444]: Connection closed by 147.75.109.163 port 42692
Apr 30 12:52:32.067667 sshd-session[5362]: pam_unix(sshd:session): session closed for user core
Apr 30 12:52:32.070743 systemd[1]: sshd@27-172.31.19.82:22-147.75.109.163:42692.service: Deactivated successfully.
Apr 30 12:52:32.072871 systemd[1]: session-28.scope: Deactivated successfully.
Apr 30 12:52:32.074360 systemd-logind[1889]: Session 28 logged out. Waiting for processes to exit.
Apr 30 12:52:32.075806 systemd-logind[1889]: Removed session 28.
Apr 30 12:52:45.913650 systemd[1]: cri-containerd-37b44401d79489211770663490f621f3c42cd3f77227d609fd7bce6fe9877249.scope: Deactivated successfully.
Apr 30 12:52:45.913932 systemd[1]: cri-containerd-37b44401d79489211770663490f621f3c42cd3f77227d609fd7bce6fe9877249.scope: Consumed 2.448s CPU time, 77.5M memory peak, 30.7M read from disk.
Apr 30 12:52:45.935513 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-37b44401d79489211770663490f621f3c42cd3f77227d609fd7bce6fe9877249-rootfs.mount: Deactivated successfully.
Apr 30 12:52:45.962322 containerd[1911]: time="2025-04-30T12:52:45.962102216Z" level=info msg="shim disconnected" id=37b44401d79489211770663490f621f3c42cd3f77227d609fd7bce6fe9877249 namespace=k8s.io
Apr 30 12:52:45.962322 containerd[1911]: time="2025-04-30T12:52:45.962163182Z" level=warning msg="cleaning up after shim disconnected" id=37b44401d79489211770663490f621f3c42cd3f77227d609fd7bce6fe9877249 namespace=k8s.io
Apr 30 12:52:45.962322 containerd[1911]: time="2025-04-30T12:52:45.962183942Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 12:52:46.493256 kubelet[3533]: I0430 12:52:46.493225 3533 scope.go:117] "RemoveContainer" containerID="37b44401d79489211770663490f621f3c42cd3f77227d609fd7bce6fe9877249"
Apr 30 12:52:46.496335 containerd[1911]: time="2025-04-30T12:52:46.496297780Z" level=info msg="CreateContainer within sandbox \"7283bfede085c607eb7eb36359f59d46568c4fb6459b32d5ba2f566f2d6befab\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 30 12:52:46.520631 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3203867635.mount: Deactivated successfully.
Apr 30 12:52:46.527951 containerd[1911]: time="2025-04-30T12:52:46.527893242Z" level=info msg="CreateContainer within sandbox \"7283bfede085c607eb7eb36359f59d46568c4fb6459b32d5ba2f566f2d6befab\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"c47017e5ab37754aaca385cc7ad12bdfb01090ec30d92625b6e7fcd1cbfd42a3\""
Apr 30 12:52:46.528437 containerd[1911]: time="2025-04-30T12:52:46.528408898Z" level=info msg="StartContainer for \"c47017e5ab37754aaca385cc7ad12bdfb01090ec30d92625b6e7fcd1cbfd42a3\""
Apr 30 12:52:46.562387 systemd[1]: Started cri-containerd-c47017e5ab37754aaca385cc7ad12bdfb01090ec30d92625b6e7fcd1cbfd42a3.scope - libcontainer container c47017e5ab37754aaca385cc7ad12bdfb01090ec30d92625b6e7fcd1cbfd42a3.
Apr 30 12:52:46.609118 containerd[1911]: time="2025-04-30T12:52:46.609074088Z" level=info msg="StartContainer for \"c47017e5ab37754aaca385cc7ad12bdfb01090ec30d92625b6e7fcd1cbfd42a3\" returns successfully"
Apr 30 12:52:47.113069 containerd[1911]: time="2025-04-30T12:52:47.112282129Z" level=info msg="StopPodSandbox for \"c19d8054af02944b1e25e9f4d1043756ecdfe8129e3eae1b6452e577c7a0797c\""
Apr 30 12:52:47.113069 containerd[1911]: time="2025-04-30T12:52:47.112532471Z" level=info msg="TearDown network for sandbox \"c19d8054af02944b1e25e9f4d1043756ecdfe8129e3eae1b6452e577c7a0797c\" successfully"
Apr 30 12:52:47.113069 containerd[1911]: time="2025-04-30T12:52:47.112550575Z" level=info msg="StopPodSandbox for \"c19d8054af02944b1e25e9f4d1043756ecdfe8129e3eae1b6452e577c7a0797c\" returns successfully"
Apr 30 12:52:47.128067 containerd[1911]: time="2025-04-30T12:52:47.126348935Z" level=info msg="RemovePodSandbox for \"c19d8054af02944b1e25e9f4d1043756ecdfe8129e3eae1b6452e577c7a0797c\""
Apr 30 12:52:47.130008 containerd[1911]: time="2025-04-30T12:52:47.129949933Z" level=info msg="Forcibly stopping sandbox \"c19d8054af02944b1e25e9f4d1043756ecdfe8129e3eae1b6452e577c7a0797c\""
Apr 30 12:52:47.130361 containerd[1911]: time="2025-04-30T12:52:47.130283419Z" level=info msg="TearDown network for sandbox \"c19d8054af02944b1e25e9f4d1043756ecdfe8129e3eae1b6452e577c7a0797c\" successfully"
Apr 30 12:52:47.140271 containerd[1911]: time="2025-04-30T12:52:47.140224451Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c19d8054af02944b1e25e9f4d1043756ecdfe8129e3eae1b6452e577c7a0797c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 30 12:52:47.140620 containerd[1911]: time="2025-04-30T12:52:47.140477260Z" level=info msg="RemovePodSandbox \"c19d8054af02944b1e25e9f4d1043756ecdfe8129e3eae1b6452e577c7a0797c\" returns successfully"
Apr 30 12:52:47.141271 containerd[1911]: time="2025-04-30T12:52:47.141060220Z" level=info msg="StopPodSandbox for \"6cb4934b7510e498b99dd792f32a4593247af06272fb726db3c1de4cadb350ff\""
Apr 30 12:52:47.141271 containerd[1911]: time="2025-04-30T12:52:47.141150639Z" level=info msg="TearDown network for sandbox \"6cb4934b7510e498b99dd792f32a4593247af06272fb726db3c1de4cadb350ff\" successfully"
Apr 30 12:52:47.141824 containerd[1911]: time="2025-04-30T12:52:47.141163657Z" level=info msg="StopPodSandbox for \"6cb4934b7510e498b99dd792f32a4593247af06272fb726db3c1de4cadb350ff\" returns successfully"
Apr 30 12:52:47.142423 containerd[1911]: time="2025-04-30T12:52:47.142218449Z" level=info msg="RemovePodSandbox for \"6cb4934b7510e498b99dd792f32a4593247af06272fb726db3c1de4cadb350ff\""
Apr 30 12:52:47.142423 containerd[1911]: time="2025-04-30T12:52:47.142246760Z" level=info msg="Forcibly stopping sandbox \"6cb4934b7510e498b99dd792f32a4593247af06272fb726db3c1de4cadb350ff\""
Apr 30 12:52:47.142423 containerd[1911]: time="2025-04-30T12:52:47.142347370Z" level=info msg="TearDown network for sandbox \"6cb4934b7510e498b99dd792f32a4593247af06272fb726db3c1de4cadb350ff\" successfully"
Apr 30 12:52:47.148323 containerd[1911]: time="2025-04-30T12:52:47.148158982Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6cb4934b7510e498b99dd792f32a4593247af06272fb726db3c1de4cadb350ff\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 30 12:52:47.148323 containerd[1911]: time="2025-04-30T12:52:47.148280389Z" level=info msg="RemovePodSandbox \"6cb4934b7510e498b99dd792f32a4593247af06272fb726db3c1de4cadb350ff\" returns successfully"
Apr 30 12:52:49.689465 kubelet[3533]: E0430 12:52:49.689409 3533 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-82?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 30 12:52:50.493132 systemd[1]: cri-containerd-33d89f22806facc5c1c9d777b95929c93c4240441b38697ecf0856b430b78235.scope: Deactivated successfully.
Apr 30 12:52:50.493757 systemd[1]: cri-containerd-33d89f22806facc5c1c9d777b95929c93c4240441b38697ecf0856b430b78235.scope: Consumed 1.226s CPU time, 27.6M memory peak, 9M read from disk.
Apr 30 12:52:50.515247 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-33d89f22806facc5c1c9d777b95929c93c4240441b38697ecf0856b430b78235-rootfs.mount: Deactivated successfully.
Apr 30 12:52:50.541304 containerd[1911]: time="2025-04-30T12:52:50.541222153Z" level=info msg="shim disconnected" id=33d89f22806facc5c1c9d777b95929c93c4240441b38697ecf0856b430b78235 namespace=k8s.io
Apr 30 12:52:50.541882 containerd[1911]: time="2025-04-30T12:52:50.541299006Z" level=warning msg="cleaning up after shim disconnected" id=33d89f22806facc5c1c9d777b95929c93c4240441b38697ecf0856b430b78235 namespace=k8s.io
Apr 30 12:52:50.541882 containerd[1911]: time="2025-04-30T12:52:50.541329426Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 12:52:51.506699 kubelet[3533]: I0430 12:52:51.506664 3533 scope.go:117] "RemoveContainer" containerID="33d89f22806facc5c1c9d777b95929c93c4240441b38697ecf0856b430b78235"
Apr 30 12:52:51.509117 containerd[1911]: time="2025-04-30T12:52:51.509075277Z" level=info msg="CreateContainer within sandbox \"5ae65ec8e7f72452e82e916b0b168119fa1d7f51d6854e626009bdb8f3c19a81\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Apr 30 12:52:51.532063 containerd[1911]: time="2025-04-30T12:52:51.532012986Z" level=info msg="CreateContainer within sandbox \"5ae65ec8e7f72452e82e916b0b168119fa1d7f51d6854e626009bdb8f3c19a81\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"bf9a0f323e172cb75963f274b503578ae3012fa85bc9ad44aaffd301e3979863\""
Apr 30 12:52:51.532557 containerd[1911]: time="2025-04-30T12:52:51.532472184Z" level=info msg="StartContainer for \"bf9a0f323e172cb75963f274b503578ae3012fa85bc9ad44aaffd301e3979863\""
Apr 30 12:52:51.562336 systemd[1]: Started cri-containerd-bf9a0f323e172cb75963f274b503578ae3012fa85bc9ad44aaffd301e3979863.scope - libcontainer container bf9a0f323e172cb75963f274b503578ae3012fa85bc9ad44aaffd301e3979863.
Apr 30 12:52:51.604507 containerd[1911]: time="2025-04-30T12:52:51.604461148Z" level=info msg="StartContainer for \"bf9a0f323e172cb75963f274b503578ae3012fa85bc9ad44aaffd301e3979863\" returns successfully"
Apr 30 12:52:59.690000 kubelet[3533]: E0430 12:52:59.689672 3533 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-82?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"