Mar 25 01:33:19.971207 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Mar 24 23:38:35 -00 2025 Mar 25 01:33:19.971248 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e7a00b7ee8d97e8d255663e9d3fa92277da8316702fb7f6d664fd7b137c307e9 Mar 25 01:33:19.971268 kernel: BIOS-provided physical RAM map: Mar 25 01:33:19.971280 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Mar 25 01:33:19.971292 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable Mar 25 01:33:19.971305 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved Mar 25 01:33:19.971320 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Mar 25 01:33:19.971333 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Mar 25 01:33:19.971345 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable Mar 25 01:33:19.971358 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Mar 25 01:33:19.972420 kernel: NX (Execute Disable) protection: active Mar 25 01:33:19.972448 kernel: APIC: Static calls initialized Mar 25 01:33:19.972463 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable Mar 25 01:33:19.972476 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable Mar 25 01:33:19.972492 kernel: extended physical RAM map: Mar 25 01:33:19.972507 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Mar 25 01:33:19.972527 kernel: reserve setup_data: [mem 
0x0000000000100000-0x00000000768c0017] usable Mar 25 01:33:19.972542 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable Mar 25 01:33:19.972555 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable Mar 25 01:33:19.972569 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved Mar 25 01:33:19.972583 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Mar 25 01:33:19.972676 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Mar 25 01:33:19.972693 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable Mar 25 01:33:19.972707 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Mar 25 01:33:19.972722 kernel: efi: EFI v2.7 by EDK II Mar 25 01:33:19.972736 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518 Mar 25 01:33:19.972755 kernel: secureboot: Secure boot disabled Mar 25 01:33:19.972769 kernel: SMBIOS 2.7 present. 
Mar 25 01:33:19.972782 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Mar 25 01:33:19.972796 kernel: Hypervisor detected: KVM Mar 25 01:33:19.972810 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Mar 25 01:33:19.972823 kernel: kvm-clock: using sched offset of 4056373512 cycles Mar 25 01:33:19.972837 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Mar 25 01:33:19.972851 kernel: tsc: Detected 2499.998 MHz processor Mar 25 01:33:19.972864 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 25 01:33:19.972877 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 25 01:33:19.972890 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Mar 25 01:33:19.972907 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Mar 25 01:33:19.972920 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 25 01:33:19.972989 kernel: Using GB pages for direct mapping Mar 25 01:33:19.973010 kernel: ACPI: Early table checksum verification disabled Mar 25 01:33:19.973024 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON) Mar 25 01:33:19.973038 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013) Mar 25 01:33:19.973055 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Mar 25 01:33:19.973071 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Mar 25 01:33:19.973087 kernel: ACPI: FACS 0x00000000789D0000 000040 Mar 25 01:33:19.973101 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Mar 25 01:33:19.973117 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Mar 25 01:33:19.973133 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Mar 25 01:33:19.973147 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 
00000001 AMZN 00000001) Mar 25 01:33:19.973162 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Mar 25 01:33:19.973179 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Mar 25 01:33:19.973192 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Mar 25 01:33:19.973298 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013) Mar 25 01:33:19.973312 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113] Mar 25 01:33:19.976403 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159] Mar 25 01:33:19.976431 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f] Mar 25 01:33:19.976447 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027] Mar 25 01:33:19.976462 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b] Mar 25 01:33:19.976477 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075] Mar 25 01:33:19.976497 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f] Mar 25 01:33:19.976511 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037] Mar 25 01:33:19.976525 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758] Mar 25 01:33:19.976539 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e] Mar 25 01:33:19.976553 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037] Mar 25 01:33:19.976569 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Mar 25 01:33:19.976583 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Mar 25 01:33:19.976598 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Mar 25 01:33:19.976612 kernel: NUMA: Initialized distance table, cnt=1 Mar 25 01:33:19.976629 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff] Mar 25 01:33:19.976644 kernel: Zone ranges: Mar 25 01:33:19.976659 kernel: DMA [mem 
0x0000000000001000-0x0000000000ffffff] Mar 25 01:33:19.976673 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff] Mar 25 01:33:19.976688 kernel: Normal empty Mar 25 01:33:19.976702 kernel: Movable zone start for each node Mar 25 01:33:19.976717 kernel: Early memory node ranges Mar 25 01:33:19.976730 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Mar 25 01:33:19.976745 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff] Mar 25 01:33:19.976763 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff] Mar 25 01:33:19.976779 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff] Mar 25 01:33:19.976793 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 25 01:33:19.976807 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Mar 25 01:33:19.976833 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Mar 25 01:33:19.976847 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges Mar 25 01:33:19.976861 kernel: ACPI: PM-Timer IO Port: 0xb008 Mar 25 01:33:19.976874 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Mar 25 01:33:19.976887 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Mar 25 01:33:19.976905 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Mar 25 01:33:19.976919 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 25 01:33:19.976933 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Mar 25 01:33:19.976948 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Mar 25 01:33:19.976961 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Mar 25 01:33:19.976975 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Mar 25 01:33:19.976988 kernel: TSC deadline timer available Mar 25 01:33:19.977001 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Mar 25 01:33:19.977015 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Mar 25 01:33:19.977028 kernel: [mem 
0x7ca00000-0xffffffff] available for PCI devices Mar 25 01:33:19.977043 kernel: Booting paravirtualized kernel on KVM Mar 25 01:33:19.977054 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 25 01:33:19.977066 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Mar 25 01:33:19.977078 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Mar 25 01:33:19.977090 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Mar 25 01:33:19.977102 kernel: pcpu-alloc: [0] 0 1 Mar 25 01:33:19.977115 kernel: kvm-guest: PV spinlocks enabled Mar 25 01:33:19.977127 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Mar 25 01:33:19.977147 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e7a00b7ee8d97e8d255663e9d3fa92277da8316702fb7f6d664fd7b137c307e9 Mar 25 01:33:19.977161 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Mar 25 01:33:19.977175 kernel: random: crng init done Mar 25 01:33:19.977189 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 25 01:33:19.977202 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Mar 25 01:33:19.977217 kernel: Fallback order for Node 0: 0 Mar 25 01:33:19.977231 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 501318 Mar 25 01:33:19.977244 kernel: Policy zone: DMA32 Mar 25 01:33:19.977262 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 25 01:33:19.977277 kernel: Memory: 1870488K/2037804K available (14336K kernel code, 2304K rwdata, 25060K rodata, 43592K init, 1472K bss, 167060K reserved, 0K cma-reserved) Mar 25 01:33:19.977292 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Mar 25 01:33:19.977307 kernel: Kernel/User page tables isolation: enabled Mar 25 01:33:19.977322 kernel: ftrace: allocating 37985 entries in 149 pages Mar 25 01:33:19.977349 kernel: ftrace: allocated 149 pages with 4 groups Mar 25 01:33:19.977368 kernel: Dynamic Preempt: voluntary Mar 25 01:33:19.979441 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 25 01:33:19.979464 kernel: rcu: RCU event tracing is enabled. Mar 25 01:33:19.979482 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Mar 25 01:33:19.979498 kernel: Trampoline variant of Tasks RCU enabled. Mar 25 01:33:19.979514 kernel: Rude variant of Tasks RCU enabled. Mar 25 01:33:19.979534 kernel: Tracing variant of Tasks RCU enabled. Mar 25 01:33:19.979550 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 25 01:33:19.979567 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Mar 25 01:33:19.979584 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Mar 25 01:33:19.979600 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Mar 25 01:33:19.979619 kernel: Console: colour dummy device 80x25 Mar 25 01:33:19.979634 kernel: printk: console [tty0] enabled Mar 25 01:33:19.979650 kernel: printk: console [ttyS0] enabled Mar 25 01:33:19.979674 kernel: ACPI: Core revision 20230628 Mar 25 01:33:19.979690 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Mar 25 01:33:19.979705 kernel: APIC: Switch to symmetric I/O mode setup Mar 25 01:33:19.979722 kernel: x2apic enabled Mar 25 01:33:19.979738 kernel: APIC: Switched APIC routing to: physical x2apic Mar 25 01:33:19.979754 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Mar 25 01:33:19.979773 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998) Mar 25 01:33:19.979923 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Mar 25 01:33:19.979944 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Mar 25 01:33:19.979960 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 25 01:33:19.979976 kernel: Spectre V2 : Mitigation: Retpolines Mar 25 01:33:19.979993 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Mar 25 01:33:19.980009 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Mar 25 01:33:19.980024 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Mar 25 01:33:19.980040 kernel: RETBleed: Vulnerable Mar 25 01:33:19.980056 kernel: Speculative Store Bypass: Vulnerable Mar 25 01:33:19.980075 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Mar 25 01:33:19.980091 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Mar 25 01:33:19.980106 kernel: GDS: Unknown: Dependent on hypervisor status Mar 25 01:33:19.980122 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 25 01:33:19.980137 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 25 01:33:19.980153 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 25 01:33:19.980168 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Mar 25 01:33:19.980183 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Mar 25 01:33:19.980199 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Mar 25 01:33:19.980214 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Mar 25 01:33:19.980229 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Mar 25 01:33:19.980248 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Mar 25 01:33:19.980263 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 25 01:33:19.980278 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Mar 25 01:33:19.980293 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Mar 25 01:33:19.980309 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Mar 25 01:33:19.980324 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Mar 25 01:33:19.980339 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Mar 25 01:33:19.980354 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Mar 25 01:33:19.980369 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. 
Mar 25 01:33:19.980397 kernel: Freeing SMP alternatives memory: 32K Mar 25 01:33:19.980412 kernel: pid_max: default: 32768 minimum: 301 Mar 25 01:33:19.980428 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 25 01:33:19.980446 kernel: landlock: Up and running. Mar 25 01:33:19.980461 kernel: SELinux: Initializing. Mar 25 01:33:19.980477 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Mar 25 01:33:19.980492 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Mar 25 01:33:19.980507 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Mar 25 01:33:19.980523 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 25 01:33:19.980538 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 25 01:33:19.980554 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 25 01:33:19.980571 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Mar 25 01:33:19.980589 kernel: signal: max sigframe size: 3632 Mar 25 01:33:19.980605 kernel: rcu: Hierarchical SRCU implementation. Mar 25 01:33:19.980621 kernel: rcu: Max phase no-delay instances is 400. Mar 25 01:33:19.980636 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Mar 25 01:33:19.980652 kernel: smp: Bringing up secondary CPUs ... Mar 25 01:33:19.980668 kernel: smpboot: x86: Booting SMP configuration: Mar 25 01:33:19.980683 kernel: .... node #0, CPUs: #1 Mar 25 01:33:19.980700 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Mar 25 01:33:19.980717 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. 
See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Mar 25 01:33:19.980735 kernel: smp: Brought up 1 node, 2 CPUs Mar 25 01:33:19.980751 kernel: smpboot: Max logical packages: 1 Mar 25 01:33:19.980767 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS) Mar 25 01:33:19.980782 kernel: devtmpfs: initialized Mar 25 01:33:19.980797 kernel: x86/mm: Memory block size: 128MB Mar 25 01:33:19.980812 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes) Mar 25 01:33:19.980828 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 25 01:33:19.980844 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Mar 25 01:33:19.980860 kernel: pinctrl core: initialized pinctrl subsystem Mar 25 01:33:19.980878 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 25 01:33:19.980894 kernel: audit: initializing netlink subsys (disabled) Mar 25 01:33:19.980909 kernel: audit: type=2000 audit(1742866399.867:1): state=initialized audit_enabled=0 res=1 Mar 25 01:33:19.980925 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 25 01:33:19.980940 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 25 01:33:19.980957 kernel: cpuidle: using governor menu Mar 25 01:33:19.980973 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 25 01:33:19.980990 kernel: dca service started, version 1.12.1 Mar 25 01:33:19.981005 kernel: PCI: Using configuration type 1 for base access Mar 25 01:33:19.981025 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Mar 25 01:33:19.981040 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 25 01:33:19.981055 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Mar 25 01:33:19.981071 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 25 01:33:19.981086 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 25 01:33:19.981101 kernel: ACPI: Added _OSI(Module Device) Mar 25 01:33:19.981117 kernel: ACPI: Added _OSI(Processor Device) Mar 25 01:33:19.981132 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Mar 25 01:33:19.981147 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 25 01:33:19.981166 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Mar 25 01:33:19.981181 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Mar 25 01:33:19.981197 kernel: ACPI: Interpreter enabled Mar 25 01:33:19.981212 kernel: ACPI: PM: (supports S0 S5) Mar 25 01:33:19.981228 kernel: ACPI: Using IOAPIC for interrupt routing Mar 25 01:33:19.981243 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 25 01:33:19.981333 kernel: PCI: Using E820 reservations for host bridge windows Mar 25 01:33:19.981354 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Mar 25 01:33:19.981370 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 25 01:33:19.982979 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Mar 25 01:33:19.983249 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Mar 25 01:33:19.983421 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Mar 25 01:33:19.983442 kernel: acpiphp: Slot [3] registered Mar 25 01:33:19.983456 kernel: acpiphp: Slot [4] registered Mar 25 01:33:19.983469 kernel: acpiphp: Slot [5] registered Mar 25 01:33:19.983482 kernel: acpiphp: Slot [6] registered Mar 25 01:33:19.983502 
kernel: acpiphp: Slot [7] registered Mar 25 01:33:19.983518 kernel: acpiphp: Slot [8] registered Mar 25 01:33:19.983721 kernel: acpiphp: Slot [9] registered Mar 25 01:33:19.983739 kernel: acpiphp: Slot [10] registered Mar 25 01:33:19.983755 kernel: acpiphp: Slot [11] registered Mar 25 01:33:19.983771 kernel: acpiphp: Slot [12] registered Mar 25 01:33:19.983787 kernel: acpiphp: Slot [13] registered Mar 25 01:33:19.983803 kernel: acpiphp: Slot [14] registered Mar 25 01:33:19.983818 kernel: acpiphp: Slot [15] registered Mar 25 01:33:19.983834 kernel: acpiphp: Slot [16] registered Mar 25 01:33:19.983853 kernel: acpiphp: Slot [17] registered Mar 25 01:33:19.983869 kernel: acpiphp: Slot [18] registered Mar 25 01:33:19.983884 kernel: acpiphp: Slot [19] registered Mar 25 01:33:19.983900 kernel: acpiphp: Slot [20] registered Mar 25 01:33:19.983916 kernel: acpiphp: Slot [21] registered Mar 25 01:33:19.983932 kernel: acpiphp: Slot [22] registered Mar 25 01:33:19.983948 kernel: acpiphp: Slot [23] registered Mar 25 01:33:19.983963 kernel: acpiphp: Slot [24] registered Mar 25 01:33:19.983979 kernel: acpiphp: Slot [25] registered Mar 25 01:33:19.983997 kernel: acpiphp: Slot [26] registered Mar 25 01:33:19.984013 kernel: acpiphp: Slot [27] registered Mar 25 01:33:19.984029 kernel: acpiphp: Slot [28] registered Mar 25 01:33:19.984045 kernel: acpiphp: Slot [29] registered Mar 25 01:33:19.984061 kernel: acpiphp: Slot [30] registered Mar 25 01:33:19.984077 kernel: acpiphp: Slot [31] registered Mar 25 01:33:19.984092 kernel: PCI host bridge to bus 0000:00 Mar 25 01:33:19.984254 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Mar 25 01:33:19.984390 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Mar 25 01:33:19.985572 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Mar 25 01:33:19.985707 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Mar 25 01:33:19.985913 kernel: pci_bus 0000:00: root 
bus resource [mem 0x100000000-0x2000ffffffff window] Mar 25 01:33:19.986036 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 25 01:33:19.986190 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Mar 25 01:33:19.986342 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Mar 25 01:33:19.988620 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Mar 25 01:33:19.988786 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Mar 25 01:33:19.988917 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Mar 25 01:33:19.989115 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Mar 25 01:33:19.989247 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Mar 25 01:33:19.989371 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Mar 25 01:33:19.989518 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Mar 25 01:33:19.989648 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Mar 25 01:33:19.989786 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Mar 25 01:33:19.989968 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref] Mar 25 01:33:19.990108 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Mar 25 01:33:19.990245 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb Mar 25 01:33:19.992401 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Mar 25 01:33:19.992594 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Mar 25 01:33:19.992964 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff] Mar 25 01:33:19.993510 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Mar 25 01:33:19.993722 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff] Mar 25 01:33:20.001308 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Mar 25 01:33:20.001346 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Mar 25 01:33:20.001362 kernel: ACPI: PCI: Interrupt 
link LNKC configured for IRQ 11 Mar 25 01:33:20.001413 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Mar 25 01:33:20.001433 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Mar 25 01:33:20.001446 kernel: iommu: Default domain type: Translated Mar 25 01:33:20.001460 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 25 01:33:20.001472 kernel: efivars: Registered efivars operations Mar 25 01:33:20.001486 kernel: PCI: Using ACPI for IRQ routing Mar 25 01:33:20.001498 kernel: PCI: pci_cache_line_size set to 64 bytes Mar 25 01:33:20.001511 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff] Mar 25 01:33:20.001523 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff] Mar 25 01:33:20.001536 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff] Mar 25 01:33:20.001723 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Mar 25 01:33:20.001876 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Mar 25 01:33:20.002014 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Mar 25 01:33:20.002034 kernel: vgaarb: loaded Mar 25 01:33:20.002049 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Mar 25 01:33:20.002065 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Mar 25 01:33:20.002079 kernel: clocksource: Switched to clocksource kvm-clock Mar 25 01:33:20.002094 kernel: VFS: Disk quotas dquot_6.6.0 Mar 25 01:33:20.002109 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 25 01:33:20.002130 kernel: pnp: PnP ACPI init Mar 25 01:33:20.002144 kernel: pnp: PnP ACPI: found 5 devices Mar 25 01:33:20.002159 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 25 01:33:20.002175 kernel: NET: Registered PF_INET protocol family Mar 25 01:33:20.002190 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 25 01:33:20.002206 kernel: tcp_listen_portaddr_hash hash 
table entries: 1024 (order: 2, 16384 bytes, linear) Mar 25 01:33:20.002221 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 25 01:33:20.002236 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Mar 25 01:33:20.002251 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Mar 25 01:33:20.002271 kernel: TCP: Hash tables configured (established 16384 bind 16384) Mar 25 01:33:20.002286 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Mar 25 01:33:20.002301 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Mar 25 01:33:20.002316 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 25 01:33:20.002330 kernel: NET: Registered PF_XDP protocol family Mar 25 01:33:20.002539 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Mar 25 01:33:20.002667 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Mar 25 01:33:20.002786 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Mar 25 01:33:20.002978 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Mar 25 01:33:20.003160 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window] Mar 25 01:33:20.003483 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Mar 25 01:33:20.003503 kernel: PCI: CLS 0 bytes, default 64 Mar 25 01:33:20.003517 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Mar 25 01:33:20.003560 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Mar 25 01:33:20.003574 kernel: clocksource: Switched to clocksource tsc Mar 25 01:33:20.003588 kernel: Initialise system trusted keyrings Mar 25 01:33:20.003601 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Mar 25 01:33:20.003655 kernel: Key type asymmetric registered Mar 25 01:33:20.003674 kernel: Asymmetric key parser 'x509' registered Mar 25 
01:33:20.003687 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 25 01:33:20.003748 kernel: io scheduler mq-deadline registered Mar 25 01:33:20.003762 kernel: io scheduler kyber registered Mar 25 01:33:20.003775 kernel: io scheduler bfq registered Mar 25 01:33:20.003789 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 25 01:33:20.003831 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 25 01:33:20.003845 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 25 01:33:20.003863 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 25 01:33:20.003877 kernel: i8042: Warning: Keylock active Mar 25 01:33:20.003919 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 25 01:33:20.004008 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 25 01:33:20.004286 kernel: rtc_cmos 00:00: RTC can wake from S4 Mar 25 01:33:20.004431 kernel: rtc_cmos 00:00: registered as rtc0 Mar 25 01:33:20.004552 kernel: rtc_cmos 00:00: setting system clock to 2025-03-25T01:33:19 UTC (1742866399) Mar 25 01:33:20.004672 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Mar 25 01:33:20.004694 kernel: intel_pstate: CPU model not supported Mar 25 01:33:20.004708 kernel: efifb: probing for efifb Mar 25 01:33:20.004723 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k Mar 25 01:33:20.004759 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Mar 25 01:33:20.004776 kernel: efifb: scrolling: redraw Mar 25 01:33:20.004792 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Mar 25 01:33:20.004807 kernel: Console: switching to colour frame buffer device 100x37 Mar 25 01:33:20.004832 kernel: fb0: EFI VGA frame buffer device Mar 25 01:33:20.004846 kernel: pstore: Using crash dump compression: deflate Mar 25 01:33:20.004864 kernel: pstore: Registered efi_pstore as persistent store backend Mar 25 01:33:20.004878 kernel: NET: Registered PF_INET6 
protocol family Mar 25 01:33:20.004892 kernel: Segment Routing with IPv6 Mar 25 01:33:20.004906 kernel: In-situ OAM (IOAM) with IPv6 Mar 25 01:33:20.004920 kernel: NET: Registered PF_PACKET protocol family Mar 25 01:33:20.004935 kernel: Key type dns_resolver registered Mar 25 01:33:20.004948 kernel: IPI shorthand broadcast: enabled Mar 25 01:33:20.004961 kernel: sched_clock: Marking stable (569003165, 143928360)->(808225298, -95293773) Mar 25 01:33:20.004975 kernel: registered taskstats version 1 Mar 25 01:33:20.004992 kernel: Loading compiled-in X.509 certificates Mar 25 01:33:20.005059 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: eff01054e94a599f8e404b9a9482f4e2220f5386' Mar 25 01:33:20.005074 kernel: Key type .fscrypt registered Mar 25 01:33:20.005088 kernel: Key type fscrypt-provisioning registered Mar 25 01:33:20.005102 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 25 01:33:20.005117 kernel: ima: Allocated hash algorithm: sha1 Mar 25 01:33:20.005131 kernel: ima: No architecture policies found Mar 25 01:33:20.005148 kernel: clk: Disabling unused clocks Mar 25 01:33:20.005165 kernel: Freeing unused kernel image (initmem) memory: 43592K Mar 25 01:33:20.005179 kernel: Write protecting the kernel read-only data: 40960k Mar 25 01:33:20.005192 kernel: Freeing unused kernel image (rodata/data gap) memory: 1564K Mar 25 01:33:20.005206 kernel: Run /init as init process Mar 25 01:33:20.005220 kernel: with arguments: Mar 25 01:33:20.005233 kernel: /init Mar 25 01:33:20.005247 kernel: with environment: Mar 25 01:33:20.005261 kernel: HOME=/ Mar 25 01:33:20.005274 kernel: TERM=linux Mar 25 01:33:20.005291 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Mar 25 01:33:20.005306 systemd[1]: Successfully made /usr/ read-only. 
Mar 25 01:33:20.005325 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 25 01:33:20.005341 systemd[1]: Detected virtualization amazon.
Mar 25 01:33:20.005355 systemd[1]: Detected architecture x86-64.
Mar 25 01:33:20.005371 systemd[1]: Running in initrd.
Mar 25 01:33:20.006465 systemd[1]: No hostname configured, using default hostname.
Mar 25 01:33:20.006484 systemd[1]: Hostname set to .
Mar 25 01:33:20.006499 systemd[1]: Initializing machine ID from VM UUID.
Mar 25 01:33:20.006564 systemd[1]: Queued start job for default target initrd.target.
Mar 25 01:33:20.006619 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 25 01:33:20.006708 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 25 01:33:20.006933 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 25 01:33:20.006952 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 25 01:33:20.006967 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 25 01:33:20.006983 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 25 01:33:20.007000 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 25 01:33:20.007015 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 25 01:33:20.007031 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 25 01:33:20.007051 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 25 01:33:20.007065 systemd[1]: Reached target paths.target - Path Units.
Mar 25 01:33:20.007081 systemd[1]: Reached target slices.target - Slice Units.
Mar 25 01:33:20.007097 systemd[1]: Reached target swap.target - Swaps.
Mar 25 01:33:20.007113 systemd[1]: Reached target timers.target - Timer Units.
Mar 25 01:33:20.007129 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 25 01:33:20.007146 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 25 01:33:20.007162 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 25 01:33:20.007178 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 25 01:33:20.007197 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 25 01:33:20.007214 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 25 01:33:20.007230 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 25 01:33:20.007246 systemd[1]: Reached target sockets.target - Socket Units.
Mar 25 01:33:20.007262 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 25 01:33:20.007278 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 25 01:33:20.007295 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 25 01:33:20.007311 systemd[1]: Starting systemd-fsck-usr.service...
Mar 25 01:33:20.007330 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 25 01:33:20.007347 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 25 01:33:20.007363 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 25 01:33:20.007394 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 25 01:33:20.007411 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 25 01:33:20.007461 systemd-journald[179]: Collecting audit messages is disabled.
Mar 25 01:33:20.007504 systemd[1]: Finished systemd-fsck-usr.service.
Mar 25 01:33:20.007521 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 25 01:33:20.007538 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 25 01:33:20.007557 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 25 01:33:20.007573 systemd-journald[179]: Journal started
Mar 25 01:33:20.007606 systemd-journald[179]: Runtime Journal (/run/log/journal/ec218c595b7f5197d64da623d89e4694) is 4.7M, max 38.1M, 33.3M free.
Mar 25 01:33:20.014290 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Mar 25 01:33:20.014347 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 25 01:33:20.016405 systemd-modules-load[181]: Inserted module 'overlay'
Mar 25 01:33:20.024372 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 25 01:33:20.026941 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 25 01:33:20.051946 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 25 01:33:20.065206 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 25 01:33:20.069557 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 25 01:33:20.067858 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 25 01:33:20.075121 kernel: Bridge firewalling registered
Mar 25 01:33:20.072412 systemd-modules-load[181]: Inserted module 'br_netfilter'
Mar 25 01:33:20.076598 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 25 01:33:20.082746 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 25 01:33:20.088591 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 25 01:33:20.091813 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 25 01:33:20.113950 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 25 01:33:20.127561 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 25 01:33:20.130672 dracut-cmdline[209]: dracut-dracut-053
Mar 25 01:33:20.132747 dracut-cmdline[209]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e7a00b7ee8d97e8d255663e9d3fa92277da8316702fb7f6d664fd7b137c307e9
Mar 25 01:33:20.212147 systemd-resolved[222]: Positive Trust Anchors:
Mar 25 01:33:20.212170 systemd-resolved[222]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 25 01:33:20.212233 systemd-resolved[222]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 25 01:33:20.225281 systemd-resolved[222]: Defaulting to hostname 'linux'.
Mar 25 01:33:20.226829 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 25 01:33:20.227613 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 25 01:33:20.251406 kernel: SCSI subsystem initialized
Mar 25 01:33:20.262444 kernel: Loading iSCSI transport class v2.0-870.
Mar 25 01:33:20.279409 kernel: iscsi: registered transport (tcp)
Mar 25 01:33:20.302894 kernel: iscsi: registered transport (qla4xxx)
Mar 25 01:33:20.302974 kernel: QLogic iSCSI HBA Driver
Mar 25 01:33:20.343623 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 25 01:33:20.346013 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 25 01:33:20.385485 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 25 01:33:20.385564 kernel: device-mapper: uevent: version 1.0.3
Mar 25 01:33:20.386395 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 25 01:33:20.431047 kernel: raid6: avx512x4 gen() 15834 MB/s
Mar 25 01:33:20.448407 kernel: raid6: avx512x2 gen() 16105 MB/s
Mar 25 01:33:20.466409 kernel: raid6: avx512x1 gen() 16340 MB/s
Mar 25 01:33:20.485643 kernel: raid6: avx2x4 gen() 11897 MB/s
Mar 25 01:33:20.503411 kernel: raid6: avx2x2 gen() 10687 MB/s
Mar 25 01:33:20.522394 kernel: raid6: avx2x1 gen() 12628 MB/s
Mar 25 01:33:20.522465 kernel: raid6: using algorithm avx512x1 gen() 16340 MB/s
Mar 25 01:33:20.541737 kernel: raid6: .... xor() 20532 MB/s, rmw enabled
Mar 25 01:33:20.541815 kernel: raid6: using avx512x2 recovery algorithm
Mar 25 01:33:20.573412 kernel: xor: automatically using best checksumming function avx
Mar 25 01:33:20.756406 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 25 01:33:20.766943 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 25 01:33:20.768971 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 25 01:33:20.797706 systemd-udevd[398]: Using default interface naming scheme 'v255'.
Mar 25 01:33:20.803570 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 25 01:33:20.806828 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 25 01:33:20.833189 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation
Mar 25 01:33:20.862279 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 25 01:33:20.864231 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 25 01:33:20.932162 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 25 01:33:20.937137 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 25 01:33:20.975829 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 25 01:33:20.979309 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 25 01:33:20.981310 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 25 01:33:20.982147 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 25 01:33:20.985702 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 25 01:33:21.011485 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 25 01:33:21.033047 kernel: cryptd: max_cpu_qlen set to 1000
Mar 25 01:33:21.045859 kernel: ena 0000:00:05.0: ENA device version: 0.10
Mar 25 01:33:21.079185 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Mar 25 01:33:21.079402 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 25 01:33:21.079426 kernel: AES CTR mode by8 optimization enabled
Mar 25 01:33:21.079445 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Mar 25 01:33:21.079630 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:23:93:62:60:b3
Mar 25 01:33:21.071819 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 25 01:33:21.072096 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 25 01:33:21.072970 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 25 01:33:21.116836 kernel: nvme nvme0: pci function 0000:00:04.0
Mar 25 01:33:21.117087 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Mar 25 01:33:21.073760 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 25 01:33:21.120992 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Mar 25 01:33:21.074025 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 25 01:33:21.075191 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 25 01:33:21.078028 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 25 01:33:21.085809 (udev-worker)[451]: Network interface NamePolicy= disabled on kernel command line.
Mar 25 01:33:21.131704 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 25 01:33:21.131740 kernel: GPT:9289727 != 16777215
Mar 25 01:33:21.131760 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 25 01:33:21.131780 kernel: GPT:9289727 != 16777215
Mar 25 01:33:21.131799 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 25 01:33:21.131826 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 25 01:33:21.097146 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 25 01:33:21.133358 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 25 01:33:21.140032 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 25 01:33:21.175339 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 25 01:33:21.206416 kernel: BTRFS: device fsid 6d9424cd-1432-492b-b006-b311869817e2 devid 1 transid 39 /dev/nvme0n1p3 scanned by (udev-worker) (458)
Mar 25 01:33:21.233410 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by (udev-worker) (450)
Mar 25 01:33:21.235294 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Mar 25 01:33:21.261433 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Mar 25 01:33:21.263969 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Mar 25 01:33:21.270197 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 25 01:33:21.291579 disk-uuid[626]: Primary Header is updated.
Mar 25 01:33:21.291579 disk-uuid[626]: Secondary Entries is updated.
Mar 25 01:33:21.291579 disk-uuid[626]: Secondary Header is updated.
Mar 25 01:33:21.312411 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 25 01:33:21.568797 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Mar 25 01:33:21.579877 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 25 01:33:22.317400 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 25 01:33:22.318736 disk-uuid[632]: The operation has completed successfully.
Mar 25 01:33:22.469637 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 25 01:33:22.469755 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 25 01:33:22.525516 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 25 01:33:22.539861 sh[890]: Success
Mar 25 01:33:22.570400 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Mar 25 01:33:22.713276 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 25 01:33:22.722977 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 25 01:33:22.735494 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 25 01:33:22.759124 kernel: BTRFS info (device dm-0): first mount of filesystem 6d9424cd-1432-492b-b006-b311869817e2
Mar 25 01:33:22.759193 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 25 01:33:22.759214 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 25 01:33:22.762967 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 25 01:33:22.763024 kernel: BTRFS info (device dm-0): using free space tree
Mar 25 01:33:22.790409 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Mar 25 01:33:22.794047 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 25 01:33:22.795474 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 25 01:33:22.797111 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 25 01:33:22.801600 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 25 01:33:22.848264 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a72930ba-1354-475c-94df-b83a66efea67
Mar 25 01:33:22.848338 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Mar 25 01:33:22.848361 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 25 01:33:22.856472 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 25 01:33:22.864429 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a72930ba-1354-475c-94df-b83a66efea67
Mar 25 01:33:22.867546 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 25 01:33:22.871665 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 25 01:33:22.926120 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 25 01:33:22.929141 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 25 01:33:22.965909 systemd-networkd[1079]: lo: Link UP
Mar 25 01:33:22.965921 systemd-networkd[1079]: lo: Gained carrier
Mar 25 01:33:22.967752 systemd-networkd[1079]: Enumeration completed
Mar 25 01:33:22.968355 systemd-networkd[1079]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 25 01:33:22.968361 systemd-networkd[1079]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 25 01:33:22.970189 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 25 01:33:22.971611 systemd[1]: Reached target network.target - Network.
Mar 25 01:33:22.971954 systemd-networkd[1079]: eth0: Link UP
Mar 25 01:33:22.971958 systemd-networkd[1079]: eth0: Gained carrier
Mar 25 01:33:22.971966 systemd-networkd[1079]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 25 01:33:22.984629 systemd-networkd[1079]: eth0: DHCPv4 address 172.31.17.232/20, gateway 172.31.16.1 acquired from 172.31.16.1
Mar 25 01:33:23.239122 ignition[1028]: Ignition 2.20.0
Mar 25 01:33:23.239140 ignition[1028]: Stage: fetch-offline
Mar 25 01:33:23.239368 ignition[1028]: no configs at "/usr/lib/ignition/base.d"
Mar 25 01:33:23.239397 ignition[1028]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 25 01:33:23.240060 ignition[1028]: Ignition finished successfully
Mar 25 01:33:23.242567 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 25 01:33:23.244258 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 25 01:33:23.272410 ignition[1090]: Ignition 2.20.0
Mar 25 01:33:23.272424 ignition[1090]: Stage: fetch
Mar 25 01:33:23.272821 ignition[1090]: no configs at "/usr/lib/ignition/base.d"
Mar 25 01:33:23.272835 ignition[1090]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 25 01:33:23.272966 ignition[1090]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 25 01:33:23.281122 ignition[1090]: PUT result: OK
Mar 25 01:33:23.284374 ignition[1090]: parsed url from cmdline: ""
Mar 25 01:33:23.284425 ignition[1090]: no config URL provided
Mar 25 01:33:23.284435 ignition[1090]: reading system config file "/usr/lib/ignition/user.ign"
Mar 25 01:33:23.284451 ignition[1090]: no config at "/usr/lib/ignition/user.ign"
Mar 25 01:33:23.284472 ignition[1090]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 25 01:33:23.294493 ignition[1090]: PUT result: OK
Mar 25 01:33:23.294580 ignition[1090]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Mar 25 01:33:23.295592 ignition[1090]: GET result: OK
Mar 25 01:33:23.295686 ignition[1090]: parsing config with SHA512: a7605064905ea0fcf2a82d25ab5e5dafbb0b387bf759775b6732973311ce619ae2bb6dfac151a4e3593ee3a449d2623a16943031d6cef12505359e3826d8d87b
Mar 25 01:33:23.300436 unknown[1090]: fetched base config from "system"
Mar 25 01:33:23.300452 unknown[1090]: fetched base config from "system"
Mar 25 01:33:23.300459 unknown[1090]: fetched user config from "aws"
Mar 25 01:33:23.301173 ignition[1090]: fetch: fetch complete
Mar 25 01:33:23.301181 ignition[1090]: fetch: fetch passed
Mar 25 01:33:23.301237 ignition[1090]: Ignition finished successfully
Mar 25 01:33:23.304444 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 25 01:33:23.305842 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 25 01:33:23.333311 ignition[1096]: Ignition 2.20.0
Mar 25 01:33:23.333324 ignition[1096]: Stage: kargs
Mar 25 01:33:23.333783 ignition[1096]: no configs at "/usr/lib/ignition/base.d"
Mar 25 01:33:23.333792 ignition[1096]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 25 01:33:23.333993 ignition[1096]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 25 01:33:23.334995 ignition[1096]: PUT result: OK
Mar 25 01:33:23.337303 ignition[1096]: kargs: kargs passed
Mar 25 01:33:23.337358 ignition[1096]: Ignition finished successfully
Mar 25 01:33:23.338690 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 25 01:33:23.340302 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 25 01:33:23.364772 ignition[1102]: Ignition 2.20.0
Mar 25 01:33:23.364783 ignition[1102]: Stage: disks
Mar 25 01:33:23.365097 ignition[1102]: no configs at "/usr/lib/ignition/base.d"
Mar 25 01:33:23.365106 ignition[1102]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 25 01:33:23.365196 ignition[1102]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 25 01:33:23.365981 ignition[1102]: PUT result: OK
Mar 25 01:33:23.368555 ignition[1102]: disks: disks passed
Mar 25 01:33:23.368612 ignition[1102]: Ignition finished successfully
Mar 25 01:33:23.369928 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 25 01:33:23.370810 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 25 01:33:23.371208 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 25 01:33:23.371779 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 25 01:33:23.372351 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 25 01:33:23.373087 systemd[1]: Reached target basic.target - Basic System.
Mar 25 01:33:23.374711 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 25 01:33:23.426589 systemd-fsck[1110]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 25 01:33:23.429645 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 25 01:33:23.431889 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 25 01:33:23.550434 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 4e6dca82-2e50-453c-be25-61f944b72008 r/w with ordered data mode. Quota mode: none.
Mar 25 01:33:23.551963 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 25 01:33:23.553696 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 25 01:33:23.564189 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 25 01:33:23.567479 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 25 01:33:23.569938 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 25 01:33:23.571300 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 25 01:33:23.571342 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 25 01:33:23.580254 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 25 01:33:23.582313 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 25 01:33:23.601405 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1129)
Mar 25 01:33:23.606309 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a72930ba-1354-475c-94df-b83a66efea67
Mar 25 01:33:23.606391 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Mar 25 01:33:23.606413 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 25 01:33:23.620427 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 25 01:33:23.622609 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 25 01:33:23.884698 initrd-setup-root[1156]: cut: /sysroot/etc/passwd: No such file or directory
Mar 25 01:33:23.892126 initrd-setup-root[1163]: cut: /sysroot/etc/group: No such file or directory
Mar 25 01:33:23.898175 initrd-setup-root[1170]: cut: /sysroot/etc/shadow: No such file or directory
Mar 25 01:33:23.903566 initrd-setup-root[1177]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 25 01:33:24.077514 systemd-networkd[1079]: eth0: Gained IPv6LL
Mar 25 01:33:24.143263 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 25 01:33:24.145670 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 25 01:33:24.148515 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 25 01:33:24.164318 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 25 01:33:24.166755 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a72930ba-1354-475c-94df-b83a66efea67
Mar 25 01:33:24.202504 ignition[1244]: INFO : Ignition 2.20.0
Mar 25 01:33:24.203585 ignition[1244]: INFO : Stage: mount
Mar 25 01:33:24.207396 ignition[1244]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 25 01:33:24.207396 ignition[1244]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 25 01:33:24.207396 ignition[1244]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 25 01:33:24.207396 ignition[1244]: INFO : PUT result: OK
Mar 25 01:33:24.211302 ignition[1244]: INFO : mount: mount passed
Mar 25 01:33:24.211302 ignition[1244]: INFO : Ignition finished successfully
Mar 25 01:33:24.209344 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 25 01:33:24.213587 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 25 01:33:24.215002 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 25 01:33:24.231836 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 25 01:33:24.270932 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/nvme0n1p6 scanned by mount (1257) Mar 25 01:33:24.271020 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a72930ba-1354-475c-94df-b83a66efea67 Mar 25 01:33:24.274301 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Mar 25 01:33:24.274356 kernel: BTRFS info (device nvme0n1p6): using free space tree Mar 25 01:33:24.280403 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Mar 25 01:33:24.283121 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 25 01:33:24.308558 ignition[1274]: INFO : Ignition 2.20.0 Mar 25 01:33:24.308558 ignition[1274]: INFO : Stage: files Mar 25 01:33:24.310873 ignition[1274]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 25 01:33:24.310873 ignition[1274]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 25 01:33:24.310873 ignition[1274]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 25 01:33:24.310873 ignition[1274]: INFO : PUT result: OK Mar 25 01:33:24.313778 ignition[1274]: DEBUG : files: compiled without relabeling support, skipping Mar 25 01:33:24.326267 ignition[1274]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 25 01:33:24.326267 ignition[1274]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 25 01:33:24.351715 ignition[1274]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 25 01:33:24.356356 ignition[1274]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 25 01:33:24.356356 ignition[1274]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 25 01:33:24.356053 unknown[1274]: wrote ssh authorized keys file for user: core Mar 25 01:33:24.359221 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file 
"/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Mar 25 01:33:24.360642 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Mar 25 01:33:24.468940 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 25 01:33:24.688910 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Mar 25 01:33:24.690154 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 25 01:33:24.690154 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 25 01:33:25.199938 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 25 01:33:25.435792 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 25 01:33:25.437732 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 25 01:33:25.437732 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 25 01:33:25.437732 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 25 01:33:25.443453 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 25 01:33:25.443453 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 25 01:33:25.443453 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 25 01:33:25.443453 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 25 01:33:25.443453 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 25 01:33:25.443453 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 25 01:33:25.443453 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 25 01:33:25.443453 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Mar 25 01:33:25.443453 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Mar 25 01:33:25.443453 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Mar 25 01:33:25.443453 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
Mar 25 01:33:25.714267 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 25 01:33:26.108699 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Mar 25 01:33:26.108699 ignition[1274]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 25 01:33:26.110936 ignition[1274]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 25 01:33:26.112298 ignition[1274]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 25 01:33:26.112298 ignition[1274]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 25 01:33:26.112298 ignition[1274]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Mar 25 01:33:26.112298 ignition[1274]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Mar 25 01:33:26.112298 ignition[1274]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 25 01:33:26.112298 ignition[1274]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 25 01:33:26.112298 ignition[1274]: INFO : files: files passed
Mar 25 01:33:26.112298 ignition[1274]: INFO : Ignition finished successfully
Mar 25 01:33:26.116433 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 25 01:33:26.121567 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 25 01:33:26.123173 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 25 01:33:26.141681 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 25 01:33:26.142795 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
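The Ignition "files" stage logged above is driven by a provisioning config supplied to the instance. As a hedged illustration only, a Butane-style sketch of the kind of config that could produce these operations (the paths, URLs, and unit name are taken from the log; the overall structure is an assumption, not the actual config used on this machine):

```yaml
# Hypothetical Butane config sketch (variant/version per the Flatcar Butane spec).
variant: flatcar
version: 1.0.0
storage:
  files:
    # Fetched by op(3) in the log.
    - path: /opt/helm-v3.17.0-linux-amd64.tar.gz
      contents:
        source: https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz
    # Fetched by op(4) in the log.
    - path: /opt/bin/cilium.tar.gz
      contents:
        source: https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz
  links:
    # Written by op(a): activates the kubernetes sysext image at boot.
    - path: /etc/extensions/kubernetes.raw
      target: /opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw
systemd:
  units:
    # op(c)-op(e): unit written to /etc/systemd/system and preset-enabled.
    - name: prepare-helm.service
      enabled: true
```

Such a config would be transpiled to Ignition JSON and picked up from EC2 user data; Ignition then performs the GETs and file/link writes recorded in the log against the mounted /sysroot.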
Mar 25 01:33:26.149400 initrd-setup-root-after-ignition[1304]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 25 01:33:26.151617 initrd-setup-root-after-ignition[1304]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 25 01:33:26.153554 initrd-setup-root-after-ignition[1308]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 25 01:33:26.156016 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 25 01:33:26.156713 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 25 01:33:26.159159 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 25 01:33:26.218175 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 25 01:33:26.218324 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 25 01:33:26.220306 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 25 01:33:26.221458 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 25 01:33:26.222258 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 25 01:33:26.224618 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 25 01:33:26.249566 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 25 01:33:26.252907 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 25 01:33:26.280387 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 25 01:33:26.281270 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 25 01:33:26.282230 systemd[1]: Stopped target timers.target - Timer Units.
Mar 25 01:33:26.283341 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 25 01:33:26.283715 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 25 01:33:26.284877 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 25 01:33:26.285927 systemd[1]: Stopped target basic.target - Basic System.
Mar 25 01:33:26.286777 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 25 01:33:26.287707 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 25 01:33:26.288758 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 25 01:33:26.289509 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 25 01:33:26.290450 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 25 01:33:26.291423 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 25 01:33:26.292746 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 25 01:33:26.293483 systemd[1]: Stopped target swap.target - Swaps.
Mar 25 01:33:26.294170 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 25 01:33:26.294353 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 25 01:33:26.295578 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 25 01:33:26.296580 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 25 01:33:26.297239 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 25 01:33:26.297462 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 25 01:33:26.298288 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 25 01:33:26.298736 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 25 01:33:26.300000 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 25 01:33:26.300226 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 25 01:33:26.300943 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 25 01:33:26.301144 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 25 01:33:26.305670 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 25 01:33:26.309668 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 25 01:33:26.310181 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 25 01:33:26.310645 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 25 01:33:26.311502 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 25 01:33:26.311677 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 25 01:33:26.324290 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 25 01:33:26.324440 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 25 01:33:26.341719 ignition[1328]: INFO : Ignition 2.20.0
Mar 25 01:33:26.341719 ignition[1328]: INFO : Stage: umount
Mar 25 01:33:26.341719 ignition[1328]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 25 01:33:26.341719 ignition[1328]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 25 01:33:26.341719 ignition[1328]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 25 01:33:26.351298 ignition[1328]: INFO : PUT result: OK
Mar 25 01:33:26.351298 ignition[1328]: INFO : umount: umount passed
Mar 25 01:33:26.353474 ignition[1328]: INFO : Ignition finished successfully
Mar 25 01:33:26.355346 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 25 01:33:26.356323 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 25 01:33:26.356482 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 25 01:33:26.357957 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 25 01:33:26.358066 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 25 01:33:26.358882 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 25 01:33:26.358943 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 25 01:33:26.359649 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 25 01:33:26.359712 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 25 01:33:26.360314 systemd[1]: Stopped target network.target - Network.
Mar 25 01:33:26.361808 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 25 01:33:26.361881 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 25 01:33:26.362472 systemd[1]: Stopped target paths.target - Path Units.
Mar 25 01:33:26.363088 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 25 01:33:26.366601 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 25 01:33:26.367038 systemd[1]: Stopped target slices.target - Slice Units.
Mar 25 01:33:26.368129 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 25 01:33:26.368855 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 25 01:33:26.368927 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 25 01:33:26.369653 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 25 01:33:26.369777 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 25 01:33:26.370322 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 25 01:33:26.370473 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 25 01:33:26.371264 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 25 01:33:26.371325 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 25 01:33:26.372073 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 25 01:33:26.372685 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 25 01:33:26.376075 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 25 01:33:26.376273 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 25 01:33:26.381940 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 25 01:33:26.382821 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 25 01:33:26.382937 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 25 01:33:26.385505 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 25 01:33:26.386052 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 25 01:33:26.386186 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 25 01:33:26.388936 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 25 01:33:26.389763 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 25 01:33:26.389838 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 25 01:33:26.392807 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 25 01:33:26.393321 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 25 01:33:26.393425 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 25 01:33:26.394018 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 25 01:33:26.394076 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 25 01:33:26.394809 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 25 01:33:26.394867 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 25 01:33:26.395683 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 25 01:33:26.400695 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 25 01:33:26.412758 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 25 01:33:26.412962 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 25 01:33:26.414674 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 25 01:33:26.414757 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 25 01:33:26.417551 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 25 01:33:26.417607 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 25 01:33:26.418135 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 25 01:33:26.418204 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 25 01:33:26.420454 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 25 01:33:26.420504 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 25 01:33:26.420941 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 25 01:33:26.420983 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 25 01:33:26.425310 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 25 01:33:26.426557 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 25 01:33:26.426636 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 25 01:33:26.429400 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 25 01:33:26.429466 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 25 01:33:26.430061 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 25 01:33:26.430122 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 25 01:33:26.430884 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 25 01:33:26.430940 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 25 01:33:26.436681 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 25 01:33:26.436770 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 25 01:33:26.446282 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 25 01:33:26.447004 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 25 01:33:26.474626 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 25 01:33:26.474764 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 25 01:33:26.476323 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 25 01:33:26.476842 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 25 01:33:26.477047 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 25 01:33:26.479155 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 25 01:33:26.502244 systemd[1]: Switching root.
Mar 25 01:33:26.539418 systemd-journald[179]: Journal stopped
Mar 25 01:33:28.306041 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
Mar 25 01:33:28.306134 kernel: SELinux: policy capability network_peer_controls=1
Mar 25 01:33:28.306160 kernel: SELinux: policy capability open_perms=1
Mar 25 01:33:28.306178 kernel: SELinux: policy capability extended_socket_class=1
Mar 25 01:33:28.306201 kernel: SELinux: policy capability always_check_network=0
Mar 25 01:33:28.306282 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 25 01:33:28.306312 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 25 01:33:28.306332 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 25 01:33:28.306361 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 25 01:33:28.309435 kernel: audit: type=1403 audit(1742866406.829:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 25 01:33:28.309536 systemd[1]: Successfully loaded SELinux policy in 45.830ms.
Mar 25 01:33:28.309580 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 18.954ms.
Mar 25 01:33:28.309602 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 25 01:33:28.309621 systemd[1]: Detected virtualization amazon.
Mar 25 01:33:28.309645 systemd[1]: Detected architecture x86-64.
Mar 25 01:33:28.309666 systemd[1]: Detected first boot.
Mar 25 01:33:28.309694 systemd[1]: Initializing machine ID from VM UUID.
Mar 25 01:33:28.309714 zram_generator::config[1373]: No configuration found.
Mar 25 01:33:28.309735 kernel: Guest personality initialized and is inactive
Mar 25 01:33:28.309758 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Mar 25 01:33:28.309777 kernel: Initialized host personality
Mar 25 01:33:28.309797 kernel: NET: Registered PF_VSOCK protocol family
Mar 25 01:33:28.309819 systemd[1]: Populated /etc with preset unit settings.
Mar 25 01:33:28.309845 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 25 01:33:28.309866 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 25 01:33:28.309885 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 25 01:33:28.309903 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 25 01:33:28.309923 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 25 01:33:28.309945 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 25 01:33:28.309964 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 25 01:33:28.309986 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 25 01:33:28.310008 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 25 01:33:28.310028 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 25 01:33:28.310049 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 25 01:33:28.310070 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 25 01:33:28.310090 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 25 01:33:28.310110 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 25 01:33:28.310134 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 25 01:33:28.310155 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 25 01:33:28.310177 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 25 01:33:28.310198 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 25 01:33:28.310219 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 25 01:33:28.310241 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 25 01:33:28.310261 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 25 01:33:28.310285 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 25 01:33:28.310305 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 25 01:33:28.310326 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 25 01:33:28.310346 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 25 01:33:28.310368 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 25 01:33:28.310436 systemd[1]: Reached target slices.target - Slice Units.
Mar 25 01:33:28.310458 systemd[1]: Reached target swap.target - Swaps.
Mar 25 01:33:28.310567 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 25 01:33:28.310595 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 25 01:33:28.310621 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 25 01:33:28.310643 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 25 01:33:28.310663 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 25 01:33:28.310684 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 25 01:33:28.310704 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 25 01:33:28.310725 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 25 01:33:28.310745 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 25 01:33:28.310766 systemd[1]: Mounting media.mount - External Media Directory...
Mar 25 01:33:28.310787 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 25 01:33:28.310812 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 25 01:33:28.310833 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 25 01:33:28.310854 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 25 01:33:28.310875 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 25 01:33:28.310895 systemd[1]: Reached target machines.target - Containers.
Mar 25 01:33:28.310915 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 25 01:33:28.310936 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 25 01:33:28.310957 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 25 01:33:28.310978 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 25 01:33:28.311001 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 25 01:33:28.311020 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 25 01:33:28.311041 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 25 01:33:28.311060 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 25 01:33:28.311081 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 25 01:33:28.311101 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 25 01:33:28.311162 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 25 01:33:28.311186 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 25 01:33:28.311210 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 25 01:33:28.311230 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 25 01:33:28.311251 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 25 01:33:28.311272 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 25 01:33:28.311294 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 25 01:33:28.311314 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 25 01:33:28.311335 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 25 01:33:28.311356 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 25 01:33:28.329428 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 25 01:33:28.329472 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 25 01:33:28.329492 systemd[1]: Stopped verity-setup.service.
Mar 25 01:33:28.329513 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 25 01:33:28.329532 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 25 01:33:28.329556 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 25 01:33:28.329576 systemd[1]: Mounted media.mount - External Media Directory.
Mar 25 01:33:28.329594 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 25 01:33:28.329613 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 25 01:33:28.329632 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 25 01:33:28.329653 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 25 01:33:28.329674 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 25 01:33:28.329692 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 25 01:33:28.329712 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 25 01:33:28.329731 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 25 01:33:28.329750 kernel: fuse: init (API version 7.39)
Mar 25 01:33:28.329768 kernel: loop: module loaded
Mar 25 01:33:28.329785 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 25 01:33:28.329803 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 25 01:33:28.329821 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 25 01:33:28.329914 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 25 01:33:28.329938 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 25 01:33:28.329957 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 25 01:33:28.329975 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 25 01:33:28.329994 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 25 01:33:28.330012 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 25 01:33:28.330030 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 25 01:33:28.330049 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 25 01:33:28.330071 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 25 01:33:28.330090 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 25 01:33:28.330109 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 25 01:33:28.330127 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 25 01:33:28.330182 systemd-journald[1459]: Collecting audit messages is disabled.
Mar 25 01:33:28.330220 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 25 01:33:28.330238 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 25 01:33:28.330259 systemd-journald[1459]: Journal started
Mar 25 01:33:28.330294 systemd-journald[1459]: Runtime Journal (/run/log/journal/ec218c595b7f5197d64da623d89e4694) is 4.7M, max 38.1M, 33.3M free.
Mar 25 01:33:27.796394 systemd[1]: Queued start job for default target multi-user.target.
Mar 25 01:33:27.807544 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Mar 25 01:33:27.807971 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 25 01:33:28.354530 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 25 01:33:28.344538 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 25 01:33:28.345806 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 25 01:33:28.346446 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 25 01:33:28.348060 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 25 01:33:28.348344 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 25 01:33:28.351852 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 25 01:33:28.359814 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 25 01:33:28.364316 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 25 01:33:28.368441 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 25 01:33:28.369632 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 25 01:33:28.418584 kernel: ACPI: bus type drm_connector registered
Mar 25 01:33:28.437683 kernel: loop0: detected capacity change from 0 to 109808
Mar 25 01:33:28.421940 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 25 01:33:28.423315 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 25 01:33:28.423653 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 25 01:33:28.440919 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 25 01:33:28.442678 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 25 01:33:28.457557 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 25 01:33:28.464973 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 25 01:33:28.467513 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 25 01:33:28.513498 systemd-journald[1459]: Time spent on flushing to /var/log/journal/ec218c595b7f5197d64da623d89e4694 is 31.397ms for 1013 entries.
Mar 25 01:33:28.513498 systemd-journald[1459]: System Journal (/var/log/journal/ec218c595b7f5197d64da623d89e4694) is 8M, max 195.6M, 187.6M free. Mar 25 01:33:28.551316 systemd-journald[1459]: Received client request to flush runtime journal. Mar 25 01:33:28.510989 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Mar 25 01:33:28.523255 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 25 01:33:28.527601 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 25 01:33:28.554540 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 25 01:33:28.561263 systemd-tmpfiles[1489]: ACLs are not supported, ignoring. Mar 25 01:33:28.561290 systemd-tmpfiles[1489]: ACLs are not supported, ignoring. Mar 25 01:33:28.576877 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 25 01:33:28.580401 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 25 01:33:28.582587 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 25 01:33:28.594646 udevadm[1519]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Mar 25 01:33:28.610532 kernel: loop1: detected capacity change from 0 to 218376 Mar 25 01:33:28.684883 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 25 01:33:28.688647 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 25 01:33:28.730457 systemd-tmpfiles[1528]: ACLs are not supported, ignoring. Mar 25 01:33:28.730486 systemd-tmpfiles[1528]: ACLs are not supported, ignoring. Mar 25 01:33:28.737957 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 25 01:33:28.809875 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Mar 25 01:33:28.812502 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 25 01:33:28.842109 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 25 01:33:28.857711 kernel: loop2: detected capacity change from 0 to 64352 Mar 25 01:33:29.015871 kernel: loop3: detected capacity change from 0 to 151640 Mar 25 01:33:29.175442 kernel: loop4: detected capacity change from 0 to 109808 Mar 25 01:33:29.203616 kernel: loop5: detected capacity change from 0 to 218376 Mar 25 01:33:29.246622 kernel: loop6: detected capacity change from 0 to 64352 Mar 25 01:33:29.261408 kernel: loop7: detected capacity change from 0 to 151640 Mar 25 01:33:29.290965 (sd-merge)[1536]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Mar 25 01:33:29.293458 (sd-merge)[1536]: Merged extensions into '/usr'. Mar 25 01:33:29.304320 systemd[1]: Reload requested from client PID 1488 ('systemd-sysext') (unit systemd-sysext.service)... Mar 25 01:33:29.304341 systemd[1]: Reloading... Mar 25 01:33:29.418431 zram_generator::config[1560]: No configuration found. Mar 25 01:33:29.604870 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 25 01:33:29.708233 systemd[1]: Reloading finished in 403 ms. Mar 25 01:33:29.726619 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 25 01:33:29.727725 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 25 01:33:29.739608 systemd[1]: Starting ensure-sysext.service... Mar 25 01:33:29.743529 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 25 01:33:29.747546 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Mar 25 01:33:29.773930 ldconfig[1481]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 25 01:33:29.784443 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 25 01:33:29.787055 systemd[1]: Reload requested from client PID 1616 ('systemctl') (unit ensure-sysext.service)...
Mar 25 01:33:29.787186 systemd[1]: Reloading...
Mar 25 01:33:29.803885 systemd-udevd[1618]: Using default interface naming scheme 'v255'.
Mar 25 01:33:29.805579 systemd-tmpfiles[1617]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 25 01:33:29.806012 systemd-tmpfiles[1617]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 25 01:33:29.809845 systemd-tmpfiles[1617]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 25 01:33:29.810151 systemd-tmpfiles[1617]: ACLs are not supported, ignoring.
Mar 25 01:33:29.810247 systemd-tmpfiles[1617]: ACLs are not supported, ignoring.
Mar 25 01:33:29.817647 systemd-tmpfiles[1617]: Detected autofs mount point /boot during canonicalization of boot.
Mar 25 01:33:29.817660 systemd-tmpfiles[1617]: Skipping /boot
Mar 25 01:33:29.838159 systemd-tmpfiles[1617]: Detected autofs mount point /boot during canonicalization of boot.
Mar 25 01:33:29.838372 systemd-tmpfiles[1617]: Skipping /boot
Mar 25 01:33:29.927401 zram_generator::config[1655]: No configuration found.
Mar 25 01:33:30.019852 (udev-worker)[1658]: Network interface NamePolicy= disabled on kernel command line.
Mar 25 01:33:30.119816 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1672)
Mar 25 01:33:30.200153 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Mar 25 01:33:30.222457 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 25 01:33:30.225402 kernel: ACPI: button: Power Button [PWRF]
Mar 25 01:33:30.256427 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Mar 25 01:33:30.276403 kernel: ACPI: button: Sleep Button [SLPF]
Mar 25 01:33:30.297470 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Mar 25 01:33:30.314067 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5
Mar 25 01:33:30.391415 kernel: mousedev: PS/2 mouse device common for all mice
Mar 25 01:33:30.412051 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 25 01:33:30.412581 systemd[1]: Reloading finished in 622 ms.
Mar 25 01:33:30.423341 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 25 01:33:30.440249 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 25 01:33:30.468243 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 25 01:33:30.473537 systemd[1]: Finished ensure-sysext.service.
Mar 25 01:33:30.498333 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 25 01:33:30.499054 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 25 01:33:30.500452 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 25 01:33:30.505912 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 25 01:33:30.506953 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 25 01:33:30.514435 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 25 01:33:30.518469 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 25 01:33:30.521123 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 25 01:33:30.525014 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 25 01:33:30.530222 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 25 01:33:30.531672 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 25 01:33:30.534695 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 25 01:33:30.535811 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 25 01:33:30.538643 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 25 01:33:30.548598 lvm[1813]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 25 01:33:30.552149 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 25 01:33:30.561074 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 25 01:33:30.562364 systemd[1]: Reached target time-set.target - System Time Set.
Mar 25 01:33:30.568622 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 25 01:33:30.594150 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 25 01:33:30.594951 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 25 01:33:30.596082 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 25 01:33:30.598602 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 25 01:33:30.600159 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 25 01:33:30.608837 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 25 01:33:30.609066 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 25 01:33:30.610147 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 25 01:33:30.610373 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 25 01:33:30.619224 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 25 01:33:30.626647 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 25 01:33:30.627268 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 25 01:33:30.634815 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 25 01:33:30.635942 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 25 01:33:30.636194 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 25 01:33:30.639229 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 25 01:33:30.651477 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 25 01:33:30.669412 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 25 01:33:30.676355 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 25 01:33:30.684729 lvm[1837]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 25 01:33:30.682834 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 25 01:33:30.687318 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 25 01:33:30.692645 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 25 01:33:30.731145 augenrules[1859]: No rules
Mar 25 01:33:30.735114 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 25 01:33:30.735413 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 25 01:33:30.736544 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 25 01:33:30.738567 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 25 01:33:30.765586 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 25 01:33:30.766510 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 25 01:33:30.854717 systemd-networkd[1821]: lo: Link UP
Mar 25 01:33:30.854728 systemd-networkd[1821]: lo: Gained carrier
Mar 25 01:33:30.856584 systemd-networkd[1821]: Enumeration completed
Mar 25 01:33:30.856719 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 25 01:33:30.859923 systemd-networkd[1821]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 25 01:33:30.859936 systemd-networkd[1821]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 25 01:33:30.860813 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 25 01:33:30.863761 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 25 01:33:30.871441 systemd-networkd[1821]: eth0: Link UP
Mar 25 01:33:30.871855 systemd-networkd[1821]: eth0: Gained carrier
Mar 25 01:33:30.872014 systemd-networkd[1821]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 25 01:33:30.882344 systemd-resolved[1825]: Positive Trust Anchors:
Mar 25 01:33:30.882363 systemd-resolved[1825]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 25 01:33:30.882551 systemd-networkd[1821]: eth0: DHCPv4 address 172.31.17.232/20, gateway 172.31.16.1 acquired from 172.31.16.1
Mar 25 01:33:30.885487 systemd-resolved[1825]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 25 01:33:30.891421 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 25 01:33:30.892757 systemd-resolved[1825]: Defaulting to hostname 'linux'.
Mar 25 01:33:30.894574 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 25 01:33:30.895096 systemd[1]: Reached target network.target - Network.
Mar 25 01:33:30.895527 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 25 01:33:30.895913 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 25 01:33:30.896568 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 25 01:33:30.896971 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 25 01:33:30.897499 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 25 01:33:30.897957 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 25 01:33:30.898304 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 25 01:33:30.898670 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 25 01:33:30.898709 systemd[1]: Reached target paths.target - Path Units.
Mar 25 01:33:30.899050 systemd[1]: Reached target timers.target - Timer Units.
Mar 25 01:33:30.900535 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 25 01:33:30.902328 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 25 01:33:30.905354 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 25 01:33:30.905905 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 25 01:33:30.906272 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 25 01:33:30.910438 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 25 01:33:30.911886 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 25 01:33:30.913007 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 25 01:33:30.913520 systemd[1]: Reached target sockets.target - Socket Units.
Mar 25 01:33:30.913885 systemd[1]: Reached target basic.target - Basic System.
Mar 25 01:33:30.914285 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 25 01:33:30.914322 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 25 01:33:30.915422 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 25 01:33:30.917702 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 25 01:33:30.920555 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 25 01:33:30.924471 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 25 01:33:30.930989 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 25 01:33:30.931706 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 25 01:33:30.934620 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 25 01:33:30.937054 systemd[1]: Started ntpd.service - Network Time Service.
Mar 25 01:33:30.940478 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 25 01:33:30.944497 systemd[1]: Starting setup-oem.service - Setup OEM...
Mar 25 01:33:30.954178 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 25 01:33:30.969755 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 25 01:33:30.993164 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 25 01:33:30.996686 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 25 01:33:31.002952 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 25 01:33:31.005622 systemd[1]: Starting update-engine.service - Update Engine...
Mar 25 01:33:31.009744 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 25 01:33:31.018033 jq[1886]: false
Mar 25 01:33:31.034559 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 25 01:33:31.034895 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 25 01:33:31.060754 extend-filesystems[1887]: Found loop4
Mar 25 01:33:31.060754 extend-filesystems[1887]: Found loop5
Mar 25 01:33:31.060754 extend-filesystems[1887]: Found loop6
Mar 25 01:33:31.060754 extend-filesystems[1887]: Found loop7
Mar 25 01:33:31.060754 extend-filesystems[1887]: Found nvme0n1
Mar 25 01:33:31.060754 extend-filesystems[1887]: Found nvme0n1p1
Mar 25 01:33:31.060754 extend-filesystems[1887]: Found nvme0n1p2
Mar 25 01:33:31.060754 extend-filesystems[1887]: Found nvme0n1p3
Mar 25 01:33:31.060754 extend-filesystems[1887]: Found usr
Mar 25 01:33:31.060754 extend-filesystems[1887]: Found nvme0n1p4
Mar 25 01:33:31.060754 extend-filesystems[1887]: Found nvme0n1p6
Mar 25 01:33:31.060754 extend-filesystems[1887]: Found nvme0n1p7
Mar 25 01:33:31.060754 extend-filesystems[1887]: Found nvme0n1p9
Mar 25 01:33:31.060754 extend-filesystems[1887]: Checking size of /dev/nvme0n1p9
Mar 25 01:33:31.131505 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Mar 25 01:33:31.131545 ntpd[1889]: 25 Mar 01:33:31 ntpd[1889]: ntpd 4.2.8p17@1.4004-o Mon Mar 24 23:09:41 UTC 2025 (1): Starting
Mar 25 01:33:31.131545 ntpd[1889]: 25 Mar 01:33:31 ntpd[1889]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 25 01:33:31.131545 ntpd[1889]: 25 Mar 01:33:31 ntpd[1889]: ----------------------------------------------------
Mar 25 01:33:31.131545 ntpd[1889]: 25 Mar 01:33:31 ntpd[1889]: ntp-4 is maintained by Network Time Foundation,
Mar 25 01:33:31.131545 ntpd[1889]: 25 Mar 01:33:31 ntpd[1889]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 25 01:33:31.131545 ntpd[1889]: 25 Mar 01:33:31 ntpd[1889]: corporation. Support and training for ntp-4 are
Mar 25 01:33:31.131545 ntpd[1889]: 25 Mar 01:33:31 ntpd[1889]: available at https://www.nwtime.org/support
Mar 25 01:33:31.131545 ntpd[1889]: 25 Mar 01:33:31 ntpd[1889]: ----------------------------------------------------
Mar 25 01:33:31.131545 ntpd[1889]: 25 Mar 01:33:31 ntpd[1889]: proto: precision = 0.063 usec (-24)
Mar 25 01:33:31.131545 ntpd[1889]: 25 Mar 01:33:31 ntpd[1889]: basedate set to 2025-03-12
Mar 25 01:33:31.131545 ntpd[1889]: 25 Mar 01:33:31 ntpd[1889]: gps base set to 2025-03-16 (week 2358)
Mar 25 01:33:31.131545 ntpd[1889]: 25 Mar 01:33:31 ntpd[1889]: Listen and drop on 0 v6wildcard [::]:123
Mar 25 01:33:31.131545 ntpd[1889]: 25 Mar 01:33:31 ntpd[1889]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 25 01:33:31.131545 ntpd[1889]: 25 Mar 01:33:31 ntpd[1889]: Listen normally on 2 lo 127.0.0.1:123
Mar 25 01:33:31.131545 ntpd[1889]: 25 Mar 01:33:31 ntpd[1889]: Listen normally on 3 eth0 172.31.17.232:123
Mar 25 01:33:31.131545 ntpd[1889]: 25 Mar 01:33:31 ntpd[1889]: Listen normally on 4 lo [::1]:123
Mar 25 01:33:31.131545 ntpd[1889]: 25 Mar 01:33:31 ntpd[1889]: bind(21) AF_INET6 fe80::423:93ff:fe62:60b3%2#123 flags 0x11 failed: Cannot assign requested address
Mar 25 01:33:31.131545 ntpd[1889]: 25 Mar 01:33:31 ntpd[1889]: unable to create socket on eth0 (5) for fe80::423:93ff:fe62:60b3%2#123
Mar 25 01:33:31.131545 ntpd[1889]: 25 Mar 01:33:31 ntpd[1889]: failed to init interface for address fe80::423:93ff:fe62:60b3%2
Mar 25 01:33:31.131545 ntpd[1889]: 25 Mar 01:33:31 ntpd[1889]: Listening on routing socket on fd #21 for interface updates
Mar 25 01:33:31.088004 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 25 01:33:31.065733 ntpd[1889]: ntpd 4.2.8p17@1.4004-o Mon Mar 24 23:09:41 UTC 2025 (1): Starting
Mar 25 01:33:31.158148 jq[1899]: true
Mar 25 01:33:31.158286 extend-filesystems[1887]: Resized partition /dev/nvme0n1p9
Mar 25 01:33:31.159686 ntpd[1889]: 25 Mar 01:33:31 ntpd[1889]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 25 01:33:31.159686 ntpd[1889]: 25 Mar 01:33:31 ntpd[1889]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 25 01:33:31.088276 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 25 01:33:31.065761 ntpd[1889]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 25 01:33:31.160094 extend-filesystems[1918]: resize2fs 1.47.2 (1-Jan-2025)
Mar 25 01:33:31.100677 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 25 01:33:31.065772 ntpd[1889]: ----------------------------------------------------
Mar 25 01:33:31.133812 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 25 01:33:31.065782 ntpd[1889]: ntp-4 is maintained by Network Time Foundation,
Mar 25 01:33:31.166215 jq[1912]: true
Mar 25 01:33:31.133880 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 25 01:33:31.065792 ntpd[1889]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 25 01:33:31.134465 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 25 01:33:31.065803 ntpd[1889]: corporation. Support and training for ntp-4 are
Mar 25 01:33:31.134494 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 25 01:33:31.065812 ntpd[1889]: available at https://www.nwtime.org/support
Mar 25 01:33:31.065822 ntpd[1889]: ----------------------------------------------------
Mar 25 01:33:31.092872 ntpd[1889]: proto: precision = 0.063 usec (-24)
Mar 25 01:33:31.094010 dbus-daemon[1885]: [system] SELinux support is enabled
Mar 25 01:33:31.096711 ntpd[1889]: basedate set to 2025-03-12
Mar 25 01:33:31.172938 update_engine[1898]: I20250325 01:33:31.168535 1898 main.cc:92] Flatcar Update Engine starting
Mar 25 01:33:31.167869 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Mar 25 01:33:31.096733 ntpd[1889]: gps base set to 2025-03-16 (week 2358)
Mar 25 01:33:31.170181 systemd[1]: motdgen.service: Deactivated successfully.
Mar 25 01:33:31.116641 ntpd[1889]: Listen and drop on 0 v6wildcard [::]:123
Mar 25 01:33:31.171452 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 25 01:33:31.116698 ntpd[1889]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 25 01:33:31.117166 ntpd[1889]: Listen normally on 2 lo 127.0.0.1:123
Mar 25 01:33:31.117206 ntpd[1889]: Listen normally on 3 eth0 172.31.17.232:123
Mar 25 01:33:31.117246 ntpd[1889]: Listen normally on 4 lo [::1]:123
Mar 25 01:33:31.117292 ntpd[1889]: bind(21) AF_INET6 fe80::423:93ff:fe62:60b3%2#123 flags 0x11 failed: Cannot assign requested address
Mar 25 01:33:31.117314 ntpd[1889]: unable to create socket on eth0 (5) for fe80::423:93ff:fe62:60b3%2#123
Mar 25 01:33:31.117329 ntpd[1889]: failed to init interface for address fe80::423:93ff:fe62:60b3%2
Mar 25 01:33:31.117367 ntpd[1889]: Listening on routing socket on fd #21 for interface updates
Mar 25 01:33:31.118680 dbus-daemon[1885]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1821 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Mar 25 01:33:31.140432 ntpd[1889]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 25 01:33:31.140466 ntpd[1889]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 25 01:33:31.142255 dbus-daemon[1885]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 25 01:33:31.179094 (ntainerd)[1919]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 25 01:33:31.187681 update_engine[1898]: I20250325 01:33:31.183712 1898 update_check_scheduler.cc:74] Next update check in 7m55s
Mar 25 01:33:31.202649 systemd[1]: Started update-engine.service - Update Engine.
Mar 25 01:33:31.212538 tar[1909]: linux-amd64/LICENSE
Mar 25 01:33:31.213630 tar[1909]: linux-amd64/helm
Mar 25 01:33:31.222225 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 25 01:33:31.255268 systemd-logind[1897]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 25 01:33:31.281657 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Mar 25 01:33:31.281695 extend-filesystems[1918]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Mar 25 01:33:31.281695 extend-filesystems[1918]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 25 01:33:31.281695 extend-filesystems[1918]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Mar 25 01:33:31.255302 systemd-logind[1897]: Watching system buttons on /dev/input/event2 (Sleep Button)
Mar 25 01:33:31.285231 extend-filesystems[1887]: Resized filesystem in /dev/nvme0n1p9
Mar 25 01:33:31.255325 systemd-logind[1897]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 25 01:33:31.263594 systemd-logind[1897]: New seat seat0.
Mar 25 01:33:31.264732 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 25 01:33:31.282704 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 25 01:33:31.282989 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 25 01:33:31.294727 systemd[1]: Finished setup-oem.service - Setup OEM.
Mar 25 01:33:31.399419 bash[1962]: Updated "/home/core/.ssh/authorized_keys"
Mar 25 01:33:31.404907 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 25 01:33:31.405293 coreos-metadata[1884]: Mar 25 01:33:31.379 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Mar 25 01:33:31.411208 coreos-metadata[1884]: Mar 25 01:33:31.406 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Mar 25 01:33:31.409743 systemd[1]: Starting sshkeys.service...
Mar 25 01:33:31.411414 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1665)
Mar 25 01:33:31.414270 coreos-metadata[1884]: Mar 25 01:33:31.414 INFO Fetch successful
Mar 25 01:33:31.414419 coreos-metadata[1884]: Mar 25 01:33:31.414 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Mar 25 01:33:31.418797 coreos-metadata[1884]: Mar 25 01:33:31.418 INFO Fetch successful
Mar 25 01:33:31.418909 coreos-metadata[1884]: Mar 25 01:33:31.418 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Mar 25 01:33:31.422997 coreos-metadata[1884]: Mar 25 01:33:31.421 INFO Fetch successful
Mar 25 01:33:31.422997 coreos-metadata[1884]: Mar 25 01:33:31.421 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Mar 25 01:33:31.422997 coreos-metadata[1884]: Mar 25 01:33:31.421 INFO Fetch successful
Mar 25 01:33:31.422997 coreos-metadata[1884]: Mar 25 01:33:31.421 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Mar 25 01:33:31.422997 coreos-metadata[1884]: Mar 25 01:33:31.422 INFO Fetch failed with 404: resource not found
Mar 25 01:33:31.428625 coreos-metadata[1884]: Mar 25 01:33:31.427 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Mar 25 01:33:31.429490 coreos-metadata[1884]: Mar 25 01:33:31.429 INFO Fetch successful
Mar 25 01:33:31.429547 coreos-metadata[1884]: Mar 25 01:33:31.429 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Mar 25 01:33:31.434710 coreos-metadata[1884]: Mar 25 01:33:31.433 INFO Fetch successful
Mar 25 01:33:31.434710 coreos-metadata[1884]: Mar 25 01:33:31.433 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Mar 25 01:33:31.436206 coreos-metadata[1884]: Mar 25 01:33:31.436 INFO Fetch successful
Mar 25 01:33:31.436206 coreos-metadata[1884]: Mar 25 01:33:31.436 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Mar 25 01:33:31.437369 coreos-metadata[1884]: Mar 25 01:33:31.437 INFO Fetch successful
Mar 25 01:33:31.437488 coreos-metadata[1884]: Mar 25 01:33:31.437 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Mar 25 01:33:31.438588 coreos-metadata[1884]: Mar 25 01:33:31.438 INFO Fetch successful
Mar 25 01:33:31.502084 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Mar 25 01:33:31.515496 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Mar 25 01:33:31.568078 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Mar 25 01:33:31.608206 dbus-daemon[1885]: [system] Successfully activated service 'org.freedesktop.hostname1'
Mar 25 01:33:31.608865 dbus-daemon[1885]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1928 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Mar 25 01:33:31.624908 systemd[1]: Starting polkit.service - Authorization Manager...
Mar 25 01:33:31.631233 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 25 01:33:31.632624 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 25 01:33:31.684051 sshd_keygen[1931]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 25 01:33:31.688229 polkitd[2000]: Started polkitd version 121
Mar 25 01:33:31.703031 polkitd[2000]: Loading rules from directory /etc/polkit-1/rules.d
Mar 25 01:33:31.707121 polkitd[2000]: Loading rules from directory /usr/share/polkit-1/rules.d
Mar 25 01:33:31.730774 polkitd[2000]: Finished loading, compiling and executing 2 rules
Mar 25 01:33:31.731997 dbus-daemon[1885]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Mar 25 01:33:31.732632 systemd[1]: Started polkit.service - Authorization Manager.
Mar 25 01:33:31.737710 polkitd[2000]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Mar 25 01:33:31.759737 locksmithd[1937]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 25 01:33:31.772528 coreos-metadata[1979]: Mar 25 01:33:31.772 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Mar 25 01:33:31.779626 coreos-metadata[1979]: Mar 25 01:33:31.777 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Mar 25 01:33:31.780724 coreos-metadata[1979]: Mar 25 01:33:31.780 INFO Fetch successful
Mar 25 01:33:31.780724 coreos-metadata[1979]: Mar 25 01:33:31.780 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Mar 25 01:33:31.781709 coreos-metadata[1979]: Mar 25 01:33:31.781 INFO Fetch successful
Mar 25 01:33:31.783179 unknown[1979]: wrote ssh authorized keys file for user: core
Mar 25 01:33:31.791120 systemd-hostnamed[1928]: Hostname set to (transient)
Mar 25 01:33:31.791264 systemd-resolved[1825]: System hostname changed to 'ip-172-31-17-232'.
Mar 25 01:33:31.838344 update-ssh-keys[2065]: Updated "/home/core/.ssh/authorized_keys"
Mar 25 01:33:31.843220 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 25 01:33:31.846633 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Mar 25 01:33:31.876445 systemd[1]: Finished sshkeys.service.
Mar 25 01:33:31.952968 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 25 01:33:31.999836 systemd[1]: issuegen.service: Deactivated successfully.
Mar 25 01:33:32.002560 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 25 01:33:32.005875 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 25 01:33:32.009623 containerd[1919]: time="2025-03-25T01:33:32Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Mar 25 01:33:32.011412 containerd[1919]: time="2025-03-25T01:33:32.010674087Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1
Mar 25 01:33:32.040647 containerd[1919]: time="2025-03-25T01:33:32.040596395Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.407µs"
Mar 25 01:33:32.040760 containerd[1919]: time="2025-03-25T01:33:32.040651538Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Mar 25 01:33:32.040760 containerd[1919]: time="2025-03-25T01:33:32.040679832Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Mar 25 01:33:32.040902 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 25 01:33:32.043174 containerd[1919]: time="2025-03-25T01:33:32.040885862Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Mar 25 01:33:32.043174 containerd[1919]: time="2025-03-25T01:33:32.040919973Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Mar 25 01:33:32.043174 containerd[1919]: time="2025-03-25T01:33:32.040960987Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 25 01:33:32.043174 containerd[1919]: time="2025-03-25T01:33:32.041046322Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 25 01:33:32.043174 containerd[1919]: time="2025-03-25T01:33:32.041069613Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 25 01:33:32.043174 containerd[1919]: time="2025-03-25T01:33:32.041478619Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 25 01:33:32.043174 containerd[1919]: time="2025-03-25T01:33:32.041507359Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 25 01:33:32.043174 containerd[1919]: time="2025-03-25T01:33:32.041532228Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 25 01:33:32.043174 containerd[1919]: time="2025-03-25T01:33:32.041552814Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Mar 25 01:33:32.043174 containerd[1919]: time="2025-03-25T01:33:32.041677111Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Mar 25 01:33:32.043174 containerd[1919]: time="2025-03-25T01:33:32.041918657Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 25 01:33:32.043701 containerd[1919]: time="2025-03-25T01:33:32.041966892Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 25 01:33:32.043701 containerd[1919]: time="2025-03-25T01:33:32.041984720Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Mar 25 01:33:32.043701 containerd[1919]: time="2025-03-25T01:33:32.042019851Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Mar 25 01:33:32.043701 containerd[1919]: time="2025-03-25T01:33:32.042785357Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Mar 25 01:33:32.043701 containerd[1919]: time="2025-03-25T01:33:32.042879415Z" level=info msg="metadata content store policy set" policy=shared
Mar 25 01:33:32.046616 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 25 01:33:32.049635 containerd[1919]: time="2025-03-25T01:33:32.047571284Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Mar 25 01:33:32.049635 containerd[1919]: time="2025-03-25T01:33:32.047680969Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Mar 25 01:33:32.049635 containerd[1919]: time="2025-03-25T01:33:32.047705653Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Mar 25 01:33:32.049635 containerd[1919]: time="2025-03-25T01:33:32.047724967Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Mar 25 01:33:32.049635 containerd[1919]: time="2025-03-25T01:33:32.047742913Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Mar 25 01:33:32.049635 containerd[1919]: time="2025-03-25T01:33:32.047758958Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Mar 25 01:33:32.049635 containerd[1919]: time="2025-03-25T01:33:32.047785961Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Mar 25 01:33:32.049635 containerd[1919]: time="2025-03-25T01:33:32.047804587Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Mar 25 01:33:32.049635 containerd[1919]: time="2025-03-25T01:33:32.047822285Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Mar 25 01:33:32.049635 containerd[1919]: time="2025-03-25T01:33:32.047839178Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Mar 25 01:33:32.049635 containerd[1919]: time="2025-03-25T01:33:32.047853871Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Mar 25 01:33:32.049635 containerd[1919]: time="2025-03-25T01:33:32.047877952Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Mar 25 01:33:32.049635 containerd[1919]: time="2025-03-25T01:33:32.048037372Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Mar 25 01:33:32.049635 containerd[1919]: time="2025-03-25T01:33:32.048068349Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Mar 25 01:33:32.050175 containerd[1919]: time="2025-03-25T01:33:32.048089288Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Mar 25 01:33:32.050175 containerd[1919]: time="2025-03-25T01:33:32.048118211Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Mar 25 01:33:32.050175 containerd[1919]: time="2025-03-25T01:33:32.048136118Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Mar 25 01:33:32.050175 containerd[1919]: time="2025-03-25T01:33:32.048159457Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Mar 25 01:33:32.050175 containerd[1919]: time="2025-03-25T01:33:32.048179605Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Mar 25 01:33:32.050175 containerd[1919]: time="2025-03-25T01:33:32.048196432Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Mar 25 01:33:32.050175 containerd[1919]: time="2025-03-25T01:33:32.048217237Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Mar 25 01:33:32.050175 containerd[1919]: time="2025-03-25T01:33:32.048234385Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Mar 25 01:33:32.050175 containerd[1919]: time="2025-03-25T01:33:32.048252468Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Mar 25 01:33:32.050175 containerd[1919]: time="2025-03-25T01:33:32.048352352Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Mar 25 01:33:32.050175 containerd[1919]: time="2025-03-25T01:33:32.048396847Z" level=info msg="Start snapshots syncer"
Mar 25 01:33:32.050175 containerd[1919]: time="2025-03-25T01:33:32.048425812Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Mar 25 01:33:32.050631 containerd[1919]: time="2025-03-25T01:33:32.048834926Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Mar 25 01:33:32.050631 containerd[1919]: time="2025-03-25T01:33:32.048920833Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Mar 25 01:33:32.050820 containerd[1919]: time="2025-03-25T01:33:32.049028940Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Mar 25 01:33:32.050820 containerd[1919]: time="2025-03-25T01:33:32.049150266Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Mar 25 01:33:32.050820 containerd[1919]: time="2025-03-25T01:33:32.049197569Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Mar 25 01:33:32.050820 containerd[1919]: time="2025-03-25T01:33:32.049224834Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Mar 25 01:33:32.050820 containerd[1919]: time="2025-03-25T01:33:32.049256529Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Mar 25 01:33:32.050820 containerd[1919]: time="2025-03-25T01:33:32.049283009Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Mar 25 01:33:32.050820 containerd[1919]: time="2025-03-25T01:33:32.049299176Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Mar 25 01:33:32.050820 containerd[1919]: time="2025-03-25T01:33:32.049314440Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Mar 25 01:33:32.050820 containerd[1919]: time="2025-03-25T01:33:32.049350126Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Mar 25 01:33:32.050890 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 25 01:33:32.052845 systemd[1]: Reached target getty.target - Login Prompts.
Mar 25 01:33:32.057445 containerd[1919]: time="2025-03-25T01:33:32.049369459Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Mar 25 01:33:32.057548 containerd[1919]: time="2025-03-25T01:33:32.057471807Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Mar 25 01:33:32.057605 containerd[1919]: time="2025-03-25T01:33:32.057574642Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 25 01:33:32.057645 containerd[1919]: time="2025-03-25T01:33:32.057599202Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 25 01:33:32.057645 containerd[1919]: time="2025-03-25T01:33:32.057614978Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 25 01:33:32.057645 containerd[1919]: time="2025-03-25T01:33:32.057631454Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 25 01:33:32.057752 containerd[1919]: time="2025-03-25T01:33:32.057644838Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Mar 25 01:33:32.057752 containerd[1919]: time="2025-03-25T01:33:32.057661660Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Mar 25 01:33:32.057752 containerd[1919]: time="2025-03-25T01:33:32.057679321Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Mar 25 01:33:32.057752 containerd[1919]: time="2025-03-25T01:33:32.057704838Z" level=info msg="runtime interface created"
Mar 25 01:33:32.057752 containerd[1919]: time="2025-03-25T01:33:32.057713414Z" level=info msg="created NRI interface"
Mar 25 01:33:32.057752 containerd[1919]: time="2025-03-25T01:33:32.057726898Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Mar 25 01:33:32.057752 containerd[1919]: time="2025-03-25T01:33:32.057749909Z" level=info msg="Connect containerd service"
Mar 25 01:33:32.057971 containerd[1919]: time="2025-03-25T01:33:32.057795682Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 25 01:33:32.059702 containerd[1919]: time="2025-03-25T01:33:32.058856162Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 25 01:33:32.066159 ntpd[1889]: bind(24) AF_INET6 fe80::423:93ff:fe62:60b3%2#123 flags 0x11 failed: Cannot assign requested address
Mar 25 01:33:32.066639 ntpd[1889]: 25 Mar 01:33:32 ntpd[1889]: bind(24) AF_INET6 fe80::423:93ff:fe62:60b3%2#123 flags 0x11 failed: Cannot assign requested address
Mar 25 01:33:32.066639 ntpd[1889]: 25 Mar 01:33:32 ntpd[1889]: unable to create socket on eth0 (6) for fe80::423:93ff:fe62:60b3%2#123
Mar 25 01:33:32.066639 ntpd[1889]: 25 Mar 01:33:32 ntpd[1889]: failed to init interface for address fe80::423:93ff:fe62:60b3%2
Mar 25 01:33:32.066480 ntpd[1889]: unable to create socket on eth0 (6) for fe80::423:93ff:fe62:60b3%2#123
Mar 25 01:33:32.066496 ntpd[1889]: failed to init interface for address fe80::423:93ff:fe62:60b3%2
Mar 25 01:33:32.389208 containerd[1919]: time="2025-03-25T01:33:32.389018701Z" level=info msg="Start subscribing containerd event"
Mar 25 01:33:32.389780 containerd[1919]: time="2025-03-25T01:33:32.389635875Z" level=info msg="Start recovering state"
Mar 25 01:33:32.389934 containerd[1919]: time="2025-03-25T01:33:32.389768590Z" level=info msg="Start event monitor"
Mar 25 01:33:32.389934 containerd[1919]: time="2025-03-25T01:33:32.389887037Z" level=info msg="Start cni network conf syncer for default"
Mar 25 01:33:32.391395 containerd[1919]: time="2025-03-25T01:33:32.389916828Z" level=info msg="Start streaming server"
Mar 25 01:33:32.391395 containerd[1919]: time="2025-03-25T01:33:32.390077950Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Mar 25 01:33:32.391395 containerd[1919]: time="2025-03-25T01:33:32.390091353Z" level=info msg="runtime interface starting up..."
Mar 25 01:33:32.391395 containerd[1919]: time="2025-03-25T01:33:32.390100828Z" level=info msg="starting plugins..."
Mar 25 01:33:32.391395 containerd[1919]: time="2025-03-25T01:33:32.389589263Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 25 01:33:32.391395 containerd[1919]: time="2025-03-25T01:33:32.390204164Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 25 01:33:32.391617 tar[1909]: linux-amd64/README.md
Mar 25 01:33:32.392160 containerd[1919]: time="2025-03-25T01:33:32.392124287Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Mar 25 01:33:32.392484 containerd[1919]: time="2025-03-25T01:33:32.392445728Z" level=info msg="containerd successfully booted in 0.383652s"
Mar 25 01:33:32.393303 systemd[1]: Started containerd.service - containerd container runtime.
Mar 25 01:33:32.409710 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 25 01:33:32.653516 systemd-networkd[1821]: eth0: Gained IPv6LL
Mar 25 01:33:32.656418 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 25 01:33:32.657604 systemd[1]: Reached target network-online.target - Network is Online.
Mar 25 01:33:32.659563 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Mar 25 01:33:32.664574 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 25 01:33:32.666867 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 25 01:33:32.696400 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 25 01:33:32.748689 amazon-ssm-agent[2119]: Initializing new seelog logger
Mar 25 01:33:32.749051 amazon-ssm-agent[2119]: New Seelog Logger Creation Complete
Mar 25 01:33:32.749051 amazon-ssm-agent[2119]: 2025/03/25 01:33:32 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 25 01:33:32.749051 amazon-ssm-agent[2119]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 25 01:33:32.749283 amazon-ssm-agent[2119]: 2025/03/25 01:33:32 processing appconfig overrides
Mar 25 01:33:32.749601 amazon-ssm-agent[2119]: 2025/03/25 01:33:32 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 25 01:33:32.749601 amazon-ssm-agent[2119]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 25 01:33:32.749701 amazon-ssm-agent[2119]: 2025/03/25 01:33:32 processing appconfig overrides
Mar 25 01:33:32.749945 amazon-ssm-agent[2119]: 2025/03/25 01:33:32 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 25 01:33:32.749945 amazon-ssm-agent[2119]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 25 01:33:32.750027 amazon-ssm-agent[2119]: 2025/03/25 01:33:32 processing appconfig overrides
Mar 25 01:33:32.750425 amazon-ssm-agent[2119]: 2025-03-25 01:33:32 INFO Proxy environment variables:
Mar 25 01:33:32.753237 amazon-ssm-agent[2119]: 2025/03/25 01:33:32 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 25 01:33:32.753237 amazon-ssm-agent[2119]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 25 01:33:32.753355 amazon-ssm-agent[2119]: 2025/03/25 01:33:32 processing appconfig overrides
Mar 25 01:33:32.850472 amazon-ssm-agent[2119]: 2025-03-25 01:33:32 INFO https_proxy:
Mar 25 01:33:32.948799 amazon-ssm-agent[2119]: 2025-03-25 01:33:32 INFO http_proxy:
Mar 25 01:33:33.047127 amazon-ssm-agent[2119]: 2025-03-25 01:33:32 INFO no_proxy:
Mar 25 01:33:33.147299 amazon-ssm-agent[2119]: 2025-03-25 01:33:32 INFO Checking if agent identity type OnPrem can be assumed
Mar 25 01:33:33.220867 amazon-ssm-agent[2119]: 2025-03-25 01:33:32 INFO Checking if agent identity type EC2 can be assumed
Mar 25 01:33:33.220867 amazon-ssm-agent[2119]: 2025-03-25 01:33:32 INFO Agent will take identity from EC2
Mar 25 01:33:33.220867 amazon-ssm-agent[2119]: 2025-03-25 01:33:32 INFO [amazon-ssm-agent] using named pipe channel for IPC
Mar 25 01:33:33.220867 amazon-ssm-agent[2119]: 2025-03-25 01:33:32 INFO [amazon-ssm-agent] using named pipe channel for IPC
Mar 25 01:33:33.220867 amazon-ssm-agent[2119]: 2025-03-25 01:33:32 INFO [amazon-ssm-agent] using named pipe channel for IPC
Mar 25 01:33:33.220867 amazon-ssm-agent[2119]: 2025-03-25 01:33:32 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Mar 25 01:33:33.220867 amazon-ssm-agent[2119]: 2025-03-25 01:33:32 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Mar 25 01:33:33.220867 amazon-ssm-agent[2119]: 2025-03-25 01:33:32 INFO [amazon-ssm-agent] Starting Core Agent
Mar 25 01:33:33.220867 amazon-ssm-agent[2119]: 2025-03-25 01:33:32 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Mar 25 01:33:33.220867 amazon-ssm-agent[2119]: 2025-03-25 01:33:32 INFO [Registrar] Starting registrar module
Mar 25 01:33:33.220867 amazon-ssm-agent[2119]: 2025-03-25 01:33:32 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Mar 25 01:33:33.220867 amazon-ssm-agent[2119]: 2025-03-25 01:33:33 INFO [EC2Identity] EC2 registration was successful.
Mar 25 01:33:33.220867 amazon-ssm-agent[2119]: 2025-03-25 01:33:33 INFO [CredentialRefresher] credentialRefresher has started
Mar 25 01:33:33.221275 amazon-ssm-agent[2119]: 2025-03-25 01:33:33 INFO [CredentialRefresher] Starting credentials refresher loop
Mar 25 01:33:33.221275 amazon-ssm-agent[2119]: 2025-03-25 01:33:33 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Mar 25 01:33:33.244480 amazon-ssm-agent[2119]: 2025-03-25 01:33:33 INFO [CredentialRefresher] Next credential rotation will be in 31.749992875133334 minutes
Mar 25 01:33:34.036788 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 25 01:33:34.049817 systemd[1]: Started sshd@0-172.31.17.232:22-147.75.109.163:34556.service - OpenSSH per-connection server daemon (147.75.109.163:34556).
Mar 25 01:33:34.242070 amazon-ssm-agent[2119]: 2025-03-25 01:33:34 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Mar 25 01:33:34.301584 sshd[2139]: Accepted publickey for core from 147.75.109.163 port 34556 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc
Mar 25 01:33:34.307502 sshd-session[2139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:33:34.334499 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 25 01:33:34.339164 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 25 01:33:34.342598 amazon-ssm-agent[2119]: 2025-03-25 01:33:34 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2143) started
Mar 25 01:33:34.351365 systemd-logind[1897]: New session 1 of user core.
Mar 25 01:33:34.383300 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 25 01:33:34.392688 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 25 01:33:34.410554 (systemd)[2152]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 25 01:33:34.417460 systemd-logind[1897]: New session c1 of user core.
Mar 25 01:33:34.442764 amazon-ssm-agent[2119]: 2025-03-25 01:33:34 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Mar 25 01:33:34.607354 systemd[2152]: Queued start job for default target default.target.
Mar 25 01:33:34.612581 systemd[2152]: Created slice app.slice - User Application Slice.
Mar 25 01:33:34.612620 systemd[2152]: Reached target paths.target - Paths.
Mar 25 01:33:34.612819 systemd[2152]: Reached target timers.target - Timers.
Mar 25 01:33:34.614552 systemd[2152]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 25 01:33:34.628289 systemd[2152]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 25 01:33:34.628450 systemd[2152]: Reached target sockets.target - Sockets.
Mar 25 01:33:34.628502 systemd[2152]: Reached target basic.target - Basic System.
Mar 25 01:33:34.628550 systemd[2152]: Reached target default.target - Main User Target.
Mar 25 01:33:34.628590 systemd[2152]: Startup finished in 196ms.
Mar 25 01:33:34.628890 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 25 01:33:34.637860 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 25 01:33:34.782785 systemd[1]: Started sshd@1-172.31.17.232:22-147.75.109.163:34572.service - OpenSSH per-connection server daemon (147.75.109.163:34572).
Mar 25 01:33:34.959917 sshd[2168]: Accepted publickey for core from 147.75.109.163 port 34572 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc
Mar 25 01:33:34.962028 sshd-session[2168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:33:34.976421 systemd-logind[1897]: New session 2 of user core.
Mar 25 01:33:34.980572 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 25 01:33:35.041646 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 25 01:33:35.042868 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 25 01:33:35.044544 systemd[1]: Startup finished in 717ms (kernel) + 7.107s (initrd) + 8.258s (userspace) = 16.083s.
Mar 25 01:33:35.052106 (kubelet)[2176]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 25 01:33:35.066219 ntpd[1889]: Listen normally on 7 eth0 [fe80::423:93ff:fe62:60b3%2]:123
Mar 25 01:33:35.066808 ntpd[1889]: 25 Mar 01:33:35 ntpd[1889]: Listen normally on 7 eth0 [fe80::423:93ff:fe62:60b3%2]:123
Mar 25 01:33:35.101125 sshd[2170]: Connection closed by 147.75.109.163 port 34572
Mar 25 01:33:35.101806 sshd-session[2168]: pam_unix(sshd:session): session closed for user core
Mar 25 01:33:35.105704 systemd[1]: sshd@1-172.31.17.232:22-147.75.109.163:34572.service: Deactivated successfully.
Mar 25 01:33:35.108187 systemd[1]: session-2.scope: Deactivated successfully.
Mar 25 01:33:35.109103 systemd-logind[1897]: Session 2 logged out. Waiting for processes to exit.
Mar 25 01:33:35.110143 systemd-logind[1897]: Removed session 2.
Mar 25 01:33:35.132217 systemd[1]: Started sshd@2-172.31.17.232:22-147.75.109.163:34586.service - OpenSSH per-connection server daemon (147.75.109.163:34586).
Mar 25 01:33:35.302839 sshd[2186]: Accepted publickey for core from 147.75.109.163 port 34586 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc
Mar 25 01:33:35.304298 sshd-session[2186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:33:35.310778 systemd-logind[1897]: New session 3 of user core.
Mar 25 01:33:35.315579 systemd[1]: Started session-3.scope - Session 3 of User core.
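The "Startup finished in 717ms (kernel) + 7.107s (initrd) + 8.258s (userspace) = 16.083s" line above is systemd's boot-time breakdown; the printed components are individually rounded, so their sum (16.082s) can differ from the reported total by a millisecond or so. A small illustrative parser (names and regex are my own, not systemd's) that checks this:

```python
import re


def parse_startup(line):
    """Split a systemd 'Startup finished' line into per-phase seconds and the total."""
    def to_seconds(value, unit):
        return float(value) / 1000.0 if unit == "ms" else float(value)

    # Matches components like "717ms (kernel)" or "7.107s (initrd)".
    parts = {name: to_seconds(value, unit) for value, unit, name in
             re.findall(r"(\d+(?:\.\d+)?)(ms|s) \((\w+)\)", line)}
    total = float(re.search(r"= (\d+(?:\.\d+)?)s", line).group(1))
    return parts, total


line = ("Startup finished in 717ms (kernel) + 7.107s (initrd) "
        "+ 8.258s (userspace) = 16.083s.")
parts, total = parse_startup(line)
# Rounded components: allow a few milliseconds of slack against the total.
assert abs(sum(parts.values()) - total) < 0.005
```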
Mar 25 01:33:35.430115 sshd[2191]: Connection closed by 147.75.109.163 port 34586
Mar 25 01:33:35.430782 sshd-session[2186]: pam_unix(sshd:session): session closed for user core
Mar 25 01:33:35.434737 systemd[1]: sshd@2-172.31.17.232:22-147.75.109.163:34586.service: Deactivated successfully.
Mar 25 01:33:35.436904 systemd[1]: session-3.scope: Deactivated successfully.
Mar 25 01:33:35.437914 systemd-logind[1897]: Session 3 logged out. Waiting for processes to exit.
Mar 25 01:33:35.439421 systemd-logind[1897]: Removed session 3.
Mar 25 01:33:35.463643 systemd[1]: Started sshd@3-172.31.17.232:22-147.75.109.163:34600.service - OpenSSH per-connection server daemon (147.75.109.163:34600).
Mar 25 01:33:35.643956 sshd[2198]: Accepted publickey for core from 147.75.109.163 port 34600 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc
Mar 25 01:33:35.645414 sshd-session[2198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:33:35.650293 systemd-logind[1897]: New session 4 of user core.
Mar 25 01:33:35.655534 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 25 01:33:35.780788 sshd[2200]: Connection closed by 147.75.109.163 port 34600
Mar 25 01:33:35.782580 sshd-session[2198]: pam_unix(sshd:session): session closed for user core
Mar 25 01:33:35.785999 systemd[1]: sshd@3-172.31.17.232:22-147.75.109.163:34600.service: Deactivated successfully.
Mar 25 01:33:35.788207 systemd[1]: session-4.scope: Deactivated successfully.
Mar 25 01:33:35.790251 systemd-logind[1897]: Session 4 logged out. Waiting for processes to exit.
Mar 25 01:33:35.792682 systemd-logind[1897]: Removed session 4.
Mar 25 01:33:35.816990 systemd[1]: Started sshd@4-172.31.17.232:22-147.75.109.163:34610.service - OpenSSH per-connection server daemon (147.75.109.163:34610).
Mar 25 01:33:36.001173 sshd[2206]: Accepted publickey for core from 147.75.109.163 port 34610 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc
Mar 25 01:33:36.006047 sshd-session[2206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:33:36.011522 systemd-logind[1897]: New session 5 of user core.
Mar 25 01:33:36.021753 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 25 01:33:36.161361 sudo[2210]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 25 01:33:36.161932 sudo[2210]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 25 01:33:36.192957 sudo[2210]: pam_unix(sudo:session): session closed for user root
Mar 25 01:33:36.216850 sshd[2209]: Connection closed by 147.75.109.163 port 34610
Mar 25 01:33:36.217611 sshd-session[2206]: pam_unix(sshd:session): session closed for user core
Mar 25 01:33:36.223467 systemd[1]: sshd@4-172.31.17.232:22-147.75.109.163:34610.service: Deactivated successfully.
Mar 25 01:33:36.227623 systemd[1]: session-5.scope: Deactivated successfully.
Mar 25 01:33:36.228755 systemd-logind[1897]: Session 5 logged out. Waiting for processes to exit.
Mar 25 01:33:36.230337 systemd-logind[1897]: Removed session 5.
Mar 25 01:33:36.247713 systemd[1]: Started sshd@5-172.31.17.232:22-147.75.109.163:34620.service - OpenSSH per-connection server daemon (147.75.109.163:34620).
Mar 25 01:33:36.261158 kubelet[2176]: E0325 01:33:36.261016 2176 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 25 01:33:36.264322 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 25 01:33:36.264555 systemd[1]: kubelet.service: Failed with result 'exit-code'.
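The kubelet failure above is the expected pre-join state: `/var/lib/kubelet/config.yaml` does not exist yet, and on kubeadm-managed nodes that file is only written during `kubeadm init`/`kubeadm join`, so kubelet crash-loops with exactly this error until the node joins a cluster. A hedged sketch (not kubelet's actual code, just the same fail-fast check) reproducing the condition:

```python
from pathlib import Path


def kubelet_config_ready(path="/var/lib/kubelet/config.yaml"):
    """Mirror kubelet's startup behaviour: fail fast if the config file is absent.

    On kubeadm-managed nodes this file is written by `kubeadm init`/`kubeadm join`,
    so seeing kubelet restart with this error before the node joins is expected.
    """
    p = Path(path)
    if not p.is_file():
        raise FileNotFoundError(
            f"failed to load Kubelet config file {path}: no such file or directory")
    return p.read_text()
```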
Mar 25 01:33:36.265861 systemd[1]: kubelet.service: Consumed 1.016s CPU time, 252.1M memory peak.
Mar 25 01:33:36.424870 sshd[2216]: Accepted publickey for core from 147.75.109.163 port 34620 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc
Mar 25 01:33:36.426475 sshd-session[2216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:33:36.431744 systemd-logind[1897]: New session 6 of user core.
Mar 25 01:33:36.444578 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 25 01:33:36.539223 sudo[2221]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 25 01:33:36.539633 sudo[2221]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 25 01:33:36.543484 sudo[2221]: pam_unix(sudo:session): session closed for user root
Mar 25 01:33:36.548617 sudo[2220]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Mar 25 01:33:36.548969 sudo[2220]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 25 01:33:36.560090 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 25 01:33:36.599652 augenrules[2243]: No rules
Mar 25 01:33:36.601102 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 25 01:33:36.601367 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 25 01:33:36.602500 sudo[2220]: pam_unix(sudo:session): session closed for user root
Mar 25 01:33:36.624651 sshd[2219]: Connection closed by 147.75.109.163 port 34620
Mar 25 01:33:36.625295 sshd-session[2216]: pam_unix(sshd:session): session closed for user core
Mar 25 01:33:36.628325 systemd[1]: sshd@5-172.31.17.232:22-147.75.109.163:34620.service: Deactivated successfully.
Mar 25 01:33:36.630313 systemd[1]: session-6.scope: Deactivated successfully.
Mar 25 01:33:36.631718 systemd-logind[1897]: Session 6 logged out. Waiting for processes to exit.
Mar 25 01:33:36.632861 systemd-logind[1897]: Removed session 6. Mar 25 01:33:36.658654 systemd[1]: Started sshd@6-172.31.17.232:22-147.75.109.163:34622.service - OpenSSH per-connection server daemon (147.75.109.163:34622). Mar 25 01:33:36.828430 sshd[2252]: Accepted publickey for core from 147.75.109.163 port 34622 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc Mar 25 01:33:36.829841 sshd-session[2252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:33:36.835055 systemd-logind[1897]: New session 7 of user core. Mar 25 01:33:36.845567 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 25 01:33:36.941081 sudo[2255]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 25 01:33:36.941683 sudo[2255]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 25 01:33:39.157767 systemd-resolved[1825]: Clock change detected. Flushing caches. Mar 25 01:33:39.249674 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 25 01:33:39.259758 (dockerd)[2273]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 25 01:33:39.910131 dockerd[2273]: time="2025-03-25T01:33:39.909873398Z" level=info msg="Starting up" Mar 25 01:33:39.914150 dockerd[2273]: time="2025-03-25T01:33:39.913994854Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 25 01:33:40.067744 systemd[1]: var-lib-docker-metacopy\x2dcheck15436866-merged.mount: Deactivated successfully. Mar 25 01:33:40.116138 dockerd[2273]: time="2025-03-25T01:33:40.116025989Z" level=info msg="Loading containers: start." Mar 25 01:33:40.342329 kernel: Initializing XFRM netlink socket Mar 25 01:33:40.343797 (udev-worker)[2298]: Network interface NamePolicy= disabled on kernel command line. 
Mar 25 01:33:40.466874 systemd-networkd[1821]: docker0: Link UP Mar 25 01:33:40.575709 dockerd[2273]: time="2025-03-25T01:33:40.575665757Z" level=info msg="Loading containers: done." Mar 25 01:33:40.596009 dockerd[2273]: time="2025-03-25T01:33:40.595888439Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 25 01:33:40.596009 dockerd[2273]: time="2025-03-25T01:33:40.595994244Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 Mar 25 01:33:40.596227 dockerd[2273]: time="2025-03-25T01:33:40.596145028Z" level=info msg="Daemon has completed initialization" Mar 25 01:33:40.649440 dockerd[2273]: time="2025-03-25T01:33:40.649381221Z" level=info msg="API listen on /run/docker.sock" Mar 25 01:33:40.649915 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 25 01:33:41.812211 containerd[1919]: time="2025-03-25T01:33:41.812164703Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.3\"" Mar 25 01:33:42.390080 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2186793297.mount: Deactivated successfully. 
Mar 25 01:33:44.432016 containerd[1919]: time="2025-03-25T01:33:44.431960395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:33:44.434668 containerd[1919]: time="2025-03-25T01:33:44.434590124Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.3: active requests=0, bytes read=28682430" Mar 25 01:33:44.435735 containerd[1919]: time="2025-03-25T01:33:44.435671967Z" level=info msg="ImageCreate event name:\"sha256:f8bdc4cfa0651e2d7edb4678d2b90129aef82a19249b37dc8d4705e8bd604295\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:33:44.439959 containerd[1919]: time="2025-03-25T01:33:44.439899797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:279e45cf07e4f56925c3c5237179eb63616788426a96e94df5fedf728b18926e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:33:44.441977 containerd[1919]: time="2025-03-25T01:33:44.441130548Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.3\" with image id \"sha256:f8bdc4cfa0651e2d7edb4678d2b90129aef82a19249b37dc8d4705e8bd604295\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:279e45cf07e4f56925c3c5237179eb63616788426a96e94df5fedf728b18926e\", size \"28679230\" in 2.628919813s" Mar 25 01:33:44.441977 containerd[1919]: time="2025-03-25T01:33:44.441768608Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.3\" returns image reference \"sha256:f8bdc4cfa0651e2d7edb4678d2b90129aef82a19249b37dc8d4705e8bd604295\"" Mar 25 01:33:44.443289 containerd[1919]: time="2025-03-25T01:33:44.443161358Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.3\"" Mar 25 01:33:47.009701 containerd[1919]: time="2025-03-25T01:33:47.009645802Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.3\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:33:47.011185 containerd[1919]: time="2025-03-25T01:33:47.011000092Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.3: active requests=0, bytes read=24779684" Mar 25 01:33:47.012339 containerd[1919]: time="2025-03-25T01:33:47.012203066Z" level=info msg="ImageCreate event name:\"sha256:085818208a5213f37ef6d103caaf8e1e243816a614eb5b87a98bfffe79c687b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:33:47.014861 containerd[1919]: time="2025-03-25T01:33:47.014800826Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:54456a96a1bbdc35dcc2e70fcc1355bf655af67694e40b650ac12e83521f6411\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:33:47.015904 containerd[1919]: time="2025-03-25T01:33:47.015759216Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.3\" with image id \"sha256:085818208a5213f37ef6d103caaf8e1e243816a614eb5b87a98bfffe79c687b5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:54456a96a1bbdc35dcc2e70fcc1355bf655af67694e40b650ac12e83521f6411\", size \"26267292\" in 2.572410365s" Mar 25 01:33:47.015904 containerd[1919]: time="2025-03-25T01:33:47.015800704Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.3\" returns image reference \"sha256:085818208a5213f37ef6d103caaf8e1e243816a614eb5b87a98bfffe79c687b5\"" Mar 25 01:33:47.016718 containerd[1919]: time="2025-03-25T01:33:47.016688979Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.3\"" Mar 25 01:33:47.605990 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 25 01:33:47.608211 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:33:47.878678 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 25 01:33:47.893120 (kubelet)[2537]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 25 01:33:47.955754 kubelet[2537]: E0325 01:33:47.955716 2537 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 25 01:33:47.960097 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 25 01:33:47.960299 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 25 01:33:47.960704 systemd[1]: kubelet.service: Consumed 177ms CPU time, 103.8M memory peak. Mar 25 01:33:49.173622 containerd[1919]: time="2025-03-25T01:33:49.173565779Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:33:49.175181 containerd[1919]: time="2025-03-25T01:33:49.174958513Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.3: active requests=0, bytes read=19171419" Mar 25 01:33:49.176885 containerd[1919]: time="2025-03-25T01:33:49.176552298Z" level=info msg="ImageCreate event name:\"sha256:b4260bf5078ab1b01dd05fb05015fc436b7100b7b9b5ea738e247a86008b16b8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:33:49.180335 containerd[1919]: time="2025-03-25T01:33:49.180222763Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:aafae2e3a8d65bc6dc3a0c6095c24bc72b1ff608e1417f0f5e860ce4a61c27df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:33:49.181653 containerd[1919]: time="2025-03-25T01:33:49.181192932Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.3\" with image id 
\"sha256:b4260bf5078ab1b01dd05fb05015fc436b7100b7b9b5ea738e247a86008b16b8\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:aafae2e3a8d65bc6dc3a0c6095c24bc72b1ff608e1417f0f5e860ce4a61c27df\", size \"20659045\" in 2.164469361s" Mar 25 01:33:49.181653 containerd[1919]: time="2025-03-25T01:33:49.181232144Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.3\" returns image reference \"sha256:b4260bf5078ab1b01dd05fb05015fc436b7100b7b9b5ea738e247a86008b16b8\"" Mar 25 01:33:49.182073 containerd[1919]: time="2025-03-25T01:33:49.182051259Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\"" Mar 25 01:33:50.762211 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1629058321.mount: Deactivated successfully. Mar 25 01:33:51.450285 containerd[1919]: time="2025-03-25T01:33:51.450229367Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:33:51.451522 containerd[1919]: time="2025-03-25T01:33:51.451337795Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.3: active requests=0, bytes read=30918185" Mar 25 01:33:51.453221 containerd[1919]: time="2025-03-25T01:33:51.452548565Z" level=info msg="ImageCreate event name:\"sha256:a1ae78fd2f9d8fc345928378dc947c7f1e95f01c1a552781827071867a95d09c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:33:51.454734 containerd[1919]: time="2025-03-25T01:33:51.454696671Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:33:51.455479 containerd[1919]: time="2025-03-25T01:33:51.455436245Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.3\" with image id \"sha256:a1ae78fd2f9d8fc345928378dc947c7f1e95f01c1a552781827071867a95d09c\", repo tag 
\"registry.k8s.io/kube-proxy:v1.32.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3\", size \"30917204\" in 2.273201731s" Mar 25 01:33:51.455603 containerd[1919]: time="2025-03-25T01:33:51.455582526Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\" returns image reference \"sha256:a1ae78fd2f9d8fc345928378dc947c7f1e95f01c1a552781827071867a95d09c\"" Mar 25 01:33:51.456206 containerd[1919]: time="2025-03-25T01:33:51.456168862Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Mar 25 01:33:52.176688 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount517929097.mount: Deactivated successfully. Mar 25 01:33:53.560651 containerd[1919]: time="2025-03-25T01:33:53.560600265Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:33:53.561947 containerd[1919]: time="2025-03-25T01:33:53.561786240Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Mar 25 01:33:53.563961 containerd[1919]: time="2025-03-25T01:33:53.563458649Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:33:53.567106 containerd[1919]: time="2025-03-25T01:33:53.567069960Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:33:53.568497 containerd[1919]: time="2025-03-25T01:33:53.568364859Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.111927664s" Mar 25 01:33:53.568589 containerd[1919]: time="2025-03-25T01:33:53.568504171Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Mar 25 01:33:53.569264 containerd[1919]: time="2025-03-25T01:33:53.569133603Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 25 01:33:54.020549 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4289295528.mount: Deactivated successfully. Mar 25 01:33:54.027406 containerd[1919]: time="2025-03-25T01:33:54.027351950Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 25 01:33:54.028244 containerd[1919]: time="2025-03-25T01:33:54.028102690Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 25 01:33:54.031326 containerd[1919]: time="2025-03-25T01:33:54.029770043Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 25 01:33:54.035371 containerd[1919]: time="2025-03-25T01:33:54.035333170Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 25 01:33:54.037519 containerd[1919]: time="2025-03-25T01:33:54.037485780Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 468.316911ms" Mar 25 01:33:54.037674 containerd[1919]: time="2025-03-25T01:33:54.037520005Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Mar 25 01:33:54.038086 containerd[1919]: time="2025-03-25T01:33:54.038059248Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Mar 25 01:33:54.660066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2490021971.mount: Deactivated successfully. Mar 25 01:33:57.801454 containerd[1919]: time="2025-03-25T01:33:57.801392293Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:33:57.810051 containerd[1919]: time="2025-03-25T01:33:57.809965072Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551320" Mar 25 01:33:57.821091 containerd[1919]: time="2025-03-25T01:33:57.820995554Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:33:57.834375 containerd[1919]: time="2025-03-25T01:33:57.834249209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:33:57.836561 containerd[1919]: time="2025-03-25T01:33:57.835988139Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size 
\"57680541\" in 3.797899669s" Mar 25 01:33:57.836561 containerd[1919]: time="2025-03-25T01:33:57.836032981Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Mar 25 01:33:58.037944 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 25 01:33:58.040225 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:33:58.509368 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:33:58.538853 (kubelet)[2696]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 25 01:33:58.627338 kubelet[2696]: E0325 01:33:58.627001 2696 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 25 01:33:58.629895 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 25 01:33:58.630168 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 25 01:33:58.632593 systemd[1]: kubelet.service: Consumed 204ms CPU time, 104M memory peak. Mar 25 01:34:01.147839 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:34:01.148568 systemd[1]: kubelet.service: Consumed 204ms CPU time, 104M memory peak. Mar 25 01:34:01.151395 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:34:01.263325 systemd[1]: Reload requested from client PID 2712 ('systemctl') (unit session-7.scope)... Mar 25 01:34:01.263353 systemd[1]: Reloading... Mar 25 01:34:01.409326 zram_generator::config[2758]: No configuration found. 
Mar 25 01:34:01.568080 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 25 01:34:01.684105 systemd[1]: Reloading finished in 417 ms. Mar 25 01:34:01.767609 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:34:01.770709 systemd[1]: kubelet.service: Deactivated successfully. Mar 25 01:34:01.771002 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:34:01.771075 systemd[1]: kubelet.service: Consumed 141ms CPU time, 91.8M memory peak. Mar 25 01:34:01.772978 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:34:01.997879 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:34:02.009774 (kubelet)[2823]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 25 01:34:02.063967 kubelet[2823]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 25 01:34:02.063967 kubelet[2823]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 25 01:34:02.063967 kubelet[2823]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 25 01:34:02.064447 kubelet[2823]: I0325 01:34:02.064053 2823 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 25 01:34:02.567478 kubelet[2823]: I0325 01:34:02.567272 2823 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Mar 25 01:34:02.567478 kubelet[2823]: I0325 01:34:02.567327 2823 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 25 01:34:02.568642 kubelet[2823]: I0325 01:34:02.568223 2823 server.go:954] "Client rotation is on, will bootstrap in background" Mar 25 01:34:02.620596 kubelet[2823]: I0325 01:34:02.620552 2823 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 25 01:34:02.623736 kubelet[2823]: E0325 01:34:02.623689 2823 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.17.232:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.17.232:6443: connect: connection refused" logger="UnhandledError" Mar 25 01:34:02.682087 kubelet[2823]: I0325 01:34:02.682054 2823 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 25 01:34:02.704037 kubelet[2823]: I0325 01:34:02.703999 2823 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 25 01:34:02.713065 kubelet[2823]: I0325 01:34:02.710756 2823 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 25 01:34:02.714066 kubelet[2823]: I0325 01:34:02.713060 2823 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-17-232","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 25 01:34:02.717242 kubelet[2823]: I0325 01:34:02.717197 2823 topology_manager.go:138] "Creating topology manager with none 
policy" Mar 25 01:34:02.717242 kubelet[2823]: I0325 01:34:02.717242 2823 container_manager_linux.go:304] "Creating device plugin manager" Mar 25 01:34:02.717593 kubelet[2823]: I0325 01:34:02.717569 2823 state_mem.go:36] "Initialized new in-memory state store" Mar 25 01:34:02.725498 kubelet[2823]: I0325 01:34:02.725376 2823 kubelet.go:446] "Attempting to sync node with API server" Mar 25 01:34:02.725498 kubelet[2823]: I0325 01:34:02.725412 2823 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 25 01:34:02.726421 kubelet[2823]: I0325 01:34:02.726242 2823 kubelet.go:352] "Adding apiserver pod source" Mar 25 01:34:02.726421 kubelet[2823]: I0325 01:34:02.726268 2823 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 25 01:34:02.738384 kubelet[2823]: W0325 01:34:02.737978 2823 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.17.232:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-232&limit=500&resourceVersion=0": dial tcp 172.31.17.232:6443: connect: connection refused Mar 25 01:34:02.738384 kubelet[2823]: E0325 01:34:02.738049 2823 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.17.232:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-232&limit=500&resourceVersion=0\": dial tcp 172.31.17.232:6443: connect: connection refused" logger="UnhandledError" Mar 25 01:34:02.741607 kubelet[2823]: W0325 01:34:02.741531 2823 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.17.232:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.17.232:6443: connect: connection refused Mar 25 01:34:02.742265 kubelet[2823]: E0325 01:34:02.741864 2823 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.Service: failed to list *v1.Service: Get \"https://172.31.17.232:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.17.232:6443: connect: connection refused" logger="UnhandledError" Mar 25 01:34:02.742265 kubelet[2823]: I0325 01:34:02.742090 2823 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" Mar 25 01:34:02.751124 kubelet[2823]: I0325 01:34:02.749741 2823 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 25 01:34:02.753658 kubelet[2823]: W0325 01:34:02.752851 2823 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 25 01:34:02.759069 kubelet[2823]: I0325 01:34:02.758928 2823 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 25 01:34:02.759069 kubelet[2823]: I0325 01:34:02.758983 2823 server.go:1287] "Started kubelet" Mar 25 01:34:02.764133 kubelet[2823]: I0325 01:34:02.764072 2823 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Mar 25 01:34:02.767968 kubelet[2823]: I0325 01:34:02.767906 2823 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 25 01:34:02.768323 kubelet[2823]: I0325 01:34:02.768279 2823 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 25 01:34:02.772968 kubelet[2823]: I0325 01:34:02.772929 2823 server.go:490] "Adding debug handlers to kubelet server" Mar 25 01:34:02.775665 kubelet[2823]: I0325 01:34:02.775644 2823 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 25 01:34:02.777385 kubelet[2823]: E0325 01:34:02.771017 2823 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.17.232:6443/api/v1/namespaces/default/events\": dial tcp 172.31.17.232:6443: connect: connection refused" 
event="&Event{ObjectMeta:{ip-172-31-17-232.182fe7cbab98a591 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-17-232,UID:ip-172-31-17-232,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-17-232,},FirstTimestamp:2025-03-25 01:34:02.758956433 +0000 UTC m=+0.744724690,LastTimestamp:2025-03-25 01:34:02.758956433 +0000 UTC m=+0.744724690,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-17-232,}" Mar 25 01:34:02.777952 kubelet[2823]: I0325 01:34:02.777932 2823 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 25 01:34:02.781084 kubelet[2823]: E0325 01:34:02.781061 2823 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-17-232\" not found" Mar 25 01:34:02.782638 kubelet[2823]: I0325 01:34:02.782619 2823 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 25 01:34:02.786780 kubelet[2823]: I0325 01:34:02.786754 2823 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 25 01:34:02.786986 kubelet[2823]: I0325 01:34:02.786974 2823 reconciler.go:26] "Reconciler: start to sync state" Mar 25 01:34:02.787611 kubelet[2823]: W0325 01:34:02.787559 2823 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.17.232:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.232:6443: connect: connection refused Mar 25 01:34:02.787803 kubelet[2823]: E0325 01:34:02.787782 2823 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://172.31.17.232:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.17.232:6443: connect: connection refused" logger="UnhandledError" Mar 25 01:34:02.788842 kubelet[2823]: I0325 01:34:02.788824 2823 factory.go:221] Registration of the systemd container factory successfully Mar 25 01:34:02.789519 kubelet[2823]: I0325 01:34:02.789490 2823 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 25 01:34:02.797726 kubelet[2823]: E0325 01:34:02.797603 2823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.232:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-232?timeout=10s\": dial tcp 172.31.17.232:6443: connect: connection refused" interval="200ms" Mar 25 01:34:02.799393 kubelet[2823]: I0325 01:34:02.798209 2823 factory.go:221] Registration of the containerd container factory successfully Mar 25 01:34:02.809683 kubelet[2823]: I0325 01:34:02.809630 2823 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 25 01:34:02.811458 kubelet[2823]: I0325 01:34:02.811419 2823 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 25 01:34:02.811458 kubelet[2823]: I0325 01:34:02.811452 2823 status_manager.go:227] "Starting to sync pod status with apiserver" Mar 25 01:34:02.811616 kubelet[2823]: I0325 01:34:02.811477 2823 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 25 01:34:02.811616 kubelet[2823]: I0325 01:34:02.811486 2823 kubelet.go:2388] "Starting kubelet main sync loop"
Mar 25 01:34:02.811616 kubelet[2823]: E0325 01:34:02.811540 2823 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 25 01:34:02.824489 kubelet[2823]: E0325 01:34:02.824363 2823 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 25 01:34:02.828123 kubelet[2823]: W0325 01:34:02.828041 2823 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.17.232:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.232:6443: connect: connection refused
Mar 25 01:34:02.828268 kubelet[2823]: E0325 01:34:02.828133 2823 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.17.232:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.17.232:6443: connect: connection refused" logger="UnhandledError"
Mar 25 01:34:02.833181 kubelet[2823]: I0325 01:34:02.833153 2823 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 25 01:34:02.833181 kubelet[2823]: I0325 01:34:02.833176 2823 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 25 01:34:02.833371 kubelet[2823]: I0325 01:34:02.833196 2823 state_mem.go:36] "Initialized new in-memory state store"
Mar 25 01:34:02.835036 kubelet[2823]: I0325 01:34:02.835003 2823 policy_none.go:49] "None policy: Start"
Mar 25 01:34:02.835036 kubelet[2823]: I0325 01:34:02.835026 2823 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 25 01:34:02.835036 kubelet[2823]: I0325 01:34:02.835041 2823 state_mem.go:35] "Initializing new in-memory state store"
Mar 25 01:34:02.841528 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 25 01:34:02.850792 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 25 01:34:02.855204 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 25 01:34:02.867843 kubelet[2823]: I0325 01:34:02.867589 2823 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 25 01:34:02.867843 kubelet[2823]: I0325 01:34:02.867849 2823 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 25 01:34:02.868024 kubelet[2823]: I0325 01:34:02.867862 2823 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 25 01:34:02.868737 kubelet[2823]: I0325 01:34:02.868525 2823 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 25 01:34:02.870909 kubelet[2823]: E0325 01:34:02.870882 2823 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 25 01:34:02.871018 kubelet[2823]: E0325 01:34:02.870930 2823 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-17-232\" not found"
Mar 25 01:34:02.915323 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Mar 25 01:34:02.941859 systemd[1]: Created slice kubepods-burstable-podee591cc95805b0cda8c870098981227d.slice - libcontainer container kubepods-burstable-podee591cc95805b0cda8c870098981227d.slice.
Mar 25 01:34:02.959791 kubelet[2823]: E0325 01:34:02.959508 2823 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-232\" not found" node="ip-172-31-17-232"
Mar 25 01:34:02.962873 systemd[1]: Created slice kubepods-burstable-pod53f5a9f6084210eebf3075c2ca77c06c.slice - libcontainer container kubepods-burstable-pod53f5a9f6084210eebf3075c2ca77c06c.slice.
Mar 25 01:34:02.965973 kubelet[2823]: E0325 01:34:02.965661 2823 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-232\" not found" node="ip-172-31-17-232"
Mar 25 01:34:02.969624 systemd[1]: Created slice kubepods-burstable-podb4f9e4faa1a8f77004e35b3062419984.slice - libcontainer container kubepods-burstable-podb4f9e4faa1a8f77004e35b3062419984.slice.
Mar 25 01:34:02.973988 kubelet[2823]: I0325 01:34:02.973947 2823 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-17-232"
Mar 25 01:34:02.974798 kubelet[2823]: E0325 01:34:02.974734 2823 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.17.232:6443/api/v1/nodes\": dial tcp 172.31.17.232:6443: connect: connection refused" node="ip-172-31-17-232"
Mar 25 01:34:02.975007 kubelet[2823]: E0325 01:34:02.974987 2823 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-232\" not found" node="ip-172-31-17-232"
Mar 25 01:34:02.999339 kubelet[2823]: E0325 01:34:02.999274 2823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.232:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-232?timeout=10s\": dial tcp 172.31.17.232:6443: connect: connection refused" interval="400ms"
Mar 25 01:34:03.094664 kubelet[2823]: I0325 01:34:03.094290 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/53f5a9f6084210eebf3075c2ca77c06c-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-232\" (UID: \"53f5a9f6084210eebf3075c2ca77c06c\") " pod="kube-system/kube-controller-manager-ip-172-31-17-232"
Mar 25 01:34:03.094664 kubelet[2823]: I0325 01:34:03.094454 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/53f5a9f6084210eebf3075c2ca77c06c-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-232\" (UID: \"53f5a9f6084210eebf3075c2ca77c06c\") " pod="kube-system/kube-controller-manager-ip-172-31-17-232"
Mar 25 01:34:03.094664 kubelet[2823]: I0325 01:34:03.094480 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/53f5a9f6084210eebf3075c2ca77c06c-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-232\" (UID: \"53f5a9f6084210eebf3075c2ca77c06c\") " pod="kube-system/kube-controller-manager-ip-172-31-17-232"
Mar 25 01:34:03.094664 kubelet[2823]: I0325 01:34:03.094498 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b4f9e4faa1a8f77004e35b3062419984-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-232\" (UID: \"b4f9e4faa1a8f77004e35b3062419984\") " pod="kube-system/kube-scheduler-ip-172-31-17-232"
Mar 25 01:34:03.094664 kubelet[2823]: I0325 01:34:03.094520 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ee591cc95805b0cda8c870098981227d-ca-certs\") pod \"kube-apiserver-ip-172-31-17-232\" (UID: \"ee591cc95805b0cda8c870098981227d\") " pod="kube-system/kube-apiserver-ip-172-31-17-232"
Mar 25 01:34:03.095256 kubelet[2823]: I0325 01:34:03.094542 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ee591cc95805b0cda8c870098981227d-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-232\" (UID: \"ee591cc95805b0cda8c870098981227d\") " pod="kube-system/kube-apiserver-ip-172-31-17-232"
Mar 25 01:34:03.095256 kubelet[2823]: I0325 01:34:03.094579 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ee591cc95805b0cda8c870098981227d-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-232\" (UID: \"ee591cc95805b0cda8c870098981227d\") " pod="kube-system/kube-apiserver-ip-172-31-17-232"
Mar 25 01:34:03.095256 kubelet[2823]: I0325 01:34:03.094621 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/53f5a9f6084210eebf3075c2ca77c06c-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-232\" (UID: \"53f5a9f6084210eebf3075c2ca77c06c\") " pod="kube-system/kube-controller-manager-ip-172-31-17-232"
Mar 25 01:34:03.095256 kubelet[2823]: I0325 01:34:03.094651 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/53f5a9f6084210eebf3075c2ca77c06c-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-232\" (UID: \"53f5a9f6084210eebf3075c2ca77c06c\") " pod="kube-system/kube-controller-manager-ip-172-31-17-232"
Mar 25 01:34:03.177420 kubelet[2823]: I0325 01:34:03.177351 2823 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-17-232"
Mar 25 01:34:03.177922 kubelet[2823]: E0325 01:34:03.177888 2823 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.17.232:6443/api/v1/nodes\": dial tcp 172.31.17.232:6443: connect: connection refused" node="ip-172-31-17-232"
Mar 25 01:34:03.261761 containerd[1919]: time="2025-03-25T01:34:03.261712643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-232,Uid:ee591cc95805b0cda8c870098981227d,Namespace:kube-system,Attempt:0,}"
Mar 25 01:34:03.267494 containerd[1919]: time="2025-03-25T01:34:03.267452544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-232,Uid:53f5a9f6084210eebf3075c2ca77c06c,Namespace:kube-system,Attempt:0,}"
Mar 25 01:34:03.276331 containerd[1919]: time="2025-03-25T01:34:03.276277534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-232,Uid:b4f9e4faa1a8f77004e35b3062419984,Namespace:kube-system,Attempt:0,}"
Mar 25 01:34:03.401015 kubelet[2823]: E0325 01:34:03.400812 2823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.232:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-232?timeout=10s\": dial tcp 172.31.17.232:6443: connect: connection refused" interval="800ms"
Mar 25 01:34:03.415138 containerd[1919]: time="2025-03-25T01:34:03.414600183Z" level=info msg="connecting to shim 8851cfd4d40ce84bf6cd6b4dc32cc90a3a5e65db8036c704a3a6546a00efda49" address="unix:///run/containerd/s/60cded0752dc70a92824830af85888b063c37bec50e57466bdd4e8dc49bf2574" namespace=k8s.io protocol=ttrpc version=3
Mar 25 01:34:03.416517 containerd[1919]: time="2025-03-25T01:34:03.416477257Z" level=info msg="connecting to shim 7e9539a990fb7e32bc0de3c6fe408b9d985013a2c97aaace28d6ed2321b9d1c8" address="unix:///run/containerd/s/838a8567b97da4793573efcbcb3d9147427d5b9bf33decda36727aaa0e7f7671" namespace=k8s.io protocol=ttrpc version=3
Mar 25 01:34:03.423095 containerd[1919]: time="2025-03-25T01:34:03.422851862Z" level=info msg="connecting to shim feaf818fbc88fb5d60d2aa6240770225ad450d662c3eca151f677a2657cb8632" address="unix:///run/containerd/s/1e26075bd5017a202d57659304dc13b2f71682aa5a3981c03fcc1941b86009ae" namespace=k8s.io protocol=ttrpc version=3
Mar 25 01:34:03.533509 systemd[1]: Started cri-containerd-8851cfd4d40ce84bf6cd6b4dc32cc90a3a5e65db8036c704a3a6546a00efda49.scope - libcontainer container 8851cfd4d40ce84bf6cd6b4dc32cc90a3a5e65db8036c704a3a6546a00efda49.
Mar 25 01:34:03.537294 systemd[1]: Started cri-containerd-feaf818fbc88fb5d60d2aa6240770225ad450d662c3eca151f677a2657cb8632.scope - libcontainer container feaf818fbc88fb5d60d2aa6240770225ad450d662c3eca151f677a2657cb8632.
Mar 25 01:34:03.543363 systemd[1]: Started cri-containerd-7e9539a990fb7e32bc0de3c6fe408b9d985013a2c97aaace28d6ed2321b9d1c8.scope - libcontainer container 7e9539a990fb7e32bc0de3c6fe408b9d985013a2c97aaace28d6ed2321b9d1c8.
Mar 25 01:34:03.574436 kubelet[2823]: W0325 01:34:03.574349 2823 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.17.232:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.17.232:6443: connect: connection refused
Mar 25 01:34:03.574436 kubelet[2823]: E0325 01:34:03.574436 2823 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.17.232:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.17.232:6443: connect: connection refused" logger="UnhandledError"
Mar 25 01:34:03.581786 kubelet[2823]: I0325 01:34:03.581752 2823 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-17-232"
Mar 25 01:34:03.582256 kubelet[2823]: E0325 01:34:03.582109 2823 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.17.232:6443/api/v1/nodes\": dial tcp 172.31.17.232:6443: connect: connection refused" node="ip-172-31-17-232"
Mar 25 01:34:03.643467 containerd[1919]: time="2025-03-25T01:34:03.643418222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-232,Uid:b4f9e4faa1a8f77004e35b3062419984,Namespace:kube-system,Attempt:0,} returns sandbox id \"feaf818fbc88fb5d60d2aa6240770225ad450d662c3eca151f677a2657cb8632\""
Mar 25 01:34:03.650406 containerd[1919]: time="2025-03-25T01:34:03.650369169Z" level=info msg="CreateContainer within sandbox \"feaf818fbc88fb5d60d2aa6240770225ad450d662c3eca151f677a2657cb8632\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 25 01:34:03.672487 containerd[1919]: time="2025-03-25T01:34:03.672367531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-232,Uid:ee591cc95805b0cda8c870098981227d,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e9539a990fb7e32bc0de3c6fe408b9d985013a2c97aaace28d6ed2321b9d1c8\""
Mar 25 01:34:03.678016 containerd[1919]: time="2025-03-25T01:34:03.677979712Z" level=info msg="CreateContainer within sandbox \"7e9539a990fb7e32bc0de3c6fe408b9d985013a2c97aaace28d6ed2321b9d1c8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 25 01:34:03.681604 containerd[1919]: time="2025-03-25T01:34:03.681565215Z" level=info msg="Container 30a828974fa58e724ba6313bd4475359f60b74bb34be15fd9e26ad8e24a898ea: CDI devices from CRI Config.CDIDevices: []"
Mar 25 01:34:03.693268 containerd[1919]: time="2025-03-25T01:34:03.692714101Z" level=info msg="Container 9b3ec65df971a7566efea0cade1d8c448239e39dbfd748b1495a2f34d65127b7: CDI devices from CRI Config.CDIDevices: []"
Mar 25 01:34:03.695337 containerd[1919]: time="2025-03-25T01:34:03.695277804Z" level=info msg="CreateContainer within sandbox \"feaf818fbc88fb5d60d2aa6240770225ad450d662c3eca151f677a2657cb8632\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"30a828974fa58e724ba6313bd4475359f60b74bb34be15fd9e26ad8e24a898ea\""
Mar 25 01:34:03.696242 containerd[1919]: time="2025-03-25T01:34:03.696106874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-232,Uid:53f5a9f6084210eebf3075c2ca77c06c,Namespace:kube-system,Attempt:0,} returns sandbox id \"8851cfd4d40ce84bf6cd6b4dc32cc90a3a5e65db8036c704a3a6546a00efda49\""
Mar 25 01:34:03.696451 containerd[1919]: time="2025-03-25T01:34:03.696424653Z" level=info msg="StartContainer for \"30a828974fa58e724ba6313bd4475359f60b74bb34be15fd9e26ad8e24a898ea\""
Mar 25 01:34:03.700277 containerd[1919]: time="2025-03-25T01:34:03.700237269Z" level=info msg="connecting to shim 30a828974fa58e724ba6313bd4475359f60b74bb34be15fd9e26ad8e24a898ea" address="unix:///run/containerd/s/1e26075bd5017a202d57659304dc13b2f71682aa5a3981c03fcc1941b86009ae" protocol=ttrpc version=3
Mar 25 01:34:03.700792 containerd[1919]: time="2025-03-25T01:34:03.700768358Z" level=info msg="CreateContainer within sandbox \"8851cfd4d40ce84bf6cd6b4dc32cc90a3a5e65db8036c704a3a6546a00efda49\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 25 01:34:03.706296 containerd[1919]: time="2025-03-25T01:34:03.706195428Z" level=info msg="CreateContainer within sandbox \"7e9539a990fb7e32bc0de3c6fe408b9d985013a2c97aaace28d6ed2321b9d1c8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9b3ec65df971a7566efea0cade1d8c448239e39dbfd748b1495a2f34d65127b7\""
Mar 25 01:34:03.706959 containerd[1919]: time="2025-03-25T01:34:03.706930035Z" level=info msg="StartContainer for \"9b3ec65df971a7566efea0cade1d8c448239e39dbfd748b1495a2f34d65127b7\""
Mar 25 01:34:03.716859 containerd[1919]: time="2025-03-25T01:34:03.716816528Z" level=info msg="connecting to shim 9b3ec65df971a7566efea0cade1d8c448239e39dbfd748b1495a2f34d65127b7" address="unix:///run/containerd/s/838a8567b97da4793573efcbcb3d9147427d5b9bf33decda36727aaa0e7f7671" protocol=ttrpc version=3
Mar 25 01:34:03.719328 containerd[1919]: time="2025-03-25T01:34:03.718692474Z" level=info msg="Container f602f97183e57dc96394da35a6d9b06610e2db929ad372d29e8d9932e7ca1a2f: CDI devices from CRI Config.CDIDevices: []"
Mar 25 01:34:03.732542 systemd[1]: Started cri-containerd-30a828974fa58e724ba6313bd4475359f60b74bb34be15fd9e26ad8e24a898ea.scope - libcontainer container 30a828974fa58e724ba6313bd4475359f60b74bb34be15fd9e26ad8e24a898ea.
Mar 25 01:34:03.737158 containerd[1919]: time="2025-03-25T01:34:03.737116198Z" level=info msg="CreateContainer within sandbox \"8851cfd4d40ce84bf6cd6b4dc32cc90a3a5e65db8036c704a3a6546a00efda49\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f602f97183e57dc96394da35a6d9b06610e2db929ad372d29e8d9932e7ca1a2f\""
Mar 25 01:34:03.739912 containerd[1919]: time="2025-03-25T01:34:03.739831854Z" level=info msg="StartContainer for \"f602f97183e57dc96394da35a6d9b06610e2db929ad372d29e8d9932e7ca1a2f\""
Mar 25 01:34:03.747032 containerd[1919]: time="2025-03-25T01:34:03.746898008Z" level=info msg="connecting to shim f602f97183e57dc96394da35a6d9b06610e2db929ad372d29e8d9932e7ca1a2f" address="unix:///run/containerd/s/60cded0752dc70a92824830af85888b063c37bec50e57466bdd4e8dc49bf2574" protocol=ttrpc version=3
Mar 25 01:34:03.761543 systemd[1]: Started cri-containerd-9b3ec65df971a7566efea0cade1d8c448239e39dbfd748b1495a2f34d65127b7.scope - libcontainer container 9b3ec65df971a7566efea0cade1d8c448239e39dbfd748b1495a2f34d65127b7.
Mar 25 01:34:03.791357 systemd[1]: Started cri-containerd-f602f97183e57dc96394da35a6d9b06610e2db929ad372d29e8d9932e7ca1a2f.scope - libcontainer container f602f97183e57dc96394da35a6d9b06610e2db929ad372d29e8d9932e7ca1a2f.
Mar 25 01:34:03.909211 containerd[1919]: time="2025-03-25T01:34:03.909138063Z" level=info msg="StartContainer for \"30a828974fa58e724ba6313bd4475359f60b74bb34be15fd9e26ad8e24a898ea\" returns successfully"
Mar 25 01:34:03.909614 containerd[1919]: time="2025-03-25T01:34:03.909589322Z" level=info msg="StartContainer for \"f602f97183e57dc96394da35a6d9b06610e2db929ad372d29e8d9932e7ca1a2f\" returns successfully"
Mar 25 01:34:03.913401 containerd[1919]: time="2025-03-25T01:34:03.913269863Z" level=info msg="StartContainer for \"9b3ec65df971a7566efea0cade1d8c448239e39dbfd748b1495a2f34d65127b7\" returns successfully"
Mar 25 01:34:04.073253 kubelet[2823]: W0325 01:34:04.073140 2823 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.17.232:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.232:6443: connect: connection refused
Mar 25 01:34:04.073253 kubelet[2823]: E0325 01:34:04.073223 2823 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.17.232:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.17.232:6443: connect: connection refused" logger="UnhandledError"
Mar 25 01:34:04.153227 kubelet[2823]: W0325 01:34:04.153106 2823 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.17.232:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-232&limit=500&resourceVersion=0": dial tcp 172.31.17.232:6443: connect: connection refused
Mar 25 01:34:04.153227 kubelet[2823]: E0325 01:34:04.153185 2823 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.17.232:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-232&limit=500&resourceVersion=0\": dial tcp 172.31.17.232:6443: connect: connection refused" logger="UnhandledError"
Mar 25 01:34:04.201977 kubelet[2823]: E0325 01:34:04.201924 2823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.232:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-232?timeout=10s\": dial tcp 172.31.17.232:6443: connect: connection refused" interval="1.6s"
Mar 25 01:34:04.352246 kubelet[2823]: W0325 01:34:04.350969 2823 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.17.232:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.232:6443: connect: connection refused
Mar 25 01:34:04.352246 kubelet[2823]: E0325 01:34:04.351052 2823 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.17.232:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.17.232:6443: connect: connection refused" logger="UnhandledError"
Mar 25 01:34:04.384740 kubelet[2823]: I0325 01:34:04.384287 2823 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-17-232"
Mar 25 01:34:04.384740 kubelet[2823]: E0325 01:34:04.384632 2823 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.17.232:6443/api/v1/nodes\": dial tcp 172.31.17.232:6443: connect: connection refused" node="ip-172-31-17-232"
Mar 25 01:34:04.826443 kubelet[2823]: E0325 01:34:04.826398 2823 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.17.232:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.17.232:6443: connect: connection refused" logger="UnhandledError"
Mar 25 01:34:04.873377 kubelet[2823]: E0325 01:34:04.873118 2823 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-232\" not found" node="ip-172-31-17-232"
Mar 25 01:34:04.883325 kubelet[2823]: E0325 01:34:04.882634 2823 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-232\" not found" node="ip-172-31-17-232"
Mar 25 01:34:04.890859 kubelet[2823]: E0325 01:34:04.890093 2823 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-232\" not found" node="ip-172-31-17-232"
Mar 25 01:34:05.252737 kubelet[2823]: E0325 01:34:05.252540 2823 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.17.232:6443/api/v1/namespaces/default/events\": dial tcp 172.31.17.232:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-17-232.182fe7cbab98a591 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-17-232,UID:ip-172-31-17-232,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-17-232,},FirstTimestamp:2025-03-25 01:34:02.758956433 +0000 UTC m=+0.744724690,LastTimestamp:2025-03-25 01:34:02.758956433 +0000 UTC m=+0.744724690,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-17-232,}"
Mar 25 01:34:05.358866 kubelet[2823]: W0325 01:34:05.358812 2823 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.17.232:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.17.232:6443: connect: connection refused
Mar 25 01:34:05.359028 kubelet[2823]: E0325 01:34:05.358888 2823 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.17.232:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.17.232:6443: connect: connection refused" logger="UnhandledError"
Mar 25 01:34:05.890817 kubelet[2823]: E0325 01:34:05.890779 2823 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-232\" not found" node="ip-172-31-17-232"
Mar 25 01:34:05.891575 kubelet[2823]: E0325 01:34:05.891550 2823 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-232\" not found" node="ip-172-31-17-232"
Mar 25 01:34:05.891933 kubelet[2823]: E0325 01:34:05.891911 2823 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-232\" not found" node="ip-172-31-17-232"
Mar 25 01:34:05.987463 kubelet[2823]: I0325 01:34:05.987433 2823 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-17-232"
Mar 25 01:34:06.908971 kubelet[2823]: E0325 01:34:06.908937 2823 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-232\" not found" node="ip-172-31-17-232"
Mar 25 01:34:06.911896 kubelet[2823]: E0325 01:34:06.909866 2823 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-232\" not found" node="ip-172-31-17-232"
Mar 25 01:34:07.903503 kubelet[2823]: E0325 01:34:07.903472 2823 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-232\" not found" node="ip-172-31-17-232"
Mar 25 01:34:08.014812 kubelet[2823]: E0325 01:34:08.014767 2823 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-17-232\" not found" node="ip-172-31-17-232"
Mar 25 01:34:08.125121 kubelet[2823]: I0325 01:34:08.125002 2823 kubelet_node_status.go:79] "Successfully registered node" node="ip-172-31-17-232"
Mar 25 01:34:08.125121 kubelet[2823]: E0325 01:34:08.125058 2823 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"ip-172-31-17-232\": node \"ip-172-31-17-232\" not found"
Mar 25 01:34:08.191836 kubelet[2823]: I0325 01:34:08.191708 2823 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-17-232"
Mar 25 01:34:08.197205 kubelet[2823]: E0325 01:34:08.197166 2823 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-17-232\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-17-232"
Mar 25 01:34:08.197205 kubelet[2823]: I0325 01:34:08.197199 2823 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-17-232"
Mar 25 01:34:08.199519 kubelet[2823]: E0325 01:34:08.199486 2823 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-17-232\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-17-232"
Mar 25 01:34:08.199519 kubelet[2823]: I0325 01:34:08.199521 2823 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-17-232"
Mar 25 01:34:08.202208 kubelet[2823]: E0325 01:34:08.202162 2823 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-17-232\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-17-232"
Mar 25 01:34:08.742016 kubelet[2823]: I0325 01:34:08.741976 2823 apiserver.go:52] "Watching apiserver"
Mar 25 01:34:08.788355 kubelet[2823]: I0325 01:34:08.787925 2823 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Mar 25 01:34:09.553164 kubelet[2823]: I0325 01:34:09.553129 2823 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-17-232"
Mar 25 01:34:10.047586 systemd[1]: Reload requested from client PID 3091 ('systemctl') (unit session-7.scope)...
Mar 25 01:34:10.047613 systemd[1]: Reloading...
Mar 25 01:34:10.252335 zram_generator::config[3139]: No configuration found.
Mar 25 01:34:10.440259 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 25 01:34:10.637160 systemd[1]: Reloading finished in 588 ms.
Mar 25 01:34:10.673724 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 25 01:34:10.691683 systemd[1]: kubelet.service: Deactivated successfully.
Mar 25 01:34:10.691989 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 25 01:34:10.692085 systemd[1]: kubelet.service: Consumed 1.074s CPU time, 124.3M memory peak.
Mar 25 01:34:10.694601 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 25 01:34:10.970834 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 25 01:34:10.981855 (kubelet)[3196]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 25 01:34:11.101233 kubelet[3196]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 25 01:34:11.101233 kubelet[3196]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 25 01:34:11.101233 kubelet[3196]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 25 01:34:11.101840 kubelet[3196]: I0325 01:34:11.101790 3196 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 25 01:34:11.114447 kubelet[3196]: I0325 01:34:11.114405 3196 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
Mar 25 01:34:11.114447 kubelet[3196]: I0325 01:34:11.114440 3196 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 25 01:34:11.115016 kubelet[3196]: I0325 01:34:11.114801 3196 server.go:954] "Client rotation is on, will bootstrap in background"
Mar 25 01:34:11.119952 kubelet[3196]: I0325 01:34:11.119924 3196 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 25 01:34:11.126471 kubelet[3196]: I0325 01:34:11.126190 3196 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 25 01:34:11.138878 sudo[3209]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Mar 25 01:34:11.139825 sudo[3209]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Mar 25 01:34:11.141440 kubelet[3196]: I0325 01:34:11.141347 3196 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 25 01:34:11.146121 kubelet[3196]: I0325 01:34:11.145437 3196 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 25 01:34:11.146121 kubelet[3196]: I0325 01:34:11.145742 3196 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 25 01:34:11.146121 kubelet[3196]: I0325 01:34:11.145777 3196 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-17-232","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 25 01:34:11.146121 kubelet[3196]: I0325 01:34:11.146007 3196 topology_manager.go:138] "Creating topology manager with none policy"
Mar 25 01:34:11.146465 kubelet[3196]: I0325 01:34:11.146021 3196 container_manager_linux.go:304] "Creating device plugin manager"
Mar 25 01:34:11.146465 kubelet[3196]: I0325 01:34:11.146070 3196 state_mem.go:36] "Initialized new in-memory state store"
Mar 25 01:34:11.146465 kubelet[3196]: I0325 01:34:11.146250 3196 kubelet.go:446] "Attempting to sync node with API server"
Mar 25 01:34:11.146465 kubelet[3196]: I0325 01:34:11.146274 3196 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 25 01:34:11.148009 kubelet[3196]: I0325 01:34:11.146814 3196 kubelet.go:352] "Adding apiserver pod source"
Mar 25 01:34:11.148009 kubelet[3196]: I0325 01:34:11.146833 3196 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 25 01:34:11.155188 kubelet[3196]: I0325 01:34:11.155150 3196 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1"
Mar 25 01:34:11.163799 kubelet[3196]: I0325 01:34:11.163615 3196 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 25 01:34:11.165226 kubelet[3196]: I0325 01:34:11.164170 3196 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 25 01:34:11.165879 kubelet[3196]: I0325 01:34:11.165856 3196 server.go:1287] "Started kubelet"
Mar 25 01:34:11.176925 kubelet[3196]: I0325 01:34:11.176752 3196 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 25 01:34:11.196176 kubelet[3196]: I0325 01:34:11.194541 3196 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Mar 25 01:34:11.200751 kubelet[3196]: I0325 01:34:11.200726 3196 server.go:490] "Adding debug handlers to kubelet server"
Mar 25 01:34:11.204534 kubelet[3196]: I0325 01:34:11.204476 3196 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 25 01:34:11.204937 kubelet[3196]: I0325 01:34:11.204922 3196 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 25 01:34:11.205262 kubelet[3196]: I0325 01:34:11.205249 3196 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 25 01:34:11.213624 kubelet[3196]: I0325 01:34:11.213600 3196 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 25 01:34:11.214583 kubelet[3196]: I0325 01:34:11.214563 3196 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Mar 25 01:34:11.215123 kubelet[3196]: I0325 01:34:11.215108 3196 reconciler.go:26] "Reconciler: start to sync state"
Mar 25 01:34:11.222980 kubelet[3196]: I0325 01:34:11.222877 3196 factory.go:221] Registration of the systemd container factory successfully
Mar 25 01:34:11.223200 kubelet[3196]: I0325 01:34:11.223182 3196 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 25 01:34:11.225319 kubelet[3196]: I0325 01:34:11.225131 3196 factory.go:221] Registration of the containerd container factory successfully
Mar 25 01:34:11.249430 kubelet[3196]: I0325 01:34:11.249380 3196 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 25 01:34:11.260493 kubelet[3196]: I0325 01:34:11.260440 3196 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 25 01:34:11.260640 kubelet[3196]: I0325 01:34:11.260574 3196 status_manager.go:227] "Starting to sync pod status with apiserver"
Mar 25 01:34:11.260640 kubelet[3196]: I0325 01:34:11.260603 3196 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 25 01:34:11.260640 kubelet[3196]: I0325 01:34:11.260617 3196 kubelet.go:2388] "Starting kubelet main sync loop" Mar 25 01:34:11.260780 kubelet[3196]: E0325 01:34:11.260698 3196 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 25 01:34:11.272992 kubelet[3196]: E0325 01:34:11.261181 3196 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 25 01:34:11.359795 kubelet[3196]: I0325 01:34:11.359653 3196 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 25 01:34:11.359795 kubelet[3196]: I0325 01:34:11.359675 3196 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 25 01:34:11.359795 kubelet[3196]: I0325 01:34:11.359764 3196 state_mem.go:36] "Initialized new in-memory state store" Mar 25 01:34:11.361352 kubelet[3196]: I0325 01:34:11.359989 3196 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 25 01:34:11.361352 kubelet[3196]: I0325 01:34:11.360003 3196 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 25 01:34:11.361352 kubelet[3196]: I0325 01:34:11.360030 3196 policy_none.go:49] "None policy: Start" Mar 25 01:34:11.361352 kubelet[3196]: I0325 01:34:11.360042 3196 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 25 01:34:11.361352 kubelet[3196]: I0325 01:34:11.360055 3196 state_mem.go:35] "Initializing new in-memory state store" Mar 25 01:34:11.361352 kubelet[3196]: I0325 01:34:11.360194 3196 state_mem.go:75] "Updated machine memory state" Mar 25 01:34:11.371114 kubelet[3196]: I0325 01:34:11.371079 3196 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 25 01:34:11.371812 kubelet[3196]: I0325 01:34:11.371619 3196 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 25 01:34:11.372393 kubelet[3196]: 
I0325 01:34:11.371637 3196 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 25 01:34:11.372393 kubelet[3196]: I0325 01:34:11.372292 3196 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 25 01:34:11.375131 kubelet[3196]: I0325 01:34:11.375110 3196 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-17-232" Mar 25 01:34:11.376150 kubelet[3196]: I0325 01:34:11.376107 3196 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-17-232" Mar 25 01:34:11.377199 kubelet[3196]: I0325 01:34:11.377186 3196 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-17-232" Mar 25 01:34:11.385361 kubelet[3196]: E0325 01:34:11.384640 3196 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 25 01:34:11.405043 kubelet[3196]: E0325 01:34:11.404998 3196 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-17-232\" already exists" pod="kube-system/kube-scheduler-ip-172-31-17-232" Mar 25 01:34:11.421317 kubelet[3196]: I0325 01:34:11.421258 3196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b4f9e4faa1a8f77004e35b3062419984-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-232\" (UID: \"b4f9e4faa1a8f77004e35b3062419984\") " pod="kube-system/kube-scheduler-ip-172-31-17-232" Mar 25 01:34:11.421463 kubelet[3196]: I0325 01:34:11.421333 3196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/53f5a9f6084210eebf3075c2ca77c06c-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-232\" (UID: \"53f5a9f6084210eebf3075c2ca77c06c\") " 
pod="kube-system/kube-controller-manager-ip-172-31-17-232" Mar 25 01:34:11.421463 kubelet[3196]: I0325 01:34:11.421357 3196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/53f5a9f6084210eebf3075c2ca77c06c-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-232\" (UID: \"53f5a9f6084210eebf3075c2ca77c06c\") " pod="kube-system/kube-controller-manager-ip-172-31-17-232" Mar 25 01:34:11.421463 kubelet[3196]: I0325 01:34:11.421425 3196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/53f5a9f6084210eebf3075c2ca77c06c-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-232\" (UID: \"53f5a9f6084210eebf3075c2ca77c06c\") " pod="kube-system/kube-controller-manager-ip-172-31-17-232" Mar 25 01:34:11.421598 kubelet[3196]: I0325 01:34:11.421487 3196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/53f5a9f6084210eebf3075c2ca77c06c-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-232\" (UID: \"53f5a9f6084210eebf3075c2ca77c06c\") " pod="kube-system/kube-controller-manager-ip-172-31-17-232" Mar 25 01:34:11.421598 kubelet[3196]: I0325 01:34:11.421514 3196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/53f5a9f6084210eebf3075c2ca77c06c-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-232\" (UID: \"53f5a9f6084210eebf3075c2ca77c06c\") " pod="kube-system/kube-controller-manager-ip-172-31-17-232" Mar 25 01:34:11.421701 kubelet[3196]: I0325 01:34:11.421638 3196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/ee591cc95805b0cda8c870098981227d-ca-certs\") pod \"kube-apiserver-ip-172-31-17-232\" (UID: \"ee591cc95805b0cda8c870098981227d\") " pod="kube-system/kube-apiserver-ip-172-31-17-232" Mar 25 01:34:11.421701 kubelet[3196]: I0325 01:34:11.421664 3196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ee591cc95805b0cda8c870098981227d-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-232\" (UID: \"ee591cc95805b0cda8c870098981227d\") " pod="kube-system/kube-apiserver-ip-172-31-17-232" Mar 25 01:34:11.421701 kubelet[3196]: I0325 01:34:11.421690 3196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ee591cc95805b0cda8c870098981227d-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-232\" (UID: \"ee591cc95805b0cda8c870098981227d\") " pod="kube-system/kube-apiserver-ip-172-31-17-232" Mar 25 01:34:11.491568 kubelet[3196]: I0325 01:34:11.491475 3196 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-17-232" Mar 25 01:34:11.500144 kubelet[3196]: I0325 01:34:11.500109 3196 kubelet_node_status.go:125] "Node was previously registered" node="ip-172-31-17-232" Mar 25 01:34:11.500351 kubelet[3196]: I0325 01:34:11.500195 3196 kubelet_node_status.go:79] "Successfully registered node" node="ip-172-31-17-232" Mar 25 01:34:11.924816 sudo[3209]: pam_unix(sudo:session): session closed for user root Mar 25 01:34:12.161722 kubelet[3196]: I0325 01:34:12.161358 3196 apiserver.go:52] "Watching apiserver" Mar 25 01:34:12.215037 kubelet[3196]: I0325 01:34:12.214859 3196 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 25 01:34:12.360766 kubelet[3196]: I0325 01:34:12.360686 3196 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-ip-172-31-17-232" podStartSLOduration=1.360651906 podStartE2EDuration="1.360651906s" podCreationTimestamp="2025-03-25 01:34:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:34:12.360637931 +0000 UTC m=+1.368034882" watchObservedRunningTime="2025-03-25 01:34:12.360651906 +0000 UTC m=+1.368048847" Mar 25 01:34:12.360974 kubelet[3196]: I0325 01:34:12.360860 3196 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-17-232" podStartSLOduration=3.360848595 podStartE2EDuration="3.360848595s" podCreationTimestamp="2025-03-25 01:34:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:34:12.350326785 +0000 UTC m=+1.357723742" watchObservedRunningTime="2025-03-25 01:34:12.360848595 +0000 UTC m=+1.368245548" Mar 25 01:34:13.737842 sudo[2255]: pam_unix(sudo:session): session closed for user root Mar 25 01:34:13.760253 sshd[2254]: Connection closed by 147.75.109.163 port 34622 Mar 25 01:34:13.761598 sshd-session[2252]: pam_unix(sshd:session): session closed for user core Mar 25 01:34:13.768525 systemd-logind[1897]: Session 7 logged out. Waiting for processes to exit. Mar 25 01:34:13.768891 systemd[1]: sshd@6-172.31.17.232:22-147.75.109.163:34622.service: Deactivated successfully. Mar 25 01:34:13.775117 systemd[1]: session-7.scope: Deactivated successfully. Mar 25 01:34:13.775503 systemd[1]: session-7.scope: Consumed 4.583s CPU time, 211.4M memory peak. Mar 25 01:34:13.779703 systemd-logind[1897]: Removed session 7. 
Mar 25 01:34:14.515265 kubelet[3196]: I0325 01:34:14.515015 3196 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-17-232" podStartSLOduration=3.514985633 podStartE2EDuration="3.514985633s" podCreationTimestamp="2025-03-25 01:34:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:34:12.375021254 +0000 UTC m=+1.382418206" watchObservedRunningTime="2025-03-25 01:34:14.514985633 +0000 UTC m=+3.522382586" Mar 25 01:34:15.552826 kubelet[3196]: I0325 01:34:15.552669 3196 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 25 01:34:15.555445 containerd[1919]: time="2025-03-25T01:34:15.554818200Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 25 01:34:15.556125 kubelet[3196]: I0325 01:34:15.555101 3196 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 25 01:34:16.518728 systemd[1]: Created slice kubepods-besteffort-pod1b4457d6_5ef0_4b36_9832_870fdbd4d21b.slice - libcontainer container kubepods-besteffort-pod1b4457d6_5ef0_4b36_9832_870fdbd4d21b.slice. Mar 25 01:34:16.543207 systemd[1]: Created slice kubepods-burstable-pod47a3a49e_d9d5_4e05_8f4f_61f2dfec3438.slice - libcontainer container kubepods-burstable-pod47a3a49e_d9d5_4e05_8f4f_61f2dfec3438.slice. 
Mar 25 01:34:16.563965 kubelet[3196]: I0325 01:34:16.563365 3196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2fmg\" (UniqueName: \"kubernetes.io/projected/1b4457d6-5ef0-4b36-9832-870fdbd4d21b-kube-api-access-w2fmg\") pod \"kube-proxy-skrwx\" (UID: \"1b4457d6-5ef0-4b36-9832-870fdbd4d21b\") " pod="kube-system/kube-proxy-skrwx" Mar 25 01:34:16.563965 kubelet[3196]: I0325 01:34:16.563421 3196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/47a3a49e-d9d5-4e05-8f4f-61f2dfec3438-clustermesh-secrets\") pod \"cilium-7l6fc\" (UID: \"47a3a49e-d9d5-4e05-8f4f-61f2dfec3438\") " pod="kube-system/cilium-7l6fc" Mar 25 01:34:16.563965 kubelet[3196]: I0325 01:34:16.563453 3196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1b4457d6-5ef0-4b36-9832-870fdbd4d21b-kube-proxy\") pod \"kube-proxy-skrwx\" (UID: \"1b4457d6-5ef0-4b36-9832-870fdbd4d21b\") " pod="kube-system/kube-proxy-skrwx" Mar 25 01:34:16.563965 kubelet[3196]: I0325 01:34:16.563474 3196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/47a3a49e-d9d5-4e05-8f4f-61f2dfec3438-cilium-config-path\") pod \"cilium-7l6fc\" (UID: \"47a3a49e-d9d5-4e05-8f4f-61f2dfec3438\") " pod="kube-system/cilium-7l6fc" Mar 25 01:34:16.563965 kubelet[3196]: I0325 01:34:16.563510 3196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/47a3a49e-d9d5-4e05-8f4f-61f2dfec3438-hubble-tls\") pod \"cilium-7l6fc\" (UID: \"47a3a49e-d9d5-4e05-8f4f-61f2dfec3438\") " pod="kube-system/cilium-7l6fc" Mar 25 01:34:16.564671 kubelet[3196]: I0325 01:34:16.563534 3196 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1b4457d6-5ef0-4b36-9832-870fdbd4d21b-xtables-lock\") pod \"kube-proxy-skrwx\" (UID: \"1b4457d6-5ef0-4b36-9832-870fdbd4d21b\") " pod="kube-system/kube-proxy-skrwx" Mar 25 01:34:16.564671 kubelet[3196]: I0325 01:34:16.563554 3196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1b4457d6-5ef0-4b36-9832-870fdbd4d21b-lib-modules\") pod \"kube-proxy-skrwx\" (UID: \"1b4457d6-5ef0-4b36-9832-870fdbd4d21b\") " pod="kube-system/kube-proxy-skrwx" Mar 25 01:34:16.564671 kubelet[3196]: I0325 01:34:16.563577 3196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/47a3a49e-d9d5-4e05-8f4f-61f2dfec3438-cilium-cgroup\") pod \"cilium-7l6fc\" (UID: \"47a3a49e-d9d5-4e05-8f4f-61f2dfec3438\") " pod="kube-system/cilium-7l6fc" Mar 25 01:34:16.564671 kubelet[3196]: I0325 01:34:16.563600 3196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/47a3a49e-d9d5-4e05-8f4f-61f2dfec3438-cni-path\") pod \"cilium-7l6fc\" (UID: \"47a3a49e-d9d5-4e05-8f4f-61f2dfec3438\") " pod="kube-system/cilium-7l6fc" Mar 25 01:34:16.564671 kubelet[3196]: I0325 01:34:16.563622 3196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/47a3a49e-d9d5-4e05-8f4f-61f2dfec3438-lib-modules\") pod \"cilium-7l6fc\" (UID: \"47a3a49e-d9d5-4e05-8f4f-61f2dfec3438\") " pod="kube-system/cilium-7l6fc" Mar 25 01:34:16.564671 kubelet[3196]: I0325 01:34:16.563646 3196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/47a3a49e-d9d5-4e05-8f4f-61f2dfec3438-bpf-maps\") pod \"cilium-7l6fc\" (UID: \"47a3a49e-d9d5-4e05-8f4f-61f2dfec3438\") " pod="kube-system/cilium-7l6fc" Mar 25 01:34:16.567672 kubelet[3196]: I0325 01:34:16.563672 3196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/47a3a49e-d9d5-4e05-8f4f-61f2dfec3438-hostproc\") pod \"cilium-7l6fc\" (UID: \"47a3a49e-d9d5-4e05-8f4f-61f2dfec3438\") " pod="kube-system/cilium-7l6fc" Mar 25 01:34:16.567672 kubelet[3196]: I0325 01:34:16.563691 3196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/47a3a49e-d9d5-4e05-8f4f-61f2dfec3438-xtables-lock\") pod \"cilium-7l6fc\" (UID: \"47a3a49e-d9d5-4e05-8f4f-61f2dfec3438\") " pod="kube-system/cilium-7l6fc" Mar 25 01:34:16.567672 kubelet[3196]: I0325 01:34:16.563711 3196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/47a3a49e-d9d5-4e05-8f4f-61f2dfec3438-host-proc-sys-net\") pod \"cilium-7l6fc\" (UID: \"47a3a49e-d9d5-4e05-8f4f-61f2dfec3438\") " pod="kube-system/cilium-7l6fc" Mar 25 01:34:16.567672 kubelet[3196]: I0325 01:34:16.563744 3196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xcw7\" (UniqueName: \"kubernetes.io/projected/47a3a49e-d9d5-4e05-8f4f-61f2dfec3438-kube-api-access-9xcw7\") pod \"cilium-7l6fc\" (UID: \"47a3a49e-d9d5-4e05-8f4f-61f2dfec3438\") " pod="kube-system/cilium-7l6fc" Mar 25 01:34:16.567672 kubelet[3196]: I0325 01:34:16.563768 3196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/47a3a49e-d9d5-4e05-8f4f-61f2dfec3438-cilium-run\") pod \"cilium-7l6fc\" (UID: 
\"47a3a49e-d9d5-4e05-8f4f-61f2dfec3438\") " pod="kube-system/cilium-7l6fc" Mar 25 01:34:16.567672 kubelet[3196]: I0325 01:34:16.563791 3196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/47a3a49e-d9d5-4e05-8f4f-61f2dfec3438-etc-cni-netd\") pod \"cilium-7l6fc\" (UID: \"47a3a49e-d9d5-4e05-8f4f-61f2dfec3438\") " pod="kube-system/cilium-7l6fc" Mar 25 01:34:16.567906 kubelet[3196]: I0325 01:34:16.563817 3196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/47a3a49e-d9d5-4e05-8f4f-61f2dfec3438-host-proc-sys-kernel\") pod \"cilium-7l6fc\" (UID: \"47a3a49e-d9d5-4e05-8f4f-61f2dfec3438\") " pod="kube-system/cilium-7l6fc" Mar 25 01:34:16.751922 systemd[1]: Created slice kubepods-besteffort-podfe2509a0_14fc_4a90_a491_b1a57f495ab6.slice - libcontainer container kubepods-besteffort-podfe2509a0_14fc_4a90_a491_b1a57f495ab6.slice. 
Mar 25 01:34:16.766317 kubelet[3196]: I0325 01:34:16.765782 3196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xztb\" (UniqueName: \"kubernetes.io/projected/fe2509a0-14fc-4a90-a491-b1a57f495ab6-kube-api-access-8xztb\") pod \"cilium-operator-6c4d7847fc-np4gm\" (UID: \"fe2509a0-14fc-4a90-a491-b1a57f495ab6\") " pod="kube-system/cilium-operator-6c4d7847fc-np4gm" Mar 25 01:34:16.766317 kubelet[3196]: I0325 01:34:16.766153 3196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fe2509a0-14fc-4a90-a491-b1a57f495ab6-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-np4gm\" (UID: \"fe2509a0-14fc-4a90-a491-b1a57f495ab6\") " pod="kube-system/cilium-operator-6c4d7847fc-np4gm" Mar 25 01:34:16.839903 containerd[1919]: time="2025-03-25T01:34:16.839861971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-skrwx,Uid:1b4457d6-5ef0-4b36-9832-870fdbd4d21b,Namespace:kube-system,Attempt:0,}" Mar 25 01:34:16.849728 containerd[1919]: time="2025-03-25T01:34:16.849534220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7l6fc,Uid:47a3a49e-d9d5-4e05-8f4f-61f2dfec3438,Namespace:kube-system,Attempt:0,}" Mar 25 01:34:16.868126 containerd[1919]: time="2025-03-25T01:34:16.867642422Z" level=info msg="connecting to shim 099ba9f937185aa3b2e3926f244c288184740a1fd25b47b9abce00efd351020d" address="unix:///run/containerd/s/f955098277da78610aca93c3c60ecaab5e67081a0a740538453cf1014b3a8124" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:34:16.901755 containerd[1919]: time="2025-03-25T01:34:16.901700390Z" level=info msg="connecting to shim 4042f67412f163352ef85747effae97a6b980838c25d3518a2c27e6fb4b6d95c" address="unix:///run/containerd/s/c01193d3b3769c471445411f3fed934d5eff6731dda6985acb7a4b7e33b98908" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:34:16.936014 systemd[1]: Started 
cri-containerd-099ba9f937185aa3b2e3926f244c288184740a1fd25b47b9abce00efd351020d.scope - libcontainer container 099ba9f937185aa3b2e3926f244c288184740a1fd25b47b9abce00efd351020d. Mar 25 01:34:16.943461 systemd[1]: Started cri-containerd-4042f67412f163352ef85747effae97a6b980838c25d3518a2c27e6fb4b6d95c.scope - libcontainer container 4042f67412f163352ef85747effae97a6b980838c25d3518a2c27e6fb4b6d95c. Mar 25 01:34:17.001471 containerd[1919]: time="2025-03-25T01:34:17.001423093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7l6fc,Uid:47a3a49e-d9d5-4e05-8f4f-61f2dfec3438,Namespace:kube-system,Attempt:0,} returns sandbox id \"4042f67412f163352ef85747effae97a6b980838c25d3518a2c27e6fb4b6d95c\"" Mar 25 01:34:17.004054 containerd[1919]: time="2025-03-25T01:34:17.003684655Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 25 01:34:17.004909 containerd[1919]: time="2025-03-25T01:34:17.004883192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-skrwx,Uid:1b4457d6-5ef0-4b36-9832-870fdbd4d21b,Namespace:kube-system,Attempt:0,} returns sandbox id \"099ba9f937185aa3b2e3926f244c288184740a1fd25b47b9abce00efd351020d\"" Mar 25 01:34:17.008711 containerd[1919]: time="2025-03-25T01:34:17.008169936Z" level=info msg="CreateContainer within sandbox \"099ba9f937185aa3b2e3926f244c288184740a1fd25b47b9abce00efd351020d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 25 01:34:17.045029 containerd[1919]: time="2025-03-25T01:34:17.044983639Z" level=info msg="Container 55404dc6d7c3b62c9a301a7316bec29dc5c966e4d5b2eb4a8331cf032716c4f1: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:34:17.053276 update_engine[1898]: I20250325 01:34:17.052379 1898 update_attempter.cc:509] Updating boot flags... 
Mar 25 01:34:17.055384 containerd[1919]: time="2025-03-25T01:34:17.054905003Z" level=info msg="CreateContainer within sandbox \"099ba9f937185aa3b2e3926f244c288184740a1fd25b47b9abce00efd351020d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"55404dc6d7c3b62c9a301a7316bec29dc5c966e4d5b2eb4a8331cf032716c4f1\"" Mar 25 01:34:17.056292 containerd[1919]: time="2025-03-25T01:34:17.056036702Z" level=info msg="StartContainer for \"55404dc6d7c3b62c9a301a7316bec29dc5c966e4d5b2eb4a8331cf032716c4f1\"" Mar 25 01:34:17.060281 containerd[1919]: time="2025-03-25T01:34:17.060152033Z" level=info msg="connecting to shim 55404dc6d7c3b62c9a301a7316bec29dc5c966e4d5b2eb4a8331cf032716c4f1" address="unix:///run/containerd/s/f955098277da78610aca93c3c60ecaab5e67081a0a740538453cf1014b3a8124" protocol=ttrpc version=3 Mar 25 01:34:17.063869 containerd[1919]: time="2025-03-25T01:34:17.063240652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-np4gm,Uid:fe2509a0-14fc-4a90-a491-b1a57f495ab6,Namespace:kube-system,Attempt:0,}" Mar 25 01:34:17.138547 systemd[1]: Started cri-containerd-55404dc6d7c3b62c9a301a7316bec29dc5c966e4d5b2eb4a8331cf032716c4f1.scope - libcontainer container 55404dc6d7c3b62c9a301a7316bec29dc5c966e4d5b2eb4a8331cf032716c4f1. Mar 25 01:34:17.143440 containerd[1919]: time="2025-03-25T01:34:17.141727431Z" level=info msg="connecting to shim cbc5714419181a46b3d7fced5b91c4e1afa6d189518f733cedb3151ed1281386" address="unix:///run/containerd/s/e3fa4f25f3c4f4eaaab086cc00d20d7f05ecdfcbe3b0e68e790eb2f317ab4ecb" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:34:17.230541 systemd[1]: Started cri-containerd-cbc5714419181a46b3d7fced5b91c4e1afa6d189518f733cedb3151ed1281386.scope - libcontainer container cbc5714419181a46b3d7fced5b91c4e1afa6d189518f733cedb3151ed1281386. 
Mar 25 01:34:17.249352 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3425) Mar 25 01:34:17.341328 containerd[1919]: time="2025-03-25T01:34:17.341204464Z" level=info msg="StartContainer for \"55404dc6d7c3b62c9a301a7316bec29dc5c966e4d5b2eb4a8331cf032716c4f1\" returns successfully" Mar 25 01:34:17.619948 kubelet[3196]: I0325 01:34:17.617738 3196 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-skrwx" podStartSLOduration=1.617716253 podStartE2EDuration="1.617716253s" podCreationTimestamp="2025-03-25 01:34:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:34:17.424556972 +0000 UTC m=+6.431953944" watchObservedRunningTime="2025-03-25 01:34:17.617716253 +0000 UTC m=+6.625113207" Mar 25 01:34:17.788329 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3428) Mar 25 01:34:17.813094 containerd[1919]: time="2025-03-25T01:34:17.813042203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-np4gm,Uid:fe2509a0-14fc-4a90-a491-b1a57f495ab6,Namespace:kube-system,Attempt:0,} returns sandbox id \"cbc5714419181a46b3d7fced5b91c4e1afa6d189518f733cedb3151ed1281386\"" Mar 25 01:34:18.028524 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3428) Mar 25 01:34:27.296132 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1628096113.mount: Deactivated successfully. 
Mar 25 01:34:30.048707 containerd[1919]: time="2025-03-25T01:34:30.030564999Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Mar 25 01:34:30.048707 containerd[1919]: time="2025-03-25T01:34:30.042774214Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:34:30.048707 containerd[1919]: time="2025-03-25T01:34:30.048582676Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 13.044852878s" Mar 25 01:34:30.048707 containerd[1919]: time="2025-03-25T01:34:30.048621260Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 25 01:34:30.049474 containerd[1919]: time="2025-03-25T01:34:30.049213120Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:34:30.056064 containerd[1919]: time="2025-03-25T01:34:30.056014185Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 25 01:34:30.061556 containerd[1919]: time="2025-03-25T01:34:30.061157455Z" level=info msg="CreateContainer within sandbox \"4042f67412f163352ef85747effae97a6b980838c25d3518a2c27e6fb4b6d95c\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 25 01:34:30.132122 containerd[1919]: time="2025-03-25T01:34:30.132078857Z" level=info msg="Container cadbdaae18ffc2e694c9aa1c0a98c3adb6095f0c450b1c1ad54117324ef6052e: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:34:30.202486 containerd[1919]: time="2025-03-25T01:34:30.202443911Z" level=info msg="CreateContainer within sandbox \"4042f67412f163352ef85747effae97a6b980838c25d3518a2c27e6fb4b6d95c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cadbdaae18ffc2e694c9aa1c0a98c3adb6095f0c450b1c1ad54117324ef6052e\"" Mar 25 01:34:30.203045 containerd[1919]: time="2025-03-25T01:34:30.203001120Z" level=info msg="StartContainer for \"cadbdaae18ffc2e694c9aa1c0a98c3adb6095f0c450b1c1ad54117324ef6052e\"" Mar 25 01:34:30.205346 containerd[1919]: time="2025-03-25T01:34:30.205263812Z" level=info msg="connecting to shim cadbdaae18ffc2e694c9aa1c0a98c3adb6095f0c450b1c1ad54117324ef6052e" address="unix:///run/containerd/s/c01193d3b3769c471445411f3fed934d5eff6731dda6985acb7a4b7e33b98908" protocol=ttrpc version=3 Mar 25 01:34:30.318514 systemd[1]: Started cri-containerd-cadbdaae18ffc2e694c9aa1c0a98c3adb6095f0c450b1c1ad54117324ef6052e.scope - libcontainer container cadbdaae18ffc2e694c9aa1c0a98c3adb6095f0c450b1c1ad54117324ef6052e. Mar 25 01:34:30.426282 containerd[1919]: time="2025-03-25T01:34:30.426236914Z" level=info msg="StartContainer for \"cadbdaae18ffc2e694c9aa1c0a98c3adb6095f0c450b1c1ad54117324ef6052e\" returns successfully" Mar 25 01:34:30.453485 systemd[1]: cri-containerd-cadbdaae18ffc2e694c9aa1c0a98c3adb6095f0c450b1c1ad54117324ef6052e.scope: Deactivated successfully. 
Mar 25 01:34:30.508059 containerd[1919]: time="2025-03-25T01:34:30.507088309Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cadbdaae18ffc2e694c9aa1c0a98c3adb6095f0c450b1c1ad54117324ef6052e\" id:\"cadbdaae18ffc2e694c9aa1c0a98c3adb6095f0c450b1c1ad54117324ef6052e\" pid:3875 exited_at:{seconds:1742866470 nanos:481273336}" Mar 25 01:34:30.516136 containerd[1919]: time="2025-03-25T01:34:30.516082278Z" level=info msg="received exit event container_id:\"cadbdaae18ffc2e694c9aa1c0a98c3adb6095f0c450b1c1ad54117324ef6052e\" id:\"cadbdaae18ffc2e694c9aa1c0a98c3adb6095f0c450b1c1ad54117324ef6052e\" pid:3875 exited_at:{seconds:1742866470 nanos:481273336}" Mar 25 01:34:30.571734 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cadbdaae18ffc2e694c9aa1c0a98c3adb6095f0c450b1c1ad54117324ef6052e-rootfs.mount: Deactivated successfully. Mar 25 01:34:31.444809 containerd[1919]: time="2025-03-25T01:34:31.444764815Z" level=info msg="CreateContainer within sandbox \"4042f67412f163352ef85747effae97a6b980838c25d3518a2c27e6fb4b6d95c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 25 01:34:31.473362 containerd[1919]: time="2025-03-25T01:34:31.470604525Z" level=info msg="Container 54a5534de023e7d8207651a58c2e82b0bd358406ee88c9c9285044d412ca9241: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:34:31.486263 containerd[1919]: time="2025-03-25T01:34:31.485983063Z" level=info msg="CreateContainer within sandbox \"4042f67412f163352ef85747effae97a6b980838c25d3518a2c27e6fb4b6d95c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"54a5534de023e7d8207651a58c2e82b0bd358406ee88c9c9285044d412ca9241\"" Mar 25 01:34:31.491346 containerd[1919]: time="2025-03-25T01:34:31.490562905Z" level=info msg="StartContainer for \"54a5534de023e7d8207651a58c2e82b0bd358406ee88c9c9285044d412ca9241\"" Mar 25 01:34:31.493078 containerd[1919]: time="2025-03-25T01:34:31.492764312Z" level=info msg="connecting to shim 
54a5534de023e7d8207651a58c2e82b0bd358406ee88c9c9285044d412ca9241" address="unix:///run/containerd/s/c01193d3b3769c471445411f3fed934d5eff6731dda6985acb7a4b7e33b98908" protocol=ttrpc version=3 Mar 25 01:34:31.531524 systemd[1]: Started cri-containerd-54a5534de023e7d8207651a58c2e82b0bd358406ee88c9c9285044d412ca9241.scope - libcontainer container 54a5534de023e7d8207651a58c2e82b0bd358406ee88c9c9285044d412ca9241. Mar 25 01:34:31.579076 containerd[1919]: time="2025-03-25T01:34:31.579033502Z" level=info msg="StartContainer for \"54a5534de023e7d8207651a58c2e82b0bd358406ee88c9c9285044d412ca9241\" returns successfully" Mar 25 01:34:31.593959 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 25 01:34:31.595016 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 25 01:34:31.595581 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 25 01:34:31.599127 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 25 01:34:31.603101 containerd[1919]: time="2025-03-25T01:34:31.600247556Z" level=info msg="received exit event container_id:\"54a5534de023e7d8207651a58c2e82b0bd358406ee88c9c9285044d412ca9241\" id:\"54a5534de023e7d8207651a58c2e82b0bd358406ee88c9c9285044d412ca9241\" pid:3916 exited_at:{seconds:1742866471 nanos:597776777}" Mar 25 01:34:31.603101 containerd[1919]: time="2025-03-25T01:34:31.600994964Z" level=info msg="TaskExit event in podsandbox handler container_id:\"54a5534de023e7d8207651a58c2e82b0bd358406ee88c9c9285044d412ca9241\" id:\"54a5534de023e7d8207651a58c2e82b0bd358406ee88c9c9285044d412ca9241\" pid:3916 exited_at:{seconds:1742866471 nanos:597776777}" Mar 25 01:34:31.605090 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 25 01:34:31.607615 systemd[1]: cri-containerd-54a5534de023e7d8207651a58c2e82b0bd358406ee88c9c9285044d412ca9241.scope: Deactivated successfully. 
Mar 25 01:34:31.688015 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 25 01:34:32.440978 containerd[1919]: time="2025-03-25T01:34:32.440937713Z" level=info msg="CreateContainer within sandbox \"4042f67412f163352ef85747effae97a6b980838c25d3518a2c27e6fb4b6d95c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 25 01:34:32.484766 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-54a5534de023e7d8207651a58c2e82b0bd358406ee88c9c9285044d412ca9241-rootfs.mount: Deactivated successfully. Mar 25 01:34:32.508333 containerd[1919]: time="2025-03-25T01:34:32.505144888Z" level=info msg="Container f04d4b980f777731198858dbabf30aa34238d1606d309f3588679dde9c973984: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:34:32.519545 containerd[1919]: time="2025-03-25T01:34:32.519496516Z" level=info msg="CreateContainer within sandbox \"4042f67412f163352ef85747effae97a6b980838c25d3518a2c27e6fb4b6d95c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f04d4b980f777731198858dbabf30aa34238d1606d309f3588679dde9c973984\"" Mar 25 01:34:32.522329 containerd[1919]: time="2025-03-25T01:34:32.520517903Z" level=info msg="StartContainer for \"f04d4b980f777731198858dbabf30aa34238d1606d309f3588679dde9c973984\"" Mar 25 01:34:32.523988 containerd[1919]: time="2025-03-25T01:34:32.523928146Z" level=info msg="connecting to shim f04d4b980f777731198858dbabf30aa34238d1606d309f3588679dde9c973984" address="unix:///run/containerd/s/c01193d3b3769c471445411f3fed934d5eff6731dda6985acb7a4b7e33b98908" protocol=ttrpc version=3 Mar 25 01:34:32.560609 systemd[1]: Started cri-containerd-f04d4b980f777731198858dbabf30aa34238d1606d309f3588679dde9c973984.scope - libcontainer container f04d4b980f777731198858dbabf30aa34238d1606d309f3588679dde9c973984. Mar 25 01:34:32.611788 systemd[1]: cri-containerd-f04d4b980f777731198858dbabf30aa34238d1606d309f3588679dde9c973984.scope: Deactivated successfully. 
Mar 25 01:34:32.614423 containerd[1919]: time="2025-03-25T01:34:32.614385742Z" level=info msg="received exit event container_id:\"f04d4b980f777731198858dbabf30aa34238d1606d309f3588679dde9c973984\" id:\"f04d4b980f777731198858dbabf30aa34238d1606d309f3588679dde9c973984\" pid:3963 exited_at:{seconds:1742866472 nanos:614039098}" Mar 25 01:34:32.616239 containerd[1919]: time="2025-03-25T01:34:32.614783367Z" level=info msg="StartContainer for \"f04d4b980f777731198858dbabf30aa34238d1606d309f3588679dde9c973984\" returns successfully" Mar 25 01:34:32.616239 containerd[1919]: time="2025-03-25T01:34:32.614833852Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f04d4b980f777731198858dbabf30aa34238d1606d309f3588679dde9c973984\" id:\"f04d4b980f777731198858dbabf30aa34238d1606d309f3588679dde9c973984\" pid:3963 exited_at:{seconds:1742866472 nanos:614039098}" Mar 25 01:34:32.646412 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f04d4b980f777731198858dbabf30aa34238d1606d309f3588679dde9c973984-rootfs.mount: Deactivated successfully. 
Mar 25 01:34:33.458017 containerd[1919]: time="2025-03-25T01:34:33.457966483Z" level=info msg="CreateContainer within sandbox \"4042f67412f163352ef85747effae97a6b980838c25d3518a2c27e6fb4b6d95c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 25 01:34:33.514447 containerd[1919]: time="2025-03-25T01:34:33.512851299Z" level=info msg="Container 2e635fe386dc242c5e09cf33d3826dfc9a7314cc20ebe9fe78ad97e120e857ce: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:34:33.545457 containerd[1919]: time="2025-03-25T01:34:33.545417641Z" level=info msg="CreateContainer within sandbox \"4042f67412f163352ef85747effae97a6b980838c25d3518a2c27e6fb4b6d95c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2e635fe386dc242c5e09cf33d3826dfc9a7314cc20ebe9fe78ad97e120e857ce\"" Mar 25 01:34:33.546866 containerd[1919]: time="2025-03-25T01:34:33.546057324Z" level=info msg="StartContainer for \"2e635fe386dc242c5e09cf33d3826dfc9a7314cc20ebe9fe78ad97e120e857ce\"" Mar 25 01:34:33.548045 containerd[1919]: time="2025-03-25T01:34:33.547117571Z" level=info msg="connecting to shim 2e635fe386dc242c5e09cf33d3826dfc9a7314cc20ebe9fe78ad97e120e857ce" address="unix:///run/containerd/s/c01193d3b3769c471445411f3fed934d5eff6731dda6985acb7a4b7e33b98908" protocol=ttrpc version=3 Mar 25 01:34:33.586571 systemd[1]: Started cri-containerd-2e635fe386dc242c5e09cf33d3826dfc9a7314cc20ebe9fe78ad97e120e857ce.scope - libcontainer container 2e635fe386dc242c5e09cf33d3826dfc9a7314cc20ebe9fe78ad97e120e857ce. Mar 25 01:34:33.616443 systemd[1]: cri-containerd-2e635fe386dc242c5e09cf33d3826dfc9a7314cc20ebe9fe78ad97e120e857ce.scope: Deactivated successfully. 
Mar 25 01:34:33.617800 containerd[1919]: time="2025-03-25T01:34:33.617671978Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2e635fe386dc242c5e09cf33d3826dfc9a7314cc20ebe9fe78ad97e120e857ce\" id:\"2e635fe386dc242c5e09cf33d3826dfc9a7314cc20ebe9fe78ad97e120e857ce\" pid:4005 exited_at:{seconds:1742866473 nanos:616693372}" Mar 25 01:34:33.617989 containerd[1919]: time="2025-03-25T01:34:33.617895258Z" level=info msg="received exit event container_id:\"2e635fe386dc242c5e09cf33d3826dfc9a7314cc20ebe9fe78ad97e120e857ce\" id:\"2e635fe386dc242c5e09cf33d3826dfc9a7314cc20ebe9fe78ad97e120e857ce\" pid:4005 exited_at:{seconds:1742866473 nanos:616693372}" Mar 25 01:34:33.620480 containerd[1919]: time="2025-03-25T01:34:33.620452352Z" level=info msg="StartContainer for \"2e635fe386dc242c5e09cf33d3826dfc9a7314cc20ebe9fe78ad97e120e857ce\" returns successfully" Mar 25 01:34:33.644438 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e635fe386dc242c5e09cf33d3826dfc9a7314cc20ebe9fe78ad97e120e857ce-rootfs.mount: Deactivated successfully. 
Mar 25 01:34:34.481994 containerd[1919]: time="2025-03-25T01:34:34.481874700Z" level=info msg="CreateContainer within sandbox \"4042f67412f163352ef85747effae97a6b980838c25d3518a2c27e6fb4b6d95c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 25 01:34:34.515371 containerd[1919]: time="2025-03-25T01:34:34.515034999Z" level=info msg="Container 0213c6c1327b66b8e0383887a706d9b46ccd267e96fc4f03a9479513622f8408: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:34:34.531293 containerd[1919]: time="2025-03-25T01:34:34.531251358Z" level=info msg="CreateContainer within sandbox \"4042f67412f163352ef85747effae97a6b980838c25d3518a2c27e6fb4b6d95c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0213c6c1327b66b8e0383887a706d9b46ccd267e96fc4f03a9479513622f8408\"" Mar 25 01:34:34.532048 containerd[1919]: time="2025-03-25T01:34:34.532015750Z" level=info msg="StartContainer for \"0213c6c1327b66b8e0383887a706d9b46ccd267e96fc4f03a9479513622f8408\"" Mar 25 01:34:34.533145 containerd[1919]: time="2025-03-25T01:34:34.533107258Z" level=info msg="connecting to shim 0213c6c1327b66b8e0383887a706d9b46ccd267e96fc4f03a9479513622f8408" address="unix:///run/containerd/s/c01193d3b3769c471445411f3fed934d5eff6731dda6985acb7a4b7e33b98908" protocol=ttrpc version=3 Mar 25 01:34:34.598513 systemd[1]: Started cri-containerd-0213c6c1327b66b8e0383887a706d9b46ccd267e96fc4f03a9479513622f8408.scope - libcontainer container 0213c6c1327b66b8e0383887a706d9b46ccd267e96fc4f03a9479513622f8408. 
Mar 25 01:34:34.638841 containerd[1919]: time="2025-03-25T01:34:34.637248856Z" level=info msg="StartContainer for \"0213c6c1327b66b8e0383887a706d9b46ccd267e96fc4f03a9479513622f8408\" returns successfully" Mar 25 01:34:34.810325 containerd[1919]: time="2025-03-25T01:34:34.810263696Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0213c6c1327b66b8e0383887a706d9b46ccd267e96fc4f03a9479513622f8408\" id:\"cf8e6679507bcbf1c4cb8fbeaea5c1e0185105e4cf703393b683a1216d58d00a\" pid:4077 exited_at:{seconds:1742866474 nanos:806713697}" Mar 25 01:34:34.944403 kubelet[3196]: I0325 01:34:34.944371 3196 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Mar 25 01:34:35.046563 kubelet[3196]: I0325 01:34:35.046530 3196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkq9j\" (UniqueName: \"kubernetes.io/projected/ea677ec2-ef23-4bd3-a2f6-bd80c4a48e76-kube-api-access-gkq9j\") pod \"coredns-668d6bf9bc-p6m97\" (UID: \"ea677ec2-ef23-4bd3-a2f6-bd80c4a48e76\") " pod="kube-system/coredns-668d6bf9bc-p6m97" Mar 25 01:34:35.047188 kubelet[3196]: I0325 01:34:35.046923 3196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea677ec2-ef23-4bd3-a2f6-bd80c4a48e76-config-volume\") pod \"coredns-668d6bf9bc-p6m97\" (UID: \"ea677ec2-ef23-4bd3-a2f6-bd80c4a48e76\") " pod="kube-system/coredns-668d6bf9bc-p6m97" Mar 25 01:34:35.050853 systemd[1]: Created slice kubepods-burstable-podea677ec2_ef23_4bd3_a2f6_bd80c4a48e76.slice - libcontainer container kubepods-burstable-podea677ec2_ef23_4bd3_a2f6_bd80c4a48e76.slice. Mar 25 01:34:35.067469 systemd[1]: Created slice kubepods-burstable-pod224bda55_70ef_403f_98ba_de6aa8dc4035.slice - libcontainer container kubepods-burstable-pod224bda55_70ef_403f_98ba_de6aa8dc4035.slice. 
Mar 25 01:34:35.148768 kubelet[3196]: I0325 01:34:35.147199 3196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/224bda55-70ef-403f-98ba-de6aa8dc4035-config-volume\") pod \"coredns-668d6bf9bc-xldb5\" (UID: \"224bda55-70ef-403f-98ba-de6aa8dc4035\") " pod="kube-system/coredns-668d6bf9bc-xldb5" Mar 25 01:34:35.148768 kubelet[3196]: I0325 01:34:35.147278 3196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chkwk\" (UniqueName: \"kubernetes.io/projected/224bda55-70ef-403f-98ba-de6aa8dc4035-kube-api-access-chkwk\") pod \"coredns-668d6bf9bc-xldb5\" (UID: \"224bda55-70ef-403f-98ba-de6aa8dc4035\") " pod="kube-system/coredns-668d6bf9bc-xldb5" Mar 25 01:34:35.362250 containerd[1919]: time="2025-03-25T01:34:35.361847018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p6m97,Uid:ea677ec2-ef23-4bd3-a2f6-bd80c4a48e76,Namespace:kube-system,Attempt:0,}" Mar 25 01:34:35.379510 containerd[1919]: time="2025-03-25T01:34:35.379467041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xldb5,Uid:224bda55-70ef-403f-98ba-de6aa8dc4035,Namespace:kube-system,Attempt:0,}" Mar 25 01:34:35.547465 kubelet[3196]: I0325 01:34:35.546709 3196 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7l6fc" podStartSLOduration=6.49643031 podStartE2EDuration="19.546680247s" podCreationTimestamp="2025-03-25 01:34:16 +0000 UTC" firstStartedPulling="2025-03-25 01:34:17.003235717 +0000 UTC m=+6.010632651" lastFinishedPulling="2025-03-25 01:34:30.053485642 +0000 UTC m=+19.060882588" observedRunningTime="2025-03-25 01:34:35.546264194 +0000 UTC m=+24.553661148" watchObservedRunningTime="2025-03-25 01:34:35.546680247 +0000 UTC m=+24.554077199" Mar 25 01:34:35.956376 containerd[1919]: time="2025-03-25T01:34:35.956330113Z" level=info msg="ImageCreate 
event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:34:35.957499 containerd[1919]: time="2025-03-25T01:34:35.957440734Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Mar 25 01:34:35.958670 containerd[1919]: time="2025-03-25T01:34:35.958614292Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:34:35.960067 containerd[1919]: time="2025-03-25T01:34:35.959906669Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 5.903833946s" Mar 25 01:34:35.960067 containerd[1919]: time="2025-03-25T01:34:35.959944474Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 25 01:34:35.962698 containerd[1919]: time="2025-03-25T01:34:35.962653250Z" level=info msg="CreateContainer within sandbox \"cbc5714419181a46b3d7fced5b91c4e1afa6d189518f733cedb3151ed1281386\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 25 01:34:35.972361 containerd[1919]: time="2025-03-25T01:34:35.970171909Z" level=info msg="Container f8e45f6d54118e37e160f3a55ff44dbfa2477b44df6a963012d1dd7037db6ad0: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:34:35.989002 
containerd[1919]: time="2025-03-25T01:34:35.988960066Z" level=info msg="CreateContainer within sandbox \"cbc5714419181a46b3d7fced5b91c4e1afa6d189518f733cedb3151ed1281386\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f8e45f6d54118e37e160f3a55ff44dbfa2477b44df6a963012d1dd7037db6ad0\"" Mar 25 01:34:35.991341 containerd[1919]: time="2025-03-25T01:34:35.989928188Z" level=info msg="StartContainer for \"f8e45f6d54118e37e160f3a55ff44dbfa2477b44df6a963012d1dd7037db6ad0\"" Mar 25 01:34:35.991341 containerd[1919]: time="2025-03-25T01:34:35.991159347Z" level=info msg="connecting to shim f8e45f6d54118e37e160f3a55ff44dbfa2477b44df6a963012d1dd7037db6ad0" address="unix:///run/containerd/s/e3fa4f25f3c4f4eaaab086cc00d20d7f05ecdfcbe3b0e68e790eb2f317ab4ecb" protocol=ttrpc version=3 Mar 25 01:34:36.052559 systemd[1]: Started cri-containerd-f8e45f6d54118e37e160f3a55ff44dbfa2477b44df6a963012d1dd7037db6ad0.scope - libcontainer container f8e45f6d54118e37e160f3a55ff44dbfa2477b44df6a963012d1dd7037db6ad0. Mar 25 01:34:36.107980 containerd[1919]: time="2025-03-25T01:34:36.107944906Z" level=info msg="StartContainer for \"f8e45f6d54118e37e160f3a55ff44dbfa2477b44df6a963012d1dd7037db6ad0\" returns successfully" Mar 25 01:34:40.242607 systemd-networkd[1821]: cilium_host: Link UP Mar 25 01:34:40.244430 systemd-networkd[1821]: cilium_net: Link UP Mar 25 01:34:40.252430 systemd-networkd[1821]: cilium_net: Gained carrier Mar 25 01:34:40.256577 systemd-networkd[1821]: cilium_host: Gained carrier Mar 25 01:34:40.259165 (udev-worker)[4215]: Network interface NamePolicy= disabled on kernel command line. Mar 25 01:34:40.260764 (udev-worker)[4217]: Network interface NamePolicy= disabled on kernel command line. Mar 25 01:34:40.476407 (udev-worker)[4221]: Network interface NamePolicy= disabled on kernel command line. 
Mar 25 01:34:40.491268 systemd-networkd[1821]: cilium_vxlan: Link UP Mar 25 01:34:40.491281 systemd-networkd[1821]: cilium_vxlan: Gained carrier Mar 25 01:34:41.008525 systemd-networkd[1821]: cilium_host: Gained IPv6LL Mar 25 01:34:41.200469 systemd-networkd[1821]: cilium_net: Gained IPv6LL Mar 25 01:34:41.965438 kernel: NET: Registered PF_ALG protocol family Mar 25 01:34:42.418687 systemd-networkd[1821]: cilium_vxlan: Gained IPv6LL Mar 25 01:34:43.550561 systemd-networkd[1821]: lxc_health: Link UP Mar 25 01:34:43.558888 systemd-networkd[1821]: lxc_health: Gained carrier Mar 25 01:34:43.977904 (udev-worker)[4548]: Network interface NamePolicy= disabled on kernel command line. Mar 25 01:34:43.991417 (udev-worker)[4222]: Network interface NamePolicy= disabled on kernel command line. Mar 25 01:34:43.998999 systemd-networkd[1821]: lxc8d907f07fa7d: Link UP Mar 25 01:34:44.003660 kernel: eth0: renamed from tmp4fdf1 Mar 25 01:34:44.008425 systemd-networkd[1821]: lxc3115f39acfeb: Link UP Mar 25 01:34:44.019433 systemd-networkd[1821]: lxc8d907f07fa7d: Gained carrier Mar 25 01:34:44.021694 kernel: eth0: renamed from tmpdaa6f Mar 25 01:34:44.038916 systemd-networkd[1821]: lxc3115f39acfeb: Gained carrier Mar 25 01:34:44.072191 systemd[1]: Started sshd@7-172.31.17.232:22-147.75.109.163:60206.service - OpenSSH per-connection server daemon (147.75.109.163:60206). Mar 25 01:34:44.336414 sshd[4565]: Accepted publickey for core from 147.75.109.163 port 60206 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc Mar 25 01:34:44.339726 sshd-session[4565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:34:44.357298 systemd-logind[1897]: New session 8 of user core. Mar 25 01:34:44.364707 systemd[1]: Started session-8.scope - Session 8 of User core. 
Mar 25 01:34:44.890435 kubelet[3196]: I0325 01:34:44.889649 3196 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-np4gm" podStartSLOduration=10.744053472000001 podStartE2EDuration="28.889621s" podCreationTimestamp="2025-03-25 01:34:16 +0000 UTC" firstStartedPulling="2025-03-25 01:34:17.815375001 +0000 UTC m=+6.822771944" lastFinishedPulling="2025-03-25 01:34:35.960942531 +0000 UTC m=+24.968339472" observedRunningTime="2025-03-25 01:34:36.538786782 +0000 UTC m=+25.546183736" watchObservedRunningTime="2025-03-25 01:34:44.889621 +0000 UTC m=+33.897017955" Mar 25 01:34:44.913786 systemd-networkd[1821]: lxc_health: Gained IPv6LL Mar 25 01:34:45.232459 systemd-networkd[1821]: lxc8d907f07fa7d: Gained IPv6LL Mar 25 01:34:45.296468 systemd-networkd[1821]: lxc3115f39acfeb: Gained IPv6LL Mar 25 01:34:45.848137 sshd[4573]: Connection closed by 147.75.109.163 port 60206 Mar 25 01:34:45.850556 sshd-session[4565]: pam_unix(sshd:session): session closed for user core Mar 25 01:34:45.875643 systemd[1]: sshd@7-172.31.17.232:22-147.75.109.163:60206.service: Deactivated successfully. Mar 25 01:34:45.886266 systemd[1]: session-8.scope: Deactivated successfully. Mar 25 01:34:45.888510 systemd-logind[1897]: Session 8 logged out. Waiting for processes to exit. Mar 25 01:34:45.889690 systemd-logind[1897]: Removed session 8. 
Mar 25 01:34:48.157289 ntpd[1889]: Listen normally on 8 cilium_host 192.168.0.82:123 Mar 25 01:34:48.158067 ntpd[1889]: 25 Mar 01:34:48 ntpd[1889]: Listen normally on 8 cilium_host 192.168.0.82:123 Mar 25 01:34:48.158067 ntpd[1889]: 25 Mar 01:34:48 ntpd[1889]: Listen normally on 9 cilium_net [fe80::242c:a0ff:fe6c:9a1c%4]:123 Mar 25 01:34:48.158067 ntpd[1889]: 25 Mar 01:34:48 ntpd[1889]: Listen normally on 10 cilium_host [fe80::8457:baff:fe6f:6068%5]:123 Mar 25 01:34:48.158067 ntpd[1889]: 25 Mar 01:34:48 ntpd[1889]: Listen normally on 11 cilium_vxlan [fe80::487b:33ff:fe00:aa07%6]:123 Mar 25 01:34:48.157900 ntpd[1889]: Listen normally on 9 cilium_net [fe80::242c:a0ff:fe6c:9a1c%4]:123 Mar 25 01:34:48.157956 ntpd[1889]: Listen normally on 10 cilium_host [fe80::8457:baff:fe6f:6068%5]:123 Mar 25 01:34:48.157994 ntpd[1889]: Listen normally on 11 cilium_vxlan [fe80::487b:33ff:fe00:aa07%6]:123 Mar 25 01:34:48.158531 ntpd[1889]: Listen normally on 12 lxc_health [fe80::d84d:d8ff:fe02:773e%8]:123 Mar 25 01:34:48.158995 ntpd[1889]: 25 Mar 01:34:48 ntpd[1889]: Listen normally on 12 lxc_health [fe80::d84d:d8ff:fe02:773e%8]:123 Mar 25 01:34:48.158995 ntpd[1889]: 25 Mar 01:34:48 ntpd[1889]: Listen normally on 13 lxc3115f39acfeb [fe80::8463:afff:fed2:c7f6%10]:123 Mar 25 01:34:48.158995 ntpd[1889]: 25 Mar 01:34:48 ntpd[1889]: Listen normally on 14 lxc8d907f07fa7d [fe80::cc6a:beff:fe19:289a%12]:123 Mar 25 01:34:48.158591 ntpd[1889]: Listen normally on 13 lxc3115f39acfeb [fe80::8463:afff:fed2:c7f6%10]:123 Mar 25 01:34:48.158632 ntpd[1889]: Listen normally on 14 lxc8d907f07fa7d [fe80::cc6a:beff:fe19:289a%12]:123 Mar 25 01:34:50.128464 containerd[1919]: time="2025-03-25T01:34:50.128385074Z" level=info msg="connecting to shim daa6f810292fab2e58e66c9f3b437387f3517d6d1ad9f765711810415d9caa3d" address="unix:///run/containerd/s/67c7f6fb594faf744fb8364a74fac0db3a36f9d52c7b24a63f2673f40de2da3c" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:34:50.136518 containerd[1919]: 
time="2025-03-25T01:34:50.136450151Z" level=info msg="connecting to shim 4fdf1f0c61b1e5ab0974cba1ee7c6b0cb955a9b1a8f10367e1410b1bc334770e" address="unix:///run/containerd/s/9f478d88cd071bc54afa15ca99f38943e33a624e45ee7b6d3d43c853223dc8df" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:34:50.248555 systemd[1]: Started cri-containerd-4fdf1f0c61b1e5ab0974cba1ee7c6b0cb955a9b1a8f10367e1410b1bc334770e.scope - libcontainer container 4fdf1f0c61b1e5ab0974cba1ee7c6b0cb955a9b1a8f10367e1410b1bc334770e. Mar 25 01:34:50.252601 systemd[1]: Started cri-containerd-daa6f810292fab2e58e66c9f3b437387f3517d6d1ad9f765711810415d9caa3d.scope - libcontainer container daa6f810292fab2e58e66c9f3b437387f3517d6d1ad9f765711810415d9caa3d. Mar 25 01:34:50.415745 containerd[1919]: time="2025-03-25T01:34:50.415618379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p6m97,Uid:ea677ec2-ef23-4bd3-a2f6-bd80c4a48e76,Namespace:kube-system,Attempt:0,} returns sandbox id \"daa6f810292fab2e58e66c9f3b437387f3517d6d1ad9f765711810415d9caa3d\"" Mar 25 01:34:50.426600 containerd[1919]: time="2025-03-25T01:34:50.424107523Z" level=info msg="CreateContainer within sandbox \"daa6f810292fab2e58e66c9f3b437387f3517d6d1ad9f765711810415d9caa3d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 25 01:34:50.444458 containerd[1919]: time="2025-03-25T01:34:50.444351166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xldb5,Uid:224bda55-70ef-403f-98ba-de6aa8dc4035,Namespace:kube-system,Attempt:0,} returns sandbox id \"4fdf1f0c61b1e5ab0974cba1ee7c6b0cb955a9b1a8f10367e1410b1bc334770e\"" Mar 25 01:34:50.447387 containerd[1919]: time="2025-03-25T01:34:50.446346467Z" level=info msg="Container d88bd286e267791b84e6a8c2ad8d8e1521ffe2e18c730ff65368afc84541f080: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:34:50.450509 containerd[1919]: time="2025-03-25T01:34:50.450397501Z" level=info msg="CreateContainer within sandbox 
\"4fdf1f0c61b1e5ab0974cba1ee7c6b0cb955a9b1a8f10367e1410b1bc334770e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 25 01:34:50.466241 containerd[1919]: time="2025-03-25T01:34:50.465137829Z" level=info msg="CreateContainer within sandbox \"daa6f810292fab2e58e66c9f3b437387f3517d6d1ad9f765711810415d9caa3d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d88bd286e267791b84e6a8c2ad8d8e1521ffe2e18c730ff65368afc84541f080\"" Mar 25 01:34:50.467214 containerd[1919]: time="2025-03-25T01:34:50.466742328Z" level=info msg="StartContainer for \"d88bd286e267791b84e6a8c2ad8d8e1521ffe2e18c730ff65368afc84541f080\"" Mar 25 01:34:50.468502 containerd[1919]: time="2025-03-25T01:34:50.468466163Z" level=info msg="Container fefa92be89d451c60eee1e2731aaf7ea67a3fff51ce072be62096d9566be3f73: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:34:50.470855 containerd[1919]: time="2025-03-25T01:34:50.470811760Z" level=info msg="connecting to shim d88bd286e267791b84e6a8c2ad8d8e1521ffe2e18c730ff65368afc84541f080" address="unix:///run/containerd/s/67c7f6fb594faf744fb8364a74fac0db3a36f9d52c7b24a63f2673f40de2da3c" protocol=ttrpc version=3 Mar 25 01:34:50.479470 containerd[1919]: time="2025-03-25T01:34:50.479338456Z" level=info msg="CreateContainer within sandbox \"4fdf1f0c61b1e5ab0974cba1ee7c6b0cb955a9b1a8f10367e1410b1bc334770e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fefa92be89d451c60eee1e2731aaf7ea67a3fff51ce072be62096d9566be3f73\"" Mar 25 01:34:50.481942 containerd[1919]: time="2025-03-25T01:34:50.481899717Z" level=info msg="StartContainer for \"fefa92be89d451c60eee1e2731aaf7ea67a3fff51ce072be62096d9566be3f73\"" Mar 25 01:34:50.484151 containerd[1919]: time="2025-03-25T01:34:50.484109994Z" level=info msg="connecting to shim fefa92be89d451c60eee1e2731aaf7ea67a3fff51ce072be62096d9566be3f73" address="unix:///run/containerd/s/9f478d88cd071bc54afa15ca99f38943e33a624e45ee7b6d3d43c853223dc8df" protocol=ttrpc version=3 Mar 25 
01:34:50.496540 systemd[1]: Started cri-containerd-d88bd286e267791b84e6a8c2ad8d8e1521ffe2e18c730ff65368afc84541f080.scope - libcontainer container d88bd286e267791b84e6a8c2ad8d8e1521ffe2e18c730ff65368afc84541f080. Mar 25 01:34:50.513697 systemd[1]: Started cri-containerd-fefa92be89d451c60eee1e2731aaf7ea67a3fff51ce072be62096d9566be3f73.scope - libcontainer container fefa92be89d451c60eee1e2731aaf7ea67a3fff51ce072be62096d9566be3f73. Mar 25 01:34:50.567660 containerd[1919]: time="2025-03-25T01:34:50.567621900Z" level=info msg="StartContainer for \"d88bd286e267791b84e6a8c2ad8d8e1521ffe2e18c730ff65368afc84541f080\" returns successfully" Mar 25 01:34:50.568619 containerd[1919]: time="2025-03-25T01:34:50.568574047Z" level=info msg="StartContainer for \"fefa92be89d451c60eee1e2731aaf7ea67a3fff51ce072be62096d9566be3f73\" returns successfully" Mar 25 01:34:50.886143 systemd[1]: Started sshd@8-172.31.17.232:22-147.75.109.163:53388.service - OpenSSH per-connection server daemon (147.75.109.163:53388). Mar 25 01:34:51.046055 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3685102726.mount: Deactivated successfully. Mar 25 01:34:51.092041 sshd[4747]: Accepted publickey for core from 147.75.109.163 port 53388 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc Mar 25 01:34:51.093600 sshd-session[4747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:34:51.108533 systemd-logind[1897]: New session 9 of user core. Mar 25 01:34:51.113478 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 25 01:34:51.384148 sshd[4749]: Connection closed by 147.75.109.163 port 53388 Mar 25 01:34:51.387285 sshd-session[4747]: pam_unix(sshd:session): session closed for user core Mar 25 01:34:51.392682 systemd-logind[1897]: Session 9 logged out. Waiting for processes to exit. Mar 25 01:34:51.393724 systemd[1]: sshd@8-172.31.17.232:22-147.75.109.163:53388.service: Deactivated successfully. 
Mar 25 01:34:51.398928 systemd[1]: session-9.scope: Deactivated successfully. Mar 25 01:34:51.401420 systemd-logind[1897]: Removed session 9. Mar 25 01:34:51.600383 kubelet[3196]: I0325 01:34:51.598274 3196 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-p6m97" podStartSLOduration=35.59815555 podStartE2EDuration="35.59815555s" podCreationTimestamp="2025-03-25 01:34:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:34:51.596547556 +0000 UTC m=+40.603944519" watchObservedRunningTime="2025-03-25 01:34:51.59815555 +0000 UTC m=+40.605552524" Mar 25 01:34:56.418473 systemd[1]: Started sshd@9-172.31.17.232:22-147.75.109.163:53390.service - OpenSSH per-connection server daemon (147.75.109.163:53390). Mar 25 01:34:56.603330 sshd[4777]: Accepted publickey for core from 147.75.109.163 port 53390 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc Mar 25 01:34:56.604967 sshd-session[4777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:34:56.610866 systemd-logind[1897]: New session 10 of user core. Mar 25 01:34:56.615551 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 25 01:34:56.831775 sshd[4779]: Connection closed by 147.75.109.163 port 53390 Mar 25 01:34:56.832510 sshd-session[4777]: pam_unix(sshd:session): session closed for user core Mar 25 01:34:56.836697 systemd[1]: sshd@9-172.31.17.232:22-147.75.109.163:53390.service: Deactivated successfully. Mar 25 01:34:56.838924 systemd[1]: session-10.scope: Deactivated successfully. Mar 25 01:34:56.840739 systemd-logind[1897]: Session 10 logged out. Waiting for processes to exit. Mar 25 01:34:56.842086 systemd-logind[1897]: Removed session 10. Mar 25 01:35:01.869851 systemd[1]: Started sshd@10-172.31.17.232:22-147.75.109.163:43012.service - OpenSSH per-connection server daemon (147.75.109.163:43012). 
Mar 25 01:35:02.071575 sshd[4798]: Accepted publickey for core from 147.75.109.163 port 43012 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc
Mar 25 01:35:02.077506 sshd-session[4798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:35:02.095494 systemd-logind[1897]: New session 11 of user core.
Mar 25 01:35:02.099538 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 25 01:35:02.392288 sshd[4800]: Connection closed by 147.75.109.163 port 43012
Mar 25 01:35:02.393633 sshd-session[4798]: pam_unix(sshd:session): session closed for user core
Mar 25 01:35:02.397517 systemd[1]: sshd@10-172.31.17.232:22-147.75.109.163:43012.service: Deactivated successfully.
Mar 25 01:35:02.400730 systemd[1]: session-11.scope: Deactivated successfully.
Mar 25 01:35:02.402888 systemd-logind[1897]: Session 11 logged out. Waiting for processes to exit.
Mar 25 01:35:02.404943 systemd-logind[1897]: Removed session 11.
Mar 25 01:35:02.430395 systemd[1]: Started sshd@11-172.31.17.232:22-147.75.109.163:43028.service - OpenSSH per-connection server daemon (147.75.109.163:43028).
Mar 25 01:35:02.601074 kubelet[3196]: I0325 01:35:02.601009 3196 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-xldb5" podStartSLOduration=46.600990179 podStartE2EDuration="46.600990179s" podCreationTimestamp="2025-03-25 01:34:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:34:51.627210838 +0000 UTC m=+40.634607789" watchObservedRunningTime="2025-03-25 01:35:02.600990179 +0000 UTC m=+51.608387131"
Mar 25 01:35:02.681061 sshd[4813]: Accepted publickey for core from 147.75.109.163 port 43028 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc
Mar 25 01:35:02.682631 sshd-session[4813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:35:02.690415 systemd-logind[1897]: New session 12 of user core.
Mar 25 01:35:02.704551 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 25 01:35:03.035360 sshd[4817]: Connection closed by 147.75.109.163 port 43028
Mar 25 01:35:03.038779 sshd-session[4813]: pam_unix(sshd:session): session closed for user core
Mar 25 01:35:03.048327 systemd-logind[1897]: Session 12 logged out. Waiting for processes to exit.
Mar 25 01:35:03.053258 systemd[1]: sshd@11-172.31.17.232:22-147.75.109.163:43028.service: Deactivated successfully.
Mar 25 01:35:03.073084 systemd[1]: session-12.scope: Deactivated successfully.
Mar 25 01:35:03.099259 systemd[1]: Started sshd@12-172.31.17.232:22-147.75.109.163:43032.service - OpenSSH per-connection server daemon (147.75.109.163:43032).
Mar 25 01:35:03.101004 systemd-logind[1897]: Removed session 12.
Mar 25 01:35:03.293729 sshd[4826]: Accepted publickey for core from 147.75.109.163 port 43032 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc
Mar 25 01:35:03.295539 sshd-session[4826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:35:03.303521 systemd-logind[1897]: New session 13 of user core.
Mar 25 01:35:03.317589 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 25 01:35:03.559615 sshd[4829]: Connection closed by 147.75.109.163 port 43032
Mar 25 01:35:03.560914 sshd-session[4826]: pam_unix(sshd:session): session closed for user core
Mar 25 01:35:03.566356 systemd[1]: sshd@12-172.31.17.232:22-147.75.109.163:43032.service: Deactivated successfully.
Mar 25 01:35:03.568763 systemd[1]: session-13.scope: Deactivated successfully.
Mar 25 01:35:03.569850 systemd-logind[1897]: Session 13 logged out. Waiting for processes to exit.
Mar 25 01:35:03.571214 systemd-logind[1897]: Removed session 13.
Mar 25 01:35:08.606179 systemd[1]: Started sshd@13-172.31.17.232:22-147.75.109.163:43034.service - OpenSSH per-connection server daemon (147.75.109.163:43034).
Mar 25 01:35:08.821519 sshd[4847]: Accepted publickey for core from 147.75.109.163 port 43034 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc
Mar 25 01:35:08.823861 sshd-session[4847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:35:08.840652 systemd-logind[1897]: New session 14 of user core.
Mar 25 01:35:08.849556 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 25 01:35:09.048371 sshd[4849]: Connection closed by 147.75.109.163 port 43034
Mar 25 01:35:09.049044 sshd-session[4847]: pam_unix(sshd:session): session closed for user core
Mar 25 01:35:09.052753 systemd[1]: sshd@13-172.31.17.232:22-147.75.109.163:43034.service: Deactivated successfully.
Mar 25 01:35:09.055260 systemd[1]: session-14.scope: Deactivated successfully.
Mar 25 01:35:09.057414 systemd-logind[1897]: Session 14 logged out. Waiting for processes to exit.
Mar 25 01:35:09.059604 systemd-logind[1897]: Removed session 14.
Mar 25 01:35:14.086257 systemd[1]: Started sshd@14-172.31.17.232:22-147.75.109.163:45200.service - OpenSSH per-connection server daemon (147.75.109.163:45200).
Mar 25 01:35:14.272805 sshd[4863]: Accepted publickey for core from 147.75.109.163 port 45200 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc
Mar 25 01:35:14.274576 sshd-session[4863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:35:14.280245 systemd-logind[1897]: New session 15 of user core.
Mar 25 01:35:14.292556 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 25 01:35:14.575248 sshd[4865]: Connection closed by 147.75.109.163 port 45200
Mar 25 01:35:14.577666 sshd-session[4863]: pam_unix(sshd:session): session closed for user core
Mar 25 01:35:14.590391 systemd[1]: sshd@14-172.31.17.232:22-147.75.109.163:45200.service: Deactivated successfully.
Mar 25 01:35:14.603096 systemd[1]: session-15.scope: Deactivated successfully.
Mar 25 01:35:14.606371 systemd-logind[1897]: Session 15 logged out. Waiting for processes to exit.
Mar 25 01:35:14.613451 systemd-logind[1897]: Removed session 15.
Mar 25 01:35:19.607264 systemd[1]: Started sshd@15-172.31.17.232:22-147.75.109.163:45202.service - OpenSSH per-connection server daemon (147.75.109.163:45202).
Mar 25 01:35:19.790040 sshd[4879]: Accepted publickey for core from 147.75.109.163 port 45202 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc
Mar 25 01:35:19.793874 sshd-session[4879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:35:19.807234 systemd-logind[1897]: New session 16 of user core.
Mar 25 01:35:19.814854 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 25 01:35:20.034277 sshd[4881]: Connection closed by 147.75.109.163 port 45202
Mar 25 01:35:20.036036 sshd-session[4879]: pam_unix(sshd:session): session closed for user core
Mar 25 01:35:20.040454 systemd[1]: sshd@15-172.31.17.232:22-147.75.109.163:45202.service: Deactivated successfully.
Mar 25 01:35:20.043987 systemd[1]: session-16.scope: Deactivated successfully.
Mar 25 01:35:20.045128 systemd-logind[1897]: Session 16 logged out. Waiting for processes to exit.
Mar 25 01:35:20.046365 systemd-logind[1897]: Removed session 16.
Mar 25 01:35:20.069539 systemd[1]: Started sshd@16-172.31.17.232:22-147.75.109.163:54186.service - OpenSSH per-connection server daemon (147.75.109.163:54186).
Mar 25 01:35:20.255340 sshd[4892]: Accepted publickey for core from 147.75.109.163 port 54186 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc
Mar 25 01:35:20.256853 sshd-session[4892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:35:20.276240 systemd-logind[1897]: New session 17 of user core.
Mar 25 01:35:20.280538 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 25 01:35:21.035475 sshd[4894]: Connection closed by 147.75.109.163 port 54186
Mar 25 01:35:21.036365 sshd-session[4892]: pam_unix(sshd:session): session closed for user core
Mar 25 01:35:21.043372 systemd[1]: sshd@16-172.31.17.232:22-147.75.109.163:54186.service: Deactivated successfully.
Mar 25 01:35:21.046420 systemd[1]: session-17.scope: Deactivated successfully.
Mar 25 01:35:21.047865 systemd-logind[1897]: Session 17 logged out. Waiting for processes to exit.
Mar 25 01:35:21.048988 systemd-logind[1897]: Removed session 17.
Mar 25 01:35:21.068384 systemd[1]: Started sshd@17-172.31.17.232:22-147.75.109.163:54190.service - OpenSSH per-connection server daemon (147.75.109.163:54190).
Mar 25 01:35:21.282710 sshd[4904]: Accepted publickey for core from 147.75.109.163 port 54190 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc
Mar 25 01:35:21.284273 sshd-session[4904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:35:21.293128 systemd-logind[1897]: New session 18 of user core.
Mar 25 01:35:21.299560 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 25 01:35:22.741553 sshd[4906]: Connection closed by 147.75.109.163 port 54190
Mar 25 01:35:22.743070 sshd-session[4904]: pam_unix(sshd:session): session closed for user core
Mar 25 01:35:22.746335 systemd[1]: sshd@17-172.31.17.232:22-147.75.109.163:54190.service: Deactivated successfully.
Mar 25 01:35:22.749196 systemd[1]: session-18.scope: Deactivated successfully.
Mar 25 01:35:22.751092 systemd-logind[1897]: Session 18 logged out. Waiting for processes to exit.
Mar 25 01:35:22.753163 systemd-logind[1897]: Removed session 18.
Mar 25 01:35:22.774239 systemd[1]: Started sshd@18-172.31.17.232:22-147.75.109.163:54206.service - OpenSSH per-connection server daemon (147.75.109.163:54206).
Mar 25 01:35:22.960838 sshd[4923]: Accepted publickey for core from 147.75.109.163 port 54206 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc
Mar 25 01:35:22.962287 sshd-session[4923]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:35:22.968412 systemd-logind[1897]: New session 19 of user core.
Mar 25 01:35:22.975523 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 25 01:35:23.419683 sshd[4925]: Connection closed by 147.75.109.163 port 54206
Mar 25 01:35:23.420118 sshd-session[4923]: pam_unix(sshd:session): session closed for user core
Mar 25 01:35:23.426886 systemd[1]: sshd@18-172.31.17.232:22-147.75.109.163:54206.service: Deactivated successfully.
Mar 25 01:35:23.433702 systemd[1]: session-19.scope: Deactivated successfully.
Mar 25 01:35:23.435414 systemd-logind[1897]: Session 19 logged out. Waiting for processes to exit.
Mar 25 01:35:23.438785 systemd-logind[1897]: Removed session 19.
Mar 25 01:35:23.479606 systemd[1]: Started sshd@19-172.31.17.232:22-147.75.109.163:54214.service - OpenSSH per-connection server daemon (147.75.109.163:54214).
Mar 25 01:35:23.671355 sshd[4935]: Accepted publickey for core from 147.75.109.163 port 54214 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc
Mar 25 01:35:23.672459 sshd-session[4935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:35:23.677275 systemd-logind[1897]: New session 20 of user core.
Mar 25 01:35:23.686536 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 25 01:35:23.888613 sshd[4937]: Connection closed by 147.75.109.163 port 54214
Mar 25 01:35:23.890445 sshd-session[4935]: pam_unix(sshd:session): session closed for user core
Mar 25 01:35:23.894740 systemd-logind[1897]: Session 20 logged out. Waiting for processes to exit.
Mar 25 01:35:23.896775 systemd[1]: sshd@19-172.31.17.232:22-147.75.109.163:54214.service: Deactivated successfully.
Mar 25 01:35:23.899529 systemd[1]: session-20.scope: Deactivated successfully.
Mar 25 01:35:23.900745 systemd-logind[1897]: Removed session 20.
Mar 25 01:35:28.934590 systemd[1]: Started sshd@20-172.31.17.232:22-147.75.109.163:54220.service - OpenSSH per-connection server daemon (147.75.109.163:54220).
Mar 25 01:35:29.125078 sshd[4949]: Accepted publickey for core from 147.75.109.163 port 54220 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc
Mar 25 01:35:29.131664 sshd-session[4949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:35:29.139626 systemd-logind[1897]: New session 21 of user core.
Mar 25 01:35:29.146513 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 25 01:35:29.388604 sshd[4951]: Connection closed by 147.75.109.163 port 54220
Mar 25 01:35:29.389644 sshd-session[4949]: pam_unix(sshd:session): session closed for user core
Mar 25 01:35:29.393155 systemd[1]: sshd@20-172.31.17.232:22-147.75.109.163:54220.service: Deactivated successfully.
Mar 25 01:35:29.395883 systemd[1]: session-21.scope: Deactivated successfully.
Mar 25 01:35:29.397908 systemd-logind[1897]: Session 21 logged out. Waiting for processes to exit.
Mar 25 01:35:29.399704 systemd-logind[1897]: Removed session 21.
Mar 25 01:35:34.428727 systemd[1]: Started sshd@21-172.31.17.232:22-147.75.109.163:51180.service - OpenSSH per-connection server daemon (147.75.109.163:51180).
Mar 25 01:35:34.606989 sshd[4967]: Accepted publickey for core from 147.75.109.163 port 51180 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc
Mar 25 01:35:34.608540 sshd-session[4967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:35:34.614370 systemd-logind[1897]: New session 22 of user core.
Mar 25 01:35:34.625668 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 25 01:35:34.820704 sshd[4969]: Connection closed by 147.75.109.163 port 51180
Mar 25 01:35:34.822941 sshd-session[4967]: pam_unix(sshd:session): session closed for user core
Mar 25 01:35:34.826536 systemd[1]: sshd@21-172.31.17.232:22-147.75.109.163:51180.service: Deactivated successfully.
Mar 25 01:35:34.828910 systemd[1]: session-22.scope: Deactivated successfully.
Mar 25 01:35:34.831176 systemd-logind[1897]: Session 22 logged out. Waiting for processes to exit.
Mar 25 01:35:34.832905 systemd-logind[1897]: Removed session 22.
Mar 25 01:35:39.855321 systemd[1]: Started sshd@22-172.31.17.232:22-147.75.109.163:51192.service - OpenSSH per-connection server daemon (147.75.109.163:51192).
Mar 25 01:35:40.054001 sshd[4982]: Accepted publickey for core from 147.75.109.163 port 51192 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc
Mar 25 01:35:40.055726 sshd-session[4982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:35:40.061242 systemd-logind[1897]: New session 23 of user core.
Mar 25 01:35:40.080612 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 25 01:35:40.327273 sshd[4984]: Connection closed by 147.75.109.163 port 51192
Mar 25 01:35:40.328001 sshd-session[4982]: pam_unix(sshd:session): session closed for user core
Mar 25 01:35:40.340073 systemd[1]: sshd@22-172.31.17.232:22-147.75.109.163:51192.service: Deactivated successfully.
Mar 25 01:35:40.346793 systemd[1]: session-23.scope: Deactivated successfully.
Mar 25 01:35:40.350057 systemd-logind[1897]: Session 23 logged out. Waiting for processes to exit.
Mar 25 01:35:40.352250 systemd-logind[1897]: Removed session 23.
Mar 25 01:35:45.360365 systemd[1]: Started sshd@23-172.31.17.232:22-147.75.109.163:42390.service - OpenSSH per-connection server daemon (147.75.109.163:42390).
Mar 25 01:35:45.580791 sshd[4996]: Accepted publickey for core from 147.75.109.163 port 42390 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc
Mar 25 01:35:45.582974 sshd-session[4996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:35:45.589382 systemd-logind[1897]: New session 24 of user core.
Mar 25 01:35:45.598532 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 25 01:35:45.835682 sshd[4998]: Connection closed by 147.75.109.163 port 42390
Mar 25 01:35:45.836432 sshd-session[4996]: pam_unix(sshd:session): session closed for user core
Mar 25 01:35:45.839747 systemd[1]: sshd@23-172.31.17.232:22-147.75.109.163:42390.service: Deactivated successfully.
Mar 25 01:35:45.842335 systemd[1]: session-24.scope: Deactivated successfully.
Mar 25 01:35:45.844564 systemd-logind[1897]: Session 24 logged out. Waiting for processes to exit.
Mar 25 01:35:45.846240 systemd-logind[1897]: Removed session 24.
Mar 25 01:35:45.874605 systemd[1]: Started sshd@24-172.31.17.232:22-147.75.109.163:42400.service - OpenSSH per-connection server daemon (147.75.109.163:42400).
Mar 25 01:35:46.046971 sshd[5010]: Accepted publickey for core from 147.75.109.163 port 42400 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc
Mar 25 01:35:46.048623 sshd-session[5010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:35:46.053383 systemd-logind[1897]: New session 25 of user core.
Mar 25 01:35:46.062585 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 25 01:35:50.559696 containerd[1919]: time="2025-03-25T01:35:50.559643830Z" level=info msg="StopContainer for \"f8e45f6d54118e37e160f3a55ff44dbfa2477b44df6a963012d1dd7037db6ad0\" with timeout 30 (s)"
Mar 25 01:35:50.560644 containerd[1919]: time="2025-03-25T01:35:50.560201257Z" level=info msg="Stop container \"f8e45f6d54118e37e160f3a55ff44dbfa2477b44df6a963012d1dd7037db6ad0\" with signal terminated"
Mar 25 01:35:50.572124 systemd[1]: cri-containerd-f8e45f6d54118e37e160f3a55ff44dbfa2477b44df6a963012d1dd7037db6ad0.scope: Deactivated successfully.
Mar 25 01:35:50.575971 containerd[1919]: time="2025-03-25T01:35:50.575818816Z" level=info msg="received exit event container_id:\"f8e45f6d54118e37e160f3a55ff44dbfa2477b44df6a963012d1dd7037db6ad0\" id:\"f8e45f6d54118e37e160f3a55ff44dbfa2477b44df6a963012d1dd7037db6ad0\" pid:4189 exited_at:{seconds:1742866550 nanos:574158285}"
Mar 25 01:35:50.576933 containerd[1919]: time="2025-03-25T01:35:50.576854270Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f8e45f6d54118e37e160f3a55ff44dbfa2477b44df6a963012d1dd7037db6ad0\" id:\"f8e45f6d54118e37e160f3a55ff44dbfa2477b44df6a963012d1dd7037db6ad0\" pid:4189 exited_at:{seconds:1742866550 nanos:574158285}"
Mar 25 01:35:50.608082 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8e45f6d54118e37e160f3a55ff44dbfa2477b44df6a963012d1dd7037db6ad0-rootfs.mount: Deactivated successfully.
Mar 25 01:35:50.624076 containerd[1919]: time="2025-03-25T01:35:50.624027509Z" level=info msg="StopContainer for \"f8e45f6d54118e37e160f3a55ff44dbfa2477b44df6a963012d1dd7037db6ad0\" returns successfully"
Mar 25 01:35:50.634771 containerd[1919]: time="2025-03-25T01:35:50.634680468Z" level=info msg="StopPodSandbox for \"cbc5714419181a46b3d7fced5b91c4e1afa6d189518f733cedb3151ed1281386\""
Mar 25 01:35:50.634925 containerd[1919]: time="2025-03-25T01:35:50.634826096Z" level=info msg="Container to stop \"f8e45f6d54118e37e160f3a55ff44dbfa2477b44df6a963012d1dd7037db6ad0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 25 01:35:50.644077 systemd[1]: cri-containerd-cbc5714419181a46b3d7fced5b91c4e1afa6d189518f733cedb3151ed1281386.scope: Deactivated successfully.
Mar 25 01:35:50.650708 containerd[1919]: time="2025-03-25T01:35:50.650450017Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cbc5714419181a46b3d7fced5b91c4e1afa6d189518f733cedb3151ed1281386\" id:\"cbc5714419181a46b3d7fced5b91c4e1afa6d189518f733cedb3151ed1281386\" pid:3433 exit_status:137 exited_at:{seconds:1742866550 nanos:649967880}"
Mar 25 01:35:50.670650 containerd[1919]: time="2025-03-25T01:35:50.670599252Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 25 01:35:50.678515 containerd[1919]: time="2025-03-25T01:35:50.677587485Z" level=info msg="StopContainer for \"0213c6c1327b66b8e0383887a706d9b46ccd267e96fc4f03a9479513622f8408\" with timeout 2 (s)"
Mar 25 01:35:50.678515 containerd[1919]: time="2025-03-25T01:35:50.677961117Z" level=info msg="Stop container \"0213c6c1327b66b8e0383887a706d9b46ccd267e96fc4f03a9479513622f8408\" with signal terminated"
Mar 25 01:35:50.694800 systemd-networkd[1821]: lxc_health: Link DOWN
Mar 25 01:35:50.694810 systemd-networkd[1821]: lxc_health: Lost carrier
Mar 25 01:35:50.720015 systemd[1]: cri-containerd-0213c6c1327b66b8e0383887a706d9b46ccd267e96fc4f03a9479513622f8408.scope: Deactivated successfully.
Mar 25 01:35:50.721486 systemd[1]: cri-containerd-0213c6c1327b66b8e0383887a706d9b46ccd267e96fc4f03a9479513622f8408.scope: Consumed 8.525s CPU time, 189.2M memory peak, 69.3M read from disk, 13.3M written to disk.
Mar 25 01:35:50.725977 containerd[1919]: time="2025-03-25T01:35:50.725939711Z" level=info msg="received exit event container_id:\"0213c6c1327b66b8e0383887a706d9b46ccd267e96fc4f03a9479513622f8408\" id:\"0213c6c1327b66b8e0383887a706d9b46ccd267e96fc4f03a9479513622f8408\" pid:4046 exited_at:{seconds:1742866550 nanos:725563098}"
Mar 25 01:35:50.738856 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cbc5714419181a46b3d7fced5b91c4e1afa6d189518f733cedb3151ed1281386-rootfs.mount: Deactivated successfully.
Mar 25 01:35:50.775227 containerd[1919]: time="2025-03-25T01:35:50.775175371Z" level=info msg="shim disconnected" id=cbc5714419181a46b3d7fced5b91c4e1afa6d189518f733cedb3151ed1281386 namespace=k8s.io
Mar 25 01:35:50.775227 containerd[1919]: time="2025-03-25T01:35:50.775210689Z" level=warning msg="cleaning up after shim disconnected" id=cbc5714419181a46b3d7fced5b91c4e1afa6d189518f733cedb3151ed1281386 namespace=k8s.io
Mar 25 01:35:50.775894 containerd[1919]: time="2025-03-25T01:35:50.775224767Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 25 01:35:50.806621 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0213c6c1327b66b8e0383887a706d9b46ccd267e96fc4f03a9479513622f8408-rootfs.mount: Deactivated successfully.
Mar 25 01:35:50.820532 containerd[1919]: time="2025-03-25T01:35:50.819163441Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0213c6c1327b66b8e0383887a706d9b46ccd267e96fc4f03a9479513622f8408\" id:\"64228328a087a73ebd2fe0e98a0bf0b4896b21f68c9ecd2bc80c6eb0c27814fd\" pid:5034 exited_at:{seconds:1742866550 nanos:673491584}"
Mar 25 01:35:50.820532 containerd[1919]: time="2025-03-25T01:35:50.819180963Z" level=info msg="received exit event sandbox_id:\"cbc5714419181a46b3d7fced5b91c4e1afa6d189518f733cedb3151ed1281386\" exit_status:137 exited_at:{seconds:1742866550 nanos:649967880}"
Mar 25 01:35:50.820532 containerd[1919]: time="2025-03-25T01:35:50.819678946Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0213c6c1327b66b8e0383887a706d9b46ccd267e96fc4f03a9479513622f8408\" id:\"0213c6c1327b66b8e0383887a706d9b46ccd267e96fc4f03a9479513622f8408\" pid:4046 exited_at:{seconds:1742866550 nanos:725563098}"
Mar 25 01:35:50.824253 containerd[1919]: time="2025-03-25T01:35:50.820939738Z" level=info msg="TearDown network for sandbox \"cbc5714419181a46b3d7fced5b91c4e1afa6d189518f733cedb3151ed1281386\" successfully"
Mar 25 01:35:50.824253 containerd[1919]: time="2025-03-25T01:35:50.820964565Z" level=info msg="StopPodSandbox for \"cbc5714419181a46b3d7fced5b91c4e1afa6d189518f733cedb3151ed1281386\" returns successfully"
Mar 25 01:35:50.826296 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cbc5714419181a46b3d7fced5b91c4e1afa6d189518f733cedb3151ed1281386-shm.mount: Deactivated successfully.
Mar 25 01:35:50.834479 containerd[1919]: time="2025-03-25T01:35:50.834101451Z" level=info msg="StopContainer for \"0213c6c1327b66b8e0383887a706d9b46ccd267e96fc4f03a9479513622f8408\" returns successfully"
Mar 25 01:35:50.836775 containerd[1919]: time="2025-03-25T01:35:50.835112817Z" level=info msg="StopPodSandbox for \"4042f67412f163352ef85747effae97a6b980838c25d3518a2c27e6fb4b6d95c\""
Mar 25 01:35:50.836775 containerd[1919]: time="2025-03-25T01:35:50.836510654Z" level=info msg="Container to stop \"cadbdaae18ffc2e694c9aa1c0a98c3adb6095f0c450b1c1ad54117324ef6052e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 25 01:35:50.836775 containerd[1919]: time="2025-03-25T01:35:50.836553876Z" level=info msg="Container to stop \"54a5534de023e7d8207651a58c2e82b0bd358406ee88c9c9285044d412ca9241\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 25 01:35:50.836775 containerd[1919]: time="2025-03-25T01:35:50.836567724Z" level=info msg="Container to stop \"2e635fe386dc242c5e09cf33d3826dfc9a7314cc20ebe9fe78ad97e120e857ce\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 25 01:35:50.836775 containerd[1919]: time="2025-03-25T01:35:50.836581084Z" level=info msg="Container to stop \"0213c6c1327b66b8e0383887a706d9b46ccd267e96fc4f03a9479513622f8408\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 25 01:35:50.836775 containerd[1919]: time="2025-03-25T01:35:50.836596027Z" level=info msg="Container to stop \"f04d4b980f777731198858dbabf30aa34238d1606d309f3588679dde9c973984\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 25 01:35:50.849973 systemd[1]: cri-containerd-4042f67412f163352ef85747effae97a6b980838c25d3518a2c27e6fb4b6d95c.scope: Deactivated successfully.
Mar 25 01:35:50.853816 containerd[1919]: time="2025-03-25T01:35:50.852880028Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4042f67412f163352ef85747effae97a6b980838c25d3518a2c27e6fb4b6d95c\" id:\"4042f67412f163352ef85747effae97a6b980838c25d3518a2c27e6fb4b6d95c\" pid:3344 exit_status:137 exited_at:{seconds:1742866550 nanos:851029655}"
Mar 25 01:35:50.888090 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4042f67412f163352ef85747effae97a6b980838c25d3518a2c27e6fb4b6d95c-rootfs.mount: Deactivated successfully.
Mar 25 01:35:50.902313 containerd[1919]: time="2025-03-25T01:35:50.902054016Z" level=info msg="shim disconnected" id=4042f67412f163352ef85747effae97a6b980838c25d3518a2c27e6fb4b6d95c namespace=k8s.io
Mar 25 01:35:50.902313 containerd[1919]: time="2025-03-25T01:35:50.902163019Z" level=warning msg="cleaning up after shim disconnected" id=4042f67412f163352ef85747effae97a6b980838c25d3518a2c27e6fb4b6d95c namespace=k8s.io
Mar 25 01:35:50.902313 containerd[1919]: time="2025-03-25T01:35:50.902178618Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 25 01:35:50.919141 containerd[1919]: time="2025-03-25T01:35:50.918467778Z" level=info msg="received exit event sandbox_id:\"4042f67412f163352ef85747effae97a6b980838c25d3518a2c27e6fb4b6d95c\" exit_status:137 exited_at:{seconds:1742866550 nanos:851029655}"
Mar 25 01:35:50.919141 containerd[1919]: time="2025-03-25T01:35:50.918587884Z" level=info msg="TearDown network for sandbox \"4042f67412f163352ef85747effae97a6b980838c25d3518a2c27e6fb4b6d95c\" successfully"
Mar 25 01:35:50.919141 containerd[1919]: time="2025-03-25T01:35:50.918606606Z" level=info msg="StopPodSandbox for \"4042f67412f163352ef85747effae97a6b980838c25d3518a2c27e6fb4b6d95c\" returns successfully"
Mar 25 01:35:51.065339 kubelet[3196]: I0325 01:35:51.062852 3196 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/47a3a49e-d9d5-4e05-8f4f-61f2dfec3438-hubble-tls\") pod \"47a3a49e-d9d5-4e05-8f4f-61f2dfec3438\" (UID: \"47a3a49e-d9d5-4e05-8f4f-61f2dfec3438\") "
Mar 25 01:35:51.065339 kubelet[3196]: I0325 01:35:51.062944 3196 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/47a3a49e-d9d5-4e05-8f4f-61f2dfec3438-bpf-maps\") pod \"47a3a49e-d9d5-4e05-8f4f-61f2dfec3438\" (UID: \"47a3a49e-d9d5-4e05-8f4f-61f2dfec3438\") "
Mar 25 01:35:51.065339 kubelet[3196]: I0325 01:35:51.062978 3196 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xcw7\" (UniqueName: \"kubernetes.io/projected/47a3a49e-d9d5-4e05-8f4f-61f2dfec3438-kube-api-access-9xcw7\") pod \"47a3a49e-d9d5-4e05-8f4f-61f2dfec3438\" (UID: \"47a3a49e-d9d5-4e05-8f4f-61f2dfec3438\") "
Mar 25 01:35:51.065339 kubelet[3196]: I0325 01:35:51.063013 3196 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/47a3a49e-d9d5-4e05-8f4f-61f2dfec3438-clustermesh-secrets\") pod \"47a3a49e-d9d5-4e05-8f4f-61f2dfec3438\" (UID: \"47a3a49e-d9d5-4e05-8f4f-61f2dfec3438\") "
Mar 25 01:35:51.065339 kubelet[3196]: I0325 01:35:51.063041 3196 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/47a3a49e-d9d5-4e05-8f4f-61f2dfec3438-cilium-config-path\") pod \"47a3a49e-d9d5-4e05-8f4f-61f2dfec3438\" (UID: \"47a3a49e-d9d5-4e05-8f4f-61f2dfec3438\") "
Mar 25 01:35:51.065339 kubelet[3196]: I0325 01:35:51.063063 3196 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/47a3a49e-d9d5-4e05-8f4f-61f2dfec3438-lib-modules\") pod \"47a3a49e-d9d5-4e05-8f4f-61f2dfec3438\" (UID: \"47a3a49e-d9d5-4e05-8f4f-61f2dfec3438\") "
Mar 25 01:35:51.066122 kubelet[3196]: I0325 01:35:51.063086 3196 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/47a3a49e-d9d5-4e05-8f4f-61f2dfec3438-etc-cni-netd\") pod \"47a3a49e-d9d5-4e05-8f4f-61f2dfec3438\" (UID: \"47a3a49e-d9d5-4e05-8f4f-61f2dfec3438\") "
Mar 25 01:35:51.066122 kubelet[3196]: I0325 01:35:51.063111 3196 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xztb\" (UniqueName: \"kubernetes.io/projected/fe2509a0-14fc-4a90-a491-b1a57f495ab6-kube-api-access-8xztb\") pod \"fe2509a0-14fc-4a90-a491-b1a57f495ab6\" (UID: \"fe2509a0-14fc-4a90-a491-b1a57f495ab6\") "
Mar 25 01:35:51.066122 kubelet[3196]: I0325 01:35:51.063134 3196 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/47a3a49e-d9d5-4e05-8f4f-61f2dfec3438-cilium-cgroup\") pod \"47a3a49e-d9d5-4e05-8f4f-61f2dfec3438\" (UID: \"47a3a49e-d9d5-4e05-8f4f-61f2dfec3438\") "
Mar 25 01:35:51.066122 kubelet[3196]: I0325 01:35:51.063155 3196 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/47a3a49e-d9d5-4e05-8f4f-61f2dfec3438-cni-path\") pod \"47a3a49e-d9d5-4e05-8f4f-61f2dfec3438\" (UID: \"47a3a49e-d9d5-4e05-8f4f-61f2dfec3438\") "
Mar 25 01:35:51.066122 kubelet[3196]: I0325 01:35:51.063180 3196 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/47a3a49e-d9d5-4e05-8f4f-61f2dfec3438-host-proc-sys-kernel\") pod \"47a3a49e-d9d5-4e05-8f4f-61f2dfec3438\" (UID: \"47a3a49e-d9d5-4e05-8f4f-61f2dfec3438\") "
Mar 25 01:35:51.066122 kubelet[3196]: I0325 01:35:51.063205 3196 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fe2509a0-14fc-4a90-a491-b1a57f495ab6-cilium-config-path\") pod \"fe2509a0-14fc-4a90-a491-b1a57f495ab6\" (UID: \"fe2509a0-14fc-4a90-a491-b1a57f495ab6\") "
Mar 25 01:35:51.066490 kubelet[3196]: I0325 01:35:51.063236 3196 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/47a3a49e-d9d5-4e05-8f4f-61f2dfec3438-hostproc\") pod \"47a3a49e-d9d5-4e05-8f4f-61f2dfec3438\" (UID: \"47a3a49e-d9d5-4e05-8f4f-61f2dfec3438\") "
Mar 25 01:35:51.066490 kubelet[3196]: I0325 01:35:51.063261 3196 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/47a3a49e-d9d5-4e05-8f4f-61f2dfec3438-xtables-lock\") pod \"47a3a49e-d9d5-4e05-8f4f-61f2dfec3438\" (UID: \"47a3a49e-d9d5-4e05-8f4f-61f2dfec3438\") "
Mar 25 01:35:51.066490 kubelet[3196]: I0325 01:35:51.063283 3196 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/47a3a49e-d9d5-4e05-8f4f-61f2dfec3438-host-proc-sys-net\") pod \"47a3a49e-d9d5-4e05-8f4f-61f2dfec3438\" (UID: \"47a3a49e-d9d5-4e05-8f4f-61f2dfec3438\") "
Mar 25 01:35:51.066490 kubelet[3196]: I0325 01:35:51.063321 3196 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/47a3a49e-d9d5-4e05-8f4f-61f2dfec3438-cilium-run\") pod \"47a3a49e-d9d5-4e05-8f4f-61f2dfec3438\" (UID: \"47a3a49e-d9d5-4e05-8f4f-61f2dfec3438\") "
Mar 25 01:35:51.083572 kubelet[3196]: I0325 01:35:51.078664 3196 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47a3a49e-d9d5-4e05-8f4f-61f2dfec3438-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "47a3a49e-d9d5-4e05-8f4f-61f2dfec3438" (UID: "47a3a49e-d9d5-4e05-8f4f-61f2dfec3438"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 25 01:35:51.083572 kubelet[3196]: I0325 01:35:51.080759 3196 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47a3a49e-d9d5-4e05-8f4f-61f2dfec3438-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "47a3a49e-d9d5-4e05-8f4f-61f2dfec3438" (UID: "47a3a49e-d9d5-4e05-8f4f-61f2dfec3438"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 25 01:35:51.083572 kubelet[3196]: I0325 01:35:51.079153 3196 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47a3a49e-d9d5-4e05-8f4f-61f2dfec3438-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "47a3a49e-d9d5-4e05-8f4f-61f2dfec3438" (UID: "47a3a49e-d9d5-4e05-8f4f-61f2dfec3438"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 25 01:35:51.083572 kubelet[3196]: I0325 01:35:51.080803 3196 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47a3a49e-d9d5-4e05-8f4f-61f2dfec3438-cni-path" (OuterVolumeSpecName: "cni-path") pod "47a3a49e-d9d5-4e05-8f4f-61f2dfec3438" (UID: "47a3a49e-d9d5-4e05-8f4f-61f2dfec3438"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 25 01:35:51.083572 kubelet[3196]: I0325 01:35:51.080808 3196 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47a3a49e-d9d5-4e05-8f4f-61f2dfec3438-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "47a3a49e-d9d5-4e05-8f4f-61f2dfec3438" (UID: "47a3a49e-d9d5-4e05-8f4f-61f2dfec3438"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 25 01:35:51.084013 kubelet[3196]: I0325 01:35:51.080976 3196 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47a3a49e-d9d5-4e05-8f4f-61f2dfec3438-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "47a3a49e-d9d5-4e05-8f4f-61f2dfec3438" (UID: "47a3a49e-d9d5-4e05-8f4f-61f2dfec3438"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 25 01:35:51.086886 kubelet[3196]: I0325 01:35:51.086409 3196 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47a3a49e-d9d5-4e05-8f4f-61f2dfec3438-kube-api-access-9xcw7" (OuterVolumeSpecName: "kube-api-access-9xcw7") pod "47a3a49e-d9d5-4e05-8f4f-61f2dfec3438" (UID: "47a3a49e-d9d5-4e05-8f4f-61f2dfec3438"). InnerVolumeSpecName "kube-api-access-9xcw7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 25 01:35:51.088704 kubelet[3196]: I0325 01:35:51.088664 3196 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe2509a0-14fc-4a90-a491-b1a57f495ab6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fe2509a0-14fc-4a90-a491-b1a57f495ab6" (UID: "fe2509a0-14fc-4a90-a491-b1a57f495ab6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 25 01:35:51.088919 kubelet[3196]: I0325 01:35:51.088899 3196 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47a3a49e-d9d5-4e05-8f4f-61f2dfec3438-hostproc" (OuterVolumeSpecName: "hostproc") pod "47a3a49e-d9d5-4e05-8f4f-61f2dfec3438" (UID: "47a3a49e-d9d5-4e05-8f4f-61f2dfec3438"). InnerVolumeSpecName "hostproc".
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 25 01:35:51.089020 kubelet[3196]: I0325 01:35:51.089004 3196 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47a3a49e-d9d5-4e05-8f4f-61f2dfec3438-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "47a3a49e-d9d5-4e05-8f4f-61f2dfec3438" (UID: "47a3a49e-d9d5-4e05-8f4f-61f2dfec3438"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 25 01:35:51.089101 kubelet[3196]: I0325 01:35:51.089087 3196 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47a3a49e-d9d5-4e05-8f4f-61f2dfec3438-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "47a3a49e-d9d5-4e05-8f4f-61f2dfec3438" (UID: "47a3a49e-d9d5-4e05-8f4f-61f2dfec3438"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 25 01:35:51.089186 kubelet[3196]: I0325 01:35:51.089173 3196 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47a3a49e-d9d5-4e05-8f4f-61f2dfec3438-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "47a3a49e-d9d5-4e05-8f4f-61f2dfec3438" (UID: "47a3a49e-d9d5-4e05-8f4f-61f2dfec3438"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 25 01:35:51.090581 kubelet[3196]: I0325 01:35:51.090549 3196 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47a3a49e-d9d5-4e05-8f4f-61f2dfec3438-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "47a3a49e-d9d5-4e05-8f4f-61f2dfec3438" (UID: "47a3a49e-d9d5-4e05-8f4f-61f2dfec3438"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 25 01:35:51.090843 kubelet[3196]: I0325 01:35:51.090699 3196 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47a3a49e-d9d5-4e05-8f4f-61f2dfec3438-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "47a3a49e-d9d5-4e05-8f4f-61f2dfec3438" (UID: "47a3a49e-d9d5-4e05-8f4f-61f2dfec3438"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 25 01:35:51.092829 kubelet[3196]: I0325 01:35:51.092794 3196 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47a3a49e-d9d5-4e05-8f4f-61f2dfec3438-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "47a3a49e-d9d5-4e05-8f4f-61f2dfec3438" (UID: "47a3a49e-d9d5-4e05-8f4f-61f2dfec3438"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 25 01:35:51.095629 kubelet[3196]: I0325 01:35:51.095593 3196 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe2509a0-14fc-4a90-a491-b1a57f495ab6-kube-api-access-8xztb" (OuterVolumeSpecName: "kube-api-access-8xztb") pod "fe2509a0-14fc-4a90-a491-b1a57f495ab6" (UID: "fe2509a0-14fc-4a90-a491-b1a57f495ab6"). InnerVolumeSpecName "kube-api-access-8xztb". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 25 01:35:51.164465 kubelet[3196]: I0325 01:35:51.164426 3196 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/47a3a49e-d9d5-4e05-8f4f-61f2dfec3438-host-proc-sys-net\") on node \"ip-172-31-17-232\" DevicePath \"\"" Mar 25 01:35:51.164465 kubelet[3196]: I0325 01:35:51.164461 3196 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/47a3a49e-d9d5-4e05-8f4f-61f2dfec3438-cilium-run\") on node \"ip-172-31-17-232\" DevicePath \"\"" Mar 25 01:35:51.164465 kubelet[3196]: I0325 01:35:51.164475 3196 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/47a3a49e-d9d5-4e05-8f4f-61f2dfec3438-hostproc\") on node \"ip-172-31-17-232\" DevicePath \"\"" Mar 25 01:35:51.164693 kubelet[3196]: I0325 01:35:51.164486 3196 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/47a3a49e-d9d5-4e05-8f4f-61f2dfec3438-xtables-lock\") on node \"ip-172-31-17-232\" DevicePath \"\"" Mar 25 01:35:51.164693 kubelet[3196]: I0325 01:35:51.164498 3196 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/47a3a49e-d9d5-4e05-8f4f-61f2dfec3438-hubble-tls\") on node \"ip-172-31-17-232\" DevicePath \"\"" Mar 25 01:35:51.168508 kubelet[3196]: I0325 01:35:51.168464 3196 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9xcw7\" (UniqueName: \"kubernetes.io/projected/47a3a49e-d9d5-4e05-8f4f-61f2dfec3438-kube-api-access-9xcw7\") on node \"ip-172-31-17-232\" DevicePath \"\"" Mar 25 01:35:51.168657 kubelet[3196]: I0325 01:35:51.168523 3196 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/47a3a49e-d9d5-4e05-8f4f-61f2dfec3438-bpf-maps\") on node \"ip-172-31-17-232\" DevicePath \"\"" Mar 25 
01:35:51.168657 kubelet[3196]: I0325 01:35:51.168541 3196 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/47a3a49e-d9d5-4e05-8f4f-61f2dfec3438-lib-modules\") on node \"ip-172-31-17-232\" DevicePath \"\"" Mar 25 01:35:51.168657 kubelet[3196]: I0325 01:35:51.168553 3196 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/47a3a49e-d9d5-4e05-8f4f-61f2dfec3438-etc-cni-netd\") on node \"ip-172-31-17-232\" DevicePath \"\"" Mar 25 01:35:51.168657 kubelet[3196]: I0325 01:35:51.168567 3196 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8xztb\" (UniqueName: \"kubernetes.io/projected/fe2509a0-14fc-4a90-a491-b1a57f495ab6-kube-api-access-8xztb\") on node \"ip-172-31-17-232\" DevicePath \"\"" Mar 25 01:35:51.168657 kubelet[3196]: I0325 01:35:51.168582 3196 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/47a3a49e-d9d5-4e05-8f4f-61f2dfec3438-clustermesh-secrets\") on node \"ip-172-31-17-232\" DevicePath \"\"" Mar 25 01:35:51.168657 kubelet[3196]: I0325 01:35:51.168594 3196 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/47a3a49e-d9d5-4e05-8f4f-61f2dfec3438-cilium-config-path\") on node \"ip-172-31-17-232\" DevicePath \"\"" Mar 25 01:35:51.168657 kubelet[3196]: I0325 01:35:51.168606 3196 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/47a3a49e-d9d5-4e05-8f4f-61f2dfec3438-cilium-cgroup\") on node \"ip-172-31-17-232\" DevicePath \"\"" Mar 25 01:35:51.168657 kubelet[3196]: I0325 01:35:51.168618 3196 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/47a3a49e-d9d5-4e05-8f4f-61f2dfec3438-cni-path\") on node \"ip-172-31-17-232\" DevicePath \"\"" Mar 25 01:35:51.168871 kubelet[3196]: I0325 
01:35:51.168629 3196 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/47a3a49e-d9d5-4e05-8f4f-61f2dfec3438-host-proc-sys-kernel\") on node \"ip-172-31-17-232\" DevicePath \"\"" Mar 25 01:35:51.168871 kubelet[3196]: I0325 01:35:51.168640 3196 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fe2509a0-14fc-4a90-a491-b1a57f495ab6-cilium-config-path\") on node \"ip-172-31-17-232\" DevicePath \"\"" Mar 25 01:35:51.282477 systemd[1]: Removed slice kubepods-besteffort-podfe2509a0_14fc_4a90_a491_b1a57f495ab6.slice - libcontainer container kubepods-besteffort-podfe2509a0_14fc_4a90_a491_b1a57f495ab6.slice. Mar 25 01:35:51.289904 systemd[1]: Removed slice kubepods-burstable-pod47a3a49e_d9d5_4e05_8f4f_61f2dfec3438.slice - libcontainer container kubepods-burstable-pod47a3a49e_d9d5_4e05_8f4f_61f2dfec3438.slice. Mar 25 01:35:51.290585 systemd[1]: kubepods-burstable-pod47a3a49e_d9d5_4e05_8f4f_61f2dfec3438.slice: Consumed 8.627s CPU time, 189.5M memory peak, 69.3M read from disk, 13.3M written to disk. Mar 25 01:35:51.444580 kubelet[3196]: E0325 01:35:51.443603 3196 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 25 01:35:51.606745 systemd[1]: var-lib-kubelet-pods-fe2509a0\x2d14fc\x2d4a90\x2da491\x2db1a57f495ab6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8xztb.mount: Deactivated successfully. Mar 25 01:35:51.606874 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4042f67412f163352ef85747effae97a6b980838c25d3518a2c27e6fb4b6d95c-shm.mount: Deactivated successfully. Mar 25 01:35:51.607040 systemd[1]: var-lib-kubelet-pods-47a3a49e\x2dd9d5\x2d4e05\x2d8f4f\x2d61f2dfec3438-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9xcw7.mount: Deactivated successfully. 
Mar 25 01:35:51.607138 systemd[1]: var-lib-kubelet-pods-47a3a49e\x2dd9d5\x2d4e05\x2d8f4f\x2d61f2dfec3438-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 25 01:35:51.607224 systemd[1]: var-lib-kubelet-pods-47a3a49e\x2dd9d5\x2d4e05\x2d8f4f\x2d61f2dfec3438-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 25 01:35:51.780882 kubelet[3196]: I0325 01:35:51.779939 3196 scope.go:117] "RemoveContainer" containerID="f8e45f6d54118e37e160f3a55ff44dbfa2477b44df6a963012d1dd7037db6ad0" Mar 25 01:35:51.800427 containerd[1919]: time="2025-03-25T01:35:51.800138489Z" level=info msg="RemoveContainer for \"f8e45f6d54118e37e160f3a55ff44dbfa2477b44df6a963012d1dd7037db6ad0\"" Mar 25 01:35:51.808054 containerd[1919]: time="2025-03-25T01:35:51.808009426Z" level=info msg="RemoveContainer for \"f8e45f6d54118e37e160f3a55ff44dbfa2477b44df6a963012d1dd7037db6ad0\" returns successfully" Mar 25 01:35:51.808665 kubelet[3196]: I0325 01:35:51.808529 3196 scope.go:117] "RemoveContainer" containerID="0213c6c1327b66b8e0383887a706d9b46ccd267e96fc4f03a9479513622f8408" Mar 25 01:35:51.813297 containerd[1919]: time="2025-03-25T01:35:51.812814310Z" level=info msg="RemoveContainer for \"0213c6c1327b66b8e0383887a706d9b46ccd267e96fc4f03a9479513622f8408\"" Mar 25 01:35:51.825639 containerd[1919]: time="2025-03-25T01:35:51.825600393Z" level=info msg="RemoveContainer for \"0213c6c1327b66b8e0383887a706d9b46ccd267e96fc4f03a9479513622f8408\" returns successfully" Mar 25 01:35:51.826144 kubelet[3196]: I0325 01:35:51.826122 3196 scope.go:117] "RemoveContainer" containerID="2e635fe386dc242c5e09cf33d3826dfc9a7314cc20ebe9fe78ad97e120e857ce" Mar 25 01:35:51.834263 containerd[1919]: time="2025-03-25T01:35:51.833277184Z" level=info msg="RemoveContainer for \"2e635fe386dc242c5e09cf33d3826dfc9a7314cc20ebe9fe78ad97e120e857ce\"" Mar 25 01:35:51.853275 containerd[1919]: time="2025-03-25T01:35:51.853231240Z" level=info msg="RemoveContainer for 
\"2e635fe386dc242c5e09cf33d3826dfc9a7314cc20ebe9fe78ad97e120e857ce\" returns successfully" Mar 25 01:35:51.856254 kubelet[3196]: I0325 01:35:51.856221 3196 scope.go:117] "RemoveContainer" containerID="f04d4b980f777731198858dbabf30aa34238d1606d309f3588679dde9c973984" Mar 25 01:35:51.858682 containerd[1919]: time="2025-03-25T01:35:51.858648007Z" level=info msg="RemoveContainer for \"f04d4b980f777731198858dbabf30aa34238d1606d309f3588679dde9c973984\"" Mar 25 01:35:51.865487 containerd[1919]: time="2025-03-25T01:35:51.865448369Z" level=info msg="RemoveContainer for \"f04d4b980f777731198858dbabf30aa34238d1606d309f3588679dde9c973984\" returns successfully" Mar 25 01:35:51.865706 kubelet[3196]: I0325 01:35:51.865684 3196 scope.go:117] "RemoveContainer" containerID="54a5534de023e7d8207651a58c2e82b0bd358406ee88c9c9285044d412ca9241" Mar 25 01:35:51.867382 containerd[1919]: time="2025-03-25T01:35:51.867354027Z" level=info msg="RemoveContainer for \"54a5534de023e7d8207651a58c2e82b0bd358406ee88c9c9285044d412ca9241\"" Mar 25 01:35:51.873066 containerd[1919]: time="2025-03-25T01:35:51.873026028Z" level=info msg="RemoveContainer for \"54a5534de023e7d8207651a58c2e82b0bd358406ee88c9c9285044d412ca9241\" returns successfully" Mar 25 01:35:51.873474 kubelet[3196]: I0325 01:35:51.873444 3196 scope.go:117] "RemoveContainer" containerID="cadbdaae18ffc2e694c9aa1c0a98c3adb6095f0c450b1c1ad54117324ef6052e" Mar 25 01:35:51.874926 containerd[1919]: time="2025-03-25T01:35:51.874900754Z" level=info msg="RemoveContainer for \"cadbdaae18ffc2e694c9aa1c0a98c3adb6095f0c450b1c1ad54117324ef6052e\"" Mar 25 01:35:51.880315 containerd[1919]: time="2025-03-25T01:35:51.880270232Z" level=info msg="RemoveContainer for \"cadbdaae18ffc2e694c9aa1c0a98c3adb6095f0c450b1c1ad54117324ef6052e\" returns successfully" Mar 25 01:35:51.880561 kubelet[3196]: I0325 01:35:51.880542 3196 scope.go:117] "RemoveContainer" containerID="0213c6c1327b66b8e0383887a706d9b46ccd267e96fc4f03a9479513622f8408" Mar 25 01:35:51.887602 
containerd[1919]: time="2025-03-25T01:35:51.880760461Z" level=error msg="ContainerStatus for \"0213c6c1327b66b8e0383887a706d9b46ccd267e96fc4f03a9479513622f8408\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0213c6c1327b66b8e0383887a706d9b46ccd267e96fc4f03a9479513622f8408\": not found" Mar 25 01:35:51.889498 kubelet[3196]: E0325 01:35:51.889457 3196 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0213c6c1327b66b8e0383887a706d9b46ccd267e96fc4f03a9479513622f8408\": not found" containerID="0213c6c1327b66b8e0383887a706d9b46ccd267e96fc4f03a9479513622f8408" Mar 25 01:35:51.901254 kubelet[3196]: I0325 01:35:51.889515 3196 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0213c6c1327b66b8e0383887a706d9b46ccd267e96fc4f03a9479513622f8408"} err="failed to get container status \"0213c6c1327b66b8e0383887a706d9b46ccd267e96fc4f03a9479513622f8408\": rpc error: code = NotFound desc = an error occurred when try to find container \"0213c6c1327b66b8e0383887a706d9b46ccd267e96fc4f03a9479513622f8408\": not found" Mar 25 01:35:51.901254 kubelet[3196]: I0325 01:35:51.901089 3196 scope.go:117] "RemoveContainer" containerID="2e635fe386dc242c5e09cf33d3826dfc9a7314cc20ebe9fe78ad97e120e857ce" Mar 25 01:35:51.903478 containerd[1919]: time="2025-03-25T01:35:51.901763248Z" level=error msg="ContainerStatus for \"2e635fe386dc242c5e09cf33d3826dfc9a7314cc20ebe9fe78ad97e120e857ce\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2e635fe386dc242c5e09cf33d3826dfc9a7314cc20ebe9fe78ad97e120e857ce\": not found" Mar 25 01:35:51.903707 kubelet[3196]: E0325 01:35:51.903675 3196 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"2e635fe386dc242c5e09cf33d3826dfc9a7314cc20ebe9fe78ad97e120e857ce\": not found" containerID="2e635fe386dc242c5e09cf33d3826dfc9a7314cc20ebe9fe78ad97e120e857ce" Mar 25 01:35:51.903783 kubelet[3196]: I0325 01:35:51.903708 3196 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2e635fe386dc242c5e09cf33d3826dfc9a7314cc20ebe9fe78ad97e120e857ce"} err="failed to get container status \"2e635fe386dc242c5e09cf33d3826dfc9a7314cc20ebe9fe78ad97e120e857ce\": rpc error: code = NotFound desc = an error occurred when try to find container \"2e635fe386dc242c5e09cf33d3826dfc9a7314cc20ebe9fe78ad97e120e857ce\": not found" Mar 25 01:35:51.903783 kubelet[3196]: I0325 01:35:51.903735 3196 scope.go:117] "RemoveContainer" containerID="f04d4b980f777731198858dbabf30aa34238d1606d309f3588679dde9c973984" Mar 25 01:35:51.904100 containerd[1919]: time="2025-03-25T01:35:51.904026229Z" level=error msg="ContainerStatus for \"f04d4b980f777731198858dbabf30aa34238d1606d309f3588679dde9c973984\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f04d4b980f777731198858dbabf30aa34238d1606d309f3588679dde9c973984\": not found" Mar 25 01:35:51.904468 kubelet[3196]: E0325 01:35:51.904438 3196 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f04d4b980f777731198858dbabf30aa34238d1606d309f3588679dde9c973984\": not found" containerID="f04d4b980f777731198858dbabf30aa34238d1606d309f3588679dde9c973984" Mar 25 01:35:51.904839 kubelet[3196]: I0325 01:35:51.904569 3196 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f04d4b980f777731198858dbabf30aa34238d1606d309f3588679dde9c973984"} err="failed to get container status \"f04d4b980f777731198858dbabf30aa34238d1606d309f3588679dde9c973984\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"f04d4b980f777731198858dbabf30aa34238d1606d309f3588679dde9c973984\": not found" Mar 25 01:35:51.904839 kubelet[3196]: I0325 01:35:51.904592 3196 scope.go:117] "RemoveContainer" containerID="54a5534de023e7d8207651a58c2e82b0bd358406ee88c9c9285044d412ca9241" Mar 25 01:35:51.904953 containerd[1919]: time="2025-03-25T01:35:51.904780629Z" level=error msg="ContainerStatus for \"54a5534de023e7d8207651a58c2e82b0bd358406ee88c9c9285044d412ca9241\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"54a5534de023e7d8207651a58c2e82b0bd358406ee88c9c9285044d412ca9241\": not found" Mar 25 01:35:51.905406 kubelet[3196]: E0325 01:35:51.905068 3196 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"54a5534de023e7d8207651a58c2e82b0bd358406ee88c9c9285044d412ca9241\": not found" containerID="54a5534de023e7d8207651a58c2e82b0bd358406ee88c9c9285044d412ca9241" Mar 25 01:35:51.905406 kubelet[3196]: I0325 01:35:51.905112 3196 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"54a5534de023e7d8207651a58c2e82b0bd358406ee88c9c9285044d412ca9241"} err="failed to get container status \"54a5534de023e7d8207651a58c2e82b0bd358406ee88c9c9285044d412ca9241\": rpc error: code = NotFound desc = an error occurred when try to find container \"54a5534de023e7d8207651a58c2e82b0bd358406ee88c9c9285044d412ca9241\": not found" Mar 25 01:35:51.905406 kubelet[3196]: I0325 01:35:51.905132 3196 scope.go:117] "RemoveContainer" containerID="cadbdaae18ffc2e694c9aa1c0a98c3adb6095f0c450b1c1ad54117324ef6052e" Mar 25 01:35:51.905728 containerd[1919]: time="2025-03-25T01:35:51.905339780Z" level=error msg="ContainerStatus for \"cadbdaae18ffc2e694c9aa1c0a98c3adb6095f0c450b1c1ad54117324ef6052e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"cadbdaae18ffc2e694c9aa1c0a98c3adb6095f0c450b1c1ad54117324ef6052e\": not found" Mar 25 01:35:51.905780 kubelet[3196]: E0325 01:35:51.905541 3196 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cadbdaae18ffc2e694c9aa1c0a98c3adb6095f0c450b1c1ad54117324ef6052e\": not found" containerID="cadbdaae18ffc2e694c9aa1c0a98c3adb6095f0c450b1c1ad54117324ef6052e" Mar 25 01:35:51.905780 kubelet[3196]: I0325 01:35:51.905565 3196 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cadbdaae18ffc2e694c9aa1c0a98c3adb6095f0c450b1c1ad54117324ef6052e"} err="failed to get container status \"cadbdaae18ffc2e694c9aa1c0a98c3adb6095f0c450b1c1ad54117324ef6052e\": rpc error: code = NotFound desc = an error occurred when try to find container \"cadbdaae18ffc2e694c9aa1c0a98c3adb6095f0c450b1c1ad54117324ef6052e\": not found" Mar 25 01:35:52.219312 sshd[5012]: Connection closed by 147.75.109.163 port 42400 Mar 25 01:35:52.220377 sshd-session[5010]: pam_unix(sshd:session): session closed for user core Mar 25 01:35:52.224233 systemd[1]: sshd@24-172.31.17.232:22-147.75.109.163:42400.service: Deactivated successfully. Mar 25 01:35:52.227089 systemd[1]: session-25.scope: Deactivated successfully. Mar 25 01:35:52.229119 systemd-logind[1897]: Session 25 logged out. Waiting for processes to exit. Mar 25 01:35:52.231345 systemd-logind[1897]: Removed session 25. Mar 25 01:35:52.256271 systemd[1]: Started sshd@25-172.31.17.232:22-147.75.109.163:59486.service - OpenSSH per-connection server daemon (147.75.109.163:59486). Mar 25 01:35:52.459933 sshd[5169]: Accepted publickey for core from 147.75.109.163 port 59486 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc Mar 25 01:35:52.461416 sshd-session[5169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:35:52.468000 systemd-logind[1897]: New session 26 of user core. 
Mar 25 01:35:52.473611 systemd[1]: Started session-26.scope - Session 26 of User core. Mar 25 01:35:53.157195 ntpd[1889]: Deleting interface #12 lxc_health, fe80::d84d:d8ff:fe02:773e%8#123, interface stats: received=0, sent=0, dropped=0, active_time=65 secs Mar 25 01:35:53.157801 ntpd[1889]: 25 Mar 01:35:53 ntpd[1889]: Deleting interface #12 lxc_health, fe80::d84d:d8ff:fe02:773e%8#123, interface stats: received=0, sent=0, dropped=0, active_time=65 secs Mar 25 01:35:53.266332 kubelet[3196]: I0325 01:35:53.265856 3196 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47a3a49e-d9d5-4e05-8f4f-61f2dfec3438" path="/var/lib/kubelet/pods/47a3a49e-d9d5-4e05-8f4f-61f2dfec3438/volumes" Mar 25 01:35:53.267731 kubelet[3196]: I0325 01:35:53.267382 3196 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe2509a0-14fc-4a90-a491-b1a57f495ab6" path="/var/lib/kubelet/pods/fe2509a0-14fc-4a90-a491-b1a57f495ab6/volumes" Mar 25 01:35:53.517469 kubelet[3196]: I0325 01:35:53.517347 3196 setters.go:602] "Node became not ready" node="ip-172-31-17-232" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-25T01:35:53Z","lastTransitionTime":"2025-03-25T01:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 25 01:35:54.035561 sshd[5171]: Connection closed by 147.75.109.163 port 59486 Mar 25 01:35:54.039227 sshd-session[5169]: pam_unix(sshd:session): session closed for user core Mar 25 01:35:54.051679 systemd[1]: sshd@25-172.31.17.232:22-147.75.109.163:59486.service: Deactivated successfully. 
Mar 25 01:35:54.052682 kubelet[3196]: I0325 01:35:54.051378 3196 memory_manager.go:355] "RemoveStaleState removing state" podUID="47a3a49e-d9d5-4e05-8f4f-61f2dfec3438" containerName="cilium-agent" Mar 25 01:35:54.052682 kubelet[3196]: I0325 01:35:54.051726 3196 memory_manager.go:355] "RemoveStaleState removing state" podUID="fe2509a0-14fc-4a90-a491-b1a57f495ab6" containerName="cilium-operator" Mar 25 01:35:54.060865 systemd[1]: session-26.scope: Deactivated successfully. Mar 25 01:35:54.062692 systemd-logind[1897]: Session 26 logged out. Waiting for processes to exit. Mar 25 01:35:54.085759 systemd[1]: Started sshd@26-172.31.17.232:22-147.75.109.163:59502.service - OpenSSH per-connection server daemon (147.75.109.163:59502). Mar 25 01:35:54.090477 systemd-logind[1897]: Removed session 26. Mar 25 01:35:54.109074 kubelet[3196]: I0325 01:35:54.109026 3196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/78a0678f-2558-4d07-8a4f-26b6bd078040-bpf-maps\") pod \"cilium-2j9tg\" (UID: \"78a0678f-2558-4d07-8a4f-26b6bd078040\") " pod="kube-system/cilium-2j9tg" Mar 25 01:35:54.109227 kubelet[3196]: I0325 01:35:54.109106 3196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/78a0678f-2558-4d07-8a4f-26b6bd078040-lib-modules\") pod \"cilium-2j9tg\" (UID: \"78a0678f-2558-4d07-8a4f-26b6bd078040\") " pod="kube-system/cilium-2j9tg" Mar 25 01:35:54.109227 kubelet[3196]: I0325 01:35:54.109138 3196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/78a0678f-2558-4d07-8a4f-26b6bd078040-hostproc\") pod \"cilium-2j9tg\" (UID: \"78a0678f-2558-4d07-8a4f-26b6bd078040\") " pod="kube-system/cilium-2j9tg" Mar 25 01:35:54.109227 kubelet[3196]: I0325 01:35:54.109169 3196 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/78a0678f-2558-4d07-8a4f-26b6bd078040-hubble-tls\") pod \"cilium-2j9tg\" (UID: \"78a0678f-2558-4d07-8a4f-26b6bd078040\") " pod="kube-system/cilium-2j9tg" Mar 25 01:35:54.109227 kubelet[3196]: I0325 01:35:54.109197 3196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/78a0678f-2558-4d07-8a4f-26b6bd078040-clustermesh-secrets\") pod \"cilium-2j9tg\" (UID: \"78a0678f-2558-4d07-8a4f-26b6bd078040\") " pod="kube-system/cilium-2j9tg" Mar 25 01:35:54.109227 kubelet[3196]: I0325 01:35:54.109226 3196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/78a0678f-2558-4d07-8a4f-26b6bd078040-cilium-run\") pod \"cilium-2j9tg\" (UID: \"78a0678f-2558-4d07-8a4f-26b6bd078040\") " pod="kube-system/cilium-2j9tg" Mar 25 01:35:54.110153 kubelet[3196]: I0325 01:35:54.109254 3196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/78a0678f-2558-4d07-8a4f-26b6bd078040-cilium-cgroup\") pod \"cilium-2j9tg\" (UID: \"78a0678f-2558-4d07-8a4f-26b6bd078040\") " pod="kube-system/cilium-2j9tg" Mar 25 01:35:54.110153 kubelet[3196]: I0325 01:35:54.109283 3196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/78a0678f-2558-4d07-8a4f-26b6bd078040-etc-cni-netd\") pod \"cilium-2j9tg\" (UID: \"78a0678f-2558-4d07-8a4f-26b6bd078040\") " pod="kube-system/cilium-2j9tg" Mar 25 01:35:54.110153 kubelet[3196]: I0325 01:35:54.109385 3196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/78a0678f-2558-4d07-8a4f-26b6bd078040-xtables-lock\") pod \"cilium-2j9tg\" (UID: \"78a0678f-2558-4d07-8a4f-26b6bd078040\") " pod="kube-system/cilium-2j9tg" Mar 25 01:35:54.110153 kubelet[3196]: I0325 01:35:54.109429 3196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/78a0678f-2558-4d07-8a4f-26b6bd078040-cilium-config-path\") pod \"cilium-2j9tg\" (UID: \"78a0678f-2558-4d07-8a4f-26b6bd078040\") " pod="kube-system/cilium-2j9tg" Mar 25 01:35:54.110153 kubelet[3196]: I0325 01:35:54.109457 3196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/78a0678f-2558-4d07-8a4f-26b6bd078040-cilium-ipsec-secrets\") pod \"cilium-2j9tg\" (UID: \"78a0678f-2558-4d07-8a4f-26b6bd078040\") " pod="kube-system/cilium-2j9tg" Mar 25 01:35:54.110153 kubelet[3196]: I0325 01:35:54.109489 3196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/78a0678f-2558-4d07-8a4f-26b6bd078040-host-proc-sys-net\") pod \"cilium-2j9tg\" (UID: \"78a0678f-2558-4d07-8a4f-26b6bd078040\") " pod="kube-system/cilium-2j9tg" Mar 25 01:35:54.110431 kubelet[3196]: I0325 01:35:54.109518 3196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/78a0678f-2558-4d07-8a4f-26b6bd078040-host-proc-sys-kernel\") pod \"cilium-2j9tg\" (UID: \"78a0678f-2558-4d07-8a4f-26b6bd078040\") " pod="kube-system/cilium-2j9tg" Mar 25 01:35:54.110431 kubelet[3196]: I0325 01:35:54.109547 3196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/78a0678f-2558-4d07-8a4f-26b6bd078040-cni-path\") pod \"cilium-2j9tg\" 
(UID: \"78a0678f-2558-4d07-8a4f-26b6bd078040\") " pod="kube-system/cilium-2j9tg" Mar 25 01:35:54.110431 kubelet[3196]: I0325 01:35:54.109583 3196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jhd8\" (UniqueName: \"kubernetes.io/projected/78a0678f-2558-4d07-8a4f-26b6bd078040-kube-api-access-5jhd8\") pod \"cilium-2j9tg\" (UID: \"78a0678f-2558-4d07-8a4f-26b6bd078040\") " pod="kube-system/cilium-2j9tg" Mar 25 01:35:54.244980 systemd[1]: Created slice kubepods-burstable-pod78a0678f_2558_4d07_8a4f_26b6bd078040.slice - libcontainer container kubepods-burstable-pod78a0678f_2558_4d07_8a4f_26b6bd078040.slice. Mar 25 01:35:54.297178 containerd[1919]: time="2025-03-25T01:35:54.297128309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2j9tg,Uid:78a0678f-2558-4d07-8a4f-26b6bd078040,Namespace:kube-system,Attempt:0,}" Mar 25 01:35:54.332282 sshd[5181]: Accepted publickey for core from 147.75.109.163 port 59502 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc Mar 25 01:35:54.335836 sshd-session[5181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:35:54.340599 containerd[1919]: time="2025-03-25T01:35:54.340329413Z" level=info msg="connecting to shim 1c70ed3bf0c840dc468e07c9af424b440590c81db05d101ac1aaf6c6c8f6ba5a" address="unix:///run/containerd/s/439a7a5857db854937cc8448c33850417ab5d04bc1167277e72d2fc2f1e09f94" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:35:54.345799 systemd-logind[1897]: New session 27 of user core. Mar 25 01:35:54.351339 systemd[1]: Started session-27.scope - Session 27 of User core. Mar 25 01:35:54.372530 systemd[1]: Started cri-containerd-1c70ed3bf0c840dc468e07c9af424b440590c81db05d101ac1aaf6c6c8f6ba5a.scope - libcontainer container 1c70ed3bf0c840dc468e07c9af424b440590c81db05d101ac1aaf6c6c8f6ba5a. 
Mar 25 01:35:54.411041 containerd[1919]: time="2025-03-25T01:35:54.410037542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2j9tg,Uid:78a0678f-2558-4d07-8a4f-26b6bd078040,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c70ed3bf0c840dc468e07c9af424b440590c81db05d101ac1aaf6c6c8f6ba5a\"" Mar 25 01:35:54.419581 containerd[1919]: time="2025-03-25T01:35:54.419467545Z" level=info msg="CreateContainer within sandbox \"1c70ed3bf0c840dc468e07c9af424b440590c81db05d101ac1aaf6c6c8f6ba5a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 25 01:35:54.434481 containerd[1919]: time="2025-03-25T01:35:54.434433275Z" level=info msg="Container 8e26b157e23d3d1be2a47b3c2794fd914ad40a87b9401911548dc69b2441a157: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:35:54.448450 containerd[1919]: time="2025-03-25T01:35:54.447902973Z" level=info msg="CreateContainer within sandbox \"1c70ed3bf0c840dc468e07c9af424b440590c81db05d101ac1aaf6c6c8f6ba5a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8e26b157e23d3d1be2a47b3c2794fd914ad40a87b9401911548dc69b2441a157\"" Mar 25 01:35:54.448715 containerd[1919]: time="2025-03-25T01:35:54.448651467Z" level=info msg="StartContainer for \"8e26b157e23d3d1be2a47b3c2794fd914ad40a87b9401911548dc69b2441a157\"" Mar 25 01:35:54.449742 containerd[1919]: time="2025-03-25T01:35:54.449707967Z" level=info msg="connecting to shim 8e26b157e23d3d1be2a47b3c2794fd914ad40a87b9401911548dc69b2441a157" address="unix:///run/containerd/s/439a7a5857db854937cc8448c33850417ab5d04bc1167277e72d2fc2f1e09f94" protocol=ttrpc version=3 Mar 25 01:35:54.474524 systemd[1]: Started cri-containerd-8e26b157e23d3d1be2a47b3c2794fd914ad40a87b9401911548dc69b2441a157.scope - libcontainer container 8e26b157e23d3d1be2a47b3c2794fd914ad40a87b9401911548dc69b2441a157. 
Mar 25 01:35:54.475806 sshd[5215]: Connection closed by 147.75.109.163 port 59502 Mar 25 01:35:54.477520 sshd-session[5181]: pam_unix(sshd:session): session closed for user core Mar 25 01:35:54.483726 systemd[1]: sshd@26-172.31.17.232:22-147.75.109.163:59502.service: Deactivated successfully. Mar 25 01:35:54.489155 systemd[1]: session-27.scope: Deactivated successfully. Mar 25 01:35:54.491816 systemd-logind[1897]: Session 27 logged out. Waiting for processes to exit. Mar 25 01:35:54.495005 systemd-logind[1897]: Removed session 27. Mar 25 01:35:54.510579 systemd[1]: Started sshd@27-172.31.17.232:22-147.75.109.163:59510.service - OpenSSH per-connection server daemon (147.75.109.163:59510). Mar 25 01:35:54.557971 containerd[1919]: time="2025-03-25T01:35:54.556718153Z" level=info msg="StartContainer for \"8e26b157e23d3d1be2a47b3c2794fd914ad40a87b9401911548dc69b2441a157\" returns successfully" Mar 25 01:35:54.581476 systemd[1]: cri-containerd-8e26b157e23d3d1be2a47b3c2794fd914ad40a87b9401911548dc69b2441a157.scope: Deactivated successfully. Mar 25 01:35:54.582184 systemd[1]: cri-containerd-8e26b157e23d3d1be2a47b3c2794fd914ad40a87b9401911548dc69b2441a157.scope: Consumed 26ms CPU time, 9.7M memory peak, 3.3M read from disk. 
Mar 25 01:35:54.583783 containerd[1919]: time="2025-03-25T01:35:54.583740438Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8e26b157e23d3d1be2a47b3c2794fd914ad40a87b9401911548dc69b2441a157\" id:\"8e26b157e23d3d1be2a47b3c2794fd914ad40a87b9401911548dc69b2441a157\" pid:5249 exited_at:{seconds:1742866554 nanos:583215644}" Mar 25 01:35:54.583887 containerd[1919]: time="2025-03-25T01:35:54.583828655Z" level=info msg="received exit event container_id:\"8e26b157e23d3d1be2a47b3c2794fd914ad40a87b9401911548dc69b2441a157\" id:\"8e26b157e23d3d1be2a47b3c2794fd914ad40a87b9401911548dc69b2441a157\" pid:5249 exited_at:{seconds:1742866554 nanos:583215644}" Mar 25 01:35:54.713326 sshd[5261]: Accepted publickey for core from 147.75.109.163 port 59510 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc Mar 25 01:35:54.715351 sshd-session[5261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:35:54.720543 systemd-logind[1897]: New session 28 of user core. Mar 25 01:35:54.734945 systemd[1]: Started session-28.scope - Session 28 of User core. 
Mar 25 01:35:54.795153 containerd[1919]: time="2025-03-25T01:35:54.795061029Z" level=info msg="CreateContainer within sandbox \"1c70ed3bf0c840dc468e07c9af424b440590c81db05d101ac1aaf6c6c8f6ba5a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 25 01:35:54.808239 containerd[1919]: time="2025-03-25T01:35:54.807157091Z" level=info msg="Container ee3eadcc4cac949f4658626e1dd4a1a98387cc79db891fcc633d279dbd925722: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:35:54.821887 containerd[1919]: time="2025-03-25T01:35:54.821844273Z" level=info msg="CreateContainer within sandbox \"1c70ed3bf0c840dc468e07c9af424b440590c81db05d101ac1aaf6c6c8f6ba5a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ee3eadcc4cac949f4658626e1dd4a1a98387cc79db891fcc633d279dbd925722\"" Mar 25 01:35:54.824431 containerd[1919]: time="2025-03-25T01:35:54.822645081Z" level=info msg="StartContainer for \"ee3eadcc4cac949f4658626e1dd4a1a98387cc79db891fcc633d279dbd925722\"" Mar 25 01:35:54.824431 containerd[1919]: time="2025-03-25T01:35:54.823990201Z" level=info msg="connecting to shim ee3eadcc4cac949f4658626e1dd4a1a98387cc79db891fcc633d279dbd925722" address="unix:///run/containerd/s/439a7a5857db854937cc8448c33850417ab5d04bc1167277e72d2fc2f1e09f94" protocol=ttrpc version=3 Mar 25 01:35:54.854810 systemd[1]: Started cri-containerd-ee3eadcc4cac949f4658626e1dd4a1a98387cc79db891fcc633d279dbd925722.scope - libcontainer container ee3eadcc4cac949f4658626e1dd4a1a98387cc79db891fcc633d279dbd925722. Mar 25 01:35:54.910071 containerd[1919]: time="2025-03-25T01:35:54.910034728Z" level=info msg="StartContainer for \"ee3eadcc4cac949f4658626e1dd4a1a98387cc79db891fcc633d279dbd925722\" returns successfully" Mar 25 01:35:55.370575 systemd[1]: cri-containerd-ee3eadcc4cac949f4658626e1dd4a1a98387cc79db891fcc633d279dbd925722.scope: Deactivated successfully. 
Mar 25 01:35:55.372429 containerd[1919]: time="2025-03-25T01:35:55.370579914Z" level=info msg="received exit event container_id:\"ee3eadcc4cac949f4658626e1dd4a1a98387cc79db891fcc633d279dbd925722\" id:\"ee3eadcc4cac949f4658626e1dd4a1a98387cc79db891fcc633d279dbd925722\" pid:5305 exited_at:{seconds:1742866555 nanos:370373593}" Mar 25 01:35:55.372429 containerd[1919]: time="2025-03-25T01:35:55.370924577Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ee3eadcc4cac949f4658626e1dd4a1a98387cc79db891fcc633d279dbd925722\" id:\"ee3eadcc4cac949f4658626e1dd4a1a98387cc79db891fcc633d279dbd925722\" pid:5305 exited_at:{seconds:1742866555 nanos:370373593}" Mar 25 01:35:55.370971 systemd[1]: cri-containerd-ee3eadcc4cac949f4658626e1dd4a1a98387cc79db891fcc633d279dbd925722.scope: Consumed 24ms CPU time, 7.7M memory peak, 2.1M read from disk. Mar 25 01:35:55.400384 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee3eadcc4cac949f4658626e1dd4a1a98387cc79db891fcc633d279dbd925722-rootfs.mount: Deactivated successfully. Mar 25 01:35:55.797789 containerd[1919]: time="2025-03-25T01:35:55.797746844Z" level=info msg="CreateContainer within sandbox \"1c70ed3bf0c840dc468e07c9af424b440590c81db05d101ac1aaf6c6c8f6ba5a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 25 01:35:55.816847 containerd[1919]: time="2025-03-25T01:35:55.816738123Z" level=info msg="Container d76e79875edb43e5f2133b4381967645dbf5a6de41fadf97c3cdca50ebc2d0de: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:35:55.829025 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2476226943.mount: Deactivated successfully. 
Mar 25 01:35:55.838015 containerd[1919]: time="2025-03-25T01:35:55.837813681Z" level=info msg="CreateContainer within sandbox \"1c70ed3bf0c840dc468e07c9af424b440590c81db05d101ac1aaf6c6c8f6ba5a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d76e79875edb43e5f2133b4381967645dbf5a6de41fadf97c3cdca50ebc2d0de\"" Mar 25 01:35:55.839402 containerd[1919]: time="2025-03-25T01:35:55.839325401Z" level=info msg="StartContainer for \"d76e79875edb43e5f2133b4381967645dbf5a6de41fadf97c3cdca50ebc2d0de\"" Mar 25 01:35:55.842637 containerd[1919]: time="2025-03-25T01:35:55.842496078Z" level=info msg="connecting to shim d76e79875edb43e5f2133b4381967645dbf5a6de41fadf97c3cdca50ebc2d0de" address="unix:///run/containerd/s/439a7a5857db854937cc8448c33850417ab5d04bc1167277e72d2fc2f1e09f94" protocol=ttrpc version=3 Mar 25 01:35:55.874525 systemd[1]: Started cri-containerd-d76e79875edb43e5f2133b4381967645dbf5a6de41fadf97c3cdca50ebc2d0de.scope - libcontainer container d76e79875edb43e5f2133b4381967645dbf5a6de41fadf97c3cdca50ebc2d0de. Mar 25 01:35:55.924899 containerd[1919]: time="2025-03-25T01:35:55.922599919Z" level=info msg="StartContainer for \"d76e79875edb43e5f2133b4381967645dbf5a6de41fadf97c3cdca50ebc2d0de\" returns successfully" Mar 25 01:35:55.933946 systemd[1]: cri-containerd-d76e79875edb43e5f2133b4381967645dbf5a6de41fadf97c3cdca50ebc2d0de.scope: Deactivated successfully. 
Mar 25 01:35:55.935040 containerd[1919]: time="2025-03-25T01:35:55.934915212Z" level=info msg="received exit event container_id:\"d76e79875edb43e5f2133b4381967645dbf5a6de41fadf97c3cdca50ebc2d0de\" id:\"d76e79875edb43e5f2133b4381967645dbf5a6de41fadf97c3cdca50ebc2d0de\" pid:5351 exited_at:{seconds:1742866555 nanos:933648902}" Mar 25 01:35:55.935040 containerd[1919]: time="2025-03-25T01:35:55.935009208Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d76e79875edb43e5f2133b4381967645dbf5a6de41fadf97c3cdca50ebc2d0de\" id:\"d76e79875edb43e5f2133b4381967645dbf5a6de41fadf97c3cdca50ebc2d0de\" pid:5351 exited_at:{seconds:1742866555 nanos:933648902}" Mar 25 01:35:56.252011 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d76e79875edb43e5f2133b4381967645dbf5a6de41fadf97c3cdca50ebc2d0de-rootfs.mount: Deactivated successfully. Mar 25 01:35:56.445691 kubelet[3196]: E0325 01:35:56.445643 3196 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 25 01:35:56.815347 containerd[1919]: time="2025-03-25T01:35:56.809121657Z" level=info msg="CreateContainer within sandbox \"1c70ed3bf0c840dc468e07c9af424b440590c81db05d101ac1aaf6c6c8f6ba5a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 25 01:35:56.832505 containerd[1919]: time="2025-03-25T01:35:56.832439157Z" level=info msg="Container 4371959f899b82ee6d1ab68f71e1d8c84f5c22cf523d3408f0fa6db8a69bd7f1: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:35:56.847072 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2819236754.mount: Deactivated successfully. 
Mar 25 01:35:56.859904 containerd[1919]: time="2025-03-25T01:35:56.859857165Z" level=info msg="CreateContainer within sandbox \"1c70ed3bf0c840dc468e07c9af424b440590c81db05d101ac1aaf6c6c8f6ba5a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4371959f899b82ee6d1ab68f71e1d8c84f5c22cf523d3408f0fa6db8a69bd7f1\"" Mar 25 01:35:56.860709 containerd[1919]: time="2025-03-25T01:35:56.860550279Z" level=info msg="StartContainer for \"4371959f899b82ee6d1ab68f71e1d8c84f5c22cf523d3408f0fa6db8a69bd7f1\"" Mar 25 01:35:56.862955 containerd[1919]: time="2025-03-25T01:35:56.862923067Z" level=info msg="connecting to shim 4371959f899b82ee6d1ab68f71e1d8c84f5c22cf523d3408f0fa6db8a69bd7f1" address="unix:///run/containerd/s/439a7a5857db854937cc8448c33850417ab5d04bc1167277e72d2fc2f1e09f94" protocol=ttrpc version=3 Mar 25 01:35:56.920535 systemd[1]: Started cri-containerd-4371959f899b82ee6d1ab68f71e1d8c84f5c22cf523d3408f0fa6db8a69bd7f1.scope - libcontainer container 4371959f899b82ee6d1ab68f71e1d8c84f5c22cf523d3408f0fa6db8a69bd7f1. Mar 25 01:35:56.964511 systemd[1]: cri-containerd-4371959f899b82ee6d1ab68f71e1d8c84f5c22cf523d3408f0fa6db8a69bd7f1.scope: Deactivated successfully. 
Mar 25 01:35:56.970800 containerd[1919]: time="2025-03-25T01:35:56.969746659Z" level=info msg="received exit event container_id:\"4371959f899b82ee6d1ab68f71e1d8c84f5c22cf523d3408f0fa6db8a69bd7f1\" id:\"4371959f899b82ee6d1ab68f71e1d8c84f5c22cf523d3408f0fa6db8a69bd7f1\" pid:5389 exited_at:{seconds:1742866556 nanos:967539088}" Mar 25 01:35:56.970800 containerd[1919]: time="2025-03-25T01:35:56.970663892Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4371959f899b82ee6d1ab68f71e1d8c84f5c22cf523d3408f0fa6db8a69bd7f1\" id:\"4371959f899b82ee6d1ab68f71e1d8c84f5c22cf523d3408f0fa6db8a69bd7f1\" pid:5389 exited_at:{seconds:1742866556 nanos:967539088}" Mar 25 01:35:56.980588 containerd[1919]: time="2025-03-25T01:35:56.980549640Z" level=info msg="StartContainer for \"4371959f899b82ee6d1ab68f71e1d8c84f5c22cf523d3408f0fa6db8a69bd7f1\" returns successfully" Mar 25 01:35:57.001128 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4371959f899b82ee6d1ab68f71e1d8c84f5c22cf523d3408f0fa6db8a69bd7f1-rootfs.mount: Deactivated successfully. Mar 25 01:35:57.814007 containerd[1919]: time="2025-03-25T01:35:57.812106268Z" level=info msg="CreateContainer within sandbox \"1c70ed3bf0c840dc468e07c9af424b440590c81db05d101ac1aaf6c6c8f6ba5a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 25 01:35:57.831013 containerd[1919]: time="2025-03-25T01:35:57.829185819Z" level=info msg="Container a901e3a9988fcd484ad0c621d4fbe0b1d25560f710be8d89188ab18e1bb6b65b: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:35:57.838442 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3085753078.mount: Deactivated successfully. 
Mar 25 01:35:57.860556 containerd[1919]: time="2025-03-25T01:35:57.860515990Z" level=info msg="CreateContainer within sandbox \"1c70ed3bf0c840dc468e07c9af424b440590c81db05d101ac1aaf6c6c8f6ba5a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a901e3a9988fcd484ad0c621d4fbe0b1d25560f710be8d89188ab18e1bb6b65b\"" Mar 25 01:35:57.867352 containerd[1919]: time="2025-03-25T01:35:57.860990608Z" level=info msg="StartContainer for \"a901e3a9988fcd484ad0c621d4fbe0b1d25560f710be8d89188ab18e1bb6b65b\"" Mar 25 01:35:57.867352 containerd[1919]: time="2025-03-25T01:35:57.862184006Z" level=info msg="connecting to shim a901e3a9988fcd484ad0c621d4fbe0b1d25560f710be8d89188ab18e1bb6b65b" address="unix:///run/containerd/s/439a7a5857db854937cc8448c33850417ab5d04bc1167277e72d2fc2f1e09f94" protocol=ttrpc version=3 Mar 25 01:35:57.908545 systemd[1]: Started cri-containerd-a901e3a9988fcd484ad0c621d4fbe0b1d25560f710be8d89188ab18e1bb6b65b.scope - libcontainer container a901e3a9988fcd484ad0c621d4fbe0b1d25560f710be8d89188ab18e1bb6b65b. 
Mar 25 01:35:57.952834 containerd[1919]: time="2025-03-25T01:35:57.952794485Z" level=info msg="StartContainer for \"a901e3a9988fcd484ad0c621d4fbe0b1d25560f710be8d89188ab18e1bb6b65b\" returns successfully" Mar 25 01:35:59.237381 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Mar 25 01:36:00.072704 containerd[1919]: time="2025-03-25T01:36:00.072657596Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a901e3a9988fcd484ad0c621d4fbe0b1d25560f710be8d89188ab18e1bb6b65b\" id:\"d29a0388085f25e486eb86c866d4686f6063b2b2bc609a5aca88e11183d88e90\" pid:5456 exited_at:{seconds:1742866560 nanos:71375187}" Mar 25 01:36:02.551534 containerd[1919]: time="2025-03-25T01:36:02.551476366Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a901e3a9988fcd484ad0c621d4fbe0b1d25560f710be8d89188ab18e1bb6b65b\" id:\"fea222dca855883b1a745a34eab3615d8daf40950878ca7d46199e6e191b667c\" pid:5820 exit_status:1 exited_at:{seconds:1742866562 nanos:551043333}" Mar 25 01:36:02.607766 kubelet[3196]: E0325 01:36:02.607713 3196 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:56744->127.0.0.1:45231: write tcp 127.0.0.1:56744->127.0.0.1:45231: write: broken pipe Mar 25 01:36:02.608166 kubelet[3196]: E0325 01:36:02.607719 3196 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:56744->127.0.0.1:45231: write tcp 172.31.17.232:10250->172.31.17.232:38672: write: broken pipe Mar 25 01:36:02.941939 (udev-worker)[5974]: Network interface NamePolicy= disabled on kernel command line. 
Mar 25 01:36:02.942363 systemd-networkd[1821]: lxc_health: Link UP Mar 25 01:36:02.943685 systemd-networkd[1821]: lxc_health: Gained carrier Mar 25 01:36:04.336208 kubelet[3196]: I0325 01:36:04.336044 3196 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2j9tg" podStartSLOduration=10.335930723 podStartE2EDuration="10.335930723s" podCreationTimestamp="2025-03-25 01:35:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:36:00.850511748 +0000 UTC m=+109.857908699" watchObservedRunningTime="2025-03-25 01:36:04.335930723 +0000 UTC m=+113.343327680" Mar 25 01:36:04.464594 systemd-networkd[1821]: lxc_health: Gained IPv6LL Mar 25 01:36:04.846791 containerd[1919]: time="2025-03-25T01:36:04.846749000Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a901e3a9988fcd484ad0c621d4fbe0b1d25560f710be8d89188ab18e1bb6b65b\" id:\"bf8b672df8fe66e2b2abf3908d01499e789b18bab88b018afd85488a42df56ae\" pid:6014 exited_at:{seconds:1742866564 nanos:846222389}" Mar 25 01:36:07.040085 containerd[1919]: time="2025-03-25T01:36:07.039614451Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a901e3a9988fcd484ad0c621d4fbe0b1d25560f710be8d89188ab18e1bb6b65b\" id:\"c2fc81d153c97f1542cf3f31c5c7476348249ab3b4bf2a0ad1f1b7a54c1da226\" pid:6038 exited_at:{seconds:1742866567 nanos:39254111}" Mar 25 01:36:07.157253 ntpd[1889]: Listen normally on 15 lxc_health [fe80::dc7c:5aff:fe42:4b94%14]:123 Mar 25 01:36:07.158207 ntpd[1889]: 25 Mar 01:36:07 ntpd[1889]: Listen normally on 15 lxc_health [fe80::dc7c:5aff:fe42:4b94%14]:123 Mar 25 01:36:09.175579 containerd[1919]: time="2025-03-25T01:36:09.175403521Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a901e3a9988fcd484ad0c621d4fbe0b1d25560f710be8d89188ab18e1bb6b65b\" id:\"26bc7ce8c34fd9625cfcb2bdfa977a611aef2ef1527d4486d8ec847063659e94\" pid:6070 
exited_at:{seconds:1742866569 nanos:174623605}" Mar 25 01:36:09.256287 sshd[5288]: Connection closed by 147.75.109.163 port 59510 Mar 25 01:36:09.257686 sshd-session[5261]: pam_unix(sshd:session): session closed for user core Mar 25 01:36:09.261540 systemd-logind[1897]: Session 28 logged out. Waiting for processes to exit. Mar 25 01:36:09.265215 systemd[1]: sshd@27-172.31.17.232:22-147.75.109.163:59510.service: Deactivated successfully. Mar 25 01:36:09.268655 systemd[1]: session-28.scope: Deactivated successfully. Mar 25 01:36:09.270513 systemd-logind[1897]: Removed session 28. Mar 25 01:36:11.262773 containerd[1919]: time="2025-03-25T01:36:11.262339855Z" level=info msg="StopPodSandbox for \"cbc5714419181a46b3d7fced5b91c4e1afa6d189518f733cedb3151ed1281386\"" Mar 25 01:36:11.262773 containerd[1919]: time="2025-03-25T01:36:11.262502633Z" level=info msg="TearDown network for sandbox \"cbc5714419181a46b3d7fced5b91c4e1afa6d189518f733cedb3151ed1281386\" successfully" Mar 25 01:36:11.262773 containerd[1919]: time="2025-03-25T01:36:11.262518895Z" level=info msg="StopPodSandbox for \"cbc5714419181a46b3d7fced5b91c4e1afa6d189518f733cedb3151ed1281386\" returns successfully" Mar 25 01:36:11.264340 containerd[1919]: time="2025-03-25T01:36:11.263467387Z" level=info msg="RemovePodSandbox for \"cbc5714419181a46b3d7fced5b91c4e1afa6d189518f733cedb3151ed1281386\"" Mar 25 01:36:11.264340 containerd[1919]: time="2025-03-25T01:36:11.263510488Z" level=info msg="Forcibly stopping sandbox \"cbc5714419181a46b3d7fced5b91c4e1afa6d189518f733cedb3151ed1281386\"" Mar 25 01:36:11.264340 containerd[1919]: time="2025-03-25T01:36:11.263617798Z" level=info msg="TearDown network for sandbox \"cbc5714419181a46b3d7fced5b91c4e1afa6d189518f733cedb3151ed1281386\" successfully" Mar 25 01:36:11.267321 containerd[1919]: time="2025-03-25T01:36:11.267272671Z" level=info msg="Ensure that sandbox cbc5714419181a46b3d7fced5b91c4e1afa6d189518f733cedb3151ed1281386 in task-service has been cleanup successfully" Mar 25 
01:36:11.278938 containerd[1919]: time="2025-03-25T01:36:11.278704983Z" level=info msg="RemovePodSandbox \"cbc5714419181a46b3d7fced5b91c4e1afa6d189518f733cedb3151ed1281386\" returns successfully" Mar 25 01:36:11.280296 containerd[1919]: time="2025-03-25T01:36:11.280262553Z" level=info msg="StopPodSandbox for \"4042f67412f163352ef85747effae97a6b980838c25d3518a2c27e6fb4b6d95c\"" Mar 25 01:36:11.280446 containerd[1919]: time="2025-03-25T01:36:11.280424666Z" level=info msg="TearDown network for sandbox \"4042f67412f163352ef85747effae97a6b980838c25d3518a2c27e6fb4b6d95c\" successfully" Mar 25 01:36:11.280493 containerd[1919]: time="2025-03-25T01:36:11.280443926Z" level=info msg="StopPodSandbox for \"4042f67412f163352ef85747effae97a6b980838c25d3518a2c27e6fb4b6d95c\" returns successfully" Mar 25 01:36:11.286215 containerd[1919]: time="2025-03-25T01:36:11.286165707Z" level=info msg="RemovePodSandbox for \"4042f67412f163352ef85747effae97a6b980838c25d3518a2c27e6fb4b6d95c\"" Mar 25 01:36:11.286215 containerd[1919]: time="2025-03-25T01:36:11.286211210Z" level=info msg="Forcibly stopping sandbox \"4042f67412f163352ef85747effae97a6b980838c25d3518a2c27e6fb4b6d95c\"" Mar 25 01:36:11.286484 containerd[1919]: time="2025-03-25T01:36:11.286362095Z" level=info msg="TearDown network for sandbox \"4042f67412f163352ef85747effae97a6b980838c25d3518a2c27e6fb4b6d95c\" successfully" Mar 25 01:36:11.288012 containerd[1919]: time="2025-03-25T01:36:11.287969882Z" level=info msg="Ensure that sandbox 4042f67412f163352ef85747effae97a6b980838c25d3518a2c27e6fb4b6d95c in task-service has been cleanup successfully" Mar 25 01:36:11.295736 containerd[1919]: time="2025-03-25T01:36:11.295690566Z" level=info msg="RemovePodSandbox \"4042f67412f163352ef85747effae97a6b980838c25d3518a2c27e6fb4b6d95c\" returns successfully" Mar 25 01:36:29.430516 systemd[1]: cri-containerd-f602f97183e57dc96394da35a6d9b06610e2db929ad372d29e8d9932e7ca1a2f.scope: Deactivated successfully. 
Mar 25 01:36:29.431534 systemd[1]: cri-containerd-f602f97183e57dc96394da35a6d9b06610e2db929ad372d29e8d9932e7ca1a2f.scope: Consumed 3.765s CPU time, 68.9M memory peak, 20.7M read from disk. Mar 25 01:36:29.433484 containerd[1919]: time="2025-03-25T01:36:29.432451218Z" level=info msg="received exit event container_id:\"f602f97183e57dc96394da35a6d9b06610e2db929ad372d29e8d9932e7ca1a2f\" id:\"f602f97183e57dc96394da35a6d9b06610e2db929ad372d29e8d9932e7ca1a2f\" pid:3045 exit_status:1 exited_at:{seconds:1742866589 nanos:431677074}" Mar 25 01:36:29.434878 containerd[1919]: time="2025-03-25T01:36:29.434783921Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f602f97183e57dc96394da35a6d9b06610e2db929ad372d29e8d9932e7ca1a2f\" id:\"f602f97183e57dc96394da35a6d9b06610e2db929ad372d29e8d9932e7ca1a2f\" pid:3045 exit_status:1 exited_at:{seconds:1742866589 nanos:431677074}" Mar 25 01:36:29.462630 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f602f97183e57dc96394da35a6d9b06610e2db929ad372d29e8d9932e7ca1a2f-rootfs.mount: Deactivated successfully. Mar 25 01:36:29.895620 kubelet[3196]: I0325 01:36:29.895590 3196 scope.go:117] "RemoveContainer" containerID="f602f97183e57dc96394da35a6d9b06610e2db929ad372d29e8d9932e7ca1a2f" Mar 25 01:36:29.902684 containerd[1919]: time="2025-03-25T01:36:29.902645526Z" level=info msg="CreateContainer within sandbox \"8851cfd4d40ce84bf6cd6b4dc32cc90a3a5e65db8036c704a3a6546a00efda49\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Mar 25 01:36:29.919988 containerd[1919]: time="2025-03-25T01:36:29.919942403Z" level=info msg="Container 634a166362ae4704f74a34c708048296a67c36660c93881762a779e6668091f9: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:36:29.925865 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1179260979.mount: Deactivated successfully. 
Mar 25 01:36:29.933083 containerd[1919]: time="2025-03-25T01:36:29.933038843Z" level=info msg="CreateContainer within sandbox \"8851cfd4d40ce84bf6cd6b4dc32cc90a3a5e65db8036c704a3a6546a00efda49\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"634a166362ae4704f74a34c708048296a67c36660c93881762a779e6668091f9\"" Mar 25 01:36:29.933680 containerd[1919]: time="2025-03-25T01:36:29.933649639Z" level=info msg="StartContainer for \"634a166362ae4704f74a34c708048296a67c36660c93881762a779e6668091f9\"" Mar 25 01:36:29.934826 containerd[1919]: time="2025-03-25T01:36:29.934794567Z" level=info msg="connecting to shim 634a166362ae4704f74a34c708048296a67c36660c93881762a779e6668091f9" address="unix:///run/containerd/s/60cded0752dc70a92824830af85888b063c37bec50e57466bdd4e8dc49bf2574" protocol=ttrpc version=3 Mar 25 01:36:29.966530 systemd[1]: Started cri-containerd-634a166362ae4704f74a34c708048296a67c36660c93881762a779e6668091f9.scope - libcontainer container 634a166362ae4704f74a34c708048296a67c36660c93881762a779e6668091f9. Mar 25 01:36:30.036893 containerd[1919]: time="2025-03-25T01:36:30.036853155Z" level=info msg="StartContainer for \"634a166362ae4704f74a34c708048296a67c36660c93881762a779e6668091f9\" returns successfully" Mar 25 01:36:33.013655 kubelet[3196]: E0325 01:36:33.013167 3196 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.232:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-232?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 25 01:36:34.853276 systemd[1]: cri-containerd-30a828974fa58e724ba6313bd4475359f60b74bb34be15fd9e26ad8e24a898ea.scope: Deactivated successfully. Mar 25 01:36:34.854208 systemd[1]: cri-containerd-30a828974fa58e724ba6313bd4475359f60b74bb34be15fd9e26ad8e24a898ea.scope: Consumed 2.368s CPU time, 28.5M memory peak, 11.5M read from disk. 
Mar 25 01:36:34.857512 containerd[1919]: time="2025-03-25T01:36:34.857422952Z" level=info msg="TaskExit event in podsandbox handler container_id:\"30a828974fa58e724ba6313bd4475359f60b74bb34be15fd9e26ad8e24a898ea\" id:\"30a828974fa58e724ba6313bd4475359f60b74bb34be15fd9e26ad8e24a898ea\" pid:3015 exit_status:1 exited_at:{seconds:1742866594 nanos:856257606}" Mar 25 01:36:34.858475 containerd[1919]: time="2025-03-25T01:36:34.858177858Z" level=info msg="received exit event container_id:\"30a828974fa58e724ba6313bd4475359f60b74bb34be15fd9e26ad8e24a898ea\" id:\"30a828974fa58e724ba6313bd4475359f60b74bb34be15fd9e26ad8e24a898ea\" pid:3015 exit_status:1 exited_at:{seconds:1742866594 nanos:856257606}" Mar 25 01:36:34.883731 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30a828974fa58e724ba6313bd4475359f60b74bb34be15fd9e26ad8e24a898ea-rootfs.mount: Deactivated successfully. Mar 25 01:36:34.909954 kubelet[3196]: I0325 01:36:34.909922 3196 scope.go:117] "RemoveContainer" containerID="30a828974fa58e724ba6313bd4475359f60b74bb34be15fd9e26ad8e24a898ea" Mar 25 01:36:34.912112 containerd[1919]: time="2025-03-25T01:36:34.912074641Z" level=info msg="CreateContainer within sandbox \"feaf818fbc88fb5d60d2aa6240770225ad450d662c3eca151f677a2657cb8632\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Mar 25 01:36:34.930395 containerd[1919]: time="2025-03-25T01:36:34.929491015Z" level=info msg="Container 3bca83b113b6c4166743f941be69c796547893dfc25de16de80d6a42a6c6a76d: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:36:34.944184 containerd[1919]: time="2025-03-25T01:36:34.944142090Z" level=info msg="CreateContainer within sandbox \"feaf818fbc88fb5d60d2aa6240770225ad450d662c3eca151f677a2657cb8632\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"3bca83b113b6c4166743f941be69c796547893dfc25de16de80d6a42a6c6a76d\"" Mar 25 01:36:34.944634 containerd[1919]: time="2025-03-25T01:36:34.944605908Z" level=info msg="StartContainer for 
\"3bca83b113b6c4166743f941be69c796547893dfc25de16de80d6a42a6c6a76d\"" Mar 25 01:36:34.945639 containerd[1919]: time="2025-03-25T01:36:34.945599069Z" level=info msg="connecting to shim 3bca83b113b6c4166743f941be69c796547893dfc25de16de80d6a42a6c6a76d" address="unix:///run/containerd/s/1e26075bd5017a202d57659304dc13b2f71682aa5a3981c03fcc1941b86009ae" protocol=ttrpc version=3 Mar 25 01:36:34.978534 systemd[1]: Started cri-containerd-3bca83b113b6c4166743f941be69c796547893dfc25de16de80d6a42a6c6a76d.scope - libcontainer container 3bca83b113b6c4166743f941be69c796547893dfc25de16de80d6a42a6c6a76d. Mar 25 01:36:35.044193 containerd[1919]: time="2025-03-25T01:36:35.044157002Z" level=info msg="StartContainer for \"3bca83b113b6c4166743f941be69c796547893dfc25de16de80d6a42a6c6a76d\" returns successfully" Mar 25 01:36:43.014056 kubelet[3196]: E0325 01:36:43.014000 3196 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.232:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-232?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"