Mar 25 01:38:01.007014 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Mar 24 23:38:35 -00 2025
Mar 25 01:38:01.007105 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e7a00b7ee8d97e8d255663e9d3fa92277da8316702fb7f6d664fd7b137c307e9
Mar 25 01:38:01.007129 kernel: BIOS-provided physical RAM map:
Mar 25 01:38:01.007141 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 25 01:38:01.007152 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Mar 25 01:38:01.007164 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Mar 25 01:38:01.007178 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Mar 25 01:38:01.007191 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Mar 25 01:38:01.007203 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Mar 25 01:38:01.007215 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Mar 25 01:38:01.007231 kernel: NX (Execute Disable) protection: active
Mar 25 01:38:01.007243 kernel: APIC: Static calls initialized
Mar 25 01:38:01.007256 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable
Mar 25 01:38:01.007270 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable
Mar 25 01:38:01.007286 kernel: extended physical RAM map:
Mar 25 01:38:01.007300 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 25 01:38:01.007316 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable
Mar 25 01:38:01.007330 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable
Mar 25 01:38:01.007345 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable
Mar 25 01:38:01.007359 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Mar 25 01:38:01.007374 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Mar 25 01:38:01.007388 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Mar 25 01:38:01.007403 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable
Mar 25 01:38:01.007417 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Mar 25 01:38:01.007430 kernel: efi: EFI v2.7 by EDK II
Mar 25 01:38:01.007443 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518
Mar 25 01:38:01.007460 kernel: secureboot: Secure boot disabled
Mar 25 01:38:01.007473 kernel: SMBIOS 2.7 present.
Mar 25 01:38:01.007487 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Mar 25 01:38:01.007500 kernel: Hypervisor detected: KVM
Mar 25 01:38:01.007513 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 25 01:38:01.007526 kernel: kvm-clock: using sched offset of 4281976490 cycles
Mar 25 01:38:01.007540 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 25 01:38:01.007554 kernel: tsc: Detected 2499.998 MHz processor
Mar 25 01:38:01.007568 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 25 01:38:01.007582 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 25 01:38:01.007595 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Mar 25 01:38:01.007612 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Mar 25 01:38:01.007625 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 25 01:38:01.007640 kernel: Using GB pages for direct mapping
Mar 25 01:38:01.007659 kernel: ACPI: Early table checksum verification disabled
Mar 25 01:38:01.007673 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Mar 25 01:38:01.007688 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Mar 25 01:38:01.007705 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Mar 25 01:38:01.007720 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Mar 25 01:38:01.007735 kernel: ACPI: FACS 0x00000000789D0000 000040
Mar 25 01:38:01.007749 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Mar 25 01:38:01.007764 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Mar 25 01:38:01.007779 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Mar 25 01:38:01.007793 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Mar 25 01:38:01.007808 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Mar 25 01:38:01.007825 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Mar 25 01:38:01.007840 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Mar 25 01:38:01.007854 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Mar 25 01:38:01.007869 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Mar 25 01:38:01.007883 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Mar 25 01:38:01.007898 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Mar 25 01:38:01.008038 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Mar 25 01:38:01.008054 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Mar 25 01:38:01.008069 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Mar 25 01:38:01.008131 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Mar 25 01:38:01.008227 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Mar 25 01:38:01.008278 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Mar 25 01:38:01.010938 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Mar 25 01:38:01.010977 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Mar 25 01:38:01.010995 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Mar 25 01:38:01.011012 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Mar 25 01:38:01.011030 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Mar 25 01:38:01.011047 kernel: NUMA: Initialized distance table, cnt=1
Mar 25 01:38:01.011071 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff]
Mar 25 01:38:01.011089 kernel: Zone ranges:
Mar 25 01:38:01.011106 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 25 01:38:01.011123 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Mar 25 01:38:01.011140 kernel: Normal empty
Mar 25 01:38:01.011157 kernel: Movable zone start for each node
Mar 25 01:38:01.011173 kernel: Early memory node ranges
Mar 25 01:38:01.011189 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Mar 25 01:38:01.011205 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Mar 25 01:38:01.011224 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Mar 25 01:38:01.011236 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Mar 25 01:38:01.011248 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 25 01:38:01.011263 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Mar 25 01:38:01.011277 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Mar 25 01:38:01.011290 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Mar 25 01:38:01.011304 kernel: ACPI: PM-Timer IO Port: 0xb008
Mar 25 01:38:01.011317 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 25 01:38:01.011329 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Mar 25 01:38:01.011341 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 25 01:38:01.011358 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 25 01:38:01.011374 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 25 01:38:01.011390 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 25 01:38:01.011407 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 25 01:38:01.011423 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 25 01:38:01.011439 kernel: TSC deadline timer available
Mar 25 01:38:01.011455 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Mar 25 01:38:01.011472 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 25 01:38:01.011488 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Mar 25 01:38:01.011510 kernel: Booting paravirtualized kernel on KVM
Mar 25 01:38:01.011527 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 25 01:38:01.011543 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Mar 25 01:38:01.011559 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Mar 25 01:38:01.011574 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Mar 25 01:38:01.011590 kernel: pcpu-alloc: [0] 0 1
Mar 25 01:38:01.011606 kernel: kvm-guest: PV spinlocks enabled
Mar 25 01:38:01.011622 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 25 01:38:01.011647 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e7a00b7ee8d97e8d255663e9d3fa92277da8316702fb7f6d664fd7b137c307e9
Mar 25 01:38:01.011665 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 25 01:38:01.011682 kernel: random: crng init done
Mar 25 01:38:01.011698 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 25 01:38:01.011714 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 25 01:38:01.011730 kernel: Fallback order for Node 0: 0
Mar 25 01:38:01.011747 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Mar 25 01:38:01.011763 kernel: Policy zone: DMA32
Mar 25 01:38:01.011784 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 25 01:38:01.011802 kernel: Memory: 1870484K/2037804K available (14336K kernel code, 2304K rwdata, 25060K rodata, 43592K init, 1472K bss, 167064K reserved, 0K cma-reserved)
Mar 25 01:38:01.011819 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 25 01:38:01.011835 kernel: Kernel/User page tables isolation: enabled
Mar 25 01:38:01.011852 kernel: ftrace: allocating 37985 entries in 149 pages
Mar 25 01:38:01.011885 kernel: ftrace: allocated 149 pages with 4 groups
Mar 25 01:38:01.011908 kernel: Dynamic Preempt: voluntary
Mar 25 01:38:01.012695 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 25 01:38:01.012714 kernel: rcu: RCU event tracing is enabled.
Mar 25 01:38:01.012729 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 25 01:38:01.012745 kernel: Trampoline variant of Tasks RCU enabled.
Mar 25 01:38:01.012761 kernel: Rude variant of Tasks RCU enabled.
Mar 25 01:38:01.012781 kernel: Tracing variant of Tasks RCU enabled.
Mar 25 01:38:01.012795 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 25 01:38:01.012809 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 25 01:38:01.012823 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 25 01:38:01.012839 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 25 01:38:01.012858 kernel: Console: colour dummy device 80x25
Mar 25 01:38:01.012872 kernel: printk: console [tty0] enabled
Mar 25 01:38:01.012887 kernel: printk: console [ttyS0] enabled
Mar 25 01:38:01.012901 kernel: ACPI: Core revision 20230628
Mar 25 01:38:01.012935 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Mar 25 01:38:01.012950 kernel: APIC: Switch to symmetric I/O mode setup
Mar 25 01:38:01.012964 kernel: x2apic enabled
Mar 25 01:38:01.012978 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 25 01:38:01.013667 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Mar 25 01:38:01.013689 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Mar 25 01:38:01.013704 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Mar 25 01:38:01.013719 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Mar 25 01:38:01.013734 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 25 01:38:01.013749 kernel: Spectre V2 : Mitigation: Retpolines
Mar 25 01:38:01.013763 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Mar 25 01:38:01.013777 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Mar 25 01:38:01.013792 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Mar 25 01:38:01.013807 kernel: RETBleed: Vulnerable
Mar 25 01:38:01.013821 kernel: Speculative Store Bypass: Vulnerable
Mar 25 01:38:01.013839 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 25 01:38:01.013853 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 25 01:38:01.013867 kernel: GDS: Unknown: Dependent on hypervisor status
Mar 25 01:38:01.013881 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 25 01:38:01.013895 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 25 01:38:01.013924 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 25 01:38:01.013939 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Mar 25 01:38:01.013954 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Mar 25 01:38:01.013968 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Mar 25 01:38:01.013982 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Mar 25 01:38:01.013997 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Mar 25 01:38:01.014014 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Mar 25 01:38:01.014029 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 25 01:38:01.014043 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Mar 25 01:38:01.014058 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Mar 25 01:38:01.014072 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Mar 25 01:38:01.014087 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Mar 25 01:38:01.014101 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Mar 25 01:38:01.014116 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Mar 25 01:38:01.014130 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Mar 25 01:38:01.014145 kernel: Freeing SMP alternatives memory: 32K
Mar 25 01:38:01.014159 kernel: pid_max: default: 32768 minimum: 301
Mar 25 01:38:01.014173 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 25 01:38:01.014190 kernel: landlock: Up and running.
Mar 25 01:38:01.014204 kernel: SELinux: Initializing.
Mar 25 01:38:01.014218 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 25 01:38:01.014233 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 25 01:38:01.014248 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Mar 25 01:38:01.014262 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 25 01:38:01.014277 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 25 01:38:01.014292 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 25 01:38:01.014307 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Mar 25 01:38:01.014321 kernel: signal: max sigframe size: 3632
Mar 25 01:38:01.014339 kernel: rcu: Hierarchical SRCU implementation.
Mar 25 01:38:01.014353 kernel: rcu: Max phase no-delay instances is 400.
Mar 25 01:38:01.014368 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 25 01:38:01.014381 kernel: smp: Bringing up secondary CPUs ...
Mar 25 01:38:01.014395 kernel: smpboot: x86: Booting SMP configuration:
Mar 25 01:38:01.014409 kernel: .... node #0, CPUs: #1
Mar 25 01:38:01.014434 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Mar 25 01:38:01.014448 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Mar 25 01:38:01.014464 kernel: smp: Brought up 1 node, 2 CPUs
Mar 25 01:38:01.014479 kernel: smpboot: Max logical packages: 1
Mar 25 01:38:01.014494 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Mar 25 01:38:01.014511 kernel: devtmpfs: initialized
Mar 25 01:38:01.014524 kernel: x86/mm: Memory block size: 128MB
Mar 25 01:38:01.014538 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Mar 25 01:38:01.014553 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 25 01:38:01.014567 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 25 01:38:01.014583 kernel: pinctrl core: initialized pinctrl subsystem
Mar 25 01:38:01.014602 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 25 01:38:01.014617 kernel: audit: initializing netlink subsys (disabled)
Mar 25 01:38:01.014632 kernel: audit: type=2000 audit(1742866681.188:1): state=initialized audit_enabled=0 res=1
Mar 25 01:38:01.014647 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 25 01:38:01.014662 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 25 01:38:01.014678 kernel: cpuidle: using governor menu
Mar 25 01:38:01.014693 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 25 01:38:01.014708 kernel: dca service started, version 1.12.1
Mar 25 01:38:01.014723 kernel: PCI: Using configuration type 1 for base access
Mar 25 01:38:01.014742 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 25 01:38:01.014758 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 25 01:38:01.014773 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 25 01:38:01.014788 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 25 01:38:01.014802 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 25 01:38:01.014818 kernel: ACPI: Added _OSI(Module Device)
Mar 25 01:38:01.014833 kernel: ACPI: Added _OSI(Processor Device)
Mar 25 01:38:01.014848 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 25 01:38:01.014863 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 25 01:38:01.014880 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Mar 25 01:38:01.014894 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 25 01:38:01.016431 kernel: ACPI: Interpreter enabled
Mar 25 01:38:01.016457 kernel: ACPI: PM: (supports S0 S5)
Mar 25 01:38:01.016474 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 25 01:38:01.016489 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 25 01:38:01.016505 kernel: PCI: Using E820 reservations for host bridge windows
Mar 25 01:38:01.016521 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Mar 25 01:38:01.016536 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 25 01:38:01.016768 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Mar 25 01:38:01.016929 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Mar 25 01:38:01.017068 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Mar 25 01:38:01.017088 kernel: acpiphp: Slot [3] registered
Mar 25 01:38:01.017104 kernel: acpiphp: Slot [4] registered
Mar 25 01:38:01.017119 kernel: acpiphp: Slot [5] registered
Mar 25 01:38:01.017134 kernel: acpiphp: Slot [6] registered
Mar 25 01:38:01.017149 kernel: acpiphp: Slot [7] registered
Mar 25 01:38:01.017169 kernel: acpiphp: Slot [8] registered
Mar 25 01:38:01.017184 kernel: acpiphp: Slot [9] registered
Mar 25 01:38:01.017199 kernel: acpiphp: Slot [10] registered
Mar 25 01:38:01.017215 kernel: acpiphp: Slot [11] registered
Mar 25 01:38:01.017230 kernel: acpiphp: Slot [12] registered
Mar 25 01:38:01.017245 kernel: acpiphp: Slot [13] registered
Mar 25 01:38:01.017261 kernel: acpiphp: Slot [14] registered
Mar 25 01:38:01.017276 kernel: acpiphp: Slot [15] registered
Mar 25 01:38:01.017291 kernel: acpiphp: Slot [16] registered
Mar 25 01:38:01.017310 kernel: acpiphp: Slot [17] registered
Mar 25 01:38:01.017325 kernel: acpiphp: Slot [18] registered
Mar 25 01:38:01.017340 kernel: acpiphp: Slot [19] registered
Mar 25 01:38:01.017355 kernel: acpiphp: Slot [20] registered
Mar 25 01:38:01.017370 kernel: acpiphp: Slot [21] registered
Mar 25 01:38:01.017386 kernel: acpiphp: Slot [22] registered
Mar 25 01:38:01.017401 kernel: acpiphp: Slot [23] registered
Mar 25 01:38:01.017416 kernel: acpiphp: Slot [24] registered
Mar 25 01:38:01.017431 kernel: acpiphp: Slot [25] registered
Mar 25 01:38:01.017447 kernel: acpiphp: Slot [26] registered
Mar 25 01:38:01.017465 kernel: acpiphp: Slot [27] registered
Mar 25 01:38:01.017480 kernel: acpiphp: Slot [28] registered
Mar 25 01:38:01.017496 kernel: acpiphp: Slot [29] registered
Mar 25 01:38:01.017511 kernel: acpiphp: Slot [30] registered
Mar 25 01:38:01.017525 kernel: acpiphp: Slot [31] registered
Mar 25 01:38:01.017540 kernel: PCI host bridge to bus 0000:00
Mar 25 01:38:01.017683 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 25 01:38:01.017805 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 25 01:38:01.020790 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 25 01:38:01.020981 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Mar 25 01:38:01.021106 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Mar 25 01:38:01.021225 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 25 01:38:01.021380 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Mar 25 01:38:01.021527 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Mar 25 01:38:01.021681 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Mar 25 01:38:01.021997 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Mar 25 01:38:01.022144 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Mar 25 01:38:01.022280 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Mar 25 01:38:01.022414 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Mar 25 01:38:01.022547 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Mar 25 01:38:01.022682 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Mar 25 01:38:01.022821 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Mar 25 01:38:01.023785 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Mar 25 01:38:01.023955 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Mar 25 01:38:01.024101 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Mar 25 01:38:01.024237 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Mar 25 01:38:01.024497 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 25 01:38:01.024645 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Mar 25 01:38:01.024786 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Mar 25 01:38:01.026740 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Mar 25 01:38:01.032317 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Mar 25 01:38:01.032375 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 25 01:38:01.032393 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 25 01:38:01.032410 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 25 01:38:01.032426 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 25 01:38:01.032442 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Mar 25 01:38:01.032466 kernel: iommu: Default domain type: Translated
Mar 25 01:38:01.032481 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 25 01:38:01.032497 kernel: efivars: Registered efivars operations
Mar 25 01:38:01.032513 kernel: PCI: Using ACPI for IRQ routing
Mar 25 01:38:01.032528 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 25 01:38:01.032544 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff]
Mar 25 01:38:01.032559 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Mar 25 01:38:01.032575 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Mar 25 01:38:01.032787 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Mar 25 01:38:01.032994 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Mar 25 01:38:01.033629 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 25 01:38:01.033660 kernel: vgaarb: loaded
Mar 25 01:38:01.033676 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Mar 25 01:38:01.033691 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Mar 25 01:38:01.033706 kernel: clocksource: Switched to clocksource kvm-clock
Mar 25 01:38:01.033721 kernel: VFS: Disk quotas dquot_6.6.0
Mar 25 01:38:01.033736 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 25 01:38:01.033757 kernel: pnp: PnP ACPI init
Mar 25 01:38:01.033772 kernel: pnp: PnP ACPI: found 5 devices
Mar 25 01:38:01.033787 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 25 01:38:01.033802 kernel: NET: Registered PF_INET protocol family
Mar 25 01:38:01.033817 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 25 01:38:01.033832 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Mar 25 01:38:01.033847 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 25 01:38:01.033862 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 25 01:38:01.033877 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Mar 25 01:38:01.033896 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Mar 25 01:38:01.033927 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 25 01:38:01.033940 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 25 01:38:01.033953 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 25 01:38:01.033969 kernel: NET: Registered PF_XDP protocol family
Mar 25 01:38:01.034105 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 25 01:38:01.034227 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 25 01:38:01.034344 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 25 01:38:01.034466 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Mar 25 01:38:01.034582 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Mar 25 01:38:01.034721 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Mar 25 01:38:01.034741 kernel: PCI: CLS 0 bytes, default 64
Mar 25 01:38:01.034756 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Mar 25 01:38:01.034771 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Mar 25 01:38:01.034786 kernel: clocksource: Switched to clocksource tsc
Mar 25 01:38:01.034801 kernel: Initialise system trusted keyrings
Mar 25 01:38:01.034815 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Mar 25 01:38:01.034834 kernel: Key type asymmetric registered
Mar 25 01:38:01.034848 kernel: Asymmetric key parser 'x509' registered
Mar 25 01:38:01.034863 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 25 01:38:01.034877 kernel: io scheduler mq-deadline registered
Mar 25 01:38:01.034892 kernel: io scheduler kyber registered
Mar 25 01:38:01.034907 kernel: io scheduler bfq registered
Mar 25 01:38:01.034934 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 25 01:38:01.034949 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 25 01:38:01.034964 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 25 01:38:01.034983 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 25 01:38:01.034997 kernel: i8042: Warning: Keylock active
Mar 25 01:38:01.035012 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 25 01:38:01.035027 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 25 01:38:01.035180 kernel: rtc_cmos 00:00: RTC can wake from S4
Mar 25 01:38:01.035305 kernel: rtc_cmos 00:00: registered as rtc0
Mar 25 01:38:01.035427 kernel: rtc_cmos 00:00: setting system clock to 2025-03-25T01:38:00 UTC (1742866680)
Mar 25 01:38:01.035547 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Mar 25 01:38:01.035569 kernel: intel_pstate: CPU model not supported
Mar 25 01:38:01.035584 kernel: efifb: probing for efifb
Mar 25 01:38:01.035599 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k
Mar 25 01:38:01.035636 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Mar 25 01:38:01.035654 kernel: efifb: scrolling: redraw
Mar 25 01:38:01.035669 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 25 01:38:01.035685 kernel: Console: switching to colour frame buffer device 100x37
Mar 25 01:38:01.035700 kernel: fb0: EFI VGA frame buffer device
Mar 25 01:38:01.035719 kernel: pstore: Using crash dump compression: deflate
Mar 25 01:38:01.035735 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 25 01:38:01.035750 kernel: NET: Registered PF_INET6 protocol family
Mar 25 01:38:01.035766 kernel: Segment Routing with IPv6
Mar 25 01:38:01.035781 kernel: In-situ OAM (IOAM) with IPv6
Mar 25 01:38:01.035796 kernel: NET: Registered PF_PACKET protocol family
Mar 25 01:38:01.035811 kernel: Key type dns_resolver registered
Mar 25 01:38:01.035826 kernel: IPI shorthand broadcast: enabled
Mar 25 01:38:01.035899 kernel: sched_clock: Marking stable (548096523, 154730417)->(800989580, -98162640)
Mar 25 01:38:01.035929 kernel: registered taskstats version 1
Mar 25 01:38:01.035950 kernel: Loading compiled-in X.509 certificates
Mar 25 01:38:01.035966 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: eff01054e94a599f8e404b9a9482f4e2220f5386'
Mar 25 01:38:01.035988 kernel: Key type .fscrypt registered
Mar 25 01:38:01.036003 kernel: Key type fscrypt-provisioning registered
Mar 25 01:38:01.036019 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 25 01:38:01.036035 kernel: ima: Allocated hash algorithm: sha1
Mar 25 01:38:01.036054 kernel: ima: No architecture policies found
Mar 25 01:38:01.036070 kernel: clk: Disabling unused clocks
Mar 25 01:38:01.036089 kernel: Freeing unused kernel image (initmem) memory: 43592K
Mar 25 01:38:01.036104 kernel: Write protecting the kernel read-only data: 40960k
Mar 25 01:38:01.036120 kernel: Freeing unused kernel image (rodata/data gap) memory: 1564K
Mar 25 01:38:01.036136 kernel: Run /init as init process
Mar 25 01:38:01.036151 kernel: with arguments:
Mar 25 01:38:01.036167 kernel: /init
Mar 25 01:38:01.036182 kernel: with environment:
Mar 25 01:38:01.036197 kernel: HOME=/
Mar 25 01:38:01.036212 kernel: TERM=linux
Mar 25 01:38:01.036231 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 25 01:38:01.036249 systemd[1]: Successfully made /usr/ read-only.
Mar 25 01:38:01.036269 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 25 01:38:01.036286 systemd[1]: Detected virtualization amazon.
Mar 25 01:38:01.036302 systemd[1]: Detected architecture x86-64.
Mar 25 01:38:01.036321 systemd[1]: Running in initrd.
Mar 25 01:38:01.036337 systemd[1]: No hostname configured, using default hostname.
Mar 25 01:38:01.036353 systemd[1]: Hostname set to .
Mar 25 01:38:01.036369 systemd[1]: Initializing machine ID from VM UUID.
Mar 25 01:38:01.036385 systemd[1]: Queued start job for default target initrd.target.
Mar 25 01:38:01.036402 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 25 01:38:01.036418 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 25 01:38:01.036439 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 25 01:38:01.036456 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 25 01:38:01.036473 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 25 01:38:01.036491 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 25 01:38:01.036509 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 25 01:38:01.036525 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 25 01:38:01.036541 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 25 01:38:01.036562 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 25 01:38:01.036578 systemd[1]: Reached target paths.target - Path Units.
Mar 25 01:38:01.036594 systemd[1]: Reached target slices.target - Slice Units.
Mar 25 01:38:01.036611 systemd[1]: Reached target swap.target - Swaps.
Mar 25 01:38:01.036627 systemd[1]: Reached target timers.target - Timer Units.
Mar 25 01:38:01.036644 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 25 01:38:01.036660 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 25 01:38:01.036676 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 25 01:38:01.036692 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 25 01:38:01.036712 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 25 01:38:01.036729 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 25 01:38:01.036745 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 25 01:38:01.036761 systemd[1]: Reached target sockets.target - Socket Units.
Mar 25 01:38:01.036777 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 25 01:38:01.036794 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 25 01:38:01.036810 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 25 01:38:01.036827 systemd[1]: Starting systemd-fsck-usr.service...
Mar 25 01:38:01.036847 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 25 01:38:01.036863 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 25 01:38:01.036880 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 25 01:38:01.036897 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 25 01:38:01.036960 systemd-journald[179]: Collecting audit messages is disabled. Mar 25 01:38:01.037000 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 25 01:38:01.037018 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 25 01:38:01.037035 systemd[1]: Finished systemd-fsck-usr.service. Mar 25 01:38:01.037053 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 25 01:38:01.037073 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 25 01:38:01.037091 systemd-journald[179]: Journal started Mar 25 01:38:01.037124 systemd-journald[179]: Runtime Journal (/run/log/journal/ec2d9fcebcff6b7146b08f3dd8e2e6fe) is 4.7M, max 38.1M, 33.3M free. Mar 25 01:38:01.005654 systemd-modules-load[180]: Inserted module 'overlay' Mar 25 01:38:01.044935 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 25 01:38:01.051934 systemd[1]: Started systemd-journald.service - Journal Service. Mar 25 01:38:01.061950 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 25 01:38:01.065117 systemd-modules-load[180]: Inserted module 'br_netfilter' Mar 25 01:38:01.066411 kernel: Bridge firewalling registered Mar 25 01:38:01.066637 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 25 01:38:01.067612 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 25 01:38:01.070651 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 25 01:38:01.076069 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 25 01:38:01.080362 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Mar 25 01:38:01.099288 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 25 01:38:01.107582 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 25 01:38:01.115826 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 25 01:38:01.117967 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 25 01:38:01.119247 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 25 01:38:01.134088 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 25 01:38:01.146078 dracut-cmdline[211]: dracut-dracut-053 Mar 25 01:38:01.151393 dracut-cmdline[211]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e7a00b7ee8d97e8d255663e9d3fa92277da8316702fb7f6d664fd7b137c307e9 Mar 25 01:38:01.220209 systemd-resolved[215]: Positive Trust Anchors: Mar 25 01:38:01.220229 systemd-resolved[215]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 25 01:38:01.220289 systemd-resolved[215]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 25 01:38:01.232026 systemd-resolved[215]: Defaulting to hostname 'linux'. Mar 25 01:38:01.235450 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 25 01:38:01.236360 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 25 01:38:01.285948 kernel: SCSI subsystem initialized Mar 25 01:38:01.300049 kernel: Loading iSCSI transport class v2.0-870. Mar 25 01:38:01.314942 kernel: iscsi: registered transport (tcp) Mar 25 01:38:01.339270 kernel: iscsi: registered transport (qla4xxx) Mar 25 01:38:01.339361 kernel: QLogic iSCSI HBA Driver Mar 25 01:38:01.413840 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 25 01:38:01.420569 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 25 01:38:01.478969 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Mar 25 01:38:01.479063 kernel: device-mapper: uevent: version 1.0.3 Mar 25 01:38:01.480046 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 25 01:38:01.523945 kernel: raid6: avx512x4 gen() 14134 MB/s Mar 25 01:38:01.541948 kernel: raid6: avx512x2 gen() 13517 MB/s Mar 25 01:38:01.558944 kernel: raid6: avx512x1 gen() 14178 MB/s Mar 25 01:38:01.577067 kernel: raid6: avx2x4 gen() 14516 MB/s Mar 25 01:38:01.594937 kernel: raid6: avx2x2 gen() 13791 MB/s Mar 25 01:38:01.613276 kernel: raid6: avx2x1 gen() 9826 MB/s Mar 25 01:38:01.613353 kernel: raid6: using algorithm avx2x4 gen() 14516 MB/s Mar 25 01:38:01.633087 kernel: raid6: .... xor() 4277 MB/s, rmw enabled Mar 25 01:38:01.633167 kernel: raid6: using avx512x2 recovery algorithm Mar 25 01:38:01.657999 kernel: xor: automatically using best checksumming function avx Mar 25 01:38:01.833944 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 25 01:38:01.845846 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 25 01:38:01.848050 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 25 01:38:01.872173 systemd-udevd[398]: Using default interface naming scheme 'v255'. Mar 25 01:38:01.879496 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 25 01:38:01.885862 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 25 01:38:01.911428 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation Mar 25 01:38:01.944218 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 25 01:38:01.945997 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 25 01:38:02.013129 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 25 01:38:02.018270 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Mar 25 01:38:02.056651 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 25 01:38:02.059760 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 25 01:38:02.061525 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 25 01:38:02.062686 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 25 01:38:02.066080 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 25 01:38:02.099618 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 25 01:38:02.127882 kernel: ena 0000:00:05.0: ENA device version: 0.10 Mar 25 01:38:02.172361 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Mar 25 01:38:02.172544 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Mar 25 01:38:02.172700 kernel: cryptd: max_cpu_qlen set to 1000 Mar 25 01:38:02.172805 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:09:1f:3f:a9:d3 Mar 25 01:38:02.171099 (udev-worker)[447]: Network interface NamePolicy= disabled on kernel command line. Mar 25 01:38:02.179538 kernel: nvme nvme0: pci function 0000:00:04.0 Mar 25 01:38:02.179799 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Mar 25 01:38:02.179830 kernel: AVX2 version of gcm_enc/dec engaged. Mar 25 01:38:02.181934 kernel: AES CTR mode by8 optimization enabled Mar 25 01:38:02.193315 kernel: nvme nvme0: 2/0/0 default/read/poll queues Mar 25 01:38:02.197714 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 25 01:38:02.212937 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 25 01:38:02.212971 kernel: GPT:9289727 != 16777215 Mar 25 01:38:02.212990 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 25 01:38:02.213009 kernel: GPT:9289727 != 16777215 Mar 25 01:38:02.213027 kernel: GPT: Use GNU Parted to correct GPT errors. 
Mar 25 01:38:02.213046 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Mar 25 01:38:02.203067 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 25 01:38:02.214885 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 25 01:38:02.215408 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 25 01:38:02.215726 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 25 01:38:02.217908 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 25 01:38:02.221709 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 25 01:38:02.224550 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 25 01:38:02.248327 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 25 01:38:02.250543 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 25 01:38:02.279100 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 25 01:38:02.288989 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by (udev-worker) (445) Mar 25 01:38:02.325940 kernel: BTRFS: device fsid 6d9424cd-1432-492b-b006-b311869817e2 devid 1 transid 39 /dev/nvme0n1p3 scanned by (udev-worker) (449) Mar 25 01:38:02.396094 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Mar 25 01:38:02.407661 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Mar 25 01:38:02.418759 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Mar 25 01:38:02.435638 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. 
Mar 25 01:38:02.436260 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Mar 25 01:38:02.438795 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 25 01:38:02.455420 disk-uuid[632]: Primary Header is updated. Mar 25 01:38:02.455420 disk-uuid[632]: Secondary Entries is updated. Mar 25 01:38:02.455420 disk-uuid[632]: Secondary Header is updated. Mar 25 01:38:02.463259 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Mar 25 01:38:02.477945 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Mar 25 01:38:03.475027 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Mar 25 01:38:03.476672 disk-uuid[633]: The operation has completed successfully. Mar 25 01:38:03.622412 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 25 01:38:03.622553 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 25 01:38:03.664765 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 25 01:38:03.681433 sh[891]: Success Mar 25 01:38:03.702953 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Mar 25 01:38:03.823645 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 25 01:38:03.828022 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 25 01:38:03.838770 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Mar 25 01:38:03.871589 kernel: BTRFS info (device dm-0): first mount of filesystem 6d9424cd-1432-492b-b006-b311869817e2 Mar 25 01:38:03.871724 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 25 01:38:03.871748 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 25 01:38:03.874160 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 25 01:38:03.876435 kernel: BTRFS info (device dm-0): using free space tree Mar 25 01:38:03.964956 kernel: BTRFS info (device dm-0): enabling ssd optimizations Mar 25 01:38:03.979552 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 25 01:38:03.980897 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 25 01:38:03.982232 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 25 01:38:03.985059 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 25 01:38:04.030200 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a72930ba-1354-475c-94df-b83a66efea67 Mar 25 01:38:04.030331 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Mar 25 01:38:04.032584 kernel: BTRFS info (device nvme0n1p6): using free space tree Mar 25 01:38:04.040931 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Mar 25 01:38:04.049964 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a72930ba-1354-475c-94df-b83a66efea67 Mar 25 01:38:04.053081 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 25 01:38:04.056241 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 25 01:38:04.100045 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Mar 25 01:38:04.102659 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 25 01:38:04.145539 systemd-networkd[1080]: lo: Link UP Mar 25 01:38:04.145550 systemd-networkd[1080]: lo: Gained carrier Mar 25 01:38:04.148048 systemd-networkd[1080]: Enumeration completed Mar 25 01:38:04.148758 systemd-networkd[1080]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 25 01:38:04.148764 systemd-networkd[1080]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 25 01:38:04.154520 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 25 01:38:04.155559 systemd[1]: Reached target network.target - Network. Mar 25 01:38:04.157056 systemd-networkd[1080]: eth0: Link UP Mar 25 01:38:04.157061 systemd-networkd[1080]: eth0: Gained carrier Mar 25 01:38:04.157076 systemd-networkd[1080]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 25 01:38:04.181480 systemd-networkd[1080]: eth0: DHCPv4 address 172.31.29.210/20, gateway 172.31.16.1 acquired from 172.31.16.1 Mar 25 01:38:04.410070 ignition[1029]: Ignition 2.20.0 Mar 25 01:38:04.410083 ignition[1029]: Stage: fetch-offline Mar 25 01:38:04.410314 ignition[1029]: no configs at "/usr/lib/ignition/base.d" Mar 25 01:38:04.410328 ignition[1029]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 25 01:38:04.412239 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 25 01:38:04.410632 ignition[1029]: Ignition finished successfully Mar 25 01:38:04.415420 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Mar 25 01:38:04.438034 ignition[1090]: Ignition 2.20.0 Mar 25 01:38:04.438049 ignition[1090]: Stage: fetch Mar 25 01:38:04.438506 ignition[1090]: no configs at "/usr/lib/ignition/base.d" Mar 25 01:38:04.438519 ignition[1090]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 25 01:38:04.438809 ignition[1090]: PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 25 01:38:04.495384 ignition[1090]: PUT result: OK Mar 25 01:38:04.499150 ignition[1090]: parsed url from cmdline: "" Mar 25 01:38:04.499162 ignition[1090]: no config URL provided Mar 25 01:38:04.499171 ignition[1090]: reading system config file "/usr/lib/ignition/user.ign" Mar 25 01:38:04.499186 ignition[1090]: no config at "/usr/lib/ignition/user.ign" Mar 25 01:38:04.499208 ignition[1090]: PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 25 01:38:04.500099 ignition[1090]: PUT result: OK Mar 25 01:38:04.500164 ignition[1090]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Mar 25 01:38:04.501381 ignition[1090]: GET result: OK Mar 25 01:38:04.501461 ignition[1090]: parsing config with SHA512: a74a3d93d4b8589d91a928acf4db5d2e95fcd5188ef5e5babe6abdc29aaffa96da9bdb1b0f48fcde235e82505a05d8b09f733017283b39faa3b41c9d50ada031 Mar 25 01:38:04.509442 unknown[1090]: fetched base config from "system" Mar 25 01:38:04.509456 unknown[1090]: fetched base config from "system" Mar 25 01:38:04.510088 ignition[1090]: fetch: fetch complete Mar 25 01:38:04.509466 unknown[1090]: fetched user config from "aws" Mar 25 01:38:04.510096 ignition[1090]: fetch: fetch passed Mar 25 01:38:04.512012 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 25 01:38:04.510151 ignition[1090]: Ignition finished successfully Mar 25 01:38:04.514294 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Mar 25 01:38:04.539542 ignition[1096]: Ignition 2.20.0 Mar 25 01:38:04.539556 ignition[1096]: Stage: kargs Mar 25 01:38:04.540113 ignition[1096]: no configs at "/usr/lib/ignition/base.d" Mar 25 01:38:04.540128 ignition[1096]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 25 01:38:04.540255 ignition[1096]: PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 25 01:38:04.541354 ignition[1096]: PUT result: OK Mar 25 01:38:04.544317 ignition[1096]: kargs: kargs passed Mar 25 01:38:04.544389 ignition[1096]: Ignition finished successfully Mar 25 01:38:04.546035 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 25 01:38:04.547402 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 25 01:38:04.577783 ignition[1103]: Ignition 2.20.0 Mar 25 01:38:04.577808 ignition[1103]: Stage: disks Mar 25 01:38:04.578332 ignition[1103]: no configs at "/usr/lib/ignition/base.d" Mar 25 01:38:04.578346 ignition[1103]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 25 01:38:04.578494 ignition[1103]: PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 25 01:38:04.579501 ignition[1103]: PUT result: OK Mar 25 01:38:04.582493 ignition[1103]: disks: disks passed Mar 25 01:38:04.582567 ignition[1103]: Ignition finished successfully Mar 25 01:38:04.583860 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 25 01:38:04.584899 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 25 01:38:04.585228 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 25 01:38:04.585668 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 25 01:38:04.585905 systemd[1]: Reached target sysinit.target - System Initialization. Mar 25 01:38:04.586238 systemd[1]: Reached target basic.target - Basic System. Mar 25 01:38:04.588593 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Mar 25 01:38:04.633461 systemd-fsck[1112]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 25 01:38:04.636982 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 25 01:38:04.638992 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 25 01:38:04.754936 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 4e6dca82-2e50-453c-be25-61f944b72008 r/w with ordered data mode. Quota mode: none. Mar 25 01:38:04.756004 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 25 01:38:04.757029 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 25 01:38:04.761416 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 25 01:38:04.766010 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 25 01:38:04.767650 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 25 01:38:04.768749 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 25 01:38:04.768785 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 25 01:38:04.775142 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 25 01:38:04.777091 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 25 01:38:04.793932 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1131) Mar 25 01:38:04.800370 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a72930ba-1354-475c-94df-b83a66efea67 Mar 25 01:38:04.800440 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Mar 25 01:38:04.800472 kernel: BTRFS info (device nvme0n1p6): using free space tree Mar 25 01:38:04.809929 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Mar 25 01:38:04.811417 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 25 01:38:05.128861 initrd-setup-root[1156]: cut: /sysroot/etc/passwd: No such file or directory Mar 25 01:38:05.145738 initrd-setup-root[1163]: cut: /sysroot/etc/group: No such file or directory Mar 25 01:38:05.151474 initrd-setup-root[1170]: cut: /sysroot/etc/shadow: No such file or directory Mar 25 01:38:05.155852 initrd-setup-root[1177]: cut: /sysroot/etc/gshadow: No such file or directory Mar 25 01:38:05.414088 systemd-networkd[1080]: eth0: Gained IPv6LL Mar 25 01:38:05.422137 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 25 01:38:05.424126 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 25 01:38:05.428071 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 25 01:38:05.440270 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 25 01:38:05.444001 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a72930ba-1354-475c-94df-b83a66efea67 Mar 25 01:38:05.477709 ignition[1244]: INFO : Ignition 2.20.0 Mar 25 01:38:05.477709 ignition[1244]: INFO : Stage: mount Mar 25 01:38:05.477709 ignition[1244]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 25 01:38:05.477709 ignition[1244]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 25 01:38:05.477709 ignition[1244]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 25 01:38:05.481367 ignition[1244]: INFO : PUT result: OK Mar 25 01:38:05.478974 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 25 01:38:05.482657 ignition[1244]: INFO : mount: mount passed Mar 25 01:38:05.483178 ignition[1244]: INFO : Ignition finished successfully Mar 25 01:38:05.483830 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 25 01:38:05.485549 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 25 01:38:05.500070 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Mar 25 01:38:05.530944 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/nvme0n1p6 scanned by mount (1257) Mar 25 01:38:05.531011 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a72930ba-1354-475c-94df-b83a66efea67 Mar 25 01:38:05.535237 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Mar 25 01:38:05.535297 kernel: BTRFS info (device nvme0n1p6): using free space tree Mar 25 01:38:05.541932 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Mar 25 01:38:05.544786 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 25 01:38:05.571399 ignition[1274]: INFO : Ignition 2.20.0 Mar 25 01:38:05.571399 ignition[1274]: INFO : Stage: files Mar 25 01:38:05.572798 ignition[1274]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 25 01:38:05.572798 ignition[1274]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 25 01:38:05.572798 ignition[1274]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 25 01:38:05.574025 ignition[1274]: INFO : PUT result: OK Mar 25 01:38:05.575688 ignition[1274]: DEBUG : files: compiled without relabeling support, skipping Mar 25 01:38:05.589933 ignition[1274]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 25 01:38:05.589933 ignition[1274]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 25 01:38:05.628696 ignition[1274]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 25 01:38:05.629762 ignition[1274]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 25 01:38:05.629762 ignition[1274]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 25 01:38:05.629216 unknown[1274]: wrote ssh authorized keys file for user: core Mar 25 01:38:05.632210 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file 
"/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Mar 25 01:38:05.632210 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Mar 25 01:38:05.704149 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 25 01:38:05.863901 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Mar 25 01:38:05.867063 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 25 01:38:05.867063 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Mar 25 01:38:06.322639 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 25 01:38:06.433304 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 25 01:38:06.435004 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Mar 25 01:38:06.435004 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Mar 25 01:38:06.435004 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 25 01:38:06.435004 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 25 01:38:06.435004 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 25 01:38:06.435004 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file 
"/sysroot/home/core/nfs-pod.yaml" Mar 25 01:38:06.435004 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 25 01:38:06.442539 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 25 01:38:06.442539 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 25 01:38:06.442539 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 25 01:38:06.442539 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 25 01:38:06.442539 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 25 01:38:06.442539 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 25 01:38:06.442539 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Mar 25 01:38:06.722881 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Mar 25 01:38:07.044986 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 25 01:38:07.044986 ignition[1274]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Mar 25 01:38:07.056875 ignition[1274]: INFO : files: op(c): op(d): [started] writing unit 
"prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 25 01:38:07.058262 ignition[1274]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 25 01:38:07.058262 ignition[1274]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Mar 25 01:38:07.058262 ignition[1274]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Mar 25 01:38:07.058262 ignition[1274]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Mar 25 01:38:07.058262 ignition[1274]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 25 01:38:07.058262 ignition[1274]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 25 01:38:07.058262 ignition[1274]: INFO : files: files passed Mar 25 01:38:07.058262 ignition[1274]: INFO : Ignition finished successfully Mar 25 01:38:07.059797 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 25 01:38:07.064092 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 25 01:38:07.070977 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 25 01:38:07.081741 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 25 01:38:07.081872 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Mar 25 01:38:07.089359 initrd-setup-root-after-ignition[1304]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 25 01:38:07.089359 initrd-setup-root-after-ignition[1304]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 25 01:38:07.093503 initrd-setup-root-after-ignition[1308]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 25 01:38:07.094113 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 25 01:38:07.095718 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 25 01:38:07.097643 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 25 01:38:07.147510 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 25 01:38:07.147650 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 25 01:38:07.148975 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 25 01:38:07.150037 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 25 01:38:07.150778 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 25 01:38:07.152775 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 25 01:38:07.175632 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 25 01:38:07.177984 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 25 01:38:07.197515 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 25 01:38:07.198264 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 25 01:38:07.199192 systemd[1]: Stopped target timers.target - Timer Units.
Mar 25 01:38:07.200059 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 25 01:38:07.200244 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 25 01:38:07.201485 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 25 01:38:07.202310 systemd[1]: Stopped target basic.target - Basic System.
Mar 25 01:38:07.203081 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 25 01:38:07.203905 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 25 01:38:07.204775 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 25 01:38:07.205529 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 25 01:38:07.206367 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 25 01:38:07.207185 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 25 01:38:07.208303 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 25 01:38:07.209040 systemd[1]: Stopped target swap.target - Swaps.
Mar 25 01:38:07.209724 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 25 01:38:07.209901 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 25 01:38:07.211056 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 25 01:38:07.211815 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 25 01:38:07.212504 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 25 01:38:07.212818 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 25 01:38:07.213285 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 25 01:38:07.213451 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 25 01:38:07.214808 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 25 01:38:07.215006 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 25 01:38:07.215677 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 25 01:38:07.215822 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 25 01:38:07.219146 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 25 01:38:07.223127 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 25 01:38:07.224283 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 25 01:38:07.225062 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 25 01:38:07.225816 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 25 01:38:07.227533 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 25 01:38:07.234348 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 25 01:38:07.235151 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 25 01:38:07.253736 ignition[1328]: INFO : Ignition 2.20.0
Mar 25 01:38:07.253736 ignition[1328]: INFO : Stage: umount
Mar 25 01:38:07.256135 ignition[1328]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 25 01:38:07.256135 ignition[1328]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 25 01:38:07.256135 ignition[1328]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 25 01:38:07.256135 ignition[1328]: INFO : PUT result: OK
Mar 25 01:38:07.260801 ignition[1328]: INFO : umount: umount passed
Mar 25 01:38:07.260801 ignition[1328]: INFO : Ignition finished successfully
Mar 25 01:38:07.263110 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 25 01:38:07.263266 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 25 01:38:07.264678 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 25 01:38:07.264904 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 25 01:38:07.267124 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 25 01:38:07.267205 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 25 01:38:07.267859 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 25 01:38:07.267953 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 25 01:38:07.268506 systemd[1]: Stopped target network.target - Network.
Mar 25 01:38:07.269239 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 25 01:38:07.269307 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 25 01:38:07.269865 systemd[1]: Stopped target paths.target - Path Units.
Mar 25 01:38:07.271085 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 25 01:38:07.274965 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 25 01:38:07.275328 systemd[1]: Stopped target slices.target - Slice Units.
Mar 25 01:38:07.276279 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 25 01:38:07.277029 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 25 01:38:07.277089 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 25 01:38:07.277776 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 25 01:38:07.277826 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 25 01:38:07.278376 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 25 01:38:07.278446 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 25 01:38:07.279016 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 25 01:38:07.279071 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 25 01:38:07.279796 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 25 01:38:07.280403 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 25 01:38:07.282838 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 25 01:38:07.288039 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 25 01:38:07.288175 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 25 01:38:07.292606 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 25 01:38:07.294014 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 25 01:38:07.294238 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 25 01:38:07.296407 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 25 01:38:07.297410 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 25 01:38:07.297486 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 25 01:38:07.299563 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 25 01:38:07.300219 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 25 01:38:07.300289 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 25 01:38:07.300835 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 25 01:38:07.300895 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 25 01:38:07.305801 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 25 01:38:07.305875 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 25 01:38:07.306846 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 25 01:38:07.306936 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 25 01:38:07.307762 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 25 01:38:07.314448 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 25 01:38:07.314549 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 25 01:38:07.329268 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 25 01:38:07.329474 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 25 01:38:07.331227 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 25 01:38:07.331310 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 25 01:38:07.332348 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 25 01:38:07.332394 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 25 01:38:07.334759 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 25 01:38:07.334828 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 25 01:38:07.336077 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 25 01:38:07.336140 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 25 01:38:07.337305 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 25 01:38:07.337372 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 25 01:38:07.340398 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 25 01:38:07.341758 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 25 01:38:07.342482 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 25 01:38:07.344545 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 25 01:38:07.345225 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 25 01:38:07.347366 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 25 01:38:07.347435 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 25 01:38:07.348408 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 25 01:38:07.348466 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 25 01:38:07.350530 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 25 01:38:07.350614 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 25 01:38:07.353731 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 25 01:38:07.353871 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 25 01:38:07.360100 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 25 01:38:07.360223 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 25 01:38:07.388371 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 25 01:38:07.388514 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 25 01:38:07.389773 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 25 01:38:07.397770 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 25 01:38:07.398081 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 25 01:38:07.400176 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 25 01:38:07.422134 systemd[1]: Switching root.
Mar 25 01:38:07.455572 systemd-journald[179]: Journal stopped
Mar 25 01:38:09.286946 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
Mar 25 01:38:09.287036 kernel: SELinux: policy capability network_peer_controls=1
Mar 25 01:38:09.287060 kernel: SELinux: policy capability open_perms=1
Mar 25 01:38:09.287087 kernel: SELinux: policy capability extended_socket_class=1
Mar 25 01:38:09.287109 kernel: SELinux: policy capability always_check_network=0
Mar 25 01:38:09.287136 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 25 01:38:09.287158 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 25 01:38:09.287234 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 25 01:38:09.287254 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 25 01:38:09.287273 kernel: audit: type=1403 audit(1742866687.840:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 25 01:38:09.287292 systemd[1]: Successfully loaded SELinux policy in 68.785ms.
Mar 25 01:38:09.287329 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.944ms.
Mar 25 01:38:09.287355 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 25 01:38:09.287373 systemd[1]: Detected virtualization amazon.
Mar 25 01:38:09.287392 systemd[1]: Detected architecture x86-64.
Mar 25 01:38:09.287413 systemd[1]: Detected first boot.
Mar 25 01:38:09.287431 systemd[1]: Initializing machine ID from VM UUID.
Mar 25 01:38:09.287451 zram_generator::config[1373]: No configuration found.
Mar 25 01:38:09.287469 kernel: Guest personality initialized and is inactive
Mar 25 01:38:09.287486 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Mar 25 01:38:09.287503 kernel: Initialized host personality
Mar 25 01:38:09.287522 kernel: NET: Registered PF_VSOCK protocol family
Mar 25 01:38:09.287540 systemd[1]: Populated /etc with preset unit settings.
Mar 25 01:38:09.287559 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 25 01:38:09.287576 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 25 01:38:09.287596 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 25 01:38:09.287616 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 25 01:38:09.287641 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 25 01:38:09.287661 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 25 01:38:09.287682 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 25 01:38:09.287713 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 25 01:38:09.287731 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 25 01:38:09.287751 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 25 01:38:09.287769 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 25 01:38:09.287786 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 25 01:38:09.287805 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 25 01:38:09.287823 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 25 01:38:09.287841 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 25 01:38:09.287870 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 25 01:38:09.287890 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 25 01:38:09.290867 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 25 01:38:09.290936 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 25 01:38:09.290962 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 25 01:38:09.290983 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 25 01:38:09.291003 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 25 01:38:09.291028 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 25 01:38:09.291047 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 25 01:38:09.291067 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 25 01:38:09.291087 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 25 01:38:09.291106 systemd[1]: Reached target slices.target - Slice Units.
Mar 25 01:38:09.291123 systemd[1]: Reached target swap.target - Swaps.
Mar 25 01:38:09.291143 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 25 01:38:09.291163 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 25 01:38:09.291185 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 25 01:38:09.291206 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 25 01:38:09.291232 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 25 01:38:09.291253 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 25 01:38:09.291273 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 25 01:38:09.291293 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 25 01:38:09.291315 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 25 01:38:09.291335 systemd[1]: Mounting media.mount - External Media Directory...
Mar 25 01:38:09.291449 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 25 01:38:09.291472 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 25 01:38:09.291496 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 25 01:38:09.291518 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 25 01:38:09.291539 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 25 01:38:09.291558 systemd[1]: Reached target machines.target - Containers.
Mar 25 01:38:09.291577 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 25 01:38:09.291598 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 25 01:38:09.291617 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 25 01:38:09.291636 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 25 01:38:09.291656 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 25 01:38:09.291682 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 25 01:38:09.291703 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 25 01:38:09.291723 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 25 01:38:09.291743 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 25 01:38:09.291763 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 25 01:38:09.291783 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 25 01:38:09.291803 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 25 01:38:09.291821 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 25 01:38:09.291846 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 25 01:38:09.291868 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 25 01:38:09.291887 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 25 01:38:09.291907 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 25 01:38:09.291943 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 25 01:38:09.292405 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 25 01:38:09.292432 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 25 01:38:09.292452 kernel: loop: module loaded
Mar 25 01:38:09.292474 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 25 01:38:09.292503 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 25 01:38:09.292524 systemd[1]: Stopped verity-setup.service.
Mar 25 01:38:09.292544 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 25 01:38:09.292565 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 25 01:38:09.292590 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 25 01:38:09.292611 systemd[1]: Mounted media.mount - External Media Directory.
Mar 25 01:38:09.292631 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 25 01:38:09.292688 systemd-journald[1459]: Collecting audit messages is disabled.
Mar 25 01:38:09.292727 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 25 01:38:09.292753 systemd-journald[1459]: Journal started
Mar 25 01:38:09.292793 systemd-journald[1459]: Runtime Journal (/run/log/journal/ec2d9fcebcff6b7146b08f3dd8e2e6fe) is 4.7M, max 38.1M, 33.3M free.
Mar 25 01:38:09.297252 kernel: fuse: init (API version 7.39)
Mar 25 01:38:09.297301 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 25 01:38:08.910838 systemd[1]: Queued start job for default target multi-user.target.
Mar 25 01:38:08.921984 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Mar 25 01:38:08.922410 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 25 01:38:09.302543 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 25 01:38:09.304974 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 25 01:38:09.306031 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 25 01:38:09.306278 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 25 01:38:09.307679 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 25 01:38:09.308128 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 25 01:38:09.309526 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 25 01:38:09.309811 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 25 01:38:09.311426 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 25 01:38:09.311651 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 25 01:38:09.315271 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 25 01:38:09.315519 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 25 01:38:09.316480 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 25 01:38:09.317440 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 25 01:38:09.325023 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 25 01:38:09.343980 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 25 01:38:09.351117 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 25 01:38:09.373157 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 25 01:38:09.375021 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 25 01:38:09.375070 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 25 01:38:09.379438 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 25 01:38:09.383228 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 25 01:38:09.395119 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 25 01:38:09.395866 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 25 01:38:09.423394 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 25 01:38:09.425408 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 25 01:38:09.426936 kernel: ACPI: bus type drm_connector registered
Mar 25 01:38:09.427122 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 25 01:38:09.429144 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 25 01:38:09.429797 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 25 01:38:09.432202 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 25 01:38:09.437143 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 25 01:38:09.449418 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 25 01:38:09.455032 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 25 01:38:09.456064 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 25 01:38:09.456308 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 25 01:38:09.462382 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 25 01:38:09.464174 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 25 01:38:09.465250 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 25 01:38:09.466180 systemd-journald[1459]: Time spent on flushing to /var/log/journal/ec2d9fcebcff6b7146b08f3dd8e2e6fe is 29.066ms for 1012 entries.
Mar 25 01:38:09.466180 systemd-journald[1459]: System Journal (/var/log/journal/ec2d9fcebcff6b7146b08f3dd8e2e6fe) is 8M, max 195.6M, 187.6M free.
Mar 25 01:38:09.510171 systemd-journald[1459]: Received client request to flush runtime journal.
Mar 25 01:38:09.477486 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 25 01:38:09.490523 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 25 01:38:09.505752 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 25 01:38:09.510562 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 25 01:38:09.521651 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 25 01:38:09.543889 kernel: loop0: detected capacity change from 0 to 109808
Mar 25 01:38:09.541382 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 25 01:38:09.552271 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 25 01:38:09.571529 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 25 01:38:09.587036 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 25 01:38:09.590027 udevadm[1521]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 25 01:38:09.608303 systemd-tmpfiles[1506]: ACLs are not supported, ignoring.
Mar 25 01:38:09.608330 systemd-tmpfiles[1506]: ACLs are not supported, ignoring.
Mar 25 01:38:09.616491 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 25 01:38:09.620084 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 25 01:38:09.653937 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 25 01:38:09.695819 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 25 01:38:09.699980 kernel: loop1: detected capacity change from 0 to 151640
Mar 25 01:38:09.700055 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 25 01:38:09.737630 systemd-tmpfiles[1530]: ACLs are not supported, ignoring.
Mar 25 01:38:09.737897 systemd-tmpfiles[1530]: ACLs are not supported, ignoring.
Mar 25 01:38:09.744834 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 25 01:38:09.807974 kernel: loop2: detected capacity change from 0 to 64352
Mar 25 01:38:09.847032 kernel: loop3: detected capacity change from 0 to 210664
Mar 25 01:38:09.924443 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 25 01:38:09.974962 kernel: loop4: detected capacity change from 0 to 109808
Mar 25 01:38:09.995960 kernel: loop5: detected capacity change from 0 to 151640
Mar 25 01:38:10.018472 kernel: loop6: detected capacity change from 0 to 64352
Mar 25 01:38:10.043006 kernel: loop7: detected capacity change from 0 to 210664
Mar 25 01:38:10.079061 (sd-merge)[1536]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Mar 25 01:38:10.079801 (sd-merge)[1536]: Merged extensions into '/usr'.
Mar 25 01:38:10.087336 systemd[1]: Reload requested from client PID 1505 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 25 01:38:10.087470 systemd[1]: Reloading...
Mar 25 01:38:10.204947 zram_generator::config[1560]: No configuration found.
Mar 25 01:38:10.432923 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 25 01:38:10.560848 systemd[1]: Reloading finished in 469 ms.
Mar 25 01:38:10.585401 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 25 01:38:10.591414 systemd[1]: Starting ensure-sysext.service...
Mar 25 01:38:10.594059 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 25 01:38:10.628745 systemd-tmpfiles[1616]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 25 01:38:10.631198 systemd-tmpfiles[1616]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 25 01:38:10.632414 systemd[1]: Reload requested from client PID 1615 ('systemctl') (unit ensure-sysext.service)... Mar 25 01:38:10.632509 systemd[1]: Reloading... Mar 25 01:38:10.634510 systemd-tmpfiles[1616]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 25 01:38:10.636972 systemd-tmpfiles[1616]: ACLs are not supported, ignoring. Mar 25 01:38:10.637082 systemd-tmpfiles[1616]: ACLs are not supported, ignoring. Mar 25 01:38:10.647660 systemd-tmpfiles[1616]: Detected autofs mount point /boot during canonicalization of boot. Mar 25 01:38:10.647680 systemd-tmpfiles[1616]: Skipping /boot Mar 25 01:38:10.671840 systemd-tmpfiles[1616]: Detected autofs mount point /boot during canonicalization of boot. Mar 25 01:38:10.671855 systemd-tmpfiles[1616]: Skipping /boot Mar 25 01:38:10.763939 zram_generator::config[1646]: No configuration found. Mar 25 01:38:10.896457 ldconfig[1500]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 25 01:38:10.906463 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 25 01:38:10.978936 systemd[1]: Reloading finished in 345 ms. Mar 25 01:38:10.991683 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 25 01:38:10.992481 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 25 01:38:11.002902 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 25 01:38:11.012668 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 25 01:38:11.017031 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Mar 25 01:38:11.020334 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 25 01:38:11.026019 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 25 01:38:11.030203 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 25 01:38:11.034474 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 25 01:38:11.040723 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 25 01:38:11.041015 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 25 01:38:11.047273 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 25 01:38:11.057475 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 25 01:38:11.069030 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 25 01:38:11.070188 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 25 01:38:11.071008 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 25 01:38:11.071163 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 25 01:38:11.085986 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 25 01:38:11.086337 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Mar 25 01:38:11.086651 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 25 01:38:11.086795 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 25 01:38:11.090983 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 25 01:38:11.092972 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 25 01:38:11.094216 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 25 01:38:11.111441 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 25 01:38:11.111880 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 25 01:38:11.115418 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 25 01:38:11.118293 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 25 01:38:11.134084 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 25 01:38:11.134869 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 25 01:38:11.135065 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 25 01:38:11.135255 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Mar 25 01:38:11.135478 systemd[1]: Reached target time-set.target - System Time Set. Mar 25 01:38:11.138255 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 25 01:38:11.139783 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 25 01:38:11.142154 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 25 01:38:11.143355 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 25 01:38:11.143578 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 25 01:38:11.146848 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 25 01:38:11.154874 systemd[1]: Finished ensure-sysext.service. Mar 25 01:38:11.160226 systemd-udevd[1705]: Using default interface naming scheme 'v255'. Mar 25 01:38:11.166817 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 25 01:38:11.171153 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 25 01:38:11.177784 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 25 01:38:11.179626 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 25 01:38:11.205262 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 25 01:38:11.219959 augenrules[1740]: No rules Mar 25 01:38:11.221459 systemd[1]: audit-rules.service: Deactivated successfully. Mar 25 01:38:11.221742 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 25 01:38:11.223849 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 25 01:38:11.238618 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 25 01:38:11.241597 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Mar 25 01:38:11.246586 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 25 01:38:11.247987 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 25 01:38:11.352837 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 25 01:38:11.366097 (udev-worker)[1766]: Network interface NamePolicy= disabled on kernel command line. Mar 25 01:38:11.434857 systemd-resolved[1704]: Positive Trust Anchors: Mar 25 01:38:11.435255 systemd-resolved[1704]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 25 01:38:11.435386 systemd-resolved[1704]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 25 01:38:11.447074 systemd-resolved[1704]: Defaulting to hostname 'linux'. Mar 25 01:38:11.449894 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 25 01:38:11.451014 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 25 01:38:11.455292 systemd-networkd[1756]: lo: Link UP Mar 25 01:38:11.455300 systemd-networkd[1756]: lo: Gained carrier Mar 25 01:38:11.458834 systemd-networkd[1756]: Enumeration completed Mar 25 01:38:11.459007 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Mar 25 01:38:11.460084 systemd-networkd[1756]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 25 01:38:11.460090 systemd-networkd[1756]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 25 01:38:11.461079 systemd[1]: Reached target network.target - Network. Mar 25 01:38:11.465173 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 25 01:38:11.465640 systemd-networkd[1756]: eth0: Link UP Mar 25 01:38:11.465837 systemd-networkd[1756]: eth0: Gained carrier Mar 25 01:38:11.465860 systemd-networkd[1756]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 25 01:38:11.467543 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 25 01:38:11.478038 systemd-networkd[1756]: eth0: DHCPv4 address 172.31.29.210/20, gateway 172.31.16.1 acquired from 172.31.16.1 Mar 25 01:38:11.497951 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1767) Mar 25 01:38:11.505479 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
Mar 25 01:38:11.550960 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 25 01:38:11.561937 kernel: ACPI: button: Power Button [PWRF] Mar 25 01:38:11.565945 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Mar 25 01:38:11.569004 kernel: ACPI: button: Sleep Button [SLPF] Mar 25 01:38:11.585013 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Mar 25 01:38:11.603614 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 Mar 25 01:38:11.681941 kernel: mousedev: PS/2 mouse device common for all mice Mar 25 01:38:11.715284 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Mar 25 01:38:11.716129 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 25 01:38:11.718557 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 25 01:38:11.721247 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 25 01:38:11.724135 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 25 01:38:11.754064 lvm[1867]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 25 01:38:11.755089 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 25 01:38:11.779074 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 25 01:38:11.779890 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 25 01:38:11.781806 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 25 01:38:11.798980 lvm[1875]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Mar 25 01:38:11.824691 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 25 01:38:11.825569 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 25 01:38:11.826798 systemd[1]: Reached target sysinit.target - System Initialization. Mar 25 01:38:11.827590 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 25 01:38:11.828089 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 25 01:38:11.828655 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 25 01:38:11.829273 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 25 01:38:11.829669 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 25 01:38:11.830136 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 25 01:38:11.830177 systemd[1]: Reached target paths.target - Path Units. Mar 25 01:38:11.830564 systemd[1]: Reached target timers.target - Timer Units. Mar 25 01:38:11.832132 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 25 01:38:11.833941 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 25 01:38:11.837153 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 25 01:38:11.837946 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 25 01:38:11.838432 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 25 01:38:11.840892 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 25 01:38:11.842291 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. 
Mar 25 01:38:11.843502 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 25 01:38:11.844041 systemd[1]: Reached target sockets.target - Socket Units. Mar 25 01:38:11.844422 systemd[1]: Reached target basic.target - Basic System. Mar 25 01:38:11.844833 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 25 01:38:11.844875 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 25 01:38:11.845931 systemd[1]: Starting containerd.service - containerd container runtime... Mar 25 01:38:11.850092 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 25 01:38:11.853110 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 25 01:38:11.861844 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 25 01:38:11.866418 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 25 01:38:11.867108 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 25 01:38:11.870581 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 25 01:38:11.875661 systemd[1]: Started ntpd.service - Network Time Service. Mar 25 01:38:11.884424 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 25 01:38:11.889148 systemd[1]: Starting setup-oem.service - Setup OEM... Mar 25 01:38:11.895931 jq[1885]: false Mar 25 01:38:11.892236 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 25 01:38:11.909807 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 25 01:38:11.919225 systemd[1]: Starting systemd-logind.service - User Login Management... 
Mar 25 01:38:11.922124 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 25 01:38:11.941411 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 25 01:38:11.946105 systemd[1]: Starting update-engine.service - Update Engine... Mar 25 01:38:11.956091 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 25 01:38:11.969651 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 25 01:38:11.971024 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 25 01:38:11.979003 jq[1900]: true Mar 25 01:38:11.989384 dbus-daemon[1884]: [system] SELinux support is enabled Mar 25 01:38:11.993461 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 25 01:38:12.000805 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 25 01:38:12.001652 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 25 01:38:12.007577 systemd[1]: motdgen.service: Deactivated successfully. Mar 25 01:38:12.007964 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Mar 25 01:38:12.013839 dbus-daemon[1884]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1756 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Mar 25 01:38:12.034895 extend-filesystems[1886]: Found loop4 Mar 25 01:38:12.034895 extend-filesystems[1886]: Found loop5 Mar 25 01:38:12.034895 extend-filesystems[1886]: Found loop6 Mar 25 01:38:12.034895 extend-filesystems[1886]: Found loop7 Mar 25 01:38:12.034895 extend-filesystems[1886]: Found nvme0n1 Mar 25 01:38:12.034895 extend-filesystems[1886]: Found nvme0n1p1 Mar 25 01:38:12.034895 extend-filesystems[1886]: Found nvme0n1p2 Mar 25 01:38:12.047727 dbus-daemon[1884]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 25 01:38:12.053989 extend-filesystems[1886]: Found nvme0n1p3 Mar 25 01:38:12.053989 extend-filesystems[1886]: Found usr Mar 25 01:38:12.053989 extend-filesystems[1886]: Found nvme0n1p4 Mar 25 01:38:12.053989 extend-filesystems[1886]: Found nvme0n1p6 Mar 25 01:38:12.053989 extend-filesystems[1886]: Found nvme0n1p7 Mar 25 01:38:12.053989 extend-filesystems[1886]: Found nvme0n1p9 Mar 25 01:38:12.053989 extend-filesystems[1886]: Checking size of /dev/nvme0n1p9 Mar 25 01:38:12.047085 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Mar 25 01:38:12.087380 ntpd[1888]: 25 Mar 01:38:12 ntpd[1888]: ntpd 4.2.8p17@1.4004-o Mon Mar 24 23:09:41 UTC 2025 (1): Starting Mar 25 01:38:12.087380 ntpd[1888]: 25 Mar 01:38:12 ntpd[1888]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 25 01:38:12.087380 ntpd[1888]: 25 Mar 01:38:12 ntpd[1888]: ---------------------------------------------------- Mar 25 01:38:12.087380 ntpd[1888]: 25 Mar 01:38:12 ntpd[1888]: ntp-4 is maintained by Network Time Foundation, Mar 25 01:38:12.087380 ntpd[1888]: 25 Mar 01:38:12 ntpd[1888]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 25 01:38:12.087380 ntpd[1888]: 25 Mar 01:38:12 ntpd[1888]: corporation. Support and training for ntp-4 are Mar 25 01:38:12.087380 ntpd[1888]: 25 Mar 01:38:12 ntpd[1888]: available at https://www.nwtime.org/support Mar 25 01:38:12.087380 ntpd[1888]: 25 Mar 01:38:12 ntpd[1888]: ---------------------------------------------------- Mar 25 01:38:12.087380 ntpd[1888]: 25 Mar 01:38:12 ntpd[1888]: proto: precision = 0.095 usec (-23) Mar 25 01:38:12.060347 ntpd[1888]: ntpd 4.2.8p17@1.4004-o Mon Mar 24 23:09:41 UTC 2025 (1): Starting Mar 25 01:38:12.047145 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 25 01:38:12.100460 jq[1910]: true Mar 25 01:38:12.104190 update_engine[1898]: I20250325 01:38:12.099179 1898 main.cc:92] Flatcar Update Engine starting Mar 25 01:38:12.104431 ntpd[1888]: 25 Mar 01:38:12 ntpd[1888]: basedate set to 2025-03-12 Mar 25 01:38:12.104431 ntpd[1888]: 25 Mar 01:38:12 ntpd[1888]: gps base set to 2025-03-16 (week 2358) Mar 25 01:38:12.060372 ntpd[1888]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 25 01:38:12.053556 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Mar 25 01:38:12.060383 ntpd[1888]: ---------------------------------------------------- Mar 25 01:38:12.053589 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 25 01:38:12.060393 ntpd[1888]: ntp-4 is maintained by Network Time Foundation, Mar 25 01:38:12.061707 (ntainerd)[1912]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 25 01:38:12.060402 ntpd[1888]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 25 01:38:12.080260 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Mar 25 01:38:12.121139 ntpd[1888]: 25 Mar 01:38:12 ntpd[1888]: Listen and drop on 0 v6wildcard [::]:123 Mar 25 01:38:12.121139 ntpd[1888]: 25 Mar 01:38:12 ntpd[1888]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 25 01:38:12.121139 ntpd[1888]: 25 Mar 01:38:12 ntpd[1888]: Listen normally on 2 lo 127.0.0.1:123 Mar 25 01:38:12.121139 ntpd[1888]: 25 Mar 01:38:12 ntpd[1888]: Listen normally on 3 eth0 172.31.29.210:123 Mar 25 01:38:12.121139 ntpd[1888]: 25 Mar 01:38:12 ntpd[1888]: Listen normally on 4 lo [::1]:123 Mar 25 01:38:12.121139 ntpd[1888]: 25 Mar 01:38:12 ntpd[1888]: bind(21) AF_INET6 fe80::409:1fff:fe3f:a9d3%2#123 flags 0x11 failed: Cannot assign requested address Mar 25 01:38:12.121139 ntpd[1888]: 25 Mar 01:38:12 ntpd[1888]: unable to create socket on eth0 (5) for fe80::409:1fff:fe3f:a9d3%2#123 Mar 25 01:38:12.121139 ntpd[1888]: 25 Mar 01:38:12 ntpd[1888]: failed to init interface for address fe80::409:1fff:fe3f:a9d3%2 Mar 25 01:38:12.121139 ntpd[1888]: 25 Mar 01:38:12 ntpd[1888]: Listening on routing socket on fd #21 for interface updates Mar 25 01:38:12.060414 ntpd[1888]: corporation. Support and training for ntp-4 are Mar 25 01:38:12.095614 systemd[1]: Finished setup-oem.service - Setup OEM. 
Mar 25 01:38:12.060426 ntpd[1888]: available at https://www.nwtime.org/support Mar 25 01:38:12.060434 ntpd[1888]: ---------------------------------------------------- Mar 25 01:38:12.067207 ntpd[1888]: proto: precision = 0.095 usec (-23) Mar 25 01:38:12.089300 ntpd[1888]: basedate set to 2025-03-12 Mar 25 01:38:12.089323 ntpd[1888]: gps base set to 2025-03-16 (week 2358) Mar 25 01:38:12.119955 ntpd[1888]: Listen and drop on 0 v6wildcard [::]:123 Mar 25 01:38:12.120013 ntpd[1888]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 25 01:38:12.120201 ntpd[1888]: Listen normally on 2 lo 127.0.0.1:123 Mar 25 01:38:12.120242 ntpd[1888]: Listen normally on 3 eth0 172.31.29.210:123 Mar 25 01:38:12.120285 ntpd[1888]: Listen normally on 4 lo [::1]:123 Mar 25 01:38:12.133144 extend-filesystems[1886]: Resized partition /dev/nvme0n1p9 Mar 25 01:38:12.123694 systemd[1]: Started update-engine.service - Update Engine. Mar 25 01:38:12.135217 update_engine[1898]: I20250325 01:38:12.126474 1898 update_check_scheduler.cc:74] Next update check in 9m16s Mar 25 01:38:12.120329 ntpd[1888]: bind(21) AF_INET6 fe80::409:1fff:fe3f:a9d3%2#123 flags 0x11 failed: Cannot assign requested address Mar 25 01:38:12.120352 ntpd[1888]: unable to create socket on eth0 (5) for fe80::409:1fff:fe3f:a9d3%2#123 Mar 25 01:38:12.120368 ntpd[1888]: failed to init interface for address fe80::409:1fff:fe3f:a9d3%2 Mar 25 01:38:12.120400 ntpd[1888]: Listening on routing socket on fd #21 for interface updates Mar 25 01:38:12.137628 ntpd[1888]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 25 01:38:12.140382 extend-filesystems[1939]: resize2fs 1.47.2 (1-Jan-2025) Mar 25 01:38:12.141125 ntpd[1888]: 25 Mar 01:38:12 ntpd[1888]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 25 01:38:12.141125 ntpd[1888]: 25 Mar 01:38:12 ntpd[1888]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 25 01:38:12.137669 ntpd[1888]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 25 01:38:12.144432 
coreos-metadata[1883]: Mar 25 01:38:12.144 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Mar 25 01:38:12.147419 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 25 01:38:12.152933 tar[1906]: linux-amd64/helm Mar 25 01:38:12.161308 coreos-metadata[1883]: Mar 25 01:38:12.161 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Mar 25 01:38:12.163337 coreos-metadata[1883]: Mar 25 01:38:12.163 INFO Fetch successful Mar 25 01:38:12.163582 coreos-metadata[1883]: Mar 25 01:38:12.163 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Mar 25 01:38:12.163983 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Mar 25 01:38:12.164479 coreos-metadata[1883]: Mar 25 01:38:12.164 INFO Fetch successful Mar 25 01:38:12.164479 coreos-metadata[1883]: Mar 25 01:38:12.164 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Mar 25 01:38:12.165305 coreos-metadata[1883]: Mar 25 01:38:12.165 INFO Fetch successful Mar 25 01:38:12.165516 coreos-metadata[1883]: Mar 25 01:38:12.165 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Mar 25 01:38:12.171498 coreos-metadata[1883]: Mar 25 01:38:12.168 INFO Fetch successful Mar 25 01:38:12.171498 coreos-metadata[1883]: Mar 25 01:38:12.168 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Mar 25 01:38:12.172224 coreos-metadata[1883]: Mar 25 01:38:12.172 INFO Fetch failed with 404: resource not found Mar 25 01:38:12.172224 coreos-metadata[1883]: Mar 25 01:38:12.172 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Mar 25 01:38:12.173614 coreos-metadata[1883]: Mar 25 01:38:12.173 INFO Fetch successful Mar 25 01:38:12.173614 coreos-metadata[1883]: Mar 25 01:38:12.173 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Mar 25 01:38:12.183988 
coreos-metadata[1883]: Mar 25 01:38:12.180 INFO Fetch successful Mar 25 01:38:12.183988 coreos-metadata[1883]: Mar 25 01:38:12.180 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Mar 25 01:38:12.183988 coreos-metadata[1883]: Mar 25 01:38:12.181 INFO Fetch successful Mar 25 01:38:12.183988 coreos-metadata[1883]: Mar 25 01:38:12.181 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Mar 25 01:38:12.188353 coreos-metadata[1883]: Mar 25 01:38:12.188 INFO Fetch successful Mar 25 01:38:12.188353 coreos-metadata[1883]: Mar 25 01:38:12.188 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Mar 25 01:38:12.190790 coreos-metadata[1883]: Mar 25 01:38:12.189 INFO Fetch successful Mar 25 01:38:12.290979 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1757) Mar 25 01:38:12.319938 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Mar 25 01:38:12.334275 extend-filesystems[1939]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Mar 25 01:38:12.334275 extend-filesystems[1939]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 25 01:38:12.334275 extend-filesystems[1939]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Mar 25 01:38:12.334126 systemd-logind[1895]: Watching system buttons on /dev/input/event1 (Power Button) Mar 25 01:38:12.343584 extend-filesystems[1886]: Resized filesystem in /dev/nvme0n1p9 Mar 25 01:38:12.334149 systemd-logind[1895]: Watching system buttons on /dev/input/event2 (Sleep Button) Mar 25 01:38:12.334172 systemd-logind[1895]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 25 01:38:12.335715 systemd-logind[1895]: New seat seat0. Mar 25 01:38:12.336189 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 25 01:38:12.336460 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Mar 25 01:38:12.343798 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 25 01:38:12.347000 bash[1961]: Updated "/home/core/.ssh/authorized_keys" Mar 25 01:38:12.348116 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 25 01:38:12.349465 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 25 01:38:12.362181 systemd[1]: Starting sshkeys.service... Mar 25 01:38:12.362822 systemd[1]: Started systemd-logind.service - User Login Management. Mar 25 01:38:12.365779 locksmithd[1937]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 25 01:38:12.400038 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 25 01:38:12.406386 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Mar 25 01:38:12.539178 coreos-metadata[1997]: Mar 25 01:38:12.539 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Mar 25 01:38:12.546989 coreos-metadata[1997]: Mar 25 01:38:12.544 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Mar 25 01:38:12.546989 coreos-metadata[1997]: Mar 25 01:38:12.546 INFO Fetch successful Mar 25 01:38:12.546989 coreos-metadata[1997]: Mar 25 01:38:12.546 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Mar 25 01:38:12.551012 coreos-metadata[1997]: Mar 25 01:38:12.547 INFO Fetch successful Mar 25 01:38:12.560884 unknown[1997]: wrote ssh authorized keys file for user: core Mar 25 01:38:12.630875 update-ssh-keys[2047]: Updated "/home/core/.ssh/authorized_keys" Mar 25 01:38:12.631984 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 25 01:38:12.634240 systemd[1]: Finished sshkeys.service. 
Mar 25 01:38:12.646140 systemd-networkd[1756]: eth0: Gained IPv6LL Mar 25 01:38:12.654412 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 25 01:38:12.661249 systemd[1]: Reached target network-online.target - Network is Online. Mar 25 01:38:12.666679 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Mar 25 01:38:12.677608 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:38:12.684648 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 25 01:38:12.767732 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Mar 25 01:38:12.778089 dbus-daemon[1884]: [system] Successfully activated service 'org.freedesktop.hostname1' Mar 25 01:38:12.801505 dbus-daemon[1884]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1928 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Mar 25 01:38:12.813076 systemd[1]: Starting polkit.service - Authorization Manager... Mar 25 01:38:12.861697 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 25 01:38:12.892135 polkitd[2088]: Started polkitd version 121 Mar 25 01:38:12.898089 amazon-ssm-agent[2065]: Initializing new seelog logger Mar 25 01:38:12.898089 amazon-ssm-agent[2065]: New Seelog Logger Creation Complete Mar 25 01:38:12.900948 amazon-ssm-agent[2065]: 2025/03/25 01:38:12 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 25 01:38:12.900948 amazon-ssm-agent[2065]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 25 01:38:12.900948 amazon-ssm-agent[2065]: 2025/03/25 01:38:12 processing appconfig overrides Mar 25 01:38:12.907965 amazon-ssm-agent[2065]: 2025/03/25 01:38:12 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
Mar 25 01:38:12.908117 amazon-ssm-agent[2065]: 2025-03-25 01:38:12 INFO Proxy environment variables: Mar 25 01:38:12.910799 amazon-ssm-agent[2065]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 25 01:38:12.914168 amazon-ssm-agent[2065]: 2025/03/25 01:38:12 processing appconfig overrides Mar 25 01:38:12.921021 amazon-ssm-agent[2065]: 2025/03/25 01:38:12 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 25 01:38:12.921021 amazon-ssm-agent[2065]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 25 01:38:12.921021 amazon-ssm-agent[2065]: 2025/03/25 01:38:12 processing appconfig overrides Mar 25 01:38:12.927201 amazon-ssm-agent[2065]: 2025/03/25 01:38:12 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 25 01:38:12.927201 amazon-ssm-agent[2065]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 25 01:38:12.927323 amazon-ssm-agent[2065]: 2025/03/25 01:38:12 processing appconfig overrides Mar 25 01:38:12.971726 polkitd[2088]: Loading rules from directory /etc/polkit-1/rules.d Mar 25 01:38:12.971819 polkitd[2088]: Loading rules from directory /usr/share/polkit-1/rules.d Mar 25 01:38:12.982612 polkitd[2088]: Finished loading, compiling and executing 2 rules Mar 25 01:38:12.984346 dbus-daemon[1884]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Mar 25 01:38:12.985020 systemd[1]: Started polkit.service - Authorization Manager. 
Mar 25 01:38:12.987374 polkitd[2088]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Mar 25 01:38:13.008873 amazon-ssm-agent[2065]: 2025-03-25 01:38:12 INFO https_proxy: Mar 25 01:38:13.014264 containerd[1912]: time="2025-03-25T01:38:13Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Mar 25 01:38:13.018036 containerd[1912]: time="2025-03-25T01:38:13.017632698Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1 Mar 25 01:38:13.042822 systemd-hostnamed[1928]: Hostname set to (transient) Mar 25 01:38:13.046059 systemd-resolved[1704]: System hostname changed to 'ip-172-31-29-210'. Mar 25 01:38:13.076748 containerd[1912]: time="2025-03-25T01:38:13.076637503Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.113µs" Mar 25 01:38:13.076748 containerd[1912]: time="2025-03-25T01:38:13.076686669Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Mar 25 01:38:13.076748 containerd[1912]: time="2025-03-25T01:38:13.076711746Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Mar 25 01:38:13.076939 containerd[1912]: time="2025-03-25T01:38:13.076887700Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Mar 25 01:38:13.076939 containerd[1912]: time="2025-03-25T01:38:13.076931167Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Mar 25 01:38:13.077032 containerd[1912]: time="2025-03-25T01:38:13.076978352Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 25 01:38:13.077068 containerd[1912]: time="2025-03-25T01:38:13.077048755Z" 
level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 25 01:38:13.077103 containerd[1912]: time="2025-03-25T01:38:13.077064852Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 25 01:38:13.077889 containerd[1912]: time="2025-03-25T01:38:13.077371705Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 25 01:38:13.077889 containerd[1912]: time="2025-03-25T01:38:13.077398381Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 25 01:38:13.077889 containerd[1912]: time="2025-03-25T01:38:13.077415498Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 25 01:38:13.077889 containerd[1912]: time="2025-03-25T01:38:13.077428105Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Mar 25 01:38:13.077889 containerd[1912]: time="2025-03-25T01:38:13.077524098Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Mar 25 01:38:13.077889 containerd[1912]: time="2025-03-25T01:38:13.077767890Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 25 01:38:13.077889 containerd[1912]: time="2025-03-25T01:38:13.077804815Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 25 01:38:13.077889 containerd[1912]: 
time="2025-03-25T01:38:13.077820893Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Mar 25 01:38:13.082971 containerd[1912]: time="2025-03-25T01:38:13.080453297Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Mar 25 01:38:13.082971 containerd[1912]: time="2025-03-25T01:38:13.081489490Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Mar 25 01:38:13.082971 containerd[1912]: time="2025-03-25T01:38:13.081595745Z" level=info msg="metadata content store policy set" policy=shared Mar 25 01:38:13.091597 containerd[1912]: time="2025-03-25T01:38:13.091550203Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Mar 25 01:38:13.091736 containerd[1912]: time="2025-03-25T01:38:13.091629270Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Mar 25 01:38:13.091736 containerd[1912]: time="2025-03-25T01:38:13.091650921Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Mar 25 01:38:13.091736 containerd[1912]: time="2025-03-25T01:38:13.091679248Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Mar 25 01:38:13.091736 containerd[1912]: time="2025-03-25T01:38:13.091695912Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Mar 25 01:38:13.091736 containerd[1912]: time="2025-03-25T01:38:13.091711196Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Mar 25 01:38:13.091736 containerd[1912]: time="2025-03-25T01:38:13.091732400Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Mar 25 01:38:13.091943 containerd[1912]: time="2025-03-25T01:38:13.091750158Z" level=info msg="loading 
plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Mar 25 01:38:13.091943 containerd[1912]: time="2025-03-25T01:38:13.091768033Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Mar 25 01:38:13.091943 containerd[1912]: time="2025-03-25T01:38:13.091784716Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Mar 25 01:38:13.091943 containerd[1912]: time="2025-03-25T01:38:13.091799691Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Mar 25 01:38:13.091943 containerd[1912]: time="2025-03-25T01:38:13.091817392Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Mar 25 01:38:13.092125 containerd[1912]: time="2025-03-25T01:38:13.092001130Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Mar 25 01:38:13.092125 containerd[1912]: time="2025-03-25T01:38:13.092029781Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Mar 25 01:38:13.092125 containerd[1912]: time="2025-03-25T01:38:13.092049827Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Mar 25 01:38:13.092125 containerd[1912]: time="2025-03-25T01:38:13.092067453Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Mar 25 01:38:13.092125 containerd[1912]: time="2025-03-25T01:38:13.092088162Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Mar 25 01:38:13.092125 containerd[1912]: time="2025-03-25T01:38:13.092103730Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Mar 25 01:38:13.092125 containerd[1912]: time="2025-03-25T01:38:13.092120780Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection 
type=io.containerd.grpc.v1 Mar 25 01:38:13.092374 containerd[1912]: time="2025-03-25T01:38:13.092135789Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Mar 25 01:38:13.092374 containerd[1912]: time="2025-03-25T01:38:13.092152918Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Mar 25 01:38:13.092374 containerd[1912]: time="2025-03-25T01:38:13.092169718Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Mar 25 01:38:13.092374 containerd[1912]: time="2025-03-25T01:38:13.092185136Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Mar 25 01:38:13.092374 containerd[1912]: time="2025-03-25T01:38:13.092263217Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Mar 25 01:38:13.092374 containerd[1912]: time="2025-03-25T01:38:13.092282399Z" level=info msg="Start snapshots syncer" Mar 25 01:38:13.092374 containerd[1912]: time="2025-03-25T01:38:13.092312995Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Mar 25 01:38:13.093528 containerd[1912]: time="2025-03-25T01:38:13.092643260Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Mar 25 01:38:13.093528 containerd[1912]: time="2025-03-25T01:38:13.092715776Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Mar 25 01:38:13.095210 containerd[1912]: time="2025-03-25T01:38:13.095172467Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 25 01:38:13.095949 containerd[1912]: time="2025-03-25T01:38:13.095356608Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 25 01:38:13.095949 containerd[1912]: time="2025-03-25T01:38:13.095392501Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 25 01:38:13.095949 containerd[1912]: time="2025-03-25T01:38:13.095411250Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 25 01:38:13.095949 containerd[1912]: time="2025-03-25T01:38:13.095429858Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 25 01:38:13.095949 containerd[1912]: time="2025-03-25T01:38:13.095448221Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 25 01:38:13.095949 containerd[1912]: time="2025-03-25T01:38:13.095463922Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 25 01:38:13.095949 containerd[1912]: time="2025-03-25T01:38:13.095478933Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 25 01:38:13.095949 containerd[1912]: time="2025-03-25T01:38:13.095525832Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 25 01:38:13.095949 containerd[1912]: time="2025-03-25T01:38:13.095542228Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 25 01:38:13.095949 containerd[1912]: time="2025-03-25T01:38:13.095555839Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Mar 25 01:38:13.098600 containerd[1912]: time="2025-03-25T01:38:13.097955406Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 25 01:38:13.098600 containerd[1912]: time="2025-03-25T01:38:13.097993836Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 25 01:38:13.098600 containerd[1912]: time="2025-03-25T01:38:13.098009075Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 25 01:38:13.098600 containerd[1912]: time="2025-03-25T01:38:13.098025039Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 25 01:38:13.098600 containerd[1912]: time="2025-03-25T01:38:13.098037251Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 25 01:38:13.098600 containerd[1912]: time="2025-03-25T01:38:13.098053243Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 25 01:38:13.098600 containerd[1912]: time="2025-03-25T01:38:13.098069129Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 25 01:38:13.098600 containerd[1912]: time="2025-03-25T01:38:13.098091767Z" level=info msg="runtime interface created" Mar 25 01:38:13.098600 containerd[1912]: time="2025-03-25T01:38:13.098099227Z" level=info msg="created NRI interface" Mar 25 01:38:13.098600 containerd[1912]: time="2025-03-25T01:38:13.098112510Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 25 01:38:13.098600 containerd[1912]: time="2025-03-25T01:38:13.098133032Z" level=info msg="Connect containerd service" Mar 25 01:38:13.098600 containerd[1912]: time="2025-03-25T01:38:13.098191604Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 25 01:38:13.099108 
containerd[1912]: time="2025-03-25T01:38:13.098997713Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 25 01:38:13.112061 amazon-ssm-agent[2065]: 2025-03-25 01:38:12 INFO http_proxy: Mar 25 01:38:13.210045 amazon-ssm-agent[2065]: 2025-03-25 01:38:12 INFO no_proxy: Mar 25 01:38:13.308562 amazon-ssm-agent[2065]: 2025-03-25 01:38:12 INFO Checking if agent identity type OnPrem can be assumed Mar 25 01:38:13.414499 amazon-ssm-agent[2065]: 2025-03-25 01:38:12 INFO Checking if agent identity type EC2 can be assumed Mar 25 01:38:13.513355 amazon-ssm-agent[2065]: 2025-03-25 01:38:12 INFO Agent will take identity from EC2 Mar 25 01:38:13.612256 amazon-ssm-agent[2065]: 2025-03-25 01:38:13 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 25 01:38:13.621003 containerd[1912]: time="2025-03-25T01:38:13.620949593Z" level=info msg="Start subscribing containerd event" Mar 25 01:38:13.621129 containerd[1912]: time="2025-03-25T01:38:13.621023389Z" level=info msg="Start recovering state" Mar 25 01:38:13.621188 containerd[1912]: time="2025-03-25T01:38:13.621141917Z" level=info msg="Start event monitor" Mar 25 01:38:13.621188 containerd[1912]: time="2025-03-25T01:38:13.621160345Z" level=info msg="Start cni network conf syncer for default" Mar 25 01:38:13.621188 containerd[1912]: time="2025-03-25T01:38:13.621173261Z" level=info msg="Start streaming server" Mar 25 01:38:13.621188 containerd[1912]: time="2025-03-25T01:38:13.621184477Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 25 01:38:13.621309 containerd[1912]: time="2025-03-25T01:38:13.621194943Z" level=info msg="runtime interface starting up..." Mar 25 01:38:13.621309 containerd[1912]: time="2025-03-25T01:38:13.621204460Z" level=info msg="starting plugins..." 
Mar 25 01:38:13.621309 containerd[1912]: time="2025-03-25T01:38:13.621221355Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 25 01:38:13.623973 containerd[1912]: time="2025-03-25T01:38:13.621748216Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 25 01:38:13.623973 containerd[1912]: time="2025-03-25T01:38:13.621864656Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 25 01:38:13.630510 systemd[1]: Started containerd.service - containerd container runtime. Mar 25 01:38:13.631560 containerd[1912]: time="2025-03-25T01:38:13.630489417Z" level=info msg="containerd successfully booted in 0.618273s" Mar 25 01:38:13.712665 amazon-ssm-agent[2065]: 2025-03-25 01:38:13 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 25 01:38:13.726410 tar[1906]: linux-amd64/LICENSE Mar 25 01:38:13.726410 tar[1906]: linux-amd64/README.md Mar 25 01:38:13.747390 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 25 01:38:13.811339 amazon-ssm-agent[2065]: 2025-03-25 01:38:13 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 25 01:38:13.912024 amazon-ssm-agent[2065]: 2025-03-25 01:38:13 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Mar 25 01:38:14.012563 amazon-ssm-agent[2065]: 2025-03-25 01:38:13 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Mar 25 01:38:14.023597 sshd_keygen[1925]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 25 01:38:14.039660 amazon-ssm-agent[2065]: 2025-03-25 01:38:13 INFO [amazon-ssm-agent] Starting Core Agent Mar 25 01:38:14.039660 amazon-ssm-agent[2065]: 2025-03-25 01:38:13 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration Mar 25 01:38:14.039808 amazon-ssm-agent[2065]: 2025-03-25 01:38:13 INFO [Registrar] Starting registrar module Mar 25 01:38:14.039808 amazon-ssm-agent[2065]: 2025-03-25 01:38:13 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Mar 25 01:38:14.039808 amazon-ssm-agent[2065]: 2025-03-25 01:38:14 INFO [EC2Identity] EC2 registration was successful. Mar 25 01:38:14.039808 amazon-ssm-agent[2065]: 2025-03-25 01:38:14 INFO [CredentialRefresher] credentialRefresher has started Mar 25 01:38:14.039808 amazon-ssm-agent[2065]: 2025-03-25 01:38:14 INFO [CredentialRefresher] Starting credentials refresher loop Mar 25 01:38:14.039808 amazon-ssm-agent[2065]: 2025-03-25 01:38:14 INFO EC2RoleProvider Successfully connected with instance profile role credentials Mar 25 01:38:14.048975 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 25 01:38:14.052130 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 25 01:38:14.068106 systemd[1]: issuegen.service: Deactivated successfully. Mar 25 01:38:14.068563 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 25 01:38:14.072184 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 25 01:38:14.090467 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 25 01:38:14.093191 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 25 01:38:14.098235 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 25 01:38:14.099117 systemd[1]: Reached target getty.target - Login Prompts. Mar 25 01:38:14.113013 amazon-ssm-agent[2065]: 2025-03-25 01:38:14 INFO [CredentialRefresher] Next credential rotation will be in 30.9583273881 minutes Mar 25 01:38:14.708788 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Mar 25 01:38:14.711941 systemd[1]: Started sshd@0-172.31.29.210:22-147.75.109.163:52426.service - OpenSSH per-connection server daemon (147.75.109.163:52426). Mar 25 01:38:14.919749 sshd[2138]: Accepted publickey for core from 147.75.109.163 port 52426 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc Mar 25 01:38:14.922526 sshd-session[2138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:38:14.936810 systemd-logind[1895]: New session 1 of user core. Mar 25 01:38:14.938324 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 25 01:38:14.940834 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 25 01:38:14.963925 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 25 01:38:14.967794 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 25 01:38:14.980948 (systemd)[2142]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 25 01:38:14.984279 systemd-logind[1895]: New session c1 of user core. Mar 25 01:38:15.057332 amazon-ssm-agent[2065]: 2025-03-25 01:38:15 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Mar 25 01:38:15.060879 ntpd[1888]: Listen normally on 6 eth0 [fe80::409:1fff:fe3f:a9d3%2]:123 Mar 25 01:38:15.063379 ntpd[1888]: 25 Mar 01:38:15 ntpd[1888]: Listen normally on 6 eth0 [fe80::409:1fff:fe3f:a9d3%2]:123 Mar 25 01:38:15.158060 amazon-ssm-agent[2065]: 2025-03-25 01:38:15 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2150) started Mar 25 01:38:15.240572 systemd[2142]: Queued start job for default target default.target. Mar 25 01:38:15.246320 systemd[2142]: Created slice app.slice - User Application Slice. Mar 25 01:38:15.246367 systemd[2142]: Reached target paths.target - Paths. 
Mar 25 01:38:15.246415 systemd[2142]: Reached target timers.target - Timers. Mar 25 01:38:15.251210 systemd[2142]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 25 01:38:15.258722 amazon-ssm-agent[2065]: 2025-03-25 01:38:15 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Mar 25 01:38:15.275083 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:38:15.276486 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 25 01:38:15.285092 systemd[2142]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 25 01:38:15.285281 systemd[2142]: Reached target sockets.target - Sockets. Mar 25 01:38:15.285358 systemd[2142]: Reached target basic.target - Basic System. Mar 25 01:38:15.285427 systemd[2142]: Reached target default.target - Main User Target. Mar 25 01:38:15.285469 systemd[2142]: Startup finished in 293ms. Mar 25 01:38:15.285473 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 25 01:38:15.286000 (kubelet)[2163]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 25 01:38:15.292548 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 25 01:38:15.294419 systemd[1]: Startup finished in 676ms (kernel) + 7.119s (initrd) + 7.520s (userspace) = 15.317s. Mar 25 01:38:15.444183 systemd[1]: Started sshd@1-172.31.29.210:22-147.75.109.163:52436.service - OpenSSH per-connection server daemon (147.75.109.163:52436). Mar 25 01:38:15.623860 sshd[2176]: Accepted publickey for core from 147.75.109.163 port 52436 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc Mar 25 01:38:15.625535 sshd-session[2176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:38:15.630561 systemd-logind[1895]: New session 2 of user core. Mar 25 01:38:15.636117 systemd[1]: Started session-2.scope - Session 2 of User core. 
Mar 25 01:38:15.767501 sshd[2182]: Connection closed by 147.75.109.163 port 52436 Mar 25 01:38:15.768461 sshd-session[2176]: pam_unix(sshd:session): session closed for user core Mar 25 01:38:15.774087 systemd[1]: sshd@1-172.31.29.210:22-147.75.109.163:52436.service: Deactivated successfully. Mar 25 01:38:15.776460 systemd[1]: session-2.scope: Deactivated successfully. Mar 25 01:38:15.777254 systemd-logind[1895]: Session 2 logged out. Waiting for processes to exit. Mar 25 01:38:15.778774 systemd-logind[1895]: Removed session 2. Mar 25 01:38:15.798418 systemd[1]: Started sshd@2-172.31.29.210:22-147.75.109.163:52452.service - OpenSSH per-connection server daemon (147.75.109.163:52452). Mar 25 01:38:15.980244 sshd[2188]: Accepted publickey for core from 147.75.109.163 port 52452 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc Mar 25 01:38:15.982198 sshd-session[2188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:38:15.989250 systemd-logind[1895]: New session 3 of user core. Mar 25 01:38:15.995099 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 25 01:38:16.111792 sshd[2190]: Connection closed by 147.75.109.163 port 52452 Mar 25 01:38:16.112609 sshd-session[2188]: pam_unix(sshd:session): session closed for user core Mar 25 01:38:16.119376 systemd[1]: sshd@2-172.31.29.210:22-147.75.109.163:52452.service: Deactivated successfully. Mar 25 01:38:16.121861 systemd[1]: session-3.scope: Deactivated successfully. Mar 25 01:38:16.124318 systemd-logind[1895]: Session 3 logged out. Waiting for processes to exit. Mar 25 01:38:16.125427 systemd-logind[1895]: Removed session 3. Mar 25 01:38:16.147081 systemd[1]: Started sshd@3-172.31.29.210:22-147.75.109.163:52454.service - OpenSSH per-connection server daemon (147.75.109.163:52454). 
Mar 25 01:38:16.326061 sshd[2196]: Accepted publickey for core from 147.75.109.163 port 52454 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc Mar 25 01:38:16.327975 sshd-session[2196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:38:16.334162 systemd-logind[1895]: New session 4 of user core. Mar 25 01:38:16.339359 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 25 01:38:16.480260 sshd[2201]: Connection closed by 147.75.109.163 port 52454 Mar 25 01:38:16.480900 sshd-session[2196]: pam_unix(sshd:session): session closed for user core Mar 25 01:38:16.486280 systemd[1]: sshd@3-172.31.29.210:22-147.75.109.163:52454.service: Deactivated successfully. Mar 25 01:38:16.489812 systemd[1]: session-4.scope: Deactivated successfully. Mar 25 01:38:16.490820 systemd-logind[1895]: Session 4 logged out. Waiting for processes to exit. Mar 25 01:38:16.493062 systemd-logind[1895]: Removed session 4. Mar 25 01:38:16.512864 systemd[1]: Started sshd@4-172.31.29.210:22-147.75.109.163:52466.service - OpenSSH per-connection server daemon (147.75.109.163:52466). Mar 25 01:38:16.665173 kubelet[2163]: E0325 01:38:16.665039 2163 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 25 01:38:16.668298 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 25 01:38:16.668501 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 25 01:38:16.669005 systemd[1]: kubelet.service: Consumed 995ms CPU time, 247.2M memory peak. 
Mar 25 01:38:16.689161 sshd[2207]: Accepted publickey for core from 147.75.109.163 port 52466 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc Mar 25 01:38:16.690588 sshd-session[2207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:38:16.696331 systemd-logind[1895]: New session 5 of user core. Mar 25 01:38:16.703137 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 25 01:38:16.840288 sudo[2213]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 25 01:38:16.840834 sudo[2213]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 25 01:38:16.856838 sudo[2213]: pam_unix(sudo:session): session closed for user root Mar 25 01:38:16.878943 sshd[2212]: Connection closed by 147.75.109.163 port 52466 Mar 25 01:38:16.879999 sshd-session[2207]: pam_unix(sshd:session): session closed for user core Mar 25 01:38:16.883832 systemd[1]: sshd@4-172.31.29.210:22-147.75.109.163:52466.service: Deactivated successfully. Mar 25 01:38:16.886267 systemd[1]: session-5.scope: Deactivated successfully. Mar 25 01:38:16.887794 systemd-logind[1895]: Session 5 logged out. Waiting for processes to exit. Mar 25 01:38:16.889260 systemd-logind[1895]: Removed session 5. Mar 25 01:38:16.916240 systemd[1]: Started sshd@5-172.31.29.210:22-147.75.109.163:52482.service - OpenSSH per-connection server daemon (147.75.109.163:52482). Mar 25 01:38:17.093274 sshd[2219]: Accepted publickey for core from 147.75.109.163 port 52482 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc Mar 25 01:38:17.094754 sshd-session[2219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:38:17.104246 systemd-logind[1895]: New session 6 of user core. Mar 25 01:38:17.109296 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 25 01:38:17.225075 sudo[2223]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 25 01:38:17.225555 sudo[2223]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 25 01:38:17.244354 sudo[2223]: pam_unix(sudo:session): session closed for user root Mar 25 01:38:17.259148 sudo[2222]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 25 01:38:17.260735 sudo[2222]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 25 01:38:17.289868 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 25 01:38:17.327754 augenrules[2245]: No rules Mar 25 01:38:17.329133 systemd[1]: audit-rules.service: Deactivated successfully. Mar 25 01:38:17.329406 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 25 01:38:17.331031 sudo[2222]: pam_unix(sudo:session): session closed for user root Mar 25 01:38:17.353423 sshd[2221]: Connection closed by 147.75.109.163 port 52482 Mar 25 01:38:17.354080 sshd-session[2219]: pam_unix(sshd:session): session closed for user core Mar 25 01:38:17.357355 systemd[1]: sshd@5-172.31.29.210:22-147.75.109.163:52482.service: Deactivated successfully. Mar 25 01:38:17.359402 systemd[1]: session-6.scope: Deactivated successfully. Mar 25 01:38:17.360985 systemd-logind[1895]: Session 6 logged out. Waiting for processes to exit. Mar 25 01:38:17.362079 systemd-logind[1895]: Removed session 6. Mar 25 01:38:17.390096 systemd[1]: Started sshd@6-172.31.29.210:22-147.75.109.163:52486.service - OpenSSH per-connection server daemon (147.75.109.163:52486). 
Mar 25 01:38:17.561314 sshd[2254]: Accepted publickey for core from 147.75.109.163 port 52486 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc
Mar 25 01:38:17.563206 sshd-session[2254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:38:17.570545 systemd-logind[1895]: New session 7 of user core.
Mar 25 01:38:17.580103 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 25 01:38:17.676164 sudo[2257]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 25 01:38:17.676529 sudo[2257]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 25 01:38:18.411049 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 25 01:38:18.425476 (dockerd)[2275]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 25 01:38:19.017494 dockerd[2275]: time="2025-03-25T01:38:19.017434530Z" level=info msg="Starting up"
Mar 25 01:38:19.019903 dockerd[2275]: time="2025-03-25T01:38:19.019804479Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Mar 25 01:38:20.467534 systemd-resolved[1704]: Clock change detected. Flushing caches.
Mar 25 01:38:20.555719 dockerd[2275]: time="2025-03-25T01:38:20.555672788Z" level=info msg="Loading containers: start."
Mar 25 01:38:20.796578 kernel: Initializing XFRM netlink socket
Mar 25 01:38:20.797836 (udev-worker)[2301]: Network interface NamePolicy= disabled on kernel command line.
Mar 25 01:38:21.055597 systemd-networkd[1756]: docker0: Link UP
Mar 25 01:38:21.129048 dockerd[2275]: time="2025-03-25T01:38:21.129002431Z" level=info msg="Loading containers: done."
Mar 25 01:38:21.147850 dockerd[2275]: time="2025-03-25T01:38:21.147801191Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 25 01:38:21.148035 dockerd[2275]: time="2025-03-25T01:38:21.147900002Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1
Mar 25 01:38:21.148084 dockerd[2275]: time="2025-03-25T01:38:21.148030574Z" level=info msg="Daemon has completed initialization"
Mar 25 01:38:21.189702 dockerd[2275]: time="2025-03-25T01:38:21.189552321Z" level=info msg="API listen on /run/docker.sock"
Mar 25 01:38:21.189666 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 25 01:38:22.684146 containerd[1912]: time="2025-03-25T01:38:22.684028372Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\""
Mar 25 01:38:23.323004 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1170158317.mount: Deactivated successfully.
Mar 25 01:38:25.848051 containerd[1912]: time="2025-03-25T01:38:25.847998871Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:38:25.849199 containerd[1912]: time="2025-03-25T01:38:25.849130384Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.11: active requests=0, bytes read=32674573"
Mar 25 01:38:25.850219 containerd[1912]: time="2025-03-25T01:38:25.850144341Z" level=info msg="ImageCreate event name:\"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:38:25.853340 containerd[1912]: time="2025-03-25T01:38:25.852939959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:38:25.854083 containerd[1912]: time="2025-03-25T01:38:25.853863134Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.11\" with image id \"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\", size \"32671373\" in 3.169796365s"
Mar 25 01:38:25.854083 containerd[1912]: time="2025-03-25T01:38:25.853903267Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\" returns image reference \"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\""
Mar 25 01:38:25.872884 containerd[1912]: time="2025-03-25T01:38:25.872829077Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\""
Mar 25 01:38:28.325572 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 25 01:38:28.329553 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 25 01:38:28.577566 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 25 01:38:28.588063 (kubelet)[2555]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 25 01:38:28.677644 kubelet[2555]: E0325 01:38:28.677514 2555 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 25 01:38:28.684610 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 25 01:38:28.684809 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 25 01:38:28.686229 systemd[1]: kubelet.service: Consumed 199ms CPU time, 94.7M memory peak.
Mar 25 01:38:28.882089 containerd[1912]: time="2025-03-25T01:38:28.881744695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:38:28.883149 containerd[1912]: time="2025-03-25T01:38:28.883080796Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.11: active requests=0, bytes read=29619772"
Mar 25 01:38:28.885444 containerd[1912]: time="2025-03-25T01:38:28.884237313Z" level=info msg="ImageCreate event name:\"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:38:28.887166 containerd[1912]: time="2025-03-25T01:38:28.887136991Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:38:28.888470 containerd[1912]: time="2025-03-25T01:38:28.888441482Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.11\" with image id \"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\", size \"31107380\" in 3.01555607s"
Mar 25 01:38:28.888584 containerd[1912]: time="2025-03-25T01:38:28.888566075Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\" returns image reference \"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\""
Mar 25 01:38:28.909879 containerd[1912]: time="2025-03-25T01:38:28.909788924Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\""
Mar 25 01:38:30.780343 containerd[1912]: time="2025-03-25T01:38:30.780281831Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:38:30.781460 containerd[1912]: time="2025-03-25T01:38:30.781396805Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.11: active requests=0, bytes read=17903309"
Mar 25 01:38:30.782637 containerd[1912]: time="2025-03-25T01:38:30.782585678Z" level=info msg="ImageCreate event name:\"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:38:30.785564 containerd[1912]: time="2025-03-25T01:38:30.785219386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:38:30.786195 containerd[1912]: time="2025-03-25T01:38:30.786148105Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.11\" with image id \"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\", size \"19390935\" in 1.87626519s"
Mar 25 01:38:30.786281 containerd[1912]: time="2025-03-25T01:38:30.786202944Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\" returns image reference \"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\""
Mar 25 01:38:30.806052 containerd[1912]: time="2025-03-25T01:38:30.806016636Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\""
Mar 25 01:38:31.912986 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1627828173.mount: Deactivated successfully.
Mar 25 01:38:32.434710 containerd[1912]: time="2025-03-25T01:38:32.434656671Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:38:32.435840 containerd[1912]: time="2025-03-25T01:38:32.435697081Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.11: active requests=0, bytes read=29185372"
Mar 25 01:38:32.437793 containerd[1912]: time="2025-03-25T01:38:32.436880576Z" level=info msg="ImageCreate event name:\"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:38:32.439337 containerd[1912]: time="2025-03-25T01:38:32.438901235Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:38:32.439577 containerd[1912]: time="2025-03-25T01:38:32.439545431Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.11\" with image id \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\", repo tag \"registry.k8s.io/kube-proxy:v1.30.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\", size \"29184391\" in 1.633491614s"
Mar 25 01:38:32.439675 containerd[1912]: time="2025-03-25T01:38:32.439657673Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\""
Mar 25 01:38:32.460517 containerd[1912]: time="2025-03-25T01:38:32.460480864Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Mar 25 01:38:33.074240 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2620895267.mount: Deactivated successfully.
Mar 25 01:38:34.541498 containerd[1912]: time="2025-03-25T01:38:34.541440365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:38:34.542737 containerd[1912]: time="2025-03-25T01:38:34.542503382Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Mar 25 01:38:34.544509 containerd[1912]: time="2025-03-25T01:38:34.543914679Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:38:34.547197 containerd[1912]: time="2025-03-25T01:38:34.547152764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:38:34.548213 containerd[1912]: time="2025-03-25T01:38:34.548172719Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.087651248s"
Mar 25 01:38:34.548304 containerd[1912]: time="2025-03-25T01:38:34.548224380Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Mar 25 01:38:34.571827 containerd[1912]: time="2025-03-25T01:38:34.571783381Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Mar 25 01:38:35.084158 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1548343131.mount: Deactivated successfully.
Mar 25 01:38:35.089596 containerd[1912]: time="2025-03-25T01:38:35.089546964Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:38:35.090535 containerd[1912]: time="2025-03-25T01:38:35.090478979Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Mar 25 01:38:35.095006 containerd[1912]: time="2025-03-25T01:38:35.093689541Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:38:35.097034 containerd[1912]: time="2025-03-25T01:38:35.096032820Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:38:35.097034 containerd[1912]: time="2025-03-25T01:38:35.096889939Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 525.056963ms"
Mar 25 01:38:35.097034 containerd[1912]: time="2025-03-25T01:38:35.096923451Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Mar 25 01:38:35.118043 containerd[1912]: time="2025-03-25T01:38:35.117999476Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Mar 25 01:38:35.715301 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount245048622.mount: Deactivated successfully.
Mar 25 01:38:38.935911 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 25 01:38:38.940007 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 25 01:38:39.576327 containerd[1912]: time="2025-03-25T01:38:39.576259351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:38:39.591053 containerd[1912]: time="2025-03-25T01:38:39.590973297Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571"
Mar 25 01:38:39.594585 containerd[1912]: time="2025-03-25T01:38:39.593969046Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:38:39.601029 containerd[1912]: time="2025-03-25T01:38:39.600976847Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:38:39.603036 containerd[1912]: time="2025-03-25T01:38:39.602293998Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 4.484251575s"
Mar 25 01:38:39.603036 containerd[1912]: time="2025-03-25T01:38:39.602354058Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Mar 25 01:38:39.645528 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 25 01:38:39.659362 (kubelet)[2719]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 25 01:38:39.773040 kubelet[2719]: E0325 01:38:39.772989 2719 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 25 01:38:39.780166 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 25 01:38:39.780381 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 25 01:38:39.780768 systemd[1]: kubelet.service: Consumed 202ms CPU time, 97.4M memory peak.
Mar 25 01:38:42.613177 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 25 01:38:42.613664 systemd[1]: kubelet.service: Consumed 202ms CPU time, 97.4M memory peak.
Mar 25 01:38:42.617785 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 25 01:38:42.651397 systemd[1]: Reload requested from client PID 2808 ('systemctl') (unit session-7.scope)...
Mar 25 01:38:42.651418 systemd[1]: Reloading...
Mar 25 01:38:42.850354 zram_generator::config[2854]: No configuration found.
Mar 25 01:38:43.040034 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 25 01:38:43.184026 systemd[1]: Reloading finished in 531 ms.
Mar 25 01:38:43.254112 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 25 01:38:43.254294 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 25 01:38:43.254798 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 25 01:38:43.254860 systemd[1]: kubelet.service: Consumed 133ms CPU time, 83.5M memory peak.
Mar 25 01:38:43.258243 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 25 01:38:43.607612 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 25 01:38:43.633935 (kubelet)[2918]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 25 01:38:43.699985 kubelet[2918]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 25 01:38:43.700458 kubelet[2918]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 25 01:38:43.700458 kubelet[2918]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 25 01:38:43.704337 kubelet[2918]: I0325 01:38:43.702302 2918 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 25 01:38:44.377837 kubelet[2918]: I0325 01:38:44.377793 2918 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Mar 25 01:38:44.377837 kubelet[2918]: I0325 01:38:44.377829 2918 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 25 01:38:44.378158 kubelet[2918]: I0325 01:38:44.378135 2918 server.go:927] "Client rotation is on, will bootstrap in background"
Mar 25 01:38:44.407429 kubelet[2918]: I0325 01:38:44.406968 2918 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 25 01:38:44.411483 kubelet[2918]: E0325 01:38:44.411435 2918 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.29.210:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.29.210:6443: connect: connection refused
Mar 25 01:38:44.429774 kubelet[2918]: I0325 01:38:44.429470 2918 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 25 01:38:44.437016 kubelet[2918]: I0325 01:38:44.436877 2918 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 25 01:38:44.439191 kubelet[2918]: I0325 01:38:44.437020 2918 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-29-210","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Mar 25 01:38:44.441936 kubelet[2918]: I0325 01:38:44.441898 2918 topology_manager.go:138] "Creating topology manager with none policy"
Mar 25 01:38:44.441936 kubelet[2918]: I0325 01:38:44.441941 2918 container_manager_linux.go:301] "Creating device plugin manager"
Mar 25 01:38:44.442699 kubelet[2918]: I0325 01:38:44.442196 2918 state_mem.go:36] "Initialized new in-memory state store"
Mar 25 01:38:44.443914 kubelet[2918]: I0325 01:38:44.443889 2918 kubelet.go:400] "Attempting to sync node with API server"
Mar 25 01:38:44.443914 kubelet[2918]: I0325 01:38:44.443917 2918 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 25 01:38:44.444051 kubelet[2918]: I0325 01:38:44.443945 2918 kubelet.go:312] "Adding apiserver pod source"
Mar 25 01:38:44.448343 kubelet[2918]: I0325 01:38:44.447829 2918 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 25 01:38:44.448815 kubelet[2918]: W0325 01:38:44.448753 2918 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.29.210:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-210&limit=500&resourceVersion=0": dial tcp 172.31.29.210:6443: connect: connection refused
Mar 25 01:38:44.448903 kubelet[2918]: E0325 01:38:44.448820 2918 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.29.210:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-210&limit=500&resourceVersion=0": dial tcp 172.31.29.210:6443: connect: connection refused
Mar 25 01:38:44.456882 kubelet[2918]: W0325 01:38:44.456398 2918 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.29.210:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.210:6443: connect: connection refused
Mar 25 01:38:44.456882 kubelet[2918]: E0325 01:38:44.456465 2918 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.29.210:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.210:6443: connect: connection refused
Mar 25 01:38:44.456882 kubelet[2918]: I0325 01:38:44.456608 2918 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1"
Mar 25 01:38:44.459445 kubelet[2918]: I0325 01:38:44.459408 2918 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 25 01:38:44.459564 kubelet[2918]: W0325 01:38:44.459492 2918 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 25 01:38:44.461822 kubelet[2918]: I0325 01:38:44.461789 2918 server.go:1264] "Started kubelet"
Mar 25 01:38:44.470876 kubelet[2918]: I0325 01:38:44.470808 2918 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 25 01:38:44.471255 kubelet[2918]: I0325 01:38:44.471229 2918 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 25 01:38:44.478520 kubelet[2918]: I0325 01:38:44.478283 2918 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 25 01:38:44.483759 kubelet[2918]: I0325 01:38:44.483487 2918 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 25 01:38:44.486993 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Mar 25 01:38:44.487648 kubelet[2918]: I0325 01:38:44.487035 2918 server.go:455] "Adding debug handlers to kubelet server"
Mar 25 01:38:44.489457 kubelet[2918]: E0325 01:38:44.489070 2918 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.29.210:6443/api/v1/namespaces/default/events\": dial tcp 172.31.29.210:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-29-210.182fe80d4264538c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-29-210,UID:ip-172-31-29-210,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-29-210,},FirstTimestamp:2025-03-25 01:38:44.46176142 +0000 UTC m=+0.821768465,LastTimestamp:2025-03-25 01:38:44.46176142 +0000 UTC m=+0.821768465,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-29-210,}"
Mar 25 01:38:44.499441 kubelet[2918]: I0325 01:38:44.498817 2918 volume_manager.go:291] "Starting Kubelet Volume Manager"
Mar 25 01:38:44.503410 kubelet[2918]: W0325 01:38:44.503341 2918 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.29.210:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.210:6443: connect: connection refused
Mar 25 01:38:44.504862 kubelet[2918]: E0325 01:38:44.503881 2918 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.29.210:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.210:6443: connect: connection refused
Mar 25 01:38:44.504862 kubelet[2918]: E0325 01:38:44.503628 2918 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.210:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-210?timeout=10s\": dial tcp 172.31.29.210:6443: connect: connection refused" interval="200ms"
Mar 25 01:38:44.506256 kubelet[2918]: I0325 01:38:44.506236 2918 reconciler.go:26] "Reconciler: start to sync state"
Mar 25 01:38:44.507023 kubelet[2918]: I0325 01:38:44.506987 2918 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Mar 25 01:38:44.513862 kubelet[2918]: I0325 01:38:44.513652 2918 factory.go:221] Registration of the systemd container factory successfully
Mar 25 01:38:44.514272 kubelet[2918]: E0325 01:38:44.513694 2918 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 25 01:38:44.514779 kubelet[2918]: I0325 01:38:44.514660 2918 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 25 01:38:44.521089 kubelet[2918]: I0325 01:38:44.521066 2918 factory.go:221] Registration of the containerd container factory successfully
Mar 25 01:38:44.537145 kubelet[2918]: I0325 01:38:44.537099 2918 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 25 01:38:44.538751 kubelet[2918]: I0325 01:38:44.538677 2918 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 25 01:38:44.538751 kubelet[2918]: I0325 01:38:44.538713 2918 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 25 01:38:44.538751 kubelet[2918]: I0325 01:38:44.538738 2918 kubelet.go:2337] "Starting kubelet main sync loop"
Mar 25 01:38:44.538972 kubelet[2918]: E0325 01:38:44.538784 2918 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 25 01:38:44.545991 kubelet[2918]: W0325 01:38:44.545745 2918 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.29.210:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.210:6443: connect: connection refused
Mar 25 01:38:44.545991 kubelet[2918]: E0325 01:38:44.545991 2918 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.29.210:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.210:6443: connect: connection refused
Mar 25 01:38:44.553199 kubelet[2918]: I0325 01:38:44.553174 2918 cpu_manager.go:214] "Starting CPU manager" policy="none"
Mar 25 01:38:44.553699 kubelet[2918]: I0325 01:38:44.553447 2918 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Mar 25 01:38:44.553699 kubelet[2918]: I0325 01:38:44.553471 2918 state_mem.go:36] "Initialized new in-memory state store"
Mar 25 01:38:44.555577 kubelet[2918]: I0325 01:38:44.555472 2918 policy_none.go:49] "None policy: Start"
Mar 25 01:38:44.557994 kubelet[2918]: I0325 01:38:44.557958 2918 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 25 01:38:44.557994 kubelet[2918]: I0325 01:38:44.557984 2918 state_mem.go:35] "Initializing new in-memory state store"
Mar 25 01:38:44.568700 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 25 01:38:44.582131 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 25 01:38:44.588735 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 25 01:38:44.592248 kubelet[2918]: I0325 01:38:44.592213 2918 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 25 01:38:44.592498 kubelet[2918]: I0325 01:38:44.592453 2918 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 25 01:38:44.592621 kubelet[2918]: I0325 01:38:44.592603 2918 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 25 01:38:44.596250 kubelet[2918]: E0325 01:38:44.596229 2918 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-29-210\" not found" Mar 25 01:38:44.601955 kubelet[2918]: I0325 01:38:44.601932 2918 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-210" Mar 25 01:38:44.602934 kubelet[2918]: E0325 01:38:44.602655 2918 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.29.210:6443/api/v1/nodes\": dial tcp 172.31.29.210:6443: connect: connection refused" node="ip-172-31-29-210" Mar 25 01:38:44.639442 kubelet[2918]: I0325 01:38:44.639185 2918 topology_manager.go:215] "Topology Admit Handler" podUID="06bb5519c7fd53330dba10aa3b2bdca1" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-29-210" Mar 25 01:38:44.642285 kubelet[2918]: I0325 01:38:44.641889 2918 topology_manager.go:215] "Topology Admit Handler" podUID="ae0f9d5dbd8d9f63ba2bb8ca2895d3d0" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-29-210" Mar 25 01:38:44.644841 kubelet[2918]: I0325 01:38:44.644811 2918 topology_manager.go:215] "Topology Admit Handler" podUID="5125062ab893f8895b928ba53cfc003b" podNamespace="kube-system" 
podName="kube-apiserver-ip-172-31-29-210" Mar 25 01:38:44.654229 systemd[1]: Created slice kubepods-burstable-pod06bb5519c7fd53330dba10aa3b2bdca1.slice - libcontainer container kubepods-burstable-pod06bb5519c7fd53330dba10aa3b2bdca1.slice. Mar 25 01:38:44.671568 systemd[1]: Created slice kubepods-burstable-podae0f9d5dbd8d9f63ba2bb8ca2895d3d0.slice - libcontainer container kubepods-burstable-podae0f9d5dbd8d9f63ba2bb8ca2895d3d0.slice. Mar 25 01:38:44.686104 systemd[1]: Created slice kubepods-burstable-pod5125062ab893f8895b928ba53cfc003b.slice - libcontainer container kubepods-burstable-pod5125062ab893f8895b928ba53cfc003b.slice. Mar 25 01:38:44.706694 kubelet[2918]: E0325 01:38:44.706627 2918 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.210:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-210?timeout=10s\": dial tcp 172.31.29.210:6443: connect: connection refused" interval="400ms" Mar 25 01:38:44.708682 kubelet[2918]: I0325 01:38:44.708651 2918 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5125062ab893f8895b928ba53cfc003b-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-210\" (UID: \"5125062ab893f8895b928ba53cfc003b\") " pod="kube-system/kube-apiserver-ip-172-31-29-210" Mar 25 01:38:44.708809 kubelet[2918]: I0325 01:38:44.708700 2918 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/06bb5519c7fd53330dba10aa3b2bdca1-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-210\" (UID: \"06bb5519c7fd53330dba10aa3b2bdca1\") " pod="kube-system/kube-controller-manager-ip-172-31-29-210" Mar 25 01:38:44.708809 kubelet[2918]: I0325 01:38:44.708725 2918 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/06bb5519c7fd53330dba10aa3b2bdca1-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-210\" (UID: \"06bb5519c7fd53330dba10aa3b2bdca1\") " pod="kube-system/kube-controller-manager-ip-172-31-29-210" Mar 25 01:38:44.708809 kubelet[2918]: I0325 01:38:44.708763 2918 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/06bb5519c7fd53330dba10aa3b2bdca1-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-210\" (UID: \"06bb5519c7fd53330dba10aa3b2bdca1\") " pod="kube-system/kube-controller-manager-ip-172-31-29-210" Mar 25 01:38:44.708809 kubelet[2918]: I0325 01:38:44.708786 2918 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/06bb5519c7fd53330dba10aa3b2bdca1-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-210\" (UID: \"06bb5519c7fd53330dba10aa3b2bdca1\") " pod="kube-system/kube-controller-manager-ip-172-31-29-210" Mar 25 01:38:44.708809 kubelet[2918]: I0325 01:38:44.708807 2918 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ae0f9d5dbd8d9f63ba2bb8ca2895d3d0-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-210\" (UID: \"ae0f9d5dbd8d9f63ba2bb8ca2895d3d0\") " pod="kube-system/kube-scheduler-ip-172-31-29-210" Mar 25 01:38:44.709129 kubelet[2918]: I0325 01:38:44.708829 2918 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/06bb5519c7fd53330dba10aa3b2bdca1-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-210\" (UID: \"06bb5519c7fd53330dba10aa3b2bdca1\") " pod="kube-system/kube-controller-manager-ip-172-31-29-210" Mar 25 01:38:44.709129 kubelet[2918]: I0325 01:38:44.708859 2918 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5125062ab893f8895b928ba53cfc003b-ca-certs\") pod \"kube-apiserver-ip-172-31-29-210\" (UID: \"5125062ab893f8895b928ba53cfc003b\") " pod="kube-system/kube-apiserver-ip-172-31-29-210" Mar 25 01:38:44.709129 kubelet[2918]: I0325 01:38:44.708884 2918 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5125062ab893f8895b928ba53cfc003b-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-210\" (UID: \"5125062ab893f8895b928ba53cfc003b\") " pod="kube-system/kube-apiserver-ip-172-31-29-210" Mar 25 01:38:44.805533 kubelet[2918]: I0325 01:38:44.805042 2918 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-210" Mar 25 01:38:44.805533 kubelet[2918]: E0325 01:38:44.805498 2918 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.29.210:6443/api/v1/nodes\": dial tcp 172.31.29.210:6443: connect: connection refused" node="ip-172-31-29-210" Mar 25 01:38:44.971786 containerd[1912]: time="2025-03-25T01:38:44.971541730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-210,Uid:06bb5519c7fd53330dba10aa3b2bdca1,Namespace:kube-system,Attempt:0,}" Mar 25 01:38:44.979602 containerd[1912]: time="2025-03-25T01:38:44.979559818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-210,Uid:ae0f9d5dbd8d9f63ba2bb8ca2895d3d0,Namespace:kube-system,Attempt:0,}" Mar 25 01:38:45.001016 containerd[1912]: time="2025-03-25T01:38:45.000971445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-210,Uid:5125062ab893f8895b928ba53cfc003b,Namespace:kube-system,Attempt:0,}" Mar 25 01:38:45.107137 kubelet[2918]: E0325 01:38:45.107087 2918 controller.go:145] "Failed to ensure lease exists, 
will retry" err="Get \"https://172.31.29.210:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-210?timeout=10s\": dial tcp 172.31.29.210:6443: connect: connection refused" interval="800ms" Mar 25 01:38:45.208051 kubelet[2918]: I0325 01:38:45.208020 2918 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-210" Mar 25 01:38:45.208400 kubelet[2918]: E0325 01:38:45.208338 2918 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.29.210:6443/api/v1/nodes\": dial tcp 172.31.29.210:6443: connect: connection refused" node="ip-172-31-29-210" Mar 25 01:38:45.352268 kubelet[2918]: W0325 01:38:45.352160 2918 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.29.210:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.210:6443: connect: connection refused Mar 25 01:38:45.352268 kubelet[2918]: E0325 01:38:45.352206 2918 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.29.210:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.210:6443: connect: connection refused Mar 25 01:38:45.394567 kubelet[2918]: W0325 01:38:45.394504 2918 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.29.210:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.210:6443: connect: connection refused Mar 25 01:38:45.394567 kubelet[2918]: E0325 01:38:45.394544 2918 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.29.210:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.210:6443: connect: connection refused Mar 25 01:38:45.529928 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount131452209.mount: 
Deactivated successfully. Mar 25 01:38:45.545424 containerd[1912]: time="2025-03-25T01:38:45.545369407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 25 01:38:45.552778 containerd[1912]: time="2025-03-25T01:38:45.552701926Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 25 01:38:45.555297 containerd[1912]: time="2025-03-25T01:38:45.555252506Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 25 01:38:45.558898 containerd[1912]: time="2025-03-25T01:38:45.558841197Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 25 01:38:45.563875 containerd[1912]: time="2025-03-25T01:38:45.562457717Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 25 01:38:45.564614 containerd[1912]: time="2025-03-25T01:38:45.564563076Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 25 01:38:45.567533 containerd[1912]: time="2025-03-25T01:38:45.567483870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 25 01:38:45.568257 containerd[1912]: time="2025-03-25T01:38:45.568218665Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id 
\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 582.412224ms" Mar 25 01:38:45.568889 containerd[1912]: time="2025-03-25T01:38:45.568825035Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 25 01:38:45.574672 containerd[1912]: time="2025-03-25T01:38:45.574625845Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 586.608711ms" Mar 25 01:38:45.575276 containerd[1912]: time="2025-03-25T01:38:45.575236820Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 566.741385ms" Mar 25 01:38:45.730017 containerd[1912]: time="2025-03-25T01:38:45.729378528Z" level=info msg="connecting to shim b71234b3f236c49d795fcf41c0593cb90adca8984740d505da3fa38340f37fcb" address="unix:///run/containerd/s/603d240dcd530393ecdbd24adceffd0135a99b61d31a2d36b6e578133113006f" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:38:45.741986 containerd[1912]: time="2025-03-25T01:38:45.741271653Z" level=info msg="connecting to shim 322256a262582e9c8457fdc82f5e9c64b00a487c33f18abca8d00e19584a4ca1" address="unix:///run/containerd/s/628a635471c4b257b850fda31affcfaf96a49eb3d469a4c52670e8e5c378d0cc" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:38:45.741986 containerd[1912]: time="2025-03-25T01:38:45.741271696Z" 
level=info msg="connecting to shim 5d32c70322afafaaa40ce3f00f8e37a3702ddacbebdfde8bafb4d3232f4d0bde" address="unix:///run/containerd/s/83500819bbcad826cae1644a3fc86638b0fd827220a8381fc014a6e943694079" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:38:45.819986 kubelet[2918]: W0325 01:38:45.819920 2918 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.29.210:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-210&limit=500&resourceVersion=0": dial tcp 172.31.29.210:6443: connect: connection refused Mar 25 01:38:45.819986 kubelet[2918]: E0325 01:38:45.819990 2918 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.29.210:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-210&limit=500&resourceVersion=0": dial tcp 172.31.29.210:6443: connect: connection refused Mar 25 01:38:45.848535 systemd[1]: Started cri-containerd-322256a262582e9c8457fdc82f5e9c64b00a487c33f18abca8d00e19584a4ca1.scope - libcontainer container 322256a262582e9c8457fdc82f5e9c64b00a487c33f18abca8d00e19584a4ca1. Mar 25 01:38:45.851977 systemd[1]: Started cri-containerd-5d32c70322afafaaa40ce3f00f8e37a3702ddacbebdfde8bafb4d3232f4d0bde.scope - libcontainer container 5d32c70322afafaaa40ce3f00f8e37a3702ddacbebdfde8bafb4d3232f4d0bde. Mar 25 01:38:45.854117 systemd[1]: Started cri-containerd-b71234b3f236c49d795fcf41c0593cb90adca8984740d505da3fa38340f37fcb.scope - libcontainer container b71234b3f236c49d795fcf41c0593cb90adca8984740d505da3fa38340f37fcb. 
Mar 25 01:38:45.912181 kubelet[2918]: E0325 01:38:45.912127 2918 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.210:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-210?timeout=10s\": dial tcp 172.31.29.210:6443: connect: connection refused" interval="1.6s" Mar 25 01:38:45.946864 kubelet[2918]: W0325 01:38:45.946814 2918 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.29.210:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.210:6443: connect: connection refused Mar 25 01:38:45.946864 kubelet[2918]: E0325 01:38:45.946872 2918 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.29.210:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.210:6443: connect: connection refused Mar 25 01:38:46.007545 containerd[1912]: time="2025-03-25T01:38:46.006134830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-210,Uid:06bb5519c7fd53330dba10aa3b2bdca1,Namespace:kube-system,Attempt:0,} returns sandbox id \"b71234b3f236c49d795fcf41c0593cb90adca8984740d505da3fa38340f37fcb\"" Mar 25 01:38:46.019361 containerd[1912]: time="2025-03-25T01:38:46.018412498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-210,Uid:5125062ab893f8895b928ba53cfc003b,Namespace:kube-system,Attempt:0,} returns sandbox id \"322256a262582e9c8457fdc82f5e9c64b00a487c33f18abca8d00e19584a4ca1\"" Mar 25 01:38:46.019516 kubelet[2918]: I0325 01:38:46.019373 2918 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-210" Mar 25 01:38:46.019792 kubelet[2918]: E0325 01:38:46.019763 2918 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.29.210:6443/api/v1/nodes\": 
dial tcp 172.31.29.210:6443: connect: connection refused" node="ip-172-31-29-210" Mar 25 01:38:46.032547 containerd[1912]: time="2025-03-25T01:38:46.032504639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-210,Uid:ae0f9d5dbd8d9f63ba2bb8ca2895d3d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d32c70322afafaaa40ce3f00f8e37a3702ddacbebdfde8bafb4d3232f4d0bde\"" Mar 25 01:38:46.033250 containerd[1912]: time="2025-03-25T01:38:46.032858644Z" level=info msg="CreateContainer within sandbox \"322256a262582e9c8457fdc82f5e9c64b00a487c33f18abca8d00e19584a4ca1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 25 01:38:46.033462 containerd[1912]: time="2025-03-25T01:38:46.033176984Z" level=info msg="CreateContainer within sandbox \"b71234b3f236c49d795fcf41c0593cb90adca8984740d505da3fa38340f37fcb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 25 01:38:46.056781 containerd[1912]: time="2025-03-25T01:38:46.055832066Z" level=info msg="Container 7c5dfba98fcdc410d5034f35f2ce5efcd0d012426250f14bed46a0018361e860: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:38:46.061400 containerd[1912]: time="2025-03-25T01:38:46.061356894Z" level=info msg="CreateContainer within sandbox \"5d32c70322afafaaa40ce3f00f8e37a3702ddacbebdfde8bafb4d3232f4d0bde\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 25 01:38:46.087251 containerd[1912]: time="2025-03-25T01:38:46.087208818Z" level=info msg="Container 98937002d889b17eca2b6e9ff1dea103a4d3a6fca32be78f249f0dfaba5aa112: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:38:46.091482 containerd[1912]: time="2025-03-25T01:38:46.091397334Z" level=info msg="CreateContainer within sandbox \"322256a262582e9c8457fdc82f5e9c64b00a487c33f18abca8d00e19584a4ca1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7c5dfba98fcdc410d5034f35f2ce5efcd0d012426250f14bed46a0018361e860\"" Mar 25 01:38:46.093055 
containerd[1912]: time="2025-03-25T01:38:46.092555609Z" level=info msg="StartContainer for \"7c5dfba98fcdc410d5034f35f2ce5efcd0d012426250f14bed46a0018361e860\"" Mar 25 01:38:46.094922 containerd[1912]: time="2025-03-25T01:38:46.094894992Z" level=info msg="connecting to shim 7c5dfba98fcdc410d5034f35f2ce5efcd0d012426250f14bed46a0018361e860" address="unix:///run/containerd/s/628a635471c4b257b850fda31affcfaf96a49eb3d469a4c52670e8e5c378d0cc" protocol=ttrpc version=3 Mar 25 01:38:46.102643 containerd[1912]: time="2025-03-25T01:38:46.102591038Z" level=info msg="CreateContainer within sandbox \"b71234b3f236c49d795fcf41c0593cb90adca8984740d505da3fa38340f37fcb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"98937002d889b17eca2b6e9ff1dea103a4d3a6fca32be78f249f0dfaba5aa112\"" Mar 25 01:38:46.103396 containerd[1912]: time="2025-03-25T01:38:46.103207840Z" level=info msg="StartContainer for \"98937002d889b17eca2b6e9ff1dea103a4d3a6fca32be78f249f0dfaba5aa112\"" Mar 25 01:38:46.106036 containerd[1912]: time="2025-03-25T01:38:46.105832117Z" level=info msg="connecting to shim 98937002d889b17eca2b6e9ff1dea103a4d3a6fca32be78f249f0dfaba5aa112" address="unix:///run/containerd/s/603d240dcd530393ecdbd24adceffd0135a99b61d31a2d36b6e578133113006f" protocol=ttrpc version=3 Mar 25 01:38:46.107081 containerd[1912]: time="2025-03-25T01:38:46.107053353Z" level=info msg="Container 930da410ae8fa8d316e80d4f38e86db94aeb9cc8032e5e3dad6c5728639f89c3: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:38:46.126032 containerd[1912]: time="2025-03-25T01:38:46.125979716Z" level=info msg="CreateContainer within sandbox \"5d32c70322afafaaa40ce3f00f8e37a3702ddacbebdfde8bafb4d3232f4d0bde\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"930da410ae8fa8d316e80d4f38e86db94aeb9cc8032e5e3dad6c5728639f89c3\"" Mar 25 01:38:46.126601 containerd[1912]: time="2025-03-25T01:38:46.126479306Z" level=info msg="StartContainer for 
\"930da410ae8fa8d316e80d4f38e86db94aeb9cc8032e5e3dad6c5728639f89c3\"" Mar 25 01:38:46.130117 containerd[1912]: time="2025-03-25T01:38:46.130033030Z" level=info msg="connecting to shim 930da410ae8fa8d316e80d4f38e86db94aeb9cc8032e5e3dad6c5728639f89c3" address="unix:///run/containerd/s/83500819bbcad826cae1644a3fc86638b0fd827220a8381fc014a6e943694079" protocol=ttrpc version=3 Mar 25 01:38:46.131099 systemd[1]: Started cri-containerd-7c5dfba98fcdc410d5034f35f2ce5efcd0d012426250f14bed46a0018361e860.scope - libcontainer container 7c5dfba98fcdc410d5034f35f2ce5efcd0d012426250f14bed46a0018361e860. Mar 25 01:38:46.158268 systemd[1]: Started cri-containerd-98937002d889b17eca2b6e9ff1dea103a4d3a6fca32be78f249f0dfaba5aa112.scope - libcontainer container 98937002d889b17eca2b6e9ff1dea103a4d3a6fca32be78f249f0dfaba5aa112. Mar 25 01:38:46.171256 systemd[1]: Started cri-containerd-930da410ae8fa8d316e80d4f38e86db94aeb9cc8032e5e3dad6c5728639f89c3.scope - libcontainer container 930da410ae8fa8d316e80d4f38e86db94aeb9cc8032e5e3dad6c5728639f89c3. 
Mar 25 01:38:46.304159 containerd[1912]: time="2025-03-25T01:38:46.303764025Z" level=info msg="StartContainer for \"7c5dfba98fcdc410d5034f35f2ce5efcd0d012426250f14bed46a0018361e860\" returns successfully" Mar 25 01:38:46.331233 containerd[1912]: time="2025-03-25T01:38:46.330893563Z" level=info msg="StartContainer for \"98937002d889b17eca2b6e9ff1dea103a4d3a6fca32be78f249f0dfaba5aa112\" returns successfully" Mar 25 01:38:46.349845 containerd[1912]: time="2025-03-25T01:38:46.349664745Z" level=info msg="StartContainer for \"930da410ae8fa8d316e80d4f38e86db94aeb9cc8032e5e3dad6c5728639f89c3\" returns successfully" Mar 25 01:38:46.489101 kubelet[2918]: E0325 01:38:46.489041 2918 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.29.210:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.29.210:6443: connect: connection refused Mar 25 01:38:47.624346 kubelet[2918]: I0325 01:38:47.622665 2918 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-210" Mar 25 01:38:49.317786 kubelet[2918]: E0325 01:38:49.317632 2918 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-29-210\" not found" node="ip-172-31-29-210" Mar 25 01:38:49.410908 kubelet[2918]: I0325 01:38:49.410868 2918 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-29-210" Mar 25 01:38:49.459424 kubelet[2918]: I0325 01:38:49.459388 2918 apiserver.go:52] "Watching apiserver" Mar 25 01:38:49.507862 kubelet[2918]: I0325 01:38:49.507818 2918 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 25 01:38:51.602827 systemd[1]: Reload requested from client PID 3192 ('systemctl') (unit session-7.scope)... Mar 25 01:38:51.602846 systemd[1]: Reloading... 
Mar 25 01:38:51.747354 zram_generator::config[3237]: No configuration found. Mar 25 01:38:51.936708 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 25 01:38:52.154973 systemd[1]: Reloading finished in 551 ms. Mar 25 01:38:52.197088 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:38:52.211699 systemd[1]: kubelet.service: Deactivated successfully. Mar 25 01:38:52.211927 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:38:52.211984 systemd[1]: kubelet.service: Consumed 999ms CPU time, 113.6M memory peak. Mar 25 01:38:52.216623 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:38:52.570186 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:38:52.584382 (kubelet)[3296]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 25 01:38:52.713871 kubelet[3296]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 25 01:38:52.713871 kubelet[3296]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 25 01:38:52.713871 kubelet[3296]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 25 01:38:52.715087 kubelet[3296]: I0325 01:38:52.714554 3296 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 25 01:38:52.726291 kubelet[3296]: I0325 01:38:52.722246 3296 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 25 01:38:52.726291 kubelet[3296]: I0325 01:38:52.722276 3296 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 25 01:38:52.726291 kubelet[3296]: I0325 01:38:52.722598 3296 server.go:927] "Client rotation is on, will bootstrap in background" Mar 25 01:38:52.726547 kubelet[3296]: I0325 01:38:52.726426 3296 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 25 01:38:52.732223 kubelet[3296]: I0325 01:38:52.728716 3296 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 25 01:38:52.740614 kubelet[3296]: I0325 01:38:52.740579 3296 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 25 01:38:52.740904 kubelet[3296]: I0325 01:38:52.740863 3296 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 25 01:38:52.741096 kubelet[3296]: I0325 01:38:52.740904 3296 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-29-210","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 25 01:38:52.741260 kubelet[3296]: I0325 01:38:52.741113 3296 topology_manager.go:138] "Creating topology manager with none policy" Mar 25 
01:38:52.741260 kubelet[3296]: I0325 01:38:52.741127 3296 container_manager_linux.go:301] "Creating device plugin manager"
Mar 25 01:38:52.744485 kubelet[3296]: I0325 01:38:52.743197 3296 state_mem.go:36] "Initialized new in-memory state store"
Mar 25 01:38:52.744485 kubelet[3296]: I0325 01:38:52.743384 3296 kubelet.go:400] "Attempting to sync node with API server"
Mar 25 01:38:52.744485 kubelet[3296]: I0325 01:38:52.744125 3296 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 25 01:38:52.744485 kubelet[3296]: I0325 01:38:52.744168 3296 kubelet.go:312] "Adding apiserver pod source"
Mar 25 01:38:52.744485 kubelet[3296]: I0325 01:38:52.744197 3296 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 25 01:38:52.743627 sudo[3309]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Mar 25 01:38:52.744211 sudo[3309]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Mar 25 01:38:52.759891 kubelet[3296]: I0325 01:38:52.759658 3296 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1"
Mar 25 01:38:52.764631 kubelet[3296]: I0325 01:38:52.764581 3296 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 25 01:38:52.768459 kubelet[3296]: I0325 01:38:52.767610 3296 server.go:1264] "Started kubelet"
Mar 25 01:38:52.800932 kubelet[3296]: I0325 01:38:52.792573 3296 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 25 01:38:52.817716 kubelet[3296]: I0325 01:38:52.801205 3296 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 25 01:38:52.822181 kubelet[3296]: I0325 01:38:52.821457 3296 volume_manager.go:291] "Starting Kubelet Volume Manager"
Mar 25 01:38:52.836116 kubelet[3296]: I0325 01:38:52.821609 3296 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Mar 25 01:38:52.836116 kubelet[3296]: I0325 01:38:52.832128 3296 reconciler.go:26] "Reconciler: start to sync state"
Mar 25 01:38:52.836116 kubelet[3296]: I0325 01:38:52.831299 3296 factory.go:221] Registration of the systemd container factory successfully
Mar 25 01:38:52.836116 kubelet[3296]: I0325 01:38:52.832260 3296 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 25 01:38:52.836116 kubelet[3296]: I0325 01:38:52.834395 3296 server.go:455] "Adding debug handlers to kubelet server"
Mar 25 01:38:52.837922 kubelet[3296]: I0325 01:38:52.802835 3296 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 25 01:38:52.839553 kubelet[3296]: I0325 01:38:52.839530 3296 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 25 01:38:52.859424 kubelet[3296]: E0325 01:38:52.858602 3296 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 25 01:38:52.861717 kubelet[3296]: I0325 01:38:52.861567 3296 factory.go:221] Registration of the containerd container factory successfully
Mar 25 01:38:52.930420 kubelet[3296]: I0325 01:38:52.929575 3296 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 25 01:38:52.933253 kubelet[3296]: I0325 01:38:52.933229 3296 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-210"
Mar 25 01:38:52.943756 kubelet[3296]: I0325 01:38:52.941532 3296 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 25 01:38:52.943756 kubelet[3296]: I0325 01:38:52.941563 3296 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 25 01:38:52.943756 kubelet[3296]: I0325 01:38:52.941584 3296 kubelet.go:2337] "Starting kubelet main sync loop"
Mar 25 01:38:52.943756 kubelet[3296]: E0325 01:38:52.941646 3296 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 25 01:38:52.966283 kubelet[3296]: I0325 01:38:52.964901 3296 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-29-210"
Mar 25 01:38:52.966283 kubelet[3296]: I0325 01:38:52.964985 3296 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-29-210"
Mar 25 01:38:53.040493 kubelet[3296]: I0325 01:38:53.040467 3296 cpu_manager.go:214] "Starting CPU manager" policy="none"
Mar 25 01:38:53.040666 kubelet[3296]: I0325 01:38:53.040653 3296 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Mar 25 01:38:53.040970 kubelet[3296]: I0325 01:38:53.040958 3296 state_mem.go:36] "Initialized new in-memory state store"
Mar 25 01:38:53.041297 kubelet[3296]: I0325 01:38:53.041282 3296 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 25 01:38:53.041556 kubelet[3296]: I0325 01:38:53.041390 3296 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 25 01:38:53.041556 kubelet[3296]: I0325 01:38:53.041433 3296 policy_none.go:49] "None policy: Start"
Mar 25 01:38:53.042018 kubelet[3296]: E0325 01:38:53.041967 3296 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 25 01:38:53.043163 kubelet[3296]: I0325 01:38:53.042766 3296 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 25 01:38:53.043163 kubelet[3296]: I0325 01:38:53.042789 3296 state_mem.go:35] "Initializing new in-memory state store"
Mar 25 01:38:53.043163 kubelet[3296]: I0325 01:38:53.043072 3296 state_mem.go:75] "Updated machine memory state"
Mar 25 01:38:53.054644 kubelet[3296]: I0325 01:38:53.053494 3296 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 25 01:38:53.054644 kubelet[3296]: I0325 01:38:53.053692 3296 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 25 01:38:53.054644 kubelet[3296]: I0325 01:38:53.054301 3296 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 25 01:38:53.242537 kubelet[3296]: I0325 01:38:53.242425 3296 topology_manager.go:215] "Topology Admit Handler" podUID="5125062ab893f8895b928ba53cfc003b" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-29-210"
Mar 25 01:38:53.243206 kubelet[3296]: I0325 01:38:53.243160 3296 topology_manager.go:215] "Topology Admit Handler" podUID="06bb5519c7fd53330dba10aa3b2bdca1" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-29-210"
Mar 25 01:38:53.248331 kubelet[3296]: I0325 01:38:53.244660 3296 topology_manager.go:215] "Topology Admit Handler" podUID="ae0f9d5dbd8d9f63ba2bb8ca2895d3d0" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-29-210"
Mar 25 01:38:53.269187 kubelet[3296]: E0325 01:38:53.269147 3296 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-29-210\" already exists" pod="kube-system/kube-scheduler-ip-172-31-29-210"
Mar 25 01:38:53.337949 kubelet[3296]: I0325 01:38:53.337519 3296 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/06bb5519c7fd53330dba10aa3b2bdca1-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-210\" (UID: \"06bb5519c7fd53330dba10aa3b2bdca1\") " pod="kube-system/kube-controller-manager-ip-172-31-29-210"
Mar 25 01:38:53.337949 kubelet[3296]: I0325 01:38:53.337574 3296 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/06bb5519c7fd53330dba10aa3b2bdca1-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-210\" (UID: \"06bb5519c7fd53330dba10aa3b2bdca1\") " pod="kube-system/kube-controller-manager-ip-172-31-29-210"
Mar 25 01:38:53.337949 kubelet[3296]: I0325 01:38:53.337604 3296 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5125062ab893f8895b928ba53cfc003b-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-210\" (UID: \"5125062ab893f8895b928ba53cfc003b\") " pod="kube-system/kube-apiserver-ip-172-31-29-210"
Mar 25 01:38:53.337949 kubelet[3296]: I0325 01:38:53.337630 3296 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/06bb5519c7fd53330dba10aa3b2bdca1-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-210\" (UID: \"06bb5519c7fd53330dba10aa3b2bdca1\") " pod="kube-system/kube-controller-manager-ip-172-31-29-210"
Mar 25 01:38:53.337949 kubelet[3296]: I0325 01:38:53.337655 3296 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/06bb5519c7fd53330dba10aa3b2bdca1-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-210\" (UID: \"06bb5519c7fd53330dba10aa3b2bdca1\") " pod="kube-system/kube-controller-manager-ip-172-31-29-210"
Mar 25 01:38:53.338719 kubelet[3296]: I0325 01:38:53.337679 3296 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/06bb5519c7fd53330dba10aa3b2bdca1-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-210\" (UID: \"06bb5519c7fd53330dba10aa3b2bdca1\") " pod="kube-system/kube-controller-manager-ip-172-31-29-210"
Mar 25 01:38:53.338719 kubelet[3296]: I0325 01:38:53.337704 3296 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ae0f9d5dbd8d9f63ba2bb8ca2895d3d0-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-210\" (UID: \"ae0f9d5dbd8d9f63ba2bb8ca2895d3d0\") " pod="kube-system/kube-scheduler-ip-172-31-29-210"
Mar 25 01:38:53.338719 kubelet[3296]: I0325 01:38:53.337728 3296 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5125062ab893f8895b928ba53cfc003b-ca-certs\") pod \"kube-apiserver-ip-172-31-29-210\" (UID: \"5125062ab893f8895b928ba53cfc003b\") " pod="kube-system/kube-apiserver-ip-172-31-29-210"
Mar 25 01:38:53.338719 kubelet[3296]: I0325 01:38:53.337753 3296 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5125062ab893f8895b928ba53cfc003b-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-210\" (UID: \"5125062ab893f8895b928ba53cfc003b\") " pod="kube-system/kube-apiserver-ip-172-31-29-210"
Mar 25 01:38:53.707435 sudo[3309]: pam_unix(sudo:session): session closed for user root
Mar 25 01:38:53.749458 kubelet[3296]: I0325 01:38:53.749160 3296 apiserver.go:52] "Watching apiserver"
Mar 25 01:38:53.832380 kubelet[3296]: I0325 01:38:53.832269 3296 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Mar 25 01:38:54.045931 kubelet[3296]: E0325 01:38:54.045426 3296 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-29-210\" already exists" pod="kube-system/kube-apiserver-ip-172-31-29-210"
Mar 25 01:38:54.199626 kubelet[3296]: I0325 01:38:54.196082 3296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-29-210" podStartSLOduration=1.196039329 podStartE2EDuration="1.196039329s" podCreationTimestamp="2025-03-25 01:38:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:38:54.165159166 +0000 UTC m=+1.553802999" watchObservedRunningTime="2025-03-25 01:38:54.196039329 +0000 UTC m=+1.584683158"
Mar 25 01:38:54.230109 kubelet[3296]: I0325 01:38:54.229942 3296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-29-210" podStartSLOduration=3.229919006 podStartE2EDuration="3.229919006s" podCreationTimestamp="2025-03-25 01:38:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:38:54.202451053 +0000 UTC m=+1.591094889" watchObservedRunningTime="2025-03-25 01:38:54.229919006 +0000 UTC m=+1.618563113"
Mar 25 01:38:56.065807 sudo[2257]: pam_unix(sudo:session): session closed for user root
Mar 25 01:38:56.088647 sshd[2256]: Connection closed by 147.75.109.163 port 52486
Mar 25 01:38:56.090371 sshd-session[2254]: pam_unix(sshd:session): session closed for user core
Mar 25 01:38:56.094086 systemd[1]: sshd@6-172.31.29.210:22-147.75.109.163:52486.service: Deactivated successfully.
Mar 25 01:38:56.096827 systemd[1]: session-7.scope: Deactivated successfully.
Mar 25 01:38:56.097053 systemd[1]: session-7.scope: Consumed 4.711s CPU time, 223M memory peak.
Mar 25 01:38:56.099457 systemd-logind[1895]: Session 7 logged out. Waiting for processes to exit.
Mar 25 01:38:56.101013 systemd-logind[1895]: Removed session 7.
Mar 25 01:38:57.052781 kubelet[3296]: I0325 01:38:57.052710 3296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-29-210" podStartSLOduration=4.052690721 podStartE2EDuration="4.052690721s" podCreationTimestamp="2025-03-25 01:38:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:38:54.237955177 +0000 UTC m=+1.626599028" watchObservedRunningTime="2025-03-25 01:38:57.052690721 +0000 UTC m=+4.441334555"
Mar 25 01:38:58.385439 update_engine[1898]: I20250325 01:38:58.385358 1898 update_attempter.cc:509] Updating boot flags...
Mar 25 01:38:58.560341 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3381)
Mar 25 01:38:58.798483 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3381)
Mar 25 01:38:59.012447 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3381)
Mar 25 01:39:05.440995 kubelet[3296]: I0325 01:39:05.440961 3296 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 25 01:39:05.462410 containerd[1912]: time="2025-03-25T01:39:05.461000845Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 25 01:39:05.476411 kubelet[3296]: I0325 01:39:05.476116 3296 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 25 01:39:06.038343 kubelet[3296]: I0325 01:39:06.037400 3296 topology_manager.go:215] "Topology Admit Handler" podUID="11bcdc2f-c5fe-4ff6-aff6-b2c31766bc98" podNamespace="kube-system" podName="kube-proxy-7c56j"
Mar 25 01:39:06.052029 systemd[1]: Created slice kubepods-besteffort-pod11bcdc2f_c5fe_4ff6_aff6_b2c31766bc98.slice - libcontainer container kubepods-besteffort-pod11bcdc2f_c5fe_4ff6_aff6_b2c31766bc98.slice.
Mar 25 01:39:06.057346 kubelet[3296]: I0325 01:39:06.055778 3296 topology_manager.go:215] "Topology Admit Handler" podUID="035a6b5c-d525-47f1-9bfb-266d722773ba" podNamespace="kube-system" podName="cilium-svsww"
Mar 25 01:39:06.072820 systemd[1]: Created slice kubepods-burstable-pod035a6b5c_d525_47f1_9bfb_266d722773ba.slice - libcontainer container kubepods-burstable-pod035a6b5c_d525_47f1_9bfb_266d722773ba.slice.
Mar 25 01:39:06.137961 kubelet[3296]: I0325 01:39:06.137919 3296 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/11bcdc2f-c5fe-4ff6-aff6-b2c31766bc98-lib-modules\") pod \"kube-proxy-7c56j\" (UID: \"11bcdc2f-c5fe-4ff6-aff6-b2c31766bc98\") " pod="kube-system/kube-proxy-7c56j"
Mar 25 01:39:06.138229 kubelet[3296]: I0325 01:39:06.138200 3296 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/035a6b5c-d525-47f1-9bfb-266d722773ba-bpf-maps\") pod \"cilium-svsww\" (UID: \"035a6b5c-d525-47f1-9bfb-266d722773ba\") " pod="kube-system/cilium-svsww"
Mar 25 01:39:06.138322 kubelet[3296]: I0325 01:39:06.138234 3296 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/035a6b5c-d525-47f1-9bfb-266d722773ba-xtables-lock\") pod \"cilium-svsww\" (UID: \"035a6b5c-d525-47f1-9bfb-266d722773ba\") " pod="kube-system/cilium-svsww"
Mar 25 01:39:06.138322 kubelet[3296]: I0325 01:39:06.138262 3296 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh7xr\" (UniqueName: \"kubernetes.io/projected/11bcdc2f-c5fe-4ff6-aff6-b2c31766bc98-kube-api-access-hh7xr\") pod \"kube-proxy-7c56j\" (UID: \"11bcdc2f-c5fe-4ff6-aff6-b2c31766bc98\") " pod="kube-system/kube-proxy-7c56j"
Mar 25 01:39:06.138322 kubelet[3296]: I0325 01:39:06.138291 3296 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/035a6b5c-d525-47f1-9bfb-266d722773ba-hubble-tls\") pod \"cilium-svsww\" (UID: \"035a6b5c-d525-47f1-9bfb-266d722773ba\") " pod="kube-system/cilium-svsww"
Mar 25 01:39:06.138464 kubelet[3296]: I0325 01:39:06.138355 3296 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/035a6b5c-d525-47f1-9bfb-266d722773ba-lib-modules\") pod \"cilium-svsww\" (UID: \"035a6b5c-d525-47f1-9bfb-266d722773ba\") " pod="kube-system/cilium-svsww"
Mar 25 01:39:06.138464 kubelet[3296]: I0325 01:39:06.138385 3296 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/035a6b5c-d525-47f1-9bfb-266d722773ba-hostproc\") pod \"cilium-svsww\" (UID: \"035a6b5c-d525-47f1-9bfb-266d722773ba\") " pod="kube-system/cilium-svsww"
Mar 25 01:39:06.138464 kubelet[3296]: I0325 01:39:06.138409 3296 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/035a6b5c-d525-47f1-9bfb-266d722773ba-cni-path\") pod \"cilium-svsww\" (UID: \"035a6b5c-d525-47f1-9bfb-266d722773ba\") " pod="kube-system/cilium-svsww"
Mar 25 01:39:06.138464 kubelet[3296]: I0325 01:39:06.138433 3296 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/035a6b5c-d525-47f1-9bfb-266d722773ba-host-proc-sys-net\") pod \"cilium-svsww\" (UID: \"035a6b5c-d525-47f1-9bfb-266d722773ba\") " pod="kube-system/cilium-svsww"
Mar 25 01:39:06.138620 kubelet[3296]: I0325 01:39:06.138473 3296 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/11bcdc2f-c5fe-4ff6-aff6-b2c31766bc98-xtables-lock\") pod \"kube-proxy-7c56j\" (UID: \"11bcdc2f-c5fe-4ff6-aff6-b2c31766bc98\") " pod="kube-system/kube-proxy-7c56j"
Mar 25 01:39:06.138620 kubelet[3296]: I0325 01:39:06.138498 3296 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/035a6b5c-d525-47f1-9bfb-266d722773ba-cilium-cgroup\") pod \"cilium-svsww\" (UID: \"035a6b5c-d525-47f1-9bfb-266d722773ba\") " pod="kube-system/cilium-svsww"
Mar 25 01:39:06.138620 kubelet[3296]: I0325 01:39:06.138520 3296 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/035a6b5c-d525-47f1-9bfb-266d722773ba-etc-cni-netd\") pod \"cilium-svsww\" (UID: \"035a6b5c-d525-47f1-9bfb-266d722773ba\") " pod="kube-system/cilium-svsww"
Mar 25 01:39:06.138620 kubelet[3296]: I0325 01:39:06.138551 3296 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcs9k\" (UniqueName: \"kubernetes.io/projected/035a6b5c-d525-47f1-9bfb-266d722773ba-kube-api-access-wcs9k\") pod \"cilium-svsww\" (UID: \"035a6b5c-d525-47f1-9bfb-266d722773ba\") " pod="kube-system/cilium-svsww"
Mar 25 01:39:06.138620 kubelet[3296]: I0325 01:39:06.138575 3296 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/11bcdc2f-c5fe-4ff6-aff6-b2c31766bc98-kube-proxy\") pod \"kube-proxy-7c56j\" (UID: \"11bcdc2f-c5fe-4ff6-aff6-b2c31766bc98\") " pod="kube-system/kube-proxy-7c56j"
Mar 25 01:39:06.138794 kubelet[3296]: I0325 01:39:06.138602 3296 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/035a6b5c-d525-47f1-9bfb-266d722773ba-host-proc-sys-kernel\") pod \"cilium-svsww\" (UID: \"035a6b5c-d525-47f1-9bfb-266d722773ba\") " pod="kube-system/cilium-svsww"
Mar 25 01:39:06.138794 kubelet[3296]: I0325 01:39:06.138625 3296 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/035a6b5c-d525-47f1-9bfb-266d722773ba-clustermesh-secrets\") pod \"cilium-svsww\" (UID: \"035a6b5c-d525-47f1-9bfb-266d722773ba\") " pod="kube-system/cilium-svsww"
Mar 25 01:39:06.138794 kubelet[3296]: I0325 01:39:06.138651 3296 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/035a6b5c-d525-47f1-9bfb-266d722773ba-cilium-run\") pod \"cilium-svsww\" (UID: \"035a6b5c-d525-47f1-9bfb-266d722773ba\") " pod="kube-system/cilium-svsww"
Mar 25 01:39:06.138794 kubelet[3296]: I0325 01:39:06.138674 3296 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/035a6b5c-d525-47f1-9bfb-266d722773ba-cilium-config-path\") pod \"cilium-svsww\" (UID: \"035a6b5c-d525-47f1-9bfb-266d722773ba\") " pod="kube-system/cilium-svsww"
Mar 25 01:39:06.380241 containerd[1912]: time="2025-03-25T01:39:06.379968654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-svsww,Uid:035a6b5c-d525-47f1-9bfb-266d722773ba,Namespace:kube-system,Attempt:0,}"
Mar 25 01:39:06.385154 containerd[1912]: time="2025-03-25T01:39:06.385016747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7c56j,Uid:11bcdc2f-c5fe-4ff6-aff6-b2c31766bc98,Namespace:kube-system,Attempt:0,}"
Mar 25 01:39:06.493065 containerd[1912]: time="2025-03-25T01:39:06.493011957Z" level=info msg="connecting to shim a68f2ddbf64ef94e4890c7a558449609020a2421a2fb41c93766d6b2ac2bbc9a" address="unix:///run/containerd/s/314a7332be0a8bfed01e7b8c46dd268a56a11a2ebe64392833af7ae01f720d2a" namespace=k8s.io protocol=ttrpc version=3
Mar 25 01:39:06.506677 containerd[1912]: time="2025-03-25T01:39:06.505596837Z" level=info msg="connecting to shim 7cefb53863f5515d0a9bf0193b24cfb1bb0925bfd3d302b869e8571be6fc0a7f" address="unix:///run/containerd/s/d0792e502ca0305f4700a62ddf746681365c7c0c4fe7402a02ca1f103ac0cc5d" namespace=k8s.io protocol=ttrpc version=3
Mar 25 01:39:06.583805 kubelet[3296]: I0325 01:39:06.581455 3296 topology_manager.go:215] "Topology Admit Handler" podUID="e5aa7182-9f1f-45ee-9555-3680f7481b43" podNamespace="kube-system" podName="cilium-operator-599987898-bf4cb"
Mar 25 01:39:06.595242 systemd[1]: Started cri-containerd-7cefb53863f5515d0a9bf0193b24cfb1bb0925bfd3d302b869e8571be6fc0a7f.scope - libcontainer container 7cefb53863f5515d0a9bf0193b24cfb1bb0925bfd3d302b869e8571be6fc0a7f.
Mar 25 01:39:06.604584 systemd[1]: Started cri-containerd-a68f2ddbf64ef94e4890c7a558449609020a2421a2fb41c93766d6b2ac2bbc9a.scope - libcontainer container a68f2ddbf64ef94e4890c7a558449609020a2421a2fb41c93766d6b2ac2bbc9a.
Mar 25 01:39:06.625722 systemd[1]: Created slice kubepods-besteffort-pode5aa7182_9f1f_45ee_9555_3680f7481b43.slice - libcontainer container kubepods-besteffort-pode5aa7182_9f1f_45ee_9555_3680f7481b43.slice.
Mar 25 01:39:06.644205 kubelet[3296]: I0325 01:39:06.643270 3296 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e5aa7182-9f1f-45ee-9555-3680f7481b43-cilium-config-path\") pod \"cilium-operator-599987898-bf4cb\" (UID: \"e5aa7182-9f1f-45ee-9555-3680f7481b43\") " pod="kube-system/cilium-operator-599987898-bf4cb"
Mar 25 01:39:06.644205 kubelet[3296]: I0325 01:39:06.643685 3296 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-656nr\" (UniqueName: \"kubernetes.io/projected/e5aa7182-9f1f-45ee-9555-3680f7481b43-kube-api-access-656nr\") pod \"cilium-operator-599987898-bf4cb\" (UID: \"e5aa7182-9f1f-45ee-9555-3680f7481b43\") " pod="kube-system/cilium-operator-599987898-bf4cb"
Mar 25 01:39:06.822754 containerd[1912]: time="2025-03-25T01:39:06.822693416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-svsww,Uid:035a6b5c-d525-47f1-9bfb-266d722773ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"a68f2ddbf64ef94e4890c7a558449609020a2421a2fb41c93766d6b2ac2bbc9a\""
Mar 25 01:39:06.837353 containerd[1912]: time="2025-03-25T01:39:06.837263173Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Mar 25 01:39:06.877808 containerd[1912]: time="2025-03-25T01:39:06.877768761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7c56j,Uid:11bcdc2f-c5fe-4ff6-aff6-b2c31766bc98,Namespace:kube-system,Attempt:0,} returns sandbox id \"7cefb53863f5515d0a9bf0193b24cfb1bb0925bfd3d302b869e8571be6fc0a7f\""
Mar 25 01:39:06.902915 containerd[1912]: time="2025-03-25T01:39:06.902714571Z" level=info msg="CreateContainer within sandbox \"7cefb53863f5515d0a9bf0193b24cfb1bb0925bfd3d302b869e8571be6fc0a7f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 25 01:39:06.931495 containerd[1912]: time="2025-03-25T01:39:06.931006211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-bf4cb,Uid:e5aa7182-9f1f-45ee-9555-3680f7481b43,Namespace:kube-system,Attempt:0,}"
Mar 25 01:39:06.932747 containerd[1912]: time="2025-03-25T01:39:06.932708950Z" level=info msg="Container 8ebe2b2f60f046dc4441201dcb3f9ac7e674c566896259ca8db170b3540d350a: CDI devices from CRI Config.CDIDevices: []"
Mar 25 01:39:06.953027 containerd[1912]: time="2025-03-25T01:39:06.952972715Z" level=info msg="CreateContainer within sandbox \"7cefb53863f5515d0a9bf0193b24cfb1bb0925bfd3d302b869e8571be6fc0a7f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8ebe2b2f60f046dc4441201dcb3f9ac7e674c566896259ca8db170b3540d350a\""
Mar 25 01:39:06.962948 containerd[1912]: time="2025-03-25T01:39:06.961492840Z" level=info msg="StartContainer for \"8ebe2b2f60f046dc4441201dcb3f9ac7e674c566896259ca8db170b3540d350a\""
Mar 25 01:39:06.962948 containerd[1912]: time="2025-03-25T01:39:06.962836033Z" level=info msg="connecting to shim 8ebe2b2f60f046dc4441201dcb3f9ac7e674c566896259ca8db170b3540d350a" address="unix:///run/containerd/s/d0792e502ca0305f4700a62ddf746681365c7c0c4fe7402a02ca1f103ac0cc5d" protocol=ttrpc version=3
Mar 25 01:39:06.985326 containerd[1912]: time="2025-03-25T01:39:06.985261665Z" level=info msg="connecting to shim 49d0a7b2e7885b72bac6b4fb6da123fdd0702ccb63d53d14408440ca6ae502be" address="unix:///run/containerd/s/ac56999b055298031a5e1bb6649af62f5e14ee8d0c3449b4b39b109416e7e6e2" namespace=k8s.io protocol=ttrpc version=3
Mar 25 01:39:07.000594 systemd[1]: Started cri-containerd-8ebe2b2f60f046dc4441201dcb3f9ac7e674c566896259ca8db170b3540d350a.scope - libcontainer container 8ebe2b2f60f046dc4441201dcb3f9ac7e674c566896259ca8db170b3540d350a.
Mar 25 01:39:07.033804 systemd[1]: Started cri-containerd-49d0a7b2e7885b72bac6b4fb6da123fdd0702ccb63d53d14408440ca6ae502be.scope - libcontainer container 49d0a7b2e7885b72bac6b4fb6da123fdd0702ccb63d53d14408440ca6ae502be.
Mar 25 01:39:07.102959 containerd[1912]: time="2025-03-25T01:39:07.102910978Z" level=info msg="StartContainer for \"8ebe2b2f60f046dc4441201dcb3f9ac7e674c566896259ca8db170b3540d350a\" returns successfully"
Mar 25 01:39:07.138390 containerd[1912]: time="2025-03-25T01:39:07.137990501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-bf4cb,Uid:e5aa7182-9f1f-45ee-9555-3680f7481b43,Namespace:kube-system,Attempt:0,} returns sandbox id \"49d0a7b2e7885b72bac6b4fb6da123fdd0702ccb63d53d14408440ca6ae502be\""
Mar 25 01:39:08.196372 kubelet[3296]: I0325 01:39:08.195537 3296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7c56j" podStartSLOduration=2.180390974 podStartE2EDuration="2.180390974s" podCreationTimestamp="2025-03-25 01:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:39:08.176343141 +0000 UTC m=+15.564986975" watchObservedRunningTime="2025-03-25 01:39:08.180390974 +0000 UTC m=+15.569034898"
Mar 25 01:39:14.364949 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3306128800.mount: Deactivated successfully.
Mar 25 01:39:17.894682 containerd[1912]: time="2025-03-25T01:39:17.894627361Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:39:17.897227 containerd[1912]: time="2025-03-25T01:39:17.897151193Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Mar 25 01:39:17.897751 containerd[1912]: time="2025-03-25T01:39:17.897712997Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:39:17.899239 containerd[1912]: time="2025-03-25T01:39:17.899136022Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.061822169s"
Mar 25 01:39:17.899396 containerd[1912]: time="2025-03-25T01:39:17.899239376Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Mar 25 01:39:17.903033 containerd[1912]: time="2025-03-25T01:39:17.902211741Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Mar 25 01:39:17.907812 containerd[1912]: time="2025-03-25T01:39:17.907773915Z" level=info msg="CreateContainer within sandbox \"a68f2ddbf64ef94e4890c7a558449609020a2421a2fb41c93766d6b2ac2bbc9a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 25 01:39:17.973408 containerd[1912]: time="2025-03-25T01:39:17.970088585Z" level=info msg="Container 6a6f030a3e7a77010d0fb814ad210099975922e044f82e9926a4b92b7985b4b9: CDI devices from CRI Config.CDIDevices: []"
Mar 25 01:39:17.983680 containerd[1912]: time="2025-03-25T01:39:17.983632938Z" level=info msg="CreateContainer within sandbox \"a68f2ddbf64ef94e4890c7a558449609020a2421a2fb41c93766d6b2ac2bbc9a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6a6f030a3e7a77010d0fb814ad210099975922e044f82e9926a4b92b7985b4b9\""
Mar 25 01:39:17.984754 containerd[1912]: time="2025-03-25T01:39:17.984720585Z" level=info msg="StartContainer for \"6a6f030a3e7a77010d0fb814ad210099975922e044f82e9926a4b92b7985b4b9\""
Mar 25 01:39:17.986262 containerd[1912]: time="2025-03-25T01:39:17.986227653Z" level=info msg="connecting to shim 6a6f030a3e7a77010d0fb814ad210099975922e044f82e9926a4b92b7985b4b9" address="unix:///run/containerd/s/314a7332be0a8bfed01e7b8c46dd268a56a11a2ebe64392833af7ae01f720d2a" protocol=ttrpc version=3
Mar 25 01:39:18.338345 systemd[1]: Started cri-containerd-6a6f030a3e7a77010d0fb814ad210099975922e044f82e9926a4b92b7985b4b9.scope - libcontainer container 6a6f030a3e7a77010d0fb814ad210099975922e044f82e9926a4b92b7985b4b9.
Mar 25 01:39:18.434355 containerd[1912]: time="2025-03-25T01:39:18.433432250Z" level=info msg="StartContainer for \"6a6f030a3e7a77010d0fb814ad210099975922e044f82e9926a4b92b7985b4b9\" returns successfully"
Mar 25 01:39:18.449613 systemd[1]: cri-containerd-6a6f030a3e7a77010d0fb814ad210099975922e044f82e9926a4b92b7985b4b9.scope: Deactivated successfully.
Mar 25 01:39:18.556945 containerd[1912]: time="2025-03-25T01:39:18.556441115Z" level=info msg="received exit event container_id:\"6a6f030a3e7a77010d0fb814ad210099975922e044f82e9926a4b92b7985b4b9\" id:\"6a6f030a3e7a77010d0fb814ad210099975922e044f82e9926a4b92b7985b4b9\" pid:3973 exited_at:{seconds:1742866758 nanos:461069698}"
Mar 25 01:39:18.563864 containerd[1912]: time="2025-03-25T01:39:18.563817476Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a6f030a3e7a77010d0fb814ad210099975922e044f82e9926a4b92b7985b4b9\" id:\"6a6f030a3e7a77010d0fb814ad210099975922e044f82e9926a4b92b7985b4b9\" pid:3973 exited_at:{seconds:1742866758 nanos:461069698}"
Mar 25 01:39:18.651040 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6a6f030a3e7a77010d0fb814ad210099975922e044f82e9926a4b92b7985b4b9-rootfs.mount: Deactivated successfully.
Mar 25 01:39:19.177911 containerd[1912]: time="2025-03-25T01:39:19.176514129Z" level=info msg="CreateContainer within sandbox \"a68f2ddbf64ef94e4890c7a558449609020a2421a2fb41c93766d6b2ac2bbc9a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 25 01:39:19.202400 containerd[1912]: time="2025-03-25T01:39:19.201607250Z" level=info msg="Container f0d491aeef59125cdaf991868ba527809dda993068f37757ca3f33e5e443914e: CDI devices from CRI Config.CDIDevices: []"
Mar 25 01:39:19.220888 containerd[1912]: time="2025-03-25T01:39:19.220842855Z" level=info msg="CreateContainer within sandbox \"a68f2ddbf64ef94e4890c7a558449609020a2421a2fb41c93766d6b2ac2bbc9a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f0d491aeef59125cdaf991868ba527809dda993068f37757ca3f33e5e443914e\""
Mar 25 01:39:19.224638 containerd[1912]: time="2025-03-25T01:39:19.223173534Z" level=info msg="StartContainer for \"f0d491aeef59125cdaf991868ba527809dda993068f37757ca3f33e5e443914e\""
Mar 25 01:39:19.227974 containerd[1912]: time="2025-03-25T01:39:19.227791107Z" level=info msg="connecting to shim f0d491aeef59125cdaf991868ba527809dda993068f37757ca3f33e5e443914e" address="unix:///run/containerd/s/314a7332be0a8bfed01e7b8c46dd268a56a11a2ebe64392833af7ae01f720d2a" protocol=ttrpc version=3
Mar 25 01:39:19.267415 systemd[1]: Started cri-containerd-f0d491aeef59125cdaf991868ba527809dda993068f37757ca3f33e5e443914e.scope - libcontainer container f0d491aeef59125cdaf991868ba527809dda993068f37757ca3f33e5e443914e.
Mar 25 01:39:19.360333 containerd[1912]: time="2025-03-25T01:39:19.359960758Z" level=info msg="StartContainer for \"f0d491aeef59125cdaf991868ba527809dda993068f37757ca3f33e5e443914e\" returns successfully"
Mar 25 01:39:19.370878 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 25 01:39:19.371919 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 25 01:39:19.372094 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Mar 25 01:39:19.377304 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 25 01:39:19.381747 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 25 01:39:19.384484 systemd[1]: cri-containerd-f0d491aeef59125cdaf991868ba527809dda993068f37757ca3f33e5e443914e.scope: Deactivated successfully.
Mar 25 01:39:19.389712 containerd[1912]: time="2025-03-25T01:39:19.389672692Z" level=info msg="received exit event container_id:\"f0d491aeef59125cdaf991868ba527809dda993068f37757ca3f33e5e443914e\" id:\"f0d491aeef59125cdaf991868ba527809dda993068f37757ca3f33e5e443914e\" pid:4018 exited_at:{seconds:1742866759 nanos:387266840}" Mar 25 01:39:19.390501 containerd[1912]: time="2025-03-25T01:39:19.390474618Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f0d491aeef59125cdaf991868ba527809dda993068f37757ca3f33e5e443914e\" id:\"f0d491aeef59125cdaf991868ba527809dda993068f37757ca3f33e5e443914e\" pid:4018 exited_at:{seconds:1742866759 nanos:387266840}" Mar 25 01:39:19.473033 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 25 01:39:20.193654 containerd[1912]: time="2025-03-25T01:39:20.192532610Z" level=info msg="CreateContainer within sandbox \"a68f2ddbf64ef94e4890c7a558449609020a2421a2fb41c93766d6b2ac2bbc9a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 25 01:39:20.193515 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f0d491aeef59125cdaf991868ba527809dda993068f37757ca3f33e5e443914e-rootfs.mount: Deactivated successfully. Mar 25 01:39:20.353539 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount570943340.mount: Deactivated successfully. Mar 25 01:39:20.399455 containerd[1912]: time="2025-03-25T01:39:20.399245569Z" level=info msg="Container e082796a408fa1321dc7ba9aa487a3ce059e2a07c532fb583153be427d5fad16: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:39:20.401150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2564582652.mount: Deactivated successfully. 
Mar 25 01:39:20.424887 containerd[1912]: time="2025-03-25T01:39:20.424207395Z" level=info msg="CreateContainer within sandbox \"a68f2ddbf64ef94e4890c7a558449609020a2421a2fb41c93766d6b2ac2bbc9a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e082796a408fa1321dc7ba9aa487a3ce059e2a07c532fb583153be427d5fad16\"" Mar 25 01:39:20.427764 containerd[1912]: time="2025-03-25T01:39:20.427412079Z" level=info msg="StartContainer for \"e082796a408fa1321dc7ba9aa487a3ce059e2a07c532fb583153be427d5fad16\"" Mar 25 01:39:20.430940 containerd[1912]: time="2025-03-25T01:39:20.430175999Z" level=info msg="connecting to shim e082796a408fa1321dc7ba9aa487a3ce059e2a07c532fb583153be427d5fad16" address="unix:///run/containerd/s/314a7332be0a8bfed01e7b8c46dd268a56a11a2ebe64392833af7ae01f720d2a" protocol=ttrpc version=3 Mar 25 01:39:20.464703 systemd[1]: Started cri-containerd-e082796a408fa1321dc7ba9aa487a3ce059e2a07c532fb583153be427d5fad16.scope - libcontainer container e082796a408fa1321dc7ba9aa487a3ce059e2a07c532fb583153be427d5fad16. Mar 25 01:39:20.539943 systemd[1]: cri-containerd-e082796a408fa1321dc7ba9aa487a3ce059e2a07c532fb583153be427d5fad16.scope: Deactivated successfully. 
Mar 25 01:39:20.544160 containerd[1912]: time="2025-03-25T01:39:20.544106540Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e082796a408fa1321dc7ba9aa487a3ce059e2a07c532fb583153be427d5fad16\" id:\"e082796a408fa1321dc7ba9aa487a3ce059e2a07c532fb583153be427d5fad16\" pid:4074 exited_at:{seconds:1742866760 nanos:542785896}" Mar 25 01:39:20.544433 containerd[1912]: time="2025-03-25T01:39:20.544389851Z" level=info msg="received exit event container_id:\"e082796a408fa1321dc7ba9aa487a3ce059e2a07c532fb583153be427d5fad16\" id:\"e082796a408fa1321dc7ba9aa487a3ce059e2a07c532fb583153be427d5fad16\" pid:4074 exited_at:{seconds:1742866760 nanos:542785896}" Mar 25 01:39:20.547111 containerd[1912]: time="2025-03-25T01:39:20.547080326Z" level=info msg="StartContainer for \"e082796a408fa1321dc7ba9aa487a3ce059e2a07c532fb583153be427d5fad16\" returns successfully" Mar 25 01:39:21.190705 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e082796a408fa1321dc7ba9aa487a3ce059e2a07c532fb583153be427d5fad16-rootfs.mount: Deactivated successfully. 
Mar 25 01:39:21.209995 containerd[1912]: time="2025-03-25T01:39:21.208519148Z" level=info msg="CreateContainer within sandbox \"a68f2ddbf64ef94e4890c7a558449609020a2421a2fb41c93766d6b2ac2bbc9a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 25 01:39:21.253090 containerd[1912]: time="2025-03-25T01:39:21.253049497Z" level=info msg="Container 99294bb160bf3665266ab3d458813f4ea76dfc4b13e4390170b7f16ae5cdadd9: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:39:21.275985 containerd[1912]: time="2025-03-25T01:39:21.275170301Z" level=info msg="CreateContainer within sandbox \"a68f2ddbf64ef94e4890c7a558449609020a2421a2fb41c93766d6b2ac2bbc9a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"99294bb160bf3665266ab3d458813f4ea76dfc4b13e4390170b7f16ae5cdadd9\"" Mar 25 01:39:21.278117 containerd[1912]: time="2025-03-25T01:39:21.278073981Z" level=info msg="StartContainer for \"99294bb160bf3665266ab3d458813f4ea76dfc4b13e4390170b7f16ae5cdadd9\"" Mar 25 01:39:21.285599 containerd[1912]: time="2025-03-25T01:39:21.285557384Z" level=info msg="connecting to shim 99294bb160bf3665266ab3d458813f4ea76dfc4b13e4390170b7f16ae5cdadd9" address="unix:///run/containerd/s/314a7332be0a8bfed01e7b8c46dd268a56a11a2ebe64392833af7ae01f720d2a" protocol=ttrpc version=3 Mar 25 01:39:21.352604 systemd[1]: Started cri-containerd-99294bb160bf3665266ab3d458813f4ea76dfc4b13e4390170b7f16ae5cdadd9.scope - libcontainer container 99294bb160bf3665266ab3d458813f4ea76dfc4b13e4390170b7f16ae5cdadd9. Mar 25 01:39:21.455656 systemd[1]: cri-containerd-99294bb160bf3665266ab3d458813f4ea76dfc4b13e4390170b7f16ae5cdadd9.scope: Deactivated successfully. 
Mar 25 01:39:21.466228 containerd[1912]: time="2025-03-25T01:39:21.465589100Z" level=info msg="TaskExit event in podsandbox handler container_id:\"99294bb160bf3665266ab3d458813f4ea76dfc4b13e4390170b7f16ae5cdadd9\" id:\"99294bb160bf3665266ab3d458813f4ea76dfc4b13e4390170b7f16ae5cdadd9\" pid:4117 exited_at:{seconds:1742866761 nanos:456932058}" Mar 25 01:39:21.466228 containerd[1912]: time="2025-03-25T01:39:21.466190727Z" level=info msg="received exit event container_id:\"99294bb160bf3665266ab3d458813f4ea76dfc4b13e4390170b7f16ae5cdadd9\" id:\"99294bb160bf3665266ab3d458813f4ea76dfc4b13e4390170b7f16ae5cdadd9\" pid:4117 exited_at:{seconds:1742866761 nanos:456932058}" Mar 25 01:39:21.489602 containerd[1912]: time="2025-03-25T01:39:21.489464344Z" level=info msg="StartContainer for \"99294bb160bf3665266ab3d458813f4ea76dfc4b13e4390170b7f16ae5cdadd9\" returns successfully" Mar 25 01:39:21.516492 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-99294bb160bf3665266ab3d458813f4ea76dfc4b13e4390170b7f16ae5cdadd9-rootfs.mount: Deactivated successfully. 
Mar 25 01:39:21.566951 containerd[1912]: time="2025-03-25T01:39:21.566900507Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:39:21.568868 containerd[1912]: time="2025-03-25T01:39:21.568705117Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Mar 25 01:39:21.571443 containerd[1912]: time="2025-03-25T01:39:21.571357939Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:39:21.591559 containerd[1912]: time="2025-03-25T01:39:21.591405256Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.689111996s" Mar 25 01:39:21.591559 containerd[1912]: time="2025-03-25T01:39:21.591457884Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 25 01:39:21.595423 containerd[1912]: time="2025-03-25T01:39:21.595382975Z" level=info msg="CreateContainer within sandbox \"49d0a7b2e7885b72bac6b4fb6da123fdd0702ccb63d53d14408440ca6ae502be\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 25 01:39:21.611118 containerd[1912]: time="2025-03-25T01:39:21.611074298Z" level=info msg="Container 
2fd0c19f0b6a0b35dfbfb5fd962df3c0a952b29477ca63466c7bc8dfefe85b2b: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:39:21.630791 containerd[1912]: time="2025-03-25T01:39:21.630750656Z" level=info msg="CreateContainer within sandbox \"49d0a7b2e7885b72bac6b4fb6da123fdd0702ccb63d53d14408440ca6ae502be\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2fd0c19f0b6a0b35dfbfb5fd962df3c0a952b29477ca63466c7bc8dfefe85b2b\"" Mar 25 01:39:21.631648 containerd[1912]: time="2025-03-25T01:39:21.631615867Z" level=info msg="StartContainer for \"2fd0c19f0b6a0b35dfbfb5fd962df3c0a952b29477ca63466c7bc8dfefe85b2b\"" Mar 25 01:39:21.638172 containerd[1912]: time="2025-03-25T01:39:21.637971590Z" level=info msg="connecting to shim 2fd0c19f0b6a0b35dfbfb5fd962df3c0a952b29477ca63466c7bc8dfefe85b2b" address="unix:///run/containerd/s/ac56999b055298031a5e1bb6649af62f5e14ee8d0c3449b4b39b109416e7e6e2" protocol=ttrpc version=3 Mar 25 01:39:21.694014 systemd[1]: Started cri-containerd-2fd0c19f0b6a0b35dfbfb5fd962df3c0a952b29477ca63466c7bc8dfefe85b2b.scope - libcontainer container 2fd0c19f0b6a0b35dfbfb5fd962df3c0a952b29477ca63466c7bc8dfefe85b2b. 
Mar 25 01:39:21.841376 containerd[1912]: time="2025-03-25T01:39:21.841338991Z" level=info msg="StartContainer for \"2fd0c19f0b6a0b35dfbfb5fd962df3c0a952b29477ca63466c7bc8dfefe85b2b\" returns successfully" Mar 25 01:39:22.222328 containerd[1912]: time="2025-03-25T01:39:22.219166400Z" level=info msg="CreateContainer within sandbox \"a68f2ddbf64ef94e4890c7a558449609020a2421a2fb41c93766d6b2ac2bbc9a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 25 01:39:22.242339 containerd[1912]: time="2025-03-25T01:39:22.242090885Z" level=info msg="Container 887a17df0fe75c724f661cef348ea0c58ded2b56b9ed31c0c0abf90b1ad906eb: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:39:22.268344 containerd[1912]: time="2025-03-25T01:39:22.267813768Z" level=info msg="CreateContainer within sandbox \"a68f2ddbf64ef94e4890c7a558449609020a2421a2fb41c93766d6b2ac2bbc9a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"887a17df0fe75c724f661cef348ea0c58ded2b56b9ed31c0c0abf90b1ad906eb\"" Mar 25 01:39:22.273374 containerd[1912]: time="2025-03-25T01:39:22.271122198Z" level=info msg="StartContainer for \"887a17df0fe75c724f661cef348ea0c58ded2b56b9ed31c0c0abf90b1ad906eb\"" Mar 25 01:39:22.277366 containerd[1912]: time="2025-03-25T01:39:22.277270543Z" level=info msg="connecting to shim 887a17df0fe75c724f661cef348ea0c58ded2b56b9ed31c0c0abf90b1ad906eb" address="unix:///run/containerd/s/314a7332be0a8bfed01e7b8c46dd268a56a11a2ebe64392833af7ae01f720d2a" protocol=ttrpc version=3 Mar 25 01:39:22.365902 systemd[1]: Started cri-containerd-887a17df0fe75c724f661cef348ea0c58ded2b56b9ed31c0c0abf90b1ad906eb.scope - libcontainer container 887a17df0fe75c724f661cef348ea0c58ded2b56b9ed31c0c0abf90b1ad906eb. 
Mar 25 01:39:22.547681 containerd[1912]: time="2025-03-25T01:39:22.547636725Z" level=info msg="StartContainer for \"887a17df0fe75c724f661cef348ea0c58ded2b56b9ed31c0c0abf90b1ad906eb\" returns successfully" Mar 25 01:39:23.147051 containerd[1912]: time="2025-03-25T01:39:23.146812529Z" level=info msg="TaskExit event in podsandbox handler container_id:\"887a17df0fe75c724f661cef348ea0c58ded2b56b9ed31c0c0abf90b1ad906eb\" id:\"d57337253c521346bcc675bfd6fd1690b847c2f48595cf933c176a6f05d8b604\" pid:4221 exited_at:{seconds:1742866763 nanos:143884863}" Mar 25 01:39:23.253762 kubelet[3296]: I0325 01:39:23.253649 3296 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Mar 25 01:39:23.490845 kubelet[3296]: I0325 01:39:23.490627 3296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-svsww" podStartSLOduration=6.402358262 podStartE2EDuration="17.466511919s" podCreationTimestamp="2025-03-25 01:39:06 +0000 UTC" firstStartedPulling="2025-03-25 01:39:06.836684357 +0000 UTC m=+14.225328179" lastFinishedPulling="2025-03-25 01:39:17.90083802 +0000 UTC m=+25.289481836" observedRunningTime="2025-03-25 01:39:23.456026775 +0000 UTC m=+30.844670622" watchObservedRunningTime="2025-03-25 01:39:23.466511919 +0000 UTC m=+30.855155747" Mar 25 01:39:23.491043 kubelet[3296]: I0325 01:39:23.490991 3296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-bf4cb" podStartSLOduration=3.03816058 podStartE2EDuration="17.490976461s" podCreationTimestamp="2025-03-25 01:39:06 +0000 UTC" firstStartedPulling="2025-03-25 01:39:07.139618831 +0000 UTC m=+14.528262652" lastFinishedPulling="2025-03-25 01:39:21.59243471 +0000 UTC m=+28.981078533" observedRunningTime="2025-03-25 01:39:22.307991167 +0000 UTC m=+29.696635003" watchObservedRunningTime="2025-03-25 01:39:23.490976461 +0000 UTC m=+30.879620295" Mar 25 01:39:23.493461 kubelet[3296]: I0325 01:39:23.493405 3296 
topology_manager.go:215] "Topology Admit Handler" podUID="c23498bd-92c5-4831-b3ae-8f31f900df85" podNamespace="kube-system" podName="coredns-7db6d8ff4d-qgnsj" Mar 25 01:39:23.494058 kubelet[3296]: I0325 01:39:23.494024 3296 topology_manager.go:215] "Topology Admit Handler" podUID="6903460f-d661-410b-9aa8-3915cfdb4fcf" podNamespace="kube-system" podName="coredns-7db6d8ff4d-8ztq5" Mar 25 01:39:23.528859 systemd[1]: Created slice kubepods-burstable-podc23498bd_92c5_4831_b3ae_8f31f900df85.slice - libcontainer container kubepods-burstable-podc23498bd_92c5_4831_b3ae_8f31f900df85.slice. Mar 25 01:39:23.565091 systemd[1]: Created slice kubepods-burstable-pod6903460f_d661_410b_9aa8_3915cfdb4fcf.slice - libcontainer container kubepods-burstable-pod6903460f_d661_410b_9aa8_3915cfdb4fcf.slice. Mar 25 01:39:23.627541 kubelet[3296]: I0325 01:39:23.627502 3296 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s58k8\" (UniqueName: \"kubernetes.io/projected/c23498bd-92c5-4831-b3ae-8f31f900df85-kube-api-access-s58k8\") pod \"coredns-7db6d8ff4d-qgnsj\" (UID: \"c23498bd-92c5-4831-b3ae-8f31f900df85\") " pod="kube-system/coredns-7db6d8ff4d-qgnsj" Mar 25 01:39:23.627541 kubelet[3296]: I0325 01:39:23.627547 3296 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6903460f-d661-410b-9aa8-3915cfdb4fcf-config-volume\") pod \"coredns-7db6d8ff4d-8ztq5\" (UID: \"6903460f-d661-410b-9aa8-3915cfdb4fcf\") " pod="kube-system/coredns-7db6d8ff4d-8ztq5" Mar 25 01:39:23.627541 kubelet[3296]: I0325 01:39:23.627574 3296 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss6hp\" (UniqueName: \"kubernetes.io/projected/6903460f-d661-410b-9aa8-3915cfdb4fcf-kube-api-access-ss6hp\") pod \"coredns-7db6d8ff4d-8ztq5\" (UID: \"6903460f-d661-410b-9aa8-3915cfdb4fcf\") " 
pod="kube-system/coredns-7db6d8ff4d-8ztq5" Mar 25 01:39:23.627863 kubelet[3296]: I0325 01:39:23.627602 3296 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c23498bd-92c5-4831-b3ae-8f31f900df85-config-volume\") pod \"coredns-7db6d8ff4d-qgnsj\" (UID: \"c23498bd-92c5-4831-b3ae-8f31f900df85\") " pod="kube-system/coredns-7db6d8ff4d-qgnsj" Mar 25 01:39:23.841492 containerd[1912]: time="2025-03-25T01:39:23.841148542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qgnsj,Uid:c23498bd-92c5-4831-b3ae-8f31f900df85,Namespace:kube-system,Attempt:0,}" Mar 25 01:39:23.888342 containerd[1912]: time="2025-03-25T01:39:23.887257121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8ztq5,Uid:6903460f-d661-410b-9aa8-3915cfdb4fcf,Namespace:kube-system,Attempt:0,}" Mar 25 01:39:26.454702 systemd-networkd[1756]: cilium_host: Link UP Mar 25 01:39:26.457284 systemd-networkd[1756]: cilium_net: Link UP Mar 25 01:39:26.461514 systemd-networkd[1756]: cilium_net: Gained carrier Mar 25 01:39:26.461810 systemd-networkd[1756]: cilium_host: Gained carrier Mar 25 01:39:26.468503 (udev-worker)[4287]: Network interface NamePolicy= disabled on kernel command line. Mar 25 01:39:26.469593 (udev-worker)[4285]: Network interface NamePolicy= disabled on kernel command line. Mar 25 01:39:26.716596 systemd-networkd[1756]: cilium_host: Gained IPv6LL Mar 25 01:39:26.947869 systemd-networkd[1756]: cilium_vxlan: Link UP Mar 25 01:39:26.947877 systemd-networkd[1756]: cilium_vxlan: Gained carrier Mar 25 01:39:27.389052 systemd[1]: Started sshd@7-172.31.29.210:22-147.75.109.163:56544.service - OpenSSH per-connection server daemon (147.75.109.163:56544). 
Mar 25 01:39:27.397707 systemd-networkd[1756]: cilium_net: Gained IPv6LL Mar 25 01:39:27.658625 sshd[4405]: Accepted publickey for core from 147.75.109.163 port 56544 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc Mar 25 01:39:27.663890 sshd-session[4405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:39:27.689428 systemd-logind[1895]: New session 8 of user core. Mar 25 01:39:27.693977 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 25 01:39:27.765367 kernel: NET: Registered PF_ALG protocol family Mar 25 01:39:28.228488 systemd-networkd[1756]: cilium_vxlan: Gained IPv6LL Mar 25 01:39:28.827896 sshd[4417]: Connection closed by 147.75.109.163 port 56544 Mar 25 01:39:28.828821 sshd-session[4405]: pam_unix(sshd:session): session closed for user core Mar 25 01:39:28.835199 systemd[1]: sshd@7-172.31.29.210:22-147.75.109.163:56544.service: Deactivated successfully. Mar 25 01:39:28.840198 systemd[1]: session-8.scope: Deactivated successfully. Mar 25 01:39:28.843061 systemd-logind[1895]: Session 8 logged out. Waiting for processes to exit. Mar 25 01:39:28.844869 systemd-logind[1895]: Removed session 8. Mar 25 01:39:29.156579 systemd-networkd[1756]: lxc_health: Link UP Mar 25 01:39:29.168568 systemd-networkd[1756]: lxc_health: Gained carrier Mar 25 01:39:29.556481 systemd-networkd[1756]: lxcdda103476830: Link UP Mar 25 01:39:29.581439 kernel: eth0: renamed from tmpe75c0 Mar 25 01:39:29.581090 systemd-networkd[1756]: lxca67ceba3d113: Link UP Mar 25 01:39:29.591467 kernel: eth0: renamed from tmp77e48 Mar 25 01:39:29.601624 (udev-worker)[4330]: Network interface NamePolicy= disabled on kernel command line. 
Mar 25 01:39:29.603702 systemd-networkd[1756]: lxcdda103476830: Gained carrier Mar 25 01:39:29.605530 systemd-networkd[1756]: lxca67ceba3d113: Gained carrier Mar 25 01:39:31.109522 systemd-networkd[1756]: lxc_health: Gained IPv6LL Mar 25 01:39:31.428646 systemd-networkd[1756]: lxcdda103476830: Gained IPv6LL Mar 25 01:39:31.492550 systemd-networkd[1756]: lxca67ceba3d113: Gained IPv6LL Mar 25 01:39:33.882498 systemd[1]: Started sshd@8-172.31.29.210:22-147.75.109.163:44504.service - OpenSSH per-connection server daemon (147.75.109.163:44504). Mar 25 01:39:34.232370 sshd[4702]: Accepted publickey for core from 147.75.109.163 port 44504 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc Mar 25 01:39:34.235349 sshd-session[4702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:39:34.246597 systemd-logind[1895]: New session 9 of user core. Mar 25 01:39:34.255565 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 25 01:39:34.468525 ntpd[1888]: Listen normally on 7 cilium_host 192.168.0.245:123 Mar 25 01:39:34.473152 ntpd[1888]: 25 Mar 01:39:34 ntpd[1888]: Listen normally on 7 cilium_host 192.168.0.245:123 Mar 25 01:39:34.473152 ntpd[1888]: 25 Mar 01:39:34 ntpd[1888]: Listen normally on 8 cilium_net [fe80::e8c0:cdff:fe7e:62c2%4]:123 Mar 25 01:39:34.473152 ntpd[1888]: 25 Mar 01:39:34 ntpd[1888]: Listen normally on 9 cilium_host [fe80::846f:4ff:fe76:6400%5]:123 Mar 25 01:39:34.473152 ntpd[1888]: 25 Mar 01:39:34 ntpd[1888]: Listen normally on 10 cilium_vxlan [fe80::38e7:d0ff:fefc:40e%6]:123 Mar 25 01:39:34.473152 ntpd[1888]: 25 Mar 01:39:34 ntpd[1888]: Listen normally on 11 lxc_health [fe80::f031:c6ff:fe60:75e7%8]:123 Mar 25 01:39:34.473152 ntpd[1888]: 25 Mar 01:39:34 ntpd[1888]: Listen normally on 12 lxcdda103476830 [fe80::a8f6:bbff:fef7:15a4%10]:123 Mar 25 01:39:34.473152 ntpd[1888]: 25 Mar 01:39:34 ntpd[1888]: Listen normally on 13 lxca67ceba3d113 [fe80::38a8:47ff:fec3:4975%12]:123 Mar 25 01:39:34.468833 
ntpd[1888]: Listen normally on 8 cilium_net [fe80::e8c0:cdff:fe7e:62c2%4]:123 Mar 25 01:39:34.468958 ntpd[1888]: Listen normally on 9 cilium_host [fe80::846f:4ff:fe76:6400%5]:123 Mar 25 01:39:34.469004 ntpd[1888]: Listen normally on 10 cilium_vxlan [fe80::38e7:d0ff:fefc:40e%6]:123 Mar 25 01:39:34.469043 ntpd[1888]: Listen normally on 11 lxc_health [fe80::f031:c6ff:fe60:75e7%8]:123 Mar 25 01:39:34.469083 ntpd[1888]: Listen normally on 12 lxcdda103476830 [fe80::a8f6:bbff:fef7:15a4%10]:123 Mar 25 01:39:34.472570 ntpd[1888]: Listen normally on 13 lxca67ceba3d113 [fe80::38a8:47ff:fec3:4975%12]:123 Mar 25 01:39:34.847289 sshd[4704]: Connection closed by 147.75.109.163 port 44504 Mar 25 01:39:34.852553 sshd-session[4702]: pam_unix(sshd:session): session closed for user core Mar 25 01:39:34.858005 systemd[1]: sshd@8-172.31.29.210:22-147.75.109.163:44504.service: Deactivated successfully. Mar 25 01:39:34.858577 systemd-logind[1895]: Session 9 logged out. Waiting for processes to exit. Mar 25 01:39:34.865122 systemd[1]: session-9.scope: Deactivated successfully. Mar 25 01:39:34.872951 systemd-logind[1895]: Removed session 9. Mar 25 01:39:35.160437 containerd[1912]: time="2025-03-25T01:39:35.159129181Z" level=info msg="connecting to shim e75c0c173df01fd143975bd34c75c0fd6e4f7b75a3a8877d96efbf4aa8bff8a6" address="unix:///run/containerd/s/7964cc6ffce87edd9395c52fa05742c21d49df8daefd472b781c400fe734596c" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:39:35.219780 systemd[1]: Started cri-containerd-e75c0c173df01fd143975bd34c75c0fd6e4f7b75a3a8877d96efbf4aa8bff8a6.scope - libcontainer container e75c0c173df01fd143975bd34c75c0fd6e4f7b75a3a8877d96efbf4aa8bff8a6. 
Mar 25 01:39:35.291777 containerd[1912]: time="2025-03-25T01:39:35.291716521Z" level=info msg="connecting to shim 77e48c0f383302f8eeac1d1539aae524856c0b44790ffb0bd625204a6dc8c7d9" address="unix:///run/containerd/s/d8786a4c3490df975ec96acbdd49c79af71d22d6c8d1d855995e025e537f2b2f" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:39:35.354540 systemd[1]: Started cri-containerd-77e48c0f383302f8eeac1d1539aae524856c0b44790ffb0bd625204a6dc8c7d9.scope - libcontainer container 77e48c0f383302f8eeac1d1539aae524856c0b44790ffb0bd625204a6dc8c7d9. Mar 25 01:39:35.376834 containerd[1912]: time="2025-03-25T01:39:35.376746162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qgnsj,Uid:c23498bd-92c5-4831-b3ae-8f31f900df85,Namespace:kube-system,Attempt:0,} returns sandbox id \"e75c0c173df01fd143975bd34c75c0fd6e4f7b75a3a8877d96efbf4aa8bff8a6\"" Mar 25 01:39:35.414700 containerd[1912]: time="2025-03-25T01:39:35.414559172Z" level=info msg="CreateContainer within sandbox \"e75c0c173df01fd143975bd34c75c0fd6e4f7b75a3a8877d96efbf4aa8bff8a6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 25 01:39:35.445504 containerd[1912]: time="2025-03-25T01:39:35.444729697Z" level=info msg="Container 929eb296c4b9f191745e3a2d9098bafb757a19a9a0df1c078019007006af91ae: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:39:35.475586 containerd[1912]: time="2025-03-25T01:39:35.475219794Z" level=info msg="CreateContainer within sandbox \"e75c0c173df01fd143975bd34c75c0fd6e4f7b75a3a8877d96efbf4aa8bff8a6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"929eb296c4b9f191745e3a2d9098bafb757a19a9a0df1c078019007006af91ae\"" Mar 25 01:39:35.478969 containerd[1912]: time="2025-03-25T01:39:35.478933152Z" level=info msg="StartContainer for \"929eb296c4b9f191745e3a2d9098bafb757a19a9a0df1c078019007006af91ae\"" Mar 25 01:39:35.480361 containerd[1912]: time="2025-03-25T01:39:35.480235273Z" level=info msg="connecting to shim 
929eb296c4b9f191745e3a2d9098bafb757a19a9a0df1c078019007006af91ae" address="unix:///run/containerd/s/7964cc6ffce87edd9395c52fa05742c21d49df8daefd472b781c400fe734596c" protocol=ttrpc version=3 Mar 25 01:39:35.486403 containerd[1912]: time="2025-03-25T01:39:35.485345877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8ztq5,Uid:6903460f-d661-410b-9aa8-3915cfdb4fcf,Namespace:kube-system,Attempt:0,} returns sandbox id \"77e48c0f383302f8eeac1d1539aae524856c0b44790ffb0bd625204a6dc8c7d9\"" Mar 25 01:39:35.493948 containerd[1912]: time="2025-03-25T01:39:35.493896129Z" level=info msg="CreateContainer within sandbox \"77e48c0f383302f8eeac1d1539aae524856c0b44790ffb0bd625204a6dc8c7d9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 25 01:39:35.518953 systemd[1]: Started cri-containerd-929eb296c4b9f191745e3a2d9098bafb757a19a9a0df1c078019007006af91ae.scope - libcontainer container 929eb296c4b9f191745e3a2d9098bafb757a19a9a0df1c078019007006af91ae. Mar 25 01:39:35.555536 containerd[1912]: time="2025-03-25T01:39:35.554800636Z" level=info msg="Container 65ee519b0ef0cfdeef0dfbdd35cf63a1aa02d906e36c426b2fb5931115fa202b: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:39:35.571242 containerd[1912]: time="2025-03-25T01:39:35.571210010Z" level=info msg="CreateContainer within sandbox \"77e48c0f383302f8eeac1d1539aae524856c0b44790ffb0bd625204a6dc8c7d9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"65ee519b0ef0cfdeef0dfbdd35cf63a1aa02d906e36c426b2fb5931115fa202b\"" Mar 25 01:39:35.577900 containerd[1912]: time="2025-03-25T01:39:35.576912464Z" level=info msg="StartContainer for \"65ee519b0ef0cfdeef0dfbdd35cf63a1aa02d906e36c426b2fb5931115fa202b\"" Mar 25 01:39:35.581120 containerd[1912]: time="2025-03-25T01:39:35.579945559Z" level=info msg="connecting to shim 65ee519b0ef0cfdeef0dfbdd35cf63a1aa02d906e36c426b2fb5931115fa202b" 
address="unix:///run/containerd/s/d8786a4c3490df975ec96acbdd49c79af71d22d6c8d1d855995e025e537f2b2f" protocol=ttrpc version=3 Mar 25 01:39:35.612086 containerd[1912]: time="2025-03-25T01:39:35.612055327Z" level=info msg="StartContainer for \"929eb296c4b9f191745e3a2d9098bafb757a19a9a0df1c078019007006af91ae\" returns successfully" Mar 25 01:39:35.613888 systemd[1]: Started cri-containerd-65ee519b0ef0cfdeef0dfbdd35cf63a1aa02d906e36c426b2fb5931115fa202b.scope - libcontainer container 65ee519b0ef0cfdeef0dfbdd35cf63a1aa02d906e36c426b2fb5931115fa202b. Mar 25 01:39:35.686206 containerd[1912]: time="2025-03-25T01:39:35.684793493Z" level=info msg="StartContainer for \"65ee519b0ef0cfdeef0dfbdd35cf63a1aa02d906e36c426b2fb5931115fa202b\" returns successfully" Mar 25 01:39:36.124006 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount843034770.mount: Deactivated successfully. Mar 25 01:39:36.404759 kubelet[3296]: I0325 01:39:36.404470 3296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-qgnsj" podStartSLOduration=30.404391134 podStartE2EDuration="30.404391134s" podCreationTimestamp="2025-03-25 01:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:39:36.378746103 +0000 UTC m=+43.767389951" watchObservedRunningTime="2025-03-25 01:39:36.404391134 +0000 UTC m=+43.793034970" Mar 25 01:39:36.404759 kubelet[3296]: I0325 01:39:36.404738 3296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-8ztq5" podStartSLOduration=30.404724903 podStartE2EDuration="30.404724903s" podCreationTimestamp="2025-03-25 01:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:39:36.401242316 +0000 UTC m=+43.789886151" watchObservedRunningTime="2025-03-25 01:39:36.404724903 +0000 UTC 
m=+43.793368737" Mar 25 01:39:39.881832 systemd[1]: Started sshd@9-172.31.29.210:22-147.75.109.163:44516.service - OpenSSH per-connection server daemon (147.75.109.163:44516). Mar 25 01:39:40.115967 sshd[4889]: Accepted publickey for core from 147.75.109.163 port 44516 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc Mar 25 01:39:40.118659 sshd-session[4889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:39:40.152699 systemd-logind[1895]: New session 10 of user core. Mar 25 01:39:40.163553 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 25 01:39:40.450607 sshd[4895]: Connection closed by 147.75.109.163 port 44516 Mar 25 01:39:40.452132 sshd-session[4889]: pam_unix(sshd:session): session closed for user core Mar 25 01:39:40.456303 systemd-logind[1895]: Session 10 logged out. Waiting for processes to exit. Mar 25 01:39:40.457397 systemd[1]: sshd@9-172.31.29.210:22-147.75.109.163:44516.service: Deactivated successfully. Mar 25 01:39:40.459915 systemd[1]: session-10.scope: Deactivated successfully. Mar 25 01:39:40.461277 systemd-logind[1895]: Removed session 10. Mar 25 01:39:45.515351 systemd[1]: Started sshd@10-172.31.29.210:22-147.75.109.163:47290.service - OpenSSH per-connection server daemon (147.75.109.163:47290). Mar 25 01:39:45.776787 sshd[4911]: Accepted publickey for core from 147.75.109.163 port 47290 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc Mar 25 01:39:45.780617 sshd-session[4911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:39:45.818422 systemd-logind[1895]: New session 11 of user core. Mar 25 01:39:45.825627 systemd[1]: Started session-11.scope - Session 11 of User core. 
Mar 25 01:39:46.210402 sshd[4913]: Connection closed by 147.75.109.163 port 47290 Mar 25 01:39:46.211044 sshd-session[4911]: pam_unix(sshd:session): session closed for user core Mar 25 01:39:46.226504 systemd[1]: sshd@10-172.31.29.210:22-147.75.109.163:47290.service: Deactivated successfully. Mar 25 01:39:46.240560 systemd[1]: session-11.scope: Deactivated successfully. Mar 25 01:39:46.253405 systemd-logind[1895]: Session 11 logged out. Waiting for processes to exit. Mar 25 01:39:46.279030 systemd[1]: Started sshd@11-172.31.29.210:22-147.75.109.163:47296.service - OpenSSH per-connection server daemon (147.75.109.163:47296). Mar 25 01:39:46.280505 systemd-logind[1895]: Removed session 11. Mar 25 01:39:46.491651 sshd[4924]: Accepted publickey for core from 147.75.109.163 port 47296 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc Mar 25 01:39:46.492303 sshd-session[4924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:39:46.503397 systemd-logind[1895]: New session 12 of user core. Mar 25 01:39:46.516597 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 25 01:39:46.992560 sshd[4927]: Connection closed by 147.75.109.163 port 47296 Mar 25 01:39:46.993109 sshd-session[4924]: pam_unix(sshd:session): session closed for user core Mar 25 01:39:47.004512 systemd-logind[1895]: Session 12 logged out. Waiting for processes to exit. Mar 25 01:39:47.006289 systemd[1]: sshd@11-172.31.29.210:22-147.75.109.163:47296.service: Deactivated successfully. Mar 25 01:39:47.016436 systemd[1]: session-12.scope: Deactivated successfully. Mar 25 01:39:47.028724 systemd-logind[1895]: Removed session 12. Mar 25 01:39:47.035666 systemd[1]: Started sshd@12-172.31.29.210:22-147.75.109.163:47310.service - OpenSSH per-connection server daemon (147.75.109.163:47310). 
Mar 25 01:39:47.247698 sshd[4936]: Accepted publickey for core from 147.75.109.163 port 47310 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc
Mar 25 01:39:47.249013 sshd-session[4936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:39:47.258705 systemd-logind[1895]: New session 13 of user core.
Mar 25 01:39:47.264879 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 25 01:39:47.529734 sshd[4939]: Connection closed by 147.75.109.163 port 47310
Mar 25 01:39:47.536123 systemd-logind[1895]: Session 13 logged out. Waiting for processes to exit.
Mar 25 01:39:47.531711 sshd-session[4936]: pam_unix(sshd:session): session closed for user core
Mar 25 01:39:47.537270 systemd[1]: sshd@12-172.31.29.210:22-147.75.109.163:47310.service: Deactivated successfully.
Mar 25 01:39:47.540825 systemd[1]: session-13.scope: Deactivated successfully.
Mar 25 01:39:47.542213 systemd-logind[1895]: Removed session 13.
Mar 25 01:39:52.571037 systemd[1]: Started sshd@13-172.31.29.210:22-147.75.109.163:50516.service - OpenSSH per-connection server daemon (147.75.109.163:50516).
Mar 25 01:39:52.768445 sshd[4952]: Accepted publickey for core from 147.75.109.163 port 50516 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc
Mar 25 01:39:52.772200 sshd-session[4952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:39:52.789506 systemd-logind[1895]: New session 14 of user core.
Mar 25 01:39:52.794868 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 25 01:39:53.040722 sshd[4954]: Connection closed by 147.75.109.163 port 50516
Mar 25 01:39:53.042553 sshd-session[4952]: pam_unix(sshd:session): session closed for user core
Mar 25 01:39:53.048488 systemd[1]: sshd@13-172.31.29.210:22-147.75.109.163:50516.service: Deactivated successfully.
Mar 25 01:39:53.052153 systemd[1]: session-14.scope: Deactivated successfully.
Mar 25 01:39:53.053612 systemd-logind[1895]: Session 14 logged out. Waiting for processes to exit.
Mar 25 01:39:53.055204 systemd-logind[1895]: Removed session 14.
Mar 25 01:39:58.074127 systemd[1]: Started sshd@14-172.31.29.210:22-147.75.109.163:50524.service - OpenSSH per-connection server daemon (147.75.109.163:50524).
Mar 25 01:39:58.330745 sshd[4967]: Accepted publickey for core from 147.75.109.163 port 50524 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc
Mar 25 01:39:58.338243 sshd-session[4967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:39:58.365620 systemd-logind[1895]: New session 15 of user core.
Mar 25 01:39:58.376789 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 25 01:39:58.600243 sshd[4969]: Connection closed by 147.75.109.163 port 50524
Mar 25 01:39:58.602645 sshd-session[4967]: pam_unix(sshd:session): session closed for user core
Mar 25 01:39:58.607408 systemd-logind[1895]: Session 15 logged out. Waiting for processes to exit.
Mar 25 01:39:58.609256 systemd[1]: sshd@14-172.31.29.210:22-147.75.109.163:50524.service: Deactivated successfully.
Mar 25 01:39:58.612642 systemd[1]: session-15.scope: Deactivated successfully.
Mar 25 01:39:58.614817 systemd-logind[1895]: Removed session 15.
Mar 25 01:39:58.636035 systemd[1]: Started sshd@15-172.31.29.210:22-147.75.109.163:50534.service - OpenSSH per-connection server daemon (147.75.109.163:50534).
Mar 25 01:39:58.829005 sshd[4980]: Accepted publickey for core from 147.75.109.163 port 50534 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc
Mar 25 01:39:58.829787 sshd-session[4980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:39:58.835418 systemd-logind[1895]: New session 16 of user core.
Mar 25 01:39:58.839581 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 25 01:40:02.426543 sshd[4982]: Connection closed by 147.75.109.163 port 50534
Mar 25 01:40:02.428094 sshd-session[4980]: pam_unix(sshd:session): session closed for user core
Mar 25 01:40:02.439982 systemd[1]: sshd@15-172.31.29.210:22-147.75.109.163:50534.service: Deactivated successfully.
Mar 25 01:40:02.442268 systemd[1]: session-16.scope: Deactivated successfully.
Mar 25 01:40:02.443955 systemd-logind[1895]: Session 16 logged out. Waiting for processes to exit.
Mar 25 01:40:02.445997 systemd-logind[1895]: Removed session 16.
Mar 25 01:40:02.458784 systemd[1]: Started sshd@16-172.31.29.210:22-147.75.109.163:42580.service - OpenSSH per-connection server daemon (147.75.109.163:42580).
Mar 25 01:40:02.649579 sshd[4992]: Accepted publickey for core from 147.75.109.163 port 42580 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc
Mar 25 01:40:02.651448 sshd-session[4992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:40:02.666577 systemd-logind[1895]: New session 17 of user core.
Mar 25 01:40:02.676569 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 25 01:40:05.129773 sshd[4994]: Connection closed by 147.75.109.163 port 42580
Mar 25 01:40:05.131034 sshd-session[4992]: pam_unix(sshd:session): session closed for user core
Mar 25 01:40:05.139859 systemd-logind[1895]: Session 17 logged out. Waiting for processes to exit.
Mar 25 01:40:05.140707 systemd[1]: sshd@16-172.31.29.210:22-147.75.109.163:42580.service: Deactivated successfully.
Mar 25 01:40:05.148058 systemd[1]: session-17.scope: Deactivated successfully.
Mar 25 01:40:05.167730 systemd-logind[1895]: Removed session 17.
Mar 25 01:40:05.170217 systemd[1]: Started sshd@17-172.31.29.210:22-147.75.109.163:42586.service - OpenSSH per-connection server daemon (147.75.109.163:42586).
Mar 25 01:40:05.366418 sshd[5010]: Accepted publickey for core from 147.75.109.163 port 42586 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc
Mar 25 01:40:05.367920 sshd-session[5010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:40:05.377237 systemd-logind[1895]: New session 18 of user core.
Mar 25 01:40:05.379540 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 25 01:40:06.035341 sshd[5013]: Connection closed by 147.75.109.163 port 42586
Mar 25 01:40:06.036552 sshd-session[5010]: pam_unix(sshd:session): session closed for user core
Mar 25 01:40:06.039973 systemd[1]: sshd@17-172.31.29.210:22-147.75.109.163:42586.service: Deactivated successfully.
Mar 25 01:40:06.042653 systemd[1]: session-18.scope: Deactivated successfully.
Mar 25 01:40:06.044968 systemd-logind[1895]: Session 18 logged out. Waiting for processes to exit.
Mar 25 01:40:06.047237 systemd-logind[1895]: Removed session 18.
Mar 25 01:40:06.066467 systemd[1]: Started sshd@18-172.31.29.210:22-147.75.109.163:42596.service - OpenSSH per-connection server daemon (147.75.109.163:42596).
Mar 25 01:40:06.245019 sshd[5024]: Accepted publickey for core from 147.75.109.163 port 42596 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc
Mar 25 01:40:06.246784 sshd-session[5024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:40:06.257412 systemd-logind[1895]: New session 19 of user core.
Mar 25 01:40:06.266569 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 25 01:40:06.510834 sshd[5026]: Connection closed by 147.75.109.163 port 42596
Mar 25 01:40:06.513664 sshd-session[5024]: pam_unix(sshd:session): session closed for user core
Mar 25 01:40:06.518803 systemd[1]: sshd@18-172.31.29.210:22-147.75.109.163:42596.service: Deactivated successfully.
Mar 25 01:40:06.522223 systemd[1]: session-19.scope: Deactivated successfully.
Mar 25 01:40:06.529393 systemd-logind[1895]: Session 19 logged out. Waiting for processes to exit.
Mar 25 01:40:06.531796 systemd-logind[1895]: Removed session 19.
Mar 25 01:40:11.552232 systemd[1]: Started sshd@19-172.31.29.210:22-147.75.109.163:38532.service - OpenSSH per-connection server daemon (147.75.109.163:38532).
Mar 25 01:40:11.778388 sshd[5043]: Accepted publickey for core from 147.75.109.163 port 38532 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc
Mar 25 01:40:11.779049 sshd-session[5043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:40:11.795668 systemd-logind[1895]: New session 20 of user core.
Mar 25 01:40:11.809913 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 25 01:40:12.069290 sshd[5045]: Connection closed by 147.75.109.163 port 38532
Mar 25 01:40:12.069729 sshd-session[5043]: pam_unix(sshd:session): session closed for user core
Mar 25 01:40:12.079175 systemd-logind[1895]: Session 20 logged out. Waiting for processes to exit.
Mar 25 01:40:12.079951 systemd[1]: sshd@19-172.31.29.210:22-147.75.109.163:38532.service: Deactivated successfully.
Mar 25 01:40:12.082257 systemd[1]: session-20.scope: Deactivated successfully.
Mar 25 01:40:12.084140 systemd-logind[1895]: Removed session 20.
Mar 25 01:40:17.105395 systemd[1]: Started sshd@20-172.31.29.210:22-147.75.109.163:38536.service - OpenSSH per-connection server daemon (147.75.109.163:38536).
Mar 25 01:40:17.280798 sshd[5057]: Accepted publickey for core from 147.75.109.163 port 38536 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc
Mar 25 01:40:17.285897 sshd-session[5057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:40:17.299549 systemd-logind[1895]: New session 21 of user core.
Mar 25 01:40:17.305558 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 25 01:40:17.502276 sshd[5059]: Connection closed by 147.75.109.163 port 38536
Mar 25 01:40:17.504072 sshd-session[5057]: pam_unix(sshd:session): session closed for user core
Mar 25 01:40:17.509177 systemd-logind[1895]: Session 21 logged out. Waiting for processes to exit.
Mar 25 01:40:17.510044 systemd[1]: sshd@20-172.31.29.210:22-147.75.109.163:38536.service: Deactivated successfully.
Mar 25 01:40:17.512709 systemd[1]: session-21.scope: Deactivated successfully.
Mar 25 01:40:17.514985 systemd-logind[1895]: Removed session 21.
Mar 25 01:40:22.532820 systemd[1]: Started sshd@21-172.31.29.210:22-147.75.109.163:35050.service - OpenSSH per-connection server daemon (147.75.109.163:35050).
Mar 25 01:40:22.738358 sshd[5071]: Accepted publickey for core from 147.75.109.163 port 35050 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc
Mar 25 01:40:22.742672 sshd-session[5071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:40:22.748461 systemd-logind[1895]: New session 22 of user core.
Mar 25 01:40:22.754535 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 25 01:40:23.197176 sshd[5073]: Connection closed by 147.75.109.163 port 35050
Mar 25 01:40:23.198806 sshd-session[5071]: pam_unix(sshd:session): session closed for user core
Mar 25 01:40:23.208419 systemd[1]: sshd@21-172.31.29.210:22-147.75.109.163:35050.service: Deactivated successfully.
Mar 25 01:40:23.212504 systemd[1]: session-22.scope: Deactivated successfully.
Mar 25 01:40:23.215210 systemd-logind[1895]: Session 22 logged out. Waiting for processes to exit.
Mar 25 01:40:23.218666 systemd-logind[1895]: Removed session 22.
Mar 25 01:40:28.236694 systemd[1]: Started sshd@22-172.31.29.210:22-147.75.109.163:35060.service - OpenSSH per-connection server daemon (147.75.109.163:35060).
Mar 25 01:40:28.436883 sshd[5085]: Accepted publickey for core from 147.75.109.163 port 35060 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc
Mar 25 01:40:28.438583 sshd-session[5085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:40:28.455219 systemd-logind[1895]: New session 23 of user core.
Mar 25 01:40:28.463773 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 25 01:40:28.804336 sshd[5087]: Connection closed by 147.75.109.163 port 35060
Mar 25 01:40:28.806923 sshd-session[5085]: pam_unix(sshd:session): session closed for user core
Mar 25 01:40:28.811132 systemd[1]: sshd@22-172.31.29.210:22-147.75.109.163:35060.service: Deactivated successfully.
Mar 25 01:40:28.813999 systemd[1]: session-23.scope: Deactivated successfully.
Mar 25 01:40:28.815205 systemd-logind[1895]: Session 23 logged out. Waiting for processes to exit.
Mar 25 01:40:28.816649 systemd-logind[1895]: Removed session 23.
Mar 25 01:40:28.836274 systemd[1]: Started sshd@23-172.31.29.210:22-147.75.109.163:35074.service - OpenSSH per-connection server daemon (147.75.109.163:35074).
Mar 25 01:40:29.037104 sshd[5099]: Accepted publickey for core from 147.75.109.163 port 35074 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc
Mar 25 01:40:29.039280 sshd-session[5099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:40:29.051126 systemd-logind[1895]: New session 24 of user core.
Mar 25 01:40:29.059609 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 25 01:40:30.738402 containerd[1912]: time="2025-03-25T01:40:30.738343771Z" level=info msg="StopContainer for \"2fd0c19f0b6a0b35dfbfb5fd962df3c0a952b29477ca63466c7bc8dfefe85b2b\" with timeout 30 (s)"
Mar 25 01:40:30.740022 containerd[1912]: time="2025-03-25T01:40:30.739587971Z" level=info msg="Stop container \"2fd0c19f0b6a0b35dfbfb5fd962df3c0a952b29477ca63466c7bc8dfefe85b2b\" with signal terminated"
Mar 25 01:40:30.780894 systemd[1]: cri-containerd-2fd0c19f0b6a0b35dfbfb5fd962df3c0a952b29477ca63466c7bc8dfefe85b2b.scope: Deactivated successfully.
Mar 25 01:40:30.786290 containerd[1912]: time="2025-03-25T01:40:30.785627391Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2fd0c19f0b6a0b35dfbfb5fd962df3c0a952b29477ca63466c7bc8dfefe85b2b\" id:\"2fd0c19f0b6a0b35dfbfb5fd962df3c0a952b29477ca63466c7bc8dfefe85b2b\" pid:4159 exited_at:{seconds:1742866830 nanos:785068976}"
Mar 25 01:40:30.786424 containerd[1912]: time="2025-03-25T01:40:30.786345180Z" level=info msg="received exit event container_id:\"2fd0c19f0b6a0b35dfbfb5fd962df3c0a952b29477ca63466c7bc8dfefe85b2b\" id:\"2fd0c19f0b6a0b35dfbfb5fd962df3c0a952b29477ca63466c7bc8dfefe85b2b\" pid:4159 exited_at:{seconds:1742866830 nanos:785068976}"
Mar 25 01:40:30.817217 containerd[1912]: time="2025-03-25T01:40:30.817113584Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 25 01:40:30.824662 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2fd0c19f0b6a0b35dfbfb5fd962df3c0a952b29477ca63466c7bc8dfefe85b2b-rootfs.mount: Deactivated successfully.
Mar 25 01:40:30.830448 containerd[1912]: time="2025-03-25T01:40:30.830278491Z" level=info msg="TaskExit event in podsandbox handler container_id:\"887a17df0fe75c724f661cef348ea0c58ded2b56b9ed31c0c0abf90b1ad906eb\" id:\"549cb7a161e83ae7a3b2683dc7a07c53cc27e0b6498621c6e1a98af8e7f4c039\" pid:5126 exited_at:{seconds:1742866830 nanos:829838690}"
Mar 25 01:40:30.832862 containerd[1912]: time="2025-03-25T01:40:30.832814490Z" level=info msg="StopContainer for \"887a17df0fe75c724f661cef348ea0c58ded2b56b9ed31c0c0abf90b1ad906eb\" with timeout 2 (s)"
Mar 25 01:40:30.835274 containerd[1912]: time="2025-03-25T01:40:30.833097722Z" level=info msg="Stop container \"887a17df0fe75c724f661cef348ea0c58ded2b56b9ed31c0c0abf90b1ad906eb\" with signal terminated"
Mar 25 01:40:30.848335 systemd-networkd[1756]: lxc_health: Link DOWN
Mar 25 01:40:30.848346 systemd-networkd[1756]: lxc_health: Lost carrier
Mar 25 01:40:30.852976 containerd[1912]: time="2025-03-25T01:40:30.852932047Z" level=info msg="StopContainer for \"2fd0c19f0b6a0b35dfbfb5fd962df3c0a952b29477ca63466c7bc8dfefe85b2b\" returns successfully"
Mar 25 01:40:30.856545 containerd[1912]: time="2025-03-25T01:40:30.856506853Z" level=info msg="StopPodSandbox for \"49d0a7b2e7885b72bac6b4fb6da123fdd0702ccb63d53d14408440ca6ae502be\""
Mar 25 01:40:30.875809 systemd[1]: cri-containerd-887a17df0fe75c724f661cef348ea0c58ded2b56b9ed31c0c0abf90b1ad906eb.scope: Deactivated successfully.
Mar 25 01:40:30.876246 systemd[1]: cri-containerd-887a17df0fe75c724f661cef348ea0c58ded2b56b9ed31c0c0abf90b1ad906eb.scope: Consumed 8.717s CPU time, 201M memory peak, 81.9M read from disk, 13.3M written to disk.
Mar 25 01:40:30.878864 containerd[1912]: time="2025-03-25T01:40:30.878757566Z" level=info msg="Container to stop \"2fd0c19f0b6a0b35dfbfb5fd962df3c0a952b29477ca63466c7bc8dfefe85b2b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 25 01:40:30.878864 containerd[1912]: time="2025-03-25T01:40:30.878828699Z" level=info msg="received exit event container_id:\"887a17df0fe75c724f661cef348ea0c58ded2b56b9ed31c0c0abf90b1ad906eb\" id:\"887a17df0fe75c724f661cef348ea0c58ded2b56b9ed31c0c0abf90b1ad906eb\" pid:4191 exited_at:{seconds:1742866830 nanos:878484578}"
Mar 25 01:40:30.879143 containerd[1912]: time="2025-03-25T01:40:30.879089159Z" level=info msg="TaskExit event in podsandbox handler container_id:\"887a17df0fe75c724f661cef348ea0c58ded2b56b9ed31c0c0abf90b1ad906eb\" id:\"887a17df0fe75c724f661cef348ea0c58ded2b56b9ed31c0c0abf90b1ad906eb\" pid:4191 exited_at:{seconds:1742866830 nanos:878484578}"
Mar 25 01:40:30.899640 systemd[1]: cri-containerd-49d0a7b2e7885b72bac6b4fb6da123fdd0702ccb63d53d14408440ca6ae502be.scope: Deactivated successfully.
Mar 25 01:40:30.902423 containerd[1912]: time="2025-03-25T01:40:30.902386632Z" level=info msg="TaskExit event in podsandbox handler container_id:\"49d0a7b2e7885b72bac6b4fb6da123fdd0702ccb63d53d14408440ca6ae502be\" id:\"49d0a7b2e7885b72bac6b4fb6da123fdd0702ccb63d53d14408440ca6ae502be\" pid:3786 exit_status:137 exited_at:{seconds:1742866830 nanos:899677073}"
Mar 25 01:40:30.932256 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-887a17df0fe75c724f661cef348ea0c58ded2b56b9ed31c0c0abf90b1ad906eb-rootfs.mount: Deactivated successfully.
Mar 25 01:40:30.957692 containerd[1912]: time="2025-03-25T01:40:30.957648343Z" level=info msg="StopContainer for \"887a17df0fe75c724f661cef348ea0c58ded2b56b9ed31c0c0abf90b1ad906eb\" returns successfully"
Mar 25 01:40:30.958256 containerd[1912]: time="2025-03-25T01:40:30.958228291Z" level=info msg="StopPodSandbox for \"a68f2ddbf64ef94e4890c7a558449609020a2421a2fb41c93766d6b2ac2bbc9a\""
Mar 25 01:40:30.958391 containerd[1912]: time="2025-03-25T01:40:30.958298784Z" level=info msg="Container to stop \"6a6f030a3e7a77010d0fb814ad210099975922e044f82e9926a4b92b7985b4b9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 25 01:40:30.958391 containerd[1912]: time="2025-03-25T01:40:30.958364966Z" level=info msg="Container to stop \"f0d491aeef59125cdaf991868ba527809dda993068f37757ca3f33e5e443914e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 25 01:40:30.958391 containerd[1912]: time="2025-03-25T01:40:30.958380653Z" level=info msg="Container to stop \"887a17df0fe75c724f661cef348ea0c58ded2b56b9ed31c0c0abf90b1ad906eb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 25 01:40:30.958525 containerd[1912]: time="2025-03-25T01:40:30.958394866Z" level=info msg="Container to stop \"99294bb160bf3665266ab3d458813f4ea76dfc4b13e4390170b7f16ae5cdadd9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 25 01:40:30.958525 containerd[1912]: time="2025-03-25T01:40:30.958408841Z" level=info msg="Container to stop \"e082796a408fa1321dc7ba9aa487a3ce059e2a07c532fb583153be427d5fad16\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 25 01:40:30.985046 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-49d0a7b2e7885b72bac6b4fb6da123fdd0702ccb63d53d14408440ca6ae502be-rootfs.mount: Deactivated successfully.
Mar 25 01:40:30.989677 containerd[1912]: time="2025-03-25T01:40:30.989469061Z" level=info msg="shim disconnected" id=49d0a7b2e7885b72bac6b4fb6da123fdd0702ccb63d53d14408440ca6ae502be namespace=k8s.io
Mar 25 01:40:30.989677 containerd[1912]: time="2025-03-25T01:40:30.989567752Z" level=warning msg="cleaning up after shim disconnected" id=49d0a7b2e7885b72bac6b4fb6da123fdd0702ccb63d53d14408440ca6ae502be namespace=k8s.io
Mar 25 01:40:30.990375 containerd[1912]: time="2025-03-25T01:40:30.989580254Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 25 01:40:30.993637 systemd[1]: cri-containerd-a68f2ddbf64ef94e4890c7a558449609020a2421a2fb41c93766d6b2ac2bbc9a.scope: Deactivated successfully.
Mar 25 01:40:31.046880 containerd[1912]: time="2025-03-25T01:40:31.044161101Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a68f2ddbf64ef94e4890c7a558449609020a2421a2fb41c93766d6b2ac2bbc9a\" id:\"a68f2ddbf64ef94e4890c7a558449609020a2421a2fb41c93766d6b2ac2bbc9a\" pid:3707 exit_status:137 exited_at:{seconds:1742866831 nanos:640698}"
Mar 25 01:40:31.049615 containerd[1912]: time="2025-03-25T01:40:31.049568153Z" level=info msg="TearDown network for sandbox \"49d0a7b2e7885b72bac6b4fb6da123fdd0702ccb63d53d14408440ca6ae502be\" successfully"
Mar 25 01:40:31.050395 containerd[1912]: time="2025-03-25T01:40:31.050364320Z" level=info msg="StopPodSandbox for \"49d0a7b2e7885b72bac6b4fb6da123fdd0702ccb63d53d14408440ca6ae502be\" returns successfully"
Mar 25 01:40:31.053366 containerd[1912]: time="2025-03-25T01:40:31.050735440Z" level=info msg="received exit event sandbox_id:\"49d0a7b2e7885b72bac6b4fb6da123fdd0702ccb63d53d14408440ca6ae502be\" exit_status:137 exited_at:{seconds:1742866830 nanos:899677073}"
Mar 25 01:40:31.053447 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-49d0a7b2e7885b72bac6b4fb6da123fdd0702ccb63d53d14408440ca6ae502be-shm.mount: Deactivated successfully.
Mar 25 01:40:31.062199 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a68f2ddbf64ef94e4890c7a558449609020a2421a2fb41c93766d6b2ac2bbc9a-rootfs.mount: Deactivated successfully.
Mar 25 01:40:31.087688 containerd[1912]: time="2025-03-25T01:40:31.085133118Z" level=info msg="shim disconnected" id=a68f2ddbf64ef94e4890c7a558449609020a2421a2fb41c93766d6b2ac2bbc9a namespace=k8s.io
Mar 25 01:40:31.087688 containerd[1912]: time="2025-03-25T01:40:31.086382408Z" level=warning msg="cleaning up after shim disconnected" id=a68f2ddbf64ef94e4890c7a558449609020a2421a2fb41c93766d6b2ac2bbc9a namespace=k8s.io
Mar 25 01:40:31.087688 containerd[1912]: time="2025-03-25T01:40:31.086395512Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 25 01:40:31.097478 containerd[1912]: time="2025-03-25T01:40:31.097342723Z" level=info msg="received exit event sandbox_id:\"a68f2ddbf64ef94e4890c7a558449609020a2421a2fb41c93766d6b2ac2bbc9a\" exit_status:137 exited_at:{seconds:1742866831 nanos:640698}"
Mar 25 01:40:31.098155 containerd[1912]: time="2025-03-25T01:40:31.097436506Z" level=info msg="TearDown network for sandbox \"a68f2ddbf64ef94e4890c7a558449609020a2421a2fb41c93766d6b2ac2bbc9a\" successfully"
Mar 25 01:40:31.098155 containerd[1912]: time="2025-03-25T01:40:31.098101973Z" level=info msg="StopPodSandbox for \"a68f2ddbf64ef94e4890c7a558449609020a2421a2fb41c93766d6b2ac2bbc9a\" returns successfully"
Mar 25 01:40:31.198981 kubelet[3296]: I0325 01:40:31.198936 3296 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e5aa7182-9f1f-45ee-9555-3680f7481b43-cilium-config-path\") pod \"e5aa7182-9f1f-45ee-9555-3680f7481b43\" (UID: \"e5aa7182-9f1f-45ee-9555-3680f7481b43\") "
Mar 25 01:40:31.198981 kubelet[3296]: I0325 01:40:31.198989 3296 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-656nr\" (UniqueName: \"kubernetes.io/projected/e5aa7182-9f1f-45ee-9555-3680f7481b43-kube-api-access-656nr\") pod \"e5aa7182-9f1f-45ee-9555-3680f7481b43\" (UID: \"e5aa7182-9f1f-45ee-9555-3680f7481b43\") "
Mar 25 01:40:31.212706 kubelet[3296]: I0325 01:40:31.210851 3296 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5aa7182-9f1f-45ee-9555-3680f7481b43-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e5aa7182-9f1f-45ee-9555-3680f7481b43" (UID: "e5aa7182-9f1f-45ee-9555-3680f7481b43"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 25 01:40:31.222670 kubelet[3296]: I0325 01:40:31.222503 3296 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5aa7182-9f1f-45ee-9555-3680f7481b43-kube-api-access-656nr" (OuterVolumeSpecName: "kube-api-access-656nr") pod "e5aa7182-9f1f-45ee-9555-3680f7481b43" (UID: "e5aa7182-9f1f-45ee-9555-3680f7481b43"). InnerVolumeSpecName "kube-api-access-656nr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 25 01:40:31.300410 kubelet[3296]: I0325 01:40:31.299878 3296 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/035a6b5c-d525-47f1-9bfb-266d722773ba-cilium-cgroup\") pod \"035a6b5c-d525-47f1-9bfb-266d722773ba\" (UID: \"035a6b5c-d525-47f1-9bfb-266d722773ba\") "
Mar 25 01:40:31.300410 kubelet[3296]: I0325 01:40:31.299925 3296 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/035a6b5c-d525-47f1-9bfb-266d722773ba-bpf-maps\") pod \"035a6b5c-d525-47f1-9bfb-266d722773ba\" (UID: \"035a6b5c-d525-47f1-9bfb-266d722773ba\") "
Mar 25 01:40:31.300410 kubelet[3296]: I0325 01:40:31.300005 3296 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/035a6b5c-d525-47f1-9bfb-266d722773ba-hubble-tls\") pod \"035a6b5c-d525-47f1-9bfb-266d722773ba\" (UID: \"035a6b5c-d525-47f1-9bfb-266d722773ba\") "
Mar 25 01:40:31.300410 kubelet[3296]: I0325 01:40:31.300036 3296 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/035a6b5c-d525-47f1-9bfb-266d722773ba-cilium-config-path\") pod \"035a6b5c-d525-47f1-9bfb-266d722773ba\" (UID: \"035a6b5c-d525-47f1-9bfb-266d722773ba\") "
Mar 25 01:40:31.300410 kubelet[3296]: I0325 01:40:31.300058 3296 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/035a6b5c-d525-47f1-9bfb-266d722773ba-lib-modules\") pod \"035a6b5c-d525-47f1-9bfb-266d722773ba\" (UID: \"035a6b5c-d525-47f1-9bfb-266d722773ba\") "
Mar 25 01:40:31.300410 kubelet[3296]: I0325 01:40:31.300083 3296 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/035a6b5c-d525-47f1-9bfb-266d722773ba-cni-path\") pod \"035a6b5c-d525-47f1-9bfb-266d722773ba\" (UID: \"035a6b5c-d525-47f1-9bfb-266d722773ba\") "
Mar 25 01:40:31.301741 kubelet[3296]: I0325 01:40:31.300179 3296 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/035a6b5c-d525-47f1-9bfb-266d722773ba-host-proc-sys-kernel\") pod \"035a6b5c-d525-47f1-9bfb-266d722773ba\" (UID: \"035a6b5c-d525-47f1-9bfb-266d722773ba\") "
Mar 25 01:40:31.301741 kubelet[3296]: I0325 01:40:31.300200 3296 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/035a6b5c-d525-47f1-9bfb-266d722773ba-cilium-run\") pod \"035a6b5c-d525-47f1-9bfb-266d722773ba\" (UID: \"035a6b5c-d525-47f1-9bfb-266d722773ba\") "
Mar 25 01:40:31.301741 kubelet[3296]: I0325 01:40:31.300222 3296 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/035a6b5c-d525-47f1-9bfb-266d722773ba-host-proc-sys-net\") pod \"035a6b5c-d525-47f1-9bfb-266d722773ba\" (UID: \"035a6b5c-d525-47f1-9bfb-266d722773ba\") "
Mar 25 01:40:31.301741 kubelet[3296]: I0325 01:40:31.300245 3296 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/035a6b5c-d525-47f1-9bfb-266d722773ba-etc-cni-netd\") pod \"035a6b5c-d525-47f1-9bfb-266d722773ba\" (UID: \"035a6b5c-d525-47f1-9bfb-266d722773ba\") "
Mar 25 01:40:31.301741 kubelet[3296]: I0325 01:40:31.300269 3296 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/035a6b5c-d525-47f1-9bfb-266d722773ba-xtables-lock\") pod \"035a6b5c-d525-47f1-9bfb-266d722773ba\" (UID: \"035a6b5c-d525-47f1-9bfb-266d722773ba\") "
Mar 25 01:40:31.301741 kubelet[3296]: I0325 01:40:31.300375 3296 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcs9k\" (UniqueName: \"kubernetes.io/projected/035a6b5c-d525-47f1-9bfb-266d722773ba-kube-api-access-wcs9k\") pod \"035a6b5c-d525-47f1-9bfb-266d722773ba\" (UID: \"035a6b5c-d525-47f1-9bfb-266d722773ba\") "
Mar 25 01:40:31.302094 kubelet[3296]: I0325 01:40:31.300404 3296 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/035a6b5c-d525-47f1-9bfb-266d722773ba-clustermesh-secrets\") pod \"035a6b5c-d525-47f1-9bfb-266d722773ba\" (UID: \"035a6b5c-d525-47f1-9bfb-266d722773ba\") "
Mar 25 01:40:31.302094 kubelet[3296]: I0325 01:40:31.300424 3296 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/035a6b5c-d525-47f1-9bfb-266d722773ba-hostproc\") pod \"035a6b5c-d525-47f1-9bfb-266d722773ba\" (UID: \"035a6b5c-d525-47f1-9bfb-266d722773ba\") "
Mar 25 01:40:31.302094 kubelet[3296]: I0325 01:40:31.300477 3296 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e5aa7182-9f1f-45ee-9555-3680f7481b43-cilium-config-path\") on node \"ip-172-31-29-210\" DevicePath \"\""
Mar 25 01:40:31.302094 kubelet[3296]: I0325 01:40:31.300491 3296 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-656nr\" (UniqueName: \"kubernetes.io/projected/e5aa7182-9f1f-45ee-9555-3680f7481b43-kube-api-access-656nr\") on node \"ip-172-31-29-210\" DevicePath \"\""
Mar 25 01:40:31.302094 kubelet[3296]: I0325 01:40:31.300055 3296 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/035a6b5c-d525-47f1-9bfb-266d722773ba-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "035a6b5c-d525-47f1-9bfb-266d722773ba" (UID: "035a6b5c-d525-47f1-9bfb-266d722773ba"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 25 01:40:31.302094 kubelet[3296]: I0325 01:40:31.300524 3296 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/035a6b5c-d525-47f1-9bfb-266d722773ba-hostproc" (OuterVolumeSpecName: "hostproc") pod "035a6b5c-d525-47f1-9bfb-266d722773ba" (UID: "035a6b5c-d525-47f1-9bfb-266d722773ba"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 25 01:40:31.302425 kubelet[3296]: I0325 01:40:31.301262 3296 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/035a6b5c-d525-47f1-9bfb-266d722773ba-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "035a6b5c-d525-47f1-9bfb-266d722773ba" (UID: "035a6b5c-d525-47f1-9bfb-266d722773ba"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 25 01:40:31.302425 kubelet[3296]: I0325 01:40:31.301422 3296 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/035a6b5c-d525-47f1-9bfb-266d722773ba-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "035a6b5c-d525-47f1-9bfb-266d722773ba" (UID: "035a6b5c-d525-47f1-9bfb-266d722773ba"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 25 01:40:31.302425 kubelet[3296]: I0325 01:40:31.301441 3296 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/035a6b5c-d525-47f1-9bfb-266d722773ba-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "035a6b5c-d525-47f1-9bfb-266d722773ba" (UID: "035a6b5c-d525-47f1-9bfb-266d722773ba"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 25 01:40:31.302425 kubelet[3296]: I0325 01:40:31.301658 3296 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/035a6b5c-d525-47f1-9bfb-266d722773ba-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "035a6b5c-d525-47f1-9bfb-266d722773ba" (UID: "035a6b5c-d525-47f1-9bfb-266d722773ba"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 25 01:40:31.302425 kubelet[3296]: I0325 01:40:31.301718 3296 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/035a6b5c-d525-47f1-9bfb-266d722773ba-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "035a6b5c-d525-47f1-9bfb-266d722773ba" (UID: "035a6b5c-d525-47f1-9bfb-266d722773ba"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 25 01:40:31.303015 kubelet[3296]: I0325 01:40:31.301737 3296 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/035a6b5c-d525-47f1-9bfb-266d722773ba-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "035a6b5c-d525-47f1-9bfb-266d722773ba" (UID: "035a6b5c-d525-47f1-9bfb-266d722773ba"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 25 01:40:31.303015 kubelet[3296]: I0325 01:40:31.301628 3296 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/035a6b5c-d525-47f1-9bfb-266d722773ba-cni-path" (OuterVolumeSpecName: "cni-path") pod "035a6b5c-d525-47f1-9bfb-266d722773ba" (UID: "035a6b5c-d525-47f1-9bfb-266d722773ba"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 25 01:40:31.303015 kubelet[3296]: I0325 01:40:31.302399 3296 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/035a6b5c-d525-47f1-9bfb-266d722773ba-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "035a6b5c-d525-47f1-9bfb-266d722773ba" (UID: "035a6b5c-d525-47f1-9bfb-266d722773ba"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 25 01:40:31.314066 kubelet[3296]: I0325 01:40:31.314021 3296 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/035a6b5c-d525-47f1-9bfb-266d722773ba-kube-api-access-wcs9k" (OuterVolumeSpecName: "kube-api-access-wcs9k") pod "035a6b5c-d525-47f1-9bfb-266d722773ba" (UID: "035a6b5c-d525-47f1-9bfb-266d722773ba"). InnerVolumeSpecName "kube-api-access-wcs9k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 25 01:40:31.315102 kubelet[3296]: I0325 01:40:31.314956 3296 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/035a6b5c-d525-47f1-9bfb-266d722773ba-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "035a6b5c-d525-47f1-9bfb-266d722773ba" (UID: "035a6b5c-d525-47f1-9bfb-266d722773ba"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 25 01:40:31.316280 kubelet[3296]: I0325 01:40:31.316250 3296 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/035a6b5c-d525-47f1-9bfb-266d722773ba-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "035a6b5c-d525-47f1-9bfb-266d722773ba" (UID: "035a6b5c-d525-47f1-9bfb-266d722773ba"). InnerVolumeSpecName "hubble-tls".
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 25 01:40:31.317036 kubelet[3296]: I0325 01:40:31.316998 3296 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/035a6b5c-d525-47f1-9bfb-266d722773ba-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "035a6b5c-d525-47f1-9bfb-266d722773ba" (UID: "035a6b5c-d525-47f1-9bfb-266d722773ba"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 25 01:40:31.401448 kubelet[3296]: I0325 01:40:31.401411 3296 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/035a6b5c-d525-47f1-9bfb-266d722773ba-cilium-config-path\") on node \"ip-172-31-29-210\" DevicePath \"\"" Mar 25 01:40:31.401448 kubelet[3296]: I0325 01:40:31.401448 3296 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/035a6b5c-d525-47f1-9bfb-266d722773ba-lib-modules\") on node \"ip-172-31-29-210\" DevicePath \"\"" Mar 25 01:40:31.401448 kubelet[3296]: I0325 01:40:31.401460 3296 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/035a6b5c-d525-47f1-9bfb-266d722773ba-cni-path\") on node \"ip-172-31-29-210\" DevicePath \"\"" Mar 25 01:40:31.401699 kubelet[3296]: I0325 01:40:31.401471 3296 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/035a6b5c-d525-47f1-9bfb-266d722773ba-host-proc-sys-net\") on node \"ip-172-31-29-210\" DevicePath \"\"" Mar 25 01:40:31.401699 kubelet[3296]: I0325 01:40:31.401481 3296 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/035a6b5c-d525-47f1-9bfb-266d722773ba-etc-cni-netd\") on node \"ip-172-31-29-210\" DevicePath \"\"" Mar 25 01:40:31.401699 kubelet[3296]: I0325 01:40:31.401492 3296 reconciler_common.go:289] "Volume detached for 
volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/035a6b5c-d525-47f1-9bfb-266d722773ba-host-proc-sys-kernel\") on node \"ip-172-31-29-210\" DevicePath \"\"" Mar 25 01:40:31.401699 kubelet[3296]: I0325 01:40:31.401503 3296 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/035a6b5c-d525-47f1-9bfb-266d722773ba-cilium-run\") on node \"ip-172-31-29-210\" DevicePath \"\"" Mar 25 01:40:31.401699 kubelet[3296]: I0325 01:40:31.401515 3296 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/035a6b5c-d525-47f1-9bfb-266d722773ba-xtables-lock\") on node \"ip-172-31-29-210\" DevicePath \"\"" Mar 25 01:40:31.401699 kubelet[3296]: I0325 01:40:31.401525 3296 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/035a6b5c-d525-47f1-9bfb-266d722773ba-hostproc\") on node \"ip-172-31-29-210\" DevicePath \"\"" Mar 25 01:40:31.401699 kubelet[3296]: I0325 01:40:31.401535 3296 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-wcs9k\" (UniqueName: \"kubernetes.io/projected/035a6b5c-d525-47f1-9bfb-266d722773ba-kube-api-access-wcs9k\") on node \"ip-172-31-29-210\" DevicePath \"\"" Mar 25 01:40:31.401699 kubelet[3296]: I0325 01:40:31.401547 3296 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/035a6b5c-d525-47f1-9bfb-266d722773ba-clustermesh-secrets\") on node \"ip-172-31-29-210\" DevicePath \"\"" Mar 25 01:40:31.402055 kubelet[3296]: I0325 01:40:31.401557 3296 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/035a6b5c-d525-47f1-9bfb-266d722773ba-bpf-maps\") on node \"ip-172-31-29-210\" DevicePath \"\"" Mar 25 01:40:31.402055 kubelet[3296]: I0325 01:40:31.401570 3296 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/035a6b5c-d525-47f1-9bfb-266d722773ba-hubble-tls\") on node \"ip-172-31-29-210\" DevicePath \"\"" Mar 25 01:40:31.402055 kubelet[3296]: I0325 01:40:31.401580 3296 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/035a6b5c-d525-47f1-9bfb-266d722773ba-cilium-cgroup\") on node \"ip-172-31-29-210\" DevicePath \"\"" Mar 25 01:40:31.572366 kubelet[3296]: I0325 01:40:31.571343 3296 scope.go:117] "RemoveContainer" containerID="887a17df0fe75c724f661cef348ea0c58ded2b56b9ed31c0c0abf90b1ad906eb" Mar 25 01:40:31.581501 systemd[1]: Removed slice kubepods-burstable-pod035a6b5c_d525_47f1_9bfb_266d722773ba.slice - libcontainer container kubepods-burstable-pod035a6b5c_d525_47f1_9bfb_266d722773ba.slice. Mar 25 01:40:31.581648 systemd[1]: kubepods-burstable-pod035a6b5c_d525_47f1_9bfb_266d722773ba.slice: Consumed 8.839s CPU time, 201.4M memory peak, 82M read from disk, 13.3M written to disk. Mar 25 01:40:31.592943 containerd[1912]: time="2025-03-25T01:40:31.592911570Z" level=info msg="RemoveContainer for \"887a17df0fe75c724f661cef348ea0c58ded2b56b9ed31c0c0abf90b1ad906eb\"" Mar 25 01:40:31.613516 systemd[1]: Removed slice kubepods-besteffort-pode5aa7182_9f1f_45ee_9555_3680f7481b43.slice - libcontainer container kubepods-besteffort-pode5aa7182_9f1f_45ee_9555_3680f7481b43.slice. 
Mar 25 01:40:31.675826 containerd[1912]: time="2025-03-25T01:40:31.675774199Z" level=info msg="RemoveContainer for \"887a17df0fe75c724f661cef348ea0c58ded2b56b9ed31c0c0abf90b1ad906eb\" returns successfully"
Mar 25 01:40:31.676295 kubelet[3296]: I0325 01:40:31.676269 3296 scope.go:117] "RemoveContainer" containerID="99294bb160bf3665266ab3d458813f4ea76dfc4b13e4390170b7f16ae5cdadd9"
Mar 25 01:40:31.677817 containerd[1912]: time="2025-03-25T01:40:31.677772972Z" level=info msg="RemoveContainer for \"99294bb160bf3665266ab3d458813f4ea76dfc4b13e4390170b7f16ae5cdadd9\""
Mar 25 01:40:31.684705 containerd[1912]: time="2025-03-25T01:40:31.684667687Z" level=info msg="RemoveContainer for \"99294bb160bf3665266ab3d458813f4ea76dfc4b13e4390170b7f16ae5cdadd9\" returns successfully"
Mar 25 01:40:31.684942 kubelet[3296]: I0325 01:40:31.684917 3296 scope.go:117] "RemoveContainer" containerID="e082796a408fa1321dc7ba9aa487a3ce059e2a07c532fb583153be427d5fad16"
Mar 25 01:40:31.687363 containerd[1912]: time="2025-03-25T01:40:31.687302806Z" level=info msg="RemoveContainer for \"e082796a408fa1321dc7ba9aa487a3ce059e2a07c532fb583153be427d5fad16\""
Mar 25 01:40:31.694460 containerd[1912]: time="2025-03-25T01:40:31.694421694Z" level=info msg="RemoveContainer for \"e082796a408fa1321dc7ba9aa487a3ce059e2a07c532fb583153be427d5fad16\" returns successfully"
Mar 25 01:40:31.694668 kubelet[3296]: I0325 01:40:31.694642 3296 scope.go:117] "RemoveContainer" containerID="f0d491aeef59125cdaf991868ba527809dda993068f37757ca3f33e5e443914e"
Mar 25 01:40:31.696835 containerd[1912]: time="2025-03-25T01:40:31.696165436Z" level=info msg="RemoveContainer for \"f0d491aeef59125cdaf991868ba527809dda993068f37757ca3f33e5e443914e\""
Mar 25 01:40:31.701792 containerd[1912]: time="2025-03-25T01:40:31.701753172Z" level=info msg="RemoveContainer for \"f0d491aeef59125cdaf991868ba527809dda993068f37757ca3f33e5e443914e\" returns successfully"
Mar 25 01:40:31.702705 kubelet[3296]: I0325 01:40:31.701972 3296 scope.go:117] "RemoveContainer" containerID="6a6f030a3e7a77010d0fb814ad210099975922e044f82e9926a4b92b7985b4b9"
Mar 25 01:40:31.704552 containerd[1912]: time="2025-03-25T01:40:31.704523159Z" level=info msg="RemoveContainer for \"6a6f030a3e7a77010d0fb814ad210099975922e044f82e9926a4b92b7985b4b9\""
Mar 25 01:40:31.710240 containerd[1912]: time="2025-03-25T01:40:31.710203721Z" level=info msg="RemoveContainer for \"6a6f030a3e7a77010d0fb814ad210099975922e044f82e9926a4b92b7985b4b9\" returns successfully"
Mar 25 01:40:31.710522 kubelet[3296]: I0325 01:40:31.710479 3296 scope.go:117] "RemoveContainer" containerID="887a17df0fe75c724f661cef348ea0c58ded2b56b9ed31c0c0abf90b1ad906eb"
Mar 25 01:40:31.720519 containerd[1912]: time="2025-03-25T01:40:31.710718949Z" level=error msg="ContainerStatus for \"887a17df0fe75c724f661cef348ea0c58ded2b56b9ed31c0c0abf90b1ad906eb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"887a17df0fe75c724f661cef348ea0c58ded2b56b9ed31c0c0abf90b1ad906eb\": not found"
Mar 25 01:40:31.729794 kubelet[3296]: E0325 01:40:31.729419 3296 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"887a17df0fe75c724f661cef348ea0c58ded2b56b9ed31c0c0abf90b1ad906eb\": not found" containerID="887a17df0fe75c724f661cef348ea0c58ded2b56b9ed31c0c0abf90b1ad906eb"
Mar 25 01:40:31.729794 kubelet[3296]: I0325 01:40:31.729511 3296 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"887a17df0fe75c724f661cef348ea0c58ded2b56b9ed31c0c0abf90b1ad906eb"} err="failed to get container status \"887a17df0fe75c724f661cef348ea0c58ded2b56b9ed31c0c0abf90b1ad906eb\": rpc error: code = NotFound desc = an error occurred when try to find container \"887a17df0fe75c724f661cef348ea0c58ded2b56b9ed31c0c0abf90b1ad906eb\": not found"
Mar 25 01:40:31.729794 kubelet[3296]: I0325 01:40:31.729629 3296 scope.go:117] "RemoveContainer" containerID="99294bb160bf3665266ab3d458813f4ea76dfc4b13e4390170b7f16ae5cdadd9"
Mar 25 01:40:31.730162 containerd[1912]: time="2025-03-25T01:40:31.730088686Z" level=error msg="ContainerStatus for \"99294bb160bf3665266ab3d458813f4ea76dfc4b13e4390170b7f16ae5cdadd9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"99294bb160bf3665266ab3d458813f4ea76dfc4b13e4390170b7f16ae5cdadd9\": not found"
Mar 25 01:40:31.730508 kubelet[3296]: E0325 01:40:31.730390 3296 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"99294bb160bf3665266ab3d458813f4ea76dfc4b13e4390170b7f16ae5cdadd9\": not found" containerID="99294bb160bf3665266ab3d458813f4ea76dfc4b13e4390170b7f16ae5cdadd9"
Mar 25 01:40:31.730619 kubelet[3296]: I0325 01:40:31.730499 3296 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"99294bb160bf3665266ab3d458813f4ea76dfc4b13e4390170b7f16ae5cdadd9"} err="failed to get container status \"99294bb160bf3665266ab3d458813f4ea76dfc4b13e4390170b7f16ae5cdadd9\": rpc error: code = NotFound desc = an error occurred when try to find container \"99294bb160bf3665266ab3d458813f4ea76dfc4b13e4390170b7f16ae5cdadd9\": not found"
Mar 25 01:40:31.730619 kubelet[3296]: I0325 01:40:31.730522 3296 scope.go:117] "RemoveContainer" containerID="e082796a408fa1321dc7ba9aa487a3ce059e2a07c532fb583153be427d5fad16"
Mar 25 01:40:31.731567 containerd[1912]: time="2025-03-25T01:40:31.731505282Z" level=error msg="ContainerStatus for \"e082796a408fa1321dc7ba9aa487a3ce059e2a07c532fb583153be427d5fad16\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e082796a408fa1321dc7ba9aa487a3ce059e2a07c532fb583153be427d5fad16\": not found"
Mar 25 01:40:31.731879 kubelet[3296]: E0325 01:40:31.731759 3296 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e082796a408fa1321dc7ba9aa487a3ce059e2a07c532fb583153be427d5fad16\": not found" containerID="e082796a408fa1321dc7ba9aa487a3ce059e2a07c532fb583153be427d5fad16"
Mar 25 01:40:31.731879 kubelet[3296]: I0325 01:40:31.731789 3296 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e082796a408fa1321dc7ba9aa487a3ce059e2a07c532fb583153be427d5fad16"} err="failed to get container status \"e082796a408fa1321dc7ba9aa487a3ce059e2a07c532fb583153be427d5fad16\": rpc error: code = NotFound desc = an error occurred when try to find container \"e082796a408fa1321dc7ba9aa487a3ce059e2a07c532fb583153be427d5fad16\": not found"
Mar 25 01:40:31.731879 kubelet[3296]: I0325 01:40:31.731810 3296 scope.go:117] "RemoveContainer" containerID="f0d491aeef59125cdaf991868ba527809dda993068f37757ca3f33e5e443914e"
Mar 25 01:40:31.732460 containerd[1912]: time="2025-03-25T01:40:31.732145104Z" level=error msg="ContainerStatus for \"f0d491aeef59125cdaf991868ba527809dda993068f37757ca3f33e5e443914e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f0d491aeef59125cdaf991868ba527809dda993068f37757ca3f33e5e443914e\": not found"
Mar 25 01:40:31.733144 kubelet[3296]: E0325 01:40:31.732588 3296 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f0d491aeef59125cdaf991868ba527809dda993068f37757ca3f33e5e443914e\": not found" containerID="f0d491aeef59125cdaf991868ba527809dda993068f37757ca3f33e5e443914e"
Mar 25 01:40:31.733144 kubelet[3296]: I0325 01:40:31.732662 3296 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f0d491aeef59125cdaf991868ba527809dda993068f37757ca3f33e5e443914e"} err="failed to get container status \"f0d491aeef59125cdaf991868ba527809dda993068f37757ca3f33e5e443914e\": rpc error: code = NotFound desc = an error occurred when try to find container \"f0d491aeef59125cdaf991868ba527809dda993068f37757ca3f33e5e443914e\": not found"
Mar 25 01:40:31.733144 kubelet[3296]: I0325 01:40:31.732685 3296 scope.go:117] "RemoveContainer" containerID="6a6f030a3e7a77010d0fb814ad210099975922e044f82e9926a4b92b7985b4b9"
Mar 25 01:40:31.733144 kubelet[3296]: E0325 01:40:31.733041 3296 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6a6f030a3e7a77010d0fb814ad210099975922e044f82e9926a4b92b7985b4b9\": not found" containerID="6a6f030a3e7a77010d0fb814ad210099975922e044f82e9926a4b92b7985b4b9"
Mar 25 01:40:31.733144 kubelet[3296]: I0325 01:40:31.733064 3296 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6a6f030a3e7a77010d0fb814ad210099975922e044f82e9926a4b92b7985b4b9"} err="failed to get container status \"6a6f030a3e7a77010d0fb814ad210099975922e044f82e9926a4b92b7985b4b9\": rpc error: code = NotFound desc = an error occurred when try to find container \"6a6f030a3e7a77010d0fb814ad210099975922e044f82e9926a4b92b7985b4b9\": not found"
Mar 25 01:40:31.733144 kubelet[3296]: I0325 01:40:31.733084 3296 scope.go:117] "RemoveContainer" containerID="2fd0c19f0b6a0b35dfbfb5fd962df3c0a952b29477ca63466c7bc8dfefe85b2b"
Mar 25 01:40:31.733476 containerd[1912]: time="2025-03-25T01:40:31.732853125Z" level=error msg="ContainerStatus for \"6a6f030a3e7a77010d0fb814ad210099975922e044f82e9926a4b92b7985b4b9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6a6f030a3e7a77010d0fb814ad210099975922e044f82e9926a4b92b7985b4b9\": not found"
Mar 25 01:40:31.737482 containerd[1912]: time="2025-03-25T01:40:31.736918617Z" level=info msg="RemoveContainer for \"2fd0c19f0b6a0b35dfbfb5fd962df3c0a952b29477ca63466c7bc8dfefe85b2b\""
Mar 25 01:40:31.744396 containerd[1912]: time="2025-03-25T01:40:31.744195730Z" level=info msg="RemoveContainer for \"2fd0c19f0b6a0b35dfbfb5fd962df3c0a952b29477ca63466c7bc8dfefe85b2b\" returns successfully"
Mar 25 01:40:31.744842 kubelet[3296]: I0325 01:40:31.744636 3296 scope.go:117] "RemoveContainer" containerID="2fd0c19f0b6a0b35dfbfb5fd962df3c0a952b29477ca63466c7bc8dfefe85b2b"
Mar 25 01:40:31.745090 containerd[1912]: time="2025-03-25T01:40:31.745003537Z" level=error msg="ContainerStatus for \"2fd0c19f0b6a0b35dfbfb5fd962df3c0a952b29477ca63466c7bc8dfefe85b2b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2fd0c19f0b6a0b35dfbfb5fd962df3c0a952b29477ca63466c7bc8dfefe85b2b\": not found"
Mar 25 01:40:31.745252 kubelet[3296]: E0325 01:40:31.745228 3296 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2fd0c19f0b6a0b35dfbfb5fd962df3c0a952b29477ca63466c7bc8dfefe85b2b\": not found" containerID="2fd0c19f0b6a0b35dfbfb5fd962df3c0a952b29477ca63466c7bc8dfefe85b2b"
Mar 25 01:40:31.745355 kubelet[3296]: I0325 01:40:31.745257 3296 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2fd0c19f0b6a0b35dfbfb5fd962df3c0a952b29477ca63466c7bc8dfefe85b2b"} err="failed to get container status \"2fd0c19f0b6a0b35dfbfb5fd962df3c0a952b29477ca63466c7bc8dfefe85b2b\": rpc error: code = NotFound desc = an error occurred when try to find container \"2fd0c19f0b6a0b35dfbfb5fd962df3c0a952b29477ca63466c7bc8dfefe85b2b\": not found"
Mar 25 01:40:31.821902 systemd[1]: var-lib-kubelet-pods-e5aa7182\x2d9f1f\x2d45ee\x2d9555\x2d3680f7481b43-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d656nr.mount: Deactivated successfully.
Mar 25 01:40:31.822030 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a68f2ddbf64ef94e4890c7a558449609020a2421a2fb41c93766d6b2ac2bbc9a-shm.mount: Deactivated successfully.
Mar 25 01:40:31.822119 systemd[1]: var-lib-kubelet-pods-035a6b5c\x2dd525\x2d47f1\x2d9bfb\x2d266d722773ba-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwcs9k.mount: Deactivated successfully.
Mar 25 01:40:31.822207 systemd[1]: var-lib-kubelet-pods-035a6b5c\x2dd525\x2d47f1\x2d9bfb\x2d266d722773ba-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Mar 25 01:40:31.822306 systemd[1]: var-lib-kubelet-pods-035a6b5c\x2dd525\x2d47f1\x2d9bfb\x2d266d722773ba-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Mar 25 01:40:32.603730 sshd[5101]: Connection closed by 147.75.109.163 port 35074
Mar 25 01:40:32.612421 sshd-session[5099]: pam_unix(sshd:session): session closed for user core
Mar 25 01:40:32.623640 systemd[1]: sshd@23-172.31.29.210:22-147.75.109.163:35074.service: Deactivated successfully.
Mar 25 01:40:32.626990 systemd[1]: session-24.scope: Deactivated successfully.
Mar 25 01:40:32.628286 systemd-logind[1895]: Session 24 logged out. Waiting for processes to exit.
Mar 25 01:40:32.641290 systemd[1]: Started sshd@24-172.31.29.210:22-147.75.109.163:37930.service - OpenSSH per-connection server daemon (147.75.109.163:37930).
Mar 25 01:40:32.642637 systemd-logind[1895]: Removed session 24.
Mar 25 01:40:32.825303 sshd[5248]: Accepted publickey for core from 147.75.109.163 port 37930 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc
Mar 25 01:40:32.826920 sshd-session[5248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:40:32.838546 systemd-logind[1895]: New session 25 of user core.
Mar 25 01:40:32.847572 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 25 01:40:32.947047 kubelet[3296]: I0325 01:40:32.946860 3296 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="035a6b5c-d525-47f1-9bfb-266d722773ba" path="/var/lib/kubelet/pods/035a6b5c-d525-47f1-9bfb-266d722773ba/volumes"
Mar 25 01:40:32.956355 kubelet[3296]: I0325 01:40:32.954991 3296 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5aa7182-9f1f-45ee-9555-3680f7481b43" path="/var/lib/kubelet/pods/e5aa7182-9f1f-45ee-9555-3680f7481b43/volumes"
Mar 25 01:40:33.133555 kubelet[3296]: E0325 01:40:33.133237 3296 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 25 01:40:33.467286 ntpd[1888]: Deleting interface #11 lxc_health, fe80::f031:c6ff:fe60:75e7%8#123, interface stats: received=0, sent=0, dropped=0, active_time=59 secs
Mar 25 01:40:33.467847 ntpd[1888]: 25 Mar 01:40:33 ntpd[1888]: Deleting interface #11 lxc_health, fe80::f031:c6ff:fe60:75e7%8#123, interface stats: received=0, sent=0, dropped=0, active_time=59 secs
Mar 25 01:40:33.676899 sshd[5251]: Connection closed by 147.75.109.163 port 37930
Mar 25 01:40:33.678168 sshd-session[5248]: pam_unix(sshd:session): session closed for user core
Mar 25 01:40:33.685335 kubelet[3296]: I0325 01:40:33.684431 3296 topology_manager.go:215] "Topology Admit Handler" podUID="c6e9ecf5-731a-477c-84fc-aee3d56980f6" podNamespace="kube-system" podName="cilium-69mvh"
Mar 25 01:40:33.686866 systemd-logind[1895]: Session 25 logged out. Waiting for processes to exit.
Mar 25 01:40:33.687993 systemd[1]: sshd@24-172.31.29.210:22-147.75.109.163:37930.service: Deactivated successfully.
Mar 25 01:40:33.689844 kubelet[3296]: E0325 01:40:33.689785 3296 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="035a6b5c-d525-47f1-9bfb-266d722773ba" containerName="mount-cgroup"
Mar 25 01:40:33.689844 kubelet[3296]: E0325 01:40:33.689825 3296 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="035a6b5c-d525-47f1-9bfb-266d722773ba" containerName="apply-sysctl-overwrites"
Mar 25 01:40:33.690191 kubelet[3296]: E0325 01:40:33.690047 3296 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e5aa7182-9f1f-45ee-9555-3680f7481b43" containerName="cilium-operator"
Mar 25 01:40:33.690191 kubelet[3296]: E0325 01:40:33.690061 3296 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="035a6b5c-d525-47f1-9bfb-266d722773ba" containerName="mount-bpf-fs"
Mar 25 01:40:33.690191 kubelet[3296]: E0325 01:40:33.690070 3296 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="035a6b5c-d525-47f1-9bfb-266d722773ba" containerName="clean-cilium-state"
Mar 25 01:40:33.690191 kubelet[3296]: E0325 01:40:33.690078 3296 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="035a6b5c-d525-47f1-9bfb-266d722773ba" containerName="cilium-agent"
Mar 25 01:40:33.690191 kubelet[3296]: I0325 01:40:33.690152 3296 memory_manager.go:354] "RemoveStaleState removing state" podUID="035a6b5c-d525-47f1-9bfb-266d722773ba" containerName="cilium-agent"
Mar 25 01:40:33.690191 kubelet[3296]: I0325 01:40:33.690165 3296 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5aa7182-9f1f-45ee-9555-3680f7481b43" containerName="cilium-operator"
Mar 25 01:40:33.693065 systemd[1]: session-25.scope: Deactivated successfully.
Mar 25 01:40:33.716397 systemd-logind[1895]: Removed session 25.
Mar 25 01:40:33.723811 systemd[1]: Started sshd@25-172.31.29.210:22-147.75.109.163:37936.service - OpenSSH per-connection server daemon (147.75.109.163:37936).
Mar 25 01:40:33.739349 systemd[1]: Created slice kubepods-burstable-podc6e9ecf5_731a_477c_84fc_aee3d56980f6.slice - libcontainer container kubepods-burstable-podc6e9ecf5_731a_477c_84fc_aee3d56980f6.slice.
Mar 25 01:40:33.818899 kubelet[3296]: I0325 01:40:33.818842 3296 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c6e9ecf5-731a-477c-84fc-aee3d56980f6-cilium-cgroup\") pod \"cilium-69mvh\" (UID: \"c6e9ecf5-731a-477c-84fc-aee3d56980f6\") " pod="kube-system/cilium-69mvh"
Mar 25 01:40:33.818899 kubelet[3296]: I0325 01:40:33.818891 3296 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c6e9ecf5-731a-477c-84fc-aee3d56980f6-host-proc-sys-kernel\") pod \"cilium-69mvh\" (UID: \"c6e9ecf5-731a-477c-84fc-aee3d56980f6\") " pod="kube-system/cilium-69mvh"
Mar 25 01:40:33.819399 kubelet[3296]: I0325 01:40:33.818920 3296 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsr6s\" (UniqueName: \"kubernetes.io/projected/c6e9ecf5-731a-477c-84fc-aee3d56980f6-kube-api-access-jsr6s\") pod \"cilium-69mvh\" (UID: \"c6e9ecf5-731a-477c-84fc-aee3d56980f6\") " pod="kube-system/cilium-69mvh"
Mar 25 01:40:33.819399 kubelet[3296]: I0325 01:40:33.818946 3296 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c6e9ecf5-731a-477c-84fc-aee3d56980f6-cilium-config-path\") pod \"cilium-69mvh\" (UID: \"c6e9ecf5-731a-477c-84fc-aee3d56980f6\") " pod="kube-system/cilium-69mvh"
Mar 25 01:40:33.819399 kubelet[3296]: I0325 01:40:33.818970 3296 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c6e9ecf5-731a-477c-84fc-aee3d56980f6-host-proc-sys-net\") pod \"cilium-69mvh\" (UID: \"c6e9ecf5-731a-477c-84fc-aee3d56980f6\") " pod="kube-system/cilium-69mvh"
Mar 25 01:40:33.819399 kubelet[3296]: I0325 01:40:33.818992 3296 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c6e9ecf5-731a-477c-84fc-aee3d56980f6-hubble-tls\") pod \"cilium-69mvh\" (UID: \"c6e9ecf5-731a-477c-84fc-aee3d56980f6\") " pod="kube-system/cilium-69mvh"
Mar 25 01:40:33.819399 kubelet[3296]: I0325 01:40:33.819018 3296 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c6e9ecf5-731a-477c-84fc-aee3d56980f6-cilium-run\") pod \"cilium-69mvh\" (UID: \"c6e9ecf5-731a-477c-84fc-aee3d56980f6\") " pod="kube-system/cilium-69mvh"
Mar 25 01:40:33.819399 kubelet[3296]: I0325 01:40:33.819057 3296 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c6e9ecf5-731a-477c-84fc-aee3d56980f6-cni-path\") pod \"cilium-69mvh\" (UID: \"c6e9ecf5-731a-477c-84fc-aee3d56980f6\") " pod="kube-system/cilium-69mvh"
Mar 25 01:40:33.819595 kubelet[3296]: I0325 01:40:33.819080 3296 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6e9ecf5-731a-477c-84fc-aee3d56980f6-lib-modules\") pod \"cilium-69mvh\" (UID: \"c6e9ecf5-731a-477c-84fc-aee3d56980f6\") " pod="kube-system/cilium-69mvh"
Mar 25 01:40:33.819595 kubelet[3296]: I0325 01:40:33.819097 3296 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c6e9ecf5-731a-477c-84fc-aee3d56980f6-cilium-ipsec-secrets\") pod \"cilium-69mvh\" (UID: \"c6e9ecf5-731a-477c-84fc-aee3d56980f6\") " pod="kube-system/cilium-69mvh"
Mar 25 01:40:33.819595 kubelet[3296]: I0325 01:40:33.819117 3296 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c6e9ecf5-731a-477c-84fc-aee3d56980f6-hostproc\") pod \"cilium-69mvh\" (UID: \"c6e9ecf5-731a-477c-84fc-aee3d56980f6\") " pod="kube-system/cilium-69mvh"
Mar 25 01:40:33.819595 kubelet[3296]: I0325 01:40:33.819150 3296 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6e9ecf5-731a-477c-84fc-aee3d56980f6-etc-cni-netd\") pod \"cilium-69mvh\" (UID: \"c6e9ecf5-731a-477c-84fc-aee3d56980f6\") " pod="kube-system/cilium-69mvh"
Mar 25 01:40:33.819595 kubelet[3296]: I0325 01:40:33.819171 3296 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c6e9ecf5-731a-477c-84fc-aee3d56980f6-bpf-maps\") pod \"cilium-69mvh\" (UID: \"c6e9ecf5-731a-477c-84fc-aee3d56980f6\") " pod="kube-system/cilium-69mvh"
Mar 25 01:40:33.819595 kubelet[3296]: I0325 01:40:33.819188 3296 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c6e9ecf5-731a-477c-84fc-aee3d56980f6-xtables-lock\") pod \"cilium-69mvh\" (UID: \"c6e9ecf5-731a-477c-84fc-aee3d56980f6\") " pod="kube-system/cilium-69mvh"
Mar 25 01:40:33.819759 kubelet[3296]: I0325 01:40:33.819207 3296 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c6e9ecf5-731a-477c-84fc-aee3d56980f6-clustermesh-secrets\") pod \"cilium-69mvh\" (UID: \"c6e9ecf5-731a-477c-84fc-aee3d56980f6\") " pod="kube-system/cilium-69mvh"
Mar 25 01:40:33.913472 sshd[5261]: Accepted publickey for core from 147.75.109.163 port 37936 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc
Mar 25 01:40:33.915497 sshd-session[5261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:40:33.923184 systemd-logind[1895]: New session 26 of user core.
Mar 25 01:40:33.925528 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 25 01:40:34.048044 containerd[1912]: time="2025-03-25T01:40:34.048002548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-69mvh,Uid:c6e9ecf5-731a-477c-84fc-aee3d56980f6,Namespace:kube-system,Attempt:0,}"
Mar 25 01:40:34.081338 containerd[1912]: time="2025-03-25T01:40:34.080468844Z" level=info msg="connecting to shim 78b8839bd4d78226158d7d85dc0b77ca8dcafe611a5f4ebb17bedb23b5b23744" address="unix:///run/containerd/s/ac83fd7586e7e779d0568f1925d440318992a5e487e3ed119bfd768d0eeb7a65" namespace=k8s.io protocol=ttrpc version=3
Mar 25 01:40:34.094683 sshd[5267]: Connection closed by 147.75.109.163 port 37936
Mar 25 01:40:34.094581 sshd-session[5261]: pam_unix(sshd:session): session closed for user core
Mar 25 01:40:34.101376 systemd[1]: sshd@25-172.31.29.210:22-147.75.109.163:37936.service: Deactivated successfully.
Mar 25 01:40:34.106188 systemd[1]: session-26.scope: Deactivated successfully.
Mar 25 01:40:34.109009 systemd-logind[1895]: Session 26 logged out. Waiting for processes to exit.
Mar 25 01:40:34.112177 systemd-logind[1895]: Removed session 26.
Mar 25 01:40:34.121562 systemd[1]: Started cri-containerd-78b8839bd4d78226158d7d85dc0b77ca8dcafe611a5f4ebb17bedb23b5b23744.scope - libcontainer container 78b8839bd4d78226158d7d85dc0b77ca8dcafe611a5f4ebb17bedb23b5b23744.
Mar 25 01:40:34.135887 systemd[1]: Started sshd@26-172.31.29.210:22-147.75.109.163:37938.service - OpenSSH per-connection server daemon (147.75.109.163:37938).
Mar 25 01:40:34.167786 containerd[1912]: time="2025-03-25T01:40:34.167743557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-69mvh,Uid:c6e9ecf5-731a-477c-84fc-aee3d56980f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"78b8839bd4d78226158d7d85dc0b77ca8dcafe611a5f4ebb17bedb23b5b23744\"" Mar 25 01:40:34.175487 containerd[1912]: time="2025-03-25T01:40:34.174587018Z" level=info msg="CreateContainer within sandbox \"78b8839bd4d78226158d7d85dc0b77ca8dcafe611a5f4ebb17bedb23b5b23744\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 25 01:40:34.186623 containerd[1912]: time="2025-03-25T01:40:34.186587175Z" level=info msg="Container e7c1549066b6cf9da6ddc02b6f4a10ab1674fa098b96b729e5ba30e5285a83fc: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:40:34.198034 containerd[1912]: time="2025-03-25T01:40:34.197999599Z" level=info msg="CreateContainer within sandbox \"78b8839bd4d78226158d7d85dc0b77ca8dcafe611a5f4ebb17bedb23b5b23744\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e7c1549066b6cf9da6ddc02b6f4a10ab1674fa098b96b729e5ba30e5285a83fc\"" Mar 25 01:40:34.199933 containerd[1912]: time="2025-03-25T01:40:34.199883815Z" level=info msg="StartContainer for \"e7c1549066b6cf9da6ddc02b6f4a10ab1674fa098b96b729e5ba30e5285a83fc\"" Mar 25 01:40:34.201292 containerd[1912]: time="2025-03-25T01:40:34.201218596Z" level=info msg="connecting to shim e7c1549066b6cf9da6ddc02b6f4a10ab1674fa098b96b729e5ba30e5285a83fc" address="unix:///run/containerd/s/ac83fd7586e7e779d0568f1925d440318992a5e487e3ed119bfd768d0eeb7a65" protocol=ttrpc version=3 Mar 25 01:40:34.223416 systemd[1]: Started cri-containerd-e7c1549066b6cf9da6ddc02b6f4a10ab1674fa098b96b729e5ba30e5285a83fc.scope - libcontainer container e7c1549066b6cf9da6ddc02b6f4a10ab1674fa098b96b729e5ba30e5285a83fc. 
Mar 25 01:40:34.269047 containerd[1912]: time="2025-03-25T01:40:34.268920912Z" level=info msg="StartContainer for \"e7c1549066b6cf9da6ddc02b6f4a10ab1674fa098b96b729e5ba30e5285a83fc\" returns successfully" Mar 25 01:40:34.289647 systemd[1]: cri-containerd-e7c1549066b6cf9da6ddc02b6f4a10ab1674fa098b96b729e5ba30e5285a83fc.scope: Deactivated successfully. Mar 25 01:40:34.290274 systemd[1]: cri-containerd-e7c1549066b6cf9da6ddc02b6f4a10ab1674fa098b96b729e5ba30e5285a83fc.scope: Consumed 23ms CPU time, 9.4M memory peak, 2.8M read from disk. Mar 25 01:40:34.292526 containerd[1912]: time="2025-03-25T01:40:34.292386140Z" level=info msg="received exit event container_id:\"e7c1549066b6cf9da6ddc02b6f4a10ab1674fa098b96b729e5ba30e5285a83fc\" id:\"e7c1549066b6cf9da6ddc02b6f4a10ab1674fa098b96b729e5ba30e5285a83fc\" pid:5336 exited_at:{seconds:1742866834 nanos:291999680}" Mar 25 01:40:34.293175 containerd[1912]: time="2025-03-25T01:40:34.292977162Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e7c1549066b6cf9da6ddc02b6f4a10ab1674fa098b96b729e5ba30e5285a83fc\" id:\"e7c1549066b6cf9da6ddc02b6f4a10ab1674fa098b96b729e5ba30e5285a83fc\" pid:5336 exited_at:{seconds:1742866834 nanos:291999680}" Mar 25 01:40:34.332628 sshd[5311]: Accepted publickey for core from 147.75.109.163 port 37938 ssh2: RSA SHA256:pZSzr7AABY+GJWUQ/10Qq8YqIpXZSwyycEbuJ7d4HJc Mar 25 01:40:34.334611 sshd-session[5311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:40:34.341296 systemd-logind[1895]: New session 27 of user core. Mar 25 01:40:34.345559 systemd[1]: Started session-27.scope - Session 27 of User core. 
Mar 25 01:40:34.620644 containerd[1912]: time="2025-03-25T01:40:34.620527898Z" level=info msg="CreateContainer within sandbox \"78b8839bd4d78226158d7d85dc0b77ca8dcafe611a5f4ebb17bedb23b5b23744\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 25 01:40:34.630280 containerd[1912]: time="2025-03-25T01:40:34.630232559Z" level=info msg="Container 93512daed4cd714804b664a1116a9ef507bd2b37573c8dea6f6235e423a7830d: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:40:34.640924 containerd[1912]: time="2025-03-25T01:40:34.640816090Z" level=info msg="CreateContainer within sandbox \"78b8839bd4d78226158d7d85dc0b77ca8dcafe611a5f4ebb17bedb23b5b23744\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"93512daed4cd714804b664a1116a9ef507bd2b37573c8dea6f6235e423a7830d\"" Mar 25 01:40:34.642063 containerd[1912]: time="2025-03-25T01:40:34.641997284Z" level=info msg="StartContainer for \"93512daed4cd714804b664a1116a9ef507bd2b37573c8dea6f6235e423a7830d\"" Mar 25 01:40:34.643155 containerd[1912]: time="2025-03-25T01:40:34.643122017Z" level=info msg="connecting to shim 93512daed4cd714804b664a1116a9ef507bd2b37573c8dea6f6235e423a7830d" address="unix:///run/containerd/s/ac83fd7586e7e779d0568f1925d440318992a5e487e3ed119bfd768d0eeb7a65" protocol=ttrpc version=3 Mar 25 01:40:34.666534 systemd[1]: Started cri-containerd-93512daed4cd714804b664a1116a9ef507bd2b37573c8dea6f6235e423a7830d.scope - libcontainer container 93512daed4cd714804b664a1116a9ef507bd2b37573c8dea6f6235e423a7830d. Mar 25 01:40:34.701539 containerd[1912]: time="2025-03-25T01:40:34.701500789Z" level=info msg="StartContainer for \"93512daed4cd714804b664a1116a9ef507bd2b37573c8dea6f6235e423a7830d\" returns successfully" Mar 25 01:40:34.715013 systemd[1]: cri-containerd-93512daed4cd714804b664a1116a9ef507bd2b37573c8dea6f6235e423a7830d.scope: Deactivated successfully. 
Mar 25 01:40:34.715583 systemd[1]: cri-containerd-93512daed4cd714804b664a1116a9ef507bd2b37573c8dea6f6235e423a7830d.scope: Consumed 20ms CPU time, 7.3M memory peak, 2M read from disk. Mar 25 01:40:34.716792 containerd[1912]: time="2025-03-25T01:40:34.716702031Z" level=info msg="TaskExit event in podsandbox handler container_id:\"93512daed4cd714804b664a1116a9ef507bd2b37573c8dea6f6235e423a7830d\" id:\"93512daed4cd714804b664a1116a9ef507bd2b37573c8dea6f6235e423a7830d\" pid:5386 exited_at:{seconds:1742866834 nanos:714795957}" Mar 25 01:40:34.716792 containerd[1912]: time="2025-03-25T01:40:34.716708651Z" level=info msg="received exit event container_id:\"93512daed4cd714804b664a1116a9ef507bd2b37573c8dea6f6235e423a7830d\" id:\"93512daed4cd714804b664a1116a9ef507bd2b37573c8dea6f6235e423a7830d\" pid:5386 exited_at:{seconds:1742866834 nanos:714795957}" Mar 25 01:40:34.956120 kubelet[3296]: I0325 01:40:34.955889 3296 setters.go:580] "Node became not ready" node="ip-172-31-29-210" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-25T01:40:34Z","lastTransitionTime":"2025-03-25T01:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 25 01:40:35.625066 containerd[1912]: time="2025-03-25T01:40:35.624298580Z" level=info msg="CreateContainer within sandbox \"78b8839bd4d78226158d7d85dc0b77ca8dcafe611a5f4ebb17bedb23b5b23744\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 25 01:40:35.641207 containerd[1912]: time="2025-03-25T01:40:35.641157338Z" level=info msg="Container dd426de919f60d6a5e61dd48ff63f935c529089cf21c88a41f6cdf145cdb74ee: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:40:35.658029 containerd[1912]: time="2025-03-25T01:40:35.657982670Z" level=info msg="CreateContainer within sandbox \"78b8839bd4d78226158d7d85dc0b77ca8dcafe611a5f4ebb17bedb23b5b23744\" for 
&ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"dd426de919f60d6a5e61dd48ff63f935c529089cf21c88a41f6cdf145cdb74ee\"" Mar 25 01:40:35.658811 containerd[1912]: time="2025-03-25T01:40:35.658769234Z" level=info msg="StartContainer for \"dd426de919f60d6a5e61dd48ff63f935c529089cf21c88a41f6cdf145cdb74ee\"" Mar 25 01:40:35.661622 containerd[1912]: time="2025-03-25T01:40:35.660550344Z" level=info msg="connecting to shim dd426de919f60d6a5e61dd48ff63f935c529089cf21c88a41f6cdf145cdb74ee" address="unix:///run/containerd/s/ac83fd7586e7e779d0568f1925d440318992a5e487e3ed119bfd768d0eeb7a65" protocol=ttrpc version=3 Mar 25 01:40:35.687536 systemd[1]: Started cri-containerd-dd426de919f60d6a5e61dd48ff63f935c529089cf21c88a41f6cdf145cdb74ee.scope - libcontainer container dd426de919f60d6a5e61dd48ff63f935c529089cf21c88a41f6cdf145cdb74ee. Mar 25 01:40:35.743326 containerd[1912]: time="2025-03-25T01:40:35.743278419Z" level=info msg="StartContainer for \"dd426de919f60d6a5e61dd48ff63f935c529089cf21c88a41f6cdf145cdb74ee\" returns successfully" Mar 25 01:40:35.750539 systemd[1]: cri-containerd-dd426de919f60d6a5e61dd48ff63f935c529089cf21c88a41f6cdf145cdb74ee.scope: Deactivated successfully. Mar 25 01:40:35.750876 systemd[1]: cri-containerd-dd426de919f60d6a5e61dd48ff63f935c529089cf21c88a41f6cdf145cdb74ee.scope: Consumed 27ms CPU time, 5.8M memory peak, 1.1M read from disk. 
Mar 25 01:40:35.751938 containerd[1912]: time="2025-03-25T01:40:35.751590367Z" level=info msg="received exit event container_id:\"dd426de919f60d6a5e61dd48ff63f935c529089cf21c88a41f6cdf145cdb74ee\" id:\"dd426de919f60d6a5e61dd48ff63f935c529089cf21c88a41f6cdf145cdb74ee\" pid:5429 exited_at:{seconds:1742866835 nanos:751344963}" Mar 25 01:40:35.751938 containerd[1912]: time="2025-03-25T01:40:35.751899750Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dd426de919f60d6a5e61dd48ff63f935c529089cf21c88a41f6cdf145cdb74ee\" id:\"dd426de919f60d6a5e61dd48ff63f935c529089cf21c88a41f6cdf145cdb74ee\" pid:5429 exited_at:{seconds:1742866835 nanos:751344963}" Mar 25 01:40:35.779706 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd426de919f60d6a5e61dd48ff63f935c529089cf21c88a41f6cdf145cdb74ee-rootfs.mount: Deactivated successfully. Mar 25 01:40:36.661349 containerd[1912]: time="2025-03-25T01:40:36.659904898Z" level=info msg="CreateContainer within sandbox \"78b8839bd4d78226158d7d85dc0b77ca8dcafe611a5f4ebb17bedb23b5b23744\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 25 01:40:36.680305 containerd[1912]: time="2025-03-25T01:40:36.677453124Z" level=info msg="Container 94d9d9aea08cb047fe1d55e7da04134df87e33cc9d965ad2ac1bdb48bb072f57: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:40:36.696392 containerd[1912]: time="2025-03-25T01:40:36.696351791Z" level=info msg="CreateContainer within sandbox \"78b8839bd4d78226158d7d85dc0b77ca8dcafe611a5f4ebb17bedb23b5b23744\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"94d9d9aea08cb047fe1d55e7da04134df87e33cc9d965ad2ac1bdb48bb072f57\"" Mar 25 01:40:36.698398 containerd[1912]: time="2025-03-25T01:40:36.698362983Z" level=info msg="StartContainer for \"94d9d9aea08cb047fe1d55e7da04134df87e33cc9d965ad2ac1bdb48bb072f57\"" Mar 25 01:40:36.699284 containerd[1912]: time="2025-03-25T01:40:36.699252579Z" level=info msg="connecting to shim 
94d9d9aea08cb047fe1d55e7da04134df87e33cc9d965ad2ac1bdb48bb072f57" address="unix:///run/containerd/s/ac83fd7586e7e779d0568f1925d440318992a5e487e3ed119bfd768d0eeb7a65" protocol=ttrpc version=3 Mar 25 01:40:36.729523 systemd[1]: Started cri-containerd-94d9d9aea08cb047fe1d55e7da04134df87e33cc9d965ad2ac1bdb48bb072f57.scope - libcontainer container 94d9d9aea08cb047fe1d55e7da04134df87e33cc9d965ad2ac1bdb48bb072f57. Mar 25 01:40:36.761977 systemd[1]: cri-containerd-94d9d9aea08cb047fe1d55e7da04134df87e33cc9d965ad2ac1bdb48bb072f57.scope: Deactivated successfully. Mar 25 01:40:36.764456 containerd[1912]: time="2025-03-25T01:40:36.764270876Z" level=info msg="received exit event container_id:\"94d9d9aea08cb047fe1d55e7da04134df87e33cc9d965ad2ac1bdb48bb072f57\" id:\"94d9d9aea08cb047fe1d55e7da04134df87e33cc9d965ad2ac1bdb48bb072f57\" pid:5468 exited_at:{seconds:1742866836 nanos:763787324}" Mar 25 01:40:36.766594 containerd[1912]: time="2025-03-25T01:40:36.766492965Z" level=info msg="TaskExit event in podsandbox handler container_id:\"94d9d9aea08cb047fe1d55e7da04134df87e33cc9d965ad2ac1bdb48bb072f57\" id:\"94d9d9aea08cb047fe1d55e7da04134df87e33cc9d965ad2ac1bdb48bb072f57\" pid:5468 exited_at:{seconds:1742866836 nanos:763787324}" Mar 25 01:40:36.775892 containerd[1912]: time="2025-03-25T01:40:36.775850261Z" level=info msg="StartContainer for \"94d9d9aea08cb047fe1d55e7da04134df87e33cc9d965ad2ac1bdb48bb072f57\" returns successfully" Mar 25 01:40:36.791987 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-94d9d9aea08cb047fe1d55e7da04134df87e33cc9d965ad2ac1bdb48bb072f57-rootfs.mount: Deactivated successfully. 
Mar 25 01:40:37.654053 containerd[1912]: time="2025-03-25T01:40:37.654010142Z" level=info msg="CreateContainer within sandbox \"78b8839bd4d78226158d7d85dc0b77ca8dcafe611a5f4ebb17bedb23b5b23744\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 25 01:40:37.680342 containerd[1912]: time="2025-03-25T01:40:37.675740541Z" level=info msg="Container 738e35e8ae14aba33a30c797f9b35b91429cd89c759ae334912572c1b931bd7a: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:40:37.699202 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2042630849.mount: Deactivated successfully. Mar 25 01:40:37.710733 containerd[1912]: time="2025-03-25T01:40:37.709617055Z" level=info msg="CreateContainer within sandbox \"78b8839bd4d78226158d7d85dc0b77ca8dcafe611a5f4ebb17bedb23b5b23744\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"738e35e8ae14aba33a30c797f9b35b91429cd89c759ae334912572c1b931bd7a\"" Mar 25 01:40:37.711517 containerd[1912]: time="2025-03-25T01:40:37.711466122Z" level=info msg="StartContainer for \"738e35e8ae14aba33a30c797f9b35b91429cd89c759ae334912572c1b931bd7a\"" Mar 25 01:40:37.712423 containerd[1912]: time="2025-03-25T01:40:37.712393291Z" level=info msg="connecting to shim 738e35e8ae14aba33a30c797f9b35b91429cd89c759ae334912572c1b931bd7a" address="unix:///run/containerd/s/ac83fd7586e7e779d0568f1925d440318992a5e487e3ed119bfd768d0eeb7a65" protocol=ttrpc version=3 Mar 25 01:40:37.746524 systemd[1]: Started cri-containerd-738e35e8ae14aba33a30c797f9b35b91429cd89c759ae334912572c1b931bd7a.scope - libcontainer container 738e35e8ae14aba33a30c797f9b35b91429cd89c759ae334912572c1b931bd7a. 
Mar 25 01:40:37.804679 containerd[1912]: time="2025-03-25T01:40:37.804641145Z" level=info msg="StartContainer for \"738e35e8ae14aba33a30c797f9b35b91429cd89c759ae334912572c1b931bd7a\" returns successfully" Mar 25 01:40:37.956807 containerd[1912]: time="2025-03-25T01:40:37.956679794Z" level=info msg="TaskExit event in podsandbox handler container_id:\"738e35e8ae14aba33a30c797f9b35b91429cd89c759ae334912572c1b931bd7a\" id:\"73241d731dfa9e30d479495e63019a6d34fd7f3ab46cd3cc0c14af1ed4cdeca9\" pid:5535 exited_at:{seconds:1742866837 nanos:956354460}" Mar 25 01:40:38.614251 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Mar 25 01:40:38.751329 kubelet[3296]: I0325 01:40:38.751260 3296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-69mvh" podStartSLOduration=5.751235338 podStartE2EDuration="5.751235338s" podCreationTimestamp="2025-03-25 01:40:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:40:38.742462876 +0000 UTC m=+106.131106710" watchObservedRunningTime="2025-03-25 01:40:38.751235338 +0000 UTC m=+106.139879174" Mar 25 01:40:41.454936 containerd[1912]: time="2025-03-25T01:40:41.454893325Z" level=info msg="TaskExit event in podsandbox handler container_id:\"738e35e8ae14aba33a30c797f9b35b91429cd89c759ae334912572c1b931bd7a\" id:\"16883448727c916d30e63ec85383de5e6df3271da4e8586f09abad4d19077f55\" pid:5767 exit_status:1 exited_at:{seconds:1742866841 nanos:454079186}" Mar 25 01:40:42.422588 systemd-networkd[1756]: lxc_health: Link UP Mar 25 01:40:42.424167 (udev-worker)[6052]: Network interface NamePolicy= disabled on kernel command line. 
Mar 25 01:40:42.425130 systemd-networkd[1756]: lxc_health: Gained carrier Mar 25 01:40:43.822924 containerd[1912]: time="2025-03-25T01:40:43.822869903Z" level=info msg="TaskExit event in podsandbox handler container_id:\"738e35e8ae14aba33a30c797f9b35b91429cd89c759ae334912572c1b931bd7a\" id:\"278a7bccdd555281ef3e1d52e7dbc62c7bdfea3cdca9be7210bad4fd6a744eb0\" pid:6086 exited_at:{seconds:1742866843 nanos:822428188}" Mar 25 01:40:43.876529 systemd-networkd[1756]: lxc_health: Gained IPv6LL Mar 25 01:40:45.996541 containerd[1912]: time="2025-03-25T01:40:45.996468121Z" level=info msg="TaskExit event in podsandbox handler container_id:\"738e35e8ae14aba33a30c797f9b35b91429cd89c759ae334912572c1b931bd7a\" id:\"6c9090e3239b5643137c076084d959fda8864cd24d9c92621ef0fcf107a2754c\" pid:6117 exited_at:{seconds:1742866845 nanos:995377571}" Mar 25 01:40:46.467353 ntpd[1888]: Listen normally on 14 lxc_health [fe80::b096:dff:fe0f:56c4%14]:123 Mar 25 01:40:46.467944 ntpd[1888]: 25 Mar 01:40:46 ntpd[1888]: Listen normally on 14 lxc_health [fe80::b096:dff:fe0f:56c4%14]:123 Mar 25 01:40:48.336950 containerd[1912]: time="2025-03-25T01:40:48.336059428Z" level=info msg="TaskExit event in podsandbox handler container_id:\"738e35e8ae14aba33a30c797f9b35b91429cd89c759ae334912572c1b931bd7a\" id:\"258b6456a6f7081fc8eb075c9033f123a46753f160b52e78bbb6e0bbe64b26a8\" pid:6142 exited_at:{seconds:1742866848 nanos:335034047}" Mar 25 01:40:48.375834 sshd[5366]: Connection closed by 147.75.109.163 port 37938 Mar 25 01:40:48.377824 sshd-session[5311]: pam_unix(sshd:session): session closed for user core Mar 25 01:40:48.395165 systemd[1]: sshd@26-172.31.29.210:22-147.75.109.163:37938.service: Deactivated successfully. Mar 25 01:40:48.401185 systemd[1]: session-27.scope: Deactivated successfully. Mar 25 01:40:48.406881 systemd-logind[1895]: Session 27 logged out. Waiting for processes to exit. Mar 25 01:40:48.408918 systemd-logind[1895]: Removed session 27. 
Mar 25 01:40:52.887357 containerd[1912]: time="2025-03-25T01:40:52.887151542Z" level=info msg="StopPodSandbox for \"49d0a7b2e7885b72bac6b4fb6da123fdd0702ccb63d53d14408440ca6ae502be\"" Mar 25 01:40:52.887817 containerd[1912]: time="2025-03-25T01:40:52.887503073Z" level=info msg="TearDown network for sandbox \"49d0a7b2e7885b72bac6b4fb6da123fdd0702ccb63d53d14408440ca6ae502be\" successfully" Mar 25 01:40:52.887817 containerd[1912]: time="2025-03-25T01:40:52.887524021Z" level=info msg="StopPodSandbox for \"49d0a7b2e7885b72bac6b4fb6da123fdd0702ccb63d53d14408440ca6ae502be\" returns successfully" Mar 25 01:40:52.888775 containerd[1912]: time="2025-03-25T01:40:52.888742546Z" level=info msg="RemovePodSandbox for \"49d0a7b2e7885b72bac6b4fb6da123fdd0702ccb63d53d14408440ca6ae502be\"" Mar 25 01:40:52.898899 containerd[1912]: time="2025-03-25T01:40:52.898847023Z" level=info msg="Forcibly stopping sandbox \"49d0a7b2e7885b72bac6b4fb6da123fdd0702ccb63d53d14408440ca6ae502be\"" Mar 25 01:40:52.921671 containerd[1912]: time="2025-03-25T01:40:52.921611968Z" level=info msg="TearDown network for sandbox \"49d0a7b2e7885b72bac6b4fb6da123fdd0702ccb63d53d14408440ca6ae502be\" successfully" Mar 25 01:40:52.924768 containerd[1912]: time="2025-03-25T01:40:52.924724311Z" level=info msg="Ensure that sandbox 49d0a7b2e7885b72bac6b4fb6da123fdd0702ccb63d53d14408440ca6ae502be in task-service has been cleanup successfully" Mar 25 01:40:52.930116 containerd[1912]: time="2025-03-25T01:40:52.930063849Z" level=info msg="RemovePodSandbox \"49d0a7b2e7885b72bac6b4fb6da123fdd0702ccb63d53d14408440ca6ae502be\" returns successfully" Mar 25 01:40:52.930823 containerd[1912]: time="2025-03-25T01:40:52.930609967Z" level=info msg="StopPodSandbox for \"a68f2ddbf64ef94e4890c7a558449609020a2421a2fb41c93766d6b2ac2bbc9a\"" Mar 25 01:40:52.930823 containerd[1912]: time="2025-03-25T01:40:52.930745366Z" level=info msg="TearDown network for sandbox \"a68f2ddbf64ef94e4890c7a558449609020a2421a2fb41c93766d6b2ac2bbc9a\" 
successfully" Mar 25 01:40:52.930823 containerd[1912]: time="2025-03-25T01:40:52.930758347Z" level=info msg="StopPodSandbox for \"a68f2ddbf64ef94e4890c7a558449609020a2421a2fb41c93766d6b2ac2bbc9a\" returns successfully" Mar 25 01:40:52.931192 containerd[1912]: time="2025-03-25T01:40:52.931157599Z" level=info msg="RemovePodSandbox for \"a68f2ddbf64ef94e4890c7a558449609020a2421a2fb41c93766d6b2ac2bbc9a\"" Mar 25 01:40:52.931192 containerd[1912]: time="2025-03-25T01:40:52.931188079Z" level=info msg="Forcibly stopping sandbox \"a68f2ddbf64ef94e4890c7a558449609020a2421a2fb41c93766d6b2ac2bbc9a\"" Mar 25 01:40:52.931319 containerd[1912]: time="2025-03-25T01:40:52.931293138Z" level=info msg="TearDown network for sandbox \"a68f2ddbf64ef94e4890c7a558449609020a2421a2fb41c93766d6b2ac2bbc9a\" successfully" Mar 25 01:40:52.932950 containerd[1912]: time="2025-03-25T01:40:52.932918600Z" level=info msg="Ensure that sandbox a68f2ddbf64ef94e4890c7a558449609020a2421a2fb41c93766d6b2ac2bbc9a in task-service has been cleanup successfully" Mar 25 01:40:52.939814 containerd[1912]: time="2025-03-25T01:40:52.939756483Z" level=info msg="RemovePodSandbox \"a68f2ddbf64ef94e4890c7a558449609020a2421a2fb41c93766d6b2ac2bbc9a\" returns successfully" Mar 25 01:41:02.579559 systemd[1]: cri-containerd-98937002d889b17eca2b6e9ff1dea103a4d3a6fca32be78f249f0dfaba5aa112.scope: Deactivated successfully. Mar 25 01:41:02.579943 systemd[1]: cri-containerd-98937002d889b17eca2b6e9ff1dea103a4d3a6fca32be78f249f0dfaba5aa112.scope: Consumed 3.130s CPU time, 69.3M memory peak, 22.9M read from disk. 
Mar 25 01:41:02.593790 containerd[1912]: time="2025-03-25T01:41:02.593517975Z" level=info msg="TaskExit event in podsandbox handler container_id:\"98937002d889b17eca2b6e9ff1dea103a4d3a6fca32be78f249f0dfaba5aa112\" id:\"98937002d889b17eca2b6e9ff1dea103a4d3a6fca32be78f249f0dfaba5aa112\" pid:3141 exit_status:1 exited_at:{seconds:1742866862 nanos:588173316}" Mar 25 01:41:02.593790 containerd[1912]: time="2025-03-25T01:41:02.593640693Z" level=info msg="received exit event container_id:\"98937002d889b17eca2b6e9ff1dea103a4d3a6fca32be78f249f0dfaba5aa112\" id:\"98937002d889b17eca2b6e9ff1dea103a4d3a6fca32be78f249f0dfaba5aa112\" pid:3141 exit_status:1 exited_at:{seconds:1742866862 nanos:588173316}" Mar 25 01:41:02.639220 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-98937002d889b17eca2b6e9ff1dea103a4d3a6fca32be78f249f0dfaba5aa112-rootfs.mount: Deactivated successfully. Mar 25 01:41:02.750809 kubelet[3296]: I0325 01:41:02.750726 3296 scope.go:117] "RemoveContainer" containerID="98937002d889b17eca2b6e9ff1dea103a4d3a6fca32be78f249f0dfaba5aa112" Mar 25 01:41:02.765795 containerd[1912]: time="2025-03-25T01:41:02.765690235Z" level=info msg="CreateContainer within sandbox \"b71234b3f236c49d795fcf41c0593cb90adca8984740d505da3fa38340f37fcb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Mar 25 01:41:02.789076 containerd[1912]: time="2025-03-25T01:41:02.786590746Z" level=info msg="Container f9d374c4e7ce78f361f314fdf52c954ad6e5c26f05c6cefdb5558ac5742871bd: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:41:02.800520 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2146970784.mount: Deactivated successfully. 
Mar 25 01:41:02.812521 containerd[1912]: time="2025-03-25T01:41:02.812472414Z" level=info msg="CreateContainer within sandbox \"b71234b3f236c49d795fcf41c0593cb90adca8984740d505da3fa38340f37fcb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"f9d374c4e7ce78f361f314fdf52c954ad6e5c26f05c6cefdb5558ac5742871bd\"" Mar 25 01:41:02.814682 containerd[1912]: time="2025-03-25T01:41:02.813042622Z" level=info msg="StartContainer for \"f9d374c4e7ce78f361f314fdf52c954ad6e5c26f05c6cefdb5558ac5742871bd\"" Mar 25 01:41:02.814682 containerd[1912]: time="2025-03-25T01:41:02.814152075Z" level=info msg="connecting to shim f9d374c4e7ce78f361f314fdf52c954ad6e5c26f05c6cefdb5558ac5742871bd" address="unix:///run/containerd/s/603d240dcd530393ecdbd24adceffd0135a99b61d31a2d36b6e578133113006f" protocol=ttrpc version=3 Mar 25 01:41:02.838519 systemd[1]: Started cri-containerd-f9d374c4e7ce78f361f314fdf52c954ad6e5c26f05c6cefdb5558ac5742871bd.scope - libcontainer container f9d374c4e7ce78f361f314fdf52c954ad6e5c26f05c6cefdb5558ac5742871bd. Mar 25 01:41:02.899696 containerd[1912]: time="2025-03-25T01:41:02.899659555Z" level=info msg="StartContainer for \"f9d374c4e7ce78f361f314fdf52c954ad6e5c26f05c6cefdb5558ac5742871bd\" returns successfully" Mar 25 01:41:06.557599 kubelet[3296]: E0325 01:41:06.557497 3296 controller.go:195] "Failed to update lease" err="Put \"https://172.31.29.210:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-210?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 25 01:41:08.980467 systemd[1]: cri-containerd-930da410ae8fa8d316e80d4f38e86db94aeb9cc8032e5e3dad6c5728639f89c3.scope: Deactivated successfully. Mar 25 01:41:08.981348 systemd[1]: cri-containerd-930da410ae8fa8d316e80d4f38e86db94aeb9cc8032e5e3dad6c5728639f89c3.scope: Consumed 1.746s CPU time, 27.7M memory peak, 10.2M read from disk. 
Mar 25 01:41:08.984948 containerd[1912]: time="2025-03-25T01:41:08.984911985Z" level=info msg="received exit event container_id:\"930da410ae8fa8d316e80d4f38e86db94aeb9cc8032e5e3dad6c5728639f89c3\" id:\"930da410ae8fa8d316e80d4f38e86db94aeb9cc8032e5e3dad6c5728639f89c3\" pid:3148 exit_status:1 exited_at:{seconds:1742866868 nanos:984492343}" Mar 25 01:41:08.988934 containerd[1912]: time="2025-03-25T01:41:08.988847357Z" level=info msg="TaskExit event in podsandbox handler container_id:\"930da410ae8fa8d316e80d4f38e86db94aeb9cc8032e5e3dad6c5728639f89c3\" id:\"930da410ae8fa8d316e80d4f38e86db94aeb9cc8032e5e3dad6c5728639f89c3\" pid:3148 exit_status:1 exited_at:{seconds:1742866868 nanos:984492343}" Mar 25 01:41:09.018584 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-930da410ae8fa8d316e80d4f38e86db94aeb9cc8032e5e3dad6c5728639f89c3-rootfs.mount: Deactivated successfully. Mar 25 01:41:09.776717 kubelet[3296]: I0325 01:41:09.776687 3296 scope.go:117] "RemoveContainer" containerID="930da410ae8fa8d316e80d4f38e86db94aeb9cc8032e5e3dad6c5728639f89c3" Mar 25 01:41:09.785642 containerd[1912]: time="2025-03-25T01:41:09.785598675Z" level=info msg="CreateContainer within sandbox \"5d32c70322afafaaa40ce3f00f8e37a3702ddacbebdfde8bafb4d3232f4d0bde\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Mar 25 01:41:09.802645 containerd[1912]: time="2025-03-25T01:41:09.801182151Z" level=info msg="Container e3d947d000fdaa02852c32e1d621f8fd9990ea45ddeac5f2cc8584d2e30554c4: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:41:09.838614 containerd[1912]: time="2025-03-25T01:41:09.838569394Z" level=info msg="CreateContainer within sandbox \"5d32c70322afafaaa40ce3f00f8e37a3702ddacbebdfde8bafb4d3232f4d0bde\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"e3d947d000fdaa02852c32e1d621f8fd9990ea45ddeac5f2cc8584d2e30554c4\"" Mar 25 01:41:09.840347 containerd[1912]: time="2025-03-25T01:41:09.839190044Z" level=info msg="StartContainer for 
\"e3d947d000fdaa02852c32e1d621f8fd9990ea45ddeac5f2cc8584d2e30554c4\"" Mar 25 01:41:09.840347 containerd[1912]: time="2025-03-25T01:41:09.840273330Z" level=info msg="connecting to shim e3d947d000fdaa02852c32e1d621f8fd9990ea45ddeac5f2cc8584d2e30554c4" address="unix:///run/containerd/s/83500819bbcad826cae1644a3fc86638b0fd827220a8381fc014a6e943694079" protocol=ttrpc version=3 Mar 25 01:41:09.880546 systemd[1]: Started cri-containerd-e3d947d000fdaa02852c32e1d621f8fd9990ea45ddeac5f2cc8584d2e30554c4.scope - libcontainer container e3d947d000fdaa02852c32e1d621f8fd9990ea45ddeac5f2cc8584d2e30554c4. Mar 25 01:41:09.967468 containerd[1912]: time="2025-03-25T01:41:09.967429561Z" level=info msg="StartContainer for \"e3d947d000fdaa02852c32e1d621f8fd9990ea45ddeac5f2cc8584d2e30554c4\" returns successfully" Mar 25 01:41:16.558740 kubelet[3296]: E0325 01:41:16.558430 3296 controller.go:195] "Failed to update lease" err="Put \"https://172.31.29.210:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-210?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"