Apr 30 03:32:08.931161 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 29 23:03:20 -00 2025 Apr 30 03:32:08.931199 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d Apr 30 03:32:08.931220 kernel: BIOS-provided physical RAM map: Apr 30 03:32:08.931232 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Apr 30 03:32:08.931243 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable Apr 30 03:32:08.931252 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20 Apr 30 03:32:08.931265 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved Apr 30 03:32:08.931277 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Apr 30 03:32:08.931288 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Apr 30 03:32:08.931303 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable Apr 30 03:32:08.931315 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Apr 30 03:32:08.931326 kernel: NX (Execute Disable) protection: active Apr 30 03:32:08.931337 kernel: APIC: Static calls initialized Apr 30 03:32:08.931349 kernel: efi: EFI v2.7 by EDK II Apr 30 03:32:08.931364 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518 Apr 30 03:32:08.931379 kernel: SMBIOS 2.7 present. 
Apr 30 03:32:08.931392 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Apr 30 03:32:08.931405 kernel: Hypervisor detected: KVM Apr 30 03:32:08.931417 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Apr 30 03:32:08.931431 kernel: kvm-clock: using sched offset of 3702845186 cycles Apr 30 03:32:08.931445 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Apr 30 03:32:08.931458 kernel: tsc: Detected 2499.996 MHz processor Apr 30 03:32:08.931471 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Apr 30 03:32:08.931485 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Apr 30 03:32:08.931500 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Apr 30 03:32:08.931519 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Apr 30 03:32:08.931534 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Apr 30 03:32:08.931549 kernel: Using GB pages for direct mapping Apr 30 03:32:08.931564 kernel: Secure boot disabled Apr 30 03:32:08.931579 kernel: ACPI: Early table checksum verification disabled Apr 30 03:32:08.931594 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON) Apr 30 03:32:08.931609 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013) Apr 30 03:32:08.931624 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Apr 30 03:32:08.931639 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Apr 30 03:32:08.931657 kernel: ACPI: FACS 0x00000000789D0000 000040 Apr 30 03:32:08.931672 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Apr 30 03:32:08.931687 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Apr 30 03:32:08.931702 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Apr 30 03:32:08.931717 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Apr 30 03:32:08.931732 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Apr 30 03:32:08.931754 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Apr 30 03:32:08.931774 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Apr 30 03:32:08.931791 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013) Apr 30 03:32:08.931822 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113] Apr 30 03:32:08.931838 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159] Apr 30 03:32:08.931863 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f] Apr 30 03:32:08.931875 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027] Apr 30 03:32:08.931892 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b] Apr 30 03:32:08.931905 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075] Apr 30 03:32:08.931921 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f] Apr 30 03:32:08.931937 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037] Apr 30 03:32:08.931953 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758] Apr 30 03:32:08.931966 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e] Apr 30 03:32:08.931979 kernel: ACPI: Reserving BGRT table memory at [mem 
0x78951000-0x78951037] Apr 30 03:32:08.931994 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Apr 30 03:32:08.932008 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Apr 30 03:32:08.932022 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Apr 30 03:32:08.932041 kernel: NUMA: Initialized distance table, cnt=1 Apr 30 03:32:08.932054 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff] Apr 30 03:32:08.932068 kernel: Zone ranges: Apr 30 03:32:08.932082 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 30 03:32:08.932096 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff] Apr 30 03:32:08.932111 kernel: Normal empty Apr 30 03:32:08.932126 kernel: Movable zone start for each node Apr 30 03:32:08.932139 kernel: Early memory node ranges Apr 30 03:32:08.932152 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Apr 30 03:32:08.932170 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff] Apr 30 03:32:08.932184 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff] Apr 30 03:32:08.932197 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff] Apr 30 03:32:08.932211 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 30 03:32:08.932225 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Apr 30 03:32:08.932240 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Apr 30 03:32:08.932256 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges Apr 30 03:32:08.932271 kernel: ACPI: PM-Timer IO Port: 0xb008 Apr 30 03:32:08.932287 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Apr 30 03:32:08.932303 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Apr 30 03:32:08.932323 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Apr 30 03:32:08.932338 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 30 03:32:08.932351 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Apr 30 03:32:08.932365 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Apr 30 03:32:08.932378 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 30 03:32:08.932392 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Apr 30 03:32:08.932406 kernel: TSC deadline timer available Apr 30 03:32:08.932419 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Apr 30 03:32:08.932432 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Apr 30 03:32:08.932449 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices Apr 30 03:32:08.932463 kernel: Booting paravirtualized kernel on KVM Apr 30 03:32:08.932477 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 30 03:32:08.932491 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Apr 30 03:32:08.932504 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576 Apr 30 03:32:08.932518 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152 Apr 30 03:32:08.932530 kernel: pcpu-alloc: [0] 0 1 Apr 30 03:32:08.932544 kernel: kvm-guest: PV spinlocks enabled Apr 30 03:32:08.932557 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Apr 30 03:32:08.932576 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected 
flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d Apr 30 03:32:08.932591 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Apr 30 03:32:08.932605 kernel: random: crng init done Apr 30 03:32:08.932619 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 30 03:32:08.932633 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Apr 30 03:32:08.932647 kernel: Fallback order for Node 0: 0 Apr 30 03:32:08.932661 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318 Apr 30 03:32:08.932675 kernel: Policy zone: DMA32 Apr 30 03:32:08.932693 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 30 03:32:08.932707 kernel: Memory: 1874608K/2037804K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42864K init, 2328K bss, 162936K reserved, 0K cma-reserved) Apr 30 03:32:08.932721 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Apr 30 03:32:08.932734 kernel: Kernel/User page tables isolation: enabled Apr 30 03:32:08.932748 kernel: ftrace: allocating 37944 entries in 149 pages Apr 30 03:32:08.932763 kernel: ftrace: allocated 149 pages with 4 groups Apr 30 03:32:08.932776 kernel: Dynamic Preempt: voluntary Apr 30 03:32:08.932791 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 30 03:32:08.935753 kernel: rcu: RCU event tracing is enabled. Apr 30 03:32:08.935793 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Apr 30 03:32:08.941023 kernel: Trampoline variant of Tasks RCU enabled. Apr 30 03:32:08.941051 kernel: Rude variant of Tasks RCU enabled. Apr 30 03:32:08.941066 kernel: Tracing variant of Tasks RCU enabled. Apr 30 03:32:08.941079 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 30 03:32:08.941092 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Apr 30 03:32:08.941108 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Apr 30 03:32:08.941141 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Apr 30 03:32:08.941156 kernel: Console: colour dummy device 80x25 Apr 30 03:32:08.941172 kernel: printk: console [tty0] enabled Apr 30 03:32:08.941190 kernel: printk: console [ttyS0] enabled Apr 30 03:32:08.941204 kernel: ACPI: Core revision 20230628 Apr 30 03:32:08.941221 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Apr 30 03:32:08.941236 kernel: APIC: Switch to symmetric I/O mode setup Apr 30 03:32:08.941249 kernel: x2apic enabled Apr 30 03:32:08.941265 kernel: APIC: Switched APIC routing to: physical x2apic Apr 30 03:32:08.941282 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Apr 30 03:32:08.941301 kernel: Calibrating delay loop (skipped) preset value.. 
4999.99 BogoMIPS (lpj=2499996) Apr 30 03:32:08.941317 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Apr 30 03:32:08.941331 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Apr 30 03:32:08.941345 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 30 03:32:08.941358 kernel: Spectre V2 : Mitigation: Retpolines Apr 30 03:32:08.941371 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Apr 30 03:32:08.941385 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Apr 30 03:32:08.941400 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Apr 30 03:32:08.941415 kernel: RETBleed: Vulnerable Apr 30 03:32:08.941434 kernel: Speculative Store Bypass: Vulnerable Apr 30 03:32:08.941448 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Apr 30 03:32:08.941473 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Apr 30 03:32:08.941488 kernel: GDS: Unknown: Dependent on hypervisor status Apr 30 03:32:08.941501 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 30 03:32:08.941515 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 30 03:32:08.941531 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 30 03:32:08.941548 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Apr 30 03:32:08.941562 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Apr 30 03:32:08.941576 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Apr 30 03:32:08.941590 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Apr 30 03:32:08.941609 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Apr 30 03:32:08.941626 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Apr 30 03:32:08.941642 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 30 03:32:08.941658 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Apr 30 03:32:08.941674 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Apr 30 03:32:08.941690 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Apr 30 03:32:08.941706 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Apr 30 03:32:08.941723 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Apr 30 03:32:08.941739 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Apr 30 03:32:08.941755 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. Apr 30 03:32:08.941771 kernel: Freeing SMP alternatives memory: 32K Apr 30 03:32:08.941788 kernel: pid_max: default: 32768 minimum: 301 Apr 30 03:32:08.941824 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 30 03:32:08.941838 kernel: landlock: Up and running. Apr 30 03:32:08.941853 kernel: SELinux: Initializing. Apr 30 03:32:08.941869 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Apr 30 03:32:08.941886 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Apr 30 03:32:08.941902 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Apr 30 03:32:08.941919 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Apr 30 03:32:08.941935 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 30 03:32:08.941952 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 30 03:32:08.941969 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Apr 30 03:32:08.941989 kernel: signal: max sigframe size: 3632 Apr 30 03:32:08.942006 kernel: rcu: Hierarchical SRCU implementation. Apr 30 03:32:08.942023 kernel: rcu: Max phase no-delay instances is 400. Apr 30 03:32:08.942040 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Apr 30 03:32:08.942057 kernel: smp: Bringing up secondary CPUs ... Apr 30 03:32:08.942074 kernel: smpboot: x86: Booting SMP configuration: Apr 30 03:32:08.942090 kernel: .... node #0, CPUs: #1 Apr 30 03:32:08.942107 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Apr 30 03:32:08.942125 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Apr 30 03:32:08.942145 kernel: smp: Brought up 1 node, 2 CPUs Apr 30 03:32:08.942162 kernel: smpboot: Max logical packages: 1 Apr 30 03:32:08.942177 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS) Apr 30 03:32:08.942194 kernel: devtmpfs: initialized Apr 30 03:32:08.942211 kernel: x86/mm: Memory block size: 128MB Apr 30 03:32:08.942228 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes) Apr 30 03:32:08.942244 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 30 03:32:08.942261 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Apr 30 03:32:08.942281 kernel: pinctrl core: initialized pinctrl subsystem Apr 30 03:32:08.942297 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 30 03:32:08.942314 kernel: audit: initializing netlink subsys (disabled) Apr 30 03:32:08.942330 kernel: audit: type=2000 audit(1745983928.981:1): state=initialized audit_enabled=0 res=1 Apr 30 03:32:08.942346 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 30 03:32:08.942362 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 30 03:32:08.942379 kernel: cpuidle: using governor menu Apr 30 03:32:08.942396 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 30 03:32:08.942412 kernel: dca service started, version 1.12.1 Apr 30 03:32:08.942428 kernel: PCI: Using configuration type 1 for base access Apr 30 03:32:08.942448 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Apr 30 03:32:08.942465 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 30 03:32:08.942481 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Apr 30 03:32:08.942498 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 30 03:32:08.942515 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 30 03:32:08.942531 kernel: ACPI: Added _OSI(Module Device) Apr 30 03:32:08.942548 kernel: ACPI: Added _OSI(Processor Device) Apr 30 03:32:08.942565 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Apr 30 03:32:08.942581 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 30 03:32:08.942601 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Apr 30 03:32:08.942617 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Apr 30 03:32:08.942633 kernel: ACPI: Interpreter enabled Apr 30 03:32:08.942650 kernel: ACPI: PM: (supports S0 S5) Apr 30 03:32:08.942667 kernel: ACPI: Using IOAPIC for interrupt routing Apr 30 03:32:08.942683 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 30 03:32:08.942700 kernel: PCI: Using E820 reservations for host bridge windows Apr 30 03:32:08.942716 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Apr 30 03:32:08.942733 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Apr 30 03:32:08.943370 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Apr 30 03:32:08.943526 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Apr 30 03:32:08.943662 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Apr 30 03:32:08.943684 kernel: acpiphp: Slot [3] registered Apr 30 03:32:08.943699 kernel: acpiphp: Slot [4] registered Apr 30 03:32:08.943714 kernel: acpiphp: Slot [5] registered Apr 30 03:32:08.943731 kernel: acpiphp: Slot [6] registered Apr 30 03:32:08.943750 kernel: acpiphp: Slot [7] registered Apr 30 03:32:08.943766 kernel: acpiphp: Slot [8] registered Apr 30 03:32:08.943781 kernel: acpiphp: Slot [9] registered Apr 30 03:32:08.943797 kernel: acpiphp: Slot [10] registered Apr 30 03:32:08.943825 kernel: acpiphp: Slot [11] registered Apr 30 03:32:08.943839 kernel: acpiphp: Slot [12] registered Apr 30 03:32:08.943873 kernel: acpiphp: Slot [13] registered Apr 30 03:32:08.943889 kernel: acpiphp: Slot [14] registered Apr 30 03:32:08.943905 kernel: acpiphp: Slot [15] registered Apr 30 03:32:08.943925 kernel: acpiphp: Slot [16] registered Apr 30 03:32:08.943941 kernel: acpiphp: Slot [17] registered Apr 30 03:32:08.943958 kernel: acpiphp: Slot [18] registered Apr 30 03:32:08.943974 kernel: acpiphp: Slot [19] registered Apr 30 03:32:08.943989 kernel: acpiphp: Slot [20] registered Apr 30 03:32:08.944004 kernel: acpiphp: Slot [21] registered Apr 30 03:32:08.944020 kernel: acpiphp: Slot [22] registered Apr 30 03:32:08.944036 kernel: acpiphp: Slot [23] registered Apr 30 03:32:08.944052 kernel: acpiphp: Slot [24] registered Apr 30 03:32:08.944069 kernel: acpiphp: Slot [25] registered Apr 30 03:32:08.944088 kernel: acpiphp: Slot [26] registered Apr 30 03:32:08.944104 kernel: acpiphp: Slot [27] registered Apr 30 03:32:08.944121 kernel: acpiphp: Slot [28] registered Apr 30 03:32:08.944138 kernel: acpiphp: Slot [29] registered Apr 30 03:32:08.944154 kernel: acpiphp: Slot [30] registered Apr 30 03:32:08.944171 kernel: acpiphp: Slot [31] registered Apr 30 03:32:08.944187 kernel: PCI host bridge to bus 0000:00 
Apr 30 03:32:08.944350 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Apr 30 03:32:08.944477 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Apr 30 03:32:08.944603 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Apr 30 03:32:08.944723 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Apr 30 03:32:08.948076 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window] Apr 30 03:32:08.948256 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Apr 30 03:32:08.948417 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Apr 30 03:32:08.948562 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Apr 30 03:32:08.948711 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Apr 30 03:32:08.948899 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Apr 30 03:32:08.949039 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Apr 30 03:32:08.949174 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Apr 30 03:32:08.949309 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Apr 30 03:32:08.949444 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Apr 30 03:32:08.949582 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Apr 30 03:32:08.949723 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Apr 30 03:32:08.949981 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Apr 30 03:32:08.950122 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref] Apr 30 03:32:08.950257 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Apr 30 03:32:08.950393 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb Apr 30 03:32:08.950530 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Apr 30 03:32:08.950676 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Apr 30 03:32:08.951927 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff] Apr 30 03:32:08.952113 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Apr 30 03:32:08.952260 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff] Apr 30 03:32:08.952281 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Apr 30 03:32:08.952299 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Apr 30 03:32:08.952316 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Apr 30 03:32:08.952330 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Apr 30 03:32:08.952351 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Apr 30 03:32:08.952367 kernel: iommu: Default domain type: Translated Apr 30 03:32:08.952383 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 30 03:32:08.952401 kernel: efivars: Registered efivars operations Apr 30 03:32:08.952418 kernel: PCI: Using ACPI for IRQ routing Apr 30 03:32:08.952434 kernel: PCI: pci_cache_line_size set to 64 bytes Apr 30 03:32:08.952453 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff] Apr 30 03:32:08.952471 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff] Apr 30 03:32:08.952623 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Apr 30 03:32:08.952775 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Apr 30 03:32:08.955026 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Apr 30 03:32:08.955069 kernel: vgaarb: loaded Apr 30 03:32:08.955089 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 
8, 0, 0, 0, 0, 0, 0 Apr 30 03:32:08.955110 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Apr 30 03:32:08.955129 kernel: clocksource: Switched to clocksource kvm-clock Apr 30 03:32:08.955149 kernel: VFS: Disk quotas dquot_6.6.0 Apr 30 03:32:08.955169 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 30 03:32:08.955194 kernel: pnp: PnP ACPI init Apr 30 03:32:08.955213 kernel: pnp: PnP ACPI: found 5 devices Apr 30 03:32:08.955233 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 30 03:32:08.955251 kernel: NET: Registered PF_INET protocol family Apr 30 03:32:08.955270 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 30 03:32:08.955290 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Apr 30 03:32:08.955309 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 30 03:32:08.955329 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Apr 30 03:32:08.955347 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Apr 30 03:32:08.955370 kernel: TCP: Hash tables configured (established 16384 bind 16384) Apr 30 03:32:08.955390 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Apr 30 03:32:08.955407 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Apr 30 03:32:08.955420 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 30 03:32:08.955434 kernel: NET: Registered PF_XDP protocol family Apr 30 03:32:08.955566 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 30 03:32:08.955687 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 30 03:32:08.955824 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 30 03:32:08.955958 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Apr 30 03:32:08.956076 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window] Apr 30 03:32:08.956221 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Apr 30 03:32:08.956243 kernel: PCI: CLS 0 bytes, default 64 Apr 30 03:32:08.956260 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Apr 30 03:32:08.956277 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Apr 30 03:32:08.956294 kernel: clocksource: Switched to clocksource tsc Apr 30 03:32:08.956311 kernel: Initialise system trusted keyrings Apr 30 03:32:08.956327 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Apr 30 03:32:08.956349 kernel: Key type asymmetric registered Apr 30 03:32:08.956365 kernel: Asymmetric key parser 'x509' registered Apr 30 03:32:08.956381 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 30 03:32:08.956398 kernel: io scheduler mq-deadline registered Apr 30 03:32:08.956414 kernel: io scheduler kyber registered Apr 30 03:32:08.956431 kernel: io scheduler bfq registered Apr 30 03:32:08.956448 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 30 03:32:08.956464 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 30 03:32:08.956480 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 30 03:32:08.956500 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 30 03:32:08.956515 kernel: i8042: Warning: Keylock active Apr 30 03:32:08.956532 kernel: serio: i8042 
KBD port at 0x60,0x64 irq 1 Apr 30 03:32:08.956548 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 30 03:32:08.956700 kernel: rtc_cmos 00:00: RTC can wake from S4 Apr 30 03:32:08.957896 kernel: rtc_cmos 00:00: registered as rtc0 Apr 30 03:32:08.958053 kernel: rtc_cmos 00:00: setting system clock to 2025-04-30T03:32:08 UTC (1745983928) Apr 30 03:32:08.958182 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Apr 30 03:32:08.958209 kernel: intel_pstate: CPU model not supported Apr 30 03:32:08.958226 kernel: efifb: probing for efifb Apr 30 03:32:08.958243 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k Apr 30 03:32:08.958261 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Apr 30 03:32:08.958277 kernel: efifb: scrolling: redraw Apr 30 03:32:08.958293 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Apr 30 03:32:08.958311 kernel: Console: switching to colour frame buffer device 100x37 Apr 30 03:32:08.958328 kernel: fb0: EFI VGA frame buffer device Apr 30 03:32:08.958344 kernel: pstore: Using crash dump compression: deflate Apr 30 03:32:08.958365 kernel: pstore: Registered efi_pstore as persistent store backend Apr 30 03:32:08.958382 kernel: NET: Registered PF_INET6 protocol family Apr 30 03:32:08.958398 kernel: Segment Routing with IPv6 Apr 30 03:32:08.958415 kernel: In-situ OAM (IOAM) with IPv6 Apr 30 03:32:08.958432 kernel: NET: Registered PF_PACKET protocol family Apr 30 03:32:08.958449 kernel: Key type dns_resolver registered Apr 30 03:32:08.958493 kernel: IPI shorthand broadcast: enabled Apr 30 03:32:08.958514 kernel: sched_clock: Marking stable (456001916, 122960927)->(647647168, -68684325) Apr 30 03:32:08.958533 kernel: registered taskstats version 1 Apr 30 03:32:08.958553 kernel: Loading compiled-in X.509 certificates Apr 30 03:32:08.958571 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 4a2605119c3649b55d5796c3fe312b2581bff37b' Apr 30 03:32:08.958588 kernel: Key type .fscrypt registered Apr 30 03:32:08.958608 kernel: Key type fscrypt-provisioning registered Apr 30 03:32:08.958626 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 30 03:32:08.958644 kernel: ima: Allocated hash algorithm: sha1 Apr 30 03:32:08.958661 kernel: ima: No architecture policies found Apr 30 03:32:08.958679 kernel: clk: Disabling unused clocks Apr 30 03:32:08.958695 kernel: Freeing unused kernel image (initmem) memory: 42864K Apr 30 03:32:08.958717 kernel: Write protecting the kernel read-only data: 36864k Apr 30 03:32:08.958735 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K Apr 30 03:32:08.958753 kernel: Run /init as init process Apr 30 03:32:08.958770 kernel: with arguments: Apr 30 03:32:08.958787 kernel: /init Apr 30 03:32:08.962015 kernel: with environment: Apr 30 03:32:08.962043 kernel: HOME=/ Apr 30 03:32:08.962060 kernel: TERM=linux Apr 30 03:32:08.962077 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Apr 30 03:32:08.962107 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 30 03:32:08.962127 systemd[1]: Detected virtualization amazon. Apr 30 03:32:08.962145 systemd[1]: Detected architecture x86-64. Apr 30 03:32:08.962162 systemd[1]: Running in initrd. 
Apr 30 03:32:08.962180 systemd[1]: No hostname configured, using default hostname. Apr 30 03:32:08.962197 systemd[1]: Hostname set to . Apr 30 03:32:08.962218 systemd[1]: Initializing machine ID from VM UUID. Apr 30 03:32:08.962237 systemd[1]: Queued start job for default target initrd.target. Apr 30 03:32:08.962255 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 03:32:08.962273 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 03:32:08.962292 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 30 03:32:08.962309 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 03:32:08.962327 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 30 03:32:08.962349 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 30 03:32:08.962369 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 30 03:32:08.962387 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 30 03:32:08.962405 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 03:32:08.962423 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 03:32:08.962444 systemd[1]: Reached target paths.target - Path Units. Apr 30 03:32:08.962462 systemd[1]: Reached target slices.target - Slice Units. Apr 30 03:32:08.962480 systemd[1]: Reached target swap.target - Swaps. Apr 30 03:32:08.962497 systemd[1]: Reached target timers.target - Timer Units. Apr 30 03:32:08.962515 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 03:32:08.962532 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 03:32:08.962551 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 30 03:32:08.962568 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 30 03:32:08.962586 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 03:32:08.962607 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 03:32:08.962625 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 03:32:08.962642 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 03:32:08.962660 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 30 03:32:08.962678 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 03:32:08.962696 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 30 03:32:08.962714 systemd[1]: Starting systemd-fsck-usr.service... Apr 30 03:32:08.962732 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 03:32:08.962753 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 03:32:08.962771 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:32:08.962788 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 30 03:32:08.962823 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 03:32:08.962841 systemd[1]: Finished systemd-fsck-usr.service. 
Apr 30 03:32:08.962860 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 30 03:32:08.962882 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 03:32:08.962945 systemd-journald[178]: Collecting audit messages is disabled. Apr 30 03:32:08.962984 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 03:32:08.963006 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:32:08.963024 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 03:32:08.963042 systemd-journald[178]: Journal started Apr 30 03:32:08.963079 systemd-journald[178]: Runtime Journal (/run/log/journal/ec288589cc88fe2e8ae4aac66cc354fa) is 4.7M, max 38.2M, 33.4M free. Apr 30 03:32:08.936429 systemd-modules-load[179]: Inserted module 'overlay' Apr 30 03:32:08.971962 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 03:32:08.974401 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 03:32:08.988867 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 30 03:32:08.989560 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 03:32:08.994682 kernel: Bridge firewalling registered Apr 30 03:32:08.990833 systemd-modules-load[179]: Inserted module 'br_netfilter' Apr 30 03:32:08.996501 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 03:32:09.002834 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 30 03:32:09.007021 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 03:32:09.008183 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 03:32:09.026054 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 30 03:32:09.028195 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 03:32:09.035468 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 03:32:09.038246 dracut-cmdline[209]: dracut-dracut-053 Apr 30 03:32:09.041673 dracut-cmdline[209]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d Apr 30 03:32:09.043243 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 03:32:09.070470 systemd-resolved[221]: Positive Trust Anchors: Apr 30 03:32:09.071203 systemd-resolved[221]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 03:32:09.071242 systemd-resolved[221]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 03:32:09.076681 systemd-resolved[221]: Defaulting to hostname 'linux'. Apr 30 03:32:09.078265 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 03:32:09.079140 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 03:32:09.114842 kernel: SCSI subsystem initialized Apr 30 03:32:09.124835 kernel: Loading iSCSI transport class v2.0-870. Apr 30 03:32:09.136832 kernel: iscsi: registered transport (tcp) Apr 30 03:32:09.158060 kernel: iscsi: registered transport (qla4xxx) Apr 30 03:32:09.158133 kernel: QLogic iSCSI HBA Driver Apr 30 03:32:09.197822 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 30 03:32:09.202042 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 30 03:32:09.238908 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 30 03:32:09.238991 kernel: device-mapper: uevent: version 1.0.3 Apr 30 03:32:09.240065 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 30 03:32:09.281839 kernel: raid6: avx512x4 gen() 15098 MB/s Apr 30 03:32:09.299830 kernel: raid6: avx512x2 gen() 14980 MB/s Apr 30 03:32:09.317830 kernel: raid6: avx512x1 gen() 15086 MB/s Apr 30 03:32:09.334836 kernel: raid6: avx2x4 gen() 14926 MB/s Apr 30 03:32:09.351830 kernel: raid6: avx2x2 gen() 14912 MB/s Apr 30 03:32:09.369089 kernel: raid6: avx2x1 gen() 11484 MB/s Apr 30 03:32:09.369145 kernel: raid6: using algorithm avx512x4 gen() 15098 MB/s Apr 30 03:32:09.388841 kernel: raid6: .... xor() 7945 MB/s, rmw enabled Apr 30 03:32:09.388903 kernel: raid6: using avx512x2 recovery algorithm Apr 30 03:32:09.410848 kernel: xor: automatically using best checksumming function avx Apr 30 03:32:09.572840 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 30 03:32:09.583728 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 30 03:32:09.593091 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 03:32:09.606875 systemd-udevd[397]: Using default interface naming scheme 'v255'. Apr 30 03:32:09.612202 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 03:32:09.620042 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 30 03:32:09.640216 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation Apr 30 03:32:09.671233 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 03:32:09.677056 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 03:32:09.727715 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 03:32:09.737028 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Apr 30 03:32:09.758143 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 30 03:32:09.761420 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 03:32:09.763948 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 03:32:09.765118 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 03:32:09.772317 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 30 03:32:09.797633 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 30 03:32:09.832839 kernel: cryptd: max_cpu_qlen set to 1000 Apr 30 03:32:09.843376 kernel: ena 0000:00:05.0: ENA device version: 0.10 Apr 30 03:32:09.879744 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Apr 30 03:32:09.880003 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Apr 30 03:32:09.880171 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:16:ba:f3:9a:f3 Apr 30 03:32:09.880323 kernel: AVX2 version of gcm_enc/dec engaged. Apr 30 03:32:09.880344 kernel: AES CTR mode by8 optimization enabled Apr 30 03:32:09.856576 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 03:32:09.856754 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 03:32:09.857623 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 03:32:09.858267 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 03:32:09.858468 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:32:09.859119 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:32:09.866152 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:32:09.879207 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 03:32:09.879404 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:32:09.882341 (udev-worker)[441]: Network interface NamePolicy= disabled on kernel command line. Apr 30 03:32:09.907112 kernel: nvme nvme0: pci function 0000:00:04.0 Apr 30 03:32:09.907338 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Apr 30 03:32:09.889479 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:32:09.917831 kernel: nvme nvme0: 2/0/0 default/read/poll queues Apr 30 03:32:09.926712 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 30 03:32:09.926783 kernel: GPT:9289727 != 16777215 Apr 30 03:32:09.926818 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 30 03:32:09.928827 kernel: GPT:9289727 != 16777215 Apr 30 03:32:09.928884 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 30 03:32:09.931446 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 30 03:32:09.936531 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:32:09.946078 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 03:32:09.963339 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Apr 30 03:32:09.998836 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (454) Apr 30 03:32:10.032588 kernel: BTRFS: device fsid 24af5149-14c0-4f50-b6d3-2f5c9259df26 devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (443) Apr 30 03:32:10.081511 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Apr 30 03:32:10.101471 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Apr 30 03:32:10.108153 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Apr 30 03:32:10.118697 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Apr 30 03:32:10.119339 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Apr 30 03:32:10.124998 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 30 03:32:10.133719 disk-uuid[630]: Primary Header is updated. Apr 30 03:32:10.133719 disk-uuid[630]: Secondary Entries is updated. Apr 30 03:32:10.133719 disk-uuid[630]: Secondary Header is updated. Apr 30 03:32:10.138879 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 30 03:32:10.144845 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 30 03:32:10.150828 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 30 03:32:11.154303 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 30 03:32:11.154356 disk-uuid[631]: The operation has completed successfully. Apr 30 03:32:11.257907 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 30 03:32:11.258016 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 30 03:32:11.280096 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 30 03:32:11.284920 sh[974]: Success Apr 30 03:32:11.306840 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Apr 30 03:32:11.398158 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 30 03:32:11.406936 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 30 03:32:11.409257 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 30 03:32:11.446377 kernel: BTRFS info (device dm-0): first mount of filesystem 24af5149-14c0-4f50-b6d3-2f5c9259df26 Apr 30 03:32:11.446441 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:32:11.446456 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 30 03:32:11.449466 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 30 03:32:11.449522 kernel: BTRFS info (device dm-0): using free space tree Apr 30 03:32:11.561848 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 30 03:32:11.574118 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 30 03:32:11.575156 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 30 03:32:11.580997 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 30 03:32:11.583154 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Apr 30 03:32:11.604417 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:32:11.604490 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:32:11.606323 kernel: BTRFS info (device nvme0n1p6): using free space tree Apr 30 03:32:11.612851 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Apr 30 03:32:11.627853 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:32:11.628308 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 30 03:32:11.636497 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 30 03:32:11.644108 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 30 03:32:11.680777 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 03:32:11.688219 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 03:32:11.709654 systemd-networkd[1166]: lo: Link UP Apr 30 03:32:11.709668 systemd-networkd[1166]: lo: Gained carrier Apr 30 03:32:11.711363 systemd-networkd[1166]: Enumeration completed Apr 30 03:32:11.711920 systemd-networkd[1166]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:32:11.711925 systemd-networkd[1166]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 03:32:11.713758 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 03:32:11.715421 systemd[1]: Reached target network.target - Network. Apr 30 03:32:11.716422 systemd-networkd[1166]: eth0: Link UP Apr 30 03:32:11.716431 systemd-networkd[1166]: eth0: Gained carrier Apr 30 03:32:11.716445 systemd-networkd[1166]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:32:11.732045 systemd-networkd[1166]: eth0: DHCPv4 address 172.31.18.209/20, gateway 172.31.16.1 acquired from 172.31.16.1 Apr 30 03:32:12.025615 ignition[1111]: Ignition 2.19.0 Apr 30 03:32:12.025626 ignition[1111]: Stage: fetch-offline Apr 30 03:32:12.025854 ignition[1111]: no configs at "/usr/lib/ignition/base.d" Apr 30 03:32:12.025863 ignition[1111]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 30 03:32:12.026091 ignition[1111]: Ignition finished successfully Apr 30 03:32:12.027766 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 03:32:12.033033 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Apr 30 03:32:12.046598 ignition[1174]: Ignition 2.19.0 Apr 30 03:32:12.046613 ignition[1174]: Stage: fetch Apr 30 03:32:12.046972 ignition[1174]: no configs at "/usr/lib/ignition/base.d" Apr 30 03:32:12.046982 ignition[1174]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 30 03:32:12.047066 ignition[1174]: PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 30 03:32:12.062880 ignition[1174]: PUT result: OK Apr 30 03:32:12.064926 ignition[1174]: parsed url from cmdline: "" Apr 30 03:32:12.064936 ignition[1174]: no config URL provided Apr 30 03:32:12.064944 ignition[1174]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 03:32:12.064967 ignition[1174]: no config at "/usr/lib/ignition/user.ign" Apr 30 03:32:12.065000 ignition[1174]: PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 30 03:32:12.065797 ignition[1174]: PUT result: OK Apr 30 03:32:12.065855 ignition[1174]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Apr 30 03:32:12.066472 ignition[1174]: GET result: OK Apr 30 03:32:12.066542 ignition[1174]: parsing config with SHA512: 65aa90455db694a7b0075204e8188c2866da3f116967a78fd6c4b14f78ba2f72ee546243181129d4bb9981aa39349c375c646bdb752b313518a92c1d228dae3e Apr 30 03:32:12.070903 unknown[1174]: fetched base config from "system" Apr 30 03:32:12.071027 unknown[1174]: fetched base config from "system" Apr 30 03:32:12.071041 unknown[1174]: fetched user config from "aws" Apr 30 03:32:12.071702 ignition[1174]: fetch: fetch complete Apr 30 03:32:12.071708 ignition[1174]: fetch: fetch passed Apr 30 03:32:12.071752 ignition[1174]: Ignition finished successfully Apr 30 03:32:12.073768 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 30 03:32:12.078043 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 30 03:32:12.093967 ignition[1180]: Ignition 2.19.0 Apr 30 03:32:12.093979 ignition[1180]: Stage: kargs Apr 30 03:32:12.094373 ignition[1180]: no configs at "/usr/lib/ignition/base.d" Apr 30 03:32:12.094383 ignition[1180]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 30 03:32:12.094467 ignition[1180]: PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 30 03:32:12.095274 ignition[1180]: PUT result: OK Apr 30 03:32:12.098023 ignition[1180]: kargs: kargs passed Apr 30 03:32:12.098087 ignition[1180]: Ignition finished successfully Apr 30 03:32:12.099646 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 30 03:32:12.104985 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 30 03:32:12.118606 ignition[1186]: Ignition 2.19.0 Apr 30 03:32:12.118617 ignition[1186]: Stage: disks Apr 30 03:32:12.119021 ignition[1186]: no configs at "/usr/lib/ignition/base.d" Apr 30 03:32:12.119031 ignition[1186]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 30 03:32:12.119116 ignition[1186]: PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 30 03:32:12.120106 ignition[1186]: PUT result: OK Apr 30 03:32:12.122370 ignition[1186]: disks: disks passed Apr 30 03:32:12.122430 ignition[1186]: Ignition finished successfully Apr 30 03:32:12.123402 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 30 03:32:12.124357 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 30 03:32:12.124702 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 30 03:32:12.125166 systemd[1]: Reached target local-fs.target - Local File Systems. 
Apr 30 03:32:12.125654 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 03:32:12.126232 systemd[1]: Reached target basic.target - Basic System. Apr 30 03:32:12.129973 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 30 03:32:12.154618 systemd-fsck[1194]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 30 03:32:12.157840 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 30 03:32:12.162939 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 30 03:32:12.266847 kernel: EXT4-fs (nvme0n1p9): mounted filesystem c246962b-d3a7-4703-a2cb-a633fbca1b76 r/w with ordered data mode. Quota mode: none. Apr 30 03:32:12.266792 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 30 03:32:12.268175 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 30 03:32:12.284512 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 03:32:12.286642 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 30 03:32:12.288021 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 30 03:32:12.288404 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 30 03:32:12.288430 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 03:32:12.296001 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 30 03:32:12.300176 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 30 03:32:12.308019 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1213) Apr 30 03:32:12.311065 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:32:12.311135 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:32:12.313482 kernel: BTRFS info (device nvme0n1p6): using free space tree Apr 30 03:32:12.326838 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Apr 30 03:32:12.328668 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 30 03:32:12.540649 initrd-setup-root[1237]: cut: /sysroot/etc/passwd: No such file or directory Apr 30 03:32:12.558614 initrd-setup-root[1244]: cut: /sysroot/etc/group: No such file or directory Apr 30 03:32:12.563654 initrd-setup-root[1251]: cut: /sysroot/etc/shadow: No such file or directory Apr 30 03:32:12.568731 initrd-setup-root[1258]: cut: /sysroot/etc/gshadow: No such file or directory Apr 30 03:32:12.761139 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 30 03:32:12.768089 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 30 03:32:12.772102 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 30 03:32:12.780119 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Apr 30 03:32:12.781820 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:32:12.808938 ignition[1325]: INFO : Ignition 2.19.0 Apr 30 03:32:12.810574 ignition[1325]: INFO : Stage: mount Apr 30 03:32:12.810574 ignition[1325]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 03:32:12.810574 ignition[1325]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 30 03:32:12.810574 ignition[1325]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 30 03:32:12.815507 ignition[1325]: INFO : PUT result: OK Apr 30 03:32:12.815507 ignition[1325]: INFO : mount: mount passed Apr 30 03:32:12.815507 ignition[1325]: INFO : Ignition finished successfully Apr 30 03:32:12.818523 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 30 03:32:12.824043 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 30 03:32:12.827361 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 30 03:32:12.844216 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 03:32:12.861838 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1338) Apr 30 03:32:12.865523 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:32:12.865584 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:32:12.865599 kernel: BTRFS info (device nvme0n1p6): using free space tree Apr 30 03:32:12.871852 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Apr 30 03:32:12.873520 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 30 03:32:12.893633 ignition[1355]: INFO : Ignition 2.19.0 Apr 30 03:32:12.893633 ignition[1355]: INFO : Stage: files Apr 30 03:32:12.894698 ignition[1355]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 03:32:12.894698 ignition[1355]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 30 03:32:12.894698 ignition[1355]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 30 03:32:12.895650 ignition[1355]: INFO : PUT result: OK Apr 30 03:32:12.897729 ignition[1355]: DEBUG : files: compiled without relabeling support, skipping Apr 30 03:32:12.910699 ignition[1355]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 30 03:32:12.910699 ignition[1355]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 30 03:32:12.937028 ignition[1355]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 30 03:32:12.937786 ignition[1355]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 30 03:32:12.937786 ignition[1355]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 30 03:32:12.937396 unknown[1355]: wrote ssh authorized keys file for user: core Apr 30 03:32:12.940174 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Apr 30 03:32:12.940837 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Apr 30 03:32:12.940837 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Apr 30 03:32:12.940837 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: 
attempt #1 Apr 30 03:32:13.696072 systemd-networkd[1166]: eth0: Gained IPv6LL Apr 30 03:32:15.123824 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Apr 30 03:32:15.327747 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Apr 30 03:32:15.328988 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 30 03:32:15.328988 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Apr 30 03:32:15.777294 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Apr 30 03:32:15.948977 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 30 03:32:15.948977 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Apr 30 03:32:15.951999 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Apr 30 03:32:15.951999 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 30 03:32:15.951999 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 30 03:32:15.951999 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 03:32:15.951999 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 03:32:15.951999 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 03:32:15.951999 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 03:32:15.951999 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 03:32:15.951999 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 03:32:15.951999 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Apr 30 03:32:15.951999 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Apr 30 03:32:15.951999 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Apr 30 03:32:15.960587 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Apr 30 03:32:16.329204 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Apr 30 03:32:16.575188 ignition[1355]: INFO : files: createFilesystemsFiles: 
createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Apr 30 03:32:16.575188 ignition[1355]: INFO : files: op(d): [started] processing unit "containerd.service" Apr 30 03:32:16.577892 ignition[1355]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Apr 30 03:32:16.577892 ignition[1355]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Apr 30 03:32:16.577892 ignition[1355]: INFO : files: op(d): [finished] processing unit "containerd.service" Apr 30 03:32:16.577892 ignition[1355]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Apr 30 03:32:16.577892 ignition[1355]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 03:32:16.577892 ignition[1355]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 03:32:16.577892 ignition[1355]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Apr 30 03:32:16.577892 ignition[1355]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Apr 30 03:32:16.577892 ignition[1355]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Apr 30 03:32:16.577892 ignition[1355]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 30 03:32:16.577892 ignition[1355]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 30 03:32:16.577892 ignition[1355]: INFO : files: files passed Apr 30 03:32:16.577892 ignition[1355]: INFO : Ignition finished successfully Apr 30 03:32:16.579204 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 30 03:32:16.588181 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 30 03:32:16.593024 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 30 03:32:16.595204 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 30 03:32:16.595343 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 30 03:32:16.617559 initrd-setup-root-after-ignition[1384]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 03:32:16.617559 initrd-setup-root-after-ignition[1384]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 30 03:32:16.620693 initrd-setup-root-after-ignition[1388]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 03:32:16.620682 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 03:32:16.622096 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 30 03:32:16.635070 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 30 03:32:16.663013 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 30 03:32:16.663150 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 30 03:32:16.664382 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Apr 30 03:32:16.665484 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 30 03:32:16.666236 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 30 03:32:16.668074 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 30 03:32:16.694620 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 03:32:16.700169 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 30 03:32:16.712644 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 30 03:32:16.713343 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 03:32:16.714319 systemd[1]: Stopped target timers.target - Timer Units. Apr 30 03:32:16.715193 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 30 03:32:16.715370 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 03:32:16.716707 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 30 03:32:16.717586 systemd[1]: Stopped target basic.target - Basic System. Apr 30 03:32:16.718383 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 30 03:32:16.719137 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 03:32:16.719952 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 30 03:32:16.720654 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 30 03:32:16.721408 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 03:32:16.722182 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 30 03:32:16.723286 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 30 03:32:16.724140 systemd[1]: Stopped target swap.target - Swaps. Apr 30 03:32:16.724840 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 30 03:32:16.725018 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 30 03:32:16.726096 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 30 03:32:16.726874 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 03:32:16.727534 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 30 03:32:16.727681 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 03:32:16.728342 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 30 03:32:16.728509 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 30 03:32:16.729877 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 30 03:32:16.730056 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 03:32:16.730744 systemd[1]: ignition-files.service: Deactivated successfully. Apr 30 03:32:16.730917 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 30 03:32:16.743446 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 30 03:32:16.748095 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 30 03:32:16.749517 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 30 03:32:16.749729 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 03:32:16.751858 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Apr 30 03:32:16.752037 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 03:32:16.759951 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 30 03:32:16.760077 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 30 03:32:16.767868 ignition[1408]: INFO : Ignition 2.19.0 Apr 30 03:32:16.768927 ignition[1408]: INFO : Stage: umount Apr 30 03:32:16.768927 ignition[1408]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 03:32:16.768927 ignition[1408]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 30 03:32:16.768927 ignition[1408]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 30 03:32:16.771349 ignition[1408]: INFO : PUT result: OK Apr 30 03:32:16.774530 ignition[1408]: INFO : umount: umount passed Apr 30 03:32:16.775149 ignition[1408]: INFO : Ignition finished successfully Apr 30 03:32:16.777314 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 30 03:32:16.777484 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 30 03:32:16.779287 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 30 03:32:16.779349 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 30 03:32:16.779937 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 30 03:32:16.779993 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 30 03:32:16.781418 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 30 03:32:16.781472 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 30 03:32:16.782463 systemd[1]: Stopped target network.target - Network. Apr 30 03:32:16.783405 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 30 03:32:16.783463 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 03:32:16.785926 systemd[1]: Stopped target paths.target - Path Units. Apr 30 03:32:16.786676 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 30 03:32:16.789877 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 03:32:16.790202 systemd[1]: Stopped target slices.target - Slice Units. Apr 30 03:32:16.791032 systemd[1]: Stopped target sockets.target - Socket Units. Apr 30 03:32:16.791324 systemd[1]: iscsid.socket: Deactivated successfully. Apr 30 03:32:16.791366 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 03:32:16.791649 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 30 03:32:16.791681 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 03:32:16.791969 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 30 03:32:16.792013 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 30 03:32:16.792284 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 30 03:32:16.792318 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 30 03:32:16.793913 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 30 03:32:16.794397 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 30 03:32:16.796369 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 30 03:32:16.796859 systemd-networkd[1166]: eth0: DHCPv6 lease lost Apr 30 03:32:16.797587 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 30 03:32:16.797677 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Apr 30 03:32:16.798179 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 30 03:32:16.798262 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 30 03:32:16.799449 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 30 03:32:16.799506 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 30 03:32:16.799911 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 30 03:32:16.799955 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 30 03:32:16.804034 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 30 03:32:16.805262 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 30 03:32:16.805916 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 03:32:16.807341 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 03:32:16.812306 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 30 03:32:16.812441 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 30 03:32:16.818681 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 03:32:16.818860 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 03:32:16.820950 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 30 03:32:16.821018 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 30 03:32:16.822742 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 30 03:32:16.823457 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 03:32:16.827104 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 30 03:32:16.827319 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 03:32:16.828794 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 30 03:32:16.829188 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 30 03:32:16.830028 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 30 03:32:16.830075 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 03:32:16.830544 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 30 03:32:16.830605 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 30 03:32:16.832179 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 30 03:32:16.832242 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 30 03:32:16.833034 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 03:32:16.833093 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 03:32:16.840116 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 30 03:32:16.840665 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 30 03:32:16.840756 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 03:32:16.841361 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 30 03:32:16.841420 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 03:32:16.842087 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Apr 30 03:32:16.842147 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 03:32:16.842734 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 03:32:16.842789 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:32:16.843944 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 30 03:32:16.844061 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 30 03:32:16.849618 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 30 03:32:16.849755 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 30 03:32:16.851001 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 30 03:32:16.859040 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 30 03:32:16.870417 systemd[1]: Switching root. Apr 30 03:32:16.895189 systemd-journald[178]: Journal stopped Apr 30 03:32:18.645130 systemd-journald[178]: Received SIGTERM from PID 1 (systemd). Apr 30 03:32:18.645225 kernel: SELinux: policy capability network_peer_controls=1 Apr 30 03:32:18.645255 kernel: SELinux: policy capability open_perms=1 Apr 30 03:32:18.645275 kernel: SELinux: policy capability extended_socket_class=1 Apr 30 03:32:18.645298 kernel: SELinux: policy capability always_check_network=0 Apr 30 03:32:18.645321 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 30 03:32:18.645341 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 30 03:32:18.645358 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 30 03:32:18.645376 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 30 03:32:18.645396 kernel: audit: type=1403 audit(1745983937.509:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 30 03:32:18.645417 systemd[1]: Successfully loaded SELinux policy in 62.256ms. Apr 30 03:32:18.645450 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.891ms. Apr 30 03:32:18.645472 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 30 03:32:18.645493 systemd[1]: Detected virtualization amazon. Apr 30 03:32:18.645512 systemd[1]: Detected architecture x86-64. Apr 30 03:32:18.645530 systemd[1]: Detected first boot. Apr 30 03:32:18.645550 systemd[1]: Initializing machine ID from VM UUID. Apr 30 03:32:18.645569 zram_generator::config[1468]: No configuration found. Apr 30 03:32:18.645591 systemd[1]: Populated /etc with preset unit settings. Apr 30 03:32:18.645613 systemd[1]: Queued start job for default target multi-user.target. Apr 30 03:32:18.645634 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Apr 30 03:32:18.645653 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 30 03:32:18.645673 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 30 03:32:18.645692 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 30 03:32:18.645711 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 30 03:32:18.645731 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. 
Apr 30 03:32:18.645751 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 30 03:32:18.645771 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 30 03:32:18.645793 systemd[1]: Created slice user.slice - User and Session Slice. Apr 30 03:32:18.645826 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 03:32:18.645846 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 03:32:18.645865 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 30 03:32:18.645884 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 30 03:32:18.645903 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 30 03:32:18.645922 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 03:32:18.645941 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 30 03:32:18.645961 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 03:32:18.645983 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 30 03:32:18.646001 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 03:32:18.646020 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 03:32:18.646042 systemd[1]: Reached target slices.target - Slice Units. Apr 30 03:32:18.646067 systemd[1]: Reached target swap.target - Swaps. Apr 30 03:32:18.646090 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 30 03:32:18.646112 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 30 03:32:18.646134 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 30 03:32:18.646160 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 30 03:32:18.646182 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 03:32:18.646204 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 03:32:18.646226 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 03:32:18.646248 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 30 03:32:18.646271 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 30 03:32:18.646292 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 30 03:32:18.646313 systemd[1]: Mounting media.mount - External Media Directory... Apr 30 03:32:18.646334 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:32:18.646356 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 30 03:32:18.646378 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 30 03:32:18.646400 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 30 03:32:18.648344 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 30 03:32:18.648387 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Apr 30 03:32:18.648412 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 03:32:18.648433 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 30 03:32:18.648465 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 03:32:18.648486 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 03:32:18.648511 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 03:32:18.648533 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 30 03:32:18.648555 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 03:32:18.648578 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 30 03:32:18.648601 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Apr 30 03:32:18.648628 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Apr 30 03:32:18.648652 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 03:32:18.648673 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 03:32:18.648699 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 30 03:32:18.648721 kernel: loop: module loaded Apr 30 03:32:18.648743 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 30 03:32:18.648764 kernel: fuse: init (API version 7.39) Apr 30 03:32:18.648785 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 03:32:18.648846 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:32:18.648871 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 30 03:32:18.648892 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 30 03:32:18.648914 systemd[1]: Mounted media.mount - External Media Directory. Apr 30 03:32:18.648946 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 30 03:32:18.648967 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 30 03:32:18.648989 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 30 03:32:18.649011 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 03:32:18.649072 systemd-journald[1566]: Collecting audit messages is disabled. Apr 30 03:32:18.649110 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 30 03:32:18.649131 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 30 03:32:18.649156 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 03:32:18.649175 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 03:32:18.649193 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 03:32:18.649212 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 03:32:18.649232 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 30 03:32:18.649253 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
Apr 30 03:32:18.649273 systemd-journald[1566]: Journal started Apr 30 03:32:18.649310 systemd-journald[1566]: Runtime Journal (/run/log/journal/ec288589cc88fe2e8ae4aac66cc354fa) is 4.7M, max 38.2M, 33.4M free. Apr 30 03:32:18.653412 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 03:32:18.653458 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 03:32:18.653697 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 03:32:18.659178 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 03:32:18.661350 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 30 03:32:18.663668 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 30 03:32:18.682330 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 30 03:32:18.708083 kernel: ACPI: bus type drm_connector registered Apr 30 03:32:18.710693 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 30 03:32:18.714935 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 30 03:32:18.716123 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 30 03:32:18.736261 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 30 03:32:18.742074 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 30 03:32:18.743273 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 03:32:18.749621 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 30 03:32:18.754646 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 03:32:18.767082 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 03:32:18.786966 systemd-journald[1566]: Time spent on flushing to /var/log/journal/ec288589cc88fe2e8ae4aac66cc354fa is 77.226ms for 972 entries. Apr 30 03:32:18.786966 systemd-journald[1566]: System Journal (/var/log/journal/ec288589cc88fe2e8ae4aac66cc354fa) is 8.0M, max 195.6M, 187.6M free. Apr 30 03:32:18.886120 systemd-journald[1566]: Received client request to flush runtime journal. Apr 30 03:32:18.782036 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 30 03:32:18.789146 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 30 03:32:18.790246 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 03:32:18.797109 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 03:32:18.797982 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 30 03:32:18.798684 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 30 03:32:18.823356 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 30 03:32:18.824187 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 30 03:32:18.839656 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 03:32:18.850043 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... 
Apr 30 03:32:18.871367 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 03:32:18.890477 systemd-tmpfiles[1618]: ACLs are not supported, ignoring. Apr 30 03:32:18.891127 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 30 03:32:18.893392 systemd-tmpfiles[1618]: ACLs are not supported, ignoring. Apr 30 03:32:18.897424 udevadm[1629]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Apr 30 03:32:18.904567 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 03:32:18.911229 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 30 03:32:18.958229 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 30 03:32:18.967030 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 03:32:18.993677 systemd-tmpfiles[1643]: ACLs are not supported, ignoring. Apr 30 03:32:18.994160 systemd-tmpfiles[1643]: ACLs are not supported, ignoring. Apr 30 03:32:19.000833 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 03:32:19.546792 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 30 03:32:19.553007 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 03:32:19.591153 systemd-udevd[1649]: Using default interface naming scheme 'v255'. Apr 30 03:32:19.621729 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 03:32:19.629526 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 03:32:19.666955 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 30 03:32:19.668548 (udev-worker)[1656]: Network interface NamePolicy= disabled on kernel command line. Apr 30 03:32:19.673660 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Apr 30 03:32:19.729831 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Apr 30 03:32:19.733180 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 30 03:32:19.752837 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Apr 30 03:32:19.764824 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Apr 30 03:32:19.774832 kernel: ACPI: button: Power Button [PWRF] Apr 30 03:32:19.776822 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Apr 30 03:32:19.780166 kernel: ACPI: button: Sleep Button [SLPF] Apr 30 03:32:19.801824 kernel: mousedev: PS/2 mouse device common for all mice Apr 30 03:32:19.811196 systemd-networkd[1652]: lo: Link UP Apr 30 03:32:19.811871 systemd-networkd[1652]: lo: Gained carrier Apr 30 03:32:19.813278 systemd-networkd[1652]: Enumeration completed Apr 30 03:32:19.813985 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 03:32:19.815317 systemd-networkd[1652]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:32:19.815324 systemd-networkd[1652]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Apr 30 03:32:19.822350 systemd-networkd[1652]: eth0: Link UP Apr 30 03:32:19.823022 systemd-networkd[1652]: eth0: Gained carrier Apr 30 03:32:19.823679 systemd-networkd[1652]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:32:19.824008 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 30 03:32:19.834899 systemd-networkd[1652]: eth0: DHCPv4 address 172.31.18.209/20, gateway 172.31.16.1 acquired from 172.31.16.1 Apr 30 03:32:19.838045 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:32:19.862835 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1661) Apr 30 03:32:19.962270 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:32:19.970123 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Apr 30 03:32:19.970976 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 30 03:32:19.977024 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 30 03:32:20.003925 lvm[1773]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 03:32:20.027739 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 30 03:32:20.028504 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 03:32:20.032997 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 30 03:32:20.040056 lvm[1776]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 03:32:20.068845 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 30 03:32:20.069456 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 30 03:32:20.070279 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 30 03:32:20.070311 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 03:32:20.070660 systemd[1]: Reached target machines.target - Containers. Apr 30 03:32:20.072150 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 30 03:32:20.076986 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 30 03:32:20.078859 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 30 03:32:20.079361 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:32:20.081945 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 30 03:32:20.086656 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 30 03:32:20.090944 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 30 03:32:20.092317 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 30 03:32:20.106826 kernel: loop0: detected capacity change from 0 to 210664 Apr 30 03:32:20.115547 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
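The DHCPv4 lease logged above (172.31.18.209/20 with gateway 172.31.16.1) can be sanity-checked with the standard library; a small sketch, assuming nothing beyond the address and prefix length shown in the log:

    import ipaddress

    # Lease reported by systemd-networkd: 172.31.18.209/20, gateway 172.31.16.1.
    iface = ipaddress.ip_interface("172.31.18.209/20")
    print(iface.network)                 # 172.31.16.0/20 -- the gateway sits at .16.1
    print(iface.network.num_addresses)   # 4096 addresses in this VPC block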
Apr 30 03:32:20.126703 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 30 03:32:20.129698 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 30 03:32:20.144837 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 30 03:32:20.186841 kernel: loop1: detected capacity change from 0 to 140768 Apr 30 03:32:20.275890 kernel: loop2: detected capacity change from 0 to 61336 Apr 30 03:32:20.344842 kernel: loop3: detected capacity change from 0 to 142488 Apr 30 03:32:20.452850 kernel: loop4: detected capacity change from 0 to 210664 Apr 30 03:32:20.489900 kernel: loop5: detected capacity change from 0 to 140768 Apr 30 03:32:20.512024 kernel: loop6: detected capacity change from 0 to 61336 Apr 30 03:32:20.528861 kernel: loop7: detected capacity change from 0 to 142488 Apr 30 03:32:20.550613 (sd-merge)[1797]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Apr 30 03:32:20.551124 (sd-merge)[1797]: Merged extensions into '/usr'. Apr 30 03:32:20.554969 systemd[1]: Reloading requested from client PID 1784 ('systemd-sysext') (unit systemd-sysext.service)... Apr 30 03:32:20.554987 systemd[1]: Reloading... Apr 30 03:32:20.605828 zram_generator::config[1821]: No configuration found. Apr 30 03:32:20.801464 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:32:20.897199 systemd[1]: Reloading finished in 341 ms. Apr 30 03:32:20.925723 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 30 03:32:20.942121 systemd[1]: Starting ensure-sysext.service... Apr 30 03:32:20.957891 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 03:32:20.971678 systemd[1]: Reloading requested from client PID 1882 ('systemctl') (unit ensure-sysext.service)... Apr 30 03:32:20.971698 systemd[1]: Reloading... Apr 30 03:32:20.991848 systemd-tmpfiles[1883]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 30 03:32:20.992396 systemd-tmpfiles[1883]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 30 03:32:20.993738 systemd-tmpfiles[1883]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 30 03:32:20.994210 systemd-tmpfiles[1883]: ACLs are not supported, ignoring. Apr 30 03:32:20.994300 systemd-tmpfiles[1883]: ACLs are not supported, ignoring. Apr 30 03:32:21.000002 systemd-tmpfiles[1883]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 03:32:21.000015 systemd-tmpfiles[1883]: Skipping /boot Apr 30 03:32:21.015757 systemd-tmpfiles[1883]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 03:32:21.015782 systemd-tmpfiles[1883]: Skipping /boot Apr 30 03:32:21.085913 zram_generator::config[1914]: No configuration found. Apr 30 03:32:21.235509 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:32:21.248183 systemd-networkd[1652]: eth0: Gained IPv6LL Apr 30 03:32:21.324832 ldconfig[1780]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Apr 30 03:32:21.326969 systemd[1]: Reloading finished in 354 ms. Apr 30 03:32:21.342334 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 30 03:32:21.343455 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 30 03:32:21.349538 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 03:32:21.358928 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 30 03:32:21.366077 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 30 03:32:21.370007 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 30 03:32:21.380000 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 03:32:21.386587 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 30 03:32:21.403466 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:32:21.403817 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 03:32:21.410145 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 03:32:21.421157 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 03:32:21.435707 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 03:32:21.436836 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:32:21.438951 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:32:21.447718 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 03:32:21.448045 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 03:32:21.464539 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 03:32:21.464786 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 03:32:21.468737 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 03:32:21.474120 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 03:32:21.495433 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 30 03:32:21.504960 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:32:21.505415 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 03:32:21.513447 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 03:32:21.527132 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 03:32:21.538110 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 03:32:21.543905 augenrules[2015]: No rules Apr 30 03:32:21.563019 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 03:32:21.565045 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 03:32:21.565275 systemd[1]: Reached target time-set.target - System Time Set. 
Apr 30 03:32:21.565987 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 03:32:21.567876 systemd[1]: Finished ensure-sysext.service. Apr 30 03:32:21.569117 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 30 03:32:21.576379 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 30 03:32:21.579909 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 30 03:32:21.580856 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 03:32:21.581087 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 03:32:21.582464 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 03:32:21.582689 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 03:32:21.585288 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 03:32:21.585509 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 03:32:21.587483 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 03:32:21.587704 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 03:32:21.598541 systemd-resolved[1982]: Positive Trust Anchors: Apr 30 03:32:21.598564 systemd-resolved[1982]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 03:32:21.598616 systemd-resolved[1982]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 03:32:21.600467 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 03:32:21.600594 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 03:32:21.606632 systemd-resolved[1982]: Defaulting to hostname 'linux'. Apr 30 03:32:21.608040 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 30 03:32:21.610093 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 30 03:32:21.610762 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 03:32:21.611421 systemd[1]: Reached target network.target - Network. Apr 30 03:32:21.612017 systemd[1]: Reached target network-online.target - Network is Online. Apr 30 03:32:21.612547 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 03:32:21.621998 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 30 03:32:21.622874 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 03:32:21.623516 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Apr 30 03:32:21.624140 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 30 03:32:21.624882 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 30 03:32:21.625513 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 30 03:32:21.626215 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 30 03:32:21.626698 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 30 03:32:21.626742 systemd[1]: Reached target paths.target - Path Units. Apr 30 03:32:21.627188 systemd[1]: Reached target timers.target - Timer Units. Apr 30 03:32:21.628967 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 30 03:32:21.630783 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 30 03:32:21.633043 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 30 03:32:21.634850 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 30 03:32:21.635355 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 03:32:21.635886 systemd[1]: Reached target basic.target - Basic System. Apr 30 03:32:21.636544 systemd[1]: System is tainted: cgroupsv1 Apr 30 03:32:21.636592 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 30 03:32:21.636626 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 30 03:32:21.639934 systemd[1]: Starting containerd.service - containerd container runtime... Apr 30 03:32:21.645003 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 30 03:32:21.647100 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 30 03:32:21.651398 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 30 03:32:21.665022 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 30 03:32:21.665577 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 30 03:32:21.667929 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:32:21.679838 jq[2045]: false Apr 30 03:32:21.681016 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 30 03:32:21.691019 systemd[1]: Started ntpd.service - Network Time Service. 
Apr 30 03:32:21.726195 extend-filesystems[2046]: Found loop4 Apr 30 03:32:21.726195 extend-filesystems[2046]: Found loop5 Apr 30 03:32:21.726195 extend-filesystems[2046]: Found loop6 Apr 30 03:32:21.726195 extend-filesystems[2046]: Found loop7 Apr 30 03:32:21.726195 extend-filesystems[2046]: Found nvme0n1 Apr 30 03:32:21.726195 extend-filesystems[2046]: Found nvme0n1p1 Apr 30 03:32:21.726195 extend-filesystems[2046]: Found nvme0n1p2 Apr 30 03:32:21.726195 extend-filesystems[2046]: Found nvme0n1p3 Apr 30 03:32:21.726195 extend-filesystems[2046]: Found usr Apr 30 03:32:21.726195 extend-filesystems[2046]: Found nvme0n1p4 Apr 30 03:32:21.726195 extend-filesystems[2046]: Found nvme0n1p6 Apr 30 03:32:21.726195 extend-filesystems[2046]: Found nvme0n1p7 Apr 30 03:32:21.726195 extend-filesystems[2046]: Found nvme0n1p9 Apr 30 03:32:21.726195 extend-filesystems[2046]: Checking size of /dev/nvme0n1p9 Apr 30 03:32:21.728694 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 30 03:32:21.745924 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 30 03:32:21.764542 systemd[1]: Starting setup-oem.service - Setup OEM... Apr 30 03:32:21.777070 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 30 03:32:21.777361 extend-filesystems[2046]: Resized partition /dev/nvme0n1p9 Apr 30 03:32:21.786155 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 30 03:32:21.795585 extend-filesystems[2068]: resize2fs 1.47.1 (20-May-2024) Apr 30 03:32:21.810986 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 30 03:32:21.814677 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 30 03:32:21.824861 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Apr 30 03:32:21.828905 systemd[1]: Starting update-engine.service - Update Engine... Apr 30 03:32:21.837208 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 30 03:32:21.845842 dbus-daemon[2043]: [system] SELinux support is enabled Apr 30 03:32:21.850748 ntpd[2051]: ntpd 4.2.8p17@1.4004-o Tue Apr 29 22:12:23 UTC 2025 (1): Starting Apr 30 03:32:21.860795 ntpd[2051]: 30 Apr 03:32:21 ntpd[2051]: ntpd 4.2.8p17@1.4004-o Tue Apr 29 22:12:23 UTC 2025 (1): Starting Apr 30 03:32:21.860795 ntpd[2051]: 30 Apr 03:32:21 ntpd[2051]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Apr 30 03:32:21.860795 ntpd[2051]: 30 Apr 03:32:21 ntpd[2051]: ---------------------------------------------------- Apr 30 03:32:21.860795 ntpd[2051]: 30 Apr 03:32:21 ntpd[2051]: ntp-4 is maintained by Network Time Foundation, Apr 30 03:32:21.860795 ntpd[2051]: 30 Apr 03:32:21 ntpd[2051]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Apr 30 03:32:21.860795 ntpd[2051]: 30 Apr 03:32:21 ntpd[2051]: corporation. 
Support and training for ntp-4 are Apr 30 03:32:21.860795 ntpd[2051]: 30 Apr 03:32:21 ntpd[2051]: available at https://www.nwtime.org/support Apr 30 03:32:21.860795 ntpd[2051]: 30 Apr 03:32:21 ntpd[2051]: ---------------------------------------------------- Apr 30 03:32:21.860795 ntpd[2051]: 30 Apr 03:32:21 ntpd[2051]: proto: precision = 0.068 usec (-24) Apr 30 03:32:21.860795 ntpd[2051]: 30 Apr 03:32:21 ntpd[2051]: basedate set to 2025-04-17 Apr 30 03:32:21.860795 ntpd[2051]: 30 Apr 03:32:21 ntpd[2051]: gps base set to 2025-04-20 (week 2363) Apr 30 03:32:21.860795 ntpd[2051]: 30 Apr 03:32:21 ntpd[2051]: Listen and drop on 0 v6wildcard [::]:123 Apr 30 03:32:21.860795 ntpd[2051]: 30 Apr 03:32:21 ntpd[2051]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Apr 30 03:32:21.860795 ntpd[2051]: 30 Apr 03:32:21 ntpd[2051]: Listen normally on 2 lo 127.0.0.1:123 Apr 30 03:32:21.860795 ntpd[2051]: 30 Apr 03:32:21 ntpd[2051]: Listen normally on 3 eth0 172.31.18.209:123 Apr 30 03:32:21.860795 ntpd[2051]: 30 Apr 03:32:21 ntpd[2051]: Listen normally on 4 lo [::1]:123 Apr 30 03:32:21.860795 ntpd[2051]: 30 Apr 03:32:21 ntpd[2051]: Listen normally on 5 eth0 [fe80::416:baff:fef3:9af3%2]:123 Apr 30 03:32:21.860795 ntpd[2051]: 30 Apr 03:32:21 ntpd[2051]: Listening on routing socket on fd #22 for interface updates Apr 30 03:32:21.850788 ntpd[2051]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Apr 30 03:32:21.881784 ntpd[2051]: 30 Apr 03:32:21 ntpd[2051]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 30 03:32:21.881784 ntpd[2051]: 30 Apr 03:32:21 ntpd[2051]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 30 03:32:21.862665 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 30 03:32:21.850799 ntpd[2051]: ---------------------------------------------------- Apr 30 03:32:21.877268 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 30 03:32:21.850842 ntpd[2051]: ntp-4 is maintained by Network Time Foundation, Apr 30 03:32:21.878006 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 30 03:32:21.850850 ntpd[2051]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Apr 30 03:32:21.892483 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 30 03:32:21.850859 ntpd[2051]: corporation. Support and training for ntp-4 are Apr 30 03:32:21.896429 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
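ntpd reports listening on UDP port 123 on the wildcard, loopback and eth0 addresses, with the kernel clock still marked unsynchronized. Purely for illustration, a minimal SNTP-style client sketch that sends a mode-3 request and decodes the server's transmit timestamp; the pool hostname is an assumption, not a server this ntpd is configured with:

```python
#!/usr/bin/env python3
"""Minimal SNTP client sketch (illustrative; the server hostname is an assumption)."""
import socket
import struct
import time

NTP_EPOCH_OFFSET = 2208988800  # seconds between the NTP epoch (1900) and the Unix epoch (1970)

def sntp_time(server: str = "pool.ntp.org", timeout: float = 5.0) -> int:
    # First byte: LI=0, VN=4, Mode=3 (client) -> 0x23; remaining 47 bytes zero.
    packet = b"\x23" + 47 * b"\x00"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(packet, (server, 123))
        data, _ = s.recvfrom(48)
    # Transmit timestamp seconds field lives at byte offset 40 (big-endian, 32 bits).
    secs = struct.unpack("!I", data[40:44])[0]
    return secs - NTP_EPOCH_OFFSET

if __name__ == "__main__":
    t = sntp_time()
    print("server time:", time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime(t)))
    print("local  time:", time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime()))
```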
Apr 30 03:32:21.850868 ntpd[2051]: available at https://www.nwtime.org/support Apr 30 03:32:21.850877 ntpd[2051]: ---------------------------------------------------- Apr 30 03:32:21.853294 ntpd[2051]: proto: precision = 0.068 usec (-24) Apr 30 03:32:21.853612 ntpd[2051]: basedate set to 2025-04-17 Apr 30 03:32:21.853628 ntpd[2051]: gps base set to 2025-04-20 (week 2363) Apr 30 03:32:21.859875 ntpd[2051]: Listen and drop on 0 v6wildcard [::]:123 Apr 30 03:32:21.859930 ntpd[2051]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Apr 30 03:32:21.860153 ntpd[2051]: Listen normally on 2 lo 127.0.0.1:123 Apr 30 03:32:21.860194 ntpd[2051]: Listen normally on 3 eth0 172.31.18.209:123 Apr 30 03:32:21.860235 ntpd[2051]: Listen normally on 4 lo [::1]:123 Apr 30 03:32:21.860277 ntpd[2051]: Listen normally on 5 eth0 [fe80::416:baff:fef3:9af3%2]:123 Apr 30 03:32:21.860315 ntpd[2051]: Listening on routing socket on fd #22 for interface updates Apr 30 03:32:21.869435 ntpd[2051]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 30 03:32:21.869469 ntpd[2051]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 30 03:32:21.882627 dbus-daemon[2043]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1652 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Apr 30 03:32:21.905278 systemd[1]: motdgen.service: Deactivated successfully. Apr 30 03:32:21.905630 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 30 03:32:21.912164 jq[2081]: true Apr 30 03:32:21.944342 update_engine[2078]: I20250430 03:32:21.940119 2078 main.cc:92] Flatcar Update Engine starting Apr 30 03:32:21.980600 (ntainerd)[2097]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 30 03:32:21.991871 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 30 03:32:21.992297 update_engine[2078]: I20250430 03:32:21.984050 2078 update_check_scheduler.cc:74] Next update check in 4m4s Apr 30 03:32:21.991919 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 30 03:32:21.998866 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Apr 30 03:32:21.998942 coreos-metadata[2042]: Apr 30 03:32:21.996 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Apr 30 03:32:21.994593 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 30 03:32:21.999999 dbus-daemon[2043]: [system] Successfully activated service 'org.freedesktop.systemd1' Apr 30 03:32:21.994619 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 30 03:32:21.999367 systemd[1]: Started update-engine.service - Update Engine. Apr 30 03:32:22.015862 extend-filesystems[2068]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Apr 30 03:32:22.015862 extend-filesystems[2068]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 30 03:32:22.015862 extend-filesystems[2068]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. 
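extend-filesystems detects that /dev/nvme0n1p9 is mounted on / and needs on-line resizing, and resize2fs grows it from 553472 to 1489915 4k blocks. A rough sketch of that grow step, assuming the partition itself has already been enlarged, resize2fs is installed, and using the device path from the log:

```python
#!/usr/bin/env python3
"""Sketch: grow a mounted ext4 filesystem to fill its (already enlarged) partition.

Assumes resize2fs is installed and the device path matches the log above.
Run as root; on a mounted ext4 filesystem resize2fs performs an on-line grow.
"""
import subprocess
import sys

DEVICE = "/dev/nvme0n1p9"  # device named in the extend-filesystems log lines

def grow(device: str) -> None:
    # Without an explicit size argument resize2fs grows the filesystem
    # to the current size of the underlying block device.
    subprocess.run(["resize2fs", device], check=True)

if __name__ == "__main__":
    try:
        grow(DEVICE)
    except subprocess.CalledProcessError as err:
        sys.exit(f"resize2fs failed with exit code {err.returncode}")
```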
Apr 30 03:32:22.063944 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1661) Apr 30 03:32:22.064008 jq[2100]: true Apr 30 03:32:22.017909 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Apr 30 03:32:22.071217 coreos-metadata[2042]: Apr 30 03:32:22.021 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Apr 30 03:32:22.071217 coreos-metadata[2042]: Apr 30 03:32:22.021 INFO Fetch successful Apr 30 03:32:22.071217 coreos-metadata[2042]: Apr 30 03:32:22.021 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Apr 30 03:32:22.071217 coreos-metadata[2042]: Apr 30 03:32:22.021 INFO Fetch successful Apr 30 03:32:22.071217 coreos-metadata[2042]: Apr 30 03:32:22.021 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Apr 30 03:32:22.071217 coreos-metadata[2042]: Apr 30 03:32:22.021 INFO Fetch successful Apr 30 03:32:22.071217 coreos-metadata[2042]: Apr 30 03:32:22.021 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Apr 30 03:32:22.071217 coreos-metadata[2042]: Apr 30 03:32:22.021 INFO Fetch successful Apr 30 03:32:22.071217 coreos-metadata[2042]: Apr 30 03:32:22.021 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Apr 30 03:32:22.071217 coreos-metadata[2042]: Apr 30 03:32:22.021 INFO Fetch failed with 404: resource not found Apr 30 03:32:22.071217 coreos-metadata[2042]: Apr 30 03:32:22.021 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Apr 30 03:32:22.071217 coreos-metadata[2042]: Apr 30 03:32:22.021 INFO Fetch successful Apr 30 03:32:22.071217 coreos-metadata[2042]: Apr 30 03:32:22.021 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Apr 30 03:32:22.071217 coreos-metadata[2042]: Apr 30 03:32:22.021 INFO Fetch successful Apr 30 03:32:22.071217 coreos-metadata[2042]: Apr 30 03:32:22.021 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Apr 30 03:32:22.071217 coreos-metadata[2042]: Apr 30 03:32:22.027 INFO Fetch successful Apr 30 03:32:22.071217 coreos-metadata[2042]: Apr 30 03:32:22.027 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Apr 30 03:32:22.071217 coreos-metadata[2042]: Apr 30 03:32:22.027 INFO Fetch successful Apr 30 03:32:22.071217 coreos-metadata[2042]: Apr 30 03:32:22.027 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Apr 30 03:32:22.071217 coreos-metadata[2042]: Apr 30 03:32:22.037 INFO Fetch successful Apr 30 03:32:22.074111 extend-filesystems[2046]: Resized filesystem in /dev/nvme0n1p9 Apr 30 03:32:22.023721 systemd-logind[2075]: Watching system buttons on /dev/input/event1 (Power Button) Apr 30 03:32:22.023744 systemd-logind[2075]: Watching system buttons on /dev/input/event3 (Sleep Button) Apr 30 03:32:22.023783 systemd-logind[2075]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 30 03:32:22.031313 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 30 03:32:22.049021 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 30 03:32:22.052869 systemd-logind[2075]: New seat seat0. Apr 30 03:32:22.072171 systemd[1]: Started systemd-logind.service - User Login Management. 
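coreos-metadata first PUTs to http://169.254.169.254/latest/api/token and then fetches instance-id, instance-type, addresses, placement and hostname paths with that token. A hedged sketch of the same IMDSv2 session-token flow using only the standard library; the 21600-second TTL and the use of the /latest/ paths (rather than the dated 2021-01-03 paths in the log) are assumptions:

```python
#!/usr/bin/env python3
"""Sketch of the EC2 IMDSv2 flow visible in the coreos-metadata log lines.

Only works from inside an EC2 instance. The endpoint mirrors the log; the TTL
value and the /latest/ API paths are assumptions made for illustration.
"""
import urllib.request

IMDS = "http://169.254.169.254"

def imds_token(ttl: int = 21600) -> str:
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.read().decode()

def imds_get(path: str, token: str) -> str:
    req = urllib.request.Request(
        f"{IMDS}/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    token = imds_token()
    for path in ("latest/meta-data/instance-id",
                 "latest/meta-data/instance-type",
                 "latest/meta-data/local-ipv4",
                 "latest/meta-data/placement/availability-zone"):
        print(path, "=>", imds_get(path, token))
```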
Apr 30 03:32:22.073097 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 30 03:32:22.073406 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 30 03:32:22.077974 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 30 03:32:22.136489 systemd[1]: Finished setup-oem.service - Setup OEM. Apr 30 03:32:22.143935 tar[2087]: linux-amd64/helm Apr 30 03:32:22.151343 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Apr 30 03:32:22.176787 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 30 03:32:22.180557 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 30 03:32:22.316858 bash[2184]: Updated "/home/core/.ssh/authorized_keys" Apr 30 03:32:22.319457 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 30 03:32:22.338946 systemd[1]: Starting sshkeys.service... Apr 30 03:32:22.353166 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 30 03:32:22.362310 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Apr 30 03:32:22.551738 amazon-ssm-agent[2145]: Initializing new seelog logger Apr 30 03:32:22.552182 amazon-ssm-agent[2145]: New Seelog Logger Creation Complete Apr 30 03:32:22.552182 amazon-ssm-agent[2145]: 2025/04/30 03:32:22 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 03:32:22.552182 amazon-ssm-agent[2145]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 03:32:22.552435 amazon-ssm-agent[2145]: 2025/04/30 03:32:22 processing appconfig overrides Apr 30 03:32:22.553874 amazon-ssm-agent[2145]: 2025/04/30 03:32:22 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 03:32:22.554001 amazon-ssm-agent[2145]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 03:32:22.554191 amazon-ssm-agent[2145]: 2025/04/30 03:32:22 processing appconfig overrides Apr 30 03:32:22.555827 amazon-ssm-agent[2145]: 2025/04/30 03:32:22 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 03:32:22.555827 amazon-ssm-agent[2145]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 03:32:22.555827 amazon-ssm-agent[2145]: 2025/04/30 03:32:22 processing appconfig overrides Apr 30 03:32:22.557986 amazon-ssm-agent[2145]: 2025-04-30 03:32:22 INFO Proxy environment variables: Apr 30 03:32:22.561290 amazon-ssm-agent[2145]: 2025/04/30 03:32:22 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 03:32:22.562848 amazon-ssm-agent[2145]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 03:32:22.563109 amazon-ssm-agent[2145]: 2025/04/30 03:32:22 processing appconfig overrides Apr 30 03:32:22.614593 dbus-daemon[2043]: [system] Successfully activated service 'org.freedesktop.hostname1' Apr 30 03:32:22.614823 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
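update-ssh-keys reports updating /home/core/.ssh/authorized_keys, and just below coreos-metadata-sshkeys fetches public-keys/0/openssh-key from the metadata service for user core. A hedged sketch of that provisioning step; it assumes the imds_token()/imds_get() helpers from the earlier sketch are importable from a hypothetical module named imds, takes the paths from the log, and skips the de-duplication and multi-key handling a real agent would do:

```python
#!/usr/bin/env python3
"""Sketch: install an SSH public key fetched from instance metadata for user 'core'.

Assumes the imds_token()/imds_get() helpers from the earlier sketch live in a
hypothetical module named `imds`; file paths mirror the log, the rest is illustrative.
"""
import os
from pathlib import Path

from imds import imds_get, imds_token  # hypothetical module wrapping the earlier sketch

def install_key(user_home: str = "/home/core") -> None:
    token = imds_token()
    key = imds_get("latest/meta-data/public-keys/0/openssh-key", token).strip()

    ssh_dir = Path(user_home) / ".ssh"
    ssh_dir.mkdir(mode=0o700, exist_ok=True)
    auth = ssh_dir / "authorized_keys"

    existing = auth.read_text().splitlines() if auth.exists() else []
    if key not in existing:
        existing.append(key)
    auth.write_text("\n".join(existing) + "\n")
    os.chmod(auth, 0o600)  # sshd's StrictModes rejects loosely-permissioned key files

if __name__ == "__main__":
    install_key()
```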
Apr 30 03:32:22.625831 coreos-metadata[2217]: Apr 30 03:32:22.624 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Apr 30 03:32:22.628136 coreos-metadata[2217]: Apr 30 03:32:22.627 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Apr 30 03:32:22.628436 coreos-metadata[2217]: Apr 30 03:32:22.628 INFO Fetch successful Apr 30 03:32:22.637829 coreos-metadata[2217]: Apr 30 03:32:22.632 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Apr 30 03:32:22.636081 dbus-daemon[2043]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2117 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Apr 30 03:32:22.638643 coreos-metadata[2217]: Apr 30 03:32:22.638 INFO Fetch successful Apr 30 03:32:22.642118 unknown[2217]: wrote ssh authorized keys file for user: core Apr 30 03:32:22.648008 systemd[1]: Starting polkit.service - Authorization Manager... Apr 30 03:32:22.659520 amazon-ssm-agent[2145]: 2025-04-30 03:32:22 INFO no_proxy: Apr 30 03:32:22.718095 update-ssh-keys[2260]: Updated "/home/core/.ssh/authorized_keys" Apr 30 03:32:22.722236 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 30 03:32:22.741463 systemd[1]: Finished sshkeys.service. Apr 30 03:32:22.751057 polkitd[2258]: Started polkitd version 121 Apr 30 03:32:22.760914 amazon-ssm-agent[2145]: 2025-04-30 03:32:22 INFO https_proxy: Apr 30 03:32:22.804063 polkitd[2258]: Loading rules from directory /etc/polkit-1/rules.d Apr 30 03:32:22.804163 polkitd[2258]: Loading rules from directory /usr/share/polkit-1/rules.d Apr 30 03:32:22.809456 locksmithd[2120]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 30 03:32:22.809866 polkitd[2258]: Finished loading, compiling and executing 2 rules Apr 30 03:32:22.821155 dbus-daemon[2043]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Apr 30 03:32:22.823188 systemd[1]: Started polkit.service - Authorization Manager. Apr 30 03:32:22.821848 polkitd[2258]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Apr 30 03:32:22.862157 amazon-ssm-agent[2145]: 2025-04-30 03:32:22 INFO http_proxy: Apr 30 03:32:22.882061 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 30 03:32:22.922934 systemd-hostnamed[2117]: Hostname set to (transient) Apr 30 03:32:22.924034 systemd-resolved[1982]: System hostname changed to 'ip-172-31-18-209'. Apr 30 03:32:22.962361 amazon-ssm-agent[2145]: 2025-04-30 03:32:22 INFO Checking if agent identity type OnPrem can be assumed Apr 30 03:32:23.062567 amazon-ssm-agent[2145]: 2025-04-30 03:32:22 INFO Checking if agent identity type EC2 can be assumed Apr 30 03:32:23.163261 amazon-ssm-agent[2145]: 2025-04-30 03:32:22 INFO Agent will take identity from EC2 Apr 30 03:32:23.199505 containerd[2097]: time="2025-04-30T03:32:23.195130199Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 30 03:32:23.265069 amazon-ssm-agent[2145]: 2025-04-30 03:32:22 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 30 03:32:23.306621 containerd[2097]: time="2025-04-30T03:32:23.304721081Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Apr 30 03:32:23.313167 containerd[2097]: time="2025-04-30T03:32:23.313066923Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:32:23.315613 containerd[2097]: time="2025-04-30T03:32:23.315575921Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 30 03:32:23.316334 containerd[2097]: time="2025-04-30T03:32:23.315721957Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 30 03:32:23.316334 containerd[2097]: time="2025-04-30T03:32:23.315955406Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 30 03:32:23.316334 containerd[2097]: time="2025-04-30T03:32:23.315987838Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 30 03:32:23.316334 containerd[2097]: time="2025-04-30T03:32:23.316060729Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:32:23.316334 containerd[2097]: time="2025-04-30T03:32:23.316078919Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:32:23.318416 containerd[2097]: time="2025-04-30T03:32:23.317111103Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:32:23.318416 containerd[2097]: time="2025-04-30T03:32:23.317144665Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 30 03:32:23.318416 containerd[2097]: time="2025-04-30T03:32:23.317166954Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:32:23.318416 containerd[2097]: time="2025-04-30T03:32:23.317184947Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 30 03:32:23.318416 containerd[2097]: time="2025-04-30T03:32:23.317297753Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:32:23.318416 containerd[2097]: time="2025-04-30T03:32:23.317585044Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:32:23.322288 containerd[2097]: time="2025-04-30T03:32:23.320096385Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:32:23.322288 containerd[2097]: time="2025-04-30T03:32:23.320131251Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Apr 30 03:32:23.322288 containerd[2097]: time="2025-04-30T03:32:23.320249817Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 30 03:32:23.322288 containerd[2097]: time="2025-04-30T03:32:23.320308848Z" level=info msg="metadata content store policy set" policy=shared Apr 30 03:32:23.328479 containerd[2097]: time="2025-04-30T03:32:23.328439414Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 30 03:32:23.328578 containerd[2097]: time="2025-04-30T03:32:23.328506952Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 30 03:32:23.328578 containerd[2097]: time="2025-04-30T03:32:23.328528592Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 30 03:32:23.328578 containerd[2097]: time="2025-04-30T03:32:23.328549037Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 30 03:32:23.328578 containerd[2097]: time="2025-04-30T03:32:23.328566984Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 30 03:32:23.328764 containerd[2097]: time="2025-04-30T03:32:23.328743229Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 30 03:32:23.332149 containerd[2097]: time="2025-04-30T03:32:23.331016213Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 30 03:32:23.333029 containerd[2097]: time="2025-04-30T03:32:23.332911408Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 30 03:32:23.333029 containerd[2097]: time="2025-04-30T03:32:23.332955060Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 30 03:32:23.333029 containerd[2097]: time="2025-04-30T03:32:23.332996693Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 30 03:32:23.333029 containerd[2097]: time="2025-04-30T03:32:23.333017936Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 30 03:32:23.333219 containerd[2097]: time="2025-04-30T03:32:23.333037893Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 30 03:32:23.333219 containerd[2097]: time="2025-04-30T03:32:23.333072798Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 30 03:32:23.333219 containerd[2097]: time="2025-04-30T03:32:23.333094254Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 30 03:32:23.333219 containerd[2097]: time="2025-04-30T03:32:23.333115095Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 30 03:32:23.333219 containerd[2097]: time="2025-04-30T03:32:23.333150630Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 30 03:32:23.333219 containerd[2097]: time="2025-04-30T03:32:23.333169499Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Apr 30 03:32:23.333219 containerd[2097]: time="2025-04-30T03:32:23.333187825Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 30 03:32:23.333460 containerd[2097]: time="2025-04-30T03:32:23.333232317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 30 03:32:23.333460 containerd[2097]: time="2025-04-30T03:32:23.333254116Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 30 03:32:23.333460 containerd[2097]: time="2025-04-30T03:32:23.333273304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 30 03:32:23.333460 containerd[2097]: time="2025-04-30T03:32:23.333345462Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 30 03:32:23.333460 containerd[2097]: time="2025-04-30T03:32:23.333381624Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 30 03:32:23.333460 containerd[2097]: time="2025-04-30T03:32:23.333402840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 30 03:32:23.333460 containerd[2097]: time="2025-04-30T03:32:23.333420460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 30 03:32:23.333460 containerd[2097]: time="2025-04-30T03:32:23.333453088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 30 03:32:23.333751 containerd[2097]: time="2025-04-30T03:32:23.333473882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 30 03:32:23.333751 containerd[2097]: time="2025-04-30T03:32:23.333496988Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 30 03:32:23.333751 containerd[2097]: time="2025-04-30T03:32:23.333514577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 30 03:32:23.333751 containerd[2097]: time="2025-04-30T03:32:23.333550813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 30 03:32:23.333751 containerd[2097]: time="2025-04-30T03:32:23.333571978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 30 03:32:23.333751 containerd[2097]: time="2025-04-30T03:32:23.333616190Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 30 03:32:23.333751 containerd[2097]: time="2025-04-30T03:32:23.333660930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 30 03:32:23.333751 containerd[2097]: time="2025-04-30T03:32:23.333693573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 30 03:32:23.333751 containerd[2097]: time="2025-04-30T03:32:23.333711774Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 30 03:32:23.336098 containerd[2097]: time="2025-04-30T03:32:23.333785722Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Apr 30 03:32:23.336098 containerd[2097]: time="2025-04-30T03:32:23.333899969Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 30 03:32:23.336098 containerd[2097]: time="2025-04-30T03:32:23.333919990Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 30 03:32:23.336098 containerd[2097]: time="2025-04-30T03:32:23.333939044Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 30 03:32:23.336098 containerd[2097]: time="2025-04-30T03:32:23.333973407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 30 03:32:23.336098 containerd[2097]: time="2025-04-30T03:32:23.334000451Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 30 03:32:23.336098 containerd[2097]: time="2025-04-30T03:32:23.334020730Z" level=info msg="NRI interface is disabled by configuration." Apr 30 03:32:23.336098 containerd[2097]: time="2025-04-30T03:32:23.334049805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 30 03:32:23.336404 containerd[2097]: time="2025-04-30T03:32:23.335240038Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 30 03:32:23.336404 containerd[2097]: time="2025-04-30T03:32:23.335342895Z" level=info msg="Connect containerd service" Apr 30 03:32:23.338603 containerd[2097]: time="2025-04-30T03:32:23.336919208Z" level=info msg="using legacy CRI server" Apr 30 03:32:23.338603 containerd[2097]: time="2025-04-30T03:32:23.336939462Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 30 03:32:23.338603 containerd[2097]: time="2025-04-30T03:32:23.337148641Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 30 03:32:23.341200 containerd[2097]: time="2025-04-30T03:32:23.338200227Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 03:32:23.341200 containerd[2097]: time="2025-04-30T03:32:23.339925049Z" level=info msg="Start subscribing containerd event" Apr 30 03:32:23.341200 containerd[2097]: time="2025-04-30T03:32:23.339981213Z" level=info msg="Start recovering state" Apr 30 03:32:23.341200 containerd[2097]: time="2025-04-30T03:32:23.340064903Z" level=info msg="Start event monitor" Apr 30 03:32:23.341200 containerd[2097]: time="2025-04-30T03:32:23.340077201Z" level=info msg="Start snapshots syncer" Apr 30 03:32:23.341200 containerd[2097]: time="2025-04-30T03:32:23.340089557Z" level=info msg="Start cni network conf syncer for default" Apr 30 03:32:23.341200 containerd[2097]: time="2025-04-30T03:32:23.340099387Z" level=info msg="Start streaming server" Apr 30 03:32:23.346481 containerd[2097]: time="2025-04-30T03:32:23.346431360Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 30 03:32:23.346562 containerd[2097]: time="2025-04-30T03:32:23.346525251Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 30 03:32:23.346738 systemd[1]: Started containerd.service - containerd container runtime. Apr 30 03:32:23.350594 containerd[2097]: time="2025-04-30T03:32:23.349934773Z" level=info msg="containerd successfully booted in 0.155856s" Apr 30 03:32:23.365023 amazon-ssm-agent[2145]: 2025-04-30 03:32:22 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 30 03:32:23.464292 amazon-ssm-agent[2145]: 2025-04-30 03:32:22 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 30 03:32:23.529750 sshd_keygen[2110]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 30 03:32:23.549181 amazon-ssm-agent[2145]: 2025-04-30 03:32:22 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Apr 30 03:32:23.549311 amazon-ssm-agent[2145]: 2025-04-30 03:32:22 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Apr 30 03:32:23.549311 amazon-ssm-agent[2145]: 2025-04-30 03:32:22 INFO [amazon-ssm-agent] Starting Core Agent Apr 30 03:32:23.549311 amazon-ssm-agent[2145]: 2025-04-30 03:32:22 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration Apr 30 03:32:23.549311 amazon-ssm-agent[2145]: 2025-04-30 03:32:22 INFO [Registrar] Starting registrar module Apr 30 03:32:23.549311 amazon-ssm-agent[2145]: 2025-04-30 03:32:22 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Apr 30 03:32:23.549311 amazon-ssm-agent[2145]: 2025-04-30 03:32:23 INFO [EC2Identity] EC2 registration was successful. Apr 30 03:32:23.549311 amazon-ssm-agent[2145]: 2025-04-30 03:32:23 INFO [CredentialRefresher] credentialRefresher has started Apr 30 03:32:23.549311 amazon-ssm-agent[2145]: 2025-04-30 03:32:23 INFO [CredentialRefresher] Starting credentials refresher loop Apr 30 03:32:23.549311 amazon-ssm-agent[2145]: 2025-04-30 03:32:23 INFO EC2RoleProvider Successfully connected with instance profile role credentials Apr 30 03:32:23.565070 amazon-ssm-agent[2145]: 2025-04-30 03:32:23 INFO [CredentialRefresher] Next credential rotation will be in 30.141638722116667 minutes Apr 30 03:32:23.567195 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 30 03:32:23.579204 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 30 03:32:23.584857 systemd[1]: Started sshd@0-172.31.18.209:22-147.75.109.163:60276.service - OpenSSH per-connection server daemon (147.75.109.163:60276). Apr 30 03:32:23.597968 systemd[1]: issuegen.service: Deactivated successfully. Apr 30 03:32:23.598342 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 30 03:32:23.605576 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 30 03:32:23.645260 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 30 03:32:23.663606 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 30 03:32:23.670203 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 30 03:32:23.672942 systemd[1]: Reached target getty.target - Login Prompts. Apr 30 03:32:23.761522 tar[2087]: linux-amd64/LICENSE Apr 30 03:32:23.762003 tar[2087]: linux-amd64/README.md Apr 30 03:32:23.775587 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 30 03:32:23.879165 sshd[2298]: Accepted publickey for core from 147.75.109.163 port 60276 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:32:23.880659 sshd[2298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:32:23.889432 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 30 03:32:23.897177 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 30 03:32:23.903563 systemd-logind[2075]: New session 1 of user core. Apr 30 03:32:23.916747 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 30 03:32:23.927065 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 30 03:32:23.939084 (systemd)[2318]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 30 03:32:24.058514 systemd[2318]: Queued start job for default target default.target. Apr 30 03:32:24.059538 systemd[2318]: Created slice app.slice - User Application Slice. Apr 30 03:32:24.059559 systemd[2318]: Reached target paths.target - Paths. Apr 30 03:32:24.059571 systemd[2318]: Reached target timers.target - Timers. Apr 30 03:32:24.063986 systemd[2318]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 30 03:32:24.072596 systemd[2318]: Listening on dbus.socket - D-Bus User Message Bus Socket. 
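Before the first connection is accepted, sshd-keygen generates the RSA, ECDSA and ED25519 host keys and the getty/serial-getty login prompts come up. A minimal sketch of the host-key step, assuming OpenSSH's ssh-keygen binary is installed and the script runs as root:

```python
#!/usr/bin/env python3
"""Sketch: generate any missing SSH host keys, as sshd-keygen.service does above.

Assumes OpenSSH's ssh-keygen is installed and the script is run as root.
"""
import subprocess
from pathlib import Path

def generate_host_keys() -> None:
    # `ssh-keygen -A` creates host keys of every default type (RSA, ECDSA, ED25519)
    # under /etc/ssh, skipping any that already exist.
    subprocess.run(["ssh-keygen", "-A"], check=True)

if __name__ == "__main__":
    generate_host_keys()
    for pub in sorted(Path("/etc/ssh").glob("ssh_host_*_key.pub")):
        print("host key:", pub)
```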
Apr 30 03:32:24.072660 systemd[2318]: Reached target sockets.target - Sockets. Apr 30 03:32:24.072674 systemd[2318]: Reached target basic.target - Basic System. Apr 30 03:32:24.072716 systemd[2318]: Reached target default.target - Main User Target. Apr 30 03:32:24.072745 systemd[2318]: Startup finished in 125ms. Apr 30 03:32:24.073182 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 30 03:32:24.078507 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 30 03:32:24.264965 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:32:24.266336 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 30 03:32:24.268369 systemd[1]: Startup finished in 9.380s (kernel) + 6.818s (userspace) = 16.198s. Apr 30 03:32:24.268952 (kubelet)[2338]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:32:24.283143 systemd[1]: Started sshd@1-172.31.18.209:22-147.75.109.163:60290.service - OpenSSH per-connection server daemon (147.75.109.163:60290). Apr 30 03:32:24.523251 sshd[2341]: Accepted publickey for core from 147.75.109.163 port 60290 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:32:24.524935 sshd[2341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:32:24.529564 systemd-logind[2075]: New session 2 of user core. Apr 30 03:32:24.535131 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 30 03:32:24.561341 amazon-ssm-agent[2145]: 2025-04-30 03:32:24 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Apr 30 03:32:24.662020 amazon-ssm-agent[2145]: 2025-04-30 03:32:24 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2352) started Apr 30 03:32:24.716004 sshd[2341]: pam_unix(sshd:session): session closed for user core Apr 30 03:32:24.720025 systemd[1]: sshd@1-172.31.18.209:22-147.75.109.163:60290.service: Deactivated successfully. Apr 30 03:32:24.724397 systemd-logind[2075]: Session 2 logged out. Waiting for processes to exit. Apr 30 03:32:24.724717 systemd[1]: session-2.scope: Deactivated successfully. Apr 30 03:32:24.730814 systemd-logind[2075]: Removed session 2. Apr 30 03:32:24.760129 systemd[1]: Started sshd@2-172.31.18.209:22-147.75.109.163:60298.service - OpenSSH per-connection server daemon (147.75.109.163:60298). Apr 30 03:32:24.762592 amazon-ssm-agent[2145]: 2025-04-30 03:32:24 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Apr 30 03:32:25.002220 sshd[2366]: Accepted publickey for core from 147.75.109.163 port 60298 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:32:25.003931 sshd[2366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:32:25.009587 systemd-logind[2075]: New session 3 of user core. Apr 30 03:32:25.013334 systemd[1]: Started session-3.scope - Session 3 of User core. 
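Earlier in the containerd startup the CRI plugin logged "no network config found in /etc/cni/net.d: cni plugin not initialized", with /opt/cni/bin and /etc/cni/net.d as the plugin and config directories. Purely to illustrate what such a config looks like, a sketch that writes a minimal bridge/host-local conflist; the network name, bridge name and subnet are invented, and on a real node the CNI config normally comes from the cluster's network add-on rather than a hand-written file:

```python
#!/usr/bin/env python3
"""Sketch: write a minimal CNI conflist of the kind containerd looks for in /etc/cni/net.d.

The network name, bridge name and subnet below are invented for illustration; a real
Kubernetes node usually receives its CNI config from the cluster's network add-on.
"""
import json
from pathlib import Path

CNI_CONF_DIR = Path("/etc/cni/net.d")  # directory named in the containerd log

conflist = {
    "cniVersion": "0.3.1",
    "name": "example-net",              # invented name
    "plugins": [
        {
            "type": "bridge",           # needs the bridge plugin in /opt/cni/bin
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "10.88.0.0/16",           # invented subnet
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

if __name__ == "__main__":
    CNI_CONF_DIR.mkdir(parents=True, exist_ok=True)
    out = CNI_CONF_DIR / "10-example.conflist"
    out.write_text(json.dumps(conflist, indent=2) + "\n")
    print("wrote", out)
```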
Apr 30 03:32:25.107218 kubelet[2338]: E0430 03:32:25.107130 2338 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:32:25.109538 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:32:25.109759 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:32:25.190191 sshd[2366]: pam_unix(sshd:session): session closed for user core Apr 30 03:32:25.193040 systemd[1]: sshd@2-172.31.18.209:22-147.75.109.163:60298.service: Deactivated successfully. Apr 30 03:32:25.195874 systemd-logind[2075]: Session 3 logged out. Waiting for processes to exit. Apr 30 03:32:25.196848 systemd[1]: session-3.scope: Deactivated successfully. Apr 30 03:32:25.198323 systemd-logind[2075]: Removed session 3. Apr 30 03:32:25.231108 systemd[1]: Started sshd@3-172.31.18.209:22-147.75.109.163:60304.service - OpenSSH per-connection server daemon (147.75.109.163:60304). Apr 30 03:32:25.469660 sshd[2379]: Accepted publickey for core from 147.75.109.163 port 60304 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:32:25.470619 sshd[2379]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:32:25.475562 systemd-logind[2075]: New session 4 of user core. Apr 30 03:32:25.480525 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 30 03:32:25.661172 sshd[2379]: pam_unix(sshd:session): session closed for user core Apr 30 03:32:25.663583 systemd[1]: sshd@3-172.31.18.209:22-147.75.109.163:60304.service: Deactivated successfully. Apr 30 03:32:25.666899 systemd-logind[2075]: Session 4 logged out. Waiting for processes to exit. Apr 30 03:32:25.667284 systemd[1]: session-4.scope: Deactivated successfully. Apr 30 03:32:25.669020 systemd-logind[2075]: Removed session 4. Apr 30 03:32:25.703087 systemd[1]: Started sshd@4-172.31.18.209:22-147.75.109.163:60312.service - OpenSSH per-connection server daemon (147.75.109.163:60312). Apr 30 03:32:25.945401 sshd[2387]: Accepted publickey for core from 147.75.109.163 port 60312 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:32:25.946419 sshd[2387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:32:25.951342 systemd-logind[2075]: New session 5 of user core. Apr 30 03:32:25.959278 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 30 03:32:26.136311 sudo[2391]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 30 03:32:26.136606 sudo[2391]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:32:26.150312 sudo[2391]: pam_unix(sudo:session): session closed for user root Apr 30 03:32:26.187969 sshd[2387]: pam_unix(sshd:session): session closed for user core Apr 30 03:32:26.190798 systemd[1]: sshd@4-172.31.18.209:22-147.75.109.163:60312.service: Deactivated successfully. Apr 30 03:32:26.193957 systemd-logind[2075]: Session 5 logged out. Waiting for processes to exit. Apr 30 03:32:26.195692 systemd[1]: session-5.scope: Deactivated successfully. Apr 30 03:32:26.197219 systemd-logind[2075]: Removed session 5. Apr 30 03:32:26.230128 systemd[1]: Started sshd@5-172.31.18.209:22-147.75.109.163:39232.service - OpenSSH per-connection server daemon (147.75.109.163:39232). 
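The kubelet exits because /var/lib/kubelet/config.yaml does not exist yet, and systemd keeps restarting it (the restart-counter messages appear further down); on a kubeadm-managed node that file is written by kubeadm init or kubeadm join. A small sketch of the corresponding preflight check; the path comes straight from the error message, and treating its absence as "node not yet joined" is an assumption about this setup:

```python
#!/usr/bin/env python3
"""Sketch: preflight check for the kubelet config file whose absence is logged above.

The path is taken from the kubelet error message; interpreting a missing file as
"kubeadm init/join has not run yet" is an assumption about this particular setup.
"""
import sys
from pathlib import Path

KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

if __name__ == "__main__":
    if KUBELET_CONFIG.is_file():
        print(f"{KUBELET_CONFIG} present ({KUBELET_CONFIG.stat().st_size} bytes); kubelet can start")
        sys.exit(0)
    sys.exit(f"{KUBELET_CONFIG} missing: run 'kubeadm init' or 'kubeadm join' to generate it")
```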
Apr 30 03:32:26.474950 sshd[2396]: Accepted publickey for core from 147.75.109.163 port 39232 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:32:26.476441 sshd[2396]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:32:26.481475 systemd-logind[2075]: New session 6 of user core. Apr 30 03:32:26.488177 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 30 03:32:26.632576 sudo[2401]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 30 03:32:26.632881 sudo[2401]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:32:26.636742 sudo[2401]: pam_unix(sudo:session): session closed for user root Apr 30 03:32:26.642432 sudo[2400]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 30 03:32:26.642726 sudo[2400]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:32:26.664736 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 30 03:32:26.666308 auditctl[2404]: No rules Apr 30 03:32:26.666820 systemd[1]: audit-rules.service: Deactivated successfully. Apr 30 03:32:26.667156 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 30 03:32:26.675722 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 30 03:32:26.702611 augenrules[2423]: No rules Apr 30 03:32:26.704383 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 30 03:32:26.707584 sudo[2400]: pam_unix(sudo:session): session closed for user root Apr 30 03:32:26.746717 sshd[2396]: pam_unix(sshd:session): session closed for user core Apr 30 03:32:26.750118 systemd[1]: sshd@5-172.31.18.209:22-147.75.109.163:39232.service: Deactivated successfully. Apr 30 03:32:26.753843 systemd-logind[2075]: Session 6 logged out. Waiting for processes to exit. Apr 30 03:32:26.754612 systemd[1]: session-6.scope: Deactivated successfully. Apr 30 03:32:26.755556 systemd-logind[2075]: Removed session 6. Apr 30 03:32:26.788207 systemd[1]: Started sshd@6-172.31.18.209:22-147.75.109.163:39242.service - OpenSSH per-connection server daemon (147.75.109.163:39242). Apr 30 03:32:27.030022 sshd[2432]: Accepted publickey for core from 147.75.109.163 port 39242 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:32:27.031464 sshd[2432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:32:27.036545 systemd-logind[2075]: New session 7 of user core. Apr 30 03:32:27.043158 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 30 03:32:27.185247 sudo[2436]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 30 03:32:27.185531 sudo[2436]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:32:27.685261 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 30 03:32:27.686786 (dockerd)[2452]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 30 03:32:28.263040 dockerd[2452]: time="2025-04-30T03:32:28.262981345Z" level=info msg="Starting up" Apr 30 03:32:28.603233 dockerd[2452]: time="2025-04-30T03:32:28.602955009Z" level=info msg="Loading containers: start." 
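dockerd starts up and, a little further down, reports "API listen on /run/docker.sock". A hedged sketch that talks to that socket directly with the standard library and queries the /version endpoint; the socket path comes from the log, and subclassing HTTPConnection is simply one way to speak HTTP over a Unix socket without third-party packages:

```python
#!/usr/bin/env python3
"""Sketch: query the Docker Engine API over /run/docker.sock using only the stdlib.

The socket path comes from the dockerd log line below; subclassing HTTPConnection
is just one way to do HTTP over a Unix socket without extra dependencies.
"""
import http.client
import json
import socket

DOCKER_SOCK = "/run/docker.sock"

class UnixHTTPConnection(http.client.HTTPConnection):
    def __init__(self, path: str):
        super().__init__("localhost")  # the host value is only used for the Host header
        self._path = path

    def connect(self) -> None:
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self._path)
        self.sock = sock

if __name__ == "__main__":
    conn = UnixHTTPConnection(DOCKER_SOCK)
    conn.request("GET", "/version")
    resp = conn.getresponse()
    info = json.loads(resp.read())
    print("Docker version:", info.get("Version"), "API:", info.get("ApiVersion"))
```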
Apr 30 03:32:28.780829 kernel: Initializing XFRM netlink socket Apr 30 03:32:28.829357 (udev-worker)[2475]: Network interface NamePolicy= disabled on kernel command line. Apr 30 03:32:29.909208 systemd-resolved[1982]: Clock change detected. Flushing caches. Apr 30 03:32:29.947154 systemd-networkd[1652]: docker0: Link UP Apr 30 03:32:29.967652 dockerd[2452]: time="2025-04-30T03:32:29.967606719Z" level=info msg="Loading containers: done." Apr 30 03:32:29.995654 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3488592080-merged.mount: Deactivated successfully. Apr 30 03:32:30.002722 dockerd[2452]: time="2025-04-30T03:32:30.002668503Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 30 03:32:30.002978 dockerd[2452]: time="2025-04-30T03:32:30.002808599Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 30 03:32:30.002978 dockerd[2452]: time="2025-04-30T03:32:30.002958287Z" level=info msg="Daemon has completed initialization" Apr 30 03:32:30.052027 dockerd[2452]: time="2025-04-30T03:32:30.051594567Z" level=info msg="API listen on /run/docker.sock" Apr 30 03:32:30.051955 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 30 03:32:31.243947 containerd[2097]: time="2025-04-30T03:32:31.243909208Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" Apr 30 03:32:31.917081 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1097650468.mount: Deactivated successfully. Apr 30 03:32:33.631099 containerd[2097]: time="2025-04-30T03:32:33.631045056Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:32:33.632134 containerd[2097]: time="2025-04-30T03:32:33.632095164Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674873" Apr 30 03:32:33.633343 containerd[2097]: time="2025-04-30T03:32:33.633203110Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:32:33.636120 containerd[2097]: time="2025-04-30T03:32:33.636062008Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:32:33.637058 containerd[2097]: time="2025-04-30T03:32:33.637021677Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 2.393075039s" Apr 30 03:32:33.637133 containerd[2097]: time="2025-04-30T03:32:33.637063157Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\"" Apr 30 03:32:33.660067 containerd[2097]: time="2025-04-30T03:32:33.660016709Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" Apr 30 03:32:35.788953 containerd[2097]: 
time="2025-04-30T03:32:35.788890398Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:32:35.790099 containerd[2097]: time="2025-04-30T03:32:35.790047348Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617534" Apr 30 03:32:35.791231 containerd[2097]: time="2025-04-30T03:32:35.791177099Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:32:35.794072 containerd[2097]: time="2025-04-30T03:32:35.794006530Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:32:35.795395 containerd[2097]: time="2025-04-30T03:32:35.795223818Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 2.135162669s" Apr 30 03:32:35.795395 containerd[2097]: time="2025-04-30T03:32:35.795271517Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\"" Apr 30 03:32:35.821642 containerd[2097]: time="2025-04-30T03:32:35.821522665Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" Apr 30 03:32:36.360715 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 30 03:32:36.372084 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:32:36.574829 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:32:36.587946 (kubelet)[2676]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:32:36.635972 kubelet[2676]: E0430 03:32:36.635837 2676 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:32:36.640422 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:32:36.640723 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 30 03:32:37.305142 containerd[2097]: time="2025-04-30T03:32:37.305080785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:32:37.307046 containerd[2097]: time="2025-04-30T03:32:37.307001231Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903682" Apr 30 03:32:37.308089 containerd[2097]: time="2025-04-30T03:32:37.308042851Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:32:37.310792 containerd[2097]: time="2025-04-30T03:32:37.310748739Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:32:37.312030 containerd[2097]: time="2025-04-30T03:32:37.311696079Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 1.490126843s" Apr 30 03:32:37.312030 containerd[2097]: time="2025-04-30T03:32:37.311728437Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\"" Apr 30 03:32:37.331964 containerd[2097]: time="2025-04-30T03:32:37.331905967Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" Apr 30 03:32:38.404781 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1906331411.mount: Deactivated successfully. 
Apr 30 03:32:38.913577 containerd[2097]: time="2025-04-30T03:32:38.913516253Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:32:38.915383 containerd[2097]: time="2025-04-30T03:32:38.915313289Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185817" Apr 30 03:32:38.916491 containerd[2097]: time="2025-04-30T03:32:38.916442522Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:32:38.919857 containerd[2097]: time="2025-04-30T03:32:38.919791111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:32:38.920385 containerd[2097]: time="2025-04-30T03:32:38.920280144Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 1.588337576s" Apr 30 03:32:38.920385 containerd[2097]: time="2025-04-30T03:32:38.920314027Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" Apr 30 03:32:38.943908 containerd[2097]: time="2025-04-30T03:32:38.943870939Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Apr 30 03:32:39.519596 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1388138950.mount: Deactivated successfully. 
Apr 30 03:32:40.413460 containerd[2097]: time="2025-04-30T03:32:40.413383931Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:32:40.414465 containerd[2097]: time="2025-04-30T03:32:40.414409261Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Apr 30 03:32:40.415277 containerd[2097]: time="2025-04-30T03:32:40.415218964Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:32:40.417957 containerd[2097]: time="2025-04-30T03:32:40.417899527Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:32:40.419102 containerd[2097]: time="2025-04-30T03:32:40.418922062Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.475008887s" Apr 30 03:32:40.419102 containerd[2097]: time="2025-04-30T03:32:40.418961848Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Apr 30 03:32:40.443319 containerd[2097]: time="2025-04-30T03:32:40.443284492Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Apr 30 03:32:40.907814 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2870575113.mount: Deactivated successfully. 
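The reported pull duration can be cross-checked against the log's own timestamps: the coredns PullImage request is logged at 03:32:38.943870939Z and the matching "Pulled image" completion at 03:32:40.418922062Z, a gap of roughly the 1.475s containerd reports. A quick check, assuming both lines share the same clock:

```python
from datetime import datetime, timezone

def parse_ts(ts: str) -> float:
    # "2025-04-30T03:32:38.943870939Z" -> seconds since the epoch, keeping the nanosecond digits
    base, frac = ts.rstrip("Z").split(".")
    dt = datetime.strptime(base, "%Y-%m-%dT%H:%M:%S").replace(tzinfo=timezone.utc)
    return dt.timestamp() + int(frac) / 10 ** len(frac)

start = parse_ts("2025-04-30T03:32:38.943870939Z")  # PullImage request for coredns (logged above)
done  = parse_ts("2025-04-30T03:32:40.418922062Z")  # matching "Pulled image" completion
print(f"wall-clock gap {done - start:.6f}s vs the reported 1.475008887s")
```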
Apr 30 03:32:40.914875 containerd[2097]: time="2025-04-30T03:32:40.914822597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:32:40.916602 containerd[2097]: time="2025-04-30T03:32:40.916544792Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Apr 30 03:32:40.917716 containerd[2097]: time="2025-04-30T03:32:40.917664643Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:32:40.920876 containerd[2097]: time="2025-04-30T03:32:40.920826629Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:32:40.921581 containerd[2097]: time="2025-04-30T03:32:40.921464957Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 478.144204ms" Apr 30 03:32:40.921581 containerd[2097]: time="2025-04-30T03:32:40.921495493Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Apr 30 03:32:40.946957 containerd[2097]: time="2025-04-30T03:32:40.946912026Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Apr 30 03:32:41.478998 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2182880545.mount: Deactivated successfully. Apr 30 03:32:44.164422 containerd[2097]: time="2025-04-30T03:32:44.164367214Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:32:44.166617 containerd[2097]: time="2025-04-30T03:32:44.166549637Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Apr 30 03:32:44.169035 containerd[2097]: time="2025-04-30T03:32:44.168972262Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:32:44.176758 containerd[2097]: time="2025-04-30T03:32:44.176719686Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:32:44.178033 containerd[2097]: time="2025-04-30T03:32:44.177905292Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.230956754s" Apr 30 03:32:44.178033 containerd[2097]: time="2025-04-30T03:32:44.177938645Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Apr 30 03:32:46.860927 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
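Taking the sizes and durations containerd reported for the five pulls in this stretch of the log (kube-scheduler, kube-proxy, coredns, pause and etcd), a short tabulation of per-image throughput; the numbers below are copied from the entries above, nothing is re-measured:

```python
pulls = [  # (image, reported size in bytes, reported pull time in seconds), taken from the log above
    ("registry.k8s.io/kube-scheduler:v1.30.12", 19392073, 1.490126843),
    ("registry.k8s.io/kube-proxy:v1.30.12",     29184836, 1.588337576),
    ("registry.k8s.io/coredns/coredns:v1.11.1", 18182961, 1.475008887),
    ("registry.k8s.io/pause:3.9",                 321520, 0.478144204),
    ("registry.k8s.io/etcd:3.5.12-0",           57236178, 3.230956754),
]
for image, size, secs in pulls:
    print(f"{image:42s} {size / 1e6:7.1f} MB {secs:6.2f}s {size / secs / 1e6:6.1f} MB/s")

total_bytes = sum(size for _, size, _ in pulls)
total_secs = sum(secs for _, _, secs in pulls)
print(f"total: {total_bytes / 1e6:.1f} MB in {total_secs:.2f}s of pull time")
```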
Apr 30 03:32:46.869721 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:32:46.925864 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 30 03:32:46.926054 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 30 03:32:46.926550 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:32:46.940888 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:32:46.968293 systemd[1]: Reloading requested from client PID 2881 ('systemctl') (unit session-7.scope)... Apr 30 03:32:46.968492 systemd[1]: Reloading... Apr 30 03:32:47.090698 zram_generator::config[2922]: No configuration found. Apr 30 03:32:47.249011 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:32:47.333887 systemd[1]: Reloading finished in 364 ms. Apr 30 03:32:47.378929 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 30 03:32:47.379046 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 30 03:32:47.379553 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:32:47.388825 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:32:47.611588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:32:47.615184 (kubelet)[2995]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 03:32:47.667560 kubelet[2995]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 03:32:47.667560 kubelet[2995]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 03:32:47.667560 kubelet[2995]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
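The deprecation warnings above say --container-runtime-endpoint and --volume-plugin-dir should move into the file passed via --config, while --pod-infra-container-image will eventually come from the CRI side instead. A sketch of that migration, with two explicit assumptions: the containerd socket path is not shown anywhere in this log and is assumed to be the default, and the field names assume the v1beta1 KubeletConfiguration schema; the volume plugin directory is the Flexvolume path the kubelet logs a little further down:

```python
# Sketch only: field names assume the kubelet.config.k8s.io/v1beta1 schema, and the
# containerd socket is an assumed default rather than a value taken from this log.
KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock  # assumed default socket
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
"""

with open("kubelet-config.yaml", "w") as f:  # hypothetical output path
    f.write(KUBELET_CONFIG)
```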
Apr 30 03:32:47.668125 kubelet[2995]: I0430 03:32:47.667631 2995 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 03:32:47.862543 kubelet[2995]: I0430 03:32:47.862428 2995 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 30 03:32:47.862543 kubelet[2995]: I0430 03:32:47.862461 2995 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 03:32:47.863030 kubelet[2995]: I0430 03:32:47.862999 2995 server.go:927] "Client rotation is on, will bootstrap in background" Apr 30 03:32:47.892432 kubelet[2995]: I0430 03:32:47.892394 2995 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 03:32:47.896896 kubelet[2995]: E0430 03:32:47.896703 2995 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.18.209:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.18.209:6443: connect: connection refused Apr 30 03:32:47.914388 kubelet[2995]: I0430 03:32:47.914227 2995 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 30 03:32:47.914976 kubelet[2995]: I0430 03:32:47.914935 2995 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 03:32:47.917338 kubelet[2995]: I0430 03:32:47.915054 2995 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-209","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 30 03:32:47.918025 kubelet[2995]: I0430 03:32:47.918008 2995 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 03:32:47.918161 kubelet[2995]: I0430 03:32:47.918100 2995 container_manager_linux.go:301] "Creating device plugin manager" Apr 30 03:32:47.920507 kubelet[2995]: I0430 03:32:47.920424 2995 state_mem.go:36] "Initialized new in-memory state 
store" Apr 30 03:32:47.921701 kubelet[2995]: I0430 03:32:47.921682 2995 kubelet.go:400] "Attempting to sync node with API server" Apr 30 03:32:47.921701 kubelet[2995]: I0430 03:32:47.921701 2995 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 03:32:47.922332 kubelet[2995]: W0430 03:32:47.922249 2995 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.18.209:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-209&limit=500&resourceVersion=0": dial tcp 172.31.18.209:6443: connect: connection refused Apr 30 03:32:47.922332 kubelet[2995]: E0430 03:32:47.922306 2995 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.18.209:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-209&limit=500&resourceVersion=0": dial tcp 172.31.18.209:6443: connect: connection refused Apr 30 03:32:47.922849 kubelet[2995]: I0430 03:32:47.922632 2995 kubelet.go:312] "Adding apiserver pod source" Apr 30 03:32:47.922849 kubelet[2995]: I0430 03:32:47.922666 2995 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 03:32:47.928282 kubelet[2995]: W0430 03:32:47.928240 2995 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.18.209:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.18.209:6443: connect: connection refused Apr 30 03:32:47.928437 kubelet[2995]: E0430 03:32:47.928427 2995 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.18.209:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.18.209:6443: connect: connection refused Apr 30 03:32:47.928605 kubelet[2995]: I0430 03:32:47.928566 2995 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 03:32:47.931730 kubelet[2995]: I0430 03:32:47.930758 2995 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 03:32:47.931730 kubelet[2995]: W0430 03:32:47.930816 2995 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Apr 30 03:32:47.931730 kubelet[2995]: I0430 03:32:47.931351 2995 server.go:1264] "Started kubelet" Apr 30 03:32:47.936183 kubelet[2995]: I0430 03:32:47.936153 2995 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 03:32:47.938952 kubelet[2995]: I0430 03:32:47.938889 2995 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 03:32:47.940374 kubelet[2995]: I0430 03:32:47.939231 2995 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 03:32:47.940374 kubelet[2995]: E0430 03:32:47.939405 2995 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.18.209:6443/api/v1/namespaces/default/events\": dial tcp 172.31.18.209:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-18-209.183afb2b1012c0e5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-209,UID:ip-172-31-18-209,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-209,},FirstTimestamp:2025-04-30 03:32:47.931318501 +0000 UTC m=+0.312461301,LastTimestamp:2025-04-30 03:32:47.931318501 +0000 UTC m=+0.312461301,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-209,}" Apr 30 03:32:47.940374 kubelet[2995]: I0430 03:32:47.939685 2995 server.go:455] "Adding debug handlers to kubelet server" Apr 30 03:32:47.940929 kubelet[2995]: I0430 03:32:47.940911 2995 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 03:32:47.943263 kubelet[2995]: I0430 03:32:47.942562 2995 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 30 03:32:47.943263 kubelet[2995]: I0430 03:32:47.942644 2995 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 03:32:47.943263 kubelet[2995]: I0430 03:32:47.942689 2995 reconciler.go:26] "Reconciler: start to sync state" Apr 30 03:32:47.943263 kubelet[2995]: W0430 03:32:47.942992 2995 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.18.209:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.209:6443: connect: connection refused Apr 30 03:32:47.943263 kubelet[2995]: E0430 03:32:47.943030 2995 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.18.209:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.209:6443: connect: connection refused Apr 30 03:32:47.943263 kubelet[2995]: E0430 03:32:47.943190 2995 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.209:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-209?timeout=10s\": dial tcp 172.31.18.209:6443: connect: connection refused" interval="200ms" Apr 30 03:32:47.947127 kubelet[2995]: I0430 03:32:47.947080 2995 factory.go:221] Registration of the systemd container factory successfully Apr 30 03:32:47.947250 kubelet[2995]: I0430 03:32:47.947207 2995 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 03:32:47.949308 kubelet[2995]: I0430 
03:32:47.949260 2995 factory.go:221] Registration of the containerd container factory successfully Apr 30 03:32:47.980821 kubelet[2995]: E0430 03:32:47.980789 2995 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 03:32:47.983039 kubelet[2995]: I0430 03:32:47.982974 2995 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 03:32:47.992528 kubelet[2995]: I0430 03:32:47.991375 2995 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 30 03:32:47.992528 kubelet[2995]: I0430 03:32:47.991409 2995 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 03:32:47.992528 kubelet[2995]: I0430 03:32:47.991957 2995 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 03:32:47.992528 kubelet[2995]: E0430 03:32:47.992091 2995 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 03:32:47.995621 kubelet[2995]: W0430 03:32:47.995567 2995 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.18.209:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.209:6443: connect: connection refused Apr 30 03:32:47.995742 kubelet[2995]: E0430 03:32:47.995635 2995 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.18.209:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.209:6443: connect: connection refused Apr 30 03:32:47.998458 kubelet[2995]: I0430 03:32:47.998346 2995 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 03:32:47.998717 kubelet[2995]: I0430 03:32:47.998700 2995 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 03:32:47.998788 kubelet[2995]: I0430 03:32:47.998743 2995 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:32:48.005586 kubelet[2995]: I0430 03:32:48.005548 2995 policy_none.go:49] "None policy: Start" Apr 30 03:32:48.006351 kubelet[2995]: I0430 03:32:48.006283 2995 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 03:32:48.006351 kubelet[2995]: I0430 03:32:48.006308 2995 state_mem.go:35] "Initializing new in-memory state store" Apr 30 03:32:48.014060 kubelet[2995]: I0430 03:32:48.013498 2995 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 03:32:48.014060 kubelet[2995]: I0430 03:32:48.013678 2995 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 03:32:48.014060 kubelet[2995]: I0430 03:32:48.013770 2995 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 03:32:48.015774 kubelet[2995]: E0430 03:32:48.015758 2995 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-18-209\" not found" Apr 30 03:32:48.044593 kubelet[2995]: I0430 03:32:48.044540 2995 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-209" Apr 30 03:32:48.044895 kubelet[2995]: E0430 03:32:48.044866 2995 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.18.209:6443/api/v1/nodes\": dial tcp 172.31.18.209:6443: connect: connection refused" 
node="ip-172-31-18-209" Apr 30 03:32:48.093333 kubelet[2995]: I0430 03:32:48.093270 2995 topology_manager.go:215] "Topology Admit Handler" podUID="d47901109d9f20265bbd4862ff3c446e" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-18-209" Apr 30 03:32:48.094780 kubelet[2995]: I0430 03:32:48.094730 2995 topology_manager.go:215] "Topology Admit Handler" podUID="9a27c500bdc0da369fde9fa4f036abc6" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-18-209" Apr 30 03:32:48.096444 kubelet[2995]: I0430 03:32:48.096004 2995 topology_manager.go:215] "Topology Admit Handler" podUID="9cbbe18b90f50f6a0aa54af7794d205b" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-18-209" Apr 30 03:32:48.144505 kubelet[2995]: E0430 03:32:48.144348 2995 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.209:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-209?timeout=10s\": dial tcp 172.31.18.209:6443: connect: connection refused" interval="400ms" Apr 30 03:32:48.245121 kubelet[2995]: I0430 03:32:48.244892 2995 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d47901109d9f20265bbd4862ff3c446e-ca-certs\") pod \"kube-apiserver-ip-172-31-18-209\" (UID: \"d47901109d9f20265bbd4862ff3c446e\") " pod="kube-system/kube-apiserver-ip-172-31-18-209" Apr 30 03:32:48.245121 kubelet[2995]: I0430 03:32:48.244932 2995 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9a27c500bdc0da369fde9fa4f036abc6-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-209\" (UID: \"9a27c500bdc0da369fde9fa4f036abc6\") " pod="kube-system/kube-controller-manager-ip-172-31-18-209" Apr 30 03:32:48.245121 kubelet[2995]: I0430 03:32:48.244955 2995 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9cbbe18b90f50f6a0aa54af7794d205b-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-209\" (UID: \"9cbbe18b90f50f6a0aa54af7794d205b\") " pod="kube-system/kube-scheduler-ip-172-31-18-209" Apr 30 03:32:48.245121 kubelet[2995]: I0430 03:32:48.244971 2995 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d47901109d9f20265bbd4862ff3c446e-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-209\" (UID: \"d47901109d9f20265bbd4862ff3c446e\") " pod="kube-system/kube-apiserver-ip-172-31-18-209" Apr 30 03:32:48.245121 kubelet[2995]: I0430 03:32:48.244988 2995 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d47901109d9f20265bbd4862ff3c446e-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-209\" (UID: \"d47901109d9f20265bbd4862ff3c446e\") " pod="kube-system/kube-apiserver-ip-172-31-18-209" Apr 30 03:32:48.245407 kubelet[2995]: I0430 03:32:48.245004 2995 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9a27c500bdc0da369fde9fa4f036abc6-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-209\" (UID: \"9a27c500bdc0da369fde9fa4f036abc6\") " pod="kube-system/kube-controller-manager-ip-172-31-18-209" Apr 30 03:32:48.245407 kubelet[2995]: I0430 03:32:48.245020 
2995 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9a27c500bdc0da369fde9fa4f036abc6-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-209\" (UID: \"9a27c500bdc0da369fde9fa4f036abc6\") " pod="kube-system/kube-controller-manager-ip-172-31-18-209" Apr 30 03:32:48.245407 kubelet[2995]: I0430 03:32:48.245038 2995 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9a27c500bdc0da369fde9fa4f036abc6-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-209\" (UID: \"9a27c500bdc0da369fde9fa4f036abc6\") " pod="kube-system/kube-controller-manager-ip-172-31-18-209" Apr 30 03:32:48.245407 kubelet[2995]: I0430 03:32:48.245054 2995 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9a27c500bdc0da369fde9fa4f036abc6-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-209\" (UID: \"9a27c500bdc0da369fde9fa4f036abc6\") " pod="kube-system/kube-controller-manager-ip-172-31-18-209" Apr 30 03:32:48.246292 kubelet[2995]: I0430 03:32:48.246276 2995 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-209" Apr 30 03:32:48.246720 kubelet[2995]: E0430 03:32:48.246693 2995 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.18.209:6443/api/v1/nodes\": dial tcp 172.31.18.209:6443: connect: connection refused" node="ip-172-31-18-209" Apr 30 03:32:48.399351 containerd[2097]: time="2025-04-30T03:32:48.399209995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-209,Uid:d47901109d9f20265bbd4862ff3c446e,Namespace:kube-system,Attempt:0,}" Apr 30 03:32:48.406496 containerd[2097]: time="2025-04-30T03:32:48.406452534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-209,Uid:9a27c500bdc0da369fde9fa4f036abc6,Namespace:kube-system,Attempt:0,}" Apr 30 03:32:48.406861 containerd[2097]: time="2025-04-30T03:32:48.406453131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-209,Uid:9cbbe18b90f50f6a0aa54af7794d205b,Namespace:kube-system,Attempt:0,}" Apr 30 03:32:48.544917 kubelet[2995]: E0430 03:32:48.544874 2995 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.209:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-209?timeout=10s\": dial tcp 172.31.18.209:6443: connect: connection refused" interval="800ms" Apr 30 03:32:48.648734 kubelet[2995]: I0430 03:32:48.648621 2995 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-209" Apr 30 03:32:48.648957 kubelet[2995]: E0430 03:32:48.648919 2995 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.18.209:6443/api/v1/nodes\": dial tcp 172.31.18.209:6443: connect: connection refused" node="ip-172-31-18-209" Apr 30 03:32:48.884146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2437085054.mount: Deactivated successfully. 
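The repeated "Failed to ensure lease exists, will retry" errors step their interval from 200ms to 400ms to 800ms here, and to 1.6s further down, i.e. the lease controller doubles its retry interval on consecutive failures. A minimal sketch of that pattern; the 200ms starting value matches the log, while the cap is an assumption this excerpt does not show:

```python
# Doubling retry intervals, as observed in the lease-controller errors above.
def lease_retry_intervals(base=0.2, factor=2.0, cap=7.0):  # cap is assumed, not from the log
    interval = base
    while True:
        yield min(interval, cap)
        interval *= factor

gen = lease_retry_intervals()
print([next(gen) for _ in range(4)])  # [0.2, 0.4, 0.8, 1.6] seconds, matching the logged intervals
```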
Apr 30 03:32:48.900028 containerd[2097]: time="2025-04-30T03:32:48.899970078Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:32:48.902074 containerd[2097]: time="2025-04-30T03:32:48.902027410Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:32:48.903932 containerd[2097]: time="2025-04-30T03:32:48.903858724Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Apr 30 03:32:48.907133 containerd[2097]: time="2025-04-30T03:32:48.907084537Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 03:32:48.908672 containerd[2097]: time="2025-04-30T03:32:48.908636344Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:32:48.910654 kubelet[2995]: W0430 03:32:48.910471 2995 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.18.209:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.209:6443: connect: connection refused Apr 30 03:32:48.910654 kubelet[2995]: E0430 03:32:48.910506 2995 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.18.209:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.209:6443: connect: connection refused Apr 30 03:32:48.911041 containerd[2097]: time="2025-04-30T03:32:48.910820970Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:32:48.913245 containerd[2097]: time="2025-04-30T03:32:48.913106282Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 03:32:48.915736 containerd[2097]: time="2025-04-30T03:32:48.915701820Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:32:48.916324 containerd[2097]: time="2025-04-30T03:32:48.916293886Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 517.009333ms" Apr 30 03:32:48.918595 containerd[2097]: time="2025-04-30T03:32:48.918515440Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 511.844898ms" Apr 30 03:32:48.920550 containerd[2097]: time="2025-04-30T03:32:48.920512846Z" level=info 
msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 513.755728ms" Apr 30 03:32:49.019985 kubelet[2995]: W0430 03:32:49.017132 2995 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.18.209:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.209:6443: connect: connection refused Apr 30 03:32:49.019985 kubelet[2995]: E0430 03:32:49.017299 2995 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.18.209:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.209:6443: connect: connection refused Apr 30 03:32:49.111294 containerd[2097]: time="2025-04-30T03:32:49.110767923Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:32:49.114250 containerd[2097]: time="2025-04-30T03:32:49.113869564Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:32:49.114250 containerd[2097]: time="2025-04-30T03:32:49.113911881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:32:49.114250 containerd[2097]: time="2025-04-30T03:32:49.114038750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:32:49.118410 containerd[2097]: time="2025-04-30T03:32:49.116700369Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:32:49.118410 containerd[2097]: time="2025-04-30T03:32:49.116765937Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:32:49.118410 containerd[2097]: time="2025-04-30T03:32:49.116797970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:32:49.118410 containerd[2097]: time="2025-04-30T03:32:49.116924552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:32:49.130546 containerd[2097]: time="2025-04-30T03:32:49.129622754Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:32:49.130546 containerd[2097]: time="2025-04-30T03:32:49.130488234Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:32:49.135698 containerd[2097]: time="2025-04-30T03:32:49.133609045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:32:49.136211 containerd[2097]: time="2025-04-30T03:32:49.136080457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:32:49.228166 containerd[2097]: time="2025-04-30T03:32:49.226828656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-209,Uid:9a27c500bdc0da369fde9fa4f036abc6,Namespace:kube-system,Attempt:0,} returns sandbox id \"a36cb6b7efcbe6c32525813faecd3a4e6c1075938a3aa1f4f3d67479f4c5fd85\"" Apr 30 03:32:49.248623 containerd[2097]: time="2025-04-30T03:32:49.248585519Z" level=info msg="CreateContainer within sandbox \"a36cb6b7efcbe6c32525813faecd3a4e6c1075938a3aa1f4f3d67479f4c5fd85\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 30 03:32:49.253473 kubelet[2995]: W0430 03:32:49.253410 2995 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.18.209:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-209&limit=500&resourceVersion=0": dial tcp 172.31.18.209:6443: connect: connection refused Apr 30 03:32:49.253641 kubelet[2995]: E0430 03:32:49.253485 2995 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.18.209:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-209&limit=500&resourceVersion=0": dial tcp 172.31.18.209:6443: connect: connection refused Apr 30 03:32:49.262573 containerd[2097]: time="2025-04-30T03:32:49.262522440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-209,Uid:d47901109d9f20265bbd4862ff3c446e,Namespace:kube-system,Attempt:0,} returns sandbox id \"445047f90d2a31280be6c2c464e4b6e1b0f09410db3c5d5f1997b439443b68d9\"" Apr 30 03:32:49.266429 containerd[2097]: time="2025-04-30T03:32:49.266393753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-209,Uid:9cbbe18b90f50f6a0aa54af7794d205b,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e32fcc9eb294c5d512604424a39f95bc448b1f9f2fbc0041726116ac15df0ce\"" Apr 30 03:32:49.267004 containerd[2097]: time="2025-04-30T03:32:49.266983595Z" level=info msg="CreateContainer within sandbox \"445047f90d2a31280be6c2c464e4b6e1b0f09410db3c5d5f1997b439443b68d9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 30 03:32:49.270901 containerd[2097]: time="2025-04-30T03:32:49.270872123Z" level=info msg="CreateContainer within sandbox \"8e32fcc9eb294c5d512604424a39f95bc448b1f9f2fbc0041726116ac15df0ce\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 30 03:32:49.295418 containerd[2097]: time="2025-04-30T03:32:49.295379073Z" level=info msg="CreateContainer within sandbox \"a36cb6b7efcbe6c32525813faecd3a4e6c1075938a3aa1f4f3d67479f4c5fd85\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6e74406c3b608dfee733a8075ade4cc0ec37ba71c592f8ba91ed9b020b13a62a\"" Apr 30 03:32:49.296197 containerd[2097]: time="2025-04-30T03:32:49.296165741Z" level=info msg="StartContainer for \"6e74406c3b608dfee733a8075ade4cc0ec37ba71c592f8ba91ed9b020b13a62a\"" Apr 30 03:32:49.312791 kubelet[2995]: W0430 03:32:49.312703 2995 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.18.209:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.18.209:6443: connect: connection refused Apr 30 03:32:49.312791 kubelet[2995]: E0430 03:32:49.312766 2995 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
"https://172.31.18.209:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.18.209:6443: connect: connection refused Apr 30 03:32:49.322071 containerd[2097]: time="2025-04-30T03:32:49.321877726Z" level=info msg="CreateContainer within sandbox \"8e32fcc9eb294c5d512604424a39f95bc448b1f9f2fbc0041726116ac15df0ce\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"90636ad9422db2b5c57795148c6ef17a100c9e8e67bf0e0499ff8f2555bbcd89\"" Apr 30 03:32:49.324219 containerd[2097]: time="2025-04-30T03:32:49.322915833Z" level=info msg="StartContainer for \"90636ad9422db2b5c57795148c6ef17a100c9e8e67bf0e0499ff8f2555bbcd89\"" Apr 30 03:32:49.346101 kubelet[2995]: E0430 03:32:49.346040 2995 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.209:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-209?timeout=10s\": dial tcp 172.31.18.209:6443: connect: connection refused" interval="1.6s" Apr 30 03:32:49.363915 containerd[2097]: time="2025-04-30T03:32:49.363856921Z" level=info msg="CreateContainer within sandbox \"445047f90d2a31280be6c2c464e4b6e1b0f09410db3c5d5f1997b439443b68d9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e745c5ab512c6c2691c645f96537f73db1f51ca900625a7cab50c9217abcf979\"" Apr 30 03:32:49.364968 containerd[2097]: time="2025-04-30T03:32:49.364869077Z" level=info msg="StartContainer for \"e745c5ab512c6c2691c645f96537f73db1f51ca900625a7cab50c9217abcf979\"" Apr 30 03:32:49.418676 containerd[2097]: time="2025-04-30T03:32:49.417955623Z" level=info msg="StartContainer for \"6e74406c3b608dfee733a8075ade4cc0ec37ba71c592f8ba91ed9b020b13a62a\" returns successfully" Apr 30 03:32:49.455125 kubelet[2995]: I0430 03:32:49.453919 2995 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-209" Apr 30 03:32:49.455125 kubelet[2995]: E0430 03:32:49.455092 2995 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.18.209:6443/api/v1/nodes\": dial tcp 172.31.18.209:6443: connect: connection refused" node="ip-172-31-18-209" Apr 30 03:32:49.462621 containerd[2097]: time="2025-04-30T03:32:49.461804914Z" level=info msg="StartContainer for \"90636ad9422db2b5c57795148c6ef17a100c9e8e67bf0e0499ff8f2555bbcd89\" returns successfully" Apr 30 03:32:49.497757 containerd[2097]: time="2025-04-30T03:32:49.497382368Z" level=info msg="StartContainer for \"e745c5ab512c6c2691c645f96537f73db1f51ca900625a7cab50c9217abcf979\" returns successfully" Apr 30 03:32:49.537241 kubelet[2995]: E0430 03:32:49.537106 2995 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.18.209:6443/api/v1/namespaces/default/events\": dial tcp 172.31.18.209:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-18-209.183afb2b1012c0e5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-209,UID:ip-172-31-18-209,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-209,},FirstTimestamp:2025-04-30 03:32:47.931318501 +0000 UTC m=+0.312461301,LastTimestamp:2025-04-30 03:32:47.931318501 +0000 UTC m=+0.312461301,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-209,}" Apr 30 03:32:50.037563 kubelet[2995]: E0430 03:32:50.037512 2995 
certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.18.209:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.18.209:6443: connect: connection refused Apr 30 03:32:51.058444 kubelet[2995]: I0430 03:32:51.058170 2995 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-209" Apr 30 03:32:52.059346 kubelet[2995]: I0430 03:32:52.057680 2995 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-18-209" Apr 30 03:32:52.930460 kubelet[2995]: I0430 03:32:52.930215 2995 apiserver.go:52] "Watching apiserver" Apr 30 03:32:52.943401 kubelet[2995]: I0430 03:32:52.943328 2995 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 03:32:54.011864 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Apr 30 03:32:54.058301 systemd[1]: Reloading requested from client PID 3277 ('systemctl') (unit session-7.scope)... Apr 30 03:32:54.058319 systemd[1]: Reloading... Apr 30 03:32:54.178388 zram_generator::config[3324]: No configuration found. Apr 30 03:32:54.305043 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:32:54.399807 systemd[1]: Reloading finished in 340 ms. Apr 30 03:32:54.436662 kubelet[2995]: I0430 03:32:54.436581 2995 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 03:32:54.437313 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:32:54.450857 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 03:32:54.451311 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:32:54.459189 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:32:54.711562 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:32:54.716516 (kubelet)[3387]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 03:32:54.798764 kubelet[3387]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 03:32:54.798764 kubelet[3387]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 03:32:54.798764 kubelet[3387]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
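The dial tcp 172.31.18.209:6443 connection-refused errors above stop once the static kube-apiserver container started in the previous lines begins listening, after which node registration succeeds at 03:32:51-52. A minimal wait-for-port sketch of that readiness condition; the address comes from the log, the timeout values are arbitrary:

```python
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 60.0, interval: float = 1.0) -> bool:
    """Return True once a TCP connection to host:port succeeds, False after the timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2.0):
                return True
        except OSError:
            time.sleep(interval)  # connection refused until the apiserver is up
    return False

# wait_for_port("172.31.18.209", 6443)
```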
Apr 30 03:32:54.799184 kubelet[3387]: I0430 03:32:54.798851 3387 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 03:32:54.804083 kubelet[3387]: I0430 03:32:54.804043 3387 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 30 03:32:54.804083 kubelet[3387]: I0430 03:32:54.804068 3387 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 03:32:54.804295 kubelet[3387]: I0430 03:32:54.804264 3387 server.go:927] "Client rotation is on, will bootstrap in background" Apr 30 03:32:54.807420 kubelet[3387]: I0430 03:32:54.807353 3387 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Apr 30 03:32:54.809102 kubelet[3387]: I0430 03:32:54.808541 3387 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 03:32:54.816737 kubelet[3387]: I0430 03:32:54.815914 3387 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 30 03:32:54.816737 kubelet[3387]: I0430 03:32:54.816414 3387 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 03:32:54.816737 kubelet[3387]: I0430 03:32:54.816442 3387 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-209","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 30 03:32:54.816737 kubelet[3387]: I0430 03:32:54.816653 3387 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 03:32:54.816997 kubelet[3387]: I0430 03:32:54.816663 3387 container_manager_linux.go:301] "Creating device plugin manager" Apr 30 03:32:54.817045 kubelet[3387]: I0430 03:32:54.817034 3387 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:32:54.817225 kubelet[3387]: I0430 03:32:54.817210 3387 kubelet.go:400] "Attempting to sync node with API server" Apr 30 03:32:54.817328 kubelet[3387]: I0430 03:32:54.817318 3387 kubelet.go:301] "Adding static 
pod path" path="/etc/kubernetes/manifests" Apr 30 03:32:54.817432 kubelet[3387]: I0430 03:32:54.817424 3387 kubelet.go:312] "Adding apiserver pod source" Apr 30 03:32:54.817522 kubelet[3387]: I0430 03:32:54.817512 3387 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 03:32:54.819506 kubelet[3387]: I0430 03:32:54.819481 3387 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 03:32:54.823054 kubelet[3387]: I0430 03:32:54.823014 3387 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 03:32:54.825381 kubelet[3387]: I0430 03:32:54.823513 3387 server.go:1264] "Started kubelet" Apr 30 03:32:54.826104 kubelet[3387]: I0430 03:32:54.826077 3387 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 03:32:54.844004 kubelet[3387]: I0430 03:32:54.843906 3387 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 03:32:54.848384 kubelet[3387]: I0430 03:32:54.847170 3387 server.go:455] "Adding debug handlers to kubelet server" Apr 30 03:32:54.848757 kubelet[3387]: I0430 03:32:54.848704 3387 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 03:32:54.848960 kubelet[3387]: I0430 03:32:54.848943 3387 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 03:32:54.855700 kubelet[3387]: I0430 03:32:54.855424 3387 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 30 03:32:54.857742 kubelet[3387]: I0430 03:32:54.857334 3387 reconciler.go:26] "Reconciler: start to sync state" Apr 30 03:32:54.857951 kubelet[3387]: I0430 03:32:54.857935 3387 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 03:32:54.862550 kubelet[3387]: I0430 03:32:54.862270 3387 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 03:32:54.863051 kubelet[3387]: I0430 03:32:54.863030 3387 factory.go:221] Registration of the systemd container factory successfully Apr 30 03:32:54.863767 kubelet[3387]: I0430 03:32:54.863740 3387 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 03:32:54.864155 kubelet[3387]: I0430 03:32:54.864133 3387 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 30 03:32:54.864224 kubelet[3387]: I0430 03:32:54.864183 3387 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 03:32:54.864224 kubelet[3387]: I0430 03:32:54.864206 3387 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 03:32:54.864316 kubelet[3387]: E0430 03:32:54.864250 3387 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 03:32:54.866193 kubelet[3387]: E0430 03:32:54.866173 3387 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 03:32:54.872213 kubelet[3387]: I0430 03:32:54.872181 3387 factory.go:221] Registration of the containerd container factory successfully Apr 30 03:32:54.938981 kubelet[3387]: I0430 03:32:54.938958 3387 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 03:32:54.939116 kubelet[3387]: I0430 03:32:54.939107 3387 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 03:32:54.939176 kubelet[3387]: I0430 03:32:54.939171 3387 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:32:54.939354 kubelet[3387]: I0430 03:32:54.939344 3387 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 30 03:32:54.939543 kubelet[3387]: I0430 03:32:54.939401 3387 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 30 03:32:54.939543 kubelet[3387]: I0430 03:32:54.939431 3387 policy_none.go:49] "None policy: Start" Apr 30 03:32:54.940164 kubelet[3387]: I0430 03:32:54.940140 3387 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 03:32:54.940164 kubelet[3387]: I0430 03:32:54.940168 3387 state_mem.go:35] "Initializing new in-memory state store" Apr 30 03:32:54.940346 kubelet[3387]: I0430 03:32:54.940328 3387 state_mem.go:75] "Updated machine memory state" Apr 30 03:32:54.941660 kubelet[3387]: I0430 03:32:54.941633 3387 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 03:32:54.941848 kubelet[3387]: I0430 03:32:54.941812 3387 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 03:32:54.941948 kubelet[3387]: I0430 03:32:54.941932 3387 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 03:32:54.963514 kubelet[3387]: I0430 03:32:54.960186 3387 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-209" Apr 30 03:32:54.967394 kubelet[3387]: I0430 03:32:54.967318 3387 topology_manager.go:215] "Topology Admit Handler" podUID="d47901109d9f20265bbd4862ff3c446e" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-18-209" Apr 30 03:32:54.967743 kubelet[3387]: I0430 03:32:54.967458 3387 topology_manager.go:215] "Topology Admit Handler" podUID="9a27c500bdc0da369fde9fa4f036abc6" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-18-209" Apr 30 03:32:54.967743 kubelet[3387]: I0430 03:32:54.967535 3387 topology_manager.go:215] "Topology Admit Handler" podUID="9cbbe18b90f50f6a0aa54af7794d205b" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-18-209" Apr 30 03:32:54.985417 kubelet[3387]: I0430 03:32:54.985230 3387 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-18-209" Apr 30 03:32:54.985417 kubelet[3387]: I0430 03:32:54.985299 3387 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-18-209" Apr 30 03:32:55.078671 sudo[3419]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 30 03:32:55.078997 sudo[3419]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 30 03:32:55.159940 kubelet[3387]: I0430 03:32:55.159209 3387 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d47901109d9f20265bbd4862ff3c446e-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-209\" (UID: \"d47901109d9f20265bbd4862ff3c446e\") " 
pod="kube-system/kube-apiserver-ip-172-31-18-209" Apr 30 03:32:55.159940 kubelet[3387]: I0430 03:32:55.159280 3387 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9a27c500bdc0da369fde9fa4f036abc6-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-209\" (UID: \"9a27c500bdc0da369fde9fa4f036abc6\") " pod="kube-system/kube-controller-manager-ip-172-31-18-209" Apr 30 03:32:55.159940 kubelet[3387]: I0430 03:32:55.159558 3387 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9a27c500bdc0da369fde9fa4f036abc6-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-209\" (UID: \"9a27c500bdc0da369fde9fa4f036abc6\") " pod="kube-system/kube-controller-manager-ip-172-31-18-209" Apr 30 03:32:55.159940 kubelet[3387]: I0430 03:32:55.159587 3387 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9a27c500bdc0da369fde9fa4f036abc6-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-209\" (UID: \"9a27c500bdc0da369fde9fa4f036abc6\") " pod="kube-system/kube-controller-manager-ip-172-31-18-209" Apr 30 03:32:55.159940 kubelet[3387]: I0430 03:32:55.159637 3387 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9a27c500bdc0da369fde9fa4f036abc6-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-209\" (UID: \"9a27c500bdc0da369fde9fa4f036abc6\") " pod="kube-system/kube-controller-manager-ip-172-31-18-209" Apr 30 03:32:55.160425 kubelet[3387]: I0430 03:32:55.159663 3387 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d47901109d9f20265bbd4862ff3c446e-ca-certs\") pod \"kube-apiserver-ip-172-31-18-209\" (UID: \"d47901109d9f20265bbd4862ff3c446e\") " pod="kube-system/kube-apiserver-ip-172-31-18-209" Apr 30 03:32:55.160425 kubelet[3387]: I0430 03:32:55.159768 3387 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d47901109d9f20265bbd4862ff3c446e-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-209\" (UID: \"d47901109d9f20265bbd4862ff3c446e\") " pod="kube-system/kube-apiserver-ip-172-31-18-209" Apr 30 03:32:55.160425 kubelet[3387]: I0430 03:32:55.159794 3387 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9a27c500bdc0da369fde9fa4f036abc6-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-209\" (UID: \"9a27c500bdc0da369fde9fa4f036abc6\") " pod="kube-system/kube-controller-manager-ip-172-31-18-209" Apr 30 03:32:55.160425 kubelet[3387]: I0430 03:32:55.159817 3387 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9cbbe18b90f50f6a0aa54af7794d205b-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-209\" (UID: \"9cbbe18b90f50f6a0aa54af7794d205b\") " pod="kube-system/kube-scheduler-ip-172-31-18-209" Apr 30 03:32:55.747621 sudo[3419]: pam_unix(sudo:session): session closed for user root Apr 30 03:32:55.827381 kubelet[3387]: I0430 03:32:55.827316 3387 apiserver.go:52] "Watching 
apiserver" Apr 30 03:32:55.859259 kubelet[3387]: I0430 03:32:55.859045 3387 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 03:32:55.919018 kubelet[3387]: E0430 03:32:55.918908 3387 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-18-209\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-18-209" Apr 30 03:32:55.962815 kubelet[3387]: I0430 03:32:55.962642 3387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-18-209" podStartSLOduration=1.9626190380000001 podStartE2EDuration="1.962619038s" podCreationTimestamp="2025-04-30 03:32:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:32:55.948405117 +0000 UTC m=+1.225562129" watchObservedRunningTime="2025-04-30 03:32:55.962619038 +0000 UTC m=+1.239776049" Apr 30 03:32:55.963042 kubelet[3387]: I0430 03:32:55.962898 3387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-18-209" podStartSLOduration=1.962887495 podStartE2EDuration="1.962887495s" podCreationTimestamp="2025-04-30 03:32:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:32:55.960834666 +0000 UTC m=+1.237991683" watchObservedRunningTime="2025-04-30 03:32:55.962887495 +0000 UTC m=+1.240044513" Apr 30 03:32:55.974374 kubelet[3387]: I0430 03:32:55.974186 3387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-18-209" podStartSLOduration=1.974150657 podStartE2EDuration="1.974150657s" podCreationTimestamp="2025-04-30 03:32:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:32:55.973921763 +0000 UTC m=+1.251078781" watchObservedRunningTime="2025-04-30 03:32:55.974150657 +0000 UTC m=+1.251307675" Apr 30 03:32:57.638529 sudo[2436]: pam_unix(sudo:session): session closed for user root Apr 30 03:32:57.676415 sshd[2432]: pam_unix(sshd:session): session closed for user core Apr 30 03:32:57.679393 systemd[1]: sshd@6-172.31.18.209:22-147.75.109.163:39242.service: Deactivated successfully. Apr 30 03:32:57.684424 systemd-logind[2075]: Session 7 logged out. Waiting for processes to exit. Apr 30 03:32:57.684512 systemd[1]: session-7.scope: Deactivated successfully. Apr 30 03:32:57.686120 systemd-logind[2075]: Removed session 7. Apr 30 03:33:08.135917 update_engine[2078]: I20250430 03:33:08.135840 2078 update_attempter.cc:509] Updating boot flags... Apr 30 03:33:08.215092 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3469) Apr 30 03:33:08.393514 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3470) Apr 30 03:33:08.736048 kubelet[3387]: I0430 03:33:08.736006 3387 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 30 03:33:08.736711 containerd[2097]: time="2025-04-30T03:33:08.736519325Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 30 03:33:08.737195 kubelet[3387]: I0430 03:33:08.736813 3387 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 30 03:33:09.409277 kubelet[3387]: I0430 03:33:09.409239 3387 topology_manager.go:215] "Topology Admit Handler" podUID="0ed96bd9-51f8-4a7f-9ec3-db866c95cdf1" podNamespace="kube-system" podName="kube-proxy-5skxl" Apr 30 03:33:09.415447 kubelet[3387]: I0430 03:33:09.414202 3387 topology_manager.go:215] "Topology Admit Handler" podUID="c2660554-1217-4f03-9ba9-9714b88b5a02" podNamespace="kube-system" podName="cilium-4vxdt" Apr 30 03:33:09.561736 kubelet[3387]: I0430 03:33:09.561699 3387 topology_manager.go:215] "Topology Admit Handler" podUID="580ceb57-785f-4135-b9c1-cd9729fc2aa3" podNamespace="kube-system" podName="cilium-operator-599987898-9xvfk" Apr 30 03:33:09.566094 kubelet[3387]: I0430 03:33:09.566046 3387 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c2660554-1217-4f03-9ba9-9714b88b5a02-etc-cni-netd\") pod \"cilium-4vxdt\" (UID: \"c2660554-1217-4f03-9ba9-9714b88b5a02\") " pod="kube-system/cilium-4vxdt" Apr 30 03:33:09.566094 kubelet[3387]: I0430 03:33:09.566091 3387 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c2660554-1217-4f03-9ba9-9714b88b5a02-cilium-config-path\") pod \"cilium-4vxdt\" (UID: \"c2660554-1217-4f03-9ba9-9714b88b5a02\") " pod="kube-system/cilium-4vxdt" Apr 30 03:33:09.566255 kubelet[3387]: I0430 03:33:09.566113 3387 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0ed96bd9-51f8-4a7f-9ec3-db866c95cdf1-lib-modules\") pod \"kube-proxy-5skxl\" (UID: \"0ed96bd9-51f8-4a7f-9ec3-db866c95cdf1\") " pod="kube-system/kube-proxy-5skxl" Apr 30 03:33:09.566255 kubelet[3387]: I0430 03:33:09.566133 3387 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0ed96bd9-51f8-4a7f-9ec3-db866c95cdf1-kube-proxy\") pod \"kube-proxy-5skxl\" (UID: \"0ed96bd9-51f8-4a7f-9ec3-db866c95cdf1\") " pod="kube-system/kube-proxy-5skxl" Apr 30 03:33:09.566255 kubelet[3387]: I0430 03:33:09.566147 3387 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c2660554-1217-4f03-9ba9-9714b88b5a02-hostproc\") pod \"cilium-4vxdt\" (UID: \"c2660554-1217-4f03-9ba9-9714b88b5a02\") " pod="kube-system/cilium-4vxdt" Apr 30 03:33:09.566255 kubelet[3387]: I0430 03:33:09.566163 3387 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c2660554-1217-4f03-9ba9-9714b88b5a02-clustermesh-secrets\") pod \"cilium-4vxdt\" (UID: \"c2660554-1217-4f03-9ba9-9714b88b5a02\") " pod="kube-system/cilium-4vxdt" Apr 30 03:33:09.566255 kubelet[3387]: I0430 03:33:09.566179 3387 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c2660554-1217-4f03-9ba9-9714b88b5a02-host-proc-sys-kernel\") pod \"cilium-4vxdt\" (UID: \"c2660554-1217-4f03-9ba9-9714b88b5a02\") " pod="kube-system/cilium-4vxdt" Apr 30 03:33:09.566255 kubelet[3387]: I0430 03:33:09.566196 3387 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c2660554-1217-4f03-9ba9-9714b88b5a02-host-proc-sys-net\") pod \"cilium-4vxdt\" (UID: \"c2660554-1217-4f03-9ba9-9714b88b5a02\") " pod="kube-system/cilium-4vxdt" Apr 30 03:33:09.566430 kubelet[3387]: I0430 03:33:09.566211 3387 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c2660554-1217-4f03-9ba9-9714b88b5a02-cilium-run\") pod \"cilium-4vxdt\" (UID: \"c2660554-1217-4f03-9ba9-9714b88b5a02\") " pod="kube-system/cilium-4vxdt" Apr 30 03:33:09.566430 kubelet[3387]: I0430 03:33:09.566225 3387 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c2660554-1217-4f03-9ba9-9714b88b5a02-cilium-cgroup\") pod \"cilium-4vxdt\" (UID: \"c2660554-1217-4f03-9ba9-9714b88b5a02\") " pod="kube-system/cilium-4vxdt" Apr 30 03:33:09.566430 kubelet[3387]: I0430 03:33:09.566249 3387 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0ed96bd9-51f8-4a7f-9ec3-db866c95cdf1-xtables-lock\") pod \"kube-proxy-5skxl\" (UID: \"0ed96bd9-51f8-4a7f-9ec3-db866c95cdf1\") " pod="kube-system/kube-proxy-5skxl" Apr 30 03:33:09.566430 kubelet[3387]: I0430 03:33:09.566263 3387 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c2660554-1217-4f03-9ba9-9714b88b5a02-lib-modules\") pod \"cilium-4vxdt\" (UID: \"c2660554-1217-4f03-9ba9-9714b88b5a02\") " pod="kube-system/cilium-4vxdt" Apr 30 03:33:09.566430 kubelet[3387]: I0430 03:33:09.566296 3387 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c2660554-1217-4f03-9ba9-9714b88b5a02-hubble-tls\") pod \"cilium-4vxdt\" (UID: \"c2660554-1217-4f03-9ba9-9714b88b5a02\") " pod="kube-system/cilium-4vxdt" Apr 30 03:33:09.566430 kubelet[3387]: I0430 03:33:09.566322 3387 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbcnw\" (UniqueName: \"kubernetes.io/projected/0ed96bd9-51f8-4a7f-9ec3-db866c95cdf1-kube-api-access-vbcnw\") pod \"kube-proxy-5skxl\" (UID: \"0ed96bd9-51f8-4a7f-9ec3-db866c95cdf1\") " pod="kube-system/kube-proxy-5skxl" Apr 30 03:33:09.566576 kubelet[3387]: I0430 03:33:09.566337 3387 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c2660554-1217-4f03-9ba9-9714b88b5a02-bpf-maps\") pod \"cilium-4vxdt\" (UID: \"c2660554-1217-4f03-9ba9-9714b88b5a02\") " pod="kube-system/cilium-4vxdt" Apr 30 03:33:09.566576 kubelet[3387]: I0430 03:33:09.566352 3387 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c2660554-1217-4f03-9ba9-9714b88b5a02-cni-path\") pod \"cilium-4vxdt\" (UID: \"c2660554-1217-4f03-9ba9-9714b88b5a02\") " pod="kube-system/cilium-4vxdt" Apr 30 03:33:09.566576 kubelet[3387]: I0430 03:33:09.566380 3387 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c2660554-1217-4f03-9ba9-9714b88b5a02-xtables-lock\") pod 
\"cilium-4vxdt\" (UID: \"c2660554-1217-4f03-9ba9-9714b88b5a02\") " pod="kube-system/cilium-4vxdt" Apr 30 03:33:09.566576 kubelet[3387]: I0430 03:33:09.566398 3387 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhxf2\" (UniqueName: \"kubernetes.io/projected/c2660554-1217-4f03-9ba9-9714b88b5a02-kube-api-access-rhxf2\") pod \"cilium-4vxdt\" (UID: \"c2660554-1217-4f03-9ba9-9714b88b5a02\") " pod="kube-system/cilium-4vxdt" Apr 30 03:33:09.667117 kubelet[3387]: I0430 03:33:09.666882 3387 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/580ceb57-785f-4135-b9c1-cd9729fc2aa3-cilium-config-path\") pod \"cilium-operator-599987898-9xvfk\" (UID: \"580ceb57-785f-4135-b9c1-cd9729fc2aa3\") " pod="kube-system/cilium-operator-599987898-9xvfk" Apr 30 03:33:09.667117 kubelet[3387]: I0430 03:33:09.666932 3387 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqt9w\" (UniqueName: \"kubernetes.io/projected/580ceb57-785f-4135-b9c1-cd9729fc2aa3-kube-api-access-cqt9w\") pod \"cilium-operator-599987898-9xvfk\" (UID: \"580ceb57-785f-4135-b9c1-cd9729fc2aa3\") " pod="kube-system/cilium-operator-599987898-9xvfk" Apr 30 03:33:09.727173 containerd[2097]: time="2025-04-30T03:33:09.727137445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5skxl,Uid:0ed96bd9-51f8-4a7f-9ec3-db866c95cdf1,Namespace:kube-system,Attempt:0,}" Apr 30 03:33:09.727985 containerd[2097]: time="2025-04-30T03:33:09.727948779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4vxdt,Uid:c2660554-1217-4f03-9ba9-9714b88b5a02,Namespace:kube-system,Attempt:0,}" Apr 30 03:33:09.792541 containerd[2097]: time="2025-04-30T03:33:09.792451833Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:33:09.795527 containerd[2097]: time="2025-04-30T03:33:09.792513629Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:33:09.795527 containerd[2097]: time="2025-04-30T03:33:09.795244536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:33:09.795527 containerd[2097]: time="2025-04-30T03:33:09.795344927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:33:09.802384 containerd[2097]: time="2025-04-30T03:33:09.800640531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:33:09.802384 containerd[2097]: time="2025-04-30T03:33:09.800752973Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:33:09.802384 containerd[2097]: time="2025-04-30T03:33:09.800766388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:33:09.802384 containerd[2097]: time="2025-04-30T03:33:09.800855457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:33:09.846338 containerd[2097]: time="2025-04-30T03:33:09.846298701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4vxdt,Uid:c2660554-1217-4f03-9ba9-9714b88b5a02,Namespace:kube-system,Attempt:0,} returns sandbox id \"e87c04033fa24e1e590342e67bff2a4977eb25c739b838c5e1fc2f484c30e7ac\"" Apr 30 03:33:09.847471 containerd[2097]: time="2025-04-30T03:33:09.847446991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5skxl,Uid:0ed96bd9-51f8-4a7f-9ec3-db866c95cdf1,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e621876bf32b474af3cefdac4833a6680d1b641f24354751fd581d02bcf01ff\"" Apr 30 03:33:09.865342 containerd[2097]: time="2025-04-30T03:33:09.865313028Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 30 03:33:09.871980 containerd[2097]: time="2025-04-30T03:33:09.871576772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-9xvfk,Uid:580ceb57-785f-4135-b9c1-cd9729fc2aa3,Namespace:kube-system,Attempt:0,}" Apr 30 03:33:09.872828 containerd[2097]: time="2025-04-30T03:33:09.872778222Z" level=info msg="CreateContainer within sandbox \"0e621876bf32b474af3cefdac4833a6680d1b641f24354751fd581d02bcf01ff\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 30 03:33:09.912451 containerd[2097]: time="2025-04-30T03:33:09.912158994Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:33:09.912451 containerd[2097]: time="2025-04-30T03:33:09.912310431Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:33:09.912451 containerd[2097]: time="2025-04-30T03:33:09.912328355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:33:09.913552 containerd[2097]: time="2025-04-30T03:33:09.913328339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:33:09.921485 containerd[2097]: time="2025-04-30T03:33:09.920883671Z" level=info msg="CreateContainer within sandbox \"0e621876bf32b474af3cefdac4833a6680d1b641f24354751fd581d02bcf01ff\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"beb657a3ebc03ab1d3b5bc37f41be3939e9608e3460945c347949a902b24a875\"" Apr 30 03:33:09.925527 containerd[2097]: time="2025-04-30T03:33:09.925490255Z" level=info msg="StartContainer for \"beb657a3ebc03ab1d3b5bc37f41be3939e9608e3460945c347949a902b24a875\"" Apr 30 03:33:10.021221 containerd[2097]: time="2025-04-30T03:33:10.021183740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-9xvfk,Uid:580ceb57-785f-4135-b9c1-cd9729fc2aa3,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c4eb54c6fe1eb479308cb21f150a1d4e5c090c70095922ad694381c0b592e19\"" Apr 30 03:33:10.042880 containerd[2097]: time="2025-04-30T03:33:10.042821140Z" level=info msg="StartContainer for \"beb657a3ebc03ab1d3b5bc37f41be3939e9608e3460945c347949a902b24a875\" returns successfully" Apr 30 03:33:14.926710 kubelet[3387]: I0430 03:33:14.922299 3387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5skxl" podStartSLOduration=5.922278442 podStartE2EDuration="5.922278442s" podCreationTimestamp="2025-04-30 03:33:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:33:10.958015829 +0000 UTC m=+16.235172846" watchObservedRunningTime="2025-04-30 03:33:14.922278442 +0000 UTC m=+20.199435460" Apr 30 03:33:15.573178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2045287015.mount: Deactivated successfully. 
Apr 30 03:33:18.096181 containerd[2097]: time="2025-04-30T03:33:18.096120813Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:33:18.098929 containerd[2097]: time="2025-04-30T03:33:18.098853179Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Apr 30 03:33:18.101447 containerd[2097]: time="2025-04-30T03:33:18.101378177Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:33:18.103211 containerd[2097]: time="2025-04-30T03:33:18.103016425Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.237538669s" Apr 30 03:33:18.103211 containerd[2097]: time="2025-04-30T03:33:18.103055222Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Apr 30 03:33:18.104592 containerd[2097]: time="2025-04-30T03:33:18.104568418Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 30 03:33:18.112860 containerd[2097]: time="2025-04-30T03:33:18.112770870Z" level=info msg="CreateContainer within sandbox \"e87c04033fa24e1e590342e67bff2a4977eb25c739b838c5e1fc2f484c30e7ac\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 30 03:33:18.211270 containerd[2097]: time="2025-04-30T03:33:18.211217563Z" level=info msg="CreateContainer within sandbox \"e87c04033fa24e1e590342e67bff2a4977eb25c739b838c5e1fc2f484c30e7ac\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bdbb81eca362882f792d081d97877493882598ff0f5ea60e36eef026340981bf\"" Apr 30 03:33:18.212515 containerd[2097]: time="2025-04-30T03:33:18.211948183Z" level=info msg="StartContainer for \"bdbb81eca362882f792d081d97877493882598ff0f5ea60e36eef026340981bf\"" Apr 30 03:33:18.321533 containerd[2097]: time="2025-04-30T03:33:18.321491994Z" level=info msg="StartContainer for \"bdbb81eca362882f792d081d97877493882598ff0f5ea60e36eef026340981bf\" returns successfully" Apr 30 03:33:18.426900 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bdbb81eca362882f792d081d97877493882598ff0f5ea60e36eef026340981bf-rootfs.mount: Deactivated successfully. 
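containerd condenses the pull above into a single summary line: Pulled image "<ref>" with image id "...", size "<bytes>" in <duration>. A hedged sketch, based only on that message shape, for extracting the reference, size, and pull time from such a line; the regex and the ms/s handling are assumptions:

import re

# Journald escapes the inner quotes as \" in these lines, so undo that first.
_PULL = re.compile(r'Pulled image "([^"]+)".*?size "(\d+)" in ([0-9.]+)(m?s)')

def pull_stats(raw_line: str):
    line = raw_line.replace('\\"', '"')
    m = _PULL.search(line)
    if not m:
        return None
    ref, size, value, unit = m.groups()
    seconds = float(value) / 1000.0 if unit == "ms" else float(value)
    return {"image": ref, "bytes": int(size), "seconds": seconds,
            "mib_per_s": int(size) / seconds / (1024 * 1024)}

# For the cilium pull above this works out to roughly 166719855 bytes in 8.24 s, ~19 MiB/s.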
Apr 30 03:33:18.481890 containerd[2097]: time="2025-04-30T03:33:18.460216964Z" level=info msg="shim disconnected" id=bdbb81eca362882f792d081d97877493882598ff0f5ea60e36eef026340981bf namespace=k8s.io Apr 30 03:33:18.482141 containerd[2097]: time="2025-04-30T03:33:18.481890843Z" level=warning msg="cleaning up after shim disconnected" id=bdbb81eca362882f792d081d97877493882598ff0f5ea60e36eef026340981bf namespace=k8s.io Apr 30 03:33:18.482141 containerd[2097]: time="2025-04-30T03:33:18.481916887Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:33:19.012876 containerd[2097]: time="2025-04-30T03:33:19.012710176Z" level=info msg="CreateContainer within sandbox \"e87c04033fa24e1e590342e67bff2a4977eb25c739b838c5e1fc2f484c30e7ac\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 30 03:33:19.045808 containerd[2097]: time="2025-04-30T03:33:19.045753347Z" level=info msg="CreateContainer within sandbox \"e87c04033fa24e1e590342e67bff2a4977eb25c739b838c5e1fc2f484c30e7ac\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"882b1c30bb55fdd32bfe4155283a7498c1d9cfa0c6c090c0716b9b16a3e975db\"" Apr 30 03:33:19.048816 containerd[2097]: time="2025-04-30T03:33:19.047552313Z" level=info msg="StartContainer for \"882b1c30bb55fdd32bfe4155283a7498c1d9cfa0c6c090c0716b9b16a3e975db\"" Apr 30 03:33:19.098247 containerd[2097]: time="2025-04-30T03:33:19.098200994Z" level=info msg="StartContainer for \"882b1c30bb55fdd32bfe4155283a7498c1d9cfa0c6c090c0716b9b16a3e975db\" returns successfully" Apr 30 03:33:19.110829 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 03:33:19.111277 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 03:33:19.111425 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 30 03:33:19.125806 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 03:33:19.159597 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 03:33:19.168983 containerd[2097]: time="2025-04-30T03:33:19.168913802Z" level=info msg="shim disconnected" id=882b1c30bb55fdd32bfe4155283a7498c1d9cfa0c6c090c0716b9b16a3e975db namespace=k8s.io Apr 30 03:33:19.168983 containerd[2097]: time="2025-04-30T03:33:19.168964061Z" level=warning msg="cleaning up after shim disconnected" id=882b1c30bb55fdd32bfe4155283a7498c1d9cfa0c6c090c0716b9b16a3e975db namespace=k8s.io Apr 30 03:33:19.168983 containerd[2097]: time="2025-04-30T03:33:19.168973123Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:33:19.955565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3157564802.mount: Deactivated successfully. Apr 30 03:33:20.021656 containerd[2097]: time="2025-04-30T03:33:20.021582563Z" level=info msg="CreateContainer within sandbox \"e87c04033fa24e1e590342e67bff2a4977eb25c739b838c5e1fc2f484c30e7ac\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 30 03:33:20.072707 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount831049461.mount: Deactivated successfully. 
Apr 30 03:33:20.083695 containerd[2097]: time="2025-04-30T03:33:20.083656660Z" level=info msg="CreateContainer within sandbox \"e87c04033fa24e1e590342e67bff2a4977eb25c739b838c5e1fc2f484c30e7ac\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ed1191103ee7fa34564b8cdcffd8e93c79a8663db572a6d1c50cc47edb65ea9f\"" Apr 30 03:33:20.084399 containerd[2097]: time="2025-04-30T03:33:20.084188832Z" level=info msg="StartContainer for \"ed1191103ee7fa34564b8cdcffd8e93c79a8663db572a6d1c50cc47edb65ea9f\"" Apr 30 03:33:20.182938 containerd[2097]: time="2025-04-30T03:33:20.182896280Z" level=info msg="StartContainer for \"ed1191103ee7fa34564b8cdcffd8e93c79a8663db572a6d1c50cc47edb65ea9f\" returns successfully" Apr 30 03:33:20.230027 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ed1191103ee7fa34564b8cdcffd8e93c79a8663db572a6d1c50cc47edb65ea9f-rootfs.mount: Deactivated successfully. Apr 30 03:33:20.258975 containerd[2097]: time="2025-04-30T03:33:20.258899598Z" level=info msg="shim disconnected" id=ed1191103ee7fa34564b8cdcffd8e93c79a8663db572a6d1c50cc47edb65ea9f namespace=k8s.io Apr 30 03:33:20.259466 containerd[2097]: time="2025-04-30T03:33:20.259051959Z" level=warning msg="cleaning up after shim disconnected" id=ed1191103ee7fa34564b8cdcffd8e93c79a8663db572a6d1c50cc47edb65ea9f namespace=k8s.io Apr 30 03:33:20.259466 containerd[2097]: time="2025-04-30T03:33:20.259086710Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:33:21.023421 containerd[2097]: time="2025-04-30T03:33:21.022258738Z" level=info msg="CreateContainer within sandbox \"e87c04033fa24e1e590342e67bff2a4977eb25c739b838c5e1fc2f484c30e7ac\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 30 03:33:21.058967 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1023319225.mount: Deactivated successfully. Apr 30 03:33:21.063213 containerd[2097]: time="2025-04-30T03:33:21.063158703Z" level=info msg="CreateContainer within sandbox \"e87c04033fa24e1e590342e67bff2a4977eb25c739b838c5e1fc2f484c30e7ac\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3fabe40a3b4cb347f1d0d4d884bc538b67ea7c5cbd9b52ab93f6e2b067a25a71\"" Apr 30 03:33:21.065055 containerd[2097]: time="2025-04-30T03:33:21.064951104Z" level=info msg="StartContainer for \"3fabe40a3b4cb347f1d0d4d884bc538b67ea7c5cbd9b52ab93f6e2b067a25a71\"" Apr 30 03:33:21.138323 containerd[2097]: time="2025-04-30T03:33:21.137255974Z" level=info msg="StartContainer for \"3fabe40a3b4cb347f1d0d4d884bc538b67ea7c5cbd9b52ab93f6e2b067a25a71\" returns successfully" Apr 30 03:33:21.204299 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3fabe40a3b4cb347f1d0d4d884bc538b67ea7c5cbd9b52ab93f6e2b067a25a71-rootfs.mount: Deactivated successfully. 
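Each cilium init step above follows the same containerd pattern: CreateContainer within the sandbox, StartContainer ... returns successfully, then shim disconnected carrying the same container id. A rough sketch, assuming only those two message forms from the lines above, that pairs start and exit events per container id when replaying a saved journal:

import re
from collections import defaultdict

START = re.compile(r'StartContainer for "([0-9a-f]+)" returns successfully')
EXIT = re.compile(r'msg="shim disconnected" id=([0-9a-f]+)')

def container_lifecycle(journal_lines):
    """Map container id -> ordered list of observed lifecycle events."""
    events = defaultdict(list)
    for raw in journal_lines:
        line = raw.replace('\\"', '"')  # undo journald quoting around the msg text
        for pattern, kind in ((START, "started"), (EXIT, "exited")):
            m = pattern.search(line)
            if m:
                events[m.group(1)].append(kind)
    return dict(events)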
Apr 30 03:33:21.213518 containerd[2097]: time="2025-04-30T03:33:21.213469277Z" level=info msg="shim disconnected" id=3fabe40a3b4cb347f1d0d4d884bc538b67ea7c5cbd9b52ab93f6e2b067a25a71 namespace=k8s.io Apr 30 03:33:21.213518 containerd[2097]: time="2025-04-30T03:33:21.213516259Z" level=warning msg="cleaning up after shim disconnected" id=3fabe40a3b4cb347f1d0d4d884bc538b67ea7c5cbd9b52ab93f6e2b067a25a71 namespace=k8s.io Apr 30 03:33:21.213518 containerd[2097]: time="2025-04-30T03:33:21.213524767Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:33:22.024810 containerd[2097]: time="2025-04-30T03:33:22.024761771Z" level=info msg="CreateContainer within sandbox \"e87c04033fa24e1e590342e67bff2a4977eb25c739b838c5e1fc2f484c30e7ac\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 30 03:33:22.047963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2394267075.mount: Deactivated successfully. Apr 30 03:33:22.053926 containerd[2097]: time="2025-04-30T03:33:22.053789814Z" level=info msg="CreateContainer within sandbox \"e87c04033fa24e1e590342e67bff2a4977eb25c739b838c5e1fc2f484c30e7ac\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"dc60d6cca8fade8855ccd7ea18cafc406070ff8e4df2fdbcf389f0a7734a4cab\"" Apr 30 03:33:22.054952 containerd[2097]: time="2025-04-30T03:33:22.054384936Z" level=info msg="StartContainer for \"dc60d6cca8fade8855ccd7ea18cafc406070ff8e4df2fdbcf389f0a7734a4cab\"" Apr 30 03:33:22.121429 containerd[2097]: time="2025-04-30T03:33:22.121384035Z" level=info msg="StartContainer for \"dc60d6cca8fade8855ccd7ea18cafc406070ff8e4df2fdbcf389f0a7734a4cab\" returns successfully" Apr 30 03:33:22.353156 kubelet[3387]: I0430 03:33:22.352920 3387 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Apr 30 03:33:22.420702 kubelet[3387]: I0430 03:33:22.420653 3387 topology_manager.go:215] "Topology Admit Handler" podUID="ab7f9f9a-bb5d-4498-a98b-d5fe61e77081" podNamespace="kube-system" podName="coredns-7db6d8ff4d-l7j4x" Apr 30 03:33:22.424519 kubelet[3387]: I0430 03:33:22.421869 3387 topology_manager.go:215] "Topology Admit Handler" podUID="dc30ebfa-461f-4b80-bd25-5c90be99dfff" podNamespace="kube-system" podName="coredns-7db6d8ff4d-6jml4" Apr 30 03:33:22.474386 kubelet[3387]: I0430 03:33:22.472581 3387 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc30ebfa-461f-4b80-bd25-5c90be99dfff-config-volume\") pod \"coredns-7db6d8ff4d-6jml4\" (UID: \"dc30ebfa-461f-4b80-bd25-5c90be99dfff\") " pod="kube-system/coredns-7db6d8ff4d-6jml4" Apr 30 03:33:22.474386 kubelet[3387]: I0430 03:33:22.472629 3387 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqr6r\" (UniqueName: \"kubernetes.io/projected/dc30ebfa-461f-4b80-bd25-5c90be99dfff-kube-api-access-sqr6r\") pod \"coredns-7db6d8ff4d-6jml4\" (UID: \"dc30ebfa-461f-4b80-bd25-5c90be99dfff\") " pod="kube-system/coredns-7db6d8ff4d-6jml4" Apr 30 03:33:22.474386 kubelet[3387]: I0430 03:33:22.472674 3387 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfj8c\" (UniqueName: \"kubernetes.io/projected/ab7f9f9a-bb5d-4498-a98b-d5fe61e77081-kube-api-access-rfj8c\") pod \"coredns-7db6d8ff4d-l7j4x\" (UID: \"ab7f9f9a-bb5d-4498-a98b-d5fe61e77081\") " pod="kube-system/coredns-7db6d8ff4d-l7j4x" Apr 30 03:33:22.474386 kubelet[3387]: I0430 03:33:22.472698 3387 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab7f9f9a-bb5d-4498-a98b-d5fe61e77081-config-volume\") pod \"coredns-7db6d8ff4d-l7j4x\" (UID: \"ab7f9f9a-bb5d-4498-a98b-d5fe61e77081\") " pod="kube-system/coredns-7db6d8ff4d-l7j4x" Apr 30 03:33:22.731341 containerd[2097]: time="2025-04-30T03:33:22.731131820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-l7j4x,Uid:ab7f9f9a-bb5d-4498-a98b-d5fe61e77081,Namespace:kube-system,Attempt:0,}" Apr 30 03:33:22.736296 containerd[2097]: time="2025-04-30T03:33:22.735993520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6jml4,Uid:dc30ebfa-461f-4b80-bd25-5c90be99dfff,Namespace:kube-system,Attempt:0,}" Apr 30 03:33:23.042492 kubelet[3387]: I0430 03:33:23.042343 3387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4vxdt" podStartSLOduration=5.802593045 podStartE2EDuration="14.042327898s" podCreationTimestamp="2025-04-30 03:33:09 +0000 UTC" firstStartedPulling="2025-04-30 03:33:09.864273236 +0000 UTC m=+15.141430234" lastFinishedPulling="2025-04-30 03:33:18.104008078 +0000 UTC m=+23.381165087" observedRunningTime="2025-04-30 03:33:23.042073633 +0000 UTC m=+28.319230650" watchObservedRunningTime="2025-04-30 03:33:23.042327898 +0000 UTC m=+28.319484913" Apr 30 03:33:26.025715 containerd[2097]: time="2025-04-30T03:33:26.025661198Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:33:26.027436 containerd[2097]: time="2025-04-30T03:33:26.027380095Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Apr 30 03:33:26.029579 containerd[2097]: time="2025-04-30T03:33:26.029534162Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:33:26.031368 containerd[2097]: time="2025-04-30T03:33:26.031162958Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 7.926564424s" Apr 30 03:33:26.031368 containerd[2097]: time="2025-04-30T03:33:26.031194234Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Apr 30 03:33:26.033266 containerd[2097]: time="2025-04-30T03:33:26.033240480Z" level=info msg="CreateContainer within sandbox \"0c4eb54c6fe1eb479308cb21f150a1d4e5c090c70095922ad694381c0b592e19\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 30 03:33:26.067106 containerd[2097]: time="2025-04-30T03:33:26.067055683Z" level=info msg="CreateContainer within sandbox \"0c4eb54c6fe1eb479308cb21f150a1d4e5c090c70095922ad694381c0b592e19\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id 
\"3dbe2a071a15158a95defe3f776ef7f97cf72b7824ca818d6d4219ecf550b165\"" Apr 30 03:33:26.067553 containerd[2097]: time="2025-04-30T03:33:26.067534708Z" level=info msg="StartContainer for \"3dbe2a071a15158a95defe3f776ef7f97cf72b7824ca818d6d4219ecf550b165\"" Apr 30 03:33:26.123628 containerd[2097]: time="2025-04-30T03:33:26.123476313Z" level=info msg="StartContainer for \"3dbe2a071a15158a95defe3f776ef7f97cf72b7824ca818d6d4219ecf550b165\" returns successfully" Apr 30 03:33:27.613614 systemd[1]: Started sshd@7-172.31.18.209:22-147.75.109.163:39056.service - OpenSSH per-connection server daemon (147.75.109.163:39056). Apr 30 03:33:27.873171 sshd[4380]: Accepted publickey for core from 147.75.109.163 port 39056 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:33:27.875186 sshd[4380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:33:27.882894 systemd-logind[2075]: New session 8 of user core. Apr 30 03:33:27.889562 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 30 03:33:28.664516 sshd[4380]: pam_unix(sshd:session): session closed for user core Apr 30 03:33:28.671439 systemd[1]: sshd@7-172.31.18.209:22-147.75.109.163:39056.service: Deactivated successfully. Apr 30 03:33:28.674356 systemd-logind[2075]: Session 8 logged out. Waiting for processes to exit. Apr 30 03:33:28.674626 systemd[1]: session-8.scope: Deactivated successfully. Apr 30 03:33:28.676003 systemd-logind[2075]: Removed session 8. Apr 30 03:33:30.222084 systemd-networkd[1652]: cilium_host: Link UP Apr 30 03:33:30.222305 systemd-networkd[1652]: cilium_net: Link UP Apr 30 03:33:30.223489 systemd-networkd[1652]: cilium_net: Gained carrier Apr 30 03:33:30.224214 (udev-worker)[4397]: Network interface NamePolicy= disabled on kernel command line. Apr 30 03:33:30.224227 systemd-networkd[1652]: cilium_host: Gained carrier Apr 30 03:33:30.225511 (udev-worker)[4398]: Network interface NamePolicy= disabled on kernel command line. Apr 30 03:33:30.352260 systemd-networkd[1652]: cilium_vxlan: Link UP Apr 30 03:33:30.352282 systemd-networkd[1652]: cilium_vxlan: Gained carrier Apr 30 03:33:30.849569 systemd-networkd[1652]: cilium_net: Gained IPv6LL Apr 30 03:33:30.850260 systemd-networkd[1652]: cilium_host: Gained IPv6LL Apr 30 03:33:30.875414 kernel: NET: Registered PF_ALG protocol family Apr 30 03:33:31.598959 systemd-networkd[1652]: lxc_health: Link UP Apr 30 03:33:31.603890 (udev-worker)[4408]: Network interface NamePolicy= disabled on kernel command line. 
Apr 30 03:33:31.606222 systemd-networkd[1652]: lxc_health: Gained carrier Apr 30 03:33:31.745515 systemd-networkd[1652]: cilium_vxlan: Gained IPv6LL Apr 30 03:33:31.784085 kubelet[3387]: I0430 03:33:31.783403 3387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-9xvfk" podStartSLOduration=6.776803193 podStartE2EDuration="22.783377613s" podCreationTimestamp="2025-04-30 03:33:09 +0000 UTC" firstStartedPulling="2025-04-30 03:33:10.025486846 +0000 UTC m=+15.302643846" lastFinishedPulling="2025-04-30 03:33:26.032061266 +0000 UTC m=+31.309218266" observedRunningTime="2025-04-30 03:33:27.045134455 +0000 UTC m=+32.322291473" watchObservedRunningTime="2025-04-30 03:33:31.783377613 +0000 UTC m=+37.060534625" Apr 30 03:33:31.888305 systemd-networkd[1652]: lxcb71f06341892: Link UP Apr 30 03:33:31.895924 kernel: eth0: renamed from tmp1e205 Apr 30 03:33:31.900421 systemd-networkd[1652]: lxc47cc0b353648: Link UP Apr 30 03:33:31.908395 kernel: eth0: renamed from tmp83bd8 Apr 30 03:33:31.916549 systemd-networkd[1652]: lxcb71f06341892: Gained carrier Apr 30 03:33:31.916839 systemd-networkd[1652]: lxc47cc0b353648: Gained carrier Apr 30 03:33:31.925151 (udev-worker)[4409]: Network interface NamePolicy= disabled on kernel command line. Apr 30 03:33:32.833637 systemd-networkd[1652]: lxc_health: Gained IPv6LL Apr 30 03:33:33.712019 systemd[1]: Started sshd@8-172.31.18.209:22-147.75.109.163:39064.service - OpenSSH per-connection server daemon (147.75.109.163:39064). Apr 30 03:33:33.865512 systemd-networkd[1652]: lxcb71f06341892: Gained IPv6LL Apr 30 03:33:33.865893 systemd-networkd[1652]: lxc47cc0b353648: Gained IPv6LL Apr 30 03:33:33.985045 sshd[4757]: Accepted publickey for core from 147.75.109.163 port 39064 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:33:33.987620 sshd[4757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:33:33.996809 systemd-logind[2075]: New session 9 of user core. Apr 30 03:33:34.001098 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 30 03:33:34.369124 sshd[4757]: pam_unix(sshd:session): session closed for user core Apr 30 03:33:34.374352 systemd-logind[2075]: Session 9 logged out. Waiting for processes to exit. Apr 30 03:33:34.376801 systemd[1]: sshd@8-172.31.18.209:22-147.75.109.163:39064.service: Deactivated successfully. Apr 30 03:33:34.388432 systemd[1]: session-9.scope: Deactivated successfully. Apr 30 03:33:34.390453 systemd-logind[2075]: Removed session 9. 
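The cilium-operator startup entry above is self-consistent if podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp and podStartSLOduration additionally excludes the image-pull window (lastFinishedPulling minus firstStartedPulling); that reading is inferred from these numbers rather than quoted from kubelet documentation. A short check with the logged timestamps, truncated to microseconds for strptime:

from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S.%f"
created = datetime.strptime("2025-04-30 03:33:09.000000", FMT)
pull_began = datetime.strptime("2025-04-30 03:33:10.025486", FMT)
pull_ended = datetime.strptime("2025-04-30 03:33:26.032061", FMT)
running = datetime.strptime("2025-04-30 03:33:31.783377", FMT)

e2e = (running - created).total_seconds()            # ~22.783377, matches podStartE2EDuration
pulling = (pull_ended - pull_began).total_seconds()  # ~16.006575 spent pulling the operator image
print(e2e, pulling, e2e - pulling)                   # ~6.7768, matches podStartSLOduration=6.776803193 up to truncation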
Apr 30 03:33:35.908914 ntpd[2051]: Listen normally on 6 cilium_host 192.168.0.69:123 Apr 30 03:33:35.909784 ntpd[2051]: 30 Apr 03:33:35 ntpd[2051]: Listen normally on 6 cilium_host 192.168.0.69:123 Apr 30 03:33:35.909784 ntpd[2051]: 30 Apr 03:33:35 ntpd[2051]: Listen normally on 7 cilium_net [fe80::a4b4:7cff:fee2:7ba8%4]:123 Apr 30 03:33:35.909784 ntpd[2051]: 30 Apr 03:33:35 ntpd[2051]: Listen normally on 8 cilium_host [fe80::5461:a7ff:fe99:c34d%5]:123 Apr 30 03:33:35.909784 ntpd[2051]: 30 Apr 03:33:35 ntpd[2051]: Listen normally on 9 cilium_vxlan [fe80::188a:3aff:fe7e:2fcc%6]:123 Apr 30 03:33:35.909784 ntpd[2051]: 30 Apr 03:33:35 ntpd[2051]: Listen normally on 10 lxc_health [fe80::9080:afff:fecd:bab0%8]:123 Apr 30 03:33:35.909784 ntpd[2051]: 30 Apr 03:33:35 ntpd[2051]: Listen normally on 11 lxcb71f06341892 [fe80::bb:43ff:fe11:7254%10]:123 Apr 30 03:33:35.909784 ntpd[2051]: 30 Apr 03:33:35 ntpd[2051]: Listen normally on 12 lxc47cc0b353648 [fe80::6401:84ff:fe25:61fd%12]:123 Apr 30 03:33:35.909009 ntpd[2051]: Listen normally on 7 cilium_net [fe80::a4b4:7cff:fee2:7ba8%4]:123 Apr 30 03:33:35.909065 ntpd[2051]: Listen normally on 8 cilium_host [fe80::5461:a7ff:fe99:c34d%5]:123 Apr 30 03:33:35.909106 ntpd[2051]: Listen normally on 9 cilium_vxlan [fe80::188a:3aff:fe7e:2fcc%6]:123 Apr 30 03:33:35.909146 ntpd[2051]: Listen normally on 10 lxc_health [fe80::9080:afff:fecd:bab0%8]:123 Apr 30 03:33:35.909185 ntpd[2051]: Listen normally on 11 lxcb71f06341892 [fe80::bb:43ff:fe11:7254%10]:123 Apr 30 03:33:35.909222 ntpd[2051]: Listen normally on 12 lxc47cc0b353648 [fe80::6401:84ff:fe25:61fd%12]:123 Apr 30 03:33:36.572471 containerd[2097]: time="2025-04-30T03:33:36.572293691Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:33:36.574919 containerd[2097]: time="2025-04-30T03:33:36.572664790Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:33:36.574919 containerd[2097]: time="2025-04-30T03:33:36.572713028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:33:36.574919 containerd[2097]: time="2025-04-30T03:33:36.572820382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:33:36.597901 systemd[1]: run-containerd-runc-k8s.io-1e20577f2ca13824871b92df066804ee84f0f8c5d241f76bbeb2a04bede1bf4b-runc.sDcfxc.mount: Deactivated successfully. Apr 30 03:33:36.618066 containerd[2097]: time="2025-04-30T03:33:36.608849587Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:33:36.618066 containerd[2097]: time="2025-04-30T03:33:36.608943933Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:33:36.618066 containerd[2097]: time="2025-04-30T03:33:36.608961084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:33:36.618066 containerd[2097]: time="2025-04-30T03:33:36.609060707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:33:36.733021 containerd[2097]: time="2025-04-30T03:33:36.732980124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-l7j4x,Uid:ab7f9f9a-bb5d-4498-a98b-d5fe61e77081,Namespace:kube-system,Attempt:0,} returns sandbox id \"1e20577f2ca13824871b92df066804ee84f0f8c5d241f76bbeb2a04bede1bf4b\"" Apr 30 03:33:36.734208 containerd[2097]: time="2025-04-30T03:33:36.734183385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6jml4,Uid:dc30ebfa-461f-4b80-bd25-5c90be99dfff,Namespace:kube-system,Attempt:0,} returns sandbox id \"83bd81cfd6df04442ea27be252cacd5eaae73343df1b7057739b40ede7234c9f\"" Apr 30 03:33:36.736125 containerd[2097]: time="2025-04-30T03:33:36.735912766Z" level=info msg="CreateContainer within sandbox \"1e20577f2ca13824871b92df066804ee84f0f8c5d241f76bbeb2a04bede1bf4b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 03:33:36.737598 containerd[2097]: time="2025-04-30T03:33:36.737427811Z" level=info msg="CreateContainer within sandbox \"83bd81cfd6df04442ea27be252cacd5eaae73343df1b7057739b40ede7234c9f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 03:33:36.778555 containerd[2097]: time="2025-04-30T03:33:36.778517571Z" level=info msg="CreateContainer within sandbox \"83bd81cfd6df04442ea27be252cacd5eaae73343df1b7057739b40ede7234c9f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4d088d0d51e195f195c203332354aefa72a113e61aae3dbebb63b437144b309c\"" Apr 30 03:33:36.779297 containerd[2097]: time="2025-04-30T03:33:36.779143338Z" level=info msg="StartContainer for \"4d088d0d51e195f195c203332354aefa72a113e61aae3dbebb63b437144b309c\"" Apr 30 03:33:36.781530 containerd[2097]: time="2025-04-30T03:33:36.781379638Z" level=info msg="CreateContainer within sandbox \"1e20577f2ca13824871b92df066804ee84f0f8c5d241f76bbeb2a04bede1bf4b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f1d9a10e16d31e90b334475e3b0e809038741f1874ee77336d964a1ae8f8e43b\"" Apr 30 03:33:36.782861 containerd[2097]: time="2025-04-30T03:33:36.782627363Z" level=info msg="StartContainer for \"f1d9a10e16d31e90b334475e3b0e809038741f1874ee77336d964a1ae8f8e43b\"" Apr 30 03:33:36.853795 containerd[2097]: time="2025-04-30T03:33:36.853694876Z" level=info msg="StartContainer for \"f1d9a10e16d31e90b334475e3b0e809038741f1874ee77336d964a1ae8f8e43b\" returns successfully" Apr 30 03:33:36.853795 containerd[2097]: time="2025-04-30T03:33:36.853774272Z" level=info msg="StartContainer for \"4d088d0d51e195f195c203332354aefa72a113e61aae3dbebb63b437144b309c\" returns successfully" Apr 30 03:33:37.078723 kubelet[3387]: I0430 03:33:37.078249 3387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-l7j4x" podStartSLOduration=28.078228654 podStartE2EDuration="28.078228654s" podCreationTimestamp="2025-04-30 03:33:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:33:37.076355486 +0000 UTC m=+42.353512527" watchObservedRunningTime="2025-04-30 03:33:37.078228654 +0000 UTC m=+42.355385671" Apr 30 03:33:37.100052 kubelet[3387]: I0430 03:33:37.099649 3387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-6jml4" podStartSLOduration=28.099611878 podStartE2EDuration="28.099611878s" podCreationTimestamp="2025-04-30 03:33:09 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:33:37.097733723 +0000 UTC m=+42.374890758" watchObservedRunningTime="2025-04-30 03:33:37.099611878 +0000 UTC m=+42.376768895" Apr 30 03:33:39.412690 systemd[1]: Started sshd@9-172.31.18.209:22-147.75.109.163:39246.service - OpenSSH per-connection server daemon (147.75.109.163:39246). Apr 30 03:33:39.689505 sshd[4952]: Accepted publickey for core from 147.75.109.163 port 39246 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:33:39.691510 sshd[4952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:33:39.697499 systemd-logind[2075]: New session 10 of user core. Apr 30 03:33:39.701693 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 30 03:33:40.031948 sshd[4952]: pam_unix(sshd:session): session closed for user core Apr 30 03:33:40.034734 systemd[1]: sshd@9-172.31.18.209:22-147.75.109.163:39246.service: Deactivated successfully. Apr 30 03:33:40.038884 systemd-logind[2075]: Session 10 logged out. Waiting for processes to exit. Apr 30 03:33:40.039679 systemd[1]: session-10.scope: Deactivated successfully. Apr 30 03:33:40.040718 systemd-logind[2075]: Removed session 10. Apr 30 03:33:45.074767 systemd[1]: Started sshd@10-172.31.18.209:22-147.75.109.163:39256.service - OpenSSH per-connection server daemon (147.75.109.163:39256). Apr 30 03:33:45.331096 sshd[4969]: Accepted publickey for core from 147.75.109.163 port 39256 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:33:45.332453 sshd[4969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:33:45.337038 systemd-logind[2075]: New session 11 of user core. Apr 30 03:33:45.345743 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 30 03:33:45.588171 sshd[4969]: pam_unix(sshd:session): session closed for user core Apr 30 03:33:45.591745 systemd[1]: sshd@10-172.31.18.209:22-147.75.109.163:39256.service: Deactivated successfully. Apr 30 03:33:45.597001 systemd[1]: session-11.scope: Deactivated successfully. Apr 30 03:33:45.598649 systemd-logind[2075]: Session 11 logged out. Waiting for processes to exit. Apr 30 03:33:45.599748 systemd-logind[2075]: Removed session 11. Apr 30 03:33:50.632761 systemd[1]: Started sshd@11-172.31.18.209:22-147.75.109.163:43924.service - OpenSSH per-connection server daemon (147.75.109.163:43924). Apr 30 03:33:50.869815 sshd[4986]: Accepted publickey for core from 147.75.109.163 port 43924 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:33:50.871497 sshd[4986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:33:50.876107 systemd-logind[2075]: New session 12 of user core. Apr 30 03:33:50.881008 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 30 03:33:51.128108 sshd[4986]: pam_unix(sshd:session): session closed for user core Apr 30 03:33:51.132706 systemd[1]: sshd@11-172.31.18.209:22-147.75.109.163:43924.service: Deactivated successfully. Apr 30 03:33:51.136514 systemd-logind[2075]: Session 12 logged out. Waiting for processes to exit. Apr 30 03:33:51.137176 systemd[1]: session-12.scope: Deactivated successfully. Apr 30 03:33:51.138415 systemd-logind[2075]: Removed session 12. Apr 30 03:33:51.169982 systemd[1]: Started sshd@12-172.31.18.209:22-147.75.109.163:43934.service - OpenSSH per-connection server daemon (147.75.109.163:43934). 
Apr 30 03:33:51.424558 sshd[5001]: Accepted publickey for core from 147.75.109.163 port 43934 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:33:51.426087 sshd[5001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:33:51.431199 systemd-logind[2075]: New session 13 of user core. Apr 30 03:33:51.436703 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 30 03:33:51.788381 sshd[5001]: pam_unix(sshd:session): session closed for user core Apr 30 03:33:51.792927 systemd[1]: sshd@12-172.31.18.209:22-147.75.109.163:43934.service: Deactivated successfully. Apr 30 03:33:51.796660 systemd-logind[2075]: Session 13 logged out. Waiting for processes to exit. Apr 30 03:33:51.797023 systemd[1]: session-13.scope: Deactivated successfully. Apr 30 03:33:51.799047 systemd-logind[2075]: Removed session 13. Apr 30 03:33:51.830772 systemd[1]: Started sshd@13-172.31.18.209:22-147.75.109.163:43946.service - OpenSSH per-connection server daemon (147.75.109.163:43946). Apr 30 03:33:52.078403 sshd[5012]: Accepted publickey for core from 147.75.109.163 port 43946 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:33:52.080032 sshd[5012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:33:52.084931 systemd-logind[2075]: New session 14 of user core. Apr 30 03:33:52.091653 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 30 03:33:52.342272 sshd[5012]: pam_unix(sshd:session): session closed for user core Apr 30 03:33:52.346215 systemd[1]: sshd@13-172.31.18.209:22-147.75.109.163:43946.service: Deactivated successfully. Apr 30 03:33:52.349860 systemd[1]: session-14.scope: Deactivated successfully. Apr 30 03:33:52.351014 systemd-logind[2075]: Session 14 logged out. Waiting for processes to exit. Apr 30 03:33:52.351954 systemd-logind[2075]: Removed session 14. Apr 30 03:33:57.384842 systemd[1]: Started sshd@14-172.31.18.209:22-147.75.109.163:57178.service - OpenSSH per-connection server daemon (147.75.109.163:57178). Apr 30 03:33:57.633742 sshd[5028]: Accepted publickey for core from 147.75.109.163 port 57178 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:33:57.635480 sshd[5028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:33:57.641055 systemd-logind[2075]: New session 15 of user core. Apr 30 03:33:57.649015 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 30 03:33:57.889877 sshd[5028]: pam_unix(sshd:session): session closed for user core Apr 30 03:33:57.893039 systemd[1]: sshd@14-172.31.18.209:22-147.75.109.163:57178.service: Deactivated successfully. Apr 30 03:33:57.896339 systemd[1]: session-15.scope: Deactivated successfully. Apr 30 03:33:57.896595 systemd-logind[2075]: Session 15 logged out. Waiting for processes to exit. Apr 30 03:33:57.898397 systemd-logind[2075]: Removed session 15. Apr 30 03:33:57.933830 systemd[1]: Started sshd@15-172.31.18.209:22-147.75.109.163:57190.service - OpenSSH per-connection server daemon (147.75.109.163:57190). Apr 30 03:33:58.170407 sshd[5041]: Accepted publickey for core from 147.75.109.163 port 57190 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:33:58.171779 sshd[5041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:33:58.175892 systemd-logind[2075]: New session 16 of user core. Apr 30 03:33:58.184897 systemd[1]: Started session-16.scope - Session 16 of User core. 
Apr 30 03:33:58.804990 sshd[5041]: pam_unix(sshd:session): session closed for user core Apr 30 03:33:58.811909 systemd[1]: sshd@15-172.31.18.209:22-147.75.109.163:57190.service: Deactivated successfully. Apr 30 03:33:58.814671 systemd-logind[2075]: Session 16 logged out. Waiting for processes to exit. Apr 30 03:33:58.814996 systemd[1]: session-16.scope: Deactivated successfully. Apr 30 03:33:58.816567 systemd-logind[2075]: Removed session 16. Apr 30 03:33:58.850874 systemd[1]: Started sshd@16-172.31.18.209:22-147.75.109.163:57198.service - OpenSSH per-connection server daemon (147.75.109.163:57198). Apr 30 03:33:59.101938 sshd[5053]: Accepted publickey for core from 147.75.109.163 port 57198 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:33:59.103093 sshd[5053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:33:59.107813 systemd-logind[2075]: New session 17 of user core. Apr 30 03:33:59.116709 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 30 03:34:01.058978 sshd[5053]: pam_unix(sshd:session): session closed for user core Apr 30 03:34:01.064644 systemd-logind[2075]: Session 17 logged out. Waiting for processes to exit. Apr 30 03:34:01.066141 systemd[1]: sshd@16-172.31.18.209:22-147.75.109.163:57198.service: Deactivated successfully. Apr 30 03:34:01.069002 systemd[1]: session-17.scope: Deactivated successfully. Apr 30 03:34:01.070224 systemd-logind[2075]: Removed session 17. Apr 30 03:34:01.101709 systemd[1]: Started sshd@17-172.31.18.209:22-147.75.109.163:57204.service - OpenSSH per-connection server daemon (147.75.109.163:57204). Apr 30 03:34:01.345922 sshd[5072]: Accepted publickey for core from 147.75.109.163 port 57204 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:34:01.347614 sshd[5072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:34:01.353057 systemd-logind[2075]: New session 18 of user core. Apr 30 03:34:01.362920 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 30 03:34:01.938694 sshd[5072]: pam_unix(sshd:session): session closed for user core Apr 30 03:34:01.945695 systemd[1]: sshd@17-172.31.18.209:22-147.75.109.163:57204.service: Deactivated successfully. Apr 30 03:34:01.950646 systemd[1]: session-18.scope: Deactivated successfully. Apr 30 03:34:01.951542 systemd-logind[2075]: Session 18 logged out. Waiting for processes to exit. Apr 30 03:34:01.953082 systemd-logind[2075]: Removed session 18. Apr 30 03:34:01.993418 systemd[1]: Started sshd@18-172.31.18.209:22-147.75.109.163:57218.service - OpenSSH per-connection server daemon (147.75.109.163:57218). Apr 30 03:34:02.257617 sshd[5084]: Accepted publickey for core from 147.75.109.163 port 57218 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:34:02.259615 sshd[5084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:34:02.265133 systemd-logind[2075]: New session 19 of user core. Apr 30 03:34:02.269922 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 30 03:34:02.538233 sshd[5084]: pam_unix(sshd:session): session closed for user core Apr 30 03:34:02.541507 systemd[1]: sshd@18-172.31.18.209:22-147.75.109.163:57218.service: Deactivated successfully. Apr 30 03:34:02.547356 systemd-logind[2075]: Session 19 logged out. Waiting for processes to exit. Apr 30 03:34:02.547755 systemd[1]: session-19.scope: Deactivated successfully. Apr 30 03:34:02.550301 systemd-logind[2075]: Removed session 19. 
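The sshd and systemd-logind entries above open and close a series of short sessions in quick succession. As a speculative convenience, a sketch that splits a concatenated journal blob on its "Apr 30 HH:MM:SS.ffffff" prefixes and pairs "New session N" with "Removed session N" to estimate how long each session lasted; the splitting regex and the year parameter are assumptions:

import re
from datetime import datetime

ENTRY = re.compile(r'([A-Z][a-z]{2} \d{2} \d{2}:\d{2}:\d{2}\.\d+) ')
NEW = re.compile(r'New session (\d+) of user \S+\.')
GONE = re.compile(r'Removed session (\d+)\.')

def session_durations(blob: str, year: int = 2025) -> dict:
    parts = ENTRY.split(blob)[1:]                  # [ts1, msg1, ts2, msg2, ...]
    opened, durations = {}, {}
    for ts, msg in zip(parts[0::2], parts[1::2]):
        when = datetime.strptime(f"{year} {ts}", "%Y %b %d %H:%M:%S.%f")
        if m := NEW.search(msg):
            opened[m.group(1)] = when
        elif (m := GONE.search(msg)) and m.group(1) in opened:
            durations[m.group(1)] = (when - opened.pop(m.group(1))).total_seconds()
    return durations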
Apr 30 03:34:07.579814 systemd[1]: Started sshd@19-172.31.18.209:22-147.75.109.163:57736.service - OpenSSH per-connection server daemon (147.75.109.163:57736). Apr 30 03:34:07.821660 sshd[5101]: Accepted publickey for core from 147.75.109.163 port 57736 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:34:07.823216 sshd[5101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:34:07.828338 systemd-logind[2075]: New session 20 of user core. Apr 30 03:34:07.834725 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 30 03:34:08.074246 sshd[5101]: pam_unix(sshd:session): session closed for user core Apr 30 03:34:08.078618 systemd[1]: sshd@19-172.31.18.209:22-147.75.109.163:57736.service: Deactivated successfully. Apr 30 03:34:08.081671 systemd-logind[2075]: Session 20 logged out. Waiting for processes to exit. Apr 30 03:34:08.082586 systemd[1]: session-20.scope: Deactivated successfully. Apr 30 03:34:08.083562 systemd-logind[2075]: Removed session 20. Apr 30 03:34:13.118722 systemd[1]: Started sshd@20-172.31.18.209:22-147.75.109.163:57746.service - OpenSSH per-connection server daemon (147.75.109.163:57746). Apr 30 03:34:13.359488 sshd[5117]: Accepted publickey for core from 147.75.109.163 port 57746 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:34:13.360953 sshd[5117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:34:13.365774 systemd-logind[2075]: New session 21 of user core. Apr 30 03:34:13.368733 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 30 03:34:13.608971 sshd[5117]: pam_unix(sshd:session): session closed for user core Apr 30 03:34:13.613346 systemd[1]: sshd@20-172.31.18.209:22-147.75.109.163:57746.service: Deactivated successfully. Apr 30 03:34:13.618671 systemd[1]: session-21.scope: Deactivated successfully. Apr 30 03:34:13.619625 systemd-logind[2075]: Session 21 logged out. Waiting for processes to exit. Apr 30 03:34:13.620849 systemd-logind[2075]: Removed session 21. Apr 30 03:34:18.650652 systemd[1]: Started sshd@21-172.31.18.209:22-147.75.109.163:40174.service - OpenSSH per-connection server daemon (147.75.109.163:40174). Apr 30 03:34:18.889610 sshd[5130]: Accepted publickey for core from 147.75.109.163 port 40174 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:34:18.891493 sshd[5130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:34:18.895958 systemd-logind[2075]: New session 22 of user core. Apr 30 03:34:18.899640 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 30 03:34:19.137383 sshd[5130]: pam_unix(sshd:session): session closed for user core Apr 30 03:34:19.141420 systemd[1]: sshd@21-172.31.18.209:22-147.75.109.163:40174.service: Deactivated successfully. Apr 30 03:34:19.144210 systemd-logind[2075]: Session 22 logged out. Waiting for processes to exit. Apr 30 03:34:19.145037 systemd[1]: session-22.scope: Deactivated successfully. Apr 30 03:34:19.146308 systemd-logind[2075]: Removed session 22. Apr 30 03:34:19.183025 systemd[1]: Started sshd@22-172.31.18.209:22-147.75.109.163:40180.service - OpenSSH per-connection server daemon (147.75.109.163:40180). 
Apr 30 03:34:19.417105 sshd[5144]: Accepted publickey for core from 147.75.109.163 port 40180 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:34:19.418781 sshd[5144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:34:19.423411 systemd-logind[2075]: New session 23 of user core. Apr 30 03:34:19.429688 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 30 03:34:21.004267 systemd[1]: run-containerd-runc-k8s.io-dc60d6cca8fade8855ccd7ea18cafc406070ff8e4df2fdbcf389f0a7734a4cab-runc.PVHGWX.mount: Deactivated successfully. Apr 30 03:34:21.035273 containerd[2097]: time="2025-04-30T03:34:21.035216539Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 03:34:21.096018 containerd[2097]: time="2025-04-30T03:34:21.095784486Z" level=info msg="StopContainer for \"dc60d6cca8fade8855ccd7ea18cafc406070ff8e4df2fdbcf389f0a7734a4cab\" with timeout 2 (s)" Apr 30 03:34:21.096018 containerd[2097]: time="2025-04-30T03:34:21.095875790Z" level=info msg="StopContainer for \"3dbe2a071a15158a95defe3f776ef7f97cf72b7824ca818d6d4219ecf550b165\" with timeout 30 (s)" Apr 30 03:34:21.096378 containerd[2097]: time="2025-04-30T03:34:21.096291800Z" level=info msg="Stop container \"3dbe2a071a15158a95defe3f776ef7f97cf72b7824ca818d6d4219ecf550b165\" with signal terminated" Apr 30 03:34:21.096719 containerd[2097]: time="2025-04-30T03:34:21.096563256Z" level=info msg="Stop container \"dc60d6cca8fade8855ccd7ea18cafc406070ff8e4df2fdbcf389f0a7734a4cab\" with signal terminated" Apr 30 03:34:21.105316 systemd-networkd[1652]: lxc_health: Link DOWN Apr 30 03:34:21.105324 systemd-networkd[1652]: lxc_health: Lost carrier Apr 30 03:34:21.139797 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3dbe2a071a15158a95defe3f776ef7f97cf72b7824ca818d6d4219ecf550b165-rootfs.mount: Deactivated successfully. Apr 30 03:34:21.154882 containerd[2097]: time="2025-04-30T03:34:21.154727733Z" level=info msg="shim disconnected" id=3dbe2a071a15158a95defe3f776ef7f97cf72b7824ca818d6d4219ecf550b165 namespace=k8s.io Apr 30 03:34:21.154882 containerd[2097]: time="2025-04-30T03:34:21.154775961Z" level=warning msg="cleaning up after shim disconnected" id=3dbe2a071a15158a95defe3f776ef7f97cf72b7824ca818d6d4219ecf550b165 namespace=k8s.io Apr 30 03:34:21.155186 containerd[2097]: time="2025-04-30T03:34:21.154784036Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:34:21.161173 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc60d6cca8fade8855ccd7ea18cafc406070ff8e4df2fdbcf389f0a7734a4cab-rootfs.mount: Deactivated successfully. 
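The StopContainer entries above show the usual two-stage shutdown: containerd delivers SIGTERM ("Stop container ... with signal terminated"), waits out the per-container timeout (2 s for the cilium-agent container, 30 s for the operator), and only escalates if the process does not exit. Below is a minimal sketch of that pattern using the containerd Go client; it assumes the default containerd socket path and the k8s.io namespace seen in the log, and the container ID passed to it is a placeholder, not one of the IDs above.

package main

import (
    "context"
    "fmt"
    "syscall"
    "time"

    "github.com/containerd/containerd"
    "github.com/containerd/containerd/namespaces"
)

// stopWithTimeout sends SIGTERM to a task, waits up to `timeout` for it to
// exit, and falls back to SIGKILL, mirroring the StopContainer flow above.
func stopWithTimeout(id string, timeout time.Duration) error {
    client, err := containerd.New("/run/containerd/containerd.sock")
    if err != nil {
        return err
    }
    defer client.Close()

    // Kubernetes-managed containers live in the "k8s.io" namespace, as in the log.
    ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    container, err := client.LoadContainer(ctx, id)
    if err != nil {
        return err
    }
    task, err := container.Task(ctx, nil)
    if err != nil {
        return err
    }
    exitCh, err := task.Wait(ctx) // register the waiter before signalling
    if err != nil {
        return err
    }

    if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
        return err
    }
    select {
    case st := <-exitCh:
        fmt.Printf("exited with status %d\n", st.ExitCode())
    case <-time.After(timeout):
        // Grace period elapsed; force-kill and wait for the exit event.
        if err := task.Kill(ctx, syscall.SIGKILL); err != nil {
            return err
        }
        <-exitCh
    }
    _, err = task.Delete(ctx)
    return err
}

func main() {
    // "example-container-id" is a placeholder.
    if err := stopWithTimeout("example-container-id", 2*time.Second); err != nil {
        fmt.Println("stop failed:", err)
    }
}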
Apr 30 03:34:21.171579 containerd[2097]: time="2025-04-30T03:34:21.171331672Z" level=info msg="shim disconnected" id=dc60d6cca8fade8855ccd7ea18cafc406070ff8e4df2fdbcf389f0a7734a4cab namespace=k8s.io Apr 30 03:34:21.171579 containerd[2097]: time="2025-04-30T03:34:21.171524046Z" level=warning msg="cleaning up after shim disconnected" id=dc60d6cca8fade8855ccd7ea18cafc406070ff8e4df2fdbcf389f0a7734a4cab namespace=k8s.io Apr 30 03:34:21.171579 containerd[2097]: time="2025-04-30T03:34:21.171544864Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:34:21.194302 containerd[2097]: time="2025-04-30T03:34:21.194185067Z" level=info msg="StopContainer for \"3dbe2a071a15158a95defe3f776ef7f97cf72b7824ca818d6d4219ecf550b165\" returns successfully" Apr 30 03:34:21.199289 containerd[2097]: time="2025-04-30T03:34:21.199220912Z" level=info msg="StopPodSandbox for \"0c4eb54c6fe1eb479308cb21f150a1d4e5c090c70095922ad694381c0b592e19\"" Apr 30 03:34:21.200323 containerd[2097]: time="2025-04-30T03:34:21.199312075Z" level=info msg="Container to stop \"3dbe2a071a15158a95defe3f776ef7f97cf72b7824ca818d6d4219ecf550b165\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 03:34:21.201293 containerd[2097]: time="2025-04-30T03:34:21.201198990Z" level=info msg="StopContainer for \"dc60d6cca8fade8855ccd7ea18cafc406070ff8e4df2fdbcf389f0a7734a4cab\" returns successfully" Apr 30 03:34:21.203999 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0c4eb54c6fe1eb479308cb21f150a1d4e5c090c70095922ad694381c0b592e19-shm.mount: Deactivated successfully. Apr 30 03:34:21.204583 containerd[2097]: time="2025-04-30T03:34:21.204559275Z" level=info msg="StopPodSandbox for \"e87c04033fa24e1e590342e67bff2a4977eb25c739b838c5e1fc2f484c30e7ac\"" Apr 30 03:34:21.204855 containerd[2097]: time="2025-04-30T03:34:21.204598862Z" level=info msg="Container to stop \"882b1c30bb55fdd32bfe4155283a7498c1d9cfa0c6c090c0716b9b16a3e975db\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 03:34:21.204896 containerd[2097]: time="2025-04-30T03:34:21.204859211Z" level=info msg="Container to stop \"bdbb81eca362882f792d081d97877493882598ff0f5ea60e36eef026340981bf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 03:34:21.204896 containerd[2097]: time="2025-04-30T03:34:21.204870948Z" level=info msg="Container to stop \"ed1191103ee7fa34564b8cdcffd8e93c79a8663db572a6d1c50cc47edb65ea9f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 03:34:21.204896 containerd[2097]: time="2025-04-30T03:34:21.204879969Z" level=info msg="Container to stop \"3fabe40a3b4cb347f1d0d4d884bc538b67ea7c5cbd9b52ab93f6e2b067a25a71\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 03:34:21.205799 containerd[2097]: time="2025-04-30T03:34:21.204888255Z" level=info msg="Container to stop \"dc60d6cca8fade8855ccd7ea18cafc406070ff8e4df2fdbcf389f0a7734a4cab\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 03:34:21.247339 containerd[2097]: time="2025-04-30T03:34:21.247280760Z" level=info msg="shim disconnected" id=0c4eb54c6fe1eb479308cb21f150a1d4e5c090c70095922ad694381c0b592e19 namespace=k8s.io Apr 30 03:34:21.247339 containerd[2097]: time="2025-04-30T03:34:21.247337244Z" level=warning msg="cleaning up after shim disconnected" id=0c4eb54c6fe1eb479308cb21f150a1d4e5c090c70095922ad694381c0b592e19 namespace=k8s.io Apr 30 03:34:21.247339 containerd[2097]: time="2025-04-30T03:34:21.247345999Z" 
level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:34:21.248136 containerd[2097]: time="2025-04-30T03:34:21.247992840Z" level=info msg="shim disconnected" id=e87c04033fa24e1e590342e67bff2a4977eb25c739b838c5e1fc2f484c30e7ac namespace=k8s.io Apr 30 03:34:21.248669 containerd[2097]: time="2025-04-30T03:34:21.248351624Z" level=warning msg="cleaning up after shim disconnected" id=e87c04033fa24e1e590342e67bff2a4977eb25c739b838c5e1fc2f484c30e7ac namespace=k8s.io Apr 30 03:34:21.248772 containerd[2097]: time="2025-04-30T03:34:21.248757799Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:34:21.266662 containerd[2097]: time="2025-04-30T03:34:21.265673806Z" level=warning msg="cleanup warnings time=\"2025-04-30T03:34:21Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 30 03:34:21.266878 containerd[2097]: time="2025-04-30T03:34:21.266752572Z" level=info msg="TearDown network for sandbox \"0c4eb54c6fe1eb479308cb21f150a1d4e5c090c70095922ad694381c0b592e19\" successfully" Apr 30 03:34:21.266878 containerd[2097]: time="2025-04-30T03:34:21.266773384Z" level=info msg="StopPodSandbox for \"0c4eb54c6fe1eb479308cb21f150a1d4e5c090c70095922ad694381c0b592e19\" returns successfully" Apr 30 03:34:21.269890 containerd[2097]: time="2025-04-30T03:34:21.269858913Z" level=info msg="TearDown network for sandbox \"e87c04033fa24e1e590342e67bff2a4977eb25c739b838c5e1fc2f484c30e7ac\" successfully" Apr 30 03:34:21.269890 containerd[2097]: time="2025-04-30T03:34:21.269884725Z" level=info msg="StopPodSandbox for \"e87c04033fa24e1e590342e67bff2a4977eb25c739b838c5e1fc2f484c30e7ac\" returns successfully" Apr 30 03:34:21.363554 kubelet[3387]: I0430 03:34:21.363499 3387 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c2660554-1217-4f03-9ba9-9714b88b5a02-cilium-config-path\") pod \"c2660554-1217-4f03-9ba9-9714b88b5a02\" (UID: \"c2660554-1217-4f03-9ba9-9714b88b5a02\") " Apr 30 03:34:21.363554 kubelet[3387]: I0430 03:34:21.363550 3387 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c2660554-1217-4f03-9ba9-9714b88b5a02-lib-modules\") pod \"c2660554-1217-4f03-9ba9-9714b88b5a02\" (UID: \"c2660554-1217-4f03-9ba9-9714b88b5a02\") " Apr 30 03:34:21.364026 kubelet[3387]: I0430 03:34:21.363570 3387 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c2660554-1217-4f03-9ba9-9714b88b5a02-cni-path\") pod \"c2660554-1217-4f03-9ba9-9714b88b5a02\" (UID: \"c2660554-1217-4f03-9ba9-9714b88b5a02\") " Apr 30 03:34:21.364026 kubelet[3387]: I0430 03:34:21.363595 3387 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhxf2\" (UniqueName: \"kubernetes.io/projected/c2660554-1217-4f03-9ba9-9714b88b5a02-kube-api-access-rhxf2\") pod \"c2660554-1217-4f03-9ba9-9714b88b5a02\" (UID: \"c2660554-1217-4f03-9ba9-9714b88b5a02\") " Apr 30 03:34:21.364026 kubelet[3387]: I0430 03:34:21.363611 3387 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c2660554-1217-4f03-9ba9-9714b88b5a02-hubble-tls\") pod \"c2660554-1217-4f03-9ba9-9714b88b5a02\" (UID: \"c2660554-1217-4f03-9ba9-9714b88b5a02\") " Apr 30 03:34:21.364026 kubelet[3387]: I0430 
03:34:21.363625 3387 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c2660554-1217-4f03-9ba9-9714b88b5a02-host-proc-sys-kernel\") pod \"c2660554-1217-4f03-9ba9-9714b88b5a02\" (UID: \"c2660554-1217-4f03-9ba9-9714b88b5a02\") " Apr 30 03:34:21.364026 kubelet[3387]: I0430 03:34:21.363641 3387 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqt9w\" (UniqueName: \"kubernetes.io/projected/580ceb57-785f-4135-b9c1-cd9729fc2aa3-kube-api-access-cqt9w\") pod \"580ceb57-785f-4135-b9c1-cd9729fc2aa3\" (UID: \"580ceb57-785f-4135-b9c1-cd9729fc2aa3\") " Apr 30 03:34:21.364026 kubelet[3387]: I0430 03:34:21.363656 3387 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c2660554-1217-4f03-9ba9-9714b88b5a02-host-proc-sys-net\") pod \"c2660554-1217-4f03-9ba9-9714b88b5a02\" (UID: \"c2660554-1217-4f03-9ba9-9714b88b5a02\") " Apr 30 03:34:21.364188 kubelet[3387]: I0430 03:34:21.363670 3387 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c2660554-1217-4f03-9ba9-9714b88b5a02-xtables-lock\") pod \"c2660554-1217-4f03-9ba9-9714b88b5a02\" (UID: \"c2660554-1217-4f03-9ba9-9714b88b5a02\") " Apr 30 03:34:21.364188 kubelet[3387]: I0430 03:34:21.363687 3387 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c2660554-1217-4f03-9ba9-9714b88b5a02-clustermesh-secrets\") pod \"c2660554-1217-4f03-9ba9-9714b88b5a02\" (UID: \"c2660554-1217-4f03-9ba9-9714b88b5a02\") " Apr 30 03:34:21.364188 kubelet[3387]: I0430 03:34:21.363701 3387 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c2660554-1217-4f03-9ba9-9714b88b5a02-hostproc\") pod \"c2660554-1217-4f03-9ba9-9714b88b5a02\" (UID: \"c2660554-1217-4f03-9ba9-9714b88b5a02\") " Apr 30 03:34:21.364188 kubelet[3387]: I0430 03:34:21.363716 3387 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c2660554-1217-4f03-9ba9-9714b88b5a02-cilium-run\") pod \"c2660554-1217-4f03-9ba9-9714b88b5a02\" (UID: \"c2660554-1217-4f03-9ba9-9714b88b5a02\") " Apr 30 03:34:21.364188 kubelet[3387]: I0430 03:34:21.363733 3387 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/580ceb57-785f-4135-b9c1-cd9729fc2aa3-cilium-config-path\") pod \"580ceb57-785f-4135-b9c1-cd9729fc2aa3\" (UID: \"580ceb57-785f-4135-b9c1-cd9729fc2aa3\") " Apr 30 03:34:21.364188 kubelet[3387]: I0430 03:34:21.363745 3387 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c2660554-1217-4f03-9ba9-9714b88b5a02-etc-cni-netd\") pod \"c2660554-1217-4f03-9ba9-9714b88b5a02\" (UID: \"c2660554-1217-4f03-9ba9-9714b88b5a02\") " Apr 30 03:34:21.365617 kubelet[3387]: I0430 03:34:21.363758 3387 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c2660554-1217-4f03-9ba9-9714b88b5a02-cilium-cgroup\") pod \"c2660554-1217-4f03-9ba9-9714b88b5a02\" (UID: \"c2660554-1217-4f03-9ba9-9714b88b5a02\") " Apr 30 03:34:21.365617 kubelet[3387]: I0430 03:34:21.363772 3387 
reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c2660554-1217-4f03-9ba9-9714b88b5a02-bpf-maps\") pod \"c2660554-1217-4f03-9ba9-9714b88b5a02\" (UID: \"c2660554-1217-4f03-9ba9-9714b88b5a02\") " Apr 30 03:34:21.365688 kubelet[3387]: I0430 03:34:21.363850 3387 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2660554-1217-4f03-9ba9-9714b88b5a02-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c2660554-1217-4f03-9ba9-9714b88b5a02" (UID: "c2660554-1217-4f03-9ba9-9714b88b5a02"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:34:21.365716 kubelet[3387]: I0430 03:34:21.365702 3387 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2660554-1217-4f03-9ba9-9714b88b5a02-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c2660554-1217-4f03-9ba9-9714b88b5a02" (UID: "c2660554-1217-4f03-9ba9-9714b88b5a02"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:34:21.365923 kubelet[3387]: I0430 03:34:21.363930 3387 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2660554-1217-4f03-9ba9-9714b88b5a02-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c2660554-1217-4f03-9ba9-9714b88b5a02" (UID: "c2660554-1217-4f03-9ba9-9714b88b5a02"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:34:21.371463 kubelet[3387]: I0430 03:34:21.370945 3387 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2660554-1217-4f03-9ba9-9714b88b5a02-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c2660554-1217-4f03-9ba9-9714b88b5a02" (UID: "c2660554-1217-4f03-9ba9-9714b88b5a02"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 30 03:34:21.371463 kubelet[3387]: I0430 03:34:21.371010 3387 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2660554-1217-4f03-9ba9-9714b88b5a02-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c2660554-1217-4f03-9ba9-9714b88b5a02" (UID: "c2660554-1217-4f03-9ba9-9714b88b5a02"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:34:21.371463 kubelet[3387]: I0430 03:34:21.371027 3387 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2660554-1217-4f03-9ba9-9714b88b5a02-cni-path" (OuterVolumeSpecName: "cni-path") pod "c2660554-1217-4f03-9ba9-9714b88b5a02" (UID: "c2660554-1217-4f03-9ba9-9714b88b5a02"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:34:21.373018 kubelet[3387]: I0430 03:34:21.372990 3387 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2660554-1217-4f03-9ba9-9714b88b5a02-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c2660554-1217-4f03-9ba9-9714b88b5a02" (UID: "c2660554-1217-4f03-9ba9-9714b88b5a02"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 30 03:34:21.373183 kubelet[3387]: I0430 03:34:21.373170 3387 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2660554-1217-4f03-9ba9-9714b88b5a02-hostproc" (OuterVolumeSpecName: "hostproc") pod "c2660554-1217-4f03-9ba9-9714b88b5a02" (UID: "c2660554-1217-4f03-9ba9-9714b88b5a02"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:34:21.373248 kubelet[3387]: I0430 03:34:21.373240 3387 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2660554-1217-4f03-9ba9-9714b88b5a02-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c2660554-1217-4f03-9ba9-9714b88b5a02" (UID: "c2660554-1217-4f03-9ba9-9714b88b5a02"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:34:21.375336 kubelet[3387]: I0430 03:34:21.375284 3387 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/580ceb57-785f-4135-b9c1-cd9729fc2aa3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "580ceb57-785f-4135-b9c1-cd9729fc2aa3" (UID: "580ceb57-785f-4135-b9c1-cd9729fc2aa3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 30 03:34:21.375336 kubelet[3387]: I0430 03:34:21.375332 3387 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2660554-1217-4f03-9ba9-9714b88b5a02-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c2660554-1217-4f03-9ba9-9714b88b5a02" (UID: "c2660554-1217-4f03-9ba9-9714b88b5a02"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:34:21.376037 kubelet[3387]: I0430 03:34:21.375348 3387 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2660554-1217-4f03-9ba9-9714b88b5a02-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c2660554-1217-4f03-9ba9-9714b88b5a02" (UID: "c2660554-1217-4f03-9ba9-9714b88b5a02"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:34:21.376037 kubelet[3387]: I0430 03:34:21.375375 3387 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2660554-1217-4f03-9ba9-9714b88b5a02-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c2660554-1217-4f03-9ba9-9714b88b5a02" (UID: "c2660554-1217-4f03-9ba9-9714b88b5a02"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:34:21.383148 kubelet[3387]: I0430 03:34:21.383078 3387 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2660554-1217-4f03-9ba9-9714b88b5a02-kube-api-access-rhxf2" (OuterVolumeSpecName: "kube-api-access-rhxf2") pod "c2660554-1217-4f03-9ba9-9714b88b5a02" (UID: "c2660554-1217-4f03-9ba9-9714b88b5a02"). InnerVolumeSpecName "kube-api-access-rhxf2". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 30 03:34:21.383148 kubelet[3387]: I0430 03:34:21.383145 3387 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2660554-1217-4f03-9ba9-9714b88b5a02-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c2660554-1217-4f03-9ba9-9714b88b5a02" (UID: "c2660554-1217-4f03-9ba9-9714b88b5a02"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 30 03:34:21.383148 kubelet[3387]: I0430 03:34:21.383158 3387 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/580ceb57-785f-4135-b9c1-cd9729fc2aa3-kube-api-access-cqt9w" (OuterVolumeSpecName: "kube-api-access-cqt9w") pod "580ceb57-785f-4135-b9c1-cd9729fc2aa3" (UID: "580ceb57-785f-4135-b9c1-cd9729fc2aa3"). InnerVolumeSpecName "kube-api-access-cqt9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 30 03:34:21.464508 kubelet[3387]: I0430 03:34:21.464443 3387 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c2660554-1217-4f03-9ba9-9714b88b5a02-cilium-config-path\") on node \"ip-172-31-18-209\" DevicePath \"\"" Apr 30 03:34:21.464508 kubelet[3387]: I0430 03:34:21.464481 3387 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c2660554-1217-4f03-9ba9-9714b88b5a02-lib-modules\") on node \"ip-172-31-18-209\" DevicePath \"\"" Apr 30 03:34:21.464508 kubelet[3387]: I0430 03:34:21.464492 3387 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c2660554-1217-4f03-9ba9-9714b88b5a02-cni-path\") on node \"ip-172-31-18-209\" DevicePath \"\"" Apr 30 03:34:21.464508 kubelet[3387]: I0430 03:34:21.464500 3387 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-rhxf2\" (UniqueName: \"kubernetes.io/projected/c2660554-1217-4f03-9ba9-9714b88b5a02-kube-api-access-rhxf2\") on node \"ip-172-31-18-209\" DevicePath \"\"" Apr 30 03:34:21.464508 kubelet[3387]: I0430 03:34:21.464510 3387 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c2660554-1217-4f03-9ba9-9714b88b5a02-hubble-tls\") on node \"ip-172-31-18-209\" DevicePath \"\"" Apr 30 03:34:21.464508 kubelet[3387]: I0430 03:34:21.464518 3387 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c2660554-1217-4f03-9ba9-9714b88b5a02-host-proc-sys-kernel\") on node \"ip-172-31-18-209\" DevicePath \"\"" Apr 30 03:34:21.464875 kubelet[3387]: I0430 03:34:21.464529 3387 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-cqt9w\" (UniqueName: \"kubernetes.io/projected/580ceb57-785f-4135-b9c1-cd9729fc2aa3-kube-api-access-cqt9w\") on node \"ip-172-31-18-209\" DevicePath \"\"" Apr 30 03:34:21.464875 kubelet[3387]: I0430 03:34:21.464537 3387 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c2660554-1217-4f03-9ba9-9714b88b5a02-host-proc-sys-net\") on node \"ip-172-31-18-209\" DevicePath \"\"" Apr 30 03:34:21.464875 kubelet[3387]: I0430 03:34:21.464546 3387 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c2660554-1217-4f03-9ba9-9714b88b5a02-xtables-lock\") on node \"ip-172-31-18-209\" DevicePath \"\"" Apr 30 03:34:21.464875 kubelet[3387]: I0430 03:34:21.464553 3387 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c2660554-1217-4f03-9ba9-9714b88b5a02-clustermesh-secrets\") on node \"ip-172-31-18-209\" DevicePath \"\"" Apr 30 03:34:21.464875 kubelet[3387]: I0430 03:34:21.464560 3387 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c2660554-1217-4f03-9ba9-9714b88b5a02-hostproc\") on node 
\"ip-172-31-18-209\" DevicePath \"\"" Apr 30 03:34:21.464875 kubelet[3387]: I0430 03:34:21.464580 3387 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c2660554-1217-4f03-9ba9-9714b88b5a02-cilium-run\") on node \"ip-172-31-18-209\" DevicePath \"\"" Apr 30 03:34:21.464875 kubelet[3387]: I0430 03:34:21.464587 3387 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/580ceb57-785f-4135-b9c1-cd9729fc2aa3-cilium-config-path\") on node \"ip-172-31-18-209\" DevicePath \"\"" Apr 30 03:34:21.464875 kubelet[3387]: I0430 03:34:21.464605 3387 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c2660554-1217-4f03-9ba9-9714b88b5a02-etc-cni-netd\") on node \"ip-172-31-18-209\" DevicePath \"\"" Apr 30 03:34:21.465078 kubelet[3387]: I0430 03:34:21.464619 3387 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c2660554-1217-4f03-9ba9-9714b88b5a02-cilium-cgroup\") on node \"ip-172-31-18-209\" DevicePath \"\"" Apr 30 03:34:21.465078 kubelet[3387]: I0430 03:34:21.464629 3387 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c2660554-1217-4f03-9ba9-9714b88b5a02-bpf-maps\") on node \"ip-172-31-18-209\" DevicePath \"\"" Apr 30 03:34:21.996211 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c4eb54c6fe1eb479308cb21f150a1d4e5c090c70095922ad694381c0b592e19-rootfs.mount: Deactivated successfully. Apr 30 03:34:21.996677 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e87c04033fa24e1e590342e67bff2a4977eb25c739b838c5e1fc2f484c30e7ac-rootfs.mount: Deactivated successfully. Apr 30 03:34:21.996894 systemd[1]: var-lib-kubelet-pods-580ceb57\x2d785f\x2d4135\x2db9c1\x2dcd9729fc2aa3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcqt9w.mount: Deactivated successfully. Apr 30 03:34:21.997074 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e87c04033fa24e1e590342e67bff2a4977eb25c739b838c5e1fc2f484c30e7ac-shm.mount: Deactivated successfully. Apr 30 03:34:21.997279 systemd[1]: var-lib-kubelet-pods-c2660554\x2d1217\x2d4f03\x2d9ba9\x2d9714b88b5a02-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drhxf2.mount: Deactivated successfully. Apr 30 03:34:21.997409 systemd[1]: var-lib-kubelet-pods-c2660554\x2d1217\x2d4f03\x2d9ba9\x2d9714b88b5a02-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 30 03:34:21.997510 systemd[1]: var-lib-kubelet-pods-c2660554\x2d1217\x2d4f03\x2d9ba9\x2d9714b88b5a02-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Apr 30 03:34:22.170597 kubelet[3387]: I0430 03:34:22.170540 3387 scope.go:117] "RemoveContainer" containerID="3dbe2a071a15158a95defe3f776ef7f97cf72b7824ca818d6d4219ecf550b165" Apr 30 03:34:22.179038 containerd[2097]: time="2025-04-30T03:34:22.178756185Z" level=info msg="RemoveContainer for \"3dbe2a071a15158a95defe3f776ef7f97cf72b7824ca818d6d4219ecf550b165\"" Apr 30 03:34:22.190095 containerd[2097]: time="2025-04-30T03:34:22.189949845Z" level=info msg="RemoveContainer for \"3dbe2a071a15158a95defe3f776ef7f97cf72b7824ca818d6d4219ecf550b165\" returns successfully" Apr 30 03:34:22.196053 kubelet[3387]: I0430 03:34:22.195956 3387 scope.go:117] "RemoveContainer" containerID="3dbe2a071a15158a95defe3f776ef7f97cf72b7824ca818d6d4219ecf550b165" Apr 30 03:34:22.205652 containerd[2097]: time="2025-04-30T03:34:22.196381342Z" level=error msg="ContainerStatus for \"3dbe2a071a15158a95defe3f776ef7f97cf72b7824ca818d6d4219ecf550b165\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3dbe2a071a15158a95defe3f776ef7f97cf72b7824ca818d6d4219ecf550b165\": not found" Apr 30 03:34:22.224743 kubelet[3387]: E0430 03:34:22.222596 3387 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3dbe2a071a15158a95defe3f776ef7f97cf72b7824ca818d6d4219ecf550b165\": not found" containerID="3dbe2a071a15158a95defe3f776ef7f97cf72b7824ca818d6d4219ecf550b165" Apr 30 03:34:22.227693 kubelet[3387]: I0430 03:34:22.224461 3387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3dbe2a071a15158a95defe3f776ef7f97cf72b7824ca818d6d4219ecf550b165"} err="failed to get container status \"3dbe2a071a15158a95defe3f776ef7f97cf72b7824ca818d6d4219ecf550b165\": rpc error: code = NotFound desc = an error occurred when try to find container \"3dbe2a071a15158a95defe3f776ef7f97cf72b7824ca818d6d4219ecf550b165\": not found" Apr 30 03:34:22.227693 kubelet[3387]: I0430 03:34:22.227687 3387 scope.go:117] "RemoveContainer" containerID="dc60d6cca8fade8855ccd7ea18cafc406070ff8e4df2fdbcf389f0a7734a4cab" Apr 30 03:34:22.229126 containerd[2097]: time="2025-04-30T03:34:22.229085956Z" level=info msg="RemoveContainer for \"dc60d6cca8fade8855ccd7ea18cafc406070ff8e4df2fdbcf389f0a7734a4cab\"" Apr 30 03:34:22.234416 containerd[2097]: time="2025-04-30T03:34:22.234340952Z" level=info msg="RemoveContainer for \"dc60d6cca8fade8855ccd7ea18cafc406070ff8e4df2fdbcf389f0a7734a4cab\" returns successfully" Apr 30 03:34:22.234595 kubelet[3387]: I0430 03:34:22.234572 3387 scope.go:117] "RemoveContainer" containerID="3fabe40a3b4cb347f1d0d4d884bc538b67ea7c5cbd9b52ab93f6e2b067a25a71" Apr 30 03:34:22.235751 containerd[2097]: time="2025-04-30T03:34:22.235543555Z" level=info msg="RemoveContainer for \"3fabe40a3b4cb347f1d0d4d884bc538b67ea7c5cbd9b52ab93f6e2b067a25a71\"" Apr 30 03:34:22.240782 containerd[2097]: time="2025-04-30T03:34:22.240592899Z" level=info msg="RemoveContainer for \"3fabe40a3b4cb347f1d0d4d884bc538b67ea7c5cbd9b52ab93f6e2b067a25a71\" returns successfully" Apr 30 03:34:22.241286 kubelet[3387]: I0430 03:34:22.240877 3387 scope.go:117] "RemoveContainer" containerID="ed1191103ee7fa34564b8cdcffd8e93c79a8663db572a6d1c50cc47edb65ea9f" Apr 30 03:34:22.242211 containerd[2097]: time="2025-04-30T03:34:22.241988090Z" level=info msg="RemoveContainer for \"ed1191103ee7fa34564b8cdcffd8e93c79a8663db572a6d1c50cc47edb65ea9f\"" Apr 30 03:34:22.247212 containerd[2097]: time="2025-04-30T03:34:22.246964178Z" 
level=info msg="RemoveContainer for \"ed1191103ee7fa34564b8cdcffd8e93c79a8663db572a6d1c50cc47edb65ea9f\" returns successfully" Apr 30 03:34:22.247274 kubelet[3387]: I0430 03:34:22.247134 3387 scope.go:117] "RemoveContainer" containerID="882b1c30bb55fdd32bfe4155283a7498c1d9cfa0c6c090c0716b9b16a3e975db" Apr 30 03:34:22.249025 containerd[2097]: time="2025-04-30T03:34:22.248976150Z" level=info msg="RemoveContainer for \"882b1c30bb55fdd32bfe4155283a7498c1d9cfa0c6c090c0716b9b16a3e975db\"" Apr 30 03:34:22.253994 containerd[2097]: time="2025-04-30T03:34:22.253953393Z" level=info msg="RemoveContainer for \"882b1c30bb55fdd32bfe4155283a7498c1d9cfa0c6c090c0716b9b16a3e975db\" returns successfully" Apr 30 03:34:22.254166 kubelet[3387]: I0430 03:34:22.254149 3387 scope.go:117] "RemoveContainer" containerID="bdbb81eca362882f792d081d97877493882598ff0f5ea60e36eef026340981bf" Apr 30 03:34:22.255463 containerd[2097]: time="2025-04-30T03:34:22.255234894Z" level=info msg="RemoveContainer for \"bdbb81eca362882f792d081d97877493882598ff0f5ea60e36eef026340981bf\"" Apr 30 03:34:22.260124 containerd[2097]: time="2025-04-30T03:34:22.260092448Z" level=info msg="RemoveContainer for \"bdbb81eca362882f792d081d97877493882598ff0f5ea60e36eef026340981bf\" returns successfully" Apr 30 03:34:22.260328 kubelet[3387]: I0430 03:34:22.260255 3387 scope.go:117] "RemoveContainer" containerID="dc60d6cca8fade8855ccd7ea18cafc406070ff8e4df2fdbcf389f0a7734a4cab" Apr 30 03:34:22.260530 containerd[2097]: time="2025-04-30T03:34:22.260486207Z" level=error msg="ContainerStatus for \"dc60d6cca8fade8855ccd7ea18cafc406070ff8e4df2fdbcf389f0a7734a4cab\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dc60d6cca8fade8855ccd7ea18cafc406070ff8e4df2fdbcf389f0a7734a4cab\": not found" Apr 30 03:34:22.260819 kubelet[3387]: E0430 03:34:22.260768 3387 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dc60d6cca8fade8855ccd7ea18cafc406070ff8e4df2fdbcf389f0a7734a4cab\": not found" containerID="dc60d6cca8fade8855ccd7ea18cafc406070ff8e4df2fdbcf389f0a7734a4cab" Apr 30 03:34:22.260819 kubelet[3387]: I0430 03:34:22.260799 3387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dc60d6cca8fade8855ccd7ea18cafc406070ff8e4df2fdbcf389f0a7734a4cab"} err="failed to get container status \"dc60d6cca8fade8855ccd7ea18cafc406070ff8e4df2fdbcf389f0a7734a4cab\": rpc error: code = NotFound desc = an error occurred when try to find container \"dc60d6cca8fade8855ccd7ea18cafc406070ff8e4df2fdbcf389f0a7734a4cab\": not found" Apr 30 03:34:22.260819 kubelet[3387]: I0430 03:34:22.260819 3387 scope.go:117] "RemoveContainer" containerID="3fabe40a3b4cb347f1d0d4d884bc538b67ea7c5cbd9b52ab93f6e2b067a25a71" Apr 30 03:34:22.261025 containerd[2097]: time="2025-04-30T03:34:22.260995520Z" level=error msg="ContainerStatus for \"3fabe40a3b4cb347f1d0d4d884bc538b67ea7c5cbd9b52ab93f6e2b067a25a71\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3fabe40a3b4cb347f1d0d4d884bc538b67ea7c5cbd9b52ab93f6e2b067a25a71\": not found" Apr 30 03:34:22.261128 kubelet[3387]: E0430 03:34:22.261105 3387 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3fabe40a3b4cb347f1d0d4d884bc538b67ea7c5cbd9b52ab93f6e2b067a25a71\": not found" 
containerID="3fabe40a3b4cb347f1d0d4d884bc538b67ea7c5cbd9b52ab93f6e2b067a25a71" Apr 30 03:34:22.261169 kubelet[3387]: I0430 03:34:22.261129 3387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3fabe40a3b4cb347f1d0d4d884bc538b67ea7c5cbd9b52ab93f6e2b067a25a71"} err="failed to get container status \"3fabe40a3b4cb347f1d0d4d884bc538b67ea7c5cbd9b52ab93f6e2b067a25a71\": rpc error: code = NotFound desc = an error occurred when try to find container \"3fabe40a3b4cb347f1d0d4d884bc538b67ea7c5cbd9b52ab93f6e2b067a25a71\": not found" Apr 30 03:34:22.261169 kubelet[3387]: I0430 03:34:22.261148 3387 scope.go:117] "RemoveContainer" containerID="ed1191103ee7fa34564b8cdcffd8e93c79a8663db572a6d1c50cc47edb65ea9f" Apr 30 03:34:22.261318 containerd[2097]: time="2025-04-30T03:34:22.261291725Z" level=error msg="ContainerStatus for \"ed1191103ee7fa34564b8cdcffd8e93c79a8663db572a6d1c50cc47edb65ea9f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ed1191103ee7fa34564b8cdcffd8e93c79a8663db572a6d1c50cc47edb65ea9f\": not found" Apr 30 03:34:22.261436 kubelet[3387]: E0430 03:34:22.261415 3387 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ed1191103ee7fa34564b8cdcffd8e93c79a8663db572a6d1c50cc47edb65ea9f\": not found" containerID="ed1191103ee7fa34564b8cdcffd8e93c79a8663db572a6d1c50cc47edb65ea9f" Apr 30 03:34:22.261477 kubelet[3387]: I0430 03:34:22.261436 3387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ed1191103ee7fa34564b8cdcffd8e93c79a8663db572a6d1c50cc47edb65ea9f"} err="failed to get container status \"ed1191103ee7fa34564b8cdcffd8e93c79a8663db572a6d1c50cc47edb65ea9f\": rpc error: code = NotFound desc = an error occurred when try to find container \"ed1191103ee7fa34564b8cdcffd8e93c79a8663db572a6d1c50cc47edb65ea9f\": not found" Apr 30 03:34:22.261477 kubelet[3387]: I0430 03:34:22.261451 3387 scope.go:117] "RemoveContainer" containerID="882b1c30bb55fdd32bfe4155283a7498c1d9cfa0c6c090c0716b9b16a3e975db" Apr 30 03:34:22.261640 containerd[2097]: time="2025-04-30T03:34:22.261604118Z" level=error msg="ContainerStatus for \"882b1c30bb55fdd32bfe4155283a7498c1d9cfa0c6c090c0716b9b16a3e975db\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"882b1c30bb55fdd32bfe4155283a7498c1d9cfa0c6c090c0716b9b16a3e975db\": not found" Apr 30 03:34:22.261854 kubelet[3387]: E0430 03:34:22.261758 3387 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"882b1c30bb55fdd32bfe4155283a7498c1d9cfa0c6c090c0716b9b16a3e975db\": not found" containerID="882b1c30bb55fdd32bfe4155283a7498c1d9cfa0c6c090c0716b9b16a3e975db" Apr 30 03:34:22.261854 kubelet[3387]: I0430 03:34:22.261788 3387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"882b1c30bb55fdd32bfe4155283a7498c1d9cfa0c6c090c0716b9b16a3e975db"} err="failed to get container status \"882b1c30bb55fdd32bfe4155283a7498c1d9cfa0c6c090c0716b9b16a3e975db\": rpc error: code = NotFound desc = an error occurred when try to find container \"882b1c30bb55fdd32bfe4155283a7498c1d9cfa0c6c090c0716b9b16a3e975db\": not found" Apr 30 03:34:22.261854 kubelet[3387]: I0430 03:34:22.261803 3387 scope.go:117] "RemoveContainer" 
containerID="bdbb81eca362882f792d081d97877493882598ff0f5ea60e36eef026340981bf" Apr 30 03:34:22.261963 containerd[2097]: time="2025-04-30T03:34:22.261929012Z" level=error msg="ContainerStatus for \"bdbb81eca362882f792d081d97877493882598ff0f5ea60e36eef026340981bf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bdbb81eca362882f792d081d97877493882598ff0f5ea60e36eef026340981bf\": not found" Apr 30 03:34:22.262086 kubelet[3387]: E0430 03:34:22.262048 3387 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bdbb81eca362882f792d081d97877493882598ff0f5ea60e36eef026340981bf\": not found" containerID="bdbb81eca362882f792d081d97877493882598ff0f5ea60e36eef026340981bf" Apr 30 03:34:22.262086 kubelet[3387]: I0430 03:34:22.262073 3387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bdbb81eca362882f792d081d97877493882598ff0f5ea60e36eef026340981bf"} err="failed to get container status \"bdbb81eca362882f792d081d97877493882598ff0f5ea60e36eef026340981bf\": rpc error: code = NotFound desc = an error occurred when try to find container \"bdbb81eca362882f792d081d97877493882598ff0f5ea60e36eef026340981bf\": not found" Apr 30 03:34:22.867381 kubelet[3387]: I0430 03:34:22.867310 3387 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="580ceb57-785f-4135-b9c1-cd9729fc2aa3" path="/var/lib/kubelet/pods/580ceb57-785f-4135-b9c1-cd9729fc2aa3/volumes" Apr 30 03:34:22.867778 kubelet[3387]: I0430 03:34:22.867738 3387 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2660554-1217-4f03-9ba9-9714b88b5a02" path="/var/lib/kubelet/pods/c2660554-1217-4f03-9ba9-9714b88b5a02/volumes" Apr 30 03:34:22.897431 sshd[5144]: pam_unix(sshd:session): session closed for user core Apr 30 03:34:22.900453 systemd[1]: sshd@22-172.31.18.209:22-147.75.109.163:40180.service: Deactivated successfully. Apr 30 03:34:22.903408 systemd-logind[2075]: Session 23 logged out. Waiting for processes to exit. Apr 30 03:34:22.903755 systemd[1]: session-23.scope: Deactivated successfully. Apr 30 03:34:22.905557 systemd-logind[2075]: Removed session 23. Apr 30 03:34:22.941246 systemd[1]: Started sshd@23-172.31.18.209:22-147.75.109.163:40184.service - OpenSSH per-connection server daemon (147.75.109.163:40184). Apr 30 03:34:23.191937 sshd[5316]: Accepted publickey for core from 147.75.109.163 port 40184 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:34:23.193492 sshd[5316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:34:23.198261 systemd-logind[2075]: New session 24 of user core. Apr 30 03:34:23.203647 systemd[1]: Started session-24.scope - Session 24 of User core. 
Apr 30 03:34:23.908858 ntpd[2051]: Deleting interface #10 lxc_health, fe80::9080:afff:fecd:bab0%8#123, interface stats: received=0, sent=0, dropped=0, active_time=48 secs Apr 30 03:34:23.909760 ntpd[2051]: 30 Apr 03:34:23 ntpd[2051]: Deleting interface #10 lxc_health, fe80::9080:afff:fecd:bab0%8#123, interface stats: received=0, sent=0, dropped=0, active_time=48 secs Apr 30 03:34:23.919925 kubelet[3387]: I0430 03:34:23.914181 3387 topology_manager.go:215] "Topology Admit Handler" podUID="2435780b-0d39-4182-8071-283942ed8bd7" podNamespace="kube-system" podName="cilium-nzbjm" Apr 30 03:34:23.928597 sshd[5316]: pam_unix(sshd:session): session closed for user core Apr 30 03:34:23.930264 kubelet[3387]: E0430 03:34:23.929906 3387 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c2660554-1217-4f03-9ba9-9714b88b5a02" containerName="mount-bpf-fs" Apr 30 03:34:23.930264 kubelet[3387]: E0430 03:34:23.929940 3387 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c2660554-1217-4f03-9ba9-9714b88b5a02" containerName="clean-cilium-state" Apr 30 03:34:23.930264 kubelet[3387]: E0430 03:34:23.929948 3387 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c2660554-1217-4f03-9ba9-9714b88b5a02" containerName="mount-cgroup" Apr 30 03:34:23.930264 kubelet[3387]: E0430 03:34:23.929954 3387 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c2660554-1217-4f03-9ba9-9714b88b5a02" containerName="apply-sysctl-overwrites" Apr 30 03:34:23.930264 kubelet[3387]: E0430 03:34:23.929960 3387 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c2660554-1217-4f03-9ba9-9714b88b5a02" containerName="cilium-agent" Apr 30 03:34:23.930264 kubelet[3387]: E0430 03:34:23.929966 3387 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="580ceb57-785f-4135-b9c1-cd9729fc2aa3" containerName="cilium-operator" Apr 30 03:34:23.930264 kubelet[3387]: I0430 03:34:23.929990 3387 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2660554-1217-4f03-9ba9-9714b88b5a02" containerName="cilium-agent" Apr 30 03:34:23.930264 kubelet[3387]: I0430 03:34:23.929996 3387 memory_manager.go:354] "RemoveStaleState removing state" podUID="580ceb57-785f-4135-b9c1-cd9729fc2aa3" containerName="cilium-operator" Apr 30 03:34:23.936992 systemd-logind[2075]: Session 24 logged out. Waiting for processes to exit. Apr 30 03:34:23.943060 systemd[1]: sshd@23-172.31.18.209:22-147.75.109.163:40184.service: Deactivated successfully. Apr 30 03:34:23.949161 systemd[1]: session-24.scope: Deactivated successfully. Apr 30 03:34:23.954763 systemd-logind[2075]: Removed session 24. Apr 30 03:34:23.968737 systemd[1]: Started sshd@24-172.31.18.209:22-147.75.109.163:40192.service - OpenSSH per-connection server daemon (147.75.109.163:40192). 
Apr 30 03:34:24.093479 kubelet[3387]: I0430 03:34:24.093425 3387 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2435780b-0d39-4182-8071-283942ed8bd7-hubble-tls\") pod \"cilium-nzbjm\" (UID: \"2435780b-0d39-4182-8071-283942ed8bd7\") " pod="kube-system/cilium-nzbjm" Apr 30 03:34:24.093479 kubelet[3387]: I0430 03:34:24.093478 3387 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2435780b-0d39-4182-8071-283942ed8bd7-cni-path\") pod \"cilium-nzbjm\" (UID: \"2435780b-0d39-4182-8071-283942ed8bd7\") " pod="kube-system/cilium-nzbjm" Apr 30 03:34:24.093642 kubelet[3387]: I0430 03:34:24.093498 3387 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2435780b-0d39-4182-8071-283942ed8bd7-xtables-lock\") pod \"cilium-nzbjm\" (UID: \"2435780b-0d39-4182-8071-283942ed8bd7\") " pod="kube-system/cilium-nzbjm" Apr 30 03:34:24.093642 kubelet[3387]: I0430 03:34:24.093514 3387 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2435780b-0d39-4182-8071-283942ed8bd7-clustermesh-secrets\") pod \"cilium-nzbjm\" (UID: \"2435780b-0d39-4182-8071-283942ed8bd7\") " pod="kube-system/cilium-nzbjm" Apr 30 03:34:24.093642 kubelet[3387]: I0430 03:34:24.093531 3387 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2435780b-0d39-4182-8071-283942ed8bd7-cilium-cgroup\") pod \"cilium-nzbjm\" (UID: \"2435780b-0d39-4182-8071-283942ed8bd7\") " pod="kube-system/cilium-nzbjm" Apr 30 03:34:24.093642 kubelet[3387]: I0430 03:34:24.093551 3387 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2435780b-0d39-4182-8071-283942ed8bd7-etc-cni-netd\") pod \"cilium-nzbjm\" (UID: \"2435780b-0d39-4182-8071-283942ed8bd7\") " pod="kube-system/cilium-nzbjm" Apr 30 03:34:24.093642 kubelet[3387]: I0430 03:34:24.093570 3387 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2435780b-0d39-4182-8071-283942ed8bd7-cilium-run\") pod \"cilium-nzbjm\" (UID: \"2435780b-0d39-4182-8071-283942ed8bd7\") " pod="kube-system/cilium-nzbjm" Apr 30 03:34:24.093642 kubelet[3387]: I0430 03:34:24.093587 3387 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2435780b-0d39-4182-8071-283942ed8bd7-bpf-maps\") pod \"cilium-nzbjm\" (UID: \"2435780b-0d39-4182-8071-283942ed8bd7\") " pod="kube-system/cilium-nzbjm" Apr 30 03:34:24.093805 kubelet[3387]: I0430 03:34:24.093605 3387 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdf4r\" (UniqueName: \"kubernetes.io/projected/2435780b-0d39-4182-8071-283942ed8bd7-kube-api-access-vdf4r\") pod \"cilium-nzbjm\" (UID: \"2435780b-0d39-4182-8071-283942ed8bd7\") " pod="kube-system/cilium-nzbjm" Apr 30 03:34:24.093805 kubelet[3387]: I0430 03:34:24.093620 3387 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/2435780b-0d39-4182-8071-283942ed8bd7-host-proc-sys-kernel\") pod \"cilium-nzbjm\" (UID: \"2435780b-0d39-4182-8071-283942ed8bd7\") " pod="kube-system/cilium-nzbjm" Apr 30 03:34:24.093805 kubelet[3387]: I0430 03:34:24.093637 3387 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2435780b-0d39-4182-8071-283942ed8bd7-hostproc\") pod \"cilium-nzbjm\" (UID: \"2435780b-0d39-4182-8071-283942ed8bd7\") " pod="kube-system/cilium-nzbjm" Apr 30 03:34:24.093805 kubelet[3387]: I0430 03:34:24.093652 3387 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2435780b-0d39-4182-8071-283942ed8bd7-lib-modules\") pod \"cilium-nzbjm\" (UID: \"2435780b-0d39-4182-8071-283942ed8bd7\") " pod="kube-system/cilium-nzbjm" Apr 30 03:34:24.093805 kubelet[3387]: I0430 03:34:24.093667 3387 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2435780b-0d39-4182-8071-283942ed8bd7-cilium-config-path\") pod \"cilium-nzbjm\" (UID: \"2435780b-0d39-4182-8071-283942ed8bd7\") " pod="kube-system/cilium-nzbjm" Apr 30 03:34:24.093966 kubelet[3387]: I0430 03:34:24.093681 3387 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2435780b-0d39-4182-8071-283942ed8bd7-cilium-ipsec-secrets\") pod \"cilium-nzbjm\" (UID: \"2435780b-0d39-4182-8071-283942ed8bd7\") " pod="kube-system/cilium-nzbjm" Apr 30 03:34:24.093966 kubelet[3387]: I0430 03:34:24.093696 3387 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2435780b-0d39-4182-8071-283942ed8bd7-host-proc-sys-net\") pod \"cilium-nzbjm\" (UID: \"2435780b-0d39-4182-8071-283942ed8bd7\") " pod="kube-system/cilium-nzbjm" Apr 30 03:34:24.216629 sshd[5329]: Accepted publickey for core from 147.75.109.163 port 40192 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:34:24.222381 sshd[5329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:34:24.246434 systemd-logind[2075]: New session 25 of user core. Apr 30 03:34:24.252741 systemd[1]: Started session-25.scope - Session 25 of User core. Apr 30 03:34:24.261277 containerd[2097]: time="2025-04-30T03:34:24.260951205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nzbjm,Uid:2435780b-0d39-4182-8071-283942ed8bd7,Namespace:kube-system,Attempt:0,}" Apr 30 03:34:24.288878 containerd[2097]: time="2025-04-30T03:34:24.288752880Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:34:24.288878 containerd[2097]: time="2025-04-30T03:34:24.288819357Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:34:24.289072 containerd[2097]: time="2025-04-30T03:34:24.288900627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:34:24.289821 containerd[2097]: time="2025-04-30T03:34:24.289755813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:34:24.328479 containerd[2097]: time="2025-04-30T03:34:24.328338219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nzbjm,Uid:2435780b-0d39-4182-8071-283942ed8bd7,Namespace:kube-system,Attempt:0,} returns sandbox id \"c2d87269b9746f00a324eb7d65be4d95eaf51acb514a5dd08797bf8a57ee5db7\"" Apr 30 03:34:24.336563 containerd[2097]: time="2025-04-30T03:34:24.336525124Z" level=info msg="CreateContainer within sandbox \"c2d87269b9746f00a324eb7d65be4d95eaf51acb514a5dd08797bf8a57ee5db7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 30 03:34:24.356788 containerd[2097]: time="2025-04-30T03:34:24.356686177Z" level=info msg="CreateContainer within sandbox \"c2d87269b9746f00a324eb7d65be4d95eaf51acb514a5dd08797bf8a57ee5db7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"41c303a77adad55deda95ded31ba1443e20c7c9d9f88d41a3cd4d43251f71b75\"" Apr 30 03:34:24.358759 containerd[2097]: time="2025-04-30T03:34:24.357747188Z" level=info msg="StartContainer for \"41c303a77adad55deda95ded31ba1443e20c7c9d9f88d41a3cd4d43251f71b75\"" Apr 30 03:34:24.414819 sshd[5329]: pam_unix(sshd:session): session closed for user core Apr 30 03:34:24.422806 systemd[1]: sshd@24-172.31.18.209:22-147.75.109.163:40192.service: Deactivated successfully. Apr 30 03:34:24.430560 systemd[1]: session-25.scope: Deactivated successfully. Apr 30 03:34:24.432831 systemd-logind[2075]: Session 25 logged out. Waiting for processes to exit. Apr 30 03:34:24.434789 systemd-logind[2075]: Removed session 25. Apr 30 03:34:24.435189 containerd[2097]: time="2025-04-30T03:34:24.435152836Z" level=info msg="StartContainer for \"41c303a77adad55deda95ded31ba1443e20c7c9d9f88d41a3cd4d43251f71b75\" returns successfully" Apr 30 03:34:24.458401 systemd[1]: Started sshd@25-172.31.18.209:22-147.75.109.163:40194.service - OpenSSH per-connection server daemon (147.75.109.163:40194). Apr 30 03:34:24.494979 containerd[2097]: time="2025-04-30T03:34:24.494853719Z" level=info msg="shim disconnected" id=41c303a77adad55deda95ded31ba1443e20c7c9d9f88d41a3cd4d43251f71b75 namespace=k8s.io Apr 30 03:34:24.494979 containerd[2097]: time="2025-04-30T03:34:24.494904255Z" level=warning msg="cleaning up after shim disconnected" id=41c303a77adad55deda95ded31ba1443e20c7c9d9f88d41a3cd4d43251f71b75 namespace=k8s.io Apr 30 03:34:24.494979 containerd[2097]: time="2025-04-30T03:34:24.494912745Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:34:24.697216 sshd[5420]: Accepted publickey for core from 147.75.109.163 port 40194 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:34:24.699040 sshd[5420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:34:24.704704 systemd-logind[2075]: New session 26 of user core. Apr 30 03:34:24.708796 systemd[1]: Started session-26.scope - Session 26 of User core. 
Apr 30 03:34:24.970695 kubelet[3387]: E0430 03:34:24.970638 3387 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 30 03:34:25.195584 containerd[2097]: time="2025-04-30T03:34:25.195554419Z" level=info msg="CreateContainer within sandbox \"c2d87269b9746f00a324eb7d65be4d95eaf51acb514a5dd08797bf8a57ee5db7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 30 03:34:25.227080 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2770919185.mount: Deactivated successfully. Apr 30 03:34:25.234561 containerd[2097]: time="2025-04-30T03:34:25.234512919Z" level=info msg="CreateContainer within sandbox \"c2d87269b9746f00a324eb7d65be4d95eaf51acb514a5dd08797bf8a57ee5db7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"fccfaefcaa9483346be5ad874472c58ae573719c35f56a8325c6a6ba573424de\"" Apr 30 03:34:25.235168 containerd[2097]: time="2025-04-30T03:34:25.235140154Z" level=info msg="StartContainer for \"fccfaefcaa9483346be5ad874472c58ae573719c35f56a8325c6a6ba573424de\"" Apr 30 03:34:25.295167 containerd[2097]: time="2025-04-30T03:34:25.295116790Z" level=info msg="StartContainer for \"fccfaefcaa9483346be5ad874472c58ae573719c35f56a8325c6a6ba573424de\" returns successfully" Apr 30 03:34:25.346743 containerd[2097]: time="2025-04-30T03:34:25.346691481Z" level=info msg="shim disconnected" id=fccfaefcaa9483346be5ad874472c58ae573719c35f56a8325c6a6ba573424de namespace=k8s.io Apr 30 03:34:25.346971 containerd[2097]: time="2025-04-30T03:34:25.346790483Z" level=warning msg="cleaning up after shim disconnected" id=fccfaefcaa9483346be5ad874472c58ae573719c35f56a8325c6a6ba573424de namespace=k8s.io Apr 30 03:34:25.346971 containerd[2097]: time="2025-04-30T03:34:25.346809351Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:34:26.200162 containerd[2097]: time="2025-04-30T03:34:26.200130244Z" level=info msg="CreateContainer within sandbox \"c2d87269b9746f00a324eb7d65be4d95eaf51acb514a5dd08797bf8a57ee5db7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 30 03:34:26.205658 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fccfaefcaa9483346be5ad874472c58ae573719c35f56a8325c6a6ba573424de-rootfs.mount: Deactivated successfully. 
Apr 30 03:34:26.234469 containerd[2097]: time="2025-04-30T03:34:26.234427566Z" level=info msg="CreateContainer within sandbox \"c2d87269b9746f00a324eb7d65be4d95eaf51acb514a5dd08797bf8a57ee5db7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d15ca18605c2697f51fecb54940120e329d3c8c89dab802ebb85705e5266adf4\"" Apr 30 03:34:26.235557 containerd[2097]: time="2025-04-30T03:34:26.235065511Z" level=info msg="StartContainer for \"d15ca18605c2697f51fecb54940120e329d3c8c89dab802ebb85705e5266adf4\"" Apr 30 03:34:26.322228 containerd[2097]: time="2025-04-30T03:34:26.321701799Z" level=info msg="StartContainer for \"d15ca18605c2697f51fecb54940120e329d3c8c89dab802ebb85705e5266adf4\" returns successfully" Apr 30 03:34:26.377645 containerd[2097]: time="2025-04-30T03:34:26.377255325Z" level=info msg="shim disconnected" id=d15ca18605c2697f51fecb54940120e329d3c8c89dab802ebb85705e5266adf4 namespace=k8s.io Apr 30 03:34:26.377645 containerd[2097]: time="2025-04-30T03:34:26.377316561Z" level=warning msg="cleaning up after shim disconnected" id=d15ca18605c2697f51fecb54940120e329d3c8c89dab802ebb85705e5266adf4 namespace=k8s.io Apr 30 03:34:26.377645 containerd[2097]: time="2025-04-30T03:34:26.377330856Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:34:26.404832 containerd[2097]: time="2025-04-30T03:34:26.404770512Z" level=warning msg="cleanup warnings time=\"2025-04-30T03:34:26Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 30 03:34:26.497532 kubelet[3387]: I0430 03:34:26.497394 3387 setters.go:580] "Node became not ready" node="ip-172-31-18-209" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-04-30T03:34:26Z","lastTransitionTime":"2025-04-30T03:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Apr 30 03:34:27.204407 containerd[2097]: time="2025-04-30T03:34:27.202303226Z" level=info msg="CreateContainer within sandbox \"c2d87269b9746f00a324eb7d65be4d95eaf51acb514a5dd08797bf8a57ee5db7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 30 03:34:27.205616 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d15ca18605c2697f51fecb54940120e329d3c8c89dab802ebb85705e5266adf4-rootfs.mount: Deactivated successfully. Apr 30 03:34:27.224904 containerd[2097]: time="2025-04-30T03:34:27.224692529Z" level=info msg="CreateContainer within sandbox \"c2d87269b9746f00a324eb7d65be4d95eaf51acb514a5dd08797bf8a57ee5db7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ba4aa60920deae7168b93789bd8362c017e59343420259bbba867553af6535f8\"" Apr 30 03:34:27.233557 containerd[2097]: time="2025-04-30T03:34:27.233503904Z" level=info msg="StartContainer for \"ba4aa60920deae7168b93789bd8362c017e59343420259bbba867553af6535f8\"" Apr 30 03:34:27.290338 containerd[2097]: time="2025-04-30T03:34:27.290304390Z" level=info msg="StartContainer for \"ba4aa60920deae7168b93789bd8362c017e59343420259bbba867553af6535f8\" returns successfully" Apr 30 03:34:27.307683 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba4aa60920deae7168b93789bd8362c017e59343420259bbba867553af6535f8-rootfs.mount: Deactivated successfully. 
Apr 30 03:34:27.325056 containerd[2097]: time="2025-04-30T03:34:27.324998318Z" level=info msg="shim disconnected" id=ba4aa60920deae7168b93789bd8362c017e59343420259bbba867553af6535f8 namespace=k8s.io
Apr 30 03:34:27.325056 containerd[2097]: time="2025-04-30T03:34:27.325055139Z" level=warning msg="cleaning up after shim disconnected" id=ba4aa60920deae7168b93789bd8362c017e59343420259bbba867553af6535f8 namespace=k8s.io
Apr 30 03:34:27.325056 containerd[2097]: time="2025-04-30T03:34:27.325064391Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:34:28.209821 containerd[2097]: time="2025-04-30T03:34:28.209128010Z" level=info msg="CreateContainer within sandbox \"c2d87269b9746f00a324eb7d65be4d95eaf51acb514a5dd08797bf8a57ee5db7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 30 03:34:28.233726 containerd[2097]: time="2025-04-30T03:34:28.233537189Z" level=info msg="CreateContainer within sandbox \"c2d87269b9746f00a324eb7d65be4d95eaf51acb514a5dd08797bf8a57ee5db7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"12e10e8bad070527c23c9f8e1aef63bb8e97e1080e71bb87f318c84969fffcd9\""
Apr 30 03:34:28.234922 containerd[2097]: time="2025-04-30T03:34:28.234102860Z" level=info msg="StartContainer for \"12e10e8bad070527c23c9f8e1aef63bb8e97e1080e71bb87f318c84969fffcd9\""
Apr 30 03:34:28.300398 containerd[2097]: time="2025-04-30T03:34:28.300322226Z" level=info msg="StartContainer for \"12e10e8bad070527c23c9f8e1aef63bb8e97e1080e71bb87f318c84969fffcd9\" returns successfully"
Apr 30 03:34:28.868395 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Apr 30 03:34:29.226328 systemd[1]: run-containerd-runc-k8s.io-12e10e8bad070527c23c9f8e1aef63bb8e97e1080e71bb87f318c84969fffcd9-runc.ZR6oQO.mount: Deactivated successfully.
Apr 30 03:34:31.600903 kubelet[3387]: E0430 03:34:31.600830 3387 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:34370->127.0.0.1:42035: write tcp 127.0.0.1:34370->127.0.0.1:42035: write: connection reset by peer
Apr 30 03:34:31.721870 systemd-networkd[1652]: lxc_health: Link UP
Apr 30 03:34:31.724418 systemd-networkd[1652]: lxc_health: Gained carrier
Apr 30 03:34:31.726931 (udev-worker)[6183]: Network interface NamePolicy= disabled on kernel command line.
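The entries above show the Cilium pod's init containers (apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) running to completion inside one sandbox before the cilium-agent container starts; until the agent initializes the CNI, the kubelet keeps reporting "Container runtime network not ready". A minimal client-go sketch that polls the node's Ready condition until it flips back to True is shown below; the helper name, kubeconfig handling, and poll interval are illustrative assumptions, not taken from this log.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForNodeReady polls a node until its Ready condition reports True,
    // which the kubelet sets once the CNI plugin (here, Cilium) has initialized.
    func waitForNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
        for {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            for _, cond := range node.Status.Conditions {
                if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
                    return nil
                }
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(2 * time.Second):
            }
        }
    }

    func main() {
        // Placeholder kubeconfig handling; adjust for the environment at hand.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
        defer cancel()
        if err := waitForNodeReady(ctx, cs, "ip-172-31-18-209"); err != nil {
            panic(err)
        }
        fmt.Println("node is Ready")
    }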
Apr 30 03:34:32.293481 kubelet[3387]: I0430 03:34:32.293415 3387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-nzbjm" podStartSLOduration=9.293392593 podStartE2EDuration="9.293392593s" podCreationTimestamp="2025-04-30 03:34:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:34:29.225258817 +0000 UTC m=+94.502415834" watchObservedRunningTime="2025-04-30 03:34:32.293392593 +0000 UTC m=+97.570549617"
Apr 30 03:34:33.121561 systemd-networkd[1652]: lxc_health: Gained IPv6LL
Apr 30 03:34:33.889471 kubelet[3387]: E0430 03:34:33.889201 3387 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:44684->127.0.0.1:42035: write tcp 127.0.0.1:44684->127.0.0.1:42035: write: broken pipe
Apr 30 03:34:35.908948 ntpd[2051]: Listen normally on 13 lxc_health [fe80::94db:9ff:fe0c:b953%14]:123
Apr 30 03:34:35.911828 ntpd[2051]: 30 Apr 03:34:35 ntpd[2051]: Listen normally on 13 lxc_health [fe80::94db:9ff:fe0c:b953%14]:123
Apr 30 03:34:36.010013 systemd[1]: run-containerd-runc-k8s.io-12e10e8bad070527c23c9f8e1aef63bb8e97e1080e71bb87f318c84969fffcd9-runc.CQlq5d.mount: Deactivated successfully.
Apr 30 03:34:38.235769 sshd[5420]: pam_unix(sshd:session): session closed for user core
Apr 30 03:34:38.240653 systemd[1]: sshd@25-172.31.18.209:22-147.75.109.163:40194.service: Deactivated successfully.
Apr 30 03:34:38.243568 systemd-logind[2075]: Session 26 logged out. Waiting for processes to exit.
Apr 30 03:34:38.244186 systemd[1]: session-26.scope: Deactivated successfully.
Apr 30 03:34:38.245946 systemd-logind[2075]: Removed session 26.
Apr 30 03:34:51.426421 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e74406c3b608dfee733a8075ade4cc0ec37ba71c592f8ba91ed9b020b13a62a-rootfs.mount: Deactivated successfully.
Apr 30 03:34:51.452437 containerd[2097]: time="2025-04-30T03:34:51.452347775Z" level=info msg="shim disconnected" id=6e74406c3b608dfee733a8075ade4cc0ec37ba71c592f8ba91ed9b020b13a62a namespace=k8s.io
Apr 30 03:34:51.452437 containerd[2097]: time="2025-04-30T03:34:51.452432001Z" level=warning msg="cleaning up after shim disconnected" id=6e74406c3b608dfee733a8075ade4cc0ec37ba71c592f8ba91ed9b020b13a62a namespace=k8s.io
Apr 30 03:34:51.452437 containerd[2097]: time="2025-04-30T03:34:51.452444651Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:34:52.259750 kubelet[3387]: I0430 03:34:52.259702 3387 scope.go:117] "RemoveContainer" containerID="6e74406c3b608dfee733a8075ade4cc0ec37ba71c592f8ba91ed9b020b13a62a"
Apr 30 03:34:52.262382 containerd[2097]: time="2025-04-30T03:34:52.262331103Z" level=info msg="CreateContainer within sandbox \"a36cb6b7efcbe6c32525813faecd3a4e6c1075938a3aa1f4f3d67479f4c5fd85\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 30 03:34:52.279828 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2183086517.mount: Deactivated successfully.
Apr 30 03:34:52.285089 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount920445444.mount: Deactivated successfully.
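The "Observed pod startup duration" entry reports podStartSLOduration as roughly the time from podCreationTimestamp (03:34:23) to when the pod was observed running (about 9.29 s here). A rough, illustrative reconstruction of that kind of measurement from a pod's status is sketched below, assuming the duration is taken as the latest container start time minus the pod's creation timestamp; this approximates, but is not, the kubelet's actual latency tracker.

    package podstartup

    import (
        "time"

        corev1 "k8s.io/api/core/v1"
    )

    // StartupDuration approximates the kubelet's "Observed pod startup duration":
    // the time from pod creation until the last of its containers is Running.
    // Illustrative reconstruction only, not the kubelet's exact SLO logic.
    func StartupDuration(pod *corev1.Pod) (time.Duration, bool) {
        var latest time.Time
        for _, cs := range pod.Status.ContainerStatuses {
            if cs.State.Running == nil {
                return 0, false // at least one container has not started yet
            }
            if cs.State.Running.StartedAt.Time.After(latest) {
                latest = cs.State.Running.StartedAt.Time
            }
        }
        if latest.IsZero() {
            return 0, false // no container statuses reported yet
        }
        return latest.Sub(pod.CreationTimestamp.Time), true
    }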
Apr 30 03:34:52.295910 containerd[2097]: time="2025-04-30T03:34:52.295862733Z" level=info msg="CreateContainer within sandbox \"a36cb6b7efcbe6c32525813faecd3a4e6c1075938a3aa1f4f3d67479f4c5fd85\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"4ae33f2b8668291a6e6a69ce193cc42c98dd3f38334e55ee00a8813c3e886d07\""
Apr 30 03:34:52.297516 containerd[2097]: time="2025-04-30T03:34:52.296516828Z" level=info msg="StartContainer for \"4ae33f2b8668291a6e6a69ce193cc42c98dd3f38334e55ee00a8813c3e886d07\""
Apr 30 03:34:52.379804 containerd[2097]: time="2025-04-30T03:34:52.379766941Z" level=info msg="StartContainer for \"4ae33f2b8668291a6e6a69ce193cc42c98dd3f38334e55ee00a8813c3e886d07\" returns successfully"
Apr 30 03:34:54.902341 containerd[2097]: time="2025-04-30T03:34:54.902302288Z" level=info msg="StopPodSandbox for \"0c4eb54c6fe1eb479308cb21f150a1d4e5c090c70095922ad694381c0b592e19\""
Apr 30 03:34:54.903031 containerd[2097]: time="2025-04-30T03:34:54.902411822Z" level=info msg="TearDown network for sandbox \"0c4eb54c6fe1eb479308cb21f150a1d4e5c090c70095922ad694381c0b592e19\" successfully"
Apr 30 03:34:54.903031 containerd[2097]: time="2025-04-30T03:34:54.902424257Z" level=info msg="StopPodSandbox for \"0c4eb54c6fe1eb479308cb21f150a1d4e5c090c70095922ad694381c0b592e19\" returns successfully"
Apr 30 03:34:54.910132 containerd[2097]: time="2025-04-30T03:34:54.910088680Z" level=info msg="RemovePodSandbox for \"0c4eb54c6fe1eb479308cb21f150a1d4e5c090c70095922ad694381c0b592e19\""
Apr 30 03:34:54.914442 containerd[2097]: time="2025-04-30T03:34:54.914382891Z" level=info msg="Forcibly stopping sandbox \"0c4eb54c6fe1eb479308cb21f150a1d4e5c090c70095922ad694381c0b592e19\""
Apr 30 03:34:54.914592 containerd[2097]: time="2025-04-30T03:34:54.914482672Z" level=info msg="TearDown network for sandbox \"0c4eb54c6fe1eb479308cb21f150a1d4e5c090c70095922ad694381c0b592e19\" successfully"
Apr 30 03:34:54.921539 containerd[2097]: time="2025-04-30T03:34:54.921489798Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0c4eb54c6fe1eb479308cb21f150a1d4e5c090c70095922ad694381c0b592e19\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 30 03:34:54.921669 containerd[2097]: time="2025-04-30T03:34:54.921560120Z" level=info msg="RemovePodSandbox \"0c4eb54c6fe1eb479308cb21f150a1d4e5c090c70095922ad694381c0b592e19\" returns successfully"
Apr 30 03:34:54.922090 containerd[2097]: time="2025-04-30T03:34:54.922067852Z" level=info msg="StopPodSandbox for \"e87c04033fa24e1e590342e67bff2a4977eb25c739b838c5e1fc2f484c30e7ac\""
Apr 30 03:34:54.922182 containerd[2097]: time="2025-04-30T03:34:54.922150944Z" level=info msg="TearDown network for sandbox \"e87c04033fa24e1e590342e67bff2a4977eb25c739b838c5e1fc2f484c30e7ac\" successfully"
Apr 30 03:34:54.922182 containerd[2097]: time="2025-04-30T03:34:54.922161246Z" level=info msg="StopPodSandbox for \"e87c04033fa24e1e590342e67bff2a4977eb25c739b838c5e1fc2f484c30e7ac\" returns successfully"
Apr 30 03:34:54.922490 containerd[2097]: time="2025-04-30T03:34:54.922469869Z" level=info msg="RemovePodSandbox for \"e87c04033fa24e1e590342e67bff2a4977eb25c739b838c5e1fc2f484c30e7ac\""
Apr 30 03:34:54.922541 containerd[2097]: time="2025-04-30T03:34:54.922491464Z" level=info msg="Forcibly stopping sandbox \"e87c04033fa24e1e590342e67bff2a4977eb25c739b838c5e1fc2f484c30e7ac\""
Apr 30 03:34:54.922541 containerd[2097]: time="2025-04-30T03:34:54.922533706Z" level=info msg="TearDown network for sandbox \"e87c04033fa24e1e590342e67bff2a4977eb25c739b838c5e1fc2f484c30e7ac\" successfully"
Apr 30 03:34:54.927104 containerd[2097]: time="2025-04-30T03:34:54.927043327Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e87c04033fa24e1e590342e67bff2a4977eb25c739b838c5e1fc2f484c30e7ac\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 30 03:34:54.927104 containerd[2097]: time="2025-04-30T03:34:54.927098399Z" level=info msg="RemovePodSandbox \"e87c04033fa24e1e590342e67bff2a4977eb25c739b838c5e1fc2f484c30e7ac\" returns successfully"
Apr 30 03:34:56.712855 kubelet[3387]: E0430 03:34:56.712793 3387 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.209:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-209?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Apr 30 03:34:57.452725 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-90636ad9422db2b5c57795148c6ef17a100c9e8e67bf0e0499ff8f2555bbcd89-rootfs.mount: Deactivated successfully.
Apr 30 03:34:57.476320 containerd[2097]: time="2025-04-30T03:34:57.476247523Z" level=info msg="shim disconnected" id=90636ad9422db2b5c57795148c6ef17a100c9e8e67bf0e0499ff8f2555bbcd89 namespace=k8s.io
Apr 30 03:34:57.476320 containerd[2097]: time="2025-04-30T03:34:57.476300052Z" level=warning msg="cleaning up after shim disconnected" id=90636ad9422db2b5c57795148c6ef17a100c9e8e67bf0e0499ff8f2555bbcd89 namespace=k8s.io
Apr 30 03:34:57.476320 containerd[2097]: time="2025-04-30T03:34:57.476325043Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:34:58.275041 kubelet[3387]: I0430 03:34:58.274994 3387 scope.go:117] "RemoveContainer" containerID="90636ad9422db2b5c57795148c6ef17a100c9e8e67bf0e0499ff8f2555bbcd89"
Apr 30 03:34:58.277259 containerd[2097]: time="2025-04-30T03:34:58.277226073Z" level=info msg="CreateContainer within sandbox \"8e32fcc9eb294c5d512604424a39f95bc448b1f9f2fbc0041726116ac15df0ce\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Apr 30 03:34:58.305621 containerd[2097]: time="2025-04-30T03:34:58.305574744Z" level=info msg="CreateContainer within sandbox \"8e32fcc9eb294c5d512604424a39f95bc448b1f9f2fbc0041726116ac15df0ce\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"af4d231fce466c67787c6f35f53bc979806145e551d218f5adc291a4bfc2ce7b\""
Apr 30 03:34:58.306159 containerd[2097]: time="2025-04-30T03:34:58.306127941Z" level=info msg="StartContainer for \"af4d231fce466c67787c6f35f53bc979806145e551d218f5adc291a4bfc2ce7b\""
Apr 30 03:34:58.388524 containerd[2097]: time="2025-04-30T03:34:58.388469195Z" level=info msg="StartContainer for \"af4d231fce466c67787c6f35f53bc979806145e551d218f5adc291a4bfc2ce7b\" returns successfully"
Apr 30 03:35:06.714425 kubelet[3387]: E0430 03:35:06.714283 3387 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.209:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-209?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
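The repeated "Failed to update lease" errors mean the kubelet could not renew its Lease object in the kube-node-lease namespace within the 10 s client timeout while the control-plane containers (kube-controller-manager, kube-scheduler) were being restarted. A small client-go sketch for inspecting that lease's last renew time is given below; the node name and kubeconfig path are placeholders taken from this log and the default home kubeconfig, not a prescribed procedure.

    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Placeholder kubeconfig path; in-cluster config would also work.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()

        // Each node owns a Lease of the same name in kube-node-lease;
        // the kubelet renews it roughly every 10 seconds by default.
        lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(ctx, "ip-172-31-18-209", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        if lease.Spec.RenewTime != nil {
            fmt.Printf("lease last renewed %s ago\n", time.Since(lease.Spec.RenewTime.Time).Round(time.Second))
        }
    }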