Apr 30 03:30:21.924782 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 29 23:03:20 -00 2025
Apr 30 03:30:21.924846 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:30:21.924867 kernel: BIOS-provided physical RAM map:
Apr 30 03:30:21.924877 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 30 03:30:21.924887 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Apr 30 03:30:21.924898 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20
Apr 30 03:30:21.924911 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved
Apr 30 03:30:21.924923 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Apr 30 03:30:21.924935 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Apr 30 03:30:21.924949 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Apr 30 03:30:21.924960 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Apr 30 03:30:21.924972 kernel: NX (Execute Disable) protection: active
Apr 30 03:30:21.924983 kernel: APIC: Static calls initialized
Apr 30 03:30:21.924995 kernel: efi: EFI v2.7 by EDK II
Apr 30 03:30:21.925010 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518
Apr 30 03:30:21.925025 kernel: SMBIOS 2.7 present.
Apr 30 03:30:21.925037 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Apr 30 03:30:21.925049 kernel: Hypervisor detected: KVM
Apr 30 03:30:21.925062 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 30 03:30:21.925075 kernel: kvm-clock: using sched offset of 3762576229 cycles
Apr 30 03:30:21.925088 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 30 03:30:21.925102 kernel: tsc: Detected 2500.004 MHz processor
Apr 30 03:30:21.925116 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 30 03:30:21.925129 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 30 03:30:21.925142 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Apr 30 03:30:21.925158 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 30 03:30:21.925172 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 30 03:30:21.925185 kernel: Using GB pages for direct mapping
Apr 30 03:30:21.925199 kernel: Secure boot disabled
Apr 30 03:30:21.925212 kernel: ACPI: Early table checksum verification disabled
Apr 30 03:30:21.925225 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Apr 30 03:30:21.925239 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Apr 30 03:30:21.925252 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Apr 30 03:30:21.925265 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Apr 30 03:30:21.925282 kernel: ACPI: FACS 0x00000000789D0000 000040
Apr 30 03:30:21.925296 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Apr 30 03:30:21.925310 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Apr 30 03:30:21.925323 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Apr 30 03:30:21.925337 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Apr 30 03:30:21.925351 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Apr 30 03:30:21.925370 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Apr 30 03:30:21.925388 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Apr 30 03:30:21.925403 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Apr 30 03:30:21.925418 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Apr 30 03:30:21.925433 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Apr 30 03:30:21.925448 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Apr 30 03:30:21.925463 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Apr 30 03:30:21.925477 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Apr 30 03:30:21.925494 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Apr 30 03:30:21.925509 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Apr 30 03:30:21.925524 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Apr 30 03:30:21.925539 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Apr 30 03:30:21.925552 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Apr 30 03:30:21.925566 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Apr 30 03:30:21.925580 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Apr 30 03:30:21.925595 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Apr 30 03:30:21.925610 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Apr 30 03:30:21.925628 kernel: NUMA: Initialized distance table, cnt=1
Apr 30 03:30:21.925642 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff]
Apr 30 03:30:21.925657 kernel: Zone ranges:
Apr 30 03:30:21.925670 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 30 03:30:21.925682 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Apr 30 03:30:21.925697 kernel: Normal empty
Apr 30 03:30:21.925711 kernel: Movable zone start for each node
Apr 30 03:30:21.925725 kernel: Early memory node ranges
Apr 30 03:30:21.925739 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 30 03:30:21.925753 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Apr 30 03:30:21.925771 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Apr 30 03:30:21.925784 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Apr 30 03:30:21.925798 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 30 03:30:21.925831 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 30 03:30:21.925844 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Apr 30 03:30:21.925855 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Apr 30 03:30:21.925867 kernel: ACPI: PM-Timer IO Port: 0xb008
Apr 30 03:30:21.925879 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 30 03:30:21.925892 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Apr 30 03:30:21.925908 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 30 03:30:21.925921 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 30 03:30:21.925935 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 30 03:30:21.925950 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 30 03:30:21.925964 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 30 03:30:21.925978 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 30 03:30:21.925993 kernel: TSC deadline timer available
Apr 30 03:30:21.926007 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 30 03:30:21.926022 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 30 03:30:21.926040 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Apr 30 03:30:21.926054 kernel: Booting paravirtualized kernel on KVM
Apr 30 03:30:21.926068 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 30 03:30:21.926083 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 30 03:30:21.926098 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Apr 30 03:30:21.926112 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Apr 30 03:30:21.926127 kernel: pcpu-alloc: [0] 0 1
Apr 30 03:30:21.926140 kernel: kvm-guest: PV spinlocks enabled
Apr 30 03:30:21.926155 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 30 03:30:21.926175 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:30:21.926191 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 30 03:30:21.926205 kernel: random: crng init done
Apr 30 03:30:21.926220 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 30 03:30:21.926234 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 30 03:30:21.926248 kernel: Fallback order for Node 0: 0
Apr 30 03:30:21.926262 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Apr 30 03:30:21.926277 kernel: Policy zone: DMA32
Apr 30 03:30:21.926295 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 30 03:30:21.926309 kernel: Memory: 1874608K/2037804K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42864K init, 2328K bss, 162936K reserved, 0K cma-reserved)
Apr 30 03:30:21.926324 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 30 03:30:21.926338 kernel: Kernel/User page tables isolation: enabled
Apr 30 03:30:21.926353 kernel: ftrace: allocating 37944 entries in 149 pages
Apr 30 03:30:21.926378 kernel: ftrace: allocated 149 pages with 4 groups
Apr 30 03:30:21.926392 kernel: Dynamic Preempt: voluntary
Apr 30 03:30:21.926406 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 30 03:30:21.926422 kernel: rcu: RCU event tracing is enabled.
Apr 30 03:30:21.926440 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 30 03:30:21.926455 kernel: Trampoline variant of Tasks RCU enabled.
Apr 30 03:30:21.926469 kernel: Rude variant of Tasks RCU enabled.
Apr 30 03:30:21.926483 kernel: Tracing variant of Tasks RCU enabled.
Apr 30 03:30:21.926496 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 30 03:30:21.926508 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 30 03:30:21.926522 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 30 03:30:21.926552 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 30 03:30:21.926568 kernel: Console: colour dummy device 80x25
Apr 30 03:30:21.926584 kernel: printk: console [tty0] enabled
Apr 30 03:30:21.926599 kernel: printk: console [ttyS0] enabled
Apr 30 03:30:21.926615 kernel: ACPI: Core revision 20230628
Apr 30 03:30:21.926634 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Apr 30 03:30:21.926650 kernel: APIC: Switch to symmetric I/O mode setup
Apr 30 03:30:21.926666 kernel: x2apic enabled
Apr 30 03:30:21.926682 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 30 03:30:21.926699 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns
Apr 30 03:30:21.926719 kernel: Calibrating delay loop (skipped) preset value.. 5000.00 BogoMIPS (lpj=2500004)
Apr 30 03:30:21.926735 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Apr 30 03:30:21.926752 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Apr 30 03:30:21.926768 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 30 03:30:21.926784 kernel: Spectre V2 : Mitigation: Retpolines
Apr 30 03:30:21.926799 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Apr 30 03:30:21.926851 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Apr 30 03:30:21.926868 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 30 03:30:21.926884 kernel: RETBleed: Vulnerable
Apr 30 03:30:21.926904 kernel: Speculative Store Bypass: Vulnerable
Apr 30 03:30:21.926919 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 30 03:30:21.926933 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 30 03:30:21.926949 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 30 03:30:21.926964 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 30 03:30:21.926980 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 30 03:30:21.926997 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 30 03:30:21.927013 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Apr 30 03:30:21.927028 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Apr 30 03:30:21.927043 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 30 03:30:21.927058 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 30 03:30:21.927077 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 30 03:30:21.927093 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Apr 30 03:30:21.927108 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 30 03:30:21.927124 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Apr 30 03:30:21.927141 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Apr 30 03:30:21.927157 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Apr 30 03:30:21.927173 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Apr 30 03:30:21.927189 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Apr 30 03:30:21.927205 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Apr 30 03:30:21.927221 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Apr 30 03:30:21.927237 kernel: Freeing SMP alternatives memory: 32K
Apr 30 03:30:21.927253 kernel: pid_max: default: 32768 minimum: 301
Apr 30 03:30:21.927273 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 30 03:30:21.927289 kernel: landlock: Up and running.
Apr 30 03:30:21.927305 kernel: SELinux: Initializing.
Apr 30 03:30:21.927322 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 30 03:30:21.927338 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 30 03:30:21.927355 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Apr 30 03:30:21.927371 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 03:30:21.927388 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 03:30:21.927404 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 03:30:21.927421 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Apr 30 03:30:21.927440 kernel: signal: max sigframe size: 3632
Apr 30 03:30:21.927457 kernel: rcu: Hierarchical SRCU implementation.
Apr 30 03:30:21.927474 kernel: rcu: Max phase no-delay instances is 400.
Apr 30 03:30:21.927490 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 30 03:30:21.927507 kernel: smp: Bringing up secondary CPUs ...
Apr 30 03:30:21.927523 kernel: smpboot: x86: Booting SMP configuration:
Apr 30 03:30:21.927539 kernel: .... node #0, CPUs: #1
Apr 30 03:30:21.927556 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Apr 30 03:30:21.927574 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Apr 30 03:30:21.927593 kernel: smp: Brought up 1 node, 2 CPUs
Apr 30 03:30:21.927610 kernel: smpboot: Max logical packages: 1
Apr 30 03:30:21.927626 kernel: smpboot: Total of 2 processors activated (10000.01 BogoMIPS)
Apr 30 03:30:21.927642 kernel: devtmpfs: initialized
Apr 30 03:30:21.927659 kernel: x86/mm: Memory block size: 128MB
Apr 30 03:30:21.927675 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Apr 30 03:30:21.927692 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 30 03:30:21.927708 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 30 03:30:21.927728 kernel: pinctrl core: initialized pinctrl subsystem
Apr 30 03:30:21.927743 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 30 03:30:21.927760 kernel: audit: initializing netlink subsys (disabled)
Apr 30 03:30:21.927777 kernel: audit: type=2000 audit(1745983822.412:1): state=initialized audit_enabled=0 res=1
Apr 30 03:30:21.927792 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 30 03:30:21.927819 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 30 03:30:21.927836 kernel: cpuidle: using governor menu
Apr 30 03:30:21.927852 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 30 03:30:21.927869 kernel: dca service started, version 1.12.1
Apr 30 03:30:21.927888 kernel: PCI: Using configuration type 1 for base access
Apr 30 03:30:21.927904 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 30 03:30:21.927920 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 30 03:30:21.927937 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 30 03:30:21.927953 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 30 03:30:21.927969 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 30 03:30:21.927985 kernel: ACPI: Added _OSI(Module Device)
Apr 30 03:30:21.928002 kernel: ACPI: Added _OSI(Processor Device)
Apr 30 03:30:21.928019 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 30 03:30:21.928038 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 30 03:30:21.928053 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Apr 30 03:30:21.928069 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 30 03:30:21.928085 kernel: ACPI: Interpreter enabled
Apr 30 03:30:21.928102 kernel: ACPI: PM: (supports S0 S5)
Apr 30 03:30:21.928118 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 30 03:30:21.928135 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 30 03:30:21.928151 kernel: PCI: Using E820 reservations for host bridge windows
Apr 30 03:30:21.928168 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Apr 30 03:30:21.928184 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 30 03:30:21.928413 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Apr 30 03:30:21.928562 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Apr 30 03:30:21.928701 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Apr 30 03:30:21.928722 kernel: acpiphp: Slot [3] registered
Apr 30 03:30:21.928738 kernel: acpiphp: Slot [4] registered
Apr 30 03:30:21.928755 kernel: acpiphp: Slot [5] registered
Apr 30 03:30:21.928772 kernel: acpiphp: Slot [6] registered
Apr 30 03:30:21.928792 kernel: acpiphp: Slot [7] registered
Apr 30 03:30:21.928828 kernel: acpiphp: Slot [8] registered
Apr 30 03:30:21.928840 kernel: acpiphp: Slot [9] registered
Apr 30 03:30:21.928852 kernel: acpiphp: Slot [10] registered
Apr 30 03:30:21.928865 kernel: acpiphp: Slot [11] registered
Apr 30 03:30:21.928878 kernel: acpiphp: Slot [12] registered
Apr 30 03:30:21.928894 kernel: acpiphp: Slot [13] registered
Apr 30 03:30:21.928907 kernel: acpiphp: Slot [14] registered
Apr 30 03:30:21.928922 kernel: acpiphp: Slot [15] registered
Apr 30 03:30:21.928939 kernel: acpiphp: Slot [16] registered
Apr 30 03:30:21.928953 kernel: acpiphp: Slot [17] registered
Apr 30 03:30:21.928968 kernel: acpiphp: Slot [18] registered
Apr 30 03:30:21.928982 kernel: acpiphp: Slot [19] registered
Apr 30 03:30:21.928997 kernel: acpiphp: Slot [20] registered
Apr 30 03:30:21.929014 kernel: acpiphp: Slot [21] registered
Apr 30 03:30:21.929030 kernel: acpiphp: Slot [22] registered
Apr 30 03:30:21.929047 kernel: acpiphp: Slot [23] registered
Apr 30 03:30:21.929063 kernel: acpiphp: Slot [24] registered
Apr 30 03:30:21.929079 kernel: acpiphp: Slot [25] registered
Apr 30 03:30:21.929100 kernel: acpiphp: Slot [26] registered
Apr 30 03:30:21.929117 kernel: acpiphp: Slot [27] registered
Apr 30 03:30:21.929130 kernel: acpiphp: Slot [28] registered
Apr 30 03:30:21.929144 kernel: acpiphp: Slot [29] registered
Apr 30 03:30:21.929157 kernel: acpiphp: Slot [30] registered
Apr 30 03:30:21.929169 kernel: acpiphp: Slot [31] registered
Apr 30 03:30:21.929185 kernel: PCI host bridge to bus 0000:00
Apr 30 03:30:21.929356 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 30 03:30:21.929496 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 30 03:30:21.929616 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 30 03:30:21.929739 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Apr 30 03:30:21.929876 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Apr 30 03:30:21.929996 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 30 03:30:21.930166 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Apr 30 03:30:21.930314 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Apr 30 03:30:21.930472 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Apr 30 03:30:21.930600 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Apr 30 03:30:21.930728 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Apr 30 03:30:21.930867 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Apr 30 03:30:21.930996 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Apr 30 03:30:21.931124 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Apr 30 03:30:21.931249 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Apr 30 03:30:21.931378 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Apr 30 03:30:21.931513 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Apr 30 03:30:21.931648 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Apr 30 03:30:21.931782 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Apr 30 03:30:21.931940 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Apr 30 03:30:21.932079 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 30 03:30:21.932231 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Apr 30 03:30:21.932371 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Apr 30 03:30:21.932514 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Apr 30 03:30:21.932655 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Apr 30 03:30:21.932677 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 30 03:30:21.932695 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 30 03:30:21.932712 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 30 03:30:21.932730 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 30 03:30:21.932751 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Apr 30 03:30:21.932768 kernel: iommu: Default domain type: Translated
Apr 30 03:30:21.932785 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 30 03:30:21.932801 kernel: efivars: Registered efivars operations
Apr 30 03:30:21.932839 kernel: PCI: Using ACPI for IRQ routing
Apr 30 03:30:21.932852 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 30 03:30:21.932867 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Apr 30 03:30:21.932880 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Apr 30 03:30:21.933024 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Apr 30 03:30:21.933165 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Apr 30 03:30:21.933300 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 30 03:30:21.933322 kernel: vgaarb: loaded
Apr 30 03:30:21.933339 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Apr 30 03:30:21.933357 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Apr 30 03:30:21.933373 kernel: clocksource: Switched to clocksource kvm-clock
Apr 30 03:30:21.933390 kernel: VFS: Disk quotas dquot_6.6.0
Apr 30 03:30:21.933408 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 30 03:30:21.933429 kernel: pnp: PnP ACPI init
Apr 30 03:30:21.933446 kernel: pnp: PnP ACPI: found 5 devices
Apr 30 03:30:21.933462 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 30 03:30:21.933479 kernel: NET: Registered PF_INET protocol family
Apr 30 03:30:21.933496 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 30 03:30:21.933513 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Apr 30 03:30:21.933530 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 30 03:30:21.933548 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 30 03:30:21.933565 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Apr 30 03:30:21.933585 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Apr 30 03:30:21.933603 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 30 03:30:21.933620 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 30 03:30:21.933637 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 30 03:30:21.933653 kernel: NET: Registered PF_XDP protocol family
Apr 30 03:30:21.933785 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 30 03:30:21.933934 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 30 03:30:21.934058 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 30 03:30:21.934188 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Apr 30 03:30:21.934313 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Apr 30 03:30:21.934471 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Apr 30 03:30:21.934494 kernel: PCI: CLS 0 bytes, default 64
Apr 30 03:30:21.934512 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 30 03:30:21.934530 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns
Apr 30 03:30:21.934546 kernel: clocksource: Switched to clocksource tsc
Apr 30 03:30:21.934563 kernel: Initialise system trusted keyrings
Apr 30 03:30:21.934580 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Apr 30 03:30:21.934602 kernel: Key type asymmetric registered
Apr 30 03:30:21.934618 kernel: Asymmetric key parser 'x509' registered
Apr 30 03:30:21.934635 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 30 03:30:21.934651 kernel: io scheduler mq-deadline registered
Apr 30 03:30:21.934667 kernel: io scheduler kyber registered
Apr 30 03:30:21.934684 kernel: io scheduler bfq registered
Apr 30 03:30:21.934701 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 30 03:30:21.934719 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 30 03:30:21.934735 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 30 03:30:21.934756 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 30 03:30:21.934773 kernel: i8042: Warning: Keylock active
Apr 30 03:30:21.934789 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 30 03:30:21.934889 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 30 03:30:21.935043 kernel: rtc_cmos 00:00: RTC can wake from S4
Apr 30 03:30:21.935172 kernel: rtc_cmos 00:00: registered as rtc0
Apr 30 03:30:21.935297 kernel: rtc_cmos 00:00: setting system clock to 2025-04-30T03:30:21 UTC (1745983821)
Apr 30 03:30:21.935421 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Apr 30 03:30:21.935446 kernel: intel_pstate: CPU model not supported
Apr 30 03:30:21.935462 kernel: efifb: probing for efifb
Apr 30 03:30:21.935478 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k
Apr 30 03:30:21.935493 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Apr 30 03:30:21.935508 kernel: efifb: scrolling: redraw
Apr 30 03:30:21.935524 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 30 03:30:21.935540 kernel: Console: switching to colour frame buffer device 100x37
Apr 30 03:30:21.935556 kernel: fb0: EFI VGA frame buffer device
Apr 30 03:30:21.935571 kernel: pstore: Using crash dump compression: deflate
Apr 30 03:30:21.935589 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 30 03:30:21.935604 kernel: NET: Registered PF_INET6 protocol family
Apr 30 03:30:21.935619 kernel: Segment Routing with IPv6
Apr 30 03:30:21.935633 kernel: In-situ OAM (IOAM) with IPv6
Apr 30 03:30:21.935646 kernel: NET: Registered PF_PACKET protocol family
Apr 30 03:30:21.935663 kernel: Key type dns_resolver registered
Apr 30 03:30:21.935705 kernel: IPI shorthand broadcast: enabled
Apr 30 03:30:21.935729 kernel: sched_clock: Marking stable (464002020, 123933385)->(669032902, -81097497)
Apr 30 03:30:21.935745 kernel: registered taskstats version 1
Apr 30 03:30:21.935767 kernel: Loading compiled-in X.509 certificates
Apr 30 03:30:21.935785 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 4a2605119c3649b55d5796c3fe312b2581bff37b'
Apr 30 03:30:21.935802 kernel: Key type .fscrypt registered
Apr 30 03:30:21.935880 kernel: Key type fscrypt-provisioning registered
Apr 30 03:30:21.935898 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 30 03:30:21.935915 kernel: ima: Allocated hash algorithm: sha1
Apr 30 03:30:21.935933 kernel: ima: No architecture policies found
Apr 30 03:30:21.935950 kernel: clk: Disabling unused clocks
Apr 30 03:30:21.935972 kernel: Freeing unused kernel image (initmem) memory: 42864K
Apr 30 03:30:21.935990 kernel: Write protecting the kernel read-only data: 36864k
Apr 30 03:30:21.936007 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K
Apr 30 03:30:21.936025 kernel: Run /init as init process
Apr 30 03:30:21.936043 kernel: with arguments:
Apr 30 03:30:21.936059 kernel: /init
Apr 30 03:30:21.936076 kernel: with environment:
Apr 30 03:30:21.936093 kernel: HOME=/
Apr 30 03:30:21.936110 kernel: TERM=linux
Apr 30 03:30:21.936127 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Apr 30 03:30:21.936151 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 03:30:21.936172 systemd[1]: Detected virtualization amazon. Apr 30 03:30:21.936190 systemd[1]: Detected architecture x86-64. Apr 30 03:30:21.936208 systemd[1]: Running in initrd. Apr 30 03:30:21.936225 systemd[1]: No hostname configured, using default hostname. Apr 30 03:30:21.936239 systemd[1]: Hostname set to . Apr 30 03:30:21.936257 systemd[1]: Initializing machine ID from VM UUID. Apr 30 03:30:21.936273 systemd[1]: Queued start job for default target initrd.target. Apr 30 03:30:21.936289 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 03:30:21.936305 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 03:30:21.936320 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 30 03:30:21.936343 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 03:30:21.936366 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 30 03:30:21.936394 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 30 03:30:21.936417 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 30 03:30:21.936435 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 30 03:30:21.936451 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 03:30:21.936469 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 03:30:21.936490 systemd[1]: Reached target paths.target - Path Units. Apr 30 03:30:21.936508 systemd[1]: Reached target slices.target - Slice Units. Apr 30 03:30:21.936526 systemd[1]: Reached target swap.target - Swaps. 
Apr 30 03:30:21.936544 systemd[1]: Reached target timers.target - Timer Units. Apr 30 03:30:21.936562 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 03:30:21.936579 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 03:30:21.936597 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 30 03:30:21.936615 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 30 03:30:21.936632 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 03:30:21.936654 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 03:30:21.936671 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 03:30:21.936689 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 03:30:21.936707 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 30 03:30:21.936724 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 03:30:21.936743 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 30 03:30:21.936760 systemd[1]: Starting systemd-fsck-usr.service... Apr 30 03:30:21.936777 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 03:30:21.936798 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 03:30:21.936837 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:30:21.936889 systemd-journald[178]: Collecting audit messages is disabled. Apr 30 03:30:21.936927 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 30 03:30:21.936949 systemd-journald[178]: Journal started Apr 30 03:30:21.936985 systemd-journald[178]: Runtime Journal (/run/log/journal/ec263d20df0aefce173d97a9b33c4b7d) is 4.7M, max 38.2M, 33.4M free. Apr 30 03:30:21.945391 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Apr 30 03:30:21.945488 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 03:30:21.945699 systemd-modules-load[179]: Inserted module 'overlay' Apr 30 03:30:21.947618 systemd[1]: Finished systemd-fsck-usr.service. Apr 30 03:30:21.962025 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 30 03:30:21.973089 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 03:30:21.976038 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:30:21.992556 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 03:30:21.993629 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 03:30:22.005792 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 30 03:30:22.006829 kernel: Bridge firewalling registered Apr 30 03:30:22.006853 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 30 03:30:21.997666 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 03:30:21.999597 systemd-modules-load[179]: Inserted module 'br_netfilter' Apr 30 03:30:22.006785 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 03:30:22.014240 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 03:30:22.019000 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 03:30:22.035622 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 03:30:22.041024 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Apr 30 03:30:22.042577 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 03:30:22.048783 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 03:30:22.058018 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 03:30:22.062630 dracut-cmdline[209]: dracut-dracut-053 Apr 30 03:30:22.067420 dracut-cmdline[209]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d Apr 30 03:30:22.102979 systemd-resolved[216]: Positive Trust Anchors: Apr 30 03:30:22.102999 systemd-resolved[216]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 03:30:22.103064 systemd-resolved[216]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 03:30:22.108207 systemd-resolved[216]: Defaulting to hostname 'linux'. Apr 30 03:30:22.112770 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 03:30:22.115218 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Apr 30 03:30:22.155843 kernel: SCSI subsystem initialized Apr 30 03:30:22.165833 kernel: Loading iSCSI transport class v2.0-870. Apr 30 03:30:22.176839 kernel: iscsi: registered transport (tcp) Apr 30 03:30:22.199306 kernel: iscsi: registered transport (qla4xxx) Apr 30 03:30:22.199401 kernel: QLogic iSCSI HBA Driver Apr 30 03:30:22.238233 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 30 03:30:22.244067 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 30 03:30:22.271400 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 30 03:30:22.271477 kernel: device-mapper: uevent: version 1.0.3 Apr 30 03:30:22.271499 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 30 03:30:22.313839 kernel: raid6: avx512x4 gen() 15462 MB/s Apr 30 03:30:22.330840 kernel: raid6: avx512x2 gen() 15338 MB/s Apr 30 03:30:22.348828 kernel: raid6: avx512x1 gen() 15430 MB/s Apr 30 03:30:22.365835 kernel: raid6: avx2x4 gen() 15272 MB/s Apr 30 03:30:22.382836 kernel: raid6: avx2x2 gen() 15339 MB/s Apr 30 03:30:22.400034 kernel: raid6: avx2x1 gen() 11707 MB/s Apr 30 03:30:22.400072 kernel: raid6: using algorithm avx512x4 gen() 15462 MB/s Apr 30 03:30:22.419856 kernel: raid6: .... xor() 7520 MB/s, rmw enabled Apr 30 03:30:22.419896 kernel: raid6: using avx512x2 recovery algorithm Apr 30 03:30:22.440836 kernel: xor: automatically using best checksumming function avx Apr 30 03:30:22.601839 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 30 03:30:22.612493 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 30 03:30:22.622063 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 03:30:22.634660 systemd-udevd[397]: Using default interface naming scheme 'v255'. 
Apr 30 03:30:22.639720 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 03:30:22.650345 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 30 03:30:22.666200 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation Apr 30 03:30:22.695687 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 03:30:22.701031 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 03:30:22.751934 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 03:30:22.760313 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 30 03:30:22.787275 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 30 03:30:22.790487 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 03:30:22.792348 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 03:30:22.793523 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 03:30:22.799097 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 30 03:30:22.834626 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 30 03:30:22.856931 kernel: cryptd: max_cpu_qlen set to 1000 Apr 30 03:30:22.871171 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 03:30:22.872199 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 03:30:22.874140 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 03:30:22.875507 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 03:30:22.875664 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:30:22.878930 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Apr 30 03:30:22.889998 kernel: AVX2 version of gcm_enc/dec engaged. Apr 30 03:30:22.890073 kernel: AES CTR mode by8 optimization enabled Apr 30 03:30:22.893298 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 03:30:22.898870 kernel: ena 0000:00:05.0: ENA device version: 0.10 Apr 30 03:30:22.920403 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Apr 30 03:30:22.920586 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Apr 30 03:30:22.920742 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:a0:74:78:95:d9 Apr 30 03:30:22.920912 kernel: nvme nvme0: pci function 0000:00:04.0 Apr 30 03:30:22.918128 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:30:22.925440 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Apr 30 03:30:22.931051 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 03:30:22.939885 kernel: nvme nvme0: 2/0/0 default/read/poll queues Apr 30 03:30:22.953633 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 30 03:30:22.953704 kernel: GPT:9289727 != 16777215 Apr 30 03:30:22.953731 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 30 03:30:22.953749 kernel: GPT:9289727 != 16777215 Apr 30 03:30:22.953767 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 30 03:30:22.954300 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 03:30:22.956736 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 30 03:30:22.960675 (udev-worker)[441]: Network interface NamePolicy= disabled on kernel command line. Apr 30 03:30:23.025874 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (446) Apr 30 03:30:23.066322 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. 
Apr 30 03:30:23.067831 kernel: BTRFS: device fsid 24af5149-14c0-4f50-b6d3-2f5c9259df26 devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (448) Apr 30 03:30:23.081697 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Apr 30 03:30:23.098211 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Apr 30 03:30:23.109069 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Apr 30 03:30:23.109603 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Apr 30 03:30:23.131039 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 30 03:30:23.137260 disk-uuid[625]: Primary Header is updated. Apr 30 03:30:23.137260 disk-uuid[625]: Secondary Entries is updated. Apr 30 03:30:23.137260 disk-uuid[625]: Secondary Header is updated. Apr 30 03:30:23.144840 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 30 03:30:23.149860 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 30 03:30:23.152859 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 30 03:30:24.163106 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 30 03:30:24.163184 disk-uuid[626]: The operation has completed successfully. Apr 30 03:30:24.273040 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 30 03:30:24.273145 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 30 03:30:24.290248 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 30 03:30:24.302862 sh[969]: Success Apr 30 03:30:24.323839 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Apr 30 03:30:24.414920 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 30 03:30:24.423919 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Apr 30 03:30:24.425087 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 30 03:30:24.451618 kernel: BTRFS info (device dm-0): first mount of filesystem 24af5149-14c0-4f50-b6d3-2f5c9259df26 Apr 30 03:30:24.451680 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:30:24.451703 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 30 03:30:24.454774 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 30 03:30:24.454851 kernel: BTRFS info (device dm-0): using free space tree Apr 30 03:30:24.554829 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 30 03:30:24.566800 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 30 03:30:24.567850 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 30 03:30:24.571981 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 30 03:30:24.574943 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 30 03:30:24.596073 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:30:24.596138 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:30:24.596157 kernel: BTRFS info (device nvme0n1p6): using free space tree Apr 30 03:30:24.603025 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Apr 30 03:30:24.611450 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 30 03:30:24.614241 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:30:24.619374 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 30 03:30:24.625996 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Apr 30 03:30:24.662532 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 03:30:24.668049 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 03:30:24.691264 systemd-networkd[1161]: lo: Link UP Apr 30 03:30:24.691275 systemd-networkd[1161]: lo: Gained carrier Apr 30 03:30:24.692947 systemd-networkd[1161]: Enumeration completed Apr 30 03:30:24.693390 systemd-networkd[1161]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:30:24.693395 systemd-networkd[1161]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 03:30:24.694687 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 03:30:24.696767 systemd[1]: Reached target network.target - Network. Apr 30 03:30:24.697746 systemd-networkd[1161]: eth0: Link UP Apr 30 03:30:24.697754 systemd-networkd[1161]: eth0: Gained carrier Apr 30 03:30:24.697768 systemd-networkd[1161]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:30:24.712903 systemd-networkd[1161]: eth0: DHCPv4 address 172.31.22.79/20, gateway 172.31.16.1 acquired from 172.31.16.1 Apr 30 03:30:25.006548 ignition[1104]: Ignition 2.19.0 Apr 30 03:30:25.006564 ignition[1104]: Stage: fetch-offline Apr 30 03:30:25.006848 ignition[1104]: no configs at "/usr/lib/ignition/base.d" Apr 30 03:30:25.006863 ignition[1104]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 30 03:30:25.008938 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 03:30:25.007298 ignition[1104]: Ignition finished successfully Apr 30 03:30:25.017014 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Apr 30 03:30:25.032106 ignition[1169]: Ignition 2.19.0 Apr 30 03:30:25.032120 ignition[1169]: Stage: fetch Apr 30 03:30:25.032605 ignition[1169]: no configs at "/usr/lib/ignition/base.d" Apr 30 03:30:25.032621 ignition[1169]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 30 03:30:25.032738 ignition[1169]: PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 30 03:30:25.042538 ignition[1169]: PUT result: OK Apr 30 03:30:25.044029 ignition[1169]: parsed url from cmdline: "" Apr 30 03:30:25.044040 ignition[1169]: no config URL provided Apr 30 03:30:25.044048 ignition[1169]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 03:30:25.044060 ignition[1169]: no config at "/usr/lib/ignition/user.ign" Apr 30 03:30:25.044079 ignition[1169]: PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 30 03:30:25.044588 ignition[1169]: PUT result: OK Apr 30 03:30:25.044658 ignition[1169]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Apr 30 03:30:25.045194 ignition[1169]: GET result: OK Apr 30 03:30:25.045257 ignition[1169]: parsing config with SHA512: f74e97912d45baa5d69108f8b41e077f3d23cd03d134afa0277c77c93897edba669e1dd8876d6df5b08d8833adf436f2c8ed3c9204a4b7652c8bf6842f77e363 Apr 30 03:30:25.049479 unknown[1169]: fetched base config from "system" Apr 30 03:30:25.049627 unknown[1169]: fetched base config from "system" Apr 30 03:30:25.049634 unknown[1169]: fetched user config from "aws" Apr 30 03:30:25.050098 ignition[1169]: fetch: fetch complete Apr 30 03:30:25.050104 ignition[1169]: fetch: fetch passed Apr 30 03:30:25.050156 ignition[1169]: Ignition finished successfully Apr 30 03:30:25.051657 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 30 03:30:25.058119 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Apr 30 03:30:25.072433 ignition[1175]: Ignition 2.19.0 Apr 30 03:30:25.072445 ignition[1175]: Stage: kargs Apr 30 03:30:25.072796 ignition[1175]: no configs at "/usr/lib/ignition/base.d" Apr 30 03:30:25.072820 ignition[1175]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 30 03:30:25.072903 ignition[1175]: PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 30 03:30:25.073753 ignition[1175]: PUT result: OK Apr 30 03:30:25.075970 ignition[1175]: kargs: kargs passed Apr 30 03:30:25.076028 ignition[1175]: Ignition finished successfully Apr 30 03:30:25.077165 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 30 03:30:25.082054 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 30 03:30:25.097190 ignition[1181]: Ignition 2.19.0 Apr 30 03:30:25.097202 ignition[1181]: Stage: disks Apr 30 03:30:25.097562 ignition[1181]: no configs at "/usr/lib/ignition/base.d" Apr 30 03:30:25.097572 ignition[1181]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 30 03:30:25.097668 ignition[1181]: PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 30 03:30:25.098748 ignition[1181]: PUT result: OK Apr 30 03:30:25.101797 ignition[1181]: disks: disks passed Apr 30 03:30:25.101868 ignition[1181]: Ignition finished successfully Apr 30 03:30:25.103640 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 30 03:30:25.104414 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 30 03:30:25.104731 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 30 03:30:25.105243 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 03:30:25.105722 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 03:30:25.106235 systemd[1]: Reached target basic.target - Basic System. Apr 30 03:30:25.110996 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Apr 30 03:30:25.146525 systemd-fsck[1189]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 30 03:30:25.149135 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 30 03:30:25.156952 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 30 03:30:25.260823 kernel: EXT4-fs (nvme0n1p9): mounted filesystem c246962b-d3a7-4703-a2cb-a633fbca1b76 r/w with ordered data mode. Quota mode: none. Apr 30 03:30:25.261127 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 30 03:30:25.262053 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 30 03:30:25.272938 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 03:30:25.275882 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 30 03:30:25.277111 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 30 03:30:25.277543 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 30 03:30:25.277569 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 03:30:25.287592 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 30 03:30:25.291850 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1208) Apr 30 03:30:25.293966 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 30 03:30:25.299088 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:30:25.299114 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:30:25.299127 kernel: BTRFS info (device nvme0n1p6): using free space tree Apr 30 03:30:25.307844 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Apr 30 03:30:25.309070 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 30 03:30:25.598290 initrd-setup-root[1232]: cut: /sysroot/etc/passwd: No such file or directory Apr 30 03:30:25.614258 initrd-setup-root[1239]: cut: /sysroot/etc/group: No such file or directory Apr 30 03:30:25.618975 initrd-setup-root[1246]: cut: /sysroot/etc/shadow: No such file or directory Apr 30 03:30:25.623341 initrd-setup-root[1253]: cut: /sysroot/etc/gshadow: No such file or directory Apr 30 03:30:25.839717 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 30 03:30:25.851997 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 30 03:30:25.857121 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 30 03:30:25.863892 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 30 03:30:25.865085 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:30:25.900187 ignition[1321]: INFO : Ignition 2.19.0 Apr 30 03:30:25.901165 ignition[1321]: INFO : Stage: mount Apr 30 03:30:25.902702 ignition[1321]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 03:30:25.902702 ignition[1321]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 30 03:30:25.902702 ignition[1321]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 30 03:30:25.905054 ignition[1321]: INFO : PUT result: OK Apr 30 03:30:25.908195 ignition[1321]: INFO : mount: mount passed Apr 30 03:30:25.908997 ignition[1321]: INFO : Ignition finished successfully Apr 30 03:30:25.910423 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 30 03:30:25.911376 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 30 03:30:25.916042 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 30 03:30:25.933177 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Apr 30 03:30:25.950843 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1332) Apr 30 03:30:25.954655 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5 Apr 30 03:30:25.954720 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Apr 30 03:30:25.954734 kernel: BTRFS info (device nvme0n1p6): using free space tree Apr 30 03:30:25.960864 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Apr 30 03:30:25.963295 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 30 03:30:25.983840 ignition[1348]: INFO : Ignition 2.19.0 Apr 30 03:30:25.983840 ignition[1348]: INFO : Stage: files Apr 30 03:30:25.983840 ignition[1348]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 03:30:25.983840 ignition[1348]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 30 03:30:25.983840 ignition[1348]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 30 03:30:25.986597 ignition[1348]: INFO : PUT result: OK Apr 30 03:30:25.985113 systemd-networkd[1161]: eth0: Gained IPv6LL Apr 30 03:30:25.989188 ignition[1348]: DEBUG : files: compiled without relabeling support, skipping Apr 30 03:30:26.001357 ignition[1348]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 30 03:30:26.001357 ignition[1348]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 30 03:30:26.038002 ignition[1348]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 30 03:30:26.038824 ignition[1348]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 30 03:30:26.038824 ignition[1348]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 30 03:30:26.038619 unknown[1348]: wrote ssh authorized keys file for user: core Apr 30 03:30:26.041585 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: 
op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Apr 30 03:30:26.041585 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Apr 30 03:30:26.148030 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 30 03:30:26.472751 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Apr 30 03:30:26.472751 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 30 03:30:26.474794 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Apr 30 03:30:26.871847 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 30 03:30:27.105609 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 30 03:30:27.105609 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 30 03:30:27.107960 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 30 03:30:27.107960 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 03:30:27.107960 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 03:30:27.107960 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 03:30:27.107960 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 03:30:27.107960 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 03:30:27.107960 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 03:30:27.107960 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 03:30:27.107960 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 03:30:27.107960 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Apr 30 03:30:27.107960 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Apr 30 03:30:27.107960 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Apr 30 03:30:27.107960 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Apr 30 03:30:27.389259 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 30 03:30:27.694156 ignition[1348]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Apr 30 03:30:27.694156 ignition[1348]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Apr 30 03:30:27.695891 ignition[1348]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 03:30:27.695891 ignition[1348]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 03:30:27.695891 ignition[1348]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Apr 30 03:30:27.695891 ignition[1348]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Apr 30 03:30:27.695891 ignition[1348]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Apr 30 03:30:27.695891 ignition[1348]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 03:30:27.695891 ignition[1348]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 03:30:27.701120 ignition[1348]: INFO : files: files passed
Apr 30 03:30:27.701120 ignition[1348]: INFO : Ignition finished successfully
Apr 30 03:30:27.697178 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 30 03:30:27.703075 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 30 03:30:27.706168 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 30 03:30:27.708679 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 30 03:30:27.709224 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 30 03:30:27.718647 initrd-setup-root-after-ignition[1378]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 03:30:27.718647 initrd-setup-root-after-ignition[1378]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 03:30:27.721469 initrd-setup-root-after-ignition[1382]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 03:30:27.723246 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 03:30:27.723867 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 30 03:30:27.726989 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 30 03:30:27.760971 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 30 03:30:27.761143 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 30 03:30:27.762375 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 30 03:30:27.763540 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 30 03:30:27.764444 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 30 03:30:27.770027 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 30 03:30:27.783981 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 03:30:27.788998 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 30 03:30:27.802555 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 30 03:30:27.803275 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 03:30:27.804261 systemd[1]: Stopped target timers.target - Timer Units.
Apr 30 03:30:27.805112 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 30 03:30:27.805292 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 03:30:27.806485 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 30 03:30:27.807348 systemd[1]: Stopped target basic.target - Basic System.
Apr 30 03:30:27.808131 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 30 03:30:27.808901 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 03:30:27.809642 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 30 03:30:27.810537 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 30 03:30:27.811251 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 03:30:27.812025 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 30 03:30:27.813147 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 30 03:30:27.813893 systemd[1]: Stopped target swap.target - Swaps.
Apr 30 03:30:27.814688 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 30 03:30:27.814886 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 03:30:27.815943 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 30 03:30:27.816715 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 03:30:27.817391 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 30 03:30:27.817532 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 03:30:27.818175 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 30 03:30:27.818404 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 30 03:30:27.819771 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 30 03:30:27.819969 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 03:30:27.820664 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 30 03:30:27.820835 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 30 03:30:27.829165 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 30 03:30:27.829819 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 30 03:30:27.830015 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 03:30:27.836310 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 30 03:30:27.838802 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 30 03:30:27.839587 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 03:30:27.845183 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 30 03:30:27.845448 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 03:30:27.850169 ignition[1402]: INFO : Ignition 2.19.0
Apr 30 03:30:27.852533 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 30 03:30:27.852668 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 30 03:30:27.856447 ignition[1402]: INFO : Stage: umount
Apr 30 03:30:27.856447 ignition[1402]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 03:30:27.856447 ignition[1402]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 30 03:30:27.856447 ignition[1402]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 30 03:30:27.860149 ignition[1402]: INFO : PUT result: OK
Apr 30 03:30:27.863102 ignition[1402]: INFO : umount: umount passed
Apr 30 03:30:27.864957 ignition[1402]: INFO : Ignition finished successfully
Apr 30 03:30:27.865925 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 30 03:30:27.866052 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 30 03:30:27.867232 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 30 03:30:27.867338 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 30 03:30:27.870045 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 30 03:30:27.870633 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 30 03:30:27.871671 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 30 03:30:27.871731 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 30 03:30:27.873269 systemd[1]: Stopped target network.target - Network.
Apr 30 03:30:27.873713 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 30 03:30:27.873781 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 03:30:27.874952 systemd[1]: Stopped target paths.target - Path Units.
Apr 30 03:30:27.875454 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 30 03:30:27.879892 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 03:30:27.880488 systemd[1]: Stopped target slices.target - Slice Units.
Apr 30 03:30:27.880956 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 30 03:30:27.882013 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 30 03:30:27.882074 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 03:30:27.882772 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 30 03:30:27.882844 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 03:30:27.883383 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 30 03:30:27.883449 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 30 03:30:27.884014 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 30 03:30:27.884073 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 30 03:30:27.884801 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 30 03:30:27.886413 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 30 03:30:27.888579 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 30 03:30:27.889165 systemd-networkd[1161]: eth0: DHCPv6 lease lost
Apr 30 03:30:27.889423 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 30 03:30:27.889536 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 30 03:30:27.891195 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 30 03:30:27.891307 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 30 03:30:27.892503 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 30 03:30:27.892643 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 30 03:30:27.895358 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 30 03:30:27.895423 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 03:30:27.901123 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 30 03:30:27.901608 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 30 03:30:27.901694 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 03:30:27.902586 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 03:30:27.907336 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 30 03:30:27.907494 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 30 03:30:27.916461 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 30 03:30:27.916687 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 03:30:27.919674 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 30 03:30:27.919774 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 30 03:30:27.921358 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 30 03:30:27.921399 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 03:30:27.921716 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 30 03:30:27.921761 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 03:30:27.922848 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 30 03:30:27.922908 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 30 03:30:27.923933 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 03:30:27.923996 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:30:27.932043 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 30 03:30:27.932676 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 30 03:30:27.932762 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 30 03:30:27.933472 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 30 03:30:27.933535 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 30 03:30:27.937329 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 30 03:30:27.937400 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 03:30:27.938014 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 30 03:30:27.938074 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 03:30:27.940275 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 03:30:27.940335 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:30:27.941263 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 30 03:30:27.941389 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 30 03:30:27.942784 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 30 03:30:27.942923 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 30 03:30:27.944249 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 30 03:30:27.961073 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 30 03:30:27.969108 systemd[1]: Switching root.
Apr 30 03:30:27.997632 systemd-journald[178]: Journal stopped
Apr 30 03:30:29.785196 systemd-journald[178]: Received SIGTERM from PID 1 (systemd).
Apr 30 03:30:29.785281 kernel: SELinux: policy capability network_peer_controls=1
Apr 30 03:30:29.785308 kernel: SELinux: policy capability open_perms=1
Apr 30 03:30:29.785327 kernel: SELinux: policy capability extended_socket_class=1
Apr 30 03:30:29.785352 kernel: SELinux: policy capability always_check_network=0
Apr 30 03:30:29.785375 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 30 03:30:29.785394 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 30 03:30:29.785411 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 30 03:30:29.785434 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 30 03:30:29.785456 kernel: audit: type=1403 audit(1745983828.542:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 30 03:30:29.785475 systemd[1]: Successfully loaded SELinux policy in 71.821ms.
Apr 30 03:30:29.785503 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.523ms.
Apr 30 03:30:29.785524 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 03:30:29.785547 systemd[1]: Detected virtualization amazon.
Apr 30 03:30:29.785567 systemd[1]: Detected architecture x86-64.
Apr 30 03:30:29.785586 systemd[1]: Detected first boot.
Apr 30 03:30:29.785605 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 03:30:29.785624 zram_generator::config[1445]: No configuration found.
Apr 30 03:30:29.785644 systemd[1]: Populated /etc with preset unit settings.
Apr 30 03:30:29.785663 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 30 03:30:29.785682 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 30 03:30:29.785704 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 30 03:30:29.785724 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 30 03:30:29.785742 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 30 03:30:29.785761 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 30 03:30:29.785780 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 30 03:30:29.785798 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 30 03:30:29.787098 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 30 03:30:29.787128 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 30 03:30:29.787157 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 30 03:30:29.787178 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 03:30:29.787200 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 03:30:29.787218 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 30 03:30:29.787238 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 30 03:30:29.787257 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 30 03:30:29.787276 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 03:30:29.787294 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 30 03:30:29.787318 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 03:30:29.787341 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 30 03:30:29.787360 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 30 03:30:29.787378 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 30 03:30:29.787396 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 30 03:30:29.787415 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 03:30:29.787435 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 03:30:29.787454 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 03:30:29.787475 systemd[1]: Reached target swap.target - Swaps.
Apr 30 03:30:29.787498 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 30 03:30:29.787521 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 30 03:30:29.787540 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 03:30:29.787562 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 03:30:29.787583 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 03:30:29.787604 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 30 03:30:29.787627 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 30 03:30:29.787646 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 30 03:30:29.787665 systemd[1]: Mounting media.mount - External Media Directory...
Apr 30 03:30:29.787689 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:30:29.787708 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 30 03:30:29.787729 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 30 03:30:29.787749 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 30 03:30:29.787770 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 30 03:30:29.787792 systemd[1]: Reached target machines.target - Containers.
Apr 30 03:30:29.787843 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 30 03:30:29.787864 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 03:30:29.787890 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 03:30:29.787912 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 30 03:30:29.787933 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 03:30:29.787955 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 03:30:29.787976 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 03:30:29.787996 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 30 03:30:29.788017 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 03:30:29.788038 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 30 03:30:29.788059 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 30 03:30:29.788084 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 30 03:30:29.788105 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 30 03:30:29.788126 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 30 03:30:29.788147 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 03:30:29.788169 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 03:30:29.788190 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 30 03:30:29.788211 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 30 03:30:29.788232 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 03:30:29.788253 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 30 03:30:29.788279 systemd[1]: Stopped verity-setup.service.
Apr 30 03:30:29.788300 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:30:29.788321 kernel: loop: module loaded
Apr 30 03:30:29.788343 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 30 03:30:29.788364 kernel: fuse: init (API version 7.39)
Apr 30 03:30:29.788384 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 30 03:30:29.788405 systemd[1]: Mounted media.mount - External Media Directory.
Apr 30 03:30:29.788426 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 30 03:30:29.788448 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 30 03:30:29.788473 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 30 03:30:29.788495 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 03:30:29.788516 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 30 03:30:29.788537 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 30 03:30:29.788563 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 03:30:29.788584 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 03:30:29.788605 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 03:30:29.788627 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 03:30:29.788648 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 30 03:30:29.788670 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 30 03:30:29.788690 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 03:30:29.788726 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 03:30:29.788751 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 03:30:29.788820 systemd-journald[1523]: Collecting audit messages is disabled.
Apr 30 03:30:29.788857 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 30 03:30:29.788877 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 30 03:30:29.788896 systemd-journald[1523]: Journal started
Apr 30 03:30:29.788937 systemd-journald[1523]: Runtime Journal (/run/log/journal/ec263d20df0aefce173d97a9b33c4b7d) is 4.7M, max 38.2M, 33.4M free.
Apr 30 03:30:29.441645 systemd[1]: Queued start job for default target multi-user.target.
Apr 30 03:30:29.482722 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Apr 30 03:30:29.483151 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 30 03:30:29.797853 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 03:30:29.823062 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 30 03:30:29.866637 kernel: ACPI: bus type drm_connector registered
Apr 30 03:30:29.858215 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 30 03:30:29.869956 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 30 03:30:29.870691 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 30 03:30:29.870746 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 03:30:29.873128 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 30 03:30:29.880980 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 30 03:30:29.884046 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 30 03:30:29.886043 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 03:30:29.894060 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 30 03:30:29.897999 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 30 03:30:29.898781 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 03:30:29.905135 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 30 03:30:29.905913 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 03:30:29.909049 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 03:30:29.913637 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 30 03:30:29.918115 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 03:30:29.918313 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 30 03:30:29.920545 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 30 03:30:29.922988 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 30 03:30:29.928135 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 30 03:30:29.944058 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 30 03:30:29.945878 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 30 03:30:29.973506 systemd-journald[1523]: Time spent on flushing to /var/log/journal/ec263d20df0aefce173d97a9b33c4b7d is 68.545ms for 985 entries.
Apr 30 03:30:29.973506 systemd-journald[1523]: System Journal (/var/log/journal/ec263d20df0aefce173d97a9b33c4b7d) is 8.0M, max 195.6M, 187.6M free.
Apr 30 03:30:30.062514 systemd-journald[1523]: Received client request to flush runtime journal.
Apr 30 03:30:30.062578 kernel: loop0: detected capacity change from 0 to 210664
Apr 30 03:30:29.976117 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 03:30:29.982226 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 30 03:30:29.986783 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 30 03:30:29.993368 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 30 03:30:30.006042 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 30 03:30:30.008319 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 03:30:30.020379 udevadm[1582]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 30 03:30:30.065257 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 30 03:30:30.082522 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 30 03:30:30.083395 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 30 03:30:30.106792 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 30 03:30:30.113678 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 30 03:30:30.119044 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 03:30:30.160284 kernel: loop1: detected capacity change from 0 to 142488
Apr 30 03:30:30.169884 systemd-tmpfiles[1594]: ACLs are not supported, ignoring.
Apr 30 03:30:30.169913 systemd-tmpfiles[1594]: ACLs are not supported, ignoring.
Apr 30 03:30:30.181700 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 03:30:30.278253 kernel: loop2: detected capacity change from 0 to 61336
Apr 30 03:30:30.378837 kernel: loop3: detected capacity change from 0 to 140768
Apr 30 03:30:30.485856 kernel: loop4: detected capacity change from 0 to 210664
Apr 30 03:30:30.521381 kernel: loop5: detected capacity change from 0 to 142488
Apr 30 03:30:30.553832 kernel: loop6: detected capacity change from 0 to 61336
Apr 30 03:30:30.568837 kernel: loop7: detected capacity change from 0 to 140768
Apr 30 03:30:30.592548 (sd-merge)[1600]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Apr 30 03:30:30.595232 (sd-merge)[1600]: Merged extensions into '/usr'.
Apr 30 03:30:30.602226 systemd[1]: Reloading requested from client PID 1572 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 30 03:30:30.602463 systemd[1]: Reloading...
Apr 30 03:30:30.729839 zram_generator::config[1626]: No configuration found.
Apr 30 03:30:30.936781 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 03:30:31.018176 systemd[1]: Reloading finished in 414 ms.
Apr 30 03:30:31.048647 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 30 03:30:31.049509 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 30 03:30:31.058138 systemd[1]: Starting ensure-sysext.service...
Apr 30 03:30:31.059997 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 03:30:31.061971 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 03:30:31.083544 systemd[1]: Reloading requested from client PID 1678 ('systemctl') (unit ensure-sysext.service)...
Apr 30 03:30:31.083565 systemd[1]: Reloading...
Apr 30 03:30:31.103588 systemd-tmpfiles[1680]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 30 03:30:31.105217 systemd-udevd[1681]: Using default interface naming scheme 'v255'.
Apr 30 03:30:31.105767 systemd-tmpfiles[1680]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 30 03:30:31.106738 systemd-tmpfiles[1680]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 30 03:30:31.107041 systemd-tmpfiles[1680]: ACLs are not supported, ignoring.
Apr 30 03:30:31.107108 systemd-tmpfiles[1680]: ACLs are not supported, ignoring.
Apr 30 03:30:31.112394 systemd-tmpfiles[1680]: Detected autofs mount point /boot during canonicalization of boot.
Apr 30 03:30:31.112405 systemd-tmpfiles[1680]: Skipping /boot
Apr 30 03:30:31.121955 systemd-tmpfiles[1680]: Detected autofs mount point /boot during canonicalization of boot.
Apr 30 03:30:31.121967 systemd-tmpfiles[1680]: Skipping /boot
Apr 30 03:30:31.179866 zram_generator::config[1709]: No configuration found.
Apr 30 03:30:31.194894 ldconfig[1567]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 30 03:30:31.272711 (udev-worker)[1731]: Network interface NamePolicy= disabled on kernel command line.
Apr 30 03:30:31.322358 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Apr 30 03:30:31.334902 kernel: ACPI: button: Power Button [PWRF]
Apr 30 03:30:31.341866 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Apr 30 03:30:31.351872 kernel: ACPI: button: Sleep Button [SLPF]
Apr 30 03:30:31.352548 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 03:30:31.361658 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Apr 30 03:30:31.387071 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5
Apr 30 03:30:31.403845 kernel: mousedev: PS/2 mouse device common for all mice
Apr 30 03:30:31.416829 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1725)
Apr 30 03:30:31.453234 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 30 03:30:31.453582 systemd[1]: Reloading finished in 369 ms.
Apr 30 03:30:31.473081 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 03:30:31.474459 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 30 03:30:31.480868 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 03:30:31.549466 systemd[1]: Finished ensure-sysext.service.
Apr 30 03:30:31.551967 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:30:31.557111 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 30 03:30:31.559966 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 30 03:30:31.562440 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 03:30:31.564041 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 03:30:31.565371 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 03:30:31.567970 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 03:30:31.571969 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 03:30:31.572516 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 03:30:31.573936 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 30 03:30:31.581110 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 03:30:31.586010 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 03:30:31.586450 systemd[1]: Reached target time-set.target - System Time Set.
Apr 30 03:30:31.595018 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 30 03:30:31.597961 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:30:31.598416 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:30:31.599880 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 30 03:30:31.600558 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 03:30:31.600926 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 30 03:30:31.602191 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 03:30:31.602351 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 03:30:31.607969 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 30 03:30:31.620021 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 30 03:30:31.627333 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 30 03:30:31.627985 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 03:30:31.634556 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 30 03:30:31.636120 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 03:30:31.636667 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 03:30:31.638156 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 03:30:31.638303 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 03:30:31.645169 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 03:30:31.654408 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 30 03:30:31.661034 lvm[1894]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 03:30:31.673203 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 30 03:30:31.680093 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 30 03:30:31.688202 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 30 03:30:31.689383 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 30 03:30:31.691632 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 03:30:31.700975 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 30 03:30:31.708796 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 30 03:30:31.714044 augenrules[1916]: No rules Apr 30 03:30:31.714857 lvm[1913]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 03:30:31.715934 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 30 03:30:31.721646 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Apr 30 03:30:31.722295 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 30 03:30:31.734950 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 30 03:30:31.741576 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 30 03:30:31.765019 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 03:30:31.794883 systemd-networkd[1886]: lo: Link UP Apr 30 03:30:31.795183 systemd-networkd[1886]: lo: Gained carrier Apr 30 03:30:31.796762 systemd-networkd[1886]: Enumeration completed Apr 30 03:30:31.798829 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 03:30:31.799296 systemd-networkd[1886]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:30:31.799310 systemd-networkd[1886]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 03:30:31.804499 systemd-networkd[1886]: eth0: Link UP Apr 30 03:30:31.804776 systemd-resolved[1887]: Positive Trust Anchors: Apr 30 03:30:31.804792 systemd-resolved[1887]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 03:30:31.804848 systemd-resolved[1887]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 03:30:31.805003 systemd-networkd[1886]: eth0: Gained carrier Apr 30 03:30:31.805025 systemd-networkd[1886]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 03:30:31.807041 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 30 03:30:31.810691 systemd-resolved[1887]: Defaulting to hostname 'linux'. Apr 30 03:30:31.812552 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 03:30:31.813241 systemd[1]: Reached target network.target - Network. Apr 30 03:30:31.813767 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 03:30:31.814199 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 03:30:31.814639 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 30 03:30:31.815226 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 30 03:30:31.815797 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 30 03:30:31.816417 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
Apr 30 03:30:31.816869 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 30 03:30:31.816928 systemd-networkd[1886]: eth0: DHCPv4 address 172.31.22.79/20, gateway 172.31.16.1 acquired from 172.31.16.1 Apr 30 03:30:31.817966 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 30 03:30:31.818011 systemd[1]: Reached target paths.target - Path Units. Apr 30 03:30:31.818521 systemd[1]: Reached target timers.target - Timer Units. Apr 30 03:30:31.820309 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 30 03:30:31.822137 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 30 03:30:31.831263 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 30 03:30:31.832442 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 30 03:30:31.833033 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 03:30:31.833458 systemd[1]: Reached target basic.target - Basic System. Apr 30 03:30:31.833896 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 30 03:30:31.833932 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 30 03:30:31.835116 systemd[1]: Starting containerd.service - containerd container runtime... Apr 30 03:30:31.839003 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 30 03:30:31.844880 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 30 03:30:31.850404 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 30 03:30:31.855024 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Apr 30 03:30:31.856989 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 30 03:30:31.859452 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 30 03:30:31.864160 systemd[1]: Started ntpd.service - Network Time Service. Apr 30 03:30:31.868975 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 30 03:30:31.873542 systemd[1]: Starting setup-oem.service - Setup OEM... Apr 30 03:30:31.878054 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 30 03:30:31.893038 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 30 03:30:31.903079 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 30 03:30:31.905386 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 30 03:30:31.907110 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 30 03:30:31.913014 systemd[1]: Starting update-engine.service - Update Engine... Apr 30 03:30:31.929726 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 30 03:30:31.954622 jq[1940]: false Apr 30 03:30:31.957308 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 30 03:30:31.957577 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 30 03:30:31.968694 jq[1952]: true Apr 30 03:30:31.996333 update_engine[1951]: I20250430 03:30:31.996232 1951 main.cc:92] Flatcar Update Engine starting Apr 30 03:30:32.011407 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Apr 30 03:30:32.011674 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 30 03:30:32.020912 jq[1962]: true Apr 30 03:30:32.025789 ntpd[1943]: ntpd 4.2.8p17@1.4004-o Tue Apr 29 22:12:23 UTC 2025 (1): Starting Apr 30 03:30:32.030216 ntpd[1943]: 30 Apr 03:30:32 ntpd[1943]: ntpd 4.2.8p17@1.4004-o Tue Apr 29 22:12:23 UTC 2025 (1): Starting Apr 30 03:30:32.030216 ntpd[1943]: 30 Apr 03:30:32 ntpd[1943]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Apr 30 03:30:32.030216 ntpd[1943]: 30 Apr 03:30:32 ntpd[1943]: ---------------------------------------------------- Apr 30 03:30:32.030216 ntpd[1943]: 30 Apr 03:30:32 ntpd[1943]: ntp-4 is maintained by Network Time Foundation, Apr 30 03:30:32.030216 ntpd[1943]: 30 Apr 03:30:32 ntpd[1943]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Apr 30 03:30:32.030216 ntpd[1943]: 30 Apr 03:30:32 ntpd[1943]: corporation. Support and training for ntp-4 are Apr 30 03:30:32.030216 ntpd[1943]: 30 Apr 03:30:32 ntpd[1943]: available at https://www.nwtime.org/support Apr 30 03:30:32.030216 ntpd[1943]: 30 Apr 03:30:32 ntpd[1943]: ---------------------------------------------------- Apr 30 03:30:32.025845 ntpd[1943]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Apr 30 03:30:32.025856 ntpd[1943]: ---------------------------------------------------- Apr 30 03:30:32.025864 ntpd[1943]: ntp-4 is maintained by Network Time Foundation, Apr 30 03:30:32.025874 ntpd[1943]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Apr 30 03:30:32.025883 ntpd[1943]: corporation. 
Support and training for ntp-4 are Apr 30 03:30:32.025891 ntpd[1943]: available at https://www.nwtime.org/support Apr 30 03:30:32.025900 ntpd[1943]: ---------------------------------------------------- Apr 30 03:30:32.041669 (ntainerd)[1971]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 30 03:30:32.058590 ntpd[1943]: proto: precision = 0.063 usec (-24) Apr 30 03:30:32.058829 systemd[1]: motdgen.service: Deactivated successfully. Apr 30 03:30:32.062068 ntpd[1943]: 30 Apr 03:30:32 ntpd[1943]: proto: precision = 0.063 usec (-24) Apr 30 03:30:32.062068 ntpd[1943]: 30 Apr 03:30:32 ntpd[1943]: basedate set to 2025-04-17 Apr 30 03:30:32.062068 ntpd[1943]: 30 Apr 03:30:32 ntpd[1943]: gps base set to 2025-04-20 (week 2363) Apr 30 03:30:32.060125 ntpd[1943]: basedate set to 2025-04-17 Apr 30 03:30:32.059141 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 30 03:30:32.060145 ntpd[1943]: gps base set to 2025-04-20 (week 2363) Apr 30 03:30:32.065574 extend-filesystems[1941]: Found loop4 Apr 30 03:30:32.067797 extend-filesystems[1941]: Found loop5 Apr 30 03:30:32.067797 extend-filesystems[1941]: Found loop6 Apr 30 03:30:32.067797 extend-filesystems[1941]: Found loop7 Apr 30 03:30:32.067797 extend-filesystems[1941]: Found nvme0n1 Apr 30 03:30:32.067797 extend-filesystems[1941]: Found nvme0n1p1 Apr 30 03:30:32.067797 extend-filesystems[1941]: Found nvme0n1p2 Apr 30 03:30:32.067797 extend-filesystems[1941]: Found nvme0n1p3 Apr 30 03:30:32.067797 extend-filesystems[1941]: Found usr Apr 30 03:30:32.067797 extend-filesystems[1941]: Found nvme0n1p4 Apr 30 03:30:32.067797 extend-filesystems[1941]: Found nvme0n1p6 Apr 30 03:30:32.067797 extend-filesystems[1941]: Found nvme0n1p7 Apr 30 03:30:32.067797 extend-filesystems[1941]: Found nvme0n1p9 Apr 30 03:30:32.067797 extend-filesystems[1941]: Checking size of /dev/nvme0n1p9 Apr 30 03:30:32.069358 ntpd[1943]: Listen and drop on 0 
v6wildcard [::]:123 Apr 30 03:30:32.082355 tar[1966]: linux-amd64/helm Apr 30 03:30:32.082608 ntpd[1943]: 30 Apr 03:30:32 ntpd[1943]: Listen and drop on 0 v6wildcard [::]:123 Apr 30 03:30:32.082608 ntpd[1943]: 30 Apr 03:30:32 ntpd[1943]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Apr 30 03:30:32.082608 ntpd[1943]: 30 Apr 03:30:32 ntpd[1943]: Listen normally on 2 lo 127.0.0.1:123 Apr 30 03:30:32.082608 ntpd[1943]: 30 Apr 03:30:32 ntpd[1943]: Listen normally on 3 eth0 172.31.22.79:123 Apr 30 03:30:32.082608 ntpd[1943]: 30 Apr 03:30:32 ntpd[1943]: Listen normally on 4 lo [::1]:123 Apr 30 03:30:32.082608 ntpd[1943]: 30 Apr 03:30:32 ntpd[1943]: bind(21) AF_INET6 fe80::4a0:74ff:fe78:95d9%2#123 flags 0x11 failed: Cannot assign requested address Apr 30 03:30:32.082608 ntpd[1943]: 30 Apr 03:30:32 ntpd[1943]: unable to create socket on eth0 (5) for fe80::4a0:74ff:fe78:95d9%2#123 Apr 30 03:30:32.082608 ntpd[1943]: 30 Apr 03:30:32 ntpd[1943]: failed to init interface for address fe80::4a0:74ff:fe78:95d9%2 Apr 30 03:30:32.082608 ntpd[1943]: 30 Apr 03:30:32 ntpd[1943]: Listening on routing socket on fd #21 for interface updates Apr 30 03:30:32.082608 ntpd[1943]: 30 Apr 03:30:32 ntpd[1943]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 30 03:30:32.082608 ntpd[1943]: 30 Apr 03:30:32 ntpd[1943]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 30 03:30:32.069407 ntpd[1943]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Apr 30 03:30:32.084068 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Apr 30 03:30:32.074988 ntpd[1943]: Listen normally on 2 lo 127.0.0.1:123 Apr 30 03:30:32.075034 ntpd[1943]: Listen normally on 3 eth0 172.31.22.79:123 Apr 30 03:30:32.075075 ntpd[1943]: Listen normally on 4 lo [::1]:123 Apr 30 03:30:32.077910 ntpd[1943]: bind(21) AF_INET6 fe80::4a0:74ff:fe78:95d9%2#123 flags 0x11 failed: Cannot assign requested address Apr 30 03:30:32.097363 update_engine[1951]: I20250430 03:30:32.097064 1951 update_check_scheduler.cc:74] Next update check in 3m40s Apr 30 03:30:32.093128 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 30 03:30:32.077947 ntpd[1943]: unable to create socket on eth0 (5) for fe80::4a0:74ff:fe78:95d9%2#123 Apr 30 03:30:32.093164 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 30 03:30:32.077968 ntpd[1943]: failed to init interface for address fe80::4a0:74ff:fe78:95d9%2 Apr 30 03:30:32.093682 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 30 03:30:32.078014 ntpd[1943]: Listening on routing socket on fd #21 for interface updates Apr 30 03:30:32.093703 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 30 03:30:32.080529 ntpd[1943]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 30 03:30:32.098451 systemd[1]: Started update-engine.service - Update Engine. 
Apr 30 03:30:32.080561 ntpd[1943]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 30 03:30:32.082117 dbus-daemon[1939]: [system] SELinux support is enabled Apr 30 03:30:32.096511 dbus-daemon[1939]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1886 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Apr 30 03:30:32.100670 dbus-daemon[1939]: [system] Successfully activated service 'org.freedesktop.systemd1' Apr 30 03:30:32.105289 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 30 03:30:32.108421 systemd[1]: Finished setup-oem.service - Setup OEM. Apr 30 03:30:32.121751 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Apr 30 03:30:32.153323 extend-filesystems[1941]: Resized partition /dev/nvme0n1p9 Apr 30 03:30:32.156832 extend-filesystems[2007]: resize2fs 1.47.1 (20-May-2024) Apr 30 03:30:32.174828 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Apr 30 03:30:32.303742 coreos-metadata[1938]: Apr 30 03:30:32.231 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Apr 30 03:30:32.303742 coreos-metadata[1938]: Apr 30 03:30:32.232 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Apr 30 03:30:32.303742 coreos-metadata[1938]: Apr 30 03:30:32.237 INFO Fetch successful Apr 30 03:30:32.303742 coreos-metadata[1938]: Apr 30 03:30:32.237 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Apr 30 03:30:32.303742 coreos-metadata[1938]: Apr 30 03:30:32.238 INFO Fetch successful Apr 30 03:30:32.303742 coreos-metadata[1938]: Apr 30 03:30:32.238 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Apr 30 03:30:32.303742 coreos-metadata[1938]: Apr 30 03:30:32.239 INFO Fetch successful Apr 30 03:30:32.303742 coreos-metadata[1938]: Apr 30 03:30:32.239 INFO 
Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Apr 30 03:30:32.303742 coreos-metadata[1938]: Apr 30 03:30:32.241 INFO Fetch successful Apr 30 03:30:32.303742 coreos-metadata[1938]: Apr 30 03:30:32.241 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Apr 30 03:30:32.303742 coreos-metadata[1938]: Apr 30 03:30:32.243 INFO Fetch failed with 404: resource not found Apr 30 03:30:32.303742 coreos-metadata[1938]: Apr 30 03:30:32.243 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Apr 30 03:30:32.303742 coreos-metadata[1938]: Apr 30 03:30:32.244 INFO Fetch successful Apr 30 03:30:32.303742 coreos-metadata[1938]: Apr 30 03:30:32.244 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Apr 30 03:30:32.303742 coreos-metadata[1938]: Apr 30 03:30:32.273 INFO Fetch successful Apr 30 03:30:32.303742 coreos-metadata[1938]: Apr 30 03:30:32.273 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Apr 30 03:30:32.303742 coreos-metadata[1938]: Apr 30 03:30:32.278 INFO Fetch successful Apr 30 03:30:32.303742 coreos-metadata[1938]: Apr 30 03:30:32.278 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Apr 30 03:30:32.303742 coreos-metadata[1938]: Apr 30 03:30:32.280 INFO Fetch successful Apr 30 03:30:32.303742 coreos-metadata[1938]: Apr 30 03:30:32.280 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Apr 30 03:30:32.303742 coreos-metadata[1938]: Apr 30 03:30:32.286 INFO Fetch successful Apr 30 03:30:32.317996 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Apr 30 03:30:32.311642 systemd-logind[1949]: Watching system buttons on /dev/input/event1 (Power Button) Apr 30 03:30:32.311670 systemd-logind[1949]: Watching system buttons on /dev/input/event2 (Sleep Button) Apr 30 03:30:32.311695 systemd-logind[1949]: Watching 
system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 30 03:30:32.324659 systemd-logind[1949]: New seat seat0. Apr 30 03:30:32.329045 extend-filesystems[2007]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Apr 30 03:30:32.329045 extend-filesystems[2007]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 30 03:30:32.329045 extend-filesystems[2007]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Apr 30 03:30:32.348112 extend-filesystems[1941]: Resized filesystem in /dev/nvme0n1p9 Apr 30 03:30:32.330998 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 30 03:30:32.353052 bash[2008]: Updated "/home/core/.ssh/authorized_keys" Apr 30 03:30:32.331226 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 30 03:30:32.343664 systemd[1]: Started systemd-logind.service - User Login Management. Apr 30 03:30:32.346421 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 30 03:30:32.367232 systemd[1]: Starting sshkeys.service... Apr 30 03:30:32.406836 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1725) Apr 30 03:30:32.416849 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 30 03:30:32.418323 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 30 03:30:32.430889 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 30 03:30:32.445028 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Apr 30 03:30:32.579380 locksmithd[1993]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 30 03:30:32.583128 coreos-metadata[2029]: Apr 30 03:30:32.583 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Apr 30 03:30:32.591902 coreos-metadata[2029]: Apr 30 03:30:32.591 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Apr 30 03:30:32.592498 coreos-metadata[2029]: Apr 30 03:30:32.592 INFO Fetch successful Apr 30 03:30:32.592585 coreos-metadata[2029]: Apr 30 03:30:32.592 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Apr 30 03:30:32.595988 coreos-metadata[2029]: Apr 30 03:30:32.595 INFO Fetch successful Apr 30 03:30:32.602925 unknown[2029]: wrote ssh authorized keys file for user: core Apr 30 03:30:32.643252 dbus-daemon[1939]: [system] Successfully activated service 'org.freedesktop.hostname1' Apr 30 03:30:32.643720 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Apr 30 03:30:32.644652 dbus-daemon[1939]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1997 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Apr 30 03:30:32.655934 update-ssh-keys[2103]: Updated "/home/core/.ssh/authorized_keys" Apr 30 03:30:32.660916 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 30 03:30:32.668603 systemd[1]: Finished sshkeys.service. Apr 30 03:30:32.719186 systemd[1]: Starting polkit.service - Authorization Manager... 
Apr 30 03:30:32.750422 polkitd[2130]: Started polkitd version 121 Apr 30 03:30:32.765271 polkitd[2130]: Loading rules from directory /etc/polkit-1/rules.d Apr 30 03:30:32.770947 polkitd[2130]: Loading rules from directory /usr/share/polkit-1/rules.d Apr 30 03:30:32.771841 polkitd[2130]: Finished loading, compiling and executing 2 rules Apr 30 03:30:32.773428 dbus-daemon[1939]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Apr 30 03:30:32.774018 systemd[1]: Started polkit.service - Authorization Manager. Apr 30 03:30:32.776897 polkitd[2130]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Apr 30 03:30:32.820585 systemd-resolved[1887]: System hostname changed to 'ip-172-31-22-79'. Apr 30 03:30:32.822829 systemd-hostnamed[1997]: Hostname set to (transient) Apr 30 03:30:32.940465 containerd[1971]: time="2025-04-30T03:30:32.940203506Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 30 03:30:33.000647 sshd_keygen[1994]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 30 03:30:33.026355 ntpd[1943]: bind(24) AF_INET6 fe80::4a0:74ff:fe78:95d9%2#123 flags 0x11 failed: Cannot assign requested address Apr 30 03:30:33.026900 ntpd[1943]: 30 Apr 03:30:33 ntpd[1943]: bind(24) AF_INET6 fe80::4a0:74ff:fe78:95d9%2#123 flags 0x11 failed: Cannot assign requested address Apr 30 03:30:33.026900 ntpd[1943]: 30 Apr 03:30:33 ntpd[1943]: unable to create socket on eth0 (6) for fe80::4a0:74ff:fe78:95d9%2#123 Apr 30 03:30:33.026900 ntpd[1943]: 30 Apr 03:30:33 ntpd[1943]: failed to init interface for address fe80::4a0:74ff:fe78:95d9%2 Apr 30 03:30:33.026401 ntpd[1943]: unable to create socket on eth0 (6) for fe80::4a0:74ff:fe78:95d9%2#123 Apr 30 03:30:33.026419 ntpd[1943]: failed to init interface for address fe80::4a0:74ff:fe78:95d9%2 Apr 30 03:30:33.034026 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Apr 30 03:30:33.042404 containerd[1971]: time="2025-04-30T03:30:33.042043009Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:30:33.045546 containerd[1971]: time="2025-04-30T03:30:33.044089983Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:30:33.045546 containerd[1971]: time="2025-04-30T03:30:33.044136998Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 30 03:30:33.045546 containerd[1971]: time="2025-04-30T03:30:33.044162164Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 30 03:30:33.045546 containerd[1971]: time="2025-04-30T03:30:33.044344093Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 30 03:30:33.045546 containerd[1971]: time="2025-04-30T03:30:33.044366859Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 30 03:30:33.045546 containerd[1971]: time="2025-04-30T03:30:33.044439398Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:30:33.045546 containerd[1971]: time="2025-04-30T03:30:33.044456174Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:30:33.045546 containerd[1971]: time="2025-04-30T03:30:33.044669146Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:30:33.045546 containerd[1971]: time="2025-04-30T03:30:33.044691369Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 30 03:30:33.045546 containerd[1971]: time="2025-04-30T03:30:33.044710212Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:30:33.045546 containerd[1971]: time="2025-04-30T03:30:33.044725022Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 30 03:30:33.044300 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 30 03:30:33.048774 containerd[1971]: time="2025-04-30T03:30:33.046845721Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:30:33.048774 containerd[1971]: time="2025-04-30T03:30:33.047140704Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:30:33.048774 containerd[1971]: time="2025-04-30T03:30:33.048536471Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:30:33.048774 containerd[1971]: time="2025-04-30T03:30:33.048563442Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 30 03:30:33.048774 containerd[1971]: time="2025-04-30T03:30:33.048678226Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Apr 30 03:30:33.048774 containerd[1971]: time="2025-04-30T03:30:33.048731724Z" level=info msg="metadata content store policy set" policy=shared Apr 30 03:30:33.053752 containerd[1971]: time="2025-04-30T03:30:33.053718928Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 30 03:30:33.053931 containerd[1971]: time="2025-04-30T03:30:33.053912851Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 30 03:30:33.054837 containerd[1971]: time="2025-04-30T03:30:33.054199064Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 30 03:30:33.055399 containerd[1971]: time="2025-04-30T03:30:33.054964636Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 30 03:30:33.055399 containerd[1971]: time="2025-04-30T03:30:33.055012556Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 30 03:30:33.055399 containerd[1971]: time="2025-04-30T03:30:33.055215575Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 30 03:30:33.056265 containerd[1971]: time="2025-04-30T03:30:33.056224853Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 30 03:30:33.056594 containerd[1971]: time="2025-04-30T03:30:33.056524588Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 30 03:30:33.056594 containerd[1971]: time="2025-04-30T03:30:33.056552689Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 30 03:30:33.056778 containerd[1971]: time="2025-04-30T03:30:33.056573275Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Apr 30 03:30:33.056778 containerd[1971]: time="2025-04-30T03:30:33.056730636Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 30 03:30:33.056778 containerd[1971]: time="2025-04-30T03:30:33.056753115Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 30 03:30:33.057848 containerd[1971]: time="2025-04-30T03:30:33.057707635Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 30 03:30:33.057848 containerd[1971]: time="2025-04-30T03:30:33.057745574Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 30 03:30:33.057848 containerd[1971]: time="2025-04-30T03:30:33.057786566Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 30 03:30:33.058380 containerd[1971]: time="2025-04-30T03:30:33.057998493Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 30 03:30:33.058380 containerd[1971]: time="2025-04-30T03:30:33.058027105Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 30 03:30:33.058380 containerd[1971]: time="2025-04-30T03:30:33.058047533Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 30 03:30:33.058380 containerd[1971]: time="2025-04-30T03:30:33.058091157Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 30 03:30:33.058380 containerd[1971]: time="2025-04-30T03:30:33.058117368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Apr 30 03:30:33.058380 containerd[1971]: time="2025-04-30T03:30:33.058153100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 30 03:30:33.058380 containerd[1971]: time="2025-04-30T03:30:33.058177511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 30 03:30:33.058380 containerd[1971]: time="2025-04-30T03:30:33.058195983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 30 03:30:33.058380 containerd[1971]: time="2025-04-30T03:30:33.058228489Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 30 03:30:33.058380 containerd[1971]: time="2025-04-30T03:30:33.058247694Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 30 03:30:33.058380 containerd[1971]: time="2025-04-30T03:30:33.058275475Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 30 03:30:33.058380 containerd[1971]: time="2025-04-30T03:30:33.058309566Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 30 03:30:33.058380 containerd[1971]: time="2025-04-30T03:30:33.058331723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 30 03:30:33.058380 containerd[1971]: time="2025-04-30T03:30:33.058349247Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 30 03:30:33.059656 containerd[1971]: time="2025-04-30T03:30:33.058941963Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 30 03:30:33.059656 containerd[1971]: time="2025-04-30T03:30:33.058975085Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Apr 30 03:30:33.059656 containerd[1971]: time="2025-04-30T03:30:33.059017676Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 30 03:30:33.059656 containerd[1971]: time="2025-04-30T03:30:33.059053057Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 30 03:30:33.059656 containerd[1971]: time="2025-04-30T03:30:33.059085729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 30 03:30:33.059656 containerd[1971]: time="2025-04-30T03:30:33.059102786Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 30 03:30:33.059656 containerd[1971]: time="2025-04-30T03:30:33.059566094Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 30 03:30:33.060286 containerd[1971]: time="2025-04-30T03:30:33.059999166Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 30 03:30:33.060286 containerd[1971]: time="2025-04-30T03:30:33.060025329Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 30 03:30:33.060286 containerd[1971]: time="2025-04-30T03:30:33.060058796Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 30 03:30:33.060286 containerd[1971]: time="2025-04-30T03:30:33.060073599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 30 03:30:33.060286 containerd[1971]: time="2025-04-30T03:30:33.060091975Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Apr 30 03:30:33.060286 containerd[1971]: time="2025-04-30T03:30:33.060106982Z" level=info msg="NRI interface is disabled by configuration." Apr 30 03:30:33.060286 containerd[1971]: time="2025-04-30T03:30:33.060136631Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 30 03:30:33.060708 systemd[1]: issuegen.service: Deactivated successfully. Apr 30 03:30:33.061000 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 30 03:30:33.061313 containerd[1971]: time="2025-04-30T03:30:33.060941171Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} 
ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 30 03:30:33.061313 containerd[1971]: time="2025-04-30T03:30:33.061256520Z" level=info msg="Connect containerd service" Apr 30 03:30:33.061792 containerd[1971]: time="2025-04-30T03:30:33.061603887Z" level=info msg="using legacy CRI server" Apr 30 03:30:33.061792 containerd[1971]: time="2025-04-30T03:30:33.061622941Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 30 03:30:33.062063 containerd[1971]: time="2025-04-30T03:30:33.061929792Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 30 03:30:33.062955 containerd[1971]: time="2025-04-30T03:30:33.062928420Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni 
config" Apr 30 03:30:33.063438 containerd[1971]: time="2025-04-30T03:30:33.063130146Z" level=info msg="Start subscribing containerd event" Apr 30 03:30:33.063438 containerd[1971]: time="2025-04-30T03:30:33.063208813Z" level=info msg="Start recovering state" Apr 30 03:30:33.063438 containerd[1971]: time="2025-04-30T03:30:33.063285522Z" level=info msg="Start event monitor" Apr 30 03:30:33.063438 containerd[1971]: time="2025-04-30T03:30:33.063309139Z" level=info msg="Start snapshots syncer" Apr 30 03:30:33.063438 containerd[1971]: time="2025-04-30T03:30:33.063321569Z" level=info msg="Start cni network conf syncer for default" Apr 30 03:30:33.063438 containerd[1971]: time="2025-04-30T03:30:33.063332119Z" level=info msg="Start streaming server" Apr 30 03:30:33.064151 containerd[1971]: time="2025-04-30T03:30:33.064129501Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 30 03:30:33.064442 containerd[1971]: time="2025-04-30T03:30:33.064383178Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 30 03:30:33.066179 containerd[1971]: time="2025-04-30T03:30:33.065940813Z" level=info msg="containerd successfully booted in 0.128280s" Apr 30 03:30:33.068254 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 30 03:30:33.070197 systemd[1]: Started containerd.service - containerd container runtime. Apr 30 03:30:33.097043 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 30 03:30:33.106272 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 30 03:30:33.108673 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 30 03:30:33.109921 systemd[1]: Reached target getty.target - Login Prompts. Apr 30 03:30:33.308055 tar[1966]: linux-amd64/LICENSE Apr 30 03:30:33.308458 tar[1966]: linux-amd64/README.md Apr 30 03:30:33.318915 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Apr 30 03:30:33.344966 systemd-networkd[1886]: eth0: Gained IPv6LL Apr 30 03:30:33.347523 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 30 03:30:33.348467 systemd[1]: Reached target network-online.target - Network is Online. Apr 30 03:30:33.354139 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Apr 30 03:30:33.356763 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:30:33.359266 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 30 03:30:33.384278 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 30 03:30:33.417475 amazon-ssm-agent[2163]: Initializing new seelog logger Apr 30 03:30:33.417919 amazon-ssm-agent[2163]: New Seelog Logger Creation Complete Apr 30 03:30:33.418054 amazon-ssm-agent[2163]: 2025/04/30 03:30:33 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 03:30:33.418090 amazon-ssm-agent[2163]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 03:30:33.418478 amazon-ssm-agent[2163]: 2025/04/30 03:30:33 processing appconfig overrides Apr 30 03:30:33.418940 amazon-ssm-agent[2163]: 2025/04/30 03:30:33 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 03:30:33.418940 amazon-ssm-agent[2163]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 03:30:33.419015 amazon-ssm-agent[2163]: 2025/04/30 03:30:33 processing appconfig overrides Apr 30 03:30:33.419327 amazon-ssm-agent[2163]: 2025/04/30 03:30:33 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 03:30:33.419327 amazon-ssm-agent[2163]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Apr 30 03:30:33.419436 amazon-ssm-agent[2163]: 2025/04/30 03:30:33 processing appconfig overrides Apr 30 03:30:33.419763 amazon-ssm-agent[2163]: 2025-04-30 03:30:33 INFO Proxy environment variables: Apr 30 03:30:33.422677 amazon-ssm-agent[2163]: 2025/04/30 03:30:33 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 03:30:33.422677 amazon-ssm-agent[2163]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 03:30:33.422870 amazon-ssm-agent[2163]: 2025/04/30 03:30:33 processing appconfig overrides Apr 30 03:30:33.520744 amazon-ssm-agent[2163]: 2025-04-30 03:30:33 INFO https_proxy: Apr 30 03:30:33.619163 amazon-ssm-agent[2163]: 2025-04-30 03:30:33 INFO http_proxy: Apr 30 03:30:33.717085 amazon-ssm-agent[2163]: 2025-04-30 03:30:33 INFO no_proxy: Apr 30 03:30:33.815352 amazon-ssm-agent[2163]: 2025-04-30 03:30:33 INFO Checking if agent identity type OnPrem can be assumed Apr 30 03:30:33.911487 amazon-ssm-agent[2163]: 2025-04-30 03:30:33 INFO Checking if agent identity type EC2 can be assumed Apr 30 03:30:33.911487 amazon-ssm-agent[2163]: 2025-04-30 03:30:33 INFO Agent will take identity from EC2 Apr 30 03:30:33.911487 amazon-ssm-agent[2163]: 2025-04-30 03:30:33 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 30 03:30:33.911487 amazon-ssm-agent[2163]: 2025-04-30 03:30:33 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 30 03:30:33.911487 amazon-ssm-agent[2163]: 2025-04-30 03:30:33 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 30 03:30:33.911487 amazon-ssm-agent[2163]: 2025-04-30 03:30:33 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Apr 30 03:30:33.911487 amazon-ssm-agent[2163]: 2025-04-30 03:30:33 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Apr 30 03:30:33.911487 amazon-ssm-agent[2163]: 2025-04-30 03:30:33 INFO [amazon-ssm-agent] Starting Core Agent Apr 30 03:30:33.911487 amazon-ssm-agent[2163]: 2025-04-30 03:30:33 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration Apr 30 03:30:33.911487 amazon-ssm-agent[2163]: 2025-04-30 03:30:33 INFO [Registrar] Starting registrar module Apr 30 03:30:33.911487 amazon-ssm-agent[2163]: 2025-04-30 03:30:33 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Apr 30 03:30:33.911487 amazon-ssm-agent[2163]: 2025-04-30 03:30:33 INFO [EC2Identity] EC2 registration was successful. Apr 30 03:30:33.911487 amazon-ssm-agent[2163]: 2025-04-30 03:30:33 INFO [CredentialRefresher] credentialRefresher has started Apr 30 03:30:33.911487 amazon-ssm-agent[2163]: 2025-04-30 03:30:33 INFO [CredentialRefresher] Starting credentials refresher loop Apr 30 03:30:33.911487 amazon-ssm-agent[2163]: 2025-04-30 03:30:33 INFO EC2RoleProvider Successfully connected with instance profile role credentials Apr 30 03:30:33.914187 amazon-ssm-agent[2163]: 2025-04-30 03:30:33 INFO [CredentialRefresher] Next credential rotation will be in 31.20832723285 minutes Apr 30 03:30:34.924821 amazon-ssm-agent[2163]: 2025-04-30 03:30:34 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Apr 30 03:30:35.029024 amazon-ssm-agent[2163]: 2025-04-30 03:30:34 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2182) started Apr 30 03:30:35.129983 amazon-ssm-agent[2163]: 2025-04-30 03:30:34 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Apr 30 03:30:35.237066 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:30:35.238509 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 30 03:30:35.239338 systemd[1]: Startup finished in 594ms (kernel) + 6.815s (initrd) + 6.766s (userspace) = 14.176s. 
Apr 30 03:30:35.242185 (kubelet)[2198]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:30:36.008625 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 30 03:30:36.016269 systemd[1]: Started sshd@0-172.31.22.79:22-147.75.109.163:60012.service - OpenSSH per-connection server daemon (147.75.109.163:60012). Apr 30 03:30:36.026559 ntpd[1943]: Listen normally on 7 eth0 [fe80::4a0:74ff:fe78:95d9%2]:123 Apr 30 03:30:36.028008 ntpd[1943]: 30 Apr 03:30:36 ntpd[1943]: Listen normally on 7 eth0 [fe80::4a0:74ff:fe78:95d9%2]:123 Apr 30 03:30:36.272649 sshd[2208]: Accepted publickey for core from 147.75.109.163 port 60012 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:30:36.275000 sshd[2208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:30:36.283546 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 30 03:30:36.289395 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 30 03:30:36.292294 systemd-logind[1949]: New session 1 of user core. Apr 30 03:30:36.309690 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 30 03:30:36.319625 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 30 03:30:36.323949 (systemd)[2213]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 30 03:30:36.458358 systemd[2213]: Queued start job for default target default.target. Apr 30 03:30:36.465088 systemd[2213]: Created slice app.slice - User Application Slice. Apr 30 03:30:36.465138 systemd[2213]: Reached target paths.target - Paths. Apr 30 03:30:36.465162 systemd[2213]: Reached target timers.target - Timers. Apr 30 03:30:36.467488 systemd[2213]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Apr 30 03:30:36.481140 systemd[2213]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 30 03:30:36.481291 systemd[2213]: Reached target sockets.target - Sockets. Apr 30 03:30:36.481312 systemd[2213]: Reached target basic.target - Basic System. Apr 30 03:30:36.481376 systemd[2213]: Reached target default.target - Main User Target. Apr 30 03:30:36.481420 systemd[2213]: Startup finished in 148ms. Apr 30 03:30:36.481552 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 30 03:30:36.490030 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 30 03:30:36.505591 kubelet[2198]: E0430 03:30:36.505528 2198 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:30:36.507352 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:30:36.507495 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:30:36.507747 systemd[1]: kubelet.service: Consumed 1.080s CPU time. Apr 30 03:30:36.703933 systemd[1]: Started sshd@1-172.31.22.79:22-147.75.109.163:60026.service - OpenSSH per-connection server daemon (147.75.109.163:60026). Apr 30 03:30:36.957186 sshd[2227]: Accepted publickey for core from 147.75.109.163 port 60026 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:30:36.958606 sshd[2227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:30:36.963699 systemd-logind[1949]: New session 2 of user core. Apr 30 03:30:36.970053 systemd[1]: Started session-2.scope - Session 2 of User core. 
Apr 30 03:30:37.149638 sshd[2227]: pam_unix(sshd:session): session closed for user core Apr 30 03:30:37.153749 systemd[1]: sshd@1-172.31.22.79:22-147.75.109.163:60026.service: Deactivated successfully. Apr 30 03:30:37.155833 systemd[1]: session-2.scope: Deactivated successfully. Apr 30 03:30:37.156523 systemd-logind[1949]: Session 2 logged out. Waiting for processes to exit. Apr 30 03:30:37.157675 systemd-logind[1949]: Removed session 2. Apr 30 03:30:37.194865 systemd[1]: Started sshd@2-172.31.22.79:22-147.75.109.163:60032.service - OpenSSH per-connection server daemon (147.75.109.163:60032). Apr 30 03:30:37.437880 sshd[2234]: Accepted publickey for core from 147.75.109.163 port 60032 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:30:37.439543 sshd[2234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:30:37.444551 systemd-logind[1949]: New session 3 of user core. Apr 30 03:30:37.454076 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 30 03:30:37.624398 sshd[2234]: pam_unix(sshd:session): session closed for user core Apr 30 03:30:37.628901 systemd[1]: sshd@2-172.31.22.79:22-147.75.109.163:60032.service: Deactivated successfully. Apr 30 03:30:37.630790 systemd[1]: session-3.scope: Deactivated successfully. Apr 30 03:30:37.631515 systemd-logind[1949]: Session 3 logged out. Waiting for processes to exit. Apr 30 03:30:37.632575 systemd-logind[1949]: Removed session 3. Apr 30 03:30:37.671605 systemd[1]: Started sshd@3-172.31.22.79:22-147.75.109.163:60034.service - OpenSSH per-connection server daemon (147.75.109.163:60034). Apr 30 03:30:37.920349 sshd[2241]: Accepted publickey for core from 147.75.109.163 port 60034 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:30:37.921688 sshd[2241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:30:37.926627 systemd-logind[1949]: New session 4 of user core. 
Apr 30 03:30:37.931454 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 30 03:30:38.114720 sshd[2241]: pam_unix(sshd:session): session closed for user core Apr 30 03:30:38.117569 systemd[1]: sshd@3-172.31.22.79:22-147.75.109.163:60034.service: Deactivated successfully. Apr 30 03:30:38.119371 systemd[1]: session-4.scope: Deactivated successfully. Apr 30 03:30:38.120630 systemd-logind[1949]: Session 4 logged out. Waiting for processes to exit. Apr 30 03:30:38.121672 systemd-logind[1949]: Removed session 4. Apr 30 03:30:38.159516 systemd[1]: Started sshd@4-172.31.22.79:22-147.75.109.163:60046.service - OpenSSH per-connection server daemon (147.75.109.163:60046). Apr 30 03:30:38.402998 sshd[2248]: Accepted publickey for core from 147.75.109.163 port 60046 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:30:38.404296 sshd[2248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:30:38.409231 systemd-logind[1949]: New session 5 of user core. Apr 30 03:30:38.414052 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 30 03:30:38.578190 sudo[2251]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 30 03:30:38.578638 sudo[2251]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:30:38.590188 sudo[2251]: pam_unix(sudo:session): session closed for user root Apr 30 03:30:38.627571 sshd[2248]: pam_unix(sshd:session): session closed for user core Apr 30 03:30:38.630867 systemd[1]: sshd@4-172.31.22.79:22-147.75.109.163:60046.service: Deactivated successfully. Apr 30 03:30:38.632438 systemd[1]: session-5.scope: Deactivated successfully. Apr 30 03:30:38.633660 systemd-logind[1949]: Session 5 logged out. Waiting for processes to exit. Apr 30 03:30:38.634994 systemd-logind[1949]: Removed session 5. 
Apr 30 03:30:38.672700 systemd[1]: Started sshd@5-172.31.22.79:22-147.75.109.163:60056.service - OpenSSH per-connection server daemon (147.75.109.163:60056). Apr 30 03:30:38.916847 sshd[2256]: Accepted publickey for core from 147.75.109.163 port 60056 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:30:38.918441 sshd[2256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:30:38.923841 systemd-logind[1949]: New session 6 of user core. Apr 30 03:30:38.931078 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 30 03:30:40.533635 systemd-resolved[1887]: Clock change detected. Flushing caches. Apr 30 03:30:40.578606 sudo[2260]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 30 03:30:40.578890 sudo[2260]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:30:40.582647 sudo[2260]: pam_unix(sudo:session): session closed for user root Apr 30 03:30:40.587977 sudo[2259]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 30 03:30:40.588260 sudo[2259]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:30:40.607263 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 30 03:30:40.608751 auditctl[2263]: No rules Apr 30 03:30:40.609171 systemd[1]: audit-rules.service: Deactivated successfully. Apr 30 03:30:40.609394 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 30 03:30:40.611595 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 30 03:30:40.641464 augenrules[2281]: No rules Apr 30 03:30:40.642883 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Apr 30 03:30:40.644544 sudo[2259]: pam_unix(sudo:session): session closed for user root Apr 30 03:30:40.681885 sshd[2256]: pam_unix(sshd:session): session closed for user core Apr 30 03:30:40.684547 systemd[1]: sshd@5-172.31.22.79:22-147.75.109.163:60056.service: Deactivated successfully. Apr 30 03:30:40.686578 systemd[1]: session-6.scope: Deactivated successfully. Apr 30 03:30:40.687777 systemd-logind[1949]: Session 6 logged out. Waiting for processes to exit. Apr 30 03:30:40.689158 systemd-logind[1949]: Removed session 6. Apr 30 03:30:40.731290 systemd[1]: Started sshd@6-172.31.22.79:22-147.75.109.163:60070.service - OpenSSH per-connection server daemon (147.75.109.163:60070). Apr 30 03:30:40.969040 sshd[2289]: Accepted publickey for core from 147.75.109.163 port 60070 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g Apr 30 03:30:40.970339 sshd[2289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:30:40.975477 systemd-logind[1949]: New session 7 of user core. Apr 30 03:30:40.986160 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 30 03:30:41.121716 sudo[2292]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 30 03:30:41.122022 sudo[2292]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 03:30:41.516276 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 30 03:30:41.516385 (dockerd)[2308]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 30 03:30:41.886182 dockerd[2308]: time="2025-04-30T03:30:41.886050191Z" level=info msg="Starting up" Apr 30 03:30:42.030810 dockerd[2308]: time="2025-04-30T03:30:42.030551944Z" level=info msg="Loading containers: start." 
Apr 30 03:30:42.144949 kernel: Initializing XFRM netlink socket Apr 30 03:30:42.176147 (udev-worker)[2332]: Network interface NamePolicy= disabled on kernel command line. Apr 30 03:30:42.233536 systemd-networkd[1886]: docker0: Link UP Apr 30 03:30:42.252588 dockerd[2308]: time="2025-04-30T03:30:42.252544089Z" level=info msg="Loading containers: done." Apr 30 03:30:42.270593 dockerd[2308]: time="2025-04-30T03:30:42.270534814Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 30 03:30:42.270769 dockerd[2308]: time="2025-04-30T03:30:42.270653066Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 30 03:30:42.270769 dockerd[2308]: time="2025-04-30T03:30:42.270757782Z" level=info msg="Daemon has completed initialization" Apr 30 03:30:42.309835 dockerd[2308]: time="2025-04-30T03:30:42.309660597Z" level=info msg="API listen on /run/docker.sock" Apr 30 03:30:42.310119 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 30 03:30:43.602294 containerd[1971]: time="2025-04-30T03:30:43.602254169Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" Apr 30 03:30:44.139995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3212076331.mount: Deactivated successfully. 
Apr 30 03:30:46.366421 containerd[1971]: time="2025-04-30T03:30:46.366362135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:46.367715 containerd[1971]: time="2025-04-30T03:30:46.367663649Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674873" Apr 30 03:30:46.368837 containerd[1971]: time="2025-04-30T03:30:46.368777749Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:46.372376 containerd[1971]: time="2025-04-30T03:30:46.372306437Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:30:46.374010 containerd[1971]: time="2025-04-30T03:30:46.373742249Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 2.771447848s" Apr 30 03:30:46.374010 containerd[1971]: time="2025-04-30T03:30:46.373795046Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\"" Apr 30 03:30:46.398639 containerd[1971]: time="2025-04-30T03:30:46.398598389Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" Apr 30 03:30:48.265387 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Apr 30 03:30:48.277178 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 03:30:48.513992 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:30:48.528395 (kubelet)[2523]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 03:30:48.613968 kubelet[2523]: E0430 03:30:48.613515 2523 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 03:30:48.619813 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 03:30:48.620028 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 03:30:48.959743 containerd[1971]: time="2025-04-30T03:30:48.959614939Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:30:48.960818 containerd[1971]: time="2025-04-30T03:30:48.960757958Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617534"
Apr 30 03:30:48.962162 containerd[1971]: time="2025-04-30T03:30:48.962107446Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:30:48.965158 containerd[1971]: time="2025-04-30T03:30:48.965092763Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:30:48.967946 containerd[1971]: time="2025-04-30T03:30:48.966438408Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 2.567790976s"
Apr 30 03:30:48.967946 containerd[1971]: time="2025-04-30T03:30:48.966487288Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\""
Apr 30 03:30:48.992414 containerd[1971]: time="2025-04-30T03:30:48.992367735Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\""
Apr 30 03:30:50.844597 containerd[1971]: time="2025-04-30T03:30:50.844542051Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:30:50.845672 containerd[1971]: time="2025-04-30T03:30:50.845606619Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903682"
Apr 30 03:30:50.846485 containerd[1971]: time="2025-04-30T03:30:50.846425425Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:30:50.849287 containerd[1971]: time="2025-04-30T03:30:50.849218001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:30:50.850416 containerd[1971]: time="2025-04-30T03:30:50.850247461Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 1.857842107s"
Apr 30 03:30:50.850416 containerd[1971]: time="2025-04-30T03:30:50.850286316Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\""
Apr 30 03:30:50.875303 containerd[1971]: time="2025-04-30T03:30:50.875242487Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\""
Apr 30 03:30:52.044983 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2991667326.mount: Deactivated successfully.
Apr 30 03:30:52.538938 containerd[1971]: time="2025-04-30T03:30:52.538795731Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:30:52.540095 containerd[1971]: time="2025-04-30T03:30:52.540039859Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185817"
Apr 30 03:30:52.542201 containerd[1971]: time="2025-04-30T03:30:52.542142028Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:30:52.544518 containerd[1971]: time="2025-04-30T03:30:52.544479163Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:30:52.545147 containerd[1971]: time="2025-04-30T03:30:52.545000160Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 1.669718598s"
Apr 30 03:30:52.545147 containerd[1971]: time="2025-04-30T03:30:52.545037688Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\""
Apr 30 03:30:52.570016 containerd[1971]: time="2025-04-30T03:30:52.569970791Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Apr 30 03:30:53.103756 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1188106169.mount: Deactivated successfully.
Apr 30 03:30:54.161481 containerd[1971]: time="2025-04-30T03:30:54.161405808Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:30:54.162564 containerd[1971]: time="2025-04-30T03:30:54.162512793Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Apr 30 03:30:54.163732 containerd[1971]: time="2025-04-30T03:30:54.163679469Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:30:54.166720 containerd[1971]: time="2025-04-30T03:30:54.166642354Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:30:54.168033 containerd[1971]: time="2025-04-30T03:30:54.167990797Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.597977028s"
Apr 30 03:30:54.168124 containerd[1971]: time="2025-04-30T03:30:54.168042108Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Apr 30 03:30:54.194860 containerd[1971]: time="2025-04-30T03:30:54.194811526Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Apr 30 03:30:54.682877 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1170044983.mount: Deactivated successfully.
Apr 30 03:30:54.696252 containerd[1971]: time="2025-04-30T03:30:54.696184409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:30:54.698138 containerd[1971]: time="2025-04-30T03:30:54.698049420Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Apr 30 03:30:54.700774 containerd[1971]: time="2025-04-30T03:30:54.700698105Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:30:54.704889 containerd[1971]: time="2025-04-30T03:30:54.704819463Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:30:54.706476 containerd[1971]: time="2025-04-30T03:30:54.705956604Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 511.102858ms"
Apr 30 03:30:54.706476 containerd[1971]: time="2025-04-30T03:30:54.706001579Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Apr 30 03:30:54.731392 containerd[1971]: time="2025-04-30T03:30:54.731346327Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Apr 30 03:30:55.291127 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3324939073.mount: Deactivated successfully.
Apr 30 03:30:58.660257 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 30 03:30:58.667594 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 03:30:58.708036 containerd[1971]: time="2025-04-30T03:30:58.707869576Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:30:58.710210 containerd[1971]: time="2025-04-30T03:30:58.710136535Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571"
Apr 30 03:30:58.712635 containerd[1971]: time="2025-04-30T03:30:58.712578169Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:30:58.717595 containerd[1971]: time="2025-04-30T03:30:58.717529453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:30:58.719481 containerd[1971]: time="2025-04-30T03:30:58.719301026Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.987914556s"
Apr 30 03:30:58.719481 containerd[1971]: time="2025-04-30T03:30:58.719353495Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Apr 30 03:30:59.115924 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:30:59.124348 (kubelet)[2677]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 03:30:59.186037 kubelet[2677]: E0430 03:30:59.185984 2677 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 03:30:59.188597 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 03:30:59.188805 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 03:31:01.958209 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:31:01.970575 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 03:31:02.037066 systemd[1]: Reloading requested from client PID 2743 ('systemctl') (unit session-7.scope)...
Apr 30 03:31:02.037147 systemd[1]: Reloading...
Apr 30 03:31:02.203951 zram_generator::config[2785]: No configuration found.
Apr 30 03:31:02.348917 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 03:31:02.435414 systemd[1]: Reloading finished in 397 ms.
Apr 30 03:31:02.493557 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 03:31:02.498137 systemd[1]: kubelet.service: Deactivated successfully.
Apr 30 03:31:02.498392 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:31:02.503441 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 03:31:02.687486 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:31:02.698429 (kubelet)[2849]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 30 03:31:02.746646 kubelet[2849]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 30 03:31:02.746646 kubelet[2849]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Apr 30 03:31:02.746646 kubelet[2849]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 30 03:31:02.752135 kubelet[2849]: I0430 03:31:02.752001 2849 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 30 03:31:03.015650 kubelet[2849]: I0430 03:31:03.015517 2849 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Apr 30 03:31:03.015650 kubelet[2849]: I0430 03:31:03.015551 2849 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 30 03:31:03.016037 kubelet[2849]: I0430 03:31:03.015835 2849 server.go:927] "Client rotation is on, will bootstrap in background"
Apr 30 03:31:03.057206 kubelet[2849]: I0430 03:31:03.057162 2849 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 30 03:31:03.058283 kubelet[2849]: E0430 03:31:03.058156 2849 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.22.79:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.22.79:6443: connect: connection refused
Apr 30 03:31:03.076294 kubelet[2849]: I0430 03:31:03.076249 2849 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 30 03:31:03.076491 kubelet[2849]: I0430 03:31:03.076443 2849 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 30 03:31:03.078952 kubelet[2849]: I0430 03:31:03.076470 2849 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-22-79","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Apr 30 03:31:03.079935 kubelet[2849]: I0430 03:31:03.079626 2849 topology_manager.go:138] "Creating topology manager with none policy"
Apr 30 03:31:03.079935 kubelet[2849]: I0430 03:31:03.079663 2849 container_manager_linux.go:301] "Creating device plugin manager"
Apr 30 03:31:03.079935 kubelet[2849]: I0430 03:31:03.079812 2849 state_mem.go:36] "Initialized new in-memory state store"
Apr 30 03:31:03.081006 kubelet[2849]: I0430 03:31:03.080980 2849 kubelet.go:400] "Attempting to sync node with API server"
Apr 30 03:31:03.081006 kubelet[2849]: I0430 03:31:03.081006 2849 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 30 03:31:03.081355 kubelet[2849]: I0430 03:31:03.081031 2849 kubelet.go:312] "Adding apiserver pod source"
Apr 30 03:31:03.081355 kubelet[2849]: I0430 03:31:03.081049 2849 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 30 03:31:03.084961 kubelet[2849]: W0430 03:31:03.084887 2849 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.22.79:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.22.79:6443: connect: connection refused
Apr 30 03:31:03.085417 kubelet[2849]: E0430 03:31:03.085199 2849 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.22.79:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.22.79:6443: connect: connection refused
Apr 30 03:31:03.085417 kubelet[2849]: W0430 03:31:03.085282 2849 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.22.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-79&limit=500&resourceVersion=0": dial tcp 172.31.22.79:6443: connect: connection refused
Apr 30 03:31:03.085417 kubelet[2849]: E0430 03:31:03.085316 2849 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.22.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-79&limit=500&resourceVersion=0": dial tcp 172.31.22.79:6443: connect: connection refused
Apr 30 03:31:03.087874 kubelet[2849]: I0430 03:31:03.087667 2849 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 30 03:31:03.090418 kubelet[2849]: I0430 03:31:03.089470 2849 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Apr 30 03:31:03.090418 kubelet[2849]: W0430 03:31:03.089536 2849 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 30 03:31:03.090418 kubelet[2849]: I0430 03:31:03.090325 2849 server.go:1264] "Started kubelet"
Apr 30 03:31:03.095620 kubelet[2849]: I0430 03:31:03.095579 2849 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Apr 30 03:31:03.099143 kubelet[2849]: I0430 03:31:03.099095 2849 server.go:455] "Adding debug handlers to kubelet server"
Apr 30 03:31:03.103344 kubelet[2849]: I0430 03:31:03.102732 2849 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 30 03:31:03.103344 kubelet[2849]: I0430 03:31:03.103057 2849 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 30 03:31:03.103344 kubelet[2849]: E0430 03:31:03.103235 2849 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.22.79:6443/api/v1/namespaces/default/events\": dial tcp 172.31.22.79:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-22-79.183afb12a70fd5a1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-22-79,UID:ip-172-31-22-79,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-22-79,},FirstTimestamp:2025-04-30 03:31:03.090304417 +0000 UTC m=+0.387078386,LastTimestamp:2025-04-30 03:31:03.090304417 +0000 UTC m=+0.387078386,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-22-79,}"
Apr 30 03:31:03.104218 kubelet[2849]: I0430 03:31:03.104194 2849 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 30 03:31:03.106442 kubelet[2849]: E0430 03:31:03.106418 2849 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-22-79\" not found"
Apr 30 03:31:03.106566 kubelet[2849]: I0430 03:31:03.106559 2849 volume_manager.go:291] "Starting Kubelet Volume Manager"
Apr 30 03:31:03.108654 kubelet[2849]: I0430 03:31:03.108637 2849 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Apr 30 03:31:03.108822 kubelet[2849]: I0430 03:31:03.108794 2849 reconciler.go:26] "Reconciler: start to sync state"
Apr 30 03:31:03.109220 kubelet[2849]: W0430 03:31:03.109169 2849 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.22.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.22.79:6443: connect: connection refused
Apr 30 03:31:03.109276 kubelet[2849]: E0430 03:31:03.109231 2849 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.22.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.22.79:6443: connect: connection refused
Apr 30 03:31:03.113511 kubelet[2849]: E0430 03:31:03.111558 2849 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-79?timeout=10s\": dial tcp 172.31.22.79:6443: connect: connection refused" interval="200ms"
Apr 30 03:31:03.113511 kubelet[2849]: I0430 03:31:03.112633 2849 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 30 03:31:03.115625 kubelet[2849]: I0430 03:31:03.115606 2849 factory.go:221] Registration of the containerd container factory successfully
Apr 30 03:31:03.115744 kubelet[2849]: I0430 03:31:03.115736 2849 factory.go:221] Registration of the systemd container factory successfully
Apr 30 03:31:03.132926 kubelet[2849]: I0430 03:31:03.131146 2849 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Apr 30 03:31:03.132926 kubelet[2849]: I0430 03:31:03.132717 2849 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Apr 30 03:31:03.132926 kubelet[2849]: I0430 03:31:03.132753 2849 status_manager.go:217] "Starting to sync pod status with apiserver"
Apr 30 03:31:03.132926 kubelet[2849]: I0430 03:31:03.132780 2849 kubelet.go:2337] "Starting kubelet main sync loop"
Apr 30 03:31:03.132926 kubelet[2849]: E0430 03:31:03.132839 2849 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 30 03:31:03.148025 kubelet[2849]: W0430 03:31:03.147962 2849 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.22.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.22.79:6443: connect: connection refused
Apr 30 03:31:03.148225 kubelet[2849]: E0430 03:31:03.148208 2849 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.22.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.22.79:6443: connect: connection refused
Apr 30 03:31:03.152610 kubelet[2849]: I0430 03:31:03.152402 2849 cpu_manager.go:214] "Starting CPU manager" policy="none"
Apr 30 03:31:03.152610 kubelet[2849]: I0430 03:31:03.152418 2849 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Apr 30 03:31:03.152610 kubelet[2849]: I0430 03:31:03.152434 2849 state_mem.go:36] "Initialized new in-memory state store"
Apr 30 03:31:03.154797 kubelet[2849]: I0430 03:31:03.154769 2849 policy_none.go:49] "None policy: Start"
Apr 30 03:31:03.155376 kubelet[2849]: I0430 03:31:03.155347 2849 memory_manager.go:170] "Starting memorymanager" policy="None"
Apr 30 03:31:03.155376 kubelet[2849]: I0430 03:31:03.155374 2849 state_mem.go:35] "Initializing new in-memory state store"
Apr 30 03:31:03.166474 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 30 03:31:03.181346 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 30 03:31:03.184374 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Apr 30 03:31:03.199036 kubelet[2849]: I0430 03:31:03.199005 2849 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Apr 30 03:31:03.199586 kubelet[2849]: I0430 03:31:03.199542 2849 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 30 03:31:03.201831 kubelet[2849]: E0430 03:31:03.201571 2849 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-22-79\" not found"
Apr 30 03:31:03.207125 kubelet[2849]: I0430 03:31:03.207099 2849 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 30 03:31:03.208485 kubelet[2849]: I0430 03:31:03.208447 2849 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-22-79"
Apr 30 03:31:03.208923 kubelet[2849]: E0430 03:31:03.208884 2849 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.22.79:6443/api/v1/nodes\": dial tcp 172.31.22.79:6443: connect: connection refused" node="ip-172-31-22-79"
Apr 30 03:31:03.233314 kubelet[2849]: I0430 03:31:03.233258 2849 topology_manager.go:215] "Topology Admit Handler" podUID="e948a503c7e67e24a97e95ebcee5948d" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-22-79"
Apr 30 03:31:03.234815 kubelet[2849]: I0430 03:31:03.234770 2849 topology_manager.go:215] "Topology Admit Handler" podUID="b096d0b27aa814e9726dc1e3556929eb" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-22-79"
Apr 30 03:31:03.236165 kubelet[2849]: I0430 03:31:03.236135 2849 topology_manager.go:215] "Topology Admit Handler" podUID="738f01e64ef87bebf7bb2b3024c0efba" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-22-79"
Apr 30 03:31:03.242571 systemd[1]: Created slice kubepods-burstable-pode948a503c7e67e24a97e95ebcee5948d.slice - libcontainer container kubepods-burstable-pode948a503c7e67e24a97e95ebcee5948d.slice.
Apr 30 03:31:03.257165 systemd[1]: Created slice kubepods-burstable-podb096d0b27aa814e9726dc1e3556929eb.slice - libcontainer container kubepods-burstable-podb096d0b27aa814e9726dc1e3556929eb.slice.
Apr 30 03:31:03.262187 systemd[1]: Created slice kubepods-burstable-pod738f01e64ef87bebf7bb2b3024c0efba.slice - libcontainer container kubepods-burstable-pod738f01e64ef87bebf7bb2b3024c0efba.slice.
Apr 30 03:31:03.312098 kubelet[2849]: E0430 03:31:03.311992 2849 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-79?timeout=10s\": dial tcp 172.31.22.79:6443: connect: connection refused" interval="400ms"
Apr 30 03:31:03.409819 kubelet[2849]: I0430 03:31:03.409563 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e948a503c7e67e24a97e95ebcee5948d-k8s-certs\") pod \"kube-apiserver-ip-172-31-22-79\" (UID: \"e948a503c7e67e24a97e95ebcee5948d\") " pod="kube-system/kube-apiserver-ip-172-31-22-79"
Apr 30 03:31:03.409819 kubelet[2849]: I0430 03:31:03.409603 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e948a503c7e67e24a97e95ebcee5948d-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-22-79\" (UID: \"e948a503c7e67e24a97e95ebcee5948d\") " pod="kube-system/kube-apiserver-ip-172-31-22-79"
Apr 30 03:31:03.409819 kubelet[2849]: I0430 03:31:03.409626 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b096d0b27aa814e9726dc1e3556929eb-ca-certs\") pod \"kube-controller-manager-ip-172-31-22-79\" (UID: \"b096d0b27aa814e9726dc1e3556929eb\") " pod="kube-system/kube-controller-manager-ip-172-31-22-79"
Apr 30 03:31:03.409819 kubelet[2849]: I0430 03:31:03.409641 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b096d0b27aa814e9726dc1e3556929eb-k8s-certs\") pod \"kube-controller-manager-ip-172-31-22-79\" (UID: \"b096d0b27aa814e9726dc1e3556929eb\") " pod="kube-system/kube-controller-manager-ip-172-31-22-79"
Apr 30 03:31:03.409819 kubelet[2849]: I0430 03:31:03.409662 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b096d0b27aa814e9726dc1e3556929eb-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-22-79\" (UID: \"b096d0b27aa814e9726dc1e3556929eb\") " pod="kube-system/kube-controller-manager-ip-172-31-22-79"
Apr 30 03:31:03.410076 kubelet[2849]: I0430 03:31:03.409681 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/738f01e64ef87bebf7bb2b3024c0efba-kubeconfig\") pod \"kube-scheduler-ip-172-31-22-79\" (UID: \"738f01e64ef87bebf7bb2b3024c0efba\") " pod="kube-system/kube-scheduler-ip-172-31-22-79"
Apr 30 03:31:03.410076 kubelet[2849]: I0430 03:31:03.409697 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e948a503c7e67e24a97e95ebcee5948d-ca-certs\") pod \"kube-apiserver-ip-172-31-22-79\" (UID: \"e948a503c7e67e24a97e95ebcee5948d\") " pod="kube-system/kube-apiserver-ip-172-31-22-79"
Apr 30 03:31:03.410076 kubelet[2849]: I0430 03:31:03.409712 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b096d0b27aa814e9726dc1e3556929eb-kubeconfig\") pod \"kube-controller-manager-ip-172-31-22-79\" (UID: \"b096d0b27aa814e9726dc1e3556929eb\") " pod="kube-system/kube-controller-manager-ip-172-31-22-79"
Apr 30 03:31:03.410076 kubelet[2849]: I0430 03:31:03.409729 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b096d0b27aa814e9726dc1e3556929eb-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-22-79\" (UID: \"b096d0b27aa814e9726dc1e3556929eb\") " pod="kube-system/kube-controller-manager-ip-172-31-22-79"
Apr 30 03:31:03.410747 kubelet[2849]: I0430 03:31:03.410718 2849 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-22-79"
Apr 30 03:31:03.411103 kubelet[2849]: E0430 03:31:03.411053 2849 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.22.79:6443/api/v1/nodes\": dial tcp 172.31.22.79:6443: connect: connection refused" node="ip-172-31-22-79"
Apr 30 03:31:03.555847 containerd[1971]: time="2025-04-30T03:31:03.555799165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-22-79,Uid:e948a503c7e67e24a97e95ebcee5948d,Namespace:kube-system,Attempt:0,}"
Apr 30 03:31:03.567154 containerd[1971]: time="2025-04-30T03:31:03.567054312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-22-79,Uid:b096d0b27aa814e9726dc1e3556929eb,Namespace:kube-system,Attempt:0,}"
Apr 30 03:31:03.567607 containerd[1971]: time="2025-04-30T03:31:03.567054538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-22-79,Uid:738f01e64ef87bebf7bb2b3024c0efba,Namespace:kube-system,Attempt:0,}"
Apr 30 03:31:03.712784 kubelet[2849]: E0430 03:31:03.712493 2849 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-79?timeout=10s\": dial tcp 172.31.22.79:6443: connect: connection refused" interval="800ms"
Apr 30 03:31:03.812956 kubelet[2849]: I0430 03:31:03.812887 2849 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-22-79"
Apr 30 03:31:03.813539 kubelet[2849]: E0430 03:31:03.813488 2849 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.22.79:6443/api/v1/nodes\": dial tcp 172.31.22.79:6443: connect: connection refused" node="ip-172-31-22-79"
Apr 30 03:31:03.996880 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3394074027.mount: Deactivated successfully.
Apr 30 03:31:04.009824 containerd[1971]: time="2025-04-30T03:31:04.009773059Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 30 03:31:04.011674 containerd[1971]: time="2025-04-30T03:31:04.011608625Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 30 03:31:04.012939 containerd[1971]: time="2025-04-30T03:31:04.012884910Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 30 03:31:04.013048 containerd[1971]: time="2025-04-30T03:31:04.013001281Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 30 03:31:04.013720 containerd[1971]: time="2025-04-30T03:31:04.013692615Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 30 03:31:04.014507 containerd[1971]: time="2025-04-30T03:31:04.014289287Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Apr 30 03:31:04.014507 containerd[1971]: time="2025-04-30T03:31:04.014479079Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 30 03:31:04.016933 containerd[1971]: time="2025-04-30T03:31:04.016856239Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\"
value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:31:04.019583 containerd[1971]: time="2025-04-30T03:31:04.019152409Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 451.716767ms" Apr 30 03:31:04.020876 containerd[1971]: time="2025-04-30T03:31:04.020837730Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 453.612564ms" Apr 30 03:31:04.028618 containerd[1971]: time="2025-04-30T03:31:04.028454653Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 472.580948ms" Apr 30 03:31:04.107993 kubelet[2849]: W0430 03:31:04.107743 2849 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.22.79:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.22.79:6443: connect: connection refused Apr 30 03:31:04.108390 kubelet[2849]: E0430 03:31:04.108289 2849 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.22.79:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.22.79:6443: connect: connection refused Apr 30 03:31:04.161352 containerd[1971]: time="2025-04-30T03:31:04.160838962Z" 
level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:31:04.161352 containerd[1971]: time="2025-04-30T03:31:04.160905525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:31:04.161352 containerd[1971]: time="2025-04-30T03:31:04.160922412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:31:04.161352 containerd[1971]: time="2025-04-30T03:31:04.161107999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:31:04.163115 containerd[1971]: time="2025-04-30T03:31:04.163035116Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:31:04.163584 containerd[1971]: time="2025-04-30T03:31:04.163292670Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:31:04.163584 containerd[1971]: time="2025-04-30T03:31:04.163369517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:31:04.164391 containerd[1971]: time="2025-04-30T03:31:04.163757604Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:31:04.178014 containerd[1971]: time="2025-04-30T03:31:04.177221375Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:31:04.178014 containerd[1971]: time="2025-04-30T03:31:04.177269839Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:31:04.178014 containerd[1971]: time="2025-04-30T03:31:04.177296200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:31:04.178014 containerd[1971]: time="2025-04-30T03:31:04.177390744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:31:04.195422 systemd[1]: Started cri-containerd-8f47c14b29e25a1d1124f73c4b38b8e7307b310e9ba7ef1db1fa4ae995b32bbf.scope - libcontainer container 8f47c14b29e25a1d1124f73c4b38b8e7307b310e9ba7ef1db1fa4ae995b32bbf. Apr 30 03:31:04.199691 systemd[1]: Started cri-containerd-d2851c7da3c456011f87a93dbde7546a6e9489fc1aa6a03ff9bc8a31cfdf217d.scope - libcontainer container d2851c7da3c456011f87a93dbde7546a6e9489fc1aa6a03ff9bc8a31cfdf217d. Apr 30 03:31:04.204792 systemd[1]: Started cri-containerd-698d50af022c63db14605fd34f6527c5ec16c9994417977b106bfaee7913d984.scope - libcontainer container 698d50af022c63db14605fd34f6527c5ec16c9994417977b106bfaee7913d984. 
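Each "Started cri-containerd-<id>.scope" entry above shows systemd wrapping a container shim in a transient scope unit whose name embeds the 64-hex-character container ID. A small sketch of parsing that naming pattern (hypothetical helper; the pattern is inferred from the unit names in this log, not from a documented contract):

```python
import re

# Transient scope unit names observed in the log: cri-containerd-<64 hex chars>.scope
SCOPE_RE = re.compile(r"^cri-containerd-([0-9a-f]{64})\.scope$")

def container_id_from_scope(unit):
    """Return the container ID embedded in a cri-containerd scope unit name, or None."""
    m = SCOPE_RE.match(unit)
    return m.group(1) if m else None

# Unit name taken verbatim from the log above.
unit = "cri-containerd-8f47c14b29e25a1d1124f73c4b38b8e7307b310e9ba7ef1db1fa4ae995b32bbf.scope"
print(container_id_from_scope(unit))
# → 8f47c14b29e25a1d1124f73c4b38b8e7307b310e9ba7ef1db1fa4ae995b32bbf
```

The same ID reappears later in the "RunPodSandbox ... returns sandbox id" entries, which is how sandbox creation can be correlated with its cgroup scope.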
Apr 30 03:31:04.264848 containerd[1971]: time="2025-04-30T03:31:04.263878972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-22-79,Uid:b096d0b27aa814e9726dc1e3556929eb,Namespace:kube-system,Attempt:0,} returns sandbox id \"d2851c7da3c456011f87a93dbde7546a6e9489fc1aa6a03ff9bc8a31cfdf217d\"" Apr 30 03:31:04.267882 kubelet[2849]: W0430 03:31:04.267738 2849 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.22.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.22.79:6443: connect: connection refused Apr 30 03:31:04.267882 kubelet[2849]: E0430 03:31:04.267789 2849 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.22.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.22.79:6443: connect: connection refused Apr 30 03:31:04.276806 containerd[1971]: time="2025-04-30T03:31:04.276768840Z" level=info msg="CreateContainer within sandbox \"d2851c7da3c456011f87a93dbde7546a6e9489fc1aa6a03ff9bc8a31cfdf217d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 30 03:31:04.288081 containerd[1971]: time="2025-04-30T03:31:04.288011109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-22-79,Uid:738f01e64ef87bebf7bb2b3024c0efba,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f47c14b29e25a1d1124f73c4b38b8e7307b310e9ba7ef1db1fa4ae995b32bbf\"" Apr 30 03:31:04.288415 containerd[1971]: time="2025-04-30T03:31:04.288394475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-22-79,Uid:e948a503c7e67e24a97e95ebcee5948d,Namespace:kube-system,Attempt:0,} returns sandbox id \"698d50af022c63db14605fd34f6527c5ec16c9994417977b106bfaee7913d984\"" Apr 30 03:31:04.291072 containerd[1971]: time="2025-04-30T03:31:04.290735344Z" 
level=info msg="CreateContainer within sandbox \"8f47c14b29e25a1d1124f73c4b38b8e7307b310e9ba7ef1db1fa4ae995b32bbf\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 30 03:31:04.291532 containerd[1971]: time="2025-04-30T03:31:04.291502510Z" level=info msg="CreateContainer within sandbox \"698d50af022c63db14605fd34f6527c5ec16c9994417977b106bfaee7913d984\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 30 03:31:04.340224 containerd[1971]: time="2025-04-30T03:31:04.339972498Z" level=info msg="CreateContainer within sandbox \"d2851c7da3c456011f87a93dbde7546a6e9489fc1aa6a03ff9bc8a31cfdf217d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a18b588ba595e3eaeeec5eccf65be76594c961db1a228e499c830b415bc67f72\"" Apr 30 03:31:04.343169 containerd[1971]: time="2025-04-30T03:31:04.343086490Z" level=info msg="CreateContainer within sandbox \"8f47c14b29e25a1d1124f73c4b38b8e7307b310e9ba7ef1db1fa4ae995b32bbf\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5f3f2e3fda1fa8c59ec3775ed9efb6440a6cb566f6bfdf675c5c2e9189f0282f\"" Apr 30 03:31:04.343574 containerd[1971]: time="2025-04-30T03:31:04.343376412Z" level=info msg="StartContainer for \"a18b588ba595e3eaeeec5eccf65be76594c961db1a228e499c830b415bc67f72\"" Apr 30 03:31:04.346925 containerd[1971]: time="2025-04-30T03:31:04.345399054Z" level=info msg="CreateContainer within sandbox \"698d50af022c63db14605fd34f6527c5ec16c9994417977b106bfaee7913d984\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"710843542cf3e1a408fa13b2cdf0a1b14f69c10811402ec319b4412da148e97f\"" Apr 30 03:31:04.346925 containerd[1971]: time="2025-04-30T03:31:04.345633055Z" level=info msg="StartContainer for \"5f3f2e3fda1fa8c59ec3775ed9efb6440a6cb566f6bfdf675c5c2e9189f0282f\"" Apr 30 03:31:04.361579 containerd[1971]: time="2025-04-30T03:31:04.361534097Z" level=info msg="StartContainer for 
\"710843542cf3e1a408fa13b2cdf0a1b14f69c10811402ec319b4412da148e97f\"" Apr 30 03:31:04.369470 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Apr 30 03:31:04.402134 systemd[1]: Started cri-containerd-5f3f2e3fda1fa8c59ec3775ed9efb6440a6cb566f6bfdf675c5c2e9189f0282f.scope - libcontainer container 5f3f2e3fda1fa8c59ec3775ed9efb6440a6cb566f6bfdf675c5c2e9189f0282f. Apr 30 03:31:04.409113 systemd[1]: Started cri-containerd-a18b588ba595e3eaeeec5eccf65be76594c961db1a228e499c830b415bc67f72.scope - libcontainer container a18b588ba595e3eaeeec5eccf65be76594c961db1a228e499c830b415bc67f72. Apr 30 03:31:04.422161 systemd[1]: Started cri-containerd-710843542cf3e1a408fa13b2cdf0a1b14f69c10811402ec319b4412da148e97f.scope - libcontainer container 710843542cf3e1a408fa13b2cdf0a1b14f69c10811402ec319b4412da148e97f. Apr 30 03:31:04.496135 containerd[1971]: time="2025-04-30T03:31:04.496032541Z" level=info msg="StartContainer for \"710843542cf3e1a408fa13b2cdf0a1b14f69c10811402ec319b4412da148e97f\" returns successfully" Apr 30 03:31:04.507432 containerd[1971]: time="2025-04-30T03:31:04.506590203Z" level=info msg="StartContainer for \"a18b588ba595e3eaeeec5eccf65be76594c961db1a228e499c830b415bc67f72\" returns successfully" Apr 30 03:31:04.515283 kubelet[2849]: E0430 03:31:04.514968 2849 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-79?timeout=10s\": dial tcp 172.31.22.79:6443: connect: connection refused" interval="1.6s" Apr 30 03:31:04.539617 containerd[1971]: time="2025-04-30T03:31:04.539570208Z" level=info msg="StartContainer for \"5f3f2e3fda1fa8c59ec3775ed9efb6440a6cb566f6bfdf675c5c2e9189f0282f\" returns successfully" Apr 30 03:31:04.580268 kubelet[2849]: W0430 03:31:04.580141 2849 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://172.31.22.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-79&limit=500&resourceVersion=0": dial tcp 172.31.22.79:6443: connect: connection refused Apr 30 03:31:04.580268 kubelet[2849]: E0430 03:31:04.580227 2849 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.22.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-79&limit=500&resourceVersion=0": dial tcp 172.31.22.79:6443: connect: connection refused Apr 30 03:31:04.615695 kubelet[2849]: I0430 03:31:04.615671 2849 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-22-79" Apr 30 03:31:04.616465 kubelet[2849]: E0430 03:31:04.616417 2849 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.22.79:6443/api/v1/nodes\": dial tcp 172.31.22.79:6443: connect: connection refused" node="ip-172-31-22-79" Apr 30 03:31:04.694016 kubelet[2849]: W0430 03:31:04.693940 2849 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.22.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.22.79:6443: connect: connection refused Apr 30 03:31:04.694016 kubelet[2849]: E0430 03:31:04.693991 2849 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.22.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.22.79:6443: connect: connection refused Apr 30 03:31:05.087687 kubelet[2849]: E0430 03:31:05.087646 2849 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.22.79:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.22.79:6443: connect: connection refused Apr 30 
03:31:06.219408 kubelet[2849]: I0430 03:31:06.218785 2849 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-22-79" Apr 30 03:31:07.416834 kubelet[2849]: E0430 03:31:07.416744 2849 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-22-79\" not found" node="ip-172-31-22-79" Apr 30 03:31:07.484153 kubelet[2849]: I0430 03:31:07.484100 2849 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-22-79" Apr 30 03:31:07.521519 kubelet[2849]: E0430 03:31:07.521299 2849 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-22-79.183afb12a70fd5a1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-22-79,UID:ip-172-31-22-79,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-22-79,},FirstTimestamp:2025-04-30 03:31:03.090304417 +0000 UTC m=+0.387078386,LastTimestamp:2025-04-30 03:31:03.090304417 +0000 UTC m=+0.387078386,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-22-79,}" Apr 30 03:31:08.088881 kubelet[2849]: I0430 03:31:08.088837 2849 apiserver.go:52] "Watching apiserver" Apr 30 03:31:08.109178 kubelet[2849]: I0430 03:31:08.109137 2849 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 03:31:09.643479 systemd[1]: Reloading requested from client PID 3123 ('systemctl') (unit session-7.scope)... Apr 30 03:31:09.643497 systemd[1]: Reloading... Apr 30 03:31:09.752010 zram_generator::config[3161]: No configuration found. 
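The "Failed to ensure lease exists, will retry" entries earlier in the log back off from interval="400ms" to "800ms" and then "1.6s" before the API server comes up and the node finally registers at 03:31:06. That doubling is a plain exponential backoff; a minimal sketch of such a policy (illustrative only, with an assumed cap, not the kubelet's actual retry code):

```python
def backoff_intervals(base_ms=400, factor=2, cap_ms=7000, n=5):
    """Yield successive retry intervals in milliseconds, doubling up to a cap."""
    interval = base_ms
    for _ in range(n):
        yield min(interval, cap_ms)
        interval *= factor

# First four intervals match the 400ms, 800ms, 1.6s progression seen in the log.
print(list(backoff_intervals(n=4)))
# → [400, 800, 1600, 3200]
```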
Apr 30 03:31:09.885423 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 03:31:09.993102 systemd[1]: Reloading finished in 349 ms. Apr 30 03:31:10.039947 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:31:10.047221 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 03:31:10.047493 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:31:10.053485 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:31:10.251560 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:31:10.261521 (kubelet)[3223]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 03:31:10.350273 kubelet[3223]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 03:31:10.350273 kubelet[3223]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 03:31:10.350273 kubelet[3223]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
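The deprecation warnings above say that --container-runtime-endpoint and --volume-plugin-dir should move into the file passed via the kubelet's --config flag. A hypothetical sketch of the equivalent KubeletConfiguration fields (field names are from the kubelet config API; the file path and endpoint values here are illustrative, not taken from this host):

```yaml
# Example kubelet config file (passed via --config), replacing the deprecated flags.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
volumePluginDir: /var/lib/kubelet/volumeplugins
```

Note the third warning, --pod-infra-container-image, has no config-file equivalent; per the log itself, the image garbage collector will obtain the sandbox image from the CRI in a future release.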
Apr 30 03:31:10.354793 kubelet[3223]: I0430 03:31:10.354722 3223 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 03:31:10.359533 kubelet[3223]: I0430 03:31:10.359485 3223 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 30 03:31:10.359533 kubelet[3223]: I0430 03:31:10.359510 3223 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 03:31:10.359766 kubelet[3223]: I0430 03:31:10.359748 3223 server.go:927] "Client rotation is on, will bootstrap in background" Apr 30 03:31:10.361162 kubelet[3223]: I0430 03:31:10.361130 3223 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Apr 30 03:31:10.362478 kubelet[3223]: I0430 03:31:10.362235 3223 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 03:31:10.369195 kubelet[3223]: I0430 03:31:10.369062 3223 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 03:31:10.369326 kubelet[3223]: I0430 03:31:10.369273 3223 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 03:31:10.369452 kubelet[3223]: I0430 03:31:10.369297 3223 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-22-79","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 30 03:31:10.369567 kubelet[3223]: I0430 03:31:10.369464 3223 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 
03:31:10.369567 kubelet[3223]: I0430 03:31:10.369476 3223 container_manager_linux.go:301] "Creating device plugin manager" Apr 30 03:31:10.371016 kubelet[3223]: I0430 03:31:10.370989 3223 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:31:10.371139 kubelet[3223]: I0430 03:31:10.371127 3223 kubelet.go:400] "Attempting to sync node with API server" Apr 30 03:31:10.371174 kubelet[3223]: I0430 03:31:10.371141 3223 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 03:31:10.371174 kubelet[3223]: I0430 03:31:10.371159 3223 kubelet.go:312] "Adding apiserver pod source" Apr 30 03:31:10.371394 kubelet[3223]: I0430 03:31:10.371176 3223 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 03:31:10.387362 kubelet[3223]: I0430 03:31:10.386581 3223 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 03:31:10.388709 kubelet[3223]: I0430 03:31:10.388692 3223 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 03:31:10.394811 kubelet[3223]: I0430 03:31:10.394784 3223 server.go:1264] "Started kubelet" Apr 30 03:31:10.396853 kubelet[3223]: I0430 03:31:10.396696 3223 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 03:31:10.401665 kubelet[3223]: I0430 03:31:10.401625 3223 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 03:31:10.403361 kubelet[3223]: I0430 03:31:10.402663 3223 server.go:455] "Adding debug handlers to kubelet server" Apr 30 03:31:10.405989 kubelet[3223]: I0430 03:31:10.405511 3223 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 30 03:31:10.407620 kubelet[3223]: I0430 03:31:10.407373 3223 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 03:31:10.407620 kubelet[3223]: I0430 03:31:10.407493 3223 reconciler.go:26] "Reconciler: start to sync state" Apr 30 03:31:10.409225 
kubelet[3223]: I0430 03:31:10.409181 3223 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 03:31:10.410482 kubelet[3223]: I0430 03:31:10.410465 3223 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 30 03:31:10.410573 kubelet[3223]: I0430 03:31:10.410567 3223 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 03:31:10.410632 kubelet[3223]: I0430 03:31:10.410626 3223 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 03:31:10.410909 kubelet[3223]: E0430 03:31:10.410699 3223 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 03:31:10.419224 kubelet[3223]: I0430 03:31:10.418393 3223 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 03:31:10.419449 kubelet[3223]: I0430 03:31:10.419412 3223 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 03:31:10.419699 kubelet[3223]: I0430 03:31:10.419688 3223 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 03:31:10.422691 kubelet[3223]: E0430 03:31:10.422658 3223 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 03:31:10.423404 kubelet[3223]: I0430 03:31:10.423380 3223 factory.go:221] Registration of the containerd container factory successfully Apr 30 03:31:10.423404 kubelet[3223]: I0430 03:31:10.423397 3223 factory.go:221] Registration of the systemd container factory successfully Apr 30 03:31:10.466714 kubelet[3223]: I0430 03:31:10.466687 3223 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 03:31:10.466714 kubelet[3223]: I0430 03:31:10.466711 3223 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 03:31:10.466944 kubelet[3223]: I0430 03:31:10.466732 3223 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:31:10.466993 kubelet[3223]: I0430 03:31:10.466962 3223 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 30 03:31:10.467038 kubelet[3223]: I0430 03:31:10.466977 3223 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 30 03:31:10.467038 kubelet[3223]: I0430 03:31:10.467002 3223 policy_none.go:49] "None policy: Start" Apr 30 03:31:10.467812 kubelet[3223]: I0430 03:31:10.467790 3223 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 03:31:10.467812 kubelet[3223]: I0430 03:31:10.467815 3223 state_mem.go:35] "Initializing new in-memory state store" Apr 30 03:31:10.468066 kubelet[3223]: I0430 03:31:10.468046 3223 state_mem.go:75] "Updated machine memory state" Apr 30 03:31:10.472642 kubelet[3223]: I0430 03:31:10.472611 3223 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 03:31:10.472964 kubelet[3223]: I0430 03:31:10.472800 3223 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 03:31:10.473074 kubelet[3223]: I0430 03:31:10.473065 3223 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 03:31:10.510053 kubelet[3223]: I0430 03:31:10.508471 3223 
kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-22-79"
Apr 30 03:31:10.511803 kubelet[3223]: I0430 03:31:10.511573 3223 topology_manager.go:215] "Topology Admit Handler" podUID="738f01e64ef87bebf7bb2b3024c0efba" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-22-79"
Apr 30 03:31:10.512109 kubelet[3223]: I0430 03:31:10.511976 3223 topology_manager.go:215] "Topology Admit Handler" podUID="e948a503c7e67e24a97e95ebcee5948d" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-22-79"
Apr 30 03:31:10.512109 kubelet[3223]: I0430 03:31:10.512050 3223 topology_manager.go:215] "Topology Admit Handler" podUID="b096d0b27aa814e9726dc1e3556929eb" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-22-79"
Apr 30 03:31:10.520432 kubelet[3223]: I0430 03:31:10.519798 3223 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-22-79"
Apr 30 03:31:10.520432 kubelet[3223]: I0430 03:31:10.519875 3223 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-22-79"
Apr 30 03:31:10.520432 kubelet[3223]: E0430 03:31:10.520120 3223 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-22-79\" already exists" pod="kube-system/kube-scheduler-ip-172-31-22-79"
Apr 30 03:31:10.656320 sudo[3255]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Apr 30 03:31:10.656740 sudo[3255]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Apr 30 03:31:10.708576 kubelet[3223]: I0430 03:31:10.708531 3223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b096d0b27aa814e9726dc1e3556929eb-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-22-79\" (UID: \"b096d0b27aa814e9726dc1e3556929eb\") " pod="kube-system/kube-controller-manager-ip-172-31-22-79"
Apr 30 03:31:10.708576 kubelet[3223]: I0430 03:31:10.708571 3223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b096d0b27aa814e9726dc1e3556929eb-kubeconfig\") pod \"kube-controller-manager-ip-172-31-22-79\" (UID: \"b096d0b27aa814e9726dc1e3556929eb\") " pod="kube-system/kube-controller-manager-ip-172-31-22-79"
Apr 30 03:31:10.708576 kubelet[3223]: I0430 03:31:10.708591 3223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e948a503c7e67e24a97e95ebcee5948d-ca-certs\") pod \"kube-apiserver-ip-172-31-22-79\" (UID: \"e948a503c7e67e24a97e95ebcee5948d\") " pod="kube-system/kube-apiserver-ip-172-31-22-79"
Apr 30 03:31:10.708765 kubelet[3223]: I0430 03:31:10.708608 3223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e948a503c7e67e24a97e95ebcee5948d-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-22-79\" (UID: \"e948a503c7e67e24a97e95ebcee5948d\") " pod="kube-system/kube-apiserver-ip-172-31-22-79"
Apr 30 03:31:10.708765 kubelet[3223]: I0430 03:31:10.708625 3223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e948a503c7e67e24a97e95ebcee5948d-k8s-certs\") pod \"kube-apiserver-ip-172-31-22-79\" (UID: \"e948a503c7e67e24a97e95ebcee5948d\") " pod="kube-system/kube-apiserver-ip-172-31-22-79"
Apr 30 03:31:10.708765 kubelet[3223]: I0430 03:31:10.708651 3223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b096d0b27aa814e9726dc1e3556929eb-ca-certs\") pod \"kube-controller-manager-ip-172-31-22-79\" (UID: \"b096d0b27aa814e9726dc1e3556929eb\") " pod="kube-system/kube-controller-manager-ip-172-31-22-79"
Apr 30 03:31:10.708765 kubelet[3223]: I0430 03:31:10.708665 3223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b096d0b27aa814e9726dc1e3556929eb-k8s-certs\") pod \"kube-controller-manager-ip-172-31-22-79\" (UID: \"b096d0b27aa814e9726dc1e3556929eb\") " pod="kube-system/kube-controller-manager-ip-172-31-22-79"
Apr 30 03:31:10.708765 kubelet[3223]: I0430 03:31:10.708682 3223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b096d0b27aa814e9726dc1e3556929eb-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-22-79\" (UID: \"b096d0b27aa814e9726dc1e3556929eb\") " pod="kube-system/kube-controller-manager-ip-172-31-22-79"
Apr 30 03:31:10.708885 kubelet[3223]: I0430 03:31:10.708699 3223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/738f01e64ef87bebf7bb2b3024c0efba-kubeconfig\") pod \"kube-scheduler-ip-172-31-22-79\" (UID: \"738f01e64ef87bebf7bb2b3024c0efba\") " pod="kube-system/kube-scheduler-ip-172-31-22-79"
Apr 30 03:31:11.376745 kubelet[3223]: I0430 03:31:11.376690 3223 apiserver.go:52] "Watching apiserver"
Apr 30 03:31:11.407766 kubelet[3223]: I0430 03:31:11.407675 3223 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Apr 30 03:31:11.458311 kubelet[3223]: E0430 03:31:11.457031 3223 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-22-79\" already exists" pod="kube-system/kube-scheduler-ip-172-31-22-79"
Apr 30 03:31:11.493004 kubelet[3223]: I0430 03:31:11.492704 3223 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-22-79" podStartSLOduration=1.492683009 podStartE2EDuration="1.492683009s" podCreationTimestamp="2025-04-30 03:31:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:31:11.491851874 +0000 UTC m=+1.211147193" watchObservedRunningTime="2025-04-30 03:31:11.492683009 +0000 UTC m=+1.211978329"
Apr 30 03:31:11.493004 kubelet[3223]: I0430 03:31:11.492859 3223 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-22-79" podStartSLOduration=2.492849644 podStartE2EDuration="2.492849644s" podCreationTimestamp="2025-04-30 03:31:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:31:11.474416116 +0000 UTC m=+1.193711443" watchObservedRunningTime="2025-04-30 03:31:11.492849644 +0000 UTC m=+1.212144961"
Apr 30 03:31:11.510265 sudo[3255]: pam_unix(sudo:session): session closed for user root
Apr 30 03:31:11.521407 kubelet[3223]: I0430 03:31:11.521341 3223 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-22-79" podStartSLOduration=1.521323969 podStartE2EDuration="1.521323969s" podCreationTimestamp="2025-04-30 03:31:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:31:11.505645535 +0000 UTC m=+1.224940852" watchObservedRunningTime="2025-04-30 03:31:11.521323969 +0000 UTC m=+1.240619288"
Apr 30 03:31:13.221412 sudo[2292]: pam_unix(sudo:session): session closed for user root
Apr 30 03:31:13.258388 sshd[2289]: pam_unix(sshd:session): session closed for user core
Apr 30 03:31:13.261323 systemd[1]: sshd@6-172.31.22.79:22-147.75.109.163:60070.service: Deactivated successfully.
Apr 30 03:31:13.263182 systemd[1]: session-7.scope: Deactivated successfully.
Apr 30 03:31:13.263354 systemd[1]: session-7.scope: Consumed 5.055s CPU time, 187.8M memory peak, 0B memory swap peak.
Apr 30 03:31:13.264526 systemd-logind[1949]: Session 7 logged out. Waiting for processes to exit.
Apr 30 03:31:13.266096 systemd-logind[1949]: Removed session 7.
Apr 30 03:31:18.576031 update_engine[1951]: I20250430 03:31:18.575949 1951 update_attempter.cc:509] Updating boot flags...
Apr 30 03:31:18.660950 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3306)
Apr 30 03:31:18.803112 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3308)
Apr 30 03:31:19.013000 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3308)
Apr 30 03:31:24.751621 kubelet[3223]: I0430 03:31:24.751561 3223 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 30 03:31:24.752336 containerd[1971]: time="2025-04-30T03:31:24.752287786Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 30 03:31:24.752639 kubelet[3223]: I0430 03:31:24.752461 3223 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 30 03:31:25.468441 kubelet[3223]: I0430 03:31:25.468401 3223 topology_manager.go:215] "Topology Admit Handler" podUID="3ac53023-f6a8-49c6-8169-d40ce3f69ab2" podNamespace="kube-system" podName="kube-proxy-48925"
Apr 30 03:31:25.471396 kubelet[3223]: I0430 03:31:25.471276 3223 topology_manager.go:215] "Topology Admit Handler" podUID="ff7749c4-9f69-4b02-bf37-e72358ca29f9" podNamespace="kube-system" podName="cilium-x8295"
Apr 30 03:31:25.489427 systemd[1]: Created slice kubepods-burstable-podff7749c4_9f69_4b02_bf37_e72358ca29f9.slice - libcontainer container kubepods-burstable-podff7749c4_9f69_4b02_bf37_e72358ca29f9.slice.
Apr 30 03:31:25.497795 systemd[1]: Created slice kubepods-besteffort-pod3ac53023_f6a8_49c6_8169_d40ce3f69ab2.slice - libcontainer container kubepods-besteffort-pod3ac53023_f6a8_49c6_8169_d40ce3f69ab2.slice.
Apr 30 03:31:25.500455 kubelet[3223]: I0430 03:31:25.500428 3223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3ac53023-f6a8-49c6-8169-d40ce3f69ab2-lib-modules\") pod \"kube-proxy-48925\" (UID: \"3ac53023-f6a8-49c6-8169-d40ce3f69ab2\") " pod="kube-system/kube-proxy-48925"
Apr 30 03:31:25.500834 kubelet[3223]: I0430 03:31:25.500594 3223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ff7749c4-9f69-4b02-bf37-e72358ca29f9-etc-cni-netd\") pod \"cilium-x8295\" (UID: \"ff7749c4-9f69-4b02-bf37-e72358ca29f9\") " pod="kube-system/cilium-x8295"
Apr 30 03:31:25.500834 kubelet[3223]: I0430 03:31:25.500631 3223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ff7749c4-9f69-4b02-bf37-e72358ca29f9-clustermesh-secrets\") pod \"cilium-x8295\" (UID: \"ff7749c4-9f69-4b02-bf37-e72358ca29f9\") " pod="kube-system/cilium-x8295"
Apr 30 03:31:25.500834 kubelet[3223]: I0430 03:31:25.500650 3223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3ac53023-f6a8-49c6-8169-d40ce3f69ab2-kube-proxy\") pod \"kube-proxy-48925\" (UID: \"3ac53023-f6a8-49c6-8169-d40ce3f69ab2\") " pod="kube-system/kube-proxy-48925"
Apr 30 03:31:25.500834 kubelet[3223]: I0430 03:31:25.500666 3223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ff7749c4-9f69-4b02-bf37-e72358ca29f9-cni-path\") pod \"cilium-x8295\" (UID: \"ff7749c4-9f69-4b02-bf37-e72358ca29f9\") " pod="kube-system/cilium-x8295"
Apr 30 03:31:25.500834 kubelet[3223]: I0430 03:31:25.500682 3223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ff7749c4-9f69-4b02-bf37-e72358ca29f9-xtables-lock\") pod \"cilium-x8295\" (UID: \"ff7749c4-9f69-4b02-bf37-e72358ca29f9\") " pod="kube-system/cilium-x8295"
Apr 30 03:31:25.500834 kubelet[3223]: I0430 03:31:25.500699 3223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ff7749c4-9f69-4b02-bf37-e72358ca29f9-lib-modules\") pod \"cilium-x8295\" (UID: \"ff7749c4-9f69-4b02-bf37-e72358ca29f9\") " pod="kube-system/cilium-x8295"
Apr 30 03:31:25.501291 kubelet[3223]: I0430 03:31:25.500714 3223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ff7749c4-9f69-4b02-bf37-e72358ca29f9-cilium-config-path\") pod \"cilium-x8295\" (UID: \"ff7749c4-9f69-4b02-bf37-e72358ca29f9\") " pod="kube-system/cilium-x8295"
Apr 30 03:31:25.501291 kubelet[3223]: I0430 03:31:25.500727 3223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ff7749c4-9f69-4b02-bf37-e72358ca29f9-host-proc-sys-net\") pod \"cilium-x8295\" (UID: \"ff7749c4-9f69-4b02-bf37-e72358ca29f9\") " pod="kube-system/cilium-x8295"
Apr 30 03:31:25.501291 kubelet[3223]: I0430 03:31:25.500751 3223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxdcw\" (UniqueName: \"kubernetes.io/projected/ff7749c4-9f69-4b02-bf37-e72358ca29f9-kube-api-access-lxdcw\") pod \"cilium-x8295\" (UID: \"ff7749c4-9f69-4b02-bf37-e72358ca29f9\") " pod="kube-system/cilium-x8295"
Apr 30 03:31:25.501291 kubelet[3223]: I0430 03:31:25.500768 3223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ff7749c4-9f69-4b02-bf37-e72358ca29f9-cilium-run\") pod \"cilium-x8295\" (UID: \"ff7749c4-9f69-4b02-bf37-e72358ca29f9\") " pod="kube-system/cilium-x8295"
Apr 30 03:31:25.501291 kubelet[3223]: I0430 03:31:25.501042 3223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ff7749c4-9f69-4b02-bf37-e72358ca29f9-cilium-cgroup\") pod \"cilium-x8295\" (UID: \"ff7749c4-9f69-4b02-bf37-e72358ca29f9\") " pod="kube-system/cilium-x8295"
Apr 30 03:31:25.501422 kubelet[3223]: I0430 03:31:25.501066 3223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3ac53023-f6a8-49c6-8169-d40ce3f69ab2-xtables-lock\") pod \"kube-proxy-48925\" (UID: \"3ac53023-f6a8-49c6-8169-d40ce3f69ab2\") " pod="kube-system/kube-proxy-48925"
Apr 30 03:31:25.501422 kubelet[3223]: I0430 03:31:25.501081 3223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q428\" (UniqueName: \"kubernetes.io/projected/3ac53023-f6a8-49c6-8169-d40ce3f69ab2-kube-api-access-5q428\") pod \"kube-proxy-48925\" (UID: \"3ac53023-f6a8-49c6-8169-d40ce3f69ab2\") " pod="kube-system/kube-proxy-48925"
Apr 30 03:31:25.501630 kubelet[3223]: I0430 03:31:25.501494 3223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ff7749c4-9f69-4b02-bf37-e72358ca29f9-hostproc\") pod \"cilium-x8295\" (UID: \"ff7749c4-9f69-4b02-bf37-e72358ca29f9\") " pod="kube-system/cilium-x8295"
Apr 30 03:31:25.501630 kubelet[3223]: I0430 03:31:25.501515 3223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ff7749c4-9f69-4b02-bf37-e72358ca29f9-hubble-tls\") pod \"cilium-x8295\" (UID: \"ff7749c4-9f69-4b02-bf37-e72358ca29f9\") " pod="kube-system/cilium-x8295"
Apr 30 03:31:25.501630 kubelet[3223]: I0430 03:31:25.501536 3223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ff7749c4-9f69-4b02-bf37-e72358ca29f9-bpf-maps\") pod \"cilium-x8295\" (UID: \"ff7749c4-9f69-4b02-bf37-e72358ca29f9\") " pod="kube-system/cilium-x8295"
Apr 30 03:31:25.501630 kubelet[3223]: I0430 03:31:25.501555 3223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ff7749c4-9f69-4b02-bf37-e72358ca29f9-host-proc-sys-kernel\") pod \"cilium-x8295\" (UID: \"ff7749c4-9f69-4b02-bf37-e72358ca29f9\") " pod="kube-system/cilium-x8295"
Apr 30 03:31:25.631213 kubelet[3223]: I0430 03:31:25.628918 3223 topology_manager.go:215] "Topology Admit Handler" podUID="95a8e8bb-56ca-4588-a04e-ad9d470f58fb" podNamespace="kube-system" podName="cilium-operator-599987898-9cfpd"
Apr 30 03:31:25.650257 systemd[1]: Created slice kubepods-besteffort-pod95a8e8bb_56ca_4588_a04e_ad9d470f58fb.slice - libcontainer container kubepods-besteffort-pod95a8e8bb_56ca_4588_a04e_ad9d470f58fb.slice.
Apr 30 03:31:25.703151 kubelet[3223]: I0430 03:31:25.703111 3223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/95a8e8bb-56ca-4588-a04e-ad9d470f58fb-cilium-config-path\") pod \"cilium-operator-599987898-9cfpd\" (UID: \"95a8e8bb-56ca-4588-a04e-ad9d470f58fb\") " pod="kube-system/cilium-operator-599987898-9cfpd"
Apr 30 03:31:25.703151 kubelet[3223]: I0430 03:31:25.703149 3223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtqdl\" (UniqueName: \"kubernetes.io/projected/95a8e8bb-56ca-4588-a04e-ad9d470f58fb-kube-api-access-qtqdl\") pod \"cilium-operator-599987898-9cfpd\" (UID: \"95a8e8bb-56ca-4588-a04e-ad9d470f58fb\") " pod="kube-system/cilium-operator-599987898-9cfpd"
Apr 30 03:31:25.795852 containerd[1971]: time="2025-04-30T03:31:25.795448756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x8295,Uid:ff7749c4-9f69-4b02-bf37-e72358ca29f9,Namespace:kube-system,Attempt:0,}"
Apr 30 03:31:25.814364 containerd[1971]: time="2025-04-30T03:31:25.814327876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-48925,Uid:3ac53023-f6a8-49c6-8169-d40ce3f69ab2,Namespace:kube-system,Attempt:0,}"
Apr 30 03:31:25.847794 containerd[1971]: time="2025-04-30T03:31:25.847118575Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 03:31:25.847794 containerd[1971]: time="2025-04-30T03:31:25.847221444Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 03:31:25.847794 containerd[1971]: time="2025-04-30T03:31:25.847245728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:31:25.847794 containerd[1971]: time="2025-04-30T03:31:25.847422094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:31:25.860721 containerd[1971]: time="2025-04-30T03:31:25.860061375Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 03:31:25.861336 containerd[1971]: time="2025-04-30T03:31:25.861200357Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 03:31:25.863196 containerd[1971]: time="2025-04-30T03:31:25.862583387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:31:25.863196 containerd[1971]: time="2025-04-30T03:31:25.862718764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:31:25.874159 systemd[1]: Started cri-containerd-fd535e395b38f3888a30b4569c56a51ad325d95da04ff8bb056511bbb3b38efb.scope - libcontainer container fd535e395b38f3888a30b4569c56a51ad325d95da04ff8bb056511bbb3b38efb.
Apr 30 03:31:25.896154 systemd[1]: Started cri-containerd-7a4d1e173be9ca992e7f687be9a911e207a6638473565b4bf68b96be2b09b469.scope - libcontainer container 7a4d1e173be9ca992e7f687be9a911e207a6638473565b4bf68b96be2b09b469.
Apr 30 03:31:25.927880 containerd[1971]: time="2025-04-30T03:31:25.927796696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x8295,Uid:ff7749c4-9f69-4b02-bf37-e72358ca29f9,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd535e395b38f3888a30b4569c56a51ad325d95da04ff8bb056511bbb3b38efb\""
Apr 30 03:31:25.932409 containerd[1971]: time="2025-04-30T03:31:25.932155864Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Apr 30 03:31:25.940079 containerd[1971]: time="2025-04-30T03:31:25.940034766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-48925,Uid:3ac53023-f6a8-49c6-8169-d40ce3f69ab2,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a4d1e173be9ca992e7f687be9a911e207a6638473565b4bf68b96be2b09b469\""
Apr 30 03:31:25.943563 containerd[1971]: time="2025-04-30T03:31:25.943416375Z" level=info msg="CreateContainer within sandbox \"7a4d1e173be9ca992e7f687be9a911e207a6638473565b4bf68b96be2b09b469\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 30 03:31:25.958699 containerd[1971]: time="2025-04-30T03:31:25.958661872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-9cfpd,Uid:95a8e8bb-56ca-4588-a04e-ad9d470f58fb,Namespace:kube-system,Attempt:0,}"
Apr 30 03:31:25.974027 containerd[1971]: time="2025-04-30T03:31:25.973979038Z" level=info msg="CreateContainer within sandbox \"7a4d1e173be9ca992e7f687be9a911e207a6638473565b4bf68b96be2b09b469\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"61599c49f80beeeceb4df4708e62cf0f97b7737535447531a653127082c4090b\""
Apr 30 03:31:25.974716 containerd[1971]: time="2025-04-30T03:31:25.974687575Z" level=info msg="StartContainer for \"61599c49f80beeeceb4df4708e62cf0f97b7737535447531a653127082c4090b\""
Apr 30 03:31:26.012213 containerd[1971]: time="2025-04-30T03:31:26.012088838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 03:31:26.012213 containerd[1971]: time="2025-04-30T03:31:26.012156639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 03:31:26.012213 containerd[1971]: time="2025-04-30T03:31:26.012172096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:31:26.012423 containerd[1971]: time="2025-04-30T03:31:26.012254927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:31:26.013496 systemd[1]: Started cri-containerd-61599c49f80beeeceb4df4708e62cf0f97b7737535447531a653127082c4090b.scope - libcontainer container 61599c49f80beeeceb4df4708e62cf0f97b7737535447531a653127082c4090b.
Apr 30 03:31:26.034138 systemd[1]: Started cri-containerd-5669ffd9c8cbcb8c08c78d18eae0175b14a044dacdf6cfe30544ec0dfe6fd731.scope - libcontainer container 5669ffd9c8cbcb8c08c78d18eae0175b14a044dacdf6cfe30544ec0dfe6fd731.
Apr 30 03:31:26.065015 containerd[1971]: time="2025-04-30T03:31:26.063623180Z" level=info msg="StartContainer for \"61599c49f80beeeceb4df4708e62cf0f97b7737535447531a653127082c4090b\" returns successfully"
Apr 30 03:31:26.092888 containerd[1971]: time="2025-04-30T03:31:26.092332237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-9cfpd,Uid:95a8e8bb-56ca-4588-a04e-ad9d470f58fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"5669ffd9c8cbcb8c08c78d18eae0175b14a044dacdf6cfe30544ec0dfe6fd731\""
Apr 30 03:31:26.474426 kubelet[3223]: I0430 03:31:26.474311 3223 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-48925" podStartSLOduration=1.474276506 podStartE2EDuration="1.474276506s" podCreationTimestamp="2025-04-30 03:31:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:31:26.474241259 +0000 UTC m=+16.193536577" watchObservedRunningTime="2025-04-30 03:31:26.474276506 +0000 UTC m=+16.193571824"
Apr 30 03:31:32.899454 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4025028576.mount: Deactivated successfully.
Apr 30 03:31:37.601178 containerd[1971]: time="2025-04-30T03:31:37.601125509Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:31:37.603331 containerd[1971]: time="2025-04-30T03:31:37.603276155Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Apr 30 03:31:37.605885 containerd[1971]: time="2025-04-30T03:31:37.605824370Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:31:37.607744 containerd[1971]: time="2025-04-30T03:31:37.607449524Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.675240698s"
Apr 30 03:31:37.607744 containerd[1971]: time="2025-04-30T03:31:37.607499438Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Apr 30 03:31:37.609381 containerd[1971]: time="2025-04-30T03:31:37.609323953Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Apr 30 03:31:37.610493 containerd[1971]: time="2025-04-30T03:31:37.610350989Z" level=info msg="CreateContainer within sandbox \"fd535e395b38f3888a30b4569c56a51ad325d95da04ff8bb056511bbb3b38efb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 30 03:31:37.733408 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3549828757.mount: Deactivated successfully.
Apr 30 03:31:37.740499 containerd[1971]: time="2025-04-30T03:31:37.740451985Z" level=info msg="CreateContainer within sandbox \"fd535e395b38f3888a30b4569c56a51ad325d95da04ff8bb056511bbb3b38efb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"69b2ac17d1f53a2516cc756cf0dd34ac975297c1ab4440c0b5af984a4612b171\""
Apr 30 03:31:37.741311 containerd[1971]: time="2025-04-30T03:31:37.741278266Z" level=info msg="StartContainer for \"69b2ac17d1f53a2516cc756cf0dd34ac975297c1ab4440c0b5af984a4612b171\""
Apr 30 03:31:37.846141 systemd[1]: Started cri-containerd-69b2ac17d1f53a2516cc756cf0dd34ac975297c1ab4440c0b5af984a4612b171.scope - libcontainer container 69b2ac17d1f53a2516cc756cf0dd34ac975297c1ab4440c0b5af984a4612b171.
Apr 30 03:31:37.877964 containerd[1971]: time="2025-04-30T03:31:37.877764913Z" level=info msg="StartContainer for \"69b2ac17d1f53a2516cc756cf0dd34ac975297c1ab4440c0b5af984a4612b171\" returns successfully"
Apr 30 03:31:37.886360 systemd[1]: cri-containerd-69b2ac17d1f53a2516cc756cf0dd34ac975297c1ab4440c0b5af984a4612b171.scope: Deactivated successfully.
Apr 30 03:31:38.094592 containerd[1971]: time="2025-04-30T03:31:38.066318704Z" level=info msg="shim disconnected" id=69b2ac17d1f53a2516cc756cf0dd34ac975297c1ab4440c0b5af984a4612b171 namespace=k8s.io
Apr 30 03:31:38.094592 containerd[1971]: time="2025-04-30T03:31:38.094580694Z" level=warning msg="cleaning up after shim disconnected" id=69b2ac17d1f53a2516cc756cf0dd34ac975297c1ab4440c0b5af984a4612b171 namespace=k8s.io
Apr 30 03:31:38.094592 containerd[1971]: time="2025-04-30T03:31:38.094598111Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:31:38.526628 containerd[1971]: time="2025-04-30T03:31:38.526588230Z" level=info msg="CreateContainer within sandbox \"fd535e395b38f3888a30b4569c56a51ad325d95da04ff8bb056511bbb3b38efb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 30 03:31:38.551049 containerd[1971]: time="2025-04-30T03:31:38.550985435Z" level=info msg="CreateContainer within sandbox \"fd535e395b38f3888a30b4569c56a51ad325d95da04ff8bb056511bbb3b38efb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"90e04a0283f6880a68eba23c6a2ac058d4263340f6ff5b12c9927fc5b998ddb7\""
Apr 30 03:31:38.551935 containerd[1971]: time="2025-04-30T03:31:38.551470778Z" level=info msg="StartContainer for \"90e04a0283f6880a68eba23c6a2ac058d4263340f6ff5b12c9927fc5b998ddb7\""
Apr 30 03:31:38.581135 systemd[1]: Started cri-containerd-90e04a0283f6880a68eba23c6a2ac058d4263340f6ff5b12c9927fc5b998ddb7.scope - libcontainer container 90e04a0283f6880a68eba23c6a2ac058d4263340f6ff5b12c9927fc5b998ddb7.
Apr 30 03:31:38.611239 containerd[1971]: time="2025-04-30T03:31:38.611198917Z" level=info msg="StartContainer for \"90e04a0283f6880a68eba23c6a2ac058d4263340f6ff5b12c9927fc5b998ddb7\" returns successfully"
Apr 30 03:31:38.624385 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 30 03:31:38.624650 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 30 03:31:38.624799 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Apr 30 03:31:38.630703 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 03:31:38.631269 systemd[1]: cri-containerd-90e04a0283f6880a68eba23c6a2ac058d4263340f6ff5b12c9927fc5b998ddb7.scope: Deactivated successfully.
Apr 30 03:31:38.670006 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 03:31:38.684521 containerd[1971]: time="2025-04-30T03:31:38.684458456Z" level=info msg="shim disconnected" id=90e04a0283f6880a68eba23c6a2ac058d4263340f6ff5b12c9927fc5b998ddb7 namespace=k8s.io
Apr 30 03:31:38.684521 containerd[1971]: time="2025-04-30T03:31:38.684512485Z" level=warning msg="cleaning up after shim disconnected" id=90e04a0283f6880a68eba23c6a2ac058d4263340f6ff5b12c9927fc5b998ddb7 namespace=k8s.io
Apr 30 03:31:38.684521 containerd[1971]: time="2025-04-30T03:31:38.684521248Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:31:38.722258 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-69b2ac17d1f53a2516cc756cf0dd34ac975297c1ab4440c0b5af984a4612b171-rootfs.mount: Deactivated successfully.
Apr 30 03:31:39.528254 containerd[1971]: time="2025-04-30T03:31:39.528217419Z" level=info msg="CreateContainer within sandbox \"fd535e395b38f3888a30b4569c56a51ad325d95da04ff8bb056511bbb3b38efb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 30 03:31:39.584437 containerd[1971]: time="2025-04-30T03:31:39.584303013Z" level=info msg="CreateContainer within sandbox \"fd535e395b38f3888a30b4569c56a51ad325d95da04ff8bb056511bbb3b38efb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f1d4117e2d20c1bdc6257deb271dba6584cf2b0936de78f5c664dfff01c99e71\""
Apr 30 03:31:39.585358 containerd[1971]: time="2025-04-30T03:31:39.585334713Z" level=info msg="StartContainer for \"f1d4117e2d20c1bdc6257deb271dba6584cf2b0936de78f5c664dfff01c99e71\""
Apr 30 03:31:39.629155 systemd[1]: Started cri-containerd-f1d4117e2d20c1bdc6257deb271dba6584cf2b0936de78f5c664dfff01c99e71.scope - libcontainer container f1d4117e2d20c1bdc6257deb271dba6584cf2b0936de78f5c664dfff01c99e71.
Apr 30 03:31:39.663024 containerd[1971]: time="2025-04-30T03:31:39.662949595Z" level=info msg="StartContainer for \"f1d4117e2d20c1bdc6257deb271dba6584cf2b0936de78f5c664dfff01c99e71\" returns successfully"
Apr 30 03:31:39.674515 systemd[1]: cri-containerd-f1d4117e2d20c1bdc6257deb271dba6584cf2b0936de78f5c664dfff01c99e71.scope: Deactivated successfully.
Apr 30 03:31:39.721772 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f1d4117e2d20c1bdc6257deb271dba6584cf2b0936de78f5c664dfff01c99e71-rootfs.mount: Deactivated successfully.
Apr 30 03:31:39.739164 containerd[1971]: time="2025-04-30T03:31:39.739105618Z" level=info msg="shim disconnected" id=f1d4117e2d20c1bdc6257deb271dba6584cf2b0936de78f5c664dfff01c99e71 namespace=k8s.io
Apr 30 03:31:39.739164 containerd[1971]: time="2025-04-30T03:31:39.739159552Z" level=warning msg="cleaning up after shim disconnected" id=f1d4117e2d20c1bdc6257deb271dba6584cf2b0936de78f5c664dfff01c99e71 namespace=k8s.io
Apr 30 03:31:39.739164 containerd[1971]: time="2025-04-30T03:31:39.739168279Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:31:39.876883 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount392495725.mount: Deactivated successfully.
Apr 30 03:31:40.477037 containerd[1971]: time="2025-04-30T03:31:40.476985978Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:31:40.479097 containerd[1971]: time="2025-04-30T03:31:40.479033887Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Apr 30 03:31:40.481200 containerd[1971]: time="2025-04-30T03:31:40.481135691Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:31:40.483443 containerd[1971]: time="2025-04-30T03:31:40.483394578Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.873886893s"
Apr 30 03:31:40.483443 containerd[1971]: time="2025-04-30T03:31:40.483435150Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Apr 30 03:31:40.487449 containerd[1971]: time="2025-04-30T03:31:40.487285803Z" level=info msg="CreateContainer within sandbox \"5669ffd9c8cbcb8c08c78d18eae0175b14a044dacdf6cfe30544ec0dfe6fd731\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Apr 30 03:31:40.511434 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3201795099.mount: Deactivated successfully.
Apr 30 03:31:40.516224 containerd[1971]: time="2025-04-30T03:31:40.516180706Z" level=info msg="CreateContainer within sandbox \"5669ffd9c8cbcb8c08c78d18eae0175b14a044dacdf6cfe30544ec0dfe6fd731\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"104a30750e0eebc08b7591caa6d662c9e8827e21fe2f8bece06a7f2dd615c3e8\""
Apr 30 03:31:40.517565 containerd[1971]: time="2025-04-30T03:31:40.517344278Z" level=info msg="StartContainer for \"104a30750e0eebc08b7591caa6d662c9e8827e21fe2f8bece06a7f2dd615c3e8\""
Apr 30 03:31:40.545782 containerd[1971]: time="2025-04-30T03:31:40.545581749Z" level=info msg="CreateContainer within sandbox \"fd535e395b38f3888a30b4569c56a51ad325d95da04ff8bb056511bbb3b38efb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 30 03:31:40.565106 systemd[1]: Started cri-containerd-104a30750e0eebc08b7591caa6d662c9e8827e21fe2f8bece06a7f2dd615c3e8.scope - libcontainer container 104a30750e0eebc08b7591caa6d662c9e8827e21fe2f8bece06a7f2dd615c3e8.
Apr 30 03:31:40.609996 containerd[1971]: time="2025-04-30T03:31:40.609606485Z" level=info msg="CreateContainer within sandbox \"fd535e395b38f3888a30b4569c56a51ad325d95da04ff8bb056511bbb3b38efb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7d722e8f8b69962bbe7b40a34c258cb1ab772602e88a9d33d042c26b95f7e7c1\""
Apr 30 03:31:40.613730 containerd[1971]: time="2025-04-30T03:31:40.611956291Z" level=info msg="StartContainer for \"7d722e8f8b69962bbe7b40a34c258cb1ab772602e88a9d33d042c26b95f7e7c1\""
Apr 30 03:31:40.622564 containerd[1971]: time="2025-04-30T03:31:40.622402086Z" level=info msg="StartContainer for \"104a30750e0eebc08b7591caa6d662c9e8827e21fe2f8bece06a7f2dd615c3e8\" returns successfully"
Apr 30 03:31:40.652140 systemd[1]: Started cri-containerd-7d722e8f8b69962bbe7b40a34c258cb1ab772602e88a9d33d042c26b95f7e7c1.scope - libcontainer container 7d722e8f8b69962bbe7b40a34c258cb1ab772602e88a9d33d042c26b95f7e7c1.
Apr 30 03:31:40.695742 systemd[1]: cri-containerd-7d722e8f8b69962bbe7b40a34c258cb1ab772602e88a9d33d042c26b95f7e7c1.scope: Deactivated successfully.
Apr 30 03:31:40.698821 containerd[1971]: time="2025-04-30T03:31:40.698780418Z" level=info msg="StartContainer for \"7d722e8f8b69962bbe7b40a34c258cb1ab772602e88a9d33d042c26b95f7e7c1\" returns successfully"
Apr 30 03:31:40.747780 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d722e8f8b69962bbe7b40a34c258cb1ab772602e88a9d33d042c26b95f7e7c1-rootfs.mount: Deactivated successfully.
Apr 30 03:31:40.761045 containerd[1971]: time="2025-04-30T03:31:40.760969498Z" level=info msg="shim disconnected" id=7d722e8f8b69962bbe7b40a34c258cb1ab772602e88a9d33d042c26b95f7e7c1 namespace=k8s.io
Apr 30 03:31:40.761045 containerd[1971]: time="2025-04-30T03:31:40.761040312Z" level=warning msg="cleaning up after shim disconnected" id=7d722e8f8b69962bbe7b40a34c258cb1ab772602e88a9d33d042c26b95f7e7c1 namespace=k8s.io
Apr 30 03:31:40.761045 containerd[1971]: time="2025-04-30T03:31:40.761049612Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:31:41.578285 containerd[1971]: time="2025-04-30T03:31:41.578224688Z" level=info msg="CreateContainer within sandbox \"fd535e395b38f3888a30b4569c56a51ad325d95da04ff8bb056511bbb3b38efb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 30 03:31:41.609812 containerd[1971]: time="2025-04-30T03:31:41.609763179Z" level=info msg="CreateContainer within sandbox \"fd535e395b38f3888a30b4569c56a51ad325d95da04ff8bb056511bbb3b38efb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9564ec6961f8badbd87d5d9e6afaab4f1925815cd0020b873fc824a1c2f9d78b\""
Apr 30 03:31:41.610713 containerd[1971]: time="2025-04-30T03:31:41.610678452Z" level=info msg="StartContainer for \"9564ec6961f8badbd87d5d9e6afaab4f1925815cd0020b873fc824a1c2f9d78b\""
Apr 30 03:31:41.675491 systemd[1]: Started cri-containerd-9564ec6961f8badbd87d5d9e6afaab4f1925815cd0020b873fc824a1c2f9d78b.scope - libcontainer container 9564ec6961f8badbd87d5d9e6afaab4f1925815cd0020b873fc824a1c2f9d78b.
Apr 30 03:31:41.706506 kubelet[3223]: I0430 03:31:41.703178 3223 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-9cfpd" podStartSLOduration=2.313400169 podStartE2EDuration="16.703156025s" podCreationTimestamp="2025-04-30 03:31:25 +0000 UTC" firstStartedPulling="2025-04-30 03:31:26.094562582 +0000 UTC m=+15.813857889" lastFinishedPulling="2025-04-30 03:31:40.484318428 +0000 UTC m=+30.203613745" observedRunningTime="2025-04-30 03:31:41.607707253 +0000 UTC m=+31.327002571" watchObservedRunningTime="2025-04-30 03:31:41.703156025 +0000 UTC m=+31.422451343"
Apr 30 03:31:41.724805 systemd[1]: run-containerd-runc-k8s.io-9564ec6961f8badbd87d5d9e6afaab4f1925815cd0020b873fc824a1c2f9d78b-runc.ClJ132.mount: Deactivated successfully.
Apr 30 03:31:41.755079 containerd[1971]: time="2025-04-30T03:31:41.755005550Z" level=info msg="StartContainer for \"9564ec6961f8badbd87d5d9e6afaab4f1925815cd0020b873fc824a1c2f9d78b\" returns successfully"
Apr 30 03:31:41.967155 kubelet[3223]: I0430 03:31:41.967110 3223 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Apr 30 03:31:42.000394 kubelet[3223]: I0430 03:31:42.000336 3223 topology_manager.go:215] "Topology Admit Handler" podUID="d81f4862-fe99-4544-b6f2-2330003e21c2" podNamespace="kube-system" podName="coredns-7db6d8ff4d-6bs7z"
Apr 30 03:31:42.005705 kubelet[3223]: I0430 03:31:42.005666 3223 topology_manager.go:215] "Topology Admit Handler" podUID="7761f679-f22f-4dca-bc31-4bba8c0261ce" podNamespace="kube-system" podName="coredns-7db6d8ff4d-wnbc9"
Apr 30 03:31:42.016401 systemd[1]: Created slice kubepods-burstable-podd81f4862_fe99_4544_b6f2_2330003e21c2.slice - libcontainer container kubepods-burstable-podd81f4862_fe99_4544_b6f2_2330003e21c2.slice.
Apr 30 03:31:42.026150 systemd[1]: Created slice kubepods-burstable-pod7761f679_f22f_4dca_bc31_4bba8c0261ce.slice - libcontainer container kubepods-burstable-pod7761f679_f22f_4dca_bc31_4bba8c0261ce.slice.
Apr 30 03:31:42.113405 kubelet[3223]: I0430 03:31:42.113266 3223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7l4k\" (UniqueName: \"kubernetes.io/projected/7761f679-f22f-4dca-bc31-4bba8c0261ce-kube-api-access-b7l4k\") pod \"coredns-7db6d8ff4d-wnbc9\" (UID: \"7761f679-f22f-4dca-bc31-4bba8c0261ce\") " pod="kube-system/coredns-7db6d8ff4d-wnbc9"
Apr 30 03:31:42.113405 kubelet[3223]: I0430 03:31:42.113306 3223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9ggq\" (UniqueName: \"kubernetes.io/projected/d81f4862-fe99-4544-b6f2-2330003e21c2-kube-api-access-n9ggq\") pod \"coredns-7db6d8ff4d-6bs7z\" (UID: \"d81f4862-fe99-4544-b6f2-2330003e21c2\") " pod="kube-system/coredns-7db6d8ff4d-6bs7z"
Apr 30 03:31:42.113405 kubelet[3223]: I0430 03:31:42.113329 3223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d81f4862-fe99-4544-b6f2-2330003e21c2-config-volume\") pod \"coredns-7db6d8ff4d-6bs7z\" (UID: \"d81f4862-fe99-4544-b6f2-2330003e21c2\") " pod="kube-system/coredns-7db6d8ff4d-6bs7z"
Apr 30 03:31:42.113405 kubelet[3223]: I0430 03:31:42.113349 3223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7761f679-f22f-4dca-bc31-4bba8c0261ce-config-volume\") pod \"coredns-7db6d8ff4d-wnbc9\" (UID: \"7761f679-f22f-4dca-bc31-4bba8c0261ce\") " pod="kube-system/coredns-7db6d8ff4d-wnbc9"
Apr 30 03:31:42.323722 containerd[1971]: time="2025-04-30T03:31:42.323024986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6bs7z,Uid:d81f4862-fe99-4544-b6f2-2330003e21c2,Namespace:kube-system,Attempt:0,}"
Apr 30 03:31:42.334777 containerd[1971]: time="2025-04-30T03:31:42.333239848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wnbc9,Uid:7761f679-f22f-4dca-bc31-4bba8c0261ce,Namespace:kube-system,Attempt:0,}"
Apr 30 03:31:42.615939 kubelet[3223]: I0430 03:31:42.615709 3223 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-x8295" podStartSLOduration=5.938068753 podStartE2EDuration="17.615668385s" podCreationTimestamp="2025-04-30 03:31:25 +0000 UTC" firstStartedPulling="2025-04-30 03:31:25.931278591 +0000 UTC m=+15.650573899" lastFinishedPulling="2025-04-30 03:31:37.608878219 +0000 UTC m=+27.328173531" observedRunningTime="2025-04-30 03:31:42.611408557 +0000 UTC m=+32.330703876" watchObservedRunningTime="2025-04-30 03:31:42.615668385 +0000 UTC m=+32.334963705"
Apr 30 03:31:43.390199 systemd[1]: Started sshd@7-172.31.22.79:22-147.75.109.163:37384.service - OpenSSH per-connection server daemon (147.75.109.163:37384).
Apr 30 03:31:43.687039 sshd[4302]: Accepted publickey for core from 147.75.109.163 port 37384 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g
Apr 30 03:31:43.688977 sshd[4302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:31:43.694100 systemd-logind[1949]: New session 8 of user core.
Apr 30 03:31:43.698324 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 30 03:31:44.482870 systemd-networkd[1886]: cilium_host: Link UP
Apr 30 03:31:44.483020 systemd-networkd[1886]: cilium_net: Link UP
Apr 30 03:31:44.483165 systemd-networkd[1886]: cilium_net: Gained carrier
Apr 30 03:31:44.483295 systemd-networkd[1886]: cilium_host: Gained carrier
Apr 30 03:31:44.485413 (udev-worker)[4316]: Network interface NamePolicy= disabled on kernel command line.
Apr 30 03:31:44.487618 (udev-worker)[4318]: Network interface NamePolicy= disabled on kernel command line.
Apr 30 03:31:44.523170 sshd[4302]: pam_unix(sshd:session): session closed for user core
Apr 30 03:31:44.551856 systemd[1]: sshd@7-172.31.22.79:22-147.75.109.163:37384.service: Deactivated successfully.
Apr 30 03:31:44.556402 systemd[1]: session-8.scope: Deactivated successfully.
Apr 30 03:31:44.558930 systemd-logind[1949]: Session 8 logged out. Waiting for processes to exit.
Apr 30 03:31:44.560582 systemd-logind[1949]: Removed session 8.
Apr 30 03:31:44.625753 (udev-worker)[4328]: Network interface NamePolicy= disabled on kernel command line.
Apr 30 03:31:44.634961 systemd-networkd[1886]: cilium_vxlan: Link UP
Apr 30 03:31:44.634975 systemd-networkd[1886]: cilium_vxlan: Gained carrier
Apr 30 03:31:44.972113 systemd-networkd[1886]: cilium_host: Gained IPv6LL
Apr 30 03:31:45.188992 systemd-networkd[1886]: cilium_net: Gained IPv6LL
Apr 30 03:31:45.231944 kernel: NET: Registered PF_ALG protocol family
Apr 30 03:31:45.929170 systemd-networkd[1886]: lxc_health: Link UP
Apr 30 03:31:45.940461 systemd-networkd[1886]: lxc_health: Gained carrier
Apr 30 03:31:46.547202 systemd-networkd[1886]: lxcc8e03c5b581c: Link UP
Apr 30 03:31:46.555929 kernel: eth0: renamed from tmp0e2a4
Apr 30 03:31:46.567233 (udev-worker)[4327]: Network interface NamePolicy= disabled on kernel command line.
Apr 30 03:31:46.576304 (udev-worker)[4271]: Network interface NamePolicy= disabled on kernel command line.
Apr 30 03:31:46.581191 kernel: eth0: renamed from tmpfcfc8
Apr 30 03:31:46.576351 systemd-networkd[1886]: lxcc8e03c5b581c: Gained carrier
Apr 30 03:31:46.576637 systemd-networkd[1886]: lxc061528c128bc: Link UP
Apr 30 03:31:46.585376 systemd-networkd[1886]: lxc061528c128bc: Gained carrier
Apr 30 03:31:46.599023 systemd-networkd[1886]: cilium_vxlan: Gained IPv6LL
Apr 30 03:31:47.108080 systemd-networkd[1886]: lxc_health: Gained IPv6LL
Apr 30 03:31:47.812134 systemd-networkd[1886]: lxc061528c128bc: Gained IPv6LL
Apr 30 03:31:48.516784 systemd-networkd[1886]: lxcc8e03c5b581c: Gained IPv6LL
Apr 30 03:31:49.574398 systemd[1]: Started sshd@8-172.31.22.79:22-147.75.109.163:34482.service - OpenSSH per-connection server daemon (147.75.109.163:34482).
Apr 30 03:31:49.863858 sshd[4686]: Accepted publickey for core from 147.75.109.163 port 34482 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g
Apr 30 03:31:49.866436 sshd[4686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:31:49.875723 systemd-logind[1949]: New session 9 of user core.
Apr 30 03:31:49.880468 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 30 03:31:50.258227 sshd[4686]: pam_unix(sshd:session): session closed for user core
Apr 30 03:31:50.264170 systemd-logind[1949]: Session 9 logged out. Waiting for processes to exit.
Apr 30 03:31:50.265468 systemd[1]: sshd@8-172.31.22.79:22-147.75.109.163:34482.service: Deactivated successfully.
Apr 30 03:31:50.269574 systemd[1]: session-9.scope: Deactivated successfully.
Apr 30 03:31:50.273226 systemd-logind[1949]: Removed session 9.
Apr 30 03:31:50.533480 ntpd[1943]: Listen normally on 8 cilium_host 192.168.0.213:123
Apr 30 03:31:50.534915 ntpd[1943]: 30 Apr 03:31:50 ntpd[1943]: Listen normally on 8 cilium_host 192.168.0.213:123
Apr 30 03:31:50.534915 ntpd[1943]: 30 Apr 03:31:50 ntpd[1943]: Listen normally on 9 cilium_net [fe80::9c6b:80ff:feda:ebcd%4]:123
Apr 30 03:31:50.534915 ntpd[1943]: 30 Apr 03:31:50 ntpd[1943]: Listen normally on 10 cilium_host [fe80::4094:9aff:feb1:346f%5]:123
Apr 30 03:31:50.534915 ntpd[1943]: 30 Apr 03:31:50 ntpd[1943]: Listen normally on 11 cilium_vxlan [fe80::bc11:40ff:fea9:d5ac%6]:123
Apr 30 03:31:50.534915 ntpd[1943]: 30 Apr 03:31:50 ntpd[1943]: Listen normally on 12 lxc_health [fe80::2071:cdff:fe6d:2830%8]:123
Apr 30 03:31:50.534915 ntpd[1943]: 30 Apr 03:31:50 ntpd[1943]: Listen normally on 13 lxcc8e03c5b581c [fe80::c8cf:cff:fe51:f344%10]:123
Apr 30 03:31:50.534915 ntpd[1943]: 30 Apr 03:31:50 ntpd[1943]: Listen normally on 14 lxc061528c128bc [fe80::88a7:a1ff:fe56:32c4%12]:123
Apr 30 03:31:50.534211 ntpd[1943]: Listen normally on 9 cilium_net [fe80::9c6b:80ff:feda:ebcd%4]:123
Apr 30 03:31:50.534271 ntpd[1943]: Listen normally on 10 cilium_host [fe80::4094:9aff:feb1:346f%5]:123
Apr 30 03:31:50.534311 ntpd[1943]: Listen normally on 11 cilium_vxlan [fe80::bc11:40ff:fea9:d5ac%6]:123
Apr 30 03:31:50.534350 ntpd[1943]: Listen normally on 12 lxc_health [fe80::2071:cdff:fe6d:2830%8]:123
Apr 30 03:31:50.534388 ntpd[1943]: Listen normally on 13 lxcc8e03c5b581c [fe80::c8cf:cff:fe51:f344%10]:123
Apr 30 03:31:50.534427 ntpd[1943]: Listen normally on 14 lxc061528c128bc [fe80::88a7:a1ff:fe56:32c4%12]:123
Apr 30 03:31:50.984777 containerd[1971]: time="2025-04-30T03:31:50.984532222Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 03:31:50.984777 containerd[1971]: time="2025-04-30T03:31:50.984580458Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 03:31:50.984777 containerd[1971]: time="2025-04-30T03:31:50.984591194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:31:50.984777 containerd[1971]: time="2025-04-30T03:31:50.984673887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:31:51.002768 containerd[1971]: time="2025-04-30T03:31:51.002324231Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 03:31:51.002768 containerd[1971]: time="2025-04-30T03:31:51.002386684Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 03:31:51.002768 containerd[1971]: time="2025-04-30T03:31:51.002403112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:31:51.004914 containerd[1971]: time="2025-04-30T03:31:51.003425729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:31:51.052506 systemd[1]: Started cri-containerd-0e2a4409ba9558ae2a6ac14c375b2dfd82bc8c1ef436384c7e9ac61c4f680404.scope - libcontainer container 0e2a4409ba9558ae2a6ac14c375b2dfd82bc8c1ef436384c7e9ac61c4f680404.
Apr 30 03:31:51.059457 systemd[1]: Started cri-containerd-fcfc8eb4add6f2159abae2f58389040354e8ef2c2405afbe876f208fb7ce18b6.scope - libcontainer container fcfc8eb4add6f2159abae2f58389040354e8ef2c2405afbe876f208fb7ce18b6.
Apr 30 03:31:51.205286 containerd[1971]: time="2025-04-30T03:31:51.204853383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wnbc9,Uid:7761f679-f22f-4dca-bc31-4bba8c0261ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e2a4409ba9558ae2a6ac14c375b2dfd82bc8c1ef436384c7e9ac61c4f680404\""
Apr 30 03:31:51.245715 containerd[1971]: time="2025-04-30T03:31:51.245103604Z" level=info msg="CreateContainer within sandbox \"0e2a4409ba9558ae2a6ac14c375b2dfd82bc8c1ef436384c7e9ac61c4f680404\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 30 03:31:51.269560 containerd[1971]: time="2025-04-30T03:31:51.269161495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6bs7z,Uid:d81f4862-fe99-4544-b6f2-2330003e21c2,Namespace:kube-system,Attempt:0,} returns sandbox id \"fcfc8eb4add6f2159abae2f58389040354e8ef2c2405afbe876f208fb7ce18b6\""
Apr 30 03:31:51.278121 containerd[1971]: time="2025-04-30T03:31:51.277984847Z" level=info msg="CreateContainer within sandbox \"fcfc8eb4add6f2159abae2f58389040354e8ef2c2405afbe876f208fb7ce18b6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 30 03:31:51.301936 containerd[1971]: time="2025-04-30T03:31:51.301869348Z" level=info msg="CreateContainer within sandbox \"0e2a4409ba9558ae2a6ac14c375b2dfd82bc8c1ef436384c7e9ac61c4f680404\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"537cd169e2348dcbe8d76a38ce18a8e8ec214db107d86c26c3c9ffa19db8179c\""
Apr 30 03:31:51.302526 containerd[1971]: time="2025-04-30T03:31:51.302424643Z" level=info msg="StartContainer for \"537cd169e2348dcbe8d76a38ce18a8e8ec214db107d86c26c3c9ffa19db8179c\""
Apr 30 03:31:51.314983 containerd[1971]: time="2025-04-30T03:31:51.313891355Z" level=info msg="CreateContainer within sandbox \"fcfc8eb4add6f2159abae2f58389040354e8ef2c2405afbe876f208fb7ce18b6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bb7df0d7a753c6c823942754b0bc74f2cd219b66fcac284e43716a84128e4372\""
Apr 30 03:31:51.317071 containerd[1971]: time="2025-04-30T03:31:51.315448207Z" level=info msg="StartContainer for \"bb7df0d7a753c6c823942754b0bc74f2cd219b66fcac284e43716a84128e4372\""
Apr 30 03:31:51.340093 systemd[1]: Started cri-containerd-537cd169e2348dcbe8d76a38ce18a8e8ec214db107d86c26c3c9ffa19db8179c.scope - libcontainer container 537cd169e2348dcbe8d76a38ce18a8e8ec214db107d86c26c3c9ffa19db8179c.
Apr 30 03:31:51.357062 systemd[1]: Started cri-containerd-bb7df0d7a753c6c823942754b0bc74f2cd219b66fcac284e43716a84128e4372.scope - libcontainer container bb7df0d7a753c6c823942754b0bc74f2cd219b66fcac284e43716a84128e4372.
Apr 30 03:31:51.400507 containerd[1971]: time="2025-04-30T03:31:51.400363460Z" level=info msg="StartContainer for \"bb7df0d7a753c6c823942754b0bc74f2cd219b66fcac284e43716a84128e4372\" returns successfully"
Apr 30 03:31:51.400507 containerd[1971]: time="2025-04-30T03:31:51.400451053Z" level=info msg="StartContainer for \"537cd169e2348dcbe8d76a38ce18a8e8ec214db107d86c26c3c9ffa19db8179c\" returns successfully"
Apr 30 03:31:51.671929 kubelet[3223]: I0430 03:31:51.644297 3223 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-wnbc9" podStartSLOduration=26.644265459 podStartE2EDuration="26.644265459s" podCreationTimestamp="2025-04-30 03:31:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:31:51.644017306 +0000 UTC m=+41.363312625" watchObservedRunningTime="2025-04-30 03:31:51.644265459 +0000 UTC m=+41.363560776"
Apr 30 03:31:51.991350 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2994608593.mount: Deactivated successfully.
Apr 30 03:31:52.340925 kubelet[3223]: I0430 03:31:52.340770 3223 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-6bs7z" podStartSLOduration=27.340751411 podStartE2EDuration="27.340751411s" podCreationTimestamp="2025-04-30 03:31:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:31:51.671408844 +0000 UTC m=+41.390704143" watchObservedRunningTime="2025-04-30 03:31:52.340751411 +0000 UTC m=+42.060046728"
Apr 30 03:31:55.303960 systemd[1]: Started sshd@9-172.31.22.79:22-147.75.109.163:34490.service - OpenSSH per-connection server daemon (147.75.109.163:34490).
Apr 30 03:31:55.591240 sshd[4872]: Accepted publickey for core from 147.75.109.163 port 34490 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g
Apr 30 03:31:55.592957 sshd[4872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:31:55.597951 systemd-logind[1949]: New session 10 of user core.
Apr 30 03:31:55.603086 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 30 03:31:55.948681 sshd[4872]: pam_unix(sshd:session): session closed for user core
Apr 30 03:31:55.952843 systemd[1]: sshd@9-172.31.22.79:22-147.75.109.163:34490.service: Deactivated successfully.
Apr 30 03:31:55.954627 systemd[1]: session-10.scope: Deactivated successfully.
Apr 30 03:31:55.955824 systemd-logind[1949]: Session 10 logged out. Waiting for processes to exit.
Apr 30 03:31:55.957371 systemd-logind[1949]: Removed session 10.
Apr 30 03:32:00.994673 systemd[1]: Started sshd@10-172.31.22.79:22-147.75.109.163:56838.service - OpenSSH per-connection server daemon (147.75.109.163:56838).
Apr 30 03:32:01.245820 sshd[4890]: Accepted publickey for core from 147.75.109.163 port 56838 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g
Apr 30 03:32:01.248184 sshd[4890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:32:01.256883 systemd-logind[1949]: New session 11 of user core.
Apr 30 03:32:01.262188 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 30 03:32:01.641445 sshd[4890]: pam_unix(sshd:session): session closed for user core
Apr 30 03:32:01.652338 systemd[1]: sshd@10-172.31.22.79:22-147.75.109.163:56838.service: Deactivated successfully.
Apr 30 03:32:01.664403 systemd[1]: session-11.scope: Deactivated successfully.
Apr 30 03:32:01.672498 systemd-logind[1949]: Session 11 logged out. Waiting for processes to exit.
Apr 30 03:32:01.675601 systemd-logind[1949]: Removed session 11.
Apr 30 03:32:06.697368 systemd[1]: Started sshd@11-172.31.22.79:22-147.75.109.163:56850.service - OpenSSH per-connection server daemon (147.75.109.163:56850).
Apr 30 03:32:06.960241 sshd[4904]: Accepted publickey for core from 147.75.109.163 port 56850 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g
Apr 30 03:32:06.960847 sshd[4904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:32:06.966204 systemd-logind[1949]: New session 12 of user core.
Apr 30 03:32:06.971110 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 30 03:32:07.276323 sshd[4904]: pam_unix(sshd:session): session closed for user core
Apr 30 03:32:07.284164 systemd[1]: sshd@11-172.31.22.79:22-147.75.109.163:56850.service: Deactivated successfully.
Apr 30 03:32:07.286859 systemd[1]: session-12.scope: Deactivated successfully.
Apr 30 03:32:07.288762 systemd-logind[1949]: Session 12 logged out. Waiting for processes to exit.
Apr 30 03:32:07.290407 systemd-logind[1949]: Removed session 12.
Apr 30 03:32:07.322153 systemd[1]: Started sshd@12-172.31.22.79:22-147.75.109.163:60686.service - OpenSSH per-connection server daemon (147.75.109.163:60686).
Apr 30 03:32:07.584847 sshd[4918]: Accepted publickey for core from 147.75.109.163 port 60686 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g
Apr 30 03:32:07.586690 sshd[4918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:32:07.591477 systemd-logind[1949]: New session 13 of user core.
Apr 30 03:32:07.597120 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 30 03:32:07.984655 sshd[4918]: pam_unix(sshd:session): session closed for user core
Apr 30 03:32:07.991444 systemd[1]: sshd@12-172.31.22.79:22-147.75.109.163:60686.service: Deactivated successfully.
Apr 30 03:32:07.994243 systemd[1]: session-13.scope: Deactivated successfully.
Apr 30 03:32:07.995616 systemd-logind[1949]: Session 13 logged out. Waiting for processes to exit.
Apr 30 03:32:07.997246 systemd-logind[1949]: Removed session 13.
Apr 30 03:32:08.029327 systemd[1]: Started sshd@13-172.31.22.79:22-147.75.109.163:60702.service - OpenSSH per-connection server daemon (147.75.109.163:60702).
Apr 30 03:32:08.295996 sshd[4930]: Accepted publickey for core from 147.75.109.163 port 60702 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g
Apr 30 03:32:08.296708 sshd[4930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:32:08.308197 systemd-logind[1949]: New session 14 of user core.
Apr 30 03:32:08.316637 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 30 03:32:08.592744 sshd[4930]: pam_unix(sshd:session): session closed for user core
Apr 30 03:32:08.596435 systemd-logind[1949]: Session 14 logged out. Waiting for processes to exit.
Apr 30 03:32:08.597395 systemd[1]: sshd@13-172.31.22.79:22-147.75.109.163:60702.service: Deactivated successfully.
Apr 30 03:32:08.599392 systemd[1]: session-14.scope: Deactivated successfully.
Apr 30 03:32:08.600844 systemd-logind[1949]: Removed session 14.
Apr 30 03:32:13.638983 systemd[1]: Started sshd@14-172.31.22.79:22-147.75.109.163:60706.service - OpenSSH per-connection server daemon (147.75.109.163:60706).
Apr 30 03:32:13.883516 sshd[4945]: Accepted publickey for core from 147.75.109.163 port 60706 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g
Apr 30 03:32:13.884094 sshd[4945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:32:13.889590 systemd-logind[1949]: New session 15 of user core.
Apr 30 03:32:13.894121 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 30 03:32:14.138664 sshd[4945]: pam_unix(sshd:session): session closed for user core
Apr 30 03:32:14.143264 systemd[1]: sshd@14-172.31.22.79:22-147.75.109.163:60706.service: Deactivated successfully.
Apr 30 03:32:14.145671 systemd[1]: session-15.scope: Deactivated successfully.
Apr 30 03:32:14.146887 systemd-logind[1949]: Session 15 logged out. Waiting for processes to exit.
Apr 30 03:32:14.148182 systemd-logind[1949]: Removed session 15.
Apr 30 03:32:19.191697 systemd[1]: Started sshd@15-172.31.22.79:22-147.75.109.163:39172.service - OpenSSH per-connection server daemon (147.75.109.163:39172).
Apr 30 03:32:19.435145 sshd[4958]: Accepted publickey for core from 147.75.109.163 port 39172 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g
Apr 30 03:32:19.436910 sshd[4958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:32:19.442036 systemd-logind[1949]: New session 16 of user core.
Apr 30 03:32:19.446152 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 30 03:32:19.690229 sshd[4958]: pam_unix(sshd:session): session closed for user core
Apr 30 03:32:19.693631 systemd[1]: sshd@15-172.31.22.79:22-147.75.109.163:39172.service: Deactivated successfully.
Apr 30 03:32:19.695593 systemd[1]: session-16.scope: Deactivated successfully.
Apr 30 03:32:19.696968 systemd-logind[1949]: Session 16 logged out. Waiting for processes to exit.
Apr 30 03:32:19.698359 systemd-logind[1949]: Removed session 16.
Apr 30 03:32:19.740105 systemd[1]: Started sshd@16-172.31.22.79:22-147.75.109.163:39184.service - OpenSSH per-connection server daemon (147.75.109.163:39184).
Apr 30 03:32:19.992008 sshd[4970]: Accepted publickey for core from 147.75.109.163 port 39184 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g
Apr 30 03:32:19.992630 sshd[4970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:32:19.997108 systemd-logind[1949]: New session 17 of user core.
Apr 30 03:32:20.004148 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 30 03:32:20.701747 sshd[4970]: pam_unix(sshd:session): session closed for user core
Apr 30 03:32:20.710643 systemd[1]: sshd@16-172.31.22.79:22-147.75.109.163:39184.service: Deactivated successfully.
Apr 30 03:32:20.710989 systemd-logind[1949]: Session 17 logged out. Waiting for processes to exit.
Apr 30 03:32:20.713463 systemd[1]: session-17.scope: Deactivated successfully.
Apr 30 03:32:20.714518 systemd-logind[1949]: Removed session 17.
Apr 30 03:32:20.746030 systemd[1]: Started sshd@17-172.31.22.79:22-147.75.109.163:39192.service - OpenSSH per-connection server daemon (147.75.109.163:39192).
Apr 30 03:32:21.026349 sshd[4990]: Accepted publickey for core from 147.75.109.163 port 39192 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g
Apr 30 03:32:21.027856 sshd[4990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:32:21.033035 systemd-logind[1949]: New session 18 of user core.
Apr 30 03:32:21.039102 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 30 03:32:22.769236 sshd[4990]: pam_unix(sshd:session): session closed for user core
Apr 30 03:32:22.781743 systemd[1]: sshd@17-172.31.22.79:22-147.75.109.163:39192.service: Deactivated successfully.
Apr 30 03:32:22.785269 systemd[1]: session-18.scope: Deactivated successfully.
Apr 30 03:32:22.786717 systemd-logind[1949]: Session 18 logged out. Waiting for processes to exit.
Apr 30 03:32:22.788459 systemd-logind[1949]: Removed session 18.
Apr 30 03:32:22.810219 systemd[1]: Started sshd@18-172.31.22.79:22-147.75.109.163:39208.service - OpenSSH per-connection server daemon (147.75.109.163:39208).
Apr 30 03:32:23.053141 sshd[5009]: Accepted publickey for core from 147.75.109.163 port 39208 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g
Apr 30 03:32:23.054842 sshd[5009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:32:23.059736 systemd-logind[1949]: New session 19 of user core.
Apr 30 03:32:23.063065 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 30 03:32:23.657999 sshd[5009]: pam_unix(sshd:session): session closed for user core
Apr 30 03:32:23.661724 systemd-logind[1949]: Session 19 logged out. Waiting for processes to exit.
Apr 30 03:32:23.662585 systemd[1]: sshd@18-172.31.22.79:22-147.75.109.163:39208.service: Deactivated successfully.
Apr 30 03:32:23.664656 systemd[1]: session-19.scope: Deactivated successfully.
Apr 30 03:32:23.665870 systemd-logind[1949]: Removed session 19.
Apr 30 03:32:23.712364 systemd[1]: Started sshd@19-172.31.22.79:22-147.75.109.163:39222.service - OpenSSH per-connection server daemon (147.75.109.163:39222).
Apr 30 03:32:23.955939 sshd[5020]: Accepted publickey for core from 147.75.109.163 port 39222 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g
Apr 30 03:32:23.956690 sshd[5020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:32:23.961485 systemd-logind[1949]: New session 20 of user core.
Apr 30 03:32:23.969165 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 30 03:32:24.216607 sshd[5020]: pam_unix(sshd:session): session closed for user core
Apr 30 03:32:24.219959 systemd[1]: sshd@19-172.31.22.79:22-147.75.109.163:39222.service: Deactivated successfully.
Apr 30 03:32:24.222052 systemd[1]: session-20.scope: Deactivated successfully.
Apr 30 03:32:24.223571 systemd-logind[1949]: Session 20 logged out. Waiting for processes to exit.
Apr 30 03:32:24.230936 systemd-logind[1949]: Removed session 20.
Apr 30 03:32:29.270604 systemd[1]: Started sshd@20-172.31.22.79:22-147.75.109.163:45368.service - OpenSSH per-connection server daemon (147.75.109.163:45368).
Apr 30 03:32:29.513532 sshd[5037]: Accepted publickey for core from 147.75.109.163 port 45368 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g
Apr 30 03:32:29.515066 sshd[5037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:32:29.520013 systemd-logind[1949]: New session 21 of user core.
Apr 30 03:32:29.526128 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 30 03:32:29.767634 sshd[5037]: pam_unix(sshd:session): session closed for user core
Apr 30 03:32:29.771381 systemd-logind[1949]: Session 21 logged out. Waiting for processes to exit.
Apr 30 03:32:29.772126 systemd[1]: sshd@20-172.31.22.79:22-147.75.109.163:45368.service: Deactivated successfully.
Apr 30 03:32:29.774368 systemd[1]: session-21.scope: Deactivated successfully.
Apr 30 03:32:29.776206 systemd-logind[1949]: Removed session 21.
Apr 30 03:32:34.821295 systemd[1]: Started sshd@21-172.31.22.79:22-147.75.109.163:45370.service - OpenSSH per-connection server daemon (147.75.109.163:45370).
Apr 30 03:32:35.064524 sshd[5050]: Accepted publickey for core from 147.75.109.163 port 45370 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g
Apr 30 03:32:35.065971 sshd[5050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:32:35.077380 systemd-logind[1949]: New session 22 of user core.
Apr 30 03:32:35.083129 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 30 03:32:35.316620 sshd[5050]: pam_unix(sshd:session): session closed for user core
Apr 30 03:32:35.320455 systemd[1]: sshd@21-172.31.22.79:22-147.75.109.163:45370.service: Deactivated successfully.
Apr 30 03:32:35.322839 systemd[1]: session-22.scope: Deactivated successfully.
Apr 30 03:32:35.323883 systemd-logind[1949]: Session 22 logged out. Waiting for processes to exit.
Apr 30 03:32:35.326050 systemd-logind[1949]: Removed session 22.
Apr 30 03:32:40.361893 systemd[1]: Started sshd@22-172.31.22.79:22-147.75.109.163:36304.service - OpenSSH per-connection server daemon (147.75.109.163:36304).
Apr 30 03:32:40.616441 sshd[5064]: Accepted publickey for core from 147.75.109.163 port 36304 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g
Apr 30 03:32:40.616996 sshd[5064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:32:40.622789 systemd-logind[1949]: New session 23 of user core.
Apr 30 03:32:40.632292 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 30 03:32:40.871382 sshd[5064]: pam_unix(sshd:session): session closed for user core
Apr 30 03:32:40.874968 systemd[1]: sshd@22-172.31.22.79:22-147.75.109.163:36304.service: Deactivated successfully.
Apr 30 03:32:40.877387 systemd[1]: session-23.scope: Deactivated successfully.
Apr 30 03:32:40.878173 systemd-logind[1949]: Session 23 logged out. Waiting for processes to exit.
Apr 30 03:32:40.879147 systemd-logind[1949]: Removed session 23.
Apr 30 03:32:40.917050 systemd[1]: Started sshd@23-172.31.22.79:22-147.75.109.163:36314.service - OpenSSH per-connection server daemon (147.75.109.163:36314).
Apr 30 03:32:41.164019 sshd[5077]: Accepted publickey for core from 147.75.109.163 port 36314 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g
Apr 30 03:32:41.165562 sshd[5077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:32:41.170856 systemd-logind[1949]: New session 24 of user core.
Apr 30 03:32:41.175115 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 30 03:32:42.878011 containerd[1971]: time="2025-04-30T03:32:42.877964658Z" level=info msg="StopContainer for \"104a30750e0eebc08b7591caa6d662c9e8827e21fe2f8bece06a7f2dd615c3e8\" with timeout 30 (s)"
Apr 30 03:32:42.880869 containerd[1971]: time="2025-04-30T03:32:42.880577869Z" level=info msg="Stop container \"104a30750e0eebc08b7591caa6d662c9e8827e21fe2f8bece06a7f2dd615c3e8\" with signal terminated"
Apr 30 03:32:42.911178 systemd[1]: cri-containerd-104a30750e0eebc08b7591caa6d662c9e8827e21fe2f8bece06a7f2dd615c3e8.scope: Deactivated successfully.
Apr 30 03:32:42.924420 systemd[1]: run-containerd-runc-k8s.io-9564ec6961f8badbd87d5d9e6afaab4f1925815cd0020b873fc824a1c2f9d78b-runc.FPzq5h.mount: Deactivated successfully.
Apr 30 03:32:42.958162 containerd[1971]: time="2025-04-30T03:32:42.957999999Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 30 03:32:42.960551 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-104a30750e0eebc08b7591caa6d662c9e8827e21fe2f8bece06a7f2dd615c3e8-rootfs.mount: Deactivated successfully.
Apr 30 03:32:42.963273 containerd[1971]: time="2025-04-30T03:32:42.962610159Z" level=info msg="StopContainer for \"9564ec6961f8badbd87d5d9e6afaab4f1925815cd0020b873fc824a1c2f9d78b\" with timeout 2 (s)"
Apr 30 03:32:42.963273 containerd[1971]: time="2025-04-30T03:32:42.962954966Z" level=info msg="Stop container \"9564ec6961f8badbd87d5d9e6afaab4f1925815cd0020b873fc824a1c2f9d78b\" with signal terminated"
Apr 30 03:32:42.972349 systemd-networkd[1886]: lxc_health: Link DOWN
Apr 30 03:32:42.972358 systemd-networkd[1886]: lxc_health: Lost carrier
Apr 30 03:32:42.976773 containerd[1971]: time="2025-04-30T03:32:42.976660523Z" level=info msg="shim disconnected" id=104a30750e0eebc08b7591caa6d662c9e8827e21fe2f8bece06a7f2dd615c3e8 namespace=k8s.io
Apr 30 03:32:42.976773 containerd[1971]: time="2025-04-30T03:32:42.976743587Z" level=warning msg="cleaning up after shim disconnected" id=104a30750e0eebc08b7591caa6d662c9e8827e21fe2f8bece06a7f2dd615c3e8 namespace=k8s.io
Apr 30 03:32:42.977233 containerd[1971]: time="2025-04-30T03:32:42.976756641Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:32:42.999308 systemd[1]: cri-containerd-9564ec6961f8badbd87d5d9e6afaab4f1925815cd0020b873fc824a1c2f9d78b.scope: Deactivated successfully.
Apr 30 03:32:42.999884 systemd[1]: cri-containerd-9564ec6961f8badbd87d5d9e6afaab4f1925815cd0020b873fc824a1c2f9d78b.scope: Consumed 7.690s CPU time.
Apr 30 03:32:43.019616 containerd[1971]: time="2025-04-30T03:32:43.019570045Z" level=info msg="StopContainer for \"104a30750e0eebc08b7591caa6d662c9e8827e21fe2f8bece06a7f2dd615c3e8\" returns successfully"
Apr 30 03:32:43.020579 containerd[1971]: time="2025-04-30T03:32:43.020538437Z" level=info msg="StopPodSandbox for \"5669ffd9c8cbcb8c08c78d18eae0175b14a044dacdf6cfe30544ec0dfe6fd731\""
Apr 30 03:32:43.020710 containerd[1971]: time="2025-04-30T03:32:43.020591435Z" level=info msg="Container to stop \"104a30750e0eebc08b7591caa6d662c9e8827e21fe2f8bece06a7f2dd615c3e8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 03:32:43.025384 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5669ffd9c8cbcb8c08c78d18eae0175b14a044dacdf6cfe30544ec0dfe6fd731-shm.mount: Deactivated successfully.
Apr 30 03:32:43.037386 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9564ec6961f8badbd87d5d9e6afaab4f1925815cd0020b873fc824a1c2f9d78b-rootfs.mount: Deactivated successfully.
Apr 30 03:32:43.041222 systemd[1]: cri-containerd-5669ffd9c8cbcb8c08c78d18eae0175b14a044dacdf6cfe30544ec0dfe6fd731.scope: Deactivated successfully.
Apr 30 03:32:43.063266 containerd[1971]: time="2025-04-30T03:32:43.063168881Z" level=info msg="shim disconnected" id=9564ec6961f8badbd87d5d9e6afaab4f1925815cd0020b873fc824a1c2f9d78b namespace=k8s.io
Apr 30 03:32:43.063576 containerd[1971]: time="2025-04-30T03:32:43.063539857Z" level=warning msg="cleaning up after shim disconnected" id=9564ec6961f8badbd87d5d9e6afaab4f1925815cd0020b873fc824a1c2f9d78b namespace=k8s.io
Apr 30 03:32:43.063719 containerd[1971]: time="2025-04-30T03:32:43.063698919Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:32:43.077937 containerd[1971]: time="2025-04-30T03:32:43.077619760Z" level=info msg="shim disconnected" id=5669ffd9c8cbcb8c08c78d18eae0175b14a044dacdf6cfe30544ec0dfe6fd731 namespace=k8s.io
Apr 30 03:32:43.077937 containerd[1971]: time="2025-04-30T03:32:43.077692229Z" level=warning msg="cleaning up after shim disconnected" id=5669ffd9c8cbcb8c08c78d18eae0175b14a044dacdf6cfe30544ec0dfe6fd731 namespace=k8s.io
Apr 30 03:32:43.077937 containerd[1971]: time="2025-04-30T03:32:43.077703488Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:32:43.090922 containerd[1971]: time="2025-04-30T03:32:43.089548214Z" level=info msg="StopContainer for \"9564ec6961f8badbd87d5d9e6afaab4f1925815cd0020b873fc824a1c2f9d78b\" returns successfully"
Apr 30 03:32:43.090922 containerd[1971]: time="2025-04-30T03:32:43.090310157Z" level=info msg="StopPodSandbox for \"fd535e395b38f3888a30b4569c56a51ad325d95da04ff8bb056511bbb3b38efb\""
Apr 30 03:32:43.090922 containerd[1971]: time="2025-04-30T03:32:43.090360418Z" level=info msg="Container to stop \"9564ec6961f8badbd87d5d9e6afaab4f1925815cd0020b873fc824a1c2f9d78b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 03:32:43.090922 containerd[1971]: time="2025-04-30T03:32:43.090379938Z" level=info msg="Container to stop \"69b2ac17d1f53a2516cc756cf0dd34ac975297c1ab4440c0b5af984a4612b171\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 03:32:43.090922 containerd[1971]: time="2025-04-30T03:32:43.090396035Z" level=info msg="Container to stop \"90e04a0283f6880a68eba23c6a2ac058d4263340f6ff5b12c9927fc5b998ddb7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 03:32:43.090922 containerd[1971]: time="2025-04-30T03:32:43.090411066Z" level=info msg="Container to stop \"f1d4117e2d20c1bdc6257deb271dba6584cf2b0936de78f5c664dfff01c99e71\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 03:32:43.090922 containerd[1971]: time="2025-04-30T03:32:43.090426435Z" level=info msg="Container to stop \"7d722e8f8b69962bbe7b40a34c258cb1ab772602e88a9d33d042c26b95f7e7c1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 03:32:43.105226 systemd[1]: cri-containerd-fd535e395b38f3888a30b4569c56a51ad325d95da04ff8bb056511bbb3b38efb.scope: Deactivated successfully.
Apr 30 03:32:43.115070 containerd[1971]: time="2025-04-30T03:32:43.115022511Z" level=info msg="TearDown network for sandbox \"5669ffd9c8cbcb8c08c78d18eae0175b14a044dacdf6cfe30544ec0dfe6fd731\" successfully"
Apr 30 03:32:43.115070 containerd[1971]: time="2025-04-30T03:32:43.115069926Z" level=info msg="StopPodSandbox for \"5669ffd9c8cbcb8c08c78d18eae0175b14a044dacdf6cfe30544ec0dfe6fd731\" returns successfully"
Apr 30 03:32:43.144242 containerd[1971]: time="2025-04-30T03:32:43.144030222Z" level=info msg="shim disconnected" id=fd535e395b38f3888a30b4569c56a51ad325d95da04ff8bb056511bbb3b38efb namespace=k8s.io
Apr 30 03:32:43.144242 containerd[1971]: time="2025-04-30T03:32:43.144175326Z" level=warning msg="cleaning up after shim disconnected" id=fd535e395b38f3888a30b4569c56a51ad325d95da04ff8bb056511bbb3b38efb namespace=k8s.io
Apr 30 03:32:43.144242 containerd[1971]: time="2025-04-30T03:32:43.144188885Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:32:43.163255 containerd[1971]: time="2025-04-30T03:32:43.163135856Z" level=info msg="TearDown network for sandbox \"fd535e395b38f3888a30b4569c56a51ad325d95da04ff8bb056511bbb3b38efb\" successfully"
Apr 30 03:32:43.163255 containerd[1971]: time="2025-04-30T03:32:43.163169696Z" level=info msg="StopPodSandbox for \"fd535e395b38f3888a30b4569c56a51ad325d95da04ff8bb056511bbb3b38efb\" returns successfully"
Apr 30 03:32:43.225059 kubelet[3223]: I0430 03:32:43.225012 3223 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/95a8e8bb-56ca-4588-a04e-ad9d470f58fb-cilium-config-path\") pod \"95a8e8bb-56ca-4588-a04e-ad9d470f58fb\" (UID: \"95a8e8bb-56ca-4588-a04e-ad9d470f58fb\") "
Apr 30 03:32:43.225726 kubelet[3223]: I0430 03:32:43.225072 3223 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtqdl\" (UniqueName: \"kubernetes.io/projected/95a8e8bb-56ca-4588-a04e-ad9d470f58fb-kube-api-access-qtqdl\") pod \"95a8e8bb-56ca-4588-a04e-ad9d470f58fb\" (UID: \"95a8e8bb-56ca-4588-a04e-ad9d470f58fb\") "
Apr 30 03:32:43.240386 kubelet[3223]: I0430 03:32:43.238289 3223 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95a8e8bb-56ca-4588-a04e-ad9d470f58fb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "95a8e8bb-56ca-4588-a04e-ad9d470f58fb" (UID: "95a8e8bb-56ca-4588-a04e-ad9d470f58fb"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Apr 30 03:32:43.259036 kubelet[3223]: I0430 03:32:43.258959 3223 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95a8e8bb-56ca-4588-a04e-ad9d470f58fb-kube-api-access-qtqdl" (OuterVolumeSpecName: "kube-api-access-qtqdl") pod "95a8e8bb-56ca-4588-a04e-ad9d470f58fb" (UID: "95a8e8bb-56ca-4588-a04e-ad9d470f58fb"). InnerVolumeSpecName "kube-api-access-qtqdl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 30 03:32:43.328024 kubelet[3223]: I0430 03:32:43.325711 3223 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ff7749c4-9f69-4b02-bf37-e72358ca29f9-host-proc-sys-net\") pod \"ff7749c4-9f69-4b02-bf37-e72358ca29f9\" (UID: \"ff7749c4-9f69-4b02-bf37-e72358ca29f9\") "
Apr 30 03:32:43.328024 kubelet[3223]: I0430 03:32:43.325759 3223 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ff7749c4-9f69-4b02-bf37-e72358ca29f9-cilium-config-path\") pod \"ff7749c4-9f69-4b02-bf37-e72358ca29f9\" (UID: \"ff7749c4-9f69-4b02-bf37-e72358ca29f9\") "
Apr 30 03:32:43.328024 kubelet[3223]: I0430 03:32:43.325786 3223 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxdcw\" (UniqueName: \"kubernetes.io/projected/ff7749c4-9f69-4b02-bf37-e72358ca29f9-kube-api-access-lxdcw\") pod \"ff7749c4-9f69-4b02-bf37-e72358ca29f9\" (UID: \"ff7749c4-9f69-4b02-bf37-e72358ca29f9\") "
Apr 30 03:32:43.328024 kubelet[3223]: I0430 03:32:43.325805 3223 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ff7749c4-9f69-4b02-bf37-e72358ca29f9-clustermesh-secrets\") pod \"ff7749c4-9f69-4b02-bf37-e72358ca29f9\" (UID: \"ff7749c4-9f69-4b02-bf37-e72358ca29f9\") "
Apr 30 03:32:43.328024 kubelet[3223]: I0430 03:32:43.325820 3223 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ff7749c4-9f69-4b02-bf37-e72358ca29f9-lib-modules\") pod \"ff7749c4-9f69-4b02-bf37-e72358ca29f9\" (UID: \"ff7749c4-9f69-4b02-bf37-e72358ca29f9\") "
Apr 30 03:32:43.328024 kubelet[3223]: I0430 03:32:43.325836 3223 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ff7749c4-9f69-4b02-bf37-e72358ca29f9-cilium-run\") pod \"ff7749c4-9f69-4b02-bf37-e72358ca29f9\" (UID: \"ff7749c4-9f69-4b02-bf37-e72358ca29f9\") "
Apr 30 03:32:43.328386 kubelet[3223]: I0430 03:32:43.325850 3223 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ff7749c4-9f69-4b02-bf37-e72358ca29f9-xtables-lock\") pod \"ff7749c4-9f69-4b02-bf37-e72358ca29f9\" (UID: \"ff7749c4-9f69-4b02-bf37-e72358ca29f9\") "
Apr 30 03:32:43.328386 kubelet[3223]: I0430 03:32:43.325877 3223 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ff7749c4-9f69-4b02-bf37-e72358ca29f9-cilium-cgroup\") pod \"ff7749c4-9f69-4b02-bf37-e72358ca29f9\" (UID: \"ff7749c4-9f69-4b02-bf37-e72358ca29f9\") "
Apr 30 03:32:43.328386 kubelet[3223]: I0430 03:32:43.325892 3223 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ff7749c4-9f69-4b02-bf37-e72358ca29f9-hostproc\") pod \"ff7749c4-9f69-4b02-bf37-e72358ca29f9\" (UID: \"ff7749c4-9f69-4b02-bf37-e72358ca29f9\") "
Apr 30 03:32:43.328386 kubelet[3223]: I0430 03:32:43.325934 3223 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ff7749c4-9f69-4b02-bf37-e72358ca29f9-bpf-maps\") pod \"ff7749c4-9f69-4b02-bf37-e72358ca29f9\" (UID: \"ff7749c4-9f69-4b02-bf37-e72358ca29f9\") "
Apr 30 03:32:43.328386 kubelet[3223]: I0430 03:32:43.325948 3223 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ff7749c4-9f69-4b02-bf37-e72358ca29f9-host-proc-sys-kernel\") pod \"ff7749c4-9f69-4b02-bf37-e72358ca29f9\" (UID: \"ff7749c4-9f69-4b02-bf37-e72358ca29f9\") "
Apr 30 03:32:43.328386 kubelet[3223]: I0430 03:32:43.325967 3223 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ff7749c4-9f69-4b02-bf37-e72358ca29f9-hubble-tls\") pod \"ff7749c4-9f69-4b02-bf37-e72358ca29f9\" (UID: \"ff7749c4-9f69-4b02-bf37-e72358ca29f9\") "
Apr 30 03:32:43.328551 kubelet[3223]: I0430 03:32:43.325981 3223 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ff7749c4-9f69-4b02-bf37-e72358ca29f9-etc-cni-netd\") pod \"ff7749c4-9f69-4b02-bf37-e72358ca29f9\" (UID: \"ff7749c4-9f69-4b02-bf37-e72358ca29f9\") "
Apr 30 03:32:43.328551 kubelet[3223]: I0430 03:32:43.325995 3223 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ff7749c4-9f69-4b02-bf37-e72358ca29f9-cni-path\") pod \"ff7749c4-9f69-4b02-bf37-e72358ca29f9\" (UID: \"ff7749c4-9f69-4b02-bf37-e72358ca29f9\") "
Apr 30 03:32:43.328551 kubelet[3223]: I0430 03:32:43.327844 3223 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff7749c4-9f69-4b02-bf37-e72358ca29f9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ff7749c4-9f69-4b02-bf37-e72358ca29f9" (UID: "ff7749c4-9f69-4b02-bf37-e72358ca29f9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Apr 30 03:32:43.328699 kubelet[3223]: I0430 03:32:43.328650 3223 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff7749c4-9f69-4b02-bf37-e72358ca29f9-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ff7749c4-9f69-4b02-bf37-e72358ca29f9" (UID: "ff7749c4-9f69-4b02-bf37-e72358ca29f9"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 03:32:43.328789 kubelet[3223]: I0430 03:32:43.328778 3223 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff7749c4-9f69-4b02-bf37-e72358ca29f9-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ff7749c4-9f69-4b02-bf37-e72358ca29f9" (UID: "ff7749c4-9f69-4b02-bf37-e72358ca29f9"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 03:32:43.331474 kubelet[3223]: I0430 03:32:43.331440 3223 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/95a8e8bb-56ca-4588-a04e-ad9d470f58fb-cilium-config-path\") on node \"ip-172-31-22-79\" DevicePath \"\""
Apr 30 03:32:43.331474 kubelet[3223]: I0430 03:32:43.331479 3223 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-qtqdl\" (UniqueName: \"kubernetes.io/projected/95a8e8bb-56ca-4588-a04e-ad9d470f58fb-kube-api-access-qtqdl\") on node \"ip-172-31-22-79\" DevicePath \"\""
Apr 30 03:32:43.331608 kubelet[3223]: I0430 03:32:43.331514 3223 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff7749c4-9f69-4b02-bf37-e72358ca29f9-cni-path" (OuterVolumeSpecName: "cni-path") pod "ff7749c4-9f69-4b02-bf37-e72358ca29f9" (UID: "ff7749c4-9f69-4b02-bf37-e72358ca29f9"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 03:32:43.331608 kubelet[3223]: I0430 03:32:43.331539 3223 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff7749c4-9f69-4b02-bf37-e72358ca29f9-hostproc" (OuterVolumeSpecName: "hostproc") pod "ff7749c4-9f69-4b02-bf37-e72358ca29f9" (UID: "ff7749c4-9f69-4b02-bf37-e72358ca29f9"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 03:32:43.331608 kubelet[3223]: I0430 03:32:43.331552 3223 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff7749c4-9f69-4b02-bf37-e72358ca29f9-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ff7749c4-9f69-4b02-bf37-e72358ca29f9" (UID: "ff7749c4-9f69-4b02-bf37-e72358ca29f9"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 03:32:43.331608 kubelet[3223]: I0430 03:32:43.331566 3223 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff7749c4-9f69-4b02-bf37-e72358ca29f9-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ff7749c4-9f69-4b02-bf37-e72358ca29f9" (UID: "ff7749c4-9f69-4b02-bf37-e72358ca29f9"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 03:32:43.331769 kubelet[3223]: I0430 03:32:43.331745 3223 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff7749c4-9f69-4b02-bf37-e72358ca29f9-kube-api-access-lxdcw" (OuterVolumeSpecName: "kube-api-access-lxdcw") pod "ff7749c4-9f69-4b02-bf37-e72358ca29f9" (UID: "ff7749c4-9f69-4b02-bf37-e72358ca29f9"). InnerVolumeSpecName "kube-api-access-lxdcw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 30 03:32:43.334089 kubelet[3223]: I0430 03:32:43.334056 3223 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff7749c4-9f69-4b02-bf37-e72358ca29f9-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ff7749c4-9f69-4b02-bf37-e72358ca29f9" (UID: "ff7749c4-9f69-4b02-bf37-e72358ca29f9"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 30 03:32:43.334183 kubelet[3223]: I0430 03:32:43.334105 3223 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff7749c4-9f69-4b02-bf37-e72358ca29f9-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ff7749c4-9f69-4b02-bf37-e72358ca29f9" (UID: "ff7749c4-9f69-4b02-bf37-e72358ca29f9"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 03:32:43.334183 kubelet[3223]: I0430 03:32:43.334122 3223 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff7749c4-9f69-4b02-bf37-e72358ca29f9-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ff7749c4-9f69-4b02-bf37-e72358ca29f9" (UID: "ff7749c4-9f69-4b02-bf37-e72358ca29f9"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 03:32:43.334183 kubelet[3223]: I0430 03:32:43.334135 3223 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff7749c4-9f69-4b02-bf37-e72358ca29f9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ff7749c4-9f69-4b02-bf37-e72358ca29f9" (UID: "ff7749c4-9f69-4b02-bf37-e72358ca29f9"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 03:32:43.334183 kubelet[3223]: I0430 03:32:43.334148 3223 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff7749c4-9f69-4b02-bf37-e72358ca29f9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ff7749c4-9f69-4b02-bf37-e72358ca29f9" (UID: "ff7749c4-9f69-4b02-bf37-e72358ca29f9"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 03:32:43.334505 kubelet[3223]: I0430 03:32:43.334473 3223 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff7749c4-9f69-4b02-bf37-e72358ca29f9-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ff7749c4-9f69-4b02-bf37-e72358ca29f9" (UID: "ff7749c4-9f69-4b02-bf37-e72358ca29f9"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Apr 30 03:32:43.431670 kubelet[3223]: I0430 03:32:43.431626 3223 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ff7749c4-9f69-4b02-bf37-e72358ca29f9-clustermesh-secrets\") on node \"ip-172-31-22-79\" DevicePath \"\""
Apr 30 03:32:43.431670 kubelet[3223]: I0430 03:32:43.431656 3223 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ff7749c4-9f69-4b02-bf37-e72358ca29f9-lib-modules\") on node \"ip-172-31-22-79\" DevicePath \"\""
Apr 30 03:32:43.431670 kubelet[3223]: I0430 03:32:43.431665 3223 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ff7749c4-9f69-4b02-bf37-e72358ca29f9-xtables-lock\") on node \"ip-172-31-22-79\" DevicePath \"\""
Apr 30 03:32:43.431670 kubelet[3223]: I0430 03:32:43.431673 3223 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ff7749c4-9f69-4b02-bf37-e72358ca29f9-cilium-run\") on node \"ip-172-31-22-79\" DevicePath \"\""
Apr 30 03:32:43.431670 kubelet[3223]: I0430 03:32:43.431681 3223 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ff7749c4-9f69-4b02-bf37-e72358ca29f9-cilium-cgroup\") on node \"ip-172-31-22-79\" DevicePath \"\""
Apr 30 03:32:43.432983 kubelet[3223]: I0430 03:32:43.431688 3223 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ff7749c4-9f69-4b02-bf37-e72358ca29f9-hostproc\") on node \"ip-172-31-22-79\" DevicePath \"\""
Apr 30 03:32:43.432983 kubelet[3223]: I0430 03:32:43.431695 3223 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ff7749c4-9f69-4b02-bf37-e72358ca29f9-bpf-maps\") on node \"ip-172-31-22-79\" DevicePath \"\""
Apr 30 03:32:43.432983 kubelet[3223]: I0430 03:32:43.431702 3223 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ff7749c4-9f69-4b02-bf37-e72358ca29f9-host-proc-sys-kernel\") on node \"ip-172-31-22-79\" DevicePath \"\""
Apr 30 03:32:43.432983 kubelet[3223]: I0430 03:32:43.431710 3223 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ff7749c4-9f69-4b02-bf37-e72358ca29f9-etc-cni-netd\") on node \"ip-172-31-22-79\" DevicePath \"\""
Apr 30 03:32:43.432983 kubelet[3223]: I0430 03:32:43.431717 3223 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ff7749c4-9f69-4b02-bf37-e72358ca29f9-cni-path\") on node \"ip-172-31-22-79\" DevicePath \"\""
Apr 30 03:32:43.432983 kubelet[3223]: I0430 03:32:43.431725 3223 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ff7749c4-9f69-4b02-bf37-e72358ca29f9-hubble-tls\") on node \"ip-172-31-22-79\" DevicePath \"\""
Apr 30 03:32:43.432983 kubelet[3223]: I0430 03:32:43.431732 3223 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-lxdcw\" (UniqueName: \"kubernetes.io/projected/ff7749c4-9f69-4b02-bf37-e72358ca29f9-kube-api-access-lxdcw\") on node \"ip-172-31-22-79\" DevicePath \"\""
Apr 30 03:32:43.432983 kubelet[3223]: I0430 03:32:43.431739 3223 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ff7749c4-9f69-4b02-bf37-e72358ca29f9-host-proc-sys-net\") on node \"ip-172-31-22-79\" DevicePath \"\""
Apr 30 03:32:43.433241 kubelet[3223]: I0430 03:32:43.431749 3223 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ff7749c4-9f69-4b02-bf37-e72358ca29f9-cilium-config-path\") on node \"ip-172-31-22-79\" DevicePath \"\""
Apr 30 03:32:43.789468 kubelet[3223]: I0430 03:32:43.789217 3223 scope.go:117] "RemoveContainer" containerID="104a30750e0eebc08b7591caa6d662c9e8827e21fe2f8bece06a7f2dd615c3e8"
Apr 30 03:32:43.790827 systemd[1]: Removed slice kubepods-besteffort-pod95a8e8bb_56ca_4588_a04e_ad9d470f58fb.slice - libcontainer container kubepods-besteffort-pod95a8e8bb_56ca_4588_a04e_ad9d470f58fb.slice.
Apr 30 03:32:43.794968 containerd[1971]: time="2025-04-30T03:32:43.794193139Z" level=info msg="RemoveContainer for \"104a30750e0eebc08b7591caa6d662c9e8827e21fe2f8bece06a7f2dd615c3e8\""
Apr 30 03:32:43.804705 systemd[1]: Removed slice kubepods-burstable-podff7749c4_9f69_4b02_bf37_e72358ca29f9.slice - libcontainer container kubepods-burstable-podff7749c4_9f69_4b02_bf37_e72358ca29f9.slice.
Apr 30 03:32:43.804864 systemd[1]: kubepods-burstable-podff7749c4_9f69_4b02_bf37_e72358ca29f9.slice: Consumed 7.776s CPU time.
Apr 30 03:32:43.805716 containerd[1971]: time="2025-04-30T03:32:43.805551302Z" level=info msg="RemoveContainer for \"104a30750e0eebc08b7591caa6d662c9e8827e21fe2f8bece06a7f2dd615c3e8\" returns successfully"
Apr 30 03:32:43.809837 kubelet[3223]: I0430 03:32:43.809083 3223 scope.go:117] "RemoveContainer" containerID="104a30750e0eebc08b7591caa6d662c9e8827e21fe2f8bece06a7f2dd615c3e8"
Apr 30 03:32:43.845294 containerd[1971]: time="2025-04-30T03:32:43.820893386Z" level=error msg="ContainerStatus for \"104a30750e0eebc08b7591caa6d662c9e8827e21fe2f8bece06a7f2dd615c3e8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"104a30750e0eebc08b7591caa6d662c9e8827e21fe2f8bece06a7f2dd615c3e8\": not found"
Apr 30 03:32:43.848203 kubelet[3223]: E0430 03:32:43.847315 3223 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"104a30750e0eebc08b7591caa6d662c9e8827e21fe2f8bece06a7f2dd615c3e8\": not found" containerID="104a30750e0eebc08b7591caa6d662c9e8827e21fe2f8bece06a7f2dd615c3e8"
Apr 30 03:32:43.850565 kubelet[3223]: I0430 03:32:43.850280 3223 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"104a30750e0eebc08b7591caa6d662c9e8827e21fe2f8bece06a7f2dd615c3e8"} err="failed to get container status \"104a30750e0eebc08b7591caa6d662c9e8827e21fe2f8bece06a7f2dd615c3e8\": rpc error: code = NotFound desc = an error occurred when try to find container \"104a30750e0eebc08b7591caa6d662c9e8827e21fe2f8bece06a7f2dd615c3e8\": not found"
Apr 30 03:32:43.850565 kubelet[3223]: I0430 03:32:43.850430 3223 scope.go:117] "RemoveContainer" containerID="9564ec6961f8badbd87d5d9e6afaab4f1925815cd0020b873fc824a1c2f9d78b"
Apr 30 03:32:43.854809 containerd[1971]: time="2025-04-30T03:32:43.854770970Z" level=info msg="RemoveContainer for \"9564ec6961f8badbd87d5d9e6afaab4f1925815cd0020b873fc824a1c2f9d78b\""
Apr 30 03:32:43.861132 containerd[1971]: time="2025-04-30T03:32:43.861077085Z" level=info msg="RemoveContainer for \"9564ec6961f8badbd87d5d9e6afaab4f1925815cd0020b873fc824a1c2f9d78b\" returns successfully"
Apr 30 03:32:43.861336 kubelet[3223]: I0430 03:32:43.861321 3223 scope.go:117] "RemoveContainer" containerID="7d722e8f8b69962bbe7b40a34c258cb1ab772602e88a9d33d042c26b95f7e7c1"
Apr 30 03:32:43.862734 containerd[1971]: time="2025-04-30T03:32:43.862698972Z" level=info msg="RemoveContainer for \"7d722e8f8b69962bbe7b40a34c258cb1ab772602e88a9d33d042c26b95f7e7c1\""
Apr 30 03:32:43.879271 containerd[1971]: time="2025-04-30T03:32:43.879136010Z" level=info msg="RemoveContainer for \"7d722e8f8b69962bbe7b40a34c258cb1ab772602e88a9d33d042c26b95f7e7c1\" returns successfully"
Apr 30 03:32:43.879699 kubelet[3223]: I0430 03:32:43.879421 3223 scope.go:117] "RemoveContainer" containerID="f1d4117e2d20c1bdc6257deb271dba6584cf2b0936de78f5c664dfff01c99e71"
Apr 30 03:32:43.881154 containerd[1971]: time="2025-04-30T03:32:43.881074143Z" level=info msg="RemoveContainer for \"f1d4117e2d20c1bdc6257deb271dba6584cf2b0936de78f5c664dfff01c99e71\""
Apr 30 03:32:43.886680 containerd[1971]: time="2025-04-30T03:32:43.886630098Z" level=info msg="RemoveContainer for \"f1d4117e2d20c1bdc6257deb271dba6584cf2b0936de78f5c664dfff01c99e71\" returns successfully"
Apr 30 03:32:43.886884 kubelet[3223]: I0430 03:32:43.886862 3223 scope.go:117] "RemoveContainer" containerID="90e04a0283f6880a68eba23c6a2ac058d4263340f6ff5b12c9927fc5b998ddb7"
Apr 30 03:32:43.888255 containerd[1971]: time="2025-04-30T03:32:43.888217677Z" level=info msg="RemoveContainer for \"90e04a0283f6880a68eba23c6a2ac058d4263340f6ff5b12c9927fc5b998ddb7\""
Apr 30 03:32:43.911549 containerd[1971]: time="2025-04-30T03:32:43.911497948Z" level=info msg="RemoveContainer for \"90e04a0283f6880a68eba23c6a2ac058d4263340f6ff5b12c9927fc5b998ddb7\" returns successfully"
Apr 30 03:32:43.912385 kubelet[3223]: I0430 03:32:43.912341 3223 scope.go:117] "RemoveContainer" containerID="69b2ac17d1f53a2516cc756cf0dd34ac975297c1ab4440c0b5af984a4612b171"
Apr 30 03:32:43.914111 containerd[1971]: time="2025-04-30T03:32:43.914077617Z" level=info msg="RemoveContainer for \"69b2ac17d1f53a2516cc756cf0dd34ac975297c1ab4440c0b5af984a4612b171\""
Apr 30 03:32:43.917456 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5669ffd9c8cbcb8c08c78d18eae0175b14a044dacdf6cfe30544ec0dfe6fd731-rootfs.mount: Deactivated successfully.
Apr 30 03:32:43.917559 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd535e395b38f3888a30b4569c56a51ad325d95da04ff8bb056511bbb3b38efb-rootfs.mount: Deactivated successfully.
Apr 30 03:32:43.917617 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fd535e395b38f3888a30b4569c56a51ad325d95da04ff8bb056511bbb3b38efb-shm.mount: Deactivated successfully.
Apr 30 03:32:43.917674 systemd[1]: var-lib-kubelet-pods-95a8e8bb\x2d56ca\x2d4588\x2da04e\x2dad9d470f58fb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqtqdl.mount: Deactivated successfully.
Apr 30 03:32:43.917734 systemd[1]: var-lib-kubelet-pods-ff7749c4\x2d9f69\x2d4b02\x2dbf37\x2de72358ca29f9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlxdcw.mount: Deactivated successfully.
Apr 30 03:32:43.917794 systemd[1]: var-lib-kubelet-pods-ff7749c4\x2d9f69\x2d4b02\x2dbf37\x2de72358ca29f9-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Apr 30 03:32:43.917851 systemd[1]: var-lib-kubelet-pods-ff7749c4\x2d9f69\x2d4b02\x2dbf37\x2de72358ca29f9-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Apr 30 03:32:43.926982 containerd[1971]: time="2025-04-30T03:32:43.926928372Z" level=info msg="RemoveContainer for \"69b2ac17d1f53a2516cc756cf0dd34ac975297c1ab4440c0b5af984a4612b171\" returns successfully"
Apr 30 03:32:43.927803 kubelet[3223]: I0430 03:32:43.927431 3223 scope.go:117] "RemoveContainer" containerID="9564ec6961f8badbd87d5d9e6afaab4f1925815cd0020b873fc824a1c2f9d78b"
Apr 30 03:32:43.929446 containerd[1971]: time="2025-04-30T03:32:43.928289585Z" level=error msg="ContainerStatus for \"9564ec6961f8badbd87d5d9e6afaab4f1925815cd0020b873fc824a1c2f9d78b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9564ec6961f8badbd87d5d9e6afaab4f1925815cd0020b873fc824a1c2f9d78b\": not found"
Apr 30 03:32:43.931277 kubelet[3223]: E0430 03:32:43.931082 3223 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9564ec6961f8badbd87d5d9e6afaab4f1925815cd0020b873fc824a1c2f9d78b\": not found" containerID="9564ec6961f8badbd87d5d9e6afaab4f1925815cd0020b873fc824a1c2f9d78b"
Apr 30 03:32:43.931642 kubelet[3223]: I0430 03:32:43.931409 3223 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9564ec6961f8badbd87d5d9e6afaab4f1925815cd0020b873fc824a1c2f9d78b"} err="failed to get container status \"9564ec6961f8badbd87d5d9e6afaab4f1925815cd0020b873fc824a1c2f9d78b\": rpc error: code = NotFound desc = an error occurred when try to find container \"9564ec6961f8badbd87d5d9e6afaab4f1925815cd0020b873fc824a1c2f9d78b\": not found"
Apr 30 03:32:43.931922 kubelet[3223]: I0430 03:32:43.931541 3223 scope.go:117] "RemoveContainer" containerID="7d722e8f8b69962bbe7b40a34c258cb1ab772602e88a9d33d042c26b95f7e7c1"
Apr 30 03:32:43.932933 containerd[1971]: time="2025-04-30T03:32:43.932697642Z" level=error msg="ContainerStatus for \"7d722e8f8b69962bbe7b40a34c258cb1ab772602e88a9d33d042c26b95f7e7c1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7d722e8f8b69962bbe7b40a34c258cb1ab772602e88a9d33d042c26b95f7e7c1\": not found"
Apr 30 03:32:43.933595 kubelet[3223]: E0430 03:32:43.932950 3223 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7d722e8f8b69962bbe7b40a34c258cb1ab772602e88a9d33d042c26b95f7e7c1\": not found" containerID="7d722e8f8b69962bbe7b40a34c258cb1ab772602e88a9d33d042c26b95f7e7c1"
Apr 30 03:32:43.933595 kubelet[3223]: I0430 03:32:43.933007 3223 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7d722e8f8b69962bbe7b40a34c258cb1ab772602e88a9d33d042c26b95f7e7c1"} err="failed to get container status \"7d722e8f8b69962bbe7b40a34c258cb1ab772602e88a9d33d042c26b95f7e7c1\": rpc error: code = NotFound desc = an error occurred when try to find container \"7d722e8f8b69962bbe7b40a34c258cb1ab772602e88a9d33d042c26b95f7e7c1\": not found"
Apr 30 03:32:43.933595 kubelet[3223]: I0430 03:32:43.933033 3223 scope.go:117] "RemoveContainer" containerID="f1d4117e2d20c1bdc6257deb271dba6584cf2b0936de78f5c664dfff01c99e71"
Apr 30 03:32:43.934997 kubelet[3223]: E0430 03:32:43.933824 3223 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f1d4117e2d20c1bdc6257deb271dba6584cf2b0936de78f5c664dfff01c99e71\": not found" containerID="f1d4117e2d20c1bdc6257deb271dba6584cf2b0936de78f5c664dfff01c99e71"
Apr 30 03:32:43.934997 kubelet[3223]: I0430 03:32:43.933850 3223 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f1d4117e2d20c1bdc6257deb271dba6584cf2b0936de78f5c664dfff01c99e71"} err="failed to get container status \"f1d4117e2d20c1bdc6257deb271dba6584cf2b0936de78f5c664dfff01c99e71\": rpc error: code = NotFound desc = an error occurred when try to find container \"f1d4117e2d20c1bdc6257deb271dba6584cf2b0936de78f5c664dfff01c99e71\": not found"
Apr 30 03:32:43.934997 kubelet[3223]: I0430 03:32:43.933876 3223 scope.go:117] "RemoveContainer" containerID="90e04a0283f6880a68eba23c6a2ac058d4263340f6ff5b12c9927fc5b998ddb7"
Apr 30 03:32:43.934997 kubelet[3223]: E0430 03:32:43.934530 3223 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"90e04a0283f6880a68eba23c6a2ac058d4263340f6ff5b12c9927fc5b998ddb7\": not found" containerID="90e04a0283f6880a68eba23c6a2ac058d4263340f6ff5b12c9927fc5b998ddb7"
Apr 30 03:32:43.934997 kubelet[3223]: I0430 03:32:43.934558 3223 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"90e04a0283f6880a68eba23c6a2ac058d4263340f6ff5b12c9927fc5b998ddb7"} err="failed to get container status \"90e04a0283f6880a68eba23c6a2ac058d4263340f6ff5b12c9927fc5b998ddb7\": rpc error: code = NotFound desc = an error occurred when try to find container \"90e04a0283f6880a68eba23c6a2ac058d4263340f6ff5b12c9927fc5b998ddb7\": not found"
Apr 30 03:32:43.934997 kubelet[3223]: I0430 03:32:43.934582 3223 scope.go:117] "RemoveContainer" containerID="69b2ac17d1f53a2516cc756cf0dd34ac975297c1ab4440c0b5af984a4612b171"
Apr 30 03:32:43.935205 containerd[1971]: time="2025-04-30T03:32:43.933642052Z" level=error msg="ContainerStatus for \"f1d4117e2d20c1bdc6257deb271dba6584cf2b0936de78f5c664dfff01c99e71\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f1d4117e2d20c1bdc6257deb271dba6584cf2b0936de78f5c664dfff01c99e71\": not found"
Apr 30 03:32:43.935205 containerd[1971]: time="2025-04-30T03:32:43.934232386Z" level=error msg="ContainerStatus for \"90e04a0283f6880a68eba23c6a2ac058d4263340f6ff5b12c9927fc5b998ddb7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"90e04a0283f6880a68eba23c6a2ac058d4263340f6ff5b12c9927fc5b998ddb7\": not found"
Apr 30 03:32:43.935205 containerd[1971]: time="2025-04-30T03:32:43.934756699Z" level=error msg="ContainerStatus for \"69b2ac17d1f53a2516cc756cf0dd34ac975297c1ab4440c0b5af984a4612b171\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"69b2ac17d1f53a2516cc756cf0dd34ac975297c1ab4440c0b5af984a4612b171\": not found"
Apr 30 03:32:43.935293 kubelet[3223]: E0430 03:32:43.934881 3223 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"69b2ac17d1f53a2516cc756cf0dd34ac975297c1ab4440c0b5af984a4612b171\": not found" containerID="69b2ac17d1f53a2516cc756cf0dd34ac975297c1ab4440c0b5af984a4612b171"
Apr 30 03:32:43.935293 kubelet[3223]: I0430 03:32:43.934926 3223 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"69b2ac17d1f53a2516cc756cf0dd34ac975297c1ab4440c0b5af984a4612b171"} err="failed to get container status \"69b2ac17d1f53a2516cc756cf0dd34ac975297c1ab4440c0b5af984a4612b171\": rpc error: code = NotFound desc = an error occurred when try to find container \"69b2ac17d1f53a2516cc756cf0dd34ac975297c1ab4440c0b5af984a4612b171\": not found"
Apr 30 03:32:44.414329 kubelet[3223]: I0430 03:32:44.414273 3223 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95a8e8bb-56ca-4588-a04e-ad9d470f58fb" path="/var/lib/kubelet/pods/95a8e8bb-56ca-4588-a04e-ad9d470f58fb/volumes"
Apr 30 03:32:44.422571 kubelet[3223]: I0430 03:32:44.422530 3223 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff7749c4-9f69-4b02-bf37-e72358ca29f9" path="/var/lib/kubelet/pods/ff7749c4-9f69-4b02-bf37-e72358ca29f9/volumes"
Apr 30 03:32:44.786375 sshd[5077]: pam_unix(sshd:session): session closed for user core
Apr 30 03:32:44.794073 systemd[1]: sshd@23-172.31.22.79:22-147.75.109.163:36314.service: Deactivated successfully.
Apr 30 03:32:44.796780 systemd[1]: session-24.scope: Deactivated successfully.
Apr 30 03:32:44.798579 systemd-logind[1949]: Session 24 logged out. Waiting for processes to exit.
Apr 30 03:32:44.801736 systemd-logind[1949]: Removed session 24.
Apr 30 03:32:44.836316 systemd[1]: Started sshd@24-172.31.22.79:22-147.75.109.163:36320.service - OpenSSH per-connection server daemon (147.75.109.163:36320).
Apr 30 03:32:45.095727 sshd[5240]: Accepted publickey for core from 147.75.109.163 port 36320 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g
Apr 30 03:32:45.097070 sshd[5240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:32:45.101356 systemd-logind[1949]: New session 25 of user core.
Apr 30 03:32:45.105128 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 30 03:32:45.533455 ntpd[1943]: Deleting interface #12 lxc_health, fe80::2071:cdff:fe6d:2830%8#123, interface stats: received=0, sent=0, dropped=0, active_time=55 secs
Apr 30 03:32:45.533952 ntpd[1943]: 30 Apr 03:32:45 ntpd[1943]: Deleting interface #12 lxc_health, fe80::2071:cdff:fe6d:2830%8#123, interface stats: received=0, sent=0, dropped=0, active_time=55 secs
Apr 30 03:32:45.537369 kubelet[3223]: E0430 03:32:45.537318 3223 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 30 03:32:45.880562 sshd[5240]: pam_unix(sshd:session): session closed for user core
Apr 30 03:32:45.891603 systemd[1]: sshd@24-172.31.22.79:22-147.75.109.163:36320.service: Deactivated successfully.
Apr 30 03:32:45.899051 systemd[1]: session-25.scope: Deactivated successfully.
Apr 30 03:32:45.906160 systemd-logind[1949]: Session 25 logged out. Waiting for processes to exit.
Apr 30 03:32:45.909762 systemd-logind[1949]: Removed session 25.
Apr 30 03:32:45.910892 kubelet[3223]: I0430 03:32:45.903733 3223 topology_manager.go:215] "Topology Admit Handler" podUID="2469de18-02ee-4217-b197-04492c119515" podNamespace="kube-system" podName="cilium-hvs5c"
Apr 30 03:32:45.921564 kubelet[3223]: E0430 03:32:45.921520 3223 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ff7749c4-9f69-4b02-bf37-e72358ca29f9" containerName="mount-cgroup"
Apr 30 03:32:45.921690 kubelet[3223]: E0430 03:32:45.921575 3223 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ff7749c4-9f69-4b02-bf37-e72358ca29f9" containerName="apply-sysctl-overwrites"
Apr 30 03:32:45.921690 kubelet[3223]: E0430 03:32:45.921586 3223 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ff7749c4-9f69-4b02-bf37-e72358ca29f9" containerName="clean-cilium-state"
Apr 30 03:32:45.921690 kubelet[3223]: E0430 03:32:45.921594 3223 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ff7749c4-9f69-4b02-bf37-e72358ca29f9" containerName="cilium-agent"
Apr 30 03:32:45.921690 kubelet[3223]: E0430 03:32:45.921604 3223 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ff7749c4-9f69-4b02-bf37-e72358ca29f9" containerName="mount-bpf-fs"
Apr 30 03:32:45.921690 kubelet[3223]: E0430 03:32:45.921612 3223 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="95a8e8bb-56ca-4588-a04e-ad9d470f58fb" containerName="cilium-operator"
Apr 30 03:32:45.921690 kubelet[3223]: I0430 03:32:45.921671 3223 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff7749c4-9f69-4b02-bf37-e72358ca29f9" containerName="cilium-agent"
Apr 30 03:32:45.921690 kubelet[3223]: I0430 03:32:45.921691 3223 memory_manager.go:354] "RemoveStaleState removing state" podUID="95a8e8bb-56ca-4588-a04e-ad9d470f58fb" containerName="cilium-operator"
Apr 30 03:32:45.937893 systemd[1]: Started sshd@25-172.31.22.79:22-147.75.109.163:36328.service - OpenSSH per-connection server daemon (147.75.109.163:36328).
Apr 30 03:32:45.978678 systemd[1]: Created slice kubepods-burstable-pod2469de18_02ee_4217_b197_04492c119515.slice - libcontainer container kubepods-burstable-pod2469de18_02ee_4217_b197_04492c119515.slice.
Apr 30 03:32:46.066313 kubelet[3223]: I0430 03:32:46.066244 3223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2469de18-02ee-4217-b197-04492c119515-xtables-lock\") pod \"cilium-hvs5c\" (UID: \"2469de18-02ee-4217-b197-04492c119515\") " pod="kube-system/cilium-hvs5c"
Apr 30 03:32:46.066313 kubelet[3223]: I0430 03:32:46.066292 3223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2469de18-02ee-4217-b197-04492c119515-hubble-tls\") pod \"cilium-hvs5c\" (UID: \"2469de18-02ee-4217-b197-04492c119515\") " pod="kube-system/cilium-hvs5c"
Apr 30 03:32:46.066313 kubelet[3223]: I0430 03:32:46.066313 3223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2469de18-02ee-4217-b197-04492c119515-cilium-run\") pod \"cilium-hvs5c\" (UID: \"2469de18-02ee-4217-b197-04492c119515\") " pod="kube-system/cilium-hvs5c"
Apr 30 03:32:46.066313 kubelet[3223]: I0430 03:32:46.066333 3223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2469de18-02ee-4217-b197-04492c119515-lib-modules\") pod \"cilium-hvs5c\" (UID: \"2469de18-02ee-4217-b197-04492c119515\") " pod="kube-system/cilium-hvs5c"
Apr 30 03:32:46.066560 kubelet[3223]: I0430 03:32:46.066351 3223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2469de18-02ee-4217-b197-04492c119515-bpf-maps\") pod \"cilium-hvs5c\" (UID: \"2469de18-02ee-4217-b197-04492c119515\") " pod="kube-system/cilium-hvs5c"
Apr 30 03:32:46.066560 kubelet[3223]: I0430 03:32:46.066372 3223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2469de18-02ee-4217-b197-04492c119515-host-proc-sys-kernel\") pod \"cilium-hvs5c\" (UID: \"2469de18-02ee-4217-b197-04492c119515\") " pod="kube-system/cilium-hvs5c"
Apr 30 03:32:46.066560 kubelet[3223]: I0430 03:32:46.066390 3223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ktk7\" (UniqueName: \"kubernetes.io/projected/2469de18-02ee-4217-b197-04492c119515-kube-api-access-4ktk7\") pod \"cilium-hvs5c\" (UID: \"2469de18-02ee-4217-b197-04492c119515\") " pod="kube-system/cilium-hvs5c"
Apr 30 03:32:46.066560 kubelet[3223]: I0430 03:32:46.066417 3223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2469de18-02ee-4217-b197-04492c119515-cni-path\") pod \"cilium-hvs5c\" (UID: \"2469de18-02ee-4217-b197-04492c119515\") " pod="kube-system/cilium-hvs5c"
Apr 30 03:32:46.066560 kubelet[3223]: I0430 03:32:46.066440 3223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2469de18-02ee-4217-b197-04492c119515-host-proc-sys-net\") pod \"cilium-hvs5c\" (UID: \"2469de18-02ee-4217-b197-04492c119515\") " pod="kube-system/cilium-hvs5c"
Apr 30 03:32:46.066560 kubelet[3223]: I0430 03:32:46.066457 3223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2469de18-02ee-4217-b197-04492c119515-clustermesh-secrets\") pod \"cilium-hvs5c\" (UID: \"2469de18-02ee-4217-b197-04492c119515\") " pod="kube-system/cilium-hvs5c"
Apr 30 03:32:46.066710 kubelet[3223]: I0430 03:32:46.066472 3223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2469de18-02ee-4217-b197-04492c119515-cilium-ipsec-secrets\") pod \"cilium-hvs5c\" (UID: \"2469de18-02ee-4217-b197-04492c119515\") " pod="kube-system/cilium-hvs5c"
Apr 30 03:32:46.066710 kubelet[3223]: I0430 03:32:46.066489 3223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2469de18-02ee-4217-b197-04492c119515-hostproc\") pod \"cilium-hvs5c\" (UID: \"2469de18-02ee-4217-b197-04492c119515\") " pod="kube-system/cilium-hvs5c"
Apr 30 03:32:46.066710 kubelet[3223]: I0430 03:32:46.066502 3223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2469de18-02ee-4217-b197-04492c119515-cilium-cgroup\") pod \"cilium-hvs5c\" (UID: \"2469de18-02ee-4217-b197-04492c119515\") " pod="kube-system/cilium-hvs5c"
Apr 30 03:32:46.066710 kubelet[3223]: I0430 03:32:46.066518 3223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2469de18-02ee-4217-b197-04492c119515-etc-cni-netd\") pod \"cilium-hvs5c\" (UID: \"2469de18-02ee-4217-b197-04492c119515\") " pod="kube-system/cilium-hvs5c"
Apr 30 03:32:46.066710 kubelet[3223]: I0430 03:32:46.066533 3223 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2469de18-02ee-4217-b197-04492c119515-cilium-config-path\") pod \"cilium-hvs5c\" (UID: \"2469de18-02ee-4217-b197-04492c119515\") " pod="kube-system/cilium-hvs5c"
Apr 30 03:32:46.198848 sshd[5252]: Accepted publickey for core from 147.75.109.163 port 36328 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g
Apr 30 03:32:46.201575 sshd[5252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:32:46.214312 systemd-logind[1949]: New session 26 of user core.
Apr 30 03:32:46.224163 systemd[1]: Started session-26.scope - Session 26 of User core.
Apr 30 03:32:46.282775 containerd[1971]: time="2025-04-30T03:32:46.282716996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hvs5c,Uid:2469de18-02ee-4217-b197-04492c119515,Namespace:kube-system,Attempt:0,}"
Apr 30 03:32:46.320065 containerd[1971]: time="2025-04-30T03:32:46.319932433Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 03:32:46.320408 containerd[1971]: time="2025-04-30T03:32:46.320008857Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 03:32:46.320408 containerd[1971]: time="2025-04-30T03:32:46.320222349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:32:46.321121 containerd[1971]: time="2025-04-30T03:32:46.321046658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 03:32:46.349179 systemd[1]: Started cri-containerd-adf2f18f5f1f8bd2843d21c6c0d22e7f2b0abcc03070a08ef9725ea360353621.scope - libcontainer container adf2f18f5f1f8bd2843d21c6c0d22e7f2b0abcc03070a08ef9725ea360353621.
Apr 30 03:32:46.378814 containerd[1971]: time="2025-04-30T03:32:46.378744189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hvs5c,Uid:2469de18-02ee-4217-b197-04492c119515,Namespace:kube-system,Attempt:0,} returns sandbox id \"adf2f18f5f1f8bd2843d21c6c0d22e7f2b0abcc03070a08ef9725ea360353621\""
Apr 30 03:32:46.382436 containerd[1971]: time="2025-04-30T03:32:46.382390904Z" level=info msg="CreateContainer within sandbox \"adf2f18f5f1f8bd2843d21c6c0d22e7f2b0abcc03070a08ef9725ea360353621\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 30 03:32:46.393699 sshd[5252]: pam_unix(sshd:session): session closed for user core
Apr 30 03:32:46.398228 systemd[1]: sshd@25-172.31.22.79:22-147.75.109.163:36328.service: Deactivated successfully.
Apr 30 03:32:46.401114 systemd[1]: session-26.scope: Deactivated successfully.
Apr 30 03:32:46.402838 systemd-logind[1949]: Session 26 logged out. Waiting for processes to exit.
Apr 30 03:32:46.405748 systemd-logind[1949]: Removed session 26.
Apr 30 03:32:46.419608 containerd[1971]: time="2025-04-30T03:32:46.419562513Z" level=info msg="CreateContainer within sandbox \"adf2f18f5f1f8bd2843d21c6c0d22e7f2b0abcc03070a08ef9725ea360353621\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7fc0c5318b69f8cb0c5bd53b5cb1cb36ab03b530b2a5f391c4e429cb6c6fa9a4\""
Apr 30 03:32:46.420934 containerd[1971]: time="2025-04-30T03:32:46.420231099Z" level=info msg="StartContainer for \"7fc0c5318b69f8cb0c5bd53b5cb1cb36ab03b530b2a5f391c4e429cb6c6fa9a4\""
Apr 30 03:32:46.446476 systemd[1]: Started sshd@26-172.31.22.79:22-147.75.109.163:36332.service - OpenSSH per-connection server daemon (147.75.109.163:36332).
Apr 30 03:32:46.452630 systemd[1]: Started cri-containerd-7fc0c5318b69f8cb0c5bd53b5cb1cb36ab03b530b2a5f391c4e429cb6c6fa9a4.scope - libcontainer container 7fc0c5318b69f8cb0c5bd53b5cb1cb36ab03b530b2a5f391c4e429cb6c6fa9a4.
Apr 30 03:32:46.485162 containerd[1971]: time="2025-04-30T03:32:46.484227540Z" level=info msg="StartContainer for \"7fc0c5318b69f8cb0c5bd53b5cb1cb36ab03b530b2a5f391c4e429cb6c6fa9a4\" returns successfully"
Apr 30 03:32:46.505238 systemd[1]: cri-containerd-7fc0c5318b69f8cb0c5bd53b5cb1cb36ab03b530b2a5f391c4e429cb6c6fa9a4.scope: Deactivated successfully.
Apr 30 03:32:46.552362 containerd[1971]: time="2025-04-30T03:32:46.552042397Z" level=info msg="shim disconnected" id=7fc0c5318b69f8cb0c5bd53b5cb1cb36ab03b530b2a5f391c4e429cb6c6fa9a4 namespace=k8s.io
Apr 30 03:32:46.552362 containerd[1971]: time="2025-04-30T03:32:46.552206974Z" level=warning msg="cleaning up after shim disconnected" id=7fc0c5318b69f8cb0c5bd53b5cb1cb36ab03b530b2a5f391c4e429cb6c6fa9a4 namespace=k8s.io
Apr 30 03:32:46.552362 containerd[1971]: time="2025-04-30T03:32:46.552215524Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:32:46.687654 sshd[5318]: Accepted publickey for core from 147.75.109.163 port 36332 ssh2: RSA SHA256:7ZQea3lKZeIY1pq8546y2SpcWopo7i1peiZKBcYFJ3g
Apr 30 03:32:46.689161 sshd[5318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:32:46.693786 systemd-logind[1949]: New session 27 of user core.
Apr 30 03:32:46.700198 systemd[1]: Started session-27.scope - Session 27 of User core.
Apr 30 03:32:46.815811 containerd[1971]: time="2025-04-30T03:32:46.815557581Z" level=info msg="CreateContainer within sandbox \"adf2f18f5f1f8bd2843d21c6c0d22e7f2b0abcc03070a08ef9725ea360353621\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 30 03:32:46.836562 containerd[1971]: time="2025-04-30T03:32:46.836513486Z" level=info msg="CreateContainer within sandbox \"adf2f18f5f1f8bd2843d21c6c0d22e7f2b0abcc03070a08ef9725ea360353621\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9deeecf027c305415fd33cce35564b359f9fa12596dfffef7a81844468c0f90b\""
Apr 30 03:32:46.838523 containerd[1971]: time="2025-04-30T03:32:46.838489710Z" level=info msg="StartContainer for \"9deeecf027c305415fd33cce35564b359f9fa12596dfffef7a81844468c0f90b\""
Apr 30 03:32:46.880124 systemd[1]: Started cri-containerd-9deeecf027c305415fd33cce35564b359f9fa12596dfffef7a81844468c0f90b.scope - libcontainer container 9deeecf027c305415fd33cce35564b359f9fa12596dfffef7a81844468c0f90b.
Apr 30 03:32:46.926758 containerd[1971]: time="2025-04-30T03:32:46.926708994Z" level=info msg="StartContainer for \"9deeecf027c305415fd33cce35564b359f9fa12596dfffef7a81844468c0f90b\" returns successfully"
Apr 30 03:32:46.942417 systemd[1]: cri-containerd-9deeecf027c305415fd33cce35564b359f9fa12596dfffef7a81844468c0f90b.scope: Deactivated successfully.
Apr 30 03:32:46.995346 containerd[1971]: time="2025-04-30T03:32:46.995283771Z" level=info msg="shim disconnected" id=9deeecf027c305415fd33cce35564b359f9fa12596dfffef7a81844468c0f90b namespace=k8s.io
Apr 30 03:32:46.995712 containerd[1971]: time="2025-04-30T03:32:46.995545943Z" level=warning msg="cleaning up after shim disconnected" id=9deeecf027c305415fd33cce35564b359f9fa12596dfffef7a81844468c0f90b namespace=k8s.io
Apr 30 03:32:46.995712 containerd[1971]: time="2025-04-30T03:32:46.995562554Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:32:47.816772 containerd[1971]: time="2025-04-30T03:32:47.816621888Z" level=info msg="CreateContainer within sandbox \"adf2f18f5f1f8bd2843d21c6c0d22e7f2b0abcc03070a08ef9725ea360353621\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 30 03:32:47.842940 containerd[1971]: time="2025-04-30T03:32:47.842009942Z" level=info msg="CreateContainer within sandbox \"adf2f18f5f1f8bd2843d21c6c0d22e7f2b0abcc03070a08ef9725ea360353621\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"98a9b5f5a5bb936b869b9df9da71e1c164c484f955718c888b1a6686a96ad421\""
Apr 30 03:32:47.847251 containerd[1971]: time="2025-04-30T03:32:47.847215076Z" level=info msg="StartContainer for \"98a9b5f5a5bb936b869b9df9da71e1c164c484f955718c888b1a6686a96ad421\""
Apr 30 03:32:47.897139 systemd[1]: Started cri-containerd-98a9b5f5a5bb936b869b9df9da71e1c164c484f955718c888b1a6686a96ad421.scope - libcontainer container 98a9b5f5a5bb936b869b9df9da71e1c164c484f955718c888b1a6686a96ad421.
Apr 30 03:32:47.928460 containerd[1971]: time="2025-04-30T03:32:47.928423532Z" level=info msg="StartContainer for \"98a9b5f5a5bb936b869b9df9da71e1c164c484f955718c888b1a6686a96ad421\" returns successfully"
Apr 30 03:32:47.937251 systemd[1]: cri-containerd-98a9b5f5a5bb936b869b9df9da71e1c164c484f955718c888b1a6686a96ad421.scope: Deactivated successfully.
Apr 30 03:32:47.982987 containerd[1971]: time="2025-04-30T03:32:47.981289570Z" level=info msg="shim disconnected" id=98a9b5f5a5bb936b869b9df9da71e1c164c484f955718c888b1a6686a96ad421 namespace=k8s.io
Apr 30 03:32:47.982987 containerd[1971]: time="2025-04-30T03:32:47.981350987Z" level=warning msg="cleaning up after shim disconnected" id=98a9b5f5a5bb936b869b9df9da71e1c164c484f955718c888b1a6686a96ad421 namespace=k8s.io
Apr 30 03:32:47.982987 containerd[1971]: time="2025-04-30T03:32:47.981359760Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:32:48.175579 systemd[1]: run-containerd-runc-k8s.io-98a9b5f5a5bb936b869b9df9da71e1c164c484f955718c888b1a6686a96ad421-runc.ErnV44.mount: Deactivated successfully.
Apr 30 03:32:48.175688 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-98a9b5f5a5bb936b869b9df9da71e1c164c484f955718c888b1a6686a96ad421-rootfs.mount: Deactivated successfully.
Apr 30 03:32:48.821505 containerd[1971]: time="2025-04-30T03:32:48.821452495Z" level=info msg="CreateContainer within sandbox \"adf2f18f5f1f8bd2843d21c6c0d22e7f2b0abcc03070a08ef9725ea360353621\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 30 03:32:48.859428 containerd[1971]: time="2025-04-30T03:32:48.859365796Z" level=info msg="CreateContainer within sandbox \"adf2f18f5f1f8bd2843d21c6c0d22e7f2b0abcc03070a08ef9725ea360353621\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7f5a2ce8827a73b1abdc5395ac1e5c60acc3dc9e926c510c68131f4335cffda1\""
Apr 30 03:32:48.860088 containerd[1971]: time="2025-04-30T03:32:48.859966228Z" level=info msg="StartContainer for \"7f5a2ce8827a73b1abdc5395ac1e5c60acc3dc9e926c510c68131f4335cffda1\""
Apr 30 03:32:48.905180 systemd[1]: Started cri-containerd-7f5a2ce8827a73b1abdc5395ac1e5c60acc3dc9e926c510c68131f4335cffda1.scope - libcontainer container 7f5a2ce8827a73b1abdc5395ac1e5c60acc3dc9e926c510c68131f4335cffda1.
Apr 30 03:32:48.934117 systemd[1]: cri-containerd-7f5a2ce8827a73b1abdc5395ac1e5c60acc3dc9e926c510c68131f4335cffda1.scope: Deactivated successfully.
Apr 30 03:32:48.937681 containerd[1971]: time="2025-04-30T03:32:48.937569443Z" level=info msg="StartContainer for \"7f5a2ce8827a73b1abdc5395ac1e5c60acc3dc9e926c510c68131f4335cffda1\" returns successfully"
Apr 30 03:32:48.969047 containerd[1971]: time="2025-04-30T03:32:48.968975756Z" level=info msg="shim disconnected" id=7f5a2ce8827a73b1abdc5395ac1e5c60acc3dc9e926c510c68131f4335cffda1 namespace=k8s.io
Apr 30 03:32:48.969047 containerd[1971]: time="2025-04-30T03:32:48.969042314Z" level=warning msg="cleaning up after shim disconnected" id=7f5a2ce8827a73b1abdc5395ac1e5c60acc3dc9e926c510c68131f4335cffda1 namespace=k8s.io
Apr 30 03:32:48.969047 containerd[1971]: time="2025-04-30T03:32:48.969053993Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:32:49.175706 systemd[1]: run-containerd-runc-k8s.io-7f5a2ce8827a73b1abdc5395ac1e5c60acc3dc9e926c510c68131f4335cffda1-runc.uT3KXu.mount: Deactivated successfully.
Apr 30 03:32:49.175810 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f5a2ce8827a73b1abdc5395ac1e5c60acc3dc9e926c510c68131f4335cffda1-rootfs.mount: Deactivated successfully.
Apr 30 03:32:49.829180 containerd[1971]: time="2025-04-30T03:32:49.829138924Z" level=info msg="CreateContainer within sandbox \"adf2f18f5f1f8bd2843d21c6c0d22e7f2b0abcc03070a08ef9725ea360353621\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 30 03:32:49.858107 containerd[1971]: time="2025-04-30T03:32:49.858057154Z" level=info msg="CreateContainer within sandbox \"adf2f18f5f1f8bd2843d21c6c0d22e7f2b0abcc03070a08ef9725ea360353621\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2701c2ef0567245ff20389698d341165a0cb13ee8fbe2759b55ab6526f8255fb\""
Apr 30 03:32:49.859839 containerd[1971]: time="2025-04-30T03:32:49.859795751Z" level=info msg="StartContainer for \"2701c2ef0567245ff20389698d341165a0cb13ee8fbe2759b55ab6526f8255fb\""
Apr 30 03:32:49.897152 systemd[1]: Started cri-containerd-2701c2ef0567245ff20389698d341165a0cb13ee8fbe2759b55ab6526f8255fb.scope - libcontainer container 2701c2ef0567245ff20389698d341165a0cb13ee8fbe2759b55ab6526f8255fb.
Apr 30 03:32:49.927034 containerd[1971]: time="2025-04-30T03:32:49.926972000Z" level=info msg="StartContainer for \"2701c2ef0567245ff20389698d341165a0cb13ee8fbe2759b55ab6526f8255fb\" returns successfully"
Apr 30 03:32:50.604944 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Apr 30 03:32:53.433846 systemd[1]: run-containerd-runc-k8s.io-2701c2ef0567245ff20389698d341165a0cb13ee8fbe2759b55ab6526f8255fb-runc.94F6lj.mount: Deactivated successfully.
Apr 30 03:32:53.629949 (udev-worker)[5627]: Network interface NamePolicy= disabled on kernel command line.
Apr 30 03:32:53.633425 systemd-networkd[1886]: lxc_health: Link UP
Apr 30 03:32:53.637386 (udev-worker)[6124]: Network interface NamePolicy= disabled on kernel command line.
Apr 30 03:32:53.639196 systemd-networkd[1886]: lxc_health: Gained carrier
Apr 30 03:32:54.311932 kubelet[3223]: I0430 03:32:54.310826 3223 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hvs5c" podStartSLOduration=9.310806436 podStartE2EDuration="9.310806436s" podCreationTimestamp="2025-04-30 03:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:32:50.855083972 +0000 UTC m=+100.574379290" watchObservedRunningTime="2025-04-30 03:32:54.310806436 +0000 UTC m=+104.030101754"
Apr 30 03:32:55.652962 systemd-networkd[1886]: lxc_health: Gained IPv6LL
Apr 30 03:32:55.705292 systemd[1]: run-containerd-runc-k8s.io-2701c2ef0567245ff20389698d341165a0cb13ee8fbe2759b55ab6526f8255fb-runc.BxW5aK.mount: Deactivated successfully.
Apr 30 03:32:57.949994 systemd[1]: run-containerd-runc-k8s.io-2701c2ef0567245ff20389698d341165a0cb13ee8fbe2759b55ab6526f8255fb-runc.PPACCW.mount: Deactivated successfully.
Apr 30 03:32:58.533500 ntpd[1943]: Listen normally on 15 lxc_health [fe80::902c:7bff:feb7:5ce2%14]:123
Apr 30 03:32:58.534062 ntpd[1943]: 30 Apr 03:32:58 ntpd[1943]: Listen normally on 15 lxc_health [fe80::902c:7bff:feb7:5ce2%14]:123
Apr 30 03:33:00.120351 systemd[1]: run-containerd-runc-k8s.io-2701c2ef0567245ff20389698d341165a0cb13ee8fbe2759b55ab6526f8255fb-runc.2O006C.mount: Deactivated successfully.
Apr 30 03:33:04.586173 sshd[5318]: pam_unix(sshd:session): session closed for user core
Apr 30 03:33:04.592641 systemd[1]: sshd@26-172.31.22.79:22-147.75.109.163:36332.service: Deactivated successfully.
Apr 30 03:33:04.595136 systemd[1]: session-27.scope: Deactivated successfully.
Apr 30 03:33:04.596753 systemd-logind[1949]: Session 27 logged out. Waiting for processes to exit.
Apr 30 03:33:04.598489 systemd-logind[1949]: Removed session 27.
Apr 30 03:33:10.450759 containerd[1971]: time="2025-04-30T03:33:10.450720628Z" level=info msg="StopPodSandbox for \"5669ffd9c8cbcb8c08c78d18eae0175b14a044dacdf6cfe30544ec0dfe6fd731\""
Apr 30 03:33:10.451184 containerd[1971]: time="2025-04-30T03:33:10.450820848Z" level=info msg="TearDown network for sandbox \"5669ffd9c8cbcb8c08c78d18eae0175b14a044dacdf6cfe30544ec0dfe6fd731\" successfully"
Apr 30 03:33:10.451184 containerd[1971]: time="2025-04-30T03:33:10.450833512Z" level=info msg="StopPodSandbox for \"5669ffd9c8cbcb8c08c78d18eae0175b14a044dacdf6cfe30544ec0dfe6fd731\" returns successfully"
Apr 30 03:33:10.451604 containerd[1971]: time="2025-04-30T03:33:10.451578396Z" level=info msg="RemovePodSandbox for \"5669ffd9c8cbcb8c08c78d18eae0175b14a044dacdf6cfe30544ec0dfe6fd731\""
Apr 30 03:33:10.451742 containerd[1971]: time="2025-04-30T03:33:10.451613236Z" level=info msg="Forcibly stopping sandbox \"5669ffd9c8cbcb8c08c78d18eae0175b14a044dacdf6cfe30544ec0dfe6fd731\""
Apr 30 03:33:10.451742 containerd[1971]: time="2025-04-30T03:33:10.451660682Z" level=info msg="TearDown network for sandbox \"5669ffd9c8cbcb8c08c78d18eae0175b14a044dacdf6cfe30544ec0dfe6fd731\" successfully"
Apr 30 03:33:10.457241 containerd[1971]: time="2025-04-30T03:33:10.457197054Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5669ffd9c8cbcb8c08c78d18eae0175b14a044dacdf6cfe30544ec0dfe6fd731\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 30 03:33:10.457412 containerd[1971]: time="2025-04-30T03:33:10.457264536Z" level=info msg="RemovePodSandbox \"5669ffd9c8cbcb8c08c78d18eae0175b14a044dacdf6cfe30544ec0dfe6fd731\" returns successfully"
Apr 30 03:33:10.457746 containerd[1971]: time="2025-04-30T03:33:10.457718010Z" level=info msg="StopPodSandbox for \"fd535e395b38f3888a30b4569c56a51ad325d95da04ff8bb056511bbb3b38efb\""
Apr 30 03:33:10.457840 containerd[1971]: time="2025-04-30T03:33:10.457800092Z" level=info msg="TearDown network for sandbox \"fd535e395b38f3888a30b4569c56a51ad325d95da04ff8bb056511bbb3b38efb\" successfully"
Apr 30 03:33:10.457840 containerd[1971]: time="2025-04-30T03:33:10.457813879Z" level=info msg="StopPodSandbox for \"fd535e395b38f3888a30b4569c56a51ad325d95da04ff8bb056511bbb3b38efb\" returns successfully"
Apr 30 03:33:10.458277 containerd[1971]: time="2025-04-30T03:33:10.458129245Z" level=info msg="RemovePodSandbox for \"fd535e395b38f3888a30b4569c56a51ad325d95da04ff8bb056511bbb3b38efb\""
Apr 30 03:33:10.458277 containerd[1971]: time="2025-04-30T03:33:10.458263662Z" level=info msg="Forcibly stopping sandbox \"fd535e395b38f3888a30b4569c56a51ad325d95da04ff8bb056511bbb3b38efb\""
Apr 30 03:33:10.458393 containerd[1971]: time="2025-04-30T03:33:10.458336186Z" level=info msg="TearDown network for sandbox \"fd535e395b38f3888a30b4569c56a51ad325d95da04ff8bb056511bbb3b38efb\" successfully"
Apr 30 03:33:10.463345 containerd[1971]: time="2025-04-30T03:33:10.463303495Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fd535e395b38f3888a30b4569c56a51ad325d95da04ff8bb056511bbb3b38efb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 30 03:33:10.463345 containerd[1971]: time="2025-04-30T03:33:10.463360887Z" level=info msg="RemovePodSandbox \"fd535e395b38f3888a30b4569c56a51ad325d95da04ff8bb056511bbb3b38efb\" returns successfully"
Apr 30 03:33:19.546234 systemd[1]: cri-containerd-a18b588ba595e3eaeeec5eccf65be76594c961db1a228e499c830b415bc67f72.scope: Deactivated successfully.
Apr 30 03:33:19.546479 systemd[1]: cri-containerd-a18b588ba595e3eaeeec5eccf65be76594c961db1a228e499c830b415bc67f72.scope: Consumed 3.038s CPU time, 26.4M memory peak, 0B memory swap peak.
Apr 30 03:33:19.572804 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a18b588ba595e3eaeeec5eccf65be76594c961db1a228e499c830b415bc67f72-rootfs.mount: Deactivated successfully.
Apr 30 03:33:19.601077 containerd[1971]: time="2025-04-30T03:33:19.600960967Z" level=info msg="shim disconnected" id=a18b588ba595e3eaeeec5eccf65be76594c961db1a228e499c830b415bc67f72 namespace=k8s.io
Apr 30 03:33:19.601077 containerd[1971]: time="2025-04-30T03:33:19.601036708Z" level=warning msg="cleaning up after shim disconnected" id=a18b588ba595e3eaeeec5eccf65be76594c961db1a228e499c830b415bc67f72 namespace=k8s.io
Apr 30 03:33:19.601077 containerd[1971]: time="2025-04-30T03:33:19.601048330Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:33:19.891341 kubelet[3223]: I0430 03:33:19.891232 3223 scope.go:117] "RemoveContainer" containerID="a18b588ba595e3eaeeec5eccf65be76594c961db1a228e499c830b415bc67f72"
Apr 30 03:33:19.894386 containerd[1971]: time="2025-04-30T03:33:19.894348076Z" level=info msg="CreateContainer within sandbox \"d2851c7da3c456011f87a93dbde7546a6e9489fc1aa6a03ff9bc8a31cfdf217d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 30 03:33:19.910745 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2196561272.mount: Deactivated successfully.
Apr 30 03:33:19.920356 containerd[1971]: time="2025-04-30T03:33:19.920292569Z" level=info msg="CreateContainer within sandbox \"d2851c7da3c456011f87a93dbde7546a6e9489fc1aa6a03ff9bc8a31cfdf217d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"6b9aec87134e337bdc33502571a2ea2ba162f77f1010bb6b33b7e17b741b6014\""
Apr 30 03:33:19.920929 containerd[1971]: time="2025-04-30T03:33:19.920852321Z" level=info msg="StartContainer for \"6b9aec87134e337bdc33502571a2ea2ba162f77f1010bb6b33b7e17b741b6014\""
Apr 30 03:33:19.954124 systemd[1]: Started cri-containerd-6b9aec87134e337bdc33502571a2ea2ba162f77f1010bb6b33b7e17b741b6014.scope - libcontainer container 6b9aec87134e337bdc33502571a2ea2ba162f77f1010bb6b33b7e17b741b6014.
Apr 30 03:33:20.004250 containerd[1971]: time="2025-04-30T03:33:20.004201367Z" level=info msg="StartContainer for \"6b9aec87134e337bdc33502571a2ea2ba162f77f1010bb6b33b7e17b741b6014\" returns successfully"
Apr 30 03:33:22.775513 kubelet[3223]: E0430 03:33:22.775430 3223 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-79?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 30 03:33:24.962788 systemd[1]: cri-containerd-5f3f2e3fda1fa8c59ec3775ed9efb6440a6cb566f6bfdf675c5c2e9189f0282f.scope: Deactivated successfully.
Apr 30 03:33:24.965042 systemd[1]: cri-containerd-5f3f2e3fda1fa8c59ec3775ed9efb6440a6cb566f6bfdf675c5c2e9189f0282f.scope: Consumed 1.407s CPU time, 19.2M memory peak, 0B memory swap peak.
Apr 30 03:33:24.989000 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f3f2e3fda1fa8c59ec3775ed9efb6440a6cb566f6bfdf675c5c2e9189f0282f-rootfs.mount: Deactivated successfully.
Apr 30 03:33:25.010829 containerd[1971]: time="2025-04-30T03:33:25.010749703Z" level=info msg="shim disconnected" id=5f3f2e3fda1fa8c59ec3775ed9efb6440a6cb566f6bfdf675c5c2e9189f0282f namespace=k8s.io
Apr 30 03:33:25.010829 containerd[1971]: time="2025-04-30T03:33:25.010805970Z" level=warning msg="cleaning up after shim disconnected" id=5f3f2e3fda1fa8c59ec3775ed9efb6440a6cb566f6bfdf675c5c2e9189f0282f namespace=k8s.io
Apr 30 03:33:25.010829 containerd[1971]: time="2025-04-30T03:33:25.010816337Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:33:25.904760 kubelet[3223]: I0430 03:33:25.904729 3223 scope.go:117] "RemoveContainer" containerID="5f3f2e3fda1fa8c59ec3775ed9efb6440a6cb566f6bfdf675c5c2e9189f0282f"
Apr 30 03:33:25.906684 containerd[1971]: time="2025-04-30T03:33:25.906633319Z" level=info msg="CreateContainer within sandbox \"8f47c14b29e25a1d1124f73c4b38b8e7307b310e9ba7ef1db1fa4ae995b32bbf\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Apr 30 03:33:25.923013 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1245567008.mount: Deactivated successfully.
Apr 30 03:33:25.928024 containerd[1971]: time="2025-04-30T03:33:25.927876759Z" level=info msg="CreateContainer within sandbox \"8f47c14b29e25a1d1124f73c4b38b8e7307b310e9ba7ef1db1fa4ae995b32bbf\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"d694023b7aac9e178eebd32df8563cd0a0450863b6e06837afb28393622c323a\""
Apr 30 03:33:25.928936 containerd[1971]: time="2025-04-30T03:33:25.928472907Z" level=info msg="StartContainer for \"d694023b7aac9e178eebd32df8563cd0a0450863b6e06837afb28393622c323a\""
Apr 30 03:33:25.980188 systemd[1]: Started cri-containerd-d694023b7aac9e178eebd32df8563cd0a0450863b6e06837afb28393622c323a.scope - libcontainer container d694023b7aac9e178eebd32df8563cd0a0450863b6e06837afb28393622c323a.
Apr 30 03:33:25.993344 systemd[1]: run-containerd-runc-k8s.io-d694023b7aac9e178eebd32df8563cd0a0450863b6e06837afb28393622c323a-runc.jOxrD2.mount: Deactivated successfully.
Apr 30 03:33:26.037025 containerd[1971]: time="2025-04-30T03:33:26.036963086Z" level=info msg="StartContainer for \"d694023b7aac9e178eebd32df8563cd0a0450863b6e06837afb28393622c323a\" returns successfully"
Apr 30 03:33:32.778791 kubelet[3223]: E0430 03:33:32.778493 3223 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ip-172-31-22-79)"