May 17 00:19:41.917895 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri May 16 22:44:56 -00 2025 May 17 00:19:41.917920 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e May 17 00:19:41.917933 kernel: BIOS-provided physical RAM map: May 17 00:19:41.917940 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable May 17 00:19:41.917946 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable May 17 00:19:41.917953 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20 May 17 00:19:41.917961 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved May 17 00:19:41.917968 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data May 17 00:19:41.917975 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS May 17 00:19:41.917984 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable May 17 00:19:41.917991 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved May 17 00:19:41.917998 kernel: NX (Execute Disable) protection: active May 17 00:19:41.918004 kernel: APIC: Static calls initialized May 17 00:19:41.918012 kernel: efi: EFI v2.7 by EDK II May 17 00:19:41.918020 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518 May 17 00:19:41.918031 kernel: SMBIOS 2.7 present. 
May 17 00:19:41.918038 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 May 17 00:19:41.918046 kernel: Hypervisor detected: KVM May 17 00:19:41.918054 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 17 00:19:41.918061 kernel: kvm-clock: using sched offset of 3940698906 cycles May 17 00:19:41.918070 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 17 00:19:41.918077 kernel: tsc: Detected 2499.996 MHz processor May 17 00:19:41.918085 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 17 00:19:41.918093 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 17 00:19:41.918101 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 May 17 00:19:41.918111 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs May 17 00:19:41.918119 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 17 00:19:41.918127 kernel: Using GB pages for direct mapping May 17 00:19:41.918135 kernel: Secure boot disabled May 17 00:19:41.918142 kernel: ACPI: Early table checksum verification disabled May 17 00:19:41.918150 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON) May 17 00:19:41.918158 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013) May 17 00:19:41.918165 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) May 17 00:19:41.918173 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) May 17 00:19:41.918184 kernel: ACPI: FACS 0x00000000789D0000 000040 May 17 00:19:41.918191 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) May 17 00:19:41.918199 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) May 17 00:19:41.918207 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) May 17 00:19:41.918215 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) May 17 00:19:41.918223 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) May 17 00:19:41.918235 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) May 17 00:19:41.918245 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) May 17 00:19:41.918253 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013) May 17 00:19:41.918262 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113] May 17 00:19:41.918270 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159] May 17 00:19:41.918278 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f] May 17 00:19:41.918286 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027] May 17 00:19:41.918297 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b] May 17 00:19:41.918305 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075] May 17 00:19:41.918313 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f] May 17 00:19:41.918321 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037] May 17 00:19:41.918330 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758] May 17 00:19:41.918338 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e] May 17 00:19:41.918346 kernel: ACPI: Reserving BGRT table memory at [mem 
0x78951000-0x78951037] May 17 00:19:41.918354 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 May 17 00:19:41.918362 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 May 17 00:19:41.918370 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] May 17 00:19:41.918382 kernel: NUMA: Initialized distance table, cnt=1 May 17 00:19:41.918390 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff] May 17 00:19:41.918398 kernel: Zone ranges: May 17 00:19:41.918406 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 17 00:19:41.918414 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff] May 17 00:19:41.918422 kernel: Normal empty May 17 00:19:41.918431 kernel: Movable zone start for each node May 17 00:19:41.918439 kernel: Early memory node ranges May 17 00:19:41.918447 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] May 17 00:19:41.918458 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff] May 17 00:19:41.918466 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff] May 17 00:19:41.918474 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff] May 17 00:19:41.918482 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 17 00:19:41.918490 kernel: On node 0, zone DMA: 96 pages in unavailable ranges May 17 00:19:41.918499 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges May 17 00:19:41.918507 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges May 17 00:19:41.918515 kernel: ACPI: PM-Timer IO Port: 0xb008 May 17 00:19:41.918523 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 17 00:19:41.918534 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 May 17 00:19:41.918543 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 17 00:19:41.918551 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 17 00:19:41.918691 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 17 00:19:41.918700 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 17 00:19:41.918708 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 17 00:19:41.918717 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 17 00:19:41.918725 kernel: TSC deadline timer available May 17 00:19:41.918733 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs May 17 00:19:41.918811 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() May 17 00:19:41.918820 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices May 17 00:19:41.918828 kernel: Booting paravirtualized kernel on KVM May 17 00:19:41.918837 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 17 00:19:41.918845 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 May 17 00:19:41.918853 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 May 17 00:19:41.918862 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 May 17 00:19:41.918869 kernel: pcpu-alloc: [0] 0 1 May 17 00:19:41.918877 kernel: kvm-guest: PV spinlocks enabled May 17 00:19:41.918886 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 17 00:19:41.918898 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected 
flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e May 17 00:19:41.918907 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 17 00:19:41.918915 kernel: random: crng init done May 17 00:19:41.918923 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 17 00:19:41.918931 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) May 17 00:19:41.918939 kernel: Fallback order for Node 0: 0 May 17 00:19:41.918948 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318 May 17 00:19:41.918958 kernel: Policy zone: DMA32 May 17 00:19:41.918967 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 17 00:19:41.918975 kernel: Memory: 1874608K/2037804K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42872K init, 2320K bss, 162936K reserved, 0K cma-reserved) May 17 00:19:41.918984 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 17 00:19:41.918992 kernel: Kernel/User page tables isolation: enabled May 17 00:19:41.919001 kernel: ftrace: allocating 37948 entries in 149 pages May 17 00:19:41.919009 kernel: ftrace: allocated 149 pages with 4 groups May 17 00:19:41.919017 kernel: Dynamic Preempt: voluntary May 17 00:19:41.919025 kernel: rcu: Preemptible hierarchical RCU implementation. May 17 00:19:41.919037 kernel: rcu: RCU event tracing is enabled. May 17 00:19:41.919045 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 17 00:19:41.919053 kernel: Trampoline variant of Tasks RCU enabled. May 17 00:19:41.919062 kernel: Rude variant of Tasks RCU enabled. May 17 00:19:41.919070 kernel: Tracing variant of Tasks RCU enabled. May 17 00:19:41.919078 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 17 00:19:41.919086 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 17 00:19:41.919095 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 May 17 00:19:41.919114 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 17 00:19:41.919123 kernel: Console: colour dummy device 80x25 May 17 00:19:41.919131 kernel: printk: console [tty0] enabled May 17 00:19:41.919140 kernel: printk: console [ttyS0] enabled May 17 00:19:41.919151 kernel: ACPI: Core revision 20230628 May 17 00:19:41.919160 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns May 17 00:19:41.919169 kernel: APIC: Switch to symmetric I/O mode setup May 17 00:19:41.919178 kernel: x2apic enabled May 17 00:19:41.919187 kernel: APIC: Switched APIC routing to: physical x2apic May 17 00:19:41.919196 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns May 17 00:19:41.919208 kernel: Calibrating delay loop (skipped) preset value.. 
4999.99 BogoMIPS (lpj=2499996) May 17 00:19:41.919216 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 May 17 00:19:41.919225 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 May 17 00:19:41.919234 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 17 00:19:41.919242 kernel: Spectre V2 : Mitigation: Retpolines May 17 00:19:41.919251 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 17 00:19:41.919260 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! May 17 00:19:41.919268 kernel: RETBleed: Vulnerable May 17 00:19:41.919277 kernel: Speculative Store Bypass: Vulnerable May 17 00:19:41.919288 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode May 17 00:19:41.919297 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode May 17 00:19:41.919306 kernel: GDS: Unknown: Dependent on hypervisor status May 17 00:19:41.919314 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 17 00:19:41.919323 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 17 00:19:41.919332 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 17 00:19:41.919340 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' May 17 00:19:41.919349 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' May 17 00:19:41.919357 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' May 17 00:19:41.919366 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' May 17 00:19:41.919375 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' May 17 00:19:41.919386 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' May 17 00:19:41.919395 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 17 00:19:41.919403 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 May 17 00:19:41.919412 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 May 17 00:19:41.919421 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 May 17 00:19:41.919430 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 May 17 00:19:41.919438 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 May 17 00:19:41.919447 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 May 17 00:19:41.919456 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. May 17 00:19:41.919464 kernel: Freeing SMP alternatives memory: 32K May 17 00:19:41.919473 kernel: pid_max: default: 32768 minimum: 301 May 17 00:19:41.919484 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 17 00:19:41.919493 kernel: landlock: Up and running. May 17 00:19:41.919501 kernel: SELinux: Initializing. May 17 00:19:41.919510 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 17 00:19:41.919518 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 17 00:19:41.919527 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) May 17 00:19:41.919536 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 17 00:19:41.919545 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
May 17 00:19:41.919554 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 17 00:19:41.919563 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. May 17 00:19:41.919577 kernel: signal: max sigframe size: 3632 May 17 00:19:41.919586 kernel: rcu: Hierarchical SRCU implementation. May 17 00:19:41.919595 kernel: rcu: Max phase no-delay instances is 400. May 17 00:19:41.919604 kernel: NMI watchdog: Perf NMI watchdog permanently disabled May 17 00:19:41.919613 kernel: smp: Bringing up secondary CPUs ... May 17 00:19:41.919622 kernel: smpboot: x86: Booting SMP configuration: May 17 00:19:41.919630 kernel: .... node #0, CPUs: #1 May 17 00:19:41.919640 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. May 17 00:19:41.919649 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. May 17 00:19:41.919661 kernel: smp: Brought up 1 node, 2 CPUs May 17 00:19:41.919670 kernel: smpboot: Max logical packages: 1 May 17 00:19:41.919679 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS) May 17 00:19:41.919687 kernel: devtmpfs: initialized May 17 00:19:41.919696 kernel: x86/mm: Memory block size: 128MB May 17 00:19:41.919705 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes) May 17 00:19:41.919714 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 17 00:19:41.919723 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 17 00:19:41.919732 kernel: pinctrl core: initialized pinctrl subsystem May 17 00:19:41.919785 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 17 00:19:41.919794 kernel: audit: initializing netlink subsys (disabled) May 17 00:19:41.919803 kernel: audit: type=2000 audit(1747441180.714:1): state=initialized audit_enabled=0 res=1 May 17 00:19:41.919812 kernel: thermal_sys: Registered thermal governor 'step_wise' May 17 00:19:41.919821 kernel: thermal_sys: Registered thermal governor 'user_space' May 17 00:19:41.919829 kernel: cpuidle: using governor menu May 17 00:19:41.919839 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 17 00:19:41.919847 kernel: dca service started, version 1.12.1 May 17 00:19:41.919856 kernel: PCI: Using configuration type 1 for base access May 17 00:19:41.919868 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 17 00:19:41.919877 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 17 00:19:41.919886 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 17 00:19:41.919895 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 17 00:19:41.919903 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 17 00:19:41.919912 kernel: ACPI: Added _OSI(Module Device) May 17 00:19:41.919921 kernel: ACPI: Added _OSI(Processor Device) May 17 00:19:41.919930 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 17 00:19:41.919938 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 17 00:19:41.919950 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded May 17 00:19:41.919959 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC May 17 00:19:41.919967 kernel: ACPI: Interpreter enabled May 17 00:19:41.919976 kernel: ACPI: PM: (supports S0 S5) May 17 00:19:41.919985 kernel: ACPI: Using IOAPIC for interrupt routing May 17 00:19:41.919994 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 17 00:19:41.920003 kernel: PCI: Using E820 reservations for host bridge windows May 17 00:19:41.920012 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F May 17 00:19:41.920021 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 17 00:19:41.920196 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] May 17 00:19:41.920296 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] May 17 00:19:41.920388 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge May 17 00:19:41.920399 kernel: acpiphp: Slot [3] registered May 17 00:19:41.920408 kernel: acpiphp: Slot [4] registered May 17 00:19:41.920417 kernel: acpiphp: Slot [5] registered May 17 00:19:41.920426 kernel: acpiphp: Slot [6] registered May 17 00:19:41.920438 kernel: acpiphp: Slot [7] registered May 17 00:19:41.920447 kernel: acpiphp: Slot [8] registered May 17 00:19:41.920455 kernel: acpiphp: Slot [9] registered May 17 00:19:41.920464 kernel: acpiphp: Slot [10] registered May 17 00:19:41.920473 kernel: acpiphp: Slot [11] registered May 17 00:19:41.920482 kernel: acpiphp: Slot [12] registered May 17 00:19:41.920491 kernel: acpiphp: Slot [13] registered May 17 00:19:41.920505 kernel: acpiphp: Slot [14] registered May 17 00:19:41.920514 kernel: acpiphp: Slot [15] registered May 17 00:19:41.920530 kernel: acpiphp: Slot [16] registered May 17 00:19:41.920550 kernel: acpiphp: Slot [17] registered May 17 00:19:41.920559 kernel: acpiphp: Slot [18] registered May 17 00:19:41.920568 kernel: acpiphp: Slot [19] registered May 17 00:19:41.920577 kernel: acpiphp: Slot [20] registered May 17 00:19:41.920586 kernel: acpiphp: Slot [21] registered May 17 00:19:41.920594 kernel: acpiphp: Slot [22] registered May 17 00:19:41.920603 kernel: acpiphp: Slot [23] registered May 17 00:19:41.920621 kernel: acpiphp: Slot [24] registered May 17 00:19:41.920630 kernel: acpiphp: Slot [25] registered May 17 00:19:41.920642 kernel: acpiphp: Slot [26] registered May 17 00:19:41.920650 kernel: acpiphp: Slot [27] registered May 17 00:19:41.920659 kernel: acpiphp: Slot [28] registered May 17 00:19:41.920668 kernel: acpiphp: Slot [29] registered May 17 00:19:41.920677 kernel: acpiphp: Slot [30] registered May 17 00:19:41.920686 kernel: acpiphp: Slot [31] registered May 17 00:19:41.920695 kernel: PCI host bridge to bus 0000:00 
May 17 00:19:41.920815 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 17 00:19:41.920900 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 17 00:19:41.920984 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 17 00:19:41.921064 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] May 17 00:19:41.921154 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window] May 17 00:19:41.921235 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 17 00:19:41.921343 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 May 17 00:19:41.921445 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 May 17 00:19:41.921554 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 May 17 00:19:41.921646 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI May 17 00:19:41.921748 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff May 17 00:19:41.921878 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff May 17 00:19:41.922047 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff May 17 00:19:41.922197 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff May 17 00:19:41.922300 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff May 17 00:19:41.922408 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff May 17 00:19:41.922517 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 May 17 00:19:41.922617 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref] May 17 00:19:41.922714 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] May 17 00:19:41.922825 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb May 17 00:19:41.922924 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 17 00:19:41.923027 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 May 17 00:19:41.923129 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff] May 17 00:19:41.923231 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 May 17 00:19:41.923329 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff] May 17 00:19:41.923342 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 17 00:19:41.923352 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 17 00:19:41.923361 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 17 00:19:41.923371 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 17 00:19:41.923383 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 May 17 00:19:41.923392 kernel: iommu: Default domain type: Translated May 17 00:19:41.923401 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 17 00:19:41.923410 kernel: efivars: Registered efivars operations May 17 00:19:41.923419 kernel: PCI: Using ACPI for IRQ routing May 17 00:19:41.923429 kernel: PCI: pci_cache_line_size set to 64 bytes May 17 00:19:41.923438 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff] May 17 00:19:41.923447 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff] May 17 00:19:41.923544 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device May 17 00:19:41.923644 kernel: pci 0000:00:03.0: vgaarb: bridge control possible May 17 00:19:41.925531 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 17 00:19:41.925562 kernel: vgaarb: loaded May 17 00:19:41.925573 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 
8, 0, 0, 0, 0, 0, 0 May 17 00:19:41.925583 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter May 17 00:19:41.925592 kernel: clocksource: Switched to clocksource kvm-clock May 17 00:19:41.925601 kernel: VFS: Disk quotas dquot_6.6.0 May 17 00:19:41.925611 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 17 00:19:41.925620 kernel: pnp: PnP ACPI init May 17 00:19:41.925636 kernel: pnp: PnP ACPI: found 5 devices May 17 00:19:41.925645 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 17 00:19:41.925654 kernel: NET: Registered PF_INET protocol family May 17 00:19:41.925663 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) May 17 00:19:41.925673 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) May 17 00:19:41.925681 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 17 00:19:41.925690 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) May 17 00:19:41.925700 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) May 17 00:19:41.925709 kernel: TCP: Hash tables configured (established 16384 bind 16384) May 17 00:19:41.925720 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) May 17 00:19:41.925730 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) May 17 00:19:41.925752 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 17 00:19:41.925761 kernel: NET: Registered PF_XDP protocol family May 17 00:19:41.925892 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 17 00:19:41.925978 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 17 00:19:41.926076 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 17 00:19:41.926175 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] May 17 00:19:41.926263 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window] May 17 00:19:41.926364 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers May 17 00:19:41.926377 kernel: PCI: CLS 0 bytes, default 64 May 17 00:19:41.926386 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer May 17 00:19:41.926396 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns May 17 00:19:41.926405 kernel: clocksource: Switched to clocksource tsc May 17 00:19:41.926414 kernel: Initialise system trusted keyrings May 17 00:19:41.926423 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 May 17 00:19:41.926432 kernel: Key type asymmetric registered May 17 00:19:41.926445 kernel: Asymmetric key parser 'x509' registered May 17 00:19:41.926454 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) May 17 00:19:41.926463 kernel: io scheduler mq-deadline registered May 17 00:19:41.926473 kernel: io scheduler kyber registered May 17 00:19:41.926481 kernel: io scheduler bfq registered May 17 00:19:41.926491 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 17 00:19:41.926500 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 17 00:19:41.926509 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 17 00:19:41.926517 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 17 00:19:41.926529 kernel: i8042: Warning: Keylock active May 17 00:19:41.926539 kernel: serio: i8042 
KBD port at 0x60,0x64 irq 1 May 17 00:19:41.926548 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 17 00:19:41.926647 kernel: rtc_cmos 00:00: RTC can wake from S4 May 17 00:19:41.926735 kernel: rtc_cmos 00:00: registered as rtc0 May 17 00:19:41.928853 kernel: rtc_cmos 00:00: setting system clock to 2025-05-17T00:19:41 UTC (1747441181) May 17 00:19:41.928945 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram May 17 00:19:41.928964 kernel: intel_pstate: CPU model not supported May 17 00:19:41.928974 kernel: efifb: probing for efifb May 17 00:19:41.928983 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k May 17 00:19:41.928992 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 May 17 00:19:41.929001 kernel: efifb: scrolling: redraw May 17 00:19:41.929010 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 May 17 00:19:41.929020 kernel: Console: switching to colour frame buffer device 100x37 May 17 00:19:41.929029 kernel: fb0: EFI VGA frame buffer device May 17 00:19:41.929038 kernel: pstore: Using crash dump compression: deflate May 17 00:19:41.929047 kernel: pstore: Registered efi_pstore as persistent store backend May 17 00:19:41.929059 kernel: NET: Registered PF_INET6 protocol family May 17 00:19:41.929067 kernel: Segment Routing with IPv6 May 17 00:19:41.929076 kernel: In-situ OAM (IOAM) with IPv6 May 17 00:19:41.929085 kernel: NET: Registered PF_PACKET protocol family May 17 00:19:41.929094 kernel: Key type dns_resolver registered May 17 00:19:41.929103 kernel: IPI shorthand broadcast: enabled May 17 00:19:41.929141 kernel: sched_clock: Marking stable (455001559, 136309206)->(684785940, -93475175) May 17 00:19:41.929154 kernel: registered taskstats version 1 May 17 00:19:41.929164 kernel: Loading compiled-in X.509 certificates May 17 00:19:41.929176 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 85b8d1234ceca483cb3defc2030d93f7792663c9' May 17 00:19:41.929186 kernel: Key type .fscrypt registered May 17 00:19:41.929195 kernel: Key type fscrypt-provisioning registered May 17 00:19:41.929204 kernel: ima: No TPM chip found, activating TPM-bypass! May 17 00:19:41.929213 kernel: ima: Allocated hash algorithm: sha1 May 17 00:19:41.929223 kernel: ima: No architecture policies found May 17 00:19:41.929232 kernel: clk: Disabling unused clocks May 17 00:19:41.929241 kernel: Freeing unused kernel image (initmem) memory: 42872K May 17 00:19:41.929253 kernel: Write protecting the kernel read-only data: 36864k May 17 00:19:41.929263 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K May 17 00:19:41.929272 kernel: Run /init as init process May 17 00:19:41.929281 kernel: with arguments: May 17 00:19:41.929291 kernel: /init May 17 00:19:41.929300 kernel: with environment: May 17 00:19:41.929309 kernel: HOME=/ May 17 00:19:41.929318 kernel: TERM=linux May 17 00:19:41.929327 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 17 00:19:41.929340 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 17 00:19:41.929354 systemd[1]: Detected virtualization amazon. May 17 00:19:41.929364 systemd[1]: Detected architecture x86-64. May 17 00:19:41.929373 systemd[1]: Running in initrd. 
May 17 00:19:41.929383 systemd[1]: No hostname configured, using default hostname. May 17 00:19:41.929392 systemd[1]: Hostname set to . May 17 00:19:41.929402 systemd[1]: Initializing machine ID from VM UUID. May 17 00:19:41.929415 systemd[1]: Queued start job for default target initrd.target. May 17 00:19:41.929425 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:19:41.929435 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:19:41.929446 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 17 00:19:41.929456 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 17 00:19:41.929465 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 17 00:19:41.929475 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 17 00:19:41.929489 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 17 00:19:41.929499 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 17 00:19:41.929509 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:19:41.929519 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 17 00:19:41.929529 systemd[1]: Reached target paths.target - Path Units. May 17 00:19:41.929541 systemd[1]: Reached target slices.target - Slice Units. May 17 00:19:41.929550 systemd[1]: Reached target swap.target - Swaps. May 17 00:19:41.929560 systemd[1]: Reached target timers.target - Timer Units. May 17 00:19:41.929570 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 17 00:19:41.929580 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 17 00:19:41.929590 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 17 00:19:41.929600 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 17 00:19:41.929609 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 17 00:19:41.929619 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 17 00:19:41.929631 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:19:41.929641 systemd[1]: Reached target sockets.target - Socket Units. May 17 00:19:41.929651 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 17 00:19:41.929661 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 17 00:19:41.929670 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 17 00:19:41.929680 systemd[1]: Starting systemd-fsck-usr.service... May 17 00:19:41.929690 systemd[1]: Starting systemd-journald.service - Journal Service... May 17 00:19:41.929699 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 17 00:19:41.929712 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:19:41.929722 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 17 00:19:41.930844 systemd-journald[178]: Collecting audit messages is disabled. May 17 00:19:41.930884 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
May 17 00:19:41.930901 systemd[1]: Finished systemd-fsck-usr.service. May 17 00:19:41.930913 systemd-journald[178]: Journal started May 17 00:19:41.930937 systemd-journald[178]: Runtime Journal (/run/log/journal/ec223befecbd051ada6e76c9ea38505f) is 4.7M, max 38.2M, 33.4M free. May 17 00:19:41.924942 systemd-modules-load[179]: Inserted module 'overlay' May 17 00:19:41.946174 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 17 00:19:41.950245 systemd[1]: Started systemd-journald.service - Journal Service. May 17 00:19:41.950269 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:19:41.959758 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 17 00:19:41.961061 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:19:41.963636 kernel: Bridge firewalling registered May 17 00:19:41.961434 systemd-modules-load[179]: Inserted module 'br_netfilter' May 17 00:19:41.964530 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 17 00:19:41.967128 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 17 00:19:41.968648 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 00:19:41.980013 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 17 00:19:41.985970 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 17 00:19:41.994823 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:19:41.999957 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 17 00:19:42.007789 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 17 00:19:42.008065 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:19:42.010660 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 17 00:19:42.013924 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 17 00:19:42.024984 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:19:42.036434 dracut-cmdline[211]: dracut-dracut-053 May 17 00:19:42.039602 dracut-cmdline[211]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e May 17 00:19:42.051898 systemd-resolved[212]: Positive Trust Anchors: May 17 00:19:42.051910 systemd-resolved[212]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:19:42.051947 systemd-resolved[212]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 17 00:19:42.056757 systemd-resolved[212]: Defaulting to hostname 'linux'. May 17 00:19:42.059392 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 17 00:19:42.059845 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 17 00:19:42.115776 kernel: SCSI subsystem initialized May 17 00:19:42.126775 kernel: Loading iSCSI transport class v2.0-870. May 17 00:19:42.138766 kernel: iscsi: registered transport (tcp) May 17 00:19:42.160806 kernel: iscsi: registered transport (qla4xxx) May 17 00:19:42.160885 kernel: QLogic iSCSI HBA Driver May 17 00:19:42.200196 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 17 00:19:42.209026 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 17 00:19:42.235298 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 17 00:19:42.235375 kernel: device-mapper: uevent: version 1.0.3 May 17 00:19:42.235399 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 17 00:19:42.278775 kernel: raid6: avx512x4 gen() 17856 MB/s May 17 00:19:42.296766 kernel: raid6: avx512x2 gen() 18000 MB/s May 17 00:19:42.314771 kernel: raid6: avx512x1 gen() 17695 MB/s May 17 00:19:42.332773 kernel: raid6: avx2x4 gen() 17414 MB/s May 17 00:19:42.350771 kernel: raid6: avx2x2 gen() 17390 MB/s May 17 00:19:42.368967 kernel: raid6: avx2x1 gen() 13189 MB/s May 17 00:19:42.369023 kernel: raid6: using algorithm avx512x2 gen() 18000 MB/s May 17 00:19:42.387926 kernel: raid6: .... xor() 24411 MB/s, rmw enabled May 17 00:19:42.387985 kernel: raid6: using avx512x2 recovery algorithm May 17 00:19:42.409780 kernel: xor: automatically using best checksumming function avx May 17 00:19:42.571772 kernel: Btrfs loaded, zoned=no, fsverity=no May 17 00:19:42.582669 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 17 00:19:42.590941 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:19:42.604042 systemd-udevd[396]: Using default interface naming scheme 'v255'. May 17 00:19:42.609113 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:19:42.615927 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 17 00:19:42.637551 dracut-pre-trigger[401]: rd.md=0: removing MD RAID activation May 17 00:19:42.666828 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 17 00:19:42.671949 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 17 00:19:42.724088 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:19:42.734145 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
May 17 00:19:42.757978 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 17 00:19:42.762045 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 17 00:19:42.763722 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:19:42.764778 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 17 00:19:42.773029 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 17 00:19:42.790980 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 17 00:19:42.814767 kernel: cryptd: max_cpu_qlen set to 1000 May 17 00:19:42.833480 kernel: ena 0000:00:05.0: ENA device version: 0.10 May 17 00:19:42.833798 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 May 17 00:19:42.848767 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. May 17 00:19:42.857890 kernel: AVX2 version of gcm_enc/dec engaged. May 17 00:19:42.857951 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:00:a7:64:b2:69 May 17 00:19:42.862147 kernel: AES CTR mode by8 optimization enabled May 17 00:19:42.862804 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 00:19:42.863825 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:19:42.866910 (udev-worker)[452]: Network interface NamePolicy= disabled on kernel command line. May 17 00:19:42.867881 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:19:42.869008 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:19:42.869321 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:19:42.873000 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:19:42.880225 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:19:42.891768 kernel: nvme nvme0: pci function 0000:00:04.0 May 17 00:19:42.892006 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 May 17 00:19:42.891621 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:19:42.891751 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:19:42.903113 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:19:42.911768 kernel: nvme nvme0: 2/0/0 default/read/poll queues May 17 00:19:42.917817 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 17 00:19:42.917890 kernel: GPT:9289727 != 16777215 May 17 00:19:42.917911 kernel: GPT:Alternate GPT header not at the end of the disk. May 17 00:19:42.917930 kernel: GPT:9289727 != 16777215 May 17 00:19:42.917949 kernel: GPT: Use GNU Parted to correct GPT errors. May 17 00:19:42.917969 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 17 00:19:42.932018 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:19:42.938115 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:19:42.956805 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
May 17 00:19:43.026790 kernel: BTRFS: device fsid 7f88d479-6686-439c-8052-b96f0a9d77bc devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (452) May 17 00:19:43.040720 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. May 17 00:19:43.042794 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (456) May 17 00:19:43.107786 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. May 17 00:19:43.114513 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. May 17 00:19:43.120393 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. May 17 00:19:43.120977 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. May 17 00:19:43.126924 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 17 00:19:43.134414 disk-uuid[630]: Primary Header is updated. May 17 00:19:43.134414 disk-uuid[630]: Secondary Entries is updated. May 17 00:19:43.134414 disk-uuid[630]: Secondary Header is updated. May 17 00:19:43.140784 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 17 00:19:43.146570 kernel: GPT:disk_guids don't match. May 17 00:19:43.146631 kernel: GPT: Use GNU Parted to correct GPT errors. May 17 00:19:43.146644 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 17 00:19:43.151763 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 17 00:19:44.157951 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 17 00:19:44.158028 disk-uuid[631]: The operation has completed successfully. May 17 00:19:44.288863 systemd[1]: disk-uuid.service: Deactivated successfully. May 17 00:19:44.288982 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 17 00:19:44.310925 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 17 00:19:44.315903 sh[972]: Success May 17 00:19:44.335765 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" May 17 00:19:44.426562 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 17 00:19:44.432861 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 17 00:19:44.435341 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 17 00:19:44.470973 kernel: BTRFS info (device dm-0): first mount of filesystem 7f88d479-6686-439c-8052-b96f0a9d77bc May 17 00:19:44.471030 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 17 00:19:44.474181 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 17 00:19:44.474234 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 17 00:19:44.475473 kernel: BTRFS info (device dm-0): using free space tree May 17 00:19:44.594766 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 17 00:19:44.619389 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 17 00:19:44.620448 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 17 00:19:44.626938 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 17 00:19:44.628889 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
May 17 00:19:44.650907 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:19:44.650974 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm May 17 00:19:44.650988 kernel: BTRFS info (device nvme0n1p6): using free space tree May 17 00:19:44.657884 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 17 00:19:44.666261 systemd[1]: mnt-oem.mount: Deactivated successfully. May 17 00:19:44.669781 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:19:44.676120 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 17 00:19:44.683874 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 17 00:19:44.716583 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 17 00:19:44.721965 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 17 00:19:44.743831 systemd-networkd[1165]: lo: Link UP May 17 00:19:44.743842 systemd-networkd[1165]: lo: Gained carrier May 17 00:19:44.745045 systemd-networkd[1165]: Enumeration completed May 17 00:19:44.745493 systemd-networkd[1165]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:19:44.745497 systemd-networkd[1165]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:19:44.746855 systemd[1]: Started systemd-networkd.service - Network Configuration. May 17 00:19:44.747265 systemd[1]: Reached target network.target - Network. May 17 00:19:44.748309 systemd-networkd[1165]: eth0: Link UP May 17 00:19:44.748319 systemd-networkd[1165]: eth0: Gained carrier May 17 00:19:44.748328 systemd-networkd[1165]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:19:44.765877 systemd-networkd[1165]: eth0: DHCPv4 address 172.31.27.59/20, gateway 172.31.16.1 acquired from 172.31.16.1 May 17 00:19:45.102052 ignition[1110]: Ignition 2.19.0 May 17 00:19:45.102066 ignition[1110]: Stage: fetch-offline May 17 00:19:45.102335 ignition[1110]: no configs at "/usr/lib/ignition/base.d" May 17 00:19:45.102347 ignition[1110]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 17 00:19:45.104394 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 17 00:19:45.102676 ignition[1110]: Ignition finished successfully May 17 00:19:45.110962 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
May 17 00:19:45.126476 ignition[1174]: Ignition 2.19.0 May 17 00:19:45.126491 ignition[1174]: Stage: fetch May 17 00:19:45.126996 ignition[1174]: no configs at "/usr/lib/ignition/base.d" May 17 00:19:45.127010 ignition[1174]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 17 00:19:45.127131 ignition[1174]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 17 00:19:45.162423 ignition[1174]: PUT result: OK May 17 00:19:45.166466 ignition[1174]: parsed url from cmdline: "" May 17 00:19:45.166475 ignition[1174]: no config URL provided May 17 00:19:45.166483 ignition[1174]: reading system config file "/usr/lib/ignition/user.ign" May 17 00:19:45.166494 ignition[1174]: no config at "/usr/lib/ignition/user.ign" May 17 00:19:45.166511 ignition[1174]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 17 00:19:45.167379 ignition[1174]: PUT result: OK May 17 00:19:45.167423 ignition[1174]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 May 17 00:19:45.169259 ignition[1174]: GET result: OK May 17 00:19:45.169371 ignition[1174]: parsing config with SHA512: 1bcea59f7c89ff5f5055ec9173bea3be9cd9b4f12ebf046bf5fa7027f3551af3e7c858f1b4fb8ce11837671c1e573e3335e00a98f57345d99c6183d4918de1e9 May 17 00:19:45.174955 unknown[1174]: fetched base config from "system" May 17 00:19:45.174969 unknown[1174]: fetched base config from "system" May 17 00:19:45.174982 unknown[1174]: fetched user config from "aws" May 17 00:19:45.176689 ignition[1174]: fetch: fetch complete May 17 00:19:45.176703 ignition[1174]: fetch: fetch passed May 17 00:19:45.176779 ignition[1174]: Ignition finished successfully May 17 00:19:45.179314 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 17 00:19:45.184015 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 17 00:19:45.201731 ignition[1181]: Ignition 2.19.0 May 17 00:19:45.201758 ignition[1181]: Stage: kargs May 17 00:19:45.202247 ignition[1181]: no configs at "/usr/lib/ignition/base.d" May 17 00:19:45.202260 ignition[1181]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 17 00:19:45.202380 ignition[1181]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 17 00:19:45.203257 ignition[1181]: PUT result: OK May 17 00:19:45.206046 ignition[1181]: kargs: kargs passed May 17 00:19:45.206127 ignition[1181]: Ignition finished successfully May 17 00:19:45.207924 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 17 00:19:45.212001 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 17 00:19:45.234906 ignition[1187]: Ignition 2.19.0 May 17 00:19:45.234920 ignition[1187]: Stage: disks May 17 00:19:45.235415 ignition[1187]: no configs at "/usr/lib/ignition/base.d" May 17 00:19:45.235428 ignition[1187]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 17 00:19:45.235545 ignition[1187]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 17 00:19:45.238204 ignition[1187]: PUT result: OK May 17 00:19:45.242876 ignition[1187]: disks: disks passed May 17 00:19:45.242956 ignition[1187]: Ignition finished successfully May 17 00:19:45.244218 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 17 00:19:45.245369 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 17 00:19:45.246068 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 17 00:19:45.246404 systemd[1]: Reached target local-fs.target - Local File Systems. 
May 17 00:19:45.246950 systemd[1]: Reached target sysinit.target - System Initialization. May 17 00:19:45.247496 systemd[1]: Reached target basic.target - Basic System. May 17 00:19:45.252986 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 17 00:19:45.283402 systemd-fsck[1195]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 17 00:19:45.286214 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 17 00:19:45.291912 systemd[1]: Mounting sysroot.mount - /sysroot... May 17 00:19:45.388761 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 278698a4-82b6-49b4-b6df-f7999ed4e35e r/w with ordered data mode. Quota mode: none. May 17 00:19:45.389840 systemd[1]: Mounted sysroot.mount - /sysroot. May 17 00:19:45.390916 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 17 00:19:45.406885 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 17 00:19:45.408986 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 17 00:19:45.410200 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 17 00:19:45.410613 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 17 00:19:45.410640 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 17 00:19:45.416883 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 17 00:19:45.418279 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 17 00:19:45.426758 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1214) May 17 00:19:45.426807 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:19:45.430509 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm May 17 00:19:45.430564 kernel: BTRFS info (device nvme0n1p6): using free space tree May 17 00:19:45.442769 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 17 00:19:45.444619 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 17 00:19:45.891938 initrd-setup-root[1238]: cut: /sysroot/etc/passwd: No such file or directory May 17 00:19:45.907414 initrd-setup-root[1245]: cut: /sysroot/etc/group: No such file or directory May 17 00:19:45.911377 initrd-setup-root[1252]: cut: /sysroot/etc/shadow: No such file or directory May 17 00:19:45.928457 initrd-setup-root[1259]: cut: /sysroot/etc/gshadow: No such file or directory May 17 00:19:46.060912 systemd-networkd[1165]: eth0: Gained IPv6LL May 17 00:19:46.202719 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 17 00:19:46.213952 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 17 00:19:46.218169 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 17 00:19:46.226920 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:19:46.226546 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
May 17 00:19:46.264560 ignition[1326]: INFO : Ignition 2.19.0 May 17 00:19:46.264560 ignition[1326]: INFO : Stage: mount May 17 00:19:46.264560 ignition[1326]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:19:46.264560 ignition[1326]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" May 17 00:19:46.264560 ignition[1326]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 17 00:19:46.269883 ignition[1326]: INFO : PUT result: OK May 17 00:19:46.270468 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 17 00:19:46.272820 ignition[1326]: INFO : mount: mount passed May 17 00:19:46.273390 ignition[1326]: INFO : Ignition finished successfully May 17 00:19:46.273977 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 17 00:19:46.279874 systemd[1]: Starting ignition-files.service - Ignition (files)... May 17 00:19:46.287254 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 17 00:19:46.310761 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1338) May 17 00:19:46.310820 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac May 17 00:19:46.312836 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm May 17 00:19:46.315348 kernel: BTRFS info (device nvme0n1p6): using free space tree May 17 00:19:46.319778 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 17 00:19:46.322454 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 17 00:19:46.342878 ignition[1354]: INFO : Ignition 2.19.0 May 17 00:19:46.342878 ignition[1354]: INFO : Stage: files May 17 00:19:46.343961 ignition[1354]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:19:46.343961 ignition[1354]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" May 17 00:19:46.343961 ignition[1354]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 17 00:19:46.344922 ignition[1354]: INFO : PUT result: OK May 17 00:19:46.346863 ignition[1354]: DEBUG : files: compiled without relabeling support, skipping May 17 00:19:46.358359 ignition[1354]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 17 00:19:46.358359 ignition[1354]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 17 00:19:46.376083 ignition[1354]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 17 00:19:46.376988 ignition[1354]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 17 00:19:46.376988 ignition[1354]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 17 00:19:46.376659 unknown[1354]: wrote ssh authorized keys file for user: core May 17 00:19:46.379080 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 17 00:19:46.379080 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 17 00:19:46.488580 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 17 00:19:46.677548 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 17 00:19:46.677548 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file 
"/sysroot/opt/bin/cilium.tar.gz" May 17 00:19:46.679347 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 May 17 00:19:47.145481 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 17 00:19:47.381052 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 17 00:19:47.382200 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 17 00:19:47.382200 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 17 00:19:47.382200 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 17 00:19:47.382200 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 17 00:19:47.382200 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 00:19:47.382200 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 00:19:47.382200 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:19:47.382200 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:19:47.387930 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:19:47.387930 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:19:47.387930 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:19:47.387930 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:19:47.387930 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:19:47.387930 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 May 17 00:19:48.291959 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 17 00:19:48.683598 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:19:48.683598 ignition[1354]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 17 00:19:48.686321 ignition[1354]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:19:48.687201 ignition[1354]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at 
"/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:19:48.687201 ignition[1354]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 17 00:19:48.687201 ignition[1354]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" May 17 00:19:48.687201 ignition[1354]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" May 17 00:19:48.687201 ignition[1354]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" May 17 00:19:48.687201 ignition[1354]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" May 17 00:19:48.687201 ignition[1354]: INFO : files: files passed May 17 00:19:48.687201 ignition[1354]: INFO : Ignition finished successfully May 17 00:19:48.688228 systemd[1]: Finished ignition-files.service - Ignition (files). May 17 00:19:48.695989 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 17 00:19:48.698093 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 17 00:19:48.701404 systemd[1]: ignition-quench.service: Deactivated successfully. May 17 00:19:48.701525 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 17 00:19:48.711720 initrd-setup-root-after-ignition[1383]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:19:48.711720 initrd-setup-root-after-ignition[1383]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 17 00:19:48.713760 initrd-setup-root-after-ignition[1387]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:19:48.715997 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 17 00:19:48.717040 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 17 00:19:48.721953 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 17 00:19:48.750934 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 17 00:19:48.751056 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 17 00:19:48.752454 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 17 00:19:48.753195 systemd[1]: Reached target initrd.target - Initrd Default Target. May 17 00:19:48.753979 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 17 00:19:48.755100 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 17 00:19:48.772377 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 17 00:19:48.783977 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 17 00:19:48.795830 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 17 00:19:48.796543 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:19:48.797569 systemd[1]: Stopped target timers.target - Timer Units. May 17 00:19:48.798409 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 17 00:19:48.798588 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 17 00:19:48.799716 systemd[1]: Stopped target initrd.target - Initrd Default Target. 
May 17 00:19:48.800569 systemd[1]: Stopped target basic.target - Basic System. May 17 00:19:48.801417 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 17 00:19:48.802186 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 17 00:19:48.802960 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 17 00:19:48.803729 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 17 00:19:48.804503 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 17 00:19:48.805417 systemd[1]: Stopped target sysinit.target - System Initialization. May 17 00:19:48.806570 systemd[1]: Stopped target local-fs.target - Local File Systems. May 17 00:19:48.807342 systemd[1]: Stopped target swap.target - Swaps. May 17 00:19:48.808070 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 17 00:19:48.808249 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 17 00:19:48.809420 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 17 00:19:48.810218 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:19:48.810910 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 17 00:19:48.811634 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:19:48.812110 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 17 00:19:48.812278 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 17 00:19:48.813768 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 17 00:19:48.813958 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 17 00:19:48.814682 systemd[1]: ignition-files.service: Deactivated successfully. May 17 00:19:48.814866 systemd[1]: Stopped ignition-files.service - Ignition (files). May 17 00:19:48.827227 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 17 00:19:48.827681 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 17 00:19:48.827837 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:19:48.832999 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 17 00:19:48.833459 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 17 00:19:48.833649 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:19:48.834191 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 17 00:19:48.834283 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 17 00:19:48.839208 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 17 00:19:48.839293 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
May 17 00:19:48.844125 ignition[1407]: INFO : Ignition 2.19.0 May 17 00:19:48.844125 ignition[1407]: INFO : Stage: umount May 17 00:19:48.848088 ignition[1407]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:19:48.848088 ignition[1407]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" May 17 00:19:48.848088 ignition[1407]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 17 00:19:48.848088 ignition[1407]: INFO : PUT result: OK May 17 00:19:48.849512 ignition[1407]: INFO : umount: umount passed May 17 00:19:48.849512 ignition[1407]: INFO : Ignition finished successfully May 17 00:19:48.851373 systemd[1]: ignition-mount.service: Deactivated successfully. May 17 00:19:48.851472 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 17 00:19:48.852355 systemd[1]: ignition-disks.service: Deactivated successfully. May 17 00:19:48.852439 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 17 00:19:48.852835 systemd[1]: ignition-kargs.service: Deactivated successfully. May 17 00:19:48.852881 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 17 00:19:48.853207 systemd[1]: ignition-fetch.service: Deactivated successfully. May 17 00:19:48.853245 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 17 00:19:48.853549 systemd[1]: Stopped target network.target - Network. May 17 00:19:48.853981 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 17 00:19:48.854034 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 17 00:19:48.854665 systemd[1]: Stopped target paths.target - Path Units. May 17 00:19:48.855321 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 17 00:19:48.858809 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:19:48.859173 systemd[1]: Stopped target slices.target - Slice Units. May 17 00:19:48.859478 systemd[1]: Stopped target sockets.target - Socket Units. May 17 00:19:48.859780 systemd[1]: iscsid.socket: Deactivated successfully. May 17 00:19:48.859824 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 17 00:19:48.860106 systemd[1]: iscsiuio.socket: Deactivated successfully. May 17 00:19:48.860140 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 17 00:19:48.860410 systemd[1]: ignition-setup.service: Deactivated successfully. May 17 00:19:48.860455 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 17 00:19:48.860732 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 17 00:19:48.862832 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 17 00:19:48.863308 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 17 00:19:48.863639 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 17 00:19:48.865910 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 17 00:19:48.867805 systemd-networkd[1165]: eth0: DHCPv6 lease lost May 17 00:19:48.869931 systemd[1]: systemd-networkd.service: Deactivated successfully. May 17 00:19:48.870043 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 17 00:19:48.870717 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 17 00:19:48.870794 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 17 00:19:48.875978 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
May 17 00:19:48.876403 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 17 00:19:48.876472 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 17 00:19:48.879125 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:19:48.880128 systemd[1]: systemd-resolved.service: Deactivated successfully. May 17 00:19:48.880236 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 17 00:19:48.883241 systemd[1]: sysroot-boot.service: Deactivated successfully. May 17 00:19:48.883359 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 17 00:19:48.886657 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 17 00:19:48.887241 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 17 00:19:48.888244 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 00:19:48.888293 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 17 00:19:48.888670 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 17 00:19:48.888711 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 17 00:19:48.889042 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 17 00:19:48.889078 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:19:48.889694 systemd[1]: systemd-udevd.service: Deactivated successfully. May 17 00:19:48.893962 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:19:48.895028 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 17 00:19:48.895103 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 17 00:19:48.896134 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 17 00:19:48.896168 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:19:48.896522 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 17 00:19:48.896567 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 17 00:19:48.898283 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 17 00:19:48.898330 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 17 00:19:48.899014 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 00:19:48.899058 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:19:48.905915 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 17 00:19:48.906345 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 17 00:19:48.906404 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:19:48.908236 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:19:48.908290 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:19:48.909023 systemd[1]: network-cleanup.service: Deactivated successfully. May 17 00:19:48.909209 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 17 00:19:48.913368 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 17 00:19:48.913485 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 17 00:19:48.914580 systemd[1]: Reached target initrd-switch-root.target - Switch Root. 
May 17 00:19:48.915698 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 17 00:19:48.927380 systemd[1]: Switching root. May 17 00:19:48.966529 systemd-journald[178]: Journal stopped May 17 00:19:50.748434 systemd-journald[178]: Received SIGTERM from PID 1 (systemd). May 17 00:19:50.748543 kernel: SELinux: policy capability network_peer_controls=1 May 17 00:19:50.748575 kernel: SELinux: policy capability open_perms=1 May 17 00:19:50.748596 kernel: SELinux: policy capability extended_socket_class=1 May 17 00:19:50.748617 kernel: SELinux: policy capability always_check_network=0 May 17 00:19:50.748638 kernel: SELinux: policy capability cgroup_seclabel=1 May 17 00:19:50.748671 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 17 00:19:50.748691 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 17 00:19:50.748713 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 17 00:19:50.752814 kernel: audit: type=1403 audit(1747441189.515:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 17 00:19:50.752859 systemd[1]: Successfully loaded SELinux policy in 118.943ms. May 17 00:19:50.752892 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.477ms. May 17 00:19:50.752916 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 17 00:19:50.752938 systemd[1]: Detected virtualization amazon. May 17 00:19:50.752960 systemd[1]: Detected architecture x86-64. May 17 00:19:50.752981 systemd[1]: Detected first boot. May 17 00:19:50.753003 systemd[1]: Initializing machine ID from VM UUID. May 17 00:19:50.753024 zram_generator::config[1449]: No configuration found. May 17 00:19:50.753053 systemd[1]: Populated /etc with preset unit settings. May 17 00:19:50.753075 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 17 00:19:50.753097 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 17 00:19:50.753125 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 17 00:19:50.753147 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 17 00:19:50.753169 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 17 00:19:50.753191 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 17 00:19:50.753212 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 17 00:19:50.753236 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 17 00:19:50.753258 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 17 00:19:50.753278 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 17 00:19:50.753300 systemd[1]: Created slice user.slice - User and Session Slice. May 17 00:19:50.753328 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:19:50.753350 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:19:50.753371 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
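"Initializing machine ID from VM UUID" above means systemd derived /etc/machine-id from the hypervisor-provided DMI product UUID on this first boot. A small Go sketch that reads both values for comparison follows; the DMI path is the usual Linux location and may require root to read.

```go
// Print the DMI product UUID and the machine ID systemd derived from it.
package main

import (
	"fmt"
	"os"
	"strings"
)

func read(path string) string {
	b, err := os.ReadFile(path)
	if err != nil {
		return "unreadable: " + err.Error()
	}
	return strings.TrimSpace(string(b))
}

func main() {
	fmt.Println("product_uuid:", read("/sys/class/dmi/id/product_uuid"))
	fmt.Println("machine-id:  ", read("/etc/machine-id"))
}
```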
May 17 00:19:50.753392 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 17 00:19:50.753418 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 17 00:19:50.753440 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 17 00:19:50.753462 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 17 00:19:50.753483 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:19:50.753504 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 17 00:19:50.753525 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 17 00:19:50.753546 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 17 00:19:50.753567 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 17 00:19:50.753591 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:19:50.753612 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 17 00:19:50.753633 systemd[1]: Reached target slices.target - Slice Units. May 17 00:19:50.753655 systemd[1]: Reached target swap.target - Swaps. May 17 00:19:50.753676 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 17 00:19:50.753699 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 17 00:19:50.753719 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 17 00:19:50.755234 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 17 00:19:50.755329 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:19:50.755350 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 17 00:19:50.755377 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 17 00:19:50.755399 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 17 00:19:50.755419 systemd[1]: Mounting media.mount - External Media Directory... May 17 00:19:50.755447 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:19:50.755463 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 17 00:19:50.755482 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 17 00:19:50.755503 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 17 00:19:50.755525 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 17 00:19:50.755550 systemd[1]: Reached target machines.target - Containers. May 17 00:19:50.755571 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 17 00:19:50.755592 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:19:50.755614 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 17 00:19:50.755635 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 17 00:19:50.755656 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:19:50.755677 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
May 17 00:19:50.755698 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:19:50.755722 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 17 00:19:50.755832 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:19:50.755858 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 17 00:19:50.755879 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 17 00:19:50.755901 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 17 00:19:50.755922 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 17 00:19:50.755943 systemd[1]: Stopped systemd-fsck-usr.service. May 17 00:19:50.755965 systemd[1]: Starting systemd-journald.service - Journal Service... May 17 00:19:50.755986 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 17 00:19:50.756012 kernel: loop: module loaded May 17 00:19:50.756032 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 17 00:19:50.756049 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 17 00:19:50.756068 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 17 00:19:50.756088 systemd[1]: verity-setup.service: Deactivated successfully. May 17 00:19:50.756106 systemd[1]: Stopped verity-setup.service. May 17 00:19:50.756132 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:19:50.756151 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 17 00:19:50.756170 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 17 00:19:50.756193 systemd[1]: Mounted media.mount - External Media Directory. May 17 00:19:50.756212 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 17 00:19:50.756233 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 17 00:19:50.756299 systemd-journald[1534]: Collecting audit messages is disabled. May 17 00:19:50.756338 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 17 00:19:50.756357 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:19:50.756376 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 17 00:19:50.756395 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 17 00:19:50.756413 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:19:50.756432 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:19:50.756452 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:19:50.756471 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 00:19:50.756493 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:19:50.756514 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:19:50.761829 kernel: fuse: init (API version 7.39) May 17 00:19:50.761885 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 17 00:19:50.761910 kernel: ACPI: bus type drm_connector registered May 17 00:19:50.761937 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
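The modprobe@configfs, modprobe@dm_mod, modprobe@drm, modprobe@efi_pstore, modprobe@fuse and modprobe@loop instances above are all the same templated oneshot unit, instantiated once per module name. A stripped-down illustration of such a template is shown below; it is a sketch, not a copy of the unit systemd actually ships.

```ini
# modprobe@.service (illustrative sketch, not the shipped unit)
[Unit]
Description=Load Kernel Module %i
DefaultDependencies=no

[Service]
Type=oneshot
# %i is the instance name, e.g. "fuse" for modprobe@fuse.service
ExecStart=/usr/sbin/modprobe --quiet %i
```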
May 17 00:19:50.761962 systemd-journald[1534]: Journal started May 17 00:19:50.762009 systemd-journald[1534]: Runtime Journal (/run/log/journal/ec223befecbd051ada6e76c9ea38505f) is 4.7M, max 38.2M, 33.4M free. May 17 00:19:50.372623 systemd[1]: Queued start job for default target multi-user.target. May 17 00:19:50.415763 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. May 17 00:19:50.416186 systemd[1]: systemd-journald.service: Deactivated successfully. May 17 00:19:50.766856 systemd[1]: Started systemd-journald.service - Journal Service. May 17 00:19:50.766945 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 17 00:19:50.768059 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:19:50.768257 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 17 00:19:50.769419 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 17 00:19:50.770338 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 17 00:19:50.771368 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 17 00:19:50.786191 systemd[1]: Reached target network-pre.target - Preparation for Network. May 17 00:19:50.795903 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 17 00:19:50.803839 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 17 00:19:50.804920 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 17 00:19:50.805072 systemd[1]: Reached target local-fs.target - Local File Systems. May 17 00:19:50.809696 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 17 00:19:50.823016 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 17 00:19:50.827315 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 17 00:19:50.828153 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:19:50.839965 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 17 00:19:50.848906 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 17 00:19:50.849663 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:19:50.859981 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 17 00:19:50.860712 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 17 00:19:50.862408 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 17 00:19:50.866941 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 17 00:19:50.877985 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 17 00:19:50.883482 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:19:50.889042 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 17 00:19:50.890922 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
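The runtime journal figures above (4.7M used, 38.2M cap) are defaults that journald derives from the size of the backing filesystem, and systemd-journal-flush later moves entries to the persistent journal under /var/log/journal. If you wanted to pin those limits rather than rely on the derived defaults, a drop-in along these lines would do it; the values here are illustrative, not taken from this system.

```ini
# /etc/systemd/journald.conf.d/size.conf (illustrative)
[Journal]
RuntimeMaxUse=38M
SystemMaxUse=200M
```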
May 17 00:19:50.892724 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 17 00:19:50.903276 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 17 00:19:50.917913 systemd-journald[1534]: Time spent on flushing to /var/log/journal/ec223befecbd051ada6e76c9ea38505f is 71.487ms for 991 entries. May 17 00:19:50.917913 systemd-journald[1534]: System Journal (/var/log/journal/ec223befecbd051ada6e76c9ea38505f) is 8.0M, max 195.6M, 187.6M free. May 17 00:19:51.009094 systemd-journald[1534]: Received client request to flush runtime journal. May 17 00:19:51.009187 kernel: loop0: detected capacity change from 0 to 221472 May 17 00:19:50.908811 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 17 00:19:50.910531 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 17 00:19:50.924424 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 17 00:19:50.964183 udevadm[1585]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 17 00:19:50.978607 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 17 00:19:51.009023 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 17 00:19:51.013040 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 17 00:19:51.030035 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 17 00:19:51.033184 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 17 00:19:51.037238 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 17 00:19:51.085650 systemd-tmpfiles[1594]: ACLs are not supported, ignoring. May 17 00:19:51.086118 systemd-tmpfiles[1594]: ACLs are not supported, ignoring. May 17 00:19:51.094272 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:19:51.220770 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 17 00:19:51.258764 kernel: loop1: detected capacity change from 0 to 142488 May 17 00:19:51.368360 kernel: loop2: detected capacity change from 0 to 140768 May 17 00:19:51.482779 kernel: loop3: detected capacity change from 0 to 61336 May 17 00:19:51.528869 kernel: loop4: detected capacity change from 0 to 221472 May 17 00:19:51.562810 kernel: loop5: detected capacity change from 0 to 142488 May 17 00:19:51.581803 kernel: loop6: detected capacity change from 0 to 140768 May 17 00:19:51.610894 kernel: loop7: detected capacity change from 0 to 61336 May 17 00:19:51.624826 (sd-merge)[1603]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. May 17 00:19:51.627489 (sd-merge)[1603]: Merged extensions into '/usr'. May 17 00:19:51.632901 systemd[1]: Reloading requested from client PID 1578 ('systemd-sysext') (unit systemd-sysext.service)... May 17 00:19:51.632919 systemd[1]: Reloading... May 17 00:19:51.707771 zram_generator::config[1629]: No configuration found. May 17 00:19:51.862264 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:19:51.939129 systemd[1]: Reloading finished in 305 ms. 
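The (sd-merge) lines above show systemd-sysext finding four system extensions, including the kubernetes image that Ignition wrote and symlinked earlier, and overlaying them onto /usr. A raw sysext image is essentially a filesystem image whose tree shadows /usr and which carries an extension-release file matching the host; the layout below is an assumption-laden sketch (the file names inside the image are hypothetical).

```
/etc/extensions/kubernetes.raw -> /opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw
  inside the image (illustrative):
    usr/bin/kubelet, usr/bin/kubectl, ...
    usr/lib/extension-release.d/extension-release.kubernetes
        ID=flatcar
        SYSEXT_LEVEL=1.0
```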
May 17 00:19:51.963652 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 17 00:19:51.968736 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 17 00:19:51.974056 systemd[1]: Starting ensure-sysext.service... May 17 00:19:51.977945 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 17 00:19:51.986330 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:19:51.996087 systemd[1]: Reloading requested from client PID 1681 ('systemctl') (unit ensure-sysext.service)... May 17 00:19:51.996109 systemd[1]: Reloading... May 17 00:19:52.033421 systemd-udevd[1683]: Using default interface naming scheme 'v255'. May 17 00:19:52.042042 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 17 00:19:52.042577 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 17 00:19:52.044691 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 17 00:19:52.045996 systemd-tmpfiles[1682]: ACLs are not supported, ignoring. May 17 00:19:52.046207 systemd-tmpfiles[1682]: ACLs are not supported, ignoring. May 17 00:19:52.055116 systemd-tmpfiles[1682]: Detected autofs mount point /boot during canonicalization of boot. May 17 00:19:52.055132 systemd-tmpfiles[1682]: Skipping /boot May 17 00:19:52.076199 systemd-tmpfiles[1682]: Detected autofs mount point /boot during canonicalization of boot. May 17 00:19:52.076345 systemd-tmpfiles[1682]: Skipping /boot May 17 00:19:52.128775 zram_generator::config[1709]: No configuration found. May 17 00:19:52.242838 (udev-worker)[1721]: Network interface NamePolicy= disabled on kernel command line. May 17 00:19:52.349772 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr May 17 00:19:52.355795 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 May 17 00:19:52.369877 kernel: ACPI: button: Power Button [PWRF] May 17 00:19:52.369968 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 May 17 00:19:52.386781 kernel: ACPI: button: Sleep Button [SLPF] May 17 00:19:52.481764 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1721) May 17 00:19:52.513766 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5 May 17 00:19:52.531021 ldconfig[1573]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 17 00:19:52.535698 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:19:52.540315 kernel: mousedev: PS/2 mouse device common for all mice May 17 00:19:52.673975 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 17 00:19:52.674280 systemd[1]: Reloading finished in 677 ms. May 17 00:19:52.690365 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:19:52.691117 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 17 00:19:52.694241 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
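The 'Duplicate line for path "/root", ignoring' style warnings above are harmless: two tmpfiles.d fragments declare the same path and systemd-tmpfiles keeps the first definition it reads. Each line follows the usual type/path/mode/owner/group/age layout, for example (values illustrative, not the fragments on this system):

```
# tmpfiles.d line layout: Type Path Mode User Group Age Argument
d /root             0700 root root -
d /var/log/journal  0755 root root -
```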
May 17 00:19:52.715148 systemd[1]: Finished ensure-sysext.service. May 17 00:19:52.715882 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 17 00:19:52.738154 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. May 17 00:19:52.738884 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:19:52.744072 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 17 00:19:52.750915 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 17 00:19:52.751812 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:19:52.753942 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 17 00:19:52.758802 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:19:52.769079 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 17 00:19:52.775959 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:19:52.779129 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:19:52.780467 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:19:52.789942 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 17 00:19:52.795645 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 17 00:19:52.805011 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 17 00:19:52.815787 lvm[1877]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:19:52.816934 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 17 00:19:52.818854 systemd[1]: Reached target time-set.target - System Time Set. May 17 00:19:52.821944 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 17 00:19:52.832990 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:19:52.833686 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:19:52.834673 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:19:52.835702 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 17 00:19:52.837620 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:19:52.837858 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:19:52.847559 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:19:52.848258 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 00:19:52.851202 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:19:52.863990 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 17 00:19:52.865059 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:19:52.865261 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
May 17 00:19:52.867049 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 17 00:19:52.887518 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 17 00:19:52.900238 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 17 00:19:52.902339 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 17 00:19:52.904978 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 17 00:19:52.915063 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 17 00:19:52.953718 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 17 00:19:52.956372 augenrules[1912]: No rules May 17 00:19:52.957194 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 17 00:19:52.959867 lvm[1909]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:19:52.962808 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 17 00:19:52.974956 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 17 00:19:52.977035 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 17 00:19:52.979632 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:19:53.006531 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 17 00:19:53.019864 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 17 00:19:53.082445 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:19:53.086484 systemd-networkd[1889]: lo: Link UP May 17 00:19:53.086881 systemd-networkd[1889]: lo: Gained carrier May 17 00:19:53.089314 systemd-networkd[1889]: Enumeration completed May 17 00:19:53.089615 systemd[1]: Started systemd-networkd.service - Network Configuration. May 17 00:19:53.090849 systemd-networkd[1889]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:19:53.090855 systemd-networkd[1889]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:19:53.097714 systemd-networkd[1889]: eth0: Link UP May 17 00:19:53.098226 systemd-networkd[1889]: eth0: Gained carrier May 17 00:19:53.098377 systemd-networkd[1889]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:19:53.100540 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 17 00:19:53.101768 systemd-resolved[1891]: Positive Trust Anchors: May 17 00:19:53.101784 systemd-resolved[1891]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:19:53.101837 systemd-resolved[1891]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 17 00:19:53.107861 systemd-networkd[1889]: eth0: DHCPv4 address 172.31.27.59/20, gateway 172.31.16.1 acquired from 172.31.16.1 May 17 00:19:53.107891 systemd-resolved[1891]: Defaulting to hostname 'linux'. May 17 00:19:53.110884 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 17 00:19:53.111773 systemd[1]: Reached target network.target - Network. May 17 00:19:53.112827 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 17 00:19:53.113598 systemd[1]: Reached target sysinit.target - System Initialization. May 17 00:19:53.114433 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 17 00:19:53.115169 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 17 00:19:53.116041 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 17 00:19:53.116798 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 17 00:19:53.117244 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 17 00:19:53.117633 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 17 00:19:53.117663 systemd[1]: Reached target paths.target - Path Units. May 17 00:19:53.118073 systemd[1]: Reached target timers.target - Timer Units. May 17 00:19:53.119560 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 17 00:19:53.121474 systemd[1]: Starting docker.socket - Docker Socket for the API... May 17 00:19:53.130984 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 17 00:19:53.132124 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 17 00:19:53.132816 systemd[1]: Reached target sockets.target - Socket Units. May 17 00:19:53.133263 systemd[1]: Reached target basic.target - Basic System. May 17 00:19:53.133644 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 17 00:19:53.133674 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 17 00:19:53.134868 systemd[1]: Starting containerd.service - containerd container runtime... May 17 00:19:53.138938 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 17 00:19:53.142797 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 17 00:19:53.145865 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 17 00:19:53.148122 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
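A few lines above, eth0 was matched by /usr/lib/systemd/network/zz-default.network and configured via DHCP, acquiring 172.31.27.59/20 from 172.31.16.1. That catch-all policy boils down to a .network file along these lines; this is a sketch of the idea, not the file's verbatim contents.

```ini
# zz-default.network (sketch): match any interface and use DHCP
[Match]
Name=*

[Network]
DHCP=yes
```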
May 17 00:19:53.148800 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 17 00:19:53.152911 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 17 00:19:53.160564 systemd[1]: Started ntpd.service - Network Time Service. May 17 00:19:53.165887 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 17 00:19:53.166514 jq[1940]: false May 17 00:19:53.169890 systemd[1]: Starting setup-oem.service - Setup OEM... May 17 00:19:53.177923 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 17 00:19:53.179908 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 17 00:19:53.191951 systemd[1]: Starting systemd-logind.service - User Login Management... May 17 00:19:53.192703 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 17 00:19:53.193923 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 17 00:19:53.203208 systemd[1]: Starting update-engine.service - Update Engine... May 17 00:19:53.207639 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 17 00:19:53.233358 extend-filesystems[1941]: Found loop4 May 17 00:19:53.241152 extend-filesystems[1941]: Found loop5 May 17 00:19:53.241152 extend-filesystems[1941]: Found loop6 May 17 00:19:53.241152 extend-filesystems[1941]: Found loop7 May 17 00:19:53.241152 extend-filesystems[1941]: Found nvme0n1 May 17 00:19:53.241152 extend-filesystems[1941]: Found nvme0n1p1 May 17 00:19:53.241152 extend-filesystems[1941]: Found nvme0n1p2 May 17 00:19:53.241152 extend-filesystems[1941]: Found nvme0n1p3 May 17 00:19:53.241152 extend-filesystems[1941]: Found usr May 17 00:19:53.241152 extend-filesystems[1941]: Found nvme0n1p4 May 17 00:19:53.241152 extend-filesystems[1941]: Found nvme0n1p6 May 17 00:19:53.241152 extend-filesystems[1941]: Found nvme0n1p7 May 17 00:19:53.241152 extend-filesystems[1941]: Found nvme0n1p9 May 17 00:19:53.241152 extend-filesystems[1941]: Checking size of /dev/nvme0n1p9 May 17 00:19:53.241114 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 17 00:19:53.241564 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 17 00:19:53.241901 systemd[1]: motdgen.service: Deactivated successfully. May 17 00:19:53.242455 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 17 00:19:53.252818 jq[1957]: true May 17 00:19:53.254886 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 17 00:19:53.254249 dbus-daemon[1939]: [system] SELinux support is enabled May 17 00:19:53.264210 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 17 00:19:53.264374 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
May 17 00:19:53.275191 dbus-daemon[1939]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1889 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") May 17 00:19:53.286701 dbus-daemon[1939]: [system] Successfully activated service 'org.freedesktop.systemd1' May 17 00:19:53.289908 update_engine[1952]: I20250517 00:19:53.289206 1952 main.cc:92] Flatcar Update Engine starting May 17 00:19:53.292186 ntpd[1943]: ntpd 4.2.8p17@1.4004-o Fri May 16 22:07:47 UTC 2025 (1): Starting May 17 00:19:53.293007 ntpd[1943]: 17 May 00:19:53 ntpd[1943]: ntpd 4.2.8p17@1.4004-o Fri May 16 22:07:47 UTC 2025 (1): Starting May 17 00:19:53.293007 ntpd[1943]: 17 May 00:19:53 ntpd[1943]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp May 17 00:19:53.293007 ntpd[1943]: 17 May 00:19:53 ntpd[1943]: ---------------------------------------------------- May 17 00:19:53.293007 ntpd[1943]: 17 May 00:19:53 ntpd[1943]: ntp-4 is maintained by Network Time Foundation, May 17 00:19:53.293007 ntpd[1943]: 17 May 00:19:53 ntpd[1943]: Inc. (NTF), a non-profit 501(c)(3) public-benefit May 17 00:19:53.293007 ntpd[1943]: 17 May 00:19:53 ntpd[1943]: corporation. Support and training for ntp-4 are May 17 00:19:53.293007 ntpd[1943]: 17 May 00:19:53 ntpd[1943]: available at https://www.nwtime.org/support May 17 00:19:53.293007 ntpd[1943]: 17 May 00:19:53 ntpd[1943]: ---------------------------------------------------- May 17 00:19:53.292210 ntpd[1943]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp May 17 00:19:53.293087 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 17 00:19:53.292218 ntpd[1943]: ---------------------------------------------------- May 17 00:19:53.293157 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 17 00:19:53.292225 ntpd[1943]: ntp-4 is maintained by Network Time Foundation, May 17 00:19:53.292232 ntpd[1943]: Inc. (NTF), a non-profit 501(c)(3) public-benefit May 17 00:19:53.292239 ntpd[1943]: corporation. Support and training for ntp-4 are May 17 00:19:53.292246 ntpd[1943]: available at https://www.nwtime.org/support May 17 00:19:53.292252 ntpd[1943]: ---------------------------------------------------- May 17 00:19:53.298116 ntpd[1943]: proto: precision = 0.056 usec (-24) May 17 00:19:53.303953 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... May 17 00:19:53.307714 ntpd[1943]: 17 May 00:19:53 ntpd[1943]: proto: precision = 0.056 usec (-24) May 17 00:19:53.307714 ntpd[1943]: 17 May 00:19:53 ntpd[1943]: basedate set to 2025-05-04 May 17 00:19:53.307714 ntpd[1943]: 17 May 00:19:53 ntpd[1943]: gps base set to 2025-05-04 (week 2365) May 17 00:19:53.307714 ntpd[1943]: 17 May 00:19:53 ntpd[1943]: Listen and drop on 0 v6wildcard [::]:123 May 17 00:19:53.307714 ntpd[1943]: 17 May 00:19:53 ntpd[1943]: Listen and drop on 1 v4wildcard 0.0.0.0:123 May 17 00:19:53.305980 ntpd[1943]: basedate set to 2025-05-04 May 17 00:19:53.304844 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
May 17 00:19:53.305998 ntpd[1943]: gps base set to 2025-05-04 (week 2365) May 17 00:19:53.304876 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 17 00:19:53.307471 ntpd[1943]: Listen and drop on 0 v6wildcard [::]:123 May 17 00:19:53.307504 ntpd[1943]: Listen and drop on 1 v4wildcard 0.0.0.0:123 May 17 00:19:53.310903 extend-filesystems[1941]: Resized partition /dev/nvme0n1p9 May 17 00:19:53.317833 update_engine[1952]: I20250517 00:19:53.316192 1952 update_check_scheduler.cc:74] Next update check in 2m0s May 17 00:19:53.317493 systemd[1]: Finished setup-oem.service - Setup OEM. May 17 00:19:53.319148 systemd[1]: Started update-engine.service - Update Engine. May 17 00:19:53.319987 ntpd[1943]: Listen normally on 2 lo 127.0.0.1:123 May 17 00:19:53.321849 ntpd[1943]: 17 May 00:19:53 ntpd[1943]: Listen normally on 2 lo 127.0.0.1:123 May 17 00:19:53.321849 ntpd[1943]: 17 May 00:19:53 ntpd[1943]: Listen normally on 3 eth0 172.31.27.59:123 May 17 00:19:53.321849 ntpd[1943]: 17 May 00:19:53 ntpd[1943]: Listen normally on 4 lo [::1]:123 May 17 00:19:53.321849 ntpd[1943]: 17 May 00:19:53 ntpd[1943]: bind(21) AF_INET6 fe80::400:a7ff:fe64:b269%2#123 flags 0x11 failed: Cannot assign requested address May 17 00:19:53.321849 ntpd[1943]: 17 May 00:19:53 ntpd[1943]: unable to create socket on eth0 (5) for fe80::400:a7ff:fe64:b269%2#123 May 17 00:19:53.321849 ntpd[1943]: 17 May 00:19:53 ntpd[1943]: failed to init interface for address fe80::400:a7ff:fe64:b269%2 May 17 00:19:53.321849 ntpd[1943]: 17 May 00:19:53 ntpd[1943]: Listening on routing socket on fd #21 for interface updates May 17 00:19:53.320030 ntpd[1943]: Listen normally on 3 eth0 172.31.27.59:123 May 17 00:19:53.320064 ntpd[1943]: Listen normally on 4 lo [::1]:123 May 17 00:19:53.320103 ntpd[1943]: bind(21) AF_INET6 fe80::400:a7ff:fe64:b269%2#123 flags 0x11 failed: Cannot assign requested address May 17 00:19:53.320120 ntpd[1943]: unable to create socket on eth0 (5) for fe80::400:a7ff:fe64:b269%2#123 May 17 00:19:53.320131 ntpd[1943]: failed to init interface for address fe80::400:a7ff:fe64:b269%2 May 17 00:19:53.320157 ntpd[1943]: Listening on routing socket on fd #21 for interface updates May 17 00:19:53.324835 tar[1964]: linux-amd64/helm May 17 00:19:53.325064 extend-filesystems[1990]: resize2fs 1.47.1 (20-May-2024) May 17 00:19:53.328953 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
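The ntpd bind() failures above ("Cannot assign requested address" for fe80::400:a7ff:fe64:b269%2) happen because a link-local IPv6 address can only be bound once the kernel has actually assigned it to eth0, and the bind must carry the interface scope id; later in this log, after eth0 gains IPv6LL, ntpd adds the listener successfully. A small sketch of the same bind, using the address and interface from the log but an unprivileged port so it can run without root:

    # Sketch: binding a UDP socket to an IPv6 link-local address requires the
    # interface scope id, and fails with "Cannot assign requested address" until
    # the kernel has assigned that address to the interface (cf. ntpd above).
    import socket

    addr, ifname, port = "fe80::400:a7ff:fe64:b269", "eth0", 12345
    scope = socket.if_nametoindex(ifname)          # numeric interface index
    s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    try:
        # An AF_INET6 sockaddr is (host, port, flowinfo, scopeid)
        s.bind((addr, port, 0, scope))
        print("bound", s.getsockname())
    except OSError as e:
        print("bind failed:", e)   # EADDRNOTAVAIL while the address is absent/tentative
    finally:
        s.close()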
May 17 00:19:53.335768 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks May 17 00:19:53.339970 ntpd[1943]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 17 00:19:53.345165 ntpd[1943]: 17 May 00:19:53 ntpd[1943]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 17 00:19:53.345165 ntpd[1943]: 17 May 00:19:53 ntpd[1943]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 17 00:19:53.340013 ntpd[1943]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 17 00:19:53.347716 jq[1967]: true May 17 00:19:53.378967 coreos-metadata[1938]: May 17 00:19:53.378 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 May 17 00:19:53.383044 coreos-metadata[1938]: May 17 00:19:53.380 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 May 17 00:19:53.383044 coreos-metadata[1938]: May 17 00:19:53.381 INFO Fetch successful May 17 00:19:53.383044 coreos-metadata[1938]: May 17 00:19:53.381 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 May 17 00:19:53.383044 coreos-metadata[1938]: May 17 00:19:53.382 INFO Fetch successful May 17 00:19:53.383044 coreos-metadata[1938]: May 17 00:19:53.382 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 May 17 00:19:53.382843 (ntainerd)[1986]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 17 00:19:53.387157 coreos-metadata[1938]: May 17 00:19:53.384 INFO Fetch successful May 17 00:19:53.387157 coreos-metadata[1938]: May 17 00:19:53.384 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 May 17 00:19:53.389268 coreos-metadata[1938]: May 17 00:19:53.389 INFO Fetch successful May 17 00:19:53.389268 coreos-metadata[1938]: May 17 00:19:53.389 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 May 17 00:19:53.390480 coreos-metadata[1938]: May 17 00:19:53.390 INFO Fetch failed with 404: resource not found May 17 00:19:53.390480 coreos-metadata[1938]: May 17 00:19:53.390 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 May 17 00:19:53.391290 coreos-metadata[1938]: May 17 00:19:53.391 INFO Fetch successful May 17 00:19:53.391290 coreos-metadata[1938]: May 17 00:19:53.391 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 May 17 00:19:53.391895 coreos-metadata[1938]: May 17 00:19:53.391 INFO Fetch successful May 17 00:19:53.391895 coreos-metadata[1938]: May 17 00:19:53.391 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 May 17 00:19:53.392460 coreos-metadata[1938]: May 17 00:19:53.392 INFO Fetch successful May 17 00:19:53.392460 coreos-metadata[1938]: May 17 00:19:53.392 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 May 17 00:19:53.394755 coreos-metadata[1938]: May 17 00:19:53.394 INFO Fetch successful May 17 00:19:53.394755 coreos-metadata[1938]: May 17 00:19:53.394 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 May 17 00:19:53.395908 coreos-metadata[1938]: May 17 00:19:53.395 INFO Fetch successful May 17 00:19:53.403243 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 May 17 00:19:53.401171 systemd-logind[1951]: Watching system buttons on /dev/input/event1 (Power Button) May 17 00:19:53.401194 systemd-logind[1951]: Watching system buttons on /dev/input/event2 
(Sleep Button) May 17 00:19:53.423284 extend-filesystems[1990]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required May 17 00:19:53.423284 extend-filesystems[1990]: old_desc_blocks = 1, new_desc_blocks = 1 May 17 00:19:53.423284 extend-filesystems[1990]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. May 17 00:19:53.401212 systemd-logind[1951]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 17 00:19:53.436473 extend-filesystems[1941]: Resized filesystem in /dev/nvme0n1p9 May 17 00:19:53.403558 systemd-logind[1951]: New seat seat0. May 17 00:19:53.404678 systemd[1]: Started systemd-logind.service - User Login Management. May 17 00:19:53.424429 systemd[1]: extend-filesystems.service: Deactivated successfully. May 17 00:19:53.425041 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 17 00:19:53.466078 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 17 00:19:53.468015 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 17 00:19:53.472124 bash[2018]: Updated "/home/core/.ssh/authorized_keys" May 17 00:19:53.472855 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 17 00:19:53.490035 systemd[1]: Starting sshkeys.service... May 17 00:19:53.510764 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1718) May 17 00:19:53.510862 dbus-daemon[1939]: [system] Successfully activated service 'org.freedesktop.hostname1' May 17 00:19:53.511068 systemd[1]: Started systemd-hostnamed.service - Hostname Service. May 17 00:19:53.520447 dbus-daemon[1939]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1983 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") May 17 00:19:53.533422 systemd[1]: Starting polkit.service - Authorization Manager... May 17 00:19:53.546937 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 17 00:19:53.561721 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 17 00:19:53.586450 polkitd[2035]: Started polkitd version 121 May 17 00:19:53.604235 polkitd[2035]: Loading rules from directory /etc/polkit-1/rules.d May 17 00:19:53.604307 polkitd[2035]: Loading rules from directory /usr/share/polkit-1/rules.d May 17 00:19:53.612473 polkitd[2035]: Finished loading, compiling and executing 2 rules May 17 00:19:53.613085 systemd[1]: Started polkit.service - Authorization Manager. May 17 00:19:53.612926 dbus-daemon[1939]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' May 17 00:19:53.615817 polkitd[2035]: Acquired the name org.freedesktop.PolicyKit1 on the system bus May 17 00:19:53.667492 systemd-hostnamed[1983]: Hostname set to (transient) May 17 00:19:53.667603 systemd-resolved[1891]: System hostname changed to 'ip-172-31-27-59'. 
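The coreos-metadata fetches above follow the IMDSv2 flow: a PUT to /latest/api/token obtains a session token, which is then sent as a header on each meta-data GET. A self-contained sketch of that flow with urllib, using paths copied from the log:

    # Sketch of the IMDSv2 flow visible in the coreos-metadata lines above:
    # PUT /latest/api/token for a session token, then GET meta-data paths with
    # the token header. Paths mirror the ones fetched in the log.
    import urllib.request

    IMDS = "http://169.254.169.254"

    def imds_token(ttl=21600):
        req = urllib.request.Request(
            IMDS + "/latest/api/token",
            method="PUT",
            headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)},
        )
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read().decode()

    def imds_get(path, token):
        req = urllib.request.Request(
            IMDS + path, headers={"X-aws-ec2-metadata-token": token}
        )
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read().decode()

    if __name__ == "__main__":
        tok = imds_token()
        for p in ("/2021-01-03/meta-data/instance-id",
                  "/2021-01-03/meta-data/local-ipv4",
                  "/2021-01-03/meta-data/placement/availability-zone"):
            print(p, "->", imds_get(p, tok))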
May 17 00:19:53.692989 locksmithd[1991]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 17 00:19:53.726163 coreos-metadata[2049]: May 17 00:19:53.725 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 May 17 00:19:53.729833 coreos-metadata[2049]: May 17 00:19:53.729 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 May 17 00:19:53.739619 coreos-metadata[2049]: May 17 00:19:53.739 INFO Fetch successful May 17 00:19:53.739711 coreos-metadata[2049]: May 17 00:19:53.739 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 May 17 00:19:53.742232 coreos-metadata[2049]: May 17 00:19:53.741 INFO Fetch successful May 17 00:19:53.746682 unknown[2049]: wrote ssh authorized keys file for user: core May 17 00:19:53.796989 sshd_keygen[1985]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 17 00:19:53.804971 update-ssh-keys[2095]: Updated "/home/core/.ssh/authorized_keys" May 17 00:19:53.805804 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 17 00:19:53.813210 systemd[1]: Finished sshkeys.service. May 17 00:19:53.836068 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 17 00:19:53.843101 systemd[1]: Starting issuegen.service - Generate /run/issue... May 17 00:19:53.875073 systemd[1]: issuegen.service: Deactivated successfully. May 17 00:19:53.875909 systemd[1]: Finished issuegen.service - Generate /run/issue. May 17 00:19:53.888039 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 17 00:19:53.924674 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 17 00:19:53.949204 systemd[1]: Started getty@tty1.service - Getty on tty1. May 17 00:19:53.951034 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 17 00:19:53.951852 systemd[1]: Reached target getty.target - Login Prompts. May 17 00:19:53.962395 containerd[1986]: time="2025-05-17T00:19:53.962323499Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 17 00:19:53.992281 containerd[1986]: time="2025-05-17T00:19:53.992034947Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 17 00:19:53.993907 containerd[1986]: time="2025-05-17T00:19:53.993671698Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.90-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 17 00:19:53.993907 containerd[1986]: time="2025-05-17T00:19:53.993702878Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 17 00:19:53.993907 containerd[1986]: time="2025-05-17T00:19:53.993718365Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 17 00:19:53.993907 containerd[1986]: time="2025-05-17T00:19:53.993861771Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 17 00:19:53.993907 containerd[1986]: time="2025-05-17T00:19:53.993876296Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 May 17 00:19:53.994042 containerd[1986]: time="2025-05-17T00:19:53.993926758Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:19:53.994042 containerd[1986]: time="2025-05-17T00:19:53.993937468Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 17 00:19:53.994356 containerd[1986]: time="2025-05-17T00:19:53.994086389Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:19:53.994356 containerd[1986]: time="2025-05-17T00:19:53.994102770Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 17 00:19:53.994356 containerd[1986]: time="2025-05-17T00:19:53.994114425Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:19:53.994356 containerd[1986]: time="2025-05-17T00:19:53.994123091Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 17 00:19:53.994356 containerd[1986]: time="2025-05-17T00:19:53.994181534Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 17 00:19:53.994356 containerd[1986]: time="2025-05-17T00:19:53.994351958Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 17 00:19:53.994498 containerd[1986]: time="2025-05-17T00:19:53.994447158Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:19:53.994498 containerd[1986]: time="2025-05-17T00:19:53.994459084Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 17 00:19:53.994540 containerd[1986]: time="2025-05-17T00:19:53.994524724Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 17 00:19:53.994595 containerd[1986]: time="2025-05-17T00:19:53.994562605Z" level=info msg="metadata content store policy set" policy=shared May 17 00:19:53.998735 containerd[1986]: time="2025-05-17T00:19:53.998318736Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 17 00:19:53.998735 containerd[1986]: time="2025-05-17T00:19:53.998373575Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 17 00:19:53.998735 containerd[1986]: time="2025-05-17T00:19:53.998392303Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 17 00:19:53.998735 containerd[1986]: time="2025-05-17T00:19:53.998414354Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 17 00:19:53.998735 containerd[1986]: time="2025-05-17T00:19:53.998428823Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 May 17 00:19:53.998735 containerd[1986]: time="2025-05-17T00:19:53.998562801Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 17 00:19:53.998935 containerd[1986]: time="2025-05-17T00:19:53.998844780Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 17 00:19:53.998961 containerd[1986]: time="2025-05-17T00:19:53.998945484Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 17 00:19:53.998984 containerd[1986]: time="2025-05-17T00:19:53.998960670Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 17 00:19:53.999025 containerd[1986]: time="2025-05-17T00:19:53.999009860Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 17 00:19:53.999048 containerd[1986]: time="2025-05-17T00:19:53.999030773Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 17 00:19:53.999048 containerd[1986]: time="2025-05-17T00:19:53.999044409Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 17 00:19:53.999088 containerd[1986]: time="2025-05-17T00:19:53.999056067Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 17 00:19:53.999088 containerd[1986]: time="2025-05-17T00:19:53.999069145Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 17 00:19:53.999088 containerd[1986]: time="2025-05-17T00:19:53.999083520Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 17 00:19:53.999150 containerd[1986]: time="2025-05-17T00:19:53.999095663Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 17 00:19:53.999150 containerd[1986]: time="2025-05-17T00:19:53.999106669Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 17 00:19:53.999150 containerd[1986]: time="2025-05-17T00:19:53.999118015Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 17 00:19:53.999150 containerd[1986]: time="2025-05-17T00:19:53.999147384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 17 00:19:53.999229 containerd[1986]: time="2025-05-17T00:19:53.999161268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 17 00:19:53.999229 containerd[1986]: time="2025-05-17T00:19:53.999173299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 17 00:19:53.999229 containerd[1986]: time="2025-05-17T00:19:53.999186016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 17 00:19:53.999229 containerd[1986]: time="2025-05-17T00:19:53.999197121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 17 00:19:53.999229 containerd[1986]: time="2025-05-17T00:19:53.999217646Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 May 17 00:19:53.999229 containerd[1986]: time="2025-05-17T00:19:53.999228748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 17 00:19:53.999355 containerd[1986]: time="2025-05-17T00:19:53.999241581Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 17 00:19:53.999355 containerd[1986]: time="2025-05-17T00:19:53.999253298Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 17 00:19:53.999355 containerd[1986]: time="2025-05-17T00:19:53.999265961Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 17 00:19:53.999355 containerd[1986]: time="2025-05-17T00:19:53.999276355Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 17 00:19:53.999355 containerd[1986]: time="2025-05-17T00:19:53.999288145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 17 00:19:53.999355 containerd[1986]: time="2025-05-17T00:19:53.999302126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 17 00:19:53.999355 containerd[1986]: time="2025-05-17T00:19:53.999320610Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 17 00:19:53.999355 containerd[1986]: time="2025-05-17T00:19:53.999339118Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 17 00:19:53.999355 containerd[1986]: time="2025-05-17T00:19:53.999349945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 17 00:19:53.999528 containerd[1986]: time="2025-05-17T00:19:53.999359811Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 17 00:19:53.999528 containerd[1986]: time="2025-05-17T00:19:53.999414508Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 17 00:19:53.999528 containerd[1986]: time="2025-05-17T00:19:53.999432767Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 17 00:19:53.999528 containerd[1986]: time="2025-05-17T00:19:53.999443331Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 17 00:19:53.999528 containerd[1986]: time="2025-05-17T00:19:53.999503978Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 17 00:19:53.999528 containerd[1986]: time="2025-05-17T00:19:53.999515356Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 17 00:19:53.999528 containerd[1986]: time="2025-05-17T00:19:53.999526860Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 17 00:19:53.999667 containerd[1986]: time="2025-05-17T00:19:53.999536514Z" level=info msg="NRI interface is disabled by configuration." May 17 00:19:53.999667 containerd[1986]: time="2025-05-17T00:19:53.999546573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 17 00:19:54.000502 containerd[1986]: time="2025-05-17T00:19:53.999812022Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 17 00:19:54.000502 containerd[1986]: time="2025-05-17T00:19:53.999867369Z" level=info msg="Connect containerd service" May 17 00:19:54.000502 containerd[1986]: time="2025-05-17T00:19:53.999898152Z" level=info msg="using legacy CRI server" May 17 00:19:54.000502 containerd[1986]: time="2025-05-17T00:19:53.999904975Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 17 00:19:54.000502 containerd[1986]: time="2025-05-17T00:19:54.000000142Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 17 00:19:54.000801 containerd[1986]: time="2025-05-17T00:19:54.000622452Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:19:54.000801 
containerd[1986]: time="2025-05-17T00:19:54.000732705Z" level=info msg="Start subscribing containerd event" May 17 00:19:54.000801 containerd[1986]: time="2025-05-17T00:19:54.000782034Z" level=info msg="Start recovering state" May 17 00:19:54.000866 containerd[1986]: time="2025-05-17T00:19:54.000834246Z" level=info msg="Start event monitor" May 17 00:19:54.000866 containerd[1986]: time="2025-05-17T00:19:54.000846218Z" level=info msg="Start snapshots syncer" May 17 00:19:54.000866 containerd[1986]: time="2025-05-17T00:19:54.000853726Z" level=info msg="Start cni network conf syncer for default" May 17 00:19:54.000866 containerd[1986]: time="2025-05-17T00:19:54.000861502Z" level=info msg="Start streaming server" May 17 00:19:54.001760 containerd[1986]: time="2025-05-17T00:19:54.001332422Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 17 00:19:54.001760 containerd[1986]: time="2025-05-17T00:19:54.001381708Z" level=info msg=serving... address=/run/containerd/containerd.sock May 17 00:19:54.003218 containerd[1986]: time="2025-05-17T00:19:54.002580606Z" level=info msg="containerd successfully booted in 0.040927s" May 17 00:19:54.002665 systemd[1]: Started containerd.service - containerd container runtime. May 17 00:19:54.189554 tar[1964]: linux-amd64/LICENSE May 17 00:19:54.189723 tar[1964]: linux-amd64/README.md May 17 00:19:54.200610 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 17 00:19:54.292641 ntpd[1943]: bind(24) AF_INET6 fe80::400:a7ff:fe64:b269%2#123 flags 0x11 failed: Cannot assign requested address May 17 00:19:54.293003 ntpd[1943]: 17 May 00:19:54 ntpd[1943]: bind(24) AF_INET6 fe80::400:a7ff:fe64:b269%2#123 flags 0x11 failed: Cannot assign requested address May 17 00:19:54.293003 ntpd[1943]: 17 May 00:19:54 ntpd[1943]: unable to create socket on eth0 (6) for fe80::400:a7ff:fe64:b269%2#123 May 17 00:19:54.293003 ntpd[1943]: 17 May 00:19:54 ntpd[1943]: failed to init interface for address fe80::400:a7ff:fe64:b269%2 May 17 00:19:54.292687 ntpd[1943]: unable to create socket on eth0 (6) for fe80::400:a7ff:fe64:b269%2#123 May 17 00:19:54.292699 ntpd[1943]: failed to init interface for address fe80::400:a7ff:fe64:b269%2 May 17 00:19:54.892905 systemd-networkd[1889]: eth0: Gained IPv6LL May 17 00:19:54.896140 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 17 00:19:54.897497 systemd[1]: Reached target network-online.target - Network is Online. May 17 00:19:54.902096 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. May 17 00:19:54.912319 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:19:54.917007 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 17 00:19:54.956298 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 17 00:19:54.975348 amazon-ssm-agent[2164]: Initializing new seelog logger May 17 00:19:54.975921 amazon-ssm-agent[2164]: New Seelog Logger Creation Complete May 17 00:19:54.976103 amazon-ssm-agent[2164]: 2025/05/17 00:19:54 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 17 00:19:54.976186 amazon-ssm-agent[2164]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 17 00:19:54.976866 amazon-ssm-agent[2164]: 2025/05/17 00:19:54 processing appconfig overrides May 17 00:19:54.977353 amazon-ssm-agent[2164]: 2025/05/17 00:19:54 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
May 17 00:19:54.977353 amazon-ssm-agent[2164]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 17 00:19:54.977449 amazon-ssm-agent[2164]: 2025/05/17 00:19:54 processing appconfig overrides May 17 00:19:54.977831 amazon-ssm-agent[2164]: 2025-05-17 00:19:54 INFO Proxy environment variables: May 17 00:19:54.977831 amazon-ssm-agent[2164]: 2025/05/17 00:19:54 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 17 00:19:54.977831 amazon-ssm-agent[2164]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 17 00:19:54.977996 amazon-ssm-agent[2164]: 2025/05/17 00:19:54 processing appconfig overrides May 17 00:19:54.980958 amazon-ssm-agent[2164]: 2025/05/17 00:19:54 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 17 00:19:54.980958 amazon-ssm-agent[2164]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 17 00:19:54.980958 amazon-ssm-agent[2164]: 2025/05/17 00:19:54 processing appconfig overrides May 17 00:19:55.077175 amazon-ssm-agent[2164]: 2025-05-17 00:19:54 INFO https_proxy: May 17 00:19:55.174940 amazon-ssm-agent[2164]: 2025-05-17 00:19:54 INFO http_proxy: May 17 00:19:55.268454 amazon-ssm-agent[2164]: 2025-05-17 00:19:54 INFO no_proxy: May 17 00:19:55.268598 amazon-ssm-agent[2164]: 2025-05-17 00:19:54 INFO Checking if agent identity type OnPrem can be assumed May 17 00:19:55.268644 amazon-ssm-agent[2164]: 2025-05-17 00:19:54 INFO Checking if agent identity type EC2 can be assumed May 17 00:19:55.268681 amazon-ssm-agent[2164]: 2025-05-17 00:19:55 INFO Agent will take identity from EC2 May 17 00:19:55.268719 amazon-ssm-agent[2164]: 2025-05-17 00:19:55 INFO [amazon-ssm-agent] using named pipe channel for IPC May 17 00:19:55.268775 amazon-ssm-agent[2164]: 2025-05-17 00:19:55 INFO [amazon-ssm-agent] using named pipe channel for IPC May 17 00:19:55.268810 amazon-ssm-agent[2164]: 2025-05-17 00:19:55 INFO [amazon-ssm-agent] using named pipe channel for IPC May 17 00:19:55.268844 amazon-ssm-agent[2164]: 2025-05-17 00:19:55 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 May 17 00:19:55.269045 amazon-ssm-agent[2164]: 2025-05-17 00:19:55 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 May 17 00:19:55.269045 amazon-ssm-agent[2164]: 2025-05-17 00:19:55 INFO [amazon-ssm-agent] Starting Core Agent May 17 00:19:55.269045 amazon-ssm-agent[2164]: 2025-05-17 00:19:55 INFO [amazon-ssm-agent] registrar detected. Attempting registration May 17 00:19:55.269045 amazon-ssm-agent[2164]: 2025-05-17 00:19:55 INFO [Registrar] Starting registrar module May 17 00:19:55.269045 amazon-ssm-agent[2164]: 2025-05-17 00:19:55 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration May 17 00:19:55.269045 amazon-ssm-agent[2164]: 2025-05-17 00:19:55 INFO [EC2Identity] EC2 registration was successful. May 17 00:19:55.269045 amazon-ssm-agent[2164]: 2025-05-17 00:19:55 INFO [CredentialRefresher] credentialRefresher has started May 17 00:19:55.269045 amazon-ssm-agent[2164]: 2025-05-17 00:19:55 INFO [CredentialRefresher] Starting credentials refresher loop May 17 00:19:55.269045 amazon-ssm-agent[2164]: 2025-05-17 00:19:55 INFO EC2RoleProvider Successfully connected with instance profile role credentials May 17 00:19:55.273083 amazon-ssm-agent[2164]: 2025-05-17 00:19:55 INFO [CredentialRefresher] Next credential rotation will be in 31.1582885347 minutes May 17 00:19:55.515039 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
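The containerd startup earlier in this section warns "no network config found in /etc/cni/net.d"; a network add-on normally installs that file once the node joins a cluster. Purely as an illustration of the general shape of a CNI .conflist (bridge plugin plus host-local IPAM), with placeholder names and subnet rather than anything Flatcar-specific:

    # Sketch of a CNI .conflist like the one containerd is missing above.
    # "example-net", cni0 and 10.88.0.0/16 are illustrative placeholders;
    # the real file is written later by the cluster's network add-on.
    import json

    conflist = {
        "cniVersion": "0.4.0",
        "name": "example-net",
        "plugins": [
            {
                "type": "bridge",
                "bridge": "cni0",
                "isGateway": True,
                "ipMasq": True,
                "ipam": {
                    "type": "host-local",
                    "subnet": "10.88.0.0/16",
                    "routes": [{"dst": "0.0.0.0/0"}],
                },
            },
            {"type": "portmap", "capabilities": {"portMappings": True}},
        ],
    }

    print(json.dumps(conflist, indent=2))
    # Typically placed at /etc/cni/net.d/10-example.conflist by the add-on.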
May 17 00:19:55.516698 systemd[1]: Started sshd@0-172.31.27.59:22-147.75.109.163:37696.service - OpenSSH per-connection server daemon (147.75.109.163:37696). May 17 00:19:55.750812 sshd[2184]: Accepted publickey for core from 147.75.109.163 port 37696 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:19:55.753344 sshd[2184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:19:55.763312 systemd-logind[1951]: New session 1 of user core. May 17 00:19:55.763882 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 17 00:19:55.769012 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 17 00:19:55.784631 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 17 00:19:55.791049 systemd[1]: Starting user@500.service - User Manager for UID 500... May 17 00:19:55.796456 (systemd)[2188]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 17 00:19:55.918845 systemd[2188]: Queued start job for default target default.target. May 17 00:19:55.934040 systemd[2188]: Created slice app.slice - User Application Slice. May 17 00:19:55.934074 systemd[2188]: Reached target paths.target - Paths. May 17 00:19:55.934089 systemd[2188]: Reached target timers.target - Timers. May 17 00:19:55.935392 systemd[2188]: Starting dbus.socket - D-Bus User Message Bus Socket... May 17 00:19:55.947643 systemd[2188]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 17 00:19:55.947797 systemd[2188]: Reached target sockets.target - Sockets. May 17 00:19:55.947813 systemd[2188]: Reached target basic.target - Basic System. May 17 00:19:55.947854 systemd[2188]: Reached target default.target - Main User Target. May 17 00:19:55.947884 systemd[2188]: Startup finished in 144ms. May 17 00:19:55.948028 systemd[1]: Started user@500.service - User Manager for UID 500. May 17 00:19:55.954946 systemd[1]: Started session-1.scope - Session 1 of User core. May 17 00:19:56.101032 systemd[1]: Started sshd@1-172.31.27.59:22-147.75.109.163:37698.service - OpenSSH per-connection server daemon (147.75.109.163:37698). May 17 00:19:56.254511 sshd[2199]: Accepted publickey for core from 147.75.109.163 port 37698 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:19:56.255873 sshd[2199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:19:56.260656 systemd-logind[1951]: New session 2 of user core. May 17 00:19:56.264932 systemd[1]: Started session-2.scope - Session 2 of User core. May 17 00:19:56.282989 amazon-ssm-agent[2164]: 2025-05-17 00:19:56 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process May 17 00:19:56.383399 amazon-ssm-agent[2164]: 2025-05-17 00:19:56 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2203) started May 17 00:19:56.389244 sshd[2199]: pam_unix(sshd:session): session closed for user core May 17 00:19:56.391988 systemd[1]: sshd@1-172.31.27.59:22-147.75.109.163:37698.service: Deactivated successfully. May 17 00:19:56.393856 systemd[1]: session-2.scope: Deactivated successfully. May 17 00:19:56.395191 systemd-logind[1951]: Session 2 logged out. Waiting for processes to exit. May 17 00:19:56.396267 systemd-logind[1951]: Removed session 2. 
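The "Accepted publickey ... RSA SHA256:E8bm..." lines above identify the client key by its OpenSSH-style fingerprint: the unpadded base64 of the SHA-256 digest of the raw public-key blob. A sketch that recomputes such fingerprints from the authorized_keys file the earlier units updated:

    # Recompute OpenSSH SHA256 fingerprints (as shown in the sshd lines above)
    # from public-key lines: base64 (no '=' padding) of sha256(raw key blob).
    import base64, hashlib

    def openssh_sha256_fingerprint(pubkey_line):
        # e.g. "ssh-rsa AAAAB3Nza... comment" -> "SHA256:...."
        blob = base64.b64decode(pubkey_line.split()[1])
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    if __name__ == "__main__":
        with open("/home/core/.ssh/authorized_keys") as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith("#"):
                    print(openssh_sha256_fingerprint(line))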
May 17 00:19:56.418507 systemd[1]: Started sshd@2-172.31.27.59:22-147.75.109.163:37710.service - OpenSSH per-connection server daemon (147.75.109.163:37710). May 17 00:19:56.484351 amazon-ssm-agent[2164]: 2025-05-17 00:19:56 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds May 17 00:19:56.577497 sshd[2217]: Accepted publickey for core from 147.75.109.163 port 37710 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:19:56.579009 sshd[2217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:19:56.583361 systemd-logind[1951]: New session 3 of user core. May 17 00:19:56.590962 systemd[1]: Started session-3.scope - Session 3 of User core. May 17 00:19:56.711076 sshd[2217]: pam_unix(sshd:session): session closed for user core May 17 00:19:56.713466 systemd[1]: sshd@2-172.31.27.59:22-147.75.109.163:37710.service: Deactivated successfully. May 17 00:19:56.714904 systemd[1]: session-3.scope: Deactivated successfully. May 17 00:19:56.716307 systemd-logind[1951]: Session 3 logged out. Waiting for processes to exit. May 17 00:19:56.717330 systemd-logind[1951]: Removed session 3. May 17 00:19:57.292621 ntpd[1943]: Listen normally on 7 eth0 [fe80::400:a7ff:fe64:b269%2]:123 May 17 00:19:57.293790 ntpd[1943]: 17 May 00:19:57 ntpd[1943]: Listen normally on 7 eth0 [fe80::400:a7ff:fe64:b269%2]:123 May 17 00:19:58.033015 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:19:58.034381 systemd[1]: Reached target multi-user.target - Multi-User System. May 17 00:19:58.036994 systemd[1]: Startup finished in 585ms (kernel) + 7.740s (initrd) + 8.637s (userspace) = 16.963s. May 17 00:19:58.043604 (kubelet)[2228]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:19:59.192243 kubelet[2228]: E0517 00:19:59.192093 2228 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:19:59.194892 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:19:59.195085 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:19:59.195427 systemd[1]: kubelet.service: Consumed 1.029s CPU time. May 17 00:20:00.114560 systemd-resolved[1891]: Clock change detected. Flushing caches. May 17 00:20:06.564799 systemd[1]: Started sshd@3-172.31.27.59:22-147.75.109.163:39464.service - OpenSSH per-connection server daemon (147.75.109.163:39464). May 17 00:20:06.724349 sshd[2240]: Accepted publickey for core from 147.75.109.163 port 39464 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:20:06.728011 sshd[2240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:20:06.737523 systemd-logind[1951]: New session 4 of user core. May 17 00:20:06.748194 systemd[1]: Started session-4.scope - Session 4 of User core. May 17 00:20:06.865694 sshd[2240]: pam_unix(sshd:session): session closed for user core May 17 00:20:06.868621 systemd[1]: sshd@3-172.31.27.59:22-147.75.109.163:39464.service: Deactivated successfully. May 17 00:20:06.870186 systemd[1]: session-4.scope: Deactivated successfully. 
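Once eth0 gains its link-local address, ntpd adds the listener on [fe80::...]:123 seen above. A minimal SNTP client sketch for querying an NTP server on port 123; whether this particular ntpd answers client queries depends on its restrict configuration, so the example is illustrative only:

    # Minimal SNTP-style query (client mode packet, read the transmit timestamp).
    # Whether the local ntpd replies depends on its restrict/ACL settings.
    import socket, struct, time

    NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 and 1970-01-01

    def sntp_time(server="127.0.0.1", port=123, timeout=2.0):
        # LI=0, VN=4, Mode=3 (client) -> first byte 0x23
        packet = b"\x23" + 47 * b"\x00"
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.settimeout(timeout)
            s.sendto(packet, (server, port))
            data, _ = s.recvfrom(512)
        # Transmit timestamp: 32-bit seconds + 32-bit fraction at offset 40
        secs, frac = struct.unpack("!II", data[40:48])
        return secs - NTP_EPOCH_OFFSET + frac / 2**32

    if __name__ == "__main__":
        try:
            t = sntp_time()
            print("server time:", time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime(t)))
        except socket.timeout:
            print("no response (server may not answer client queries)")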
May 17 00:20:06.871436 systemd-logind[1951]: Session 4 logged out. Waiting for processes to exit. May 17 00:20:06.872411 systemd-logind[1951]: Removed session 4. May 17 00:20:06.895172 systemd[1]: Started sshd@4-172.31.27.59:22-147.75.109.163:39474.service - OpenSSH per-connection server daemon (147.75.109.163:39474). May 17 00:20:07.055613 sshd[2247]: Accepted publickey for core from 147.75.109.163 port 39474 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:20:07.057066 sshd[2247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:20:07.061214 systemd-logind[1951]: New session 5 of user core. May 17 00:20:07.071186 systemd[1]: Started session-5.scope - Session 5 of User core. May 17 00:20:07.187146 sshd[2247]: pam_unix(sshd:session): session closed for user core May 17 00:20:07.191227 systemd[1]: sshd@4-172.31.27.59:22-147.75.109.163:39474.service: Deactivated successfully. May 17 00:20:07.193132 systemd[1]: session-5.scope: Deactivated successfully. May 17 00:20:07.194748 systemd-logind[1951]: Session 5 logged out. Waiting for processes to exit. May 17 00:20:07.195920 systemd-logind[1951]: Removed session 5. May 17 00:20:07.220348 systemd[1]: Started sshd@5-172.31.27.59:22-147.75.109.163:39488.service - OpenSSH per-connection server daemon (147.75.109.163:39488). May 17 00:20:07.377623 sshd[2254]: Accepted publickey for core from 147.75.109.163 port 39488 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:20:07.379323 sshd[2254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:20:07.384155 systemd-logind[1951]: New session 6 of user core. May 17 00:20:07.394454 systemd[1]: Started session-6.scope - Session 6 of User core. May 17 00:20:07.514410 sshd[2254]: pam_unix(sshd:session): session closed for user core May 17 00:20:07.519054 systemd[1]: sshd@5-172.31.27.59:22-147.75.109.163:39488.service: Deactivated successfully. May 17 00:20:07.520837 systemd[1]: session-6.scope: Deactivated successfully. May 17 00:20:07.521556 systemd-logind[1951]: Session 6 logged out. Waiting for processes to exit. May 17 00:20:07.522715 systemd-logind[1951]: Removed session 6. May 17 00:20:07.549094 systemd[1]: Started sshd@6-172.31.27.59:22-147.75.109.163:39498.service - OpenSSH per-connection server daemon (147.75.109.163:39498). May 17 00:20:07.709600 sshd[2261]: Accepted publickey for core from 147.75.109.163 port 39498 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:20:07.711154 sshd[2261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:20:07.715783 systemd-logind[1951]: New session 7 of user core. May 17 00:20:07.731181 systemd[1]: Started session-7.scope - Session 7 of User core. May 17 00:20:07.844468 sudo[2264]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 17 00:20:07.844879 sudo[2264]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:20:07.863544 sudo[2264]: pam_unix(sudo:session): session closed for user root May 17 00:20:07.886598 sshd[2261]: pam_unix(sshd:session): session closed for user core May 17 00:20:07.891556 systemd[1]: sshd@6-172.31.27.59:22-147.75.109.163:39498.service: Deactivated successfully. May 17 00:20:07.893517 systemd[1]: session-7.scope: Deactivated successfully. May 17 00:20:07.894611 systemd-logind[1951]: Session 7 logged out. Waiting for processes to exit. May 17 00:20:07.895861 systemd-logind[1951]: Removed session 7. 
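The block above is a series of short SSH sessions: each one is a pam_unix "session opened" line, optionally some sudo activity, then a "session closed" line. A small sketch that tallies opens and closes per user from journal-style lines in this format (fed on stdin):

    # Tally SSH session opens/closes from journal lines shaped like the
    # "sshd[PID]: pam_unix(sshd:session): session opened/closed for user NAME"
    # entries above. Line format is assumed to match those entries.
    import re
    import sys

    OPEN = re.compile(r"sshd\[\d+\]: pam_unix\(sshd:session\): session opened for user (\w+)")
    CLOSE = re.compile(r"sshd\[\d+\]: pam_unix\(sshd:session\): session closed for user (\w+)")

    def tally(lines):
        opened, closed = {}, {}
        for line in lines:
            m = OPEN.search(line)
            if m:
                opened[m.group(1)] = opened.get(m.group(1), 0) + 1
            m = CLOSE.search(line)
            if m:
                closed[m.group(1)] = closed.get(m.group(1), 0) + 1
        return opened, closed

    if __name__ == "__main__":
        o, c = tally(sys.stdin)
        for user in sorted(set(o) | set(c)):
            print(f"{user}: opened={o.get(user, 0)} closed={c.get(user, 0)}")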
May 17 00:20:07.915622 systemd[1]: Started sshd@7-172.31.27.59:22-147.75.109.163:39508.service - OpenSSH per-connection server daemon (147.75.109.163:39508). May 17 00:20:08.071479 sshd[2269]: Accepted publickey for core from 147.75.109.163 port 39508 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:20:08.073057 sshd[2269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:20:08.078046 systemd-logind[1951]: New session 8 of user core. May 17 00:20:08.088163 systemd[1]: Started session-8.scope - Session 8 of User core. May 17 00:20:08.181538 sudo[2273]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 17 00:20:08.182140 sudo[2273]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:20:08.186079 sudo[2273]: pam_unix(sudo:session): session closed for user root May 17 00:20:08.191625 sudo[2272]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 17 00:20:08.192036 sudo[2272]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:20:08.216376 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 17 00:20:08.218995 auditctl[2276]: No rules May 17 00:20:08.219887 systemd[1]: audit-rules.service: Deactivated successfully. May 17 00:20:08.220165 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 17 00:20:08.222164 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 17 00:20:08.255697 augenrules[2294]: No rules May 17 00:20:08.256821 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 17 00:20:08.258118 sudo[2272]: pam_unix(sudo:session): session closed for user root May 17 00:20:08.280180 sshd[2269]: pam_unix(sshd:session): session closed for user core May 17 00:20:08.283781 systemd[1]: sshd@7-172.31.27.59:22-147.75.109.163:39508.service: Deactivated successfully. May 17 00:20:08.285753 systemd[1]: session-8.scope: Deactivated successfully. May 17 00:20:08.287526 systemd-logind[1951]: Session 8 logged out. Waiting for processes to exit. May 17 00:20:08.288665 systemd-logind[1951]: Removed session 8. May 17 00:20:08.310012 systemd[1]: Started sshd@8-172.31.27.59:22-147.75.109.163:57260.service - OpenSSH per-connection server daemon (147.75.109.163:57260). May 17 00:20:08.468290 sshd[2302]: Accepted publickey for core from 147.75.109.163 port 57260 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:20:08.469883 sshd[2302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:20:08.476042 systemd-logind[1951]: New session 9 of user core. May 17 00:20:08.485228 systemd[1]: Started session-9.scope - Session 9 of User core. May 17 00:20:08.581631 sudo[2305]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 17 00:20:08.581940 sudo[2305]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:20:08.945387 systemd[1]: Starting docker.service - Docker Application Container Engine... May 17 00:20:08.945510 (dockerd)[2321]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 17 00:20:09.266920 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
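The kubelet failure earlier in this section, and the scheduled restart noted above, come from the missing /var/lib/kubelet/config.yaml; on kubeadm-managed nodes that file is normally written during init/join, after which the restarts succeed. A trivial sketch that just reports whether the file has appeared yet:

    # The kubelet keeps exiting because /var/lib/kubelet/config.yaml is missing;
    # kubeadm normally creates it when the node is initialized or joined.
    # This sketch only checks for the file's presence.
    import os

    KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"

    def kubelet_config_status(path=KUBELET_CONFIG):
        if not os.path.exists(path):
            return "missing - kubelet will keep exiting until it is created"
        return f"present ({os.path.getsize(path)} bytes)"

    if __name__ == "__main__":
        print(KUBELET_CONFIG, "->", kubelet_config_status())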
May 17 00:20:09.276496 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:20:09.364390 dockerd[2321]: time="2025-05-17T00:20:09.364323622Z" level=info msg="Starting up" May 17 00:20:09.487433 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2961852115-merged.mount: Deactivated successfully. May 17 00:20:09.577860 dockerd[2321]: time="2025-05-17T00:20:09.577748221Z" level=info msg="Loading containers: start." May 17 00:20:09.601214 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:20:09.605119 (kubelet)[2352]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:20:09.662503 kubelet[2352]: E0517 00:20:09.662056 2352 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:20:09.666011 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:20:09.666160 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:20:09.723968 kernel: Initializing XFRM netlink socket May 17 00:20:09.752020 (udev-worker)[2358]: Network interface NamePolicy= disabled on kernel command line. May 17 00:20:09.813947 systemd-networkd[1889]: docker0: Link UP May 17 00:20:09.839729 dockerd[2321]: time="2025-05-17T00:20:09.839611173Z" level=info msg="Loading containers: done." May 17 00:20:09.865431 dockerd[2321]: time="2025-05-17T00:20:09.865379832Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 17 00:20:09.865604 dockerd[2321]: time="2025-05-17T00:20:09.865497926Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 17 00:20:09.865639 dockerd[2321]: time="2025-05-17T00:20:09.865621126Z" level=info msg="Daemon has completed initialization" May 17 00:20:09.915877 dockerd[2321]: time="2025-05-17T00:20:09.915352355Z" level=info msg="API listen on /run/docker.sock" May 17 00:20:09.915607 systemd[1]: Started docker.service - Docker Application Container Engine. May 17 00:20:10.915720 containerd[1986]: time="2025-05-17T00:20:10.915658578Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\"" May 17 00:20:11.535639 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount148299584.mount: Deactivated successfully. 
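The Docker daemon above reports "API listen on /run/docker.sock"; the Engine API is plain HTTP over that Unix socket, so it can be queried without any client library. A sketch that issues GET /version over the socket:

    # Query the Docker Engine API over /run/docker.sock (see "API listen on
    # /run/docker.sock" above) by speaking HTTP across a Unix socket.
    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, socket_path, timeout=5):
            super().__init__("localhost", timeout=timeout)
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.settimeout(self.timeout)
            self.sock.connect(self.socket_path)

    if __name__ == "__main__":
        conn = UnixHTTPConnection("/run/docker.sock")
        conn.request("GET", "/version")
        info = json.loads(conn.getresponse().read())
        print(info.get("Version"), info.get("ApiVersion"))
        conn.close()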
May 17 00:20:12.808528 containerd[1986]: time="2025-05-17T00:20:12.808470591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:12.809526 containerd[1986]: time="2025-05-17T00:20:12.809481670Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.9: active requests=0, bytes read=28078845" May 17 00:20:12.812515 containerd[1986]: time="2025-05-17T00:20:12.812450035Z" level=info msg="ImageCreate event name:\"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:12.815599 containerd[1986]: time="2025-05-17T00:20:12.815544188Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:12.816608 containerd[1986]: time="2025-05-17T00:20:12.816453366Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.9\" with image id \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\", size \"28075645\" in 1.90075581s" May 17 00:20:12.816608 containerd[1986]: time="2025-05-17T00:20:12.816488233Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\" returns image reference \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\"" May 17 00:20:12.817137 containerd[1986]: time="2025-05-17T00:20:12.817105248Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\"" May 17 00:20:14.164947 containerd[1986]: time="2025-05-17T00:20:14.164880214Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:14.168023 containerd[1986]: time="2025-05-17T00:20:14.167961219Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.9: active requests=0, bytes read=24713522" May 17 00:20:14.170476 containerd[1986]: time="2025-05-17T00:20:14.170398438Z" level=info msg="ImageCreate event name:\"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:14.174520 containerd[1986]: time="2025-05-17T00:20:14.174451531Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:14.175780 containerd[1986]: time="2025-05-17T00:20:14.175617520Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.9\" with image id \"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\", size \"26315362\" in 1.358348072s" May 17 00:20:14.175780 containerd[1986]: time="2025-05-17T00:20:14.175663747Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\" returns image reference \"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\"" May 17 00:20:14.176546 
containerd[1986]: time="2025-05-17T00:20:14.176520738Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\"" May 17 00:20:15.354631 containerd[1986]: time="2025-05-17T00:20:15.354567275Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:15.356392 containerd[1986]: time="2025-05-17T00:20:15.356333795Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.9: active requests=0, bytes read=18784311" May 17 00:20:15.358126 containerd[1986]: time="2025-05-17T00:20:15.358074245Z" level=info msg="ImageCreate event name:\"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:15.361369 containerd[1986]: time="2025-05-17T00:20:15.361008722Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:15.362066 containerd[1986]: time="2025-05-17T00:20:15.362031390Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.9\" with image id \"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\", size \"20386169\" in 1.185476653s" May 17 00:20:15.362066 containerd[1986]: time="2025-05-17T00:20:15.362065227Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\" returns image reference \"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\"" May 17 00:20:15.362646 containerd[1986]: time="2025-05-17T00:20:15.362604531Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\"" May 17 00:20:16.457478 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3341756310.mount: Deactivated successfully. 
May 17 00:20:16.998433 containerd[1986]: time="2025-05-17T00:20:16.998383441Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:17.002028 containerd[1986]: time="2025-05-17T00:20:17.001962364Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.9: active requests=0, bytes read=30355623" May 17 00:20:17.006385 containerd[1986]: time="2025-05-17T00:20:17.005176584Z" level=info msg="ImageCreate event name:\"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:17.009886 containerd[1986]: time="2025-05-17T00:20:17.009823976Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:17.011021 containerd[1986]: time="2025-05-17T00:20:17.010548783Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.9\" with image id \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\", repo tag \"registry.k8s.io/kube-proxy:v1.31.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\", size \"30354642\" in 1.647912976s" May 17 00:20:17.011021 containerd[1986]: time="2025-05-17T00:20:17.010588510Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\"" May 17 00:20:17.011330 containerd[1986]: time="2025-05-17T00:20:17.011299499Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 17 00:20:17.565078 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2312550606.mount: Deactivated successfully. 
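The pull lines above name each image both by tag (registry.k8s.io/kube-proxy:v1.31.9) and by repo digest (...@sha256:...). A simplified sketch that splits such a reference into registry, repository, tag and digest; real parsers also check whether the first path component actually looks like a hostname:

    # Split an image reference of the kinds shown above into its parts.
    # Simplified: assumes the first path component is the registry.
    def split_image_ref(ref):
        digest = None
        if "@" in ref:
            ref, digest = ref.split("@", 1)
        # A tag follows the last ':' only if it appears after the last '/'
        # (otherwise the ':' would belong to a registry port).
        tag = None
        if ":" in ref.rsplit("/", 1)[-1]:
            ref, tag = ref.rsplit(":", 1)
        parts = ref.split("/", 1)
        registry, repository = (parts[0], parts[1]) if len(parts) == 2 else ("docker.io", parts[0])
        return registry, repository, tag, digest

    if __name__ == "__main__":
        for ref in (
            "registry.k8s.io/kube-proxy:v1.31.9",
            "registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1",
        ):
            print(split_image_ref(ref))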
May 17 00:20:18.635406 containerd[1986]: time="2025-05-17T00:20:18.635348836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:18.637531 containerd[1986]: time="2025-05-17T00:20:18.637476980Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" May 17 00:20:18.640158 containerd[1986]: time="2025-05-17T00:20:18.640094339Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:18.644537 containerd[1986]: time="2025-05-17T00:20:18.644477744Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:18.646340 containerd[1986]: time="2025-05-17T00:20:18.645701969Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.63436542s" May 17 00:20:18.646340 containerd[1986]: time="2025-05-17T00:20:18.645749820Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 17 00:20:18.646964 containerd[1986]: time="2025-05-17T00:20:18.646761077Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 17 00:20:19.184538 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3610728480.mount: Deactivated successfully. 
May 17 00:20:19.196539 containerd[1986]: time="2025-05-17T00:20:19.196490298Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:19.198755 containerd[1986]: time="2025-05-17T00:20:19.198521421Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 17 00:20:19.202166 containerd[1986]: time="2025-05-17T00:20:19.200924848Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:19.204415 containerd[1986]: time="2025-05-17T00:20:19.204332614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:19.205662 containerd[1986]: time="2025-05-17T00:20:19.205142428Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 558.341119ms" May 17 00:20:19.205662 containerd[1986]: time="2025-05-17T00:20:19.205187686Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 17 00:20:19.206018 containerd[1986]: time="2025-05-17T00:20:19.205972809Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 17 00:20:19.722723 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 17 00:20:19.729225 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:20:19.738759 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2361494496.mount: Deactivated successfully. May 17 00:20:20.071145 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:20:20.087896 (kubelet)[2623]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:20:20.241409 kubelet[2623]: E0517 00:20:20.241217 2623 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:20:20.246980 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:20:20.247279 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
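Editor's note: the kubelet exit above ("failed to load Kubelet config file ... /var/lib/kubelet/config.yaml: no such file or directory", status=1/FAILURE) is expected on a node where kubeadm has not yet written the kubelet configuration; systemd keeps restarting the unit until the file appears. A hypothetical preflight check that mirrors the same condition is sketched below; the path comes from the log, the check itself is not part of the kubelet.

```go
// Hypothetical preflight check mirroring the failure above: kubelet.service
// keeps exiting with status 1 until a provisioner (kubeadm here) writes
// /var/lib/kubelet/config.yaml. Illustrative only.
package main

import (
	"fmt"
	"os"
)

func main() {
	const kubeletConfig = "/var/lib/kubelet/config.yaml"
	if _, err := os.Stat(kubeletConfig); err != nil {
		// Same condition that produces "failed to load Kubelet config file".
		fmt.Fprintf(os.Stderr, "kubelet not ready to start: %v\n", err)
		os.Exit(1)
	}
	fmt.Println("kubelet config present:", kubeletConfig)
}
```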
May 17 00:20:22.459193 containerd[1986]: time="2025-05-17T00:20:22.459123118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:22.461198 containerd[1986]: time="2025-05-17T00:20:22.461145944Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" May 17 00:20:22.463724 containerd[1986]: time="2025-05-17T00:20:22.463660963Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:22.468029 containerd[1986]: time="2025-05-17T00:20:22.467986736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:22.469282 containerd[1986]: time="2025-05-17T00:20:22.469138693Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.263118033s" May 17 00:20:22.469282 containerd[1986]: time="2025-05-17T00:20:22.469173682Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 17 00:20:23.523506 systemd[1]: systemd-hostnamed.service: Deactivated successfully. May 17 00:20:25.525107 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:20:25.542491 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:20:25.585614 systemd[1]: Reloading requested from client PID 2703 ('systemctl') (unit session-9.scope)... May 17 00:20:25.585631 systemd[1]: Reloading... May 17 00:20:25.709048 zram_generator::config[2739]: No configuration found. May 17 00:20:25.873759 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:20:25.960384 systemd[1]: Reloading finished in 374 ms. May 17 00:20:26.012699 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 17 00:20:26.012812 systemd[1]: kubelet.service: Failed with result 'signal'. May 17 00:20:26.013207 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:20:26.018373 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:20:26.212777 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:20:26.223520 (kubelet)[2806]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 00:20:26.280034 kubelet[2806]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:20:26.280034 kubelet[2806]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
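Editor's note: the "Reloading requested ... Reloading finished" pair followed by the kubelet being stopped and started again is the usual systemctl daemon-reload plus unit restart sequence. As a hedged sketch only (the log's reload was issued by systemctl from a login session, not by custom code), the go-systemd D-Bus bindings can perform the same two steps:

```go
// Sketch of the daemon-reload + kubelet restart visible above, using the
// go-systemd D-Bus bindings. Unit name taken from the log; everything else is
// an illustrative assumption.
package main

import (
	"context"
	"log"

	"github.com/coreos/go-systemd/v22/dbus"
)

func main() {
	ctx := context.Background()
	conn, err := dbus.NewSystemConnectionContext(ctx)
	if err != nil {
		log.Fatalf("connect to systemd: %v", err)
	}
	defer conn.Close()

	// Equivalent of `systemctl daemon-reload`.
	if err := conn.ReloadContext(ctx); err != nil {
		log.Fatalf("daemon-reload: %v", err)
	}

	// Equivalent of `systemctl restart kubelet.service`; the channel reports
	// the job result ("done", "failed", ...).
	result := make(chan string, 1)
	if _, err := conn.RestartUnitContext(ctx, "kubelet.service", "replace", result); err != nil {
		log.Fatalf("restart kubelet: %v", err)
	}
	log.Printf("kubelet restart job finished: %s", <-result)
}
```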
May 17 00:20:26.280034 kubelet[2806]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:20:26.280399 kubelet[2806]: I0517 00:20:26.280079 2806 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:20:26.524147 kubelet[2806]: I0517 00:20:26.524035 2806 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 17 00:20:26.524147 kubelet[2806]: I0517 00:20:26.524067 2806 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:20:26.524533 kubelet[2806]: I0517 00:20:26.524324 2806 server.go:934] "Client rotation is on, will bootstrap in background" May 17 00:20:26.576052 kubelet[2806]: I0517 00:20:26.576006 2806 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:20:26.582195 kubelet[2806]: E0517 00:20:26.581834 2806 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.27.59:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.27.59:6443: connect: connection refused" logger="UnhandledError" May 17 00:20:26.607164 kubelet[2806]: E0517 00:20:26.607098 2806 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:20:26.607164 kubelet[2806]: I0517 00:20:26.607142 2806 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:20:26.622768 kubelet[2806]: I0517 00:20:26.622732 2806 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:20:26.625275 kubelet[2806]: I0517 00:20:26.625236 2806 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 17 00:20:26.625463 kubelet[2806]: I0517 00:20:26.625424 2806 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:20:26.625634 kubelet[2806]: I0517 00:20:26.625462 2806 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-27-59","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:20:26.625731 kubelet[2806]: I0517 00:20:26.625641 2806 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:20:26.625731 kubelet[2806]: I0517 00:20:26.625651 2806 container_manager_linux.go:300] "Creating device plugin manager" May 17 00:20:26.625780 kubelet[2806]: I0517 00:20:26.625747 2806 state_mem.go:36] "Initialized new in-memory state store" May 17 00:20:26.631542 kubelet[2806]: I0517 00:20:26.631399 2806 kubelet.go:408] "Attempting to sync node with API server" May 17 00:20:26.631542 kubelet[2806]: I0517 00:20:26.631450 2806 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:20:26.631919 kubelet[2806]: I0517 00:20:26.631768 2806 kubelet.go:314] "Adding apiserver pod source" May 17 00:20:26.631919 kubelet[2806]: I0517 00:20:26.631818 2806 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:20:26.637738 kubelet[2806]: W0517 00:20:26.636829 2806 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.27.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-59&limit=500&resourceVersion=0": dial tcp 172.31.27.59:6443: connect: connection refused May 17 00:20:26.637738 kubelet[2806]: E0517 00:20:26.636905 2806 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://172.31.27.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-59&limit=500&resourceVersion=0\": dial tcp 172.31.27.59:6443: connect: connection refused" logger="UnhandledError" May 17 00:20:26.637738 kubelet[2806]: W0517 00:20:26.637341 2806 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.27.59:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.27.59:6443: connect: connection refused May 17 00:20:26.637738 kubelet[2806]: E0517 00:20:26.637390 2806 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.27.59:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.27.59:6443: connect: connection refused" logger="UnhandledError" May 17 00:20:26.638171 kubelet[2806]: I0517 00:20:26.638154 2806 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 00:20:26.643607 kubelet[2806]: I0517 00:20:26.643580 2806 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:20:26.643745 kubelet[2806]: W0517 00:20:26.643654 2806 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 17 00:20:26.644487 kubelet[2806]: I0517 00:20:26.644294 2806 server.go:1274] "Started kubelet" May 17 00:20:26.644487 kubelet[2806]: I0517 00:20:26.644396 2806 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:20:26.645760 kubelet[2806]: I0517 00:20:26.645466 2806 server.go:449] "Adding debug handlers to kubelet server" May 17 00:20:26.651047 kubelet[2806]: I0517 00:20:26.651016 2806 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:20:26.653131 kubelet[2806]: I0517 00:20:26.652352 2806 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:20:26.653131 kubelet[2806]: I0517 00:20:26.652595 2806 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:20:26.654968 kubelet[2806]: E0517 00:20:26.653037 2806 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.27.59:6443/api/v1/namespaces/default/events\": dial tcp 172.31.27.59:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-27-59.184028898c01207a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-27-59,UID:ip-172-31-27-59,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-27-59,},FirstTimestamp:2025-05-17 00:20:26.64426713 +0000 UTC m=+0.416949153,LastTimestamp:2025-05-17 00:20:26.64426713 +0000 UTC m=+0.416949153,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-27-59,}" May 17 00:20:26.655882 kubelet[2806]: I0517 00:20:26.655543 2806 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:20:26.662039 kubelet[2806]: I0517 00:20:26.662004 2806 volume_manager.go:289] "Starting Kubelet Volume Manager" May 17 
00:20:26.662410 kubelet[2806]: E0517 00:20:26.662379 2806 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-27-59\" not found" May 17 00:20:26.665286 kubelet[2806]: E0517 00:20:26.664307 2806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-59?timeout=10s\": dial tcp 172.31.27.59:6443: connect: connection refused" interval="200ms" May 17 00:20:26.665286 kubelet[2806]: I0517 00:20:26.664407 2806 reconciler.go:26] "Reconciler: start to sync state" May 17 00:20:26.665286 kubelet[2806]: I0517 00:20:26.664453 2806 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 17 00:20:26.665286 kubelet[2806]: W0517 00:20:26.664862 2806 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.27.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.27.59:6443: connect: connection refused May 17 00:20:26.665683 kubelet[2806]: E0517 00:20:26.664922 2806 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.27.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.27.59:6443: connect: connection refused" logger="UnhandledError" May 17 00:20:26.665984 kubelet[2806]: I0517 00:20:26.665962 2806 factory.go:221] Registration of the systemd container factory successfully May 17 00:20:26.666090 kubelet[2806]: I0517 00:20:26.666070 2806 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:20:26.669098 kubelet[2806]: E0517 00:20:26.668956 2806 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:20:26.669221 kubelet[2806]: I0517 00:20:26.669207 2806 factory.go:221] Registration of the containerd container factory successfully May 17 00:20:26.687002 kubelet[2806]: I0517 00:20:26.686790 2806 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:20:26.690719 kubelet[2806]: I0517 00:20:26.690295 2806 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 17 00:20:26.690719 kubelet[2806]: I0517 00:20:26.690341 2806 status_manager.go:217] "Starting to sync pod status with apiserver" May 17 00:20:26.690719 kubelet[2806]: I0517 00:20:26.690362 2806 kubelet.go:2321] "Starting kubelet main sync loop" May 17 00:20:26.690719 kubelet[2806]: E0517 00:20:26.690405 2806 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:20:26.699518 kubelet[2806]: W0517 00:20:26.699325 2806 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.27.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.27.59:6443: connect: connection refused May 17 00:20:26.699518 kubelet[2806]: I0517 00:20:26.699392 2806 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 00:20:26.699518 kubelet[2806]: I0517 00:20:26.699403 2806 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 00:20:26.699518 kubelet[2806]: E0517 00:20:26.699418 2806 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.27.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.27.59:6443: connect: connection refused" logger="UnhandledError" May 17 00:20:26.700114 kubelet[2806]: I0517 00:20:26.699816 2806 state_mem.go:36] "Initialized new in-memory state store" May 17 00:20:26.706854 kubelet[2806]: I0517 00:20:26.706807 2806 policy_none.go:49] "None policy: Start" May 17 00:20:26.707895 kubelet[2806]: I0517 00:20:26.707870 2806 memory_manager.go:170] "Starting memorymanager" policy="None" May 17 00:20:26.708031 kubelet[2806]: I0517 00:20:26.707904 2806 state_mem.go:35] "Initializing new in-memory state store" May 17 00:20:26.719976 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 17 00:20:26.730673 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 17 00:20:26.734202 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 17 00:20:26.745323 kubelet[2806]: I0517 00:20:26.745113 2806 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:20:26.745323 kubelet[2806]: I0517 00:20:26.745300 2806 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:20:26.745323 kubelet[2806]: I0517 00:20:26.745309 2806 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:20:26.745720 kubelet[2806]: I0517 00:20:26.745701 2806 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:20:26.747234 kubelet[2806]: E0517 00:20:26.746905 2806 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-27-59\" not found" May 17 00:20:26.802119 systemd[1]: Created slice kubepods-burstable-poda94c201d4e9b737cc6128e9af4bbb8c9.slice - libcontainer container kubepods-burstable-poda94c201d4e9b737cc6128e9af4bbb8c9.slice. May 17 00:20:26.828385 systemd[1]: Created slice kubepods-burstable-pod7fcc5800ff1ba089733f90a1280e9780.slice - libcontainer container kubepods-burstable-pod7fcc5800ff1ba089733f90a1280e9780.slice. 
May 17 00:20:26.840408 systemd[1]: Created slice kubepods-burstable-pod191aa19f835251fc59b2510797f1ab84.slice - libcontainer container kubepods-burstable-pod191aa19f835251fc59b2510797f1ab84.slice. May 17 00:20:26.847593 kubelet[2806]: I0517 00:20:26.847563 2806 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-27-59" May 17 00:20:26.847960 kubelet[2806]: E0517 00:20:26.847898 2806 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.27.59:6443/api/v1/nodes\": dial tcp 172.31.27.59:6443: connect: connection refused" node="ip-172-31-27-59" May 17 00:20:26.865599 kubelet[2806]: E0517 00:20:26.865544 2806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-59?timeout=10s\": dial tcp 172.31.27.59:6443: connect: connection refused" interval="400ms" May 17 00:20:26.966109 kubelet[2806]: I0517 00:20:26.966045 2806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a94c201d4e9b737cc6128e9af4bbb8c9-k8s-certs\") pod \"kube-apiserver-ip-172-31-27-59\" (UID: \"a94c201d4e9b737cc6128e9af4bbb8c9\") " pod="kube-system/kube-apiserver-ip-172-31-27-59" May 17 00:20:26.966402 kubelet[2806]: I0517 00:20:26.966119 2806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a94c201d4e9b737cc6128e9af4bbb8c9-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-27-59\" (UID: \"a94c201d4e9b737cc6128e9af4bbb8c9\") " pod="kube-system/kube-apiserver-ip-172-31-27-59" May 17 00:20:26.966402 kubelet[2806]: I0517 00:20:26.966184 2806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7fcc5800ff1ba089733f90a1280e9780-ca-certs\") pod \"kube-controller-manager-ip-172-31-27-59\" (UID: \"7fcc5800ff1ba089733f90a1280e9780\") " pod="kube-system/kube-controller-manager-ip-172-31-27-59" May 17 00:20:26.966402 kubelet[2806]: I0517 00:20:26.966205 2806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7fcc5800ff1ba089733f90a1280e9780-kubeconfig\") pod \"kube-controller-manager-ip-172-31-27-59\" (UID: \"7fcc5800ff1ba089733f90a1280e9780\") " pod="kube-system/kube-controller-manager-ip-172-31-27-59" May 17 00:20:26.966402 kubelet[2806]: I0517 00:20:26.966304 2806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7fcc5800ff1ba089733f90a1280e9780-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-27-59\" (UID: \"7fcc5800ff1ba089733f90a1280e9780\") " pod="kube-system/kube-controller-manager-ip-172-31-27-59" May 17 00:20:26.966402 kubelet[2806]: I0517 00:20:26.966331 2806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a94c201d4e9b737cc6128e9af4bbb8c9-ca-certs\") pod \"kube-apiserver-ip-172-31-27-59\" (UID: \"a94c201d4e9b737cc6128e9af4bbb8c9\") " pod="kube-system/kube-apiserver-ip-172-31-27-59" May 17 00:20:26.966568 kubelet[2806]: I0517 00:20:26.966355 2806 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7fcc5800ff1ba089733f90a1280e9780-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-27-59\" (UID: \"7fcc5800ff1ba089733f90a1280e9780\") " pod="kube-system/kube-controller-manager-ip-172-31-27-59" May 17 00:20:26.966568 kubelet[2806]: I0517 00:20:26.966369 2806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7fcc5800ff1ba089733f90a1280e9780-k8s-certs\") pod \"kube-controller-manager-ip-172-31-27-59\" (UID: \"7fcc5800ff1ba089733f90a1280e9780\") " pod="kube-system/kube-controller-manager-ip-172-31-27-59" May 17 00:20:26.966568 kubelet[2806]: I0517 00:20:26.966385 2806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/191aa19f835251fc59b2510797f1ab84-kubeconfig\") pod \"kube-scheduler-ip-172-31-27-59\" (UID: \"191aa19f835251fc59b2510797f1ab84\") " pod="kube-system/kube-scheduler-ip-172-31-27-59" May 17 00:20:27.049798 kubelet[2806]: I0517 00:20:27.049771 2806 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-27-59" May 17 00:20:27.050144 kubelet[2806]: E0517 00:20:27.050110 2806 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.27.59:6443/api/v1/nodes\": dial tcp 172.31.27.59:6443: connect: connection refused" node="ip-172-31-27-59" May 17 00:20:27.125865 containerd[1986]: time="2025-05-17T00:20:27.125748436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-27-59,Uid:a94c201d4e9b737cc6128e9af4bbb8c9,Namespace:kube-system,Attempt:0,}" May 17 00:20:27.145117 containerd[1986]: time="2025-05-17T00:20:27.145075070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-27-59,Uid:191aa19f835251fc59b2510797f1ab84,Namespace:kube-system,Attempt:0,}" May 17 00:20:27.145364 containerd[1986]: time="2025-05-17T00:20:27.145075512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-27-59,Uid:7fcc5800ff1ba089733f90a1280e9780,Namespace:kube-system,Attempt:0,}" May 17 00:20:27.266546 kubelet[2806]: E0517 00:20:27.266504 2806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-59?timeout=10s\": dial tcp 172.31.27.59:6443: connect: connection refused" interval="800ms" May 17 00:20:27.451863 kubelet[2806]: I0517 00:20:27.451760 2806 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-27-59" May 17 00:20:27.452195 kubelet[2806]: E0517 00:20:27.452084 2806 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.27.59:6443/api/v1/nodes\": dial tcp 172.31.27.59:6443: connect: connection refused" node="ip-172-31-27-59" May 17 00:20:27.639193 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3604699516.mount: Deactivated successfully. 
May 17 00:20:27.656699 containerd[1986]: time="2025-05-17T00:20:27.656647208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:20:27.662979 containerd[1986]: time="2025-05-17T00:20:27.662895623Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" May 17 00:20:27.664972 containerd[1986]: time="2025-05-17T00:20:27.664861363Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:20:27.667236 containerd[1986]: time="2025-05-17T00:20:27.667191168Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:20:27.668945 containerd[1986]: time="2025-05-17T00:20:27.668892326Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:20:27.671211 containerd[1986]: time="2025-05-17T00:20:27.670894241Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 17 00:20:27.673105 containerd[1986]: time="2025-05-17T00:20:27.673054098Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 17 00:20:27.675270 containerd[1986]: time="2025-05-17T00:20:27.675213868Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:20:27.676164 containerd[1986]: time="2025-05-17T00:20:27.675980829Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 550.151728ms" May 17 00:20:27.678197 containerd[1986]: time="2025-05-17T00:20:27.678157306Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 533.000571ms" May 17 00:20:27.681230 containerd[1986]: time="2025-05-17T00:20:27.681194889Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 535.909991ms" May 17 00:20:27.692861 kubelet[2806]: W0517 00:20:27.692797 2806 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.27.59:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.27.59:6443: connect: connection refused May 17 
00:20:27.692861 kubelet[2806]: E0517 00:20:27.692864 2806 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.27.59:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.27.59:6443: connect: connection refused" logger="UnhandledError" May 17 00:20:27.788156 kubelet[2806]: W0517 00:20:27.788000 2806 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.27.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.27.59:6443: connect: connection refused May 17 00:20:27.790280 kubelet[2806]: E0517 00:20:27.789252 2806 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.27.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.27.59:6443: connect: connection refused" logger="UnhandledError" May 17 00:20:27.896044 containerd[1986]: time="2025-05-17T00:20:27.895752201Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:20:27.896044 containerd[1986]: time="2025-05-17T00:20:27.895806941Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:20:27.896044 containerd[1986]: time="2025-05-17T00:20:27.895821830Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:20:27.896044 containerd[1986]: time="2025-05-17T00:20:27.895907368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:20:27.900399 containerd[1986]: time="2025-05-17T00:20:27.900040308Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:20:27.900399 containerd[1986]: time="2025-05-17T00:20:27.900085599Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:20:27.900399 containerd[1986]: time="2025-05-17T00:20:27.900110422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:20:27.900399 containerd[1986]: time="2025-05-17T00:20:27.900186356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:20:27.904727 containerd[1986]: time="2025-05-17T00:20:27.903992535Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:20:27.904727 containerd[1986]: time="2025-05-17T00:20:27.904552229Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:20:27.904727 containerd[1986]: time="2025-05-17T00:20:27.904565180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:20:27.904727 containerd[1986]: time="2025-05-17T00:20:27.904641007Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:20:27.922134 systemd[1]: Started cri-containerd-43dd14c33a7a84dc0d1a95417bf6c8bba562fc5b177cc3d744a99511737191a8.scope - libcontainer container 43dd14c33a7a84dc0d1a95417bf6c8bba562fc5b177cc3d744a99511737191a8. May 17 00:20:27.935723 systemd[1]: Started cri-containerd-93ae21c6438b87f45fbb04feb2ce1f1be4addacc9318a759a84e24c4ecf205fd.scope - libcontainer container 93ae21c6438b87f45fbb04feb2ce1f1be4addacc9318a759a84e24c4ecf205fd. May 17 00:20:27.937212 systemd[1]: Started cri-containerd-d68e40d4b35069b4f9b03b6ac710bdab02a25fedb12626988968878ccde1494b.scope - libcontainer container d68e40d4b35069b4f9b03b6ac710bdab02a25fedb12626988968878ccde1494b. May 17 00:20:28.002058 containerd[1986]: time="2025-05-17T00:20:28.002000131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-27-59,Uid:7fcc5800ff1ba089733f90a1280e9780,Namespace:kube-system,Attempt:0,} returns sandbox id \"d68e40d4b35069b4f9b03b6ac710bdab02a25fedb12626988968878ccde1494b\"" May 17 00:20:28.003794 containerd[1986]: time="2025-05-17T00:20:28.003763904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-27-59,Uid:a94c201d4e9b737cc6128e9af4bbb8c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"43dd14c33a7a84dc0d1a95417bf6c8bba562fc5b177cc3d744a99511737191a8\"" May 17 00:20:28.013136 containerd[1986]: time="2025-05-17T00:20:28.013101340Z" level=info msg="CreateContainer within sandbox \"43dd14c33a7a84dc0d1a95417bf6c8bba562fc5b177cc3d744a99511737191a8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 17 00:20:28.013489 containerd[1986]: time="2025-05-17T00:20:28.013442022Z" level=info msg="CreateContainer within sandbox \"d68e40d4b35069b4f9b03b6ac710bdab02a25fedb12626988968878ccde1494b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 17 00:20:28.024604 kubelet[2806]: W0517 00:20:28.024490 2806 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.27.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.27.59:6443: connect: connection refused May 17 00:20:28.024878 kubelet[2806]: E0517 00:20:28.024802 2806 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.27.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.27.59:6443: connect: connection refused" logger="UnhandledError" May 17 00:20:28.025409 containerd[1986]: time="2025-05-17T00:20:28.025077226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-27-59,Uid:191aa19f835251fc59b2510797f1ab84,Namespace:kube-system,Attempt:0,} returns sandbox id \"93ae21c6438b87f45fbb04feb2ce1f1be4addacc9318a759a84e24c4ecf205fd\"" May 17 00:20:28.028483 containerd[1986]: time="2025-05-17T00:20:28.028448750Z" level=info msg="CreateContainer within sandbox \"93ae21c6438b87f45fbb04feb2ce1f1be4addacc9318a759a84e24c4ecf205fd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 17 00:20:28.054146 containerd[1986]: time="2025-05-17T00:20:28.053948714Z" level=info msg="CreateContainer within sandbox \"43dd14c33a7a84dc0d1a95417bf6c8bba562fc5b177cc3d744a99511737191a8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d6f4e40772f8c2234537f3fac6fdb444d16ca7c2bd44815094a9b00c8fb00fda\"" 
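Editor's note: the RunPodSandbox / CreateContainer entries above are CRI calls from the kubelet, with systemd starting a cri-containerd scope per sandbox and container. As a rough illustration of the underlying create-then-start flow (not the CRI code path itself), the containerd Go client exposes the same two steps:

```go
// Create and start a container from an already-pulled image with the
// containerd Go client. Illustrative only; the kubelet performs the
// equivalent through CRI CreateContainer/StartContainer. The container ID
// and image reference here are examples, not values from this log.
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	image, err := client.GetImage(ctx, "registry.k8s.io/pause:3.8")
	if err != nil {
		log.Fatal(err)
	}
	container, err := client.NewContainer(ctx, "demo-pause",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("demo-pause-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	log.Printf("started task with pid %d", task.Pid())
}
```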
May 17 00:20:28.055262 containerd[1986]: time="2025-05-17T00:20:28.055231109Z" level=info msg="StartContainer for \"d6f4e40772f8c2234537f3fac6fdb444d16ca7c2bd44815094a9b00c8fb00fda\"" May 17 00:20:28.067132 kubelet[2806]: E0517 00:20:28.067059 2806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-59?timeout=10s\": dial tcp 172.31.27.59:6443: connect: connection refused" interval="1.6s" May 17 00:20:28.073362 containerd[1986]: time="2025-05-17T00:20:28.073244622Z" level=info msg="CreateContainer within sandbox \"d68e40d4b35069b4f9b03b6ac710bdab02a25fedb12626988968878ccde1494b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a2eafbdf4aed74037ba469cf1696f3aad497324040ca434ca28c56347ce6ad90\"" May 17 00:20:28.074996 containerd[1986]: time="2025-05-17T00:20:28.074026761Z" level=info msg="StartContainer for \"a2eafbdf4aed74037ba469cf1696f3aad497324040ca434ca28c56347ce6ad90\"" May 17 00:20:28.076938 containerd[1986]: time="2025-05-17T00:20:28.076883955Z" level=info msg="CreateContainer within sandbox \"93ae21c6438b87f45fbb04feb2ce1f1be4addacc9318a759a84e24c4ecf205fd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"735eef7a01d32082b25f68e31d66c54e8762a01cff9acb11a489c500b4a0e5e4\"" May 17 00:20:28.077564 containerd[1986]: time="2025-05-17T00:20:28.077533172Z" level=info msg="StartContainer for \"735eef7a01d32082b25f68e31d66c54e8762a01cff9acb11a489c500b4a0e5e4\"" May 17 00:20:28.089168 systemd[1]: Started cri-containerd-d6f4e40772f8c2234537f3fac6fdb444d16ca7c2bd44815094a9b00c8fb00fda.scope - libcontainer container d6f4e40772f8c2234537f3fac6fdb444d16ca7c2bd44815094a9b00c8fb00fda. May 17 00:20:28.091136 kubelet[2806]: W0517 00:20:28.091068 2806 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.27.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-59&limit=500&resourceVersion=0": dial tcp 172.31.27.59:6443: connect: connection refused May 17 00:20:28.091660 kubelet[2806]: E0517 00:20:28.091588 2806 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.27.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-59&limit=500&resourceVersion=0\": dial tcp 172.31.27.59:6443: connect: connection refused" logger="UnhandledError" May 17 00:20:28.129138 systemd[1]: Started cri-containerd-a2eafbdf4aed74037ba469cf1696f3aad497324040ca434ca28c56347ce6ad90.scope - libcontainer container a2eafbdf4aed74037ba469cf1696f3aad497324040ca434ca28c56347ce6ad90. May 17 00:20:28.151153 systemd[1]: Started cri-containerd-735eef7a01d32082b25f68e31d66c54e8762a01cff9acb11a489c500b4a0e5e4.scope - libcontainer container 735eef7a01d32082b25f68e31d66c54e8762a01cff9acb11a489c500b4a0e5e4. 
May 17 00:20:28.182031 containerd[1986]: time="2025-05-17T00:20:28.181990417Z" level=info msg="StartContainer for \"d6f4e40772f8c2234537f3fac6fdb444d16ca7c2bd44815094a9b00c8fb00fda\" returns successfully" May 17 00:20:28.209566 containerd[1986]: time="2025-05-17T00:20:28.209394716Z" level=info msg="StartContainer for \"a2eafbdf4aed74037ba469cf1696f3aad497324040ca434ca28c56347ce6ad90\" returns successfully" May 17 00:20:28.245728 containerd[1986]: time="2025-05-17T00:20:28.245658144Z" level=info msg="StartContainer for \"735eef7a01d32082b25f68e31d66c54e8762a01cff9acb11a489c500b4a0e5e4\" returns successfully" May 17 00:20:28.255011 kubelet[2806]: I0517 00:20:28.254530 2806 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-27-59" May 17 00:20:28.255011 kubelet[2806]: E0517 00:20:28.254865 2806 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.27.59:6443/api/v1/nodes\": dial tcp 172.31.27.59:6443: connect: connection refused" node="ip-172-31-27-59" May 17 00:20:28.659955 kubelet[2806]: E0517 00:20:28.659177 2806 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.27.59:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.27.59:6443: connect: connection refused" logger="UnhandledError" May 17 00:20:28.691423 kubelet[2806]: E0517 00:20:28.691279 2806 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.27.59:6443/api/v1/namespaces/default/events\": dial tcp 172.31.27.59:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-27-59.184028898c01207a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-27-59,UID:ip-172-31-27-59,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-27-59,},FirstTimestamp:2025-05-17 00:20:26.64426713 +0000 UTC m=+0.416949153,LastTimestamp:2025-05-17 00:20:26.64426713 +0000 UTC m=+0.416949153,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-27-59,}" May 17 00:20:29.859588 kubelet[2806]: I0517 00:20:29.859054 2806 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-27-59" May 17 00:20:30.561139 kubelet[2806]: E0517 00:20:30.561094 2806 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-27-59\" not found" node="ip-172-31-27-59" May 17 00:20:30.619545 kubelet[2806]: I0517 00:20:30.619499 2806 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-27-59" May 17 00:20:30.619545 kubelet[2806]: E0517 00:20:30.619542 2806 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ip-172-31-27-59\": node \"ip-172-31-27-59\" not found" May 17 00:20:30.643849 kubelet[2806]: E0517 00:20:30.643812 2806 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-27-59\" not found" May 17 00:20:30.744856 kubelet[2806]: E0517 00:20:30.744800 2806 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-27-59\" not found" May 17 00:20:30.846075 kubelet[2806]: E0517 00:20:30.845958 2806 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-27-59\" not found" May 17 00:20:31.023599 kubelet[2806]: E0517 00:20:31.023539 2806 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-27-59\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-27-59" May 17 00:20:31.639600 kubelet[2806]: I0517 00:20:31.639547 2806 apiserver.go:52] "Watching apiserver" May 17 00:20:31.665209 kubelet[2806]: I0517 00:20:31.665176 2806 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 17 00:20:32.681010 systemd[1]: Reloading requested from client PID 3074 ('systemctl') (unit session-9.scope)... May 17 00:20:32.681031 systemd[1]: Reloading... May 17 00:20:32.780965 zram_generator::config[3117]: No configuration found. May 17 00:20:32.916411 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:20:33.020864 systemd[1]: Reloading finished in 339 ms. May 17 00:20:33.063714 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:20:33.076609 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:20:33.076993 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:20:33.083316 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:20:33.360278 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:20:33.369450 (kubelet)[3174]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 00:20:33.447197 kubelet[3174]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:20:33.448947 kubelet[3174]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 17 00:20:33.448947 kubelet[3174]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:20:33.448947 kubelet[3174]: I0517 00:20:33.447645 3174 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:20:33.454888 kubelet[3174]: I0517 00:20:33.454850 3174 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 17 00:20:33.454888 kubelet[3174]: I0517 00:20:33.454879 3174 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:20:33.455746 kubelet[3174]: I0517 00:20:33.455710 3174 server.go:934] "Client rotation is on, will bootstrap in background" May 17 00:20:33.459537 kubelet[3174]: I0517 00:20:33.459396 3174 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
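Editor's note: "Client rotation is on" together with "Loading cert/key pair from /var/lib/kubelet/pki/kubelet-client-current.pem" shows the restarted kubelet (PID 3174) picking up its bootstrapped, rotating client credential, which keeps both the certificate and the key in one PEM file. A small sketch that loads the same file and prints the certificate's validity window follows; the path comes from the log, the inspection itself is illustrative.

```go
// Load the kubelet's rotated client credential (cert and key share one PEM
// file) and report its validity window. Path taken from the log above.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"log"
)

func main() {
	const pem = "/var/lib/kubelet/pki/kubelet-client-current.pem"
	pair, err := tls.LoadX509KeyPair(pem, pem) // cert and key live in the same file
	if err != nil {
		log.Fatalf("load kubelet client credential: %v", err)
	}
	cert, err := x509.ParseCertificate(pair.Certificate[0])
	if err != nil {
		log.Fatalf("parse certificate: %v", err)
	}
	log.Printf("subject=%s notBefore=%s notAfter=%s",
		cert.Subject, cert.NotBefore, cert.NotAfter)
}
```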
May 17 00:20:33.466820 kubelet[3174]: I0517 00:20:33.466768 3174 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:20:33.470666 kubelet[3174]: E0517 00:20:33.470607 3174 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:20:33.470666 kubelet[3174]: I0517 00:20:33.470639 3174 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:20:33.472776 kubelet[3174]: I0517 00:20:33.472743 3174 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 17 00:20:33.472892 kubelet[3174]: I0517 00:20:33.472836 3174 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 17 00:20:33.472999 kubelet[3174]: I0517 00:20:33.472968 3174 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:20:33.473217 kubelet[3174]: I0517 00:20:33.472996 3174 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-27-59","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:20:33.473217 kubelet[3174]: I0517 00:20:33.473205 3174 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:20:33.473383 kubelet[3174]: I0517 00:20:33.473226 3174 container_manager_linux.go:300] "Creating device plugin manager" May 17 00:20:33.473383 kubelet[3174]: I0517 00:20:33.473265 3174 state_mem.go:36] "Initialized new in-memory state store" May 17 00:20:33.473383 kubelet[3174]: I0517 00:20:33.473365 3174 kubelet.go:408] "Attempting to sync node with API server" May 17 00:20:33.473383 kubelet[3174]: I0517 00:20:33.473379 3174 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:20:33.473489 kubelet[3174]: I0517 
00:20:33.473405 3174 kubelet.go:314] "Adding apiserver pod source" May 17 00:20:33.473489 kubelet[3174]: I0517 00:20:33.473417 3174 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:20:33.487003 kubelet[3174]: I0517 00:20:33.485418 3174 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 00:20:33.487003 kubelet[3174]: I0517 00:20:33.485925 3174 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:20:33.487003 kubelet[3174]: I0517 00:20:33.486542 3174 server.go:1274] "Started kubelet" May 17 00:20:33.492669 kubelet[3174]: I0517 00:20:33.492644 3174 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:20:33.508691 kubelet[3174]: I0517 00:20:33.508630 3174 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:20:33.509070 kubelet[3174]: I0517 00:20:33.509038 3174 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:20:33.509779 kubelet[3174]: I0517 00:20:33.509757 3174 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:20:33.510235 kubelet[3174]: I0517 00:20:33.510215 3174 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:20:33.511841 kubelet[3174]: I0517 00:20:33.511498 3174 server.go:449] "Adding debug handlers to kubelet server" May 17 00:20:33.513001 kubelet[3174]: I0517 00:20:33.512898 3174 volume_manager.go:289] "Starting Kubelet Volume Manager" May 17 00:20:33.516809 kubelet[3174]: I0517 00:20:33.516782 3174 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 17 00:20:33.516982 kubelet[3174]: I0517 00:20:33.516968 3174 reconciler.go:26] "Reconciler: start to sync state" May 17 00:20:33.520033 kubelet[3174]: I0517 00:20:33.519998 3174 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:20:33.520764 kubelet[3174]: E0517 00:20:33.520340 3174 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:20:33.521305 kubelet[3174]: I0517 00:20:33.521201 3174 factory.go:221] Registration of the systemd container factory successfully May 17 00:20:33.521855 kubelet[3174]: I0517 00:20:33.521593 3174 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:20:33.523845 kubelet[3174]: I0517 00:20:33.523819 3174 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 17 00:20:33.524091 kubelet[3174]: I0517 00:20:33.523850 3174 status_manager.go:217] "Starting to sync pod status with apiserver" May 17 00:20:33.524091 kubelet[3174]: I0517 00:20:33.523872 3174 kubelet.go:2321] "Starting kubelet main sync loop" May 17 00:20:33.524091 kubelet[3174]: E0517 00:20:33.523917 3174 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:20:33.534282 kubelet[3174]: I0517 00:20:33.534157 3174 factory.go:221] Registration of the containerd container factory successfully May 17 00:20:33.584305 kubelet[3174]: I0517 00:20:33.584270 3174 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 00:20:33.584305 kubelet[3174]: I0517 00:20:33.584288 3174 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 00:20:33.584305 kubelet[3174]: I0517 00:20:33.584308 3174 state_mem.go:36] "Initialized new in-memory state store" May 17 00:20:33.584529 kubelet[3174]: I0517 00:20:33.584495 3174 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 17 00:20:33.584529 kubelet[3174]: I0517 00:20:33.584510 3174 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 17 00:20:33.584610 kubelet[3174]: I0517 00:20:33.584533 3174 policy_none.go:49] "None policy: Start" May 17 00:20:33.585444 kubelet[3174]: I0517 00:20:33.585418 3174 memory_manager.go:170] "Starting memorymanager" policy="None" May 17 00:20:33.585444 kubelet[3174]: I0517 00:20:33.585444 3174 state_mem.go:35] "Initializing new in-memory state store" May 17 00:20:33.585657 kubelet[3174]: I0517 00:20:33.585635 3174 state_mem.go:75] "Updated machine memory state" May 17 00:20:33.589856 kubelet[3174]: I0517 00:20:33.589822 3174 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:20:33.590060 kubelet[3174]: I0517 00:20:33.590037 3174 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:20:33.590122 kubelet[3174]: I0517 00:20:33.590056 3174 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:20:33.591098 kubelet[3174]: I0517 00:20:33.590821 3174 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:20:33.693498 sudo[3208]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 17 00:20:33.693802 sudo[3208]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 17 00:20:33.700127 kubelet[3174]: I0517 00:20:33.700101 3174 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-27-59" May 17 00:20:33.708043 kubelet[3174]: I0517 00:20:33.707894 3174 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-27-59" May 17 00:20:33.708233 kubelet[3174]: I0517 00:20:33.708214 3174 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-27-59" May 17 00:20:33.718222 kubelet[3174]: I0517 00:20:33.718175 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a94c201d4e9b737cc6128e9af4bbb8c9-ca-certs\") pod \"kube-apiserver-ip-172-31-27-59\" (UID: \"a94c201d4e9b737cc6128e9af4bbb8c9\") " pod="kube-system/kube-apiserver-ip-172-31-27-59" May 17 00:20:33.819563 kubelet[3174]: I0517 00:20:33.819054 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7fcc5800ff1ba089733f90a1280e9780-ca-certs\") pod \"kube-controller-manager-ip-172-31-27-59\" (UID: \"7fcc5800ff1ba089733f90a1280e9780\") " pod="kube-system/kube-controller-manager-ip-172-31-27-59" May 17 00:20:33.819563 kubelet[3174]: I0517 00:20:33.819105 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7fcc5800ff1ba089733f90a1280e9780-k8s-certs\") pod \"kube-controller-manager-ip-172-31-27-59\" (UID: \"7fcc5800ff1ba089733f90a1280e9780\") " pod="kube-system/kube-controller-manager-ip-172-31-27-59" May 17 00:20:33.819563 kubelet[3174]: I0517 00:20:33.819131 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/191aa19f835251fc59b2510797f1ab84-kubeconfig\") pod \"kube-scheduler-ip-172-31-27-59\" (UID: \"191aa19f835251fc59b2510797f1ab84\") " pod="kube-system/kube-scheduler-ip-172-31-27-59" May 17 00:20:33.819563 kubelet[3174]: I0517 00:20:33.819158 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a94c201d4e9b737cc6128e9af4bbb8c9-k8s-certs\") pod \"kube-apiserver-ip-172-31-27-59\" (UID: \"a94c201d4e9b737cc6128e9af4bbb8c9\") " pod="kube-system/kube-apiserver-ip-172-31-27-59" May 17 00:20:33.819563 kubelet[3174]: I0517 00:20:33.819186 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a94c201d4e9b737cc6128e9af4bbb8c9-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-27-59\" (UID: \"a94c201d4e9b737cc6128e9af4bbb8c9\") " pod="kube-system/kube-apiserver-ip-172-31-27-59" May 17 00:20:33.820003 kubelet[3174]: I0517 00:20:33.819212 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7fcc5800ff1ba089733f90a1280e9780-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-27-59\" (UID: \"7fcc5800ff1ba089733f90a1280e9780\") " pod="kube-system/kube-controller-manager-ip-172-31-27-59" May 17 00:20:33.820003 kubelet[3174]: I0517 00:20:33.819236 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7fcc5800ff1ba089733f90a1280e9780-kubeconfig\") pod \"kube-controller-manager-ip-172-31-27-59\" (UID: \"7fcc5800ff1ba089733f90a1280e9780\") " pod="kube-system/kube-controller-manager-ip-172-31-27-59" May 17 00:20:33.820003 kubelet[3174]: I0517 00:20:33.819262 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7fcc5800ff1ba089733f90a1280e9780-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-27-59\" (UID: \"7fcc5800ff1ba089733f90a1280e9780\") " pod="kube-system/kube-controller-manager-ip-172-31-27-59" May 17 00:20:34.311622 sudo[3208]: pam_unix(sudo:session): session closed for user root May 17 00:20:34.477765 kubelet[3174]: I0517 00:20:34.477721 3174 apiserver.go:52] "Watching apiserver" May 17 00:20:34.517350 kubelet[3174]: I0517 00:20:34.517310 3174 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 17 00:20:34.574083 kubelet[3174]: E0517 
00:20:34.573924 3174 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-27-59\" already exists" pod="kube-system/kube-apiserver-ip-172-31-27-59" May 17 00:20:34.609242 kubelet[3174]: I0517 00:20:34.608924 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-27-59" podStartSLOduration=1.608883757 podStartE2EDuration="1.608883757s" podCreationTimestamp="2025-05-17 00:20:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:20:34.607656073 +0000 UTC m=+1.231907717" watchObservedRunningTime="2025-05-17 00:20:34.608883757 +0000 UTC m=+1.233135395" May 17 00:20:34.609242 kubelet[3174]: I0517 00:20:34.609111 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-27-59" podStartSLOduration=1.609099932 podStartE2EDuration="1.609099932s" podCreationTimestamp="2025-05-17 00:20:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:20:34.59633677 +0000 UTC m=+1.220588416" watchObservedRunningTime="2025-05-17 00:20:34.609099932 +0000 UTC m=+1.233351576" May 17 00:20:34.632243 kubelet[3174]: I0517 00:20:34.632060 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-27-59" podStartSLOduration=1.6320265649999999 podStartE2EDuration="1.632026565s" podCreationTimestamp="2025-05-17 00:20:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:20:34.621312088 +0000 UTC m=+1.245563730" watchObservedRunningTime="2025-05-17 00:20:34.632026565 +0000 UTC m=+1.256278212" May 17 00:20:36.116397 sudo[2305]: pam_unix(sudo:session): session closed for user root May 17 00:20:36.139347 sshd[2302]: pam_unix(sshd:session): session closed for user core May 17 00:20:36.143008 systemd[1]: sshd@8-172.31.27.59:22-147.75.109.163:57260.service: Deactivated successfully. May 17 00:20:36.144806 systemd[1]: session-9.scope: Deactivated successfully. May 17 00:20:36.145025 systemd[1]: session-9.scope: Consumed 5.251s CPU time, 141.4M memory peak, 0B memory swap peak. May 17 00:20:36.146388 systemd-logind[1951]: Session 9 logged out. Waiting for processes to exit. May 17 00:20:36.147482 systemd-logind[1951]: Removed session 9. May 17 00:20:37.896682 update_engine[1952]: I20250517 00:20:37.895735 1952 update_attempter.cc:509] Updating boot flags... May 17 00:20:37.951011 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3259) May 17 00:20:39.509530 kubelet[3174]: I0517 00:20:39.509503 3174 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 17 00:20:39.510913 containerd[1986]: time="2025-05-17T00:20:39.510543503Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 17 00:20:39.511232 kubelet[3174]: I0517 00:20:39.510718 3174 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 17 00:20:40.315433 systemd[1]: Created slice kubepods-besteffort-poddf524321_03b0_4662_b37f_355c8c7d3299.slice - libcontainer container kubepods-besteffort-poddf524321_03b0_4662_b37f_355c8c7d3299.slice. 
May 17 00:20:40.346669 systemd[1]: Created slice kubepods-besteffort-pod8c7a9f4e_bf3a_48f1_ab01_6619987505e9.slice - libcontainer container kubepods-besteffort-pod8c7a9f4e_bf3a_48f1_ab01_6619987505e9.slice. May 17 00:20:40.386448 systemd[1]: Created slice kubepods-burstable-podf368d8ba_9f24_46f4_8599_b6a7a4486fef.slice - libcontainer container kubepods-burstable-podf368d8ba_9f24_46f4_8599_b6a7a4486fef.slice. May 17 00:20:40.457911 kubelet[3174]: I0517 00:20:40.457632 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8djt\" (UniqueName: \"kubernetes.io/projected/8c7a9f4e-bf3a-48f1-ab01-6619987505e9-kube-api-access-l8djt\") pod \"kube-proxy-sfdx5\" (UID: \"8c7a9f4e-bf3a-48f1-ab01-6619987505e9\") " pod="kube-system/kube-proxy-sfdx5" May 17 00:20:40.457911 kubelet[3174]: I0517 00:20:40.457683 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/df524321-03b0-4662-b37f-355c8c7d3299-cilium-config-path\") pod \"cilium-operator-5d85765b45-5gsjl\" (UID: \"df524321-03b0-4662-b37f-355c8c7d3299\") " pod="kube-system/cilium-operator-5d85765b45-5gsjl" May 17 00:20:40.457911 kubelet[3174]: I0517 00:20:40.457719 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtptc\" (UniqueName: \"kubernetes.io/projected/df524321-03b0-4662-b37f-355c8c7d3299-kube-api-access-qtptc\") pod \"cilium-operator-5d85765b45-5gsjl\" (UID: \"df524321-03b0-4662-b37f-355c8c7d3299\") " pod="kube-system/cilium-operator-5d85765b45-5gsjl" May 17 00:20:40.457911 kubelet[3174]: I0517 00:20:40.457753 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8c7a9f4e-bf3a-48f1-ab01-6619987505e9-xtables-lock\") pod \"kube-proxy-sfdx5\" (UID: \"8c7a9f4e-bf3a-48f1-ab01-6619987505e9\") " pod="kube-system/kube-proxy-sfdx5" May 17 00:20:40.457911 kubelet[3174]: I0517 00:20:40.457780 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8c7a9f4e-bf3a-48f1-ab01-6619987505e9-kube-proxy\") pod \"kube-proxy-sfdx5\" (UID: \"8c7a9f4e-bf3a-48f1-ab01-6619987505e9\") " pod="kube-system/kube-proxy-sfdx5" May 17 00:20:40.459081 kubelet[3174]: I0517 00:20:40.457811 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8c7a9f4e-bf3a-48f1-ab01-6619987505e9-lib-modules\") pod \"kube-proxy-sfdx5\" (UID: \"8c7a9f4e-bf3a-48f1-ab01-6619987505e9\") " pod="kube-system/kube-proxy-sfdx5" May 17 00:20:40.558858 kubelet[3174]: I0517 00:20:40.558785 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f368d8ba-9f24-46f4-8599-b6a7a4486fef-bpf-maps\") pod \"cilium-bkcbd\" (UID: \"f368d8ba-9f24-46f4-8599-b6a7a4486fef\") " pod="kube-system/cilium-bkcbd" May 17 00:20:40.558858 kubelet[3174]: I0517 00:20:40.558861 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f368d8ba-9f24-46f4-8599-b6a7a4486fef-cilium-config-path\") pod \"cilium-bkcbd\" (UID: \"f368d8ba-9f24-46f4-8599-b6a7a4486fef\") " pod="kube-system/cilium-bkcbd" May 17 
00:20:40.558858 kubelet[3174]: I0517 00:20:40.558882 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f368d8ba-9f24-46f4-8599-b6a7a4486fef-cilium-run\") pod \"cilium-bkcbd\" (UID: \"f368d8ba-9f24-46f4-8599-b6a7a4486fef\") " pod="kube-system/cilium-bkcbd" May 17 00:20:40.558858 kubelet[3174]: I0517 00:20:40.558897 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f368d8ba-9f24-46f4-8599-b6a7a4486fef-cilium-cgroup\") pod \"cilium-bkcbd\" (UID: \"f368d8ba-9f24-46f4-8599-b6a7a4486fef\") " pod="kube-system/cilium-bkcbd" May 17 00:20:40.558858 kubelet[3174]: I0517 00:20:40.558911 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f368d8ba-9f24-46f4-8599-b6a7a4486fef-host-proc-sys-net\") pod \"cilium-bkcbd\" (UID: \"f368d8ba-9f24-46f4-8599-b6a7a4486fef\") " pod="kube-system/cilium-bkcbd" May 17 00:20:40.558858 kubelet[3174]: I0517 00:20:40.558938 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f368d8ba-9f24-46f4-8599-b6a7a4486fef-host-proc-sys-kernel\") pod \"cilium-bkcbd\" (UID: \"f368d8ba-9f24-46f4-8599-b6a7a4486fef\") " pod="kube-system/cilium-bkcbd" May 17 00:20:40.559480 kubelet[3174]: I0517 00:20:40.558964 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f368d8ba-9f24-46f4-8599-b6a7a4486fef-cni-path\") pod \"cilium-bkcbd\" (UID: \"f368d8ba-9f24-46f4-8599-b6a7a4486fef\") " pod="kube-system/cilium-bkcbd" May 17 00:20:40.559480 kubelet[3174]: I0517 00:20:40.558980 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f368d8ba-9f24-46f4-8599-b6a7a4486fef-clustermesh-secrets\") pod \"cilium-bkcbd\" (UID: \"f368d8ba-9f24-46f4-8599-b6a7a4486fef\") " pod="kube-system/cilium-bkcbd" May 17 00:20:40.559480 kubelet[3174]: I0517 00:20:40.558998 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f368d8ba-9f24-46f4-8599-b6a7a4486fef-hubble-tls\") pod \"cilium-bkcbd\" (UID: \"f368d8ba-9f24-46f4-8599-b6a7a4486fef\") " pod="kube-system/cilium-bkcbd" May 17 00:20:40.559480 kubelet[3174]: I0517 00:20:40.559025 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f368d8ba-9f24-46f4-8599-b6a7a4486fef-xtables-lock\") pod \"cilium-bkcbd\" (UID: \"f368d8ba-9f24-46f4-8599-b6a7a4486fef\") " pod="kube-system/cilium-bkcbd" May 17 00:20:40.559480 kubelet[3174]: I0517 00:20:40.559056 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f368d8ba-9f24-46f4-8599-b6a7a4486fef-lib-modules\") pod \"cilium-bkcbd\" (UID: \"f368d8ba-9f24-46f4-8599-b6a7a4486fef\") " pod="kube-system/cilium-bkcbd" May 17 00:20:40.559480 kubelet[3174]: I0517 00:20:40.559091 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/f368d8ba-9f24-46f4-8599-b6a7a4486fef-hostproc\") pod \"cilium-bkcbd\" (UID: \"f368d8ba-9f24-46f4-8599-b6a7a4486fef\") " pod="kube-system/cilium-bkcbd" May 17 00:20:40.559960 kubelet[3174]: I0517 00:20:40.559113 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f368d8ba-9f24-46f4-8599-b6a7a4486fef-etc-cni-netd\") pod \"cilium-bkcbd\" (UID: \"f368d8ba-9f24-46f4-8599-b6a7a4486fef\") " pod="kube-system/cilium-bkcbd" May 17 00:20:40.559960 kubelet[3174]: I0517 00:20:40.559134 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltq48\" (UniqueName: \"kubernetes.io/projected/f368d8ba-9f24-46f4-8599-b6a7a4486fef-kube-api-access-ltq48\") pod \"cilium-bkcbd\" (UID: \"f368d8ba-9f24-46f4-8599-b6a7a4486fef\") " pod="kube-system/cilium-bkcbd" May 17 00:20:40.625335 containerd[1986]: time="2025-05-17T00:20:40.624823140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-5gsjl,Uid:df524321-03b0-4662-b37f-355c8c7d3299,Namespace:kube-system,Attempt:0,}" May 17 00:20:40.653095 containerd[1986]: time="2025-05-17T00:20:40.652744606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sfdx5,Uid:8c7a9f4e-bf3a-48f1-ab01-6619987505e9,Namespace:kube-system,Attempt:0,}" May 17 00:20:40.681598 containerd[1986]: time="2025-05-17T00:20:40.680827442Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:20:40.690365 containerd[1986]: time="2025-05-17T00:20:40.680917248Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:20:40.690365 containerd[1986]: time="2025-05-17T00:20:40.683085904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:20:40.690365 containerd[1986]: time="2025-05-17T00:20:40.683260478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:20:40.720351 containerd[1986]: time="2025-05-17T00:20:40.720010775Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:20:40.720351 containerd[1986]: time="2025-05-17T00:20:40.720092078Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:20:40.720351 containerd[1986]: time="2025-05-17T00:20:40.720128316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:20:40.723130 systemd[1]: Started cri-containerd-28e1cbb4cffa19fd1d69a85abff1b579bc02d935bdbf5ac7b174abc066e04959.scope - libcontainer container 28e1cbb4cffa19fd1d69a85abff1b579bc02d935bdbf5ac7b174abc066e04959. May 17 00:20:40.723743 containerd[1986]: time="2025-05-17T00:20:40.722971168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:20:40.751151 systemd[1]: Started cri-containerd-a8363bea025f3b99b503c6c579c607ad0775f54dc54863a5e6f4cfe8fcdf6f75.scope - libcontainer container a8363bea025f3b99b503c6c579c607ad0775f54dc54863a5e6f4cfe8fcdf6f75. 
May 17 00:20:40.791455 containerd[1986]: time="2025-05-17T00:20:40.791396726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sfdx5,Uid:8c7a9f4e-bf3a-48f1-ab01-6619987505e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"a8363bea025f3b99b503c6c579c607ad0775f54dc54863a5e6f4cfe8fcdf6f75\"" May 17 00:20:40.799949 containerd[1986]: time="2025-05-17T00:20:40.799348266Z" level=info msg="CreateContainer within sandbox \"a8363bea025f3b99b503c6c579c607ad0775f54dc54863a5e6f4cfe8fcdf6f75\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 17 00:20:40.804957 containerd[1986]: time="2025-05-17T00:20:40.804632394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-5gsjl,Uid:df524321-03b0-4662-b37f-355c8c7d3299,Namespace:kube-system,Attempt:0,} returns sandbox id \"28e1cbb4cffa19fd1d69a85abff1b579bc02d935bdbf5ac7b174abc066e04959\"" May 17 00:20:40.809900 containerd[1986]: time="2025-05-17T00:20:40.809311282Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 17 00:20:40.833801 containerd[1986]: time="2025-05-17T00:20:40.833760083Z" level=info msg="CreateContainer within sandbox \"a8363bea025f3b99b503c6c579c607ad0775f54dc54863a5e6f4cfe8fcdf6f75\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6238880083a41c830fd07652056505e1bced1fcc2d5a96834b874d2f6549982a\"" May 17 00:20:40.835334 containerd[1986]: time="2025-05-17T00:20:40.835300324Z" level=info msg="StartContainer for \"6238880083a41c830fd07652056505e1bced1fcc2d5a96834b874d2f6549982a\"" May 17 00:20:40.860149 systemd[1]: Started cri-containerd-6238880083a41c830fd07652056505e1bced1fcc2d5a96834b874d2f6549982a.scope - libcontainer container 6238880083a41c830fd07652056505e1bced1fcc2d5a96834b874d2f6549982a. May 17 00:20:40.894993 containerd[1986]: time="2025-05-17T00:20:40.894815318Z" level=info msg="StartContainer for \"6238880083a41c830fd07652056505e1bced1fcc2d5a96834b874d2f6549982a\" returns successfully" May 17 00:20:40.991254 containerd[1986]: time="2025-05-17T00:20:40.991211826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bkcbd,Uid:f368d8ba-9f24-46f4-8599-b6a7a4486fef,Namespace:kube-system,Attempt:0,}" May 17 00:20:41.038426 containerd[1986]: time="2025-05-17T00:20:41.034438869Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:20:41.038426 containerd[1986]: time="2025-05-17T00:20:41.034522855Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:20:41.038426 containerd[1986]: time="2025-05-17T00:20:41.034561646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:20:41.038426 containerd[1986]: time="2025-05-17T00:20:41.035075037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:20:41.062141 systemd[1]: Started cri-containerd-6bb8391816c0c6e8fbba3ea68671a14b54c40b5949637e52d2a3868d1cd73baa.scope - libcontainer container 6bb8391816c0c6e8fbba3ea68671a14b54c40b5949637e52d2a3868d1cd73baa. 
May 17 00:20:41.091613 containerd[1986]: time="2025-05-17T00:20:41.091501524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bkcbd,Uid:f368d8ba-9f24-46f4-8599-b6a7a4486fef,Namespace:kube-system,Attempt:0,} returns sandbox id \"6bb8391816c0c6e8fbba3ea68671a14b54c40b5949637e52d2a3868d1cd73baa\"" May 17 00:20:42.357874 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3207761793.mount: Deactivated successfully. May 17 00:20:44.919265 containerd[1986]: time="2025-05-17T00:20:44.919207746Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:44.921227 containerd[1986]: time="2025-05-17T00:20:44.921031162Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 17 00:20:44.923503 containerd[1986]: time="2025-05-17T00:20:44.923245893Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:44.924666 containerd[1986]: time="2025-05-17T00:20:44.924633812Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.115227714s" May 17 00:20:44.924785 containerd[1986]: time="2025-05-17T00:20:44.924771160Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 17 00:20:44.926081 containerd[1986]: time="2025-05-17T00:20:44.926055855Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 17 00:20:44.927966 containerd[1986]: time="2025-05-17T00:20:44.927871568Z" level=info msg="CreateContainer within sandbox \"28e1cbb4cffa19fd1d69a85abff1b579bc02d935bdbf5ac7b174abc066e04959\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 17 00:20:44.953160 containerd[1986]: time="2025-05-17T00:20:44.953111523Z" level=info msg="CreateContainer within sandbox \"28e1cbb4cffa19fd1d69a85abff1b579bc02d935bdbf5ac7b174abc066e04959\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f8417bfd98f0891cfda8b00564f9d90f6f53bca3a804ed86f3d8cffc894353e3\"" May 17 00:20:44.955103 containerd[1986]: time="2025-05-17T00:20:44.955062535Z" level=info msg="StartContainer for \"f8417bfd98f0891cfda8b00564f9d90f6f53bca3a804ed86f3d8cffc894353e3\"" May 17 00:20:44.997113 systemd[1]: Started cri-containerd-f8417bfd98f0891cfda8b00564f9d90f6f53bca3a804ed86f3d8cffc894353e3.scope - libcontainer container f8417bfd98f0891cfda8b00564f9d90f6f53bca3a804ed86f3d8cffc894353e3. 
May 17 00:20:45.020973 kubelet[3174]: I0517 00:20:45.020175 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sfdx5" podStartSLOduration=5.020152047 podStartE2EDuration="5.020152047s" podCreationTimestamp="2025-05-17 00:20:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:20:41.608816031 +0000 UTC m=+8.233067677" watchObservedRunningTime="2025-05-17 00:20:45.020152047 +0000 UTC m=+11.644403694" May 17 00:20:45.040306 containerd[1986]: time="2025-05-17T00:20:45.040255699Z" level=info msg="StartContainer for \"f8417bfd98f0891cfda8b00564f9d90f6f53bca3a804ed86f3d8cffc894353e3\" returns successfully" May 17 00:20:47.369171 kubelet[3174]: I0517 00:20:47.369116 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-5gsjl" podStartSLOduration=3.247849971 podStartE2EDuration="7.364999926s" podCreationTimestamp="2025-05-17 00:20:40 +0000 UTC" firstStartedPulling="2025-05-17 00:20:40.808772473 +0000 UTC m=+7.433024099" lastFinishedPulling="2025-05-17 00:20:44.925922428 +0000 UTC m=+11.550174054" observedRunningTime="2025-05-17 00:20:45.761585993 +0000 UTC m=+12.385837641" watchObservedRunningTime="2025-05-17 00:20:47.364999926 +0000 UTC m=+13.989251573" May 17 00:20:50.414544 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2458911476.mount: Deactivated successfully. May 17 00:20:52.969857 containerd[1986]: time="2025-05-17T00:20:52.969789284Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:52.971802 containerd[1986]: time="2025-05-17T00:20:52.971754771Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 17 00:20:52.975951 containerd[1986]: time="2025-05-17T00:20:52.973849076Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:52.975951 containerd[1986]: time="2025-05-17T00:20:52.975606948Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.049518874s" May 17 00:20:52.975951 containerd[1986]: time="2025-05-17T00:20:52.975642710Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 17 00:20:52.979314 containerd[1986]: time="2025-05-17T00:20:52.979106241Z" level=info msg="CreateContainer within sandbox \"6bb8391816c0c6e8fbba3ea68671a14b54c40b5949637e52d2a3868d1cd73baa\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 17 00:20:53.108119 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3261843472.mount: Deactivated successfully. 
May 17 00:20:53.114884 containerd[1986]: time="2025-05-17T00:20:53.114830922Z" level=info msg="CreateContainer within sandbox \"6bb8391816c0c6e8fbba3ea68671a14b54c40b5949637e52d2a3868d1cd73baa\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9990483723f310ba92684a4ca3855208ae9835a0ac477977d6a6531ac431148e\"" May 17 00:20:53.115808 containerd[1986]: time="2025-05-17T00:20:53.115656786Z" level=info msg="StartContainer for \"9990483723f310ba92684a4ca3855208ae9835a0ac477977d6a6531ac431148e\"" May 17 00:20:53.207121 systemd[1]: Started cri-containerd-9990483723f310ba92684a4ca3855208ae9835a0ac477977d6a6531ac431148e.scope - libcontainer container 9990483723f310ba92684a4ca3855208ae9835a0ac477977d6a6531ac431148e. May 17 00:20:53.239471 containerd[1986]: time="2025-05-17T00:20:53.239327664Z" level=info msg="StartContainer for \"9990483723f310ba92684a4ca3855208ae9835a0ac477977d6a6531ac431148e\" returns successfully" May 17 00:20:53.253767 systemd[1]: cri-containerd-9990483723f310ba92684a4ca3855208ae9835a0ac477977d6a6531ac431148e.scope: Deactivated successfully. May 17 00:20:53.459357 containerd[1986]: time="2025-05-17T00:20:53.444320502Z" level=info msg="shim disconnected" id=9990483723f310ba92684a4ca3855208ae9835a0ac477977d6a6531ac431148e namespace=k8s.io May 17 00:20:53.459357 containerd[1986]: time="2025-05-17T00:20:53.459358684Z" level=warning msg="cleaning up after shim disconnected" id=9990483723f310ba92684a4ca3855208ae9835a0ac477977d6a6531ac431148e namespace=k8s.io May 17 00:20:53.459633 containerd[1986]: time="2025-05-17T00:20:53.459375469Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:20:53.652027 containerd[1986]: time="2025-05-17T00:20:53.651135790Z" level=info msg="CreateContainer within sandbox \"6bb8391816c0c6e8fbba3ea68671a14b54c40b5949637e52d2a3868d1cd73baa\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 17 00:20:53.672298 containerd[1986]: time="2025-05-17T00:20:53.672251776Z" level=info msg="CreateContainer within sandbox \"6bb8391816c0c6e8fbba3ea68671a14b54c40b5949637e52d2a3868d1cd73baa\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5b4a5a4318ac5dafc0f12f00c053b5053ca49f6aef08536e349010f12ad94274\"" May 17 00:20:53.675598 containerd[1986]: time="2025-05-17T00:20:53.675555431Z" level=info msg="StartContainer for \"5b4a5a4318ac5dafc0f12f00c053b5053ca49f6aef08536e349010f12ad94274\"" May 17 00:20:53.709169 systemd[1]: Started cri-containerd-5b4a5a4318ac5dafc0f12f00c053b5053ca49f6aef08536e349010f12ad94274.scope - libcontainer container 5b4a5a4318ac5dafc0f12f00c053b5053ca49f6aef08536e349010f12ad94274. May 17 00:20:53.739235 containerd[1986]: time="2025-05-17T00:20:53.739159855Z" level=info msg="StartContainer for \"5b4a5a4318ac5dafc0f12f00c053b5053ca49f6aef08536e349010f12ad94274\" returns successfully" May 17 00:20:53.751375 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 00:20:53.751848 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 17 00:20:53.752044 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 17 00:20:53.759777 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 17 00:20:53.760002 systemd[1]: cri-containerd-5b4a5a4318ac5dafc0f12f00c053b5053ca49f6aef08536e349010f12ad94274.scope: Deactivated successfully. 
May 17 00:20:53.806117 containerd[1986]: time="2025-05-17T00:20:53.806059383Z" level=info msg="shim disconnected" id=5b4a5a4318ac5dafc0f12f00c053b5053ca49f6aef08536e349010f12ad94274 namespace=k8s.io May 17 00:20:53.806117 containerd[1986]: time="2025-05-17T00:20:53.806108650Z" level=warning msg="cleaning up after shim disconnected" id=5b4a5a4318ac5dafc0f12f00c053b5053ca49f6aef08536e349010f12ad94274 namespace=k8s.io May 17 00:20:53.806117 containerd[1986]: time="2025-05-17T00:20:53.806117191Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:20:53.819370 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 17 00:20:54.103714 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9990483723f310ba92684a4ca3855208ae9835a0ac477977d6a6531ac431148e-rootfs.mount: Deactivated successfully. May 17 00:20:54.656294 containerd[1986]: time="2025-05-17T00:20:54.656254439Z" level=info msg="CreateContainer within sandbox \"6bb8391816c0c6e8fbba3ea68671a14b54c40b5949637e52d2a3868d1cd73baa\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 17 00:20:54.694040 containerd[1986]: time="2025-05-17T00:20:54.693999122Z" level=info msg="CreateContainer within sandbox \"6bb8391816c0c6e8fbba3ea68671a14b54c40b5949637e52d2a3868d1cd73baa\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"093ff6ed7165fdb1e5210aad488439799e37d4d00e903a30dffb3bb80baa2584\"" May 17 00:20:54.697551 containerd[1986]: time="2025-05-17T00:20:54.695559485Z" level=info msg="StartContainer for \"093ff6ed7165fdb1e5210aad488439799e37d4d00e903a30dffb3bb80baa2584\"" May 17 00:20:54.764100 systemd[1]: Started cri-containerd-093ff6ed7165fdb1e5210aad488439799e37d4d00e903a30dffb3bb80baa2584.scope - libcontainer container 093ff6ed7165fdb1e5210aad488439799e37d4d00e903a30dffb3bb80baa2584. May 17 00:20:54.795892 containerd[1986]: time="2025-05-17T00:20:54.795849247Z" level=info msg="StartContainer for \"093ff6ed7165fdb1e5210aad488439799e37d4d00e903a30dffb3bb80baa2584\" returns successfully" May 17 00:20:54.805315 systemd[1]: cri-containerd-093ff6ed7165fdb1e5210aad488439799e37d4d00e903a30dffb3bb80baa2584.scope: Deactivated successfully. May 17 00:20:54.839213 containerd[1986]: time="2025-05-17T00:20:54.839157757Z" level=info msg="shim disconnected" id=093ff6ed7165fdb1e5210aad488439799e37d4d00e903a30dffb3bb80baa2584 namespace=k8s.io May 17 00:20:54.839213 containerd[1986]: time="2025-05-17T00:20:54.839204782Z" level=warning msg="cleaning up after shim disconnected" id=093ff6ed7165fdb1e5210aad488439799e37d4d00e903a30dffb3bb80baa2584 namespace=k8s.io May 17 00:20:54.839213 containerd[1986]: time="2025-05-17T00:20:54.839213446Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:20:55.104499 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-093ff6ed7165fdb1e5210aad488439799e37d4d00e903a30dffb3bb80baa2584-rootfs.mount: Deactivated successfully. 
May 17 00:20:55.667660 containerd[1986]: time="2025-05-17T00:20:55.667573482Z" level=info msg="CreateContainer within sandbox \"6bb8391816c0c6e8fbba3ea68671a14b54c40b5949637e52d2a3868d1cd73baa\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 17 00:20:55.693505 containerd[1986]: time="2025-05-17T00:20:55.693442128Z" level=info msg="CreateContainer within sandbox \"6bb8391816c0c6e8fbba3ea68671a14b54c40b5949637e52d2a3868d1cd73baa\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"114c14ce970318eabc9153690b3942819e1d7d1d17812ad2770e50a796505cf5\"" May 17 00:20:55.696105 containerd[1986]: time="2025-05-17T00:20:55.695257219Z" level=info msg="StartContainer for \"114c14ce970318eabc9153690b3942819e1d7d1d17812ad2770e50a796505cf5\"" May 17 00:20:55.733148 systemd[1]: Started cri-containerd-114c14ce970318eabc9153690b3942819e1d7d1d17812ad2770e50a796505cf5.scope - libcontainer container 114c14ce970318eabc9153690b3942819e1d7d1d17812ad2770e50a796505cf5. May 17 00:20:55.760246 systemd[1]: cri-containerd-114c14ce970318eabc9153690b3942819e1d7d1d17812ad2770e50a796505cf5.scope: Deactivated successfully. May 17 00:20:55.767604 containerd[1986]: time="2025-05-17T00:20:55.767552093Z" level=info msg="StartContainer for \"114c14ce970318eabc9153690b3942819e1d7d1d17812ad2770e50a796505cf5\" returns successfully" May 17 00:20:55.801138 containerd[1986]: time="2025-05-17T00:20:55.801084608Z" level=info msg="shim disconnected" id=114c14ce970318eabc9153690b3942819e1d7d1d17812ad2770e50a796505cf5 namespace=k8s.io May 17 00:20:55.801138 containerd[1986]: time="2025-05-17T00:20:55.801131131Z" level=warning msg="cleaning up after shim disconnected" id=114c14ce970318eabc9153690b3942819e1d7d1d17812ad2770e50a796505cf5 namespace=k8s.io May 17 00:20:55.801138 containerd[1986]: time="2025-05-17T00:20:55.801139646Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:20:56.103775 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-114c14ce970318eabc9153690b3942819e1d7d1d17812ad2770e50a796505cf5-rootfs.mount: Deactivated successfully. May 17 00:20:56.665484 containerd[1986]: time="2025-05-17T00:20:56.665438284Z" level=info msg="CreateContainer within sandbox \"6bb8391816c0c6e8fbba3ea68671a14b54c40b5949637e52d2a3868d1cd73baa\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 17 00:20:56.687686 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount408420267.mount: Deactivated successfully. May 17 00:20:56.694088 containerd[1986]: time="2025-05-17T00:20:56.693982549Z" level=info msg="CreateContainer within sandbox \"6bb8391816c0c6e8fbba3ea68671a14b54c40b5949637e52d2a3868d1cd73baa\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e04e580f2aae15b8660e01781b7adece5e4c01ad0854ea5c109bfbc595b5f602\"" May 17 00:20:56.695054 containerd[1986]: time="2025-05-17T00:20:56.695024955Z" level=info msg="StartContainer for \"e04e580f2aae15b8660e01781b7adece5e4c01ad0854ea5c109bfbc595b5f602\"" May 17 00:20:56.739206 systemd[1]: Started cri-containerd-e04e580f2aae15b8660e01781b7adece5e4c01ad0854ea5c109bfbc595b5f602.scope - libcontainer container e04e580f2aae15b8660e01781b7adece5e4c01ad0854ea5c109bfbc595b5f602. 
May 17 00:20:56.775745 containerd[1986]: time="2025-05-17T00:20:56.775398457Z" level=info msg="StartContainer for \"e04e580f2aae15b8660e01781b7adece5e4c01ad0854ea5c109bfbc595b5f602\" returns successfully" May 17 00:20:57.021773 kubelet[3174]: I0517 00:20:57.021704 3174 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 17 00:20:57.078618 systemd[1]: Created slice kubepods-burstable-pod4093affb_4f34_49ba_826b_968751c55a9c.slice - libcontainer container kubepods-burstable-pod4093affb_4f34_49ba_826b_968751c55a9c.slice. May 17 00:20:57.087532 systemd[1]: Created slice kubepods-burstable-pod97390de9_716c_4a4f_9e5f_49cd14fcacf4.slice - libcontainer container kubepods-burstable-pod97390de9_716c_4a4f_9e5f_49cd14fcacf4.slice. May 17 00:20:57.108402 systemd[1]: run-containerd-runc-k8s.io-e04e580f2aae15b8660e01781b7adece5e4c01ad0854ea5c109bfbc595b5f602-runc.UOqy0Z.mount: Deactivated successfully. May 17 00:20:57.178385 kubelet[3174]: I0517 00:20:57.178345 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcqsb\" (UniqueName: \"kubernetes.io/projected/97390de9-716c-4a4f-9e5f-49cd14fcacf4-kube-api-access-vcqsb\") pod \"coredns-7c65d6cfc9-xwsdb\" (UID: \"97390de9-716c-4a4f-9e5f-49cd14fcacf4\") " pod="kube-system/coredns-7c65d6cfc9-xwsdb" May 17 00:20:57.178385 kubelet[3174]: I0517 00:20:57.178393 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4093affb-4f34-49ba-826b-968751c55a9c-config-volume\") pod \"coredns-7c65d6cfc9-mdqc8\" (UID: \"4093affb-4f34-49ba-826b-968751c55a9c\") " pod="kube-system/coredns-7c65d6cfc9-mdqc8" May 17 00:20:57.178602 kubelet[3174]: I0517 00:20:57.178421 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk9wt\" (UniqueName: \"kubernetes.io/projected/4093affb-4f34-49ba-826b-968751c55a9c-kube-api-access-pk9wt\") pod \"coredns-7c65d6cfc9-mdqc8\" (UID: \"4093affb-4f34-49ba-826b-968751c55a9c\") " pod="kube-system/coredns-7c65d6cfc9-mdqc8" May 17 00:20:57.178602 kubelet[3174]: I0517 00:20:57.178447 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/97390de9-716c-4a4f-9e5f-49cd14fcacf4-config-volume\") pod \"coredns-7c65d6cfc9-xwsdb\" (UID: \"97390de9-716c-4a4f-9e5f-49cd14fcacf4\") " pod="kube-system/coredns-7c65d6cfc9-xwsdb" May 17 00:20:57.386972 containerd[1986]: time="2025-05-17T00:20:57.386756801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mdqc8,Uid:4093affb-4f34-49ba-826b-968751c55a9c,Namespace:kube-system,Attempt:0,}" May 17 00:20:57.394385 containerd[1986]: time="2025-05-17T00:20:57.394114263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xwsdb,Uid:97390de9-716c-4a4f-9e5f-49cd14fcacf4,Namespace:kube-system,Attempt:0,}" May 17 00:20:59.026455 systemd-networkd[1889]: cilium_host: Link UP May 17 00:20:59.026715 (udev-worker)[4037]: Network interface NamePolicy= disabled on kernel command line. May 17 00:20:59.027514 systemd-networkd[1889]: cilium_net: Link UP May 17 00:20:59.028182 systemd-networkd[1889]: cilium_net: Gained carrier May 17 00:20:59.028408 systemd-networkd[1889]: cilium_host: Gained carrier May 17 00:20:59.031680 (udev-worker)[4096]: Network interface NamePolicy= disabled on kernel command line. 
May 17 00:20:59.164336 systemd-networkd[1889]: cilium_vxlan: Link UP May 17 00:20:59.164345 systemd-networkd[1889]: cilium_vxlan: Gained carrier May 17 00:20:59.179973 systemd-networkd[1889]: cilium_net: Gained IPv6LL May 17 00:20:59.573961 kernel: NET: Registered PF_ALG protocol family May 17 00:21:00.059893 systemd-networkd[1889]: cilium_host: Gained IPv6LL May 17 00:21:00.375470 (udev-worker)[4113]: Network interface NamePolicy= disabled on kernel command line. May 17 00:21:00.381148 systemd-networkd[1889]: lxc_health: Link UP May 17 00:21:00.393174 systemd-networkd[1889]: cilium_vxlan: Gained IPv6LL May 17 00:21:00.401824 systemd-networkd[1889]: lxc_health: Gained carrier May 17 00:21:01.022901 kubelet[3174]: I0517 00:21:01.022103 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bkcbd" podStartSLOduration=9.138842317 podStartE2EDuration="21.022081122s" podCreationTimestamp="2025-05-17 00:20:40 +0000 UTC" firstStartedPulling="2025-05-17 00:20:41.093499477 +0000 UTC m=+7.717751103" lastFinishedPulling="2025-05-17 00:20:52.976738282 +0000 UTC m=+19.600989908" observedRunningTime="2025-05-17 00:20:57.684274011 +0000 UTC m=+24.308525659" watchObservedRunningTime="2025-05-17 00:21:01.022081122 +0000 UTC m=+27.646332772" May 17 00:21:01.029642 systemd-networkd[1889]: lxc995b8c621c71: Link UP May 17 00:21:01.039953 kernel: eth0: renamed from tmp5708f May 17 00:21:01.043047 systemd-networkd[1889]: lxc995b8c621c71: Gained carrier May 17 00:21:01.081704 systemd-networkd[1889]: lxc750e81986542: Link UP May 17 00:21:01.089070 kernel: eth0: renamed from tmp39218 May 17 00:21:01.095958 systemd-networkd[1889]: lxc750e81986542: Gained carrier May 17 00:21:01.530116 systemd-networkd[1889]: lxc_health: Gained IPv6LL May 17 00:21:02.682215 systemd-networkd[1889]: lxc995b8c621c71: Gained IPv6LL May 17 00:21:02.714954 kubelet[3174]: I0517 00:21:02.714761 3174 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:21:02.874149 systemd-networkd[1889]: lxc750e81986542: Gained IPv6LL May 17 00:21:05.114039 ntpd[1943]: Listen normally on 8 cilium_host 192.168.0.9:123 May 17 00:21:05.114824 ntpd[1943]: 17 May 00:21:05 ntpd[1943]: Listen normally on 8 cilium_host 192.168.0.9:123 May 17 00:21:05.114824 ntpd[1943]: 17 May 00:21:05 ntpd[1943]: Listen normally on 9 cilium_net [fe80::b4c4:69ff:fe72:8741%4]:123 May 17 00:21:05.114824 ntpd[1943]: 17 May 00:21:05 ntpd[1943]: Listen normally on 10 cilium_host [fe80::6cc7:c3ff:fe5a:c859%5]:123 May 17 00:21:05.114824 ntpd[1943]: 17 May 00:21:05 ntpd[1943]: Listen normally on 11 cilium_vxlan [fe80::e8d8:4eff:fe1a:f977%6]:123 May 17 00:21:05.114824 ntpd[1943]: 17 May 00:21:05 ntpd[1943]: Listen normally on 12 lxc_health [fe80::e4f8:9fff:fe84:17db%8]:123 May 17 00:21:05.114824 ntpd[1943]: 17 May 00:21:05 ntpd[1943]: Listen normally on 13 lxc995b8c621c71 [fe80::4885:22ff:fed7:a03e%10]:123 May 17 00:21:05.114824 ntpd[1943]: 17 May 00:21:05 ntpd[1943]: Listen normally on 14 lxc750e81986542 [fe80::846a:6bff:fe94:fe97%12]:123 May 17 00:21:05.114150 ntpd[1943]: Listen normally on 9 cilium_net [fe80::b4c4:69ff:fe72:8741%4]:123 May 17 00:21:05.114213 ntpd[1943]: Listen normally on 10 cilium_host [fe80::6cc7:c3ff:fe5a:c859%5]:123 May 17 00:21:05.114272 ntpd[1943]: Listen normally on 11 cilium_vxlan [fe80::e8d8:4eff:fe1a:f977%6]:123 May 17 00:21:05.114308 ntpd[1943]: Listen normally on 12 lxc_health [fe80::e4f8:9fff:fe84:17db%8]:123 May 17 00:21:05.114345 ntpd[1943]: Listen normally on 13 lxc995b8c621c71 
[fe80::4885:22ff:fed7:a03e%10]:123 May 17 00:21:05.114380 ntpd[1943]: Listen normally on 14 lxc750e81986542 [fe80::846a:6bff:fe94:fe97%12]:123 May 17 00:21:05.729957 containerd[1986]: time="2025-05-17T00:21:05.726319339Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:21:05.729957 containerd[1986]: time="2025-05-17T00:21:05.726418728Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:21:05.729957 containerd[1986]: time="2025-05-17T00:21:05.726440644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:05.729957 containerd[1986]: time="2025-05-17T00:21:05.727068225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:05.743183 containerd[1986]: time="2025-05-17T00:21:05.742944718Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:21:05.743183 containerd[1986]: time="2025-05-17T00:21:05.742998565Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:21:05.743183 containerd[1986]: time="2025-05-17T00:21:05.743013047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:05.743183 containerd[1986]: time="2025-05-17T00:21:05.743082800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:05.760954 systemd[1]: run-containerd-runc-k8s.io-39218be5cd2090624a9daf1d6e47ec6b29ce169dc538cf4594cbde98e5e37a07-runc.aM7F5D.mount: Deactivated successfully. May 17 00:21:05.778810 systemd[1]: Started cri-containerd-39218be5cd2090624a9daf1d6e47ec6b29ce169dc538cf4594cbde98e5e37a07.scope - libcontainer container 39218be5cd2090624a9daf1d6e47ec6b29ce169dc538cf4594cbde98e5e37a07. May 17 00:21:05.802456 systemd[1]: Started cri-containerd-5708f5c5021ab864576130070c4fdeae4d13042d1527099edb7a2d804ddcd0a4.scope - libcontainer container 5708f5c5021ab864576130070c4fdeae4d13042d1527099edb7a2d804ddcd0a4. 
May 17 00:21:05.887013 containerd[1986]: time="2025-05-17T00:21:05.885645545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mdqc8,Uid:4093affb-4f34-49ba-826b-968751c55a9c,Namespace:kube-system,Attempt:0,} returns sandbox id \"5708f5c5021ab864576130070c4fdeae4d13042d1527099edb7a2d804ddcd0a4\"" May 17 00:21:05.912825 containerd[1986]: time="2025-05-17T00:21:05.912783612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xwsdb,Uid:97390de9-716c-4a4f-9e5f-49cd14fcacf4,Namespace:kube-system,Attempt:0,} returns sandbox id \"39218be5cd2090624a9daf1d6e47ec6b29ce169dc538cf4594cbde98e5e37a07\"" May 17 00:21:05.930511 containerd[1986]: time="2025-05-17T00:21:05.930463958Z" level=info msg="CreateContainer within sandbox \"5708f5c5021ab864576130070c4fdeae4d13042d1527099edb7a2d804ddcd0a4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:21:05.933308 containerd[1986]: time="2025-05-17T00:21:05.933166824Z" level=info msg="CreateContainer within sandbox \"39218be5cd2090624a9daf1d6e47ec6b29ce169dc538cf4594cbde98e5e37a07\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:21:05.974517 containerd[1986]: time="2025-05-17T00:21:05.974455257Z" level=info msg="CreateContainer within sandbox \"5708f5c5021ab864576130070c4fdeae4d13042d1527099edb7a2d804ddcd0a4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"804828ff0f4e09926765f20027d31d480e81649fe67a6dccedf6359c14ebe468\"" May 17 00:21:05.975433 containerd[1986]: time="2025-05-17T00:21:05.975401501Z" level=info msg="StartContainer for \"804828ff0f4e09926765f20027d31d480e81649fe67a6dccedf6359c14ebe468\"" May 17 00:21:05.979877 containerd[1986]: time="2025-05-17T00:21:05.979760025Z" level=info msg="CreateContainer within sandbox \"39218be5cd2090624a9daf1d6e47ec6b29ce169dc538cf4594cbde98e5e37a07\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9fe91a8b488b2cb45ef844b511a036a56c22ef44958dfc41b20737c82d7f4108\"" May 17 00:21:05.982037 containerd[1986]: time="2025-05-17T00:21:05.981644191Z" level=info msg="StartContainer for \"9fe91a8b488b2cb45ef844b511a036a56c22ef44958dfc41b20737c82d7f4108\"" May 17 00:21:06.038305 systemd[1]: Started cri-containerd-804828ff0f4e09926765f20027d31d480e81649fe67a6dccedf6359c14ebe468.scope - libcontainer container 804828ff0f4e09926765f20027d31d480e81649fe67a6dccedf6359c14ebe468. May 17 00:21:06.059213 systemd[1]: Started cri-containerd-9fe91a8b488b2cb45ef844b511a036a56c22ef44958dfc41b20737c82d7f4108.scope - libcontainer container 9fe91a8b488b2cb45ef844b511a036a56c22ef44958dfc41b20737c82d7f4108. May 17 00:21:06.120257 containerd[1986]: time="2025-05-17T00:21:06.120068630Z" level=info msg="StartContainer for \"9fe91a8b488b2cb45ef844b511a036a56c22ef44958dfc41b20737c82d7f4108\" returns successfully" May 17 00:21:06.120257 containerd[1986]: time="2025-05-17T00:21:06.120120984Z" level=info msg="StartContainer for \"804828ff0f4e09926765f20027d31d480e81649fe67a6dccedf6359c14ebe468\" returns successfully" May 17 00:21:06.560310 systemd[1]: Started sshd@9-172.31.27.59:22-147.75.109.163:33048.service - OpenSSH per-connection server daemon (147.75.109.163:33048). 
May 17 00:21:06.708601 kubelet[3174]: I0517 00:21:06.707561 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-xwsdb" podStartSLOduration=26.707536582 podStartE2EDuration="26.707536582s" podCreationTimestamp="2025-05-17 00:20:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:21:06.706002049 +0000 UTC m=+33.330253697" watchObservedRunningTime="2025-05-17 00:21:06.707536582 +0000 UTC m=+33.331788240" May 17 00:21:06.725173 kubelet[3174]: I0517 00:21:06.724472 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-mdqc8" podStartSLOduration=26.724450688 podStartE2EDuration="26.724450688s" podCreationTimestamp="2025-05-17 00:20:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:21:06.724234751 +0000 UTC m=+33.348486398" watchObservedRunningTime="2025-05-17 00:21:06.724450688 +0000 UTC m=+33.348702344" May 17 00:21:06.747203 sshd[4623]: Accepted publickey for core from 147.75.109.163 port 33048 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:21:06.750424 sshd[4623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:21:06.760193 systemd-logind[1951]: New session 10 of user core. May 17 00:21:06.766306 systemd[1]: Started session-10.scope - Session 10 of User core. May 17 00:21:07.533323 sshd[4623]: pam_unix(sshd:session): session closed for user core May 17 00:21:07.537478 systemd[1]: sshd@9-172.31.27.59:22-147.75.109.163:33048.service: Deactivated successfully. May 17 00:21:07.540455 systemd[1]: session-10.scope: Deactivated successfully. May 17 00:21:07.541619 systemd-logind[1951]: Session 10 logged out. Waiting for processes to exit. May 17 00:21:07.542955 systemd-logind[1951]: Removed session 10. May 17 00:21:12.572335 systemd[1]: Started sshd@10-172.31.27.59:22-147.75.109.163:45692.service - OpenSSH per-connection server daemon (147.75.109.163:45692). May 17 00:21:12.727561 sshd[4646]: Accepted publickey for core from 147.75.109.163 port 45692 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:21:12.729043 sshd[4646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:21:12.734400 systemd-logind[1951]: New session 11 of user core. May 17 00:21:12.741124 systemd[1]: Started session-11.scope - Session 11 of User core. May 17 00:21:12.945527 sshd[4646]: pam_unix(sshd:session): session closed for user core May 17 00:21:12.948775 systemd[1]: sshd@10-172.31.27.59:22-147.75.109.163:45692.service: Deactivated successfully. May 17 00:21:12.950662 systemd[1]: session-11.scope: Deactivated successfully. May 17 00:21:12.952007 systemd-logind[1951]: Session 11 logged out. Waiting for processes to exit. May 17 00:21:12.953542 systemd-logind[1951]: Removed session 11. May 17 00:21:17.978388 systemd[1]: Started sshd@11-172.31.27.59:22-147.75.109.163:45696.service - OpenSSH per-connection server daemon (147.75.109.163:45696). May 17 00:21:18.139112 sshd[4661]: Accepted publickey for core from 147.75.109.163 port 45696 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:21:18.140606 sshd[4661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:21:18.145495 systemd-logind[1951]: New session 12 of user core. 
May 17 00:21:18.155196 systemd[1]: Started session-12.scope - Session 12 of User core. May 17 00:21:18.340206 sshd[4661]: pam_unix(sshd:session): session closed for user core May 17 00:21:18.343479 systemd[1]: sshd@11-172.31.27.59:22-147.75.109.163:45696.service: Deactivated successfully. May 17 00:21:18.345179 systemd[1]: session-12.scope: Deactivated successfully. May 17 00:21:18.346778 systemd-logind[1951]: Session 12 logged out. Waiting for processes to exit. May 17 00:21:18.348131 systemd-logind[1951]: Removed session 12. May 17 00:21:23.372118 systemd[1]: Started sshd@12-172.31.27.59:22-147.75.109.163:48946.service - OpenSSH per-connection server daemon (147.75.109.163:48946). May 17 00:21:23.539247 sshd[4675]: Accepted publickey for core from 147.75.109.163 port 48946 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:21:23.539822 sshd[4675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:21:23.544916 systemd-logind[1951]: New session 13 of user core. May 17 00:21:23.553168 systemd[1]: Started session-13.scope - Session 13 of User core. May 17 00:21:23.746973 sshd[4675]: pam_unix(sshd:session): session closed for user core May 17 00:21:23.752667 systemd[1]: sshd@12-172.31.27.59:22-147.75.109.163:48946.service: Deactivated successfully. May 17 00:21:23.758295 systemd[1]: session-13.scope: Deactivated successfully. May 17 00:21:23.759234 systemd-logind[1951]: Session 13 logged out. Waiting for processes to exit. May 17 00:21:23.760306 systemd-logind[1951]: Removed session 13. May 17 00:21:23.782279 systemd[1]: Started sshd@13-172.31.27.59:22-147.75.109.163:48962.service - OpenSSH per-connection server daemon (147.75.109.163:48962). May 17 00:21:23.955227 sshd[4689]: Accepted publickey for core from 147.75.109.163 port 48962 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:21:23.956828 sshd[4689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:21:23.962267 systemd-logind[1951]: New session 14 of user core. May 17 00:21:23.969178 systemd[1]: Started session-14.scope - Session 14 of User core. May 17 00:21:24.217621 sshd[4689]: pam_unix(sshd:session): session closed for user core May 17 00:21:24.225135 systemd-logind[1951]: Session 14 logged out. Waiting for processes to exit. May 17 00:21:24.225537 systemd[1]: sshd@13-172.31.27.59:22-147.75.109.163:48962.service: Deactivated successfully. May 17 00:21:24.229295 systemd[1]: session-14.scope: Deactivated successfully. May 17 00:21:24.232128 systemd-logind[1951]: Removed session 14. May 17 00:21:24.250319 systemd[1]: Started sshd@14-172.31.27.59:22-147.75.109.163:48978.service - OpenSSH per-connection server daemon (147.75.109.163:48978). May 17 00:21:24.411533 sshd[4700]: Accepted publickey for core from 147.75.109.163 port 48978 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:21:24.413087 sshd[4700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:21:24.418032 systemd-logind[1951]: New session 15 of user core. May 17 00:21:24.425179 systemd[1]: Started session-15.scope - Session 15 of User core. May 17 00:21:24.621141 sshd[4700]: pam_unix(sshd:session): session closed for user core May 17 00:21:24.624744 systemd[1]: sshd@14-172.31.27.59:22-147.75.109.163:48978.service: Deactivated successfully. May 17 00:21:24.627118 systemd[1]: session-15.scope: Deactivated successfully. May 17 00:21:24.629007 systemd-logind[1951]: Session 15 logged out. 
Waiting for processes to exit. May 17 00:21:24.630854 systemd-logind[1951]: Removed session 15. May 17 00:21:29.654172 systemd[1]: Started sshd@15-172.31.27.59:22-147.75.109.163:37822.service - OpenSSH per-connection server daemon (147.75.109.163:37822). May 17 00:21:29.819050 sshd[4714]: Accepted publickey for core from 147.75.109.163 port 37822 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:21:29.820444 sshd[4714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:21:29.824767 systemd-logind[1951]: New session 16 of user core. May 17 00:21:29.835406 systemd[1]: Started session-16.scope - Session 16 of User core. May 17 00:21:30.016907 sshd[4714]: pam_unix(sshd:session): session closed for user core May 17 00:21:30.020212 systemd[1]: sshd@15-172.31.27.59:22-147.75.109.163:37822.service: Deactivated successfully. May 17 00:21:30.022549 systemd[1]: session-16.scope: Deactivated successfully. May 17 00:21:30.024323 systemd-logind[1951]: Session 16 logged out. Waiting for processes to exit. May 17 00:21:30.025571 systemd-logind[1951]: Removed session 16. May 17 00:21:35.052282 systemd[1]: Started sshd@16-172.31.27.59:22-147.75.109.163:37830.service - OpenSSH per-connection server daemon (147.75.109.163:37830). May 17 00:21:35.203813 sshd[4729]: Accepted publickey for core from 147.75.109.163 port 37830 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:21:35.205242 sshd[4729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:21:35.209732 systemd-logind[1951]: New session 17 of user core. May 17 00:21:35.217224 systemd[1]: Started session-17.scope - Session 17 of User core. May 17 00:21:35.404167 sshd[4729]: pam_unix(sshd:session): session closed for user core May 17 00:21:35.408287 systemd[1]: sshd@16-172.31.27.59:22-147.75.109.163:37830.service: Deactivated successfully. May 17 00:21:35.410297 systemd[1]: session-17.scope: Deactivated successfully. May 17 00:21:35.411058 systemd-logind[1951]: Session 17 logged out. Waiting for processes to exit. May 17 00:21:35.412174 systemd-logind[1951]: Removed session 17. May 17 00:21:35.434013 systemd[1]: Started sshd@17-172.31.27.59:22-147.75.109.163:37846.service - OpenSSH per-connection server daemon (147.75.109.163:37846). May 17 00:21:35.589653 sshd[4742]: Accepted publickey for core from 147.75.109.163 port 37846 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:21:35.591455 sshd[4742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:21:35.597179 systemd-logind[1951]: New session 18 of user core. May 17 00:21:35.602428 systemd[1]: Started session-18.scope - Session 18 of User core. May 17 00:21:36.160060 sshd[4742]: pam_unix(sshd:session): session closed for user core May 17 00:21:36.163874 systemd[1]: sshd@17-172.31.27.59:22-147.75.109.163:37846.service: Deactivated successfully. May 17 00:21:36.165765 systemd[1]: session-18.scope: Deactivated successfully. May 17 00:21:36.166808 systemd-logind[1951]: Session 18 logged out. Waiting for processes to exit. May 17 00:21:36.168240 systemd-logind[1951]: Removed session 18. May 17 00:21:36.197374 systemd[1]: Started sshd@18-172.31.27.59:22-147.75.109.163:37848.service - OpenSSH per-connection server daemon (147.75.109.163:37848). 
May 17 00:21:36.363400 sshd[4754]: Accepted publickey for core from 147.75.109.163 port 37848 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:21:36.368154 sshd[4754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:21:36.375124 systemd-logind[1951]: New session 19 of user core. May 17 00:21:36.381176 systemd[1]: Started session-19.scope - Session 19 of User core. May 17 00:21:38.029782 sshd[4754]: pam_unix(sshd:session): session closed for user core May 17 00:21:38.047338 systemd[1]: sshd@18-172.31.27.59:22-147.75.109.163:37848.service: Deactivated successfully. May 17 00:21:38.051192 systemd[1]: session-19.scope: Deactivated successfully. May 17 00:21:38.052853 systemd-logind[1951]: Session 19 logged out. Waiting for processes to exit. May 17 00:21:38.069824 systemd[1]: Started sshd@19-172.31.27.59:22-147.75.109.163:37068.service - OpenSSH per-connection server daemon (147.75.109.163:37068). May 17 00:21:38.072003 systemd-logind[1951]: Removed session 19. May 17 00:21:38.226791 sshd[4772]: Accepted publickey for core from 147.75.109.163 port 37068 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:21:38.228481 sshd[4772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:21:38.233050 systemd-logind[1951]: New session 20 of user core. May 17 00:21:38.238296 systemd[1]: Started session-20.scope - Session 20 of User core. May 17 00:21:38.826530 sshd[4772]: pam_unix(sshd:session): session closed for user core May 17 00:21:38.830015 systemd[1]: sshd@19-172.31.27.59:22-147.75.109.163:37068.service: Deactivated successfully. May 17 00:21:38.833895 systemd[1]: session-20.scope: Deactivated successfully. May 17 00:21:38.837062 systemd-logind[1951]: Session 20 logged out. Waiting for processes to exit. May 17 00:21:38.838642 systemd-logind[1951]: Removed session 20. May 17 00:21:38.859357 systemd[1]: Started sshd@20-172.31.27.59:22-147.75.109.163:37082.service - OpenSSH per-connection server daemon (147.75.109.163:37082). May 17 00:21:39.025844 sshd[4783]: Accepted publickey for core from 147.75.109.163 port 37082 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:21:39.027446 sshd[4783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:21:39.031413 systemd-logind[1951]: New session 21 of user core. May 17 00:21:39.037142 systemd[1]: Started session-21.scope - Session 21 of User core. May 17 00:21:39.225769 sshd[4783]: pam_unix(sshd:session): session closed for user core May 17 00:21:39.230515 systemd[1]: sshd@20-172.31.27.59:22-147.75.109.163:37082.service: Deactivated successfully. May 17 00:21:39.232599 systemd[1]: session-21.scope: Deactivated successfully. May 17 00:21:39.233498 systemd-logind[1951]: Session 21 logged out. Waiting for processes to exit. May 17 00:21:39.235101 systemd-logind[1951]: Removed session 21. May 17 00:21:44.261070 systemd[1]: Started sshd@21-172.31.27.59:22-147.75.109.163:37094.service - OpenSSH per-connection server daemon (147.75.109.163:37094). May 17 00:21:44.429568 sshd[4798]: Accepted publickey for core from 147.75.109.163 port 37094 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:21:44.431387 sshd[4798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:21:44.436539 systemd-logind[1951]: New session 22 of user core. May 17 00:21:44.441115 systemd[1]: Started session-22.scope - Session 22 of User core. 
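From here the journal settles into a steady cadence of short SSH sessions: a per-connection sshd service starts, pam_unix opens a session for user core, systemd-logind numbers it, and shortly afterwards the session closes and the scope is deactivated. A small sketch that pairs the pam_unix opened/closed lines by sshd PID and prints how long each session lasted (line format and input file name are assumptions based on the entries above):

```python
import re
from datetime import datetime

# Matches e.g. "May 17 00:21:06.750424 sshd[4623]: pam_unix(sshd:session): session opened ..."
LINE_RE = re.compile(
    r'^(?P<ts>\w+ \d+ [\d:.]+) sshd\[(?P<pid>\d+)\]: '
    r'pam_unix\(sshd:session\): session (?P<event>opened|closed)')

def parse_ts(ts: str, year: int = 2025) -> datetime:
    # These journal timestamps omit the year, so one is assumed for the arithmetic.
    return datetime.strptime(f"{year} {ts}", "%Y %b %d %H:%M:%S.%f")

def session_durations(lines):
    opened = {}
    for line in lines:
        m = LINE_RE.match(line)
        if not m:
            continue
        ts, pid = parse_ts(m.group("ts")), m.group("pid")
        if m.group("event") == "opened":
            opened[pid] = ts
        elif pid in opened:
            yield pid, (ts - opened.pop(pid)).total_seconds()

if __name__ == "__main__":
    with open("sshd.log") as fh:  # assumed path to a journal dump
        for pid, seconds in session_durations(fh):
            print(f"sshd[{pid}]: session lasted {seconds:.1f}s")
```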
May 17 00:21:44.617752 sshd[4798]: pam_unix(sshd:session): session closed for user core May 17 00:21:44.621388 systemd[1]: sshd@21-172.31.27.59:22-147.75.109.163:37094.service: Deactivated successfully. May 17 00:21:44.623389 systemd[1]: session-22.scope: Deactivated successfully. May 17 00:21:44.624296 systemd-logind[1951]: Session 22 logged out. Waiting for processes to exit. May 17 00:21:44.625657 systemd-logind[1951]: Removed session 22. May 17 00:21:49.658339 systemd[1]: Started sshd@22-172.31.27.59:22-147.75.109.163:32974.service - OpenSSH per-connection server daemon (147.75.109.163:32974). May 17 00:21:49.812332 sshd[4814]: Accepted publickey for core from 147.75.109.163 port 32974 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:21:49.813870 sshd[4814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:21:49.819114 systemd-logind[1951]: New session 23 of user core. May 17 00:21:49.824977 systemd[1]: Started session-23.scope - Session 23 of User core. May 17 00:21:50.002872 sshd[4814]: pam_unix(sshd:session): session closed for user core May 17 00:21:50.005753 systemd[1]: sshd@22-172.31.27.59:22-147.75.109.163:32974.service: Deactivated successfully. May 17 00:21:50.008693 systemd-logind[1951]: Session 23 logged out. Waiting for processes to exit. May 17 00:21:50.009361 systemd[1]: session-23.scope: Deactivated successfully. May 17 00:21:50.010348 systemd-logind[1951]: Removed session 23. May 17 00:21:52.933721 update_engine[1952]: I20250517 00:21:52.933642 1952 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 17 00:21:52.933721 update_engine[1952]: I20250517 00:21:52.933710 1952 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs May 17 00:21:52.937090 update_engine[1952]: I20250517 00:21:52.937031 1952 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs May 17 00:21:52.937520 update_engine[1952]: I20250517 00:21:52.937494 1952 omaha_request_params.cc:62] Current group set to lts May 17 00:21:52.938853 update_engine[1952]: I20250517 00:21:52.937618 1952 update_attempter.cc:499] Already updated boot flags. Skipping. May 17 00:21:52.938853 update_engine[1952]: I20250517 00:21:52.937631 1952 update_attempter.cc:643] Scheduling an action processor start. 
May 17 00:21:52.938853 update_engine[1952]: I20250517 00:21:52.937651 1952 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 17 00:21:52.938853 update_engine[1952]: I20250517 00:21:52.937688 1952 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs May 17 00:21:52.938853 update_engine[1952]: I20250517 00:21:52.937743 1952 omaha_request_action.cc:271] Posting an Omaha request to disabled May 17 00:21:52.938853 update_engine[1952]: I20250517 00:21:52.937750 1952 omaha_request_action.cc:272] Request: May 17 00:21:52.938853 update_engine[1952]: May 17 00:21:52.938853 update_engine[1952]: May 17 00:21:52.938853 update_engine[1952]: May 17 00:21:52.938853 update_engine[1952]: May 17 00:21:52.938853 update_engine[1952]: May 17 00:21:52.938853 update_engine[1952]: May 17 00:21:52.938853 update_engine[1952]: May 17 00:21:52.938853 update_engine[1952]: May 17 00:21:52.938853 update_engine[1952]: I20250517 00:21:52.937757 1952 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 17 00:21:52.949220 locksmithd[1991]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 May 17 00:21:52.950231 update_engine[1952]: I20250517 00:21:52.950189 1952 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 17 00:21:52.950564 update_engine[1952]: I20250517 00:21:52.950503 1952 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 17 00:21:52.999102 update_engine[1952]: E20250517 00:21:52.999027 1952 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 17 00:21:52.999233 update_engine[1952]: I20250517 00:21:52.999132 1952 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 May 17 00:21:55.037331 systemd[1]: Started sshd@23-172.31.27.59:22-147.75.109.163:32976.service - OpenSSH per-connection server daemon (147.75.109.163:32976). May 17 00:21:55.189347 sshd[4827]: Accepted publickey for core from 147.75.109.163 port 32976 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:21:55.191150 sshd[4827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:21:55.197038 systemd-logind[1951]: New session 24 of user core. May 17 00:21:55.203165 systemd[1]: Started session-24.scope - Session 24 of User core. May 17 00:21:55.389023 sshd[4827]: pam_unix(sshd:session): session closed for user core May 17 00:21:55.392766 systemd[1]: sshd@23-172.31.27.59:22-147.75.109.163:32976.service: Deactivated successfully. May 17 00:21:55.395013 systemd[1]: session-24.scope: Deactivated successfully. May 17 00:21:55.396765 systemd-logind[1951]: Session 24 logged out. Waiting for processes to exit. May 17 00:21:55.398554 systemd-logind[1951]: Removed session 24. May 17 00:21:55.425726 systemd[1]: Started sshd@24-172.31.27.59:22-147.75.109.163:32988.service - OpenSSH per-connection server daemon (147.75.109.163:32988). May 17 00:21:55.581772 sshd[4840]: Accepted publickey for core from 147.75.109.163 port 32988 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:21:55.583339 sshd[4840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:21:55.588154 systemd-logind[1951]: New session 25 of user core. May 17 00:21:55.594292 systemd[1]: Started session-25.scope - Session 25 of User core. 
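The update_engine entries above show an Omaha check being posted to the literal host "disabled" and curl failing with "Could not resolve host: disabled", after which the fetcher schedules retry 1. That is the expected outcome when the configured update server is a placeholder string rather than a real URL (on Flatcar this is commonly the SERVER= value in /etc/flatcar/update.conf, though that path is an assumption here and not something the log states). The resolution failure itself is trivial to reproduce:

```python
import socket

# The Omaha request in the log targets the literal hostname "disabled",
# so name resolution fails before any HTTP exchange can happen.
try:
    socket.getaddrinfo("disabled", 443)
except socket.gaierror as err:
    print(f"Could not resolve host: disabled ({err})")
```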
May 17 00:21:57.080584 systemd[1]: run-containerd-runc-k8s.io-e04e580f2aae15b8660e01781b7adece5e4c01ad0854ea5c109bfbc595b5f602-runc.1En30A.mount: Deactivated successfully. May 17 00:21:57.112292 containerd[1986]: time="2025-05-17T00:21:57.112239253Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:21:57.147177 containerd[1986]: time="2025-05-17T00:21:57.147138185Z" level=info msg="StopContainer for \"e04e580f2aae15b8660e01781b7adece5e4c01ad0854ea5c109bfbc595b5f602\" with timeout 2 (s)" May 17 00:21:57.152725 containerd[1986]: time="2025-05-17T00:21:57.152390034Z" level=info msg="Stop container \"e04e580f2aae15b8660e01781b7adece5e4c01ad0854ea5c109bfbc595b5f602\" with signal terminated" May 17 00:21:57.154708 containerd[1986]: time="2025-05-17T00:21:57.154668224Z" level=info msg="StopContainer for \"f8417bfd98f0891cfda8b00564f9d90f6f53bca3a804ed86f3d8cffc894353e3\" with timeout 30 (s)" May 17 00:21:57.155011 containerd[1986]: time="2025-05-17T00:21:57.154976868Z" level=info msg="Stop container \"f8417bfd98f0891cfda8b00564f9d90f6f53bca3a804ed86f3d8cffc894353e3\" with signal terminated" May 17 00:21:57.164419 systemd-networkd[1889]: lxc_health: Link DOWN May 17 00:21:57.164429 systemd-networkd[1889]: lxc_health: Lost carrier May 17 00:21:57.174758 systemd[1]: cri-containerd-f8417bfd98f0891cfda8b00564f9d90f6f53bca3a804ed86f3d8cffc894353e3.scope: Deactivated successfully. May 17 00:21:57.188500 systemd[1]: cri-containerd-e04e580f2aae15b8660e01781b7adece5e4c01ad0854ea5c109bfbc595b5f602.scope: Deactivated successfully. May 17 00:21:57.188713 systemd[1]: cri-containerd-e04e580f2aae15b8660e01781b7adece5e4c01ad0854ea5c109bfbc595b5f602.scope: Consumed 7.721s CPU time. May 17 00:21:57.204808 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8417bfd98f0891cfda8b00564f9d90f6f53bca3a804ed86f3d8cffc894353e3-rootfs.mount: Deactivated successfully. May 17 00:21:57.223134 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e04e580f2aae15b8660e01781b7adece5e4c01ad0854ea5c109bfbc595b5f602-rootfs.mount: Deactivated successfully. 
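The first containerd error in this block, "no network config found in /etc/cni/net.d: cni plugin not initialized", appears as soon as 05-cilium.conf is removed and nothing else remains in the CNI config directory; the kubelet's "Container runtime network not ready" condition later in the section follows from the same state. A trivial sketch of that emptiness check (the directory comes from the message itself, the file suffixes are common CNI conventions, and this is an illustration rather than containerd's actual loader):

```python
from pathlib import Path

CNI_CONF_DIR = Path("/etc/cni/net.d")

def cni_configs(conf_dir: Path = CNI_CONF_DIR):
    """Return CNI config files, mirroring the 'no network config found' condition."""
    if not conf_dir.is_dir():
        return []
    return sorted(p for p in conf_dir.iterdir()
                  if p.suffix in {".conf", ".conflist", ".json"})

if __name__ == "__main__":
    configs = cni_configs()
    if not configs:
        print(f"no network config found in {CNI_CONF_DIR}: cni plugin not initialized")
    else:
        for p in configs:
            print(p)
```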
May 17 00:21:57.228962 containerd[1986]: time="2025-05-17T00:21:57.228740866Z" level=info msg="shim disconnected" id=e04e580f2aae15b8660e01781b7adece5e4c01ad0854ea5c109bfbc595b5f602 namespace=k8s.io May 17 00:21:57.228962 containerd[1986]: time="2025-05-17T00:21:57.228803111Z" level=warning msg="cleaning up after shim disconnected" id=e04e580f2aae15b8660e01781b7adece5e4c01ad0854ea5c109bfbc595b5f602 namespace=k8s.io May 17 00:21:57.228962 containerd[1986]: time="2025-05-17T00:21:57.228812078Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:21:57.231725 containerd[1986]: time="2025-05-17T00:21:57.231584723Z" level=info msg="shim disconnected" id=f8417bfd98f0891cfda8b00564f9d90f6f53bca3a804ed86f3d8cffc894353e3 namespace=k8s.io May 17 00:21:57.231725 containerd[1986]: time="2025-05-17T00:21:57.231631092Z" level=warning msg="cleaning up after shim disconnected" id=f8417bfd98f0891cfda8b00564f9d90f6f53bca3a804ed86f3d8cffc894353e3 namespace=k8s.io May 17 00:21:57.231725 containerd[1986]: time="2025-05-17T00:21:57.231641615Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:21:57.249635 containerd[1986]: time="2025-05-17T00:21:57.249452842Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:21:57Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 17 00:21:57.257327 containerd[1986]: time="2025-05-17T00:21:57.257213361Z" level=info msg="StopContainer for \"f8417bfd98f0891cfda8b00564f9d90f6f53bca3a804ed86f3d8cffc894353e3\" returns successfully" May 17 00:21:57.263924 containerd[1986]: time="2025-05-17T00:21:57.261186900Z" level=info msg="StopPodSandbox for \"28e1cbb4cffa19fd1d69a85abff1b579bc02d935bdbf5ac7b174abc066e04959\"" May 17 00:21:57.263924 containerd[1986]: time="2025-05-17T00:21:57.261238528Z" level=info msg="Container to stop \"f8417bfd98f0891cfda8b00564f9d90f6f53bca3a804ed86f3d8cffc894353e3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:21:57.264596 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-28e1cbb4cffa19fd1d69a85abff1b579bc02d935bdbf5ac7b174abc066e04959-shm.mount: Deactivated successfully. May 17 00:21:57.267742 containerd[1986]: time="2025-05-17T00:21:57.267542022Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:21:57Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 17 00:21:57.274437 systemd[1]: cri-containerd-28e1cbb4cffa19fd1d69a85abff1b579bc02d935bdbf5ac7b174abc066e04959.scope: Deactivated successfully. 
May 17 00:21:57.282090 containerd[1986]: time="2025-05-17T00:21:57.281978374Z" level=info msg="StopContainer for \"e04e580f2aae15b8660e01781b7adece5e4c01ad0854ea5c109bfbc595b5f602\" returns successfully" May 17 00:21:57.282780 containerd[1986]: time="2025-05-17T00:21:57.282755163Z" level=info msg="StopPodSandbox for \"6bb8391816c0c6e8fbba3ea68671a14b54c40b5949637e52d2a3868d1cd73baa\"" May 17 00:21:57.283001 containerd[1986]: time="2025-05-17T00:21:57.282976481Z" level=info msg="Container to stop \"9990483723f310ba92684a4ca3855208ae9835a0ac477977d6a6531ac431148e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:21:57.283088 containerd[1986]: time="2025-05-17T00:21:57.283075873Z" level=info msg="Container to stop \"093ff6ed7165fdb1e5210aad488439799e37d4d00e903a30dffb3bb80baa2584\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:21:57.283161 containerd[1986]: time="2025-05-17T00:21:57.283149339Z" level=info msg="Container to stop \"114c14ce970318eabc9153690b3942819e1d7d1d17812ad2770e50a796505cf5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:21:57.283368 containerd[1986]: time="2025-05-17T00:21:57.283343217Z" level=info msg="Container to stop \"e04e580f2aae15b8660e01781b7adece5e4c01ad0854ea5c109bfbc595b5f602\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:21:57.283437 containerd[1986]: time="2025-05-17T00:21:57.283425473Z" level=info msg="Container to stop \"5b4a5a4318ac5dafc0f12f00c053b5053ca49f6aef08536e349010f12ad94274\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:21:57.295738 systemd[1]: cri-containerd-6bb8391816c0c6e8fbba3ea68671a14b54c40b5949637e52d2a3868d1cd73baa.scope: Deactivated successfully. 
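Both StopPodSandbox calls above log "Container to stop ... must be in running or unknown state, current state CONTAINER_EXITED" for containers that have already exited, and simply skip sending them a signal. A sketch of that guard in plain Python (the enum values follow the CRI ContainerState names visible in the messages; this is an illustration of the invariant, not containerd source):

```python
from enum import Enum

class ContainerState(Enum):
    CONTAINER_CREATED = "CONTAINER_CREATED"
    CONTAINER_RUNNING = "CONTAINER_RUNNING"
    CONTAINER_EXITED = "CONTAINER_EXITED"
    CONTAINER_UNKNOWN = "CONTAINER_UNKNOWN"

def should_send_stop(state: ContainerState) -> bool:
    # Only containers that are still running (or in an unknown state) receive a
    # stop signal; exited ones are skipped, which is what the "must be in running
    # or unknown state" messages above record.
    return state in (ContainerState.CONTAINER_RUNNING, ContainerState.CONTAINER_UNKNOWN)

for cid, state in [
    ("e04e580f2aae", ContainerState.CONTAINER_EXITED),
    ("f8417bfd98f0", ContainerState.CONTAINER_EXITED),
]:
    if not should_send_stop(state):
        print(f'Container to stop "{cid}" skipped, current state "{state.value}"')
```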
May 17 00:21:57.330787 containerd[1986]: time="2025-05-17T00:21:57.330539372Z" level=info msg="shim disconnected" id=6bb8391816c0c6e8fbba3ea68671a14b54c40b5949637e52d2a3868d1cd73baa namespace=k8s.io May 17 00:21:57.331136 containerd[1986]: time="2025-05-17T00:21:57.331109220Z" level=warning msg="cleaning up after shim disconnected" id=6bb8391816c0c6e8fbba3ea68671a14b54c40b5949637e52d2a3868d1cd73baa namespace=k8s.io May 17 00:21:57.331320 containerd[1986]: time="2025-05-17T00:21:57.331300487Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:21:57.332783 containerd[1986]: time="2025-05-17T00:21:57.332719218Z" level=info msg="shim disconnected" id=28e1cbb4cffa19fd1d69a85abff1b579bc02d935bdbf5ac7b174abc066e04959 namespace=k8s.io May 17 00:21:57.332783 containerd[1986]: time="2025-05-17T00:21:57.332781882Z" level=warning msg="cleaning up after shim disconnected" id=28e1cbb4cffa19fd1d69a85abff1b579bc02d935bdbf5ac7b174abc066e04959 namespace=k8s.io May 17 00:21:57.333154 containerd[1986]: time="2025-05-17T00:21:57.332793118Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:21:57.355290 containerd[1986]: time="2025-05-17T00:21:57.355234756Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:21:57Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 17 00:21:57.367710 containerd[1986]: time="2025-05-17T00:21:57.366633615Z" level=info msg="TearDown network for sandbox \"6bb8391816c0c6e8fbba3ea68671a14b54c40b5949637e52d2a3868d1cd73baa\" successfully" May 17 00:21:57.367710 containerd[1986]: time="2025-05-17T00:21:57.366684229Z" level=info msg="StopPodSandbox for \"6bb8391816c0c6e8fbba3ea68671a14b54c40b5949637e52d2a3868d1cd73baa\" returns successfully" May 17 00:21:57.367710 containerd[1986]: time="2025-05-17T00:21:57.367598905Z" level=info msg="TearDown network for sandbox \"28e1cbb4cffa19fd1d69a85abff1b579bc02d935bdbf5ac7b174abc066e04959\" successfully" May 17 00:21:57.367710 containerd[1986]: time="2025-05-17T00:21:57.367621524Z" level=info msg="StopPodSandbox for \"28e1cbb4cffa19fd1d69a85abff1b579bc02d935bdbf5ac7b174abc066e04959\" returns successfully" May 17 00:21:57.508656 kubelet[3174]: I0517 00:21:57.508611 3174 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f368d8ba-9f24-46f4-8599-b6a7a4486fef-bpf-maps\") pod \"f368d8ba-9f24-46f4-8599-b6a7a4486fef\" (UID: \"f368d8ba-9f24-46f4-8599-b6a7a4486fef\") " May 17 00:21:57.508656 kubelet[3174]: I0517 00:21:57.508669 3174 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f368d8ba-9f24-46f4-8599-b6a7a4486fef-cilium-config-path\") pod \"f368d8ba-9f24-46f4-8599-b6a7a4486fef\" (UID: \"f368d8ba-9f24-46f4-8599-b6a7a4486fef\") " May 17 00:21:57.509401 kubelet[3174]: I0517 00:21:57.508691 3174 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/df524321-03b0-4662-b37f-355c8c7d3299-cilium-config-path\") pod \"df524321-03b0-4662-b37f-355c8c7d3299\" (UID: \"df524321-03b0-4662-b37f-355c8c7d3299\") " May 17 00:21:57.509401 kubelet[3174]: I0517 00:21:57.508709 3174 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/f368d8ba-9f24-46f4-8599-b6a7a4486fef-clustermesh-secrets\") pod \"f368d8ba-9f24-46f4-8599-b6a7a4486fef\" (UID: \"f368d8ba-9f24-46f4-8599-b6a7a4486fef\") " May 17 00:21:57.509401 kubelet[3174]: I0517 00:21:57.508724 3174 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f368d8ba-9f24-46f4-8599-b6a7a4486fef-etc-cni-netd\") pod \"f368d8ba-9f24-46f4-8599-b6a7a4486fef\" (UID: \"f368d8ba-9f24-46f4-8599-b6a7a4486fef\") " May 17 00:21:57.509401 kubelet[3174]: I0517 00:21:57.508741 3174 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltq48\" (UniqueName: \"kubernetes.io/projected/f368d8ba-9f24-46f4-8599-b6a7a4486fef-kube-api-access-ltq48\") pod \"f368d8ba-9f24-46f4-8599-b6a7a4486fef\" (UID: \"f368d8ba-9f24-46f4-8599-b6a7a4486fef\") " May 17 00:21:57.509401 kubelet[3174]: I0517 00:21:57.508760 3174 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtptc\" (UniqueName: \"kubernetes.io/projected/df524321-03b0-4662-b37f-355c8c7d3299-kube-api-access-qtptc\") pod \"df524321-03b0-4662-b37f-355c8c7d3299\" (UID: \"df524321-03b0-4662-b37f-355c8c7d3299\") " May 17 00:21:57.509401 kubelet[3174]: I0517 00:21:57.508774 3174 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f368d8ba-9f24-46f4-8599-b6a7a4486fef-cilium-cgroup\") pod \"f368d8ba-9f24-46f4-8599-b6a7a4486fef\" (UID: \"f368d8ba-9f24-46f4-8599-b6a7a4486fef\") " May 17 00:21:57.510988 kubelet[3174]: I0517 00:21:57.508787 3174 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f368d8ba-9f24-46f4-8599-b6a7a4486fef-host-proc-sys-net\") pod \"f368d8ba-9f24-46f4-8599-b6a7a4486fef\" (UID: \"f368d8ba-9f24-46f4-8599-b6a7a4486fef\") " May 17 00:21:57.510988 kubelet[3174]: I0517 00:21:57.508802 3174 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f368d8ba-9f24-46f4-8599-b6a7a4486fef-xtables-lock\") pod \"f368d8ba-9f24-46f4-8599-b6a7a4486fef\" (UID: \"f368d8ba-9f24-46f4-8599-b6a7a4486fef\") " May 17 00:21:57.510988 kubelet[3174]: I0517 00:21:57.508818 3174 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f368d8ba-9f24-46f4-8599-b6a7a4486fef-cni-path\") pod \"f368d8ba-9f24-46f4-8599-b6a7a4486fef\" (UID: \"f368d8ba-9f24-46f4-8599-b6a7a4486fef\") " May 17 00:21:57.510988 kubelet[3174]: I0517 00:21:57.508836 3174 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f368d8ba-9f24-46f4-8599-b6a7a4486fef-hubble-tls\") pod \"f368d8ba-9f24-46f4-8599-b6a7a4486fef\" (UID: \"f368d8ba-9f24-46f4-8599-b6a7a4486fef\") " May 17 00:21:57.520131 kubelet[3174]: I0517 00:21:57.520078 3174 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f368d8ba-9f24-46f4-8599-b6a7a4486fef-cilium-run\") pod \"f368d8ba-9f24-46f4-8599-b6a7a4486fef\" (UID: \"f368d8ba-9f24-46f4-8599-b6a7a4486fef\") " May 17 00:21:57.520131 kubelet[3174]: I0517 00:21:57.520129 3174 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/f368d8ba-9f24-46f4-8599-b6a7a4486fef-lib-modules\") pod \"f368d8ba-9f24-46f4-8599-b6a7a4486fef\" (UID: \"f368d8ba-9f24-46f4-8599-b6a7a4486fef\") " May 17 00:21:57.520313 kubelet[3174]: I0517 00:21:57.520148 3174 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f368d8ba-9f24-46f4-8599-b6a7a4486fef-host-proc-sys-kernel\") pod \"f368d8ba-9f24-46f4-8599-b6a7a4486fef\" (UID: \"f368d8ba-9f24-46f4-8599-b6a7a4486fef\") " May 17 00:21:57.520313 kubelet[3174]: I0517 00:21:57.520167 3174 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f368d8ba-9f24-46f4-8599-b6a7a4486fef-hostproc\") pod \"f368d8ba-9f24-46f4-8599-b6a7a4486fef\" (UID: \"f368d8ba-9f24-46f4-8599-b6a7a4486fef\") " May 17 00:21:57.522359 kubelet[3174]: I0517 00:21:57.520718 3174 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f368d8ba-9f24-46f4-8599-b6a7a4486fef-hostproc" (OuterVolumeSpecName: "hostproc") pod "f368d8ba-9f24-46f4-8599-b6a7a4486fef" (UID: "f368d8ba-9f24-46f4-8599-b6a7a4486fef"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:21:57.526881 kubelet[3174]: I0517 00:21:57.526157 3174 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f368d8ba-9f24-46f4-8599-b6a7a4486fef-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f368d8ba-9f24-46f4-8599-b6a7a4486fef" (UID: "f368d8ba-9f24-46f4-8599-b6a7a4486fef"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:21:57.526881 kubelet[3174]: I0517 00:21:57.526507 3174 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f368d8ba-9f24-46f4-8599-b6a7a4486fef-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f368d8ba-9f24-46f4-8599-b6a7a4486fef" (UID: "f368d8ba-9f24-46f4-8599-b6a7a4486fef"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:21:57.526881 kubelet[3174]: I0517 00:21:57.526542 3174 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f368d8ba-9f24-46f4-8599-b6a7a4486fef-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f368d8ba-9f24-46f4-8599-b6a7a4486fef" (UID: "f368d8ba-9f24-46f4-8599-b6a7a4486fef"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:21:57.526881 kubelet[3174]: I0517 00:21:57.526558 3174 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f368d8ba-9f24-46f4-8599-b6a7a4486fef-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f368d8ba-9f24-46f4-8599-b6a7a4486fef" (UID: "f368d8ba-9f24-46f4-8599-b6a7a4486fef"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:21:57.526881 kubelet[3174]: I0517 00:21:57.526572 3174 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f368d8ba-9f24-46f4-8599-b6a7a4486fef-cni-path" (OuterVolumeSpecName: "cni-path") pod "f368d8ba-9f24-46f4-8599-b6a7a4486fef" (UID: "f368d8ba-9f24-46f4-8599-b6a7a4486fef"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:21:57.528493 kubelet[3174]: I0517 00:21:57.528464 3174 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f368d8ba-9f24-46f4-8599-b6a7a4486fef-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f368d8ba-9f24-46f4-8599-b6a7a4486fef" (UID: "f368d8ba-9f24-46f4-8599-b6a7a4486fef"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 17 00:21:57.529212 kubelet[3174]: I0517 00:21:57.529182 3174 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f368d8ba-9f24-46f4-8599-b6a7a4486fef-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f368d8ba-9f24-46f4-8599-b6a7a4486fef" (UID: "f368d8ba-9f24-46f4-8599-b6a7a4486fef"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 00:21:57.529275 kubelet[3174]: I0517 00:21:57.529237 3174 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f368d8ba-9f24-46f4-8599-b6a7a4486fef-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f368d8ba-9f24-46f4-8599-b6a7a4486fef" (UID: "f368d8ba-9f24-46f4-8599-b6a7a4486fef"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:21:57.529275 kubelet[3174]: I0517 00:21:57.529257 3174 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f368d8ba-9f24-46f4-8599-b6a7a4486fef-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f368d8ba-9f24-46f4-8599-b6a7a4486fef" (UID: "f368d8ba-9f24-46f4-8599-b6a7a4486fef"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:21:57.529275 kubelet[3174]: I0517 00:21:57.529272 3174 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f368d8ba-9f24-46f4-8599-b6a7a4486fef-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f368d8ba-9f24-46f4-8599-b6a7a4486fef" (UID: "f368d8ba-9f24-46f4-8599-b6a7a4486fef"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:21:57.529370 kubelet[3174]: I0517 00:21:57.529325 3174 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df524321-03b0-4662-b37f-355c8c7d3299-kube-api-access-qtptc" (OuterVolumeSpecName: "kube-api-access-qtptc") pod "df524321-03b0-4662-b37f-355c8c7d3299" (UID: "df524321-03b0-4662-b37f-355c8c7d3299"). InnerVolumeSpecName "kube-api-access-qtptc". PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 00:21:57.529370 kubelet[3174]: I0517 00:21:57.529360 3174 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f368d8ba-9f24-46f4-8599-b6a7a4486fef-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f368d8ba-9f24-46f4-8599-b6a7a4486fef" (UID: "f368d8ba-9f24-46f4-8599-b6a7a4486fef"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:21:57.531012 kubelet[3174]: I0517 00:21:57.530988 3174 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df524321-03b0-4662-b37f-355c8c7d3299-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "df524321-03b0-4662-b37f-355c8c7d3299" (UID: "df524321-03b0-4662-b37f-355c8c7d3299"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 17 00:21:57.531171 kubelet[3174]: I0517 00:21:57.531152 3174 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f368d8ba-9f24-46f4-8599-b6a7a4486fef-kube-api-access-ltq48" (OuterVolumeSpecName: "kube-api-access-ltq48") pod "f368d8ba-9f24-46f4-8599-b6a7a4486fef" (UID: "f368d8ba-9f24-46f4-8599-b6a7a4486fef"). InnerVolumeSpecName "kube-api-access-ltq48". PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 00:21:57.532177 kubelet[3174]: I0517 00:21:57.532148 3174 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f368d8ba-9f24-46f4-8599-b6a7a4486fef-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f368d8ba-9f24-46f4-8599-b6a7a4486fef" (UID: "f368d8ba-9f24-46f4-8599-b6a7a4486fef"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 17 00:21:57.539895 systemd[1]: Removed slice kubepods-besteffort-poddf524321_03b0_4662_b37f_355c8c7d3299.slice - libcontainer container kubepods-besteffort-poddf524321_03b0_4662_b37f_355c8c7d3299.slice. May 17 00:21:57.620624 kubelet[3174]: I0517 00:21:57.620496 3174 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltq48\" (UniqueName: \"kubernetes.io/projected/f368d8ba-9f24-46f4-8599-b6a7a4486fef-kube-api-access-ltq48\") on node \"ip-172-31-27-59\" DevicePath \"\"" May 17 00:21:57.620624 kubelet[3174]: I0517 00:21:57.620531 3174 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtptc\" (UniqueName: \"kubernetes.io/projected/df524321-03b0-4662-b37f-355c8c7d3299-kube-api-access-qtptc\") on node \"ip-172-31-27-59\" DevicePath \"\"" May 17 00:21:57.620624 kubelet[3174]: I0517 00:21:57.620547 3174 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f368d8ba-9f24-46f4-8599-b6a7a4486fef-cilium-cgroup\") on node \"ip-172-31-27-59\" DevicePath \"\"" May 17 00:21:57.620624 kubelet[3174]: I0517 00:21:57.620555 3174 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f368d8ba-9f24-46f4-8599-b6a7a4486fef-host-proc-sys-net\") on node \"ip-172-31-27-59\" DevicePath \"\"" May 17 00:21:57.620624 kubelet[3174]: I0517 00:21:57.620565 3174 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f368d8ba-9f24-46f4-8599-b6a7a4486fef-xtables-lock\") on node \"ip-172-31-27-59\" DevicePath \"\"" May 17 00:21:57.620624 kubelet[3174]: I0517 00:21:57.620573 3174 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f368d8ba-9f24-46f4-8599-b6a7a4486fef-cni-path\") on node \"ip-172-31-27-59\" DevicePath \"\"" May 17 00:21:57.620624 kubelet[3174]: I0517 00:21:57.620580 3174 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f368d8ba-9f24-46f4-8599-b6a7a4486fef-hubble-tls\") on node \"ip-172-31-27-59\" DevicePath \"\"" May 17 00:21:57.620624 kubelet[3174]: I0517 00:21:57.620587 3174 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f368d8ba-9f24-46f4-8599-b6a7a4486fef-cilium-run\") on node \"ip-172-31-27-59\" DevicePath \"\"" May 17 00:21:57.620987 kubelet[3174]: I0517 00:21:57.620594 3174 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/f368d8ba-9f24-46f4-8599-b6a7a4486fef-lib-modules\") on node \"ip-172-31-27-59\" DevicePath \"\"" May 17 00:21:57.620987 kubelet[3174]: I0517 00:21:57.620602 3174 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f368d8ba-9f24-46f4-8599-b6a7a4486fef-host-proc-sys-kernel\") on node \"ip-172-31-27-59\" DevicePath \"\"" May 17 00:21:57.620987 kubelet[3174]: I0517 00:21:57.620609 3174 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f368d8ba-9f24-46f4-8599-b6a7a4486fef-hostproc\") on node \"ip-172-31-27-59\" DevicePath \"\"" May 17 00:21:57.620987 kubelet[3174]: I0517 00:21:57.620616 3174 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f368d8ba-9f24-46f4-8599-b6a7a4486fef-bpf-maps\") on node \"ip-172-31-27-59\" DevicePath \"\"" May 17 00:21:57.620987 kubelet[3174]: I0517 00:21:57.620624 3174 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f368d8ba-9f24-46f4-8599-b6a7a4486fef-cilium-config-path\") on node \"ip-172-31-27-59\" DevicePath \"\"" May 17 00:21:57.620987 kubelet[3174]: I0517 00:21:57.620631 3174 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/df524321-03b0-4662-b37f-355c8c7d3299-cilium-config-path\") on node \"ip-172-31-27-59\" DevicePath \"\"" May 17 00:21:57.620987 kubelet[3174]: I0517 00:21:57.620638 3174 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f368d8ba-9f24-46f4-8599-b6a7a4486fef-clustermesh-secrets\") on node \"ip-172-31-27-59\" DevicePath \"\"" May 17 00:21:57.620987 kubelet[3174]: I0517 00:21:57.620645 3174 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f368d8ba-9f24-46f4-8599-b6a7a4486fef-etc-cni-netd\") on node \"ip-172-31-27-59\" DevicePath \"\"" May 17 00:21:57.797775 systemd[1]: Removed slice kubepods-burstable-podf368d8ba_9f24_46f4_8599_b6a7a4486fef.slice - libcontainer container kubepods-burstable-podf368d8ba_9f24_46f4_8599_b6a7a4486fef.slice. May 17 00:21:57.797909 systemd[1]: kubepods-burstable-podf368d8ba_9f24_46f4_8599_b6a7a4486fef.slice: Consumed 7.801s CPU time. 
May 17 00:21:57.860827 kubelet[3174]: I0517 00:21:57.860771 3174 scope.go:117] "RemoveContainer" containerID="e04e580f2aae15b8660e01781b7adece5e4c01ad0854ea5c109bfbc595b5f602" May 17 00:21:57.870310 containerd[1986]: time="2025-05-17T00:21:57.870232543Z" level=info msg="RemoveContainer for \"e04e580f2aae15b8660e01781b7adece5e4c01ad0854ea5c109bfbc595b5f602\"" May 17 00:21:57.880277 containerd[1986]: time="2025-05-17T00:21:57.880034464Z" level=info msg="RemoveContainer for \"e04e580f2aae15b8660e01781b7adece5e4c01ad0854ea5c109bfbc595b5f602\" returns successfully" May 17 00:21:57.905152 kubelet[3174]: I0517 00:21:57.905101 3174 scope.go:117] "RemoveContainer" containerID="114c14ce970318eabc9153690b3942819e1d7d1d17812ad2770e50a796505cf5" May 17 00:21:57.908018 containerd[1986]: time="2025-05-17T00:21:57.907985766Z" level=info msg="RemoveContainer for \"114c14ce970318eabc9153690b3942819e1d7d1d17812ad2770e50a796505cf5\"" May 17 00:21:57.913294 containerd[1986]: time="2025-05-17T00:21:57.913146175Z" level=info msg="RemoveContainer for \"114c14ce970318eabc9153690b3942819e1d7d1d17812ad2770e50a796505cf5\" returns successfully" May 17 00:21:57.914122 kubelet[3174]: I0517 00:21:57.913575 3174 scope.go:117] "RemoveContainer" containerID="093ff6ed7165fdb1e5210aad488439799e37d4d00e903a30dffb3bb80baa2584" May 17 00:21:57.916018 containerd[1986]: time="2025-05-17T00:21:57.915362946Z" level=info msg="RemoveContainer for \"093ff6ed7165fdb1e5210aad488439799e37d4d00e903a30dffb3bb80baa2584\"" May 17 00:21:57.922725 containerd[1986]: time="2025-05-17T00:21:57.922659973Z" level=info msg="RemoveContainer for \"093ff6ed7165fdb1e5210aad488439799e37d4d00e903a30dffb3bb80baa2584\" returns successfully" May 17 00:21:57.923315 kubelet[3174]: I0517 00:21:57.923274 3174 scope.go:117] "RemoveContainer" containerID="5b4a5a4318ac5dafc0f12f00c053b5053ca49f6aef08536e349010f12ad94274" May 17 00:21:57.925416 containerd[1986]: time="2025-05-17T00:21:57.925240391Z" level=info msg="RemoveContainer for \"5b4a5a4318ac5dafc0f12f00c053b5053ca49f6aef08536e349010f12ad94274\"" May 17 00:21:57.932216 containerd[1986]: time="2025-05-17T00:21:57.932029180Z" level=info msg="RemoveContainer for \"5b4a5a4318ac5dafc0f12f00c053b5053ca49f6aef08536e349010f12ad94274\" returns successfully" May 17 00:21:57.932341 kubelet[3174]: I0517 00:21:57.932261 3174 scope.go:117] "RemoveContainer" containerID="9990483723f310ba92684a4ca3855208ae9835a0ac477977d6a6531ac431148e" May 17 00:21:57.933475 containerd[1986]: time="2025-05-17T00:21:57.933448739Z" level=info msg="RemoveContainer for \"9990483723f310ba92684a4ca3855208ae9835a0ac477977d6a6531ac431148e\"" May 17 00:21:57.938940 containerd[1986]: time="2025-05-17T00:21:57.938900551Z" level=info msg="RemoveContainer for \"9990483723f310ba92684a4ca3855208ae9835a0ac477977d6a6531ac431148e\" returns successfully" May 17 00:21:57.939238 kubelet[3174]: I0517 00:21:57.939172 3174 scope.go:117] "RemoveContainer" containerID="e04e580f2aae15b8660e01781b7adece5e4c01ad0854ea5c109bfbc595b5f602" May 17 00:21:57.957113 containerd[1986]: time="2025-05-17T00:21:57.948198595Z" level=error msg="ContainerStatus for \"e04e580f2aae15b8660e01781b7adece5e4c01ad0854ea5c109bfbc595b5f602\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e04e580f2aae15b8660e01781b7adece5e4c01ad0854ea5c109bfbc595b5f602\": not found" May 17 00:21:57.957419 kubelet[3174]: E0517 00:21:57.957380 3174 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when 
try to find container \"e04e580f2aae15b8660e01781b7adece5e4c01ad0854ea5c109bfbc595b5f602\": not found" containerID="e04e580f2aae15b8660e01781b7adece5e4c01ad0854ea5c109bfbc595b5f602" May 17 00:21:57.980239 kubelet[3174]: I0517 00:21:57.957429 3174 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e04e580f2aae15b8660e01781b7adece5e4c01ad0854ea5c109bfbc595b5f602"} err="failed to get container status \"e04e580f2aae15b8660e01781b7adece5e4c01ad0854ea5c109bfbc595b5f602\": rpc error: code = NotFound desc = an error occurred when try to find container \"e04e580f2aae15b8660e01781b7adece5e4c01ad0854ea5c109bfbc595b5f602\": not found" May 17 00:21:57.980239 kubelet[3174]: I0517 00:21:57.980227 3174 scope.go:117] "RemoveContainer" containerID="114c14ce970318eabc9153690b3942819e1d7d1d17812ad2770e50a796505cf5" May 17 00:21:57.980580 containerd[1986]: time="2025-05-17T00:21:57.980538600Z" level=error msg="ContainerStatus for \"114c14ce970318eabc9153690b3942819e1d7d1d17812ad2770e50a796505cf5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"114c14ce970318eabc9153690b3942819e1d7d1d17812ad2770e50a796505cf5\": not found" May 17 00:21:57.980767 kubelet[3174]: E0517 00:21:57.980733 3174 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"114c14ce970318eabc9153690b3942819e1d7d1d17812ad2770e50a796505cf5\": not found" containerID="114c14ce970318eabc9153690b3942819e1d7d1d17812ad2770e50a796505cf5" May 17 00:21:57.980854 kubelet[3174]: I0517 00:21:57.980768 3174 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"114c14ce970318eabc9153690b3942819e1d7d1d17812ad2770e50a796505cf5"} err="failed to get container status \"114c14ce970318eabc9153690b3942819e1d7d1d17812ad2770e50a796505cf5\": rpc error: code = NotFound desc = an error occurred when try to find container \"114c14ce970318eabc9153690b3942819e1d7d1d17812ad2770e50a796505cf5\": not found" May 17 00:21:57.980854 kubelet[3174]: I0517 00:21:57.980789 3174 scope.go:117] "RemoveContainer" containerID="093ff6ed7165fdb1e5210aad488439799e37d4d00e903a30dffb3bb80baa2584" May 17 00:21:57.981044 containerd[1986]: time="2025-05-17T00:21:57.980998279Z" level=error msg="ContainerStatus for \"093ff6ed7165fdb1e5210aad488439799e37d4d00e903a30dffb3bb80baa2584\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"093ff6ed7165fdb1e5210aad488439799e37d4d00e903a30dffb3bb80baa2584\": not found" May 17 00:21:57.981151 kubelet[3174]: E0517 00:21:57.981118 3174 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"093ff6ed7165fdb1e5210aad488439799e37d4d00e903a30dffb3bb80baa2584\": not found" containerID="093ff6ed7165fdb1e5210aad488439799e37d4d00e903a30dffb3bb80baa2584" May 17 00:21:57.981151 kubelet[3174]: I0517 00:21:57.981141 3174 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"093ff6ed7165fdb1e5210aad488439799e37d4d00e903a30dffb3bb80baa2584"} err="failed to get container status \"093ff6ed7165fdb1e5210aad488439799e37d4d00e903a30dffb3bb80baa2584\": rpc error: code = NotFound desc = an error occurred when try to find container \"093ff6ed7165fdb1e5210aad488439799e37d4d00e903a30dffb3bb80baa2584\": not found" May 17 00:21:57.981259 kubelet[3174]: I0517 00:21:57.981158 3174 
scope.go:117] "RemoveContainer" containerID="5b4a5a4318ac5dafc0f12f00c053b5053ca49f6aef08536e349010f12ad94274" May 17 00:21:57.981346 containerd[1986]: time="2025-05-17T00:21:57.981315744Z" level=error msg="ContainerStatus for \"5b4a5a4318ac5dafc0f12f00c053b5053ca49f6aef08536e349010f12ad94274\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5b4a5a4318ac5dafc0f12f00c053b5053ca49f6aef08536e349010f12ad94274\": not found" May 17 00:21:57.981455 kubelet[3174]: E0517 00:21:57.981436 3174 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5b4a5a4318ac5dafc0f12f00c053b5053ca49f6aef08536e349010f12ad94274\": not found" containerID="5b4a5a4318ac5dafc0f12f00c053b5053ca49f6aef08536e349010f12ad94274" May 17 00:21:57.981489 kubelet[3174]: I0517 00:21:57.981459 3174 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5b4a5a4318ac5dafc0f12f00c053b5053ca49f6aef08536e349010f12ad94274"} err="failed to get container status \"5b4a5a4318ac5dafc0f12f00c053b5053ca49f6aef08536e349010f12ad94274\": rpc error: code = NotFound desc = an error occurred when try to find container \"5b4a5a4318ac5dafc0f12f00c053b5053ca49f6aef08536e349010f12ad94274\": not found" May 17 00:21:57.981489 kubelet[3174]: I0517 00:21:57.981474 3174 scope.go:117] "RemoveContainer" containerID="9990483723f310ba92684a4ca3855208ae9835a0ac477977d6a6531ac431148e" May 17 00:21:57.981725 containerd[1986]: time="2025-05-17T00:21:57.981628267Z" level=error msg="ContainerStatus for \"9990483723f310ba92684a4ca3855208ae9835a0ac477977d6a6531ac431148e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9990483723f310ba92684a4ca3855208ae9835a0ac477977d6a6531ac431148e\": not found" May 17 00:21:57.981826 kubelet[3174]: E0517 00:21:57.981730 3174 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9990483723f310ba92684a4ca3855208ae9835a0ac477977d6a6531ac431148e\": not found" containerID="9990483723f310ba92684a4ca3855208ae9835a0ac477977d6a6531ac431148e" May 17 00:21:57.981826 kubelet[3174]: I0517 00:21:57.981748 3174 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9990483723f310ba92684a4ca3855208ae9835a0ac477977d6a6531ac431148e"} err="failed to get container status \"9990483723f310ba92684a4ca3855208ae9835a0ac477977d6a6531ac431148e\": rpc error: code = NotFound desc = an error occurred when try to find container \"9990483723f310ba92684a4ca3855208ae9835a0ac477977d6a6531ac431148e\": not found" May 17 00:21:57.981826 kubelet[3174]: I0517 00:21:57.981761 3174 scope.go:117] "RemoveContainer" containerID="f8417bfd98f0891cfda8b00564f9d90f6f53bca3a804ed86f3d8cffc894353e3" May 17 00:21:57.982912 containerd[1986]: time="2025-05-17T00:21:57.982889365Z" level=info msg="RemoveContainer for \"f8417bfd98f0891cfda8b00564f9d90f6f53bca3a804ed86f3d8cffc894353e3\"" May 17 00:21:57.988041 containerd[1986]: time="2025-05-17T00:21:57.987982491Z" level=info msg="RemoveContainer for \"f8417bfd98f0891cfda8b00564f9d90f6f53bca3a804ed86f3d8cffc894353e3\" returns successfully" May 17 00:21:57.988257 kubelet[3174]: I0517 00:21:57.988233 3174 scope.go:117] "RemoveContainer" containerID="f8417bfd98f0891cfda8b00564f9d90f6f53bca3a804ed86f3d8cffc894353e3" May 17 00:21:57.988525 containerd[1986]: 
time="2025-05-17T00:21:57.988494065Z" level=error msg="ContainerStatus for \"f8417bfd98f0891cfda8b00564f9d90f6f53bca3a804ed86f3d8cffc894353e3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f8417bfd98f0891cfda8b00564f9d90f6f53bca3a804ed86f3d8cffc894353e3\": not found" May 17 00:21:57.988804 kubelet[3174]: E0517 00:21:57.988766 3174 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f8417bfd98f0891cfda8b00564f9d90f6f53bca3a804ed86f3d8cffc894353e3\": not found" containerID="f8417bfd98f0891cfda8b00564f9d90f6f53bca3a804ed86f3d8cffc894353e3" May 17 00:21:57.988804 kubelet[3174]: I0517 00:21:57.988796 3174 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f8417bfd98f0891cfda8b00564f9d90f6f53bca3a804ed86f3d8cffc894353e3"} err="failed to get container status \"f8417bfd98f0891cfda8b00564f9d90f6f53bca3a804ed86f3d8cffc894353e3\": rpc error: code = NotFound desc = an error occurred when try to find container \"f8417bfd98f0891cfda8b00564f9d90f6f53bca3a804ed86f3d8cffc894353e3\": not found" May 17 00:21:58.075004 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6bb8391816c0c6e8fbba3ea68671a14b54c40b5949637e52d2a3868d1cd73baa-rootfs.mount: Deactivated successfully. May 17 00:21:58.075116 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6bb8391816c0c6e8fbba3ea68671a14b54c40b5949637e52d2a3868d1cd73baa-shm.mount: Deactivated successfully. May 17 00:21:58.075178 systemd[1]: var-lib-kubelet-pods-f368d8ba\x2d9f24\x2d46f4\x2d8599\x2db6a7a4486fef-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dltq48.mount: Deactivated successfully. May 17 00:21:58.075236 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-28e1cbb4cffa19fd1d69a85abff1b579bc02d935bdbf5ac7b174abc066e04959-rootfs.mount: Deactivated successfully. May 17 00:21:58.075285 systemd[1]: var-lib-kubelet-pods-f368d8ba\x2d9f24\x2d46f4\x2d8599\x2db6a7a4486fef-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 17 00:21:58.075338 systemd[1]: var-lib-kubelet-pods-f368d8ba\x2d9f24\x2d46f4\x2d8599\x2db6a7a4486fef-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 17 00:21:58.075393 systemd[1]: var-lib-kubelet-pods-df524321\x2d03b0\x2d4662\x2db37f\x2d355c8c7d3299-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqtptc.mount: Deactivated successfully. May 17 00:21:58.632998 kubelet[3174]: E0517 00:21:58.622430 3174 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 17 00:21:59.029606 sshd[4840]: pam_unix(sshd:session): session closed for user core May 17 00:21:59.035248 systemd[1]: sshd@24-172.31.27.59:22-147.75.109.163:32988.service: Deactivated successfully. May 17 00:21:59.037426 systemd[1]: session-25.scope: Deactivated successfully. May 17 00:21:59.039076 systemd-logind[1951]: Session 25 logged out. Waiting for processes to exit. May 17 00:21:59.040510 systemd-logind[1951]: Removed session 25. May 17 00:21:59.068368 systemd[1]: Started sshd@25-172.31.27.59:22-147.75.109.163:41030.service - OpenSSH per-connection server daemon (147.75.109.163:41030). 
May 17 00:21:59.243191 sshd[5005]: Accepted publickey for core from 147.75.109.163 port 41030 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:21:59.244722 sshd[5005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:21:59.250054 systemd-logind[1951]: New session 26 of user core. May 17 00:21:59.253112 systemd[1]: Started session-26.scope - Session 26 of User core. May 17 00:21:59.527949 kubelet[3174]: I0517 00:21:59.527173 3174 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df524321-03b0-4662-b37f-355c8c7d3299" path="/var/lib/kubelet/pods/df524321-03b0-4662-b37f-355c8c7d3299/volumes" May 17 00:21:59.527949 kubelet[3174]: I0517 00:21:59.527556 3174 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f368d8ba-9f24-46f4-8599-b6a7a4486fef" path="/var/lib/kubelet/pods/f368d8ba-9f24-46f4-8599-b6a7a4486fef/volumes" May 17 00:21:59.958133 kubelet[3174]: E0517 00:21:59.958005 3174 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f368d8ba-9f24-46f4-8599-b6a7a4486fef" containerName="mount-bpf-fs" May 17 00:21:59.958133 kubelet[3174]: E0517 00:21:59.958045 3174 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f368d8ba-9f24-46f4-8599-b6a7a4486fef" containerName="clean-cilium-state" May 17 00:21:59.958133 kubelet[3174]: E0517 00:21:59.958053 3174 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f368d8ba-9f24-46f4-8599-b6a7a4486fef" containerName="cilium-agent" May 17 00:21:59.958133 kubelet[3174]: E0517 00:21:59.958063 3174 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="df524321-03b0-4662-b37f-355c8c7d3299" containerName="cilium-operator" May 17 00:21:59.958133 kubelet[3174]: E0517 00:21:59.958071 3174 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f368d8ba-9f24-46f4-8599-b6a7a4486fef" containerName="mount-cgroup" May 17 00:21:59.958133 kubelet[3174]: E0517 00:21:59.958081 3174 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f368d8ba-9f24-46f4-8599-b6a7a4486fef" containerName="apply-sysctl-overwrites" May 17 00:21:59.958133 kubelet[3174]: I0517 00:21:59.958128 3174 memory_manager.go:354] "RemoveStaleState removing state" podUID="df524321-03b0-4662-b37f-355c8c7d3299" containerName="cilium-operator" May 17 00:21:59.958800 kubelet[3174]: I0517 00:21:59.958143 3174 memory_manager.go:354] "RemoveStaleState removing state" podUID="f368d8ba-9f24-46f4-8599-b6a7a4486fef" containerName="cilium-agent" May 17 00:21:59.962856 sshd[5005]: pam_unix(sshd:session): session closed for user core May 17 00:21:59.973590 systemd-logind[1951]: Session 26 logged out. Waiting for processes to exit. May 17 00:21:59.974513 systemd[1]: sshd@25-172.31.27.59:22-147.75.109.163:41030.service: Deactivated successfully. May 17 00:21:59.979732 systemd[1]: session-26.scope: Deactivated successfully. May 17 00:22:00.005523 systemd-logind[1951]: Removed session 26. May 17 00:22:00.017107 systemd[1]: Started sshd@26-172.31.27.59:22-147.75.109.163:41038.service - OpenSSH per-connection server daemon (147.75.109.163:41038). 
May 17 00:22:00.062396 kubelet[3174]: I0517 00:22:00.060477 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e253ed42-c7cd-4ade-b15c-3332298a1b63-cni-path\") pod \"cilium-pwqnl\" (UID: \"e253ed42-c7cd-4ade-b15c-3332298a1b63\") " pod="kube-system/cilium-pwqnl" May 17 00:22:00.062396 kubelet[3174]: I0517 00:22:00.060533 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e253ed42-c7cd-4ade-b15c-3332298a1b63-hubble-tls\") pod \"cilium-pwqnl\" (UID: \"e253ed42-c7cd-4ade-b15c-3332298a1b63\") " pod="kube-system/cilium-pwqnl" May 17 00:22:00.062396 kubelet[3174]: I0517 00:22:00.060566 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e253ed42-c7cd-4ade-b15c-3332298a1b63-cilium-run\") pod \"cilium-pwqnl\" (UID: \"e253ed42-c7cd-4ade-b15c-3332298a1b63\") " pod="kube-system/cilium-pwqnl" May 17 00:22:00.062396 kubelet[3174]: I0517 00:22:00.060591 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e253ed42-c7cd-4ade-b15c-3332298a1b63-cilium-ipsec-secrets\") pod \"cilium-pwqnl\" (UID: \"e253ed42-c7cd-4ade-b15c-3332298a1b63\") " pod="kube-system/cilium-pwqnl" May 17 00:22:00.062396 kubelet[3174]: I0517 00:22:00.060619 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e253ed42-c7cd-4ade-b15c-3332298a1b63-cilium-config-path\") pod \"cilium-pwqnl\" (UID: \"e253ed42-c7cd-4ade-b15c-3332298a1b63\") " pod="kube-system/cilium-pwqnl" May 17 00:22:00.062396 kubelet[3174]: I0517 00:22:00.060640 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e253ed42-c7cd-4ade-b15c-3332298a1b63-clustermesh-secrets\") pod \"cilium-pwqnl\" (UID: \"e253ed42-c7cd-4ade-b15c-3332298a1b63\") " pod="kube-system/cilium-pwqnl" May 17 00:22:00.062757 kubelet[3174]: I0517 00:22:00.060662 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e253ed42-c7cd-4ade-b15c-3332298a1b63-hostproc\") pod \"cilium-pwqnl\" (UID: \"e253ed42-c7cd-4ade-b15c-3332298a1b63\") " pod="kube-system/cilium-pwqnl" May 17 00:22:00.062757 kubelet[3174]: I0517 00:22:00.060684 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e253ed42-c7cd-4ade-b15c-3332298a1b63-etc-cni-netd\") pod \"cilium-pwqnl\" (UID: \"e253ed42-c7cd-4ade-b15c-3332298a1b63\") " pod="kube-system/cilium-pwqnl" May 17 00:22:00.062757 kubelet[3174]: I0517 00:22:00.061001 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e253ed42-c7cd-4ade-b15c-3332298a1b63-xtables-lock\") pod \"cilium-pwqnl\" (UID: \"e253ed42-c7cd-4ade-b15c-3332298a1b63\") " pod="kube-system/cilium-pwqnl" May 17 00:22:00.062757 kubelet[3174]: I0517 00:22:00.061040 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/e253ed42-c7cd-4ade-b15c-3332298a1b63-cilium-cgroup\") pod \"cilium-pwqnl\" (UID: \"e253ed42-c7cd-4ade-b15c-3332298a1b63\") " pod="kube-system/cilium-pwqnl" May 17 00:22:00.063587 kubelet[3174]: I0517 00:22:00.063090 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e253ed42-c7cd-4ade-b15c-3332298a1b63-host-proc-sys-net\") pod \"cilium-pwqnl\" (UID: \"e253ed42-c7cd-4ade-b15c-3332298a1b63\") " pod="kube-system/cilium-pwqnl" May 17 00:22:00.063587 kubelet[3174]: I0517 00:22:00.063155 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e253ed42-c7cd-4ade-b15c-3332298a1b63-lib-modules\") pod \"cilium-pwqnl\" (UID: \"e253ed42-c7cd-4ade-b15c-3332298a1b63\") " pod="kube-system/cilium-pwqnl" May 17 00:22:00.063587 kubelet[3174]: I0517 00:22:00.063179 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e253ed42-c7cd-4ade-b15c-3332298a1b63-host-proc-sys-kernel\") pod \"cilium-pwqnl\" (UID: \"e253ed42-c7cd-4ade-b15c-3332298a1b63\") " pod="kube-system/cilium-pwqnl" May 17 00:22:00.063587 kubelet[3174]: I0517 00:22:00.063204 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e253ed42-c7cd-4ade-b15c-3332298a1b63-bpf-maps\") pod \"cilium-pwqnl\" (UID: \"e253ed42-c7cd-4ade-b15c-3332298a1b63\") " pod="kube-system/cilium-pwqnl" May 17 00:22:00.063587 kubelet[3174]: I0517 00:22:00.063226 3174 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlqfg\" (UniqueName: \"kubernetes.io/projected/e253ed42-c7cd-4ade-b15c-3332298a1b63-kube-api-access-vlqfg\") pod \"cilium-pwqnl\" (UID: \"e253ed42-c7cd-4ade-b15c-3332298a1b63\") " pod="kube-system/cilium-pwqnl" May 17 00:22:00.078010 systemd[1]: Created slice kubepods-burstable-pode253ed42_c7cd_4ade_b15c_3332298a1b63.slice - libcontainer container kubepods-burstable-pode253ed42_c7cd_4ade_b15c_3332298a1b63.slice. May 17 00:22:00.113851 ntpd[1943]: Deleting interface #12 lxc_health, fe80::e4f8:9fff:fe84:17db%8#123, interface stats: received=0, sent=0, dropped=0, active_time=55 secs May 17 00:22:00.114448 ntpd[1943]: 17 May 00:22:00 ntpd[1943]: Deleting interface #12 lxc_health, fe80::e4f8:9fff:fe84:17db%8#123, interface stats: received=0, sent=0, dropped=0, active_time=55 secs May 17 00:22:00.223200 sshd[5017]: Accepted publickey for core from 147.75.109.163 port 41038 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:22:00.225214 sshd[5017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:22:00.235950 systemd-logind[1951]: New session 27 of user core. May 17 00:22:00.246282 systemd[1]: Started session-27.scope - Session 27 of User core. May 17 00:22:00.365090 sshd[5017]: pam_unix(sshd:session): session closed for user core May 17 00:22:00.368221 systemd[1]: sshd@26-172.31.27.59:22-147.75.109.163:41038.service: Deactivated successfully. May 17 00:22:00.370133 systemd[1]: session-27.scope: Deactivated successfully. May 17 00:22:00.371715 systemd-logind[1951]: Session 27 logged out. Waiting for processes to exit. May 17 00:22:00.373194 systemd-logind[1951]: Removed session 27. 
May 17 00:22:00.382966 containerd[1986]: time="2025-05-17T00:22:00.382827873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pwqnl,Uid:e253ed42-c7cd-4ade-b15c-3332298a1b63,Namespace:kube-system,Attempt:0,}" May 17 00:22:00.404521 systemd[1]: Started sshd@27-172.31.27.59:22-147.75.109.163:41052.service - OpenSSH per-connection server daemon (147.75.109.163:41052). May 17 00:22:00.429939 containerd[1986]: time="2025-05-17T00:22:00.429791833Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:22:00.429939 containerd[1986]: time="2025-05-17T00:22:00.429874067Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:22:00.430269 containerd[1986]: time="2025-05-17T00:22:00.429897437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:00.431219 containerd[1986]: time="2025-05-17T00:22:00.431158983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:22:00.448145 systemd[1]: Started cri-containerd-dda539e557afc890a58f5408b510b02803f83cc0a0c787fcf935171c3816b04e.scope - libcontainer container dda539e557afc890a58f5408b510b02803f83cc0a0c787fcf935171c3816b04e. May 17 00:22:00.473030 containerd[1986]: time="2025-05-17T00:22:00.472986855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pwqnl,Uid:e253ed42-c7cd-4ade-b15c-3332298a1b63,Namespace:kube-system,Attempt:0,} returns sandbox id \"dda539e557afc890a58f5408b510b02803f83cc0a0c787fcf935171c3816b04e\"" May 17 00:22:00.477342 containerd[1986]: time="2025-05-17T00:22:00.477311293Z" level=info msg="CreateContainer within sandbox \"dda539e557afc890a58f5408b510b02803f83cc0a0c787fcf935171c3816b04e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 17 00:22:00.502436 containerd[1986]: time="2025-05-17T00:22:00.502389699Z" level=info msg="CreateContainer within sandbox \"dda539e557afc890a58f5408b510b02803f83cc0a0c787fcf935171c3816b04e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bffa6186b073d20000d2a9846b9ed26a43ce2692b0351753318470a44f2a7db3\"" May 17 00:22:00.504161 containerd[1986]: time="2025-05-17T00:22:00.503082961Z" level=info msg="StartContainer for \"bffa6186b073d20000d2a9846b9ed26a43ce2692b0351753318470a44f2a7db3\"" May 17 00:22:00.532144 systemd[1]: Started cri-containerd-bffa6186b073d20000d2a9846b9ed26a43ce2692b0351753318470a44f2a7db3.scope - libcontainer container bffa6186b073d20000d2a9846b9ed26a43ce2692b0351753318470a44f2a7db3. May 17 00:22:00.561877 containerd[1986]: time="2025-05-17T00:22:00.561818101Z" level=info msg="StartContainer for \"bffa6186b073d20000d2a9846b9ed26a43ce2692b0351753318470a44f2a7db3\" returns successfully" May 17 00:22:00.571978 sshd[5029]: Accepted publickey for core from 147.75.109.163 port 41052 ssh2: RSA SHA256:E8bmmc3B2wMCD2qz/Q76BSqC6iw/7h1QTQJbwYDuOA4 May 17 00:22:00.572852 systemd[1]: cri-containerd-bffa6186b073d20000d2a9846b9ed26a43ce2692b0351753318470a44f2a7db3.scope: Deactivated successfully. May 17 00:22:00.574478 sshd[5029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:22:00.582635 systemd-logind[1951]: New session 28 of user core. May 17 00:22:00.588147 systemd[1]: Started session-28.scope - Session 28 of User core. 
May 17 00:22:00.632258 containerd[1986]: time="2025-05-17T00:22:00.632194714Z" level=info msg="shim disconnected" id=bffa6186b073d20000d2a9846b9ed26a43ce2692b0351753318470a44f2a7db3 namespace=k8s.io May 17 00:22:00.632258 containerd[1986]: time="2025-05-17T00:22:00.632256587Z" level=warning msg="cleaning up after shim disconnected" id=bffa6186b073d20000d2a9846b9ed26a43ce2692b0351753318470a44f2a7db3 namespace=k8s.io May 17 00:22:00.632258 containerd[1986]: time="2025-05-17T00:22:00.632265778Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:22:00.884506 containerd[1986]: time="2025-05-17T00:22:00.884159305Z" level=info msg="CreateContainer within sandbox \"dda539e557afc890a58f5408b510b02803f83cc0a0c787fcf935171c3816b04e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 17 00:22:00.906284 containerd[1986]: time="2025-05-17T00:22:00.906222543Z" level=info msg="CreateContainer within sandbox \"dda539e557afc890a58f5408b510b02803f83cc0a0c787fcf935171c3816b04e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f2f5c915391861f66b155aa0303f87e7ad9b4bc5451f93e99aa52f2cfe6ac789\"" May 17 00:22:00.907596 containerd[1986]: time="2025-05-17T00:22:00.906818180Z" level=info msg="StartContainer for \"f2f5c915391861f66b155aa0303f87e7ad9b4bc5451f93e99aa52f2cfe6ac789\"" May 17 00:22:00.936120 systemd[1]: Started cri-containerd-f2f5c915391861f66b155aa0303f87e7ad9b4bc5451f93e99aa52f2cfe6ac789.scope - libcontainer container f2f5c915391861f66b155aa0303f87e7ad9b4bc5451f93e99aa52f2cfe6ac789. May 17 00:22:00.964607 containerd[1986]: time="2025-05-17T00:22:00.963859197Z" level=info msg="StartContainer for \"f2f5c915391861f66b155aa0303f87e7ad9b4bc5451f93e99aa52f2cfe6ac789\" returns successfully" May 17 00:22:00.971313 systemd[1]: cri-containerd-f2f5c915391861f66b155aa0303f87e7ad9b4bc5451f93e99aa52f2cfe6ac789.scope: Deactivated successfully. May 17 00:22:01.008491 containerd[1986]: time="2025-05-17T00:22:01.008429146Z" level=info msg="shim disconnected" id=f2f5c915391861f66b155aa0303f87e7ad9b4bc5451f93e99aa52f2cfe6ac789 namespace=k8s.io May 17 00:22:01.008491 containerd[1986]: time="2025-05-17T00:22:01.008479231Z" level=warning msg="cleaning up after shim disconnected" id=f2f5c915391861f66b155aa0303f87e7ad9b4bc5451f93e99aa52f2cfe6ac789 namespace=k8s.io May 17 00:22:01.008491 containerd[1986]: time="2025-05-17T00:22:01.008487832Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:22:01.888026 containerd[1986]: time="2025-05-17T00:22:01.887981358Z" level=info msg="CreateContainer within sandbox \"dda539e557afc890a58f5408b510b02803f83cc0a0c787fcf935171c3816b04e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 17 00:22:01.936667 containerd[1986]: time="2025-05-17T00:22:01.936610746Z" level=info msg="CreateContainer within sandbox \"dda539e557afc890a58f5408b510b02803f83cc0a0c787fcf935171c3816b04e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4f49bd37e8017ae049e287a47351d5ec8f056b260087ad9d180e029f122dc376\"" May 17 00:22:01.937522 containerd[1986]: time="2025-05-17T00:22:01.937486677Z" level=info msg="StartContainer for \"4f49bd37e8017ae049e287a47351d5ec8f056b260087ad9d180e029f122dc376\"" May 17 00:22:01.985176 systemd[1]: Started cri-containerd-4f49bd37e8017ae049e287a47351d5ec8f056b260087ad9d180e029f122dc376.scope - libcontainer container 4f49bd37e8017ae049e287a47351d5ec8f056b260087ad9d180e029f122dc376. 
May 17 00:22:02.019297 containerd[1986]: time="2025-05-17T00:22:02.019060616Z" level=info msg="StartContainer for \"4f49bd37e8017ae049e287a47351d5ec8f056b260087ad9d180e029f122dc376\" returns successfully" May 17 00:22:02.030012 systemd[1]: cri-containerd-4f49bd37e8017ae049e287a47351d5ec8f056b260087ad9d180e029f122dc376.scope: Deactivated successfully. May 17 00:22:02.097971 containerd[1986]: time="2025-05-17T00:22:02.097790879Z" level=info msg="shim disconnected" id=4f49bd37e8017ae049e287a47351d5ec8f056b260087ad9d180e029f122dc376 namespace=k8s.io May 17 00:22:02.097971 containerd[1986]: time="2025-05-17T00:22:02.097883526Z" level=warning msg="cleaning up after shim disconnected" id=4f49bd37e8017ae049e287a47351d5ec8f056b260087ad9d180e029f122dc376 namespace=k8s.io May 17 00:22:02.097971 containerd[1986]: time="2025-05-17T00:22:02.097917229Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:22:02.116093 containerd[1986]: time="2025-05-17T00:22:02.116044575Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:22:02Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 17 00:22:02.179992 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f49bd37e8017ae049e287a47351d5ec8f056b260087ad9d180e029f122dc376-rootfs.mount: Deactivated successfully. May 17 00:22:02.896490 update_engine[1952]: I20250517 00:22:02.896323 1952 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 17 00:22:02.898271 update_engine[1952]: I20250517 00:22:02.896641 1952 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 17 00:22:02.898271 update_engine[1952]: I20250517 00:22:02.896896 1952 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 17 00:22:02.899126 update_engine[1952]: E20250517 00:22:02.898801 1952 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 17 00:22:02.899126 update_engine[1952]: I20250517 00:22:02.898876 1952 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 May 17 00:22:02.899566 containerd[1986]: time="2025-05-17T00:22:02.898802330Z" level=info msg="CreateContainer within sandbox \"dda539e557afc890a58f5408b510b02803f83cc0a0c787fcf935171c3816b04e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 17 00:22:02.926259 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4122784827.mount: Deactivated successfully. May 17 00:22:02.929846 containerd[1986]: time="2025-05-17T00:22:02.929803139Z" level=info msg="CreateContainer within sandbox \"dda539e557afc890a58f5408b510b02803f83cc0a0c787fcf935171c3816b04e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f47150a9523dace0b30db6e55bd183f20a3bae9909d6576135593cda35c0c3c8\"" May 17 00:22:02.930887 containerd[1986]: time="2025-05-17T00:22:02.930720901Z" level=info msg="StartContainer for \"f47150a9523dace0b30db6e55bd183f20a3bae9909d6576135593cda35c0c3c8\"" May 17 00:22:02.973190 systemd[1]: Started cri-containerd-f47150a9523dace0b30db6e55bd183f20a3bae9909d6576135593cda35c0c3c8.scope - libcontainer container f47150a9523dace0b30db6e55bd183f20a3bae9909d6576135593cda35c0c3c8. May 17 00:22:03.016226 systemd[1]: cri-containerd-f47150a9523dace0b30db6e55bd183f20a3bae9909d6576135593cda35c0c3c8.scope: Deactivated successfully. 
May 17 00:22:03.022528 containerd[1986]: time="2025-05-17T00:22:03.022407042Z" level=info msg="StartContainer for \"f47150a9523dace0b30db6e55bd183f20a3bae9909d6576135593cda35c0c3c8\" returns successfully" May 17 00:22:03.058464 containerd[1986]: time="2025-05-17T00:22:03.058377041Z" level=info msg="shim disconnected" id=f47150a9523dace0b30db6e55bd183f20a3bae9909d6576135593cda35c0c3c8 namespace=k8s.io May 17 00:22:03.058464 containerd[1986]: time="2025-05-17T00:22:03.058431006Z" level=warning msg="cleaning up after shim disconnected" id=f47150a9523dace0b30db6e55bd183f20a3bae9909d6576135593cda35c0c3c8 namespace=k8s.io May 17 00:22:03.058464 containerd[1986]: time="2025-05-17T00:22:03.058439906Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:22:03.178414 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f47150a9523dace0b30db6e55bd183f20a3bae9909d6576135593cda35c0c3c8-rootfs.mount: Deactivated successfully. May 17 00:22:03.634184 kubelet[3174]: E0517 00:22:03.633974 3174 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 17 00:22:03.903835 containerd[1986]: time="2025-05-17T00:22:03.903708074Z" level=info msg="CreateContainer within sandbox \"dda539e557afc890a58f5408b510b02803f83cc0a0c787fcf935171c3816b04e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 17 00:22:03.934066 containerd[1986]: time="2025-05-17T00:22:03.933915258Z" level=info msg="CreateContainer within sandbox \"dda539e557afc890a58f5408b510b02803f83cc0a0c787fcf935171c3816b04e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c0a5139a99f444c5e563eecef585f6ff929d783474d49a306b879b3daa3d099b\"" May 17 00:22:03.935891 containerd[1986]: time="2025-05-17T00:22:03.934870150Z" level=info msg="StartContainer for \"c0a5139a99f444c5e563eecef585f6ff929d783474d49a306b879b3daa3d099b\"" May 17 00:22:03.975477 systemd[1]: Started cri-containerd-c0a5139a99f444c5e563eecef585f6ff929d783474d49a306b879b3daa3d099b.scope - libcontainer container c0a5139a99f444c5e563eecef585f6ff929d783474d49a306b879b3daa3d099b. May 17 00:22:04.013001 containerd[1986]: time="2025-05-17T00:22:04.012923564Z" level=info msg="StartContainer for \"c0a5139a99f444c5e563eecef585f6ff929d783474d49a306b879b3daa3d099b\" returns successfully" May 17 00:22:04.178709 systemd[1]: run-containerd-runc-k8s.io-c0a5139a99f444c5e563eecef585f6ff929d783474d49a306b879b3daa3d099b-runc.peBhOo.mount: Deactivated successfully. May 17 00:22:04.603969 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) May 17 00:22:05.594545 kubelet[3174]: I0517 00:22:05.594502 3174 setters.go:600] "Node became not ready" node="ip-172-31-27-59" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-17T00:22:05Z","lastTransitionTime":"2025-05-17T00:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 17 00:22:07.556746 systemd-networkd[1889]: lxc_health: Link UP May 17 00:22:07.561705 (udev-worker)[5873]: Network interface NamePolicy= disabled on kernel command line. 
May 17 00:22:07.562838 systemd-networkd[1889]: lxc_health: Gained carrier May 17 00:22:08.418063 kubelet[3174]: I0517 00:22:08.417073 3174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-pwqnl" podStartSLOduration=9.417046857 podStartE2EDuration="9.417046857s" podCreationTimestamp="2025-05-17 00:21:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:22:04.918631053 +0000 UTC m=+91.542882699" watchObservedRunningTime="2025-05-17 00:22:08.417046857 +0000 UTC m=+95.041298498" May 17 00:22:08.988095 systemd-networkd[1889]: lxc_health: Gained IPv6LL May 17 00:22:11.114031 ntpd[1943]: Listen normally on 15 lxc_health [fe80::508e:42ff:fe6d:6798%14]:123 May 17 00:22:11.114569 ntpd[1943]: 17 May 00:22:11 ntpd[1943]: Listen normally on 15 lxc_health [fe80::508e:42ff:fe6d:6798%14]:123 May 17 00:22:12.896805 update_engine[1952]: I20250517 00:22:12.896728 1952 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 17 00:22:12.897201 update_engine[1952]: I20250517 00:22:12.897022 1952 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 17 00:22:12.897249 update_engine[1952]: I20250517 00:22:12.897235 1952 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 17 00:22:12.897868 update_engine[1952]: E20250517 00:22:12.897825 1952 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 17 00:22:12.898069 update_engine[1952]: I20250517 00:22:12.897889 1952 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 May 17 00:22:13.898881 kubelet[3174]: E0517 00:22:13.898805 3174 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:35336->127.0.0.1:46849: write tcp 127.0.0.1:35336->127.0.0.1:46849: write: connection reset by peer May 17 00:22:13.925214 sshd[5029]: pam_unix(sshd:session): session closed for user core May 17 00:22:13.929810 systemd[1]: sshd@27-172.31.27.59:22-147.75.109.163:41052.service: Deactivated successfully. May 17 00:22:13.931675 systemd[1]: session-28.scope: Deactivated successfully. May 17 00:22:13.932403 systemd-logind[1951]: Session 28 logged out. Waiting for processes to exit. May 17 00:22:13.934184 systemd-logind[1951]: Removed session 28. May 17 00:22:22.903815 update_engine[1952]: I20250517 00:22:22.903743 1952 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 17 00:22:22.904211 update_engine[1952]: I20250517 00:22:22.904011 1952 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 17 00:22:22.904245 update_engine[1952]: I20250517 00:22:22.904208 1952 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 17 00:22:22.905009 update_engine[1952]: E20250517 00:22:22.904965 1952 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 17 00:22:22.905125 update_engine[1952]: I20250517 00:22:22.905028 1952 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 17 00:22:22.905125 update_engine[1952]: I20250517 00:22:22.905038 1952 omaha_request_action.cc:617] Omaha request response: May 17 00:22:22.905125 update_engine[1952]: E20250517 00:22:22.905112 1952 omaha_request_action.cc:636] Omaha request network transfer failed. May 17 00:22:22.908364 update_engine[1952]: I20250517 00:22:22.908138 1952 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. 
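[Editor's sketch, not part of the captured log] The "Node became not ready" condition logged above (NetworkReady=false until the Cilium CNI initializes) can be read back through the Kubernetes API. A short client-go sketch, assuming in-cluster credentials; the node name ip-172-31-27-59 is taken from the log and everything else is illustrative.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Assumes the program runs in a pod with RBAC permission to read nodes.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ip-172-31-27-59", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Print the Ready condition, which mirrors the kubelet setter in the log.
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			fmt.Printf("Ready=%s reason=%s message=%s\n", cond.Status, cond.Reason, cond.Message)
		}
	}
}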
May 17 00:22:22.908364 update_engine[1952]: I20250517 00:22:22.908179 1952 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 17 00:22:22.908364 update_engine[1952]: I20250517 00:22:22.908190 1952 update_attempter.cc:306] Processing Done. May 17 00:22:22.908364 update_engine[1952]: E20250517 00:22:22.908210 1952 update_attempter.cc:619] Update failed. May 17 00:22:22.913225 update_engine[1952]: I20250517 00:22:22.913147 1952 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse May 17 00:22:22.913225 update_engine[1952]: I20250517 00:22:22.913194 1952 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) May 17 00:22:22.913225 update_engine[1952]: I20250517 00:22:22.913206 1952 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. May 17 00:22:22.913435 update_engine[1952]: I20250517 00:22:22.913301 1952 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 17 00:22:22.913435 update_engine[1952]: I20250517 00:22:22.913338 1952 omaha_request_action.cc:271] Posting an Omaha request to disabled May 17 00:22:22.913435 update_engine[1952]: I20250517 00:22:22.913347 1952 omaha_request_action.cc:272] Request: May 17 00:22:22.913435 update_engine[1952]: May 17 00:22:22.913435 update_engine[1952]: May 17 00:22:22.913435 update_engine[1952]: May 17 00:22:22.913435 update_engine[1952]: May 17 00:22:22.913435 update_engine[1952]: May 17 00:22:22.913435 update_engine[1952]: May 17 00:22:22.913435 update_engine[1952]: I20250517 00:22:22.913355 1952 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 17 00:22:22.913784 update_engine[1952]: I20250517 00:22:22.913578 1952 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 17 00:22:22.914188 update_engine[1952]: I20250517 00:22:22.913823 1952 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 17 00:22:22.914287 locksmithd[1991]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 May 17 00:22:22.914899 update_engine[1952]: E20250517 00:22:22.914856 1952 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 17 00:22:22.915063 update_engine[1952]: I20250517 00:22:22.914947 1952 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 17 00:22:22.915063 update_engine[1952]: I20250517 00:22:22.914960 1952 omaha_request_action.cc:617] Omaha request response: May 17 00:22:22.915063 update_engine[1952]: I20250517 00:22:22.914970 1952 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 17 00:22:22.915063 update_engine[1952]: I20250517 00:22:22.914978 1952 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 17 00:22:22.915063 update_engine[1952]: I20250517 00:22:22.914985 1952 update_attempter.cc:306] Processing Done. May 17 00:22:22.915063 update_engine[1952]: I20250517 00:22:22.914994 1952 update_attempter.cc:310] Error event sent. 
May 17 00:22:22.915063 update_engine[1952]: I20250517 00:22:22.915006 1952 update_check_scheduler.cc:74] Next update check in 48m34s May 17 00:22:22.915392 locksmithd[1991]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 May 17 00:22:33.542586 containerd[1986]: time="2025-05-17T00:22:33.542535813Z" level=info msg="StopPodSandbox for \"28e1cbb4cffa19fd1d69a85abff1b579bc02d935bdbf5ac7b174abc066e04959\"" May 17 00:22:33.543323 containerd[1986]: time="2025-05-17T00:22:33.542633011Z" level=info msg="TearDown network for sandbox \"28e1cbb4cffa19fd1d69a85abff1b579bc02d935bdbf5ac7b174abc066e04959\" successfully" May 17 00:22:33.543323 containerd[1986]: time="2025-05-17T00:22:33.542646298Z" level=info msg="StopPodSandbox for \"28e1cbb4cffa19fd1d69a85abff1b579bc02d935bdbf5ac7b174abc066e04959\" returns successfully" May 17 00:22:33.543323 containerd[1986]: time="2025-05-17T00:22:33.543157691Z" level=info msg="RemovePodSandbox for \"28e1cbb4cffa19fd1d69a85abff1b579bc02d935bdbf5ac7b174abc066e04959\"" May 17 00:22:33.545389 containerd[1986]: time="2025-05-17T00:22:33.545344847Z" level=info msg="Forcibly stopping sandbox \"28e1cbb4cffa19fd1d69a85abff1b579bc02d935bdbf5ac7b174abc066e04959\"" May 17 00:22:33.545502 containerd[1986]: time="2025-05-17T00:22:33.545419204Z" level=info msg="TearDown network for sandbox \"28e1cbb4cffa19fd1d69a85abff1b579bc02d935bdbf5ac7b174abc066e04959\" successfully" May 17 00:22:33.551006 containerd[1986]: time="2025-05-17T00:22:33.550956434Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"28e1cbb4cffa19fd1d69a85abff1b579bc02d935bdbf5ac7b174abc066e04959\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:22:33.551184 containerd[1986]: time="2025-05-17T00:22:33.551039702Z" level=info msg="RemovePodSandbox \"28e1cbb4cffa19fd1d69a85abff1b579bc02d935bdbf5ac7b174abc066e04959\" returns successfully" May 17 00:22:33.551623 containerd[1986]: time="2025-05-17T00:22:33.551591109Z" level=info msg="StopPodSandbox for \"6bb8391816c0c6e8fbba3ea68671a14b54c40b5949637e52d2a3868d1cd73baa\"" May 17 00:22:33.551710 containerd[1986]: time="2025-05-17T00:22:33.551680803Z" level=info msg="TearDown network for sandbox \"6bb8391816c0c6e8fbba3ea68671a14b54c40b5949637e52d2a3868d1cd73baa\" successfully" May 17 00:22:33.551710 containerd[1986]: time="2025-05-17T00:22:33.551696679Z" level=info msg="StopPodSandbox for \"6bb8391816c0c6e8fbba3ea68671a14b54c40b5949637e52d2a3868d1cd73baa\" returns successfully" May 17 00:22:33.552128 containerd[1986]: time="2025-05-17T00:22:33.552098631Z" level=info msg="RemovePodSandbox for \"6bb8391816c0c6e8fbba3ea68671a14b54c40b5949637e52d2a3868d1cd73baa\"" May 17 00:22:33.552233 containerd[1986]: time="2025-05-17T00:22:33.552128803Z" level=info msg="Forcibly stopping sandbox \"6bb8391816c0c6e8fbba3ea68671a14b54c40b5949637e52d2a3868d1cd73baa\"" May 17 00:22:33.552233 containerd[1986]: time="2025-05-17T00:22:33.552186964Z" level=info msg="TearDown network for sandbox \"6bb8391816c0c6e8fbba3ea68671a14b54c40b5949637e52d2a3868d1cd73baa\" successfully" May 17 00:22:33.557522 containerd[1986]: time="2025-05-17T00:22:33.557471567Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6bb8391816c0c6e8fbba3ea68671a14b54c40b5949637e52d2a3868d1cd73baa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 17 00:22:33.557626 containerd[1986]: time="2025-05-17T00:22:33.557542113Z" level=info msg="RemovePodSandbox \"6bb8391816c0c6e8fbba3ea68671a14b54c40b5949637e52d2a3868d1cd73baa\" returns successfully" May 17 00:22:51.104663 systemd[1]: cri-containerd-a2eafbdf4aed74037ba469cf1696f3aad497324040ca434ca28c56347ce6ad90.scope: Deactivated successfully. May 17 00:22:51.105527 systemd[1]: cri-containerd-a2eafbdf4aed74037ba469cf1696f3aad497324040ca434ca28c56347ce6ad90.scope: Consumed 2.889s CPU time, 25.8M memory peak, 0B memory swap peak. May 17 00:22:51.133320 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a2eafbdf4aed74037ba469cf1696f3aad497324040ca434ca28c56347ce6ad90-rootfs.mount: Deactivated successfully. May 17 00:22:51.148954 containerd[1986]: time="2025-05-17T00:22:51.148809941Z" level=info msg="shim disconnected" id=a2eafbdf4aed74037ba469cf1696f3aad497324040ca434ca28c56347ce6ad90 namespace=k8s.io May 17 00:22:51.148954 containerd[1986]: time="2025-05-17T00:22:51.148856137Z" level=warning msg="cleaning up after shim disconnected" id=a2eafbdf4aed74037ba469cf1696f3aad497324040ca434ca28c56347ce6ad90 namespace=k8s.io May 17 00:22:51.148954 containerd[1986]: time="2025-05-17T00:22:51.148864047Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:22:51.991093 kubelet[3174]: I0517 00:22:51.991049 3174 scope.go:117] "RemoveContainer" containerID="a2eafbdf4aed74037ba469cf1696f3aad497324040ca434ca28c56347ce6ad90" May 17 00:22:51.996136 containerd[1986]: time="2025-05-17T00:22:51.996089806Z" level=info msg="CreateContainer within sandbox \"d68e40d4b35069b4f9b03b6ac710bdab02a25fedb12626988968878ccde1494b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" May 17 00:22:52.018517 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2242086374.mount: Deactivated successfully. May 17 00:22:52.020629 containerd[1986]: time="2025-05-17T00:22:52.020590176Z" level=info msg="CreateContainer within sandbox \"d68e40d4b35069b4f9b03b6ac710bdab02a25fedb12626988968878ccde1494b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"58749348c48de8aa0b64e2cfd288fcd368cf0d6cdf5a27f4a8e46c5bc79b4fd1\"" May 17 00:22:52.021207 containerd[1986]: time="2025-05-17T00:22:52.021176453Z" level=info msg="StartContainer for \"58749348c48de8aa0b64e2cfd288fcd368cf0d6cdf5a27f4a8e46c5bc79b4fd1\"" May 17 00:22:52.060149 systemd[1]: Started cri-containerd-58749348c48de8aa0b64e2cfd288fcd368cf0d6cdf5a27f4a8e46c5bc79b4fd1.scope - libcontainer container 58749348c48de8aa0b64e2cfd288fcd368cf0d6cdf5a27f4a8e46c5bc79b4fd1. May 17 00:22:52.112417 containerd[1986]: time="2025-05-17T00:22:52.112348534Z" level=info msg="StartContainer for \"58749348c48de8aa0b64e2cfd288fcd368cf0d6cdf5a27f4a8e46c5bc79b4fd1\" returns successfully" May 17 00:22:55.808138 systemd[1]: cri-containerd-735eef7a01d32082b25f68e31d66c54e8762a01cff9acb11a489c500b4a0e5e4.scope: Deactivated successfully. May 17 00:22:55.808414 systemd[1]: cri-containerd-735eef7a01d32082b25f68e31d66c54e8762a01cff9acb11a489c500b4a0e5e4.scope: Consumed 1.510s CPU time, 18.1M memory peak, 0B memory swap peak. 
May 17 00:22:55.835589 kubelet[3174]: E0517 00:22:55.834473 3174 controller.go:195] "Failed to update lease" err="Put \"https://172.31.27.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-59?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" May 17 00:22:55.856068 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-735eef7a01d32082b25f68e31d66c54e8762a01cff9acb11a489c500b4a0e5e4-rootfs.mount: Deactivated successfully. May 17 00:22:55.878597 containerd[1986]: time="2025-05-17T00:22:55.878515808Z" level=info msg="shim disconnected" id=735eef7a01d32082b25f68e31d66c54e8762a01cff9acb11a489c500b4a0e5e4 namespace=k8s.io May 17 00:22:55.878597 containerd[1986]: time="2025-05-17T00:22:55.878591838Z" level=warning msg="cleaning up after shim disconnected" id=735eef7a01d32082b25f68e31d66c54e8762a01cff9acb11a489c500b4a0e5e4 namespace=k8s.io May 17 00:22:55.878597 containerd[1986]: time="2025-05-17T00:22:55.878606116Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:22:56.000565 kubelet[3174]: I0517 00:22:56.000532 3174 scope.go:117] "RemoveContainer" containerID="735eef7a01d32082b25f68e31d66c54e8762a01cff9acb11a489c500b4a0e5e4" May 17 00:22:56.003497 containerd[1986]: time="2025-05-17T00:22:56.003460297Z" level=info msg="CreateContainer within sandbox \"93ae21c6438b87f45fbb04feb2ce1f1be4addacc9318a759a84e24c4ecf205fd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" May 17 00:22:56.025048 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3275096054.mount: Deactivated successfully. May 17 00:22:56.031686 containerd[1986]: time="2025-05-17T00:22:56.031646696Z" level=info msg="CreateContainer within sandbox \"93ae21c6438b87f45fbb04feb2ce1f1be4addacc9318a759a84e24c4ecf205fd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"ab607617e6c09dbe77af19ec2b4a2be09141a8e4ace6dc7d9eb08db42f49f313\"" May 17 00:22:56.032147 containerd[1986]: time="2025-05-17T00:22:56.032123887Z" level=info msg="StartContainer for \"ab607617e6c09dbe77af19ec2b4a2be09141a8e4ace6dc7d9eb08db42f49f313\"" May 17 00:22:56.065149 systemd[1]: Started cri-containerd-ab607617e6c09dbe77af19ec2b4a2be09141a8e4ace6dc7d9eb08db42f49f313.scope - libcontainer container ab607617e6c09dbe77af19ec2b4a2be09141a8e4ace6dc7d9eb08db42f49f313. May 17 00:22:56.113395 containerd[1986]: time="2025-05-17T00:22:56.113335949Z" level=info msg="StartContainer for \"ab607617e6c09dbe77af19ec2b4a2be09141a8e4ace6dc7d9eb08db42f49f313\" returns successfully" May 17 00:23:05.836121 kubelet[3174]: E0517 00:23:05.835895 3174 controller.go:195] "Failed to update lease" err="Put \"https://172.31.27.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-59?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"