May 17 00:04:56.278271 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] May 17 00:04:56.278326 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri May 16 22:39:35 -00 2025 May 17 00:04:56.278354 kernel: KASLR disabled due to lack of seed May 17 00:04:56.278372 kernel: efi: EFI v2.7 by EDK II May 17 00:04:56.278389 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b000a98 MEMRESERVE=0x7852ee18 May 17 00:04:56.278406 kernel: ACPI: Early table checksum verification disabled May 17 00:04:56.278426 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) May 17 00:04:56.278444 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) May 17 00:04:56.278461 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) May 17 00:04:56.278479 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) May 17 00:04:56.278503 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) May 17 00:04:56.278521 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) May 17 00:04:56.278537 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) May 17 00:04:56.278555 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) May 17 00:04:56.278577 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) May 17 00:04:56.278600 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) May 17 00:04:56.278619 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) May 17 00:04:56.278636 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 May 17 00:04:56.278654 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') May 17 00:04:56.278671 kernel: printk: bootconsole [uart0] enabled May 17 00:04:56.278689 kernel: NUMA: Failed to initialise from firmware May 17 00:04:56.278707 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] May 17 00:04:56.278725 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff] May 17 00:04:56.278742 kernel: Zone ranges: May 17 00:04:56.278819 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] May 17 00:04:56.278846 kernel: DMA32 empty May 17 00:04:56.278874 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] May 17 00:04:56.278894 kernel: Movable zone start for each node May 17 00:04:56.278911 kernel: Early memory node ranges May 17 00:04:56.278928 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] May 17 00:04:56.278946 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] May 17 00:04:56.278963 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] May 17 00:04:56.278981 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] May 17 00:04:56.278998 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] May 17 00:04:56.279016 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] May 17 00:04:56.279033 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] May 17 00:04:56.279050 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] May 17 00:04:56.279067 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] May 17 00:04:56.279091 kernel: On 
node 0, zone Normal: 8192 pages in unavailable ranges May 17 00:04:56.279109 kernel: psci: probing for conduit method from ACPI. May 17 00:04:56.279134 kernel: psci: PSCIv1.0 detected in firmware. May 17 00:04:56.279152 kernel: psci: Using standard PSCI v0.2 function IDs May 17 00:04:56.279198 kernel: psci: Trusted OS migration not required May 17 00:04:56.279225 kernel: psci: SMC Calling Convention v1.1 May 17 00:04:56.279243 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 May 17 00:04:56.279261 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 May 17 00:04:56.279280 kernel: pcpu-alloc: [0] 0 [0] 1 May 17 00:04:56.279298 kernel: Detected PIPT I-cache on CPU0 May 17 00:04:56.279316 kernel: CPU features: detected: GIC system register CPU interface May 17 00:04:56.279334 kernel: CPU features: detected: Spectre-v2 May 17 00:04:56.279352 kernel: CPU features: detected: Spectre-v3a May 17 00:04:56.279370 kernel: CPU features: detected: Spectre-BHB May 17 00:04:56.279389 kernel: CPU features: detected: ARM erratum 1742098 May 17 00:04:56.279407 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 May 17 00:04:56.279431 kernel: alternatives: applying boot alternatives May 17 00:04:56.279452 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=3554ca41327a0c5ba7e4ac1b3147487d73f35805806dcb20264133a9c301eb5d May 17 00:04:56.279473 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 17 00:04:56.279494 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 17 00:04:56.279512 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 17 00:04:56.279530 kernel: Fallback order for Node 0: 0 May 17 00:04:56.279549 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 May 17 00:04:56.279566 kernel: Policy zone: Normal May 17 00:04:56.279584 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 17 00:04:56.279602 kernel: software IO TLB: area num 2. May 17 00:04:56.279620 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) May 17 00:04:56.279646 kernel: Memory: 3820152K/4030464K available (10240K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 210312K reserved, 0K cma-reserved) May 17 00:04:56.279665 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 17 00:04:56.279683 kernel: rcu: Preemptible hierarchical RCU implementation. May 17 00:04:56.279705 kernel: rcu: RCU event tracing is enabled. May 17 00:04:56.279730 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 17 00:04:56.279752 kernel: Trampoline variant of Tasks RCU enabled. May 17 00:04:56.279819 kernel: Tracing variant of Tasks RCU enabled. May 17 00:04:56.279838 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 17 00:04:56.279857 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 17 00:04:56.279875 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 17 00:04:56.279894 kernel: GICv3: 96 SPIs implemented May 17 00:04:56.279921 kernel: GICv3: 0 Extended SPIs implemented May 17 00:04:56.279940 kernel: Root IRQ handler: gic_handle_irq May 17 00:04:56.279958 kernel: GICv3: GICv3 features: 16 PPIs May 17 00:04:56.279976 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 May 17 00:04:56.279993 kernel: ITS [mem 0x10080000-0x1009ffff] May 17 00:04:56.280011 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1) May 17 00:04:56.280030 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1) May 17 00:04:56.280047 kernel: GICv3: using LPI property table @0x00000004000d0000 May 17 00:04:56.280065 kernel: ITS: Using hypervisor restricted LPI range [128] May 17 00:04:56.280082 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000 May 17 00:04:56.280101 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 17 00:04:56.280118 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). May 17 00:04:56.280142 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns May 17 00:04:56.280161 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns May 17 00:04:56.280180 kernel: Console: colour dummy device 80x25 May 17 00:04:56.280199 kernel: printk: console [tty1] enabled May 17 00:04:56.280218 kernel: ACPI: Core revision 20230628 May 17 00:04:56.280237 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) May 17 00:04:56.280255 kernel: pid_max: default: 32768 minimum: 301 May 17 00:04:56.280274 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 17 00:04:56.280292 kernel: landlock: Up and running. May 17 00:04:56.280315 kernel: SELinux: Initializing. May 17 00:04:56.280334 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 17 00:04:56.280352 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 17 00:04:56.280372 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 17 00:04:56.280392 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 17 00:04:56.280412 kernel: rcu: Hierarchical SRCU implementation. May 17 00:04:56.280431 kernel: rcu: Max phase no-delay instances is 400. May 17 00:04:56.280449 kernel: Platform MSI: ITS@0x10080000 domain created May 17 00:04:56.280467 kernel: PCI/MSI: ITS@0x10080000 domain created May 17 00:04:56.280491 kernel: Remapping and enabling EFI services. May 17 00:04:56.280509 kernel: smp: Bringing up secondary CPUs ... May 17 00:04:56.280527 kernel: Detected PIPT I-cache on CPU1 May 17 00:04:56.280544 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 May 17 00:04:56.280563 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000 May 17 00:04:56.280581 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] May 17 00:04:56.280598 kernel: smp: Brought up 1 node, 2 CPUs May 17 00:04:56.280617 kernel: SMP: Total of 2 processors activated. 
May 17 00:04:56.280635 kernel: CPU features: detected: 32-bit EL0 Support May 17 00:04:56.280658 kernel: CPU features: detected: 32-bit EL1 Support May 17 00:04:56.280677 kernel: CPU features: detected: CRC32 instructions May 17 00:04:56.280697 kernel: CPU: All CPU(s) started at EL1 May 17 00:04:56.280729 kernel: alternatives: applying system-wide alternatives May 17 00:04:56.280752 kernel: devtmpfs: initialized May 17 00:04:56.282882 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 17 00:04:56.282906 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 17 00:04:56.282926 kernel: pinctrl core: initialized pinctrl subsystem May 17 00:04:56.282946 kernel: SMBIOS 3.0.0 present. May 17 00:04:56.282966 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 May 17 00:04:56.282997 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 17 00:04:56.283017 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations May 17 00:04:56.283038 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 17 00:04:56.283057 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 17 00:04:56.283099 kernel: audit: initializing netlink subsys (disabled) May 17 00:04:56.283124 kernel: audit: type=2000 audit(0.308:1): state=initialized audit_enabled=0 res=1 May 17 00:04:56.283147 kernel: thermal_sys: Registered thermal governor 'step_wise' May 17 00:04:56.283201 kernel: cpuidle: using governor menu May 17 00:04:56.283221 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. May 17 00:04:56.283240 kernel: ASID allocator initialised with 65536 entries May 17 00:04:56.283259 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 17 00:04:56.283278 kernel: Serial: AMBA PL011 UART driver May 17 00:04:56.283296 kernel: Modules: 17504 pages in range for non-PLT usage May 17 00:04:56.283316 kernel: Modules: 509024 pages in range for PLT usage May 17 00:04:56.283336 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 17 00:04:56.283356 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page May 17 00:04:56.283382 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages May 17 00:04:56.283401 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page May 17 00:04:56.283420 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 17 00:04:56.283439 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page May 17 00:04:56.283459 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages May 17 00:04:56.283479 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page May 17 00:04:56.283498 kernel: ACPI: Added _OSI(Module Device) May 17 00:04:56.283518 kernel: ACPI: Added _OSI(Processor Device) May 17 00:04:56.283538 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 17 00:04:56.283565 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 17 00:04:56.283585 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 17 00:04:56.283605 kernel: ACPI: Interpreter enabled May 17 00:04:56.283624 kernel: ACPI: Using GIC for interrupt routing May 17 00:04:56.283644 kernel: ACPI: MCFG table detected, 1 entries May 17 00:04:56.283663 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) May 17 00:04:56.284090 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 17 00:04:56.284350 kernel: acpi 
PNP0A08:00: _OSC: platform does not support [LTR] May 17 00:04:56.284622 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] May 17 00:04:56.285332 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 May 17 00:04:56.285611 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] May 17 00:04:56.285647 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] May 17 00:04:56.285667 kernel: acpiphp: Slot [1] registered May 17 00:04:56.285687 kernel: acpiphp: Slot [2] registered May 17 00:04:56.285706 kernel: acpiphp: Slot [3] registered May 17 00:04:56.285725 kernel: acpiphp: Slot [4] registered May 17 00:04:56.285788 kernel: acpiphp: Slot [5] registered May 17 00:04:56.285814 kernel: acpiphp: Slot [6] registered May 17 00:04:56.285834 kernel: acpiphp: Slot [7] registered May 17 00:04:56.285853 kernel: acpiphp: Slot [8] registered May 17 00:04:56.285873 kernel: acpiphp: Slot [9] registered May 17 00:04:56.285893 kernel: acpiphp: Slot [10] registered May 17 00:04:56.285912 kernel: acpiphp: Slot [11] registered May 17 00:04:56.285932 kernel: acpiphp: Slot [12] registered May 17 00:04:56.285952 kernel: acpiphp: Slot [13] registered May 17 00:04:56.285980 kernel: acpiphp: Slot [14] registered May 17 00:04:56.286000 kernel: acpiphp: Slot [15] registered May 17 00:04:56.286019 kernel: acpiphp: Slot [16] registered May 17 00:04:56.286039 kernel: acpiphp: Slot [17] registered May 17 00:04:56.286058 kernel: acpiphp: Slot [18] registered May 17 00:04:56.286080 kernel: acpiphp: Slot [19] registered May 17 00:04:56.286101 kernel: acpiphp: Slot [20] registered May 17 00:04:56.286120 kernel: acpiphp: Slot [21] registered May 17 00:04:56.286139 kernel: acpiphp: Slot [22] registered May 17 00:04:56.286159 kernel: acpiphp: Slot [23] registered May 17 00:04:56.286186 kernel: acpiphp: Slot [24] registered May 17 00:04:56.286206 kernel: acpiphp: Slot [25] registered May 17 00:04:56.286233 kernel: acpiphp: Slot [26] registered May 17 00:04:56.286253 kernel: acpiphp: Slot [27] registered May 17 00:04:56.286272 kernel: acpiphp: Slot [28] registered May 17 00:04:56.286290 kernel: acpiphp: Slot [29] registered May 17 00:04:56.286309 kernel: acpiphp: Slot [30] registered May 17 00:04:56.286328 kernel: acpiphp: Slot [31] registered May 17 00:04:56.286346 kernel: PCI host bridge to bus 0000:00 May 17 00:04:56.286627 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] May 17 00:04:56.288275 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] May 17 00:04:56.288547 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] May 17 00:04:56.288749 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] May 17 00:04:56.289255 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 May 17 00:04:56.289584 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 May 17 00:04:56.289969 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] May 17 00:04:56.290271 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 May 17 00:04:56.290520 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] May 17 00:04:56.292450 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold May 17 00:04:56.295180 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 May 17 00:04:56.295474 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] May 17 00:04:56.295835 kernel: pci 0000:00:05.0: reg 0x18: [mem 
0x80000000-0x800fffff pref] May 17 00:04:56.296138 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] May 17 00:04:56.296387 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold May 17 00:04:56.296642 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] May 17 00:04:56.299029 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] May 17 00:04:56.299361 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] May 17 00:04:56.299628 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] May 17 00:04:56.299964 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] May 17 00:04:56.300232 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] May 17 00:04:56.300484 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] May 17 00:04:56.300753 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] May 17 00:04:56.301265 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 May 17 00:04:56.301290 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 May 17 00:04:56.301312 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 May 17 00:04:56.301333 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 May 17 00:04:56.301353 kernel: iommu: Default domain type: Translated May 17 00:04:56.301387 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 17 00:04:56.301408 kernel: efivars: Registered efivars operations May 17 00:04:56.301426 kernel: vgaarb: loaded May 17 00:04:56.301446 kernel: clocksource: Switched to clocksource arch_sys_counter May 17 00:04:56.301465 kernel: VFS: Disk quotas dquot_6.6.0 May 17 00:04:56.301485 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 17 00:04:56.301504 kernel: pnp: PnP ACPI init May 17 00:04:56.301950 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved May 17 00:04:56.301991 kernel: pnp: PnP ACPI: found 1 devices May 17 00:04:56.302022 kernel: NET: Registered PF_INET protocol family May 17 00:04:56.302041 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 17 00:04:56.302061 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 17 00:04:56.302080 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 17 00:04:56.302099 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 17 00:04:56.302118 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 17 00:04:56.302138 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 17 00:04:56.302157 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 17 00:04:56.302176 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 17 00:04:56.302201 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 17 00:04:56.302220 kernel: PCI: CLS 0 bytes, default 64 May 17 00:04:56.302239 kernel: kvm [1]: HYP mode not available May 17 00:04:56.302257 kernel: Initialise system trusted keyrings May 17 00:04:56.302277 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 17 00:04:56.302295 kernel: Key type asymmetric registered May 17 00:04:56.302314 kernel: Asymmetric key parser 'x509' registered May 17 00:04:56.302333 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 17 00:04:56.302352 kernel: io scheduler mq-deadline registered May 17 
00:04:56.302376 kernel: io scheduler kyber registered May 17 00:04:56.302394 kernel: io scheduler bfq registered May 17 00:04:56.302658 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered May 17 00:04:56.302691 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 17 00:04:56.302710 kernel: ACPI: button: Power Button [PWRB] May 17 00:04:56.302730 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 May 17 00:04:56.302749 kernel: ACPI: button: Sleep Button [SLPB] May 17 00:04:56.302797 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 17 00:04:56.302828 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 May 17 00:04:56.303092 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) May 17 00:04:56.303128 kernel: printk: console [ttyS0] disabled May 17 00:04:56.303150 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A May 17 00:04:56.303195 kernel: printk: console [ttyS0] enabled May 17 00:04:56.303216 kernel: printk: bootconsole [uart0] disabled May 17 00:04:56.304894 kernel: thunder_xcv, ver 1.0 May 17 00:04:56.304917 kernel: thunder_bgx, ver 1.0 May 17 00:04:56.304941 kernel: nicpf, ver 1.0 May 17 00:04:56.304977 kernel: nicvf, ver 1.0 May 17 00:04:56.307383 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 17 00:04:56.307628 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-17T00:04:55 UTC (1747440295) May 17 00:04:56.307662 kernel: hid: raw HID events driver (C) Jiri Kosina May 17 00:04:56.307682 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available May 17 00:04:56.307701 kernel: watchdog: Delayed init of the lockup detector failed: -19 May 17 00:04:56.307720 kernel: watchdog: Hard watchdog permanently disabled May 17 00:04:56.307739 kernel: NET: Registered PF_INET6 protocol family May 17 00:04:56.307814 kernel: Segment Routing with IPv6 May 17 00:04:56.307835 kernel: In-situ OAM (IOAM) with IPv6 May 17 00:04:56.307854 kernel: NET: Registered PF_PACKET protocol family May 17 00:04:56.307875 kernel: Key type dns_resolver registered May 17 00:04:56.307894 kernel: registered taskstats version 1 May 17 00:04:56.307913 kernel: Loading compiled-in X.509 certificates May 17 00:04:56.307932 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 02f7129968574a1ae76b1ee42e7674ea1c42071b' May 17 00:04:56.307951 kernel: Key type .fscrypt registered May 17 00:04:56.307970 kernel: Key type fscrypt-provisioning registered May 17 00:04:56.307997 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 17 00:04:56.308017 kernel: ima: Allocated hash algorithm: sha1 May 17 00:04:56.308035 kernel: ima: No architecture policies found May 17 00:04:56.308054 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 17 00:04:56.308072 kernel: clk: Disabling unused clocks May 17 00:04:56.308090 kernel: Freeing unused kernel memory: 39424K May 17 00:04:56.308109 kernel: Run /init as init process May 17 00:04:56.308127 kernel: with arguments: May 17 00:04:56.308146 kernel: /init May 17 00:04:56.308164 kernel: with environment: May 17 00:04:56.308187 kernel: HOME=/ May 17 00:04:56.308206 kernel: TERM=linux May 17 00:04:56.308224 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 17 00:04:56.308248 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 17 00:04:56.308272 systemd[1]: Detected virtualization amazon. May 17 00:04:56.308293 systemd[1]: Detected architecture arm64. May 17 00:04:56.308313 systemd[1]: Running in initrd. May 17 00:04:56.308337 systemd[1]: No hostname configured, using default hostname. May 17 00:04:56.308357 systemd[1]: Hostname set to . May 17 00:04:56.308378 systemd[1]: Initializing machine ID from VM UUID. May 17 00:04:56.308398 systemd[1]: Queued start job for default target initrd.target. May 17 00:04:56.308418 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:04:56.308439 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:04:56.308461 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 17 00:04:56.308482 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 17 00:04:56.308507 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 17 00:04:56.308528 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 17 00:04:56.308551 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 17 00:04:56.308572 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 17 00:04:56.308593 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:04:56.308613 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 17 00:04:56.308634 systemd[1]: Reached target paths.target - Path Units. May 17 00:04:56.308660 systemd[1]: Reached target slices.target - Slice Units. May 17 00:04:56.308681 systemd[1]: Reached target swap.target - Swaps. May 17 00:04:56.308701 systemd[1]: Reached target timers.target - Timer Units. May 17 00:04:56.308722 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 17 00:04:56.308742 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 17 00:04:56.310841 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 17 00:04:56.310882 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 17 00:04:56.310904 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
May 17 00:04:56.310938 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 17 00:04:56.310960 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:04:56.310982 systemd[1]: Reached target sockets.target - Socket Units. May 17 00:04:56.311004 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 17 00:04:56.311025 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 17 00:04:56.311047 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 17 00:04:56.311068 systemd[1]: Starting systemd-fsck-usr.service... May 17 00:04:56.311090 systemd[1]: Starting systemd-journald.service - Journal Service... May 17 00:04:56.311111 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 17 00:04:56.311139 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:04:56.311178 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 17 00:04:56.311205 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:04:56.311227 systemd[1]: Finished systemd-fsck-usr.service. May 17 00:04:56.311308 systemd-journald[251]: Collecting audit messages is disabled. May 17 00:04:56.311368 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 17 00:04:56.311436 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 17 00:04:56.311465 kernel: Bridge firewalling registered May 17 00:04:56.311501 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:04:56.311525 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:04:56.311549 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 17 00:04:56.311573 systemd-journald[251]: Journal started May 17 00:04:56.311615 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2e6bd21da6e05f2210f1b3b622cbe5) is 8.0M, max 75.3M, 67.3M free. May 17 00:04:56.251417 systemd-modules-load[252]: Inserted module 'overlay' May 17 00:04:56.293607 systemd-modules-load[252]: Inserted module 'br_netfilter' May 17 00:04:56.323557 systemd[1]: Started systemd-journald.service - Journal Service. May 17 00:04:56.324314 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 00:04:56.343094 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 17 00:04:56.359309 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 17 00:04:56.367080 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 17 00:04:56.397301 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:04:56.417027 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:04:56.438839 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 17 00:04:56.443481 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 17 00:04:56.447334 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:04:56.469113 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
May 17 00:04:56.479861 dracut-cmdline[282]: dracut-dracut-053 May 17 00:04:56.487055 dracut-cmdline[282]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=3554ca41327a0c5ba7e4ac1b3147487d73f35805806dcb20264133a9c301eb5d May 17 00:04:56.556946 systemd-resolved[290]: Positive Trust Anchors: May 17 00:04:56.556975 systemd-resolved[290]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:04:56.557038 systemd-resolved[290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 17 00:04:56.697814 kernel: SCSI subsystem initialized May 17 00:04:56.706814 kernel: Loading iSCSI transport class v2.0-870. May 17 00:04:56.719827 kernel: iscsi: registered transport (tcp) May 17 00:04:56.743980 kernel: iscsi: registered transport (qla4xxx) May 17 00:04:56.744060 kernel: QLogic iSCSI HBA Driver May 17 00:04:56.802794 kernel: random: crng init done May 17 00:04:56.803135 systemd-resolved[290]: Defaulting to hostname 'linux'. May 17 00:04:56.807150 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 17 00:04:56.823575 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 17 00:04:56.851021 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 17 00:04:56.862091 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 17 00:04:56.914628 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 17 00:04:56.914711 kernel: device-mapper: uevent: version 1.0.3 May 17 00:04:56.916842 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 17 00:04:56.987833 kernel: raid6: neonx8 gen() 6625 MB/s May 17 00:04:57.004826 kernel: raid6: neonx4 gen() 6413 MB/s May 17 00:04:57.022822 kernel: raid6: neonx2 gen() 5371 MB/s May 17 00:04:57.039825 kernel: raid6: neonx1 gen() 3893 MB/s May 17 00:04:57.056822 kernel: raid6: int64x8 gen() 3750 MB/s May 17 00:04:57.073826 kernel: raid6: int64x4 gen() 3685 MB/s May 17 00:04:57.090829 kernel: raid6: int64x2 gen() 3553 MB/s May 17 00:04:57.108735 kernel: raid6: int64x1 gen() 2734 MB/s May 17 00:04:57.108849 kernel: raid6: using algorithm neonx8 gen() 6625 MB/s May 17 00:04:57.126973 kernel: raid6: .... 
xor() 4863 MB/s, rmw enabled May 17 00:04:57.127067 kernel: raid6: using neon recovery algorithm May 17 00:04:57.136605 kernel: xor: measuring software checksum speed May 17 00:04:57.136690 kernel: 8regs : 10605 MB/sec May 17 00:04:57.139074 kernel: 32regs : 10740 MB/sec May 17 00:04:57.139141 kernel: arm64_neon : 9096 MB/sec May 17 00:04:57.139183 kernel: xor: using function: 32regs (10740 MB/sec) May 17 00:04:57.226814 kernel: Btrfs loaded, zoned=no, fsverity=no May 17 00:04:57.248836 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 17 00:04:57.263049 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:04:57.294883 systemd-udevd[470]: Using default interface naming scheme 'v255'. May 17 00:04:57.303285 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:04:57.321098 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 17 00:04:57.364936 dracut-pre-trigger[472]: rd.md=0: removing MD RAID activation May 17 00:04:57.428434 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 17 00:04:57.441142 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 17 00:04:57.566209 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:04:57.587238 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 17 00:04:57.620396 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 17 00:04:57.628017 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 17 00:04:57.630822 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:04:57.633518 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 17 00:04:57.662854 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 17 00:04:57.703816 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 17 00:04:57.796571 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 May 17 00:04:57.796636 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) May 17 00:04:57.806694 kernel: ena 0000:00:05.0: ENA device version: 0.10 May 17 00:04:57.807097 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 May 17 00:04:57.816085 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 May 17 00:04:57.816160 kernel: nvme nvme0: pci function 0000:00:04.0 May 17 00:04:57.817533 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 00:04:57.825024 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:84:65:4d:87:79 May 17 00:04:57.817693 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:04:57.825271 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:04:57.827476 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:04:57.827594 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:04:57.833398 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:04:57.856135 kernel: nvme nvme0: 2/0/0 default/read/poll queues May 17 00:04:57.854983 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:04:57.866813 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
May 17 00:04:57.866891 kernel: GPT:9289727 != 16777215 May 17 00:04:57.866920 kernel: GPT:Alternate GPT header not at the end of the disk. May 17 00:04:57.866948 kernel: GPT:9289727 != 16777215 May 17 00:04:57.867035 kernel: GPT: Use GNU Parted to correct GPT errors. May 17 00:04:57.869056 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 17 00:04:57.875117 (udev-worker)[542]: Network interface NamePolicy= disabled on kernel command line. May 17 00:04:57.886856 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:04:57.902051 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:04:57.964639 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:04:57.981820 kernel: BTRFS: device fsid 4797bc80-d55e-4b4a-8ede-cb88964b0162 devid 1 transid 43 /dev/nvme0n1p3 scanned by (udev-worker) (518) May 17 00:04:58.010049 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (538) May 17 00:04:58.048558 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. May 17 00:04:58.093680 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. May 17 00:04:58.096930 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. May 17 00:04:58.128890 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. May 17 00:04:58.170327 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. May 17 00:04:58.186199 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 17 00:04:58.199227 disk-uuid[661]: Primary Header is updated. May 17 00:04:58.199227 disk-uuid[661]: Secondary Entries is updated. May 17 00:04:58.199227 disk-uuid[661]: Secondary Header is updated. May 17 00:04:58.208803 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 17 00:04:58.218284 kernel: GPT:disk_guids don't match. May 17 00:04:58.218360 kernel: GPT: Use GNU Parted to correct GPT errors. May 17 00:04:58.218386 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 17 00:04:58.227810 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 17 00:04:59.229925 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 17 00:04:59.230687 disk-uuid[662]: The operation has completed successfully. May 17 00:04:59.419714 systemd[1]: disk-uuid.service: Deactivated successfully. May 17 00:04:59.422122 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 17 00:04:59.474111 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 17 00:04:59.491484 sh[1003]: Success May 17 00:04:59.518812 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 17 00:04:59.633450 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 17 00:04:59.639355 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 17 00:04:59.649989 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
May 17 00:04:59.684400 kernel: BTRFS info (device dm-0): first mount of filesystem 4797bc80-d55e-4b4a-8ede-cb88964b0162 May 17 00:04:59.684461 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 17 00:04:59.684488 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 17 00:04:59.686109 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 17 00:04:59.687370 kernel: BTRFS info (device dm-0): using free space tree May 17 00:04:59.716798 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 17 00:04:59.727423 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 17 00:04:59.731323 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 17 00:04:59.744065 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 17 00:04:59.751562 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 17 00:04:59.789727 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf May 17 00:04:59.789824 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm May 17 00:04:59.791329 kernel: BTRFS info (device nvme0n1p6): using free space tree May 17 00:04:59.798822 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 17 00:04:59.816243 systemd[1]: mnt-oem.mount: Deactivated successfully. May 17 00:04:59.820465 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf May 17 00:04:59.830187 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 17 00:04:59.842113 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 17 00:04:59.944851 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 17 00:04:59.961085 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 17 00:05:00.022164 systemd-networkd[1197]: lo: Link UP May 17 00:05:00.023708 systemd-networkd[1197]: lo: Gained carrier May 17 00:05:00.028287 systemd-networkd[1197]: Enumeration completed May 17 00:05:00.029194 systemd[1]: Started systemd-networkd.service - Network Configuration. May 17 00:05:00.033158 systemd-networkd[1197]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:05:00.033166 systemd-networkd[1197]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:05:00.039577 systemd-networkd[1197]: eth0: Link UP May 17 00:05:00.039584 systemd-networkd[1197]: eth0: Gained carrier May 17 00:05:00.039602 systemd-networkd[1197]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:05:00.044312 systemd[1]: Reached target network.target - Network. May 17 00:05:00.063888 systemd-networkd[1197]: eth0: DHCPv4 address 172.31.26.249/20, gateway 172.31.16.1 acquired from 172.31.16.1 May 17 00:05:00.075122 ignition[1119]: Ignition 2.19.0 May 17 00:05:00.075627 ignition[1119]: Stage: fetch-offline May 17 00:05:00.076201 ignition[1119]: no configs at "/usr/lib/ignition/base.d" May 17 00:05:00.080239 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
May 17 00:05:00.076225 ignition[1119]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 17 00:05:00.076699 ignition[1119]: Ignition finished successfully May 17 00:05:00.098185 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... May 17 00:05:00.120482 ignition[1205]: Ignition 2.19.0 May 17 00:05:00.120513 ignition[1205]: Stage: fetch May 17 00:05:00.122178 ignition[1205]: no configs at "/usr/lib/ignition/base.d" May 17 00:05:00.122205 ignition[1205]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 17 00:05:00.123277 ignition[1205]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 17 00:05:00.146094 ignition[1205]: PUT result: OK May 17 00:05:00.148988 ignition[1205]: parsed url from cmdline: "" May 17 00:05:00.149011 ignition[1205]: no config URL provided May 17 00:05:00.149027 ignition[1205]: reading system config file "/usr/lib/ignition/user.ign" May 17 00:05:00.149053 ignition[1205]: no config at "/usr/lib/ignition/user.ign" May 17 00:05:00.149085 ignition[1205]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 17 00:05:00.161630 ignition[1205]: PUT result: OK May 17 00:05:00.161711 ignition[1205]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 May 17 00:05:00.165333 ignition[1205]: GET result: OK May 17 00:05:00.165482 ignition[1205]: parsing config with SHA512: 54b763d5d7f6664bfc4cd6911c33c23d7d549f533dc4eb146e32198e9f4bff5c73af6e1873a0c2379ee8a5faafc1863f7cab502567eb7715ddee456b3abe02cd May 17 00:05:00.173605 unknown[1205]: fetched base config from "system" May 17 00:05:00.173633 unknown[1205]: fetched base config from "system" May 17 00:05:00.175284 ignition[1205]: fetch: fetch complete May 17 00:05:00.173647 unknown[1205]: fetched user config from "aws" May 17 00:05:00.175587 ignition[1205]: fetch: fetch passed May 17 00:05:00.175735 ignition[1205]: Ignition finished successfully May 17 00:05:00.185900 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 17 00:05:00.203989 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 17 00:05:00.231169 ignition[1212]: Ignition 2.19.0 May 17 00:05:00.231190 ignition[1212]: Stage: kargs May 17 00:05:00.232310 ignition[1212]: no configs at "/usr/lib/ignition/base.d" May 17 00:05:00.232338 ignition[1212]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 17 00:05:00.232496 ignition[1212]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 17 00:05:00.236704 ignition[1212]: PUT result: OK May 17 00:05:00.244743 ignition[1212]: kargs: kargs passed May 17 00:05:00.244884 ignition[1212]: Ignition finished successfully May 17 00:05:00.249987 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 17 00:05:00.274149 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 17 00:05:00.297241 ignition[1218]: Ignition 2.19.0 May 17 00:05:00.297261 ignition[1218]: Stage: disks May 17 00:05:00.298437 ignition[1218]: no configs at "/usr/lib/ignition/base.d" May 17 00:05:00.298462 ignition[1218]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 17 00:05:00.298621 ignition[1218]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 17 00:05:00.301330 ignition[1218]: PUT result: OK May 17 00:05:00.311172 ignition[1218]: disks: disks passed May 17 00:05:00.311276 ignition[1218]: Ignition finished successfully May 17 00:05:00.316115 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 17 00:05:00.320020 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
May 17 00:05:00.324082 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 17 00:05:00.326352 systemd[1]: Reached target local-fs.target - Local File Systems. May 17 00:05:00.328274 systemd[1]: Reached target sysinit.target - System Initialization. May 17 00:05:00.330414 systemd[1]: Reached target basic.target - Basic System. May 17 00:05:00.350081 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 17 00:05:00.393165 systemd-fsck[1226]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 17 00:05:00.400205 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 17 00:05:00.420119 systemd[1]: Mounting sysroot.mount - /sysroot... May 17 00:05:00.507796 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 50a777b7-c00f-4923-84ce-1c186fc0fd3b r/w with ordered data mode. Quota mode: none. May 17 00:05:00.509171 systemd[1]: Mounted sysroot.mount - /sysroot. May 17 00:05:00.510082 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 17 00:05:00.524274 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 17 00:05:00.537264 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 17 00:05:00.541870 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 17 00:05:00.541959 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 17 00:05:00.542009 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 17 00:05:00.569211 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 17 00:05:00.572640 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1245) May 17 00:05:00.578060 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf May 17 00:05:00.578124 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm May 17 00:05:00.578151 kernel: BTRFS info (device nvme0n1p6): using free space tree May 17 00:05:00.584145 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 17 00:05:00.591817 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 17 00:05:00.595220 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 17 00:05:00.694262 initrd-setup-root[1269]: cut: /sysroot/etc/passwd: No such file or directory May 17 00:05:00.703743 initrd-setup-root[1276]: cut: /sysroot/etc/group: No such file or directory May 17 00:05:00.712798 initrd-setup-root[1283]: cut: /sysroot/etc/shadow: No such file or directory May 17 00:05:00.721382 initrd-setup-root[1290]: cut: /sysroot/etc/gshadow: No such file or directory May 17 00:05:00.868629 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 17 00:05:00.878040 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 17 00:05:00.885069 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 17 00:05:00.906092 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
May 17 00:05:00.908413 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf May 17 00:05:00.944892 ignition[1358]: INFO : Ignition 2.19.0 May 17 00:05:00.944892 ignition[1358]: INFO : Stage: mount May 17 00:05:00.948103 ignition[1358]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:05:00.948103 ignition[1358]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" May 17 00:05:00.952910 ignition[1358]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 17 00:05:00.950401 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 17 00:05:00.960822 ignition[1358]: INFO : PUT result: OK May 17 00:05:00.965386 ignition[1358]: INFO : mount: mount passed May 17 00:05:00.965386 ignition[1358]: INFO : Ignition finished successfully May 17 00:05:00.969923 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 17 00:05:00.981012 systemd[1]: Starting ignition-files.service - Ignition (files)... May 17 00:05:01.008181 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 17 00:05:01.032470 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1369) May 17 00:05:01.032535 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf May 17 00:05:01.032563 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm May 17 00:05:01.035185 kernel: BTRFS info (device nvme0n1p6): using free space tree May 17 00:05:01.041809 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 17 00:05:01.043654 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 17 00:05:01.086107 ignition[1386]: INFO : Ignition 2.19.0 May 17 00:05:01.086107 ignition[1386]: INFO : Stage: files May 17 00:05:01.089626 ignition[1386]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:05:01.089626 ignition[1386]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" May 17 00:05:01.089626 ignition[1386]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 17 00:05:01.097174 ignition[1386]: INFO : PUT result: OK May 17 00:05:01.101883 ignition[1386]: DEBUG : files: compiled without relabeling support, skipping May 17 00:05:01.106205 ignition[1386]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 17 00:05:01.106205 ignition[1386]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 17 00:05:01.117107 ignition[1386]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 17 00:05:01.125025 ignition[1386]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 17 00:05:01.128224 unknown[1386]: wrote ssh authorized keys file for user: core May 17 00:05:01.131403 ignition[1386]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 17 00:05:01.133902 ignition[1386]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 17 00:05:01.133902 ignition[1386]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 May 17 00:05:01.291207 ignition[1386]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 17 00:05:01.947934 systemd-networkd[1197]: eth0: Gained IPv6LL May 17 00:05:02.145421 ignition[1386]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing 
file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 17 00:05:02.149455 ignition[1386]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 17 00:05:02.152577 ignition[1386]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 May 17 00:05:06.765992 ignition[1386]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 17 00:05:06.918174 ignition[1386]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 17 00:05:06.922292 ignition[1386]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 17 00:05:06.922292 ignition[1386]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 17 00:05:06.922292 ignition[1386]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 17 00:05:06.922292 ignition[1386]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 17 00:05:06.922292 ignition[1386]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 00:05:06.922292 ignition[1386]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 00:05:06.922292 ignition[1386]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:05:06.922292 ignition[1386]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:05:06.922292 ignition[1386]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:05:06.922292 ignition[1386]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:05:06.922292 ignition[1386]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" May 17 00:05:06.922292 ignition[1386]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" May 17 00:05:06.922292 ignition[1386]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" May 17 00:05:06.922292 ignition[1386]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 May 17 00:05:07.675515 ignition[1386]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 17 00:05:08.004364 ignition[1386]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" May 17 00:05:08.004364 ignition[1386]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 17 00:05:08.012435 ignition[1386]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at 
"/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:05:08.012435 ignition[1386]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:05:08.012435 ignition[1386]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 17 00:05:08.012435 ignition[1386]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" May 17 00:05:08.012435 ignition[1386]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" May 17 00:05:08.012435 ignition[1386]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" May 17 00:05:08.012435 ignition[1386]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" May 17 00:05:08.012435 ignition[1386]: INFO : files: files passed May 17 00:05:08.012435 ignition[1386]: INFO : Ignition finished successfully May 17 00:05:08.039205 systemd[1]: Finished ignition-files.service - Ignition (files). May 17 00:05:08.049023 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 17 00:05:08.060344 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 17 00:05:08.078375 systemd[1]: ignition-quench.service: Deactivated successfully. May 17 00:05:08.078574 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 17 00:05:08.098691 initrd-setup-root-after-ignition[1414]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:05:08.102195 initrd-setup-root-after-ignition[1414]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 17 00:05:08.106363 initrd-setup-root-after-ignition[1418]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:05:08.112873 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 17 00:05:08.117864 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 17 00:05:08.132717 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 17 00:05:08.187632 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 17 00:05:08.188102 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 17 00:05:08.192496 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 17 00:05:08.194531 systemd[1]: Reached target initrd.target - Initrd Default Target. May 17 00:05:08.196548 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 17 00:05:08.214972 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 17 00:05:08.239530 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 17 00:05:08.250080 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 17 00:05:08.279946 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 17 00:05:08.284339 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:05:08.285479 systemd[1]: Stopped target timers.target - Timer Units. May 17 00:05:08.285894 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
May 17 00:05:08.286123 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 17 00:05:08.287377 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 17 00:05:08.287709 systemd[1]: Stopped target basic.target - Basic System. May 17 00:05:08.288325 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 17 00:05:08.288647 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 17 00:05:08.289258 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 17 00:05:08.289574 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 17 00:05:08.289904 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 17 00:05:08.290205 systemd[1]: Stopped target sysinit.target - System Initialization. May 17 00:05:08.290501 systemd[1]: Stopped target local-fs.target - Local File Systems. May 17 00:05:08.290820 systemd[1]: Stopped target swap.target - Swaps. May 17 00:05:08.291042 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 17 00:05:08.291261 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 17 00:05:08.292297 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 17 00:05:08.292955 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:05:08.293178 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 17 00:05:08.313638 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:05:08.316317 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 17 00:05:08.316542 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 17 00:05:08.317541 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 17 00:05:08.317753 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 17 00:05:08.318512 systemd[1]: ignition-files.service: Deactivated successfully. May 17 00:05:08.318703 systemd[1]: Stopped ignition-files.service - Ignition (files). May 17 00:05:08.380243 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 17 00:05:08.384255 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 17 00:05:08.388820 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 17 00:05:08.390071 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:05:08.396023 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 17 00:05:08.401363 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 17 00:05:08.413915 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 17 00:05:08.416968 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 17 00:05:08.438117 ignition[1438]: INFO : Ignition 2.19.0 May 17 00:05:08.438117 ignition[1438]: INFO : Stage: umount May 17 00:05:08.442818 ignition[1438]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:05:08.442818 ignition[1438]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" May 17 00:05:08.442818 ignition[1438]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 17 00:05:08.442541 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
May 17 00:05:08.452356 ignition[1438]: INFO : PUT result: OK May 17 00:05:08.457614 ignition[1438]: INFO : umount: umount passed May 17 00:05:08.459263 ignition[1438]: INFO : Ignition finished successfully May 17 00:05:08.463816 systemd[1]: ignition-mount.service: Deactivated successfully. May 17 00:05:08.465854 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 17 00:05:08.468287 systemd[1]: sysroot-boot.service: Deactivated successfully. May 17 00:05:08.468467 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 17 00:05:08.471117 systemd[1]: ignition-disks.service: Deactivated successfully. May 17 00:05:08.471276 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 17 00:05:08.473324 systemd[1]: ignition-kargs.service: Deactivated successfully. May 17 00:05:08.473981 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 17 00:05:08.485473 systemd[1]: ignition-fetch.service: Deactivated successfully. May 17 00:05:08.485559 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 17 00:05:08.487512 systemd[1]: Stopped target network.target - Network. May 17 00:05:08.489161 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 17 00:05:08.489245 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 17 00:05:08.491506 systemd[1]: Stopped target paths.target - Path Units. May 17 00:05:08.493202 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 17 00:05:08.495038 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:05:08.497492 systemd[1]: Stopped target slices.target - Slice Units. May 17 00:05:08.499215 systemd[1]: Stopped target sockets.target - Socket Units. May 17 00:05:08.501060 systemd[1]: iscsid.socket: Deactivated successfully. May 17 00:05:08.501136 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 17 00:05:08.503063 systemd[1]: iscsiuio.socket: Deactivated successfully. May 17 00:05:08.503151 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 17 00:05:08.505866 systemd[1]: ignition-setup.service: Deactivated successfully. May 17 00:05:08.505948 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 17 00:05:08.514439 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 17 00:05:08.514517 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 17 00:05:08.516553 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 17 00:05:08.516630 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 17 00:05:08.518984 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 17 00:05:08.521118 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 17 00:05:08.558970 systemd-networkd[1197]: eth0: DHCPv6 lease lost May 17 00:05:08.561687 systemd[1]: systemd-networkd.service: Deactivated successfully. May 17 00:05:08.561946 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 17 00:05:08.584440 systemd[1]: systemd-resolved.service: Deactivated successfully. May 17 00:05:08.584642 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 17 00:05:08.591442 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 17 00:05:08.591540 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 17 00:05:08.609369 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
May 17 00:05:08.611167 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 17 00:05:08.611278 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 17 00:05:08.613737 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 00:05:08.613869 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 17 00:05:08.615932 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 17 00:05:08.616017 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 17 00:05:08.618444 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 17 00:05:08.618520 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:05:08.638570 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:05:08.651369 systemd[1]: systemd-udevd.service: Deactivated successfully. May 17 00:05:08.654349 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:05:08.664103 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 17 00:05:08.664217 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 17 00:05:08.668233 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 17 00:05:08.668298 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:05:08.670337 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 17 00:05:08.670421 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 17 00:05:08.672650 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 17 00:05:08.672732 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 17 00:05:08.674897 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 00:05:08.674975 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:05:08.698387 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 17 00:05:08.703190 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 17 00:05:08.703323 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:05:08.711458 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 17 00:05:08.711563 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 00:05:08.717152 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 17 00:05:08.717249 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:05:08.721615 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:05:08.721703 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:05:08.724566 systemd[1]: network-cleanup.service: Deactivated successfully. May 17 00:05:08.724746 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 17 00:05:08.752035 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 17 00:05:08.752435 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 17 00:05:08.758263 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 17 00:05:08.775134 systemd[1]: Starting initrd-switch-root.service - Switch Root... 
May 17 00:05:08.793056 systemd[1]: Switching root. May 17 00:05:08.827638 systemd-journald[251]: Journal stopped May 17 00:05:10.624216 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). May 17 00:05:10.624395 kernel: SELinux: policy capability network_peer_controls=1 May 17 00:05:10.624452 kernel: SELinux: policy capability open_perms=1 May 17 00:05:10.624485 kernel: SELinux: policy capability extended_socket_class=1 May 17 00:05:10.624517 kernel: SELinux: policy capability always_check_network=0 May 17 00:05:10.624556 kernel: SELinux: policy capability cgroup_seclabel=1 May 17 00:05:10.626912 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 17 00:05:10.626980 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 17 00:05:10.627013 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 17 00:05:10.627044 kernel: audit: type=1403 audit(1747440309.124:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 17 00:05:10.627097 systemd[1]: Successfully loaded SELinux policy in 49.443ms. May 17 00:05:10.627154 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.982ms. May 17 00:05:10.627196 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 17 00:05:10.627237 systemd[1]: Detected virtualization amazon. May 17 00:05:10.627270 systemd[1]: Detected architecture arm64. May 17 00:05:10.627302 systemd[1]: Detected first boot. May 17 00:05:10.627336 systemd[1]: Initializing machine ID from VM UUID. May 17 00:05:10.627370 zram_generator::config[1482]: No configuration found. May 17 00:05:10.627417 systemd[1]: Populated /etc with preset unit settings. May 17 00:05:10.627450 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 17 00:05:10.627485 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 17 00:05:10.627518 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 17 00:05:10.627555 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 17 00:05:10.627590 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 17 00:05:10.627620 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 17 00:05:10.627660 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 17 00:05:10.627693 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 17 00:05:10.627724 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 17 00:05:10.629808 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 17 00:05:10.629879 systemd[1]: Created slice user.slice - User and Session Slice. May 17 00:05:10.629923 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:05:10.629956 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:05:10.629989 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 17 00:05:10.630022 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
May 17 00:05:10.630057 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 17 00:05:10.630094 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 17 00:05:10.630125 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 17 00:05:10.630155 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:05:10.630186 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 17 00:05:10.630221 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 17 00:05:10.630252 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 17 00:05:10.630285 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 17 00:05:10.630319 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:05:10.630352 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 17 00:05:10.630383 systemd[1]: Reached target slices.target - Slice Units. May 17 00:05:10.630415 systemd[1]: Reached target swap.target - Swaps. May 17 00:05:10.630449 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 17 00:05:10.630485 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 17 00:05:10.630516 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 17 00:05:10.630547 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 17 00:05:10.630576 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:05:10.630606 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 17 00:05:10.630635 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 17 00:05:10.630665 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 17 00:05:10.630697 systemd[1]: Mounting media.mount - External Media Directory... May 17 00:05:10.630741 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 17 00:05:10.634877 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 17 00:05:10.634921 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 17 00:05:10.634954 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 17 00:05:10.634985 systemd[1]: Reached target machines.target - Containers. May 17 00:05:10.635015 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 17 00:05:10.635044 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:05:10.635095 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 17 00:05:10.635128 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 17 00:05:10.635166 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:05:10.635200 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 17 00:05:10.635229 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:05:10.635259 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
May 17 00:05:10.635290 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:05:10.635322 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 17 00:05:10.635353 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 17 00:05:10.635384 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 17 00:05:10.635416 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 17 00:05:10.635452 systemd[1]: Stopped systemd-fsck-usr.service. May 17 00:05:10.635482 systemd[1]: Starting systemd-journald.service - Journal Service... May 17 00:05:10.635510 kernel: fuse: init (API version 7.39) May 17 00:05:10.635540 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 17 00:05:10.635571 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 17 00:05:10.635601 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 17 00:05:10.635633 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 17 00:05:10.635662 systemd[1]: verity-setup.service: Deactivated successfully. May 17 00:05:10.635692 systemd[1]: Stopped verity-setup.service. May 17 00:05:10.635725 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 17 00:05:10.635754 kernel: loop: module loaded May 17 00:05:10.635810 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 17 00:05:10.635841 systemd[1]: Mounted media.mount - External Media Directory. May 17 00:05:10.635919 systemd-journald[1564]: Collecting audit messages is disabled. May 17 00:05:10.635974 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 17 00:05:10.636005 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 17 00:05:10.636035 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 17 00:05:10.636068 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:05:10.636100 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 17 00:05:10.636132 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 17 00:05:10.636162 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:05:10.636195 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:05:10.636224 systemd-journald[1564]: Journal started May 17 00:05:10.636277 systemd-journald[1564]: Runtime Journal (/run/log/journal/ec2e6bd21da6e05f2210f1b3b622cbe5) is 8.0M, max 75.3M, 67.3M free. May 17 00:05:10.072797 systemd[1]: Queued start job for default target multi-user.target. May 17 00:05:10.644327 systemd[1]: Started systemd-journald.service - Journal Service. May 17 00:05:10.096607 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. May 17 00:05:10.097526 systemd[1]: systemd-journald.service: Deactivated successfully. May 17 00:05:10.645784 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:05:10.646122 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 00:05:10.649328 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 17 00:05:10.650040 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
May 17 00:05:10.653927 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:05:10.654798 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:05:10.658086 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 17 00:05:10.661183 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 17 00:05:10.665485 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 17 00:05:10.670799 kernel: ACPI: bus type drm_connector registered May 17 00:05:10.676118 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:05:10.676467 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 17 00:05:10.697996 systemd[1]: Reached target network-pre.target - Preparation for Network. May 17 00:05:10.710085 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 17 00:05:10.722944 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 17 00:05:10.725102 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 17 00:05:10.725158 systemd[1]: Reached target local-fs.target - Local File Systems. May 17 00:05:10.741521 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 17 00:05:10.749824 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 17 00:05:10.756622 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 17 00:05:10.758863 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:05:10.761954 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 17 00:05:10.777136 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 17 00:05:10.780970 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:05:10.787059 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 17 00:05:10.791018 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 17 00:05:10.808552 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 17 00:05:10.819755 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 17 00:05:10.828166 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 17 00:05:10.837954 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 17 00:05:10.840811 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 17 00:05:10.844171 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 17 00:05:10.849851 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 17 00:05:10.869427 systemd-journald[1564]: Time spent on flushing to /var/log/journal/ec2e6bd21da6e05f2210f1b3b622cbe5 is 171.193ms for 914 entries. May 17 00:05:10.869427 systemd-journald[1564]: System Journal (/var/log/journal/ec2e6bd21da6e05f2210f1b3b622cbe5) is 8.0M, max 195.6M, 187.6M free. 
May 17 00:05:11.058877 systemd-journald[1564]: Received client request to flush runtime journal. May 17 00:05:11.059181 kernel: loop0: detected capacity change from 0 to 114328 May 17 00:05:11.059221 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 17 00:05:11.060349 kernel: loop1: detected capacity change from 0 to 203944 May 17 00:05:10.897108 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 17 00:05:10.901341 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 17 00:05:10.927207 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 17 00:05:10.966488 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 17 00:05:10.972578 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 17 00:05:10.987689 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 17 00:05:11.000371 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:05:11.010178 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 17 00:05:11.039830 systemd-tmpfiles[1611]: ACLs are not supported, ignoring. May 17 00:05:11.039855 systemd-tmpfiles[1611]: ACLs are not supported, ignoring. May 17 00:05:11.072584 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 17 00:05:11.075795 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 00:05:11.092274 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 17 00:05:11.100670 udevadm[1626]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 17 00:05:11.120103 kernel: loop2: detected capacity change from 0 to 52536 May 17 00:05:11.184865 kernel: loop3: detected capacity change from 0 to 114432 May 17 00:05:11.193863 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 17 00:05:11.206897 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 17 00:05:11.280812 kernel: loop4: detected capacity change from 0 to 114328 May 17 00:05:11.297461 systemd-tmpfiles[1636]: ACLs are not supported, ignoring. May 17 00:05:11.297502 systemd-tmpfiles[1636]: ACLs are not supported, ignoring. May 17 00:05:11.307845 kernel: loop5: detected capacity change from 0 to 203944 May 17 00:05:11.315539 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:05:11.352794 kernel: loop6: detected capacity change from 0 to 52536 May 17 00:05:11.377849 kernel: loop7: detected capacity change from 0 to 114432 May 17 00:05:11.406126 (sd-merge)[1640]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. May 17 00:05:11.407136 (sd-merge)[1640]: Merged extensions into '/usr'. May 17 00:05:11.418058 systemd[1]: Reloading requested from client PID 1610 ('systemd-sysext') (unit systemd-sysext.service)... May 17 00:05:11.418085 systemd[1]: Reloading... May 17 00:05:11.544791 zram_generator::config[1667]: No configuration found. May 17 00:05:11.845838 ldconfig[1605]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
May 17 00:05:11.930536 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:05:12.047309 systemd[1]: Reloading finished in 627 ms. May 17 00:05:12.083485 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 17 00:05:12.089532 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 17 00:05:12.104181 systemd[1]: Starting ensure-sysext.service... May 17 00:05:12.114200 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 17 00:05:12.144972 systemd[1]: Reloading requested from client PID 1719 ('systemctl') (unit ensure-sysext.service)... May 17 00:05:12.145006 systemd[1]: Reloading... May 17 00:05:12.159795 systemd-tmpfiles[1720]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 17 00:05:12.161457 systemd-tmpfiles[1720]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 17 00:05:12.168116 systemd-tmpfiles[1720]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 17 00:05:12.168706 systemd-tmpfiles[1720]: ACLs are not supported, ignoring. May 17 00:05:12.168870 systemd-tmpfiles[1720]: ACLs are not supported, ignoring. May 17 00:05:12.187735 systemd-tmpfiles[1720]: Detected autofs mount point /boot during canonicalization of boot. May 17 00:05:12.188829 systemd-tmpfiles[1720]: Skipping /boot May 17 00:05:12.215746 systemd-tmpfiles[1720]: Detected autofs mount point /boot during canonicalization of boot. May 17 00:05:12.215795 systemd-tmpfiles[1720]: Skipping /boot May 17 00:05:12.313846 zram_generator::config[1750]: No configuration found. May 17 00:05:12.528607 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:05:12.639246 systemd[1]: Reloading finished in 493 ms. May 17 00:05:12.666235 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 17 00:05:12.673716 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:05:12.691298 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 17 00:05:12.703311 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 17 00:05:12.710095 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 17 00:05:12.725051 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 17 00:05:12.732157 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:05:12.739109 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 17 00:05:12.752151 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:05:12.765198 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:05:12.774310 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:05:12.781948 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
May 17 00:05:12.784233 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:05:12.788722 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:05:12.791213 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:05:12.801823 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 17 00:05:12.809843 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 17 00:05:12.813439 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:05:12.816821 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:05:12.834426 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:05:12.835417 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 00:05:12.855577 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:05:12.867242 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 17 00:05:12.874963 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:05:12.877244 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:05:12.877335 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:05:12.877418 systemd[1]: Reached target time-set.target - System Time Set. May 17 00:05:12.883492 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 17 00:05:12.888245 systemd[1]: Finished ensure-sysext.service. May 17 00:05:12.890585 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 17 00:05:12.896014 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:05:12.896690 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:05:12.932988 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:05:12.934035 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 17 00:05:12.956618 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:05:12.957044 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:05:12.959641 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 17 00:05:12.972297 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 17 00:05:12.984361 systemd-udevd[1809]: Using default interface naming scheme 'v255'. May 17 00:05:12.989111 augenrules[1839]: No rules May 17 00:05:12.992142 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 17 00:05:13.024117 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 17 00:05:13.055748 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 17 00:05:13.058940 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
May 17 00:05:13.062661 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:05:13.076104 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 17 00:05:13.205186 systemd-resolved[1806]: Positive Trust Anchors: May 17 00:05:13.205729 systemd-resolved[1806]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:05:13.205929 systemd-resolved[1806]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 17 00:05:13.221702 systemd-resolved[1806]: Defaulting to hostname 'linux'. May 17 00:05:13.226506 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 17 00:05:13.228817 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 17 00:05:13.247544 systemd-networkd[1852]: lo: Link UP May 17 00:05:13.247561 systemd-networkd[1852]: lo: Gained carrier May 17 00:05:13.251433 systemd-networkd[1852]: Enumeration completed May 17 00:05:13.252307 systemd[1]: Started systemd-networkd.service - Network Configuration. May 17 00:05:13.254586 systemd[1]: Reached target network.target - Network. May 17 00:05:13.286827 (udev-worker)[1859]: Network interface NamePolicy= disabled on kernel command line. May 17 00:05:13.323677 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 17 00:05:13.327376 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 17 00:05:13.406788 systemd-networkd[1852]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:05:13.406804 systemd-networkd[1852]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:05:13.409034 systemd-networkd[1852]: eth0: Link UP May 17 00:05:13.411905 systemd-networkd[1852]: eth0: Gained carrier May 17 00:05:13.411952 systemd-networkd[1852]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:05:13.426328 systemd-networkd[1852]: eth0: DHCPv4 address 172.31.26.249/20, gateway 172.31.16.1 acquired from 172.31.16.1 May 17 00:05:13.455830 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 43 scanned by (udev-worker) (1859) May 17 00:05:13.577480 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:05:13.741847 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. May 17 00:05:13.748068 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 17 00:05:13.751020 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 17 00:05:13.764313 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 17 00:05:13.785871 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
May 17 00:05:13.797208 lvm[1973]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:05:13.797056 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 17 00:05:13.840672 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 17 00:05:13.843664 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 17 00:05:13.845969 systemd[1]: Reached target sysinit.target - System Initialization. May 17 00:05:13.848280 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 17 00:05:13.850669 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 17 00:05:13.853311 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 17 00:05:13.855506 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 17 00:05:13.857900 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 17 00:05:13.860268 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 17 00:05:13.860312 systemd[1]: Reached target paths.target - Path Units. May 17 00:05:13.862031 systemd[1]: Reached target timers.target - Timer Units. May 17 00:05:13.865279 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 17 00:05:13.869962 systemd[1]: Starting docker.socket - Docker Socket for the API... May 17 00:05:13.881035 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 17 00:05:13.885596 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 17 00:05:13.889034 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 17 00:05:13.892131 systemd[1]: Reached target sockets.target - Socket Units. May 17 00:05:13.894212 systemd[1]: Reached target basic.target - Basic System. May 17 00:05:13.896403 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 17 00:05:13.896641 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 17 00:05:13.905091 systemd[1]: Starting containerd.service - containerd container runtime... May 17 00:05:13.912122 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 17 00:05:13.919172 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 17 00:05:13.930826 lvm[1981]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:05:13.935013 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 17 00:05:13.940038 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 17 00:05:13.942057 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 17 00:05:13.956059 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 17 00:05:13.962517 systemd[1]: Started ntpd.service - Network Time Service. May 17 00:05:13.981197 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 17 00:05:13.988251 systemd[1]: Starting setup-oem.service - Setup OEM... 
May 17 00:05:13.995261 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 17 00:05:14.004204 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 17 00:05:14.016105 systemd[1]: Starting systemd-logind.service - User Login Management... May 17 00:05:14.019310 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 17 00:05:14.020261 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 17 00:05:14.024529 systemd[1]: Starting update-engine.service - Update Engine... May 17 00:05:14.039887 jq[1985]: false May 17 00:05:14.033029 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 17 00:05:14.051539 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 17 00:05:14.053348 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 17 00:05:14.092002 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 17 00:05:14.159609 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 17 00:05:14.160061 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 17 00:05:14.173623 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 17 00:05:14.173292 dbus-daemon[1984]: [system] SELinux support is enabled May 17 00:05:14.183891 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 17 00:05:14.188334 dbus-daemon[1984]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1852 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") May 17 00:05:14.215132 jq[1996]: true May 17 00:05:14.199693 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
May 17 00:05:14.208335 ntpd[1988]: ntpd 4.2.8p17@1.4004-o Fri May 16 22:02:25 UTC 2025 (1): Starting May 17 00:05:14.230365 extend-filesystems[1986]: Found loop4 May 17 00:05:14.230365 extend-filesystems[1986]: Found loop5 May 17 00:05:14.230365 extend-filesystems[1986]: Found loop6 May 17 00:05:14.230365 extend-filesystems[1986]: Found loop7 May 17 00:05:14.230365 extend-filesystems[1986]: Found nvme0n1 May 17 00:05:14.230365 extend-filesystems[1986]: Found nvme0n1p1 May 17 00:05:14.230365 extend-filesystems[1986]: Found nvme0n1p2 May 17 00:05:14.230365 extend-filesystems[1986]: Found nvme0n1p3 May 17 00:05:14.230365 extend-filesystems[1986]: Found usr May 17 00:05:14.230365 extend-filesystems[1986]: Found nvme0n1p4 May 17 00:05:14.230365 extend-filesystems[1986]: Found nvme0n1p6 May 17 00:05:14.230365 extend-filesystems[1986]: Found nvme0n1p7 May 17 00:05:14.230365 extend-filesystems[1986]: Found nvme0n1p9 May 17 00:05:14.230365 extend-filesystems[1986]: Checking size of /dev/nvme0n1p9 May 17 00:05:14.286224 ntpd[1988]: 17 May 00:05:14 ntpd[1988]: ntpd 4.2.8p17@1.4004-o Fri May 16 22:02:25 UTC 2025 (1): Starting May 17 00:05:14.286224 ntpd[1988]: 17 May 00:05:14 ntpd[1988]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp May 17 00:05:14.286224 ntpd[1988]: 17 May 00:05:14 ntpd[1988]: ---------------------------------------------------- May 17 00:05:14.286224 ntpd[1988]: 17 May 00:05:14 ntpd[1988]: ntp-4 is maintained by Network Time Foundation, May 17 00:05:14.286224 ntpd[1988]: 17 May 00:05:14 ntpd[1988]: Inc. (NTF), a non-profit 501(c)(3) public-benefit May 17 00:05:14.286224 ntpd[1988]: 17 May 00:05:14 ntpd[1988]: corporation. Support and training for ntp-4 are May 17 00:05:14.286224 ntpd[1988]: 17 May 00:05:14 ntpd[1988]: available at https://www.nwtime.org/support May 17 00:05:14.286224 ntpd[1988]: 17 May 00:05:14 ntpd[1988]: ---------------------------------------------------- May 17 00:05:14.286224 ntpd[1988]: 17 May 00:05:14 ntpd[1988]: proto: precision = 0.096 usec (-23) May 17 00:05:14.286224 ntpd[1988]: 17 May 00:05:14 ntpd[1988]: basedate set to 2025-05-04 May 17 00:05:14.286224 ntpd[1988]: 17 May 00:05:14 ntpd[1988]: gps base set to 2025-05-04 (week 2365) May 17 00:05:14.286224 ntpd[1988]: 17 May 00:05:14 ntpd[1988]: Listen and drop on 0 v6wildcard [::]:123 May 17 00:05:14.286224 ntpd[1988]: 17 May 00:05:14 ntpd[1988]: Listen and drop on 1 v4wildcard 0.0.0.0:123 May 17 00:05:14.286224 ntpd[1988]: 17 May 00:05:14 ntpd[1988]: Listen normally on 2 lo 127.0.0.1:123 May 17 00:05:14.286224 ntpd[1988]: 17 May 00:05:14 ntpd[1988]: Listen normally on 3 eth0 172.31.26.249:123 May 17 00:05:14.286224 ntpd[1988]: 17 May 00:05:14 ntpd[1988]: Listen normally on 4 lo [::1]:123 May 17 00:05:14.286224 ntpd[1988]: 17 May 00:05:14 ntpd[1988]: bind(21) AF_INET6 fe80::484:65ff:fe4d:8779%2#123 flags 0x11 failed: Cannot assign requested address May 17 00:05:14.286224 ntpd[1988]: 17 May 00:05:14 ntpd[1988]: unable to create socket on eth0 (5) for fe80::484:65ff:fe4d:8779%2#123 May 17 00:05:14.286224 ntpd[1988]: 17 May 00:05:14 ntpd[1988]: failed to init interface for address fe80::484:65ff:fe4d:8779%2 May 17 00:05:14.286224 ntpd[1988]: 17 May 00:05:14 ntpd[1988]: Listening on routing socket on fd #21 for interface updates May 17 00:05:14.286224 ntpd[1988]: 17 May 00:05:14 ntpd[1988]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 17 00:05:14.286224 ntpd[1988]: 17 May 00:05:14 ntpd[1988]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 17 00:05:14.301433 update_engine[1995]: 
I20250517 00:05:14.205476 1995 main.cc:92] Flatcar Update Engine starting May 17 00:05:14.301433 update_engine[1995]: I20250517 00:05:14.218118 1995 update_check_scheduler.cc:74] Next update check in 11m39s May 17 00:05:14.199743 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 17 00:05:14.208384 ntpd[1988]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp May 17 00:05:14.202192 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 17 00:05:14.208404 ntpd[1988]: ---------------------------------------------------- May 17 00:05:14.202230 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 17 00:05:14.208423 ntpd[1988]: ntp-4 is maintained by Network Time Foundation, May 17 00:05:14.237449 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... May 17 00:05:14.208443 ntpd[1988]: Inc. (NTF), a non-profit 501(c)(3) public-benefit May 17 00:05:14.277908 (ntainerd)[2012]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 17 00:05:14.208462 ntpd[1988]: corporation. Support and training for ntp-4 are May 17 00:05:14.281068 systemd[1]: motdgen.service: Deactivated successfully. May 17 00:05:14.208482 ntpd[1988]: available at https://www.nwtime.org/support May 17 00:05:14.281464 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 17 00:05:14.208501 ntpd[1988]: ---------------------------------------------------- May 17 00:05:14.305698 systemd[1]: Started update-engine.service - Update Engine. May 17 00:05:14.210516 dbus-daemon[1984]: [system] Successfully activated service 'org.freedesktop.systemd1' May 17 00:05:14.216570 ntpd[1988]: proto: precision = 0.096 usec (-23) May 17 00:05:14.217038 ntpd[1988]: basedate set to 2025-05-04 May 17 00:05:14.217064 ntpd[1988]: gps base set to 2025-05-04 (week 2365) May 17 00:05:14.220410 ntpd[1988]: Listen and drop on 0 v6wildcard [::]:123 May 17 00:05:14.220498 ntpd[1988]: Listen and drop on 1 v4wildcard 0.0.0.0:123 May 17 00:05:14.220980 ntpd[1988]: Listen normally on 2 lo 127.0.0.1:123 May 17 00:05:14.221049 ntpd[1988]: Listen normally on 3 eth0 172.31.26.249:123 May 17 00:05:14.221127 ntpd[1988]: Listen normally on 4 lo [::1]:123 May 17 00:05:14.221202 ntpd[1988]: bind(21) AF_INET6 fe80::484:65ff:fe4d:8779%2#123 flags 0x11 failed: Cannot assign requested address May 17 00:05:14.221241 ntpd[1988]: unable to create socket on eth0 (5) for fe80::484:65ff:fe4d:8779%2#123 May 17 00:05:14.221273 ntpd[1988]: failed to init interface for address fe80::484:65ff:fe4d:8779%2 May 17 00:05:14.221327 ntpd[1988]: Listening on routing socket on fd #21 for interface updates May 17 00:05:14.228901 ntpd[1988]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 17 00:05:14.229671 ntpd[1988]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 17 00:05:14.327084 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
May 17 00:05:14.350468 tar[2007]: linux-arm64/helm May 17 00:05:14.357223 jq[2023]: true May 17 00:05:14.401177 extend-filesystems[1986]: Resized partition /dev/nvme0n1p9 May 17 00:05:14.414635 extend-filesystems[2037]: resize2fs 1.47.1 (20-May-2024) May 17 00:05:14.456788 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks May 17 00:05:14.476907 systemd[1]: Finished setup-oem.service - Setup OEM. May 17 00:05:14.622465 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 May 17 00:05:14.640722 coreos-metadata[1983]: May 17 00:05:14.640 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 May 17 00:05:14.654802 extend-filesystems[2037]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required May 17 00:05:14.654802 extend-filesystems[2037]: old_desc_blocks = 1, new_desc_blocks = 1 May 17 00:05:14.654802 extend-filesystems[2037]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. May 17 00:05:14.667352 extend-filesystems[1986]: Resized filesystem in /dev/nvme0n1p9 May 17 00:05:14.683382 coreos-metadata[1983]: May 17 00:05:14.661 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 May 17 00:05:14.670735 systemd[1]: extend-filesystems.service: Deactivated successfully. May 17 00:05:14.671140 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 17 00:05:14.694818 coreos-metadata[1983]: May 17 00:05:14.692 INFO Fetch successful May 17 00:05:14.694818 coreos-metadata[1983]: May 17 00:05:14.693 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 May 17 00:05:14.696165 coreos-metadata[1983]: May 17 00:05:14.695 INFO Fetch successful May 17 00:05:14.696165 coreos-metadata[1983]: May 17 00:05:14.695 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 May 17 00:05:14.697037 coreos-metadata[1983]: May 17 00:05:14.696 INFO Fetch successful May 17 00:05:14.697037 coreos-metadata[1983]: May 17 00:05:14.696 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 May 17 00:05:14.699871 coreos-metadata[1983]: May 17 00:05:14.697 INFO Fetch successful May 17 00:05:14.699871 coreos-metadata[1983]: May 17 00:05:14.697 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 May 17 00:05:14.707238 coreos-metadata[1983]: May 17 00:05:14.703 INFO Fetch failed with 404: resource not found May 17 00:05:14.707238 coreos-metadata[1983]: May 17 00:05:14.703 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 May 17 00:05:14.708741 coreos-metadata[1983]: May 17 00:05:14.708 INFO Fetch successful May 17 00:05:14.708741 coreos-metadata[1983]: May 17 00:05:14.708 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 May 17 00:05:14.710904 bash[2066]: Updated "/home/core/.ssh/authorized_keys" May 17 00:05:14.711349 coreos-metadata[1983]: May 17 00:05:14.710 INFO Fetch successful May 17 00:05:14.711349 coreos-metadata[1983]: May 17 00:05:14.710 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 May 17 00:05:14.722828 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
May 17 00:05:14.734045 coreos-metadata[1983]: May 17 00:05:14.719 INFO Fetch successful May 17 00:05:14.734045 coreos-metadata[1983]: May 17 00:05:14.719 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 May 17 00:05:14.734045 coreos-metadata[1983]: May 17 00:05:14.728 INFO Fetch successful May 17 00:05:14.734045 coreos-metadata[1983]: May 17 00:05:14.728 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 May 17 00:05:14.734045 coreos-metadata[1983]: May 17 00:05:14.731 INFO Fetch successful May 17 00:05:14.736668 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 43 scanned by (udev-worker) (1867) May 17 00:05:14.735089 systemd[1]: Starting sshkeys.service... May 17 00:05:14.758598 systemd-logind[1993]: Watching system buttons on /dev/input/event0 (Power Button) May 17 00:05:14.762886 systemd-logind[1993]: Watching system buttons on /dev/input/event1 (Sleep Button) May 17 00:05:14.766350 locksmithd[2031]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 17 00:05:14.770130 systemd-logind[1993]: New seat seat0. May 17 00:05:14.833694 systemd[1]: Started systemd-logind.service - User Login Management. May 17 00:05:14.887835 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 17 00:05:14.897590 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 17 00:05:14.920088 dbus-daemon[1984]: [system] Successfully activated service 'org.freedesktop.hostname1' May 17 00:05:14.921870 systemd[1]: Started systemd-hostnamed.service - Hostname Service. May 17 00:05:14.925817 dbus-daemon[1984]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2022 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") May 17 00:05:14.937385 systemd[1]: Starting polkit.service - Authorization Manager... May 17 00:05:14.953883 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 17 00:05:14.959073 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 17 00:05:14.982213 containerd[2012]: time="2025-05-17T00:05:14.982086424Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 17 00:05:15.018791 polkitd[2095]: Started polkitd version 121 May 17 00:05:15.035184 polkitd[2095]: Loading rules from directory /etc/polkit-1/rules.d May 17 00:05:15.035330 polkitd[2095]: Loading rules from directory /usr/share/polkit-1/rules.d May 17 00:05:15.037836 polkitd[2095]: Finished loading, compiling and executing 2 rules May 17 00:05:15.039906 dbus-daemon[1984]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' May 17 00:05:15.040185 systemd[1]: Started polkit.service - Authorization Manager. May 17 00:05:15.043393 polkitd[2095]: Acquired the name org.freedesktop.PolicyKit1 on the system bus May 17 00:05:15.117784 systemd-hostnamed[2022]: Hostname set to (transient) May 17 00:05:15.117942 systemd-resolved[1806]: System hostname changed to 'ip-172-31-26-249'. May 17 00:05:15.147847 containerd[2012]: time="2025-05-17T00:05:15.147213565Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 May 17 00:05:15.154983 containerd[2012]: time="2025-05-17T00:05:15.154899829Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.90-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 17 00:05:15.154983 containerd[2012]: time="2025-05-17T00:05:15.154975921Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 17 00:05:15.155157 containerd[2012]: time="2025-05-17T00:05:15.155014429Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 17 00:05:15.157796 containerd[2012]: time="2025-05-17T00:05:15.155358361Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 17 00:05:15.157796 containerd[2012]: time="2025-05-17T00:05:15.155408437Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 17 00:05:15.157796 containerd[2012]: time="2025-05-17T00:05:15.155536189Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:05:15.157796 containerd[2012]: time="2025-05-17T00:05:15.155569681Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 17 00:05:15.157796 containerd[2012]: time="2025-05-17T00:05:15.155909245Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:05:15.157796 containerd[2012]: time="2025-05-17T00:05:15.155943721Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 17 00:05:15.157796 containerd[2012]: time="2025-05-17T00:05:15.155976421Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:05:15.157796 containerd[2012]: time="2025-05-17T00:05:15.156005125Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 17 00:05:15.157796 containerd[2012]: time="2025-05-17T00:05:15.156171517Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 17 00:05:15.157796 containerd[2012]: time="2025-05-17T00:05:15.156550105Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 17 00:05:15.158849 containerd[2012]: time="2025-05-17T00:05:15.156741229Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:05:15.158915 containerd[2012]: time="2025-05-17T00:05:15.158851765Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 May 17 00:05:15.159136 containerd[2012]: time="2025-05-17T00:05:15.159096889Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 17 00:05:15.159252 containerd[2012]: time="2025-05-17T00:05:15.159212413Z" level=info msg="metadata content store policy set" policy=shared May 17 00:05:15.169683 containerd[2012]: time="2025-05-17T00:05:15.169611409Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 17 00:05:15.169829 containerd[2012]: time="2025-05-17T00:05:15.169731889Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 17 00:05:15.169913 containerd[2012]: time="2025-05-17T00:05:15.169859773Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 17 00:05:15.169965 containerd[2012]: time="2025-05-17T00:05:15.169903333Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 17 00:05:15.169965 containerd[2012]: time="2025-05-17T00:05:15.169952113Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 17 00:05:15.170522 containerd[2012]: time="2025-05-17T00:05:15.170231773Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 17 00:05:15.171019 containerd[2012]: time="2025-05-17T00:05:15.170976229Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 17 00:05:15.171297 containerd[2012]: time="2025-05-17T00:05:15.171249829Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 17 00:05:15.171361 containerd[2012]: time="2025-05-17T00:05:15.171298753Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 17 00:05:15.171361 containerd[2012]: time="2025-05-17T00:05:15.171333421Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 17 00:05:15.171471 containerd[2012]: time="2025-05-17T00:05:15.171366289Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 17 00:05:15.171471 containerd[2012]: time="2025-05-17T00:05:15.171396853Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 17 00:05:15.171471 containerd[2012]: time="2025-05-17T00:05:15.171426073Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 17 00:05:15.171471 containerd[2012]: time="2025-05-17T00:05:15.171457261Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 17 00:05:15.171632 containerd[2012]: time="2025-05-17T00:05:15.171489361Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 17 00:05:15.171632 containerd[2012]: time="2025-05-17T00:05:15.171519757Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 17 00:05:15.171632 containerd[2012]: time="2025-05-17T00:05:15.171563245Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 May 17 00:05:15.171632 containerd[2012]: time="2025-05-17T00:05:15.171591085Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 17 00:05:15.173933 containerd[2012]: time="2025-05-17T00:05:15.171630229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 17 00:05:15.173933 containerd[2012]: time="2025-05-17T00:05:15.171661153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 17 00:05:15.173933 containerd[2012]: time="2025-05-17T00:05:15.171690577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 17 00:05:15.173933 containerd[2012]: time="2025-05-17T00:05:15.171721405Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 17 00:05:15.173933 containerd[2012]: time="2025-05-17T00:05:15.172551553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 17 00:05:15.173933 containerd[2012]: time="2025-05-17T00:05:15.172594933Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 17 00:05:15.173933 containerd[2012]: time="2025-05-17T00:05:15.172624657Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 17 00:05:15.173933 containerd[2012]: time="2025-05-17T00:05:15.172675585Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 17 00:05:15.173933 containerd[2012]: time="2025-05-17T00:05:15.172708189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 17 00:05:15.173933 containerd[2012]: time="2025-05-17T00:05:15.172744285Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 17 00:05:15.173933 containerd[2012]: time="2025-05-17T00:05:15.172804633Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 17 00:05:15.173933 containerd[2012]: time="2025-05-17T00:05:15.172841833Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 17 00:05:15.173933 containerd[2012]: time="2025-05-17T00:05:15.172884505Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 17 00:05:15.173933 containerd[2012]: time="2025-05-17T00:05:15.172931245Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 17 00:05:15.173933 containerd[2012]: time="2025-05-17T00:05:15.172987969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 17 00:05:15.174589 containerd[2012]: time="2025-05-17T00:05:15.173019253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 17 00:05:15.174589 containerd[2012]: time="2025-05-17T00:05:15.173046229Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 17 00:05:15.177780 containerd[2012]: time="2025-05-17T00:05:15.174894217Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 May 17 00:05:15.177780 containerd[2012]: time="2025-05-17T00:05:15.174976921Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 17 00:05:15.177780 containerd[2012]: time="2025-05-17T00:05:15.175009417Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 17 00:05:15.177780 containerd[2012]: time="2025-05-17T00:05:15.175058101Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 17 00:05:15.177780 containerd[2012]: time="2025-05-17T00:05:15.175119241Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 17 00:05:15.177780 containerd[2012]: time="2025-05-17T00:05:15.175160965Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 17 00:05:15.177780 containerd[2012]: time="2025-05-17T00:05:15.175186741Z" level=info msg="NRI interface is disabled by configuration." May 17 00:05:15.177780 containerd[2012]: time="2025-05-17T00:05:15.175212769Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 17 00:05:15.178243 containerd[2012]: time="2025-05-17T00:05:15.175706149Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 17 00:05:15.178243 containerd[2012]: time="2025-05-17T00:05:15.175850797Z" level=info msg="Connect containerd service" May 17 00:05:15.178243 containerd[2012]: time="2025-05-17T00:05:15.175928029Z" level=info msg="using legacy CRI server" May 17 00:05:15.178243 containerd[2012]: time="2025-05-17T00:05:15.175946545Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 17 00:05:15.178243 containerd[2012]: time="2025-05-17T00:05:15.176094865Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 17 00:05:15.185629 containerd[2012]: time="2025-05-17T00:05:15.184271113Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:05:15.185629 containerd[2012]: time="2025-05-17T00:05:15.184910605Z" level=info msg="Start subscribing containerd event" May 17 00:05:15.185629 containerd[2012]: time="2025-05-17T00:05:15.185001157Z" level=info msg="Start recovering state" May 17 00:05:15.185629 containerd[2012]: time="2025-05-17T00:05:15.185128897Z" level=info msg="Start event monitor" May 17 00:05:15.185629 containerd[2012]: time="2025-05-17T00:05:15.185154181Z" level=info msg="Start snapshots syncer" May 17 00:05:15.185629 containerd[2012]: time="2025-05-17T00:05:15.185177977Z" level=info msg="Start cni network conf syncer for default" May 17 00:05:15.185629 containerd[2012]: time="2025-05-17T00:05:15.185207257Z" level=info msg="Start streaming server" May 17 00:05:15.191254 containerd[2012]: time="2025-05-17T00:05:15.188590837Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 17 00:05:15.191254 containerd[2012]: time="2025-05-17T00:05:15.188731645Z" level=info msg=serving... address=/run/containerd/containerd.sock May 17 00:05:15.189142 systemd[1]: Started containerd.service - containerd container runtime. May 17 00:05:15.197832 systemd-networkd[1852]: eth0: Gained IPv6LL May 17 00:05:15.200155 containerd[2012]: time="2025-05-17T00:05:15.198198673Z" level=info msg="containerd successfully booted in 0.227357s" May 17 00:05:15.227096 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 17 00:05:15.230927 systemd[1]: Reached target network-online.target - Network is Online. May 17 00:05:15.264047 coreos-metadata[2088]: May 17 00:05:15.263 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 May 17 00:05:15.264641 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. May 17 00:05:15.273167 coreos-metadata[2088]: May 17 00:05:15.268 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 May 17 00:05:15.271977 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
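The containerd error above ("no network config found in /etc/cni/net.d") is expected at this stage: the CRI plugin looks for a CNI config under /etc/cni/net.d and plugin binaries under /opt/cni/bin (both paths appear in the config dump). A hypothetical minimal conflist that would satisfy that check is sketched below; in practice the cluster's network add-on installs the real one later.
cat <<'EOF' | sudo tee /etc/cni/net.d/10-containerd-net.conflist
{
  "cniVersion": "0.4.0",
  "name": "containerd-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "ranges": [[ { "subnet": "10.88.0.0/16" } ]] }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF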
May 17 00:05:15.294800 coreos-metadata[2088]: May 17 00:05:15.281 INFO Fetch successful May 17 00:05:15.294800 coreos-metadata[2088]: May 17 00:05:15.284 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 May 17 00:05:15.294800 coreos-metadata[2088]: May 17 00:05:15.293 INFO Fetch successful May 17 00:05:15.284868 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 17 00:05:15.307671 unknown[2088]: wrote ssh authorized keys file for user: core May 17 00:05:15.413015 update-ssh-keys[2178]: Updated "/home/core/.ssh/authorized_keys" May 17 00:05:15.421536 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 17 00:05:15.434536 systemd[1]: Finished sshkeys.service. May 17 00:05:15.471870 amazon-ssm-agent[2165]: Initializing new seelog logger May 17 00:05:15.476553 amazon-ssm-agent[2165]: New Seelog Logger Creation Complete May 17 00:05:15.476710 amazon-ssm-agent[2165]: 2025/05/17 00:05:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 17 00:05:15.476710 amazon-ssm-agent[2165]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 17 00:05:15.481226 amazon-ssm-agent[2165]: 2025/05/17 00:05:15 processing appconfig overrides May 17 00:05:15.491062 amazon-ssm-agent[2165]: 2025/05/17 00:05:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 17 00:05:15.491062 amazon-ssm-agent[2165]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 17 00:05:15.491239 amazon-ssm-agent[2165]: 2025/05/17 00:05:15 processing appconfig overrides May 17 00:05:15.491468 amazon-ssm-agent[2165]: 2025/05/17 00:05:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 17 00:05:15.491468 amazon-ssm-agent[2165]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 17 00:05:15.491571 amazon-ssm-agent[2165]: 2025/05/17 00:05:15 processing appconfig overrides May 17 00:05:15.492416 amazon-ssm-agent[2165]: 2025-05-17 00:05:15 INFO Proxy environment variables: May 17 00:05:15.506971 amazon-ssm-agent[2165]: 2025/05/17 00:05:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 17 00:05:15.506971 amazon-ssm-agent[2165]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 17 00:05:15.506971 amazon-ssm-agent[2165]: 2025/05/17 00:05:15 processing appconfig overrides May 17 00:05:15.503453 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 17 00:05:15.597472 amazon-ssm-agent[2165]: 2025-05-17 00:05:15 INFO https_proxy: May 17 00:05:15.700195 amazon-ssm-agent[2165]: 2025-05-17 00:05:15 INFO http_proxy: May 17 00:05:15.799911 amazon-ssm-agent[2165]: 2025-05-17 00:05:15 INFO no_proxy: May 17 00:05:15.899794 amazon-ssm-agent[2165]: 2025-05-17 00:05:15 INFO Checking if agent identity type OnPrem can be assumed May 17 00:05:15.997647 amazon-ssm-agent[2165]: 2025-05-17 00:05:15 INFO Checking if agent identity type EC2 can be assumed May 17 00:05:16.098754 amazon-ssm-agent[2165]: 2025-05-17 00:05:15 INFO Agent will take identity from EC2 May 17 00:05:16.164346 sshd_keygen[2019]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 17 00:05:16.196146 amazon-ssm-agent[2165]: 2025-05-17 00:05:15 INFO [amazon-ssm-agent] using named pipe channel for IPC May 17 00:05:16.282636 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 17 00:05:16.294247 systemd[1]: Starting issuegen.service - Generate /run/issue... 
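sshd-keygen above generated the missing host keys (RSA, ECDSA, ED25519). The manual equivalent, purely as a sketch:
sudo ssh-keygen -A   # create any missing /etc/ssh/ssh_host_*_key pairs with default settings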
May 17 00:05:16.300927 systemd[1]: Started sshd@0-172.31.26.249:22-139.178.89.65:40734.service - OpenSSH per-connection server daemon (139.178.89.65:40734). May 17 00:05:16.307687 amazon-ssm-agent[2165]: 2025-05-17 00:05:15 INFO [amazon-ssm-agent] using named pipe channel for IPC May 17 00:05:16.361888 systemd[1]: issuegen.service: Deactivated successfully. May 17 00:05:16.363829 systemd[1]: Finished issuegen.service - Generate /run/issue. May 17 00:05:16.382359 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 17 00:05:16.406888 amazon-ssm-agent[2165]: 2025-05-17 00:05:15 INFO [amazon-ssm-agent] using named pipe channel for IPC May 17 00:05:16.430411 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 17 00:05:16.448502 systemd[1]: Started getty@tty1.service - Getty on tty1. May 17 00:05:16.462168 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 17 00:05:16.464826 systemd[1]: Reached target getty.target - Login Prompts. May 17 00:05:16.506780 amazon-ssm-agent[2165]: 2025-05-17 00:05:15 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 May 17 00:05:16.520071 tar[2007]: linux-arm64/LICENSE May 17 00:05:16.520611 tar[2007]: linux-arm64/README.md May 17 00:05:16.555120 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 17 00:05:16.594867 sshd[2218]: Accepted publickey for core from 139.178.89.65 port 40734 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:05:16.601035 sshd[2218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:05:16.605223 amazon-ssm-agent[2165]: 2025-05-17 00:05:15 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 May 17 00:05:16.621858 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 17 00:05:16.634314 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 17 00:05:16.639977 amazon-ssm-agent[2165]: 2025-05-17 00:05:15 INFO [amazon-ssm-agent] Starting Core Agent May 17 00:05:16.639977 amazon-ssm-agent[2165]: 2025-05-17 00:05:15 INFO [amazon-ssm-agent] registrar detected. Attempting registration May 17 00:05:16.639977 amazon-ssm-agent[2165]: 2025-05-17 00:05:15 INFO [Registrar] Starting registrar module May 17 00:05:16.639977 amazon-ssm-agent[2165]: 2025-05-17 00:05:15 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration May 17 00:05:16.639977 amazon-ssm-agent[2165]: 2025-05-17 00:05:16 INFO [EC2Identity] EC2 registration was successful. May 17 00:05:16.639977 amazon-ssm-agent[2165]: 2025-05-17 00:05:16 INFO [CredentialRefresher] credentialRefresher has started May 17 00:05:16.639977 amazon-ssm-agent[2165]: 2025-05-17 00:05:16 INFO [CredentialRefresher] Starting credentials refresher loop May 17 00:05:16.639977 amazon-ssm-agent[2165]: 2025-05-17 00:05:16 INFO EC2RoleProvider Successfully connected with instance profile role credentials May 17 00:05:16.644967 systemd-logind[1993]: New session 1 of user core. May 17 00:05:16.678436 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 17 00:05:16.695421 systemd[1]: Starting user@500.service - User Manager for UID 500... 
May 17 00:05:16.704861 amazon-ssm-agent[2165]: 2025-05-17 00:05:16 INFO [CredentialRefresher] Next credential rotation will be in 31.516572125266666 minutes May 17 00:05:16.707183 (systemd)[2232]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 17 00:05:16.939596 systemd[2232]: Queued start job for default target default.target. May 17 00:05:16.948312 systemd[2232]: Created slice app.slice - User Application Slice. May 17 00:05:16.948387 systemd[2232]: Reached target paths.target - Paths. May 17 00:05:16.948421 systemd[2232]: Reached target timers.target - Timers. May 17 00:05:16.953016 systemd[2232]: Starting dbus.socket - D-Bus User Message Bus Socket... May 17 00:05:16.985444 systemd[2232]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 17 00:05:16.985686 systemd[2232]: Reached target sockets.target - Sockets. May 17 00:05:16.985719 systemd[2232]: Reached target basic.target - Basic System. May 17 00:05:16.985856 systemd[2232]: Reached target default.target - Main User Target. May 17 00:05:16.985919 systemd[2232]: Startup finished in 266ms. May 17 00:05:16.986050 systemd[1]: Started user@500.service - User Manager for UID 500. May 17 00:05:16.998086 systemd[1]: Started session-1.scope - Session 1 of User core. May 17 00:05:17.157369 systemd[1]: Started sshd@1-172.31.26.249:22-139.178.89.65:48194.service - OpenSSH per-connection server daemon (139.178.89.65:48194). May 17 00:05:17.209074 ntpd[1988]: Listen normally on 6 eth0 [fe80::484:65ff:fe4d:8779%2]:123 May 17 00:05:17.211617 ntpd[1988]: 17 May 00:05:17 ntpd[1988]: Listen normally on 6 eth0 [fe80::484:65ff:fe4d:8779%2]:123 May 17 00:05:17.342146 sshd[2243]: Accepted publickey for core from 139.178.89.65 port 48194 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:05:17.345065 sshd[2243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:05:17.353025 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:05:17.358237 systemd[1]: Reached target multi-user.target - Multi-User System. May 17 00:05:17.363001 systemd[1]: Startup finished in 1.274s (kernel) + 13.310s (initrd) + 8.285s (userspace) = 22.870s. May 17 00:05:17.367864 systemd-logind[1993]: New session 2 of user core. May 17 00:05:17.371058 (kubelet)[2250]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:05:17.373060 systemd[1]: Started session-2.scope - Session 2 of User core. May 17 00:05:17.506129 sshd[2243]: pam_unix(sshd:session): session closed for user core May 17 00:05:17.513244 systemd[1]: sshd@1-172.31.26.249:22-139.178.89.65:48194.service: Deactivated successfully. May 17 00:05:17.517191 systemd[1]: session-2.scope: Deactivated successfully. May 17 00:05:17.520881 systemd-logind[1993]: Session 2 logged out. Waiting for processes to exit. May 17 00:05:17.524522 systemd-logind[1993]: Removed session 2. May 17 00:05:17.543440 systemd[1]: Started sshd@2-172.31.26.249:22-139.178.89.65:48210.service - OpenSSH per-connection server daemon (139.178.89.65:48210). 
May 17 00:05:17.670453 amazon-ssm-agent[2165]: 2025-05-17 00:05:17 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process May 17 00:05:17.721637 sshd[2260]: Accepted publickey for core from 139.178.89.65 port 48210 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:05:17.725318 sshd[2260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:05:17.737852 systemd-logind[1993]: New session 3 of user core. May 17 00:05:17.743502 systemd[1]: Started session-3.scope - Session 3 of User core. May 17 00:05:17.772165 amazon-ssm-agent[2165]: 2025-05-17 00:05:17 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2267) started May 17 00:05:17.872315 amazon-ssm-agent[2165]: 2025-05-17 00:05:17 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds May 17 00:05:17.877058 sshd[2260]: pam_unix(sshd:session): session closed for user core May 17 00:05:17.888217 systemd[1]: sshd@2-172.31.26.249:22-139.178.89.65:48210.service: Deactivated successfully. May 17 00:05:17.896643 systemd[1]: session-3.scope: Deactivated successfully. May 17 00:05:17.900275 systemd-logind[1993]: Session 3 logged out. Waiting for processes to exit. May 17 00:05:17.924314 systemd[1]: Started sshd@3-172.31.26.249:22-139.178.89.65:48216.service - OpenSSH per-connection server daemon (139.178.89.65:48216). May 17 00:05:17.927308 systemd-logind[1993]: Removed session 3. May 17 00:05:18.103091 sshd[2281]: Accepted publickey for core from 139.178.89.65 port 48216 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:05:18.106058 sshd[2281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:05:18.114577 systemd-logind[1993]: New session 4 of user core. May 17 00:05:18.122046 systemd[1]: Started session-4.scope - Session 4 of User core. May 17 00:05:18.257615 sshd[2281]: pam_unix(sshd:session): session closed for user core May 17 00:05:18.266805 systemd[1]: sshd@3-172.31.26.249:22-139.178.89.65:48216.service: Deactivated successfully. May 17 00:05:18.273109 systemd[1]: session-4.scope: Deactivated successfully. May 17 00:05:18.275129 systemd-logind[1993]: Session 4 logged out. Waiting for processes to exit. May 17 00:05:18.292251 systemd-logind[1993]: Removed session 4. May 17 00:05:18.299462 systemd[1]: Started sshd@4-172.31.26.249:22-139.178.89.65:48230.service - OpenSSH per-connection server daemon (139.178.89.65:48230). May 17 00:05:18.306952 kubelet[2250]: E0517 00:05:18.306864 2250 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:05:18.312936 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:05:18.313413 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:05:18.314077 systemd[1]: kubelet.service: Consumed 1.377s CPU time. 
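kubelet exits above because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-provisioned node that file is normally written by kubeadm init/join rather than by hand. Purely as an illustrative sketch (hypothetical minimal content, not what kubeadm would generate on this node):
cat <<'EOF' | sudo tee /var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF
sudo systemctl restart kubelet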
May 17 00:05:18.473122 sshd[2290]: Accepted publickey for core from 139.178.89.65 port 48230 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:05:18.475704 sshd[2290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:05:18.484075 systemd-logind[1993]: New session 5 of user core. May 17 00:05:18.495042 systemd[1]: Started session-5.scope - Session 5 of User core. May 17 00:05:18.613350 sudo[2294]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 17 00:05:18.614522 sudo[2294]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:05:18.633399 sudo[2294]: pam_unix(sudo:session): session closed for user root May 17 00:05:18.659176 sshd[2290]: pam_unix(sshd:session): session closed for user core May 17 00:05:18.664679 systemd-logind[1993]: Session 5 logged out. Waiting for processes to exit. May 17 00:05:18.665986 systemd[1]: sshd@4-172.31.26.249:22-139.178.89.65:48230.service: Deactivated successfully. May 17 00:05:18.670202 systemd[1]: session-5.scope: Deactivated successfully. May 17 00:05:18.675432 systemd-logind[1993]: Removed session 5. May 17 00:05:18.699278 systemd[1]: Started sshd@5-172.31.26.249:22-139.178.89.65:48244.service - OpenSSH per-connection server daemon (139.178.89.65:48244). May 17 00:05:18.860904 sshd[2299]: Accepted publickey for core from 139.178.89.65 port 48244 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:05:18.863482 sshd[2299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:05:18.872090 systemd-logind[1993]: New session 6 of user core. May 17 00:05:18.880053 systemd[1]: Started session-6.scope - Session 6 of User core. May 17 00:05:18.983437 sudo[2303]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 17 00:05:18.984213 sudo[2303]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:05:18.990304 sudo[2303]: pam_unix(sudo:session): session closed for user root May 17 00:05:19.000929 sudo[2302]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 17 00:05:19.001572 sudo[2302]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:05:19.022295 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 17 00:05:19.041548 auditctl[2306]: No rules May 17 00:05:19.042348 systemd[1]: audit-rules.service: Deactivated successfully. May 17 00:05:19.042827 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 17 00:05:19.051674 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 17 00:05:19.104323 augenrules[2324]: No rules May 17 00:05:19.107127 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 17 00:05:19.110461 sudo[2302]: pam_unix(sudo:session): session closed for user root May 17 00:05:19.133705 sshd[2299]: pam_unix(sshd:session): session closed for user core May 17 00:05:19.140744 systemd[1]: sshd@5-172.31.26.249:22-139.178.89.65:48244.service: Deactivated successfully. May 17 00:05:19.145426 systemd[1]: session-6.scope: Deactivated successfully. May 17 00:05:19.146644 systemd-logind[1993]: Session 6 logged out. Waiting for processes to exit. May 17 00:05:19.149508 systemd-logind[1993]: Removed session 6. 
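The sudo and audit-rules entries above correspond to roughly this command sequence, reproduced as a sketch:
sudo rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
sudo systemctl restart audit-rules   # re-runs augenrules, which now reports "No rules"
sudo auditctl -l                     # list the (now empty) loaded audit rule set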
May 17 00:05:19.173326 systemd[1]: Started sshd@6-172.31.26.249:22-139.178.89.65:48258.service - OpenSSH per-connection server daemon (139.178.89.65:48258). May 17 00:05:19.347974 sshd[2332]: Accepted publickey for core from 139.178.89.65 port 48258 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:05:19.350824 sshd[2332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:05:19.360106 systemd-logind[1993]: New session 7 of user core. May 17 00:05:19.372018 systemd[1]: Started session-7.scope - Session 7 of User core. May 17 00:05:19.476289 sudo[2335]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 17 00:05:19.477009 sudo[2335]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:05:19.908239 systemd[1]: Starting docker.service - Docker Application Container Engine... May 17 00:05:19.909860 (dockerd)[2351]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 17 00:05:20.269273 dockerd[2351]: time="2025-05-17T00:05:20.269090047Z" level=info msg="Starting up" May 17 00:05:20.581672 dockerd[2351]: time="2025-05-17T00:05:20.581142956Z" level=info msg="Loading containers: start." May 17 00:05:20.729894 kernel: Initializing XFRM netlink socket May 17 00:05:20.763277 (udev-worker)[2377]: Network interface NamePolicy= disabled on kernel command line. May 17 00:05:20.850121 systemd-networkd[1852]: docker0: Link UP May 17 00:05:20.872279 dockerd[2351]: time="2025-05-17T00:05:20.872125234Z" level=info msg="Loading containers: done." May 17 00:05:20.906050 dockerd[2351]: time="2025-05-17T00:05:20.905896258Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 17 00:05:20.906269 dockerd[2351]: time="2025-05-17T00:05:20.906055258Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 17 00:05:20.906269 dockerd[2351]: time="2025-05-17T00:05:20.906250534Z" level=info msg="Daemon has completed initialization" May 17 00:05:20.906682 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck246377110-merged.mount: Deactivated successfully. May 17 00:05:20.957674 dockerd[2351]: time="2025-05-17T00:05:20.957554506Z" level=info msg="API listen on /run/docker.sock" May 17 00:05:20.958200 systemd[1]: Started docker.service - Docker Application Container Engine. May 17 00:05:21.606696 systemd-resolved[1806]: Clock change detected. Flushing caches. May 17 00:05:22.461016 containerd[2012]: time="2025-05-17T00:05:22.460947984Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\"" May 17 00:05:23.106876 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4087227197.mount: Deactivated successfully. 
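The PullImage entries around this point come from containerd's CRI image service fetching the Kubernetes control-plane images. Roughly the same pull can be reproduced by hand, sketched here assuming crictl is installed and pointed at /run/containerd/containerd.sock:
sudo crictl pull registry.k8s.io/kube-apiserver:v1.31.9
sudo ctr -n k8s.io images ls | grep kube-apiserver   # CRI-pulled images live in containerd's k8s.io namespace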
May 17 00:05:25.123453 containerd[2012]: time="2025-05-17T00:05:25.123391969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:25.125647 containerd[2012]: time="2025-05-17T00:05:25.125583601Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.9: active requests=0, bytes read=25651974" May 17 00:05:25.126333 containerd[2012]: time="2025-05-17T00:05:25.126293173Z" level=info msg="ImageCreate event name:\"sha256:90d52158b7646075e7e560c1bd670904ba3f4f4c8c199106bf96ee0944663d61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:25.132181 containerd[2012]: time="2025-05-17T00:05:25.132127633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:25.134630 containerd[2012]: time="2025-05-17T00:05:25.134581105Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.9\" with image id \"sha256:90d52158b7646075e7e560c1bd670904ba3f4f4c8c199106bf96ee0944663d61\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\", size \"25648774\" in 2.673566341s" May 17 00:05:25.134986 containerd[2012]: time="2025-05-17T00:05:25.134772073Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\" returns image reference \"sha256:90d52158b7646075e7e560c1bd670904ba3f4f4c8c199106bf96ee0944663d61\"" May 17 00:05:25.136982 containerd[2012]: time="2025-05-17T00:05:25.136930225Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\"" May 17 00:05:27.267988 containerd[2012]: time="2025-05-17T00:05:27.267686788Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:27.269821 containerd[2012]: time="2025-05-17T00:05:27.269768908Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.9: active requests=0, bytes read=22459528" May 17 00:05:27.270538 containerd[2012]: time="2025-05-17T00:05:27.270258076Z" level=info msg="ImageCreate event name:\"sha256:2d03fe540daca1d9520c403342787715eab3b05fb6773ea41153572716c82dba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:27.275931 containerd[2012]: time="2025-05-17T00:05:27.275843464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:27.278816 containerd[2012]: time="2025-05-17T00:05:27.278121760Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.9\" with image id \"sha256:2d03fe540daca1d9520c403342787715eab3b05fb6773ea41153572716c82dba\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\", size \"23995294\" in 2.140970903s" May 17 00:05:27.278816 containerd[2012]: time="2025-05-17T00:05:27.278182048Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\" returns image reference \"sha256:2d03fe540daca1d9520c403342787715eab3b05fb6773ea41153572716c82dba\"" May 17 00:05:27.279549 
containerd[2012]: time="2025-05-17T00:05:27.279258652Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\"" May 17 00:05:28.794933 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 17 00:05:28.813636 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:05:29.202953 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:05:29.215586 (kubelet)[2563]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:05:29.297722 kubelet[2563]: E0517 00:05:29.297663 2563 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:05:29.306099 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:05:29.306659 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:05:29.410959 containerd[2012]: time="2025-05-17T00:05:29.410889942Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:29.412557 containerd[2012]: time="2025-05-17T00:05:29.412469586Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.9: active requests=0, bytes read=17125279" May 17 00:05:29.413667 containerd[2012]: time="2025-05-17T00:05:29.413596662Z" level=info msg="ImageCreate event name:\"sha256:b333fec06af219faaf48f1784baa0b7274945b2e5be5bd2fca2681f7d1baff5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:29.419534 containerd[2012]: time="2025-05-17T00:05:29.419231298Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:29.422468 containerd[2012]: time="2025-05-17T00:05:29.421818738Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.9\" with image id \"sha256:b333fec06af219faaf48f1784baa0b7274945b2e5be5bd2fca2681f7d1baff5f\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\", size \"18661063\" in 2.142503686s" May 17 00:05:29.422468 containerd[2012]: time="2025-05-17T00:05:29.421882362Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\" returns image reference \"sha256:b333fec06af219faaf48f1784baa0b7274945b2e5be5bd2fca2681f7d1baff5f\"" May 17 00:05:29.423168 containerd[2012]: time="2025-05-17T00:05:29.422901150Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\"" May 17 00:05:30.987118 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1523429653.mount: Deactivated successfully. 
May 17 00:05:31.512855 containerd[2012]: time="2025-05-17T00:05:31.511428645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:31.512855 containerd[2012]: time="2025-05-17T00:05:31.512806221Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.9: active requests=0, bytes read=26871375" May 17 00:05:31.513783 containerd[2012]: time="2025-05-17T00:05:31.513724893Z" level=info msg="ImageCreate event name:\"sha256:cbfba5e6542fe387b24d9e73bf5a054a6b07b95af1392268fd82b6f449ef1c27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:31.518766 containerd[2012]: time="2025-05-17T00:05:31.518713785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:31.519929 containerd[2012]: time="2025-05-17T00:05:31.519869253Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.9\" with image id \"sha256:cbfba5e6542fe387b24d9e73bf5a054a6b07b95af1392268fd82b6f449ef1c27\", repo tag \"registry.k8s.io/kube-proxy:v1.31.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\", size \"26870394\" in 2.096910251s" May 17 00:05:31.520041 containerd[2012]: time="2025-05-17T00:05:31.519926829Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference \"sha256:cbfba5e6542fe387b24d9e73bf5a054a6b07b95af1392268fd82b6f449ef1c27\"" May 17 00:05:31.521002 containerd[2012]: time="2025-05-17T00:05:31.520954461Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 17 00:05:32.153071 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1476775485.mount: Deactivated successfully. 
May 17 00:05:33.393274 containerd[2012]: time="2025-05-17T00:05:33.393205246Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:33.399548 containerd[2012]: time="2025-05-17T00:05:33.397881286Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:33.399548 containerd[2012]: time="2025-05-17T00:05:33.398048158Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" May 17 00:05:33.406431 containerd[2012]: time="2025-05-17T00:05:33.406363678Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:33.411326 containerd[2012]: time="2025-05-17T00:05:33.411270190Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.890260217s" May 17 00:05:33.411535 containerd[2012]: time="2025-05-17T00:05:33.411482170Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" May 17 00:05:33.412366 containerd[2012]: time="2025-05-17T00:05:33.412192690Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 17 00:05:33.911373 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2220786241.mount: Deactivated successfully. 
May 17 00:05:33.920366 containerd[2012]: time="2025-05-17T00:05:33.920005741Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:33.921656 containerd[2012]: time="2025-05-17T00:05:33.921604741Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" May 17 00:05:33.922852 containerd[2012]: time="2025-05-17T00:05:33.922764637Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:33.927530 containerd[2012]: time="2025-05-17T00:05:33.927165661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:33.929013 containerd[2012]: time="2025-05-17T00:05:33.928820605Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 516.191739ms" May 17 00:05:33.929013 containerd[2012]: time="2025-05-17T00:05:33.928879105Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 17 00:05:33.930376 containerd[2012]: time="2025-05-17T00:05:33.930088585Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 17 00:05:34.487016 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount367756444.mount: Deactivated successfully. May 17 00:05:37.104909 containerd[2012]: time="2025-05-17T00:05:37.104272608Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:37.106809 containerd[2012]: time="2025-05-17T00:05:37.106743492Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406465" May 17 00:05:37.108221 containerd[2012]: time="2025-05-17T00:05:37.108131244Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:37.115135 containerd[2012]: time="2025-05-17T00:05:37.115081813Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:37.118182 containerd[2012]: time="2025-05-17T00:05:37.117941185Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 3.1877986s" May 17 00:05:37.118182 containerd[2012]: time="2025-05-17T00:05:37.118003225Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" May 17 00:05:39.544457 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
May 17 00:05:39.554007 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:05:39.909004 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:05:39.914839 (kubelet)[2717]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:05:39.990518 kubelet[2717]: E0517 00:05:39.988956 2717 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:05:39.993917 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:05:39.994240 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:05:44.240087 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:05:44.251006 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:05:44.306684 systemd[1]: Reloading requested from client PID 2731 ('systemctl') (unit session-7.scope)... May 17 00:05:44.306883 systemd[1]: Reloading... May 17 00:05:44.532460 zram_generator::config[2772]: No configuration found. May 17 00:05:44.771162 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:05:44.944602 systemd[1]: Reloading finished in 636 ms. May 17 00:05:45.025011 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:05:45.044044 (kubelet)[2825]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 00:05:45.050233 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:05:45.051303 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:05:45.051785 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:05:45.060111 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:05:45.373699 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:05:45.390305 (kubelet)[2838]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 00:05:45.462796 kubelet[2838]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:05:45.463300 kubelet[2838]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 17 00:05:45.463415 kubelet[2838]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
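The "Reloading requested from client PID 2731 ('systemctl')" entry is a systemd daemon reload, and the kubelet that then starts is driven by unit drop-ins plus flags it reports as deprecated. A sketch of how that configuration could be inspected on the node, assuming the usual kubeadm-style drop-in layout:
sudo systemctl daemon-reload    # what the 'systemctl' client above requested
systemctl cat kubelet.service   # show the unit plus drop-ins, e.g. the one setting KUBELET_KUBEADM_ARGS
journalctl -u kubelet -b | tail -n 20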
May 17 00:05:45.464474 kubelet[2838]: I0517 00:05:45.463737 2838 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:05:45.550543 systemd[1]: systemd-hostnamed.service: Deactivated successfully. May 17 00:05:46.844350 kubelet[2838]: I0517 00:05:46.843739 2838 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 17 00:05:46.844350 kubelet[2838]: I0517 00:05:46.843785 2838 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:05:46.846969 kubelet[2838]: I0517 00:05:46.844908 2838 server.go:934] "Client rotation is on, will bootstrap in background" May 17 00:05:46.894520 kubelet[2838]: E0517 00:05:46.894399 2838 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.26.249:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.26.249:6443: connect: connection refused" logger="UnhandledError" May 17 00:05:46.904814 kubelet[2838]: I0517 00:05:46.904750 2838 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:05:46.915648 kubelet[2838]: E0517 00:05:46.915582 2838 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:05:46.915648 kubelet[2838]: I0517 00:05:46.915637 2838 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:05:46.923457 kubelet[2838]: I0517 00:05:46.923360 2838 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:05:46.923808 kubelet[2838]: I0517 00:05:46.923767 2838 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 17 00:05:46.924174 kubelet[2838]: I0517 00:05:46.924113 2838 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:05:46.924459 kubelet[2838]: I0517 00:05:46.924167 2838 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-26-249","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:05:46.924661 kubelet[2838]: I0517 00:05:46.924542 2838 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:05:46.924661 kubelet[2838]: I0517 00:05:46.924566 2838 container_manager_linux.go:300] "Creating device plugin manager" May 17 00:05:46.924787 kubelet[2838]: I0517 00:05:46.924771 2838 state_mem.go:36] "Initialized new in-memory state store" May 17 00:05:46.930011 kubelet[2838]: I0517 00:05:46.929959 2838 kubelet.go:408] "Attempting to sync node with API server" May 17 00:05:46.930011 kubelet[2838]: I0517 00:05:46.930007 2838 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:05:46.930151 kubelet[2838]: I0517 00:05:46.930042 2838 kubelet.go:314] "Adding apiserver pod source" May 17 00:05:46.930151 kubelet[2838]: I0517 00:05:46.930090 2838 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:05:46.931733 kubelet[2838]: W0517 00:05:46.931646 2838 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.26.249:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-249&limit=500&resourceVersion=0": dial tcp 172.31.26.249:6443: connect: connection refused May 17 00:05:46.931869 kubelet[2838]: E0517 00:05:46.931743 2838 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://172.31.26.249:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-249&limit=500&resourceVersion=0\": dial tcp 172.31.26.249:6443: connect: connection refused" logger="UnhandledError" May 17 00:05:46.938709 kubelet[2838]: W0517 00:05:46.938522 2838 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.26.249:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.26.249:6443: connect: connection refused May 17 00:05:46.938709 kubelet[2838]: E0517 00:05:46.938609 2838 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.26.249:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.26.249:6443: connect: connection refused" logger="UnhandledError" May 17 00:05:46.939547 kubelet[2838]: I0517 00:05:46.939150 2838 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 00:05:46.941345 kubelet[2838]: I0517 00:05:46.940023 2838 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:05:46.941345 kubelet[2838]: W0517 00:05:46.940146 2838 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 17 00:05:46.941786 kubelet[2838]: I0517 00:05:46.941734 2838 server.go:1274] "Started kubelet" May 17 00:05:46.947557 kubelet[2838]: I0517 00:05:46.945866 2838 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:05:46.947557 kubelet[2838]: I0517 00:05:46.946621 2838 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:05:46.947557 kubelet[2838]: I0517 00:05:46.947413 2838 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:05:46.947557 kubelet[2838]: I0517 00:05:46.947444 2838 server.go:449] "Adding debug handlers to kubelet server" May 17 00:05:46.955591 kubelet[2838]: E0517 00:05:46.947771 2838 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.26.249:6443/api/v1/namespaces/default/events\": dial tcp 172.31.26.249:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-26-249.184027bcb9a51d31 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-26-249,UID:ip-172-31-26-249,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-26-249,},FirstTimestamp:2025-05-17 00:05:46.941693233 +0000 UTC m=+1.545024092,LastTimestamp:2025-05-17 00:05:46.941693233 +0000 UTC m=+1.545024092,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-26-249,}" May 17 00:05:46.960858 kubelet[2838]: I0517 00:05:46.960802 2838 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:05:46.962463 kubelet[2838]: I0517 00:05:46.962419 2838 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:05:46.965085 kubelet[2838]: I0517 00:05:46.965035 2838 volume_manager.go:289] "Starting Kubelet Volume 
Manager" May 17 00:05:46.965333 kubelet[2838]: E0517 00:05:46.965305 2838 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:05:46.965465 kubelet[2838]: E0517 00:05:46.965423 2838 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-26-249\" not found" May 17 00:05:46.969564 kubelet[2838]: I0517 00:05:46.969274 2838 factory.go:221] Registration of the systemd container factory successfully May 17 00:05:46.969564 kubelet[2838]: I0517 00:05:46.969423 2838 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:05:46.970623 kubelet[2838]: E0517 00:05:46.970348 2838 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.249:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-249?timeout=10s\": dial tcp 172.31.26.249:6443: connect: connection refused" interval="200ms" May 17 00:05:46.973172 kubelet[2838]: I0517 00:05:46.973097 2838 factory.go:221] Registration of the containerd container factory successfully May 17 00:05:46.976099 kubelet[2838]: I0517 00:05:46.976045 2838 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 17 00:05:46.976247 kubelet[2838]: I0517 00:05:46.976148 2838 reconciler.go:26] "Reconciler: start to sync state" May 17 00:05:46.996223 kubelet[2838]: I0517 00:05:46.995271 2838 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:05:46.997485 kubelet[2838]: I0517 00:05:46.997419 2838 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 17 00:05:46.997485 kubelet[2838]: I0517 00:05:46.997470 2838 status_manager.go:217] "Starting to sync pod status with apiserver" May 17 00:05:46.997485 kubelet[2838]: I0517 00:05:46.997625 2838 kubelet.go:2321] "Starting kubelet main sync loop" May 17 00:05:46.997863 kubelet[2838]: E0517 00:05:46.997716 2838 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:05:47.006135 kubelet[2838]: W0517 00:05:47.006036 2838 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.26.249:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.26.249:6443: connect: connection refused May 17 00:05:47.006135 kubelet[2838]: E0517 00:05:47.006212 2838 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.26.249:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.26.249:6443: connect: connection refused" logger="UnhandledError" May 17 00:05:47.009119 kubelet[2838]: W0517 00:05:47.008938 2838 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.26.249:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.26.249:6443: connect: connection refused May 17 00:05:47.009119 kubelet[2838]: E0517 00:05:47.009040 2838 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.26.249:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.26.249:6443: connect: connection refused" logger="UnhandledError" May 17 00:05:47.014540 kubelet[2838]: I0517 00:05:47.014278 2838 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 00:05:47.014540 kubelet[2838]: I0517 00:05:47.014315 2838 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 00:05:47.014540 kubelet[2838]: I0517 00:05:47.014344 2838 state_mem.go:36] "Initialized new in-memory state store" May 17 00:05:47.018026 kubelet[2838]: I0517 00:05:47.017838 2838 policy_none.go:49] "None policy: Start" May 17 00:05:47.019736 kubelet[2838]: I0517 00:05:47.019582 2838 memory_manager.go:170] "Starting memorymanager" policy="None" May 17 00:05:47.020113 kubelet[2838]: I0517 00:05:47.019946 2838 state_mem.go:35] "Initializing new in-memory state store" May 17 00:05:47.031704 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 17 00:05:47.043905 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 17 00:05:47.051637 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 17 00:05:47.065474 kubelet[2838]: I0517 00:05:47.065417 2838 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:05:47.066014 kubelet[2838]: E0517 00:05:47.065679 2838 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-26-249\" not found" May 17 00:05:47.066014 kubelet[2838]: I0517 00:05:47.065746 2838 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:05:47.066014 kubelet[2838]: I0517 00:05:47.065769 2838 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:05:47.066429 kubelet[2838]: I0517 00:05:47.066373 2838 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:05:47.069173 kubelet[2838]: E0517 00:05:47.069126 2838 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-26-249\" not found" May 17 00:05:47.115934 systemd[1]: Created slice kubepods-burstable-podf8c974167a6efbb53a436454e1bd1b2a.slice - libcontainer container kubepods-burstable-podf8c974167a6efbb53a436454e1bd1b2a.slice. May 17 00:05:47.139636 systemd[1]: Created slice kubepods-burstable-poda948d720e4a0ad6e02fd9b8c6bf82a0c.slice - libcontainer container kubepods-burstable-poda948d720e4a0ad6e02fd9b8c6bf82a0c.slice. May 17 00:05:47.149634 systemd[1]: Created slice kubepods-burstable-podb3b52693d464049fe0b5f93c62bc8192.slice - libcontainer container kubepods-burstable-podb3b52693d464049fe0b5f93c62bc8192.slice. May 17 00:05:47.167777 kubelet[2838]: I0517 00:05:47.167723 2838 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-26-249" May 17 00:05:47.168334 kubelet[2838]: E0517 00:05:47.168256 2838 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.26.249:6443/api/v1/nodes\": dial tcp 172.31.26.249:6443: connect: connection refused" node="ip-172-31-26-249" May 17 00:05:47.171197 kubelet[2838]: E0517 00:05:47.171139 2838 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.249:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-249?timeout=10s\": dial tcp 172.31.26.249:6443: connect: connection refused" interval="400ms" May 17 00:05:47.276789 kubelet[2838]: I0517 00:05:47.276712 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f8c974167a6efbb53a436454e1bd1b2a-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-26-249\" (UID: \"f8c974167a6efbb53a436454e1bd1b2a\") " pod="kube-system/kube-apiserver-ip-172-31-26-249" May 17 00:05:47.276789 kubelet[2838]: I0517 00:05:47.276767 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a948d720e4a0ad6e02fd9b8c6bf82a0c-kubeconfig\") pod \"kube-controller-manager-ip-172-31-26-249\" (UID: \"a948d720e4a0ad6e02fd9b8c6bf82a0c\") " pod="kube-system/kube-controller-manager-ip-172-31-26-249" May 17 00:05:47.276789 kubelet[2838]: I0517 00:05:47.276805 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b3b52693d464049fe0b5f93c62bc8192-kubeconfig\") pod \"kube-scheduler-ip-172-31-26-249\" (UID: \"b3b52693d464049fe0b5f93c62bc8192\") " 
pod="kube-system/kube-scheduler-ip-172-31-26-249" May 17 00:05:47.277232 kubelet[2838]: I0517 00:05:47.276838 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f8c974167a6efbb53a436454e1bd1b2a-ca-certs\") pod \"kube-apiserver-ip-172-31-26-249\" (UID: \"f8c974167a6efbb53a436454e1bd1b2a\") " pod="kube-system/kube-apiserver-ip-172-31-26-249" May 17 00:05:47.277232 kubelet[2838]: I0517 00:05:47.276870 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f8c974167a6efbb53a436454e1bd1b2a-k8s-certs\") pod \"kube-apiserver-ip-172-31-26-249\" (UID: \"f8c974167a6efbb53a436454e1bd1b2a\") " pod="kube-system/kube-apiserver-ip-172-31-26-249" May 17 00:05:47.277232 kubelet[2838]: I0517 00:05:47.276905 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a948d720e4a0ad6e02fd9b8c6bf82a0c-k8s-certs\") pod \"kube-controller-manager-ip-172-31-26-249\" (UID: \"a948d720e4a0ad6e02fd9b8c6bf82a0c\") " pod="kube-system/kube-controller-manager-ip-172-31-26-249" May 17 00:05:47.277232 kubelet[2838]: I0517 00:05:47.276954 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a948d720e4a0ad6e02fd9b8c6bf82a0c-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-26-249\" (UID: \"a948d720e4a0ad6e02fd9b8c6bf82a0c\") " pod="kube-system/kube-controller-manager-ip-172-31-26-249" May 17 00:05:47.277232 kubelet[2838]: I0517 00:05:47.277007 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a948d720e4a0ad6e02fd9b8c6bf82a0c-ca-certs\") pod \"kube-controller-manager-ip-172-31-26-249\" (UID: \"a948d720e4a0ad6e02fd9b8c6bf82a0c\") " pod="kube-system/kube-controller-manager-ip-172-31-26-249" May 17 00:05:47.277468 kubelet[2838]: I0517 00:05:47.277047 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a948d720e4a0ad6e02fd9b8c6bf82a0c-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-26-249\" (UID: \"a948d720e4a0ad6e02fd9b8c6bf82a0c\") " pod="kube-system/kube-controller-manager-ip-172-31-26-249" May 17 00:05:47.371486 kubelet[2838]: I0517 00:05:47.371122 2838 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-26-249" May 17 00:05:47.372236 kubelet[2838]: E0517 00:05:47.371937 2838 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.26.249:6443/api/v1/nodes\": dial tcp 172.31.26.249:6443: connect: connection refused" node="ip-172-31-26-249" May 17 00:05:47.433389 containerd[2012]: time="2025-05-17T00:05:47.433332132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-26-249,Uid:f8c974167a6efbb53a436454e1bd1b2a,Namespace:kube-system,Attempt:0,}" May 17 00:05:47.447203 containerd[2012]: time="2025-05-17T00:05:47.447153720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-26-249,Uid:a948d720e4a0ad6e02fd9b8c6bf82a0c,Namespace:kube-system,Attempt:0,}" May 17 00:05:47.454430 containerd[2012]: time="2025-05-17T00:05:47.454284468Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ip-172-31-26-249,Uid:b3b52693d464049fe0b5f93c62bc8192,Namespace:kube-system,Attempt:0,}" May 17 00:05:47.572408 kubelet[2838]: E0517 00:05:47.572310 2838 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.249:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-249?timeout=10s\": dial tcp 172.31.26.249:6443: connect: connection refused" interval="800ms" May 17 00:05:47.754028 kubelet[2838]: W0517 00:05:47.753828 2838 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.26.249:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.26.249:6443: connect: connection refused May 17 00:05:47.754028 kubelet[2838]: E0517 00:05:47.753930 2838 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.26.249:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.26.249:6443: connect: connection refused" logger="UnhandledError" May 17 00:05:47.774597 kubelet[2838]: I0517 00:05:47.774481 2838 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-26-249" May 17 00:05:47.775219 kubelet[2838]: E0517 00:05:47.775169 2838 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.26.249:6443/api/v1/nodes\": dial tcp 172.31.26.249:6443: connect: connection refused" node="ip-172-31-26-249" May 17 00:05:47.912721 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1766594594.mount: Deactivated successfully. May 17 00:05:47.921097 containerd[2012]: time="2025-05-17T00:05:47.920827562Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:05:47.922523 containerd[2012]: time="2025-05-17T00:05:47.922448882Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" May 17 00:05:47.923932 containerd[2012]: time="2025-05-17T00:05:47.923796350Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:05:47.926000 containerd[2012]: time="2025-05-17T00:05:47.925791662Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 17 00:05:47.927602 containerd[2012]: time="2025-05-17T00:05:47.927316310Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 17 00:05:47.927602 containerd[2012]: time="2025-05-17T00:05:47.927461714Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:05:47.929171 containerd[2012]: time="2025-05-17T00:05:47.929087066Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:05:47.934191 containerd[2012]: time="2025-05-17T00:05:47.934119518Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 500.654954ms" May 17 00:05:47.938475 containerd[2012]: time="2025-05-17T00:05:47.937718714Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:05:47.939740 containerd[2012]: time="2025-05-17T00:05:47.939668906Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 485.276822ms" May 17 00:05:47.950791 containerd[2012]: time="2025-05-17T00:05:47.950718758Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 503.160002ms" May 17 00:05:47.958387 kubelet[2838]: W0517 00:05:47.956014 2838 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.26.249:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.26.249:6443: connect: connection refused May 17 00:05:47.958387 kubelet[2838]: E0517 00:05:47.956080 2838 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.26.249:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.26.249:6443: connect: connection refused" logger="UnhandledError" May 17 00:05:47.967221 kubelet[2838]: W0517 00:05:47.967171 2838 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.26.249:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.26.249:6443: connect: connection refused May 17 00:05:47.967448 kubelet[2838]: E0517 00:05:47.967410 2838 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.26.249:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.26.249:6443: connect: connection refused" logger="UnhandledError" May 17 00:05:47.994016 kubelet[2838]: W0517 00:05:47.993850 2838 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.26.249:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-249&limit=500&resourceVersion=0": dial tcp 172.31.26.249:6443: connect: connection refused May 17 00:05:47.994016 kubelet[2838]: E0517 00:05:47.993977 2838 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.26.249:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-249&limit=500&resourceVersion=0\": dial tcp 172.31.26.249:6443: connect: connection refused" 
logger="UnhandledError" May 17 00:05:48.134411 containerd[2012]: time="2025-05-17T00:05:48.132641147Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:05:48.134411 containerd[2012]: time="2025-05-17T00:05:48.132730691Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:05:48.134411 containerd[2012]: time="2025-05-17T00:05:48.132789179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:05:48.137357 containerd[2012]: time="2025-05-17T00:05:48.136918391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:05:48.143882 containerd[2012]: time="2025-05-17T00:05:48.143725667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:05:48.145122 containerd[2012]: time="2025-05-17T00:05:48.143921015Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:05:48.145566 containerd[2012]: time="2025-05-17T00:05:48.145214183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:05:48.145566 containerd[2012]: time="2025-05-17T00:05:48.145401215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:05:48.150945 containerd[2012]: time="2025-05-17T00:05:48.149364767Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:05:48.150945 containerd[2012]: time="2025-05-17T00:05:48.149472131Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:05:48.150945 containerd[2012]: time="2025-05-17T00:05:48.149542247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:05:48.150945 containerd[2012]: time="2025-05-17T00:05:48.149722931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:05:48.192964 systemd[1]: Started cri-containerd-4862abc523d3ff41e41c96b0f6664b36a21923c93e8fd428609bae14fc1016c8.scope - libcontainer container 4862abc523d3ff41e41c96b0f6664b36a21923c93e8fd428609bae14fc1016c8. May 17 00:05:48.199842 systemd[1]: Started cri-containerd-4ea05cdc249eb21000efaa033d289f7e4a5a8b35e4aca27537d97ecec4506bfd.scope - libcontainer container 4ea05cdc249eb21000efaa033d289f7e4a5a8b35e4aca27537d97ecec4506bfd. May 17 00:05:48.218844 systemd[1]: Started cri-containerd-32062c96ad423fee123c857224d5ded90a22dcb2350187fed3da73d186268030.scope - libcontainer container 32062c96ad423fee123c857224d5ded90a22dcb2350187fed3da73d186268030. 
May 17 00:05:48.317639 containerd[2012]: time="2025-05-17T00:05:48.317529936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-26-249,Uid:b3b52693d464049fe0b5f93c62bc8192,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ea05cdc249eb21000efaa033d289f7e4a5a8b35e4aca27537d97ecec4506bfd\"" May 17 00:05:48.330380 containerd[2012]: time="2025-05-17T00:05:48.330316128Z" level=info msg="CreateContainer within sandbox \"4ea05cdc249eb21000efaa033d289f7e4a5a8b35e4aca27537d97ecec4506bfd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 17 00:05:48.339799 containerd[2012]: time="2025-05-17T00:05:48.339456492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-26-249,Uid:f8c974167a6efbb53a436454e1bd1b2a,Namespace:kube-system,Attempt:0,} returns sandbox id \"32062c96ad423fee123c857224d5ded90a22dcb2350187fed3da73d186268030\"" May 17 00:05:48.341951 containerd[2012]: time="2025-05-17T00:05:48.341834172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-26-249,Uid:a948d720e4a0ad6e02fd9b8c6bf82a0c,Namespace:kube-system,Attempt:0,} returns sandbox id \"4862abc523d3ff41e41c96b0f6664b36a21923c93e8fd428609bae14fc1016c8\"" May 17 00:05:48.347837 containerd[2012]: time="2025-05-17T00:05:48.347765604Z" level=info msg="CreateContainer within sandbox \"4862abc523d3ff41e41c96b0f6664b36a21923c93e8fd428609bae14fc1016c8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 17 00:05:48.351543 containerd[2012]: time="2025-05-17T00:05:48.351154692Z" level=info msg="CreateContainer within sandbox \"32062c96ad423fee123c857224d5ded90a22dcb2350187fed3da73d186268030\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 17 00:05:48.373201 kubelet[2838]: E0517 00:05:48.373130 2838 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.249:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-249?timeout=10s\": dial tcp 172.31.26.249:6443: connect: connection refused" interval="1.6s" May 17 00:05:48.380714 containerd[2012]: time="2025-05-17T00:05:48.380621616Z" level=info msg="CreateContainer within sandbox \"4ea05cdc249eb21000efaa033d289f7e4a5a8b35e4aca27537d97ecec4506bfd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"374637325c0618010b0d5acdc8426e874c43a235db95ebce74239066994fa887\"" May 17 00:05:48.381889 containerd[2012]: time="2025-05-17T00:05:48.381821436Z" level=info msg="StartContainer for \"374637325c0618010b0d5acdc8426e874c43a235db95ebce74239066994fa887\"" May 17 00:05:48.409974 containerd[2012]: time="2025-05-17T00:05:48.409691245Z" level=info msg="CreateContainer within sandbox \"4862abc523d3ff41e41c96b0f6664b36a21923c93e8fd428609bae14fc1016c8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0c970ec3409104524c2ba3b8cb31944229c0b72bfcb3c4628be6c49374067820\"" May 17 00:05:48.412539 containerd[2012]: time="2025-05-17T00:05:48.412403041Z" level=info msg="StartContainer for \"0c970ec3409104524c2ba3b8cb31944229c0b72bfcb3c4628be6c49374067820\"" May 17 00:05:48.414032 containerd[2012]: time="2025-05-17T00:05:48.413817025Z" level=info msg="CreateContainer within sandbox \"32062c96ad423fee123c857224d5ded90a22dcb2350187fed3da73d186268030\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9f35735782a6741e2dedf55015f2d8421529201a71c1a6cd853027ba48f17f26\"" May 17 00:05:48.415107 
containerd[2012]: time="2025-05-17T00:05:48.415060525Z" level=info msg="StartContainer for \"9f35735782a6741e2dedf55015f2d8421529201a71c1a6cd853027ba48f17f26\"" May 17 00:05:48.435065 systemd[1]: Started cri-containerd-374637325c0618010b0d5acdc8426e874c43a235db95ebce74239066994fa887.scope - libcontainer container 374637325c0618010b0d5acdc8426e874c43a235db95ebce74239066994fa887. May 17 00:05:48.491915 systemd[1]: Started cri-containerd-0c970ec3409104524c2ba3b8cb31944229c0b72bfcb3c4628be6c49374067820.scope - libcontainer container 0c970ec3409104524c2ba3b8cb31944229c0b72bfcb3c4628be6c49374067820. May 17 00:05:48.508839 systemd[1]: Started cri-containerd-9f35735782a6741e2dedf55015f2d8421529201a71c1a6cd853027ba48f17f26.scope - libcontainer container 9f35735782a6741e2dedf55015f2d8421529201a71c1a6cd853027ba48f17f26. May 17 00:05:48.555578 containerd[2012]: time="2025-05-17T00:05:48.555477241Z" level=info msg="StartContainer for \"374637325c0618010b0d5acdc8426e874c43a235db95ebce74239066994fa887\" returns successfully" May 17 00:05:48.580852 kubelet[2838]: I0517 00:05:48.580786 2838 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-26-249" May 17 00:05:48.581293 kubelet[2838]: E0517 00:05:48.581244 2838 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.26.249:6443/api/v1/nodes\": dial tcp 172.31.26.249:6443: connect: connection refused" node="ip-172-31-26-249" May 17 00:05:48.652592 containerd[2012]: time="2025-05-17T00:05:48.633380798Z" level=info msg="StartContainer for \"0c970ec3409104524c2ba3b8cb31944229c0b72bfcb3c4628be6c49374067820\" returns successfully" May 17 00:05:48.657712 containerd[2012]: time="2025-05-17T00:05:48.657466334Z" level=info msg="StartContainer for \"9f35735782a6741e2dedf55015f2d8421529201a71c1a6cd853027ba48f17f26\" returns successfully" May 17 00:05:50.186208 kubelet[2838]: I0517 00:05:50.186147 2838 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-26-249" May 17 00:05:52.166187 kubelet[2838]: E0517 00:05:52.166126 2838 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-26-249\" not found" node="ip-172-31-26-249" May 17 00:05:52.379679 kubelet[2838]: I0517 00:05:52.379110 2838 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-26-249" May 17 00:05:52.936535 kubelet[2838]: I0517 00:05:52.934706 2838 apiserver.go:52] "Watching apiserver" May 17 00:05:52.977221 kubelet[2838]: I0517 00:05:52.977164 2838 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 17 00:05:54.361277 systemd[1]: Reloading requested from client PID 3116 ('systemctl') (unit session-7.scope)... May 17 00:05:54.361303 systemd[1]: Reloading... May 17 00:05:54.528637 zram_generator::config[3162]: No configuration found. May 17 00:05:54.751951 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:05:54.958168 systemd[1]: Reloading finished in 596 ms. May 17 00:05:55.032538 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:05:55.049678 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:05:55.050974 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
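The repeated "Failed to ensure lease exists, will retry" entries above show the kubelet backing off while the kube-apiserver static pod is still coming up: the interval doubles from 200ms to 400ms, 800ms and 1.6s. A minimal sketch of that doubling backoff follows; the base and factor match the logged values, while the cap is an assumption added for illustration.

    # Sketch of the doubling retry interval seen in the lease-controller entries above.
    def backoff_intervals(base=0.2, factor=2.0, cap=7.0):
        interval = base
        while True:
            yield min(interval, cap)   # cap value is an assumption, not from the log
            interval *= factor

    gen = backoff_intervals()
    print([next(gen) for _ in range(4)])   # [0.2, 0.4, 0.8, 1.6], as in the log
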
May 17 00:05:55.051052 systemd[1]: kubelet.service: Consumed 2.190s CPU time, 128.2M memory peak, 0B memory swap peak. May 17 00:05:55.058051 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:05:55.383693 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:05:55.405349 (kubelet)[3216]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 00:05:55.511545 kubelet[3216]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:05:55.514149 kubelet[3216]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 17 00:05:55.514309 kubelet[3216]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:05:55.514713 kubelet[3216]: I0517 00:05:55.514636 3216 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:05:55.537679 kubelet[3216]: I0517 00:05:55.537613 3216 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 17 00:05:55.537679 kubelet[3216]: I0517 00:05:55.537662 3216 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:05:55.538134 kubelet[3216]: I0517 00:05:55.538091 3216 server.go:934] "Client rotation is on, will bootstrap in background" May 17 00:05:55.545550 kubelet[3216]: I0517 00:05:55.544856 3216 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 17 00:05:55.552186 kubelet[3216]: I0517 00:05:55.552125 3216 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:05:55.562765 kubelet[3216]: E0517 00:05:55.562700 3216 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:05:55.563406 kubelet[3216]: I0517 00:05:55.563013 3216 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:05:55.570572 kubelet[3216]: I0517 00:05:55.570529 3216 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:05:55.571187 kubelet[3216]: I0517 00:05:55.571008 3216 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 17 00:05:55.572556 kubelet[3216]: I0517 00:05:55.571307 3216 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:05:55.572556 kubelet[3216]: I0517 00:05:55.571368 3216 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-26-249","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:05:55.572556 kubelet[3216]: I0517 00:05:55.572252 3216 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:05:55.572556 kubelet[3216]: I0517 00:05:55.572277 3216 container_manager_linux.go:300] "Creating device plugin manager" May 17 00:05:55.573010 kubelet[3216]: I0517 00:05:55.572357 3216 state_mem.go:36] "Initialized new in-memory state store" May 17 00:05:55.574368 kubelet[3216]: I0517 00:05:55.574329 3216 kubelet.go:408] "Attempting to sync node with API server" May 17 00:05:55.575226 kubelet[3216]: I0517 00:05:55.575192 3216 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:05:55.575441 kubelet[3216]: I0517 00:05:55.575421 3216 kubelet.go:314] "Adding apiserver pod source" May 17 00:05:55.575673 kubelet[3216]: I0517 00:05:55.575626 3216 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:05:55.591931 kubelet[3216]: I0517 00:05:55.588766 3216 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 00:05:55.591931 kubelet[3216]: I0517 00:05:55.589694 3216 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:05:55.591931 kubelet[3216]: I0517 00:05:55.590369 3216 server.go:1274] "Started kubelet" May 17 00:05:55.600330 kubelet[3216]: I0517 00:05:55.600192 3216 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:05:55.605662 kubelet[3216]: I0517 
00:05:55.605422 3216 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:05:55.605814 kubelet[3216]: I0517 00:05:55.605730 3216 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:05:55.609538 kubelet[3216]: I0517 00:05:55.609236 3216 server.go:449] "Adding debug handlers to kubelet server" May 17 00:05:55.614268 kubelet[3216]: I0517 00:05:55.613487 3216 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:05:55.614268 kubelet[3216]: I0517 00:05:55.613912 3216 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:05:55.625782 kubelet[3216]: I0517 00:05:55.625710 3216 volume_manager.go:289] "Starting Kubelet Volume Manager" May 17 00:05:55.628549 kubelet[3216]: I0517 00:05:55.625905 3216 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 17 00:05:55.631541 kubelet[3216]: E0517 00:05:55.626131 3216 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-26-249\" not found" May 17 00:05:55.632721 kubelet[3216]: I0517 00:05:55.632677 3216 reconciler.go:26] "Reconciler: start to sync state" May 17 00:05:55.642232 sudo[3237]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 17 00:05:55.643253 sudo[3237]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 17 00:05:55.646007 kubelet[3216]: I0517 00:05:55.644936 3216 factory.go:221] Registration of the systemd container factory successfully May 17 00:05:55.646007 kubelet[3216]: I0517 00:05:55.645091 3216 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:05:55.671736 kubelet[3216]: I0517 00:05:55.671668 3216 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:05:55.682683 kubelet[3216]: I0517 00:05:55.682626 3216 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 17 00:05:55.682683 kubelet[3216]: I0517 00:05:55.682675 3216 status_manager.go:217] "Starting to sync pod status with apiserver" May 17 00:05:55.682888 kubelet[3216]: I0517 00:05:55.682710 3216 kubelet.go:2321] "Starting kubelet main sync loop" May 17 00:05:55.682888 kubelet[3216]: E0517 00:05:55.682804 3216 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:05:55.714375 kubelet[3216]: E0517 00:05:55.714029 3216 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:05:55.750376 kubelet[3216]: E0517 00:05:55.749665 3216 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-26-249\" not found" May 17 00:05:55.757721 kubelet[3216]: I0517 00:05:55.757671 3216 factory.go:221] Registration of the containerd container factory successfully May 17 00:05:55.783482 kubelet[3216]: E0517 00:05:55.783280 3216 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 17 00:05:55.866907 kubelet[3216]: I0517 00:05:55.866459 3216 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 00:05:55.866907 kubelet[3216]: I0517 00:05:55.866520 3216 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 00:05:55.866907 kubelet[3216]: I0517 00:05:55.866556 3216 state_mem.go:36] "Initialized new in-memory state store" May 17 00:05:55.866907 kubelet[3216]: I0517 00:05:55.866796 3216 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 17 00:05:55.866907 kubelet[3216]: I0517 00:05:55.866816 3216 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 17 00:05:55.866907 kubelet[3216]: I0517 00:05:55.866850 3216 policy_none.go:49] "None policy: Start" May 17 00:05:55.869672 kubelet[3216]: I0517 00:05:55.868903 3216 memory_manager.go:170] "Starting memorymanager" policy="None" May 17 00:05:55.869672 kubelet[3216]: I0517 00:05:55.868953 3216 state_mem.go:35] "Initializing new in-memory state store" May 17 00:05:55.869672 kubelet[3216]: I0517 00:05:55.869203 3216 state_mem.go:75] "Updated machine memory state" May 17 00:05:55.877589 kubelet[3216]: I0517 00:05:55.877538 3216 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:05:55.878002 kubelet[3216]: I0517 00:05:55.877817 3216 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:05:55.878002 kubelet[3216]: I0517 00:05:55.877847 3216 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:05:55.884571 kubelet[3216]: I0517 00:05:55.883065 3216 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:05:56.012016 kubelet[3216]: I0517 00:05:56.011448 3216 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-26-249" May 17 00:05:56.034854 kubelet[3216]: I0517 00:05:56.034711 3216 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-26-249" May 17 00:05:56.034854 kubelet[3216]: I0517 00:05:56.034839 3216 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-26-249" May 17 00:05:56.042553 kubelet[3216]: I0517 00:05:56.042476 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a948d720e4a0ad6e02fd9b8c6bf82a0c-kubeconfig\") pod \"kube-controller-manager-ip-172-31-26-249\" (UID: \"a948d720e4a0ad6e02fd9b8c6bf82a0c\") " pod="kube-system/kube-controller-manager-ip-172-31-26-249" May 17 00:05:56.042719 kubelet[3216]: I0517 00:05:56.042562 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a948d720e4a0ad6e02fd9b8c6bf82a0c-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-26-249\" (UID: \"a948d720e4a0ad6e02fd9b8c6bf82a0c\") " 
pod="kube-system/kube-controller-manager-ip-172-31-26-249" May 17 00:05:56.042719 kubelet[3216]: I0517 00:05:56.042611 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f8c974167a6efbb53a436454e1bd1b2a-k8s-certs\") pod \"kube-apiserver-ip-172-31-26-249\" (UID: \"f8c974167a6efbb53a436454e1bd1b2a\") " pod="kube-system/kube-apiserver-ip-172-31-26-249" May 17 00:05:56.042719 kubelet[3216]: I0517 00:05:56.042663 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f8c974167a6efbb53a436454e1bd1b2a-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-26-249\" (UID: \"f8c974167a6efbb53a436454e1bd1b2a\") " pod="kube-system/kube-apiserver-ip-172-31-26-249" May 17 00:05:56.042719 kubelet[3216]: I0517 00:05:56.042703 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a948d720e4a0ad6e02fd9b8c6bf82a0c-ca-certs\") pod \"kube-controller-manager-ip-172-31-26-249\" (UID: \"a948d720e4a0ad6e02fd9b8c6bf82a0c\") " pod="kube-system/kube-controller-manager-ip-172-31-26-249" May 17 00:05:56.042939 kubelet[3216]: I0517 00:05:56.042740 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a948d720e4a0ad6e02fd9b8c6bf82a0c-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-26-249\" (UID: \"a948d720e4a0ad6e02fd9b8c6bf82a0c\") " pod="kube-system/kube-controller-manager-ip-172-31-26-249" May 17 00:05:56.042939 kubelet[3216]: I0517 00:05:56.042777 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a948d720e4a0ad6e02fd9b8c6bf82a0c-k8s-certs\") pod \"kube-controller-manager-ip-172-31-26-249\" (UID: \"a948d720e4a0ad6e02fd9b8c6bf82a0c\") " pod="kube-system/kube-controller-manager-ip-172-31-26-249" May 17 00:05:56.042939 kubelet[3216]: I0517 00:05:56.042811 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b3b52693d464049fe0b5f93c62bc8192-kubeconfig\") pod \"kube-scheduler-ip-172-31-26-249\" (UID: \"b3b52693d464049fe0b5f93c62bc8192\") " pod="kube-system/kube-scheduler-ip-172-31-26-249" May 17 00:05:56.042939 kubelet[3216]: I0517 00:05:56.042845 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f8c974167a6efbb53a436454e1bd1b2a-ca-certs\") pod \"kube-apiserver-ip-172-31-26-249\" (UID: \"f8c974167a6efbb53a436454e1bd1b2a\") " pod="kube-system/kube-apiserver-ip-172-31-26-249" May 17 00:05:56.487373 sudo[3237]: pam_unix(sudo:session): session closed for user root May 17 00:05:56.588687 kubelet[3216]: I0517 00:05:56.588593 3216 apiserver.go:52] "Watching apiserver" May 17 00:05:56.628048 kubelet[3216]: I0517 00:05:56.627923 3216 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 17 00:05:56.828284 kubelet[3216]: E0517 00:05:56.828226 3216 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-26-249\" already exists" pod="kube-system/kube-scheduler-ip-172-31-26-249" May 17 00:05:56.882530 kubelet[3216]: I0517 
00:05:56.880092 3216 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-26-249" podStartSLOduration=1.880066775 podStartE2EDuration="1.880066775s" podCreationTimestamp="2025-05-17 00:05:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:05:56.852438179 +0000 UTC m=+1.432142709" watchObservedRunningTime="2025-05-17 00:05:56.880066775 +0000 UTC m=+1.459771281" May 17 00:05:56.914236 kubelet[3216]: I0517 00:05:56.914148 3216 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-26-249" podStartSLOduration=1.914130455 podStartE2EDuration="1.914130455s" podCreationTimestamp="2025-05-17 00:05:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:05:56.881544095 +0000 UTC m=+1.461248637" watchObservedRunningTime="2025-05-17 00:05:56.914130455 +0000 UTC m=+1.493834973" May 17 00:05:56.914424 kubelet[3216]: I0517 00:05:56.914340 3216 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-26-249" podStartSLOduration=1.914333147 podStartE2EDuration="1.914333147s" podCreationTimestamp="2025-05-17 00:05:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:05:56.913903907 +0000 UTC m=+1.493608413" watchObservedRunningTime="2025-05-17 00:05:56.914333147 +0000 UTC m=+1.494037653" May 17 00:05:59.451004 sudo[2335]: pam_unix(sudo:session): session closed for user root May 17 00:05:59.475794 sshd[2332]: pam_unix(sshd:session): session closed for user core May 17 00:05:59.482978 systemd[1]: sshd@6-172.31.26.249:22-139.178.89.65:48258.service: Deactivated successfully. May 17 00:05:59.489223 systemd[1]: session-7.scope: Deactivated successfully. May 17 00:05:59.489834 systemd[1]: session-7.scope: Consumed 10.911s CPU time, 151.9M memory peak, 0B memory swap peak. May 17 00:05:59.490959 systemd-logind[1993]: Session 7 logged out. Waiting for processes to exit. May 17 00:05:59.493079 systemd-logind[1993]: Removed session 7. May 17 00:05:59.890545 update_engine[1995]: I20250517 00:05:59.890043 1995 update_attempter.cc:509] Updating boot flags... May 17 00:05:59.979708 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 43 scanned by (udev-worker) (3303) May 17 00:06:00.233533 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 43 scanned by (udev-worker) (3305) May 17 00:06:01.531196 kubelet[3216]: I0517 00:06:01.530950 3216 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 17 00:06:01.531867 containerd[2012]: time="2025-05-17T00:06:01.531434210Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 17 00:06:01.532951 kubelet[3216]: I0517 00:06:01.532667 3216 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 17 00:06:02.214805 systemd[1]: Created slice kubepods-besteffort-podfb78acf3_4547_458f_a969_3d6e237883ba.slice - libcontainer container kubepods-besteffort-podfb78acf3_4547_458f_a969_3d6e237883ba.slice. 
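The per-pod slice names created here follow the kubelet's systemd cgroup-driver convention: dashes in the pod UID are escaped to underscores and the slice nests under the QoS-class slice (guaranteed pods omit the QoS segment and sit directly under kubepods.slice). A small sketch reproducing the names seen in these entries:

    # Sketch of the pod-UID to slice-name mapping visible in the entries above.
    def pod_slice(uid: str, qos: str) -> str:
        # Guaranteed pods drop the QoS segment: kubepods-pod<uid>.slice
        return f"kubepods-{qos}-pod{uid.replace('-', '_')}.slice"

    print(pod_slice("fb78acf3-4547-458f-a969-3d6e237883ba", "besteffort"))
    print(pod_slice("3879a8df-9591-4b76-8e98-42b80a818d01", "burstable"))
    # -> kubepods-besteffort-podfb78acf3_4547_458f_a969_3d6e237883ba.slice
    # -> kubepods-burstable-pod3879a8df_9591_4b76_8e98_42b80a818d01.slice
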
May 17 00:06:02.248369 systemd[1]: Created slice kubepods-burstable-pod3879a8df_9591_4b76_8e98_42b80a818d01.slice - libcontainer container kubepods-burstable-pod3879a8df_9591_4b76_8e98_42b80a818d01.slice. May 17 00:06:02.285938 kubelet[3216]: I0517 00:06:02.283596 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fb78acf3-4547-458f-a969-3d6e237883ba-xtables-lock\") pod \"kube-proxy-99lwk\" (UID: \"fb78acf3-4547-458f-a969-3d6e237883ba\") " pod="kube-system/kube-proxy-99lwk" May 17 00:06:02.285938 kubelet[3216]: I0517 00:06:02.283685 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btsm5\" (UniqueName: \"kubernetes.io/projected/fb78acf3-4547-458f-a969-3d6e237883ba-kube-api-access-btsm5\") pod \"kube-proxy-99lwk\" (UID: \"fb78acf3-4547-458f-a969-3d6e237883ba\") " pod="kube-system/kube-proxy-99lwk" May 17 00:06:02.285938 kubelet[3216]: I0517 00:06:02.283733 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3879a8df-9591-4b76-8e98-42b80a818d01-host-proc-sys-kernel\") pod \"cilium-zzhz8\" (UID: \"3879a8df-9591-4b76-8e98-42b80a818d01\") " pod="kube-system/cilium-zzhz8" May 17 00:06:02.285938 kubelet[3216]: I0517 00:06:02.283772 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fb78acf3-4547-458f-a969-3d6e237883ba-kube-proxy\") pod \"kube-proxy-99lwk\" (UID: \"fb78acf3-4547-458f-a969-3d6e237883ba\") " pod="kube-system/kube-proxy-99lwk" May 17 00:06:02.285938 kubelet[3216]: I0517 00:06:02.283814 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3879a8df-9591-4b76-8e98-42b80a818d01-hubble-tls\") pod \"cilium-zzhz8\" (UID: \"3879a8df-9591-4b76-8e98-42b80a818d01\") " pod="kube-system/cilium-zzhz8" May 17 00:06:02.286317 kubelet[3216]: I0517 00:06:02.283852 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj25h\" (UniqueName: \"kubernetes.io/projected/3879a8df-9591-4b76-8e98-42b80a818d01-kube-api-access-mj25h\") pod \"cilium-zzhz8\" (UID: \"3879a8df-9591-4b76-8e98-42b80a818d01\") " pod="kube-system/cilium-zzhz8" May 17 00:06:02.286317 kubelet[3216]: I0517 00:06:02.283900 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3879a8df-9591-4b76-8e98-42b80a818d01-cilium-run\") pod \"cilium-zzhz8\" (UID: \"3879a8df-9591-4b76-8e98-42b80a818d01\") " pod="kube-system/cilium-zzhz8" May 17 00:06:02.286317 kubelet[3216]: I0517 00:06:02.283934 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3879a8df-9591-4b76-8e98-42b80a818d01-bpf-maps\") pod \"cilium-zzhz8\" (UID: \"3879a8df-9591-4b76-8e98-42b80a818d01\") " pod="kube-system/cilium-zzhz8" May 17 00:06:02.286317 kubelet[3216]: I0517 00:06:02.283971 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3879a8df-9591-4b76-8e98-42b80a818d01-host-proc-sys-net\") pod \"cilium-zzhz8\" (UID: 
\"3879a8df-9591-4b76-8e98-42b80a818d01\") " pod="kube-system/cilium-zzhz8" May 17 00:06:02.286317 kubelet[3216]: I0517 00:06:02.284012 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3879a8df-9591-4b76-8e98-42b80a818d01-cni-path\") pod \"cilium-zzhz8\" (UID: \"3879a8df-9591-4b76-8e98-42b80a818d01\") " pod="kube-system/cilium-zzhz8" May 17 00:06:02.286317 kubelet[3216]: I0517 00:06:02.284046 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3879a8df-9591-4b76-8e98-42b80a818d01-lib-modules\") pod \"cilium-zzhz8\" (UID: \"3879a8df-9591-4b76-8e98-42b80a818d01\") " pod="kube-system/cilium-zzhz8" May 17 00:06:02.286680 kubelet[3216]: I0517 00:06:02.284080 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fb78acf3-4547-458f-a969-3d6e237883ba-lib-modules\") pod \"kube-proxy-99lwk\" (UID: \"fb78acf3-4547-458f-a969-3d6e237883ba\") " pod="kube-system/kube-proxy-99lwk" May 17 00:06:02.286680 kubelet[3216]: I0517 00:06:02.284112 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3879a8df-9591-4b76-8e98-42b80a818d01-hostproc\") pod \"cilium-zzhz8\" (UID: \"3879a8df-9591-4b76-8e98-42b80a818d01\") " pod="kube-system/cilium-zzhz8" May 17 00:06:02.286680 kubelet[3216]: I0517 00:06:02.284154 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3879a8df-9591-4b76-8e98-42b80a818d01-cilium-cgroup\") pod \"cilium-zzhz8\" (UID: \"3879a8df-9591-4b76-8e98-42b80a818d01\") " pod="kube-system/cilium-zzhz8" May 17 00:06:02.286680 kubelet[3216]: I0517 00:06:02.284186 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3879a8df-9591-4b76-8e98-42b80a818d01-etc-cni-netd\") pod \"cilium-zzhz8\" (UID: \"3879a8df-9591-4b76-8e98-42b80a818d01\") " pod="kube-system/cilium-zzhz8" May 17 00:06:02.286680 kubelet[3216]: I0517 00:06:02.284218 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3879a8df-9591-4b76-8e98-42b80a818d01-clustermesh-secrets\") pod \"cilium-zzhz8\" (UID: \"3879a8df-9591-4b76-8e98-42b80a818d01\") " pod="kube-system/cilium-zzhz8" May 17 00:06:02.286680 kubelet[3216]: I0517 00:06:02.284266 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3879a8df-9591-4b76-8e98-42b80a818d01-xtables-lock\") pod \"cilium-zzhz8\" (UID: \"3879a8df-9591-4b76-8e98-42b80a818d01\") " pod="kube-system/cilium-zzhz8" May 17 00:06:02.286995 kubelet[3216]: I0517 00:06:02.284299 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3879a8df-9591-4b76-8e98-42b80a818d01-cilium-config-path\") pod \"cilium-zzhz8\" (UID: \"3879a8df-9591-4b76-8e98-42b80a818d01\") " pod="kube-system/cilium-zzhz8" May 17 00:06:02.531902 containerd[2012]: time="2025-05-17T00:06:02.531453819Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-99lwk,Uid:fb78acf3-4547-458f-a969-3d6e237883ba,Namespace:kube-system,Attempt:0,}" May 17 00:06:02.555613 containerd[2012]: time="2025-05-17T00:06:02.555541755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zzhz8,Uid:3879a8df-9591-4b76-8e98-42b80a818d01,Namespace:kube-system,Attempt:0,}" May 17 00:06:02.628323 containerd[2012]: time="2025-05-17T00:06:02.627250467Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:06:02.630627 containerd[2012]: time="2025-05-17T00:06:02.629002167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:06:02.630627 containerd[2012]: time="2025-05-17T00:06:02.629092887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:06:02.630627 containerd[2012]: time="2025-05-17T00:06:02.629274687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:06:02.651101 containerd[2012]: time="2025-05-17T00:06:02.648948303Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:06:02.651101 containerd[2012]: time="2025-05-17T00:06:02.649061043Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:06:02.651101 containerd[2012]: time="2025-05-17T00:06:02.649099203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:06:02.651101 containerd[2012]: time="2025-05-17T00:06:02.649259043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:06:02.688969 kubelet[3216]: I0517 00:06:02.688831 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/25d56d7a-dc65-490c-bd1c-a75f9bff9e78-cilium-config-path\") pod \"cilium-operator-5d85765b45-hs96r\" (UID: \"25d56d7a-dc65-490c-bd1c-a75f9bff9e78\") " pod="kube-system/cilium-operator-5d85765b45-hs96r" May 17 00:06:02.688969 kubelet[3216]: I0517 00:06:02.688895 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxxlr\" (UniqueName: \"kubernetes.io/projected/25d56d7a-dc65-490c-bd1c-a75f9bff9e78-kube-api-access-lxxlr\") pod \"cilium-operator-5d85765b45-hs96r\" (UID: \"25d56d7a-dc65-490c-bd1c-a75f9bff9e78\") " pod="kube-system/cilium-operator-5d85765b45-hs96r" May 17 00:06:02.689055 systemd[1]: Created slice kubepods-besteffort-pod25d56d7a_dc65_490c_bd1c_a75f9bff9e78.slice - libcontainer container kubepods-besteffort-pod25d56d7a_dc65_490c_bd1c_a75f9bff9e78.slice. May 17 00:06:02.712054 systemd[1]: Started cri-containerd-f7df3acfb342fbbede1364be68dfc42350d495fb9c60c30327a370fe34265b2b.scope - libcontainer container f7df3acfb342fbbede1364be68dfc42350d495fb9c60c30327a370fe34265b2b. May 17 00:06:02.753891 systemd[1]: Started cri-containerd-7b52d9cc7400795ee7505312651bde168c221f15a859c806d5019601686dfb65.scope - libcontainer container 7b52d9cc7400795ee7505312651bde168c221f15a859c806d5019601686dfb65. 
May 17 00:06:02.847385 containerd[2012]: time="2025-05-17T00:06:02.847074976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-99lwk,Uid:fb78acf3-4547-458f-a969-3d6e237883ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"f7df3acfb342fbbede1364be68dfc42350d495fb9c60c30327a370fe34265b2b\"" May 17 00:06:02.853945 containerd[2012]: time="2025-05-17T00:06:02.853588936Z" level=info msg="CreateContainer within sandbox \"f7df3acfb342fbbede1364be68dfc42350d495fb9c60c30327a370fe34265b2b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 17 00:06:02.861884 containerd[2012]: time="2025-05-17T00:06:02.861808612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zzhz8,Uid:3879a8df-9591-4b76-8e98-42b80a818d01,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b52d9cc7400795ee7505312651bde168c221f15a859c806d5019601686dfb65\"" May 17 00:06:02.864979 containerd[2012]: time="2025-05-17T00:06:02.864686428Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 17 00:06:02.896294 containerd[2012]: time="2025-05-17T00:06:02.896238305Z" level=info msg="CreateContainer within sandbox \"f7df3acfb342fbbede1364be68dfc42350d495fb9c60c30327a370fe34265b2b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"dd1a5c6f6174251c91d0d897657bf379909cf46a4b17522d1f33890049a2b5b8\"" May 17 00:06:02.897655 containerd[2012]: time="2025-05-17T00:06:02.897454469Z" level=info msg="StartContainer for \"dd1a5c6f6174251c91d0d897657bf379909cf46a4b17522d1f33890049a2b5b8\"" May 17 00:06:02.949865 systemd[1]: Started cri-containerd-dd1a5c6f6174251c91d0d897657bf379909cf46a4b17522d1f33890049a2b5b8.scope - libcontainer container dd1a5c6f6174251c91d0d897657bf379909cf46a4b17522d1f33890049a2b5b8. May 17 00:06:03.003274 containerd[2012]: time="2025-05-17T00:06:03.003189265Z" level=info msg="StartContainer for \"dd1a5c6f6174251c91d0d897657bf379909cf46a4b17522d1f33890049a2b5b8\" returns successfully" May 17 00:06:03.012188 containerd[2012]: time="2025-05-17T00:06:03.012109717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-hs96r,Uid:25d56d7a-dc65-490c-bd1c-a75f9bff9e78,Namespace:kube-system,Attempt:0,}" May 17 00:06:03.065880 containerd[2012]: time="2025-05-17T00:06:03.065624389Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:06:03.065880 containerd[2012]: time="2025-05-17T00:06:03.065812045Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:06:03.066386 containerd[2012]: time="2025-05-17T00:06:03.066123361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:06:03.066646 containerd[2012]: time="2025-05-17T00:06:03.066358045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:06:03.111129 systemd[1]: Started cri-containerd-d12e0a5168b4933ccda499f6bbec35fa097aabe028e7591d631f143c32227098.scope - libcontainer container d12e0a5168b4933ccda499f6bbec35fa097aabe028e7591d631f143c32227098. 
May 17 00:06:03.180912 containerd[2012]: time="2025-05-17T00:06:03.180792038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-hs96r,Uid:25d56d7a-dc65-490c-bd1c-a75f9bff9e78,Namespace:kube-system,Attempt:0,} returns sandbox id \"d12e0a5168b4933ccda499f6bbec35fa097aabe028e7591d631f143c32227098\"" May 17 00:06:06.397243 kubelet[3216]: I0517 00:06:06.397113 3216 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-99lwk" podStartSLOduration=4.397090374 podStartE2EDuration="4.397090374s" podCreationTimestamp="2025-05-17 00:06:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:06:03.852825029 +0000 UTC m=+8.432529535" watchObservedRunningTime="2025-05-17 00:06:06.397090374 +0000 UTC m=+10.976794880" May 17 00:06:12.161247 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1037081288.mount: Deactivated successfully. May 17 00:06:14.658104 containerd[2012]: time="2025-05-17T00:06:14.658009539Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:06:14.660585 containerd[2012]: time="2025-05-17T00:06:14.660409719Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" May 17 00:06:14.662692 containerd[2012]: time="2025-05-17T00:06:14.662601951Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:06:14.669276 containerd[2012]: time="2025-05-17T00:06:14.669090135Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 11.804338463s" May 17 00:06:14.669276 containerd[2012]: time="2025-05-17T00:06:14.669164991Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 17 00:06:14.673645 containerd[2012]: time="2025-05-17T00:06:14.673000599Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 17 00:06:14.678841 containerd[2012]: time="2025-05-17T00:06:14.677317623Z" level=info msg="CreateContainer within sandbox \"7b52d9cc7400795ee7505312651bde168c221f15a859c806d5019601686dfb65\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 17 00:06:14.704807 containerd[2012]: time="2025-05-17T00:06:14.704745603Z" level=info msg="CreateContainer within sandbox \"7b52d9cc7400795ee7505312651bde168c221f15a859c806d5019601686dfb65\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8d3d7b2845336647235bc1bcec4008f09a4bfc1050828a4a5d4010c4a7670578\"" May 17 00:06:14.706289 containerd[2012]: time="2025-05-17T00:06:14.706215555Z" level=info msg="StartContainer for 
\"8d3d7b2845336647235bc1bcec4008f09a4bfc1050828a4a5d4010c4a7670578\"" May 17 00:06:14.766871 systemd[1]: Started cri-containerd-8d3d7b2845336647235bc1bcec4008f09a4bfc1050828a4a5d4010c4a7670578.scope - libcontainer container 8d3d7b2845336647235bc1bcec4008f09a4bfc1050828a4a5d4010c4a7670578. May 17 00:06:14.819121 containerd[2012]: time="2025-05-17T00:06:14.819053248Z" level=info msg="StartContainer for \"8d3d7b2845336647235bc1bcec4008f09a4bfc1050828a4a5d4010c4a7670578\" returns successfully" May 17 00:06:14.843881 systemd[1]: cri-containerd-8d3d7b2845336647235bc1bcec4008f09a4bfc1050828a4a5d4010c4a7670578.scope: Deactivated successfully. May 17 00:06:14.905754 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d3d7b2845336647235bc1bcec4008f09a4bfc1050828a4a5d4010c4a7670578-rootfs.mount: Deactivated successfully. May 17 00:06:16.226045 containerd[2012]: time="2025-05-17T00:06:16.225946827Z" level=info msg="shim disconnected" id=8d3d7b2845336647235bc1bcec4008f09a4bfc1050828a4a5d4010c4a7670578 namespace=k8s.io May 17 00:06:16.226045 containerd[2012]: time="2025-05-17T00:06:16.226030227Z" level=warning msg="cleaning up after shim disconnected" id=8d3d7b2845336647235bc1bcec4008f09a4bfc1050828a4a5d4010c4a7670578 namespace=k8s.io May 17 00:06:16.227249 containerd[2012]: time="2025-05-17T00:06:16.226082007Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:06:16.892904 containerd[2012]: time="2025-05-17T00:06:16.892831794Z" level=info msg="CreateContainer within sandbox \"7b52d9cc7400795ee7505312651bde168c221f15a859c806d5019601686dfb65\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 17 00:06:16.939067 containerd[2012]: time="2025-05-17T00:06:16.938983698Z" level=info msg="CreateContainer within sandbox \"7b52d9cc7400795ee7505312651bde168c221f15a859c806d5019601686dfb65\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8a10236c70270b5d84ec0c55f11f6c3d780fa1cc3781896880c7d04db45324d4\"" May 17 00:06:16.943225 containerd[2012]: time="2025-05-17T00:06:16.941376450Z" level=info msg="StartContainer for \"8a10236c70270b5d84ec0c55f11f6c3d780fa1cc3781896880c7d04db45324d4\"" May 17 00:06:17.012260 systemd[1]: run-containerd-runc-k8s.io-8a10236c70270b5d84ec0c55f11f6c3d780fa1cc3781896880c7d04db45324d4-runc.2n0bFX.mount: Deactivated successfully. May 17 00:06:17.027839 systemd[1]: Started cri-containerd-8a10236c70270b5d84ec0c55f11f6c3d780fa1cc3781896880c7d04db45324d4.scope - libcontainer container 8a10236c70270b5d84ec0c55f11f6c3d780fa1cc3781896880c7d04db45324d4. May 17 00:06:17.081100 containerd[2012]: time="2025-05-17T00:06:17.080987499Z" level=info msg="StartContainer for \"8a10236c70270b5d84ec0c55f11f6c3d780fa1cc3781896880c7d04db45324d4\" returns successfully" May 17 00:06:17.106893 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 00:06:17.107452 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 17 00:06:17.108084 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 17 00:06:17.121822 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 17 00:06:17.122331 systemd[1]: cri-containerd-8a10236c70270b5d84ec0c55f11f6c3d780fa1cc3781896880c7d04db45324d4.scope: Deactivated successfully. May 17 00:06:17.168554 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
May 17 00:06:17.195396 containerd[2012]: time="2025-05-17T00:06:17.195301372Z" level=info msg="shim disconnected" id=8a10236c70270b5d84ec0c55f11f6c3d780fa1cc3781896880c7d04db45324d4 namespace=k8s.io May 17 00:06:17.196466 containerd[2012]: time="2025-05-17T00:06:17.195999376Z" level=warning msg="cleaning up after shim disconnected" id=8a10236c70270b5d84ec0c55f11f6c3d780fa1cc3781896880c7d04db45324d4 namespace=k8s.io May 17 00:06:17.196466 containerd[2012]: time="2025-05-17T00:06:17.196180324Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:06:17.895514 containerd[2012]: time="2025-05-17T00:06:17.895403875Z" level=info msg="CreateContainer within sandbox \"7b52d9cc7400795ee7505312651bde168c221f15a859c806d5019601686dfb65\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 17 00:06:17.917765 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a10236c70270b5d84ec0c55f11f6c3d780fa1cc3781896880c7d04db45324d4-rootfs.mount: Deactivated successfully. May 17 00:06:17.939066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2970669146.mount: Deactivated successfully. May 17 00:06:17.957997 containerd[2012]: time="2025-05-17T00:06:17.957066151Z" level=info msg="CreateContainer within sandbox \"7b52d9cc7400795ee7505312651bde168c221f15a859c806d5019601686dfb65\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f3f892fc81fc3746f21de05f2cf7eec9ab7ac4863f66054a9e5c72f67e81ef0c\"" May 17 00:06:17.959552 containerd[2012]: time="2025-05-17T00:06:17.959109643Z" level=info msg="StartContainer for \"f3f892fc81fc3746f21de05f2cf7eec9ab7ac4863f66054a9e5c72f67e81ef0c\"" May 17 00:06:18.038861 systemd[1]: Started cri-containerd-f3f892fc81fc3746f21de05f2cf7eec9ab7ac4863f66054a9e5c72f67e81ef0c.scope - libcontainer container f3f892fc81fc3746f21de05f2cf7eec9ab7ac4863f66054a9e5c72f67e81ef0c. May 17 00:06:18.145306 containerd[2012]: time="2025-05-17T00:06:18.144964852Z" level=info msg="StartContainer for \"f3f892fc81fc3746f21de05f2cf7eec9ab7ac4863f66054a9e5c72f67e81ef0c\" returns successfully" May 17 00:06:18.145862 systemd[1]: cri-containerd-f3f892fc81fc3746f21de05f2cf7eec9ab7ac4863f66054a9e5c72f67e81ef0c.scope: Deactivated successfully. 
May 17 00:06:18.241981 containerd[2012]: time="2025-05-17T00:06:18.241886021Z" level=info msg="shim disconnected" id=f3f892fc81fc3746f21de05f2cf7eec9ab7ac4863f66054a9e5c72f67e81ef0c namespace=k8s.io May 17 00:06:18.241981 containerd[2012]: time="2025-05-17T00:06:18.241961369Z" level=warning msg="cleaning up after shim disconnected" id=f3f892fc81fc3746f21de05f2cf7eec9ab7ac4863f66054a9e5c72f67e81ef0c namespace=k8s.io May 17 00:06:18.241981 containerd[2012]: time="2025-05-17T00:06:18.241983725Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:06:18.273385 containerd[2012]: time="2025-05-17T00:06:18.273096881Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:06:18Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 17 00:06:18.812583 containerd[2012]: time="2025-05-17T00:06:18.812246540Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:06:18.814904 containerd[2012]: time="2025-05-17T00:06:18.814740164Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17136657" May 17 00:06:18.817658 containerd[2012]: time="2025-05-17T00:06:18.817554668Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:06:18.820967 containerd[2012]: time="2025-05-17T00:06:18.820748204Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 4.147675389s" May 17 00:06:18.820967 containerd[2012]: time="2025-05-17T00:06:18.820812740Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 17 00:06:18.826603 containerd[2012]: time="2025-05-17T00:06:18.826402004Z" level=info msg="CreateContainer within sandbox \"d12e0a5168b4933ccda499f6bbec35fa097aabe028e7591d631f143c32227098\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 17 00:06:18.850733 containerd[2012]: time="2025-05-17T00:06:18.850589456Z" level=info msg="CreateContainer within sandbox \"d12e0a5168b4933ccda499f6bbec35fa097aabe028e7591d631f143c32227098\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"94356092fa90083dcaec60867e8398c38f69454f621d24323dd186095bab3ceb\"" May 17 00:06:18.853218 containerd[2012]: time="2025-05-17T00:06:18.851485568Z" level=info msg="StartContainer for \"94356092fa90083dcaec60867e8398c38f69454f621d24323dd186095bab3ceb\"" May 17 00:06:18.902268 systemd[1]: Started cri-containerd-94356092fa90083dcaec60867e8398c38f69454f621d24323dd186095bab3ceb.scope - libcontainer container 94356092fa90083dcaec60867e8398c38f69454f621d24323dd186095bab3ceb. 
May 17 00:06:18.913196 containerd[2012]: time="2025-05-17T00:06:18.909840116Z" level=info msg="CreateContainer within sandbox \"7b52d9cc7400795ee7505312651bde168c221f15a859c806d5019601686dfb65\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 17 00:06:18.925166 systemd[1]: run-containerd-runc-k8s.io-f3f892fc81fc3746f21de05f2cf7eec9ab7ac4863f66054a9e5c72f67e81ef0c-runc.PuuJvU.mount: Deactivated successfully. May 17 00:06:18.926846 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f3f892fc81fc3746f21de05f2cf7eec9ab7ac4863f66054a9e5c72f67e81ef0c-rootfs.mount: Deactivated successfully. May 17 00:06:18.967035 containerd[2012]: time="2025-05-17T00:06:18.966826736Z" level=info msg="CreateContainer within sandbox \"7b52d9cc7400795ee7505312651bde168c221f15a859c806d5019601686dfb65\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1292a2ffe3492e84ff2b278a9e4323b8af94046a715bfa8e2ab041f94ea55045\"" May 17 00:06:18.971705 containerd[2012]: time="2025-05-17T00:06:18.969399920Z" level=info msg="StartContainer for \"1292a2ffe3492e84ff2b278a9e4323b8af94046a715bfa8e2ab041f94ea55045\"" May 17 00:06:19.040372 containerd[2012]: time="2025-05-17T00:06:19.040251917Z" level=info msg="StartContainer for \"94356092fa90083dcaec60867e8398c38f69454f621d24323dd186095bab3ceb\" returns successfully" May 17 00:06:19.074855 systemd[1]: Started cri-containerd-1292a2ffe3492e84ff2b278a9e4323b8af94046a715bfa8e2ab041f94ea55045.scope - libcontainer container 1292a2ffe3492e84ff2b278a9e4323b8af94046a715bfa8e2ab041f94ea55045. May 17 00:06:19.136393 systemd[1]: cri-containerd-1292a2ffe3492e84ff2b278a9e4323b8af94046a715bfa8e2ab041f94ea55045.scope: Deactivated successfully. May 17 00:06:19.146610 containerd[2012]: time="2025-05-17T00:06:19.145073177Z" level=info msg="StartContainer for \"1292a2ffe3492e84ff2b278a9e4323b8af94046a715bfa8e2ab041f94ea55045\" returns successfully" May 17 00:06:19.294847 containerd[2012]: time="2025-05-17T00:06:19.294646458Z" level=info msg="shim disconnected" id=1292a2ffe3492e84ff2b278a9e4323b8af94046a715bfa8e2ab041f94ea55045 namespace=k8s.io May 17 00:06:19.295143 containerd[2012]: time="2025-05-17T00:06:19.294842310Z" level=warning msg="cleaning up after shim disconnected" id=1292a2ffe3492e84ff2b278a9e4323b8af94046a715bfa8e2ab041f94ea55045 namespace=k8s.io May 17 00:06:19.295143 containerd[2012]: time="2025-05-17T00:06:19.294894294Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:06:19.919376 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1292a2ffe3492e84ff2b278a9e4323b8af94046a715bfa8e2ab041f94ea55045-rootfs.mount: Deactivated successfully. 
May 17 00:06:19.941797 containerd[2012]: time="2025-05-17T00:06:19.941612961Z" level=info msg="CreateContainer within sandbox \"7b52d9cc7400795ee7505312651bde168c221f15a859c806d5019601686dfb65\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 17 00:06:19.999591 containerd[2012]: time="2025-05-17T00:06:19.999462934Z" level=info msg="CreateContainer within sandbox \"7b52d9cc7400795ee7505312651bde168c221f15a859c806d5019601686dfb65\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"810c3740c206d6babacfeeaa9d53e39c68d0704227844100f4092b6ae0d06b35\"" May 17 00:06:20.002693 containerd[2012]: time="2025-05-17T00:06:20.001187478Z" level=info msg="StartContainer for \"810c3740c206d6babacfeeaa9d53e39c68d0704227844100f4092b6ae0d06b35\"" May 17 00:06:20.128284 systemd[1]: Started cri-containerd-810c3740c206d6babacfeeaa9d53e39c68d0704227844100f4092b6ae0d06b35.scope - libcontainer container 810c3740c206d6babacfeeaa9d53e39c68d0704227844100f4092b6ae0d06b35. May 17 00:06:20.267669 containerd[2012]: time="2025-05-17T00:06:20.266150731Z" level=info msg="StartContainer for \"810c3740c206d6babacfeeaa9d53e39c68d0704227844100f4092b6ae0d06b35\" returns successfully" May 17 00:06:20.719772 kubelet[3216]: I0517 00:06:20.717273 3216 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 17 00:06:20.918783 systemd[1]: run-containerd-runc-k8s.io-810c3740c206d6babacfeeaa9d53e39c68d0704227844100f4092b6ae0d06b35-runc.U3er8i.mount: Deactivated successfully. May 17 00:06:20.926406 kubelet[3216]: I0517 00:06:20.926310 3216 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-hs96r" podStartSLOduration=3.287669152 podStartE2EDuration="18.926286406s" podCreationTimestamp="2025-05-17 00:06:02 +0000 UTC" firstStartedPulling="2025-05-17 00:06:03.183940646 +0000 UTC m=+7.763645140" lastFinishedPulling="2025-05-17 00:06:18.822557888 +0000 UTC m=+23.402262394" observedRunningTime="2025-05-17 00:06:20.226534495 +0000 UTC m=+24.806239013" watchObservedRunningTime="2025-05-17 00:06:20.926286406 +0000 UTC m=+25.505990912" May 17 00:06:20.947334 systemd[1]: Created slice kubepods-burstable-podbae4ab1b_8f5d_498d_a115_79f30997ef22.slice - libcontainer container kubepods-burstable-podbae4ab1b_8f5d_498d_a115_79f30997ef22.slice. May 17 00:06:20.974451 systemd[1]: Created slice kubepods-burstable-pod7824355d_b713_4aa2_ba47_26927ff2292c.slice - libcontainer container kubepods-burstable-pod7824355d_b713_4aa2_ba47_26927ff2292c.slice. 
May 17 00:06:21.042112 kubelet[3216]: I0517 00:06:21.041804 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pmg8\" (UniqueName: \"kubernetes.io/projected/7824355d-b713-4aa2-ba47-26927ff2292c-kube-api-access-7pmg8\") pod \"coredns-7c65d6cfc9-6x5l6\" (UID: \"7824355d-b713-4aa2-ba47-26927ff2292c\") " pod="kube-system/coredns-7c65d6cfc9-6x5l6" May 17 00:06:21.042112 kubelet[3216]: I0517 00:06:21.041916 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bae4ab1b-8f5d-498d-a115-79f30997ef22-config-volume\") pod \"coredns-7c65d6cfc9-wnnds\" (UID: \"bae4ab1b-8f5d-498d-a115-79f30997ef22\") " pod="kube-system/coredns-7c65d6cfc9-wnnds" May 17 00:06:21.042112 kubelet[3216]: I0517 00:06:21.041955 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7824355d-b713-4aa2-ba47-26927ff2292c-config-volume\") pod \"coredns-7c65d6cfc9-6x5l6\" (UID: \"7824355d-b713-4aa2-ba47-26927ff2292c\") " pod="kube-system/coredns-7c65d6cfc9-6x5l6" May 17 00:06:21.042112 kubelet[3216]: I0517 00:06:21.042033 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69xtl\" (UniqueName: \"kubernetes.io/projected/bae4ab1b-8f5d-498d-a115-79f30997ef22-kube-api-access-69xtl\") pod \"coredns-7c65d6cfc9-wnnds\" (UID: \"bae4ab1b-8f5d-498d-a115-79f30997ef22\") " pod="kube-system/coredns-7c65d6cfc9-wnnds" May 17 00:06:21.214825 kubelet[3216]: I0517 00:06:21.214578 3216 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zzhz8" podStartSLOduration=7.405866841 podStartE2EDuration="19.214548416s" podCreationTimestamp="2025-05-17 00:06:02 +0000 UTC" firstStartedPulling="2025-05-17 00:06:02.864037096 +0000 UTC m=+7.443741602" lastFinishedPulling="2025-05-17 00:06:14.672718587 +0000 UTC m=+19.252423177" observedRunningTime="2025-05-17 00:06:21.137291827 +0000 UTC m=+25.716996417" watchObservedRunningTime="2025-05-17 00:06:21.214548416 +0000 UTC m=+25.794252946" May 17 00:06:21.262968 containerd[2012]: time="2025-05-17T00:06:21.262794344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wnnds,Uid:bae4ab1b-8f5d-498d-a115-79f30997ef22,Namespace:kube-system,Attempt:0,}" May 17 00:06:21.289060 containerd[2012]: time="2025-05-17T00:06:21.288815240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6x5l6,Uid:7824355d-b713-4aa2-ba47-26927ff2292c,Namespace:kube-system,Attempt:0,}" May 17 00:06:23.789837 systemd-networkd[1852]: cilium_host: Link UP May 17 00:06:23.791156 systemd-networkd[1852]: cilium_net: Link UP May 17 00:06:23.792046 (udev-worker)[4235]: Network interface NamePolicy= disabled on kernel command line. May 17 00:06:23.793680 (udev-worker)[4196]: Network interface NamePolicy= disabled on kernel command line. 
May 17 00:06:23.795972 systemd-networkd[1852]: cilium_net: Gained carrier May 17 00:06:23.796712 systemd-networkd[1852]: cilium_host: Gained carrier May 17 00:06:23.797059 systemd-networkd[1852]: cilium_net: Gained IPv6LL May 17 00:06:23.798576 systemd-networkd[1852]: cilium_host: Gained IPv6LL May 17 00:06:23.983681 systemd-networkd[1852]: cilium_vxlan: Link UP May 17 00:06:23.983701 systemd-networkd[1852]: cilium_vxlan: Gained carrier May 17 00:06:24.517538 kernel: NET: Registered PF_ALG protocol family May 17 00:06:25.929707 systemd-networkd[1852]: cilium_vxlan: Gained IPv6LL May 17 00:06:25.977797 systemd-networkd[1852]: lxc_health: Link UP May 17 00:06:25.983912 systemd-networkd[1852]: lxc_health: Gained carrier May 17 00:06:26.352833 systemd-networkd[1852]: lxcba9459fa3cb3: Link UP May 17 00:06:26.361552 kernel: eth0: renamed from tmpd235e May 17 00:06:26.367816 systemd-networkd[1852]: lxcba9459fa3cb3: Gained carrier May 17 00:06:26.416260 systemd-networkd[1852]: lxc84fe9498f390: Link UP May 17 00:06:26.432295 kernel: eth0: renamed from tmp63387 May 17 00:06:26.436644 (udev-worker)[4577]: Network interface NamePolicy= disabled on kernel command line. May 17 00:06:26.443280 systemd-networkd[1852]: lxc84fe9498f390: Gained carrier May 17 00:06:27.720899 systemd-networkd[1852]: lxc_health: Gained IPv6LL May 17 00:06:27.849346 systemd-networkd[1852]: lxc84fe9498f390: Gained IPv6LL May 17 00:06:27.976792 systemd-networkd[1852]: lxcba9459fa3cb3: Gained IPv6LL May 17 00:06:30.606090 ntpd[1988]: Listen normally on 7 cilium_host 192.168.0.50:123 May 17 00:06:30.606268 ntpd[1988]: Listen normally on 8 cilium_net [fe80::742b:48ff:fe5d:3b7e%4]:123 May 17 00:06:30.606871 ntpd[1988]: 17 May 00:06:30 ntpd[1988]: Listen normally on 7 cilium_host 192.168.0.50:123 May 17 00:06:30.606871 ntpd[1988]: 17 May 00:06:30 ntpd[1988]: Listen normally on 8 cilium_net [fe80::742b:48ff:fe5d:3b7e%4]:123 May 17 00:06:30.606871 ntpd[1988]: 17 May 00:06:30 ntpd[1988]: Listen normally on 9 cilium_host [fe80::6c00:72ff:fec3:60e9%5]:123 May 17 00:06:30.606871 ntpd[1988]: 17 May 00:06:30 ntpd[1988]: Listen normally on 10 cilium_vxlan [fe80::182e:e6ff:febb:426b%6]:123 May 17 00:06:30.606871 ntpd[1988]: 17 May 00:06:30 ntpd[1988]: Listen normally on 11 lxc_health [fe80::7cf5:62ff:feec:f4f8%8]:123 May 17 00:06:30.606871 ntpd[1988]: 17 May 00:06:30 ntpd[1988]: Listen normally on 12 lxcba9459fa3cb3 [fe80::60d7:deff:fe09:a54b%10]:123 May 17 00:06:30.606871 ntpd[1988]: 17 May 00:06:30 ntpd[1988]: Listen normally on 13 lxc84fe9498f390 [fe80::f0ab:b5ff:fe4d:a4ef%12]:123 May 17 00:06:30.606369 ntpd[1988]: Listen normally on 9 cilium_host [fe80::6c00:72ff:fec3:60e9%5]:123 May 17 00:06:30.606444 ntpd[1988]: Listen normally on 10 cilium_vxlan [fe80::182e:e6ff:febb:426b%6]:123 May 17 00:06:30.606630 ntpd[1988]: Listen normally on 11 lxc_health [fe80::7cf5:62ff:feec:f4f8%8]:123 May 17 00:06:30.606733 ntpd[1988]: Listen normally on 12 lxcba9459fa3cb3 [fe80::60d7:deff:fe09:a54b%10]:123 May 17 00:06:30.606811 ntpd[1988]: Listen normally on 13 lxc84fe9498f390 [fe80::f0ab:b5ff:fe4d:a4ef%12]:123 May 17 00:06:31.372898 systemd[1]: Started sshd@7-172.31.26.249:22-139.178.89.65:37258.service - OpenSSH per-connection server daemon (139.178.89.65:37258). 
May 17 00:06:31.563066 sshd[4599]: Accepted publickey for core from 139.178.89.65 port 37258 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:06:31.567042 sshd[4599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:06:31.578206 systemd-logind[1993]: New session 8 of user core. May 17 00:06:31.585233 systemd[1]: Started session-8.scope - Session 8 of User core. May 17 00:06:31.923622 sshd[4599]: pam_unix(sshd:session): session closed for user core May 17 00:06:31.934470 systemd[1]: sshd@7-172.31.26.249:22-139.178.89.65:37258.service: Deactivated successfully. May 17 00:06:31.942826 systemd[1]: session-8.scope: Deactivated successfully. May 17 00:06:31.945840 systemd-logind[1993]: Session 8 logged out. Waiting for processes to exit. May 17 00:06:31.949900 systemd-logind[1993]: Removed session 8. May 17 00:06:36.273070 containerd[2012]: time="2025-05-17T00:06:36.272813086Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:06:36.273070 containerd[2012]: time="2025-05-17T00:06:36.272928646Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:06:36.273070 containerd[2012]: time="2025-05-17T00:06:36.272967058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:06:36.274182 containerd[2012]: time="2025-05-17T00:06:36.273161314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:06:36.344445 systemd[1]: Started cri-containerd-d235e629258217c963839d34f85ebba3f18b6bff391ac8932ec7b4dfd19b5458.scope - libcontainer container d235e629258217c963839d34f85ebba3f18b6bff391ac8932ec7b4dfd19b5458. May 17 00:06:36.398871 containerd[2012]: time="2025-05-17T00:06:36.398377319Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:06:36.398871 containerd[2012]: time="2025-05-17T00:06:36.398543783Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:06:36.398871 containerd[2012]: time="2025-05-17T00:06:36.398572655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:06:36.401560 containerd[2012]: time="2025-05-17T00:06:36.399678311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:06:36.466341 systemd[1]: Started cri-containerd-63387d4b7c348b9e16834b5eaaff70ef6da81eecff3bbfc0fa1800181fb69244.scope - libcontainer container 63387d4b7c348b9e16834b5eaaff70ef6da81eecff3bbfc0fa1800181fb69244. 
May 17 00:06:36.537218 containerd[2012]: time="2025-05-17T00:06:36.537127668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wnnds,Uid:bae4ab1b-8f5d-498d-a115-79f30997ef22,Namespace:kube-system,Attempt:0,} returns sandbox id \"d235e629258217c963839d34f85ebba3f18b6bff391ac8932ec7b4dfd19b5458\"" May 17 00:06:36.552700 containerd[2012]: time="2025-05-17T00:06:36.552251880Z" level=info msg="CreateContainer within sandbox \"d235e629258217c963839d34f85ebba3f18b6bff391ac8932ec7b4dfd19b5458\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:06:36.593582 containerd[2012]: time="2025-05-17T00:06:36.593426136Z" level=info msg="CreateContainer within sandbox \"d235e629258217c963839d34f85ebba3f18b6bff391ac8932ec7b4dfd19b5458\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d6ceaabf46942a833fa7e47d92f1b013f0c340dc5a4425c9afd6a47478b214d0\"" May 17 00:06:36.598711 containerd[2012]: time="2025-05-17T00:06:36.596999664Z" level=info msg="StartContainer for \"d6ceaabf46942a833fa7e47d92f1b013f0c340dc5a4425c9afd6a47478b214d0\"" May 17 00:06:36.638897 containerd[2012]: time="2025-05-17T00:06:36.638812548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6x5l6,Uid:7824355d-b713-4aa2-ba47-26927ff2292c,Namespace:kube-system,Attempt:0,} returns sandbox id \"63387d4b7c348b9e16834b5eaaff70ef6da81eecff3bbfc0fa1800181fb69244\"" May 17 00:06:36.655457 containerd[2012]: time="2025-05-17T00:06:36.654783636Z" level=info msg="CreateContainer within sandbox \"63387d4b7c348b9e16834b5eaaff70ef6da81eecff3bbfc0fa1800181fb69244\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:06:36.703835 containerd[2012]: time="2025-05-17T00:06:36.703324933Z" level=info msg="CreateContainer within sandbox \"63387d4b7c348b9e16834b5eaaff70ef6da81eecff3bbfc0fa1800181fb69244\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b555cbd94ae299fbefd4b60c0e86cdcee865ae2b2a3999f24e48a400c28de297\"" May 17 00:06:36.707918 containerd[2012]: time="2025-05-17T00:06:36.707815561Z" level=info msg="StartContainer for \"b555cbd94ae299fbefd4b60c0e86cdcee865ae2b2a3999f24e48a400c28de297\"" May 17 00:06:36.711376 systemd[1]: Started cri-containerd-d6ceaabf46942a833fa7e47d92f1b013f0c340dc5a4425c9afd6a47478b214d0.scope - libcontainer container d6ceaabf46942a833fa7e47d92f1b013f0c340dc5a4425c9afd6a47478b214d0. May 17 00:06:36.799868 systemd[1]: Started cri-containerd-b555cbd94ae299fbefd4b60c0e86cdcee865ae2b2a3999f24e48a400c28de297.scope - libcontainer container b555cbd94ae299fbefd4b60c0e86cdcee865ae2b2a3999f24e48a400c28de297. May 17 00:06:36.851623 containerd[2012]: time="2025-05-17T00:06:36.851539153Z" level=info msg="StartContainer for \"d6ceaabf46942a833fa7e47d92f1b013f0c340dc5a4425c9afd6a47478b214d0\" returns successfully" May 17 00:06:36.918596 containerd[2012]: time="2025-05-17T00:06:36.918522374Z" level=info msg="StartContainer for \"b555cbd94ae299fbefd4b60c0e86cdcee865ae2b2a3999f24e48a400c28de297\" returns successfully" May 17 00:06:36.966105 systemd[1]: Started sshd@8-172.31.26.249:22-139.178.89.65:41262.service - OpenSSH per-connection server daemon (139.178.89.65:41262). 
May 17 00:06:37.113288 kubelet[3216]: I0517 00:06:37.110804 3216 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-6x5l6" podStartSLOduration=35.110779583 podStartE2EDuration="35.110779583s" podCreationTimestamp="2025-05-17 00:06:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:06:37.078110482 +0000 UTC m=+41.657815036" watchObservedRunningTime="2025-05-17 00:06:37.110779583 +0000 UTC m=+41.690484089" May 17 00:06:37.168678 sshd[4779]: Accepted publickey for core from 139.178.89.65 port 41262 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:06:37.171591 sshd[4779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:06:37.182097 systemd-logind[1993]: New session 9 of user core. May 17 00:06:37.188985 systemd[1]: Started session-9.scope - Session 9 of User core. May 17 00:06:37.439530 sshd[4779]: pam_unix(sshd:session): session closed for user core May 17 00:06:37.446310 systemd-logind[1993]: Session 9 logged out. Waiting for processes to exit. May 17 00:06:37.447988 systemd[1]: sshd@8-172.31.26.249:22-139.178.89.65:41262.service: Deactivated successfully. May 17 00:06:37.452681 systemd[1]: session-9.scope: Deactivated successfully. May 17 00:06:37.455365 systemd-logind[1993]: Removed session 9. May 17 00:06:38.078061 kubelet[3216]: I0517 00:06:38.077958 3216 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-wnnds" podStartSLOduration=36.077924987 podStartE2EDuration="36.077924987s" podCreationTimestamp="2025-05-17 00:06:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:06:37.111845675 +0000 UTC m=+41.691550241" watchObservedRunningTime="2025-05-17 00:06:38.077924987 +0000 UTC m=+42.657629493" May 17 00:06:42.483068 systemd[1]: Started sshd@9-172.31.26.249:22-139.178.89.65:41268.service - OpenSSH per-connection server daemon (139.178.89.65:41268). May 17 00:06:42.656617 sshd[4805]: Accepted publickey for core from 139.178.89.65 port 41268 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:06:42.659561 sshd[4805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:06:42.668848 systemd-logind[1993]: New session 10 of user core. May 17 00:06:42.674844 systemd[1]: Started session-10.scope - Session 10 of User core. May 17 00:06:42.923560 sshd[4805]: pam_unix(sshd:session): session closed for user core May 17 00:06:42.930868 systemd[1]: sshd@9-172.31.26.249:22-139.178.89.65:41268.service: Deactivated successfully. May 17 00:06:42.936927 systemd[1]: session-10.scope: Deactivated successfully. May 17 00:06:42.940022 systemd-logind[1993]: Session 10 logged out. Waiting for processes to exit. May 17 00:06:42.943001 systemd-logind[1993]: Removed session 10. May 17 00:06:47.965047 systemd[1]: Started sshd@10-172.31.26.249:22-139.178.89.65:49154.service - OpenSSH per-connection server daemon (139.178.89.65:49154). May 17 00:06:48.146181 sshd[4821]: Accepted publickey for core from 139.178.89.65 port 49154 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:06:48.147916 sshd[4821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:06:48.158439 systemd-logind[1993]: New session 11 of user core. 
May 17 00:06:48.165916 systemd[1]: Started session-11.scope - Session 11 of User core. May 17 00:06:48.421283 sshd[4821]: pam_unix(sshd:session): session closed for user core May 17 00:06:48.432099 systemd[1]: sshd@10-172.31.26.249:22-139.178.89.65:49154.service: Deactivated successfully. May 17 00:06:48.436646 systemd[1]: session-11.scope: Deactivated successfully. May 17 00:06:48.439289 systemd-logind[1993]: Session 11 logged out. Waiting for processes to exit. May 17 00:06:48.442166 systemd-logind[1993]: Removed session 11. May 17 00:06:53.460072 systemd[1]: Started sshd@11-172.31.26.249:22-139.178.89.65:49160.service - OpenSSH per-connection server daemon (139.178.89.65:49160). May 17 00:06:53.638561 sshd[4834]: Accepted publickey for core from 139.178.89.65 port 49160 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:06:53.641289 sshd[4834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:06:53.649046 systemd-logind[1993]: New session 12 of user core. May 17 00:06:53.656784 systemd[1]: Started session-12.scope - Session 12 of User core. May 17 00:06:53.899777 sshd[4834]: pam_unix(sshd:session): session closed for user core May 17 00:06:53.906028 systemd-logind[1993]: Session 12 logged out. Waiting for processes to exit. May 17 00:06:53.907052 systemd[1]: sshd@11-172.31.26.249:22-139.178.89.65:49160.service: Deactivated successfully. May 17 00:06:53.910886 systemd[1]: session-12.scope: Deactivated successfully. May 17 00:06:53.912744 systemd-logind[1993]: Removed session 12. May 17 00:06:53.941020 systemd[1]: Started sshd@12-172.31.26.249:22-139.178.89.65:49170.service - OpenSSH per-connection server daemon (139.178.89.65:49170). May 17 00:06:54.116828 sshd[4848]: Accepted publickey for core from 139.178.89.65 port 49170 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:06:54.119486 sshd[4848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:06:54.128738 systemd-logind[1993]: New session 13 of user core. May 17 00:06:54.133864 systemd[1]: Started session-13.scope - Session 13 of User core. May 17 00:06:54.451945 sshd[4848]: pam_unix(sshd:session): session closed for user core May 17 00:06:54.461864 systemd[1]: sshd@12-172.31.26.249:22-139.178.89.65:49170.service: Deactivated successfully. May 17 00:06:54.472095 systemd[1]: session-13.scope: Deactivated successfully. May 17 00:06:54.478169 systemd-logind[1993]: Session 13 logged out. Waiting for processes to exit. May 17 00:06:54.506927 systemd[1]: Started sshd@13-172.31.26.249:22-139.178.89.65:49182.service - OpenSSH per-connection server daemon (139.178.89.65:49182). May 17 00:06:54.509983 systemd-logind[1993]: Removed session 13. May 17 00:06:54.680943 sshd[4859]: Accepted publickey for core from 139.178.89.65 port 49182 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:06:54.683952 sshd[4859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:06:54.692478 systemd-logind[1993]: New session 14 of user core. May 17 00:06:54.704830 systemd[1]: Started session-14.scope - Session 14 of User core. May 17 00:06:54.960433 sshd[4859]: pam_unix(sshd:session): session closed for user core May 17 00:06:54.969703 systemd[1]: sshd@13-172.31.26.249:22-139.178.89.65:49182.service: Deactivated successfully. May 17 00:06:54.975603 systemd[1]: session-14.scope: Deactivated successfully. May 17 00:06:54.979260 systemd-logind[1993]: Session 14 logged out. 
Waiting for processes to exit. May 17 00:06:54.985339 systemd-logind[1993]: Removed session 14. May 17 00:07:00.004015 systemd[1]: Started sshd@14-172.31.26.249:22-139.178.89.65:60312.service - OpenSSH per-connection server daemon (139.178.89.65:60312). May 17 00:07:00.171252 sshd[4874]: Accepted publickey for core from 139.178.89.65 port 60312 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:07:00.174003 sshd[4874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:07:00.181251 systemd-logind[1993]: New session 15 of user core. May 17 00:07:00.194787 systemd[1]: Started session-15.scope - Session 15 of User core. May 17 00:07:00.433908 sshd[4874]: pam_unix(sshd:session): session closed for user core May 17 00:07:00.440663 systemd[1]: sshd@14-172.31.26.249:22-139.178.89.65:60312.service: Deactivated successfully. May 17 00:07:00.445802 systemd[1]: session-15.scope: Deactivated successfully. May 17 00:07:00.447719 systemd-logind[1993]: Session 15 logged out. Waiting for processes to exit. May 17 00:07:00.449354 systemd-logind[1993]: Removed session 15. May 17 00:07:05.476065 systemd[1]: Started sshd@15-172.31.26.249:22-139.178.89.65:60324.service - OpenSSH per-connection server daemon (139.178.89.65:60324). May 17 00:07:05.649230 sshd[4889]: Accepted publickey for core from 139.178.89.65 port 60324 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:07:05.652334 sshd[4889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:07:05.661091 systemd-logind[1993]: New session 16 of user core. May 17 00:07:05.670828 systemd[1]: Started session-16.scope - Session 16 of User core. May 17 00:07:05.913161 sshd[4889]: pam_unix(sshd:session): session closed for user core May 17 00:07:05.918050 systemd[1]: sshd@15-172.31.26.249:22-139.178.89.65:60324.service: Deactivated successfully. May 17 00:07:05.922153 systemd[1]: session-16.scope: Deactivated successfully. May 17 00:07:05.925881 systemd-logind[1993]: Session 16 logged out. Waiting for processes to exit. May 17 00:07:05.928423 systemd-logind[1993]: Removed session 16. May 17 00:07:10.958101 systemd[1]: Started sshd@16-172.31.26.249:22-139.178.89.65:46238.service - OpenSSH per-connection server daemon (139.178.89.65:46238). May 17 00:07:11.139450 sshd[4902]: Accepted publickey for core from 139.178.89.65 port 46238 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:07:11.145216 sshd[4902]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:07:11.156028 systemd-logind[1993]: New session 17 of user core. May 17 00:07:11.161167 systemd[1]: Started session-17.scope - Session 17 of User core. May 17 00:07:11.421599 sshd[4902]: pam_unix(sshd:session): session closed for user core May 17 00:07:11.427471 systemd-logind[1993]: Session 17 logged out. Waiting for processes to exit. May 17 00:07:11.429181 systemd[1]: sshd@16-172.31.26.249:22-139.178.89.65:46238.service: Deactivated successfully. May 17 00:07:11.432336 systemd[1]: session-17.scope: Deactivated successfully. May 17 00:07:11.438367 systemd-logind[1993]: Removed session 17. May 17 00:07:11.463095 systemd[1]: Started sshd@17-172.31.26.249:22-139.178.89.65:46242.service - OpenSSH per-connection server daemon (139.178.89.65:46242). 
May 17 00:07:11.637180 sshd[4915]: Accepted publickey for core from 139.178.89.65 port 46242 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:07:11.640146 sshd[4915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:07:11.651653 systemd-logind[1993]: New session 18 of user core. May 17 00:07:11.657859 systemd[1]: Started session-18.scope - Session 18 of User core. May 17 00:07:11.979716 sshd[4915]: pam_unix(sshd:session): session closed for user core May 17 00:07:11.986605 systemd[1]: sshd@17-172.31.26.249:22-139.178.89.65:46242.service: Deactivated successfully. May 17 00:07:11.993819 systemd[1]: session-18.scope: Deactivated successfully. May 17 00:07:11.996818 systemd-logind[1993]: Session 18 logged out. Waiting for processes to exit. May 17 00:07:12.000002 systemd-logind[1993]: Removed session 18. May 17 00:07:12.028035 systemd[1]: Started sshd@18-172.31.26.249:22-139.178.89.65:46248.service - OpenSSH per-connection server daemon (139.178.89.65:46248). May 17 00:07:12.195437 sshd[4926]: Accepted publickey for core from 139.178.89.65 port 46248 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:07:12.198595 sshd[4926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:07:12.207622 systemd-logind[1993]: New session 19 of user core. May 17 00:07:12.218868 systemd[1]: Started session-19.scope - Session 19 of User core. May 17 00:07:14.987429 sshd[4926]: pam_unix(sshd:session): session closed for user core May 17 00:07:15.004254 systemd-logind[1993]: Session 19 logged out. Waiting for processes to exit. May 17 00:07:15.007281 systemd[1]: sshd@18-172.31.26.249:22-139.178.89.65:46248.service: Deactivated successfully. May 17 00:07:15.018781 systemd[1]: session-19.scope: Deactivated successfully. May 17 00:07:15.038803 systemd-logind[1993]: Removed session 19. May 17 00:07:15.049166 systemd[1]: Started sshd@19-172.31.26.249:22-139.178.89.65:46256.service - OpenSSH per-connection server daemon (139.178.89.65:46256). May 17 00:07:15.236377 sshd[4945]: Accepted publickey for core from 139.178.89.65 port 46256 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:07:15.239366 sshd[4945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:07:15.251030 systemd-logind[1993]: New session 20 of user core. May 17 00:07:15.259923 systemd[1]: Started session-20.scope - Session 20 of User core. May 17 00:07:15.798666 sshd[4945]: pam_unix(sshd:session): session closed for user core May 17 00:07:15.808234 systemd[1]: sshd@19-172.31.26.249:22-139.178.89.65:46256.service: Deactivated successfully. May 17 00:07:15.815029 systemd[1]: session-20.scope: Deactivated successfully. May 17 00:07:15.817921 systemd-logind[1993]: Session 20 logged out. Waiting for processes to exit. May 17 00:07:15.843007 systemd[1]: Started sshd@20-172.31.26.249:22-139.178.89.65:46260.service - OpenSSH per-connection server daemon (139.178.89.65:46260). May 17 00:07:15.845389 systemd-logind[1993]: Removed session 20. May 17 00:07:16.011854 sshd[4955]: Accepted publickey for core from 139.178.89.65 port 46260 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:07:16.015978 sshd[4955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:07:16.025608 systemd-logind[1993]: New session 21 of user core. May 17 00:07:16.030814 systemd[1]: Started session-21.scope - Session 21 of User core. 
May 17 00:07:16.275547 sshd[4955]: pam_unix(sshd:session): session closed for user core May 17 00:07:16.285063 systemd-logind[1993]: Session 21 logged out. Waiting for processes to exit. May 17 00:07:16.285471 systemd[1]: sshd@20-172.31.26.249:22-139.178.89.65:46260.service: Deactivated successfully. May 17 00:07:16.292914 systemd[1]: session-21.scope: Deactivated successfully. May 17 00:07:16.297789 systemd-logind[1993]: Removed session 21. May 17 00:07:21.317365 systemd[1]: Started sshd@21-172.31.26.249:22-139.178.89.65:59510.service - OpenSSH per-connection server daemon (139.178.89.65:59510). May 17 00:07:21.485698 sshd[4968]: Accepted publickey for core from 139.178.89.65 port 59510 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:07:21.488385 sshd[4968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:07:21.496145 systemd-logind[1993]: New session 22 of user core. May 17 00:07:21.504758 systemd[1]: Started session-22.scope - Session 22 of User core. May 17 00:07:21.743913 sshd[4968]: pam_unix(sshd:session): session closed for user core May 17 00:07:21.751170 systemd[1]: sshd@21-172.31.26.249:22-139.178.89.65:59510.service: Deactivated successfully. May 17 00:07:21.754425 systemd[1]: session-22.scope: Deactivated successfully. May 17 00:07:21.756181 systemd-logind[1993]: Session 22 logged out. Waiting for processes to exit. May 17 00:07:21.758831 systemd-logind[1993]: Removed session 22. May 17 00:07:26.785028 systemd[1]: Started sshd@22-172.31.26.249:22-139.178.89.65:45212.service - OpenSSH per-connection server daemon (139.178.89.65:45212). May 17 00:07:26.953679 sshd[4985]: Accepted publickey for core from 139.178.89.65 port 45212 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:07:26.956487 sshd[4985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:07:26.965427 systemd-logind[1993]: New session 23 of user core. May 17 00:07:26.970272 systemd[1]: Started session-23.scope - Session 23 of User core. May 17 00:07:27.232385 sshd[4985]: pam_unix(sshd:session): session closed for user core May 17 00:07:27.239223 systemd[1]: sshd@22-172.31.26.249:22-139.178.89.65:45212.service: Deactivated successfully. May 17 00:07:27.243739 systemd[1]: session-23.scope: Deactivated successfully. May 17 00:07:27.246179 systemd-logind[1993]: Session 23 logged out. Waiting for processes to exit. May 17 00:07:27.248586 systemd-logind[1993]: Removed session 23. May 17 00:07:32.281977 systemd[1]: Started sshd@23-172.31.26.249:22-139.178.89.65:45214.service - OpenSSH per-connection server daemon (139.178.89.65:45214). May 17 00:07:32.447918 sshd[4998]: Accepted publickey for core from 139.178.89.65 port 45214 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:07:32.452132 sshd[4998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:07:32.459999 systemd-logind[1993]: New session 24 of user core. May 17 00:07:32.469784 systemd[1]: Started session-24.scope - Session 24 of User core. May 17 00:07:32.713596 sshd[4998]: pam_unix(sshd:session): session closed for user core May 17 00:07:32.723242 systemd[1]: sshd@23-172.31.26.249:22-139.178.89.65:45214.service: Deactivated successfully. May 17 00:07:32.727001 systemd[1]: session-24.scope: Deactivated successfully. May 17 00:07:32.729223 systemd-logind[1993]: Session 24 logged out. Waiting for processes to exit. May 17 00:07:32.731436 systemd-logind[1993]: Removed session 24. 
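The pattern above repeats for every inbound connection from 139.178.89.65: a transient sshd@<n>-<local>:22-<peer>:<port>.service unit is started, a session-<n>.scope is created for user core, and both are torn down when the client disconnects. A minimal sketch of how these transient units and sessions could be inspected on the node itself, assuming shell access to the host (the session number is taken from the log and only exists while that session is open):

    # list the per-connection sshd units systemd has generated
    systemctl list-units 'sshd@*.service' --no-legend

    # list logind sessions; session 22 from the log, if still open, shows up here
    loginctl list-sessions
    loginctl show-session 22

    # replay sshd's own messages for this window from the journal
    journalctl -t sshd --since '2025-05-17 00:07:00' --until '2025-05-17 00:08:00'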
May 17 00:07:37.754017 systemd[1]: Started sshd@24-172.31.26.249:22-139.178.89.65:42700.service - OpenSSH per-connection server daemon (139.178.89.65:42700). May 17 00:07:37.926849 sshd[5013]: Accepted publickey for core from 139.178.89.65 port 42700 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:07:37.929581 sshd[5013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:07:37.937277 systemd-logind[1993]: New session 25 of user core. May 17 00:07:37.944768 systemd[1]: Started session-25.scope - Session 25 of User core. May 17 00:07:38.188758 sshd[5013]: pam_unix(sshd:session): session closed for user core May 17 00:07:38.195779 systemd[1]: sshd@24-172.31.26.249:22-139.178.89.65:42700.service: Deactivated successfully. May 17 00:07:38.200810 systemd[1]: session-25.scope: Deactivated successfully. May 17 00:07:38.203384 systemd-logind[1993]: Session 25 logged out. Waiting for processes to exit. May 17 00:07:38.206281 systemd-logind[1993]: Removed session 25. May 17 00:07:38.228023 systemd[1]: Started sshd@25-172.31.26.249:22-139.178.89.65:42712.service - OpenSSH per-connection server daemon (139.178.89.65:42712). May 17 00:07:38.407448 sshd[5026]: Accepted publickey for core from 139.178.89.65 port 42712 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:07:38.410211 sshd[5026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:07:38.419040 systemd-logind[1993]: New session 26 of user core. May 17 00:07:38.424786 systemd[1]: Started session-26.scope - Session 26 of User core. May 17 00:07:40.858072 containerd[2012]: time="2025-05-17T00:07:40.856860231Z" level=info msg="StopContainer for \"94356092fa90083dcaec60867e8398c38f69454f621d24323dd186095bab3ceb\" with timeout 30 (s)" May 17 00:07:40.860200 containerd[2012]: time="2025-05-17T00:07:40.859209315Z" level=info msg="Stop container \"94356092fa90083dcaec60867e8398c38f69454f621d24323dd186095bab3ceb\" with signal terminated" May 17 00:07:40.893015 systemd[1]: cri-containerd-94356092fa90083dcaec60867e8398c38f69454f621d24323dd186095bab3ceb.scope: Deactivated successfully. May 17 00:07:40.911102 containerd[2012]: time="2025-05-17T00:07:40.911031099Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:07:40.921143 kubelet[3216]: E0517 00:07:40.921053 3216 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 17 00:07:40.929732 containerd[2012]: time="2025-05-17T00:07:40.929340820Z" level=info msg="StopContainer for \"810c3740c206d6babacfeeaa9d53e39c68d0704227844100f4092b6ae0d06b35\" with timeout 2 (s)" May 17 00:07:40.931368 containerd[2012]: time="2025-05-17T00:07:40.930037480Z" level=info msg="Stop container \"810c3740c206d6babacfeeaa9d53e39c68d0704227844100f4092b6ae0d06b35\" with signal terminated" May 17 00:07:40.953883 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-94356092fa90083dcaec60867e8398c38f69454f621d24323dd186095bab3ceb-rootfs.mount: Deactivated successfully. 
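Here containerd begins stopping a container with a 30-second grace period before the signal escalates, while containerd also fails to reload its CNI configuration after /etc/cni/net.d/05-cilium.conf is removed and the kubelet flags the runtime network as not ready. A hedged sketch of driving the same stop by hand through the CRI, assuming crictl is installed on the host and pointed at this containerd socket (the container ID is copied verbatim from the log):

    # inspect the container before stopping it
    crictl ps --id 94356092fa90083dcaec60867e8398c38f69454f621d24323dd186095bab3ceb
    crictl inspect 94356092fa90083dcaec60867e8398c38f69454f621d24323dd186095bab3ceb

    # stop with the same 30 s timeout (SIGTERM first, SIGKILL when it expires)
    crictl stop --timeout 30 94356092fa90083dcaec60867e8398c38f69454f621d24323dd186095bab3ceb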
May 17 00:07:40.959145 systemd-networkd[1852]: lxc_health: Link DOWN May 17 00:07:40.959160 systemd-networkd[1852]: lxc_health: Lost carrier May 17 00:07:40.982428 containerd[2012]: time="2025-05-17T00:07:40.982083976Z" level=info msg="shim disconnected" id=94356092fa90083dcaec60867e8398c38f69454f621d24323dd186095bab3ceb namespace=k8s.io May 17 00:07:40.982428 containerd[2012]: time="2025-05-17T00:07:40.982205128Z" level=warning msg="cleaning up after shim disconnected" id=94356092fa90083dcaec60867e8398c38f69454f621d24323dd186095bab3ceb namespace=k8s.io May 17 00:07:40.982428 containerd[2012]: time="2025-05-17T00:07:40.982263652Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:07:40.991608 systemd[1]: cri-containerd-810c3740c206d6babacfeeaa9d53e39c68d0704227844100f4092b6ae0d06b35.scope: Deactivated successfully. May 17 00:07:40.992315 systemd[1]: cri-containerd-810c3740c206d6babacfeeaa9d53e39c68d0704227844100f4092b6ae0d06b35.scope: Consumed 16.163s CPU time. May 17 00:07:41.025132 containerd[2012]: time="2025-05-17T00:07:41.024893124Z" level=info msg="StopContainer for \"94356092fa90083dcaec60867e8398c38f69454f621d24323dd186095bab3ceb\" returns successfully" May 17 00:07:41.025791 containerd[2012]: time="2025-05-17T00:07:41.025757496Z" level=info msg="StopPodSandbox for \"d12e0a5168b4933ccda499f6bbec35fa097aabe028e7591d631f143c32227098\"" May 17 00:07:41.025860 containerd[2012]: time="2025-05-17T00:07:41.025821456Z" level=info msg="Container to stop \"94356092fa90083dcaec60867e8398c38f69454f621d24323dd186095bab3ceb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:07:41.031133 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d12e0a5168b4933ccda499f6bbec35fa097aabe028e7591d631f143c32227098-shm.mount: Deactivated successfully. May 17 00:07:41.051265 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-810c3740c206d6babacfeeaa9d53e39c68d0704227844100f4092b6ae0d06b35-rootfs.mount: Deactivated successfully. May 17 00:07:41.055468 systemd[1]: cri-containerd-d12e0a5168b4933ccda499f6bbec35fa097aabe028e7591d631f143c32227098.scope: Deactivated successfully. 
May 17 00:07:41.066014 containerd[2012]: time="2025-05-17T00:07:41.065783904Z" level=info msg="shim disconnected" id=810c3740c206d6babacfeeaa9d53e39c68d0704227844100f4092b6ae0d06b35 namespace=k8s.io May 17 00:07:41.066014 containerd[2012]: time="2025-05-17T00:07:41.065971188Z" level=warning msg="cleaning up after shim disconnected" id=810c3740c206d6babacfeeaa9d53e39c68d0704227844100f4092b6ae0d06b35 namespace=k8s.io May 17 00:07:41.066341 containerd[2012]: time="2025-05-17T00:07:41.066105696Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:07:41.101595 containerd[2012]: time="2025-05-17T00:07:41.101352660Z" level=info msg="StopContainer for \"810c3740c206d6babacfeeaa9d53e39c68d0704227844100f4092b6ae0d06b35\" returns successfully" May 17 00:07:41.103140 containerd[2012]: time="2025-05-17T00:07:41.102829044Z" level=info msg="StopPodSandbox for \"7b52d9cc7400795ee7505312651bde168c221f15a859c806d5019601686dfb65\"" May 17 00:07:41.104667 containerd[2012]: time="2025-05-17T00:07:41.103328148Z" level=info msg="Container to stop \"8d3d7b2845336647235bc1bcec4008f09a4bfc1050828a4a5d4010c4a7670578\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:07:41.104667 containerd[2012]: time="2025-05-17T00:07:41.103362348Z" level=info msg="Container to stop \"8a10236c70270b5d84ec0c55f11f6c3d780fa1cc3781896880c7d04db45324d4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:07:41.104667 containerd[2012]: time="2025-05-17T00:07:41.103421340Z" level=info msg="Container to stop \"f3f892fc81fc3746f21de05f2cf7eec9ab7ac4863f66054a9e5c72f67e81ef0c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:07:41.104667 containerd[2012]: time="2025-05-17T00:07:41.103446144Z" level=info msg="Container to stop \"1292a2ffe3492e84ff2b278a9e4323b8af94046a715bfa8e2ab041f94ea55045\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:07:41.104667 containerd[2012]: time="2025-05-17T00:07:41.103513992Z" level=info msg="Container to stop \"810c3740c206d6babacfeeaa9d53e39c68d0704227844100f4092b6ae0d06b35\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:07:41.111434 containerd[2012]: time="2025-05-17T00:07:41.109694604Z" level=info msg="shim disconnected" id=d12e0a5168b4933ccda499f6bbec35fa097aabe028e7591d631f143c32227098 namespace=k8s.io May 17 00:07:41.111434 containerd[2012]: time="2025-05-17T00:07:41.110120712Z" level=warning msg="cleaning up after shim disconnected" id=d12e0a5168b4933ccda499f6bbec35fa097aabe028e7591d631f143c32227098 namespace=k8s.io May 17 00:07:41.113002 containerd[2012]: time="2025-05-17T00:07:41.112928448Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:07:41.117837 systemd[1]: cri-containerd-7b52d9cc7400795ee7505312651bde168c221f15a859c806d5019601686dfb65.scope: Deactivated successfully. 
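With the two workload containers stopped, containerd now tears down their pod sandboxes. Normally the kubelet drives this step; the following is only a debugging sketch of the equivalent manual teardown with crictl, using a sandbox ID copied from the log:

    # show the sandbox and any containers that belonged to it
    crictl inspectp d12e0a5168b4933ccda499f6bbec35fa097aabe028e7591d631f143c32227098
    crictl ps -a --pod d12e0a5168b4933ccda499f6bbec35fa097aabe028e7591d631f143c32227098

    # stop and remove the sandbox once its containers have exited
    crictl stopp d12e0a5168b4933ccda499f6bbec35fa097aabe028e7591d631f143c32227098
    crictl rmp d12e0a5168b4933ccda499f6bbec35fa097aabe028e7591d631f143c32227098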
May 17 00:07:41.163804 containerd[2012]: time="2025-05-17T00:07:41.163738249Z" level=info msg="TearDown network for sandbox \"d12e0a5168b4933ccda499f6bbec35fa097aabe028e7591d631f143c32227098\" successfully" May 17 00:07:41.163804 containerd[2012]: time="2025-05-17T00:07:41.163790521Z" level=info msg="StopPodSandbox for \"d12e0a5168b4933ccda499f6bbec35fa097aabe028e7591d631f143c32227098\" returns successfully" May 17 00:07:41.181436 containerd[2012]: time="2025-05-17T00:07:41.181357225Z" level=info msg="shim disconnected" id=7b52d9cc7400795ee7505312651bde168c221f15a859c806d5019601686dfb65 namespace=k8s.io May 17 00:07:41.182137 containerd[2012]: time="2025-05-17T00:07:41.181829161Z" level=warning msg="cleaning up after shim disconnected" id=7b52d9cc7400795ee7505312651bde168c221f15a859c806d5019601686dfb65 namespace=k8s.io May 17 00:07:41.182137 containerd[2012]: time="2025-05-17T00:07:41.181866625Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:07:41.211398 containerd[2012]: time="2025-05-17T00:07:41.211326481Z" level=info msg="TearDown network for sandbox \"7b52d9cc7400795ee7505312651bde168c221f15a859c806d5019601686dfb65\" successfully" May 17 00:07:41.211934 containerd[2012]: time="2025-05-17T00:07:41.211641853Z" level=info msg="StopPodSandbox for \"7b52d9cc7400795ee7505312651bde168c221f15a859c806d5019601686dfb65\" returns successfully" May 17 00:07:41.231605 kubelet[3216]: I0517 00:07:41.231426 3216 scope.go:117] "RemoveContainer" containerID="810c3740c206d6babacfeeaa9d53e39c68d0704227844100f4092b6ae0d06b35" May 17 00:07:41.239612 containerd[2012]: time="2025-05-17T00:07:41.239219209Z" level=info msg="RemoveContainer for \"810c3740c206d6babacfeeaa9d53e39c68d0704227844100f4092b6ae0d06b35\"" May 17 00:07:41.255568 containerd[2012]: time="2025-05-17T00:07:41.255452065Z" level=info msg="RemoveContainer for \"810c3740c206d6babacfeeaa9d53e39c68d0704227844100f4092b6ae0d06b35\" returns successfully" May 17 00:07:41.256850 kubelet[3216]: I0517 00:07:41.256274 3216 scope.go:117] "RemoveContainer" containerID="1292a2ffe3492e84ff2b278a9e4323b8af94046a715bfa8e2ab041f94ea55045" May 17 00:07:41.259216 containerd[2012]: time="2025-05-17T00:07:41.259154701Z" level=info msg="RemoveContainer for \"1292a2ffe3492e84ff2b278a9e4323b8af94046a715bfa8e2ab041f94ea55045\"" May 17 00:07:41.271207 containerd[2012]: time="2025-05-17T00:07:41.270968785Z" level=info msg="RemoveContainer for \"1292a2ffe3492e84ff2b278a9e4323b8af94046a715bfa8e2ab041f94ea55045\" returns successfully" May 17 00:07:41.271846 kubelet[3216]: I0517 00:07:41.271811 3216 scope.go:117] "RemoveContainer" containerID="f3f892fc81fc3746f21de05f2cf7eec9ab7ac4863f66054a9e5c72f67e81ef0c" May 17 00:07:41.276379 containerd[2012]: time="2025-05-17T00:07:41.276271897Z" level=info msg="RemoveContainer for \"f3f892fc81fc3746f21de05f2cf7eec9ab7ac4863f66054a9e5c72f67e81ef0c\"" May 17 00:07:41.282673 containerd[2012]: time="2025-05-17T00:07:41.282539617Z" level=info msg="RemoveContainer for \"f3f892fc81fc3746f21de05f2cf7eec9ab7ac4863f66054a9e5c72f67e81ef0c\" returns successfully" May 17 00:07:41.283071 kubelet[3216]: I0517 00:07:41.282909 3216 scope.go:117] "RemoveContainer" containerID="8a10236c70270b5d84ec0c55f11f6c3d780fa1cc3781896880c7d04db45324d4" May 17 00:07:41.285250 containerd[2012]: time="2025-05-17T00:07:41.285040177Z" level=info msg="RemoveContainer for \"8a10236c70270b5d84ec0c55f11f6c3d780fa1cc3781896880c7d04db45324d4\"" May 17 00:07:41.289704 kubelet[3216]: I0517 00:07:41.287854 3216 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3879a8df-9591-4b76-8e98-42b80a818d01-cilium-cgroup\") pod \"3879a8df-9591-4b76-8e98-42b80a818d01\" (UID: \"3879a8df-9591-4b76-8e98-42b80a818d01\") " May 17 00:07:41.289704 kubelet[3216]: I0517 00:07:41.287912 3216 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3879a8df-9591-4b76-8e98-42b80a818d01-host-proc-sys-net\") pod \"3879a8df-9591-4b76-8e98-42b80a818d01\" (UID: \"3879a8df-9591-4b76-8e98-42b80a818d01\") " May 17 00:07:41.289704 kubelet[3216]: I0517 00:07:41.287949 3216 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3879a8df-9591-4b76-8e98-42b80a818d01-cilium-run\") pod \"3879a8df-9591-4b76-8e98-42b80a818d01\" (UID: \"3879a8df-9591-4b76-8e98-42b80a818d01\") " May 17 00:07:41.289704 kubelet[3216]: I0517 00:07:41.287983 3216 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3879a8df-9591-4b76-8e98-42b80a818d01-hostproc\") pod \"3879a8df-9591-4b76-8e98-42b80a818d01\" (UID: \"3879a8df-9591-4b76-8e98-42b80a818d01\") " May 17 00:07:41.289704 kubelet[3216]: I0517 00:07:41.288028 3216 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3879a8df-9591-4b76-8e98-42b80a818d01-clustermesh-secrets\") pod \"3879a8df-9591-4b76-8e98-42b80a818d01\" (UID: \"3879a8df-9591-4b76-8e98-42b80a818d01\") " May 17 00:07:41.289704 kubelet[3216]: I0517 00:07:41.288062 3216 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3879a8df-9591-4b76-8e98-42b80a818d01-xtables-lock\") pod \"3879a8df-9591-4b76-8e98-42b80a818d01\" (UID: \"3879a8df-9591-4b76-8e98-42b80a818d01\") " May 17 00:07:41.290175 kubelet[3216]: I0517 00:07:41.288097 3216 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3879a8df-9591-4b76-8e98-42b80a818d01-bpf-maps\") pod \"3879a8df-9591-4b76-8e98-42b80a818d01\" (UID: \"3879a8df-9591-4b76-8e98-42b80a818d01\") " May 17 00:07:41.290175 kubelet[3216]: I0517 00:07:41.288134 3216 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxxlr\" (UniqueName: \"kubernetes.io/projected/25d56d7a-dc65-490c-bd1c-a75f9bff9e78-kube-api-access-lxxlr\") pod \"25d56d7a-dc65-490c-bd1c-a75f9bff9e78\" (UID: \"25d56d7a-dc65-490c-bd1c-a75f9bff9e78\") " May 17 00:07:41.290175 kubelet[3216]: I0517 00:07:41.288173 3216 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/25d56d7a-dc65-490c-bd1c-a75f9bff9e78-cilium-config-path\") pod \"25d56d7a-dc65-490c-bd1c-a75f9bff9e78\" (UID: \"25d56d7a-dc65-490c-bd1c-a75f9bff9e78\") " May 17 00:07:41.290175 kubelet[3216]: I0517 00:07:41.288209 3216 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3879a8df-9591-4b76-8e98-42b80a818d01-lib-modules\") pod \"3879a8df-9591-4b76-8e98-42b80a818d01\" (UID: \"3879a8df-9591-4b76-8e98-42b80a818d01\") " May 17 00:07:41.290175 kubelet[3216]: I0517 00:07:41.288248 3216 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-mj25h\" (UniqueName: \"kubernetes.io/projected/3879a8df-9591-4b76-8e98-42b80a818d01-kube-api-access-mj25h\") pod \"3879a8df-9591-4b76-8e98-42b80a818d01\" (UID: \"3879a8df-9591-4b76-8e98-42b80a818d01\") " May 17 00:07:41.290175 kubelet[3216]: I0517 00:07:41.288279 3216 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3879a8df-9591-4b76-8e98-42b80a818d01-etc-cni-netd\") pod \"3879a8df-9591-4b76-8e98-42b80a818d01\" (UID: \"3879a8df-9591-4b76-8e98-42b80a818d01\") " May 17 00:07:41.290565 kubelet[3216]: I0517 00:07:41.288315 3216 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3879a8df-9591-4b76-8e98-42b80a818d01-host-proc-sys-kernel\") pod \"3879a8df-9591-4b76-8e98-42b80a818d01\" (UID: \"3879a8df-9591-4b76-8e98-42b80a818d01\") " May 17 00:07:41.290565 kubelet[3216]: I0517 00:07:41.288352 3216 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3879a8df-9591-4b76-8e98-42b80a818d01-cilium-config-path\") pod \"3879a8df-9591-4b76-8e98-42b80a818d01\" (UID: \"3879a8df-9591-4b76-8e98-42b80a818d01\") " May 17 00:07:41.290565 kubelet[3216]: I0517 00:07:41.288390 3216 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3879a8df-9591-4b76-8e98-42b80a818d01-hubble-tls\") pod \"3879a8df-9591-4b76-8e98-42b80a818d01\" (UID: \"3879a8df-9591-4b76-8e98-42b80a818d01\") " May 17 00:07:41.290565 kubelet[3216]: I0517 00:07:41.288421 3216 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3879a8df-9591-4b76-8e98-42b80a818d01-cni-path\") pod \"3879a8df-9591-4b76-8e98-42b80a818d01\" (UID: \"3879a8df-9591-4b76-8e98-42b80a818d01\") " May 17 00:07:41.290565 kubelet[3216]: I0517 00:07:41.288526 3216 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3879a8df-9591-4b76-8e98-42b80a818d01-cni-path" (OuterVolumeSpecName: "cni-path") pod "3879a8df-9591-4b76-8e98-42b80a818d01" (UID: "3879a8df-9591-4b76-8e98-42b80a818d01"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:07:41.290565 kubelet[3216]: I0517 00:07:41.288591 3216 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3879a8df-9591-4b76-8e98-42b80a818d01-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3879a8df-9591-4b76-8e98-42b80a818d01" (UID: "3879a8df-9591-4b76-8e98-42b80a818d01"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:07:41.291034 kubelet[3216]: I0517 00:07:41.288627 3216 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3879a8df-9591-4b76-8e98-42b80a818d01-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3879a8df-9591-4b76-8e98-42b80a818d01" (UID: "3879a8df-9591-4b76-8e98-42b80a818d01"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:07:41.293415 containerd[2012]: time="2025-05-17T00:07:41.293337553Z" level=info msg="RemoveContainer for \"8a10236c70270b5d84ec0c55f11f6c3d780fa1cc3781896880c7d04db45324d4\" returns successfully" May 17 00:07:41.293817 kubelet[3216]: I0517 00:07:41.293775 3216 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3879a8df-9591-4b76-8e98-42b80a818d01-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3879a8df-9591-4b76-8e98-42b80a818d01" (UID: "3879a8df-9591-4b76-8e98-42b80a818d01"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:07:41.297017 kubelet[3216]: I0517 00:07:41.294005 3216 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3879a8df-9591-4b76-8e98-42b80a818d01-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3879a8df-9591-4b76-8e98-42b80a818d01" (UID: "3879a8df-9591-4b76-8e98-42b80a818d01"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:07:41.297340 kubelet[3216]: I0517 00:07:41.294666 3216 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3879a8df-9591-4b76-8e98-42b80a818d01-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3879a8df-9591-4b76-8e98-42b80a818d01" (UID: "3879a8df-9591-4b76-8e98-42b80a818d01"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:07:41.297514 kubelet[3216]: I0517 00:07:41.294710 3216 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3879a8df-9591-4b76-8e98-42b80a818d01-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3879a8df-9591-4b76-8e98-42b80a818d01" (UID: "3879a8df-9591-4b76-8e98-42b80a818d01"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:07:41.297514 kubelet[3216]: I0517 00:07:41.294734 3216 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3879a8df-9591-4b76-8e98-42b80a818d01-hostproc" (OuterVolumeSpecName: "hostproc") pod "3879a8df-9591-4b76-8e98-42b80a818d01" (UID: "3879a8df-9591-4b76-8e98-42b80a818d01"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:07:41.298139 kubelet[3216]: I0517 00:07:41.298097 3216 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3879a8df-9591-4b76-8e98-42b80a818d01-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3879a8df-9591-4b76-8e98-42b80a818d01" (UID: "3879a8df-9591-4b76-8e98-42b80a818d01"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:07:41.302060 kubelet[3216]: I0517 00:07:41.298286 3216 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3879a8df-9591-4b76-8e98-42b80a818d01-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3879a8df-9591-4b76-8e98-42b80a818d01" (UID: "3879a8df-9591-4b76-8e98-42b80a818d01"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:07:41.302060 kubelet[3216]: I0517 00:07:41.298346 3216 scope.go:117] "RemoveContainer" containerID="8d3d7b2845336647235bc1bcec4008f09a4bfc1050828a4a5d4010c4a7670578" May 17 00:07:41.304209 kubelet[3216]: I0517 00:07:41.304079 3216 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3879a8df-9591-4b76-8e98-42b80a818d01-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3879a8df-9591-4b76-8e98-42b80a818d01" (UID: "3879a8df-9591-4b76-8e98-42b80a818d01"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 17 00:07:41.309648 containerd[2012]: time="2025-05-17T00:07:41.309586813Z" level=info msg="RemoveContainer for \"8d3d7b2845336647235bc1bcec4008f09a4bfc1050828a4a5d4010c4a7670578\"" May 17 00:07:41.312242 kubelet[3216]: I0517 00:07:41.311686 3216 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25d56d7a-dc65-490c-bd1c-a75f9bff9e78-kube-api-access-lxxlr" (OuterVolumeSpecName: "kube-api-access-lxxlr") pod "25d56d7a-dc65-490c-bd1c-a75f9bff9e78" (UID: "25d56d7a-dc65-490c-bd1c-a75f9bff9e78"). InnerVolumeSpecName "kube-api-access-lxxlr". PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 00:07:41.313563 kubelet[3216]: I0517 00:07:41.313480 3216 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3879a8df-9591-4b76-8e98-42b80a818d01-kube-api-access-mj25h" (OuterVolumeSpecName: "kube-api-access-mj25h") pod "3879a8df-9591-4b76-8e98-42b80a818d01" (UID: "3879a8df-9591-4b76-8e98-42b80a818d01"). InnerVolumeSpecName "kube-api-access-mj25h". PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 00:07:41.316345 kubelet[3216]: I0517 00:07:41.316255 3216 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3879a8df-9591-4b76-8e98-42b80a818d01-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3879a8df-9591-4b76-8e98-42b80a818d01" (UID: "3879a8df-9591-4b76-8e98-42b80a818d01"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 17 00:07:41.318306 containerd[2012]: time="2025-05-17T00:07:41.318237949Z" level=info msg="RemoveContainer for \"8d3d7b2845336647235bc1bcec4008f09a4bfc1050828a4a5d4010c4a7670578\" returns successfully" May 17 00:07:41.318900 kubelet[3216]: I0517 00:07:41.318620 3216 scope.go:117] "RemoveContainer" containerID="810c3740c206d6babacfeeaa9d53e39c68d0704227844100f4092b6ae0d06b35" May 17 00:07:41.319075 containerd[2012]: time="2025-05-17T00:07:41.319008733Z" level=error msg="ContainerStatus for \"810c3740c206d6babacfeeaa9d53e39c68d0704227844100f4092b6ae0d06b35\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"810c3740c206d6babacfeeaa9d53e39c68d0704227844100f4092b6ae0d06b35\": not found" May 17 00:07:41.319699 kubelet[3216]: E0517 00:07:41.319295 3216 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"810c3740c206d6babacfeeaa9d53e39c68d0704227844100f4092b6ae0d06b35\": not found" containerID="810c3740c206d6babacfeeaa9d53e39c68d0704227844100f4092b6ae0d06b35" May 17 00:07:41.319699 kubelet[3216]: I0517 00:07:41.319361 3216 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"810c3740c206d6babacfeeaa9d53e39c68d0704227844100f4092b6ae0d06b35"} err="failed to get container status \"810c3740c206d6babacfeeaa9d53e39c68d0704227844100f4092b6ae0d06b35\": rpc error: code = NotFound desc = an error occurred when try to find container \"810c3740c206d6babacfeeaa9d53e39c68d0704227844100f4092b6ae0d06b35\": not found" May 17 00:07:41.319699 kubelet[3216]: I0517 00:07:41.319485 3216 scope.go:117] "RemoveContainer" containerID="1292a2ffe3492e84ff2b278a9e4323b8af94046a715bfa8e2ab041f94ea55045" May 17 00:07:41.320663 containerd[2012]: time="2025-05-17T00:07:41.320345161Z" level=error msg="ContainerStatus for \"1292a2ffe3492e84ff2b278a9e4323b8af94046a715bfa8e2ab041f94ea55045\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1292a2ffe3492e84ff2b278a9e4323b8af94046a715bfa8e2ab041f94ea55045\": not found" May 17 00:07:41.321158 kubelet[3216]: E0517 00:07:41.321066 3216 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1292a2ffe3492e84ff2b278a9e4323b8af94046a715bfa8e2ab041f94ea55045\": not found" containerID="1292a2ffe3492e84ff2b278a9e4323b8af94046a715bfa8e2ab041f94ea55045" May 17 00:07:41.321340 kubelet[3216]: I0517 00:07:41.321285 3216 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1292a2ffe3492e84ff2b278a9e4323b8af94046a715bfa8e2ab041f94ea55045"} err="failed to get container status \"1292a2ffe3492e84ff2b278a9e4323b8af94046a715bfa8e2ab041f94ea55045\": rpc error: code = NotFound desc = an error occurred when try to find container \"1292a2ffe3492e84ff2b278a9e4323b8af94046a715bfa8e2ab041f94ea55045\": not found" May 17 00:07:41.321516 kubelet[3216]: I0517 00:07:41.321420 3216 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25d56d7a-dc65-490c-bd1c-a75f9bff9e78-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "25d56d7a-dc65-490c-bd1c-a75f9bff9e78" (UID: "25d56d7a-dc65-490c-bd1c-a75f9bff9e78"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 17 00:07:41.321516 kubelet[3216]: I0517 00:07:41.321442 3216 scope.go:117] "RemoveContainer" containerID="f3f892fc81fc3746f21de05f2cf7eec9ab7ac4863f66054a9e5c72f67e81ef0c" May 17 00:07:41.323045 containerd[2012]: time="2025-05-17T00:07:41.322968733Z" level=error msg="ContainerStatus for \"f3f892fc81fc3746f21de05f2cf7eec9ab7ac4863f66054a9e5c72f67e81ef0c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f3f892fc81fc3746f21de05f2cf7eec9ab7ac4863f66054a9e5c72f67e81ef0c\": not found" May 17 00:07:41.323702 kubelet[3216]: I0517 00:07:41.323362 3216 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3879a8df-9591-4b76-8e98-42b80a818d01-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3879a8df-9591-4b76-8e98-42b80a818d01" (UID: "3879a8df-9591-4b76-8e98-42b80a818d01"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 00:07:41.323702 kubelet[3216]: E0517 00:07:41.323419 3216 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f3f892fc81fc3746f21de05f2cf7eec9ab7ac4863f66054a9e5c72f67e81ef0c\": not found" containerID="f3f892fc81fc3746f21de05f2cf7eec9ab7ac4863f66054a9e5c72f67e81ef0c" May 17 00:07:41.323702 kubelet[3216]: I0517 00:07:41.323474 3216 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f3f892fc81fc3746f21de05f2cf7eec9ab7ac4863f66054a9e5c72f67e81ef0c"} err="failed to get container status \"f3f892fc81fc3746f21de05f2cf7eec9ab7ac4863f66054a9e5c72f67e81ef0c\": rpc error: code = NotFound desc = an error occurred when try to find container \"f3f892fc81fc3746f21de05f2cf7eec9ab7ac4863f66054a9e5c72f67e81ef0c\": not found" May 17 00:07:41.323702 kubelet[3216]: I0517 00:07:41.323541 3216 scope.go:117] "RemoveContainer" containerID="8a10236c70270b5d84ec0c55f11f6c3d780fa1cc3781896880c7d04db45324d4" May 17 00:07:41.323970 containerd[2012]: time="2025-05-17T00:07:41.323930185Z" level=error msg="ContainerStatus for \"8a10236c70270b5d84ec0c55f11f6c3d780fa1cc3781896880c7d04db45324d4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8a10236c70270b5d84ec0c55f11f6c3d780fa1cc3781896880c7d04db45324d4\": not found" May 17 00:07:41.324184 kubelet[3216]: E0517 00:07:41.324141 3216 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8a10236c70270b5d84ec0c55f11f6c3d780fa1cc3781896880c7d04db45324d4\": not found" containerID="8a10236c70270b5d84ec0c55f11f6c3d780fa1cc3781896880c7d04db45324d4" May 17 00:07:41.324279 kubelet[3216]: I0517 00:07:41.324194 3216 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8a10236c70270b5d84ec0c55f11f6c3d780fa1cc3781896880c7d04db45324d4"} err="failed to get container status \"8a10236c70270b5d84ec0c55f11f6c3d780fa1cc3781896880c7d04db45324d4\": rpc error: code = NotFound desc = an error occurred when try to find container \"8a10236c70270b5d84ec0c55f11f6c3d780fa1cc3781896880c7d04db45324d4\": not found" May 17 00:07:41.324279 kubelet[3216]: I0517 00:07:41.324231 3216 scope.go:117] "RemoveContainer" containerID="8d3d7b2845336647235bc1bcec4008f09a4bfc1050828a4a5d4010c4a7670578" May 17 00:07:41.324742 containerd[2012]: time="2025-05-17T00:07:41.324690865Z" level=error 
msg="ContainerStatus for \"8d3d7b2845336647235bc1bcec4008f09a4bfc1050828a4a5d4010c4a7670578\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8d3d7b2845336647235bc1bcec4008f09a4bfc1050828a4a5d4010c4a7670578\": not found" May 17 00:07:41.324994 kubelet[3216]: E0517 00:07:41.324910 3216 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8d3d7b2845336647235bc1bcec4008f09a4bfc1050828a4a5d4010c4a7670578\": not found" containerID="8d3d7b2845336647235bc1bcec4008f09a4bfc1050828a4a5d4010c4a7670578" May 17 00:07:41.324994 kubelet[3216]: I0517 00:07:41.325004 3216 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8d3d7b2845336647235bc1bcec4008f09a4bfc1050828a4a5d4010c4a7670578"} err="failed to get container status \"8d3d7b2845336647235bc1bcec4008f09a4bfc1050828a4a5d4010c4a7670578\": rpc error: code = NotFound desc = an error occurred when try to find container \"8d3d7b2845336647235bc1bcec4008f09a4bfc1050828a4a5d4010c4a7670578\": not found" May 17 00:07:41.325230 kubelet[3216]: I0517 00:07:41.325037 3216 scope.go:117] "RemoveContainer" containerID="94356092fa90083dcaec60867e8398c38f69454f621d24323dd186095bab3ceb" May 17 00:07:41.327817 containerd[2012]: time="2025-05-17T00:07:41.327760658Z" level=info msg="RemoveContainer for \"94356092fa90083dcaec60867e8398c38f69454f621d24323dd186095bab3ceb\"" May 17 00:07:41.333768 containerd[2012]: time="2025-05-17T00:07:41.333697898Z" level=info msg="RemoveContainer for \"94356092fa90083dcaec60867e8398c38f69454f621d24323dd186095bab3ceb\" returns successfully" May 17 00:07:41.334116 kubelet[3216]: I0517 00:07:41.334065 3216 scope.go:117] "RemoveContainer" containerID="94356092fa90083dcaec60867e8398c38f69454f621d24323dd186095bab3ceb" May 17 00:07:41.334677 containerd[2012]: time="2025-05-17T00:07:41.334433294Z" level=error msg="ContainerStatus for \"94356092fa90083dcaec60867e8398c38f69454f621d24323dd186095bab3ceb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"94356092fa90083dcaec60867e8398c38f69454f621d24323dd186095bab3ceb\": not found" May 17 00:07:41.334802 kubelet[3216]: E0517 00:07:41.334745 3216 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"94356092fa90083dcaec60867e8398c38f69454f621d24323dd186095bab3ceb\": not found" containerID="94356092fa90083dcaec60867e8398c38f69454f621d24323dd186095bab3ceb" May 17 00:07:41.334889 kubelet[3216]: I0517 00:07:41.334791 3216 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"94356092fa90083dcaec60867e8398c38f69454f621d24323dd186095bab3ceb"} err="failed to get container status \"94356092fa90083dcaec60867e8398c38f69454f621d24323dd186095bab3ceb\": rpc error: code = NotFound desc = an error occurred when try to find container \"94356092fa90083dcaec60867e8398c38f69454f621d24323dd186095bab3ceb\": not found" May 17 00:07:41.390615 kubelet[3216]: I0517 00:07:41.389312 3216 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/25d56d7a-dc65-490c-bd1c-a75f9bff9e78-cilium-config-path\") on node \"ip-172-31-26-249\" DevicePath \"\"" May 17 00:07:41.390615 kubelet[3216]: I0517 00:07:41.389372 3216 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/3879a8df-9591-4b76-8e98-42b80a818d01-bpf-maps\") on node \"ip-172-31-26-249\" DevicePath \"\"" May 17 00:07:41.390615 kubelet[3216]: I0517 00:07:41.389431 3216 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxxlr\" (UniqueName: \"kubernetes.io/projected/25d56d7a-dc65-490c-bd1c-a75f9bff9e78-kube-api-access-lxxlr\") on node \"ip-172-31-26-249\" DevicePath \"\"" May 17 00:07:41.390615 kubelet[3216]: I0517 00:07:41.389455 3216 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3879a8df-9591-4b76-8e98-42b80a818d01-lib-modules\") on node \"ip-172-31-26-249\" DevicePath \"\"" May 17 00:07:41.390615 kubelet[3216]: I0517 00:07:41.389562 3216 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mj25h\" (UniqueName: \"kubernetes.io/projected/3879a8df-9591-4b76-8e98-42b80a818d01-kube-api-access-mj25h\") on node \"ip-172-31-26-249\" DevicePath \"\"" May 17 00:07:41.390615 kubelet[3216]: I0517 00:07:41.389587 3216 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3879a8df-9591-4b76-8e98-42b80a818d01-etc-cni-netd\") on node \"ip-172-31-26-249\" DevicePath \"\"" May 17 00:07:41.390615 kubelet[3216]: I0517 00:07:41.389610 3216 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3879a8df-9591-4b76-8e98-42b80a818d01-host-proc-sys-kernel\") on node \"ip-172-31-26-249\" DevicePath \"\"" May 17 00:07:41.390615 kubelet[3216]: I0517 00:07:41.389633 3216 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3879a8df-9591-4b76-8e98-42b80a818d01-cni-path\") on node \"ip-172-31-26-249\" DevicePath \"\"" May 17 00:07:41.391153 kubelet[3216]: I0517 00:07:41.389654 3216 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3879a8df-9591-4b76-8e98-42b80a818d01-cilium-config-path\") on node \"ip-172-31-26-249\" DevicePath \"\"" May 17 00:07:41.391153 kubelet[3216]: I0517 00:07:41.389677 3216 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3879a8df-9591-4b76-8e98-42b80a818d01-hubble-tls\") on node \"ip-172-31-26-249\" DevicePath \"\"" May 17 00:07:41.391153 kubelet[3216]: I0517 00:07:41.389700 3216 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3879a8df-9591-4b76-8e98-42b80a818d01-cilium-cgroup\") on node \"ip-172-31-26-249\" DevicePath \"\"" May 17 00:07:41.391153 kubelet[3216]: I0517 00:07:41.389721 3216 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3879a8df-9591-4b76-8e98-42b80a818d01-host-proc-sys-net\") on node \"ip-172-31-26-249\" DevicePath \"\"" May 17 00:07:41.391153 kubelet[3216]: I0517 00:07:41.389742 3216 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3879a8df-9591-4b76-8e98-42b80a818d01-cilium-run\") on node \"ip-172-31-26-249\" DevicePath \"\"" May 17 00:07:41.391153 kubelet[3216]: I0517 00:07:41.389764 3216 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3879a8df-9591-4b76-8e98-42b80a818d01-hostproc\") on node \"ip-172-31-26-249\" DevicePath \"\"" May 17 00:07:41.391153 kubelet[3216]: I0517 00:07:41.389784 3216 reconciler_common.go:293] 
"Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3879a8df-9591-4b76-8e98-42b80a818d01-clustermesh-secrets\") on node \"ip-172-31-26-249\" DevicePath \"\"" May 17 00:07:41.391153 kubelet[3216]: I0517 00:07:41.389804 3216 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3879a8df-9591-4b76-8e98-42b80a818d01-xtables-lock\") on node \"ip-172-31-26-249\" DevicePath \"\"" May 17 00:07:41.558880 systemd[1]: Removed slice kubepods-besteffort-pod25d56d7a_dc65_490c_bd1c_a75f9bff9e78.slice - libcontainer container kubepods-besteffort-pod25d56d7a_dc65_490c_bd1c_a75f9bff9e78.slice. May 17 00:07:41.688345 kubelet[3216]: I0517 00:07:41.687385 3216 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25d56d7a-dc65-490c-bd1c-a75f9bff9e78" path="/var/lib/kubelet/pods/25d56d7a-dc65-490c-bd1c-a75f9bff9e78/volumes" May 17 00:07:41.697097 systemd[1]: Removed slice kubepods-burstable-pod3879a8df_9591_4b76_8e98_42b80a818d01.slice - libcontainer container kubepods-burstable-pod3879a8df_9591_4b76_8e98_42b80a818d01.slice. May 17 00:07:41.697334 systemd[1]: kubepods-burstable-pod3879a8df_9591_4b76_8e98_42b80a818d01.slice: Consumed 16.327s CPU time. May 17 00:07:41.873348 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d12e0a5168b4933ccda499f6bbec35fa097aabe028e7591d631f143c32227098-rootfs.mount: Deactivated successfully. May 17 00:07:41.873548 systemd[1]: var-lib-kubelet-pods-25d56d7a\x2ddc65\x2d490c\x2dbd1c\x2da75f9bff9e78-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlxxlr.mount: Deactivated successfully. May 17 00:07:41.873697 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7b52d9cc7400795ee7505312651bde168c221f15a859c806d5019601686dfb65-rootfs.mount: Deactivated successfully. May 17 00:07:41.873825 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7b52d9cc7400795ee7505312651bde168c221f15a859c806d5019601686dfb65-shm.mount: Deactivated successfully. May 17 00:07:41.873955 systemd[1]: var-lib-kubelet-pods-3879a8df\x2d9591\x2d4b76\x2d8e98\x2d42b80a818d01-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmj25h.mount: Deactivated successfully. May 17 00:07:41.874098 systemd[1]: var-lib-kubelet-pods-3879a8df\x2d9591\x2d4b76\x2d8e98\x2d42b80a818d01-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 17 00:07:41.874230 systemd[1]: var-lib-kubelet-pods-3879a8df\x2d9591\x2d4b76\x2d8e98\x2d42b80a818d01-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 17 00:07:42.785483 sshd[5026]: pam_unix(sshd:session): session closed for user core May 17 00:07:42.790915 systemd[1]: sshd@25-172.31.26.249:22-139.178.89.65:42712.service: Deactivated successfully. May 17 00:07:42.796767 systemd[1]: session-26.scope: Deactivated successfully. May 17 00:07:42.797600 systemd[1]: session-26.scope: Consumed 1.682s CPU time. May 17 00:07:42.801369 systemd-logind[1993]: Session 26 logged out. Waiting for processes to exit. May 17 00:07:42.803449 systemd-logind[1993]: Removed session 26. May 17 00:07:42.826064 systemd[1]: Started sshd@26-172.31.26.249:22-139.178.89.65:42722.service - OpenSSH per-connection server daemon (139.178.89.65:42722). 
May 17 00:07:43.004629 sshd[5188]: Accepted publickey for core from 139.178.89.65 port 42722 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:07:43.007306 sshd[5188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:07:43.014842 systemd-logind[1993]: New session 27 of user core. May 17 00:07:43.020877 systemd[1]: Started session-27.scope - Session 27 of User core. May 17 00:07:43.605898 ntpd[1988]: Deleting interface #11 lxc_health, fe80::7cf5:62ff:feec:f4f8%8#123, interface stats: received=0, sent=0, dropped=0, active_time=73 secs May 17 00:07:43.606395 ntpd[1988]: 17 May 00:07:43 ntpd[1988]: Deleting interface #11 lxc_health, fe80::7cf5:62ff:feec:f4f8%8#123, interface stats: received=0, sent=0, dropped=0, active_time=73 secs May 17 00:07:43.693792 kubelet[3216]: I0517 00:07:43.693731 3216 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3879a8df-9591-4b76-8e98-42b80a818d01" path="/var/lib/kubelet/pods/3879a8df-9591-4b76-8e98-42b80a818d01/volumes" May 17 00:07:44.728530 sshd[5188]: pam_unix(sshd:session): session closed for user core May 17 00:07:44.738125 systemd[1]: sshd@26-172.31.26.249:22-139.178.89.65:42722.service: Deactivated successfully. May 17 00:07:44.745694 systemd[1]: session-27.scope: Deactivated successfully. May 17 00:07:44.746464 systemd[1]: session-27.scope: Consumed 1.507s CPU time. May 17 00:07:44.749818 systemd-logind[1993]: Session 27 logged out. Waiting for processes to exit. May 17 00:07:44.778197 systemd[1]: Started sshd@27-172.31.26.249:22-139.178.89.65:42732.service - OpenSSH per-connection server daemon (139.178.89.65:42732). May 17 00:07:44.781650 systemd-logind[1993]: Removed session 27. May 17 00:07:44.823468 kubelet[3216]: E0517 00:07:44.823392 3216 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="25d56d7a-dc65-490c-bd1c-a75f9bff9e78" containerName="cilium-operator" May 17 00:07:44.824176 kubelet[3216]: E0517 00:07:44.824117 3216 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3879a8df-9591-4b76-8e98-42b80a818d01" containerName="mount-cgroup" May 17 00:07:44.824479 kubelet[3216]: E0517 00:07:44.824315 3216 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3879a8df-9591-4b76-8e98-42b80a818d01" containerName="apply-sysctl-overwrites" May 17 00:07:44.824479 kubelet[3216]: E0517 00:07:44.824369 3216 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3879a8df-9591-4b76-8e98-42b80a818d01" containerName="mount-bpf-fs" May 17 00:07:44.824479 kubelet[3216]: E0517 00:07:44.824390 3216 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3879a8df-9591-4b76-8e98-42b80a818d01" containerName="clean-cilium-state" May 17 00:07:44.824479 kubelet[3216]: E0517 00:07:44.824407 3216 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3879a8df-9591-4b76-8e98-42b80a818d01" containerName="cilium-agent" May 17 00:07:44.825178 kubelet[3216]: I0517 00:07:44.824832 3216 memory_manager.go:354] "RemoveStaleState removing state" podUID="3879a8df-9591-4b76-8e98-42b80a818d01" containerName="cilium-agent" May 17 00:07:44.825178 kubelet[3216]: I0517 00:07:44.824924 3216 memory_manager.go:354] "RemoveStaleState removing state" podUID="25d56d7a-dc65-490c-bd1c-a75f9bff9e78" containerName="cilium-operator" May 17 00:07:44.841587 systemd[1]: Created slice kubepods-burstable-podcb372c29_7cc7_4661_b3f0_32a5b757e852.slice - libcontainer container kubepods-burstable-podcb372c29_7cc7_4661_b3f0_32a5b757e852.slice. 
May 17 00:07:44.914350 kubelet[3216]: I0517 00:07:44.913574 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cb372c29-7cc7-4661-b3f0-32a5b757e852-hostproc\") pod \"cilium-2r4x8\" (UID: \"cb372c29-7cc7-4661-b3f0-32a5b757e852\") " pod="kube-system/cilium-2r4x8" May 17 00:07:44.914350 kubelet[3216]: I0517 00:07:44.913646 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kqcm\" (UniqueName: \"kubernetes.io/projected/cb372c29-7cc7-4661-b3f0-32a5b757e852-kube-api-access-5kqcm\") pod \"cilium-2r4x8\" (UID: \"cb372c29-7cc7-4661-b3f0-32a5b757e852\") " pod="kube-system/cilium-2r4x8" May 17 00:07:44.914350 kubelet[3216]: I0517 00:07:44.913691 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cb372c29-7cc7-4661-b3f0-32a5b757e852-host-proc-sys-kernel\") pod \"cilium-2r4x8\" (UID: \"cb372c29-7cc7-4661-b3f0-32a5b757e852\") " pod="kube-system/cilium-2r4x8" May 17 00:07:44.914350 kubelet[3216]: I0517 00:07:44.913730 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cb372c29-7cc7-4661-b3f0-32a5b757e852-cni-path\") pod \"cilium-2r4x8\" (UID: \"cb372c29-7cc7-4661-b3f0-32a5b757e852\") " pod="kube-system/cilium-2r4x8" May 17 00:07:44.914350 kubelet[3216]: I0517 00:07:44.913768 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cb372c29-7cc7-4661-b3f0-32a5b757e852-lib-modules\") pod \"cilium-2r4x8\" (UID: \"cb372c29-7cc7-4661-b3f0-32a5b757e852\") " pod="kube-system/cilium-2r4x8" May 17 00:07:44.914350 kubelet[3216]: I0517 00:07:44.913805 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cb372c29-7cc7-4661-b3f0-32a5b757e852-bpf-maps\") pod \"cilium-2r4x8\" (UID: \"cb372c29-7cc7-4661-b3f0-32a5b757e852\") " pod="kube-system/cilium-2r4x8" May 17 00:07:44.914943 kubelet[3216]: I0517 00:07:44.913842 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cb372c29-7cc7-4661-b3f0-32a5b757e852-clustermesh-secrets\") pod \"cilium-2r4x8\" (UID: \"cb372c29-7cc7-4661-b3f0-32a5b757e852\") " pod="kube-system/cilium-2r4x8" May 17 00:07:44.914943 kubelet[3216]: I0517 00:07:44.913880 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cb372c29-7cc7-4661-b3f0-32a5b757e852-host-proc-sys-net\") pod \"cilium-2r4x8\" (UID: \"cb372c29-7cc7-4661-b3f0-32a5b757e852\") " pod="kube-system/cilium-2r4x8" May 17 00:07:44.914943 kubelet[3216]: I0517 00:07:44.913920 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cb372c29-7cc7-4661-b3f0-32a5b757e852-etc-cni-netd\") pod \"cilium-2r4x8\" (UID: \"cb372c29-7cc7-4661-b3f0-32a5b757e852\") " pod="kube-system/cilium-2r4x8" May 17 00:07:44.914943 kubelet[3216]: I0517 00:07:44.913958 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" 
(UniqueName: \"kubernetes.io/configmap/cb372c29-7cc7-4661-b3f0-32a5b757e852-cilium-config-path\") pod \"cilium-2r4x8\" (UID: \"cb372c29-7cc7-4661-b3f0-32a5b757e852\") " pod="kube-system/cilium-2r4x8" May 17 00:07:44.914943 kubelet[3216]: I0517 00:07:44.913994 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cb372c29-7cc7-4661-b3f0-32a5b757e852-cilium-cgroup\") pod \"cilium-2r4x8\" (UID: \"cb372c29-7cc7-4661-b3f0-32a5b757e852\") " pod="kube-system/cilium-2r4x8" May 17 00:07:44.915300 kubelet[3216]: I0517 00:07:44.914032 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/cb372c29-7cc7-4661-b3f0-32a5b757e852-cilium-ipsec-secrets\") pod \"cilium-2r4x8\" (UID: \"cb372c29-7cc7-4661-b3f0-32a5b757e852\") " pod="kube-system/cilium-2r4x8" May 17 00:07:44.915300 kubelet[3216]: I0517 00:07:44.914070 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cb372c29-7cc7-4661-b3f0-32a5b757e852-hubble-tls\") pod \"cilium-2r4x8\" (UID: \"cb372c29-7cc7-4661-b3f0-32a5b757e852\") " pod="kube-system/cilium-2r4x8" May 17 00:07:44.915300 kubelet[3216]: I0517 00:07:44.914107 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cb372c29-7cc7-4661-b3f0-32a5b757e852-cilium-run\") pod \"cilium-2r4x8\" (UID: \"cb372c29-7cc7-4661-b3f0-32a5b757e852\") " pod="kube-system/cilium-2r4x8" May 17 00:07:44.915300 kubelet[3216]: I0517 00:07:44.914142 3216 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cb372c29-7cc7-4661-b3f0-32a5b757e852-xtables-lock\") pod \"cilium-2r4x8\" (UID: \"cb372c29-7cc7-4661-b3f0-32a5b757e852\") " pod="kube-system/cilium-2r4x8" May 17 00:07:44.969580 sshd[5200]: Accepted publickey for core from 139.178.89.65 port 42732 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:07:44.972711 sshd[5200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:07:44.980623 systemd-logind[1993]: New session 28 of user core. May 17 00:07:44.987805 systemd[1]: Started session-28.scope - Session 28 of User core. May 17 00:07:45.116936 sshd[5200]: pam_unix(sshd:session): session closed for user core May 17 00:07:45.122468 systemd[1]: sshd@27-172.31.26.249:22-139.178.89.65:42732.service: Deactivated successfully. May 17 00:07:45.126230 systemd[1]: session-28.scope: Deactivated successfully. May 17 00:07:45.131811 systemd-logind[1993]: Session 28 logged out. Waiting for processes to exit. May 17 00:07:45.135226 systemd-logind[1993]: Removed session 28. May 17 00:07:45.147956 containerd[2012]: time="2025-05-17T00:07:45.147886300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2r4x8,Uid:cb372c29-7cc7-4661-b3f0-32a5b757e852,Namespace:kube-system,Attempt:0,}" May 17 00:07:45.158024 systemd[1]: Started sshd@28-172.31.26.249:22-139.178.89.65:42744.service - OpenSSH per-connection server daemon (139.178.89.65:42744). May 17 00:07:45.203887 containerd[2012]: time="2025-05-17T00:07:45.203621909Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:07:45.204543 containerd[2012]: time="2025-05-17T00:07:45.204238601Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:07:45.204543 containerd[2012]: time="2025-05-17T00:07:45.204293885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:07:45.204920 containerd[2012]: time="2025-05-17T00:07:45.204768605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:07:45.237885 systemd[1]: Started cri-containerd-a96c7c218d86df3d744c8e0438bbfa3d370a29485679f8249aedc0af240effaf.scope - libcontainer container a96c7c218d86df3d744c8e0438bbfa3d370a29485679f8249aedc0af240effaf. May 17 00:07:45.288186 containerd[2012]: time="2025-05-17T00:07:45.288046325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2r4x8,Uid:cb372c29-7cc7-4661-b3f0-32a5b757e852,Namespace:kube-system,Attempt:0,} returns sandbox id \"a96c7c218d86df3d744c8e0438bbfa3d370a29485679f8249aedc0af240effaf\"" May 17 00:07:45.296991 containerd[2012]: time="2025-05-17T00:07:45.296772389Z" level=info msg="CreateContainer within sandbox \"a96c7c218d86df3d744c8e0438bbfa3d370a29485679f8249aedc0af240effaf\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 17 00:07:45.322761 containerd[2012]: time="2025-05-17T00:07:45.322352429Z" level=info msg="CreateContainer within sandbox \"a96c7c218d86df3d744c8e0438bbfa3d370a29485679f8249aedc0af240effaf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9afa47afeda1917c4b6e6438cf80d97ded3d5a0c1a238bbe13e9fe3665878a0b\"" May 17 00:07:45.325216 containerd[2012]: time="2025-05-17T00:07:45.325081661Z" level=info msg="StartContainer for \"9afa47afeda1917c4b6e6438cf80d97ded3d5a0c1a238bbe13e9fe3665878a0b\"" May 17 00:07:45.351067 sshd[5212]: Accepted publickey for core from 139.178.89.65 port 42744 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:07:45.354809 sshd[5212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:07:45.371240 systemd-logind[1993]: New session 29 of user core. May 17 00:07:45.379831 systemd[1]: Started cri-containerd-9afa47afeda1917c4b6e6438cf80d97ded3d5a0c1a238bbe13e9fe3665878a0b.scope - libcontainer container 9afa47afeda1917c4b6e6438cf80d97ded3d5a0c1a238bbe13e9fe3665878a0b. May 17 00:07:45.382412 systemd[1]: Started session-29.scope - Session 29 of User core. May 17 00:07:45.440412 containerd[2012]: time="2025-05-17T00:07:45.440220330Z" level=info msg="StartContainer for \"9afa47afeda1917c4b6e6438cf80d97ded3d5a0c1a238bbe13e9fe3665878a0b\" returns successfully" May 17 00:07:45.456744 systemd[1]: cri-containerd-9afa47afeda1917c4b6e6438cf80d97ded3d5a0c1a238bbe13e9fe3665878a0b.scope: Deactivated successfully. 
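The new mount-cgroup init container runs inside a transient cri-containerd-<id>.scope unit, which systemd deactivates as soon as the container exits. A minimal way to look at such a scope and the surrounding cgroup tree while it still exists, assuming host shell access (the scope name is taken from the log and disappears after cleanup):

    # transient scope systemd created for the container
    systemctl status 'cri-containerd-9afa47afeda1917c4b6e6438cf80d97ded3d5a0c1a238bbe13e9fe3665878a0b.scope'

    # cgroup tree, showing where the kubepods slices and cri-containerd scopes sit
    systemd-cgls --no-pager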
May 17 00:07:45.516629 containerd[2012]: time="2025-05-17T00:07:45.516133362Z" level=info msg="shim disconnected" id=9afa47afeda1917c4b6e6438cf80d97ded3d5a0c1a238bbe13e9fe3665878a0b namespace=k8s.io
May 17 00:07:45.516629 containerd[2012]: time="2025-05-17T00:07:45.516219570Z" level=warning msg="cleaning up after shim disconnected" id=9afa47afeda1917c4b6e6438cf80d97ded3d5a0c1a238bbe13e9fe3665878a0b namespace=k8s.io
May 17 00:07:45.516629 containerd[2012]: time="2025-05-17T00:07:45.516242946Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:07:45.923009 kubelet[3216]: E0517 00:07:45.922942 3216 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 17 00:07:46.276616 containerd[2012]: time="2025-05-17T00:07:46.274251426Z" level=info msg="CreateContainer within sandbox \"a96c7c218d86df3d744c8e0438bbfa3d370a29485679f8249aedc0af240effaf\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 17 00:07:46.302585 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1124759478.mount: Deactivated successfully.
May 17 00:07:46.303929 containerd[2012]: time="2025-05-17T00:07:46.302660502Z" level=info msg="CreateContainer within sandbox \"a96c7c218d86df3d744c8e0438bbfa3d370a29485679f8249aedc0af240effaf\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c492465542ff18feb6480980493f725c66791791f217de2c531a12c3c3417d94\""
May 17 00:07:46.309579 containerd[2012]: time="2025-05-17T00:07:46.308484546Z" level=info msg="StartContainer for \"c492465542ff18feb6480980493f725c66791791f217de2c531a12c3c3417d94\""
May 17 00:07:46.426861 systemd[1]: Started cri-containerd-c492465542ff18feb6480980493f725c66791791f217de2c531a12c3c3417d94.scope - libcontainer container c492465542ff18feb6480980493f725c66791791f217de2c531a12c3c3417d94.
May 17 00:07:46.527336 containerd[2012]: time="2025-05-17T00:07:46.525188287Z" level=info msg="StartContainer for \"c492465542ff18feb6480980493f725c66791791f217de2c531a12c3c3417d94\" returns successfully"
May 17 00:07:46.542449 systemd[1]: cri-containerd-c492465542ff18feb6480980493f725c66791791f217de2c531a12c3c3417d94.scope: Deactivated successfully.
May 17 00:07:46.591080 containerd[2012]: time="2025-05-17T00:07:46.590986616Z" level=info msg="shim disconnected" id=c492465542ff18feb6480980493f725c66791791f217de2c531a12c3c3417d94 namespace=k8s.io
May 17 00:07:46.591080 containerd[2012]: time="2025-05-17T00:07:46.591065840Z" level=warning msg="cleaning up after shim disconnected" id=c492465542ff18feb6480980493f725c66791791f217de2c531a12c3c3417d94 namespace=k8s.io
May 17 00:07:46.591672 containerd[2012]: time="2025-05-17T00:07:46.591091280Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:07:46.613319 containerd[2012]: time="2025-05-17T00:07:46.613245704Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:07:46Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
May 17 00:07:47.023805 systemd[1]: run-containerd-runc-k8s.io-c492465542ff18feb6480980493f725c66791791f217de2c531a12c3c3417d94-runc.AcBAar.mount: Deactivated successfully.
May 17 00:07:47.023978 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c492465542ff18feb6480980493f725c66791791f217de2c531a12c3c3417d94-rootfs.mount: Deactivated successfully.
May 17 00:07:47.278297 containerd[2012]: time="2025-05-17T00:07:47.277825735Z" level=info msg="CreateContainer within sandbox \"a96c7c218d86df3d744c8e0438bbfa3d370a29485679f8249aedc0af240effaf\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 17 00:07:47.310123 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount281052559.mount: Deactivated successfully.
May 17 00:07:47.319805 containerd[2012]: time="2025-05-17T00:07:47.319746847Z" level=info msg="CreateContainer within sandbox \"a96c7c218d86df3d744c8e0438bbfa3d370a29485679f8249aedc0af240effaf\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"024f2cd4e02e801958ef472fc8f80ff299559c11dd83b1f3f058bf0b5f15e6c8\""
May 17 00:07:47.321092 containerd[2012]: time="2025-05-17T00:07:47.321020983Z" level=info msg="StartContainer for \"024f2cd4e02e801958ef472fc8f80ff299559c11dd83b1f3f058bf0b5f15e6c8\""
May 17 00:07:47.378841 systemd[1]: Started cri-containerd-024f2cd4e02e801958ef472fc8f80ff299559c11dd83b1f3f058bf0b5f15e6c8.scope - libcontainer container 024f2cd4e02e801958ef472fc8f80ff299559c11dd83b1f3f058bf0b5f15e6c8.
May 17 00:07:47.432520 containerd[2012]: time="2025-05-17T00:07:47.431885708Z" level=info msg="StartContainer for \"024f2cd4e02e801958ef472fc8f80ff299559c11dd83b1f3f058bf0b5f15e6c8\" returns successfully"
May 17 00:07:47.437340 systemd[1]: cri-containerd-024f2cd4e02e801958ef472fc8f80ff299559c11dd83b1f3f058bf0b5f15e6c8.scope: Deactivated successfully.
May 17 00:07:47.493193 containerd[2012]: time="2025-05-17T00:07:47.492481904Z" level=info msg="shim disconnected" id=024f2cd4e02e801958ef472fc8f80ff299559c11dd83b1f3f058bf0b5f15e6c8 namespace=k8s.io
May 17 00:07:47.493193 containerd[2012]: time="2025-05-17T00:07:47.492861332Z" level=warning msg="cleaning up after shim disconnected" id=024f2cd4e02e801958ef472fc8f80ff299559c11dd83b1f3f058bf0b5f15e6c8 namespace=k8s.io
May 17 00:07:47.493193 containerd[2012]: time="2025-05-17T00:07:47.492902696Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:07:47.518478 containerd[2012]: time="2025-05-17T00:07:47.518311760Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:07:47Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
May 17 00:07:48.024523 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-024f2cd4e02e801958ef472fc8f80ff299559c11dd83b1f3f058bf0b5f15e6c8-rootfs.mount: Deactivated successfully.
May 17 00:07:48.303550 containerd[2012]: time="2025-05-17T00:07:48.300968792Z" level=info msg="CreateContainer within sandbox \"a96c7c218d86df3d744c8e0438bbfa3d370a29485679f8249aedc0af240effaf\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 17 00:07:48.337275 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2686622136.mount: Deactivated successfully.
May 17 00:07:48.351327 containerd[2012]: time="2025-05-17T00:07:48.350571128Z" level=info msg="CreateContainer within sandbox \"a96c7c218d86df3d744c8e0438bbfa3d370a29485679f8249aedc0af240effaf\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3a0f556b500a246bf6df57114a0f77ef28add0c36584ba9806febf60484fc7ed\""
May 17 00:07:48.354778 containerd[2012]: time="2025-05-17T00:07:48.352264232Z" level=info msg="StartContainer for \"3a0f556b500a246bf6df57114a0f77ef28add0c36584ba9806febf60484fc7ed\""
May 17 00:07:48.429802 systemd[1]: Started cri-containerd-3a0f556b500a246bf6df57114a0f77ef28add0c36584ba9806febf60484fc7ed.scope - libcontainer container 3a0f556b500a246bf6df57114a0f77ef28add0c36584ba9806febf60484fc7ed.
May 17 00:07:48.487047 systemd[1]: cri-containerd-3a0f556b500a246bf6df57114a0f77ef28add0c36584ba9806febf60484fc7ed.scope: Deactivated successfully.
May 17 00:07:48.493278 containerd[2012]: time="2025-05-17T00:07:48.492856917Z" level=info msg="StartContainer for \"3a0f556b500a246bf6df57114a0f77ef28add0c36584ba9806febf60484fc7ed\" returns successfully"
May 17 00:07:48.557243 containerd[2012]: time="2025-05-17T00:07:48.556904169Z" level=info msg="shim disconnected" id=3a0f556b500a246bf6df57114a0f77ef28add0c36584ba9806febf60484fc7ed namespace=k8s.io
May 17 00:07:48.557243 containerd[2012]: time="2025-05-17T00:07:48.556981233Z" level=warning msg="cleaning up after shim disconnected" id=3a0f556b500a246bf6df57114a0f77ef28add0c36584ba9806febf60484fc7ed namespace=k8s.io
May 17 00:07:48.557243 containerd[2012]: time="2025-05-17T00:07:48.557006433Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:07:48.683333 kubelet[3216]: E0517 00:07:48.683249 3216 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-wnnds" podUID="bae4ab1b-8f5d-498d-a115-79f30997ef22"
May 17 00:07:48.913832 kubelet[3216]: I0517 00:07:48.913669 3216 setters.go:600] "Node became not ready" node="ip-172-31-26-249" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-17T00:07:48Z","lastTransitionTime":"2025-05-17T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 17 00:07:49.024680 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3a0f556b500a246bf6df57114a0f77ef28add0c36584ba9806febf60484fc7ed-rootfs.mount: Deactivated successfully.
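[Editor's note] The kubelet entry above records the node's Ready condition flipping to False while the CNI plugin is still initializing. As an illustrative sketch only (nothing here was run on this host; the kubeconfig path is a placeholder), the same condition could be read from outside with client-go like this:

// Sketch: read a node's Ready condition with client-go, mirroring the
// condition the kubelet sets in the log entry above. The kubeconfig path
// below is a placeholder, not a value from this system.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	node, err := clientset.CoreV1().Nodes().Get(context.TODO(), "ip-172-31-26-249", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			// Mirrors the {"type":"Ready","status":"False",...} condition above.
			fmt.Printf("Ready=%s reason=%s message=%s\n", cond.Status, cond.Reason, cond.Message)
		}
	}
}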
May 17 00:07:49.297602 containerd[2012]: time="2025-05-17T00:07:49.297310989Z" level=info msg="CreateContainer within sandbox \"a96c7c218d86df3d744c8e0438bbfa3d370a29485679f8249aedc0af240effaf\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 17 00:07:49.331306 containerd[2012]: time="2025-05-17T00:07:49.331230213Z" level=info msg="CreateContainer within sandbox \"a96c7c218d86df3d744c8e0438bbfa3d370a29485679f8249aedc0af240effaf\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c17fab2620037680bed5c6d0f9e823463fff0276e3195f6831b06156cc12dd66\""
May 17 00:07:49.336102 containerd[2012]: time="2025-05-17T00:07:49.334398633Z" level=info msg="StartContainer for \"c17fab2620037680bed5c6d0f9e823463fff0276e3195f6831b06156cc12dd66\""
May 17 00:07:49.398839 systemd[1]: Started cri-containerd-c17fab2620037680bed5c6d0f9e823463fff0276e3195f6831b06156cc12dd66.scope - libcontainer container c17fab2620037680bed5c6d0f9e823463fff0276e3195f6831b06156cc12dd66.
May 17 00:07:49.451577 containerd[2012]: time="2025-05-17T00:07:49.451267690Z" level=info msg="StartContainer for \"c17fab2620037680bed5c6d0f9e823463fff0276e3195f6831b06156cc12dd66\" returns successfully"
May 17 00:07:50.247829 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
May 17 00:07:50.683907 kubelet[3216]: E0517 00:07:50.683336 3216 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-wnnds" podUID="bae4ab1b-8f5d-498d-a115-79f30997ef22"
May 17 00:07:51.866147 systemd[1]: run-containerd-runc-k8s.io-c17fab2620037680bed5c6d0f9e823463fff0276e3195f6831b06156cc12dd66-runc.HvuusR.mount: Deactivated successfully.
May 17 00:07:54.486783 (udev-worker)[6051]: Network interface NamePolicy= disabled on kernel command line.
May 17 00:07:54.492918 (udev-worker)[6053]: Network interface NamePolicy= disabled on kernel command line.
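[Editor's note] Taken together, the entries from 00:07:45 to 00:07:49 trace the usual layout of a Cilium agent pod: the short-lived init containers mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs and clean-cilium-state run to completion one after another, and only then does the long-running cilium-agent container start. As a rough, non-authoritative sketch of that layout (the image tag is a placeholder and the real DaemonSet generated by the Cilium chart carries many more fields and init containers), it could be expressed with the Kubernetes Go API types like this:

// Sketch: a heavily trimmed corev1.Pod mirroring the init-container order
// visible in the log above. The image tag and omitted fields are assumptions,
// not values taken from this system.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	image := "quay.io/cilium/cilium:v1.x" // placeholder tag
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "cilium-2r4x8",
			Namespace: "kube-system",
		},
		Spec: corev1.PodSpec{
			// Init containers run sequentially, matching the order of the
			// CreateContainer/StartContainer entries in the log.
			InitContainers: []corev1.Container{
				{Name: "mount-cgroup", Image: image},
				{Name: "apply-sysctl-overwrites", Image: image},
				{Name: "mount-bpf-fs", Image: image},
				{Name: "clean-cilium-state", Image: image},
			},
			Containers: []corev1.Container{
				{Name: "cilium-agent", Image: image},
			},
		},
	}

	for _, c := range pod.Spec.InitContainers {
		fmt.Println("init:", c.Name)
	}
	fmt.Println("main:", pod.Spec.Containers[0].Name)
}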
May 17 00:07:54.500761 systemd-networkd[1852]: lxc_health: Link UP
May 17 00:07:54.520920 systemd-networkd[1852]: lxc_health: Gained carrier
May 17 00:07:55.186160 kubelet[3216]: I0517 00:07:55.186053 3216 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2r4x8" podStartSLOduration=11.186032174 podStartE2EDuration="11.186032174s" podCreationTimestamp="2025-05-17 00:07:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:07:50.346712206 +0000 UTC m=+114.926416736" watchObservedRunningTime="2025-05-17 00:07:55.186032174 +0000 UTC m=+119.765736692"
May 17 00:07:55.634115 containerd[2012]: time="2025-05-17T00:07:55.633840905Z" level=info msg="StopPodSandbox for \"7b52d9cc7400795ee7505312651bde168c221f15a859c806d5019601686dfb65\""
May 17 00:07:55.634115 containerd[2012]: time="2025-05-17T00:07:55.633978317Z" level=info msg="TearDown network for sandbox \"7b52d9cc7400795ee7505312651bde168c221f15a859c806d5019601686dfb65\" successfully"
May 17 00:07:55.634115 containerd[2012]: time="2025-05-17T00:07:55.634002437Z" level=info msg="StopPodSandbox for \"7b52d9cc7400795ee7505312651bde168c221f15a859c806d5019601686dfb65\" returns successfully"
May 17 00:07:55.634870 containerd[2012]: time="2025-05-17T00:07:55.634774901Z" level=info msg="RemovePodSandbox for \"7b52d9cc7400795ee7505312651bde168c221f15a859c806d5019601686dfb65\""
May 17 00:07:55.634870 containerd[2012]: time="2025-05-17T00:07:55.634844285Z" level=info msg="Forcibly stopping sandbox \"7b52d9cc7400795ee7505312651bde168c221f15a859c806d5019601686dfb65\""
May 17 00:07:55.635100 containerd[2012]: time="2025-05-17T00:07:55.634987289Z" level=info msg="TearDown network for sandbox \"7b52d9cc7400795ee7505312651bde168c221f15a859c806d5019601686dfb65\" successfully"
May 17 00:07:55.643448 containerd[2012]: time="2025-05-17T00:07:55.643355705Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7b52d9cc7400795ee7505312651bde168c221f15a859c806d5019601686dfb65\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 17 00:07:55.643702 containerd[2012]: time="2025-05-17T00:07:55.643487237Z" level=info msg="RemovePodSandbox \"7b52d9cc7400795ee7505312651bde168c221f15a859c806d5019601686dfb65\" returns successfully"
May 17 00:07:55.644743 containerd[2012]: time="2025-05-17T00:07:55.644640497Z" level=info msg="StopPodSandbox for \"d12e0a5168b4933ccda499f6bbec35fa097aabe028e7591d631f143c32227098\""
May 17 00:07:55.644927 containerd[2012]: time="2025-05-17T00:07:55.644882813Z" level=info msg="TearDown network for sandbox \"d12e0a5168b4933ccda499f6bbec35fa097aabe028e7591d631f143c32227098\" successfully"
May 17 00:07:55.644991 containerd[2012]: time="2025-05-17T00:07:55.644919233Z" level=info msg="StopPodSandbox for \"d12e0a5168b4933ccda499f6bbec35fa097aabe028e7591d631f143c32227098\" returns successfully"
May 17 00:07:55.648245 containerd[2012]: time="2025-05-17T00:07:55.648182669Z" level=info msg="RemovePodSandbox for \"d12e0a5168b4933ccda499f6bbec35fa097aabe028e7591d631f143c32227098\""
May 17 00:07:55.648245 containerd[2012]: time="2025-05-17T00:07:55.648241325Z" level=info msg="Forcibly stopping sandbox \"d12e0a5168b4933ccda499f6bbec35fa097aabe028e7591d631f143c32227098\""
May 17 00:07:55.648476 containerd[2012]: time="2025-05-17T00:07:55.648350165Z" level=info msg="TearDown network for sandbox \"d12e0a5168b4933ccda499f6bbec35fa097aabe028e7591d631f143c32227098\" successfully"
May 17 00:07:55.654524 containerd[2012]: time="2025-05-17T00:07:55.654430757Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d12e0a5168b4933ccda499f6bbec35fa097aabe028e7591d631f143c32227098\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 17 00:07:55.654663 containerd[2012]: time="2025-05-17T00:07:55.654590213Z" level=info msg="RemovePodSandbox \"d12e0a5168b4933ccda499f6bbec35fa097aabe028e7591d631f143c32227098\" returns successfully"
May 17 00:07:56.233335 systemd-networkd[1852]: lxc_health: Gained IPv6LL
May 17 00:07:58.605933 ntpd[1988]: Listen normally on 14 lxc_health [fe80::6895:efff:fe53:28ae%14]:123
May 17 00:07:58.608129 ntpd[1988]: 17 May 00:07:58 ntpd[1988]: Listen normally on 14 lxc_health [fe80::6895:efff:fe53:28ae%14]:123
May 17 00:07:58.701178 systemd[1]: run-containerd-runc-k8s.io-c17fab2620037680bed5c6d0f9e823463fff0276e3195f6831b06156cc12dd66-runc.u7eHEy.mount: Deactivated successfully.
May 17 00:08:01.222851 sshd[5212]: pam_unix(sshd:session): session closed for user core
May 17 00:08:01.229237 systemd[1]: sshd@28-172.31.26.249:22-139.178.89.65:42744.service: Deactivated successfully.
May 17 00:08:01.237197 systemd[1]: session-29.scope: Deactivated successfully.
May 17 00:08:01.243548 systemd-logind[1993]: Session 29 logged out. Waiting for processes to exit.
May 17 00:08:01.246455 systemd-logind[1993]: Removed session 29.
May 17 00:08:15.406826 systemd[1]: cri-containerd-0c970ec3409104524c2ba3b8cb31944229c0b72bfcb3c4628be6c49374067820.scope: Deactivated successfully.
May 17 00:08:15.407327 systemd[1]: cri-containerd-0c970ec3409104524c2ba3b8cb31944229c0b72bfcb3c4628be6c49374067820.scope: Consumed 5.237s CPU time, 18.1M memory peak, 0B memory swap peak.
May 17 00:08:15.447832 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c970ec3409104524c2ba3b8cb31944229c0b72bfcb3c4628be6c49374067820-rootfs.mount: Deactivated successfully.
May 17 00:08:15.458926 containerd[2012]: time="2025-05-17T00:08:15.458831171Z" level=info msg="shim disconnected" id=0c970ec3409104524c2ba3b8cb31944229c0b72bfcb3c4628be6c49374067820 namespace=k8s.io
May 17 00:08:15.458926 containerd[2012]: time="2025-05-17T00:08:15.458926331Z" level=warning msg="cleaning up after shim disconnected" id=0c970ec3409104524c2ba3b8cb31944229c0b72bfcb3c4628be6c49374067820 namespace=k8s.io
May 17 00:08:15.458926 containerd[2012]: time="2025-05-17T00:08:15.458949719Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:08:16.386990 kubelet[3216]: I0517 00:08:16.386680 3216 scope.go:117] "RemoveContainer" containerID="0c970ec3409104524c2ba3b8cb31944229c0b72bfcb3c4628be6c49374067820"
May 17 00:08:16.391255 containerd[2012]: time="2025-05-17T00:08:16.391153236Z" level=info msg="CreateContainer within sandbox \"4862abc523d3ff41e41c96b0f6664b36a21923c93e8fd428609bae14fc1016c8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
May 17 00:08:16.426699 containerd[2012]: time="2025-05-17T00:08:16.426552780Z" level=info msg="CreateContainer within sandbox \"4862abc523d3ff41e41c96b0f6664b36a21923c93e8fd428609bae14fc1016c8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"97c62d29a5a6b9e36ff2b864fc9c9256ef0a49f4723ccaf61d0a42709d158fd6\""
May 17 00:08:16.427926 containerd[2012]: time="2025-05-17T00:08:16.427471296Z" level=info msg="StartContainer for \"97c62d29a5a6b9e36ff2b864fc9c9256ef0a49f4723ccaf61d0a42709d158fd6\""
May 17 00:08:16.480838 systemd[1]: Started cri-containerd-97c62d29a5a6b9e36ff2b864fc9c9256ef0a49f4723ccaf61d0a42709d158fd6.scope - libcontainer container 97c62d29a5a6b9e36ff2b864fc9c9256ef0a49f4723ccaf61d0a42709d158fd6.
May 17 00:08:16.551923 containerd[2012]: time="2025-05-17T00:08:16.551706480Z" level=info msg="StartContainer for \"97c62d29a5a6b9e36ff2b864fc9c9256ef0a49f4723ccaf61d0a42709d158fd6\" returns successfully"
May 17 00:08:19.024973 kubelet[3216]: E0517 00:08:19.024148 3216 controller.go:195] "Failed to update lease" err="Put \"https://172.31.26.249:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-249?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
May 17 00:08:21.096122 systemd[1]: cri-containerd-374637325c0618010b0d5acdc8426e874c43a235db95ebce74239066994fa887.scope: Deactivated successfully.
May 17 00:08:21.097329 systemd[1]: cri-containerd-374637325c0618010b0d5acdc8426e874c43a235db95ebce74239066994fa887.scope: Consumed 3.075s CPU time, 16.1M memory peak, 0B memory swap peak.
May 17 00:08:21.137240 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-374637325c0618010b0d5acdc8426e874c43a235db95ebce74239066994fa887-rootfs.mount: Deactivated successfully.
May 17 00:08:21.152100 containerd[2012]: time="2025-05-17T00:08:21.152011659Z" level=info msg="shim disconnected" id=374637325c0618010b0d5acdc8426e874c43a235db95ebce74239066994fa887 namespace=k8s.io
May 17 00:08:21.152100 containerd[2012]: time="2025-05-17T00:08:21.152089011Z" level=warning msg="cleaning up after shim disconnected" id=374637325c0618010b0d5acdc8426e874c43a235db95ebce74239066994fa887 namespace=k8s.io
May 17 00:08:21.153284 containerd[2012]: time="2025-05-17T00:08:21.152112951Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:08:21.172192 containerd[2012]: time="2025-05-17T00:08:21.172033119Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:08:21Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
May 17 00:08:21.406202 kubelet[3216]: I0517 00:08:21.406025 3216 scope.go:117] "RemoveContainer" containerID="374637325c0618010b0d5acdc8426e874c43a235db95ebce74239066994fa887"
May 17 00:08:21.409947 containerd[2012]: time="2025-05-17T00:08:21.409868273Z" level=info msg="CreateContainer within sandbox \"4ea05cdc249eb21000efaa033d289f7e4a5a8b35e4aca27537d97ecec4506bfd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
May 17 00:08:21.439350 containerd[2012]: time="2025-05-17T00:08:21.439212929Z" level=info msg="CreateContainer within sandbox \"4ea05cdc249eb21000efaa033d289f7e4a5a8b35e4aca27537d97ecec4506bfd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"d2488732da55fcbe393846b9e16d8989779912669a9390573f6e38c4bc1958bc\""
May 17 00:08:21.439963 containerd[2012]: time="2025-05-17T00:08:21.439924037Z" level=info msg="StartContainer for \"d2488732da55fcbe393846b9e16d8989779912669a9390573f6e38c4bc1958bc\""
May 17 00:08:21.491814 systemd[1]: Started cri-containerd-d2488732da55fcbe393846b9e16d8989779912669a9390573f6e38c4bc1958bc.scope - libcontainer container d2488732da55fcbe393846b9e16d8989779912669a9390573f6e38c4bc1958bc.
May 17 00:08:21.558237 containerd[2012]: time="2025-05-17T00:08:21.557992709Z" level=info msg="StartContainer for \"d2488732da55fcbe393846b9e16d8989779912669a9390573f6e38c4bc1958bc\" returns successfully"
May 17 00:08:29.025425 kubelet[3216]: E0517 00:08:29.025057 3216 controller.go:195] "Failed to update lease" err="Put \"https://172.31.26.249:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-249?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
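[Editor's note] This last stretch of the log shows kubelet replacing the exited kube-controller-manager and kube-scheduler containers (a "RemoveContainer" followed by CreateContainer with Attempt:1) while lease renewals against the API server intermittently time out. As a hedged, illustrative sketch only (the kubeconfig path is a placeholder and nothing here was run on this host), container restart counts like these could be inspected from outside with client-go:

// Sketch: list restart counts for kube-system pods with client-go, the
// counter kubelet increments when it recreates a container, as reflected by
// the Attempt:1 metadata in the log above. The kubeconfig path is a placeholder.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, pod := range pods.Items {
		for _, cs := range pod.Status.ContainerStatuses {
			// RestartCount reflects how many times kubelet has recreated the container.
			fmt.Printf("%s/%s restarts=%d\n", pod.Name, cs.Name, cs.RestartCount)
		}
	}
}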