May 9 23:57:47.222310 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
May 9 23:57:47.222355 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri May 9 22:39:45 -00 2025
May 9 23:57:47.222379 kernel: KASLR disabled due to lack of seed
May 9 23:57:47.222395 kernel: efi: EFI v2.7 by EDK II
May 9 23:57:47.222411 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b000a98 MEMRESERVE=0x7852ee18
May 9 23:57:47.222427 kernel: ACPI: Early table checksum verification disabled
May 9 23:57:47.222444 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
May 9 23:57:47.222460 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
May 9 23:57:47.222476 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
May 9 23:57:47.222491 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
May 9 23:57:47.222511 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
May 9 23:57:47.222527 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
May 9 23:57:47.222542 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
May 9 23:57:47.222558 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
May 9 23:57:47.222577 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
May 9 23:57:47.222597 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
May 9 23:57:47.222614 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
May 9 23:57:47.222630 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
May 9 23:57:47.222647 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
May 9 23:57:47.222663 kernel: printk: bootconsole [uart0] enabled
May 9 23:57:47.222679 kernel: NUMA: Failed to initialise from firmware
May 9 23:57:47.222696 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
May 9 23:57:47.222713 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
May 9 23:57:47.222729 kernel: Zone ranges:
May 9 23:57:47.222745 kernel:   DMA    [mem 0x0000000040000000-0x00000000ffffffff]
May 9 23:57:47.222761 kernel:   DMA32  empty
May 9 23:57:47.222781 kernel:   Normal [mem 0x0000000100000000-0x00000004b5ffffff]
May 9 23:57:47.222798 kernel: Movable zone start for each node
May 9 23:57:47.222814 kernel: Early memory node ranges
May 9 23:57:47.222830 kernel:   node 0: [mem 0x0000000040000000-0x000000007862ffff]
May 9 23:57:47.222847 kernel:   node 0: [mem 0x0000000078630000-0x000000007863ffff]
May 9 23:57:47.222863 kernel:   node 0: [mem 0x0000000078640000-0x00000000786effff]
May 9 23:57:47.222879 kernel:   node 0: [mem 0x00000000786f0000-0x000000007872ffff]
May 9 23:57:47.222895 kernel:   node 0: [mem 0x0000000078730000-0x000000007bbfffff]
May 9 23:57:47.222943 kernel:   node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
May 9 23:57:47.222962 kernel:   node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
May 9 23:57:47.222979 kernel:   node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
May 9 23:57:47.222996 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
May 9 23:57:47.223019 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
May 9 23:57:47.223036 kernel: psci: probing for conduit method from ACPI.
May 9 23:57:47.223060 kernel: psci: PSCIv1.0 detected in firmware.
May 9 23:57:47.223078 kernel: psci: Using standard PSCI v0.2 function IDs
May 9 23:57:47.223095 kernel: psci: Trusted OS migration not required
May 9 23:57:47.223117 kernel: psci: SMC Calling Convention v1.1
May 9 23:57:47.223134 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 9 23:57:47.223151 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 9 23:57:47.223169 kernel: pcpu-alloc: [0] 0 [0] 1
May 9 23:57:47.223186 kernel: Detected PIPT I-cache on CPU0
May 9 23:57:47.223203 kernel: CPU features: detected: GIC system register CPU interface
May 9 23:57:47.223221 kernel: CPU features: detected: Spectre-v2
May 9 23:57:47.223238 kernel: CPU features: detected: Spectre-v3a
May 9 23:57:47.223255 kernel: CPU features: detected: Spectre-BHB
May 9 23:57:47.223272 kernel: CPU features: detected: ARM erratum 1742098
May 9 23:57:47.223290 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
May 9 23:57:47.223311 kernel: alternatives: applying boot alternatives
May 9 23:57:47.223331 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6ddfb314c5db7ed82ab49390a2bb52fe12211605ed2a5a27fb38ec34b3cca5b4
May 9 23:57:47.223350 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 9 23:57:47.223367 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 9 23:57:47.223385 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 9 23:57:47.223402 kernel: Fallback order for Node 0: 0
May 9 23:57:47.223420 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 991872
May 9 23:57:47.223437 kernel: Policy zone: Normal
May 9 23:57:47.223454 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 9 23:57:47.223471 kernel: software IO TLB: area num 2.
May 9 23:57:47.223489 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
May 9 23:57:47.223511 kernel: Memory: 3820088K/4030464K available (10304K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 210376K reserved, 0K cma-reserved)
May 9 23:57:47.223529 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 9 23:57:47.223546 kernel: rcu: Preemptible hierarchical RCU implementation.
May 9 23:57:47.223582 kernel: rcu: RCU event tracing is enabled.
May 9 23:57:47.223602 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 9 23:57:47.223620 kernel: Trampoline variant of Tasks RCU enabled.
May 9 23:57:47.223637 kernel: Tracing variant of Tasks RCU enabled.
May 9 23:57:47.223655 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 9 23:57:47.223672 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 9 23:57:47.223689 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 9 23:57:47.223706 kernel: GICv3: 96 SPIs implemented
May 9 23:57:47.223729 kernel: GICv3: 0 Extended SPIs implemented
May 9 23:57:47.223746 kernel: Root IRQ handler: gic_handle_irq
May 9 23:57:47.223763 kernel: GICv3: GICv3 features: 16 PPIs
May 9 23:57:47.223781 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
May 9 23:57:47.223798 kernel: ITS [mem 0x10080000-0x1009ffff]
May 9 23:57:47.223815 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
May 9 23:57:47.223833 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
May 9 23:57:47.223850 kernel: GICv3: using LPI property table @0x00000004000d0000
May 9 23:57:47.223867 kernel: ITS: Using hypervisor restricted LPI range [128]
May 9 23:57:47.223884 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
May 9 23:57:47.223902 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 9 23:57:47.225109 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
May 9 23:57:47.225139 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
May 9 23:57:47.225157 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
May 9 23:57:47.225174 kernel: Console: colour dummy device 80x25
May 9 23:57:47.225192 kernel: printk: console [tty1] enabled
May 9 23:57:47.225210 kernel: ACPI: Core revision 20230628
May 9 23:57:47.225229 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
May 9 23:57:47.225247 kernel: pid_max: default: 32768 minimum: 301
May 9 23:57:47.225265 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 9 23:57:47.225282 kernel: landlock: Up and running.
May 9 23:57:47.225304 kernel: SELinux: Initializing.
May 9 23:57:47.225322 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 9 23:57:47.225340 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 9 23:57:47.225358 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 9 23:57:47.225376 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 9 23:57:47.225396 kernel: rcu: Hierarchical SRCU implementation.
May 9 23:57:47.225415 kernel: rcu: Max phase no-delay instances is 400.
May 9 23:57:47.225432 kernel: Platform MSI: ITS@0x10080000 domain created
May 9 23:57:47.225450 kernel: PCI/MSI: ITS@0x10080000 domain created
May 9 23:57:47.225472 kernel: Remapping and enabling EFI services.
May 9 23:57:47.225490 kernel: smp: Bringing up secondary CPUs ...
May 9 23:57:47.225508 kernel: Detected PIPT I-cache on CPU1
May 9 23:57:47.225525 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
May 9 23:57:47.225543 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
May 9 23:57:47.225561 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
May 9 23:57:47.225579 kernel: smp: Brought up 1 node, 2 CPUs
May 9 23:57:47.225596 kernel: SMP: Total of 2 processors activated.
May 9 23:57:47.225614 kernel: CPU features: detected: 32-bit EL0 Support
May 9 23:57:47.225636 kernel: CPU features: detected: 32-bit EL1 Support
May 9 23:57:47.225654 kernel: CPU features: detected: CRC32 instructions
May 9 23:57:47.225672 kernel: CPU: All CPU(s) started at EL1
May 9 23:57:47.225701 kernel: alternatives: applying system-wide alternatives
May 9 23:57:47.225723 kernel: devtmpfs: initialized
May 9 23:57:47.225742 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 9 23:57:47.225760 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 9 23:57:47.225779 kernel: pinctrl core: initialized pinctrl subsystem
May 9 23:57:47.225797 kernel: SMBIOS 3.0.0 present.
May 9 23:57:47.225815 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
May 9 23:57:47.225838 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 9 23:57:47.225857 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 9 23:57:47.225876 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 9 23:57:47.225894 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 9 23:57:47.225944 kernel: audit: initializing netlink subsys (disabled)
May 9 23:57:47.225966 kernel: audit: type=2000 audit(0.285:1): state=initialized audit_enabled=0 res=1
May 9 23:57:47.225985 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 9 23:57:47.226010 kernel: cpuidle: using governor menu
May 9 23:57:47.226030 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 9 23:57:47.226049 kernel: ASID allocator initialised with 65536 entries
May 9 23:57:47.226067 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 9 23:57:47.226085 kernel: Serial: AMBA PL011 UART driver
May 9 23:57:47.226104 kernel: Modules: 17488 pages in range for non-PLT usage
May 9 23:57:47.226122 kernel: Modules: 509008 pages in range for PLT usage
May 9 23:57:47.226141 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 9 23:57:47.226159 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 9 23:57:47.226183 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 9 23:57:47.226202 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 9 23:57:47.226221 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 9 23:57:47.226240 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 9 23:57:47.226259 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 9 23:57:47.226277 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 9 23:57:47.226296 kernel: ACPI: Added _OSI(Module Device)
May 9 23:57:47.226315 kernel: ACPI: Added _OSI(Processor Device)
May 9 23:57:47.226334 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 9 23:57:47.226356 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 9 23:57:47.226376 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 9 23:57:47.226394 kernel: ACPI: Interpreter enabled
May 9 23:57:47.226413 kernel: ACPI: Using GIC for interrupt routing
May 9 23:57:47.226431 kernel: ACPI: MCFG table detected, 1 entries
May 9 23:57:47.226450 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
May 9 23:57:47.226750 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 9 23:57:47.227056 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 9 23:57:47.227276 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 9 23:57:47.227479 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
May 9 23:57:47.227711 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
May 9 23:57:47.227741 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
May 9 23:57:47.227762 kernel: acpiphp: Slot [1] registered
May 9 23:57:47.227782 kernel: acpiphp: Slot [2] registered
May 9 23:57:47.227801 kernel: acpiphp: Slot [3] registered
May 9 23:57:47.227819 kernel: acpiphp: Slot [4] registered
May 9 23:57:47.227844 kernel: acpiphp: Slot [5] registered
May 9 23:57:47.227863 kernel: acpiphp: Slot [6] registered
May 9 23:57:47.227883 kernel: acpiphp: Slot [7] registered
May 9 23:57:47.227901 kernel: acpiphp: Slot [8] registered
May 9 23:57:47.227958 kernel: acpiphp: Slot [9] registered
May 9 23:57:47.227977 kernel: acpiphp: Slot [10] registered
May 9 23:57:47.227996 kernel: acpiphp: Slot [11] registered
May 9 23:57:47.228015 kernel: acpiphp: Slot [12] registered
May 9 23:57:47.228034 kernel: acpiphp: Slot [13] registered
May 9 23:57:47.228052 kernel: acpiphp: Slot [14] registered
May 9 23:57:47.228077 kernel: acpiphp: Slot [15] registered
May 9 23:57:47.228095 kernel: acpiphp: Slot [16] registered
May 9 23:57:47.228113 kernel: acpiphp: Slot [17] registered
May 9 23:57:47.228132 kernel: acpiphp: Slot [18] registered
May 9 23:57:47.228150 kernel: acpiphp: Slot [19] registered
May 9 23:57:47.228168 kernel: acpiphp: Slot [20] registered
May 9 23:57:47.228187 kernel: acpiphp: Slot [21] registered
May 9 23:57:47.228205 kernel: acpiphp: Slot [22] registered
May 9 23:57:47.228223 kernel: acpiphp: Slot [23] registered
May 9 23:57:47.228245 kernel: acpiphp: Slot [24] registered
May 9 23:57:47.228264 kernel: acpiphp: Slot [25] registered
May 9 23:57:47.228282 kernel: acpiphp: Slot [26] registered
May 9 23:57:47.228301 kernel: acpiphp: Slot [27] registered
May 9 23:57:47.228319 kernel: acpiphp: Slot [28] registered
May 9 23:57:47.228337 kernel: acpiphp: Slot [29] registered
May 9 23:57:47.228355 kernel: acpiphp: Slot [30] registered
May 9 23:57:47.228373 kernel: acpiphp: Slot [31] registered
May 9 23:57:47.228391 kernel: PCI host bridge to bus 0000:00
May 9 23:57:47.228634 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
May 9 23:57:47.228830 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 9 23:57:47.229079 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
May 9 23:57:47.229276 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
May 9 23:57:47.229515 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
May 9 23:57:47.229744 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
May 9 23:57:47.229982 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
May 9 23:57:47.230221 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
May 9 23:57:47.230434 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
May 9 23:57:47.230643 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
May 9 23:57:47.230872 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
May 9 23:57:47.231124 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
May 9 23:57:47.231340 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
May 9 23:57:47.231573 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
May 9 23:57:47.231798 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
May 9 23:57:47.232096 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
May 9 23:57:47.232304 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
May 9 23:57:47.232512 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
May 9 23:57:47.232715 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
May 9 23:57:47.232941 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
May 9 23:57:47.233143 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
May 9 23:57:47.233327 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 9 23:57:47.233514 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
May 9 23:57:47.233540 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 9 23:57:47.233560 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 9 23:57:47.233579 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 9 23:57:47.233599 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 9 23:57:47.233619 kernel: iommu: Default domain type: Translated
May 9 23:57:47.233638 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 9 23:57:47.233666 kernel: efivars: Registered efivars operations
May 9 23:57:47.233685 kernel: vgaarb: loaded
May 9 23:57:47.233705 kernel: clocksource: Switched to clocksource arch_sys_counter
May 9 23:57:47.233724 kernel: VFS: Disk quotas dquot_6.6.0
May 9 23:57:47.233743 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 9 23:57:47.233763 kernel: pnp: PnP ACPI init
May 9 23:57:47.234137 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
May 9 23:57:47.234171 kernel: pnp: PnP ACPI: found 1 devices
May 9 23:57:47.234197 kernel: NET: Registered PF_INET protocol family
May 9 23:57:47.234217 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 9 23:57:47.234236 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 9 23:57:47.234254 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 9 23:57:47.234273 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 9 23:57:47.234291 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 9 23:57:47.234310 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 9 23:57:47.234329 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 9 23:57:47.234347 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 9 23:57:47.234370 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 9 23:57:47.234389 kernel: PCI: CLS 0 bytes, default 64
May 9 23:57:47.234407 kernel: kvm [1]: HYP mode not available
May 9 23:57:47.234425 kernel: Initialise system trusted keyrings
May 9 23:57:47.234444 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 9 23:57:47.234463 kernel: Key type asymmetric registered
May 9 23:57:47.234481 kernel: Asymmetric key parser 'x509' registered
May 9 23:57:47.234500 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 9 23:57:47.234518 kernel: io scheduler mq-deadline registered
May 9 23:57:47.234541 kernel: io scheduler kyber registered
May 9 23:57:47.234559 kernel: io scheduler bfq registered
May 9 23:57:47.234770 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
May 9 23:57:47.234798 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 9 23:57:47.234817 kernel: ACPI: button: Power Button [PWRB]
May 9 23:57:47.234836 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
May 9 23:57:47.234855 kernel: ACPI: button: Sleep Button [SLPB]
May 9 23:57:47.234874 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 9 23:57:47.234898 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
May 9 23:57:47.235216 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
May 9 23:57:47.235543 kernel: printk: console [ttyS0] disabled
May 9 23:57:47.235863 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
May 9 23:57:47.237132 kernel: printk: console [ttyS0] enabled
May 9 23:57:47.237251 kernel: printk: bootconsole [uart0] disabled
May 9 23:57:47.237271 kernel: thunder_xcv, ver 1.0
May 9 23:57:47.237291 kernel: thunder_bgx, ver 1.0
May 9 23:57:47.237310 kernel: nicpf, ver 1.0
May 9 23:57:47.237337 kernel: nicvf, ver 1.0
May 9 23:57:47.237584 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 9 23:57:47.237783 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-09T23:57:46 UTC (1746835066)
May 9 23:57:47.237810 kernel: hid: raw HID events driver (C) Jiri Kosina
May 9 23:57:47.237829 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
May 9 23:57:47.237848 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 9 23:57:47.237867 kernel: watchdog: Hard watchdog permanently disabled
May 9 23:57:47.237887 kernel: NET: Registered PF_INET6 protocol family
May 9 23:57:47.237935 kernel: Segment Routing with IPv6
May 9 23:57:47.237957 kernel: In-situ OAM (IOAM) with IPv6
May 9 23:57:47.237976 kernel: NET: Registered PF_PACKET protocol family
May 9 23:57:47.237994 kernel: Key type dns_resolver registered
May 9 23:57:47.238013 kernel: registered taskstats version 1
May 9 23:57:47.238032 kernel: Loading compiled-in X.509 certificates
May 9 23:57:47.238051 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 02a1572fa4e3e92c40cffc658d8dbcab2e5537ff'
May 9 23:57:47.238070 kernel: Key type .fscrypt registered
May 9 23:57:47.238088 kernel: Key type fscrypt-provisioning registered
May 9 23:57:47.238112 kernel: ima: No TPM chip found, activating TPM-bypass!
May 9 23:57:47.238131 kernel: ima: Allocated hash algorithm: sha1
May 9 23:57:47.238150 kernel: ima: No architecture policies found
May 9 23:57:47.238168 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 9 23:57:47.238187 kernel: clk: Disabling unused clocks
May 9 23:57:47.238205 kernel: Freeing unused kernel memory: 39424K
May 9 23:57:47.238223 kernel: Run /init as init process
May 9 23:57:47.238242 kernel:   with arguments:
May 9 23:57:47.238260 kernel:     /init
May 9 23:57:47.238279 kernel:   with environment:
May 9 23:57:47.238302 kernel:     HOME=/
May 9 23:57:47.238321 kernel:     TERM=linux
May 9 23:57:47.238340 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
May 9 23:57:47.238363 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 9 23:57:47.238387 systemd[1]: Detected virtualization amazon.
May 9 23:57:47.238407 systemd[1]: Detected architecture arm64.
May 9 23:57:47.238428 systemd[1]: Running in initrd.
May 9 23:57:47.241027 systemd[1]: No hostname configured, using default hostname.
May 9 23:57:47.241052 systemd[1]: Hostname set to .
May 9 23:57:47.241074 systemd[1]: Initializing machine ID from VM UUID.
May 9 23:57:47.241095 systemd[1]: Queued start job for default target initrd.target.
May 9 23:57:47.241115 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 23:57:47.241136 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 23:57:47.241158 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 9 23:57:47.241179 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 9 23:57:47.241205 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 9 23:57:47.241227 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 9 23:57:47.241251 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 9 23:57:47.241272 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 9 23:57:47.241293 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 23:57:47.241313 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 9 23:57:47.241333 systemd[1]: Reached target paths.target - Path Units.
May 9 23:57:47.241358 systemd[1]: Reached target slices.target - Slice Units.
May 9 23:57:47.241379 systemd[1]: Reached target swap.target - Swaps.
May 9 23:57:47.241399 systemd[1]: Reached target timers.target - Timer Units.
May 9 23:57:47.241419 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 9 23:57:47.241440 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 9 23:57:47.241461 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 9 23:57:47.241481 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 9 23:57:47.241501 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 9 23:57:47.241522 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 9 23:57:47.241547 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 23:57:47.241567 systemd[1]: Reached target sockets.target - Socket Units.
May 9 23:57:47.241588 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 9 23:57:47.241608 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 9 23:57:47.241629 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 9 23:57:47.241649 systemd[1]: Starting systemd-fsck-usr.service...
May 9 23:57:47.241669 systemd[1]: Starting systemd-journald.service - Journal Service...
May 9 23:57:47.241689 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 9 23:57:47.241713 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 23:57:47.241734 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 9 23:57:47.241755 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 23:57:47.241775 systemd[1]: Finished systemd-fsck-usr.service.
May 9 23:57:47.241796 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 9 23:57:47.241873 systemd-journald[250]: Collecting audit messages is disabled.
May 9 23:57:47.241951 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 23:57:47.241975 systemd-journald[250]: Journal started
May 9 23:57:47.242020 systemd-journald[250]: Runtime Journal (/run/log/journal/ec2ec301db4cb9a7ad17f0e47eb974db) is 8.0M, max 75.3M, 67.3M free.
May 9 23:57:47.209608 systemd-modules-load[251]: Inserted module 'overlay'
May 9 23:57:47.254157 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 9 23:57:47.254217 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 23:57:47.258010 systemd[1]: Started systemd-journald.service - Journal Service.
May 9 23:57:47.259829 kernel: Bridge firewalling registered
May 9 23:57:47.258942 systemd-modules-load[251]: Inserted module 'br_netfilter'
May 9 23:57:47.261689 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 9 23:57:47.263409 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 9 23:57:47.271187 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 9 23:57:47.289079 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 9 23:57:47.293150 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 9 23:57:47.325014 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 9 23:57:47.340971 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 23:57:47.346122 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 23:57:47.359215 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 9 23:57:47.363613 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 23:57:47.379174 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 9 23:57:47.430702 dracut-cmdline[290]: dracut-dracut-053
May 9 23:57:47.437266 systemd-resolved[288]: Positive Trust Anchors:
May 9 23:57:47.437304 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 9 23:57:47.437368 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 9 23:57:47.456514 dracut-cmdline[290]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6ddfb314c5db7ed82ab49390a2bb52fe12211605ed2a5a27fb38ec34b3cca5b4
May 9 23:57:47.604938 kernel: SCSI subsystem initialized
May 9 23:57:47.609938 kernel: Loading iSCSI transport class v2.0-870.
May 9 23:57:47.622944 kernel: iscsi: registered transport (tcp)
May 9 23:57:47.645047 kernel: iscsi: registered transport (qla4xxx)
May 9 23:57:47.645132 kernel: QLogic iSCSI HBA Driver
May 9 23:57:47.703956 kernel: random: crng init done
May 9 23:57:47.704228 systemd-resolved[288]: Defaulting to hostname 'linux'.
May 9 23:57:47.707224 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 9 23:57:47.709543 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 9 23:57:47.736586 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 9 23:57:47.749213 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 9 23:57:47.782421 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 9 23:57:47.782508 kernel: device-mapper: uevent: version 1.0.3
May 9 23:57:47.784256 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 9 23:57:47.849991 kernel: raid6: neonx8 gen() 6738 MB/s
May 9 23:57:47.866957 kernel: raid6: neonx4 gen() 6539 MB/s
May 9 23:57:47.883942 kernel: raid6: neonx2 gen() 5463 MB/s
May 9 23:57:47.900937 kernel: raid6: neonx1 gen() 3969 MB/s
May 9 23:57:47.917952 kernel: raid6: int64x8 gen() 3832 MB/s
May 9 23:57:47.934942 kernel: raid6: int64x4 gen() 3719 MB/s
May 9 23:57:47.951941 kernel: raid6: int64x2 gen() 3615 MB/s
May 9 23:57:47.969785 kernel: raid6: int64x1 gen() 2768 MB/s
May 9 23:57:47.969851 kernel: raid6: using algorithm neonx8 gen() 6738 MB/s
May 9 23:57:47.987762 kernel: raid6: .... xor() 4867 MB/s, rmw enabled
May 9 23:57:47.987836 kernel: raid6: using neon recovery algorithm
May 9 23:57:47.996309 kernel: xor: measuring software checksum speed
May 9 23:57:47.996368 kernel: 8regs : 10970 MB/sec
May 9 23:57:47.997421 kernel: 32regs : 11941 MB/sec
May 9 23:57:47.998606 kernel: arm64_neon : 9283 MB/sec
May 9 23:57:47.998638 kernel: xor: using function: 32regs (11941 MB/sec)
May 9 23:57:48.087960 kernel: Btrfs loaded, zoned=no, fsverity=no
May 9 23:57:48.109315 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 9 23:57:48.122234 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 23:57:48.163423 systemd-udevd[471]: Using default interface naming scheme 'v255'.
May 9 23:57:48.173075 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 23:57:48.195574 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 9 23:57:48.238635 dracut-pre-trigger[482]: rd.md=0: removing MD RAID activation
May 9 23:57:48.295854 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 9 23:57:48.306233 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 9 23:57:48.432208 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 9 23:57:48.444290 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 9 23:57:48.490028 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 9 23:57:48.495899 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 9 23:57:48.501112 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 23:57:48.506380 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 9 23:57:48.526844 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 9 23:57:48.574243 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 9 23:57:48.645280 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 9 23:57:48.645357 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
May 9 23:57:48.651338 kernel: ena 0000:00:05.0: ENA device version: 0.10
May 9 23:57:48.653794 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
May 9 23:57:48.663935 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:0f:9e:47:fd:e1
May 9 23:57:48.665201 (udev-worker)[521]: Network interface NamePolicy= disabled on kernel command line.
May 9 23:57:48.670673 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 9 23:57:48.670961 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 23:57:48.673648 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 23:57:48.675986 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 9 23:57:48.708081 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
May 9 23:57:48.676278 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 9 23:57:48.688089 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 9 23:57:48.713331 kernel: nvme nvme0: pci function 0000:00:04.0
May 9 23:57:48.717870 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 23:57:48.728891 kernel: nvme nvme0: 2/0/0 default/read/poll queues
May 9 23:57:48.738036 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 9 23:57:48.738107 kernel: GPT:9289727 != 16777215
May 9 23:57:48.741789 kernel: GPT:Alternate GPT header not at the end of the disk.
May 9 23:57:48.741854 kernel: GPT:9289727 != 16777215
May 9 23:57:48.741880 kernel: GPT: Use GNU Parted to correct GPT errors.
May 9 23:57:48.742979 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 9 23:57:48.755428 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 23:57:48.773341 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 23:57:48.816680 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 23:57:48.843951 kernel: BTRFS: device fsid 7278434d-1c51-4098-9ab9-92db46b8a354 devid 1 transid 41 /dev/nvme0n1p3 scanned by (udev-worker) (516)
May 9 23:57:48.875798 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (517)
May 9 23:57:48.932682 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
May 9 23:57:48.987397 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
May 9 23:57:49.002667 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
May 9 23:57:49.005215 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
May 9 23:57:49.024159 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
May 9 23:57:49.038426 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 9 23:57:49.052300 disk-uuid[661]: Primary Header is updated.
May 9 23:57:49.052300 disk-uuid[661]: Secondary Entries is updated.
May 9 23:57:49.052300 disk-uuid[661]: Secondary Header is updated.
May 9 23:57:49.063133 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 9 23:57:49.071939 kernel: GPT:disk_guids don't match.
May 9 23:57:49.072010 kernel: GPT: Use GNU Parted to correct GPT errors.
May 9 23:57:49.072037 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 9 23:57:49.080967 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 9 23:57:50.085088 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 9 23:57:50.086962 disk-uuid[662]: The operation has completed successfully.
May 9 23:57:50.275666 systemd[1]: disk-uuid.service: Deactivated successfully.
May 9 23:57:50.277673 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 9 23:57:50.325238 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 9 23:57:50.348565 sh[1006]: Success
May 9 23:57:50.374575 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 9 23:57:50.494798 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 9 23:57:50.506095 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 9 23:57:50.515265 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 9 23:57:50.552817 kernel: BTRFS info (device dm-0): first mount of filesystem 7278434d-1c51-4098-9ab9-92db46b8a354
May 9 23:57:50.552895 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 9 23:57:50.554725 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 9 23:57:50.556108 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 9 23:57:50.557256 kernel: BTRFS info (device dm-0): using free space tree
May 9 23:57:50.586955 kernel: BTRFS info (device dm-0): enabling ssd optimizations
May 9 23:57:50.602556 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 9 23:57:50.606694 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 9 23:57:50.618203 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 9 23:57:50.623174 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 9 23:57:50.660320 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48
May 9 23:57:50.660393 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
May 9 23:57:50.661918 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 9 23:57:50.668972 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 9 23:57:50.688676 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 9 23:57:50.690781 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48
May 9 23:57:50.699736 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 9 23:57:50.710354 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 9 23:57:50.819256 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 9 23:57:50.832351 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 9 23:57:50.894829 systemd-networkd[1198]: lo: Link UP
May 9 23:57:50.895662 systemd-networkd[1198]: lo: Gained carrier
May 9 23:57:50.900712 systemd-networkd[1198]: Enumeration completed
May 9 23:57:50.900937 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 9 23:57:50.903162 systemd[1]: Reached target network.target - Network.
May 9 23:57:50.906876 systemd-networkd[1198]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 23:57:50.906883 systemd-networkd[1198]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 9 23:57:50.913031 ignition[1119]: Ignition 2.19.0
May 9 23:57:50.913055 ignition[1119]: Stage: fetch-offline
May 9 23:57:50.919350 systemd-networkd[1198]: eth0: Link UP
May 9 23:57:50.914112 ignition[1119]: no configs at "/usr/lib/ignition/base.d"
May 9 23:57:50.919359 systemd-networkd[1198]: eth0: Gained carrier
May 9 23:57:50.914146 ignition[1119]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 9 23:57:50.919378 systemd-networkd[1198]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 23:57:50.919201 ignition[1119]: Ignition finished successfully
May 9 23:57:50.923896 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 9 23:57:50.944773 systemd-networkd[1198]: eth0: DHCPv4 address 172.31.18.52/20, gateway 172.31.16.1 acquired from 172.31.16.1
May 9 23:57:50.945653 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 9 23:57:50.983205 ignition[1206]: Ignition 2.19.0
May 9 23:57:50.983235 ignition[1206]: Stage: fetch
May 9 23:57:50.984479 ignition[1206]: no configs at "/usr/lib/ignition/base.d"
May 9 23:57:50.984505 ignition[1206]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 9 23:57:50.984654 ignition[1206]: PUT http://169.254.169.254/latest/api/token: attempt #1
May 9 23:57:50.993475 ignition[1206]: PUT result: OK
May 9 23:57:50.996051 ignition[1206]: parsed url from cmdline: ""
May 9 23:57:50.996116 ignition[1206]: no config URL provided
May 9 23:57:50.996135 ignition[1206]: reading system config file "/usr/lib/ignition/user.ign"
May 9 23:57:50.996160 ignition[1206]: no config at "/usr/lib/ignition/user.ign"
May 9 23:57:50.996192 ignition[1206]: PUT http://169.254.169.254/latest/api/token: attempt #1
May 9 23:57:50.999851 ignition[1206]: PUT result: OK
May 9 23:57:51.000079 ignition[1206]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
May 9 23:57:51.002214 ignition[1206]: GET result: OK
May 9 23:57:51.002355 ignition[1206]: parsing config with SHA512: 38d6cc663e6ad5df66dcdef9a06f306de86b9fa829dd4b1191b21a7ff34bdc02c194003d6dfa132c393af61b85ed5d8d5a587490b8cabe3a62403593b81307ba
May 9 23:57:51.014104 unknown[1206]: fetched base config from "system"
May 9 23:57:51.014131 unknown[1206]: fetched base config from "system"
May 9 23:57:51.014145 unknown[1206]: fetched user config from "aws"
May 9 23:57:51.018087 ignition[1206]: fetch: fetch complete
May 9 23:57:51.018099 ignition[1206]: fetch: fetch passed
May 9 23:57:51.018192 ignition[1206]: Ignition finished successfully
May 9 23:57:51.025970 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 9 23:57:51.035150 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 9 23:57:51.070649 ignition[1215]: Ignition 2.19.0
May 9 23:57:51.070681 ignition[1215]: Stage: kargs
May 9 23:57:51.072261 ignition[1215]: no configs at "/usr/lib/ignition/base.d"
May 9 23:57:51.072288 ignition[1215]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 9 23:57:51.072852 ignition[1215]: PUT http://169.254.169.254/latest/api/token: attempt #1
May 9 23:57:51.077524 ignition[1215]: PUT result: OK
May 9 23:57:51.083653 ignition[1215]: kargs: kargs passed
May 9 23:57:51.083979 ignition[1215]: Ignition finished successfully
May 9 23:57:51.089194 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 9 23:57:51.100183 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 9 23:57:51.125319 ignition[1221]: Ignition 2.19.0
May 9 23:57:51.125348 ignition[1221]: Stage: disks
May 9 23:57:51.126998 ignition[1221]: no configs at "/usr/lib/ignition/base.d"
May 9 23:57:51.127024 ignition[1221]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 9 23:57:51.127778 ignition[1221]: PUT http://169.254.169.254/latest/api/token: attempt #1
May 9 23:57:51.132130 ignition[1221]: PUT result: OK
May 9 23:57:51.138233 ignition[1221]: disks: disks passed
May 9 23:57:51.138375 ignition[1221]: Ignition finished successfully
May 9 23:57:51.145005 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 9 23:57:51.149243 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 9 23:57:51.149623 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 9 23:57:51.151862 systemd[1]: Reached target local-fs.target - Local File Systems.
May 9 23:57:51.152381 systemd[1]: Reached target sysinit.target - System Initialization.
May 9 23:57:51.153002 systemd[1]: Reached target basic.target - Basic System.
May 9 23:57:51.179302 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 9 23:57:51.229265 systemd-fsck[1229]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 9 23:57:51.234744 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 9 23:57:51.246460 systemd[1]: Mounting sysroot.mount - /sysroot...
May 9 23:57:51.346970 kernel: EXT4-fs (nvme0n1p9): mounted filesystem ffdb9517-5190-4050-8f70-de9d48dc1858 r/w with ordered data mode. Quota mode: none.
May 9 23:57:51.347902 systemd[1]: Mounted sysroot.mount - /sysroot.
May 9 23:57:51.351687 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 9 23:57:51.378062 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 9 23:57:51.384439 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 9 23:57:51.387679 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 9 23:57:51.388196 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 9 23:57:51.390059 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 9 23:57:51.418950 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1248)
May 9 23:57:51.422747 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48
May 9 23:57:51.422812 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
May 9 23:57:51.422839 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 9 23:57:51.426522 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 9 23:57:51.436356 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 9 23:57:51.444945 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 9 23:57:51.448203 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 9 23:57:51.541400 initrd-setup-root[1272]: cut: /sysroot/etc/passwd: No such file or directory
May 9 23:57:51.551311 initrd-setup-root[1279]: cut: /sysroot/etc/group: No such file or directory
May 9 23:57:51.559428 initrd-setup-root[1286]: cut: /sysroot/etc/shadow: No such file or directory
May 9 23:57:51.568494 initrd-setup-root[1293]: cut: /sysroot/etc/gshadow: No such file or directory
May 9 23:57:51.717355 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 9 23:57:51.733071 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 9 23:57:51.742231 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 9 23:57:51.756986 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 9 23:57:51.759170 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48
May 9 23:57:51.803646 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 9 23:57:51.812529 ignition[1361]: INFO : Ignition 2.19.0
May 9 23:57:51.812529 ignition[1361]: INFO : Stage: mount
May 9 23:57:51.815801 ignition[1361]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 23:57:51.815801 ignition[1361]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 9 23:57:51.819962 ignition[1361]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 9 23:57:51.823043 ignition[1361]: INFO : PUT result: OK
May 9 23:57:51.827492 ignition[1361]: INFO : mount: mount passed
May 9 23:57:51.829261 ignition[1361]: INFO : Ignition finished successfully
May 9 23:57:51.833180 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 9 23:57:51.842135 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 9 23:57:51.870256 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 9 23:57:51.899948 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1373)
May 9 23:57:51.903689 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48
May 9 23:57:51.903734 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
May 9 23:57:51.903773 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 9 23:57:51.911944 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 9 23:57:51.913192 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 9 23:57:51.954212 ignition[1390]: INFO : Ignition 2.19.0
May 9 23:57:51.954212 ignition[1390]: INFO : Stage: files
May 9 23:57:51.957495 ignition[1390]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 23:57:51.957495 ignition[1390]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 9 23:57:51.961757 ignition[1390]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 9 23:57:51.964374 ignition[1390]: INFO : PUT result: OK
May 9 23:57:51.968520 ignition[1390]: DEBUG : files: compiled without relabeling support, skipping
May 9 23:57:51.971963 ignition[1390]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 9 23:57:51.971963 ignition[1390]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 9 23:57:51.981001 ignition[1390]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 9 23:57:51.983768 ignition[1390]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 9 23:57:51.986267 ignition[1390]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 9 23:57:51.985973 unknown[1390]: wrote ssh authorized keys file for user: core
May 9 23:57:51.991680 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 9 23:57:51.991680 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 9 23:57:52.122885 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 9 23:57:52.145070 systemd-networkd[1198]: eth0: Gained IPv6LL
May 9 23:57:52.817817 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 9 23:57:52.821780 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 9 23:57:52.821780 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
May 9 23:57:53.271105 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 9 23:57:53.407614 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 9 23:57:53.410992 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 9 23:57:53.414773 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 9 23:57:53.414773 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 9 23:57:53.414773 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 9 23:57:53.414773 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 9 23:57:53.414773 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 9 23:57:53.414773 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 9 23:57:53.414773 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 9 23:57:53.414773 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 9 23:57:53.414773 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 9 23:57:53.414773 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 9 23:57:53.414773 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 9 23:57:53.414773 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 9 23:57:53.414773 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
May 9 23:57:53.782623 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 9 23:57:54.081459 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 9 23:57:54.081459 ignition[1390]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 9 23:57:54.087928 ignition[1390]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 9 23:57:54.087928 ignition[1390]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 9 23:57:54.087928 ignition[1390]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 9 23:57:54.087928 ignition[1390]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
May 9 23:57:54.087928 ignition[1390]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
May 9 23:57:54.087928 ignition[1390]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
May 9 23:57:54.087928 ignition[1390]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 9 23:57:54.087928 ignition[1390]: INFO : files: files passed
May 9 23:57:54.087928 ignition[1390]: INFO : Ignition finished successfully
May 9 23:57:54.113490 systemd[1]: Finished ignition-files.service - Ignition (files).
May 9 23:57:54.128148 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 9 23:57:54.134765 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 9 23:57:54.142773 systemd[1]: ignition-quench.service: Deactivated successfully.
May 9 23:57:54.144983 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 9 23:57:54.173942 initrd-setup-root-after-ignition[1418]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 9 23:57:54.173942 initrd-setup-root-after-ignition[1418]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 9 23:57:54.184008 initrd-setup-root-after-ignition[1422]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 9 23:57:54.190043 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 9 23:57:54.192853 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 9 23:57:54.208669 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 9 23:57:54.269763 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 9 23:57:54.270002 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 9 23:57:54.275409 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 9 23:57:54.278499 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 9 23:57:54.284658 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 9 23:57:54.292285 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 9 23:57:54.331250 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 9 23:57:54.344242 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 9 23:57:54.369751 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 9 23:57:54.370861 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 23:57:54.372478 systemd[1]: Stopped target timers.target - Timer Units.
May 9 23:57:54.373041 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 9 23:57:54.373346 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 9 23:57:54.374477 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 9 23:57:54.374871 systemd[1]: Stopped target basic.target - Basic System.
May 9 23:57:54.375795 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 9 23:57:54.376628 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 9 23:57:54.376900 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 9 23:57:54.377212 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 9 23:57:54.377510 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 9 23:57:54.377830 systemd[1]: Stopped target sysinit.target - System Initialization.
May 9 23:57:54.378427 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 9 23:57:54.378730 systemd[1]: Stopped target swap.target - Swaps.
May 9 23:57:54.379242 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 9 23:57:54.379536 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 9 23:57:54.380577 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 9 23:57:54.381006 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 23:57:54.381779 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 9 23:57:54.401011 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 23:57:54.401323 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 9 23:57:54.401610 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 9 23:57:54.434313 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 9 23:57:54.434640 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 9 23:57:54.459446 systemd[1]: ignition-files.service: Deactivated successfully.
May 9 23:57:54.459878 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 9 23:57:54.475980 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 9 23:57:54.477818 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 9 23:57:54.480032 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 23:57:54.504271 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 9 23:57:54.508231 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 9 23:57:54.522886 ignition[1442]: INFO : Ignition 2.19.0
May 9 23:57:54.522886 ignition[1442]: INFO : Stage: umount
May 9 23:57:54.522886 ignition[1442]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 23:57:54.522886 ignition[1442]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 9 23:57:54.522886 ignition[1442]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 9 23:57:54.522886 ignition[1442]: INFO : PUT result: OK
May 9 23:57:54.508674 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 9 23:57:54.564589 ignition[1442]: INFO : umount: umount passed
May 9 23:57:54.564589 ignition[1442]: INFO : Ignition finished successfully
May 9 23:57:54.522462 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 9 23:57:54.522805 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 9 23:57:54.544704 systemd[1]: ignition-mount.service: Deactivated successfully.
May 9 23:57:54.545967 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 9 23:57:54.558646 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 9 23:57:54.559877 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 9 23:57:54.561954 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 9 23:57:54.591437 systemd[1]: ignition-disks.service: Deactivated successfully.
May 9 23:57:54.591558 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 9 23:57:54.599150 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 9 23:57:54.599258 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 9 23:57:54.612536 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 9 23:57:54.612643 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 9 23:57:54.615755 systemd[1]: Stopped target network.target - Network.
May 9 23:57:54.619305 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 9 23:57:54.619405 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 9 23:57:54.621693 systemd[1]: Stopped target paths.target - Path Units.
May 9 23:57:54.623606 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 9 23:57:54.628260 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 23:57:54.641462 systemd[1]: Stopped target slices.target - Slice Units.
May 9 23:57:54.643206 systemd[1]: Stopped target sockets.target - Socket Units.
May 9 23:57:54.645067 systemd[1]: iscsid.socket: Deactivated successfully.
May 9 23:57:54.645144 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 9 23:57:54.647065 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 9 23:57:54.647136 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 9 23:57:54.649239 systemd[1]: ignition-setup.service: Deactivated successfully.
May 9 23:57:54.649323 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 9 23:57:54.651267 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 9 23:57:54.651342 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 9 23:57:54.653618 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 9 23:57:54.655954 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 9 23:57:54.673629 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 9 23:57:54.673806 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 9 23:57:54.680036 systemd-networkd[1198]: eth0: DHCPv6 lease lost
May 9 23:57:54.680436 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 9 23:57:54.680606 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 9 23:57:54.687565 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 9 23:57:54.687780 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 9 23:57:54.696520 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 9 23:57:54.698628 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 9 23:57:54.722528 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 9 23:57:54.722630 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 9 23:57:54.744107 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 9 23:57:54.746724 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 9 23:57:54.746842 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 9 23:57:54.749237 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 9 23:57:54.749320 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 9 23:57:54.751412 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 9 23:57:54.751488 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 9 23:57:54.753645 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 9 23:57:54.753721 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 23:57:54.758659 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 23:57:54.799656 systemd[1]: network-cleanup.service: Deactivated successfully.
May 9 23:57:54.801582 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 9 23:57:54.805875 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 9 23:57:54.807780 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 23:57:54.811062 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 9 23:57:54.811178 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 9 23:57:54.815047 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 9 23:57:54.815124 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 23:57:54.828515 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 9 23:57:54.828615 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 9 23:57:54.833349 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 9 23:57:54.833433 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 9 23:57:54.835745 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 9 23:57:54.835827 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 23:57:54.855209 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 9 23:57:54.858183 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 9 23:57:54.858295 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 23:57:54.861049 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 9 23:57:54.861131 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 9 23:57:54.894162 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 9 23:57:54.894573 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 9 23:57:54.901663 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 9 23:57:54.916280 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 9 23:57:54.933554 systemd[1]: Switching root.
May 9 23:57:54.968994 systemd-journald[250]: Journal stopped
May 9 23:57:56.695211 systemd-journald[250]: Received SIGTERM from PID 1 (systemd).
May 9 23:57:56.695348 kernel: SELinux: policy capability network_peer_controls=1
May 9 23:57:56.695392 kernel: SELinux: policy capability open_perms=1
May 9 23:57:56.695423 kernel: SELinux: policy capability extended_socket_class=1
May 9 23:57:56.695454 kernel: SELinux: policy capability always_check_network=0
May 9 23:57:56.695483 kernel: SELinux: policy capability cgroup_seclabel=1
May 9 23:57:56.695529 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 9 23:57:56.695571 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 9 23:57:56.695613 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 9 23:57:56.695643 kernel: audit: type=1403 audit(1746835075.276:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 9 23:57:56.695684 systemd[1]: Successfully loaded SELinux policy in 48.830ms.
May 9 23:57:56.695724 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.012ms.
May 9 23:57:56.695769 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 9 23:57:56.695803 systemd[1]: Detected virtualization amazon.
May 9 23:57:56.695835 systemd[1]: Detected architecture arm64.
May 9 23:57:56.695866 systemd[1]: Detected first boot.
May 9 23:57:56.695903 systemd[1]: Initializing machine ID from VM UUID.
May 9 23:57:56.695957 zram_generator::config[1484]: No configuration found.
May 9 23:57:56.695998 systemd[1]: Populated /etc with preset unit settings.
May 9 23:57:56.696032 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 9 23:57:56.696064 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 9 23:57:56.696097 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 9 23:57:56.696128 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 9 23:57:56.696160 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 9 23:57:56.696195 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 9 23:57:56.696227 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 9 23:57:56.696260 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 9 23:57:56.696293 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 9 23:57:56.696326 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 9 23:57:56.696359 systemd[1]: Created slice user.slice - User and Session Slice.
May 9 23:57:56.696388 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 23:57:56.696418 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 23:57:56.696449 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 9 23:57:56.696484 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 9 23:57:56.696516 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 9 23:57:56.696548 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 9 23:57:56.696577 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 9 23:57:56.696607 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 23:57:56.696638 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 9 23:57:56.696671 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 9 23:57:56.696704 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 9 23:57:56.696739 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 9 23:57:56.696770 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 23:57:56.696804 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 9 23:57:56.696833 systemd[1]: Reached target slices.target - Slice Units.
May 9 23:57:56.696866 systemd[1]: Reached target swap.target - Swaps.
May 9 23:57:56.696896 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 9 23:57:56.696955 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 9 23:57:56.696990 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 9 23:57:56.697020 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 9 23:57:56.697056 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 23:57:56.697087 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 9 23:57:56.697116 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 9 23:57:56.697149 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 9 23:57:56.697178 systemd[1]: Mounting media.mount - External Media Directory...
May 9 23:57:56.697207 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 9 23:57:56.697237 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 9 23:57:56.697268 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 9 23:57:56.697303 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 9 23:57:56.697337 systemd[1]: Reached target machines.target - Containers.
May 9 23:57:56.697370 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 9 23:57:56.697399 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 9 23:57:56.697439 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 9 23:57:56.697469 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 9 23:57:56.697501 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 9 23:57:56.697530 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 9 23:57:56.697559 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 9 23:57:56.697592 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 9 23:57:56.697622 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 9 23:57:56.697663 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 9 23:57:56.697695 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 9 23:57:56.697726 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 9 23:57:56.697755 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 9 23:57:56.697784 systemd[1]: Stopped systemd-fsck-usr.service.
May 9 23:57:56.697813 systemd[1]: Starting systemd-journald.service - Journal Service...
May 9 23:57:56.697844 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 9 23:57:56.697878 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 9 23:57:56.697923 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 9 23:57:56.697961 kernel: fuse: init (API version 7.39)
May 9 23:57:56.697991 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 9 23:57:56.698022 systemd[1]: verity-setup.service: Deactivated successfully.
May 9 23:57:56.698438 systemd[1]: Stopped verity-setup.service.
May 9 23:57:56.700937 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 9 23:57:56.700993 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 9 23:57:56.701027 systemd[1]: Mounted media.mount - External Media Directory.
May 9 23:57:56.701064 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 9 23:57:56.701138 systemd-journald[1565]: Collecting audit messages is disabled.
May 9 23:57:56.701189 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 9 23:57:56.701224 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 9 23:57:56.701254 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 23:57:56.701285 systemd-journald[1565]: Journal started
May 9 23:57:56.701332 systemd-journald[1565]: Runtime Journal (/run/log/journal/ec2ec301db4cb9a7ad17f0e47eb974db) is 8.0M, max 75.3M, 67.3M free.
May 9 23:57:56.234370 systemd[1]: Queued start job for default target multi-user.target.
May 9 23:57:56.265350 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
May 9 23:57:56.266149 systemd[1]: systemd-journald.service: Deactivated successfully.
May 9 23:57:56.709941 systemd[1]: Started systemd-journald.service - Journal Service.
May 9 23:57:56.711400 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 9 23:57:56.713029 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 9 23:57:56.715963 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 9 23:57:56.716281 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 9 23:57:56.719059 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 9 23:57:56.719346 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 9 23:57:56.723623 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 9 23:57:56.723960 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 9 23:57:56.735972 kernel: ACPI: bus type drm_connector registered
May 9 23:57:56.740551 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 9 23:57:56.743263 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 9 23:57:56.776224 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 9 23:57:56.787956 kernel: loop: module loaded
May 9 23:57:56.799926 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 9 23:57:56.806039 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 9 23:57:56.806382 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 9 23:57:56.813061 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 9 23:57:56.818577 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 9 23:57:56.821613 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 9 23:57:56.824479 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 9 23:57:56.827193 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 9 23:57:56.839646 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 9 23:57:56.843181 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 9 23:57:56.843249 systemd[1]: Reached target local-fs.target - Local File Systems.
May 9 23:57:56.847641 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 9 23:57:56.859405 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 9 23:57:56.864371 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 9 23:57:56.867325 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 23:57:56.874212 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 9 23:57:56.879436 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 9 23:57:56.882715 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 9 23:57:56.892655 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 9 23:57:56.895152 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 9 23:57:56.898218 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 9 23:57:56.904537 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 9 23:57:56.916094 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 9 23:57:56.956432 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 9 23:57:56.959041 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 9 23:57:56.966123 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 9 23:57:57.000939 kernel: loop0: detected capacity change from 0 to 52536
May 9 23:57:57.001676 systemd-journald[1565]: Time spent on flushing to /var/log/journal/ec2ec301db4cb9a7ad17f0e47eb974db is 180.137ms for 914 entries.
May 9 23:57:57.001676 systemd-journald[1565]: System Journal (/var/log/journal/ec2ec301db4cb9a7ad17f0e47eb974db) is 8.0M, max 195.6M, 187.6M free.
May 9 23:57:57.208508 systemd-journald[1565]: Received client request to flush runtime journal.
May 9 23:57:57.208603 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 9 23:57:57.208658 kernel: loop1: detected capacity change from 0 to 189592
May 9 23:57:57.010002 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 9 23:57:57.022308 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 9 23:57:57.068221 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 9 23:57:57.108620 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 9 23:57:57.123607 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 9 23:57:57.174114 udevadm[1626]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 9 23:57:57.215639 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 9 23:57:57.221597 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 9 23:57:57.224724 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 9 23:57:57.276023 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 9 23:57:57.289146 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 9 23:57:57.357404 kernel: loop2: detected capacity change from 0 to 114432
May 9 23:57:57.390689 systemd-tmpfiles[1633]: ACLs are not supported, ignoring.
May 9 23:57:57.391538 systemd-tmpfiles[1633]: ACLs are not supported, ignoring.
May 9 23:57:57.410034 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 23:57:57.416074 kernel: loop3: detected capacity change from 0 to 114328
May 9 23:57:57.474940 kernel: loop4: detected capacity change from 0 to 52536
May 9 23:57:57.507335 kernel: loop5: detected capacity change from 0 to 189592
May 9 23:57:57.543035 kernel: loop6: detected capacity change from 0 to 114432
May 9 23:57:57.571962 kernel: loop7: detected capacity change from 0 to 114328
May 9 23:57:57.598662 (sd-merge)[1641]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
May 9 23:57:57.601136 (sd-merge)[1641]: Merged extensions into '/usr'.
May 9 23:57:57.616204 systemd[1]: Reloading requested from client PID 1611 ('systemd-sysext') (unit systemd-sysext.service)...
May 9 23:57:57.616786 systemd[1]: Reloading...
May 9 23:57:57.871947 zram_generator::config[1670]: No configuration found.
May 9 23:57:57.918122 ldconfig[1602]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 9 23:57:58.170349 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 9 23:57:58.285411 systemd[1]: Reloading finished in 667 ms.
May 9 23:57:58.326595 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 9 23:57:58.329338 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 9 23:57:58.332346 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 9 23:57:58.348294 systemd[1]: Starting ensure-sysext.service...
May 9 23:57:58.360318 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 9 23:57:58.365110 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 23:57:58.386149 systemd[1]: Reloading requested from client PID 1720 ('systemctl') (unit ensure-sysext.service)...
May 9 23:57:58.386180 systemd[1]: Reloading...
May 9 23:57:58.421556 systemd-tmpfiles[1721]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 9 23:57:58.426245 systemd-tmpfiles[1721]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 9 23:57:58.430791 systemd-tmpfiles[1721]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 9 23:57:58.431739 systemd-tmpfiles[1721]: ACLs are not supported, ignoring.
May 9 23:57:58.434166 systemd-tmpfiles[1721]: ACLs are not supported, ignoring.
May 9 23:57:58.444576 systemd-tmpfiles[1721]: Detected autofs mount point /boot during canonicalization of boot.
May 9 23:57:58.444791 systemd-tmpfiles[1721]: Skipping /boot
May 9 23:57:58.486024 systemd-tmpfiles[1721]: Detected autofs mount point /boot during canonicalization of boot.
May 9 23:57:58.486045 systemd-tmpfiles[1721]: Skipping /boot
May 9 23:57:58.492335 systemd-udevd[1722]: Using default interface naming scheme 'v255'.
May 9 23:57:58.572984 zram_generator::config[1751]: No configuration found.
May 9 23:57:58.777407 (udev-worker)[1783]: Network interface NamePolicy= disabled on kernel command line.
May 9 23:57:58.920516 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 9 23:57:59.057946 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1812)
May 9 23:57:59.064670 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 9 23:57:59.065808 systemd[1]: Reloading finished in 679 ms.
May 9 23:57:59.113610 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 23:57:59.125109 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 23:57:59.204487 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 9 23:57:59.231438 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 9 23:57:59.239421 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 9 23:57:59.249444 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 9 23:57:59.259257 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 9 23:57:59.265365 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 9 23:57:59.275429 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 23:57:59.298611 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 9 23:57:59.302755 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 9 23:57:59.312319 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 9 23:57:59.318449 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 9 23:57:59.321989 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 23:57:59.336285 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 9 23:57:59.336741 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 23:57:59.338993 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 9 23:57:59.364003 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 9 23:57:59.369444 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 9 23:57:59.372100 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 23:57:59.372496 systemd[1]: Reached target time-set.target - System Time Set.
May 9 23:57:59.382442 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 9 23:57:59.391032 systemd[1]: Finished ensure-sysext.service.
May 9 23:57:59.459433 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 9 23:57:59.461278 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 9 23:57:59.464800 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 9 23:57:59.465301 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 9 23:57:59.468371 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 9 23:57:59.470272 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 9 23:57:59.484106 augenrules[1946]: No rules
May 9 23:57:59.478957 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
May 9 23:57:59.487323 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 9 23:57:59.489807 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 9 23:57:59.498092 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 9 23:57:59.498226 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 9 23:57:59.532013 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 9 23:57:59.546185 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 9 23:57:59.567936 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
May 9 23:57:59.579378 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 9 23:57:59.584530 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 9 23:57:59.601161 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 9 23:57:59.604066 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 9 23:57:59.608365 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 9 23:57:59.628049 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 9 23:57:59.649308 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 9 23:57:59.665938 lvm[1959]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 9 23:57:59.667529 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 9 23:57:59.691477 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 23:57:59.725244 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 9 23:57:59.728336 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 9 23:57:59.745348 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 9 23:57:59.778346 lvm[1974]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 9 23:57:59.812771 systemd-resolved[1912]: Positive Trust Anchors:
May 9 23:57:59.812804 systemd-resolved[1912]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 9 23:57:59.812871 systemd-resolved[1912]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 9 23:57:59.821208 systemd-resolved[1912]: Defaulting to hostname 'linux'.
May 9 23:57:59.824001 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 9 23:57:59.829000 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 9 23:57:59.832075 systemd[1]: Reached target sysinit.target - System Initialization.
May 9 23:57:59.834238 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 9 23:57:59.836810 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 9 23:57:59.839540 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 9 23:57:59.841845 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 9 23:57:59.845625 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 9 23:57:59.848059 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 9 23:57:59.848118 systemd[1]: Reached target paths.target - Path Units. May 9 23:57:59.849901 systemd[1]: Reached target timers.target - Timer Units. May 9 23:57:59.855481 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 9 23:57:59.858205 systemd-networkd[1908]: lo: Link UP May 9 23:57:59.861217 systemd[1]: Starting docker.socket - Docker Socket for the API... May 9 23:57:59.861393 systemd-networkd[1908]: lo: Gained carrier May 9 23:57:59.866101 systemd-networkd[1908]: Enumeration completed May 9 23:57:59.872680 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 9 23:57:59.875610 systemd-networkd[1908]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 23:57:59.875624 systemd-networkd[1908]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 9 23:57:59.877418 systemd[1]: Started systemd-networkd.service - Network Configuration. 
May 9 23:57:59.881244 systemd-networkd[1908]: eth0: Link UP May 9 23:57:59.881705 systemd-networkd[1908]: eth0: Gained carrier May 9 23:57:59.881738 systemd-networkd[1908]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 23:57:59.881988 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 9 23:57:59.884820 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 9 23:57:59.888279 systemd[1]: Reached target network.target - Network. May 9 23:57:59.890171 systemd[1]: Reached target sockets.target - Socket Units. May 9 23:57:59.892077 systemd[1]: Reached target basic.target - Basic System. May 9 23:57:59.894696 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 9 23:57:59.894764 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 9 23:57:59.904096 systemd-networkd[1908]: eth0: DHCPv4 address 172.31.18.52/20, gateway 172.31.16.1 acquired from 172.31.16.1 May 9 23:57:59.905027 systemd[1]: Starting containerd.service - containerd container runtime... May 9 23:57:59.910210 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 9 23:57:59.919324 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 9 23:57:59.929240 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 9 23:57:59.944336 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 9 23:57:59.946336 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 9 23:57:59.956103 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 9 23:57:59.970325 systemd[1]: Started ntpd.service - Network Time Service. 
May 9 23:57:59.979129 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 9 23:57:59.995879 jq[1982]: false May 9 23:57:59.992235 systemd[1]: Starting setup-oem.service - Setup OEM... May 9 23:57:59.999509 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 9 23:58:00.021254 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 9 23:58:00.034999 extend-filesystems[1983]: Found loop4 May 9 23:58:00.034999 extend-filesystems[1983]: Found loop5 May 9 23:58:00.034999 extend-filesystems[1983]: Found loop6 May 9 23:58:00.034999 extend-filesystems[1983]: Found loop7 May 9 23:58:00.034999 extend-filesystems[1983]: Found nvme0n1 May 9 23:58:00.034999 extend-filesystems[1983]: Found nvme0n1p1 May 9 23:58:00.034999 extend-filesystems[1983]: Found nvme0n1p2 May 9 23:58:00.034999 extend-filesystems[1983]: Found nvme0n1p3 May 9 23:58:00.034999 extend-filesystems[1983]: Found usr May 9 23:58:00.034999 extend-filesystems[1983]: Found nvme0n1p4 May 9 23:58:00.034999 extend-filesystems[1983]: Found nvme0n1p6 May 9 23:58:00.042197 systemd[1]: Starting systemd-logind.service - User Login Management... May 9 23:58:00.039688 dbus-daemon[1981]: [system] SELinux support is enabled May 9 23:58:00.150803 extend-filesystems[1983]: Found nvme0n1p7 May 9 23:58:00.150803 extend-filesystems[1983]: Found nvme0n1p9 May 9 23:58:00.150803 extend-filesystems[1983]: Checking size of /dev/nvme0n1p9 May 9 23:58:00.077464 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
May 9 23:58:00.048972 dbus-daemon[1981]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1908 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") May 9 23:58:00.083130 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 9 23:58:00.130792 dbus-daemon[1981]: [system] Successfully activated service 'org.freedesktop.systemd1' May 9 23:58:00.084961 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 9 23:58:00.087630 systemd[1]: Starting update-engine.service - Update Engine... May 9 23:58:00.094719 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 9 23:58:00.201081 ntpd[1986]: 9 May 23:58:00 ntpd[1986]: ntpd 4.2.8p17@1.4004-o Fri May 9 22:02:28 UTC 2025 (1): Starting May 9 23:58:00.201081 ntpd[1986]: 9 May 23:58:00 ntpd[1986]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp May 9 23:58:00.201081 ntpd[1986]: 9 May 23:58:00 ntpd[1986]: ---------------------------------------------------- May 9 23:58:00.201081 ntpd[1986]: 9 May 23:58:00 ntpd[1986]: ntp-4 is maintained by Network Time Foundation, May 9 23:58:00.201081 ntpd[1986]: 9 May 23:58:00 ntpd[1986]: Inc. (NTF), a non-profit 501(c)(3) public-benefit May 9 23:58:00.201081 ntpd[1986]: 9 May 23:58:00 ntpd[1986]: corporation. 
Support and training for ntp-4 are May 9 23:58:00.201081 ntpd[1986]: 9 May 23:58:00 ntpd[1986]: available at https://www.nwtime.org/support May 9 23:58:00.201081 ntpd[1986]: 9 May 23:58:00 ntpd[1986]: ---------------------------------------------------- May 9 23:58:00.197628 ntpd[1986]: ntpd 4.2.8p17@1.4004-o Fri May 9 22:02:28 UTC 2025 (1): Starting May 9 23:58:00.099280 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 9 23:58:00.197674 ntpd[1986]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp May 9 23:58:00.112697 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 9 23:58:00.218086 jq[1999]: true May 9 23:58:00.197695 ntpd[1986]: ---------------------------------------------------- May 9 23:58:00.113136 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 9 23:58:00.197713 ntpd[1986]: ntp-4 is maintained by Network Time Foundation, May 9 23:58:00.122637 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 9 23:58:00.197732 ntpd[1986]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit May 9 23:58:00.231349 extend-filesystems[1983]: Resized partition /dev/nvme0n1p9 May 9 23:58:00.240559 ntpd[1986]: 9 May 23:58:00 ntpd[1986]: proto: precision = 0.108 usec (-23) May 9 23:58:00.240559 ntpd[1986]: 9 May 23:58:00 ntpd[1986]: basedate set to 2025-04-27 May 9 23:58:00.240559 ntpd[1986]: 9 May 23:58:00 ntpd[1986]: gps base set to 2025-04-27 (week 2364) May 9 23:58:00.240559 ntpd[1986]: 9 May 23:58:00 ntpd[1986]: Listen and drop on 0 v6wildcard [::]:123 May 9 23:58:00.240559 ntpd[1986]: 9 May 23:58:00 ntpd[1986]: Listen and drop on 1 v4wildcard 0.0.0.0:123 May 9 23:58:00.240559 ntpd[1986]: 9 May 23:58:00 ntpd[1986]: Listen normally on 2 lo 127.0.0.1:123 May 9 23:58:00.240559 ntpd[1986]: 9 May 23:58:00 ntpd[1986]: Listen normally on 3 eth0 172.31.18.52:123 May 9 23:58:00.240559 ntpd[1986]: 9 May 23:58:00 ntpd[1986]: Listen normally on 4 lo [::1]:123 May 9 23:58:00.240559 ntpd[1986]: 9 May 23:58:00 ntpd[1986]: bind(21) AF_INET6 fe80::40f:9eff:fe47:fde1%2#123 flags 0x11 failed: Cannot assign requested address May 9 23:58:00.240559 ntpd[1986]: 9 May 23:58:00 ntpd[1986]: unable to create socket on eth0 (5) for fe80::40f:9eff:fe47:fde1%2#123 May 9 23:58:00.240559 ntpd[1986]: 9 May 23:58:00 ntpd[1986]: failed to init interface for address fe80::40f:9eff:fe47:fde1%2 May 9 23:58:00.240559 ntpd[1986]: 9 May 23:58:00 ntpd[1986]: Listening on routing socket on fd #21 for interface updates May 9 23:58:00.240559 ntpd[1986]: 9 May 23:58:00 ntpd[1986]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 9 23:58:00.240559 ntpd[1986]: 9 May 23:58:00 ntpd[1986]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 9 23:58:00.122742 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 9 23:58:00.197750 ntpd[1986]: corporation. 
Support and training for ntp-4 are May 9 23:58:00.138434 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 9 23:58:00.197771 ntpd[1986]: available at https://www.nwtime.org/support May 9 23:58:00.138475 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 9 23:58:00.197789 ntpd[1986]: ---------------------------------------------------- May 9 23:58:00.265245 extend-filesystems[2026]: resize2fs 1.47.1 (20-May-2024) May 9 23:58:00.155889 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... May 9 23:58:00.220892 ntpd[1986]: proto: precision = 0.108 usec (-23) May 9 23:58:00.205026 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 9 23:58:00.222258 ntpd[1986]: basedate set to 2025-04-27 May 9 23:58:00.205589 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
May 9 23:58:00.222295 ntpd[1986]: gps base set to 2025-04-27 (week 2364) May 9 23:58:00.259347 (ntainerd)[2025]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 9 23:58:00.224820 ntpd[1986]: Listen and drop on 0 v6wildcard [::]:123 May 9 23:58:00.224894 ntpd[1986]: Listen and drop on 1 v4wildcard 0.0.0.0:123 May 9 23:58:00.225175 ntpd[1986]: Listen normally on 2 lo 127.0.0.1:123 May 9 23:58:00.225244 ntpd[1986]: Listen normally on 3 eth0 172.31.18.52:123 May 9 23:58:00.225310 ntpd[1986]: Listen normally on 4 lo [::1]:123 May 9 23:58:00.225381 ntpd[1986]: bind(21) AF_INET6 fe80::40f:9eff:fe47:fde1%2#123 flags 0x11 failed: Cannot assign requested address May 9 23:58:00.225420 ntpd[1986]: unable to create socket on eth0 (5) for fe80::40f:9eff:fe47:fde1%2#123 May 9 23:58:00.225447 ntpd[1986]: failed to init interface for address fe80::40f:9eff:fe47:fde1%2 May 9 23:58:00.225497 ntpd[1986]: Listening on routing socket on fd #21 for interface updates May 9 23:58:00.227852 ntpd[1986]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 9 23:58:00.229792 ntpd[1986]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 9 23:58:00.291023 tar[2009]: linux-arm64/helm May 9 23:58:00.296937 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks May 9 23:58:00.307447 systemd[1]: motdgen.service: Deactivated successfully. May 9 23:58:00.311035 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
May 9 23:58:00.318626 update_engine[1997]: I20250509 23:58:00.318446 1997 main.cc:92] Flatcar Update Engine starting May 9 23:58:00.338299 coreos-metadata[1980]: May 09 23:58:00.335 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 May 9 23:58:00.338299 coreos-metadata[1980]: May 09 23:58:00.338 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 May 9 23:58:00.342033 coreos-metadata[1980]: May 09 23:58:00.340 INFO Fetch successful May 9 23:58:00.342033 coreos-metadata[1980]: May 09 23:58:00.340 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 May 9 23:58:00.344211 coreos-metadata[1980]: May 09 23:58:00.343 INFO Fetch successful May 9 23:58:00.344484 coreos-metadata[1980]: May 09 23:58:00.344 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 May 9 23:58:00.355046 coreos-metadata[1980]: May 09 23:58:00.351 INFO Fetch successful May 9 23:58:00.355046 coreos-metadata[1980]: May 09 23:58:00.354 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 May 9 23:58:00.356369 systemd[1]: Started update-engine.service - Update Engine. May 9 23:58:00.362776 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 9 23:58:00.367939 update_engine[1997]: I20250509 23:58:00.366834 1997 update_check_scheduler.cc:74] Next update check in 6m34s May 9 23:58:00.368050 jq[2017]: true May 9 23:58:00.369071 coreos-metadata[1980]: May 09 23:58:00.368 INFO Fetch successful May 9 23:58:00.369071 coreos-metadata[1980]: May 09 23:58:00.368 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 May 9 23:58:00.370029 systemd[1]: Finished setup-oem.service - Setup OEM. 
May 9 23:58:00.374171 coreos-metadata[1980]: May 09 23:58:00.373 INFO Fetch failed with 404: resource not found May 9 23:58:00.374171 coreos-metadata[1980]: May 09 23:58:00.373 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 May 9 23:58:00.376715 coreos-metadata[1980]: May 09 23:58:00.376 INFO Fetch successful May 9 23:58:00.376715 coreos-metadata[1980]: May 09 23:58:00.376 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 May 9 23:58:00.381218 coreos-metadata[1980]: May 09 23:58:00.380 INFO Fetch successful May 9 23:58:00.381218 coreos-metadata[1980]: May 09 23:58:00.380 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 May 9 23:58:00.385955 coreos-metadata[1980]: May 09 23:58:00.385 INFO Fetch successful May 9 23:58:00.385955 coreos-metadata[1980]: May 09 23:58:00.385 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 May 9 23:58:00.391408 coreos-metadata[1980]: May 09 23:58:00.390 INFO Fetch successful May 9 23:58:00.391408 coreos-metadata[1980]: May 09 23:58:00.390 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 May 9 23:58:00.391670 coreos-metadata[1980]: May 09 23:58:00.391 INFO Fetch successful May 9 23:58:00.507946 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 May 9 23:58:00.516833 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1783) May 9 23:58:00.544199 extend-filesystems[2026]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required May 9 23:58:00.544199 extend-filesystems[2026]: old_desc_blocks = 1, new_desc_blocks = 1 May 9 23:58:00.544199 extend-filesystems[2026]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. 
May 9 23:58:00.567762 extend-filesystems[1983]: Resized filesystem in /dev/nvme0n1p9 May 9 23:58:00.556003 systemd[1]: extend-filesystems.service: Deactivated successfully. May 9 23:58:00.558651 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 9 23:58:00.570854 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 9 23:58:00.574638 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 9 23:58:00.650784 locksmithd[2034]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 9 23:58:00.690181 bash[2103]: Updated "/home/core/.ssh/authorized_keys" May 9 23:58:00.719785 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 9 23:58:00.745725 systemd[1]: Starting sshkeys.service... May 9 23:58:00.782647 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 9 23:58:00.796817 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 9 23:58:00.815837 systemd-logind[1993]: Watching system buttons on /dev/input/event0 (Power Button) May 9 23:58:00.815882 systemd-logind[1993]: Watching system buttons on /dev/input/event1 (Sleep Button) May 9 23:58:00.823213 systemd-logind[1993]: New seat seat0. May 9 23:58:00.833260 systemd[1]: Started systemd-logind.service - User Login Management. May 9 23:58:01.015428 dbus-daemon[1981]: [system] Successfully activated service 'org.freedesktop.hostname1' May 9 23:58:01.017816 dbus-daemon[1981]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2008 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") May 9 23:58:01.023197 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
May 9 23:58:01.037240 containerd[2025]: time="2025-05-09T23:58:01.032022524Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 9 23:58:01.042505 systemd[1]: Starting polkit.service - Authorization Manager... May 9 23:58:01.127057 polkitd[2169]: Started polkitd version 121 May 9 23:58:01.141503 polkitd[2169]: Loading rules from directory /etc/polkit-1/rules.d May 9 23:58:01.141634 polkitd[2169]: Loading rules from directory /usr/share/polkit-1/rules.d May 9 23:58:01.144366 polkitd[2169]: Finished loading, compiling and executing 2 rules May 9 23:58:01.145223 dbus-daemon[1981]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' May 9 23:58:01.145515 systemd[1]: Started polkit.service - Authorization Manager. May 9 23:58:01.148413 polkitd[2169]: Acquired the name org.freedesktop.PolicyKit1 on the system bus May 9 23:58:01.168219 coreos-metadata[2141]: May 09 23:58:01.168 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 May 9 23:58:01.170004 coreos-metadata[2141]: May 09 23:58:01.169 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 May 9 23:58:01.170898 coreos-metadata[2141]: May 09 23:58:01.170 INFO Fetch successful May 9 23:58:01.170898 coreos-metadata[2141]: May 09 23:58:01.170 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 May 9 23:58:01.176933 coreos-metadata[2141]: May 09 23:58:01.171 INFO Fetch successful May 9 23:58:01.179156 unknown[2141]: wrote ssh authorized keys file for user: core May 9 23:58:01.194250 systemd-resolved[1912]: System hostname changed to 'ip-172-31-18-52'. 
May 9 23:58:01.194258 systemd-hostnamed[2008]: Hostname set to (transient) May 9 23:58:01.205699 ntpd[1986]: 9 May 23:58:01 ntpd[1986]: bind(24) AF_INET6 fe80::40f:9eff:fe47:fde1%2#123 flags 0x11 failed: Cannot assign requested address May 9 23:58:01.205699 ntpd[1986]: 9 May 23:58:01 ntpd[1986]: unable to create socket on eth0 (6) for fe80::40f:9eff:fe47:fde1%2#123 May 9 23:58:01.205699 ntpd[1986]: 9 May 23:58:01 ntpd[1986]: failed to init interface for address fe80::40f:9eff:fe47:fde1%2 May 9 23:58:01.200493 ntpd[1986]: bind(24) AF_INET6 fe80::40f:9eff:fe47:fde1%2#123 flags 0x11 failed: Cannot assign requested address May 9 23:58:01.200546 ntpd[1986]: unable to create socket on eth0 (6) for fe80::40f:9eff:fe47:fde1%2#123 May 9 23:58:01.200575 ntpd[1986]: failed to init interface for address fe80::40f:9eff:fe47:fde1%2 May 9 23:58:01.231246 containerd[2025]: time="2025-05-09T23:58:01.231181497Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 9 23:58:01.234269 containerd[2025]: time="2025-05-09T23:58:01.234200073Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 9 23:58:01.234432 containerd[2025]: time="2025-05-09T23:58:01.234402225Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 9 23:58:01.234545 containerd[2025]: time="2025-05-09T23:58:01.234516669Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 9 23:58:01.234956 containerd[2025]: time="2025-05-09T23:58:01.234897849Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 May 9 23:58:01.235418 containerd[2025]: time="2025-05-09T23:58:01.235160313Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 9 23:58:01.235418 containerd[2025]: time="2025-05-09T23:58:01.235304025Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 9 23:58:01.235628 containerd[2025]: time="2025-05-09T23:58:01.235598253Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 9 23:58:01.236941 containerd[2025]: time="2025-05-09T23:58:01.236182713Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 9 23:58:01.236941 containerd[2025]: time="2025-05-09T23:58:01.236226873Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 9 23:58:01.236941 containerd[2025]: time="2025-05-09T23:58:01.236259225Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 9 23:58:01.236941 containerd[2025]: time="2025-05-09T23:58:01.236284017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 9 23:58:01.236941 containerd[2025]: time="2025-05-09T23:58:01.236476869Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 9 23:58:01.236941 containerd[2025]: time="2025-05-09T23:58:01.236849169Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 May 9 23:58:01.238560 containerd[2025]: time="2025-05-09T23:58:01.237170541Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 9 23:58:01.238560 containerd[2025]: time="2025-05-09T23:58:01.237210081Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 9 23:58:01.239031 containerd[2025]: time="2025-05-09T23:58:01.238994277Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 9 23:58:01.239221 containerd[2025]: time="2025-05-09T23:58:01.239193633Z" level=info msg="metadata content store policy set" policy=shared May 9 23:58:01.240522 update-ssh-keys[2180]: Updated "/home/core/.ssh/authorized_keys" May 9 23:58:01.244027 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 9 23:58:01.251828 systemd[1]: Finished sshkeys.service. May 9 23:58:01.254058 containerd[2025]: time="2025-05-09T23:58:01.252402969Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 9 23:58:01.254058 containerd[2025]: time="2025-05-09T23:58:01.253317537Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 9 23:58:01.254058 containerd[2025]: time="2025-05-09T23:58:01.253355973Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 9 23:58:01.254058 containerd[2025]: time="2025-05-09T23:58:01.253392093Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 9 23:58:01.254058 containerd[2025]: time="2025-05-09T23:58:01.253424613Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 May 9 23:58:01.257377 containerd[2025]: time="2025-05-09T23:58:01.257314473Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 9 23:58:01.259044 containerd[2025]: time="2025-05-09T23:58:01.258981189Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 9 23:58:01.260023 containerd[2025]: time="2025-05-09T23:58:01.259984569Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 9 23:58:01.260162 containerd[2025]: time="2025-05-09T23:58:01.260133777Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 9 23:58:01.260406 containerd[2025]: time="2025-05-09T23:58:01.260376249Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 9 23:58:01.260805 containerd[2025]: time="2025-05-09T23:58:01.260775789Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 9 23:58:01.262934 containerd[2025]: time="2025-05-09T23:58:01.260889117Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 9 23:58:01.262934 containerd[2025]: time="2025-05-09T23:58:01.260945589Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 9 23:58:01.262934 containerd[2025]: time="2025-05-09T23:58:01.260980689Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 9 23:58:01.262934 containerd[2025]: time="2025-05-09T23:58:01.261014169Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 May 9 23:58:01.262934 containerd[2025]: time="2025-05-09T23:58:01.261044769Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 9 23:58:01.262934 containerd[2025]: time="2025-05-09T23:58:01.261073713Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 9 23:58:01.262934 containerd[2025]: time="2025-05-09T23:58:01.261104565Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 9 23:58:01.262934 containerd[2025]: time="2025-05-09T23:58:01.261146313Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 9 23:58:01.262934 containerd[2025]: time="2025-05-09T23:58:01.261177369Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 9 23:58:01.262934 containerd[2025]: time="2025-05-09T23:58:01.261206565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 9 23:58:01.262934 containerd[2025]: time="2025-05-09T23:58:01.261237765Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 9 23:58:01.262934 containerd[2025]: time="2025-05-09T23:58:01.261281949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 9 23:58:01.262934 containerd[2025]: time="2025-05-09T23:58:01.261314493Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 9 23:58:01.262934 containerd[2025]: time="2025-05-09T23:58:01.261343029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 9 23:58:01.263565 containerd[2025]: time="2025-05-09T23:58:01.261372885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1
May 9 23:58:01.263565 containerd[2025]: time="2025-05-09T23:58:01.261402417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
May 9 23:58:01.264285 containerd[2025]: time="2025-05-09T23:58:01.263681337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
May 9 23:58:01.264285 containerd[2025]: time="2025-05-09T23:58:01.263737137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 9 23:58:01.264285 containerd[2025]: time="2025-05-09T23:58:01.263768121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
May 9 23:58:01.264285 containerd[2025]: time="2025-05-09T23:58:01.263839437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 9 23:58:01.264285 containerd[2025]: time="2025-05-09T23:58:01.263878713Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
May 9 23:58:01.264285 containerd[2025]: time="2025-05-09T23:58:01.263945289Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
May 9 23:58:01.264285 containerd[2025]: time="2025-05-09T23:58:01.263977617Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 9 23:58:01.264285 containerd[2025]: time="2025-05-09T23:58:01.264008589Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 9 23:58:01.264285 containerd[2025]: time="2025-05-09T23:58:01.264129645Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 9 23:58:01.264285 containerd[2025]: time="2025-05-09T23:58:01.264168381Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
May 9 23:58:01.264285 containerd[2025]: time="2025-05-09T23:58:01.264194253Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 9 23:58:01.264285 containerd[2025]: time="2025-05-09T23:58:01.264225357Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
May 9 23:58:01.266117 containerd[2025]: time="2025-05-09T23:58:01.266039061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 9 23:58:01.267733 containerd[2025]: time="2025-05-09T23:58:01.267642549Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
May 9 23:58:01.267733 containerd[2025]: time="2025-05-09T23:58:01.267718533Z" level=info msg="NRI interface is disabled by configuration."
May 9 23:58:01.267931 containerd[2025]: time="2025-05-09T23:58:01.267750585Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 9 23:58:01.276707 containerd[2025]: time="2025-05-09T23:58:01.272817417Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 9 23:58:01.276707 containerd[2025]: time="2025-05-09T23:58:01.274093941Z" level=info msg="Connect containerd service"
May 9 23:58:01.276707 containerd[2025]: time="2025-05-09T23:58:01.274311945Z" level=info msg="using legacy CRI server"
May 9 23:58:01.276707 containerd[2025]: time="2025-05-09T23:58:01.274337181Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 9 23:58:01.276707 containerd[2025]: time="2025-05-09T23:58:01.274489473Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 9 23:58:01.276707 containerd[2025]: time="2025-05-09T23:58:01.275655633Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 9 23:58:01.277208 containerd[2025]: time="2025-05-09T23:58:01.276999801Z" level=info msg="Start subscribing containerd event"
May 9 23:58:01.277208 containerd[2025]: time="2025-05-09T23:58:01.277084749Z" level=info msg="Start recovering state"
May 9 23:58:01.277297 containerd[2025]: time="2025-05-09T23:58:01.277206429Z" level=info msg="Start event monitor"
May 9 23:58:01.277297 containerd[2025]: time="2025-05-09T23:58:01.277232529Z" level=info msg="Start snapshots syncer"
May 9 23:58:01.277297 containerd[2025]: time="2025-05-09T23:58:01.277254261Z" level=info msg="Start cni network conf syncer for default"
May 9 23:58:01.277297 containerd[2025]: time="2025-05-09T23:58:01.277272429Z" level=info msg="Start streaming server"
May 9 23:58:01.281083 containerd[2025]: time="2025-05-09T23:58:01.281037549Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 9 23:58:01.281439 containerd[2025]: time="2025-05-09T23:58:01.281292093Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 9 23:58:01.281439 containerd[2025]: time="2025-05-09T23:58:01.281407929Z" level=info msg="containerd successfully booted in 0.255651s"
May 9 23:58:01.281538 systemd[1]: Started containerd.service - containerd container runtime.
May 9 23:58:01.457720 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 9 23:58:01.716197 tar[2009]: linux-arm64/LICENSE
May 9 23:58:01.716841 tar[2009]: linux-arm64/README.md
May 9 23:58:01.737693 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 9 23:58:01.937120 systemd-networkd[1908]: eth0: Gained IPv6LL
May 9 23:58:01.942955 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 9 23:58:01.946630 systemd[1]: Reached target network-online.target - Network is Online.
May 9 23:58:01.958348 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
May 9 23:58:01.968648 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 9 23:58:01.976090 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 9 23:58:02.056714 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 9 23:58:02.067982 amazon-ssm-agent[2191]: Initializing new seelog logger
May 9 23:58:02.067982 amazon-ssm-agent[2191]: New Seelog Logger Creation Complete
May 9 23:58:02.067982 amazon-ssm-agent[2191]: 2025/05/09 23:58:02 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
May 9 23:58:02.067982 amazon-ssm-agent[2191]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
May 9 23:58:02.068631 amazon-ssm-agent[2191]: 2025/05/09 23:58:02 processing appconfig overrides
May 9 23:58:02.070346 amazon-ssm-agent[2191]: 2025/05/09 23:58:02 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
May 9 23:58:02.070346 amazon-ssm-agent[2191]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
May 9 23:58:02.070346 amazon-ssm-agent[2191]: 2025/05/09 23:58:02 processing appconfig overrides
May 9 23:58:02.070346 amazon-ssm-agent[2191]: 2025/05/09 23:58:02 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
May 9 23:58:02.070346 amazon-ssm-agent[2191]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
May 9 23:58:02.070346 amazon-ssm-agent[2191]: 2025/05/09 23:58:02 processing appconfig overrides
May 9 23:58:02.070346 amazon-ssm-agent[2191]: 2025-05-09 23:58:02 INFO Proxy environment variables:
May 9 23:58:02.074942 amazon-ssm-agent[2191]: 2025/05/09 23:58:02 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
May 9 23:58:02.074942 amazon-ssm-agent[2191]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
May 9 23:58:02.074942 amazon-ssm-agent[2191]: 2025/05/09 23:58:02 processing appconfig overrides
May 9 23:58:02.091457 sshd_keygen[2030]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 9 23:58:02.158000 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 9 23:58:02.169698 amazon-ssm-agent[2191]: 2025-05-09 23:58:02 INFO https_proxy:
May 9 23:58:02.170582 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 9 23:58:02.183371 systemd[1]: Started sshd@0-172.31.18.52:22-147.75.109.163:47824.service - OpenSSH per-connection server daemon (147.75.109.163:47824).
May 9 23:58:02.216179 systemd[1]: issuegen.service: Deactivated successfully.
May 9 23:58:02.220009 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 9 23:58:02.227438 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 9 23:58:02.272033 amazon-ssm-agent[2191]: 2025-05-09 23:58:02 INFO http_proxy:
May 9 23:58:02.292374 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 9 23:58:02.306529 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 9 23:58:02.315521 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 9 23:58:02.320616 systemd[1]: Reached target getty.target - Login Prompts.
May 9 23:58:02.370088 amazon-ssm-agent[2191]: 2025-05-09 23:58:02 INFO no_proxy:
May 9 23:58:02.406956 sshd[2216]: Accepted publickey for core from 147.75.109.163 port 47824 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8
May 9 23:58:02.409404 sshd[2216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 23:58:02.434573 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 9 23:58:02.446387 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 9 23:58:02.453389 systemd-logind[1993]: New session 1 of user core.
May 9 23:58:02.470967 amazon-ssm-agent[2191]: 2025-05-09 23:58:02 INFO Checking if agent identity type OnPrem can be assumed
May 9 23:58:02.491081 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 9 23:58:02.507526 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 9 23:58:02.531841 (systemd)[2228]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 9 23:58:02.567951 amazon-ssm-agent[2191]: 2025-05-09 23:58:02 INFO Checking if agent identity type EC2 can be assumed
May 9 23:58:02.668001 amazon-ssm-agent[2191]: 2025-05-09 23:58:02 INFO Agent will take identity from EC2
May 9 23:58:02.757586 amazon-ssm-agent[2191]: 2025-05-09 23:58:02 INFO [amazon-ssm-agent] using named pipe channel for IPC
May 9 23:58:02.757586 amazon-ssm-agent[2191]: 2025-05-09 23:58:02 INFO [amazon-ssm-agent] using named pipe channel for IPC
May 9 23:58:02.757586 amazon-ssm-agent[2191]: 2025-05-09 23:58:02 INFO [amazon-ssm-agent] using named pipe channel for IPC
May 9 23:58:02.757586 amazon-ssm-agent[2191]: 2025-05-09 23:58:02 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
May 9 23:58:02.757586 amazon-ssm-agent[2191]: 2025-05-09 23:58:02 INFO [amazon-ssm-agent] OS: linux, Arch: arm64
May 9 23:58:02.757877 amazon-ssm-agent[2191]: 2025-05-09 23:58:02 INFO [amazon-ssm-agent] Starting Core Agent
May 9 23:58:02.757877 amazon-ssm-agent[2191]: 2025-05-09 23:58:02 INFO [amazon-ssm-agent] registrar detected. Attempting registration
May 9 23:58:02.757877 amazon-ssm-agent[2191]: 2025-05-09 23:58:02 INFO [Registrar] Starting registrar module
May 9 23:58:02.757877 amazon-ssm-agent[2191]: 2025-05-09 23:58:02 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
May 9 23:58:02.757877 amazon-ssm-agent[2191]: 2025-05-09 23:58:02 INFO [EC2Identity] EC2 registration was successful.
May 9 23:58:02.757877 amazon-ssm-agent[2191]: 2025-05-09 23:58:02 INFO [CredentialRefresher] credentialRefresher has started
May 9 23:58:02.757877 amazon-ssm-agent[2191]: 2025-05-09 23:58:02 INFO [CredentialRefresher] Starting credentials refresher loop
May 9 23:58:02.757877 amazon-ssm-agent[2191]: 2025-05-09 23:58:02 INFO EC2RoleProvider Successfully connected with instance profile role credentials
May 9 23:58:02.767225 amazon-ssm-agent[2191]: 2025-05-09 23:58:02 INFO [CredentialRefresher] Next credential rotation will be in 31.641655464866666 minutes
May 9 23:58:02.775070 systemd[2228]: Queued start job for default target default.target.
May 9 23:58:02.784628 systemd[2228]: Created slice app.slice - User Application Slice.
May 9 23:58:02.784696 systemd[2228]: Reached target paths.target - Paths.
May 9 23:58:02.784730 systemd[2228]: Reached target timers.target - Timers.
May 9 23:58:02.787244 systemd[2228]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 9 23:58:02.813671 systemd[2228]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 9 23:58:02.813976 systemd[2228]: Reached target sockets.target - Sockets.
May 9 23:58:02.814013 systemd[2228]: Reached target basic.target - Basic System.
May 9 23:58:02.814108 systemd[2228]: Reached target default.target - Main User Target.
May 9 23:58:02.814171 systemd[2228]: Startup finished in 268ms.
May 9 23:58:02.814409 systemd[1]: Started user@500.service - User Manager for UID 500.
May 9 23:58:02.825200 systemd[1]: Started session-1.scope - Session 1 of User core.
May 9 23:58:02.982457 systemd[1]: Started sshd@1-172.31.18.52:22-147.75.109.163:47838.service - OpenSSH per-connection server daemon (147.75.109.163:47838).
May 9 23:58:03.163637 sshd[2240]: Accepted publickey for core from 147.75.109.163 port 47838 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8
May 9 23:58:03.166449 sshd[2240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 23:58:03.176295 systemd-logind[1993]: New session 2 of user core.
May 9 23:58:03.181191 systemd[1]: Started session-2.scope - Session 2 of User core.
May 9 23:58:03.311124 sshd[2240]: pam_unix(sshd:session): session closed for user core
May 9 23:58:03.317642 systemd-logind[1993]: Session 2 logged out. Waiting for processes to exit.
May 9 23:58:03.319297 systemd[1]: sshd@1-172.31.18.52:22-147.75.109.163:47838.service: Deactivated successfully.
May 9 23:58:03.324301 systemd[1]: session-2.scope: Deactivated successfully.
May 9 23:58:03.326403 systemd-logind[1993]: Removed session 2.
May 9 23:58:03.345059 systemd[1]: Started sshd@2-172.31.18.52:22-147.75.109.163:47842.service - OpenSSH per-connection server daemon (147.75.109.163:47842).
May 9 23:58:03.535355 sshd[2247]: Accepted publickey for core from 147.75.109.163 port 47842 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8
May 9 23:58:03.538674 sshd[2247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 23:58:03.548016 systemd-logind[1993]: New session 3 of user core.
May 9 23:58:03.560194 systemd[1]: Started session-3.scope - Session 3 of User core.
May 9 23:58:03.688351 sshd[2247]: pam_unix(sshd:session): session closed for user core
May 9 23:58:03.694248 systemd-logind[1993]: Session 3 logged out. Waiting for processes to exit.
May 9 23:58:03.696499 systemd[1]: sshd@2-172.31.18.52:22-147.75.109.163:47842.service: Deactivated successfully.
May 9 23:58:03.699844 systemd[1]: session-3.scope: Deactivated successfully.
May 9 23:58:03.701745 systemd-logind[1993]: Removed session 3.
May 9 23:58:03.787659 amazon-ssm-agent[2191]: 2025-05-09 23:58:03 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
May 9 23:58:03.888880 amazon-ssm-agent[2191]: 2025-05-09 23:58:03 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2254) started
May 9 23:58:03.989273 amazon-ssm-agent[2191]: 2025-05-09 23:58:03 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
May 9 23:58:04.198361 ntpd[1986]: Listen normally on 7 eth0 [fe80::40f:9eff:fe47:fde1%2]:123
May 9 23:58:04.199068 ntpd[1986]: 9 May 23:58:04 ntpd[1986]: Listen normally on 7 eth0 [fe80::40f:9eff:fe47:fde1%2]:123
May 9 23:58:04.648216 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 9 23:58:04.652455 systemd[1]: Reached target multi-user.target - Multi-User System.
May 9 23:58:04.653668 (kubelet)[2268]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 9 23:58:04.655966 systemd[1]: Startup finished in 1.165s (kernel) + 8.478s (initrd) + 9.425s (userspace) = 19.070s.
May 9 23:58:05.820935 kubelet[2268]: E0509 23:58:05.820837 2268 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 9 23:58:05.825368 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 9 23:58:05.825710 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 9 23:58:05.827063 systemd[1]: kubelet.service: Consumed 1.252s CPU time.
May 9 23:58:07.482976 systemd-resolved[1912]: Clock change detected. Flushing caches.
May 9 23:58:14.013771 systemd[1]: Started sshd@3-172.31.18.52:22-147.75.109.163:54748.service - OpenSSH per-connection server daemon (147.75.109.163:54748).
May 9 23:58:14.181067 sshd[2281]: Accepted publickey for core from 147.75.109.163 port 54748 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8
May 9 23:58:14.183661 sshd[2281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 23:58:14.192624 systemd-logind[1993]: New session 4 of user core.
May 9 23:58:14.200551 systemd[1]: Started session-4.scope - Session 4 of User core.
May 9 23:58:14.327594 sshd[2281]: pam_unix(sshd:session): session closed for user core
May 9 23:58:14.333853 systemd-logind[1993]: Session 4 logged out. Waiting for processes to exit.
May 9 23:58:14.335265 systemd[1]: sshd@3-172.31.18.52:22-147.75.109.163:54748.service: Deactivated successfully.
May 9 23:58:14.338328 systemd[1]: session-4.scope: Deactivated successfully.
May 9 23:58:14.341021 systemd-logind[1993]: Removed session 4.
May 9 23:58:14.362476 systemd[1]: Started sshd@4-172.31.18.52:22-147.75.109.163:54756.service - OpenSSH per-connection server daemon (147.75.109.163:54756).
May 9 23:58:14.544631 sshd[2288]: Accepted publickey for core from 147.75.109.163 port 54756 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8
May 9 23:58:14.547155 sshd[2288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 23:58:14.554363 systemd-logind[1993]: New session 5 of user core.
May 9 23:58:14.567534 systemd[1]: Started session-5.scope - Session 5 of User core.
May 9 23:58:14.685870 sshd[2288]: pam_unix(sshd:session): session closed for user core
May 9 23:58:14.692005 systemd[1]: sshd@4-172.31.18.52:22-147.75.109.163:54756.service: Deactivated successfully.
May 9 23:58:14.692601 systemd-logind[1993]: Session 5 logged out. Waiting for processes to exit.
May 9 23:58:14.695390 systemd[1]: session-5.scope: Deactivated successfully.
May 9 23:58:14.698934 systemd-logind[1993]: Removed session 5.
May 9 23:58:14.727815 systemd[1]: Started sshd@5-172.31.18.52:22-147.75.109.163:54766.service - OpenSSH per-connection server daemon (147.75.109.163:54766).
May 9 23:58:14.896706 sshd[2295]: Accepted publickey for core from 147.75.109.163 port 54766 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8
May 9 23:58:14.899254 sshd[2295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 23:58:14.908633 systemd-logind[1993]: New session 6 of user core.
May 9 23:58:14.918556 systemd[1]: Started session-6.scope - Session 6 of User core.
May 9 23:58:15.044748 sshd[2295]: pam_unix(sshd:session): session closed for user core
May 9 23:58:15.051655 systemd[1]: sshd@5-172.31.18.52:22-147.75.109.163:54766.service: Deactivated successfully.
May 9 23:58:15.055847 systemd[1]: session-6.scope: Deactivated successfully.
May 9 23:58:15.057510 systemd-logind[1993]: Session 6 logged out. Waiting for processes to exit.
May 9 23:58:15.059660 systemd-logind[1993]: Removed session 6.
May 9 23:58:15.084810 systemd[1]: Started sshd@6-172.31.18.52:22-147.75.109.163:54776.service - OpenSSH per-connection server daemon (147.75.109.163:54776).
May 9 23:58:15.262337 sshd[2302]: Accepted publickey for core from 147.75.109.163 port 54776 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8
May 9 23:58:15.264980 sshd[2302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 23:58:15.272221 systemd-logind[1993]: New session 7 of user core.
May 9 23:58:15.283548 systemd[1]: Started session-7.scope - Session 7 of User core.
May 9 23:58:15.398783 sudo[2305]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 9 23:58:15.399425 sudo[2305]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 9 23:58:15.415000 sudo[2305]: pam_unix(sudo:session): session closed for user root
May 9 23:58:15.438129 sshd[2302]: pam_unix(sshd:session): session closed for user core
May 9 23:58:15.444047 systemd-logind[1993]: Session 7 logged out. Waiting for processes to exit.
May 9 23:58:15.446222 systemd[1]: sshd@6-172.31.18.52:22-147.75.109.163:54776.service: Deactivated successfully.
May 9 23:58:15.450541 systemd[1]: session-7.scope: Deactivated successfully.
May 9 23:58:15.452382 systemd-logind[1993]: Removed session 7.
May 9 23:58:15.479757 systemd[1]: Started sshd@7-172.31.18.52:22-147.75.109.163:54792.service - OpenSSH per-connection server daemon (147.75.109.163:54792).
May 9 23:58:15.647252 sshd[2310]: Accepted publickey for core from 147.75.109.163 port 54792 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8
May 9 23:58:15.649940 sshd[2310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 23:58:15.658142 systemd-logind[1993]: New session 8 of user core.
May 9 23:58:15.670537 systemd[1]: Started session-8.scope - Session 8 of User core.
May 9 23:58:15.773516 sudo[2314]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 9 23:58:15.774165 sudo[2314]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 9 23:58:15.780438 sudo[2314]: pam_unix(sudo:session): session closed for user root
May 9 23:58:15.790620 sudo[2313]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
May 9 23:58:15.791218 sudo[2313]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 9 23:58:15.819221 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
May 9 23:58:15.821570 auditctl[2317]: No rules
May 9 23:58:15.822251 systemd[1]: audit-rules.service: Deactivated successfully.
May 9 23:58:15.822723 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
May 9 23:58:15.830131 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 9 23:58:15.884251 augenrules[2335]: No rules
May 9 23:58:15.887397 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
May 9 23:58:15.890482 sudo[2313]: pam_unix(sudo:session): session closed for user root
May 9 23:58:15.913605 sshd[2310]: pam_unix(sshd:session): session closed for user core
May 9 23:58:15.919984 systemd[1]: sshd@7-172.31.18.52:22-147.75.109.163:54792.service: Deactivated successfully.
May 9 23:58:15.923186 systemd[1]: session-8.scope: Deactivated successfully.
May 9 23:58:15.924543 systemd-logind[1993]: Session 8 logged out. Waiting for processes to exit.
May 9 23:58:15.926267 systemd-logind[1993]: Removed session 8.
May 9 23:58:15.947470 systemd[1]: Started sshd@8-172.31.18.52:22-147.75.109.163:54806.service - OpenSSH per-connection server daemon (147.75.109.163:54806).
May 9 23:58:16.130117 sshd[2343]: Accepted publickey for core from 147.75.109.163 port 54806 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8
May 9 23:58:16.132781 sshd[2343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 23:58:16.134465 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 9 23:58:16.143669 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 9 23:58:16.149645 systemd-logind[1993]: New session 9 of user core.
May 9 23:58:16.154079 systemd[1]: Started session-9.scope - Session 9 of User core.
May 9 23:58:16.271902 sudo[2349]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 9 23:58:16.273034 sudo[2349]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 9 23:58:16.478596 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 9 23:58:16.483889 (kubelet)[2363]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 9 23:58:16.571070 kubelet[2363]: E0509 23:58:16.570879 2363 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 9 23:58:16.577450 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 9 23:58:16.577765 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 9 23:58:16.752805 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 9 23:58:16.769064 (dockerd)[2376]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 9 23:58:17.118152 dockerd[2376]: time="2025-05-09T23:58:17.118057427Z" level=info msg="Starting up"
May 9 23:58:17.258348 dockerd[2376]: time="2025-05-09T23:58:17.258116508Z" level=info msg="Loading containers: start."
May 9 23:58:17.413338 kernel: Initializing XFRM netlink socket
May 9 23:58:17.449068 (udev-worker)[2400]: Network interface NamePolicy= disabled on kernel command line.
May 9 23:58:17.530988 systemd-networkd[1908]: docker0: Link UP
May 9 23:58:17.557700 dockerd[2376]: time="2025-05-09T23:58:17.557629898Z" level=info msg="Loading containers: done."
May 9 23:58:17.582854 dockerd[2376]: time="2025-05-09T23:58:17.582660098Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 9 23:58:17.582854 dockerd[2376]: time="2025-05-09T23:58:17.582815198Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
May 9 23:58:17.583658 dockerd[2376]: time="2025-05-09T23:58:17.583361270Z" level=info msg="Daemon has completed initialization"
May 9 23:58:17.646341 dockerd[2376]: time="2025-05-09T23:58:17.645696086Z" level=info msg="API listen on /run/docker.sock"
May 9 23:58:17.648174 systemd[1]: Started docker.service - Docker Application Container Engine.
May 9 23:58:19.068319 containerd[2025]: time="2025-05-09T23:58:19.068247925Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\""
May 9 23:58:19.685951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount464444987.mount: Deactivated successfully.
May 9 23:58:20.995832 containerd[2025]: time="2025-05-09T23:58:20.995737783Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 23:58:20.997938 containerd[2025]: time="2025-05-09T23:58:20.997881679Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=25554608"
May 9 23:58:21.000673 containerd[2025]: time="2025-05-09T23:58:21.000586395Z" level=info msg="ImageCreate event name:\"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 23:58:21.007246 containerd[2025]: time="2025-05-09T23:58:21.007170219Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 23:58:21.010149 containerd[2025]: time="2025-05-09T23:58:21.009471927Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"25551408\" in 1.941149902s"
May 9 23:58:21.010149 containerd[2025]: time="2025-05-09T23:58:21.009531411Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\""
May 9 23:58:21.010647 containerd[2025]: time="2025-05-09T23:58:21.010591371Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\""
May 9 23:58:22.395371 containerd[2025]: time="2025-05-09T23:58:22.395004150Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 23:58:22.397175 containerd[2025]: time="2025-05-09T23:58:22.397107462Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=22458978"
May 9 23:58:22.398083 containerd[2025]: time="2025-05-09T23:58:22.397617450Z" level=info msg="ImageCreate event name:\"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 23:58:22.403380 containerd[2025]: time="2025-05-09T23:58:22.403270374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 23:58:22.406815 containerd[2025]: time="2025-05-09T23:58:22.405607794Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"23900539\" in 1.394833207s"
May 9 23:58:22.406815 containerd[2025]: time="2025-05-09T23:58:22.405671550Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\""
May 9 23:58:22.406815 containerd[2025]: time="2025-05-09T23:58:22.406396830Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\""
May 9 23:58:23.595265 containerd[2025]: time="2025-05-09T23:58:23.595044416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 23:58:23.597188 containerd[2025]: time="2025-05-09T23:58:23.597114356Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=17125813"
May 9 23:58:23.597741 containerd[2025]: time="2025-05-09T23:58:23.597669212Z" level=info msg="ImageCreate event name:\"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 23:58:23.603326 containerd[2025]: time="2025-05-09T23:58:23.603215288Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 23:58:23.606177 containerd[2025]: time="2025-05-09T23:58:23.605572304Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"18567392\" in 1.199127174s"
May 9 23:58:23.606177 containerd[2025]: time="2025-05-09T23:58:23.605634104Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\""
May 9 23:58:23.607655 containerd[2025]: time="2025-05-09T23:58:23.607599296Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\""
May 9 23:58:24.877031 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2548275532.mount: Deactivated successfully.
May 9 23:58:25.429343 containerd[2025]: time="2025-05-09T23:58:25.428485029Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:25.430729 containerd[2025]: time="2025-05-09T23:58:25.430666761Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=26871917" May 9 23:58:25.432111 containerd[2025]: time="2025-05-09T23:58:25.432045309Z" level=info msg="ImageCreate event name:\"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:25.435340 containerd[2025]: time="2025-05-09T23:58:25.435230901Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:25.437083 containerd[2025]: time="2025-05-09T23:58:25.436764429Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"26870936\" in 1.829105589s" May 9 23:58:25.437083 containerd[2025]: time="2025-05-09T23:58:25.436820433Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\"" May 9 23:58:25.437807 containerd[2025]: time="2025-05-09T23:58:25.437766765Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 9 23:58:25.958626 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1875004535.mount: Deactivated successfully. May 9 23:58:26.828904 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
May 9 23:58:26.838719 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 23:58:27.215102 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 23:58:27.227880 (kubelet)[2637]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 23:58:27.310470 kubelet[2637]: E0509 23:58:27.310166 2637 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 23:58:27.314799 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 23:58:27.315135 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 9 23:58:27.393100 containerd[2025]: time="2025-05-09T23:58:27.392818234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:27.397634 containerd[2025]: time="2025-05-09T23:58:27.397269658Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" May 9 23:58:27.402103 containerd[2025]: time="2025-05-09T23:58:27.402016594Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:27.411776 containerd[2025]: time="2025-05-09T23:58:27.411661534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:27.416364 containerd[2025]: time="2025-05-09T23:58:27.415135799Z" level=info msg="Pulled image 
\"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.977072586s" May 9 23:58:27.416364 containerd[2025]: time="2025-05-09T23:58:27.415202555Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" May 9 23:58:27.417663 containerd[2025]: time="2025-05-09T23:58:27.417385415Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 9 23:58:28.094744 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount701942442.mount: Deactivated successfully. May 9 23:58:28.107780 containerd[2025]: time="2025-05-09T23:58:28.107689558Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:28.110806 containerd[2025]: time="2025-05-09T23:58:28.110745286Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" May 9 23:58:28.113498 containerd[2025]: time="2025-05-09T23:58:28.113426374Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:28.119663 containerd[2025]: time="2025-05-09T23:58:28.119533714Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:28.123898 containerd[2025]: time="2025-05-09T23:58:28.123241234Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", 
repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 705.790419ms" May 9 23:58:28.123898 containerd[2025]: time="2025-05-09T23:58:28.123350914Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 9 23:58:28.126915 containerd[2025]: time="2025-05-09T23:58:28.126788770Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 9 23:58:28.697044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2686555530.mount: Deactivated successfully. May 9 23:58:30.598951 containerd[2025]: time="2025-05-09T23:58:30.598873034Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:30.601355 containerd[2025]: time="2025-05-09T23:58:30.601278734Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406465" May 9 23:58:30.603494 containerd[2025]: time="2025-05-09T23:58:30.603402638Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:30.610049 containerd[2025]: time="2025-05-09T23:58:30.609947570Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:30.613083 containerd[2025]: time="2025-05-09T23:58:30.612511658Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size 
\"66535646\" in 2.485523808s" May 9 23:58:30.613083 containerd[2025]: time="2025-05-09T23:58:30.612572366Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" May 9 23:58:31.490146 systemd[1]: systemd-hostnamed.service: Deactivated successfully. May 9 23:58:37.566229 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 9 23:58:37.575847 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 23:58:37.624355 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 9 23:58:37.624543 systemd[1]: kubelet.service: Failed with result 'signal'. May 9 23:58:37.625097 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 23:58:37.637913 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 23:58:37.706656 systemd[1]: Reloading requested from client PID 2735 ('systemctl') (unit session-9.scope)... May 9 23:58:37.706869 systemd[1]: Reloading... May 9 23:58:37.978517 zram_generator::config[2778]: No configuration found. May 9 23:58:38.179302 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 23:58:38.351689 systemd[1]: Reloading finished in 644 ms. May 9 23:58:38.452112 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 9 23:58:38.455908 systemd[1]: kubelet.service: Deactivated successfully. May 9 23:58:38.456348 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 23:58:38.465777 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 23:58:38.782433 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 9 23:58:38.800094 (kubelet)[2840]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 9 23:58:38.877427 kubelet[2840]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 23:58:38.877427 kubelet[2840]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 9 23:58:38.877427 kubelet[2840]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 23:58:38.877992 kubelet[2840]: I0509 23:58:38.877518 2840 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 9 23:58:39.427438 kubelet[2840]: I0509 23:58:39.427387 2840 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 9 23:58:39.427438 kubelet[2840]: I0509 23:58:39.427432 2840 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 9 23:58:39.429339 kubelet[2840]: I0509 23:58:39.428382 2840 server.go:929] "Client rotation is on, will bootstrap in background" May 9 23:58:39.485832 kubelet[2840]: E0509 23:58:39.485783 2840 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.18.52:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.18.52:6443: connect: connection refused" logger="UnhandledError" May 9 23:58:39.489363 kubelet[2840]: I0509 
23:58:39.489323 2840 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 9 23:58:39.501469 kubelet[2840]: E0509 23:58:39.501405 2840 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 9 23:58:39.501469 kubelet[2840]: I0509 23:58:39.501467 2840 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 9 23:58:39.507851 kubelet[2840]: I0509 23:58:39.507809 2840 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 9 23:58:39.508151 kubelet[2840]: I0509 23:58:39.508112 2840 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 9 23:58:39.508562 kubelet[2840]: I0509 23:58:39.508503 2840 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 9 23:58:39.508835 kubelet[2840]: I0509 23:58:39.508555 2840 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ip-172-31-18-52","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 9 23:58:39.509010 kubelet[2840]: I0509 23:58:39.508890 2840 topology_manager.go:138] "Creating topology manager with none policy" May 9 23:58:39.509010 kubelet[2840]: I0509 23:58:39.508914 2840 container_manager_linux.go:300] "Creating device plugin manager" May 9 23:58:39.509109 kubelet[2840]: I0509 23:58:39.509100 2840 state_mem.go:36] "Initialized new in-memory state store" May 9 23:58:39.514081 kubelet[2840]: I0509 23:58:39.513562 2840 kubelet.go:408] 
"Attempting to sync node with API server" May 9 23:58:39.514081 kubelet[2840]: I0509 23:58:39.513613 2840 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 9 23:58:39.514081 kubelet[2840]: I0509 23:58:39.513673 2840 kubelet.go:314] "Adding apiserver pod source" May 9 23:58:39.514081 kubelet[2840]: I0509 23:58:39.513695 2840 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 9 23:58:39.521598 kubelet[2840]: W0509 23:58:39.520394 2840 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.18.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-52&limit=500&resourceVersion=0": dial tcp 172.31.18.52:6443: connect: connection refused May 9 23:58:39.521598 kubelet[2840]: E0509 23:58:39.520540 2840 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.18.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-52&limit=500&resourceVersion=0\": dial tcp 172.31.18.52:6443: connect: connection refused" logger="UnhandledError" May 9 23:58:39.521598 kubelet[2840]: W0509 23:58:39.521357 2840 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.18.52:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.18.52:6443: connect: connection refused May 9 23:58:39.521598 kubelet[2840]: E0509 23:58:39.521461 2840 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.18.52:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.18.52:6443: connect: connection refused" logger="UnhandledError" May 9 23:58:39.522354 kubelet[2840]: I0509 23:58:39.521922 2840 kuberuntime_manager.go:262] "Container runtime 
initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 9 23:58:39.524980 kubelet[2840]: I0509 23:58:39.524939 2840 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 9 23:58:39.526399 kubelet[2840]: W0509 23:58:39.526366 2840 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 9 23:58:39.528422 kubelet[2840]: I0509 23:58:39.528124 2840 server.go:1269] "Started kubelet" May 9 23:58:39.529168 kubelet[2840]: I0509 23:58:39.529104 2840 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 9 23:58:39.534516 kubelet[2840]: I0509 23:58:39.534420 2840 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 9 23:58:39.536107 kubelet[2840]: I0509 23:58:39.535194 2840 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 9 23:58:39.536107 kubelet[2840]: I0509 23:58:39.535510 2840 server.go:460] "Adding debug handlers to kubelet server" May 9 23:58:39.540401 kubelet[2840]: E0509 23:58:39.538431 2840 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.18.52:6443/api/v1/namespaces/default/events\": dial tcp 172.31.18.52:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-18-52.183e01493ca61bdb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-52,UID:ip-172-31-18-52,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-52,},FirstTimestamp:2025-05-09 23:58:39.528090587 +0000 UTC m=+0.721531025,LastTimestamp:2025-05-09 23:58:39.528090587 +0000 UTC m=+0.721531025,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-52,}" May 9 23:58:39.543998 kubelet[2840]: I0509 23:58:39.543353 2840 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 9 23:58:39.545843 kubelet[2840]: E0509 23:58:39.545789 2840 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 9 23:58:39.546179 kubelet[2840]: I0509 23:58:39.546140 2840 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 9 23:58:39.548505 kubelet[2840]: I0509 23:58:39.548467 2840 volume_manager.go:289] "Starting Kubelet Volume Manager" May 9 23:58:39.549121 kubelet[2840]: E0509 23:58:39.549086 2840 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-18-52\" not found" May 9 23:58:39.552214 kubelet[2840]: E0509 23:58:39.551597 2840 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-52?timeout=10s\": dial tcp 172.31.18.52:6443: connect: connection refused" interval="200ms" May 9 23:58:39.552412 kubelet[2840]: W0509 23:58:39.552261 2840 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.18.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.52:6443: connect: connection refused May 9 23:58:39.552412 kubelet[2840]: E0509 23:58:39.552380 2840 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.18.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.18.52:6443: connect: connection refused" logger="UnhandledError" May 9 
23:58:39.552920 kubelet[2840]: I0509 23:58:39.552874 2840 factory.go:221] Registration of the systemd container factory successfully May 9 23:58:39.553057 kubelet[2840]: I0509 23:58:39.553018 2840 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 9 23:58:39.554613 kubelet[2840]: I0509 23:58:39.554156 2840 reconciler.go:26] "Reconciler: start to sync state" May 9 23:58:39.554613 kubelet[2840]: I0509 23:58:39.554208 2840 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 9 23:58:39.554802 kubelet[2840]: I0509 23:58:39.554642 2840 factory.go:221] Registration of the containerd container factory successfully May 9 23:58:39.585539 kubelet[2840]: I0509 23:58:39.585359 2840 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 9 23:58:39.587639 kubelet[2840]: I0509 23:58:39.587489 2840 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 9 23:58:39.587639 kubelet[2840]: I0509 23:58:39.587535 2840 status_manager.go:217] "Starting to sync pod status with apiserver" May 9 23:58:39.587639 kubelet[2840]: I0509 23:58:39.587567 2840 kubelet.go:2321] "Starting kubelet main sync loop" May 9 23:58:39.587906 kubelet[2840]: E0509 23:58:39.587632 2840 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 9 23:58:39.590687 kubelet[2840]: W0509 23:58:39.590528 2840 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.18.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.52:6443: connect: connection refused May 9 23:58:39.590687 kubelet[2840]: E0509 23:58:39.590640 2840 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.18.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.18.52:6443: connect: connection refused" logger="UnhandledError" May 9 23:58:39.603144 kubelet[2840]: I0509 23:58:39.603075 2840 cpu_manager.go:214] "Starting CPU manager" policy="none" May 9 23:58:39.603618 kubelet[2840]: I0509 23:58:39.603107 2840 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 9 23:58:39.603618 kubelet[2840]: I0509 23:58:39.603457 2840 state_mem.go:36] "Initialized new in-memory state store" May 9 23:58:39.608762 kubelet[2840]: I0509 23:58:39.608597 2840 policy_none.go:49] "None policy: Start" May 9 23:58:39.610517 kubelet[2840]: I0509 23:58:39.609940 2840 memory_manager.go:170] "Starting memorymanager" policy="None" May 9 23:58:39.610517 kubelet[2840]: I0509 23:58:39.609979 2840 state_mem.go:35] "Initializing new in-memory state store" May 9 23:58:39.622003 systemd[1]: Created slice kubepods.slice - libcontainer container 
kubepods.slice. May 9 23:58:39.637154 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 9 23:58:39.644190 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 9 23:58:39.649549 kubelet[2840]: E0509 23:58:39.649482 2840 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-18-52\" not found" May 9 23:58:39.653165 kubelet[2840]: I0509 23:58:39.652334 2840 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 9 23:58:39.653165 kubelet[2840]: I0509 23:58:39.652631 2840 eviction_manager.go:189] "Eviction manager: starting control loop" May 9 23:58:39.653165 kubelet[2840]: I0509 23:58:39.652651 2840 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 9 23:58:39.653165 kubelet[2840]: I0509 23:58:39.652999 2840 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 9 23:58:39.657087 kubelet[2840]: E0509 23:58:39.656987 2840 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-18-52\" not found" May 9 23:58:39.705372 systemd[1]: Created slice kubepods-burstable-podd7d8a92f58088e51a0ce9d5c0e9a01c5.slice - libcontainer container kubepods-burstable-podd7d8a92f58088e51a0ce9d5c0e9a01c5.slice. May 9 23:58:39.725242 systemd[1]: Created slice kubepods-burstable-pod6e06f0e6eb25eb8e2cabd3ca67ae300d.slice - libcontainer container kubepods-burstable-pod6e06f0e6eb25eb8e2cabd3ca67ae300d.slice. May 9 23:58:39.735515 systemd[1]: Created slice kubepods-burstable-pod142be92da19c0df4c3659904b49c5608.slice - libcontainer container kubepods-burstable-pod142be92da19c0df4c3659904b49c5608.slice. 
May 9 23:58:39.752208 kubelet[2840]: E0509 23:58:39.752147 2840 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-52?timeout=10s\": dial tcp 172.31.18.52:6443: connect: connection refused" interval="400ms" May 9 23:58:39.755207 kubelet[2840]: I0509 23:58:39.755163 2840 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-18-52" May 9 23:58:39.755785 kubelet[2840]: E0509 23:58:39.755724 2840 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.18.52:6443/api/v1/nodes\": dial tcp 172.31.18.52:6443: connect: connection refused" node="ip-172-31-18-52" May 9 23:58:39.757602 kubelet[2840]: I0509 23:58:39.757103 2840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e06f0e6eb25eb8e2cabd3ca67ae300d-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-52\" (UID: \"6e06f0e6eb25eb8e2cabd3ca67ae300d\") " pod="kube-system/kube-scheduler-ip-172-31-18-52" May 9 23:58:39.757602 kubelet[2840]: I0509 23:58:39.757160 2840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/142be92da19c0df4c3659904b49c5608-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-52\" (UID: \"142be92da19c0df4c3659904b49c5608\") " pod="kube-system/kube-controller-manager-ip-172-31-18-52" May 9 23:58:39.757602 kubelet[2840]: I0509 23:58:39.757201 2840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d7d8a92f58088e51a0ce9d5c0e9a01c5-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-52\" (UID: \"d7d8a92f58088e51a0ce9d5c0e9a01c5\") " pod="kube-system/kube-apiserver-ip-172-31-18-52" May 9 23:58:39.757602 
kubelet[2840]: I0509 23:58:39.757238 2840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d7d8a92f58088e51a0ce9d5c0e9a01c5-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-52\" (UID: \"d7d8a92f58088e51a0ce9d5c0e9a01c5\") " pod="kube-system/kube-apiserver-ip-172-31-18-52" May 9 23:58:39.757602 kubelet[2840]: I0509 23:58:39.757273 2840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/142be92da19c0df4c3659904b49c5608-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-52\" (UID: \"142be92da19c0df4c3659904b49c5608\") " pod="kube-system/kube-controller-manager-ip-172-31-18-52" May 9 23:58:39.757902 kubelet[2840]: I0509 23:58:39.757349 2840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/142be92da19c0df4c3659904b49c5608-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-52\" (UID: \"142be92da19c0df4c3659904b49c5608\") " pod="kube-system/kube-controller-manager-ip-172-31-18-52" May 9 23:58:39.757902 kubelet[2840]: I0509 23:58:39.757386 2840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/142be92da19c0df4c3659904b49c5608-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-52\" (UID: \"142be92da19c0df4c3659904b49c5608\") " pod="kube-system/kube-controller-manager-ip-172-31-18-52" May 9 23:58:39.757902 kubelet[2840]: I0509 23:58:39.757420 2840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/142be92da19c0df4c3659904b49c5608-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-52\" (UID: \"142be92da19c0df4c3659904b49c5608\") " 
pod="kube-system/kube-controller-manager-ip-172-31-18-52"
May 9 23:58:39.757902 kubelet[2840]: I0509 23:58:39.757459 2840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d7d8a92f58088e51a0ce9d5c0e9a01c5-ca-certs\") pod \"kube-apiserver-ip-172-31-18-52\" (UID: \"d7d8a92f58088e51a0ce9d5c0e9a01c5\") " pod="kube-system/kube-apiserver-ip-172-31-18-52"
May 9 23:58:39.958549 kubelet[2840]: I0509 23:58:39.958368 2840 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-18-52"
May 9 23:58:39.959875 kubelet[2840]: E0509 23:58:39.959824 2840 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.18.52:6443/api/v1/nodes\": dial tcp 172.31.18.52:6443: connect: connection refused" node="ip-172-31-18-52"
May 9 23:58:40.021125 containerd[2025]: time="2025-05-09T23:58:40.021057393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-52,Uid:d7d8a92f58088e51a0ce9d5c0e9a01c5,Namespace:kube-system,Attempt:0,}"
May 9 23:58:40.031852 containerd[2025]: time="2025-05-09T23:58:40.031747341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-52,Uid:6e06f0e6eb25eb8e2cabd3ca67ae300d,Namespace:kube-system,Attempt:0,}"
May 9 23:58:40.041058 containerd[2025]: time="2025-05-09T23:58:40.040709217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-52,Uid:142be92da19c0df4c3659904b49c5608,Namespace:kube-system,Attempt:0,}"
May 9 23:58:40.152777 kubelet[2840]: E0509 23:58:40.152703 2840 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-52?timeout=10s\": dial tcp 172.31.18.52:6443: connect: connection refused" interval="800ms"
May 9 23:58:40.362784 kubelet[2840]: I0509 23:58:40.362235 2840 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-18-52"
May 9 23:58:40.362784 kubelet[2840]: E0509 23:58:40.362708 2840 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.18.52:6443/api/v1/nodes\": dial tcp 172.31.18.52:6443: connect: connection refused" node="ip-172-31-18-52"
May 9 23:58:40.537564 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2922523484.mount: Deactivated successfully.
May 9 23:58:40.555012 containerd[2025]: time="2025-05-09T23:58:40.554939280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 9 23:58:40.557037 containerd[2025]: time="2025-05-09T23:58:40.556968216Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 9 23:58:40.559173 containerd[2025]: time="2025-05-09T23:58:40.559078164Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
May 9 23:58:40.561098 containerd[2025]: time="2025-05-09T23:58:40.561049128Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 9 23:58:40.563241 containerd[2025]: time="2025-05-09T23:58:40.563170380Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 9 23:58:40.565734 containerd[2025]: time="2025-05-09T23:58:40.565669488Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 9 23:58:40.567575 containerd[2025]: time="2025-05-09T23:58:40.567501696Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 9 23:58:40.574881 containerd[2025]: time="2025-05-09T23:58:40.574801884Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 9 23:58:40.576941 containerd[2025]: time="2025-05-09T23:58:40.576660120Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 555.480399ms"
May 9 23:58:40.583037 containerd[2025]: time="2025-05-09T23:58:40.582967224Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 542.150559ms"
May 9 23:58:40.586226 containerd[2025]: time="2025-05-09T23:58:40.585988416Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 554.135523ms"
May 9 23:58:40.610264 kubelet[2840]: W0509 23:58:40.610144 2840 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.18.52:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.18.52:6443: connect: connection refused
May 9 23:58:40.610441 kubelet[2840]: E0509 23:58:40.610280 2840 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.18.52:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.18.52:6443: connect: connection refused" logger="UnhandledError"
May 9 23:58:40.795630 containerd[2025]: time="2025-05-09T23:58:40.794734009Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 9 23:58:40.795630 containerd[2025]: time="2025-05-09T23:58:40.794877649Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 9 23:58:40.795630 containerd[2025]: time="2025-05-09T23:58:40.794915617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 23:58:40.795630 containerd[2025]: time="2025-05-09T23:58:40.795072541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 23:58:40.800075 containerd[2025]: time="2025-05-09T23:58:40.799802701Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 9 23:58:40.800322 containerd[2025]: time="2025-05-09T23:58:40.800043145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 9 23:58:40.800578 containerd[2025]: time="2025-05-09T23:58:40.800283421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 23:58:40.800807 containerd[2025]: time="2025-05-09T23:58:40.800698009Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 9 23:58:40.800920 containerd[2025]: time="2025-05-09T23:58:40.800782093Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 9 23:58:40.800920 containerd[2025]: time="2025-05-09T23:58:40.800819017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 23:58:40.801525 containerd[2025]: time="2025-05-09T23:58:40.800952241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 23:58:40.802036 containerd[2025]: time="2025-05-09T23:58:40.801858373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 23:58:40.809978 kubelet[2840]: W0509 23:58:40.809913 2840 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.18.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.52:6443: connect: connection refused
May 9 23:58:40.812363 kubelet[2840]: E0509 23:58:40.809993 2840 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.18.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.18.52:6443: connect: connection refused" logger="UnhandledError"
May 9 23:58:40.862601 systemd[1]: Started cri-containerd-be340cefcf6fc50f1cd13afb2d1771e3ef678ae4ffae5ea1d64593bed90c85b9.scope - libcontainer container be340cefcf6fc50f1cd13afb2d1771e3ef678ae4ffae5ea1d64593bed90c85b9.
May 9 23:58:40.865893 systemd[1]: Started cri-containerd-e8160f2a3ee06a63bb425a3cdcbeeccff3a9c01a8799992bfa5120b729b14eb0.scope - libcontainer container e8160f2a3ee06a63bb425a3cdcbeeccff3a9c01a8799992bfa5120b729b14eb0.
May 9 23:58:40.879031 systemd[1]: Started cri-containerd-5998ed0bdafce7ddfcf1a41c27976f75b96b53e7f9dd661dd10ab8b33d05d4cb.scope - libcontainer container 5998ed0bdafce7ddfcf1a41c27976f75b96b53e7f9dd661dd10ab8b33d05d4cb.
May 9 23:58:40.889625 kubelet[2840]: W0509 23:58:40.889504 2840 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.18.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-52&limit=500&resourceVersion=0": dial tcp 172.31.18.52:6443: connect: connection refused
May 9 23:58:40.889625 kubelet[2840]: E0509 23:58:40.889613 2840 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.18.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-52&limit=500&resourceVersion=0\": dial tcp 172.31.18.52:6443: connect: connection refused" logger="UnhandledError"
May 9 23:58:40.954722 kubelet[2840]: E0509 23:58:40.954121 2840 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-52?timeout=10s\": dial tcp 172.31.18.52:6443: connect: connection refused" interval="1.6s"
May 9 23:58:40.972420 containerd[2025]: time="2025-05-09T23:58:40.972338798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-52,Uid:d7d8a92f58088e51a0ce9d5c0e9a01c5,Namespace:kube-system,Attempt:0,} returns sandbox id \"be340cefcf6fc50f1cd13afb2d1771e3ef678ae4ffae5ea1d64593bed90c85b9\""
May 9 23:58:40.979744 containerd[2025]: time="2025-05-09T23:58:40.979676246Z" level=info msg="CreateContainer within sandbox \"be340cefcf6fc50f1cd13afb2d1771e3ef678ae4ffae5ea1d64593bed90c85b9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 9 23:58:41.003947 containerd[2025]: time="2025-05-09T23:58:41.003632782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-52,Uid:142be92da19c0df4c3659904b49c5608,Namespace:kube-system,Attempt:0,} returns sandbox id \"5998ed0bdafce7ddfcf1a41c27976f75b96b53e7f9dd661dd10ab8b33d05d4cb\""
May 9 23:58:41.011684 containerd[2025]: time="2025-05-09T23:58:41.011281198Z" level=info msg="CreateContainer within sandbox \"5998ed0bdafce7ddfcf1a41c27976f75b96b53e7f9dd661dd10ab8b33d05d4cb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 9 23:58:41.018421 containerd[2025]: time="2025-05-09T23:58:41.018222466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-52,Uid:6e06f0e6eb25eb8e2cabd3ca67ae300d,Namespace:kube-system,Attempt:0,} returns sandbox id \"e8160f2a3ee06a63bb425a3cdcbeeccff3a9c01a8799992bfa5120b729b14eb0\""
May 9 23:58:41.024465 containerd[2025]: time="2025-05-09T23:58:41.024412126Z" level=info msg="CreateContainer within sandbox \"e8160f2a3ee06a63bb425a3cdcbeeccff3a9c01a8799992bfa5120b729b14eb0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 9 23:58:41.033918 containerd[2025]: time="2025-05-09T23:58:41.033704734Z" level=info msg="CreateContainer within sandbox \"be340cefcf6fc50f1cd13afb2d1771e3ef678ae4ffae5ea1d64593bed90c85b9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2480b5124e61ae680fb48501b6e5089cc657cf912e57bee6d93280bd35af564f\""
May 9 23:58:41.034767 containerd[2025]: time="2025-05-09T23:58:41.034707658Z" level=info msg="StartContainer for \"2480b5124e61ae680fb48501b6e5089cc657cf912e57bee6d93280bd35af564f\""
May 9 23:58:41.060467 containerd[2025]: time="2025-05-09T23:58:41.059153134Z" level=info msg="CreateContainer within sandbox \"5998ed0bdafce7ddfcf1a41c27976f75b96b53e7f9dd661dd10ab8b33d05d4cb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4179eadf24c4c9767eb19f9dc7460402f62b61429b6c38e8b0594c8686f9d61f\""
May 9 23:58:41.060467 containerd[2025]: time="2025-05-09T23:58:41.060131758Z" level=info msg="StartContainer for \"4179eadf24c4c9767eb19f9dc7460402f62b61429b6c38e8b0594c8686f9d61f\""
May 9 23:58:41.073998 containerd[2025]: time="2025-05-09T23:58:41.073832446Z" level=info msg="CreateContainer within sandbox \"e8160f2a3ee06a63bb425a3cdcbeeccff3a9c01a8799992bfa5120b729b14eb0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3902ffa7e9c565c77615ab493f53394711f1cf67461455c577ff3427ab75b83c\""
May 9 23:58:41.075360 containerd[2025]: time="2025-05-09T23:58:41.074971594Z" level=info msg="StartContainer for \"3902ffa7e9c565c77615ab493f53394711f1cf67461455c577ff3427ab75b83c\""
May 9 23:58:41.091264 systemd[1]: Started cri-containerd-2480b5124e61ae680fb48501b6e5089cc657cf912e57bee6d93280bd35af564f.scope - libcontainer container 2480b5124e61ae680fb48501b6e5089cc657cf912e57bee6d93280bd35af564f.
May 9 23:58:41.155319 kubelet[2840]: W0509 23:58:41.154016 2840 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.18.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.52:6443: connect: connection refused
May 9 23:58:41.155319 kubelet[2840]: E0509 23:58:41.154119 2840 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.18.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.18.52:6443: connect: connection refused" logger="UnhandledError"
May 9 23:58:41.157117 systemd[1]: Started cri-containerd-3902ffa7e9c565c77615ab493f53394711f1cf67461455c577ff3427ab75b83c.scope - libcontainer container 3902ffa7e9c565c77615ab493f53394711f1cf67461455c577ff3427ab75b83c.
May 9 23:58:41.171759 kubelet[2840]: I0509 23:58:41.171719 2840 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-18-52"
May 9 23:58:41.172473 kubelet[2840]: E0509 23:58:41.172410 2840 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.18.52:6443/api/v1/nodes\": dial tcp 172.31.18.52:6443: connect: connection refused" node="ip-172-31-18-52"
May 9 23:58:41.173788 systemd[1]: Started cri-containerd-4179eadf24c4c9767eb19f9dc7460402f62b61429b6c38e8b0594c8686f9d61f.scope - libcontainer container 4179eadf24c4c9767eb19f9dc7460402f62b61429b6c38e8b0594c8686f9d61f.
May 9 23:58:41.208326 containerd[2025]: time="2025-05-09T23:58:41.207681023Z" level=info msg="StartContainer for \"2480b5124e61ae680fb48501b6e5089cc657cf912e57bee6d93280bd35af564f\" returns successfully"
May 9 23:58:41.313826 containerd[2025]: time="2025-05-09T23:58:41.312543372Z" level=info msg="StartContainer for \"3902ffa7e9c565c77615ab493f53394711f1cf67461455c577ff3427ab75b83c\" returns successfully"
May 9 23:58:41.325514 containerd[2025]: time="2025-05-09T23:58:41.325440180Z" level=info msg="StartContainer for \"4179eadf24c4c9767eb19f9dc7460402f62b61429b6c38e8b0594c8686f9d61f\" returns successfully"
May 9 23:58:42.776339 kubelet[2840]: I0509 23:58:42.774354 2840 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-18-52"
May 9 23:58:44.983953 kubelet[2840]: E0509 23:58:44.983881 2840 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-18-52\" not found" node="ip-172-31-18-52"
May 9 23:58:45.067744 kubelet[2840]: I0509 23:58:45.067086 2840 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-18-52"
May 9 23:58:45.109702 kubelet[2840]: E0509 23:58:45.109547 2840 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-18-52.183e01493ca61bdb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-52,UID:ip-172-31-18-52,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-52,},FirstTimestamp:2025-05-09 23:58:39.528090587 +0000 UTC m=+0.721531025,LastTimestamp:2025-05-09 23:58:39.528090587 +0000 UTC m=+0.721531025,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-52,}"
May 9 23:58:45.169323 kubelet[2840]: E0509 23:58:45.167088 2840 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-18-52.183e01493db3c2fb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-52,UID:ip-172-31-18-52,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-18-52,},FirstTimestamp:2025-05-09 23:58:39.545762555 +0000 UTC m=+0.739203017,LastTimestamp:2025-05-09 23:58:39.545762555 +0000 UTC m=+0.739203017,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-52,}"
May 9 23:58:45.526411 kubelet[2840]: I0509 23:58:45.526364 2840 apiserver.go:52] "Watching apiserver"
May 9 23:58:45.555402 kubelet[2840]: I0509 23:58:45.555338 2840 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
May 9 23:58:45.874303 update_engine[1997]: I20250509 23:58:45.874207 1997 update_attempter.cc:509] Updating boot flags...
May 9 23:58:45.996347 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (3126)
May 9 23:58:46.398328 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (3128)
May 9 23:58:47.282554 systemd[1]: Reloading requested from client PID 3295 ('systemctl') (unit session-9.scope)...
May 9 23:58:47.283045 systemd[1]: Reloading...
May 9 23:58:47.492375 zram_generator::config[3347]: No configuration found.
May 9 23:58:47.702318 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 9 23:58:47.907940 systemd[1]: Reloading finished in 624 ms.
May 9 23:58:47.995676 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 9 23:58:48.012050 systemd[1]: kubelet.service: Deactivated successfully.
May 9 23:58:48.012500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 9 23:58:48.012594 systemd[1]: kubelet.service: Consumed 1.427s CPU time, 116.1M memory peak, 0B memory swap peak.
May 9 23:58:48.018786 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 9 23:58:48.319856 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 9 23:58:48.339577 (kubelet)[3395]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 9 23:58:48.428364 kubelet[3395]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 9 23:58:48.428364 kubelet[3395]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 9 23:58:48.428364 kubelet[3395]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 9 23:58:48.428364 kubelet[3395]: I0509 23:58:48.427582 3395 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 9 23:58:48.439664 kubelet[3395]: I0509 23:58:48.439603 3395 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
May 9 23:58:48.439867 kubelet[3395]: I0509 23:58:48.439846 3395 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 9 23:58:48.440481 kubelet[3395]: I0509 23:58:48.440450 3395 server.go:929] "Client rotation is on, will bootstrap in background"
May 9 23:58:48.443513 kubelet[3395]: I0509 23:58:48.443095 3395 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 9 23:58:48.458320 kubelet[3395]: I0509 23:58:48.457830 3395 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 9 23:58:48.469412 kubelet[3395]: E0509 23:58:48.468904 3395 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 9 23:58:48.469412 kubelet[3395]: I0509 23:58:48.468980 3395 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 9 23:58:48.474336 kubelet[3395]: I0509 23:58:48.474244 3395 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 9 23:58:48.474554 kubelet[3395]: I0509 23:58:48.474481 3395 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 9 23:58:48.474755 kubelet[3395]: I0509 23:58:48.474704 3395 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 9 23:58:48.475045 kubelet[3395]: I0509 23:58:48.474756 3395 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-52","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 9 23:58:48.475200 kubelet[3395]: I0509 23:58:48.475055 3395 topology_manager.go:138] "Creating topology manager with none policy"
May 9 23:58:48.475200 kubelet[3395]: I0509 23:58:48.475076 3395 container_manager_linux.go:300] "Creating device plugin manager"
May 9 23:58:48.475200 kubelet[3395]: I0509 23:58:48.475127 3395 state_mem.go:36] "Initialized new in-memory state store"
May 9 23:58:48.475397 kubelet[3395]: I0509 23:58:48.475335 3395 kubelet.go:408] "Attempting to sync node with API server"
May 9 23:58:48.475397 kubelet[3395]: I0509 23:58:48.475361 3395 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 9 23:58:48.477382 kubelet[3395]: I0509 23:58:48.475404 3395 kubelet.go:314] "Adding apiserver pod source"
May 9 23:58:48.477382 kubelet[3395]: I0509 23:58:48.475424 3395 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 9 23:58:48.477374 sudo[3408]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
May 9 23:58:48.478010 sudo[3408]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
May 9 23:58:48.482554 kubelet[3395]: I0509 23:58:48.482502 3395 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
May 9 23:58:48.490331 kubelet[3395]: I0509 23:58:48.483276 3395 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 9 23:58:48.490331 kubelet[3395]: I0509 23:58:48.490183 3395 server.go:1269] "Started kubelet"
May 9 23:58:48.494357 kubelet[3395]: I0509 23:58:48.493420 3395 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 9 23:58:48.495333 kubelet[3395]: I0509 23:58:48.495304 3395 server.go:460] "Adding debug handlers to kubelet server"
May 9 23:58:48.498611 kubelet[3395]: I0509 23:58:48.496996 3395 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 9 23:58:48.499129 kubelet[3395]: I0509 23:58:48.499096 3395 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 9 23:58:48.499386 kubelet[3395]: I0509 23:58:48.498729 3395 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 9 23:58:48.500345 kubelet[3395]: I0509 23:58:48.499873 3395 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 9 23:58:48.512008 kubelet[3395]: I0509 23:58:48.509657 3395 volume_manager.go:289] "Starting Kubelet Volume Manager"
May 9 23:58:48.512008 kubelet[3395]: E0509 23:58:48.510009 3395 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-18-52\" not found"
May 9 23:58:48.512455 kubelet[3395]: I0509 23:58:48.512429 3395 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
May 9 23:58:48.514480 kubelet[3395]: I0509 23:58:48.514343 3395 reconciler.go:26] "Reconciler: start to sync state"
May 9 23:58:48.542770 kubelet[3395]: I0509 23:58:48.541846 3395 factory.go:221] Registration of the systemd container factory successfully
May 9 23:58:48.542770 kubelet[3395]: I0509 23:58:48.542047 3395 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 9 23:58:48.551866 kubelet[3395]: I0509 23:58:48.551810 3395 factory.go:221] Registration of the containerd container factory successfully
May 9 23:58:48.574808 kubelet[3395]: I0509 23:58:48.574255 3395 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 9 23:58:48.577303 kubelet[3395]: I0509 23:58:48.577243 3395 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 9 23:58:48.577303 kubelet[3395]: I0509 23:58:48.577347 3395 status_manager.go:217] "Starting to sync pod status with apiserver"
May 9 23:58:48.577303 kubelet[3395]: I0509 23:58:48.577384 3395 kubelet.go:2321] "Starting kubelet main sync loop"
May 9 23:58:48.577303 kubelet[3395]: E0509 23:58:48.577469 3395 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 9 23:58:48.620138 kubelet[3395]: E0509 23:58:48.620085 3395 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 9 23:58:48.681351 kubelet[3395]: E0509 23:58:48.677748 3395 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 9 23:58:48.741621 kubelet[3395]: I0509 23:58:48.741571 3395 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 9 23:58:48.741621 kubelet[3395]: I0509 23:58:48.741606 3395 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 9 23:58:48.741621 kubelet[3395]: I0509 23:58:48.741642 3395 state_mem.go:36] "Initialized new in-memory state store"
May 9 23:58:48.741932 kubelet[3395]: I0509 23:58:48.741879 3395 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 9 23:58:48.741932 kubelet[3395]: I0509 23:58:48.741900 3395 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 9 23:58:48.741932 kubelet[3395]: I0509 23:58:48.741931 3395 policy_none.go:49] "None policy: Start"
May 9 23:58:48.746113 kubelet[3395]: I0509 23:58:48.746068 3395 memory_manager.go:170] "Starting memorymanager" policy="None"
May 9 23:58:48.746228 kubelet[3395]: I0509 23:58:48.746134 3395 state_mem.go:35] "Initializing new in-memory state store"
May 9 23:58:48.747572 kubelet[3395]: I0509 23:58:48.747431 3395 state_mem.go:75] "Updated machine memory state"
May 9 23:58:48.764978 kubelet[3395]: I0509 23:58:48.764931 3395 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 9 23:58:48.765403 kubelet[3395]: I0509 23:58:48.765225 3395 eviction_manager.go:189] "Eviction manager: starting control loop"
May 9 23:58:48.768098 kubelet[3395]: I0509 23:58:48.765268 3395 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 9 23:58:48.768098 kubelet[3395]: I0509 23:58:48.767945 3395 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 9 23:58:48.896141 kubelet[3395]: I0509 23:58:48.895137 3395 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-18-52"
May 9 23:58:48.900139 kubelet[3395]: E0509 23:58:48.899522 3395 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-18-52\" already exists" pod="kube-system/kube-scheduler-ip-172-31-18-52"
May 9 23:58:48.911714 kubelet[3395]: E0509 23:58:48.909519 3395 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-18-52\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-18-52"
May 9 23:58:48.921130 kubelet[3395]: I0509 23:58:48.917406 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d7d8a92f58088e51a0ce9d5c0e9a01c5-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-52\" (UID: \"d7d8a92f58088e51a0ce9d5c0e9a01c5\") " pod="kube-system/kube-apiserver-ip-172-31-18-52"
May 9 23:58:48.921130 kubelet[3395]: I0509 23:58:48.917510 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/142be92da19c0df4c3659904b49c5608-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-52\" (UID: \"142be92da19c0df4c3659904b49c5608\") " pod="kube-system/kube-controller-manager-ip-172-31-18-52"
May 9 23:58:48.921130 kubelet[3395]: I0509 23:58:48.917586 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/142be92da19c0df4c3659904b49c5608-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-52\" (UID: \"142be92da19c0df4c3659904b49c5608\") " pod="kube-system/kube-controller-manager-ip-172-31-18-52"
May 9 23:58:48.921130 kubelet[3395]: I0509 23:58:48.917670 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d7d8a92f58088e51a0ce9d5c0e9a01c5-ca-certs\") pod \"kube-apiserver-ip-172-31-18-52\" (UID: \"d7d8a92f58088e51a0ce9d5c0e9a01c5\") " pod="kube-system/kube-apiserver-ip-172-31-18-52"
May 9 23:58:48.921130 kubelet[3395]: I0509 23:58:48.917749 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d7d8a92f58088e51a0ce9d5c0e9a01c5-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-52\" (UID: \"d7d8a92f58088e51a0ce9d5c0e9a01c5\") " pod="kube-system/kube-apiserver-ip-172-31-18-52"
May 9 23:58:48.921983 kubelet[3395]: I0509 23:58:48.917791 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/142be92da19c0df4c3659904b49c5608-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-52\" (UID: \"142be92da19c0df4c3659904b49c5608\") " pod="kube-system/kube-controller-manager-ip-172-31-18-52"
May 9 23:58:48.921983 kubelet[3395]: I0509 23:58:48.917868 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/142be92da19c0df4c3659904b49c5608-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-52\" (UID: \"142be92da19c0df4c3659904b49c5608\") " pod="kube-system/kube-controller-manager-ip-172-31-18-52"
May 9 23:58:48.921983 kubelet[3395]: I0509 23:58:48.917924 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/142be92da19c0df4c3659904b49c5608-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-52\" (UID: \"142be92da19c0df4c3659904b49c5608\") " pod="kube-system/kube-controller-manager-ip-172-31-18-52"
May 9 23:58:48.921983 kubelet[3395]: I0509 23:58:48.918003 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e06f0e6eb25eb8e2cabd3ca67ae300d-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-52\" (UID: \"6e06f0e6eb25eb8e2cabd3ca67ae300d\") " pod="kube-system/kube-scheduler-ip-172-31-18-52"
May 9 23:58:48.921983 kubelet[3395]: I0509 23:58:48.920619 3395 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-18-52"
May 9 23:58:48.921983 kubelet[3395]: I0509 23:58:48.920729 3395 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-18-52"
May 9 23:58:49.466077 sudo[3408]: pam_unix(sudo:session): session closed for user root
May 9 23:58:49.477362 kubelet[3395]: I0509 23:58:49.476581 3395 apiserver.go:52] "Watching apiserver"
May 9 23:58:49.513684 kubelet[3395]: I0509 23:58:49.513605 3395 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
May 9 23:58:49.671143 kubelet[3395]: E0509 23:58:49.671072 3395 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-18-52\" already exists" pod="kube-system/kube-apiserver-ip-172-31-18-52"
May 9 23:58:49.695338 kubelet[3395]: I0509 23:58:49.694695 3395 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-18-52" podStartSLOduration=4.694673049 podStartE2EDuration="4.694673049s" podCreationTimestamp="2025-05-09 23:58:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 23:58:49.693111345 +0000 UTC m=+1.347586028" watchObservedRunningTime="2025-05-09 23:58:49.694673049 +0000 UTC m=+1.349147696"
May 9 23:58:49.726418 kubelet[3395]: I0509 23:58:49.724735 3395 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-18-52" podStartSLOduration=1.7247144730000001 podStartE2EDuration="1.724714473s" podCreationTimestamp="2025-05-09 23:58:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 23:58:49.710677689 +0000 UTC m=+1.365152372" watchObservedRunningTime="2025-05-09 23:58:49.724714473 +0000 UTC m=+1.379189120"
May 9 23:58:49.744309 kubelet[3395]: I0509 23:58:49.743870 3395 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-18-52" podStartSLOduration=2.7438490250000003 podStartE2EDuration="2.743849025s" podCreationTimestamp="2025-05-09 23:58:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 23:58:49.726501225 +0000 UTC m=+1.380975872" watchObservedRunningTime="2025-05-09 23:58:49.743849025 +0000 UTC m=+1.398323696"
May 9 23:58:51.737893 sudo[2349]: pam_unix(sudo:session): session closed for user root
May 9 23:58:51.761652 sshd[2343]: pam_unix(sshd:session): session closed for user core
May 9 23:58:51.767331 systemd-logind[1993]: Session 9 logged out. Waiting for processes to exit.
May 9 23:58:51.767660 systemd[1]: sshd@8-172.31.18.52:22-147.75.109.163:54806.service: Deactivated successfully.
May 9 23:58:51.773609 systemd[1]: session-9.scope: Deactivated successfully.
May 9 23:58:51.775372 systemd[1]: session-9.scope: Consumed 10.144s CPU time, 153.5M memory peak, 0B memory swap peak.
May 9 23:58:51.778585 systemd-logind[1993]: Removed session 9.
May 9 23:58:53.352038 kubelet[3395]: I0509 23:58:53.351982 3395 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 9 23:58:53.352766 containerd[2025]: time="2025-05-09T23:58:53.352491935Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 9 23:58:53.353277 kubelet[3395]: I0509 23:58:53.353242 3395 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 9 23:58:54.083338 systemd[1]: Created slice kubepods-besteffort-pod7dabd07b_aabc_407e_b3c6_9d9886e5e28d.slice - libcontainer container kubepods-besteffort-pod7dabd07b_aabc_407e_b3c6_9d9886e5e28d.slice.
May 9 23:58:54.120167 systemd[1]: Created slice kubepods-burstable-pod7f2c72c8_3a0e_4b35_9e8f_ccb59ed723d0.slice - libcontainer container kubepods-burstable-pod7f2c72c8_3a0e_4b35_9e8f_ccb59ed723d0.slice.
May 9 23:58:54.146527 kubelet[3395]: I0509 23:58:54.145580 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0-xtables-lock\") pod \"cilium-dtlfl\" (UID: \"7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0\") " pod="kube-system/cilium-dtlfl" May 9 23:58:54.146527 kubelet[3395]: I0509 23:58:54.145642 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc7z2\" (UniqueName: \"kubernetes.io/projected/7dabd07b-aabc-407e-b3c6-9d9886e5e28d-kube-api-access-tc7z2\") pod \"kube-proxy-r8xch\" (UID: \"7dabd07b-aabc-407e-b3c6-9d9886e5e28d\") " pod="kube-system/kube-proxy-r8xch" May 9 23:58:54.146527 kubelet[3395]: I0509 23:58:54.145688 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0-cilium-run\") pod \"cilium-dtlfl\" (UID: \"7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0\") " pod="kube-system/cilium-dtlfl" May 9 23:58:54.146527 kubelet[3395]: I0509 23:58:54.145730 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0-etc-cni-netd\") pod \"cilium-dtlfl\" (UID: \"7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0\") " pod="kube-system/cilium-dtlfl" May 9 23:58:54.146527 kubelet[3395]: I0509 23:58:54.145768 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0-lib-modules\") pod \"cilium-dtlfl\" (UID: \"7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0\") " pod="kube-system/cilium-dtlfl" May 9 23:58:54.146527 kubelet[3395]: I0509 23:58:54.145802 3395 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwggq\" (UniqueName: \"kubernetes.io/projected/7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0-kube-api-access-zwggq\") pod \"cilium-dtlfl\" (UID: \"7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0\") " pod="kube-system/cilium-dtlfl" May 9 23:58:54.146948 kubelet[3395]: I0509 23:58:54.145836 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7dabd07b-aabc-407e-b3c6-9d9886e5e28d-xtables-lock\") pod \"kube-proxy-r8xch\" (UID: \"7dabd07b-aabc-407e-b3c6-9d9886e5e28d\") " pod="kube-system/kube-proxy-r8xch" May 9 23:58:54.146948 kubelet[3395]: I0509 23:58:54.145869 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0-bpf-maps\") pod \"cilium-dtlfl\" (UID: \"7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0\") " pod="kube-system/cilium-dtlfl" May 9 23:58:54.146948 kubelet[3395]: I0509 23:58:54.145907 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0-hostproc\") pod \"cilium-dtlfl\" (UID: \"7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0\") " pod="kube-system/cilium-dtlfl" May 9 23:58:54.146948 kubelet[3395]: I0509 23:58:54.145941 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0-cni-path\") pod \"cilium-dtlfl\" (UID: \"7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0\") " pod="kube-system/cilium-dtlfl" May 9 23:58:54.146948 kubelet[3395]: I0509 23:58:54.145973 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0-host-proc-sys-kernel\") pod \"cilium-dtlfl\" (UID: \"7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0\") " pod="kube-system/cilium-dtlfl" May 9 23:58:54.146948 kubelet[3395]: I0509 23:58:54.146010 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0-hubble-tls\") pod \"cilium-dtlfl\" (UID: \"7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0\") " pod="kube-system/cilium-dtlfl" May 9 23:58:54.147279 kubelet[3395]: I0509 23:58:54.146043 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0-clustermesh-secrets\") pod \"cilium-dtlfl\" (UID: \"7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0\") " pod="kube-system/cilium-dtlfl" May 9 23:58:54.147279 kubelet[3395]: I0509 23:58:54.146076 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0-cilium-config-path\") pod \"cilium-dtlfl\" (UID: \"7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0\") " pod="kube-system/cilium-dtlfl" May 9 23:58:54.147279 kubelet[3395]: I0509 23:58:54.146109 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7dabd07b-aabc-407e-b3c6-9d9886e5e28d-kube-proxy\") pod \"kube-proxy-r8xch\" (UID: \"7dabd07b-aabc-407e-b3c6-9d9886e5e28d\") " pod="kube-system/kube-proxy-r8xch" May 9 23:58:54.147279 kubelet[3395]: I0509 23:58:54.146148 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0-cilium-cgroup\") pod \"cilium-dtlfl\" (UID: 
\"7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0\") " pod="kube-system/cilium-dtlfl" May 9 23:58:54.147279 kubelet[3395]: I0509 23:58:54.146186 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0-host-proc-sys-net\") pod \"cilium-dtlfl\" (UID: \"7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0\") " pod="kube-system/cilium-dtlfl" May 9 23:58:54.147951 kubelet[3395]: I0509 23:58:54.146222 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7dabd07b-aabc-407e-b3c6-9d9886e5e28d-lib-modules\") pod \"kube-proxy-r8xch\" (UID: \"7dabd07b-aabc-407e-b3c6-9d9886e5e28d\") " pod="kube-system/kube-proxy-r8xch" May 9 23:58:54.400465 containerd[2025]: time="2025-05-09T23:58:54.400397185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r8xch,Uid:7dabd07b-aabc-407e-b3c6-9d9886e5e28d,Namespace:kube-system,Attempt:0,}" May 9 23:58:54.431446 containerd[2025]: time="2025-05-09T23:58:54.429589045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dtlfl,Uid:7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0,Namespace:kube-system,Attempt:0,}" May 9 23:58:54.450335 kubelet[3395]: I0509 23:58:54.449138 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqghp\" (UniqueName: \"kubernetes.io/projected/2fbe43f7-f78c-4e45-8a28-e8b093d88025-kube-api-access-tqghp\") pod \"cilium-operator-5d85765b45-xlpxp\" (UID: \"2fbe43f7-f78c-4e45-8a28-e8b093d88025\") " pod="kube-system/cilium-operator-5d85765b45-xlpxp" May 9 23:58:54.450335 kubelet[3395]: I0509 23:58:54.449205 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2fbe43f7-f78c-4e45-8a28-e8b093d88025-cilium-config-path\") pod 
\"cilium-operator-5d85765b45-xlpxp\" (UID: \"2fbe43f7-f78c-4e45-8a28-e8b093d88025\") " pod="kube-system/cilium-operator-5d85765b45-xlpxp" May 9 23:58:54.461316 systemd[1]: Created slice kubepods-besteffort-pod2fbe43f7_f78c_4e45_8a28_e8b093d88025.slice - libcontainer container kubepods-besteffort-pod2fbe43f7_f78c_4e45_8a28_e8b093d88025.slice. May 9 23:58:54.505572 containerd[2025]: time="2025-05-09T23:58:54.505384909Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:58:54.505748 containerd[2025]: time="2025-05-09T23:58:54.505596265Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:58:54.505748 containerd[2025]: time="2025-05-09T23:58:54.505676221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:58:54.506853 containerd[2025]: time="2025-05-09T23:58:54.506672677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:58:54.566593 systemd[1]: Started cri-containerd-f347088182740811ec955daa0ac6fec81d3a84ba379ee9dd9a61679038aedb7e.scope - libcontainer container f347088182740811ec955daa0ac6fec81d3a84ba379ee9dd9a61679038aedb7e. May 9 23:58:54.571102 containerd[2025]: time="2025-05-09T23:58:54.570968629Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:58:54.571390 containerd[2025]: time="2025-05-09T23:58:54.571077265Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:58:54.571390 containerd[2025]: time="2025-05-09T23:58:54.571141441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:58:54.573077 containerd[2025]: time="2025-05-09T23:58:54.572148109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:58:54.624610 systemd[1]: Started cri-containerd-2f032de73b7824a6dde9a64552ad00e8dba7dc64ea5d007881d22e9bee7b5bff.scope - libcontainer container 2f032de73b7824a6dde9a64552ad00e8dba7dc64ea5d007881d22e9bee7b5bff. May 9 23:58:54.640463 containerd[2025]: time="2025-05-09T23:58:54.640341374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r8xch,Uid:7dabd07b-aabc-407e-b3c6-9d9886e5e28d,Namespace:kube-system,Attempt:0,} returns sandbox id \"f347088182740811ec955daa0ac6fec81d3a84ba379ee9dd9a61679038aedb7e\"" May 9 23:58:54.648628 containerd[2025]: time="2025-05-09T23:58:54.648580850Z" level=info msg="CreateContainer within sandbox \"f347088182740811ec955daa0ac6fec81d3a84ba379ee9dd9a61679038aedb7e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 9 23:58:54.695950 containerd[2025]: time="2025-05-09T23:58:54.694732934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dtlfl,Uid:7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f032de73b7824a6dde9a64552ad00e8dba7dc64ea5d007881d22e9bee7b5bff\"" May 9 23:58:54.702674 containerd[2025]: time="2025-05-09T23:58:54.702497258Z" level=info msg="CreateContainer within sandbox \"f347088182740811ec955daa0ac6fec81d3a84ba379ee9dd9a61679038aedb7e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d5748b563a551bef730e7ac34a350db7737307d92b84f9b0535ba5abc883ef49\"" May 9 23:58:54.703674 containerd[2025]: time="2025-05-09T23:58:54.703251986Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 9 23:58:54.704274 containerd[2025]: time="2025-05-09T23:58:54.704227982Z" 
level=info msg="StartContainer for \"d5748b563a551bef730e7ac34a350db7737307d92b84f9b0535ba5abc883ef49\"" May 9 23:58:54.759651 systemd[1]: Started cri-containerd-d5748b563a551bef730e7ac34a350db7737307d92b84f9b0535ba5abc883ef49.scope - libcontainer container d5748b563a551bef730e7ac34a350db7737307d92b84f9b0535ba5abc883ef49. May 9 23:58:54.776233 containerd[2025]: time="2025-05-09T23:58:54.775741682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-xlpxp,Uid:2fbe43f7-f78c-4e45-8a28-e8b093d88025,Namespace:kube-system,Attempt:0,}" May 9 23:58:54.817656 containerd[2025]: time="2025-05-09T23:58:54.817591755Z" level=info msg="StartContainer for \"d5748b563a551bef730e7ac34a350db7737307d92b84f9b0535ba5abc883ef49\" returns successfully" May 9 23:58:54.842065 containerd[2025]: time="2025-05-09T23:58:54.841871691Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:58:54.842065 containerd[2025]: time="2025-05-09T23:58:54.841996227Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:58:54.842065 containerd[2025]: time="2025-05-09T23:58:54.842024163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:58:54.842842 containerd[2025]: time="2025-05-09T23:58:54.842624439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:58:54.879618 systemd[1]: Started cri-containerd-f19189551dbebbd804bae46dc4fbb4814c9050c9db7e14170f941ccb32bc02ca.scope - libcontainer container f19189551dbebbd804bae46dc4fbb4814c9050c9db7e14170f941ccb32bc02ca. 
May 9 23:58:54.994554 containerd[2025]: time="2025-05-09T23:58:54.993655959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-xlpxp,Uid:2fbe43f7-f78c-4e45-8a28-e8b093d88025,Namespace:kube-system,Attempt:0,} returns sandbox id \"f19189551dbebbd804bae46dc4fbb4814c9050c9db7e14170f941ccb32bc02ca\"" May 9 23:58:55.724382 kubelet[3395]: I0509 23:58:55.724004 3395 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-r8xch" podStartSLOduration=1.723856635 podStartE2EDuration="1.723856635s" podCreationTimestamp="2025-05-09 23:58:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 23:58:55.700976415 +0000 UTC m=+7.355451086" watchObservedRunningTime="2025-05-09 23:58:55.723856635 +0000 UTC m=+7.378331294" May 9 23:59:05.319809 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2378175134.mount: Deactivated successfully. 
May 9 23:59:07.765453 containerd[2025]: time="2025-05-09T23:59:07.765390747Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:59:07.769121 containerd[2025]: time="2025-05-09T23:59:07.769053711Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" May 9 23:59:07.771616 containerd[2025]: time="2025-05-09T23:59:07.771529119Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:59:07.776239 containerd[2025]: time="2025-05-09T23:59:07.775008699Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 13.071663365s" May 9 23:59:07.776239 containerd[2025]: time="2025-05-09T23:59:07.775116195Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 9 23:59:07.778966 containerd[2025]: time="2025-05-09T23:59:07.778898343Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 9 23:59:07.781380 containerd[2025]: time="2025-05-09T23:59:07.781326375Z" level=info msg="CreateContainer within sandbox \"2f032de73b7824a6dde9a64552ad00e8dba7dc64ea5d007881d22e9bee7b5bff\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 9 23:59:07.821002 containerd[2025]: time="2025-05-09T23:59:07.820922151Z" level=info msg="CreateContainer within sandbox \"2f032de73b7824a6dde9a64552ad00e8dba7dc64ea5d007881d22e9bee7b5bff\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"11be07c1e4064fbea01b9edda5e56dfa4843037e9a1687df01f4d1d25521d908\"" May 9 23:59:07.825216 containerd[2025]: time="2025-05-09T23:59:07.825153459Z" level=info msg="StartContainer for \"11be07c1e4064fbea01b9edda5e56dfa4843037e9a1687df01f4d1d25521d908\"" May 9 23:59:07.900588 systemd[1]: Started cri-containerd-11be07c1e4064fbea01b9edda5e56dfa4843037e9a1687df01f4d1d25521d908.scope - libcontainer container 11be07c1e4064fbea01b9edda5e56dfa4843037e9a1687df01f4d1d25521d908. May 9 23:59:07.949830 containerd[2025]: time="2025-05-09T23:59:07.949768108Z" level=info msg="StartContainer for \"11be07c1e4064fbea01b9edda5e56dfa4843037e9a1687df01f4d1d25521d908\" returns successfully" May 9 23:59:07.973533 systemd[1]: cri-containerd-11be07c1e4064fbea01b9edda5e56dfa4843037e9a1687df01f4d1d25521d908.scope: Deactivated successfully. May 9 23:59:08.810212 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-11be07c1e4064fbea01b9edda5e56dfa4843037e9a1687df01f4d1d25521d908-rootfs.mount: Deactivated successfully. 
May 9 23:59:09.128028 containerd[2025]: time="2025-05-09T23:59:09.127928990Z" level=info msg="shim disconnected" id=11be07c1e4064fbea01b9edda5e56dfa4843037e9a1687df01f4d1d25521d908 namespace=k8s.io May 9 23:59:09.128028 containerd[2025]: time="2025-05-09T23:59:09.128020142Z" level=warning msg="cleaning up after shim disconnected" id=11be07c1e4064fbea01b9edda5e56dfa4843037e9a1687df01f4d1d25521d908 namespace=k8s.io May 9 23:59:09.129012 containerd[2025]: time="2025-05-09T23:59:09.128044790Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 23:59:09.718558 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3909348215.mount: Deactivated successfully. May 9 23:59:09.741083 containerd[2025]: time="2025-05-09T23:59:09.739780145Z" level=info msg="CreateContainer within sandbox \"2f032de73b7824a6dde9a64552ad00e8dba7dc64ea5d007881d22e9bee7b5bff\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 9 23:59:09.786540 containerd[2025]: time="2025-05-09T23:59:09.786468401Z" level=info msg="CreateContainer within sandbox \"2f032de73b7824a6dde9a64552ad00e8dba7dc64ea5d007881d22e9bee7b5bff\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"63bcf2ecf5fe277bb5f9b27dd1ccf80f3c0a6b53aaf6cd109d7b49adb425fa11\"" May 9 23:59:09.788336 containerd[2025]: time="2025-05-09T23:59:09.788062121Z" level=info msg="StartContainer for \"63bcf2ecf5fe277bb5f9b27dd1ccf80f3c0a6b53aaf6cd109d7b49adb425fa11\"" May 9 23:59:09.870848 systemd[1]: Started cri-containerd-63bcf2ecf5fe277bb5f9b27dd1ccf80f3c0a6b53aaf6cd109d7b49adb425fa11.scope - libcontainer container 63bcf2ecf5fe277bb5f9b27dd1ccf80f3c0a6b53aaf6cd109d7b49adb425fa11. May 9 23:59:09.934129 containerd[2025]: time="2025-05-09T23:59:09.933223374Z" level=info msg="StartContainer for \"63bcf2ecf5fe277bb5f9b27dd1ccf80f3c0a6b53aaf6cd109d7b49adb425fa11\" returns successfully" May 9 23:59:09.957595 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
May 9 23:59:09.958519 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 9 23:59:09.958648 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 9 23:59:09.971729 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 9 23:59:09.977680 systemd[1]: cri-containerd-63bcf2ecf5fe277bb5f9b27dd1ccf80f3c0a6b53aaf6cd109d7b49adb425fa11.scope: Deactivated successfully. May 9 23:59:10.019907 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 9 23:59:10.053723 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-63bcf2ecf5fe277bb5f9b27dd1ccf80f3c0a6b53aaf6cd109d7b49adb425fa11-rootfs.mount: Deactivated successfully. May 9 23:59:10.082258 containerd[2025]: time="2025-05-09T23:59:10.081939710Z" level=info msg="shim disconnected" id=63bcf2ecf5fe277bb5f9b27dd1ccf80f3c0a6b53aaf6cd109d7b49adb425fa11 namespace=k8s.io May 9 23:59:10.082258 containerd[2025]: time="2025-05-09T23:59:10.082012514Z" level=warning msg="cleaning up after shim disconnected" id=63bcf2ecf5fe277bb5f9b27dd1ccf80f3c0a6b53aaf6cd109d7b49adb425fa11 namespace=k8s.io May 9 23:59:10.082258 containerd[2025]: time="2025-05-09T23:59:10.082033034Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 23:59:10.550970 containerd[2025]: time="2025-05-09T23:59:10.550910501Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:59:10.553428 containerd[2025]: time="2025-05-09T23:59:10.553354661Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" May 9 23:59:10.555713 containerd[2025]: time="2025-05-09T23:59:10.555643049Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:59:10.558769 containerd[2025]: time="2025-05-09T23:59:10.558559421Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.779594454s" May 9 23:59:10.558769 containerd[2025]: time="2025-05-09T23:59:10.558618473Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 9 23:59:10.563458 containerd[2025]: time="2025-05-09T23:59:10.562989701Z" level=info msg="CreateContainer within sandbox \"f19189551dbebbd804bae46dc4fbb4814c9050c9db7e14170f941ccb32bc02ca\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 9 23:59:10.592265 containerd[2025]: time="2025-05-09T23:59:10.592186121Z" level=info msg="CreateContainer within sandbox \"f19189551dbebbd804bae46dc4fbb4814c9050c9db7e14170f941ccb32bc02ca\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9d6fdfc6aca73515cbdafb435ffb4debedbd6b33bfff5aea31abe21447ca5633\"" May 9 23:59:10.594165 containerd[2025]: time="2025-05-09T23:59:10.593576309Z" level=info msg="StartContainer for \"9d6fdfc6aca73515cbdafb435ffb4debedbd6b33bfff5aea31abe21447ca5633\"" May 9 23:59:10.638585 systemd[1]: Started cri-containerd-9d6fdfc6aca73515cbdafb435ffb4debedbd6b33bfff5aea31abe21447ca5633.scope - libcontainer container 9d6fdfc6aca73515cbdafb435ffb4debedbd6b33bfff5aea31abe21447ca5633. 
May 9 23:59:10.686528 containerd[2025]: time="2025-05-09T23:59:10.686380901Z" level=info msg="StartContainer for \"9d6fdfc6aca73515cbdafb435ffb4debedbd6b33bfff5aea31abe21447ca5633\" returns successfully" May 9 23:59:10.756537 containerd[2025]: time="2025-05-09T23:59:10.756215106Z" level=info msg="CreateContainer within sandbox \"2f032de73b7824a6dde9a64552ad00e8dba7dc64ea5d007881d22e9bee7b5bff\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 9 23:59:10.819194 containerd[2025]: time="2025-05-09T23:59:10.818980050Z" level=info msg="CreateContainer within sandbox \"2f032de73b7824a6dde9a64552ad00e8dba7dc64ea5d007881d22e9bee7b5bff\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"dd6b5e8eb31e3cda7393cc5c38e1d6808e98167bc224e6751f52f71459286f13\"" May 9 23:59:10.825451 containerd[2025]: time="2025-05-09T23:59:10.825373182Z" level=info msg="StartContainer for \"dd6b5e8eb31e3cda7393cc5c38e1d6808e98167bc224e6751f52f71459286f13\"" May 9 23:59:10.877175 kubelet[3395]: I0509 23:59:10.875857 3395 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-xlpxp" podStartSLOduration=1.313811629 podStartE2EDuration="16.875837442s" podCreationTimestamp="2025-05-09 23:58:54 +0000 UTC" firstStartedPulling="2025-05-09 23:58:54.998149204 +0000 UTC m=+6.652623863" lastFinishedPulling="2025-05-09 23:59:10.560175017 +0000 UTC m=+22.214649676" observedRunningTime="2025-05-09 23:59:10.779258934 +0000 UTC m=+22.433733617" watchObservedRunningTime="2025-05-09 23:59:10.875837442 +0000 UTC m=+22.530312089" May 9 23:59:10.924632 systemd[1]: Started cri-containerd-dd6b5e8eb31e3cda7393cc5c38e1d6808e98167bc224e6751f52f71459286f13.scope - libcontainer container dd6b5e8eb31e3cda7393cc5c38e1d6808e98167bc224e6751f52f71459286f13. 
May 9 23:59:10.993758 containerd[2025]: time="2025-05-09T23:59:10.993590719Z" level=info msg="StartContainer for \"dd6b5e8eb31e3cda7393cc5c38e1d6808e98167bc224e6751f52f71459286f13\" returns successfully" May 9 23:59:11.005007 systemd[1]: cri-containerd-dd6b5e8eb31e3cda7393cc5c38e1d6808e98167bc224e6751f52f71459286f13.scope: Deactivated successfully. May 9 23:59:11.078281 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd6b5e8eb31e3cda7393cc5c38e1d6808e98167bc224e6751f52f71459286f13-rootfs.mount: Deactivated successfully. May 9 23:59:11.162584 containerd[2025]: time="2025-05-09T23:59:11.162471388Z" level=info msg="shim disconnected" id=dd6b5e8eb31e3cda7393cc5c38e1d6808e98167bc224e6751f52f71459286f13 namespace=k8s.io May 9 23:59:11.162584 containerd[2025]: time="2025-05-09T23:59:11.162557608Z" level=warning msg="cleaning up after shim disconnected" id=dd6b5e8eb31e3cda7393cc5c38e1d6808e98167bc224e6751f52f71459286f13 namespace=k8s.io May 9 23:59:11.162584 containerd[2025]: time="2025-05-09T23:59:11.162581764Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 23:59:11.766796 containerd[2025]: time="2025-05-09T23:59:11.766709263Z" level=info msg="CreateContainer within sandbox \"2f032de73b7824a6dde9a64552ad00e8dba7dc64ea5d007881d22e9bee7b5bff\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 9 23:59:11.816728 containerd[2025]: time="2025-05-09T23:59:11.816667879Z" level=info msg="CreateContainer within sandbox \"2f032de73b7824a6dde9a64552ad00e8dba7dc64ea5d007881d22e9bee7b5bff\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6c6f3e5cb1b111cdb61ec4f1c18f332f59173cc22d7c7f8a3ad490ab6db6f3c6\"" May 9 23:59:11.818432 containerd[2025]: time="2025-05-09T23:59:11.818357359Z" level=info msg="StartContainer for \"6c6f3e5cb1b111cdb61ec4f1c18f332f59173cc22d7c7f8a3ad490ab6db6f3c6\"" May 9 23:59:11.908411 systemd[1]: 
run-containerd-runc-k8s.io-6c6f3e5cb1b111cdb61ec4f1c18f332f59173cc22d7c7f8a3ad490ab6db6f3c6-runc.Omt17z.mount: Deactivated successfully. May 9 23:59:11.926620 systemd[1]: Started cri-containerd-6c6f3e5cb1b111cdb61ec4f1c18f332f59173cc22d7c7f8a3ad490ab6db6f3c6.scope - libcontainer container 6c6f3e5cb1b111cdb61ec4f1c18f332f59173cc22d7c7f8a3ad490ab6db6f3c6. May 9 23:59:12.039858 containerd[2025]: time="2025-05-09T23:59:12.039704440Z" level=info msg="StartContainer for \"6c6f3e5cb1b111cdb61ec4f1c18f332f59173cc22d7c7f8a3ad490ab6db6f3c6\" returns successfully" May 9 23:59:12.042887 systemd[1]: cri-containerd-6c6f3e5cb1b111cdb61ec4f1c18f332f59173cc22d7c7f8a3ad490ab6db6f3c6.scope: Deactivated successfully. May 9 23:59:12.104255 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c6f3e5cb1b111cdb61ec4f1c18f332f59173cc22d7c7f8a3ad490ab6db6f3c6-rootfs.mount: Deactivated successfully. May 9 23:59:12.114037 containerd[2025]: time="2025-05-09T23:59:12.113898497Z" level=info msg="shim disconnected" id=6c6f3e5cb1b111cdb61ec4f1c18f332f59173cc22d7c7f8a3ad490ab6db6f3c6 namespace=k8s.io May 9 23:59:12.114037 containerd[2025]: time="2025-05-09T23:59:12.113977805Z" level=warning msg="cleaning up after shim disconnected" id=6c6f3e5cb1b111cdb61ec4f1c18f332f59173cc22d7c7f8a3ad490ab6db6f3c6 namespace=k8s.io May 9 23:59:12.114037 containerd[2025]: time="2025-05-09T23:59:12.113999801Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 23:59:12.782216 containerd[2025]: time="2025-05-09T23:59:12.782128820Z" level=info msg="CreateContainer within sandbox \"2f032de73b7824a6dde9a64552ad00e8dba7dc64ea5d007881d22e9bee7b5bff\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 9 23:59:12.830697 containerd[2025]: time="2025-05-09T23:59:12.830620076Z" level=info msg="CreateContainer within sandbox \"2f032de73b7824a6dde9a64552ad00e8dba7dc64ea5d007881d22e9bee7b5bff\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id 
\"8dba3b1bc0dafe1a8a07c2cdce1c2449359a25c46129fdf119a9bf27367bdad8\"" May 9 23:59:12.832837 containerd[2025]: time="2025-05-09T23:59:12.832770152Z" level=info msg="StartContainer for \"8dba3b1bc0dafe1a8a07c2cdce1c2449359a25c46129fdf119a9bf27367bdad8\"" May 9 23:59:12.887481 systemd[1]: Started cri-containerd-8dba3b1bc0dafe1a8a07c2cdce1c2449359a25c46129fdf119a9bf27367bdad8.scope - libcontainer container 8dba3b1bc0dafe1a8a07c2cdce1c2449359a25c46129fdf119a9bf27367bdad8. May 9 23:59:12.941345 containerd[2025]: time="2025-05-09T23:59:12.940440405Z" level=info msg="StartContainer for \"8dba3b1bc0dafe1a8a07c2cdce1c2449359a25c46129fdf119a9bf27367bdad8\" returns successfully" May 9 23:59:13.179354 kubelet[3395]: I0509 23:59:13.178453 3395 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 9 23:59:13.244418 systemd[1]: Created slice kubepods-burstable-pod12e8e1c4_d240_4060_a29d_a975c5d73b97.slice - libcontainer container kubepods-burstable-pod12e8e1c4_d240_4060_a29d_a975c5d73b97.slice. May 9 23:59:13.260882 systemd[1]: Created slice kubepods-burstable-pod90478133_0d48_4845_8896_5f90a235859c.slice - libcontainer container kubepods-burstable-pod90478133_0d48_4845_8896_5f90a235859c.slice. 
May 9 23:59:13.286467 kubelet[3395]: I0509 23:59:13.285320 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdgjz\" (UniqueName: \"kubernetes.io/projected/12e8e1c4-d240-4060-a29d-a975c5d73b97-kube-api-access-vdgjz\") pod \"coredns-6f6b679f8f-j6c65\" (UID: \"12e8e1c4-d240-4060-a29d-a975c5d73b97\") " pod="kube-system/coredns-6f6b679f8f-j6c65" May 9 23:59:13.286467 kubelet[3395]: I0509 23:59:13.285398 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90478133-0d48-4845-8896-5f90a235859c-config-volume\") pod \"coredns-6f6b679f8f-892x9\" (UID: \"90478133-0d48-4845-8896-5f90a235859c\") " pod="kube-system/coredns-6f6b679f8f-892x9" May 9 23:59:13.286837 kubelet[3395]: I0509 23:59:13.286388 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12e8e1c4-d240-4060-a29d-a975c5d73b97-config-volume\") pod \"coredns-6f6b679f8f-j6c65\" (UID: \"12e8e1c4-d240-4060-a29d-a975c5d73b97\") " pod="kube-system/coredns-6f6b679f8f-j6c65" May 9 23:59:13.286837 kubelet[3395]: I0509 23:59:13.286766 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnh4r\" (UniqueName: \"kubernetes.io/projected/90478133-0d48-4845-8896-5f90a235859c-kube-api-access-xnh4r\") pod \"coredns-6f6b679f8f-892x9\" (UID: \"90478133-0d48-4845-8896-5f90a235859c\") " pod="kube-system/coredns-6f6b679f8f-892x9" May 9 23:59:13.558792 containerd[2025]: time="2025-05-09T23:59:13.558131528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-j6c65,Uid:12e8e1c4-d240-4060-a29d-a975c5d73b97,Namespace:kube-system,Attempt:0,}" May 9 23:59:13.572167 containerd[2025]: time="2025-05-09T23:59:13.572114048Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-6f6b679f8f-892x9,Uid:90478133-0d48-4845-8896-5f90a235859c,Namespace:kube-system,Attempt:0,}" May 9 23:59:15.903969 (udev-worker)[4188]: Network interface NamePolicy= disabled on kernel command line. May 9 23:59:15.910034 systemd-networkd[1908]: cilium_host: Link UP May 9 23:59:15.912555 systemd-networkd[1908]: cilium_net: Link UP May 9 23:59:15.913227 systemd-networkd[1908]: cilium_net: Gained carrier May 9 23:59:15.913604 (udev-worker)[4223]: Network interface NamePolicy= disabled on kernel command line. May 9 23:59:15.917663 systemd-networkd[1908]: cilium_host: Gained carrier May 9 23:59:15.917951 systemd-networkd[1908]: cilium_net: Gained IPv6LL May 9 23:59:15.918277 systemd-networkd[1908]: cilium_host: Gained IPv6LL May 9 23:59:16.092098 systemd-networkd[1908]: cilium_vxlan: Link UP May 9 23:59:16.092112 systemd-networkd[1908]: cilium_vxlan: Gained carrier May 9 23:59:16.572381 kernel: NET: Registered PF_ALG protocol family May 9 23:59:17.229482 systemd-networkd[1908]: cilium_vxlan: Gained IPv6LL May 9 23:59:17.873373 (udev-worker)[4237]: Network interface NamePolicy= disabled on kernel command line. May 9 23:59:17.875174 systemd-networkd[1908]: lxc_health: Link UP May 9 23:59:17.885621 systemd-networkd[1908]: lxc_health: Gained carrier May 9 23:59:18.185143 systemd-networkd[1908]: lxc1034f5dc23b6: Link UP May 9 23:59:18.193873 systemd-networkd[1908]: lxc4601d3b0a0ae: Link UP May 9 23:59:18.198614 (udev-worker)[4236]: Network interface NamePolicy= disabled on kernel command line. 
May 9 23:59:18.200532 kernel: eth0: renamed from tmp75679 May 9 23:59:18.210432 kernel: eth0: renamed from tmpd9c63 May 9 23:59:18.214067 systemd-networkd[1908]: lxc1034f5dc23b6: Gained carrier May 9 23:59:18.225540 systemd-networkd[1908]: lxc4601d3b0a0ae: Gained carrier May 9 23:59:18.471985 kubelet[3395]: I0509 23:59:18.471160 3395 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dtlfl" podStartSLOduration=11.393550823 podStartE2EDuration="24.471139092s" podCreationTimestamp="2025-05-09 23:58:54 +0000 UTC" firstStartedPulling="2025-05-09 23:58:54.701049998 +0000 UTC m=+6.355524645" lastFinishedPulling="2025-05-09 23:59:07.778638255 +0000 UTC m=+19.433112914" observedRunningTime="2025-05-09 23:59:13.831727653 +0000 UTC m=+25.486202312" watchObservedRunningTime="2025-05-09 23:59:18.471139092 +0000 UTC m=+30.125613763" May 9 23:59:18.957543 systemd-networkd[1908]: lxc_health: Gained IPv6LL May 9 23:59:19.277502 systemd-networkd[1908]: lxc1034f5dc23b6: Gained IPv6LL May 9 23:59:20.110420 systemd-networkd[1908]: lxc4601d3b0a0ae: Gained IPv6LL May 9 23:59:22.482822 ntpd[1986]: Listen normally on 8 cilium_host 192.168.0.22:123 May 9 23:59:22.484274 ntpd[1986]: 9 May 23:59:22 ntpd[1986]: Listen normally on 8 cilium_host 192.168.0.22:123 May 9 23:59:22.484274 ntpd[1986]: 9 May 23:59:22 ntpd[1986]: Listen normally on 9 cilium_net [fe80::645b:7bff:fe1a:43c7%4]:123 May 9 23:59:22.484274 ntpd[1986]: 9 May 23:59:22 ntpd[1986]: Listen normally on 10 cilium_host [fe80::64ef:d8ff:feb3:7545%5]:123 May 9 23:59:22.484274 ntpd[1986]: 9 May 23:59:22 ntpd[1986]: Listen normally on 11 cilium_vxlan [fe80::c8ee:1cff:fef0:e3e1%6]:123 May 9 23:59:22.484274 ntpd[1986]: 9 May 23:59:22 ntpd[1986]: Listen normally on 12 lxc_health [fe80::c091:91ff:fe6a:2587%8]:123 May 9 23:59:22.484274 ntpd[1986]: 9 May 23:59:22 ntpd[1986]: Listen normally on 13 lxc1034f5dc23b6 [fe80::6c8a:51ff:fe41:31d7%10]:123 May 9 23:59:22.484274 ntpd[1986]: 9 May 23:59:22 
ntpd[1986]: Listen normally on 14 lxc4601d3b0a0ae [fe80::1499:9eff:fe7d:2386%12]:123 May 9 23:59:22.483164 ntpd[1986]: Listen normally on 9 cilium_net [fe80::645b:7bff:fe1a:43c7%4]:123 May 9 23:59:22.483253 ntpd[1986]: Listen normally on 10 cilium_host [fe80::64ef:d8ff:feb3:7545%5]:123 May 9 23:59:22.483373 ntpd[1986]: Listen normally on 11 cilium_vxlan [fe80::c8ee:1cff:fef0:e3e1%6]:123 May 9 23:59:22.483469 ntpd[1986]: Listen normally on 12 lxc_health [fe80::c091:91ff:fe6a:2587%8]:123 May 9 23:59:22.483542 ntpd[1986]: Listen normally on 13 lxc1034f5dc23b6 [fe80::6c8a:51ff:fe41:31d7%10]:123 May 9 23:59:22.483610 ntpd[1986]: Listen normally on 14 lxc4601d3b0a0ae [fe80::1499:9eff:fe7d:2386%12]:123 May 9 23:59:26.517033 containerd[2025]: time="2025-05-09T23:59:26.516449132Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:59:26.517033 containerd[2025]: time="2025-05-09T23:59:26.516571748Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:59:26.517033 containerd[2025]: time="2025-05-09T23:59:26.516610196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:59:26.517033 containerd[2025]: time="2025-05-09T23:59:26.516765680Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:59:26.578737 systemd[1]: Started cri-containerd-d9c63f90b9abba5a1e46f0824eeb516df4f6d08a77bf8e59acd50d8226a92dab.scope - libcontainer container d9c63f90b9abba5a1e46f0824eeb516df4f6d08a77bf8e59acd50d8226a92dab. May 9 23:59:26.625055 containerd[2025]: time="2025-05-09T23:59:26.624864657Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:59:26.625703 containerd[2025]: time="2025-05-09T23:59:26.625397577Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:59:26.626433 containerd[2025]: time="2025-05-09T23:59:26.626061105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:59:26.626433 containerd[2025]: time="2025-05-09T23:59:26.626322897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:59:26.683638 systemd[1]: Started cri-containerd-75679d80538b542b5a8689409bc918db918309e9eac855384595395ff490255b.scope - libcontainer container 75679d80538b542b5a8689409bc918db918309e9eac855384595395ff490255b. May 9 23:59:26.730154 containerd[2025]: time="2025-05-09T23:59:26.729270165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-892x9,Uid:90478133-0d48-4845-8896-5f90a235859c,Namespace:kube-system,Attempt:0,} returns sandbox id \"d9c63f90b9abba5a1e46f0824eeb516df4f6d08a77bf8e59acd50d8226a92dab\"" May 9 23:59:26.740255 containerd[2025]: time="2025-05-09T23:59:26.740187477Z" level=info msg="CreateContainer within sandbox \"d9c63f90b9abba5a1e46f0824eeb516df4f6d08a77bf8e59acd50d8226a92dab\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 9 23:59:26.791444 containerd[2025]: time="2025-05-09T23:59:26.791248797Z" level=info msg="CreateContainer within sandbox \"d9c63f90b9abba5a1e46f0824eeb516df4f6d08a77bf8e59acd50d8226a92dab\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dbff50b73474464cb2f277d27c485b9bddfcde4b2f6fd13cd57c9b0abf94d561\"" May 9 23:59:26.793167 containerd[2025]: time="2025-05-09T23:59:26.792577761Z" level=info msg="StartContainer for \"dbff50b73474464cb2f277d27c485b9bddfcde4b2f6fd13cd57c9b0abf94d561\"" May 9 
23:59:26.805023 containerd[2025]: time="2025-05-09T23:59:26.804971073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-j6c65,Uid:12e8e1c4-d240-4060-a29d-a975c5d73b97,Namespace:kube-system,Attempt:0,} returns sandbox id \"75679d80538b542b5a8689409bc918db918309e9eac855384595395ff490255b\"" May 9 23:59:26.814892 containerd[2025]: time="2025-05-09T23:59:26.814713970Z" level=info msg="CreateContainer within sandbox \"75679d80538b542b5a8689409bc918db918309e9eac855384595395ff490255b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 9 23:59:26.863790 containerd[2025]: time="2025-05-09T23:59:26.861605962Z" level=info msg="CreateContainer within sandbox \"75679d80538b542b5a8689409bc918db918309e9eac855384595395ff490255b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"47fbb4f36aa7dca0e9a19b761b4880f339e58178ccb3b102cf5fdc7c68ac2a22\"" May 9 23:59:26.864725 containerd[2025]: time="2025-05-09T23:59:26.864637558Z" level=info msg="StartContainer for \"47fbb4f36aa7dca0e9a19b761b4880f339e58178ccb3b102cf5fdc7c68ac2a22\"" May 9 23:59:26.887679 systemd[1]: Started cri-containerd-dbff50b73474464cb2f277d27c485b9bddfcde4b2f6fd13cd57c9b0abf94d561.scope - libcontainer container dbff50b73474464cb2f277d27c485b9bddfcde4b2f6fd13cd57c9b0abf94d561. May 9 23:59:26.944706 systemd[1]: Started cri-containerd-47fbb4f36aa7dca0e9a19b761b4880f339e58178ccb3b102cf5fdc7c68ac2a22.scope - libcontainer container 47fbb4f36aa7dca0e9a19b761b4880f339e58178ccb3b102cf5fdc7c68ac2a22. 
May 9 23:59:27.021492 containerd[2025]: time="2025-05-09T23:59:27.020334523Z" level=info msg="StartContainer for \"dbff50b73474464cb2f277d27c485b9bddfcde4b2f6fd13cd57c9b0abf94d561\" returns successfully" May 9 23:59:27.038762 containerd[2025]: time="2025-05-09T23:59:27.038686711Z" level=info msg="StartContainer for \"47fbb4f36aa7dca0e9a19b761b4880f339e58178ccb3b102cf5fdc7c68ac2a22\" returns successfully" May 9 23:59:27.870223 kubelet[3395]: I0509 23:59:27.869898 3395 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-892x9" podStartSLOduration=33.869874143 podStartE2EDuration="33.869874143s" podCreationTimestamp="2025-05-09 23:58:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 23:59:27.866038043 +0000 UTC m=+39.520512714" watchObservedRunningTime="2025-05-09 23:59:27.869874143 +0000 UTC m=+39.524348802" May 9 23:59:27.895853 kubelet[3395]: I0509 23:59:27.895761 3395 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-j6c65" podStartSLOduration=33.895737587 podStartE2EDuration="33.895737587s" podCreationTimestamp="2025-05-09 23:58:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 23:59:27.891662303 +0000 UTC m=+39.546136974" watchObservedRunningTime="2025-05-09 23:59:27.895737587 +0000 UTC m=+39.550212246" May 9 23:59:36.013805 systemd[1]: Started sshd@9-172.31.18.52:22-147.75.109.163:51804.service - OpenSSH per-connection server daemon (147.75.109.163:51804). 
May 9 23:59:36.192581 sshd[4766]: Accepted publickey for core from 147.75.109.163 port 51804 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 9 23:59:36.195417 sshd[4766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:59:36.204143 systemd-logind[1993]: New session 10 of user core. May 9 23:59:36.209565 systemd[1]: Started session-10.scope - Session 10 of User core. May 9 23:59:36.475047 sshd[4766]: pam_unix(sshd:session): session closed for user core May 9 23:59:36.481597 systemd[1]: sshd@9-172.31.18.52:22-147.75.109.163:51804.service: Deactivated successfully. May 9 23:59:36.486417 systemd[1]: session-10.scope: Deactivated successfully. May 9 23:59:36.488206 systemd-logind[1993]: Session 10 logged out. Waiting for processes to exit. May 9 23:59:36.490159 systemd-logind[1993]: Removed session 10. May 9 23:59:41.511880 systemd[1]: Started sshd@10-172.31.18.52:22-147.75.109.163:57754.service - OpenSSH per-connection server daemon (147.75.109.163:57754). May 9 23:59:41.690841 sshd[4780]: Accepted publickey for core from 147.75.109.163 port 57754 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 9 23:59:41.693466 sshd[4780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:59:41.702687 systemd-logind[1993]: New session 11 of user core. May 9 23:59:41.710572 systemd[1]: Started session-11.scope - Session 11 of User core. May 9 23:59:41.948952 sshd[4780]: pam_unix(sshd:session): session closed for user core May 9 23:59:41.955276 systemd[1]: sshd@10-172.31.18.52:22-147.75.109.163:57754.service: Deactivated successfully. May 9 23:59:41.959164 systemd[1]: session-11.scope: Deactivated successfully. May 9 23:59:41.961138 systemd-logind[1993]: Session 11 logged out. Waiting for processes to exit. May 9 23:59:41.963853 systemd-logind[1993]: Removed session 11. 
May 9 23:59:46.991828 systemd[1]: Started sshd@11-172.31.18.52:22-147.75.109.163:40956.service - OpenSSH per-connection server daemon (147.75.109.163:40956). May 9 23:59:47.171009 sshd[4794]: Accepted publickey for core from 147.75.109.163 port 40956 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 9 23:59:47.173661 sshd[4794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:59:47.182509 systemd-logind[1993]: New session 12 of user core. May 9 23:59:47.190584 systemd[1]: Started session-12.scope - Session 12 of User core. May 9 23:59:47.433051 sshd[4794]: pam_unix(sshd:session): session closed for user core May 9 23:59:47.437692 systemd-logind[1993]: Session 12 logged out. Waiting for processes to exit. May 9 23:59:47.439060 systemd[1]: sshd@11-172.31.18.52:22-147.75.109.163:40956.service: Deactivated successfully. May 9 23:59:47.444210 systemd[1]: session-12.scope: Deactivated successfully. May 9 23:59:47.449028 systemd-logind[1993]: Removed session 12. May 9 23:59:52.471805 systemd[1]: Started sshd@12-172.31.18.52:22-147.75.109.163:40968.service - OpenSSH per-connection server daemon (147.75.109.163:40968). May 9 23:59:52.652265 sshd[4811]: Accepted publickey for core from 147.75.109.163 port 40968 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 9 23:59:52.655106 sshd[4811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:59:52.663437 systemd-logind[1993]: New session 13 of user core. May 9 23:59:52.671542 systemd[1]: Started session-13.scope - Session 13 of User core. May 9 23:59:52.908677 sshd[4811]: pam_unix(sshd:session): session closed for user core May 9 23:59:52.915734 systemd[1]: sshd@12-172.31.18.52:22-147.75.109.163:40968.service: Deactivated successfully. May 9 23:59:52.916351 systemd-logind[1993]: Session 13 logged out. Waiting for processes to exit. May 9 23:59:52.919999 systemd[1]: session-13.scope: Deactivated successfully. 
May 9 23:59:52.924492 systemd-logind[1993]: Removed session 13. May 9 23:59:57.949861 systemd[1]: Started sshd@13-172.31.18.52:22-147.75.109.163:40540.service - OpenSSH per-connection server daemon (147.75.109.163:40540). May 9 23:59:58.132556 sshd[4827]: Accepted publickey for core from 147.75.109.163 port 40540 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 9 23:59:58.135163 sshd[4827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:59:58.143641 systemd-logind[1993]: New session 14 of user core. May 9 23:59:58.150563 systemd[1]: Started session-14.scope - Session 14 of User core. May 9 23:59:58.393443 sshd[4827]: pam_unix(sshd:session): session closed for user core May 9 23:59:58.399902 systemd[1]: sshd@13-172.31.18.52:22-147.75.109.163:40540.service: Deactivated successfully. May 9 23:59:58.404045 systemd[1]: session-14.scope: Deactivated successfully. May 9 23:59:58.406906 systemd-logind[1993]: Session 14 logged out. Waiting for processes to exit. May 9 23:59:58.408823 systemd-logind[1993]: Removed session 14. May 9 23:59:58.433050 systemd[1]: Started sshd@14-172.31.18.52:22-147.75.109.163:40550.service - OpenSSH per-connection server daemon (147.75.109.163:40550). May 9 23:59:58.605771 sshd[4840]: Accepted publickey for core from 147.75.109.163 port 40550 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 9 23:59:58.608455 sshd[4840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:59:58.615982 systemd-logind[1993]: New session 15 of user core. May 9 23:59:58.621557 systemd[1]: Started session-15.scope - Session 15 of User core. May 9 23:59:58.935559 sshd[4840]: pam_unix(sshd:session): session closed for user core May 9 23:59:58.949353 systemd[1]: sshd@14-172.31.18.52:22-147.75.109.163:40550.service: Deactivated successfully. May 9 23:59:58.960863 systemd[1]: session-15.scope: Deactivated successfully. 
May 9 23:59:58.965385 systemd-logind[1993]: Session 15 logged out. Waiting for processes to exit. May 9 23:59:58.988372 systemd[1]: Started sshd@15-172.31.18.52:22-147.75.109.163:40566.service - OpenSSH per-connection server daemon (147.75.109.163:40566). May 9 23:59:58.990807 systemd-logind[1993]: Removed session 15. May 9 23:59:59.175464 sshd[4850]: Accepted publickey for core from 147.75.109.163 port 40566 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 9 23:59:59.177839 sshd[4850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:59:59.187105 systemd-logind[1993]: New session 16 of user core. May 9 23:59:59.192553 systemd[1]: Started session-16.scope - Session 16 of User core. May 9 23:59:59.443219 sshd[4850]: pam_unix(sshd:session): session closed for user core May 9 23:59:59.449781 systemd[1]: sshd@15-172.31.18.52:22-147.75.109.163:40566.service: Deactivated successfully. May 9 23:59:59.451611 systemd-logind[1993]: Session 16 logged out. Waiting for processes to exit. May 9 23:59:59.455236 systemd[1]: session-16.scope: Deactivated successfully. May 9 23:59:59.462128 systemd-logind[1993]: Removed session 16. May 10 00:00:04.485808 systemd[1]: Started logrotate.service - Rotate and Compress System Logs. May 10 00:00:04.490649 systemd[1]: Started sshd@16-172.31.18.52:22-147.75.109.163:40570.service - OpenSSH per-connection server daemon (147.75.109.163:40570). May 10 00:00:04.502986 systemd[1]: logrotate.service: Deactivated successfully. May 10 00:00:04.673341 sshd[4866]: Accepted publickey for core from 147.75.109.163 port 40570 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 10 00:00:04.675030 sshd[4866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:00:04.682419 systemd-logind[1993]: New session 17 of user core. May 10 00:00:04.695551 systemd[1]: Started session-17.scope - Session 17 of User core. 
May 10 00:00:04.931494 sshd[4866]: pam_unix(sshd:session): session closed for user core May 10 00:00:04.938095 systemd[1]: sshd@16-172.31.18.52:22-147.75.109.163:40570.service: Deactivated successfully. May 10 00:00:04.943031 systemd[1]: session-17.scope: Deactivated successfully. May 10 00:00:04.946710 systemd-logind[1993]: Session 17 logged out. Waiting for processes to exit. May 10 00:00:04.948631 systemd-logind[1993]: Removed session 17. May 10 00:00:09.972791 systemd[1]: Started sshd@17-172.31.18.52:22-147.75.109.163:33918.service - OpenSSH per-connection server daemon (147.75.109.163:33918). May 10 00:00:10.154621 sshd[4880]: Accepted publickey for core from 147.75.109.163 port 33918 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 10 00:00:10.157273 sshd[4880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:00:10.165793 systemd-logind[1993]: New session 18 of user core. May 10 00:00:10.174665 systemd[1]: Started session-18.scope - Session 18 of User core. May 10 00:00:10.414859 sshd[4880]: pam_unix(sshd:session): session closed for user core May 10 00:00:10.419834 systemd-logind[1993]: Session 18 logged out. Waiting for processes to exit. May 10 00:00:10.420752 systemd[1]: sshd@17-172.31.18.52:22-147.75.109.163:33918.service: Deactivated successfully. May 10 00:00:10.425265 systemd[1]: session-18.scope: Deactivated successfully. May 10 00:00:10.430639 systemd-logind[1993]: Removed session 18. May 10 00:00:15.455837 systemd[1]: Started sshd@18-172.31.18.52:22-147.75.109.163:33930.service - OpenSSH per-connection server daemon (147.75.109.163:33930). May 10 00:00:15.642194 sshd[4892]: Accepted publickey for core from 147.75.109.163 port 33930 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 10 00:00:15.645148 sshd[4892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:00:15.654181 systemd-logind[1993]: New session 19 of user core. 
May 10 00:00:15.659570 systemd[1]: Started session-19.scope - Session 19 of User core. May 10 00:00:15.898103 sshd[4892]: pam_unix(sshd:session): session closed for user core May 10 00:00:15.904539 systemd[1]: sshd@18-172.31.18.52:22-147.75.109.163:33930.service: Deactivated successfully. May 10 00:00:15.908109 systemd[1]: session-19.scope: Deactivated successfully. May 10 00:00:15.910421 systemd-logind[1993]: Session 19 logged out. Waiting for processes to exit. May 10 00:00:15.912462 systemd-logind[1993]: Removed session 19. May 10 00:00:20.935781 systemd[1]: Started sshd@19-172.31.18.52:22-147.75.109.163:34334.service - OpenSSH per-connection server daemon (147.75.109.163:34334). May 10 00:00:21.111636 sshd[4905]: Accepted publickey for core from 147.75.109.163 port 34334 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 10 00:00:21.114273 sshd[4905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:00:21.124516 systemd-logind[1993]: New session 20 of user core. May 10 00:00:21.131568 systemd[1]: Started session-20.scope - Session 20 of User core. May 10 00:00:21.367966 sshd[4905]: pam_unix(sshd:session): session closed for user core May 10 00:00:21.375520 systemd[1]: sshd@19-172.31.18.52:22-147.75.109.163:34334.service: Deactivated successfully. May 10 00:00:21.380450 systemd[1]: session-20.scope: Deactivated successfully. May 10 00:00:21.382737 systemd-logind[1993]: Session 20 logged out. Waiting for processes to exit. May 10 00:00:21.385443 systemd-logind[1993]: Removed session 20. May 10 00:00:21.407823 systemd[1]: Started sshd@20-172.31.18.52:22-147.75.109.163:34338.service - OpenSSH per-connection server daemon (147.75.109.163:34338). 
May 10 00:00:21.579228 sshd[4918]: Accepted publickey for core from 147.75.109.163 port 34338 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 10 00:00:21.581488 sshd[4918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:00:21.590926 systemd-logind[1993]: New session 21 of user core. May 10 00:00:21.599554 systemd[1]: Started session-21.scope - Session 21 of User core. May 10 00:00:21.896750 sshd[4918]: pam_unix(sshd:session): session closed for user core May 10 00:00:21.902870 systemd[1]: sshd@20-172.31.18.52:22-147.75.109.163:34338.service: Deactivated successfully. May 10 00:00:21.906433 systemd[1]: session-21.scope: Deactivated successfully. May 10 00:00:21.908597 systemd-logind[1993]: Session 21 logged out. Waiting for processes to exit. May 10 00:00:21.910713 systemd-logind[1993]: Removed session 21. May 10 00:00:21.938822 systemd[1]: Started sshd@21-172.31.18.52:22-147.75.109.163:34346.service - OpenSSH per-connection server daemon (147.75.109.163:34346). May 10 00:00:22.110124 sshd[4929]: Accepted publickey for core from 147.75.109.163 port 34346 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 10 00:00:22.112804 sshd[4929]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:00:22.122028 systemd-logind[1993]: New session 22 of user core. May 10 00:00:22.127599 systemd[1]: Started session-22.scope - Session 22 of User core. May 10 00:00:24.660415 sshd[4929]: pam_unix(sshd:session): session closed for user core May 10 00:00:24.670116 systemd-logind[1993]: Session 22 logged out. Waiting for processes to exit. May 10 00:00:24.672084 systemd[1]: sshd@21-172.31.18.52:22-147.75.109.163:34346.service: Deactivated successfully. May 10 00:00:24.682370 systemd[1]: session-22.scope: Deactivated successfully. May 10 00:00:24.702243 systemd-logind[1993]: Removed session 22. 
May 10 00:00:24.708889 systemd[1]: Started sshd@22-172.31.18.52:22-147.75.109.163:34354.service - OpenSSH per-connection server daemon (147.75.109.163:34354). May 10 00:00:24.893664 sshd[4946]: Accepted publickey for core from 147.75.109.163 port 34354 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 10 00:00:24.896919 sshd[4946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:00:24.904150 systemd-logind[1993]: New session 23 of user core. May 10 00:00:24.911601 systemd[1]: Started session-23.scope - Session 23 of User core. May 10 00:00:25.392345 sshd[4946]: pam_unix(sshd:session): session closed for user core May 10 00:00:25.398068 systemd-logind[1993]: Session 23 logged out. Waiting for processes to exit. May 10 00:00:25.398672 systemd[1]: sshd@22-172.31.18.52:22-147.75.109.163:34354.service: Deactivated successfully. May 10 00:00:25.402846 systemd[1]: session-23.scope: Deactivated successfully. May 10 00:00:25.408676 systemd-logind[1993]: Removed session 23. May 10 00:00:25.436822 systemd[1]: Started sshd@23-172.31.18.52:22-147.75.109.163:34366.service - OpenSSH per-connection server daemon (147.75.109.163:34366). May 10 00:00:25.620426 sshd[4960]: Accepted publickey for core from 147.75.109.163 port 34366 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 10 00:00:25.622973 sshd[4960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:00:25.630777 systemd-logind[1993]: New session 24 of user core. May 10 00:00:25.639585 systemd[1]: Started session-24.scope - Session 24 of User core. May 10 00:00:25.876447 sshd[4960]: pam_unix(sshd:session): session closed for user core May 10 00:00:25.882648 systemd[1]: sshd@23-172.31.18.52:22-147.75.109.163:34366.service: Deactivated successfully. May 10 00:00:25.887949 systemd[1]: session-24.scope: Deactivated successfully. May 10 00:00:25.890579 systemd-logind[1993]: Session 24 logged out. 
Waiting for processes to exit. May 10 00:00:25.892402 systemd-logind[1993]: Removed session 24. May 10 00:00:30.914809 systemd[1]: Started sshd@24-172.31.18.52:22-147.75.109.163:49102.service - OpenSSH per-connection server daemon (147.75.109.163:49102). May 10 00:00:31.084865 sshd[4972]: Accepted publickey for core from 147.75.109.163 port 49102 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 10 00:00:31.088490 sshd[4972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:00:31.096982 systemd-logind[1993]: New session 25 of user core. May 10 00:00:31.106541 systemd[1]: Started session-25.scope - Session 25 of User core. May 10 00:00:31.337568 sshd[4972]: pam_unix(sshd:session): session closed for user core May 10 00:00:31.343498 systemd-logind[1993]: Session 25 logged out. Waiting for processes to exit. May 10 00:00:31.343760 systemd[1]: sshd@24-172.31.18.52:22-147.75.109.163:49102.service: Deactivated successfully. May 10 00:00:31.349151 systemd[1]: session-25.scope: Deactivated successfully. May 10 00:00:31.354607 systemd-logind[1993]: Removed session 25. May 10 00:00:36.379817 systemd[1]: Started sshd@25-172.31.18.52:22-147.75.109.163:49118.service - OpenSSH per-connection server daemon (147.75.109.163:49118). May 10 00:00:36.555465 sshd[4987]: Accepted publickey for core from 147.75.109.163 port 49118 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 10 00:00:36.558143 sshd[4987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:00:36.565651 systemd-logind[1993]: New session 26 of user core. May 10 00:00:36.577844 systemd[1]: Started session-26.scope - Session 26 of User core. May 10 00:00:36.815755 sshd[4987]: pam_unix(sshd:session): session closed for user core May 10 00:00:36.823267 systemd-logind[1993]: Session 26 logged out. Waiting for processes to exit. 
May 10 00:00:36.824540 systemd[1]: sshd@25-172.31.18.52:22-147.75.109.163:49118.service: Deactivated successfully. May 10 00:00:36.828663 systemd[1]: session-26.scope: Deactivated successfully. May 10 00:00:36.831991 systemd-logind[1993]: Removed session 26. May 10 00:00:41.857848 systemd[1]: Started sshd@26-172.31.18.52:22-147.75.109.163:34818.service - OpenSSH per-connection server daemon (147.75.109.163:34818). May 10 00:00:42.030362 sshd[5000]: Accepted publickey for core from 147.75.109.163 port 34818 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 10 00:00:42.035065 sshd[5000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:00:42.043868 systemd-logind[1993]: New session 27 of user core. May 10 00:00:42.050564 systemd[1]: Started session-27.scope - Session 27 of User core. May 10 00:00:42.300689 sshd[5000]: pam_unix(sshd:session): session closed for user core May 10 00:00:42.307411 systemd[1]: sshd@26-172.31.18.52:22-147.75.109.163:34818.service: Deactivated successfully. May 10 00:00:42.311432 systemd[1]: session-27.scope: Deactivated successfully. May 10 00:00:42.314361 systemd-logind[1993]: Session 27 logged out. Waiting for processes to exit. May 10 00:00:42.316766 systemd-logind[1993]: Removed session 27. May 10 00:00:47.341829 systemd[1]: Started sshd@27-172.31.18.52:22-147.75.109.163:59382.service - OpenSSH per-connection server daemon (147.75.109.163:59382). May 10 00:00:47.513932 sshd[5013]: Accepted publickey for core from 147.75.109.163 port 59382 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 10 00:00:47.517198 sshd[5013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:00:47.525633 systemd-logind[1993]: New session 28 of user core. May 10 00:00:47.535585 systemd[1]: Started session-28.scope - Session 28 of User core. 
May 10 00:00:47.768410 sshd[5013]: pam_unix(sshd:session): session closed for user core
May 10 00:00:47.775023 systemd[1]: sshd@27-172.31.18.52:22-147.75.109.163:59382.service: Deactivated successfully.
May 10 00:00:47.779158 systemd[1]: session-28.scope: Deactivated successfully.
May 10 00:00:47.781147 systemd-logind[1993]: Session 28 logged out. Waiting for processes to exit.
May 10 00:00:47.784178 systemd-logind[1993]: Removed session 28.
May 10 00:00:47.808824 systemd[1]: Started sshd@28-172.31.18.52:22-147.75.109.163:59392.service - OpenSSH per-connection server daemon (147.75.109.163:59392).
May 10 00:00:47.989734 sshd[5026]: Accepted publickey for core from 147.75.109.163 port 59392 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8
May 10 00:00:47.992601 sshd[5026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:00:48.001063 systemd-logind[1993]: New session 29 of user core.
May 10 00:00:48.016592 systemd[1]: Started session-29.scope - Session 29 of User core.
May 10 00:00:50.778064 containerd[2025]: time="2025-05-10T00:00:50.777995719Z" level=info msg="StopContainer for \"9d6fdfc6aca73515cbdafb435ffb4debedbd6b33bfff5aea31abe21447ca5633\" with timeout 30 (s)"
May 10 00:00:50.786660 containerd[2025]: time="2025-05-10T00:00:50.785688715Z" level=info msg="Stop container \"9d6fdfc6aca73515cbdafb435ffb4debedbd6b33bfff5aea31abe21447ca5633\" with signal terminated"
May 10 00:00:50.810020 containerd[2025]: time="2025-05-10T00:00:50.809945611Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 10 00:00:50.814334 systemd[1]: cri-containerd-9d6fdfc6aca73515cbdafb435ffb4debedbd6b33bfff5aea31abe21447ca5633.scope: Deactivated successfully.
May 10 00:00:50.827121 containerd[2025]: time="2025-05-10T00:00:50.827071531Z" level=info msg="StopContainer for \"8dba3b1bc0dafe1a8a07c2cdce1c2449359a25c46129fdf119a9bf27367bdad8\" with timeout 2 (s)"
May 10 00:00:50.828082 containerd[2025]: time="2025-05-10T00:00:50.827918923Z" level=info msg="Stop container \"8dba3b1bc0dafe1a8a07c2cdce1c2449359a25c46129fdf119a9bf27367bdad8\" with signal terminated"
May 10 00:00:50.845563 systemd-networkd[1908]: lxc_health: Link DOWN
May 10 00:00:50.845579 systemd-networkd[1908]: lxc_health: Lost carrier
May 10 00:00:50.876142 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d6fdfc6aca73515cbdafb435ffb4debedbd6b33bfff5aea31abe21447ca5633-rootfs.mount: Deactivated successfully.
May 10 00:00:50.880943 systemd[1]: cri-containerd-8dba3b1bc0dafe1a8a07c2cdce1c2449359a25c46129fdf119a9bf27367bdad8.scope: Deactivated successfully.
May 10 00:00:50.881961 systemd[1]: cri-containerd-8dba3b1bc0dafe1a8a07c2cdce1c2449359a25c46129fdf119a9bf27367bdad8.scope: Consumed 14.284s CPU time.
May 10 00:00:50.899768 containerd[2025]: time="2025-05-10T00:00:50.899435203Z" level=info msg="shim disconnected" id=9d6fdfc6aca73515cbdafb435ffb4debedbd6b33bfff5aea31abe21447ca5633 namespace=k8s.io
May 10 00:00:50.899768 containerd[2025]: time="2025-05-10T00:00:50.899525575Z" level=warning msg="cleaning up after shim disconnected" id=9d6fdfc6aca73515cbdafb435ffb4debedbd6b33bfff5aea31abe21447ca5633 namespace=k8s.io
May 10 00:00:50.899768 containerd[2025]: time="2025-05-10T00:00:50.899547967Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 10 00:00:50.935978 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8dba3b1bc0dafe1a8a07c2cdce1c2449359a25c46129fdf119a9bf27367bdad8-rootfs.mount: Deactivated successfully.
May 10 00:00:50.944335 containerd[2025]: time="2025-05-10T00:00:50.944197267Z" level=info msg="shim disconnected" id=8dba3b1bc0dafe1a8a07c2cdce1c2449359a25c46129fdf119a9bf27367bdad8 namespace=k8s.io
May 10 00:00:50.945214 containerd[2025]: time="2025-05-10T00:00:50.944281327Z" level=warning msg="cleaning up after shim disconnected" id=8dba3b1bc0dafe1a8a07c2cdce1c2449359a25c46129fdf119a9bf27367bdad8 namespace=k8s.io
May 10 00:00:50.945214 containerd[2025]: time="2025-05-10T00:00:50.944425651Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 10 00:00:50.945214 containerd[2025]: time="2025-05-10T00:00:50.945016039Z" level=info msg="StopContainer for \"9d6fdfc6aca73515cbdafb435ffb4debedbd6b33bfff5aea31abe21447ca5633\" returns successfully"
May 10 00:00:50.946441 containerd[2025]: time="2025-05-10T00:00:50.946099699Z" level=info msg="StopPodSandbox for \"f19189551dbebbd804bae46dc4fbb4814c9050c9db7e14170f941ccb32bc02ca\""
May 10 00:00:50.946441 containerd[2025]: time="2025-05-10T00:00:50.946177663Z" level=info msg="Container to stop \"9d6fdfc6aca73515cbdafb435ffb4debedbd6b33bfff5aea31abe21447ca5633\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 10 00:00:50.950802 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f19189551dbebbd804bae46dc4fbb4814c9050c9db7e14170f941ccb32bc02ca-shm.mount: Deactivated successfully.
May 10 00:00:50.965392 systemd[1]: cri-containerd-f19189551dbebbd804bae46dc4fbb4814c9050c9db7e14170f941ccb32bc02ca.scope: Deactivated successfully.
May 10 00:00:50.987241 containerd[2025]: time="2025-05-10T00:00:50.987009644Z" level=info msg="StopContainer for \"8dba3b1bc0dafe1a8a07c2cdce1c2449359a25c46129fdf119a9bf27367bdad8\" returns successfully"
May 10 00:00:50.988351 containerd[2025]: time="2025-05-10T00:00:50.988273496Z" level=info msg="StopPodSandbox for \"2f032de73b7824a6dde9a64552ad00e8dba7dc64ea5d007881d22e9bee7b5bff\""
May 10 00:00:50.988473 containerd[2025]: time="2025-05-10T00:00:50.988365764Z" level=info msg="Container to stop \"11be07c1e4064fbea01b9edda5e56dfa4843037e9a1687df01f4d1d25521d908\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 10 00:00:50.988473 containerd[2025]: time="2025-05-10T00:00:50.988393100Z" level=info msg="Container to stop \"8dba3b1bc0dafe1a8a07c2cdce1c2449359a25c46129fdf119a9bf27367bdad8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 10 00:00:50.988473 containerd[2025]: time="2025-05-10T00:00:50.988417004Z" level=info msg="Container to stop \"dd6b5e8eb31e3cda7393cc5c38e1d6808e98167bc224e6751f52f71459286f13\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 10 00:00:50.988645 containerd[2025]: time="2025-05-10T00:00:50.988552616Z" level=info msg="Container to stop \"6c6f3e5cb1b111cdb61ec4f1c18f332f59173cc22d7c7f8a3ad490ab6db6f3c6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 10 00:00:50.988645 containerd[2025]: time="2025-05-10T00:00:50.988581020Z" level=info msg="Container to stop \"63bcf2ecf5fe277bb5f9b27dd1ccf80f3c0a6b53aaf6cd109d7b49adb425fa11\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 10 00:00:50.993397 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2f032de73b7824a6dde9a64552ad00e8dba7dc64ea5d007881d22e9bee7b5bff-shm.mount: Deactivated successfully.
May 10 00:00:51.006164 systemd[1]: cri-containerd-2f032de73b7824a6dde9a64552ad00e8dba7dc64ea5d007881d22e9bee7b5bff.scope: Deactivated successfully.
May 10 00:00:51.039402 containerd[2025]: time="2025-05-10T00:00:51.037895920Z" level=info msg="shim disconnected" id=f19189551dbebbd804bae46dc4fbb4814c9050c9db7e14170f941ccb32bc02ca namespace=k8s.io
May 10 00:00:51.040301 containerd[2025]: time="2025-05-10T00:00:51.039972520Z" level=warning msg="cleaning up after shim disconnected" id=f19189551dbebbd804bae46dc4fbb4814c9050c9db7e14170f941ccb32bc02ca namespace=k8s.io
May 10 00:00:51.040301 containerd[2025]: time="2025-05-10T00:00:51.040245268Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 10 00:00:51.063140 containerd[2025]: time="2025-05-10T00:00:51.062823016Z" level=info msg="shim disconnected" id=2f032de73b7824a6dde9a64552ad00e8dba7dc64ea5d007881d22e9bee7b5bff namespace=k8s.io
May 10 00:00:51.064748 containerd[2025]: time="2025-05-10T00:00:51.064654936Z" level=warning msg="cleaning up after shim disconnected" id=2f032de73b7824a6dde9a64552ad00e8dba7dc64ea5d007881d22e9bee7b5bff namespace=k8s.io
May 10 00:00:51.065022 containerd[2025]: time="2025-05-10T00:00:51.064802776Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 10 00:00:51.081314 containerd[2025]: time="2025-05-10T00:00:51.081133072Z" level=info msg="TearDown network for sandbox \"f19189551dbebbd804bae46dc4fbb4814c9050c9db7e14170f941ccb32bc02ca\" successfully"
May 10 00:00:51.081314 containerd[2025]: time="2025-05-10T00:00:51.081186736Z" level=info msg="StopPodSandbox for \"f19189551dbebbd804bae46dc4fbb4814c9050c9db7e14170f941ccb32bc02ca\" returns successfully"
May 10 00:00:51.106516 containerd[2025]: time="2025-05-10T00:00:51.106251652Z" level=info msg="TearDown network for sandbox \"2f032de73b7824a6dde9a64552ad00e8dba7dc64ea5d007881d22e9bee7b5bff\" successfully"
May 10 00:00:51.106516 containerd[2025]: time="2025-05-10T00:00:51.106389052Z" level=info msg="StopPodSandbox for \"2f032de73b7824a6dde9a64552ad00e8dba7dc64ea5d007881d22e9bee7b5bff\" returns successfully"
May 10 00:00:51.132883 kubelet[3395]: I0510 00:00:51.132822 3395 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqghp\" (UniqueName: \"kubernetes.io/projected/2fbe43f7-f78c-4e45-8a28-e8b093d88025-kube-api-access-tqghp\") pod \"2fbe43f7-f78c-4e45-8a28-e8b093d88025\" (UID: \"2fbe43f7-f78c-4e45-8a28-e8b093d88025\") "
May 10 00:00:51.132883 kubelet[3395]: I0510 00:00:51.132898 3395 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2fbe43f7-f78c-4e45-8a28-e8b093d88025-cilium-config-path\") pod \"2fbe43f7-f78c-4e45-8a28-e8b093d88025\" (UID: \"2fbe43f7-f78c-4e45-8a28-e8b093d88025\") "
May 10 00:00:51.143184 kubelet[3395]: I0510 00:00:51.143109 3395 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fbe43f7-f78c-4e45-8a28-e8b093d88025-kube-api-access-tqghp" (OuterVolumeSpecName: "kube-api-access-tqghp") pod "2fbe43f7-f78c-4e45-8a28-e8b093d88025" (UID: "2fbe43f7-f78c-4e45-8a28-e8b093d88025"). InnerVolumeSpecName "kube-api-access-tqghp". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 10 00:00:51.144799 kubelet[3395]: I0510 00:00:51.144678 3395 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fbe43f7-f78c-4e45-8a28-e8b093d88025-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2fbe43f7-f78c-4e45-8a28-e8b093d88025" (UID: "2fbe43f7-f78c-4e45-8a28-e8b093d88025"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 10 00:00:51.234188 kubelet[3395]: I0510 00:00:51.233968 3395 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwggq\" (UniqueName: \"kubernetes.io/projected/7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0-kube-api-access-zwggq\") pod \"7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0\" (UID: \"7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0\") "
May 10 00:00:51.234188 kubelet[3395]: I0510 00:00:51.234033 3395 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0-clustermesh-secrets\") pod \"7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0\" (UID: \"7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0\") "
May 10 00:00:51.234188 kubelet[3395]: I0510 00:00:51.234073 3395 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0-cilium-run\") pod \"7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0\" (UID: \"7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0\") "
May 10 00:00:51.234188 kubelet[3395]: I0510 00:00:51.234108 3395 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0-host-proc-sys-kernel\") pod \"7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0\" (UID: \"7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0\") "
May 10 00:00:51.234188 kubelet[3395]: I0510 00:00:51.234143 3395 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0-hostproc\") pod \"7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0\" (UID: \"7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0\") "
May 10 00:00:51.234188 kubelet[3395]: I0510 00:00:51.234180 3395 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0-cni-path\") pod \"7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0\" (UID: \"7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0\") "
May 10 00:00:51.234668 kubelet[3395]: I0510 00:00:51.234213 3395 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0-etc-cni-netd\") pod \"7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0\" (UID: \"7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0\") "
May 10 00:00:51.234668 kubelet[3395]: I0510 00:00:51.234245 3395 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0-bpf-maps\") pod \"7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0\" (UID: \"7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0\") "
May 10 00:00:51.234668 kubelet[3395]: I0510 00:00:51.234340 3395 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0-xtables-lock\") pod \"7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0\" (UID: \"7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0\") "
May 10 00:00:51.234668 kubelet[3395]: I0510 00:00:51.234387 3395 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0-cilium-config-path\") pod \"7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0\" (UID: \"7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0\") "
May 10 00:00:51.234668 kubelet[3395]: I0510 00:00:51.234425 3395 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0-hubble-tls\") pod \"7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0\" (UID: \"7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0\") "
May 10 00:00:51.234668 kubelet[3395]: I0510 00:00:51.234458 3395 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0-lib-modules\") pod \"7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0\" (UID: \"7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0\") "
May 10 00:00:51.236148 kubelet[3395]: I0510 00:00:51.234492 3395 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0-cilium-cgroup\") pod \"7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0\" (UID: \"7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0\") "
May 10 00:00:51.236148 kubelet[3395]: I0510 00:00:51.234526 3395 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0-host-proc-sys-net\") pod \"7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0\" (UID: \"7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0\") "
May 10 00:00:51.236148 kubelet[3395]: I0510 00:00:51.234586 3395 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-tqghp\" (UniqueName: \"kubernetes.io/projected/2fbe43f7-f78c-4e45-8a28-e8b093d88025-kube-api-access-tqghp\") on node \"ip-172-31-18-52\" DevicePath \"\""
May 10 00:00:51.236148 kubelet[3395]: I0510 00:00:51.234609 3395 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2fbe43f7-f78c-4e45-8a28-e8b093d88025-cilium-config-path\") on node \"ip-172-31-18-52\" DevicePath \"\""
May 10 00:00:51.236148 kubelet[3395]: I0510 00:00:51.234674 3395 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0" (UID: "7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:00:51.242371 kubelet[3395]: I0510 00:00:51.237761 3395 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0" (UID: "7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:00:51.242371 kubelet[3395]: I0510 00:00:51.237846 3395 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0" (UID: "7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:00:51.242371 kubelet[3395]: I0510 00:00:51.237892 3395 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0-hostproc" (OuterVolumeSpecName: "hostproc") pod "7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0" (UID: "7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:00:51.242371 kubelet[3395]: I0510 00:00:51.237930 3395 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0-cni-path" (OuterVolumeSpecName: "cni-path") pod "7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0" (UID: "7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:00:51.242371 kubelet[3395]: I0510 00:00:51.237967 3395 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0" (UID: "7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:00:51.242742 kubelet[3395]: I0510 00:00:51.238003 3395 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0" (UID: "7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:00:51.242742 kubelet[3395]: I0510 00:00:51.238040 3395 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0" (UID: "7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:00:51.242742 kubelet[3395]: I0510 00:00:51.240759 3395 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0" (UID: "7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 10 00:00:51.242742 kubelet[3395]: I0510 00:00:51.240859 3395 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0" (UID: "7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:00:51.242742 kubelet[3395]: I0510 00:00:51.240901 3395 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0" (UID: "7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:00:51.250363 kubelet[3395]: I0510 00:00:51.250199 3395 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0" (UID: "7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 10 00:00:51.252263 kubelet[3395]: I0510 00:00:51.252189 3395 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0" (UID: "7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 10 00:00:51.253678 kubelet[3395]: I0510 00:00:51.253625 3395 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0-kube-api-access-zwggq" (OuterVolumeSpecName: "kube-api-access-zwggq") pod "7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0" (UID: "7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0"). InnerVolumeSpecName "kube-api-access-zwggq". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 10 00:00:51.335440 kubelet[3395]: I0510 00:00:51.335154 3395 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0-cilium-config-path\") on node \"ip-172-31-18-52\" DevicePath \"\""
May 10 00:00:51.335440 kubelet[3395]: I0510 00:00:51.335203 3395 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0-xtables-lock\") on node \"ip-172-31-18-52\" DevicePath \"\""
May 10 00:00:51.335440 kubelet[3395]: I0510 00:00:51.335224 3395 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0-hubble-tls\") on node \"ip-172-31-18-52\" DevicePath \"\""
May 10 00:00:51.335440 kubelet[3395]: I0510 00:00:51.335244 3395 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0-host-proc-sys-net\") on node \"ip-172-31-18-52\" DevicePath \"\""
May 10 00:00:51.335440 kubelet[3395]: I0510 00:00:51.335264 3395 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0-lib-modules\") on node \"ip-172-31-18-52\" DevicePath \"\""
May 10 00:00:51.335440 kubelet[3395]: I0510 00:00:51.335318 3395 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0-cilium-cgroup\") on node \"ip-172-31-18-52\" DevicePath \"\""
May 10 00:00:51.335440 kubelet[3395]: I0510 00:00:51.335347 3395 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-zwggq\" (UniqueName: \"kubernetes.io/projected/7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0-kube-api-access-zwggq\") on node \"ip-172-31-18-52\" DevicePath \"\""
May 10 00:00:51.335440 kubelet[3395]: I0510 00:00:51.335367 3395 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0-clustermesh-secrets\") on node \"ip-172-31-18-52\" DevicePath \"\""
May 10 00:00:51.336210 kubelet[3395]: I0510 00:00:51.335406 3395 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0-host-proc-sys-kernel\") on node \"ip-172-31-18-52\" DevicePath \"\""
May 10 00:00:51.336953 kubelet[3395]: I0510 00:00:51.336738 3395 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0-cilium-run\") on node \"ip-172-31-18-52\" DevicePath \"\""
May 10 00:00:51.336953 kubelet[3395]: I0510 00:00:51.336770 3395 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0-hostproc\") on node \"ip-172-31-18-52\" DevicePath \"\""
May 10 00:00:51.336953 kubelet[3395]: I0510 00:00:51.336806 3395 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0-cni-path\") on node \"ip-172-31-18-52\" DevicePath \"\""
May 10 00:00:51.336953 kubelet[3395]: I0510 00:00:51.336831 3395 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0-etc-cni-netd\") on node \"ip-172-31-18-52\" DevicePath \"\""
May 10 00:00:51.336953 kubelet[3395]: I0510 00:00:51.336850 3395 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0-bpf-maps\") on node \"ip-172-31-18-52\" DevicePath \"\""
May 10 00:00:51.786202 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f19189551dbebbd804bae46dc4fbb4814c9050c9db7e14170f941ccb32bc02ca-rootfs.mount: Deactivated successfully.
May 10 00:00:51.786411 systemd[1]: var-lib-kubelet-pods-2fbe43f7\x2df78c\x2d4e45\x2d8a28\x2de8b093d88025-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtqghp.mount: Deactivated successfully.
May 10 00:00:51.786559 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f032de73b7824a6dde9a64552ad00e8dba7dc64ea5d007881d22e9bee7b5bff-rootfs.mount: Deactivated successfully.
May 10 00:00:51.786725 systemd[1]: var-lib-kubelet-pods-7f2c72c8\x2d3a0e\x2d4b35\x2d9e8f\x2dccb59ed723d0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzwggq.mount: Deactivated successfully.
May 10 00:00:51.786917 systemd[1]: var-lib-kubelet-pods-7f2c72c8\x2d3a0e\x2d4b35\x2d9e8f\x2dccb59ed723d0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 10 00:00:51.787141 systemd[1]: var-lib-kubelet-pods-7f2c72c8\x2d3a0e\x2d4b35\x2d9e8f\x2dccb59ed723d0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 10 00:00:52.077825 kubelet[3395]: I0510 00:00:52.077000 3395 scope.go:117] "RemoveContainer" containerID="9d6fdfc6aca73515cbdafb435ffb4debedbd6b33bfff5aea31abe21447ca5633"
May 10 00:00:52.082071 containerd[2025]: time="2025-05-10T00:00:52.080816657Z" level=info msg="RemoveContainer for \"9d6fdfc6aca73515cbdafb435ffb4debedbd6b33bfff5aea31abe21447ca5633\""
May 10 00:00:52.096601 containerd[2025]: time="2025-05-10T00:00:52.096344921Z" level=info msg="RemoveContainer for \"9d6fdfc6aca73515cbdafb435ffb4debedbd6b33bfff5aea31abe21447ca5633\" returns successfully"
May 10 00:00:52.099176 kubelet[3395]: I0510 00:00:52.098820 3395 scope.go:117] "RemoveContainer" containerID="8dba3b1bc0dafe1a8a07c2cdce1c2449359a25c46129fdf119a9bf27367bdad8"
May 10 00:00:52.103052 containerd[2025]: time="2025-05-10T00:00:52.103002257Z" level=info msg="RemoveContainer for \"8dba3b1bc0dafe1a8a07c2cdce1c2449359a25c46129fdf119a9bf27367bdad8\""
May 10 00:00:52.103381 systemd[1]: Removed slice kubepods-besteffort-pod2fbe43f7_f78c_4e45_8a28_e8b093d88025.slice - libcontainer container kubepods-besteffort-pod2fbe43f7_f78c_4e45_8a28_e8b093d88025.slice.
May 10 00:00:52.107226 systemd[1]: Removed slice kubepods-burstable-pod7f2c72c8_3a0e_4b35_9e8f_ccb59ed723d0.slice - libcontainer container kubepods-burstable-pod7f2c72c8_3a0e_4b35_9e8f_ccb59ed723d0.slice.
May 10 00:00:52.107481 systemd[1]: kubepods-burstable-pod7f2c72c8_3a0e_4b35_9e8f_ccb59ed723d0.slice: Consumed 14.432s CPU time.
May 10 00:00:52.113559 containerd[2025]: time="2025-05-10T00:00:52.113191421Z" level=info msg="RemoveContainer for \"8dba3b1bc0dafe1a8a07c2cdce1c2449359a25c46129fdf119a9bf27367bdad8\" returns successfully"
May 10 00:00:52.113833 kubelet[3395]: I0510 00:00:52.113593 3395 scope.go:117] "RemoveContainer" containerID="6c6f3e5cb1b111cdb61ec4f1c18f332f59173cc22d7c7f8a3ad490ab6db6f3c6"
May 10 00:00:52.118429 containerd[2025]: time="2025-05-10T00:00:52.117790253Z" level=info msg="RemoveContainer for \"6c6f3e5cb1b111cdb61ec4f1c18f332f59173cc22d7c7f8a3ad490ab6db6f3c6\""
May 10 00:00:52.125230 containerd[2025]: time="2025-05-10T00:00:52.125151437Z" level=info msg="RemoveContainer for \"6c6f3e5cb1b111cdb61ec4f1c18f332f59173cc22d7c7f8a3ad490ab6db6f3c6\" returns successfully"
May 10 00:00:52.125847 kubelet[3395]: I0510 00:00:52.125772 3395 scope.go:117] "RemoveContainer" containerID="dd6b5e8eb31e3cda7393cc5c38e1d6808e98167bc224e6751f52f71459286f13"
May 10 00:00:52.130186 containerd[2025]: time="2025-05-10T00:00:52.129993713Z" level=info msg="RemoveContainer for \"dd6b5e8eb31e3cda7393cc5c38e1d6808e98167bc224e6751f52f71459286f13\""
May 10 00:00:52.139276 containerd[2025]: time="2025-05-10T00:00:52.138616829Z" level=info msg="RemoveContainer for \"dd6b5e8eb31e3cda7393cc5c38e1d6808e98167bc224e6751f52f71459286f13\" returns successfully"
May 10 00:00:52.140359 kubelet[3395]: I0510 00:00:52.139901 3395 scope.go:117] "RemoveContainer" containerID="63bcf2ecf5fe277bb5f9b27dd1ccf80f3c0a6b53aaf6cd109d7b49adb425fa11"
May 10 00:00:52.145504 containerd[2025]: time="2025-05-10T00:00:52.144856673Z" level=info msg="RemoveContainer for \"63bcf2ecf5fe277bb5f9b27dd1ccf80f3c0a6b53aaf6cd109d7b49adb425fa11\""
May 10 00:00:52.151582 containerd[2025]: time="2025-05-10T00:00:52.151490225Z" level=info msg="RemoveContainer for \"63bcf2ecf5fe277bb5f9b27dd1ccf80f3c0a6b53aaf6cd109d7b49adb425fa11\" returns successfully"
May 10 00:00:52.152305 kubelet[3395]: I0510 00:00:52.152238 3395 scope.go:117] "RemoveContainer" containerID="11be07c1e4064fbea01b9edda5e56dfa4843037e9a1687df01f4d1d25521d908"
May 10 00:00:52.156615 containerd[2025]: time="2025-05-10T00:00:52.155836385Z" level=info msg="RemoveContainer for \"11be07c1e4064fbea01b9edda5e56dfa4843037e9a1687df01f4d1d25521d908\""
May 10 00:00:52.163103 containerd[2025]: time="2025-05-10T00:00:52.163025273Z" level=info msg="RemoveContainer for \"11be07c1e4064fbea01b9edda5e56dfa4843037e9a1687df01f4d1d25521d908\" returns successfully"
May 10 00:00:52.583305 kubelet[3395]: I0510 00:00:52.583219 3395 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fbe43f7-f78c-4e45-8a28-e8b093d88025" path="/var/lib/kubelet/pods/2fbe43f7-f78c-4e45-8a28-e8b093d88025/volumes"
May 10 00:00:52.584314 kubelet[3395]: I0510 00:00:52.584258 3395 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0" path="/var/lib/kubelet/pods/7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0/volumes"
May 10 00:00:52.704596 sshd[5026]: pam_unix(sshd:session): session closed for user core
May 10 00:00:52.709840 systemd-logind[1993]: Session 29 logged out. Waiting for processes to exit.
May 10 00:00:52.710588 systemd[1]: sshd@28-172.31.18.52:22-147.75.109.163:59392.service: Deactivated successfully.
May 10 00:00:52.715864 systemd[1]: session-29.scope: Deactivated successfully.
May 10 00:00:52.716537 systemd[1]: session-29.scope: Consumed 2.003s CPU time.
May 10 00:00:52.721078 systemd-logind[1993]: Removed session 29.
May 10 00:00:52.751847 systemd[1]: Started sshd@29-172.31.18.52:22-147.75.109.163:59398.service - OpenSSH per-connection server daemon (147.75.109.163:59398).
May 10 00:00:52.929256 sshd[5187]: Accepted publickey for core from 147.75.109.163 port 59398 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8
May 10 00:00:52.931862 sshd[5187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:00:52.938988 systemd-logind[1993]: New session 30 of user core.
May 10 00:00:52.954574 systemd[1]: Started session-30.scope - Session 30 of User core.
May 10 00:00:53.482792 ntpd[1986]: Deleting interface #12 lxc_health, fe80::c091:91ff:fe6a:2587%8#123, interface stats: received=0, sent=0, dropped=0, active_time=91 secs
May 10 00:00:53.483518 ntpd[1986]: 10 May 00:00:53 ntpd[1986]: Deleting interface #12 lxc_health, fe80::c091:91ff:fe6a:2587%8#123, interface stats: received=0, sent=0, dropped=0, active_time=91 secs
May 10 00:00:53.806396 kubelet[3395]: E0510 00:00:53.806170 3395 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 10 00:00:54.736703 sshd[5187]: pam_unix(sshd:session): session closed for user core
May 10 00:00:54.746548 systemd[1]: sshd@29-172.31.18.52:22-147.75.109.163:59398.service: Deactivated successfully.
May 10 00:00:54.753249 systemd[1]: session-30.scope: Deactivated successfully.
May 10 00:00:54.753898 systemd[1]: session-30.scope: Consumed 1.567s CPU time.
May 10 00:00:54.761607 systemd-logind[1993]: Session 30 logged out. Waiting for processes to exit.
May 10 00:00:54.766503 kubelet[3395]: E0510 00:00:54.765225 3395 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0" containerName="mount-cgroup"
May 10 00:00:54.766503 kubelet[3395]: E0510 00:00:54.766012 3395 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2fbe43f7-f78c-4e45-8a28-e8b093d88025" containerName="cilium-operator"
May 10 00:00:54.766503 kubelet[3395]: E0510 00:00:54.766078 3395 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0" containerName="mount-bpf-fs"
May 10 00:00:54.766503 kubelet[3395]: E0510 00:00:54.766096 3395 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0" containerName="clean-cilium-state"
May 10 00:00:54.766503 kubelet[3395]: E0510 00:00:54.766114 3395 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0" containerName="cilium-agent"
May 10 00:00:54.766503 kubelet[3395]: E0510 00:00:54.766153 3395 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0" containerName="apply-sysctl-overwrites"
May 10 00:00:54.766503 kubelet[3395]: I0510 00:00:54.766239 3395 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fbe43f7-f78c-4e45-8a28-e8b093d88025" containerName="cilium-operator"
May 10 00:00:54.766503 kubelet[3395]: I0510 00:00:54.766258 3395 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f2c72c8-3a0e-4b35-9e8f-ccb59ed723d0" containerName="cilium-agent"
May 10 00:00:54.785261 systemd-logind[1993]: Removed session 30.
May 10 00:00:54.791508 systemd[1]: Started sshd@30-172.31.18.52:22-147.75.109.163:59412.service - OpenSSH per-connection server daemon (147.75.109.163:59412).
May 10 00:00:54.795900 kubelet[3395]: W0510 00:00:54.795606 3395 reflector.go:561] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ip-172-31-18-52" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-18-52' and this object
May 10 00:00:54.795900 kubelet[3395]: E0510 00:00:54.795676 3395 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:ip-172-31-18-52\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-18-52' and this object" logger="UnhandledError"
May 10 00:00:54.795900 kubelet[3395]: W0510 00:00:54.795769 3395 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-18-52" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-18-52' and this object
May 10 00:00:54.795900 kubelet[3395]: W0510 00:00:54.795795 3395 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-18-52" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-18-52' and this object
May 10 00:00:54.795900 kubelet[3395]: E0510 00:00:54.795831 3395 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ip-172-31-18-52\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-18-52' and this object" logger="UnhandledError"
May 10 00:00:54.796272 kubelet[3395]: E0510 00:00:54.795794 3395 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ip-172-31-18-52\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-18-52' and this object" logger="UnhandledError"
May 10 00:00:54.796272 kubelet[3395]: W0510 00:00:54.795860 3395 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-18-52" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-18-52' and this object
May 10 00:00:54.796272 kubelet[3395]: E0510 00:00:54.795885 3395 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ip-172-31-18-52\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-18-52' and this object" logger="UnhandledError"
May 10 00:00:54.820088 systemd[1]: Created slice kubepods-burstable-podde80a014_a666_4cf1_aacc_fb5db1f333a7.slice - libcontainer container kubepods-burstable-podde80a014_a666_4cf1_aacc_fb5db1f333a7.slice.
May 10 00:00:54.859341 kubelet[3395]: I0510 00:00:54.858631 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/de80a014-a666-4cf1-aacc-fb5db1f333a7-clustermesh-secrets\") pod \"cilium-fcr26\" (UID: \"de80a014-a666-4cf1-aacc-fb5db1f333a7\") " pod="kube-system/cilium-fcr26"
May 10 00:00:54.861316 kubelet[3395]: I0510 00:00:54.860093 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/de80a014-a666-4cf1-aacc-fb5db1f333a7-cilium-config-path\") pod \"cilium-fcr26\" (UID: \"de80a014-a666-4cf1-aacc-fb5db1f333a7\") " pod="kube-system/cilium-fcr26"
May 10 00:00:54.861316 kubelet[3395]: I0510 00:00:54.860183 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/de80a014-a666-4cf1-aacc-fb5db1f333a7-lib-modules\") pod \"cilium-fcr26\" (UID: \"de80a014-a666-4cf1-aacc-fb5db1f333a7\") " pod="kube-system/cilium-fcr26"
May 10 00:00:54.861316 kubelet[3395]: I0510 00:00:54.860256 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/de80a014-a666-4cf1-aacc-fb5db1f333a7-bpf-maps\") pod \"cilium-fcr26\" (UID: \"de80a014-a666-4cf1-aacc-fb5db1f333a7\") " pod="kube-system/cilium-fcr26"
May 10 00:00:54.861316 kubelet[3395]: I0510 00:00:54.860412 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de80a014-a666-4cf1-aacc-fb5db1f333a7-xtables-lock\") pod \"cilium-fcr26\" (UID: \"de80a014-a666-4cf1-aacc-fb5db1f333a7\") " pod="kube-system/cilium-fcr26"
May 10 00:00:54.861316 kubelet[3395]: I0510 00:00:54.860461 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/de80a014-a666-4cf1-aacc-fb5db1f333a7-cilium-ipsec-secrets\") pod \"cilium-fcr26\" (UID: \"de80a014-a666-4cf1-aacc-fb5db1f333a7\") " pod="kube-system/cilium-fcr26"
May 10 00:00:54.861316 kubelet[3395]: I0510 00:00:54.860535 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/de80a014-a666-4cf1-aacc-fb5db1f333a7-etc-cni-netd\") pod \"cilium-fcr26\" (UID: \"de80a014-a666-4cf1-aacc-fb5db1f333a7\") " pod="kube-system/cilium-fcr26"
May 10 00:00:54.862331 kubelet[3395]: I0510 00:00:54.861834 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/de80a014-a666-4cf1-aacc-fb5db1f333a7-cilium-run\") pod \"cilium-fcr26\" (UID: \"de80a014-a666-4cf1-aacc-fb5db1f333a7\") " pod="kube-system/cilium-fcr26"
May 10 00:00:54.862331 kubelet[3395]: I0510 00:00:54.861991 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/de80a014-a666-4cf1-aacc-fb5db1f333a7-cni-path\") pod \"cilium-fcr26\" (UID: \"de80a014-a666-4cf1-aacc-fb5db1f333a7\") " pod="kube-system/cilium-fcr26"
May 10 00:00:54.862331 kubelet[3395]: I0510 00:00:54.862073 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/de80a014-a666-4cf1-aacc-fb5db1f333a7-host-proc-sys-kernel\") pod \"cilium-fcr26\" (UID: \"de80a014-a666-4cf1-aacc-fb5db1f333a7\") " pod="kube-system/cilium-fcr26"
May 10 00:00:54.862331 kubelet[3395]: I0510 00:00:54.862114 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/de80a014-a666-4cf1-aacc-fb5db1f333a7-hostproc\") pod \"cilium-fcr26\" (UID: \"de80a014-a666-4cf1-aacc-fb5db1f333a7\") " pod="kube-system/cilium-fcr26"
May 10 00:00:54.862331 kubelet[3395]: I0510 00:00:54.862193 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/de80a014-a666-4cf1-aacc-fb5db1f333a7-hubble-tls\") pod \"cilium-fcr26\" (UID: \"de80a014-a666-4cf1-aacc-fb5db1f333a7\") " pod="kube-system/cilium-fcr26"
May 10 00:00:54.865569 kubelet[3395]: I0510 00:00:54.862272 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx2ll\" (UniqueName: \"kubernetes.io/projected/de80a014-a666-4cf1-aacc-fb5db1f333a7-kube-api-access-xx2ll\") pod \"cilium-fcr26\" (UID: \"de80a014-a666-4cf1-aacc-fb5db1f333a7\") " pod="kube-system/cilium-fcr26"
May 10 00:00:54.865569 kubelet[3395]: I0510 00:00:54.862779 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/de80a014-a666-4cf1-aacc-fb5db1f333a7-cilium-cgroup\") pod \"cilium-fcr26\" (UID: \"de80a014-a666-4cf1-aacc-fb5db1f333a7\") " pod="kube-system/cilium-fcr26"
May 10 00:00:54.865569 kubelet[3395]: I0510 00:00:54.865381 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/de80a014-a666-4cf1-aacc-fb5db1f333a7-host-proc-sys-net\") pod \"cilium-fcr26\" (UID: \"de80a014-a666-4cf1-aacc-fb5db1f333a7\") " pod="kube-system/cilium-fcr26"
May 10 00:00:55.008481 sshd[5198]: Accepted publickey for core from 147.75.109.163 port 59412 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8
May 10 00:00:55.012953 sshd[5198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:00:55.020923 systemd-logind[1993]: New session 31 of user core.
May 10 00:00:55.032566 systemd[1]: Started session-31.scope - Session 31 of User core.
May 10 00:00:55.154746 sshd[5198]: pam_unix(sshd:session): session closed for user core
May 10 00:00:55.160136 systemd[1]: sshd@30-172.31.18.52:22-147.75.109.163:59412.service: Deactivated successfully.
May 10 00:00:55.164794 systemd[1]: session-31.scope: Deactivated successfully.
May 10 00:00:55.169960 systemd-logind[1993]: Session 31 logged out. Waiting for processes to exit.
May 10 00:00:55.172118 systemd-logind[1993]: Removed session 31.
May 10 00:00:55.192825 systemd[1]: Started sshd@31-172.31.18.52:22-147.75.109.163:59420.service - OpenSSH per-connection server daemon (147.75.109.163:59420).
May 10 00:00:55.369863 sshd[5209]: Accepted publickey for core from 147.75.109.163 port 59420 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8
May 10 00:00:55.371878 sshd[5209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:00:55.379765 systemd-logind[1993]: New session 32 of user core.
May 10 00:00:55.389539 systemd[1]: Started session-32.scope - Session 32 of User core.
May 10 00:00:55.967671 kubelet[3395]: E0510 00:00:55.967585 3395 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition
May 10 00:00:55.968254 kubelet[3395]: E0510 00:00:55.967710 3395 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/de80a014-a666-4cf1-aacc-fb5db1f333a7-cilium-config-path podName:de80a014-a666-4cf1-aacc-fb5db1f333a7 nodeName:}" failed. No retries permitted until 2025-05-10 00:00:56.467679208 +0000 UTC m=+128.122153867 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/de80a014-a666-4cf1-aacc-fb5db1f333a7-cilium-config-path") pod "cilium-fcr26" (UID: "de80a014-a666-4cf1-aacc-fb5db1f333a7") : failed to sync configmap cache: timed out waiting for the condition
May 10 00:00:55.968254 kubelet[3395]: E0510 00:00:55.967608 3395 secret.go:188] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition
May 10 00:00:55.968254 kubelet[3395]: E0510 00:00:55.968078 3395 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/de80a014-a666-4cf1-aacc-fb5db1f333a7-cilium-ipsec-secrets podName:de80a014-a666-4cf1-aacc-fb5db1f333a7 nodeName:}" failed. No retries permitted until 2025-05-10 00:00:56.468056536 +0000 UTC m=+128.122531195 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/de80a014-a666-4cf1-aacc-fb5db1f333a7-cilium-ipsec-secrets") pod "cilium-fcr26" (UID: "de80a014-a666-4cf1-aacc-fb5db1f333a7") : failed to sync secret cache: timed out waiting for the condition
May 10 00:00:56.632188 containerd[2025]: time="2025-05-10T00:00:56.632136060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fcr26,Uid:de80a014-a666-4cf1-aacc-fb5db1f333a7,Namespace:kube-system,Attempt:0,}"
May 10 00:00:56.677031 containerd[2025]: time="2025-05-10T00:00:56.676628916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 10 00:00:56.677031 containerd[2025]: time="2025-05-10T00:00:56.676866624Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 10 00:00:56.677469 containerd[2025]: time="2025-05-10T00:00:56.677025852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 10 00:00:56.677536 containerd[2025]: time="2025-05-10T00:00:56.677457420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 10 00:00:56.713437 systemd[1]: run-containerd-runc-k8s.io-b0ad1c09401edff7b3112c942a7298cb7422e7717fa2f21fa48ff0cf50fef137-runc.gKstrr.mount: Deactivated successfully.
May 10 00:00:56.727638 systemd[1]: Started cri-containerd-b0ad1c09401edff7b3112c942a7298cb7422e7717fa2f21fa48ff0cf50fef137.scope - libcontainer container b0ad1c09401edff7b3112c942a7298cb7422e7717fa2f21fa48ff0cf50fef137.
May 10 00:00:56.769611 containerd[2025]: time="2025-05-10T00:00:56.769366044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fcr26,Uid:de80a014-a666-4cf1-aacc-fb5db1f333a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"b0ad1c09401edff7b3112c942a7298cb7422e7717fa2f21fa48ff0cf50fef137\""
May 10 00:00:56.776032 containerd[2025]: time="2025-05-10T00:00:56.775961856Z" level=info msg="CreateContainer within sandbox \"b0ad1c09401edff7b3112c942a7298cb7422e7717fa2f21fa48ff0cf50fef137\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 10 00:00:56.804003 containerd[2025]: time="2025-05-10T00:00:56.803923165Z" level=info msg="CreateContainer within sandbox \"b0ad1c09401edff7b3112c942a7298cb7422e7717fa2f21fa48ff0cf50fef137\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6c503c159fbc724a54182e6a9519c87826436a698ea2ebf85298e5a7ccdb4e40\""
May 10 00:00:56.805977 containerd[2025]: time="2025-05-10T00:00:56.805813933Z" level=info msg="StartContainer for \"6c503c159fbc724a54182e6a9519c87826436a698ea2ebf85298e5a7ccdb4e40\""
May 10 00:00:56.854620 systemd[1]: Started cri-containerd-6c503c159fbc724a54182e6a9519c87826436a698ea2ebf85298e5a7ccdb4e40.scope - libcontainer container 6c503c159fbc724a54182e6a9519c87826436a698ea2ebf85298e5a7ccdb4e40.
May 10 00:00:56.903427 containerd[2025]: time="2025-05-10T00:00:56.903227509Z" level=info msg="StartContainer for \"6c503c159fbc724a54182e6a9519c87826436a698ea2ebf85298e5a7ccdb4e40\" returns successfully"
May 10 00:00:56.920601 systemd[1]: cri-containerd-6c503c159fbc724a54182e6a9519c87826436a698ea2ebf85298e5a7ccdb4e40.scope: Deactivated successfully.
May 10 00:00:56.973516 containerd[2025]: time="2025-05-10T00:00:56.973419805Z" level=info msg="shim disconnected" id=6c503c159fbc724a54182e6a9519c87826436a698ea2ebf85298e5a7ccdb4e40 namespace=k8s.io
May 10 00:00:56.973516 containerd[2025]: time="2025-05-10T00:00:56.973493173Z" level=warning msg="cleaning up after shim disconnected" id=6c503c159fbc724a54182e6a9519c87826436a698ea2ebf85298e5a7ccdb4e40 namespace=k8s.io
May 10 00:00:56.973516 containerd[2025]: time="2025-05-10T00:00:56.973514617Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 10 00:00:57.116434 containerd[2025]: time="2025-05-10T00:00:57.115021726Z" level=info msg="CreateContainer within sandbox \"b0ad1c09401edff7b3112c942a7298cb7422e7717fa2f21fa48ff0cf50fef137\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 10 00:00:57.146647 containerd[2025]: time="2025-05-10T00:00:57.146389726Z" level=info msg="CreateContainer within sandbox \"b0ad1c09401edff7b3112c942a7298cb7422e7717fa2f21fa48ff0cf50fef137\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"cbd81e043e75dcfe46cb4c480be943c0cad39e24d09c6833d6618e820287d6a5\""
May 10 00:00:57.147970 containerd[2025]: time="2025-05-10T00:00:57.147566854Z" level=info msg="StartContainer for \"cbd81e043e75dcfe46cb4c480be943c0cad39e24d09c6833d6618e820287d6a5\""
May 10 00:00:57.203652 systemd[1]: Started cri-containerd-cbd81e043e75dcfe46cb4c480be943c0cad39e24d09c6833d6618e820287d6a5.scope - libcontainer container cbd81e043e75dcfe46cb4c480be943c0cad39e24d09c6833d6618e820287d6a5.
May 10 00:00:57.251161 containerd[2025]: time="2025-05-10T00:00:57.251083631Z" level=info msg="StartContainer for \"cbd81e043e75dcfe46cb4c480be943c0cad39e24d09c6833d6618e820287d6a5\" returns successfully"
May 10 00:00:57.264977 systemd[1]: cri-containerd-cbd81e043e75dcfe46cb4c480be943c0cad39e24d09c6833d6618e820287d6a5.scope: Deactivated successfully.
May 10 00:00:57.310502 containerd[2025]: time="2025-05-10T00:00:57.310176299Z" level=info msg="shim disconnected" id=cbd81e043e75dcfe46cb4c480be943c0cad39e24d09c6833d6618e820287d6a5 namespace=k8s.io
May 10 00:00:57.310502 containerd[2025]: time="2025-05-10T00:00:57.310340591Z" level=warning msg="cleaning up after shim disconnected" id=cbd81e043e75dcfe46cb4c480be943c0cad39e24d09c6833d6618e820287d6a5 namespace=k8s.io
May 10 00:00:57.310502 containerd[2025]: time="2025-05-10T00:00:57.310366031Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 10 00:00:58.116487 containerd[2025]: time="2025-05-10T00:00:58.116196743Z" level=info msg="CreateContainer within sandbox \"b0ad1c09401edff7b3112c942a7298cb7422e7717fa2f21fa48ff0cf50fef137\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 10 00:00:58.151035 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2103635646.mount: Deactivated successfully.
May 10 00:00:58.153109 containerd[2025]: time="2025-05-10T00:00:58.152189327Z" level=info msg="CreateContainer within sandbox \"b0ad1c09401edff7b3112c942a7298cb7422e7717fa2f21fa48ff0cf50fef137\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0a3e863bd460de93f4baea42682003ba3293ec110810e07ca3e58e6355558c97\""
May 10 00:00:58.155738 containerd[2025]: time="2025-05-10T00:00:58.155153147Z" level=info msg="StartContainer for \"0a3e863bd460de93f4baea42682003ba3293ec110810e07ca3e58e6355558c97\""
May 10 00:00:58.217592 systemd[1]: Started cri-containerd-0a3e863bd460de93f4baea42682003ba3293ec110810e07ca3e58e6355558c97.scope - libcontainer container 0a3e863bd460de93f4baea42682003ba3293ec110810e07ca3e58e6355558c97.
May 10 00:00:58.270414 containerd[2025]: time="2025-05-10T00:00:58.270079980Z" level=info msg="StartContainer for \"0a3e863bd460de93f4baea42682003ba3293ec110810e07ca3e58e6355558c97\" returns successfully"
May 10 00:00:58.275604 systemd[1]: cri-containerd-0a3e863bd460de93f4baea42682003ba3293ec110810e07ca3e58e6355558c97.scope: Deactivated successfully.
May 10 00:00:58.326472 containerd[2025]: time="2025-05-10T00:00:58.326163936Z" level=info msg="shim disconnected" id=0a3e863bd460de93f4baea42682003ba3293ec110810e07ca3e58e6355558c97 namespace=k8s.io
May 10 00:00:58.326472 containerd[2025]: time="2025-05-10T00:00:58.326235744Z" level=warning msg="cleaning up after shim disconnected" id=0a3e863bd460de93f4baea42682003ba3293ec110810e07ca3e58e6355558c97 namespace=k8s.io
May 10 00:00:58.326472 containerd[2025]: time="2025-05-10T00:00:58.326256180Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 10 00:00:58.488518 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a3e863bd460de93f4baea42682003ba3293ec110810e07ca3e58e6355558c97-rootfs.mount: Deactivated successfully.
May 10 00:00:58.808137 kubelet[3395]: E0510 00:00:58.807712 3395 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 10 00:00:59.123263 containerd[2025]: time="2025-05-10T00:00:59.122720772Z" level=info msg="CreateContainer within sandbox \"b0ad1c09401edff7b3112c942a7298cb7422e7717fa2f21fa48ff0cf50fef137\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 10 00:00:59.152879 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3696721620.mount: Deactivated successfully.
May 10 00:00:59.157783 containerd[2025]: time="2025-05-10T00:00:59.157712136Z" level=info msg="CreateContainer within sandbox \"b0ad1c09401edff7b3112c942a7298cb7422e7717fa2f21fa48ff0cf50fef137\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"164f66f24228f60ac2209e2f674b4acfd48e0d515a9c0fe0cbbf915de86eff83\""
May 10 00:00:59.160588 containerd[2025]: time="2025-05-10T00:00:59.159498312Z" level=info msg="StartContainer for \"164f66f24228f60ac2209e2f674b4acfd48e0d515a9c0fe0cbbf915de86eff83\""
May 10 00:00:59.239966 systemd[1]: Started cri-containerd-164f66f24228f60ac2209e2f674b4acfd48e0d515a9c0fe0cbbf915de86eff83.scope - libcontainer container 164f66f24228f60ac2209e2f674b4acfd48e0d515a9c0fe0cbbf915de86eff83.
May 10 00:00:59.287993 systemd[1]: cri-containerd-164f66f24228f60ac2209e2f674b4acfd48e0d515a9c0fe0cbbf915de86eff83.scope: Deactivated successfully.
May 10 00:00:59.294956 containerd[2025]: time="2025-05-10T00:00:59.294620833Z" level=info msg="StartContainer for \"164f66f24228f60ac2209e2f674b4acfd48e0d515a9c0fe0cbbf915de86eff83\" returns successfully"
May 10 00:00:59.374147 containerd[2025]: time="2025-05-10T00:00:59.373621477Z" level=info msg="shim disconnected" id=164f66f24228f60ac2209e2f674b4acfd48e0d515a9c0fe0cbbf915de86eff83 namespace=k8s.io
May 10 00:00:59.374147 containerd[2025]: time="2025-05-10T00:00:59.373700377Z" level=warning msg="cleaning up after shim disconnected" id=164f66f24228f60ac2209e2f674b4acfd48e0d515a9c0fe0cbbf915de86eff83 namespace=k8s.io
May 10 00:00:59.374147 containerd[2025]: time="2025-05-10T00:00:59.373725517Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 10 00:00:59.486357 systemd[1]: run-containerd-runc-k8s.io-164f66f24228f60ac2209e2f674b4acfd48e0d515a9c0fe0cbbf915de86eff83-runc.CXBNps.mount: Deactivated successfully.
May 10 00:00:59.486543 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-164f66f24228f60ac2209e2f674b4acfd48e0d515a9c0fe0cbbf915de86eff83-rootfs.mount: Deactivated successfully.
May 10 00:01:00.132158 containerd[2025]: time="2025-05-10T00:01:00.132077641Z" level=info msg="CreateContainer within sandbox \"b0ad1c09401edff7b3112c942a7298cb7422e7717fa2f21fa48ff0cf50fef137\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 10 00:01:00.172049 containerd[2025]: time="2025-05-10T00:01:00.171865729Z" level=info msg="CreateContainer within sandbox \"b0ad1c09401edff7b3112c942a7298cb7422e7717fa2f21fa48ff0cf50fef137\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"604d53e5cdf3aba2c428430045171569f4c0e93348485b1bf65ba81ce6bf334a\""
May 10 00:01:00.173273 containerd[2025]: time="2025-05-10T00:01:00.173083933Z" level=info msg="StartContainer for \"604d53e5cdf3aba2c428430045171569f4c0e93348485b1bf65ba81ce6bf334a\""
May 10 00:01:00.229617 systemd[1]: Started cri-containerd-604d53e5cdf3aba2c428430045171569f4c0e93348485b1bf65ba81ce6bf334a.scope - libcontainer container 604d53e5cdf3aba2c428430045171569f4c0e93348485b1bf65ba81ce6bf334a.
May 10 00:01:00.282409 containerd[2025]: time="2025-05-10T00:01:00.282274886Z" level=info msg="StartContainer for \"604d53e5cdf3aba2c428430045171569f4c0e93348485b1bf65ba81ce6bf334a\" returns successfully"
May 10 00:01:01.061512 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
May 10 00:01:01.349938 kubelet[3395]: I0510 00:01:01.346353 3395 setters.go:600] "Node became not ready" node="ip-172-31-18-52" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-10T00:01:01Z","lastTransitionTime":"2025-05-10T00:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 10 00:01:05.382403 systemd-networkd[1908]: lxc_health: Link UP
May 10 00:01:05.392400 (udev-worker)[6056]: Network interface NamePolicy= disabled on kernel command line.
May 10 00:01:05.396385 systemd-networkd[1908]: lxc_health: Gained carrier
May 10 00:01:06.541553 systemd-networkd[1908]: lxc_health: Gained IPv6LL
May 10 00:01:06.673259 kubelet[3395]: I0510 00:01:06.673156 3395 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fcr26" podStartSLOduration=12.673135438 podStartE2EDuration="12.673135438s" podCreationTimestamp="2025-05-10 00:00:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:01:01.171024038 +0000 UTC m=+132.825498721" watchObservedRunningTime="2025-05-10 00:01:06.673135438 +0000 UTC m=+138.327610097"
May 10 00:01:09.482912 ntpd[1986]: Listen normally on 15 lxc_health [fe80::b4ae:1bff:fe15:a390%14]:123
May 10 00:01:09.483498 ntpd[1986]: 10 May 00:01:09 ntpd[1986]: Listen normally on 15 lxc_health [fe80::b4ae:1bff:fe15:a390%14]:123
May 10 00:01:10.964109 systemd[1]: run-containerd-runc-k8s.io-604d53e5cdf3aba2c428430045171569f4c0e93348485b1bf65ba81ce6bf334a-runc.MuIPYz.mount: Deactivated successfully.
May 10 00:01:11.088649 sshd[5209]: pam_unix(sshd:session): session closed for user core
May 10 00:01:11.095553 systemd[1]: sshd@31-172.31.18.52:22-147.75.109.163:59420.service: Deactivated successfully.
May 10 00:01:11.102000 systemd[1]: session-32.scope: Deactivated successfully.
May 10 00:01:11.107375 systemd-logind[1993]: Session 32 logged out. Waiting for processes to exit.
May 10 00:01:11.110487 systemd-logind[1993]: Removed session 32.