May 14 23:49:04.204524 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
May 14 23:49:04.204569 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Wed May 14 22:22:56 -00 2025
May 14 23:49:04.204595 kernel: KASLR disabled due to lack of seed
May 14 23:49:04.204611 kernel: efi: EFI v2.7 by EDK II
May 14 23:49:04.204629 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a733a98 MEMRESERVE=0x78557598
May 14 23:49:04.204644 kernel: secureboot: Secure boot disabled
May 14 23:49:04.204662 kernel: ACPI: Early table checksum verification disabled
May 14 23:49:04.204677 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
May 14 23:49:04.204694 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
May 14 23:49:04.204709 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
May 14 23:49:04.204729 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
May 14 23:49:04.204745 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
May 14 23:49:04.204761 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
May 14 23:49:04.204776 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
May 14 23:49:04.204795 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
May 14 23:49:04.204815 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
May 14 23:49:04.204833 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
May 14 23:49:04.204849 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
May 14 23:49:04.204865 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
May 14 23:49:04.204881 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
May 14 23:49:04.204898 kernel: printk: bootconsole [uart0] enabled
May 14 23:49:04.204914 kernel: NUMA: Failed to initialise from firmware
May 14 23:49:04.204930 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
May 14 23:49:04.204946 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
May 14 23:49:04.204962 kernel: Zone ranges:
May 14 23:49:04.204978 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
May 14 23:49:04.204999 kernel: DMA32 empty
May 14 23:49:04.205016 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
May 14 23:49:04.207093 kernel: Movable zone start for each node
May 14 23:49:04.207143 kernel: Early memory node ranges
May 14 23:49:04.207161 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
May 14 23:49:04.207178 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
May 14 23:49:04.207194 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
May 14 23:49:04.207211 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
May 14 23:49:04.207227 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
May 14 23:49:04.207243 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
May 14 23:49:04.207259 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
May 14 23:49:04.207276 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
May 14 23:49:04.207303 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
May 14 23:49:04.207320 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
May 14 23:49:04.207344 kernel: psci: probing for conduit method from ACPI.
May 14 23:49:04.207361 kernel: psci: PSCIv1.0 detected in firmware.
May 14 23:49:04.207378 kernel: psci: Using standard PSCI v0.2 function IDs
May 14 23:49:04.207400 kernel: psci: Trusted OS migration not required
May 14 23:49:04.207417 kernel: psci: SMC Calling Convention v1.1
May 14 23:49:04.207434 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 14 23:49:04.207451 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 14 23:49:04.207469 kernel: pcpu-alloc: [0] 0 [0] 1
May 14 23:49:04.207487 kernel: Detected PIPT I-cache on CPU0
May 14 23:49:04.207504 kernel: CPU features: detected: GIC system register CPU interface
May 14 23:49:04.207521 kernel: CPU features: detected: Spectre-v2
May 14 23:49:04.207538 kernel: CPU features: detected: Spectre-v3a
May 14 23:49:04.207555 kernel: CPU features: detected: Spectre-BHB
May 14 23:49:04.207572 kernel: CPU features: detected: ARM erratum 1742098
May 14 23:49:04.207589 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
May 14 23:49:04.207611 kernel: alternatives: applying boot alternatives
May 14 23:49:04.207630 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=bfa141d6f8686d8fe96245516ecbaee60c938beef41636c397e3939a2c9a6ed9
May 14 23:49:04.207650 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 14 23:49:04.207667 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 14 23:49:04.207685 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 14 23:49:04.207702 kernel: Fallback order for Node 0: 0
May 14 23:49:04.207719 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
May 14 23:49:04.207736 kernel: Policy zone: Normal
May 14 23:49:04.207753 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 14 23:49:04.207770 kernel: software IO TLB: area num 2.
May 14 23:49:04.207791 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
May 14 23:49:04.207809 kernel: Memory: 3821176K/4030464K available (10368K kernel code, 2186K rwdata, 8100K rodata, 38336K init, 897K bss, 209288K reserved, 0K cma-reserved)
May 14 23:49:04.207826 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 14 23:49:04.207843 kernel: rcu: Preemptible hierarchical RCU implementation.
May 14 23:49:04.207862 kernel: rcu: RCU event tracing is enabled.
May 14 23:49:04.207880 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 14 23:49:04.207898 kernel: Trampoline variant of Tasks RCU enabled.
May 14 23:49:04.207915 kernel: Tracing variant of Tasks RCU enabled.
May 14 23:49:04.207932 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 14 23:49:04.207950 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 14 23:49:04.207967 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 14 23:49:04.207988 kernel: GICv3: 96 SPIs implemented
May 14 23:49:04.208006 kernel: GICv3: 0 Extended SPIs implemented
May 14 23:49:04.208025 kernel: Root IRQ handler: gic_handle_irq
May 14 23:49:04.208068 kernel: GICv3: GICv3 features: 16 PPIs
May 14 23:49:04.208088 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
May 14 23:49:04.208105 kernel: ITS [mem 0x10080000-0x1009ffff]
May 14 23:49:04.208123 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
May 14 23:49:04.208141 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
May 14 23:49:04.208160 kernel: GICv3: using LPI property table @0x00000004000d0000
May 14 23:49:04.208178 kernel: ITS: Using hypervisor restricted LPI range [128]
May 14 23:49:04.208196 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
May 14 23:49:04.208214 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 14 23:49:04.208239 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
May 14 23:49:04.208257 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
May 14 23:49:04.208275 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
May 14 23:49:04.208293 kernel: Console: colour dummy device 80x25
May 14 23:49:04.208312 kernel: printk: console [tty1] enabled
May 14 23:49:04.208331 kernel: ACPI: Core revision 20230628
May 14 23:49:04.208351 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
May 14 23:49:04.208371 kernel: pid_max: default: 32768 minimum: 301
May 14 23:49:04.208389 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 14 23:49:04.208407 kernel: landlock: Up and running.
May 14 23:49:04.208431 kernel: SELinux: Initializing.
May 14 23:49:04.208449 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 14 23:49:04.208469 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 14 23:49:04.208487 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 14 23:49:04.208505 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 14 23:49:04.208525 kernel: rcu: Hierarchical SRCU implementation.
May 14 23:49:04.208543 kernel: rcu: Max phase no-delay instances is 400.
May 14 23:49:04.208561 kernel: Platform MSI: ITS@0x10080000 domain created
May 14 23:49:04.208583 kernel: PCI/MSI: ITS@0x10080000 domain created
May 14 23:49:04.208601 kernel: Remapping and enabling EFI services.
May 14 23:49:04.208618 kernel: smp: Bringing up secondary CPUs ...
May 14 23:49:04.208636 kernel: Detected PIPT I-cache on CPU1
May 14 23:49:04.208654 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
May 14 23:49:04.208673 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
May 14 23:49:04.208695 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
May 14 23:49:04.208712 kernel: smp: Brought up 1 node, 2 CPUs
May 14 23:49:04.208730 kernel: SMP: Total of 2 processors activated.
May 14 23:49:04.208747 kernel: CPU features: detected: 32-bit EL0 Support
May 14 23:49:04.208768 kernel: CPU features: detected: 32-bit EL1 Support
May 14 23:49:04.208786 kernel: CPU features: detected: CRC32 instructions
May 14 23:49:04.208815 kernel: CPU: All CPU(s) started at EL1
May 14 23:49:04.208837 kernel: alternatives: applying system-wide alternatives
May 14 23:49:04.208855 kernel: devtmpfs: initialized
May 14 23:49:04.208873 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 14 23:49:04.208892 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 14 23:49:04.208910 kernel: pinctrl core: initialized pinctrl subsystem
May 14 23:49:04.208928 kernel: SMBIOS 3.0.0 present.
May 14 23:49:04.208951 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
May 14 23:49:04.208970 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 14 23:49:04.208988 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 14 23:49:04.209006 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 14 23:49:04.209025 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 14 23:49:04.210267 kernel: audit: initializing netlink subsys (disabled)
May 14 23:49:04.210293 kernel: audit: type=2000 audit(0.219:1): state=initialized audit_enabled=0 res=1
May 14 23:49:04.210322 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 14 23:49:04.210341 kernel: cpuidle: using governor menu
May 14 23:49:04.210360 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 14 23:49:04.210382 kernel: ASID allocator initialised with 65536 entries
May 14 23:49:04.210403 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 14 23:49:04.210421 kernel: Serial: AMBA PL011 UART driver
May 14 23:49:04.210439 kernel: Modules: 17744 pages in range for non-PLT usage
May 14 23:49:04.210457 kernel: Modules: 509264 pages in range for PLT usage
May 14 23:49:04.210475 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 14 23:49:04.210512 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 14 23:49:04.210535 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 14 23:49:04.210553 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 14 23:49:04.210572 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 14 23:49:04.210590 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 14 23:49:04.210608 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 14 23:49:04.210626 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 14 23:49:04.210644 kernel: ACPI: Added _OSI(Module Device)
May 14 23:49:04.210662 kernel: ACPI: Added _OSI(Processor Device)
May 14 23:49:04.210686 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 14 23:49:04.210704 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 14 23:49:04.210722 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 14 23:49:04.210743 kernel: ACPI: Interpreter enabled
May 14 23:49:04.210762 kernel: ACPI: Using GIC for interrupt routing
May 14 23:49:04.210782 kernel: ACPI: MCFG table detected, 1 entries
May 14 23:49:04.210801 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
May 14 23:49:04.211136 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 14 23:49:04.211349 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 14 23:49:04.211555 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 14 23:49:04.211757 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
May 14 23:49:04.211965 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
May 14 23:49:04.211990 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
May 14 23:49:04.212008 kernel: acpiphp: Slot [1] registered
May 14 23:49:04.212027 kernel: acpiphp: Slot [2] registered
May 14 23:49:04.215153 kernel: acpiphp: Slot [3] registered
May 14 23:49:04.215187 kernel: acpiphp: Slot [4] registered
May 14 23:49:04.215207 kernel: acpiphp: Slot [5] registered
May 14 23:49:04.215226 kernel: acpiphp: Slot [6] registered
May 14 23:49:04.215244 kernel: acpiphp: Slot [7] registered
May 14 23:49:04.215262 kernel: acpiphp: Slot [8] registered
May 14 23:49:04.215281 kernel: acpiphp: Slot [9] registered
May 14 23:49:04.215300 kernel: acpiphp: Slot [10] registered
May 14 23:49:04.215318 kernel: acpiphp: Slot [11] registered
May 14 23:49:04.215337 kernel: acpiphp: Slot [12] registered
May 14 23:49:04.215355 kernel: acpiphp: Slot [13] registered
May 14 23:49:04.215378 kernel: acpiphp: Slot [14] registered
May 14 23:49:04.215397 kernel: acpiphp: Slot [15] registered
May 14 23:49:04.215415 kernel: acpiphp: Slot [16] registered
May 14 23:49:04.215434 kernel: acpiphp: Slot [17] registered
May 14 23:49:04.215452 kernel: acpiphp: Slot [18] registered
May 14 23:49:04.215470 kernel: acpiphp: Slot [19] registered
May 14 23:49:04.215488 kernel: acpiphp: Slot [20] registered
May 14 23:49:04.215506 kernel: acpiphp: Slot [21] registered
May 14 23:49:04.215524 kernel: acpiphp: Slot [22] registered
May 14 23:49:04.215547 kernel: acpiphp: Slot [23] registered
May 14 23:49:04.215566 kernel: acpiphp: Slot [24] registered
May 14 23:49:04.215584 kernel: acpiphp: Slot [25] registered
May 14 23:49:04.215602 kernel: acpiphp: Slot [26] registered
May 14 23:49:04.215620 kernel: acpiphp: Slot [27] registered
May 14 23:49:04.215638 kernel: acpiphp: Slot [28] registered
May 14 23:49:04.215672 kernel: acpiphp: Slot [29] registered
May 14 23:49:04.215697 kernel: acpiphp: Slot [30] registered
May 14 23:49:04.215716 kernel: acpiphp: Slot [31] registered
May 14 23:49:04.215735 kernel: PCI host bridge to bus 0000:00
May 14 23:49:04.215996 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
May 14 23:49:04.216661 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 14 23:49:04.216871 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
May 14 23:49:04.217096 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
May 14 23:49:04.217338 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
May 14 23:49:04.217582 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
May 14 23:49:04.217808 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
May 14 23:49:04.218051 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
May 14 23:49:04.218283 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
May 14 23:49:04.218512 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
May 14 23:49:04.218757 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
May 14 23:49:04.218979 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
May 14 23:49:04.219235 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
May 14 23:49:04.219469 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
May 14 23:49:04.219697 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
May 14 23:49:04.219908 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
May 14 23:49:04.220201 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
May 14 23:49:04.220425 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
May 14 23:49:04.220638 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
May 14 23:49:04.220855 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
May 14 23:49:04.221100 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
May 14 23:49:04.221295 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 14 23:49:04.221502 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
May 14 23:49:04.221531 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 14 23:49:04.221552 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 14 23:49:04.221571 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 14 23:49:04.221591 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 14 23:49:04.221612 kernel: iommu: Default domain type: Translated
May 14 23:49:04.221645 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 14 23:49:04.221664 kernel: efivars: Registered efivars operations
May 14 23:49:04.221683 kernel: vgaarb: loaded
May 14 23:49:04.221702 kernel: clocksource: Switched to clocksource arch_sys_counter
May 14 23:49:04.221721 kernel: VFS: Disk quotas dquot_6.6.0
May 14 23:49:04.221740 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 14 23:49:04.221759 kernel: pnp: PnP ACPI init
May 14 23:49:04.221994 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
May 14 23:49:04.222027 kernel: pnp: PnP ACPI: found 1 devices
May 14 23:49:04.222085 kernel: NET: Registered PF_INET protocol family
May 14 23:49:04.222104 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 14 23:49:04.222123 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 14 23:49:04.222142 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 14 23:49:04.222161 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 14 23:49:04.222179 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 14 23:49:04.222198 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 14 23:49:04.222217 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 23:49:04.222242 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 23:49:04.222261 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 14 23:49:04.222279 kernel: PCI: CLS 0 bytes, default 64
May 14 23:49:04.222297 kernel: kvm [1]: HYP mode not available
May 14 23:49:04.222316 kernel: Initialise system trusted keyrings
May 14 23:49:04.222334 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 14 23:49:04.222353 kernel: Key type asymmetric registered
May 14 23:49:04.222371 kernel: Asymmetric key parser 'x509' registered
May 14 23:49:04.222389 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 14 23:49:04.222412 kernel: io scheduler mq-deadline registered
May 14 23:49:04.222431 kernel: io scheduler kyber registered
May 14 23:49:04.222449 kernel: io scheduler bfq registered
May 14 23:49:04.222707 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
May 14 23:49:04.222741 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 14 23:49:04.222760 kernel: ACPI: button: Power Button [PWRB]
May 14 23:49:04.222778 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
May 14 23:49:04.222797 kernel: ACPI: button: Sleep Button [SLPB]
May 14 23:49:04.222822 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 14 23:49:04.222842 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
May 14 23:49:04.223080 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
May 14 23:49:04.223108 kernel: printk: console [ttyS0] disabled
May 14 23:49:04.223128 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
May 14 23:49:04.223147 kernel: printk: console [ttyS0] enabled
May 14 23:49:04.223165 kernel: printk: bootconsole [uart0] disabled
May 14 23:49:04.223184 kernel: thunder_xcv, ver 1.0
May 14 23:49:04.223202 kernel: thunder_bgx, ver 1.0
May 14 23:49:04.223220 kernel: nicpf, ver 1.0
May 14 23:49:04.223245 kernel: nicvf, ver 1.0
May 14 23:49:04.223462 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 14 23:49:04.223664 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-14T23:49:03 UTC (1747266543)
May 14 23:49:04.223689 kernel: hid: raw HID events driver (C) Jiri Kosina
May 14 23:49:04.223708 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
May 14 23:49:04.223727 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 14 23:49:04.223745 kernel: watchdog: Hard watchdog permanently disabled
May 14 23:49:04.223769 kernel: NET: Registered PF_INET6 protocol family
May 14 23:49:04.223788 kernel: Segment Routing with IPv6
May 14 23:49:04.223806 kernel: In-situ OAM (IOAM) with IPv6
May 14 23:49:04.223824 kernel: NET: Registered PF_PACKET protocol family
May 14 23:49:04.223842 kernel: Key type dns_resolver registered
May 14 23:49:04.223861 kernel: registered taskstats version 1
May 14 23:49:04.223879 kernel: Loading compiled-in X.509 certificates
May 14 23:49:04.223897 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: cdb7ce3984a1665183e8a6ab3419833bc5e4e7f4'
May 14 23:49:04.223915 kernel: Key type .fscrypt registered
May 14 23:49:04.223933 kernel: Key type fscrypt-provisioning registered
May 14 23:49:04.223956 kernel: ima: No TPM chip found, activating TPM-bypass!
May 14 23:49:04.223974 kernel: ima: Allocated hash algorithm: sha1
May 14 23:49:04.223993 kernel: ima: No architecture policies found
May 14 23:49:04.224011 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 14 23:49:04.224030 kernel: clk: Disabling unused clocks
May 14 23:49:04.224085 kernel: Freeing unused kernel memory: 38336K
May 14 23:49:04.224104 kernel: Run /init as init process
May 14 23:49:04.224123 kernel: with arguments:
May 14 23:49:04.224141 kernel: /init
May 14 23:49:04.224165 kernel: with environment:
May 14 23:49:04.224183 kernel: HOME=/
May 14 23:49:04.224249 kernel: TERM=linux
May 14 23:49:04.224270 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 14 23:49:04.224291 systemd[1]: Successfully made /usr/ read-only.
May 14 23:49:04.224316 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 14 23:49:04.224338 systemd[1]: Detected virtualization amazon.
May 14 23:49:04.224365 systemd[1]: Detected architecture arm64.
May 14 23:49:04.224385 systemd[1]: Running in initrd.
May 14 23:49:04.224404 systemd[1]: No hostname configured, using default hostname.
May 14 23:49:04.224425 systemd[1]: Hostname set to .
May 14 23:49:04.224444 systemd[1]: Initializing machine ID from VM UUID.
May 14 23:49:04.224463 systemd[1]: Queued start job for default target initrd.target.
May 14 23:49:04.224484 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 23:49:04.224503 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 23:49:04.224524 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 14 23:49:04.224550 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 14 23:49:04.224570 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 14 23:49:04.224592 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 14 23:49:04.224614 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 14 23:49:04.224635 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 14 23:49:04.224655 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 23:49:04.224679 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 14 23:49:04.224699 systemd[1]: Reached target paths.target - Path Units.
May 14 23:49:04.224718 systemd[1]: Reached target slices.target - Slice Units.
May 14 23:49:04.224738 systemd[1]: Reached target swap.target - Swaps.
May 14 23:49:04.224757 systemd[1]: Reached target timers.target - Timer Units.
May 14 23:49:04.224777 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 14 23:49:04.224796 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 14 23:49:04.224816 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 14 23:49:04.224836 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 14 23:49:04.224859 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 14 23:49:04.224879 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 14 23:49:04.224899 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 23:49:04.224919 systemd[1]: Reached target sockets.target - Socket Units.
May 14 23:49:04.224939 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 14 23:49:04.224959 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 14 23:49:04.224978 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 14 23:49:04.224998 systemd[1]: Starting systemd-fsck-usr.service...
May 14 23:49:04.225022 systemd[1]: Starting systemd-journald.service - Journal Service...
May 14 23:49:04.225108 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 14 23:49:04.225131 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 23:49:04.225151 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 14 23:49:04.225171 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 23:49:04.225204 systemd[1]: Finished systemd-fsck-usr.service.
May 14 23:49:04.225238 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 14 23:49:04.225259 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 14 23:49:04.225278 kernel: Bridge firewalling registered
May 14 23:49:04.225298 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 14 23:49:04.225318 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 23:49:04.225381 systemd-journald[252]: Collecting audit messages is disabled.
May 14 23:49:04.225430 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 23:49:04.225451 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 14 23:49:04.225472 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 14 23:49:04.225492 systemd-journald[252]: Journal started
May 14 23:49:04.225533 systemd-journald[252]: Runtime Journal (/run/log/journal/ec2acbb2620e108c1313526ff24a647d) is 8M, max 75.3M, 67.3M free.
May 14 23:49:04.132302 systemd-modules-load[253]: Inserted module 'overlay'
May 14 23:49:04.157654 systemd-modules-load[253]: Inserted module 'br_netfilter'
May 14 23:49:04.235055 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 14 23:49:04.235122 systemd[1]: Started systemd-journald.service - Journal Service.
May 14 23:49:04.234359 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 23:49:04.242289 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 14 23:49:04.275672 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 23:49:04.279893 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 23:49:04.300487 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 14 23:49:04.317756 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 23:49:04.339811 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 14 23:49:04.371232 dracut-cmdline[292]: dracut-dracut-053
May 14 23:49:04.380530 dracut-cmdline[292]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=bfa141d6f8686d8fe96245516ecbaee60c938beef41636c397e3939a2c9a6ed9
May 14 23:49:04.407574 systemd-resolved[283]: Positive Trust Anchors:
May 14 23:49:04.407612 systemd-resolved[283]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 14 23:49:04.407673 systemd-resolved[283]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 14 23:49:04.563554 kernel: SCSI subsystem initialized
May 14 23:49:04.570177 kernel: Loading iSCSI transport class v2.0-870.
May 14 23:49:04.582153 kernel: iscsi: registered transport (tcp)
May 14 23:49:04.604435 kernel: iscsi: registered transport (qla4xxx)
May 14 23:49:04.604523 kernel: QLogic iSCSI HBA Driver
May 14 23:49:04.665129 kernel: random: crng init done
May 14 23:49:04.664309 systemd-resolved[283]: Defaulting to hostname 'linux'.
May 14 23:49:04.667891 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 14 23:49:04.670145 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 14 23:49:04.696531 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 14 23:49:04.704306 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 14 23:49:04.746306 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 14 23:49:04.746386 kernel: device-mapper: uevent: version 1.0.3
May 14 23:49:04.746413 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 14 23:49:04.812080 kernel: raid6: neonx8 gen() 6583 MB/s
May 14 23:49:04.829066 kernel: raid6: neonx4 gen() 6536 MB/s
May 14 23:49:04.846067 kernel: raid6: neonx2 gen() 5447 MB/s
May 14 23:49:04.863066 kernel: raid6: neonx1 gen() 3937 MB/s
May 14 23:49:04.880066 kernel: raid6: int64x8 gen() 3599 MB/s
May 14 23:49:04.897066 kernel: raid6: int64x4 gen() 3711 MB/s
May 14 23:49:04.914066 kernel: raid6: int64x2 gen() 3612 MB/s
May 14 23:49:04.931860 kernel: raid6: int64x1 gen() 2771 MB/s
May 14 23:49:04.931892 kernel: raid6: using algorithm neonx8 gen() 6583 MB/s
May 14 23:49:04.949814 kernel: raid6: .... xor() 4814 MB/s, rmw enabled
May 14 23:49:04.949850 kernel: raid6: using neon recovery algorithm
May 14 23:49:04.957070 kernel: xor: measuring software checksum speed
May 14 23:49:04.957133 kernel: 8regs : 11891 MB/sec
May 14 23:49:04.959069 kernel: 32regs : 12005 MB/sec
May 14 23:49:04.961051 kernel: arm64_neon : 8946 MB/sec
May 14 23:49:04.961084 kernel: xor: using function: 32regs (12005 MB/sec)
May 14 23:49:05.044091 kernel: Btrfs loaded, zoned=no, fsverity=no
May 14 23:49:05.062563 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 14 23:49:05.072358 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 23:49:05.110264 systemd-udevd[473]: Using default interface naming scheme 'v255'.
May 14 23:49:05.120717 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 23:49:05.143435 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 14 23:49:05.169913 dracut-pre-trigger[486]: rd.md=0: removing MD RAID activation
May 14 23:49:05.225468 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 14 23:49:05.235356 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 14 23:49:05.361809 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 23:49:05.375666 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 14 23:49:05.425435 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 14 23:49:05.430649 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 14 23:49:05.450174 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 23:49:05.469257 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 14 23:49:05.493548 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 14 23:49:05.525144 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 14 23:49:05.584275 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 14 23:49:05.584342 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
May 14 23:49:05.593991 kernel: ena 0000:00:05.0: ENA device version: 0.10
May 14 23:49:05.594385 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
May 14 23:49:05.608189 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 14 23:49:05.630135 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:ad:82:5b:d0:07
May 14 23:49:05.608453 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 23:49:05.611212 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 14 23:49:05.613359 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 23:49:05.644512 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
May 14 23:49:05.644551 kernel: nvme nvme0: pci function 0000:00:04.0
May 14 23:49:05.613650 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 23:49:05.629693 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 14 23:49:05.654153 kernel: nvme nvme0: 2/0/0 default/read/poll queues
May 14 23:49:05.656412 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 23:49:05.663527 (udev-worker)[526]: Network interface NamePolicy= disabled on kernel command line.
May 14 23:49:05.669427 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 14 23:49:05.669473 kernel: GPT:9289727 != 16777215
May 14 23:49:05.669499 kernel: GPT:Alternate GPT header not at the end of the disk.
May 14 23:49:05.664721 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 14 23:49:05.675186 kernel: GPT:9289727 != 16777215
May 14 23:49:05.675221 kernel: GPT: Use GNU Parted to correct GPT errors.
May 14 23:49:05.675245 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 14 23:49:05.700130 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 23:49:05.711347 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 14 23:49:05.753585 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 23:49:05.773186 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (534)
May 14 23:49:05.832069 kernel: BTRFS: device fsid 369506fd-904a-45c2-a4ab-2d03e7866799 devid 1 transid 44 /dev/nvme0n1p3 scanned by (udev-worker) (535)
May 14 23:49:05.898660 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
May 14 23:49:05.940744 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
May 14 23:49:05.979869 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
May 14 23:49:06.002857 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
May 14 23:49:06.005475 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
May 14 23:49:06.022316 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 14 23:49:06.035649 disk-uuid[664]: Primary Header is updated.
May 14 23:49:06.035649 disk-uuid[664]: Secondary Entries is updated.
May 14 23:49:06.035649 disk-uuid[664]: Secondary Header is updated.
May 14 23:49:06.045095 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 14 23:49:07.060081 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 14 23:49:07.062056 disk-uuid[665]: The operation has completed successfully.
May 14 23:49:07.266794 systemd[1]: disk-uuid.service: Deactivated successfully.
May 14 23:49:07.266995 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 14 23:49:07.353329 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 14 23:49:07.364457 sh[926]: Success
May 14 23:49:07.390379 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 14 23:49:07.512267 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 14 23:49:07.516675 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 14 23:49:07.527338 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 14 23:49:07.557216 kernel: BTRFS info (device dm-0): first mount of filesystem 369506fd-904a-45c2-a4ab-2d03e7866799
May 14 23:49:07.557290 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 14 23:49:07.557316 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 14 23:49:07.560314 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 14 23:49:07.560348 kernel: BTRFS info (device dm-0): using free space tree
May 14 23:49:07.655075 kernel: BTRFS info (device dm-0): enabling ssd optimizations
May 14 23:49:07.691350 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 14 23:49:07.695354 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 14 23:49:07.708271 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 14 23:49:07.714273 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 14 23:49:07.753848 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01
May 14 23:49:07.753917 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
May 14 23:49:07.755435 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 14 23:49:07.762157 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 14 23:49:07.770134 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01
May 14 23:49:07.776126 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 14 23:49:07.786434 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 14 23:49:07.890861 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 14 23:49:07.910353 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 14 23:49:07.968151 systemd-networkd[1117]: lo: Link UP
May 14 23:49:07.968173 systemd-networkd[1117]: lo: Gained carrier
May 14 23:49:07.973483 systemd-networkd[1117]: Enumeration completed
May 14 23:49:07.973662 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 14 23:49:07.976137 systemd[1]: Reached target network.target - Network.
May 14 23:49:07.977924 systemd-networkd[1117]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 23:49:07.977932 systemd-networkd[1117]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 14 23:49:07.991685 systemd-networkd[1117]: eth0: Link UP
May 14 23:49:07.991705 systemd-networkd[1117]: eth0: Gained carrier
May 14 23:49:07.991723 systemd-networkd[1117]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 23:49:08.008112 systemd-networkd[1117]: eth0: DHCPv4 address 172.31.28.25/20, gateway 172.31.16.1 acquired from 172.31.16.1
May 14 23:49:08.160430 ignition[1024]: Ignition 2.20.0
May 14 23:49:08.160452 ignition[1024]: Stage: fetch-offline
May 14 23:49:08.160877 ignition[1024]: no configs at "/usr/lib/ignition/base.d"
May 14 23:49:08.160900 ignition[1024]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 14 23:49:08.161396 ignition[1024]: Ignition finished successfully
May 14 23:49:08.170294 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 14 23:49:08.181340 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 14 23:49:08.212638 ignition[1127]: Ignition 2.20.0
May 14 23:49:08.213165 ignition[1127]: Stage: fetch
May 14 23:49:08.213750 ignition[1127]: no configs at "/usr/lib/ignition/base.d"
May 14 23:49:08.213774 ignition[1127]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 14 23:49:08.214006 ignition[1127]: PUT http://169.254.169.254/latest/api/token: attempt #1
May 14 23:49:08.239114 ignition[1127]: PUT result: OK
May 14 23:49:08.243058 ignition[1127]: parsed url from cmdline: ""
May 14 23:49:08.243192 ignition[1127]: no config URL provided
May 14 23:49:08.243214 ignition[1127]: reading system config file "/usr/lib/ignition/user.ign"
May 14 23:49:08.243241 ignition[1127]: no config at "/usr/lib/ignition/user.ign"
May 14 23:49:08.243298 ignition[1127]: PUT http://169.254.169.254/latest/api/token: attempt #1
May 14 23:49:08.250877 ignition[1127]: PUT result: OK
May 14 23:49:08.250958 ignition[1127]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
May 14 23:49:08.254728 ignition[1127]: GET result: OK
May 14 23:49:08.254925 ignition[1127]: parsing config with SHA512: 10b7f8c6662a9351321f70f5874105b0ce34ce43f21090ca6d6c6807d13ca93b828d4040fa5036a592e2571e0878d9e99461cf03c3c055bec6fb5204d862c1c3
May 14 23:49:08.264697 unknown[1127]: fetched base config from "system"
May 14 23:49:08.264725 unknown[1127]: fetched base config from "system"
May 14 23:49:08.264740 unknown[1127]: fetched user config from "aws"
May 14 23:49:08.267693 ignition[1127]: fetch: fetch complete
May 14 23:49:08.267705 ignition[1127]: fetch: fetch passed
May 14 23:49:08.274490 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 14 23:49:08.267802 ignition[1127]: Ignition finished successfully
May 14 23:49:08.292416 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 14 23:49:08.323450 ignition[1134]: Ignition 2.20.0
May 14 23:49:08.323952 ignition[1134]: Stage: kargs
May 14 23:49:08.324599 ignition[1134]: no configs at "/usr/lib/ignition/base.d"
May 14 23:49:08.324624 ignition[1134]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 14 23:49:08.324791 ignition[1134]: PUT http://169.254.169.254/latest/api/token: attempt #1
May 14 23:49:08.327629 ignition[1134]: PUT result: OK
May 14 23:49:08.337909 ignition[1134]: kargs: kargs passed
May 14 23:49:08.338005 ignition[1134]: Ignition finished successfully
May 14 23:49:08.341895 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 14 23:49:08.350333 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 14 23:49:08.380441 ignition[1140]: Ignition 2.20.0
May 14 23:49:08.380475 ignition[1140]: Stage: disks
May 14 23:49:08.381361 ignition[1140]: no configs at "/usr/lib/ignition/base.d"
May 14 23:49:08.381387 ignition[1140]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 14 23:49:08.381563 ignition[1140]: PUT http://169.254.169.254/latest/api/token: attempt #1
May 14 23:49:08.383471 ignition[1140]: PUT result: OK
May 14 23:49:08.393372 ignition[1140]: disks: disks passed
May 14 23:49:08.393629 ignition[1140]: Ignition finished successfully
May 14 23:49:08.398763 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 14 23:49:08.401618 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 14 23:49:08.403838 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 14 23:49:08.406104 systemd[1]: Reached target local-fs.target - Local File Systems.
May 14 23:49:08.408009 systemd[1]: Reached target sysinit.target - System Initialization.
May 14 23:49:08.409983 systemd[1]: Reached target basic.target - Basic System.
May 14 23:49:08.429323 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 14 23:49:08.474278 systemd-fsck[1148]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 14 23:49:08.479269 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 14 23:49:08.491257 systemd[1]: Mounting sysroot.mount - /sysroot...
May 14 23:49:08.591341 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 737cda88-7069-47ce-b2bc-d891099a68fb r/w with ordered data mode. Quota mode: none.
May 14 23:49:08.592547 systemd[1]: Mounted sysroot.mount - /sysroot.
May 14 23:49:08.596198 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 14 23:49:08.610198 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 14 23:49:08.616942 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 14 23:49:08.621214 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 14 23:49:08.621311 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 14 23:49:08.634564 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 14 23:49:08.641023 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 14 23:49:08.644779 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1167)
May 14 23:49:08.649516 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01
May 14 23:49:08.649552 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
May 14 23:49:08.651123 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 14 23:49:08.659095 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 14 23:49:08.660329 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 14 23:49:08.671118 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 14 23:49:09.057871 initrd-setup-root[1191]: cut: /sysroot/etc/passwd: No such file or directory
May 14 23:49:09.087068 initrd-setup-root[1198]: cut: /sysroot/etc/group: No such file or directory
May 14 23:49:09.095975 initrd-setup-root[1205]: cut: /sysroot/etc/shadow: No such file or directory
May 14 23:49:09.103864 initrd-setup-root[1212]: cut: /sysroot/etc/gshadow: No such file or directory
May 14 23:49:09.339227 systemd-networkd[1117]: eth0: Gained IPv6LL
May 14 23:49:09.382492 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 14 23:49:09.391259 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 14 23:49:09.397194 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 14 23:49:09.420409 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 14 23:49:09.424083 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01
May 14 23:49:09.450124 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 14 23:49:09.465878 ignition[1279]: INFO : Ignition 2.20.0
May 14 23:49:09.465878 ignition[1279]: INFO : Stage: mount
May 14 23:49:09.469185 ignition[1279]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 23:49:09.469185 ignition[1279]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 14 23:49:09.473464 ignition[1279]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 14 23:49:09.476090 ignition[1279]: INFO : PUT result: OK
May 14 23:49:09.480924 ignition[1279]: INFO : mount: mount passed
May 14 23:49:09.480924 ignition[1279]: INFO : Ignition finished successfully
May 14 23:49:09.485616 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 14 23:49:09.500403 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 14 23:49:09.603429 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 14 23:49:09.627080 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1291)
May 14 23:49:09.631284 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01
May 14 23:49:09.631328 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
May 14 23:49:09.631354 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 14 23:49:09.638164 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 14 23:49:09.641098 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 14 23:49:09.674799 ignition[1308]: INFO : Ignition 2.20.0
May 14 23:49:09.674799 ignition[1308]: INFO : Stage: files
May 14 23:49:09.678144 ignition[1308]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 23:49:09.678144 ignition[1308]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 14 23:49:09.678144 ignition[1308]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 14 23:49:09.685194 ignition[1308]: INFO : PUT result: OK
May 14 23:49:09.689732 ignition[1308]: DEBUG : files: compiled without relabeling support, skipping
May 14 23:49:09.695466 ignition[1308]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 14 23:49:09.695466 ignition[1308]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 14 23:49:09.715543 ignition[1308]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 14 23:49:09.718617 ignition[1308]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 14 23:49:09.721573 unknown[1308]: wrote ssh authorized keys file for user: core
May 14 23:49:09.723931 ignition[1308]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 14 23:49:09.726666 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 14 23:49:09.726666 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 14 23:49:09.817225 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 14 23:49:10.123305 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 14 23:49:10.127251 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 14 23:49:10.127251 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
May 14 23:49:10.570934 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 14 23:49:10.688527 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 14 23:49:10.694159 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 14 23:49:10.694159 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 14 23:49:10.694159 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 14 23:49:10.694159 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 14 23:49:10.694159 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 14 23:49:10.713275 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 14 23:49:10.713275 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 14 23:49:10.713275 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 14 23:49:10.713275 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 14 23:49:10.713275 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 14 23:49:10.713275 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 14 23:49:10.713275 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 14 23:49:10.713275 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 14 23:49:10.713275 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
May 14 23:49:11.125516 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 14 23:49:11.455625 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 14 23:49:11.455625 ignition[1308]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 14 23:49:11.462523 ignition[1308]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 14 23:49:11.462523 ignition[1308]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 14 23:49:11.462523 ignition[1308]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 14 23:49:11.462523 ignition[1308]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
May 14 23:49:11.462523 ignition[1308]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
May 14 23:49:11.462523 ignition[1308]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
May 14 23:49:11.462523 ignition[1308]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 14 23:49:11.462523 ignition[1308]: INFO : files: files passed
May 14 23:49:11.462523 ignition[1308]: INFO : Ignition finished successfully
May 14 23:49:11.491104 systemd[1]: Finished ignition-files.service - Ignition (files).
May 14 23:49:11.507390 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 14 23:49:11.514648 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 14 23:49:11.528293 systemd[1]: ignition-quench.service: Deactivated successfully.
May 14 23:49:11.530121 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 14 23:49:11.550945 initrd-setup-root-after-ignition[1336]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 14 23:49:11.550945 initrd-setup-root-after-ignition[1336]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 14 23:49:11.558509 initrd-setup-root-after-ignition[1340]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 14 23:49:11.564454 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 14 23:49:11.571324 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 14 23:49:11.583321 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 14 23:49:11.636785 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 14 23:49:11.637245 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 14 23:49:11.644193 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 14 23:49:11.646315 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 14 23:49:11.648449 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 14 23:49:11.664989 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 14 23:49:11.691109 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 14 23:49:11.703395 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 14 23:49:11.729074 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 14 23:49:11.732221 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 23:49:11.738314 systemd[1]: Stopped target timers.target - Timer Units.
May 14 23:49:11.740277 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 14 23:49:11.740510 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 14 23:49:11.745076 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 14 23:49:11.752363 systemd[1]: Stopped target basic.target - Basic System.
May 14 23:49:11.754238 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 14 23:49:11.756515 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 14 23:49:11.764204 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 14 23:49:11.766597 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 14 23:49:11.773268 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 14 23:49:11.775964 systemd[1]: Stopped target sysinit.target - System Initialization.
May 14 23:49:11.782295 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 14 23:49:11.784826 systemd[1]: Stopped target swap.target - Swaps.
May 14 23:49:11.789578 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 14 23:49:11.789815 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 14 23:49:11.792754 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 14 23:49:11.798468 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 23:49:11.804516 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 14 23:49:11.810524 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 23:49:11.815385 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 14 23:49:11.817445 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 14 23:49:11.821721 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 14 23:49:11.824104 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 14 23:49:11.828839 systemd[1]: ignition-files.service: Deactivated successfully.
May 14 23:49:11.830826 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 14 23:49:11.847370 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 14 23:49:11.849792 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 14 23:49:11.850561 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 23:49:11.859719 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 14 23:49:11.866205 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 14 23:49:11.866793 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 23:49:11.873465 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 14 23:49:11.873702 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 14 23:49:11.905617 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 14 23:49:11.907562 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 14 23:49:11.918907 ignition[1360]: INFO : Ignition 2.20.0
May 14 23:49:11.918907 ignition[1360]: INFO : Stage: umount
May 14 23:49:11.918907 ignition[1360]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 23:49:11.918907 ignition[1360]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 14 23:49:11.918737 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 14 23:49:11.930641 ignition[1360]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 14 23:49:11.930641 ignition[1360]: INFO : PUT result: OK
May 14 23:49:11.934923 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 14 23:49:11.935236 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 14 23:49:11.941833 ignition[1360]: INFO : umount: umount passed May 14 23:49:11.941833 ignition[1360]: INFO : Ignition finished successfully May 14 23:49:11.947627 systemd[1]: ignition-mount.service: Deactivated successfully. May 14 23:49:11.948067 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 14 23:49:11.954336 systemd[1]: ignition-disks.service: Deactivated successfully. May 14 23:49:11.954444 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 14 23:49:11.956408 systemd[1]: ignition-kargs.service: Deactivated successfully. May 14 23:49:11.956489 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 14 23:49:11.959223 systemd[1]: ignition-fetch.service: Deactivated successfully. May 14 23:49:11.959300 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 14 23:49:11.961235 systemd[1]: Stopped target network.target - Network. May 14 23:49:11.964659 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 14 23:49:11.964746 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 14 23:49:11.967382 systemd[1]: Stopped target paths.target - Path Units. May 14 23:49:11.969004 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 14 23:49:11.987499 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 23:49:11.990366 systemd[1]: Stopped target slices.target - Slice Units. May 14 23:49:11.996344 systemd[1]: Stopped target sockets.target - Socket Units. May 14 23:49:11.998211 systemd[1]: iscsid.socket: Deactivated successfully. May 14 23:49:11.998291 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 14 23:49:12.000187 systemd[1]: iscsiuio.socket: Deactivated successfully. May 14 23:49:12.000253 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 14 23:49:12.002241 systemd[1]: ignition-setup.service: Deactivated successfully. 
May 14 23:49:12.002321 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 14 23:49:12.004455 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 14 23:49:12.004536 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 14 23:49:12.007965 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 14 23:49:12.008062 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 14 23:49:12.010304 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 14 23:49:12.012692 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 14 23:49:12.040230 systemd[1]: systemd-resolved.service: Deactivated successfully. May 14 23:49:12.042320 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 14 23:49:12.050656 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 14 23:49:12.053591 systemd[1]: systemd-networkd.service: Deactivated successfully. May 14 23:49:12.054106 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 14 23:49:12.061805 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 14 23:49:12.063538 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 14 23:49:12.063666 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 14 23:49:12.078361 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 14 23:49:12.080963 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 14 23:49:12.081100 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 14 23:49:12.083819 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 14 23:49:12.083900 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 14 23:49:12.088275 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
May 14 23:49:12.088358 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 14 23:49:12.103987 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 14 23:49:12.104109 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 23:49:12.110971 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 23:49:12.115007 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 14 23:49:12.121646 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 14 23:49:12.140112 systemd[1]: network-cleanup.service: Deactivated successfully. May 14 23:49:12.142060 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 14 23:49:12.146513 systemd[1]: systemd-udevd.service: Deactivated successfully. May 14 23:49:12.146951 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 14 23:49:12.154823 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 14 23:49:12.154961 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 14 23:49:12.157278 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 14 23:49:12.157352 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 14 23:49:12.157835 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 14 23:49:12.157924 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 14 23:49:12.159488 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 14 23:49:12.159568 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 14 23:49:12.160097 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 14 23:49:12.160176 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
May 14 23:49:12.171468 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 14 23:49:12.174353 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 14 23:49:12.174496 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 23:49:12.181310 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 14 23:49:12.181496 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 14 23:49:12.195992 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 14 23:49:12.196152 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 14 23:49:12.196867 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 14 23:49:12.198088 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 14 23:49:12.202934 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 14 23:49:12.225126 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 14 23:49:12.238866 systemd[1]: Switching root. May 14 23:49:12.300854 systemd-journald[252]: Journal stopped May 14 23:49:14.908148 systemd-journald[252]: Received SIGTERM from PID 1 (systemd). 
May 14 23:49:14.908290 kernel: SELinux: policy capability network_peer_controls=1 May 14 23:49:14.908333 kernel: SELinux: policy capability open_perms=1 May 14 23:49:14.908375 kernel: SELinux: policy capability extended_socket_class=1 May 14 23:49:14.908406 kernel: SELinux: policy capability always_check_network=0 May 14 23:49:14.908444 kernel: SELinux: policy capability cgroup_seclabel=1 May 14 23:49:14.908472 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 14 23:49:14.908501 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 14 23:49:14.908531 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 14 23:49:14.908561 kernel: audit: type=1403 audit(1747266552.857:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 14 23:49:14.908590 systemd[1]: Successfully loaded SELinux policy in 76.768ms. May 14 23:49:14.908643 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 24.205ms. May 14 23:49:14.908676 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 14 23:49:14.908707 systemd[1]: Detected virtualization amazon. May 14 23:49:14.908742 systemd[1]: Detected architecture arm64. May 14 23:49:14.908774 systemd[1]: Detected first boot. May 14 23:49:14.908805 systemd[1]: Initializing machine ID from VM UUID. May 14 23:49:14.908836 zram_generator::config[1405]: No configuration found. May 14 23:49:14.908868 kernel: NET: Registered PF_VSOCK protocol family May 14 23:49:14.910915 systemd[1]: Populated /etc with preset unit settings. May 14 23:49:14.910963 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 14 23:49:14.911009 systemd[1]: initrd-switch-root.service: Deactivated successfully. 
May 14 23:49:14.913118 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 14 23:49:14.913166 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 14 23:49:14.913201 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 14 23:49:14.913234 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 14 23:49:14.913265 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 14 23:49:14.913299 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 14 23:49:14.913330 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 14 23:49:14.913359 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 14 23:49:14.913399 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 14 23:49:14.913428 systemd[1]: Created slice user.slice - User and Session Slice. May 14 23:49:14.913460 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 23:49:14.913489 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 23:49:14.913518 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 14 23:49:14.913547 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 14 23:49:14.913579 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 14 23:49:14.913611 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 14 23:49:14.913641 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 14 23:49:14.913684 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
May 14 23:49:14.913715 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 14 23:49:14.913745 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 14 23:49:14.913774 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 14 23:49:14.913803 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 14 23:49:14.913837 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 23:49:14.913870 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 14 23:49:14.913901 systemd[1]: Reached target slices.target - Slice Units. May 14 23:49:14.913936 systemd[1]: Reached target swap.target - Swaps. May 14 23:49:14.913965 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 14 23:49:14.913995 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 14 23:49:14.914024 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 14 23:49:14.914077 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 14 23:49:14.914110 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 14 23:49:14.914141 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 14 23:49:14.914171 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 14 23:49:14.914201 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 14 23:49:14.914235 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 14 23:49:14.914264 systemd[1]: Mounting media.mount - External Media Directory... May 14 23:49:14.914296 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 14 23:49:14.914325 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... 
May 14 23:49:14.914354 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 14 23:49:14.914405 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 14 23:49:14.914443 systemd[1]: Reached target machines.target - Containers. May 14 23:49:14.914474 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 14 23:49:14.914510 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 23:49:14.914544 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 14 23:49:14.914573 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 14 23:49:14.914602 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 14 23:49:14.914630 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 14 23:49:14.914661 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 14 23:49:14.914689 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 14 23:49:14.914721 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 14 23:49:14.914753 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 14 23:49:14.914796 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 14 23:49:14.914825 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 14 23:49:14.914855 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 14 23:49:14.914886 systemd[1]: Stopped systemd-fsck-usr.service. 
May 14 23:49:14.914916 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 23:49:14.914947 systemd[1]: Starting systemd-journald.service - Journal Service... May 14 23:49:14.914976 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 14 23:49:14.915004 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 14 23:49:14.918124 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 14 23:49:14.918192 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 14 23:49:14.918224 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 14 23:49:14.918256 systemd[1]: verity-setup.service: Deactivated successfully. May 14 23:49:14.918289 systemd[1]: Stopped verity-setup.service. May 14 23:49:14.918330 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 14 23:49:14.918364 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 14 23:49:14.918418 kernel: fuse: init (API version 7.39) May 14 23:49:14.918461 systemd[1]: Mounted media.mount - External Media Directory. May 14 23:49:14.918501 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 14 23:49:14.918533 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 14 23:49:14.918569 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 14 23:49:14.918606 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 14 23:49:14.918644 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 14 23:49:14.918674 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
May 14 23:49:14.918706 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 23:49:14.918735 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 14 23:49:14.918767 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 23:49:14.918798 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 14 23:49:14.918831 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 14 23:49:14.918864 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 14 23:49:14.918894 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 14 23:49:14.918924 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 14 23:49:14.918955 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 14 23:49:14.918985 kernel: loop: module loaded May 14 23:49:14.919012 kernel: ACPI: bus type drm_connector registered May 14 23:49:14.919117 systemd-journald[1491]: Collecting audit messages is disabled. May 14 23:49:14.919179 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 14 23:49:14.919214 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 14 23:49:14.919253 systemd[1]: Reached target local-fs.target - Local File Systems. May 14 23:49:14.919283 systemd-journald[1491]: Journal started May 14 23:49:14.919331 systemd-journald[1491]: Runtime Journal (/run/log/journal/ec2acbb2620e108c1313526ff24a647d) is 8M, max 75.3M, 67.3M free. May 14 23:49:14.296520 systemd[1]: Queued start job for default target multi-user.target. May 14 23:49:14.310347 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. May 14 23:49:14.311287 systemd[1]: systemd-journald.service: Deactivated successfully. 
May 14 23:49:14.934117 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 14 23:49:14.952111 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 14 23:49:14.963122 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 14 23:49:14.963208 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 23:49:14.983475 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 14 23:49:14.983563 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 23:49:14.991844 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 14 23:49:15.010439 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 14 23:49:15.023253 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 14 23:49:15.027143 systemd[1]: Started systemd-journald.service - Journal Service. May 14 23:49:15.028630 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 14 23:49:15.031880 systemd[1]: modprobe@drm.service: Deactivated successfully. May 14 23:49:15.033161 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 14 23:49:15.035866 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 23:49:15.037164 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 14 23:49:15.039878 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 14 23:49:15.045347 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 14 23:49:15.049561 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
May 14 23:49:15.052338 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 14 23:49:15.071822 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 14 23:49:15.089224 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 14 23:49:15.124106 kernel: loop0: detected capacity change from 0 to 194096 May 14 23:49:15.139416 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 14 23:49:15.143272 systemd[1]: Reached target network-pre.target - Preparation for Network. May 14 23:49:15.157319 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 14 23:49:15.171359 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 14 23:49:15.173779 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 14 23:49:15.178963 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 14 23:49:15.184943 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 14 23:49:15.198083 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 14 23:49:15.217674 systemd-journald[1491]: Time spent on flushing to /var/log/journal/ec2acbb2620e108c1313526ff24a647d is 94.043ms for 927 entries. May 14 23:49:15.217674 systemd-journald[1491]: System Journal (/var/log/journal/ec2acbb2620e108c1313526ff24a647d) is 8M, max 195.6M, 187.6M free. May 14 23:49:15.323077 systemd-journald[1491]: Received client request to flush runtime journal. May 14 23:49:15.323170 kernel: loop1: detected capacity change from 0 to 113512 May 14 23:49:15.256147 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 14 23:49:15.291777 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
May 14 23:49:15.305388 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 14 23:49:15.314978 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 14 23:49:15.327241 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 14 23:49:15.358776 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 14 23:49:15.376929 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 14 23:49:15.380768 udevadm[1555]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 14 23:49:15.387090 kernel: loop2: detected capacity change from 0 to 53784 May 14 23:49:15.433737 systemd-tmpfiles[1560]: ACLs are not supported, ignoring. May 14 23:49:15.434340 systemd-tmpfiles[1560]: ACLs are not supported, ignoring. May 14 23:49:15.444804 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 23:49:15.454087 kernel: loop3: detected capacity change from 0 to 123192 May 14 23:49:15.589080 kernel: loop4: detected capacity change from 0 to 194096 May 14 23:49:15.624106 kernel: loop5: detected capacity change from 0 to 113512 May 14 23:49:15.642093 kernel: loop6: detected capacity change from 0 to 53784 May 14 23:49:15.667440 kernel: loop7: detected capacity change from 0 to 123192 May 14 23:49:15.682240 (sd-merge)[1565]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. May 14 23:49:15.683343 (sd-merge)[1565]: Merged extensions into '/usr'. May 14 23:49:15.696549 systemd[1]: Reload requested from client PID 1521 ('systemd-sysext') (unit systemd-sysext.service)... May 14 23:49:15.696587 systemd[1]: Reloading... May 14 23:49:15.836091 zram_generator::config[1590]: No configuration found. 
May 14 23:49:16.185866 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 23:49:16.342772 systemd[1]: Reloading finished in 645 ms. May 14 23:49:16.366902 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 14 23:49:16.385448 systemd[1]: Starting ensure-sysext.service... May 14 23:49:16.400371 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 14 23:49:16.444302 systemd[1]: Reload requested from client PID 1644 ('systemctl') (unit ensure-sysext.service)... May 14 23:49:16.444525 systemd[1]: Reloading... May 14 23:49:16.476071 systemd-tmpfiles[1645]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 14 23:49:16.476601 systemd-tmpfiles[1645]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 14 23:49:16.480234 systemd-tmpfiles[1645]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 14 23:49:16.480797 systemd-tmpfiles[1645]: ACLs are not supported, ignoring. May 14 23:49:16.480926 systemd-tmpfiles[1645]: ACLs are not supported, ignoring. May 14 23:49:16.502557 systemd-tmpfiles[1645]: Detected autofs mount point /boot during canonicalization of boot. May 14 23:49:16.502579 systemd-tmpfiles[1645]: Skipping /boot May 14 23:49:16.533965 systemd-tmpfiles[1645]: Detected autofs mount point /boot during canonicalization of boot. May 14 23:49:16.534164 systemd-tmpfiles[1645]: Skipping /boot May 14 23:49:16.620099 zram_generator::config[1678]: No configuration found. 
May 14 23:49:16.858249 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 23:49:16.868088 ldconfig[1517]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 14 23:49:17.007378 systemd[1]: Reloading finished in 562 ms. May 14 23:49:17.030143 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 14 23:49:17.033024 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 14 23:49:17.055106 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 23:49:17.076600 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 14 23:49:17.086645 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 14 23:49:17.093598 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 14 23:49:17.103581 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 14 23:49:17.115852 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 23:49:17.122619 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 14 23:49:17.132186 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 23:49:17.143544 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 14 23:49:17.156546 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 14 23:49:17.173537 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 14 23:49:17.175783 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
May 14 23:49:17.176069 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 23:49:17.187516 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 14 23:49:17.193511 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 23:49:17.194971 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 23:49:17.195211 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 23:49:17.206386 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 23:49:17.215123 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 14 23:49:17.217549 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 23:49:17.217822 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 23:49:17.219334 systemd[1]: Reached target time-set.target - System Time Set. May 14 23:49:17.223399 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 23:49:17.226191 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 14 23:49:17.236232 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
May 14 23:49:17.243489 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 23:49:17.243908 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 14 23:49:17.248388 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 23:49:17.249386 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 14 23:49:17.272580 systemd[1]: Finished ensure-sysext.service. May 14 23:49:17.283120 systemd[1]: modprobe@drm.service: Deactivated successfully. May 14 23:49:17.285144 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 14 23:49:17.297305 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 14 23:49:17.300654 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 23:49:17.300773 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 14 23:49:17.312442 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 14 23:49:17.336111 systemd-udevd[1735]: Using default interface naming scheme 'v255'. May 14 23:49:17.374208 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 14 23:49:17.385888 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 14 23:49:17.389428 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 14 23:49:17.413229 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 14 23:49:17.432446 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
May 14 23:49:17.437796 augenrules[1778]: No rules
May 14 23:49:17.446548 systemd[1]: audit-rules.service: Deactivated successfully.
May 14 23:49:17.450179 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 14 23:49:17.457502 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 14 23:49:17.633608 systemd-resolved[1733]: Positive Trust Anchors:
May 14 23:49:17.634164 systemd-resolved[1733]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 14 23:49:17.634231 systemd-resolved[1733]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 14 23:49:17.645635 systemd-resolved[1733]: Defaulting to hostname 'linux'.
May 14 23:49:17.650006 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 14 23:49:17.652391 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 14 23:49:17.690329 systemd-networkd[1773]: lo: Link UP
May 14 23:49:17.690355 systemd-networkd[1773]: lo: Gained carrier
May 14 23:49:17.693864 systemd-networkd[1773]: Enumeration completed
May 14 23:49:17.694117 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 14 23:49:17.696462 systemd[1]: Reached target network.target - Network.
May 14 23:49:17.721475 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 14 23:49:17.730486 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 14 23:49:17.737270 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 14 23:49:17.758549 (udev-worker)[1772]: Network interface NamePolicy= disabled on kernel command line.
May 14 23:49:17.785375 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 14 23:49:17.811940 systemd-networkd[1773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 23:49:17.811965 systemd-networkd[1773]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 14 23:49:17.814975 systemd-networkd[1773]: eth0: Link UP
May 14 23:49:17.815297 systemd-networkd[1773]: eth0: Gained carrier
May 14 23:49:17.815343 systemd-networkd[1773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 23:49:17.830234 systemd-networkd[1773]: eth0: DHCPv4 address 172.31.28.25/20, gateway 172.31.16.1 acquired from 172.31.16.1
May 14 23:49:17.922102 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 44 scanned by (udev-worker) (1802)
May 14 23:49:18.094651 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 23:49:18.172311 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
May 14 23:49:18.174739 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 14 23:49:18.193903 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 14 23:49:18.199367 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 14 23:49:18.233961 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 14 23:49:18.238649 lvm[1905]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 14 23:49:18.278860 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 14 23:49:18.282146 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 14 23:49:18.293458 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 14 23:49:18.305003 lvm[1911]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 14 23:49:18.309263 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 23:49:18.313640 systemd[1]: Reached target sysinit.target - System Initialization.
May 14 23:49:18.315896 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 14 23:49:18.318430 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 14 23:49:18.321191 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 14 23:49:18.323434 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 14 23:49:18.325826 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 14 23:49:18.328241 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 14 23:49:18.328288 systemd[1]: Reached target paths.target - Path Units.
May 14 23:49:18.330074 systemd[1]: Reached target timers.target - Timer Units.
May 14 23:49:18.333727 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 14 23:49:18.338503 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 14 23:49:18.347076 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 14 23:49:18.349952 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 14 23:49:18.352631 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 14 23:49:18.366442 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 14 23:49:18.369242 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 14 23:49:18.372562 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 14 23:49:18.374865 systemd[1]: Reached target sockets.target - Socket Units.
May 14 23:49:18.376854 systemd[1]: Reached target basic.target - Basic System.
May 14 23:49:18.378782 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 14 23:49:18.378848 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 14 23:49:18.387227 systemd[1]: Starting containerd.service - containerd container runtime...
May 14 23:49:18.393410 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
May 14 23:49:18.398519 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 14 23:49:18.405578 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 14 23:49:18.413718 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 14 23:49:18.418200 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 14 23:49:18.420487 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 14 23:49:18.430168 systemd[1]: Started ntpd.service - Network Time Service.
May 14 23:49:18.437260 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 14 23:49:18.451283 systemd[1]: Starting setup-oem.service - Setup OEM...
May 14 23:49:18.473572 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 14 23:49:18.483494 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 14 23:49:18.520072 jq[1919]: false
May 14 23:49:18.513578 systemd[1]: Starting systemd-logind.service - User Login Management...
May 14 23:49:18.519520 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 14 23:49:18.521474 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 14 23:49:18.524422 systemd[1]: Starting update-engine.service - Update Engine...
May 14 23:49:18.534272 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 14 23:49:18.544235 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 14 23:49:18.553817 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 14 23:49:18.559870 dbus-daemon[1918]: [system] SELinux support is enabled
May 14 23:49:18.554270 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 14 23:49:18.572426 extend-filesystems[1920]: Found loop4
May 14 23:49:18.572426 extend-filesystems[1920]: Found loop5
May 14 23:49:18.572426 extend-filesystems[1920]: Found loop6
May 14 23:49:18.572426 extend-filesystems[1920]: Found loop7
May 14 23:49:18.572426 extend-filesystems[1920]: Found nvme0n1
May 14 23:49:18.572426 extend-filesystems[1920]: Found nvme0n1p1
May 14 23:49:18.572426 extend-filesystems[1920]: Found nvme0n1p2
May 14 23:49:18.572426 extend-filesystems[1920]: Found nvme0n1p3
May 14 23:49:18.572426 extend-filesystems[1920]: Found usr
May 14 23:49:18.572426 extend-filesystems[1920]: Found nvme0n1p4
May 14 23:49:18.565569 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 14 23:49:18.573696 dbus-daemon[1918]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1773 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
May 14 23:49:18.612620 extend-filesystems[1920]: Found nvme0n1p6
May 14 23:49:18.612620 extend-filesystems[1920]: Found nvme0n1p7
May 14 23:49:18.612620 extend-filesystems[1920]: Found nvme0n1p9
May 14 23:49:18.612620 extend-filesystems[1920]: Checking size of /dev/nvme0n1p9
May 14 23:49:18.618286 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 14 23:49:18.618785 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 14 23:49:18.661060 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 14 23:49:18.659595 dbus-daemon[1918]: [system] Successfully activated service 'org.freedesktop.systemd1'
May 14 23:49:18.665881 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 14 23:49:18.665929 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 14 23:49:18.671457 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 14 23:49:18.671498 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 14 23:49:18.708391 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
May 14 23:49:18.717754 jq[1932]: true
May 14 23:49:18.736336 tar[1934]: linux-arm64/helm
May 14 23:49:18.737857 systemd[1]: motdgen.service: Deactivated successfully.
May 14 23:49:18.740630 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 14 23:49:18.753221 extend-filesystems[1920]: Resized partition /dev/nvme0n1p9
May 14 23:49:18.759209 update_engine[1931]: I20250514 23:49:18.758489  1931 main.cc:92] Flatcar Update Engine starting
May 14 23:49:18.761327 (ntainerd)[1955]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 14 23:49:18.766194 ntpd[1922]: 14 May 23:49:18 ntpd[1922]: ntpd 4.2.8p17@1.4004-o Wed May 14 21:39:21 UTC 2025 (1): Starting
May 14 23:49:18.766194 ntpd[1922]: 14 May 23:49:18 ntpd[1922]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
May 14 23:49:18.766194 ntpd[1922]: 14 May 23:49:18 ntpd[1922]: ----------------------------------------------------
May 14 23:49:18.766194 ntpd[1922]: 14 May 23:49:18 ntpd[1922]: ntp-4 is maintained by Network Time Foundation,
May 14 23:49:18.766194 ntpd[1922]: 14 May 23:49:18 ntpd[1922]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
May 14 23:49:18.766194 ntpd[1922]: 14 May 23:49:18 ntpd[1922]: corporation.  Support and training for ntp-4 are
May 14 23:49:18.766194 ntpd[1922]: 14 May 23:49:18 ntpd[1922]: available at https://www.nwtime.org/support
May 14 23:49:18.766194 ntpd[1922]: 14 May 23:49:18 ntpd[1922]: ----------------------------------------------------
May 14 23:49:18.763422 ntpd[1922]: ntpd 4.2.8p17@1.4004-o Wed May 14 21:39:21 UTC 2025 (1): Starting
May 14 23:49:18.763467 ntpd[1922]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
May 14 23:49:18.763487 ntpd[1922]: ----------------------------------------------------
May 14 23:49:18.763508 ntpd[1922]: ntp-4 is maintained by Network Time Foundation,
May 14 23:49:18.763526 ntpd[1922]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
May 14 23:49:18.763544 ntpd[1922]: corporation.  Support and training for ntp-4 are
May 14 23:49:18.763561 ntpd[1922]: available at https://www.nwtime.org/support
May 14 23:49:18.763578 ntpd[1922]: ----------------------------------------------------
May 14 23:49:18.773105 extend-filesystems[1967]: resize2fs 1.47.1 (20-May-2024)
May 14 23:49:18.772789 ntpd[1922]: proto: precision = 0.096 usec (-23)
May 14 23:49:18.777404 ntpd[1922]: 14 May 23:49:18 ntpd[1922]: proto: precision = 0.096 usec (-23)
May 14 23:49:18.778439 ntpd[1922]: basedate set to 2025-05-02
May 14 23:49:18.780185 ntpd[1922]: 14 May 23:49:18 ntpd[1922]: basedate set to 2025-05-02
May 14 23:49:18.780185 ntpd[1922]: 14 May 23:49:18 ntpd[1922]: gps base set to 2025-05-04 (week 2365)
May 14 23:49:18.778474 ntpd[1922]: gps base set to 2025-05-04 (week 2365)
May 14 23:49:18.784710 systemd[1]: Started update-engine.service - Update Engine.
May 14 23:49:18.787268 ntpd[1922]: Listen and drop on 0 v6wildcard [::]:123
May 14 23:49:18.788718 update_engine[1931]: I20250514 23:49:18.788341  1931 update_check_scheduler.cc:74] Next update check in 11m37s
May 14 23:49:18.788773 ntpd[1922]: 14 May 23:49:18 ntpd[1922]: Listen and drop on 0 v6wildcard [::]:123
May 14 23:49:18.788773 ntpd[1922]: 14 May 23:49:18 ntpd[1922]: Listen and drop on 1 v4wildcard 0.0.0.0:123
May 14 23:49:18.788773 ntpd[1922]: 14 May 23:49:18 ntpd[1922]: Listen normally on 2 lo 127.0.0.1:123
May 14 23:49:18.788773 ntpd[1922]: 14 May 23:49:18 ntpd[1922]: Listen normally on 3 eth0 172.31.28.25:123
May 14 23:49:18.788773 ntpd[1922]: 14 May 23:49:18 ntpd[1922]: Listen normally on 4 lo [::1]:123
May 14 23:49:18.788773 ntpd[1922]: 14 May 23:49:18 ntpd[1922]: bind(21) AF_INET6 fe80::4ad:82ff:fe5b:d007%2#123 flags 0x11 failed: Cannot assign requested address
May 14 23:49:18.788773 ntpd[1922]: 14 May 23:49:18 ntpd[1922]: unable to create socket on eth0 (5) for fe80::4ad:82ff:fe5b:d007%2#123
May 14 23:49:18.788773 ntpd[1922]: 14 May 23:49:18 ntpd[1922]: failed to init interface for address fe80::4ad:82ff:fe5b:d007%2
May 14 23:49:18.788773 ntpd[1922]: 14 May 23:49:18 ntpd[1922]: Listening on routing socket on fd #21 for interface updates
May 14 23:49:18.787362 ntpd[1922]: Listen and drop on 1 v4wildcard 0.0.0.0:123
May 14 23:49:18.787630 ntpd[1922]: Listen normally on 2 lo 127.0.0.1:123
May 14 23:49:18.787691 ntpd[1922]: Listen normally on 3 eth0 172.31.28.25:123
May 14 23:49:18.787757 ntpd[1922]: Listen normally on 4 lo [::1]:123
May 14 23:49:18.787830 ntpd[1922]: bind(21) AF_INET6 fe80::4ad:82ff:fe5b:d007%2#123 flags 0x11 failed: Cannot assign requested address
May 14 23:49:18.787867 ntpd[1922]: unable to create socket on eth0 (5) for fe80::4ad:82ff:fe5b:d007%2#123
May 14 23:49:18.787895 ntpd[1922]: failed to init interface for address fe80::4ad:82ff:fe5b:d007%2
May 14 23:49:18.787943 ntpd[1922]: Listening on routing socket on fd #21 for interface updates
May 14 23:49:18.801357 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 14 23:49:18.808106 ntpd[1922]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
May 14 23:49:18.819766 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
May 14 23:49:18.811157 ntpd[1922]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
May 14 23:49:18.820090 ntpd[1922]: 14 May 23:49:18 ntpd[1922]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
May 14 23:49:18.820090 ntpd[1922]: 14 May 23:49:18 ntpd[1922]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
May 14 23:49:18.848622 jq[1964]: true
May 14 23:49:18.903602 systemd[1]: Finished setup-oem.service - Setup OEM.
May 14 23:49:18.920491 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
May 14 23:49:18.937084 extend-filesystems[1967]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
May 14 23:49:18.937084 extend-filesystems[1967]: old_desc_blocks = 1, new_desc_blocks = 1
May 14 23:49:18.937084 extend-filesystems[1967]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
May 14 23:49:18.975154 extend-filesystems[1920]: Resized filesystem in /dev/nvme0n1p9
May 14 23:49:18.952876 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 14 23:49:18.980806 coreos-metadata[1917]: May 14 23:49:18.938 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
May 14 23:49:18.980806 coreos-metadata[1917]: May 14 23:49:18.938 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
May 14 23:49:18.980806 coreos-metadata[1917]: May 14 23:49:18.943 INFO Fetch successful
May 14 23:49:18.980806 coreos-metadata[1917]: May 14 23:49:18.943 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
May 14 23:49:18.980806 coreos-metadata[1917]: May 14 23:49:18.944 INFO Fetch successful
May 14 23:49:18.980806 coreos-metadata[1917]: May 14 23:49:18.944 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
May 14 23:49:18.980806 coreos-metadata[1917]: May 14 23:49:18.946 INFO Fetch successful
May 14 23:49:18.980806 coreos-metadata[1917]: May 14 23:49:18.946 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
May 14 23:49:18.980806 coreos-metadata[1917]: May 14 23:49:18.951 INFO Fetch successful
May 14 23:49:18.980806 coreos-metadata[1917]: May 14 23:49:18.951 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
May 14 23:49:18.980806 coreos-metadata[1917]: May 14 23:49:18.951 INFO Fetch failed with 404: resource not found
May 14 23:49:18.980806 coreos-metadata[1917]: May 14 23:49:18.951 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
May 14 23:49:18.980806 coreos-metadata[1917]: May 14 23:49:18.953 INFO Fetch successful
May 14 23:49:18.980806 coreos-metadata[1917]: May 14 23:49:18.953 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
May 14 23:49:18.980806 coreos-metadata[1917]: May 14 23:49:18.953 INFO Fetch successful
May 14 23:49:18.980806 coreos-metadata[1917]: May 14 23:49:18.953 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
May 14 23:49:18.980806 coreos-metadata[1917]: May 14 23:49:18.954 INFO Fetch successful
May 14 23:49:18.980806 coreos-metadata[1917]: May 14 23:49:18.954 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
May 14 23:49:18.980806 coreos-metadata[1917]: May 14 23:49:18.954 INFO Fetch successful
May 14 23:49:18.980806 coreos-metadata[1917]: May 14 23:49:18.954 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
May 14 23:49:18.980806 coreos-metadata[1917]: May 14 23:49:18.956 INFO Fetch successful
May 14 23:49:18.956427 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 14 23:49:19.044115 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 44 scanned by (udev-worker) (1791)
May 14 23:49:19.093913 bash[2006]: Updated "/home/core/.ssh/authorized_keys"
May 14 23:49:19.096475 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 14 23:49:19.112328 systemd[1]: Starting sshkeys.service...
May 14 23:49:19.126166 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
May 14 23:49:19.132520 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 14 23:49:19.216692 systemd-logind[1930]: Watching system buttons on /dev/input/event0 (Power Button)
May 14 23:49:19.216752 systemd-logind[1930]: Watching system buttons on /dev/input/event1 (Sleep Button)
May 14 23:49:19.218389 systemd-logind[1930]: New seat seat0.
May 14 23:49:19.225989 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
May 14 23:49:19.245868 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
May 14 23:49:19.252869 systemd[1]: Started systemd-logind.service - User Login Management.
May 14 23:49:19.313546 dbus-daemon[1918]: [system] Successfully activated service 'org.freedesktop.hostname1'
May 14 23:49:19.317732 dbus-daemon[1918]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1960 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
May 14 23:49:19.333712 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
May 14 23:49:19.359056 systemd[1]: Starting polkit.service - Authorization Manager...
May 14 23:49:19.386293 systemd-networkd[1773]: eth0: Gained IPv6LL
May 14 23:49:19.395284 locksmithd[1970]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 14 23:49:19.406734 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 14 23:49:19.415665 systemd[1]: Reached target network-online.target - Network is Online.
May 14 23:49:19.433028 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
May 14 23:49:19.456179 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 23:49:19.466357 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 14 23:49:19.488653 polkitd[2041]: Started polkitd version 121
May 14 23:49:19.520399 polkitd[2041]: Loading rules from directory /etc/polkit-1/rules.d
May 14 23:49:19.520529 polkitd[2041]: Loading rules from directory /usr/share/polkit-1/rules.d
May 14 23:49:19.527969 polkitd[2041]: Finished loading, compiling and executing 2 rules
May 14 23:49:19.537291 dbus-daemon[1918]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
May 14 23:49:19.537577 systemd[1]: Started polkit.service - Authorization Manager.
May 14 23:49:19.543131 polkitd[2041]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
May 14 23:49:19.601467 containerd[1955]: time="2025-05-14T23:49:19.601185323Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
May 14 23:49:19.611959 amazon-ssm-agent[2069]: Initializing new seelog logger
May 14 23:49:19.611959 amazon-ssm-agent[2069]: New Seelog Logger Creation Complete
May 14 23:49:19.611959 amazon-ssm-agent[2069]: 2025/05/14 23:49:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
May 14 23:49:19.611959 amazon-ssm-agent[2069]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
May 14 23:49:19.611959 amazon-ssm-agent[2069]: 2025/05/14 23:49:19 processing appconfig overrides
May 14 23:49:19.613152 amazon-ssm-agent[2069]: 2025/05/14 23:49:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
May 14 23:49:19.613292 amazon-ssm-agent[2069]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
May 14 23:49:19.614096 amazon-ssm-agent[2069]: 2025/05/14 23:49:19 processing appconfig overrides
May 14 23:49:19.614096 amazon-ssm-agent[2069]: 2025/05/14 23:49:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
May 14 23:49:19.614096 amazon-ssm-agent[2069]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
May 14 23:49:19.614096 amazon-ssm-agent[2069]: 2025/05/14 23:49:19 processing appconfig overrides
May 14 23:49:19.615654 amazon-ssm-agent[2069]: 2025-05-14 23:49:19 INFO Proxy environment variables:
May 14 23:49:19.619514 amazon-ssm-agent[2069]: 2025/05/14 23:49:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
May 14 23:49:19.619644 amazon-ssm-agent[2069]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
May 14 23:49:19.619853 amazon-ssm-agent[2069]: 2025/05/14 23:49:19 processing appconfig overrides
May 14 23:49:19.631147 systemd-hostnamed[1960]: Hostname set to <ip-172-31-28-25> (transient)
May 14 23:49:19.637422 systemd-resolved[1733]: System hostname changed to 'ip-172-31-28-25'.
May 14 23:49:19.677106 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 14 23:49:19.696690 coreos-metadata[2028]: May 14 23:49:19.696 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
May 14 23:49:19.706454 coreos-metadata[2028]: May 14 23:49:19.704 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
May 14 23:49:19.708419 coreos-metadata[2028]: May 14 23:49:19.708 INFO Fetch successful
May 14 23:49:19.708419 coreos-metadata[2028]: May 14 23:49:19.708 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
May 14 23:49:19.709935 coreos-metadata[2028]: May 14 23:49:19.709 INFO Fetch successful
May 14 23:49:19.714471 unknown[2028]: wrote ssh authorized keys file for user: core
May 14 23:49:19.720557 amazon-ssm-agent[2069]: 2025-05-14 23:49:19 INFO https_proxy:
May 14 23:49:19.775088 update-ssh-keys[2120]: Updated "/home/core/.ssh/authorized_keys"
May 14 23:49:19.780884 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
May 14 23:49:19.795177 systemd[1]: Finished sshkeys.service.
May 14 23:49:19.823069 amazon-ssm-agent[2069]: 2025-05-14 23:49:19 INFO http_proxy:
May 14 23:49:19.924600 amazon-ssm-agent[2069]: 2025-05-14 23:49:19 INFO no_proxy:
May 14 23:49:19.931632 containerd[1955]: time="2025-05-14T23:49:19.930752905Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 14 23:49:19.949577 containerd[1955]: time="2025-05-14T23:49:19.949510717Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 14 23:49:19.955074 containerd[1955]: time="2025-05-14T23:49:19.950465689Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 14 23:49:19.955074 containerd[1955]: time="2025-05-14T23:49:19.950526517Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 14 23:49:19.955074 containerd[1955]: time="2025-05-14T23:49:19.950840281Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 14 23:49:19.955074 containerd[1955]: time="2025-05-14T23:49:19.950874013Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 14 23:49:19.955074 containerd[1955]: time="2025-05-14T23:49:19.950997085Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 14 23:49:19.955074 containerd[1955]: time="2025-05-14T23:49:19.951027553Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 14 23:49:19.955074 containerd[1955]: time="2025-05-14T23:49:19.954550501Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 14 23:49:19.955074 containerd[1955]: time="2025-05-14T23:49:19.954590281Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 14 23:49:19.955074 containerd[1955]: time="2025-05-14T23:49:19.954620377Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 14 23:49:19.955074 containerd[1955]: time="2025-05-14T23:49:19.954643597Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 14 23:49:19.955074 containerd[1955]: time="2025-05-14T23:49:19.954817825Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 14 23:49:19.961370 containerd[1955]: time="2025-05-14T23:49:19.960497485Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 14 23:49:19.961370 containerd[1955]: time="2025-05-14T23:49:19.960839029Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 14 23:49:19.961370 containerd[1955]: time="2025-05-14T23:49:19.960869437Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 14 23:49:19.961370 containerd[1955]: time="2025-05-14T23:49:19.961065601Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 14 23:49:19.961370 containerd[1955]: time="2025-05-14T23:49:19.961168945Z" level=info msg="metadata content store policy set" policy=shared
May 14 23:49:19.977952 containerd[1955]: time="2025-05-14T23:49:19.976800769Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 14 23:49:19.979068 containerd[1955]: time="2025-05-14T23:49:19.978146401Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 14 23:49:19.979068 containerd[1955]: time="2025-05-14T23:49:19.978201421Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 14 23:49:19.979068 containerd[1955]: time="2025-05-14T23:49:19.978238561Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 14 23:49:19.979068 containerd[1955]: time="2025-05-14T23:49:19.978271357Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 14 23:49:19.979068 containerd[1955]: time="2025-05-14T23:49:19.978560425Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 14 23:49:19.994318 containerd[1955]: time="2025-05-14T23:49:19.985587061Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 14 23:49:19.994318 containerd[1955]: time="2025-05-14T23:49:19.991336993Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 14 23:49:19.994318 containerd[1955]: time="2025-05-14T23:49:19.991387657Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 14 23:49:19.994318 containerd[1955]: time="2025-05-14T23:49:19.991425853Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 14 23:49:19.994318 containerd[1955]: time="2025-05-14T23:49:19.991457821Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 14 23:49:19.994318 containerd[1955]: time="2025-05-14T23:49:19.991489813Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 14 23:49:19.994318 containerd[1955]: time="2025-05-14T23:49:19.991521001Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 14 23:49:19.994318 containerd[1955]: time="2025-05-14T23:49:19.991553113Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 14 23:49:19.994318 containerd[1955]: time="2025-05-14T23:49:19.991585885Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 14 23:49:19.994318 containerd[1955]: time="2025-05-14T23:49:19.991617001Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 14 23:49:19.994318 containerd[1955]: time="2025-05-14T23:49:19.991644997Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 14 23:49:19.994318 containerd[1955]: time="2025-05-14T23:49:19.991671997Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 14 23:49:19.994318 containerd[1955]: time="2025-05-14T23:49:19.991720513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 14 23:49:19.994318 containerd[1955]: time="2025-05-14T23:49:19.991752049Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 14 23:49:19.994950 containerd[1955]: time="2025-05-14T23:49:19.991782217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 14 23:49:19.994950 containerd[1955]: time="2025-05-14T23:49:19.991822825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 14 23:49:19.994950 containerd[1955]: time="2025-05-14T23:49:19.991853605Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 14 23:49:19.994950 containerd[1955]: time="2025-05-14T23:49:19.991883929Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 14 23:49:19.994950 containerd[1955]: time="2025-05-14T23:49:19.991910605Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 14 23:49:19.994950 containerd[1955]: time="2025-05-14T23:49:19.991942369Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 14 23:49:19.994950 containerd[1955]: time="2025-05-14T23:49:19.991971637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
May 14 23:49:19.994950 containerd[1955]: time="2025-05-14T23:49:19.992004397Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
May 14 23:49:19.999700 containerd[1955]: time="2025-05-14T23:49:19.998692825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 14 23:49:19.999700 containerd[1955]: time="2025-05-14T23:49:19.998766901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..."
type=io.containerd.grpc.v1 May 14 23:49:19.999700 containerd[1955]: time="2025-05-14T23:49:19.998798485Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 14 23:49:19.999700 containerd[1955]: time="2025-05-14T23:49:19.998845969Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 14 23:49:19.999700 containerd[1955]: time="2025-05-14T23:49:19.998896237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 14 23:49:19.999700 containerd[1955]: time="2025-05-14T23:49:19.998938657Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 14 23:49:19.999700 containerd[1955]: time="2025-05-14T23:49:19.998966365Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 14 23:49:20.000267 containerd[1955]: time="2025-05-14T23:49:20.000143529Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 14 23:49:20.006622 containerd[1955]: time="2025-05-14T23:49:20.000379917Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 14 23:49:20.006622 containerd[1955]: time="2025-05-14T23:49:20.000414681Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 14 23:49:20.006622 containerd[1955]: time="2025-05-14T23:49:20.000446817Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 14 23:49:20.006622 containerd[1955]: time="2025-05-14T23:49:20.000471549Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 May 14 23:49:20.006622 containerd[1955]: time="2025-05-14T23:49:20.000502749Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 14 23:49:20.006622 containerd[1955]: time="2025-05-14T23:49:20.000526881Z" level=info msg="NRI interface is disabled by configuration." May 14 23:49:20.006622 containerd[1955]: time="2025-05-14T23:49:20.000555405Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 14 23:49:20.007004 containerd[1955]: time="2025-05-14T23:49:20.001068105Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] 
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 14 23:49:20.007004 containerd[1955]: time="2025-05-14T23:49:20.001162629Z" level=info msg="Connect containerd service" May 14 23:49:20.007004 containerd[1955]: time="2025-05-14T23:49:20.001225905Z" level=info msg="using legacy CRI server" May 14 23:49:20.007004 containerd[1955]: time="2025-05-14T23:49:20.001243173Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 14 23:49:20.007004 containerd[1955]: time="2025-05-14T23:49:20.001481685Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 14 23:49:20.018093 containerd[1955]: time="2025-05-14T23:49:20.015699730Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: 
failed to load cni config" May 14 23:49:20.018093 containerd[1955]: time="2025-05-14T23:49:20.016346890Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 14 23:49:20.018093 containerd[1955]: time="2025-05-14T23:49:20.016435066Z" level=info msg=serving... address=/run/containerd/containerd.sock May 14 23:49:20.018093 containerd[1955]: time="2025-05-14T23:49:20.016515826Z" level=info msg="Start subscribing containerd event" May 14 23:49:20.018093 containerd[1955]: time="2025-05-14T23:49:20.016571110Z" level=info msg="Start recovering state" May 14 23:49:20.018093 containerd[1955]: time="2025-05-14T23:49:20.016684258Z" level=info msg="Start event monitor" May 14 23:49:20.018093 containerd[1955]: time="2025-05-14T23:49:20.016707034Z" level=info msg="Start snapshots syncer" May 14 23:49:20.018093 containerd[1955]: time="2025-05-14T23:49:20.016729846Z" level=info msg="Start cni network conf syncer for default" May 14 23:49:20.018093 containerd[1955]: time="2025-05-14T23:49:20.016752406Z" level=info msg="Start streaming server" May 14 23:49:20.018093 containerd[1955]: time="2025-05-14T23:49:20.016868998Z" level=info msg="containerd successfully booted in 0.430698s" May 14 23:49:20.017225 systemd[1]: Started containerd.service - containerd container runtime. May 14 23:49:20.028310 amazon-ssm-agent[2069]: 2025-05-14 23:49:19 INFO Checking if agent identity type OnPrem can be assumed May 14 23:49:20.127122 amazon-ssm-agent[2069]: 2025-05-14 23:49:19 INFO Checking if agent identity type EC2 can be assumed May 14 23:49:20.228119 amazon-ssm-agent[2069]: 2025-05-14 23:49:19 INFO Agent will take identity from EC2 May 14 23:49:20.252305 sshd_keygen[1949]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 14 23:49:20.326586 amazon-ssm-agent[2069]: 2025-05-14 23:49:19 INFO [amazon-ssm-agent] using named pipe channel for IPC May 14 23:49:20.351194 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
May 14 23:49:20.366689 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 14 23:49:20.378577 systemd[1]: Started sshd@0-172.31.28.25:22-139.178.89.65:41300.service - OpenSSH per-connection server daemon (139.178.89.65:41300).
May 14 23:49:20.420187 systemd[1]: issuegen.service: Deactivated successfully.
May 14 23:49:20.420610 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 14 23:49:20.427304 amazon-ssm-agent[2069]: 2025-05-14 23:49:19 INFO [amazon-ssm-agent] using named pipe channel for IPC
May 14 23:49:20.436314 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 14 23:49:20.465608 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 14 23:49:20.481134 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 14 23:49:20.488796 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 14 23:49:20.492575 systemd[1]: Reached target getty.target - Login Prompts.
May 14 23:49:20.528197 amazon-ssm-agent[2069]: 2025-05-14 23:49:19 INFO [amazon-ssm-agent] using named pipe channel for IPC
May 14 23:49:20.628397 amazon-ssm-agent[2069]: 2025-05-14 23:49:19 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
May 14 23:49:20.709137 sshd[2153]: Accepted publickey for core from 139.178.89.65 port 41300 ssh2: RSA SHA256:P5lx8LuVgYRnVINBokzXFUV2F/1CVpmkiH+0ahpdjwk
May 14 23:49:20.710175 sshd-session[2153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:49:20.731536 amazon-ssm-agent[2069]: 2025-05-14 23:49:19 INFO [amazon-ssm-agent] OS: linux, Arch: arm64
May 14 23:49:20.737386 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 14 23:49:20.749509 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 14 23:49:20.787140 systemd-logind[1930]: New session 1 of user core.
May 14 23:49:20.806525 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 14 23:49:20.826161 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 14 23:49:20.832715 amazon-ssm-agent[2069]: 2025-05-14 23:49:19 INFO [amazon-ssm-agent] Starting Core Agent
May 14 23:49:20.847522 (systemd)[2165]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 14 23:49:20.857588 systemd-logind[1930]: New session c1 of user core.
May 14 23:49:20.933014 amazon-ssm-agent[2069]: 2025-05-14 23:49:19 INFO [amazon-ssm-agent] registrar detected. Attempting registration
May 14 23:49:20.941073 tar[1934]: linux-arm64/LICENSE
May 14 23:49:20.941073 tar[1934]: linux-arm64/README.md
May 14 23:49:20.984402 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 14 23:49:21.034210 amazon-ssm-agent[2069]: 2025-05-14 23:49:19 INFO [Registrar] Starting registrar module
May 14 23:49:21.134427 amazon-ssm-agent[2069]: 2025-05-14 23:49:19 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
May 14 23:49:21.222360 systemd[2165]: Queued start job for default target default.target.
May 14 23:49:21.232162 systemd[2165]: Created slice app.slice - User Application Slice.
May 14 23:49:21.232229 systemd[2165]: Reached target paths.target - Paths.
May 14 23:49:21.232325 systemd[2165]: Reached target timers.target - Timers.
May 14 23:49:21.237234 systemd[2165]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 14 23:49:21.269895 systemd[2165]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 14 23:49:21.270375 systemd[2165]: Reached target sockets.target - Sockets.
May 14 23:49:21.270464 systemd[2165]: Reached target basic.target - Basic System.
May 14 23:49:21.270556 systemd[2165]: Reached target default.target - Main User Target.
May 14 23:49:21.270614 systemd[2165]: Startup finished in 393ms.
May 14 23:49:21.271802 systemd[1]: Started user@500.service - User Manager for UID 500.
May 14 23:49:21.284119 systemd[1]: Started session-1.scope - Session 1 of User core.
May 14 23:49:21.350126 amazon-ssm-agent[2069]: 2025-05-14 23:49:21 INFO [EC2Identity] EC2 registration was successful.
May 14 23:49:21.377600 amazon-ssm-agent[2069]: 2025-05-14 23:49:21 INFO [CredentialRefresher] credentialRefresher has started
May 14 23:49:21.377600 amazon-ssm-agent[2069]: 2025-05-14 23:49:21 INFO [CredentialRefresher] Starting credentials refresher loop
May 14 23:49:21.377759 amazon-ssm-agent[2069]: 2025-05-14 23:49:21 INFO EC2RoleProvider Successfully connected with instance profile role credentials
May 14 23:49:21.445692 systemd[1]: Started sshd@1-172.31.28.25:22-139.178.89.65:41316.service - OpenSSH per-connection server daemon (139.178.89.65:41316).
May 14 23:49:21.452141 amazon-ssm-agent[2069]: 2025-05-14 23:49:21 INFO [CredentialRefresher] Next credential rotation will be in 32.44165918176667 minutes
May 14 23:49:21.610060 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 23:49:21.613413 systemd[1]: Reached target multi-user.target - Multi-User System.
May 14 23:49:21.617295 systemd[1]: Startup finished in 1.086s (kernel) + 9.046s (initrd) + 8.833s (userspace) = 18.966s.
May 14 23:49:21.624162 (kubelet)[2186]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 23:49:21.648565 sshd[2179]: Accepted publickey for core from 139.178.89.65 port 41316 ssh2: RSA SHA256:P5lx8LuVgYRnVINBokzXFUV2F/1CVpmkiH+0ahpdjwk
May 14 23:49:21.653935 sshd-session[2179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:49:21.670423 systemd-logind[1930]: New session 2 of user core.
May 14 23:49:21.677327 systemd[1]: Started session-2.scope - Session 2 of User core.
May 14 23:49:21.764175 ntpd[1922]: Listen normally on 6 eth0 [fe80::4ad:82ff:fe5b:d007%2]:123
May 14 23:49:21.765398 ntpd[1922]: 14 May 23:49:21 ntpd[1922]: Listen normally on 6 eth0 [fe80::4ad:82ff:fe5b:d007%2]:123
May 14 23:49:21.802936 sshd[2191]: Connection closed by 139.178.89.65 port 41316
May 14 23:49:21.804362 sshd-session[2179]: pam_unix(sshd:session): session closed for user core
May 14 23:49:21.809951 systemd[1]: sshd@1-172.31.28.25:22-139.178.89.65:41316.service: Deactivated successfully.
May 14 23:49:21.815618 systemd[1]: session-2.scope: Deactivated successfully.
May 14 23:49:21.822071 systemd-logind[1930]: Session 2 logged out. Waiting for processes to exit.
May 14 23:49:21.824268 systemd-logind[1930]: Removed session 2.
May 14 23:49:21.846832 systemd[1]: Started sshd@2-172.31.28.25:22-139.178.89.65:41320.service - OpenSSH per-connection server daemon (139.178.89.65:41320).
May 14 23:49:22.036225 sshd[2197]: Accepted publickey for core from 139.178.89.65 port 41320 ssh2: RSA SHA256:P5lx8LuVgYRnVINBokzXFUV2F/1CVpmkiH+0ahpdjwk
May 14 23:49:22.039518 sshd-session[2197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:49:22.048206 systemd-logind[1930]: New session 3 of user core.
May 14 23:49:22.056331 systemd[1]: Started session-3.scope - Session 3 of User core.
May 14 23:49:22.178088 sshd[2203]: Connection closed by 139.178.89.65 port 41320
May 14 23:49:22.180248 sshd-session[2197]: pam_unix(sshd:session): session closed for user core
May 14 23:49:22.187601 systemd[1]: sshd@2-172.31.28.25:22-139.178.89.65:41320.service: Deactivated successfully.
May 14 23:49:22.191964 systemd[1]: session-3.scope: Deactivated successfully.
May 14 23:49:22.194387 systemd-logind[1930]: Session 3 logged out. Waiting for processes to exit.
May 14 23:49:22.197378 systemd-logind[1930]: Removed session 3.
May 14 23:49:22.218640 systemd[1]: Started sshd@3-172.31.28.25:22-139.178.89.65:41332.service - OpenSSH per-connection server daemon (139.178.89.65:41332).
May 14 23:49:22.405254 sshd[2209]: Accepted publickey for core from 139.178.89.65 port 41332 ssh2: RSA SHA256:P5lx8LuVgYRnVINBokzXFUV2F/1CVpmkiH+0ahpdjwk
May 14 23:49:22.409387 amazon-ssm-agent[2069]: 2025-05-14 23:49:22 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
May 14 23:49:22.409791 sshd-session[2209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:49:22.429792 systemd-logind[1930]: New session 4 of user core.
May 14 23:49:22.435381 systemd[1]: Started session-4.scope - Session 4 of User core.
May 14 23:49:22.510071 amazon-ssm-agent[2069]: 2025-05-14 23:49:22 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2213) started
May 14 23:49:22.572136 sshd[2217]: Connection closed by 139.178.89.65 port 41332
May 14 23:49:22.577506 sshd-session[2209]: pam_unix(sshd:session): session closed for user core
May 14 23:49:22.588945 systemd[1]: sshd@3-172.31.28.25:22-139.178.89.65:41332.service: Deactivated successfully.
May 14 23:49:22.595295 systemd[1]: session-4.scope: Deactivated successfully.
May 14 23:49:22.600164 systemd-logind[1930]: Session 4 logged out. Waiting for processes to exit.
May 14 23:49:22.611129 amazon-ssm-agent[2069]: 2025-05-14 23:49:22 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
May 14 23:49:22.626593 systemd[1]: Started sshd@4-172.31.28.25:22-139.178.89.65:41338.service - OpenSSH per-connection server daemon (139.178.89.65:41338).
May 14 23:49:22.629722 systemd-logind[1930]: Removed session 4.
May 14 23:49:22.776385 kubelet[2186]: E0514 23:49:22.776202 2186 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 23:49:22.781259 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 23:49:22.781622 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 23:49:22.783198 systemd[1]: kubelet.service: Consumed 1.342s CPU time, 242.4M memory peak.
May 14 23:49:22.818240 sshd[2228]: Accepted publickey for core from 139.178.89.65 port 41338 ssh2: RSA SHA256:P5lx8LuVgYRnVINBokzXFUV2F/1CVpmkiH+0ahpdjwk
May 14 23:49:22.820618 sshd-session[2228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:49:22.829509 systemd-logind[1930]: New session 5 of user core.
May 14 23:49:22.839296 systemd[1]: Started session-5.scope - Session 5 of User core.
May 14 23:49:22.975850 sudo[2234]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 14 23:49:22.976503 sudo[2234]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 14 23:49:23.000734 sudo[2234]: pam_unix(sudo:session): session closed for user root
May 14 23:49:23.023610 sshd[2233]: Connection closed by 139.178.89.65 port 41338
May 14 23:49:23.024648 sshd-session[2228]: pam_unix(sshd:session): session closed for user core
May 14 23:49:23.032144 systemd[1]: sshd@4-172.31.28.25:22-139.178.89.65:41338.service: Deactivated successfully.
May 14 23:49:23.035406 systemd[1]: session-5.scope: Deactivated successfully.
May 14 23:49:23.037001 systemd-logind[1930]: Session 5 logged out. Waiting for processes to exit.
May 14 23:49:23.039211 systemd-logind[1930]: Removed session 5.
May 14 23:49:23.070513 systemd[1]: Started sshd@5-172.31.28.25:22-139.178.89.65:41342.service - OpenSSH per-connection server daemon (139.178.89.65:41342).
May 14 23:49:23.247735 sshd[2240]: Accepted publickey for core from 139.178.89.65 port 41342 ssh2: RSA SHA256:P5lx8LuVgYRnVINBokzXFUV2F/1CVpmkiH+0ahpdjwk
May 14 23:49:23.250208 sshd-session[2240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:49:23.260425 systemd-logind[1930]: New session 6 of user core.
May 14 23:49:23.268295 systemd[1]: Started session-6.scope - Session 6 of User core.
May 14 23:49:23.373364 sudo[2244]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 14 23:49:23.373990 sudo[2244]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 14 23:49:23.380160 sudo[2244]: pam_unix(sudo:session): session closed for user root
May 14 23:49:23.390360 sudo[2243]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 14 23:49:23.390971 sudo[2243]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 14 23:49:23.414622 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 14 23:49:23.462281 augenrules[2266]: No rules
May 14 23:49:23.465239 systemd[1]: audit-rules.service: Deactivated successfully.
May 14 23:49:23.465711 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 14 23:49:23.467806 sudo[2243]: pam_unix(sudo:session): session closed for user root
May 14 23:49:23.492793 sshd[2242]: Connection closed by 139.178.89.65 port 41342
May 14 23:49:23.492082 sshd-session[2240]: pam_unix(sshd:session): session closed for user core
May 14 23:49:23.497162 systemd[1]: sshd@5-172.31.28.25:22-139.178.89.65:41342.service: Deactivated successfully.
May 14 23:49:23.499907 systemd[1]: session-6.scope: Deactivated successfully.
May 14 23:49:23.503473 systemd-logind[1930]: Session 6 logged out. Waiting for processes to exit.
May 14 23:49:23.505999 systemd-logind[1930]: Removed session 6.
May 14 23:49:23.530211 systemd[1]: Started sshd@6-172.31.28.25:22-139.178.89.65:41344.service - OpenSSH per-connection server daemon (139.178.89.65:41344).
May 14 23:49:23.724564 sshd[2275]: Accepted publickey for core from 139.178.89.65 port 41344 ssh2: RSA SHA256:P5lx8LuVgYRnVINBokzXFUV2F/1CVpmkiH+0ahpdjwk
May 14 23:49:23.727202 sshd-session[2275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:49:23.735838 systemd-logind[1930]: New session 7 of user core.
May 14 23:49:23.747303 systemd[1]: Started session-7.scope - Session 7 of User core.
May 14 23:49:23.848689 sudo[2278]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 14 23:49:23.849804 sudo[2278]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 14 23:49:24.789883 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 14 23:49:24.792225 (dockerd)[2296]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 14 23:49:25.246048 dockerd[2296]: time="2025-05-14T23:49:25.245948524Z" level=info msg="Starting up"
May 14 23:49:25.558613 dockerd[2296]: time="2025-05-14T23:49:25.558178517Z" level=info msg="Loading containers: start."
May 14 23:49:26.235347 systemd-resolved[1733]: Clock change detected. Flushing caches.
May 14 23:49:26.315158 kernel: Initializing XFRM netlink socket
May 14 23:49:26.370517 (udev-worker)[2321]: Network interface NamePolicy= disabled on kernel command line.
May 14 23:49:26.462045 systemd-networkd[1773]: docker0: Link UP
May 14 23:49:26.505470 dockerd[2296]: time="2025-05-14T23:49:26.505393169Z" level=info msg="Loading containers: done."
May 14 23:49:26.530304 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1706656943-merged.mount: Deactivated successfully.
May 14 23:49:26.535504 dockerd[2296]: time="2025-05-14T23:49:26.535429649Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 14 23:49:26.535660 dockerd[2296]: time="2025-05-14T23:49:26.535576493Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
May 14 23:49:26.535841 dockerd[2296]: time="2025-05-14T23:49:26.535793873Z" level=info msg="Daemon has completed initialization"
May 14 23:49:26.589497 dockerd[2296]: time="2025-05-14T23:49:26.589286357Z" level=info msg="API listen on /run/docker.sock"
May 14 23:49:26.589761 systemd[1]: Started docker.service - Docker Application Container Engine.
May 14 23:49:28.308205 containerd[1955]: time="2025-05-14T23:49:28.307779714Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\""
May 14 23:49:28.979902 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2820558908.mount: Deactivated successfully.
May 14 23:49:31.005992 containerd[1955]: time="2025-05-14T23:49:31.005936119Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:49:31.008514 containerd[1955]: time="2025-05-14T23:49:31.008429827Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=29794150"
May 14 23:49:31.009917 containerd[1955]: time="2025-05-14T23:49:31.009835891Z" level=info msg="ImageCreate event name:\"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:49:31.015485 containerd[1955]: time="2025-05-14T23:49:31.015404047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:49:31.018906 containerd[1955]: time="2025-05-14T23:49:31.017949703Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"29790950\" in 2.710107325s"
May 14 23:49:31.018906 containerd[1955]: time="2025-05-14T23:49:31.018011035Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\""
May 14 23:49:31.054705 containerd[1955]: time="2025-05-14T23:49:31.054640303Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\""
May 14 23:49:33.284828 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 14 23:49:33.293489 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 23:49:33.645509 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 23:49:33.650209 (kubelet)[2564]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 23:49:33.753413 kubelet[2564]: E0514 23:49:33.752908 2564 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 23:49:33.762472 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 23:49:33.762796 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 23:49:33.763618 systemd[1]: kubelet.service: Consumed 315ms CPU time, 94.5M memory peak.
May 14 23:49:33.776791 containerd[1955]: time="2025-05-14T23:49:33.776719489Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:49:33.778910 containerd[1955]: time="2025-05-14T23:49:33.778840453Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=26855550"
May 14 23:49:33.780442 containerd[1955]: time="2025-05-14T23:49:33.780373141Z" level=info msg="ImageCreate event name:\"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:49:33.786125 containerd[1955]: time="2025-05-14T23:49:33.785974033Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:49:33.788556 containerd[1955]: time="2025-05-14T23:49:33.788343757Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"28297111\" in 2.73363929s"
May 14 23:49:33.788556 containerd[1955]: time="2025-05-14T23:49:33.788401069Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\""
May 14 23:49:33.840256 containerd[1955]: time="2025-05-14T23:49:33.840049597Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\""
May 14 23:49:35.574835 containerd[1955]: time="2025-05-14T23:49:35.574760186Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:49:35.576391 containerd[1955]: time="2025-05-14T23:49:35.576293810Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=16263945"
May 14 23:49:35.579740 containerd[1955]: time="2025-05-14T23:49:35.579651722Z" level=info msg="ImageCreate event name:\"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:49:35.586353 containerd[1955]: time="2025-05-14T23:49:35.586275266Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:49:35.590193 containerd[1955]: time="2025-05-14T23:49:35.588341306Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"17705524\" in 1.748183205s"
May 14 23:49:35.590193 containerd[1955]: time="2025-05-14T23:49:35.588395186Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\""
May 14 23:49:35.629054 containerd[1955]: time="2025-05-14T23:49:35.629000438Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\""
May 14 23:49:36.909088 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2043247137.mount: Deactivated successfully.
May 14 23:49:37.400160 containerd[1955]: time="2025-05-14T23:49:37.399357519Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:49:37.402313 containerd[1955]: time="2025-05-14T23:49:37.402242679Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775705"
May 14 23:49:37.403268 containerd[1955]: time="2025-05-14T23:49:37.403200867Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:49:37.406667 containerd[1955]: time="2025-05-14T23:49:37.406603527Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:49:37.408741 containerd[1955]: time="2025-05-14T23:49:37.408006723Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.778943573s"
May 14 23:49:37.408741 containerd[1955]: time="2025-05-14T23:49:37.408063831Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\""
May 14 23:49:37.451882 containerd[1955]: time="2025-05-14T23:49:37.451828251Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 14 23:49:38.045843 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount759567110.mount: Deactivated successfully.
May 14 23:49:39.154622 containerd[1955]: time="2025-05-14T23:49:39.154537600Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:49:39.156719 containerd[1955]: time="2025-05-14T23:49:39.156649312Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381"
May 14 23:49:39.157676 containerd[1955]: time="2025-05-14T23:49:39.157224820Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:49:39.164301 containerd[1955]: time="2025-05-14T23:49:39.164206660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:49:39.167326 containerd[1955]: time="2025-05-14T23:49:39.166605640Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.714715289s"
May 14 23:49:39.167326 containerd[1955]: time="2025-05-14T23:49:39.166667560Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
May 14 23:49:39.209178 containerd[1955]: time="2025-05-14T23:49:39.208740172Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
May 14 23:49:39.706617 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount323803874.mount: Deactivated successfully.
May 14 23:49:39.720171 containerd[1955]: time="2025-05-14T23:49:39.719463930Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:49:39.721450 containerd[1955]: time="2025-05-14T23:49:39.721370490Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821"
May 14 23:49:39.723685 containerd[1955]: time="2025-05-14T23:49:39.723616650Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:49:39.728553 containerd[1955]: time="2025-05-14T23:49:39.728502511Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:49:39.730412 containerd[1955]: time="2025-05-14T23:49:39.730228723Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 521.431455ms"
May 14 23:49:39.730412 containerd[1955]: time="2025-05-14T23:49:39.730277587Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
May 14 23:49:39.768990 containerd[1955]: time="2025-05-14T23:49:39.768922459Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
May 14 23:49:40.403990 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2089403582.mount: Deactivated successfully.
May 14 23:49:43.785502 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 14 23:49:43.794705 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 23:49:44.010425 containerd[1955]: time="2025-05-14T23:49:44.010330988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:49:44.021071 containerd[1955]: time="2025-05-14T23:49:44.020472116Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472"
May 14 23:49:44.032408 containerd[1955]: time="2025-05-14T23:49:44.030226952Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:49:44.053770 containerd[1955]: time="2025-05-14T23:49:44.053622188Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:49:44.060711 containerd[1955]: time="2025-05-14T23:49:44.060158048Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 4.291168329s"
May 14 23:49:44.060927 containerd[1955]: time="2025-05-14T23:49:44.060893840Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
May 14 23:49:44.283491 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 23:49:44.295574 (kubelet)[2719]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 23:49:44.368812 kubelet[2719]: E0514 23:49:44.368626 2719 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 23:49:44.373424 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 23:49:44.373749 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 23:49:44.375876 systemd[1]: kubelet.service: Consumed 291ms CPU time, 94.5M memory peak.
May 14 23:49:50.138440 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
May 14 23:49:51.368423 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 23:49:51.368757 systemd[1]: kubelet.service: Consumed 291ms CPU time, 94.5M memory peak.
May 14 23:49:51.379578 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 23:49:51.423771 systemd[1]: Reload requested from client PID 2788 ('systemctl') (unit session-7.scope)...
May 14 23:49:51.423808 systemd[1]: Reloading...
May 14 23:49:51.689154 zram_generator::config[2842]: No configuration found.
May 14 23:49:51.901563 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 23:49:52.128866 systemd[1]: Reloading finished in 704 ms.
May 14 23:49:52.208384 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 23:49:52.225699 (kubelet)[2887]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 14 23:49:52.231469 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 23:49:52.233845 systemd[1]: kubelet.service: Deactivated successfully.
May 14 23:49:52.235046 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 23:49:52.235328 systemd[1]: kubelet.service: Consumed 207ms CPU time, 83.4M memory peak.
May 14 23:49:52.241635 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 23:49:52.542377 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 23:49:52.556700 (kubelet)[2899]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 14 23:49:52.633250 kubelet[2899]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 14 23:49:52.633250 kubelet[2899]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 14 23:49:52.633250 kubelet[2899]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 14 23:49:52.633874 kubelet[2899]: I0514 23:49:52.633338 2899 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 14 23:49:53.515136 kubelet[2899]: I0514 23:49:53.514393 2899 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
May 14 23:49:53.515136 kubelet[2899]: I0514 23:49:53.514434 2899 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 14 23:49:53.515136 kubelet[2899]: I0514 23:49:53.514748 2899 server.go:927] "Client rotation is on, will bootstrap in background"
May 14 23:49:53.544625 kubelet[2899]: E0514 23:49:53.544561 2899 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.28.25:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.28.25:6443: connect: connection refused
May 14 23:49:53.546783 kubelet[2899]: I0514 23:49:53.546731 2899 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 14 23:49:53.564846 kubelet[2899]: I0514 23:49:53.564807 2899 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 14 23:49:53.565609 kubelet[2899]: I0514 23:49:53.565560 2899 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 14 23:49:53.566551 kubelet[2899]: I0514 23:49:53.565717 2899 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-25","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
May 14 23:49:53.566551 kubelet[2899]: I0514 23:49:53.566031 2899 topology_manager.go:138] "Creating topology manager with none policy"
May 14 23:49:53.566551 kubelet[2899]: I0514 23:49:53.566050 2899 container_manager_linux.go:301] "Creating device plugin manager"
May 14 23:49:53.566551 kubelet[2899]: I0514 23:49:53.566308 2899 state_mem.go:36] "Initialized new in-memory state store"
May 14 23:49:53.567887 kubelet[2899]: I0514 23:49:53.567853 2899 kubelet.go:400] "Attempting to sync node with API server"
May 14 23:49:53.568066 kubelet[2899]: I0514 23:49:53.568041 2899 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
May 14 23:49:53.568326 kubelet[2899]: I0514 23:49:53.568304 2899 kubelet.go:312] "Adding apiserver pod source"
May 14 23:49:53.569139 kubelet[2899]: I0514 23:49:53.568486 2899 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 14 23:49:53.569880 kubelet[2899]: W0514 23:49:53.569817 2899 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.28.25:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.28.25:6443: connect: connection refused
May 14 23:49:53.570079 kubelet[2899]: E0514 23:49:53.570056 2899 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.28.25:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.28.25:6443: connect: connection refused
May 14 23:49:53.570387 kubelet[2899]: W0514 23:49:53.570337 2899 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.28.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-25&limit=500&resourceVersion=0": dial tcp 172.31.28.25:6443: connect: connection refused
May 14 23:49:53.570508 kubelet[2899]: E0514 23:49:53.570488 2899 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.28.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-25&limit=500&resourceVersion=0": dial tcp 172.31.28.25:6443: connect: connection refused
May 14 23:49:53.570787 kubelet[2899]: I0514 23:49:53.570760 2899 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
May 14 23:49:53.571248 kubelet[2899]: I0514 23:49:53.571225 2899 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 14 23:49:53.571479 kubelet[2899]: W0514 23:49:53.571460 2899 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 14 23:49:53.573403 kubelet[2899]: I0514 23:49:53.573032 2899 server.go:1264] "Started kubelet"
May 14 23:49:53.581875 kubelet[2899]: I0514 23:49:53.581842 2899 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 14 23:49:53.583902 kubelet[2899]: E0514 23:49:53.583415 2899 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.28.25:6443/api/v1/namespaces/default/events\": dial tcp 172.31.28.25:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-28-25.183f89b59dd351a3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-25,UID:ip-172-31-28-25,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-28-25,},FirstTimestamp:2025-05-14 23:49:53.572999587 +0000 UTC m=+1.009242234,LastTimestamp:2025-05-14 23:49:53.572999587 +0000 UTC m=+1.009242234,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-25,}"
May 14 23:49:53.588270 kubelet[2899]: I0514 23:49:53.588070 2899 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 14 23:49:53.590235 kubelet[2899]: I0514 23:49:53.590181 2899 server.go:455] "Adding debug handlers to kubelet server"
May 14 23:49:53.592151 kubelet[2899]: I0514 23:49:53.591646 2899 volume_manager.go:291] "Starting Kubelet Volume Manager"
May 14 23:49:53.592921 kubelet[2899]: I0514 23:49:53.592842 2899 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 14 23:49:53.597039 kubelet[2899]: I0514 23:49:53.595207 2899 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 14 23:49:53.597039 kubelet[2899]: I0514 23:49:53.593405 2899 reconciler.go:26] "Reconciler: start to sync state"
May 14 23:49:53.597039 kubelet[2899]: I0514 23:49:53.593290 2899 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 14 23:49:53.597039 kubelet[2899]: W0514 23:49:53.596185 2899 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.28.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.28.25:6443: connect: connection refused
May 14 23:49:53.597039 kubelet[2899]: E0514 23:49:53.596263 2899 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.28.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.28.25:6443: connect: connection refused
May 14 23:49:53.597039 kubelet[2899]: E0514 23:49:53.596372 2899 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-25?timeout=10s\": dial tcp 172.31.28.25:6443: connect: connection refused" interval="200ms"
May 14 23:49:53.597974 kubelet[2899]: I0514 23:49:53.597940 2899 factory.go:221] Registration of the systemd container factory successfully
May 14 23:49:53.598804 kubelet[2899]: I0514 23:49:53.598759 2899 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 14 23:49:53.599234 kubelet[2899]: E0514 23:49:53.598321 2899 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 14 23:49:53.602174 kubelet[2899]: I0514 23:49:53.602128 2899 factory.go:221] Registration of the containerd container factory successfully
May 14 23:49:53.638489 kubelet[2899]: I0514 23:49:53.638264 2899 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 14 23:49:53.644048 kubelet[2899]: I0514 23:49:53.643461 2899 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 14 23:49:53.644048 kubelet[2899]: I0514 23:49:53.643560 2899 status_manager.go:217] "Starting to sync pod status with apiserver"
May 14 23:49:53.644048 kubelet[2899]: I0514 23:49:53.643601 2899 kubelet.go:2337] "Starting kubelet main sync loop"
May 14 23:49:53.644048 kubelet[2899]: E0514 23:49:53.643670 2899 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 14 23:49:53.647659 kubelet[2899]: W0514 23:49:53.647413 2899 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.28.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.28.25:6443: connect: connection refused
May 14 23:49:53.647659 kubelet[2899]: E0514 23:49:53.647496 2899 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.28.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.28.25:6443: connect: connection refused
May 14 23:49:53.656774 kubelet[2899]: I0514 23:49:53.656049 2899 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 14 23:49:53.656774 kubelet[2899]: I0514 23:49:53.656083 2899 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 14 23:49:53.656774 kubelet[2899]: I0514 23:49:53.656147 2899 state_mem.go:36] "Initialized new in-memory state store"
May 14 23:49:53.661073 kubelet[2899]: I0514 23:49:53.660819 2899 policy_none.go:49] "None policy: Start"
May 14 23:49:53.662223 kubelet[2899]: I0514 23:49:53.662147 2899 memory_manager.go:170] "Starting memorymanager" policy="None"
May 14 23:49:53.662223 kubelet[2899]: I0514 23:49:53.662195 2899 state_mem.go:35] "Initializing new in-memory state store"
May 14 23:49:53.673694 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
May 14 23:49:53.692831 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 14 23:49:53.699857 kubelet[2899]: I0514 23:49:53.699812 2899 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-28-25"
May 14 23:49:53.701846 kubelet[2899]: E0514 23:49:53.701795 2899 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.28.25:6443/api/v1/nodes\": dial tcp 172.31.28.25:6443: connect: connection refused" node="ip-172-31-28-25"
May 14 23:49:53.705258 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
May 14 23:49:53.718142 kubelet[2899]: I0514 23:49:53.718084 2899 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 14 23:49:53.719531 kubelet[2899]: I0514 23:49:53.718885 2899 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 14 23:49:53.719531 kubelet[2899]: I0514 23:49:53.719081 2899 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 14 23:49:53.725023 kubelet[2899]: E0514 23:49:53.724989 2899 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-28-25\" not found"
May 14 23:49:53.744766 kubelet[2899]: I0514 23:49:53.744718 2899 topology_manager.go:215] "Topology Admit Handler" podUID="4b8b638974cc35f6c354fab3f90737ad" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-28-25"
May 14 23:49:53.747358 kubelet[2899]: I0514 23:49:53.747315 2899 topology_manager.go:215] "Topology Admit Handler" podUID="6e328882e2eca8c3db134813c35d49b7" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-28-25"
May 14 23:49:53.751183 kubelet[2899]: I0514 23:49:53.750926 2899 topology_manager.go:215] "Topology Admit Handler" podUID="6778b4cb0646f6df9dbf41e61a19703b" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-28-25"
May 14 23:49:53.763993 systemd[1]: Created slice kubepods-burstable-pod4b8b638974cc35f6c354fab3f90737ad.slice - libcontainer container kubepods-burstable-pod4b8b638974cc35f6c354fab3f90737ad.slice.
May 14 23:49:53.792805 systemd[1]: Created slice kubepods-burstable-pod6778b4cb0646f6df9dbf41e61a19703b.slice - libcontainer container kubepods-burstable-pod6778b4cb0646f6df9dbf41e61a19703b.slice.
May 14 23:49:53.797546 kubelet[2899]: E0514 23:49:53.797447 2899 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-25?timeout=10s\": dial tcp 172.31.28.25:6443: connect: connection refused" interval="400ms"
May 14 23:49:53.802678 systemd[1]: Created slice kubepods-burstable-pod6e328882e2eca8c3db134813c35d49b7.slice - libcontainer container kubepods-burstable-pod6e328882e2eca8c3db134813c35d49b7.slice.
May 14 23:49:53.897188 kubelet[2899]: I0514 23:49:53.896964 2899 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e328882e2eca8c3db134813c35d49b7-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-25\" (UID: \"6e328882e2eca8c3db134813c35d49b7\") " pod="kube-system/kube-controller-manager-ip-172-31-28-25"
May 14 23:49:53.897188 kubelet[2899]: I0514 23:49:53.897030 2899 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4b8b638974cc35f6c354fab3f90737ad-ca-certs\") pod \"kube-apiserver-ip-172-31-28-25\" (UID: \"4b8b638974cc35f6c354fab3f90737ad\") " pod="kube-system/kube-apiserver-ip-172-31-28-25"
May 14 23:49:53.897188 kubelet[2899]: I0514 23:49:53.897088 2899 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4b8b638974cc35f6c354fab3f90737ad-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-25\" (UID: \"4b8b638974cc35f6c354fab3f90737ad\") " pod="kube-system/kube-apiserver-ip-172-31-28-25"
May 14 23:49:53.897188 kubelet[2899]: I0514 23:49:53.897150 2899 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6e328882e2eca8c3db134813c35d49b7-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-25\" (UID: \"6e328882e2eca8c3db134813c35d49b7\") " pod="kube-system/kube-controller-manager-ip-172-31-28-25"
May 14 23:49:53.897188 kubelet[2899]: I0514 23:49:53.897192 2899 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6e328882e2eca8c3db134813c35d49b7-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-25\" (UID: \"6e328882e2eca8c3db134813c35d49b7\") " pod="kube-system/kube-controller-manager-ip-172-31-28-25"
May 14 23:49:53.897643 kubelet[2899]: I0514 23:49:53.897228 2899 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4b8b638974cc35f6c354fab3f90737ad-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-25\" (UID: \"4b8b638974cc35f6c354fab3f90737ad\") " pod="kube-system/kube-apiserver-ip-172-31-28-25"
May 14 23:49:53.897643 kubelet[2899]: I0514 23:49:53.897263 2899 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6e328882e2eca8c3db134813c35d49b7-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-25\" (UID: \"6e328882e2eca8c3db134813c35d49b7\") " pod="kube-system/kube-controller-manager-ip-172-31-28-25"
May 14 23:49:53.897643 kubelet[2899]: I0514 23:49:53.897299 2899 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6e328882e2eca8c3db134813c35d49b7-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-25\" (UID: \"6e328882e2eca8c3db134813c35d49b7\") " pod="kube-system/kube-controller-manager-ip-172-31-28-25"
May 14 23:49:53.897643 kubelet[2899]: I0514 23:49:53.897337 2899 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6778b4cb0646f6df9dbf41e61a19703b-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-25\" (UID: \"6778b4cb0646f6df9dbf41e61a19703b\") " pod="kube-system/kube-scheduler-ip-172-31-28-25"
May 14 23:49:53.905467 kubelet[2899]: I0514 23:49:53.905419 2899 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-28-25"
May 14 23:49:53.905921 kubelet[2899]: E0514 23:49:53.905876 2899 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.28.25:6443/api/v1/nodes\": dial tcp 172.31.28.25:6443: connect: connection refused" node="ip-172-31-28-25"
May 14 23:49:54.088132 containerd[1955]: time="2025-05-14T23:49:54.087961758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-25,Uid:4b8b638974cc35f6c354fab3f90737ad,Namespace:kube-system,Attempt:0,}"
May 14 23:49:54.099827 containerd[1955]: time="2025-05-14T23:49:54.099767634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-25,Uid:6778b4cb0646f6df9dbf41e61a19703b,Namespace:kube-system,Attempt:0,}"
May 14 23:49:54.108321 containerd[1955]: time="2025-05-14T23:49:54.108205518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-25,Uid:6e328882e2eca8c3db134813c35d49b7,Namespace:kube-system,Attempt:0,}"
May 14 23:49:54.198552 kubelet[2899]: E0514 23:49:54.198456 2899 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-25?timeout=10s\": dial tcp 172.31.28.25:6443: connect: connection refused" interval="800ms"
May 14 23:49:54.309437 kubelet[2899]: I0514 23:49:54.309390 2899 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-28-25"
May 14 23:49:54.310391 kubelet[2899]: E0514 23:49:54.310319 2899 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.28.25:6443/api/v1/nodes\": dial tcp 172.31.28.25:6443: connect: connection refused" node="ip-172-31-28-25"
May 14 23:49:54.589946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3545795065.mount: Deactivated successfully.
May 14 23:49:54.602861 containerd[1955]: time="2025-05-14T23:49:54.602788700Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 14 23:49:54.611575 containerd[1955]: time="2025-05-14T23:49:54.611491856Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
May 14 23:49:54.613401 containerd[1955]: time="2025-05-14T23:49:54.613342052Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 14 23:49:54.616153 containerd[1955]: time="2025-05-14T23:49:54.615996428Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 14 23:49:54.619665 containerd[1955]: time="2025-05-14T23:49:54.619598900Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 14 23:49:54.621839 containerd[1955]: time="2025-05-14T23:49:54.621760508Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 14 23:49:54.623808 containerd[1955]: time="2025-05-14T23:49:54.623740533Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 14 23:49:54.626312 containerd[1955]: time="2025-05-14T23:49:54.626076045Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 14 23:49:54.630329 containerd[1955]: time="2025-05-14T23:49:54.629600337Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 529.716363ms"
May 14 23:49:54.633756 containerd[1955]: time="2025-05-14T23:49:54.633659757Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 545.556075ms"
May 14 23:49:54.668918 containerd[1955]: time="2025-05-14T23:49:54.668522409Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 560.209671ms"
May 14 23:49:54.700832 kubelet[2899]: W0514 23:49:54.700386 2899 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.28.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.28.25:6443: connect: connection refused
May 14 23:49:54.700832 kubelet[2899]: E0514 23:49:54.700482 2899 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get
"https://172.31.28.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.28.25:6443: connect: connection refused May 14 23:49:54.737659 kubelet[2899]: W0514 23:49:54.737565 2899 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.28.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-25&limit=500&resourceVersion=0": dial tcp 172.31.28.25:6443: connect: connection refused May 14 23:49:54.737659 kubelet[2899]: E0514 23:49:54.737666 2899 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.28.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-25&limit=500&resourceVersion=0": dial tcp 172.31.28.25:6443: connect: connection refused May 14 23:49:54.800748 kubelet[2899]: W0514 23:49:54.800672 2899 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.28.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.28.25:6443: connect: connection refused May 14 23:49:54.800748 kubelet[2899]: E0514 23:49:54.800749 2899 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.28.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.28.25:6443: connect: connection refused May 14 23:49:54.848959 containerd[1955]: time="2025-05-14T23:49:54.847896466Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:49:54.848959 containerd[1955]: time="2025-05-14T23:49:54.848001310Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:49:54.848959 containerd[1955]: time="2025-05-14T23:49:54.848027038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:49:54.848959 containerd[1955]: time="2025-05-14T23:49:54.848199370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:49:54.854514 containerd[1955]: time="2025-05-14T23:49:54.854152402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:49:54.854514 containerd[1955]: time="2025-05-14T23:49:54.854255938Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:49:54.854514 containerd[1955]: time="2025-05-14T23:49:54.854293186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:49:54.854961 containerd[1955]: time="2025-05-14T23:49:54.854463778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:49:54.861230 containerd[1955]: time="2025-05-14T23:49:54.860639158Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:49:54.861230 containerd[1955]: time="2025-05-14T23:49:54.860746474Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:49:54.861230 containerd[1955]: time="2025-05-14T23:49:54.860782246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:49:54.861230 containerd[1955]: time="2025-05-14T23:49:54.860928430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:49:54.867076 kubelet[2899]: W0514 23:49:54.866553 2899 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.28.25:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.28.25:6443: connect: connection refused May 14 23:49:54.867076 kubelet[2899]: E0514 23:49:54.866663 2899 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.28.25:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.28.25:6443: connect: connection refused May 14 23:49:54.897178 systemd[1]: Started cri-containerd-e9a524c6ffc9279aa03cbc431ae6219cabfb85b30fd68e0cc03a835d8b7d1601.scope - libcontainer container e9a524c6ffc9279aa03cbc431ae6219cabfb85b30fd68e0cc03a835d8b7d1601. May 14 23:49:54.919415 systemd[1]: Started cri-containerd-bd11a910d23d34f941d32383992a19208e562347ce9a72963b21fc80e2f570a7.scope - libcontainer container bd11a910d23d34f941d32383992a19208e562347ce9a72963b21fc80e2f570a7. May 14 23:49:54.945456 systemd[1]: Started cri-containerd-33e2775f2f49a8fed6dbb0af4283e7c99208e59d9bebdc8aef219fc146394a2d.scope - libcontainer container 33e2775f2f49a8fed6dbb0af4283e7c99208e59d9bebdc8aef219fc146394a2d. 
May 14 23:49:55.001125 kubelet[2899]: E0514 23:49:54.999257 2899 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-25?timeout=10s\": dial tcp 172.31.28.25:6443: connect: connection refused" interval="1.6s" May 14 23:49:55.023376 containerd[1955]: time="2025-05-14T23:49:55.023324130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-25,Uid:6e328882e2eca8c3db134813c35d49b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd11a910d23d34f941d32383992a19208e562347ce9a72963b21fc80e2f570a7\"" May 14 23:49:55.041436 containerd[1955]: time="2025-05-14T23:49:55.041383795Z" level=info msg="CreateContainer within sandbox \"bd11a910d23d34f941d32383992a19208e562347ce9a72963b21fc80e2f570a7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 14 23:49:55.057658 containerd[1955]: time="2025-05-14T23:49:55.057605587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-25,Uid:4b8b638974cc35f6c354fab3f90737ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"33e2775f2f49a8fed6dbb0af4283e7c99208e59d9bebdc8aef219fc146394a2d\"" May 14 23:49:55.067068 containerd[1955]: time="2025-05-14T23:49:55.066997003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-25,Uid:6778b4cb0646f6df9dbf41e61a19703b,Namespace:kube-system,Attempt:0,} returns sandbox id \"e9a524c6ffc9279aa03cbc431ae6219cabfb85b30fd68e0cc03a835d8b7d1601\"" May 14 23:49:55.070655 containerd[1955]: time="2025-05-14T23:49:55.070588651Z" level=info msg="CreateContainer within sandbox \"33e2775f2f49a8fed6dbb0af4283e7c99208e59d9bebdc8aef219fc146394a2d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 14 23:49:55.074730 containerd[1955]: time="2025-05-14T23:49:55.074677027Z" level=info msg="CreateContainer within sandbox 
\"e9a524c6ffc9279aa03cbc431ae6219cabfb85b30fd68e0cc03a835d8b7d1601\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 14 23:49:55.106779 containerd[1955]: time="2025-05-14T23:49:55.105652051Z" level=info msg="CreateContainer within sandbox \"bd11a910d23d34f941d32383992a19208e562347ce9a72963b21fc80e2f570a7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ac6e0301a6b20cacd1422dffcd393adb1646bfd83adef3168213e8f7859c440e\"" May 14 23:49:55.108018 containerd[1955]: time="2025-05-14T23:49:55.107957023Z" level=info msg="StartContainer for \"ac6e0301a6b20cacd1422dffcd393adb1646bfd83adef3168213e8f7859c440e\"" May 14 23:49:55.115808 kubelet[2899]: I0514 23:49:55.115303 2899 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-28-25" May 14 23:49:55.115808 kubelet[2899]: E0514 23:49:55.115746 2899 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.28.25:6443/api/v1/nodes\": dial tcp 172.31.28.25:6443: connect: connection refused" node="ip-172-31-28-25" May 14 23:49:55.122633 containerd[1955]: time="2025-05-14T23:49:55.122566975Z" level=info msg="CreateContainer within sandbox \"e9a524c6ffc9279aa03cbc431ae6219cabfb85b30fd68e0cc03a835d8b7d1601\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b7b595ddcbef7b0a843bbf4827c6ebd02e93bdc4e01667874ccb39b514cf15a3\"" May 14 23:49:55.124446 containerd[1955]: time="2025-05-14T23:49:55.124335283Z" level=info msg="StartContainer for \"b7b595ddcbef7b0a843bbf4827c6ebd02e93bdc4e01667874ccb39b514cf15a3\"" May 14 23:49:55.129995 containerd[1955]: time="2025-05-14T23:49:55.129691231Z" level=info msg="CreateContainer within sandbox \"33e2775f2f49a8fed6dbb0af4283e7c99208e59d9bebdc8aef219fc146394a2d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"609dd0621f2f6f42ef221c02ac5c744f27b2d516f44cda0ad9aa3f1ba7256eaf\"" May 14 23:49:55.133147 containerd[1955]: 
time="2025-05-14T23:49:55.132315727Z" level=info msg="StartContainer for \"609dd0621f2f6f42ef221c02ac5c744f27b2d516f44cda0ad9aa3f1ba7256eaf\"" May 14 23:49:55.172413 systemd[1]: Started cri-containerd-ac6e0301a6b20cacd1422dffcd393adb1646bfd83adef3168213e8f7859c440e.scope - libcontainer container ac6e0301a6b20cacd1422dffcd393adb1646bfd83adef3168213e8f7859c440e. May 14 23:49:55.203486 systemd[1]: Started cri-containerd-b7b595ddcbef7b0a843bbf4827c6ebd02e93bdc4e01667874ccb39b514cf15a3.scope - libcontainer container b7b595ddcbef7b0a843bbf4827c6ebd02e93bdc4e01667874ccb39b514cf15a3. May 14 23:49:55.223685 systemd[1]: Started cri-containerd-609dd0621f2f6f42ef221c02ac5c744f27b2d516f44cda0ad9aa3f1ba7256eaf.scope - libcontainer container 609dd0621f2f6f42ef221c02ac5c744f27b2d516f44cda0ad9aa3f1ba7256eaf. May 14 23:49:55.327544 containerd[1955]: time="2025-05-14T23:49:55.327467744Z" level=info msg="StartContainer for \"ac6e0301a6b20cacd1422dffcd393adb1646bfd83adef3168213e8f7859c440e\" returns successfully" May 14 23:49:55.349877 containerd[1955]: time="2025-05-14T23:49:55.349818524Z" level=info msg="StartContainer for \"b7b595ddcbef7b0a843bbf4827c6ebd02e93bdc4e01667874ccb39b514cf15a3\" returns successfully" May 14 23:49:55.366782 containerd[1955]: time="2025-05-14T23:49:55.366485336Z" level=info msg="StartContainer for \"609dd0621f2f6f42ef221c02ac5c744f27b2d516f44cda0ad9aa3f1ba7256eaf\" returns successfully" May 14 23:49:56.721437 kubelet[2899]: I0514 23:49:56.721380 2899 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-28-25" May 14 23:49:59.221338 kubelet[2899]: E0514 23:49:59.221268 2899 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-28-25\" not found" node="ip-172-31-28-25" May 14 23:49:59.343033 kubelet[2899]: E0514 23:49:59.342866 2899 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-28-25.183f89b59dd351a3 
default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-25,UID:ip-172-31-28-25,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-28-25,},FirstTimestamp:2025-05-14 23:49:53.572999587 +0000 UTC m=+1.009242234,LastTimestamp:2025-05-14 23:49:53.572999587 +0000 UTC m=+1.009242234,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-25,}" May 14 23:49:59.394555 kubelet[2899]: I0514 23:49:59.394490 2899 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-28-25" May 14 23:49:59.434890 kubelet[2899]: E0514 23:49:59.434721 2899 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-28-25.183f89b59f54687f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-25,UID:ip-172-31-28-25,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-28-25,},FirstTimestamp:2025-05-14 23:49:53.598236799 +0000 UTC m=+1.034479434,LastTimestamp:2025-05-14 23:49:53.598236799 +0000 UTC m=+1.034479434,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-25,}" May 14 23:49:59.517337 kubelet[2899]: E0514 23:49:59.516453 2899 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-28-25\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-28-25" May 14 23:49:59.517337 kubelet[2899]: E0514 23:49:59.516464 2899 event.go:359] "Server rejected event (will not 
retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-28-25.183f89b5a2b60b3c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-25,UID:ip-172-31-28-25,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-172-31-28-25 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-172-31-28-25,},FirstTimestamp:2025-05-14 23:49:53.6549671 +0000 UTC m=+1.091209735,LastTimestamp:2025-05-14 23:49:53.6549671 +0000 UTC m=+1.091209735,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-25,}" May 14 23:49:59.572140 kubelet[2899]: I0514 23:49:59.571796 2899 apiserver.go:52] "Watching apiserver" May 14 23:49:59.596404 kubelet[2899]: I0514 23:49:59.596319 2899 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 14 23:50:01.522605 systemd[1]: Reload requested from client PID 3172 ('systemctl') (unit session-7.scope)... May 14 23:50:01.523071 systemd[1]: Reloading... May 14 23:50:01.836142 zram_generator::config[3217]: No configuration found. May 14 23:50:02.123279 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 23:50:02.388842 systemd[1]: Reloading finished in 865 ms. May 14 23:50:02.442348 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:50:02.459296 systemd[1]: kubelet.service: Deactivated successfully. May 14 23:50:02.459756 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:50:02.459836 systemd[1]: kubelet.service: Consumed 1.745s CPU time, 113.6M memory peak. 
May 14 23:50:02.473643 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:50:02.837506 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:50:02.846966 (kubelet)[3277]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 23:50:03.007864 kubelet[3277]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 23:50:03.007864 kubelet[3277]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 23:50:03.007864 kubelet[3277]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 23:50:03.008405 kubelet[3277]: I0514 23:50:03.007882 3277 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 23:50:03.017211 kubelet[3277]: I0514 23:50:03.017025 3277 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 14 23:50:03.017211 kubelet[3277]: I0514 23:50:03.017067 3277 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 23:50:03.017799 kubelet[3277]: I0514 23:50:03.017469 3277 server.go:927] "Client rotation is on, will bootstrap in background" May 14 23:50:03.020161 kubelet[3277]: I0514 23:50:03.019927 3277 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
May 14 23:50:03.022386 kubelet[3277]: I0514 23:50:03.022229 3277 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 23:50:03.051626 kubelet[3277]: I0514 23:50:03.049765 3277 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 14 23:50:03.051626 kubelet[3277]: I0514 23:50:03.050177 3277 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 23:50:03.051626 kubelet[3277]: I0514 23:50:03.050218 3277 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-25","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryM
anagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 14 23:50:03.051626 kubelet[3277]: I0514 23:50:03.050492 3277 topology_manager.go:138] "Creating topology manager with none policy" May 14 23:50:03.052032 kubelet[3277]: I0514 23:50:03.050511 3277 container_manager_linux.go:301] "Creating device plugin manager" May 14 23:50:03.052032 kubelet[3277]: I0514 23:50:03.050568 3277 state_mem.go:36] "Initialized new in-memory state store" May 14 23:50:03.052032 kubelet[3277]: I0514 23:50:03.050741 3277 kubelet.go:400] "Attempting to sync node with API server" May 14 23:50:03.052032 kubelet[3277]: I0514 23:50:03.050763 3277 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 23:50:03.052032 kubelet[3277]: I0514 23:50:03.050810 3277 kubelet.go:312] "Adding apiserver pod source" May 14 23:50:03.055204 kubelet[3277]: I0514 23:50:03.055169 3277 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 23:50:03.059320 kubelet[3277]: I0514 23:50:03.059281 3277 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 14 23:50:03.059734 kubelet[3277]: I0514 23:50:03.059713 3277 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 23:50:03.060515 kubelet[3277]: I0514 23:50:03.060486 3277 server.go:1264] "Started kubelet" May 14 23:50:03.069486 sudo[3291]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 14 23:50:03.070185 sudo[3291]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 14 23:50:03.070760 kubelet[3277]: I0514 23:50:03.070447 3277 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 23:50:03.079507 kubelet[3277]: I0514 23:50:03.079329 3277 server.go:163] "Starting to listen" address="0.0.0.0" 
port=10250 May 14 23:50:03.081089 kubelet[3277]: I0514 23:50:03.081001 3277 server.go:455] "Adding debug handlers to kubelet server" May 14 23:50:03.097345 kubelet[3277]: I0514 23:50:03.095831 3277 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 23:50:03.110606 kubelet[3277]: I0514 23:50:03.110568 3277 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 23:50:03.115071 kubelet[3277]: E0514 23:50:03.113623 3277 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-28-25\" not found" May 14 23:50:03.115071 kubelet[3277]: I0514 23:50:03.113693 3277 volume_manager.go:291] "Starting Kubelet Volume Manager" May 14 23:50:03.115071 kubelet[3277]: I0514 23:50:03.113861 3277 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 14 23:50:03.118470 kubelet[3277]: I0514 23:50:03.118403 3277 reconciler.go:26] "Reconciler: start to sync state" May 14 23:50:03.132147 kubelet[3277]: E0514 23:50:03.130551 3277 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 23:50:03.142496 kubelet[3277]: I0514 23:50:03.142447 3277 factory.go:221] Registration of the systemd container factory successfully May 14 23:50:03.142650 kubelet[3277]: I0514 23:50:03.142615 3277 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 23:50:03.151749 kubelet[3277]: I0514 23:50:03.151698 3277 factory.go:221] Registration of the containerd container factory successfully May 14 23:50:03.159120 kubelet[3277]: I0514 23:50:03.157482 3277 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" May 14 23:50:03.162693 kubelet[3277]: I0514 23:50:03.162653 3277 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 14 23:50:03.165170 kubelet[3277]: I0514 23:50:03.162868 3277 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 23:50:03.165170 kubelet[3277]: I0514 23:50:03.162900 3277 kubelet.go:2337] "Starting kubelet main sync loop" May 14 23:50:03.165170 kubelet[3277]: E0514 23:50:03.162972 3277 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 23:50:03.264493 kubelet[3277]: E0514 23:50:03.263326 3277 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 14 23:50:03.266344 kubelet[3277]: I0514 23:50:03.265907 3277 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-28-25" May 14 23:50:03.307204 kubelet[3277]: I0514 23:50:03.307166 3277 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-28-25" May 14 23:50:03.310259 kubelet[3277]: I0514 23:50:03.310224 3277 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-28-25" May 14 23:50:03.430838 kubelet[3277]: I0514 23:50:03.430219 3277 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 23:50:03.431611 kubelet[3277]: I0514 23:50:03.431554 3277 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 23:50:03.431823 kubelet[3277]: I0514 23:50:03.431802 3277 state_mem.go:36] "Initialized new in-memory state store" May 14 23:50:03.432379 kubelet[3277]: I0514 23:50:03.432215 3277 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 14 23:50:03.432522 kubelet[3277]: I0514 23:50:03.432477 3277 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 14 23:50:03.432614 kubelet[3277]: I0514 23:50:03.432596 3277 policy_none.go:49] "None policy: Start" May 14 23:50:03.434443 
kubelet[3277]: I0514 23:50:03.434412 3277 memory_manager.go:170] "Starting memorymanager" policy="None"
May 14 23:50:03.434746 kubelet[3277]: I0514 23:50:03.434617 3277 state_mem.go:35] "Initializing new in-memory state store"
May 14 23:50:03.435557 kubelet[3277]: I0514 23:50:03.435532 3277 state_mem.go:75] "Updated machine memory state"
May 14 23:50:03.453271 kubelet[3277]: I0514 23:50:03.452905 3277 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 14 23:50:03.457123 kubelet[3277]: I0514 23:50:03.455801 3277 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 14 23:50:03.466573 kubelet[3277]: I0514 23:50:03.463577 3277 topology_manager.go:215] "Topology Admit Handler" podUID="6778b4cb0646f6df9dbf41e61a19703b" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-28-25"
May 14 23:50:03.466573 kubelet[3277]: I0514 23:50:03.463724 3277 topology_manager.go:215] "Topology Admit Handler" podUID="4b8b638974cc35f6c354fab3f90737ad" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-28-25"
May 14 23:50:03.466573 kubelet[3277]: I0514 23:50:03.463817 3277 topology_manager.go:215] "Topology Admit Handler" podUID="6e328882e2eca8c3db134813c35d49b7" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-28-25"
May 14 23:50:03.466573 kubelet[3277]: I0514 23:50:03.464493 3277 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 14 23:50:03.527519 kubelet[3277]: I0514 23:50:03.527264 3277 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4b8b638974cc35f6c354fab3f90737ad-ca-certs\") pod \"kube-apiserver-ip-172-31-28-25\" (UID: \"4b8b638974cc35f6c354fab3f90737ad\") " pod="kube-system/kube-apiserver-ip-172-31-28-25"
May 14 23:50:03.529780 kubelet[3277]: I0514 23:50:03.528965 3277 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4b8b638974cc35f6c354fab3f90737ad-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-25\" (UID: \"4b8b638974cc35f6c354fab3f90737ad\") " pod="kube-system/kube-apiserver-ip-172-31-28-25"
May 14 23:50:03.529780 kubelet[3277]: I0514 23:50:03.529050 3277 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6e328882e2eca8c3db134813c35d49b7-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-25\" (UID: \"6e328882e2eca8c3db134813c35d49b7\") " pod="kube-system/kube-controller-manager-ip-172-31-28-25"
May 14 23:50:03.529780 kubelet[3277]: I0514 23:50:03.529088 3277 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6e328882e2eca8c3db134813c35d49b7-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-25\" (UID: \"6e328882e2eca8c3db134813c35d49b7\") " pod="kube-system/kube-controller-manager-ip-172-31-28-25"
May 14 23:50:03.529780 kubelet[3277]: I0514 23:50:03.529146 3277 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6e328882e2eca8c3db134813c35d49b7-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-25\" (UID: \"6e328882e2eca8c3db134813c35d49b7\") " pod="kube-system/kube-controller-manager-ip-172-31-28-25"
May 14 23:50:03.529780 kubelet[3277]: I0514 23:50:03.529184 3277 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6e328882e2eca8c3db134813c35d49b7-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-25\" (UID: \"6e328882e2eca8c3db134813c35d49b7\") " pod="kube-system/kube-controller-manager-ip-172-31-28-25"
May 14 23:50:03.531472 kubelet[3277]: I0514 23:50:03.529228 3277 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6778b4cb0646f6df9dbf41e61a19703b-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-25\" (UID: \"6778b4cb0646f6df9dbf41e61a19703b\") " pod="kube-system/kube-scheduler-ip-172-31-28-25"
May 14 23:50:03.531472 kubelet[3277]: I0514 23:50:03.529262 3277 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4b8b638974cc35f6c354fab3f90737ad-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-25\" (UID: \"4b8b638974cc35f6c354fab3f90737ad\") " pod="kube-system/kube-apiserver-ip-172-31-28-25"
May 14 23:50:03.531472 kubelet[3277]: I0514 23:50:03.529295 3277 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e328882e2eca8c3db134813c35d49b7-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-25\" (UID: \"6e328882e2eca8c3db134813c35d49b7\") " pod="kube-system/kube-controller-manager-ip-172-31-28-25"
May 14 23:50:04.056936 kubelet[3277]: I0514 23:50:04.056883 3277 apiserver.go:52] "Watching apiserver"
May 14 23:50:04.076451 sudo[3291]: pam_unix(sudo:session): session closed for user root
May 14 23:50:04.114563 kubelet[3277]: I0514 23:50:04.114502 3277 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 14 23:50:04.123495 update_engine[1931]: I20250514 23:50:04.123391 1931 update_attempter.cc:509] Updating boot flags...
May 14 23:50:04.270219 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 44 scanned by (udev-worker) (3338)
May 14 23:50:04.403446 kubelet[3277]: I0514 23:50:04.401887 3277 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-28-25" podStartSLOduration=1.401866157 podStartE2EDuration="1.401866157s" podCreationTimestamp="2025-05-14 23:50:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:50:04.401282549 +0000 UTC m=+1.546453965" watchObservedRunningTime="2025-05-14 23:50:04.401866157 +0000 UTC m=+1.547037585"
May 14 23:50:04.439632 kubelet[3277]: I0514 23:50:04.439283 3277 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-28-25" podStartSLOduration=1.439258157 podStartE2EDuration="1.439258157s" podCreationTimestamp="2025-05-14 23:50:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:50:04.434925053 +0000 UTC m=+1.580096433" watchObservedRunningTime="2025-05-14 23:50:04.439258157 +0000 UTC m=+1.584429561"
May 14 23:50:04.499465 kubelet[3277]: I0514 23:50:04.496953 3277 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-28-25" podStartSLOduration=1.496916214 podStartE2EDuration="1.496916214s" podCreationTimestamp="2025-05-14 23:50:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:50:04.461849345 +0000 UTC m=+1.607020737" watchObservedRunningTime="2025-05-14 23:50:04.496916214 +0000 UTC m=+1.642087594"
May 14 23:50:04.824125 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 44 scanned by (udev-worker) (3337)
May 14 23:50:07.039864 sudo[2278]: pam_unix(sudo:session): session closed for user root
May 14 23:50:07.062376 sshd[2277]: Connection closed by 139.178.89.65 port 41344
May 14 23:50:07.063238 sshd-session[2275]: pam_unix(sshd:session): session closed for user core
May 14 23:50:07.070677 systemd[1]: sshd@6-172.31.28.25:22-139.178.89.65:41344.service: Deactivated successfully.
May 14 23:50:07.075504 systemd[1]: session-7.scope: Deactivated successfully.
May 14 23:50:07.076035 systemd[1]: session-7.scope: Consumed 11.098s CPU time, 293.5M memory peak.
May 14 23:50:07.078769 systemd-logind[1930]: Session 7 logged out. Waiting for processes to exit.
May 14 23:50:07.080778 systemd-logind[1930]: Removed session 7.
May 14 23:50:14.412404 kubelet[3277]: I0514 23:50:14.412351 3277 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 14 23:50:14.413534 kubelet[3277]: I0514 23:50:14.413391 3277 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 14 23:50:14.413602 containerd[1955]: time="2025-05-14T23:50:14.412914087Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 14 23:50:15.359194 kubelet[3277]: I0514 23:50:15.355792 3277 topology_manager.go:215] "Topology Admit Handler" podUID="55b4617b-33cd-43f2-960c-080e4b2e7441" podNamespace="kube-system" podName="kube-proxy-5xrsw"
May 14 23:50:15.379858 systemd[1]: Created slice kubepods-besteffort-pod55b4617b_33cd_43f2_960c_080e4b2e7441.slice - libcontainer container kubepods-besteffort-pod55b4617b_33cd_43f2_960c_080e4b2e7441.slice.
May 14 23:50:15.405229 kubelet[3277]: I0514 23:50:15.405162 3277 topology_manager.go:215] "Topology Admit Handler" podUID="0c6745cf-908e-4741-9367-980ed710a49b" podNamespace="kube-system" podName="cilium-dcjl7"
May 14 23:50:15.408348 kubelet[3277]: I0514 23:50:15.407115 3277 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/55b4617b-33cd-43f2-960c-080e4b2e7441-xtables-lock\") pod \"kube-proxy-5xrsw\" (UID: \"55b4617b-33cd-43f2-960c-080e4b2e7441\") " pod="kube-system/kube-proxy-5xrsw"
May 14 23:50:15.408623 kubelet[3277]: I0514 23:50:15.408472 3277 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrnpc\" (UniqueName: \"kubernetes.io/projected/55b4617b-33cd-43f2-960c-080e4b2e7441-kube-api-access-wrnpc\") pod \"kube-proxy-5xrsw\" (UID: \"55b4617b-33cd-43f2-960c-080e4b2e7441\") " pod="kube-system/kube-proxy-5xrsw"
May 14 23:50:15.408623 kubelet[3277]: I0514 23:50:15.408524 3277 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/55b4617b-33cd-43f2-960c-080e4b2e7441-kube-proxy\") pod \"kube-proxy-5xrsw\" (UID: \"55b4617b-33cd-43f2-960c-080e4b2e7441\") " pod="kube-system/kube-proxy-5xrsw"
May 14 23:50:15.408623 kubelet[3277]: I0514 23:50:15.408566 3277 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/55b4617b-33cd-43f2-960c-080e4b2e7441-lib-modules\") pod \"kube-proxy-5xrsw\" (UID: \"55b4617b-33cd-43f2-960c-080e4b2e7441\") " pod="kube-system/kube-proxy-5xrsw"
May 14 23:50:15.413232 kubelet[3277]: W0514 23:50:15.411628 3277 reflector.go:547] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-28-25" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-28-25' and this object
May 14 23:50:15.413232 kubelet[3277]: E0514 23:50:15.411682 3277 reflector.go:150] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-28-25" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-28-25' and this object
May 14 23:50:15.413864 kubelet[3277]: W0514 23:50:15.413451 3277 reflector.go:547] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-28-25" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-28-25' and this object
May 14 23:50:15.413864 kubelet[3277]: E0514 23:50:15.413500 3277 reflector.go:150] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-28-25" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-28-25' and this object
May 14 23:50:15.416238 kubelet[3277]: W0514 23:50:15.415443 3277 reflector.go:547] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-28-25" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-28-25' and this object
May 14 23:50:15.416238 kubelet[3277]: E0514 23:50:15.415505 3277 reflector.go:150] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-28-25" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-28-25' and this object
May 14 23:50:15.428726 systemd[1]: Created slice kubepods-burstable-pod0c6745cf_908e_4741_9367_980ed710a49b.slice - libcontainer container kubepods-burstable-pod0c6745cf_908e_4741_9367_980ed710a49b.slice.
May 14 23:50:15.492340 kubelet[3277]: I0514 23:50:15.491998 3277 topology_manager.go:215] "Topology Admit Handler" podUID="ae07a3c6-148f-4676-9a10-4f983071aeb6" podNamespace="kube-system" podName="cilium-operator-599987898-hdpqz"
May 14 23:50:15.508918 kubelet[3277]: I0514 23:50:15.508846 3277 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0c6745cf-908e-4741-9367-980ed710a49b-cni-path\") pod \"cilium-dcjl7\" (UID: \"0c6745cf-908e-4741-9367-980ed710a49b\") " pod="kube-system/cilium-dcjl7"
May 14 23:50:15.509068 kubelet[3277]: I0514 23:50:15.508943 3277 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0c6745cf-908e-4741-9367-980ed710a49b-host-proc-sys-net\") pod \"cilium-dcjl7\" (UID: \"0c6745cf-908e-4741-9367-980ed710a49b\") " pod="kube-system/cilium-dcjl7"
May 14 23:50:15.509068 kubelet[3277]: I0514 23:50:15.509008 3277 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0c6745cf-908e-4741-9367-980ed710a49b-hubble-tls\") pod \"cilium-dcjl7\" (UID: \"0c6745cf-908e-4741-9367-980ed710a49b\") " pod="kube-system/cilium-dcjl7"
May 14 23:50:15.509068 kubelet[3277]: I0514 23:50:15.509048 3277 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0c6745cf-908e-4741-9367-980ed710a49b-bpf-maps\") pod \"cilium-dcjl7\" (UID: \"0c6745cf-908e-4741-9367-980ed710a49b\") " pod="kube-system/cilium-dcjl7"
May 14 23:50:15.509276 kubelet[3277]: I0514 23:50:15.509208 3277 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0c6745cf-908e-4741-9367-980ed710a49b-clustermesh-secrets\") pod \"cilium-dcjl7\" (UID: \"0c6745cf-908e-4741-9367-980ed710a49b\") " pod="kube-system/cilium-dcjl7"
May 14 23:50:15.509355 kubelet[3277]: I0514 23:50:15.509278 3277 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ae07a3c6-148f-4676-9a10-4f983071aeb6-cilium-config-path\") pod \"cilium-operator-599987898-hdpqz\" (UID: \"ae07a3c6-148f-4676-9a10-4f983071aeb6\") " pod="kube-system/cilium-operator-599987898-hdpqz"
May 14 23:50:15.509414 kubelet[3277]: I0514 23:50:15.509359 3277 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0c6745cf-908e-4741-9367-980ed710a49b-cilium-run\") pod \"cilium-dcjl7\" (UID: \"0c6745cf-908e-4741-9367-980ed710a49b\") " pod="kube-system/cilium-dcjl7"
May 14 23:50:15.509465 kubelet[3277]: I0514 23:50:15.509400 3277 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0c6745cf-908e-4741-9367-980ed710a49b-cilium-config-path\") pod \"cilium-dcjl7\" (UID: \"0c6745cf-908e-4741-9367-980ed710a49b\") " pod="kube-system/cilium-dcjl7"
May 14 23:50:15.509522 kubelet[3277]: I0514 23:50:15.509473 3277 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz2jr\" (UniqueName: \"kubernetes.io/projected/0c6745cf-908e-4741-9367-980ed710a49b-kube-api-access-bz2jr\") pod \"cilium-dcjl7\" (UID: \"0c6745cf-908e-4741-9367-980ed710a49b\") " pod="kube-system/cilium-dcjl7"
May 14 23:50:15.509572 kubelet[3277]: I0514 23:50:15.509535 3277 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grh4g\" (UniqueName: \"kubernetes.io/projected/ae07a3c6-148f-4676-9a10-4f983071aeb6-kube-api-access-grh4g\") pod \"cilium-operator-599987898-hdpqz\" (UID: \"ae07a3c6-148f-4676-9a10-4f983071aeb6\") " pod="kube-system/cilium-operator-599987898-hdpqz"
May 14 23:50:15.509700 kubelet[3277]: I0514 23:50:15.509642 3277 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0c6745cf-908e-4741-9367-980ed710a49b-hostproc\") pod \"cilium-dcjl7\" (UID: \"0c6745cf-908e-4741-9367-980ed710a49b\") " pod="kube-system/cilium-dcjl7"
May 14 23:50:15.511014 kubelet[3277]: I0514 23:50:15.509713 3277 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0c6745cf-908e-4741-9367-980ed710a49b-etc-cni-netd\") pod \"cilium-dcjl7\" (UID: \"0c6745cf-908e-4741-9367-980ed710a49b\") " pod="kube-system/cilium-dcjl7"
May 14 23:50:15.511014 kubelet[3277]: I0514 23:50:15.509755 3277 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c6745cf-908e-4741-9367-980ed710a49b-lib-modules\") pod \"cilium-dcjl7\" (UID: \"0c6745cf-908e-4741-9367-980ed710a49b\") " pod="kube-system/cilium-dcjl7"
May 14 23:50:15.511014 kubelet[3277]: I0514 23:50:15.509984 3277 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0c6745cf-908e-4741-9367-980ed710a49b-cilium-cgroup\") pod \"cilium-dcjl7\" (UID: \"0c6745cf-908e-4741-9367-980ed710a49b\") " pod="kube-system/cilium-dcjl7"
May 14 23:50:15.511014 kubelet[3277]: I0514 23:50:15.510032 3277 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0c6745cf-908e-4741-9367-980ed710a49b-xtables-lock\") pod \"cilium-dcjl7\" (UID: \"0c6745cf-908e-4741-9367-980ed710a49b\") " pod="kube-system/cilium-dcjl7"
May 14 23:50:15.511014 kubelet[3277]: I0514 23:50:15.510683 3277 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0c6745cf-908e-4741-9367-980ed710a49b-host-proc-sys-kernel\") pod \"cilium-dcjl7\" (UID: \"0c6745cf-908e-4741-9367-980ed710a49b\") " pod="kube-system/cilium-dcjl7"
May 14 23:50:15.510671 systemd[1]: Created slice kubepods-besteffort-podae07a3c6_148f_4676_9a10_4f983071aeb6.slice - libcontainer container kubepods-besteffort-podae07a3c6_148f_4676_9a10_4f983071aeb6.slice.
May 14 23:50:15.692839 containerd[1955]: time="2025-05-14T23:50:15.692618933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5xrsw,Uid:55b4617b-33cd-43f2-960c-080e4b2e7441,Namespace:kube-system,Attempt:0,}"
May 14 23:50:15.748636 containerd[1955]: time="2025-05-14T23:50:15.748143545Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 14 23:50:15.748636 containerd[1955]: time="2025-05-14T23:50:15.748361789Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 14 23:50:15.748636 containerd[1955]: time="2025-05-14T23:50:15.748444649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 14 23:50:15.749054 containerd[1955]: time="2025-05-14T23:50:15.748722653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 14 23:50:15.797406 systemd[1]: Started cri-containerd-c5b7df8e40d1bd954ca404095ee2f5147055b95219c103bdb703026f94e5658d.scope - libcontainer container c5b7df8e40d1bd954ca404095ee2f5147055b95219c103bdb703026f94e5658d.
May 14 23:50:15.841248 containerd[1955]: time="2025-05-14T23:50:15.841162098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5xrsw,Uid:55b4617b-33cd-43f2-960c-080e4b2e7441,Namespace:kube-system,Attempt:0,} returns sandbox id \"c5b7df8e40d1bd954ca404095ee2f5147055b95219c103bdb703026f94e5658d\""
May 14 23:50:15.848406 containerd[1955]: time="2025-05-14T23:50:15.847266654Z" level=info msg="CreateContainer within sandbox \"c5b7df8e40d1bd954ca404095ee2f5147055b95219c103bdb703026f94e5658d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 14 23:50:15.879387 containerd[1955]: time="2025-05-14T23:50:15.879326226Z" level=info msg="CreateContainer within sandbox \"c5b7df8e40d1bd954ca404095ee2f5147055b95219c103bdb703026f94e5658d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"19ca44165cf193c39a2acf8712b3bf3c07bba781e29956d2eec52375a8b84cf4\""
May 14 23:50:15.881301 containerd[1955]: time="2025-05-14T23:50:15.881197146Z" level=info msg="StartContainer for \"19ca44165cf193c39a2acf8712b3bf3c07bba781e29956d2eec52375a8b84cf4\""
May 14 23:50:15.933421 systemd[1]: Started cri-containerd-19ca44165cf193c39a2acf8712b3bf3c07bba781e29956d2eec52375a8b84cf4.scope - libcontainer container 19ca44165cf193c39a2acf8712b3bf3c07bba781e29956d2eec52375a8b84cf4.
May 14 23:50:15.997240 containerd[1955]: time="2025-05-14T23:50:15.996984799Z" level=info msg="StartContainer for \"19ca44165cf193c39a2acf8712b3bf3c07bba781e29956d2eec52375a8b84cf4\" returns successfully"
May 14 23:50:16.613121 kubelet[3277]: E0514 23:50:16.613038 3277 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition
May 14 23:50:16.613722 kubelet[3277]: E0514 23:50:16.613187 3277 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c6745cf-908e-4741-9367-980ed710a49b-clustermesh-secrets podName:0c6745cf-908e-4741-9367-980ed710a49b nodeName:}" failed. No retries permitted until 2025-05-14 23:50:17.113155414 +0000 UTC m=+14.258326806 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/0c6745cf-908e-4741-9367-980ed710a49b-clustermesh-secrets") pod "cilium-dcjl7" (UID: "0c6745cf-908e-4741-9367-980ed710a49b") : failed to sync secret cache: timed out waiting for the condition
May 14 23:50:16.613722 kubelet[3277]: E0514 23:50:16.613233 3277 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition
May 14 23:50:16.613722 kubelet[3277]: E0514 23:50:16.613290 3277 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ae07a3c6-148f-4676-9a10-4f983071aeb6-cilium-config-path podName:ae07a3c6-148f-4676-9a10-4f983071aeb6 nodeName:}" failed. No retries permitted until 2025-05-14 23:50:17.113275018 +0000 UTC m=+14.258446398 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/ae07a3c6-148f-4676-9a10-4f983071aeb6-cilium-config-path") pod "cilium-operator-599987898-hdpqz" (UID: "ae07a3c6-148f-4676-9a10-4f983071aeb6") : failed to sync configmap cache: timed out waiting for the condition
May 14 23:50:16.613722 kubelet[3277]: E0514 23:50:16.613672 3277 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition
May 14 23:50:16.614085 kubelet[3277]: E0514 23:50:16.613739 3277 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0c6745cf-908e-4741-9367-980ed710a49b-cilium-config-path podName:0c6745cf-908e-4741-9367-980ed710a49b nodeName:}" failed. No retries permitted until 2025-05-14 23:50:17.113717038 +0000 UTC m=+14.258888442 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/0c6745cf-908e-4741-9367-980ed710a49b-cilium-config-path") pod "cilium-dcjl7" (UID: "0c6745cf-908e-4741-9367-980ed710a49b") : failed to sync configmap cache: timed out waiting for the condition
May 14 23:50:16.614085 kubelet[3277]: E0514 23:50:16.613792 3277 projected.go:269] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition
May 14 23:50:16.614085 kubelet[3277]: E0514 23:50:16.613811 3277 projected.go:200] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-dcjl7: failed to sync secret cache: timed out waiting for the condition
May 14 23:50:16.614085 kubelet[3277]: E0514 23:50:16.613863 3277 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0c6745cf-908e-4741-9367-980ed710a49b-hubble-tls podName:0c6745cf-908e-4741-9367-980ed710a49b nodeName:}" failed. No retries permitted until 2025-05-14 23:50:17.113848282 +0000 UTC m=+14.259019674 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/0c6745cf-908e-4741-9367-980ed710a49b-hubble-tls") pod "cilium-dcjl7" (UID: "0c6745cf-908e-4741-9367-980ed710a49b") : failed to sync secret cache: timed out waiting for the condition
May 14 23:50:17.239904 containerd[1955]: time="2025-05-14T23:50:17.239831273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dcjl7,Uid:0c6745cf-908e-4741-9367-980ed710a49b,Namespace:kube-system,Attempt:0,}"
May 14 23:50:17.288721 containerd[1955]: time="2025-05-14T23:50:17.288502937Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 14 23:50:17.288721 containerd[1955]: time="2025-05-14T23:50:17.288622229Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 14 23:50:17.289091 containerd[1955]: time="2025-05-14T23:50:17.288752501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 14 23:50:17.290771 containerd[1955]: time="2025-05-14T23:50:17.290624933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 14 23:50:17.325352 containerd[1955]: time="2025-05-14T23:50:17.324859073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-hdpqz,Uid:ae07a3c6-148f-4676-9a10-4f983071aeb6,Namespace:kube-system,Attempt:0,}"
May 14 23:50:17.332428 systemd[1]: Started cri-containerd-bf1816cdcb22df23ceb94352157953c263c09c210d938412df92aea7da823513.scope - libcontainer container bf1816cdcb22df23ceb94352157953c263c09c210d938412df92aea7da823513.
May 14 23:50:17.393539 containerd[1955]: time="2025-05-14T23:50:17.393475182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dcjl7,Uid:0c6745cf-908e-4741-9367-980ed710a49b,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf1816cdcb22df23ceb94352157953c263c09c210d938412df92aea7da823513\""
May 14 23:50:17.399797 containerd[1955]: time="2025-05-14T23:50:17.398420826Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
May 14 23:50:17.407692 containerd[1955]: time="2025-05-14T23:50:17.407438478Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 14 23:50:17.408350 containerd[1955]: time="2025-05-14T23:50:17.408270078Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 14 23:50:17.408462 containerd[1955]: time="2025-05-14T23:50:17.408342150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 14 23:50:17.408821 containerd[1955]: time="2025-05-14T23:50:17.408740082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 14 23:50:17.439421 systemd[1]: Started cri-containerd-3ecbd22f9cd6b58eb4c5c1efd06c03a335e080c67931befd0c330b9702ac52ae.scope - libcontainer container 3ecbd22f9cd6b58eb4c5c1efd06c03a335e080c67931befd0c330b9702ac52ae.
May 14 23:50:17.501767 containerd[1955]: time="2025-05-14T23:50:17.500499870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-hdpqz,Uid:ae07a3c6-148f-4676-9a10-4f983071aeb6,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ecbd22f9cd6b58eb4c5c1efd06c03a335e080c67931befd0c330b9702ac52ae\""
May 14 23:50:24.155968 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2509205290.mount: Deactivated successfully.
May 14 23:50:26.653363 containerd[1955]: time="2025-05-14T23:50:26.653282956Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:50:26.655158 containerd[1955]: time="2025-05-14T23:50:26.655049764Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
May 14 23:50:26.657441 containerd[1955]: time="2025-05-14T23:50:26.657331900Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:50:26.662841 containerd[1955]: time="2025-05-14T23:50:26.662632204Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 9.264146014s"
May 14 23:50:26.662841 containerd[1955]: time="2025-05-14T23:50:26.662690860Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
May 14 23:50:26.666734 containerd[1955]: time="2025-05-14T23:50:26.666386812Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
May 14 23:50:26.669712 containerd[1955]: time="2025-05-14T23:50:26.669546928Z" level=info msg="CreateContainer within sandbox \"bf1816cdcb22df23ceb94352157953c263c09c210d938412df92aea7da823513\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 14 23:50:26.786408 containerd[1955]: time="2025-05-14T23:50:26.786341032Z" level=info msg="CreateContainer within sandbox \"bf1816cdcb22df23ceb94352157953c263c09c210d938412df92aea7da823513\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a8ecd5b46dc3de840ebcc8d97f7c5efab5797d43a364266bd5692b8e7eaf1d08\""
May 14 23:50:26.788010 containerd[1955]: time="2025-05-14T23:50:26.787910596Z" level=info msg="StartContainer for \"a8ecd5b46dc3de840ebcc8d97f7c5efab5797d43a364266bd5692b8e7eaf1d08\""
May 14 23:50:26.852448 systemd[1]: Started cri-containerd-a8ecd5b46dc3de840ebcc8d97f7c5efab5797d43a364266bd5692b8e7eaf1d08.scope - libcontainer container a8ecd5b46dc3de840ebcc8d97f7c5efab5797d43a364266bd5692b8e7eaf1d08.
May 14 23:50:26.922007 systemd[1]: cri-containerd-a8ecd5b46dc3de840ebcc8d97f7c5efab5797d43a364266bd5692b8e7eaf1d08.scope: Deactivated successfully.
May 14 23:50:26.928069 containerd[1955]: time="2025-05-14T23:50:26.927783677Z" level=info msg="StartContainer for \"a8ecd5b46dc3de840ebcc8d97f7c5efab5797d43a364266bd5692b8e7eaf1d08\" returns successfully"
May 14 23:50:27.415588 kubelet[3277]: I0514 23:50:27.415489 3277 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5xrsw" podStartSLOduration=12.415464615 podStartE2EDuration="12.415464615s" podCreationTimestamp="2025-05-14 23:50:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:50:16.391479701 +0000 UTC m=+13.536651117" watchObservedRunningTime="2025-05-14 23:50:27.415464615 +0000 UTC m=+24.560635995"
May 14 23:50:27.734074 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a8ecd5b46dc3de840ebcc8d97f7c5efab5797d43a364266bd5692b8e7eaf1d08-rootfs.mount: Deactivated successfully.
May 14 23:50:27.865998 containerd[1955]: time="2025-05-14T23:50:27.865895874Z" level=info msg="shim disconnected" id=a8ecd5b46dc3de840ebcc8d97f7c5efab5797d43a364266bd5692b8e7eaf1d08 namespace=k8s.io
May 14 23:50:27.865998 containerd[1955]: time="2025-05-14T23:50:27.865975746Z" level=warning msg="cleaning up after shim disconnected" id=a8ecd5b46dc3de840ebcc8d97f7c5efab5797d43a364266bd5692b8e7eaf1d08 namespace=k8s.io
May 14 23:50:27.865998 containerd[1955]: time="2025-05-14T23:50:27.865996842Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 23:50:28.401603 containerd[1955]: time="2025-05-14T23:50:28.401247808Z" level=info msg="CreateContainer within sandbox \"bf1816cdcb22df23ceb94352157953c263c09c210d938412df92aea7da823513\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 14 23:50:28.440838 containerd[1955]: time="2025-05-14T23:50:28.440705104Z" level=info msg="CreateContainer within sandbox \"bf1816cdcb22df23ceb94352157953c263c09c210d938412df92aea7da823513\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f3d5277cf92ec1dca3c908a50b6bbcb26d67b6c058dd4663bc09ae6b49ad4341\""
May 14 23:50:28.441893 containerd[1955]: time="2025-05-14T23:50:28.441628780Z" level=info msg="StartContainer for \"f3d5277cf92ec1dca3c908a50b6bbcb26d67b6c058dd4663bc09ae6b49ad4341\""
May 14 23:50:28.500405 systemd[1]: Started cri-containerd-f3d5277cf92ec1dca3c908a50b6bbcb26d67b6c058dd4663bc09ae6b49ad4341.scope - libcontainer container f3d5277cf92ec1dca3c908a50b6bbcb26d67b6c058dd4663bc09ae6b49ad4341.
May 14 23:50:28.572539 containerd[1955]: time="2025-05-14T23:50:28.572399453Z" level=info msg="StartContainer for \"f3d5277cf92ec1dca3c908a50b6bbcb26d67b6c058dd4663bc09ae6b49ad4341\" returns successfully"
May 14 23:50:28.600887 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 14 23:50:28.601432 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 14 23:50:28.602450 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
May 14 23:50:28.611773 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 23:50:28.625488 systemd[1]: cri-containerd-f3d5277cf92ec1dca3c908a50b6bbcb26d67b6c058dd4663bc09ae6b49ad4341.scope: Deactivated successfully.
May 14 23:50:28.664320 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 23:50:28.702142 containerd[1955]: time="2025-05-14T23:50:28.702020358Z" level=info msg="shim disconnected" id=f3d5277cf92ec1dca3c908a50b6bbcb26d67b6c058dd4663bc09ae6b49ad4341 namespace=k8s.io
May 14 23:50:28.702142 containerd[1955]: time="2025-05-14T23:50:28.702120462Z" level=warning msg="cleaning up after shim disconnected" id=f3d5277cf92ec1dca3c908a50b6bbcb26d67b6c058dd4663bc09ae6b49ad4341 namespace=k8s.io
May 14 23:50:28.702142 containerd[1955]: time="2025-05-14T23:50:28.702144630Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 23:50:28.740539 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f3d5277cf92ec1dca3c908a50b6bbcb26d67b6c058dd4663bc09ae6b49ad4341-rootfs.mount: Deactivated successfully.
May 14 23:50:29.418131 containerd[1955]: time="2025-05-14T23:50:29.417446633Z" level=info msg="CreateContainer within sandbox \"bf1816cdcb22df23ceb94352157953c263c09c210d938412df92aea7da823513\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 14 23:50:29.475628 containerd[1955]: time="2025-05-14T23:50:29.474777018Z" level=info msg="CreateContainer within sandbox \"bf1816cdcb22df23ceb94352157953c263c09c210d938412df92aea7da823513\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7f7a159677d3a903148fd6fd9cba124919cbdd8e8c8d3b73750fb476544f8891\""
May 14 23:50:29.477125 containerd[1955]: time="2025-05-14T23:50:29.477046998Z" level=info msg="StartContainer for \"7f7a159677d3a903148fd6fd9cba124919cbdd8e8c8d3b73750fb476544f8891\""
May 14 23:50:29.559421 systemd[1]: Started cri-containerd-7f7a159677d3a903148fd6fd9cba124919cbdd8e8c8d3b73750fb476544f8891.scope - libcontainer container 7f7a159677d3a903148fd6fd9cba124919cbdd8e8c8d3b73750fb476544f8891.
May 14 23:50:29.634994 containerd[1955]: time="2025-05-14T23:50:29.634904094Z" level=info msg="StartContainer for \"7f7a159677d3a903148fd6fd9cba124919cbdd8e8c8d3b73750fb476544f8891\" returns successfully"
May 14 23:50:29.645211 systemd[1]: cri-containerd-7f7a159677d3a903148fd6fd9cba124919cbdd8e8c8d3b73750fb476544f8891.scope: Deactivated successfully.
May 14 23:50:29.737316 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f7a159677d3a903148fd6fd9cba124919cbdd8e8c8d3b73750fb476544f8891-rootfs.mount: Deactivated successfully.
May 14 23:50:29.769207 containerd[1955]: time="2025-05-14T23:50:29.768908263Z" level=info msg="shim disconnected" id=7f7a159677d3a903148fd6fd9cba124919cbdd8e8c8d3b73750fb476544f8891 namespace=k8s.io
May 14 23:50:29.769207 containerd[1955]: time="2025-05-14T23:50:29.769017307Z" level=warning msg="cleaning up after shim disconnected" id=7f7a159677d3a903148fd6fd9cba124919cbdd8e8c8d3b73750fb476544f8891 namespace=k8s.io
May 14 23:50:29.769207 containerd[1955]: time="2025-05-14T23:50:29.769039831Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 23:50:30.022166 containerd[1955]: time="2025-05-14T23:50:30.021940108Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:50:30.024013 containerd[1955]: time="2025-05-14T23:50:30.023921260Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
May 14 23:50:30.026431 containerd[1955]: time="2025-05-14T23:50:30.026358400Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:50:30.029679 containerd[1955]: time="2025-05-14T23:50:30.029378740Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.362932588s"
May 14 23:50:30.029679 containerd[1955]: time="2025-05-14T23:50:30.029449672Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
May 14 23:50:30.034531 containerd[1955]: time="2025-05-14T23:50:30.034452112Z" level=info msg="CreateContainer within sandbox \"3ecbd22f9cd6b58eb4c5c1efd06c03a335e080c67931befd0c330b9702ac52ae\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 14 23:50:30.069938 containerd[1955]: time="2025-05-14T23:50:30.069863657Z" level=info msg="CreateContainer within sandbox \"3ecbd22f9cd6b58eb4c5c1efd06c03a335e080c67931befd0c330b9702ac52ae\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8ced7a2cefb09c8509d7e80de7ad3daff62b6368c6aa6ad16ad571d2a34d9e21\""
May 14 23:50:30.070844 containerd[1955]: time="2025-05-14T23:50:30.070805837Z" level=info msg="StartContainer for \"8ced7a2cefb09c8509d7e80de7ad3daff62b6368c6aa6ad16ad571d2a34d9e21\""
May 14 23:50:30.130389 systemd[1]: Started cri-containerd-8ced7a2cefb09c8509d7e80de7ad3daff62b6368c6aa6ad16ad571d2a34d9e21.scope - libcontainer container 8ced7a2cefb09c8509d7e80de7ad3daff62b6368c6aa6ad16ad571d2a34d9e21.
May 14 23:50:30.178678 containerd[1955]: time="2025-05-14T23:50:30.178490153Z" level=info msg="StartContainer for \"8ced7a2cefb09c8509d7e80de7ad3daff62b6368c6aa6ad16ad571d2a34d9e21\" returns successfully"
May 14 23:50:30.429403 containerd[1955]: time="2025-05-14T23:50:30.429351522Z" level=info msg="CreateContainer within sandbox \"bf1816cdcb22df23ceb94352157953c263c09c210d938412df92aea7da823513\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 14 23:50:30.460926 containerd[1955]: time="2025-05-14T23:50:30.460157971Z" level=info msg="CreateContainer within sandbox \"bf1816cdcb22df23ceb94352157953c263c09c210d938412df92aea7da823513\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"846a5b314fc68dc8a4ffbf95fbe47b1cf463924f740ccc58206294ae6fd8c51f\""
May 14 23:50:30.461226 containerd[1955]: time="2025-05-14T23:50:30.461158891Z" level=info msg="StartContainer for \"846a5b314fc68dc8a4ffbf95fbe47b1cf463924f740ccc58206294ae6fd8c51f\""
May 14 23:50:30.561460 systemd[1]: Started cri-containerd-846a5b314fc68dc8a4ffbf95fbe47b1cf463924f740ccc58206294ae6fd8c51f.scope - libcontainer container 846a5b314fc68dc8a4ffbf95fbe47b1cf463924f740ccc58206294ae6fd8c51f.
May 14 23:50:30.581256 kubelet[3277]: I0514 23:50:30.580891 3277 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-hdpqz" podStartSLOduration=3.053932145 podStartE2EDuration="15.580869979s" podCreationTimestamp="2025-05-14 23:50:15 +0000 UTC" firstStartedPulling="2025-05-14 23:50:17.504080694 +0000 UTC m=+14.649252074" lastFinishedPulling="2025-05-14 23:50:30.031018528 +0000 UTC m=+27.176189908" observedRunningTime="2025-05-14 23:50:30.476329219 +0000 UTC m=+27.621500647" watchObservedRunningTime="2025-05-14 23:50:30.580869979 +0000 UTC m=+27.726041395"
May 14 23:50:30.696588 systemd[1]: cri-containerd-846a5b314fc68dc8a4ffbf95fbe47b1cf463924f740ccc58206294ae6fd8c51f.scope: Deactivated successfully.
May 14 23:50:30.702138 containerd[1955]: time="2025-05-14T23:50:30.699225788Z" level=info msg="StartContainer for \"846a5b314fc68dc8a4ffbf95fbe47b1cf463924f740ccc58206294ae6fd8c51f\" returns successfully"
May 14 23:50:30.776877 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-846a5b314fc68dc8a4ffbf95fbe47b1cf463924f740ccc58206294ae6fd8c51f-rootfs.mount: Deactivated successfully.
May 14 23:50:30.815524 containerd[1955]: time="2025-05-14T23:50:30.815144060Z" level=info msg="shim disconnected" id=846a5b314fc68dc8a4ffbf95fbe47b1cf463924f740ccc58206294ae6fd8c51f namespace=k8s.io
May 14 23:50:30.815524 containerd[1955]: time="2025-05-14T23:50:30.815229416Z" level=warning msg="cleaning up after shim disconnected" id=846a5b314fc68dc8a4ffbf95fbe47b1cf463924f740ccc58206294ae6fd8c51f namespace=k8s.io
May 14 23:50:30.815524 containerd[1955]: time="2025-05-14T23:50:30.815251088Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 23:50:31.444960 containerd[1955]: time="2025-05-14T23:50:31.444771019Z" level=info msg="CreateContainer within sandbox \"bf1816cdcb22df23ceb94352157953c263c09c210d938412df92aea7da823513\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 14 23:50:31.482849 containerd[1955]: time="2025-05-14T23:50:31.482761964Z" level=info msg="CreateContainer within sandbox \"bf1816cdcb22df23ceb94352157953c263c09c210d938412df92aea7da823513\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ba45e5bd587756c792b866004160609c03b1b2b737757614948644c10a1a9c9a\""
May 14 23:50:31.487410 containerd[1955]: time="2025-05-14T23:50:31.485697008Z" level=info msg="StartContainer for \"ba45e5bd587756c792b866004160609c03b1b2b737757614948644c10a1a9c9a\""
May 14 23:50:31.582410 systemd[1]: Started cri-containerd-ba45e5bd587756c792b866004160609c03b1b2b737757614948644c10a1a9c9a.scope - libcontainer container ba45e5bd587756c792b866004160609c03b1b2b737757614948644c10a1a9c9a.
May 14 23:50:31.713069 containerd[1955]: time="2025-05-14T23:50:31.712934997Z" level=info msg="StartContainer for \"ba45e5bd587756c792b866004160609c03b1b2b737757614948644c10a1a9c9a\" returns successfully"
May 14 23:50:32.098933 kubelet[3277]: I0514 23:50:32.096385 3277 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
May 14 23:50:32.139820 kubelet[3277]: I0514 23:50:32.139722 3277 topology_manager.go:215] "Topology Admit Handler" podUID="7a9484dc-7893-489b-be6b-a888b4e25bfd" podNamespace="kube-system" podName="coredns-7db6d8ff4d-dsjq4"
May 14 23:50:32.145792 kubelet[3277]: I0514 23:50:32.145587 3277 topology_manager.go:215] "Topology Admit Handler" podUID="0778d2bb-471a-4ad3-a0fc-2bff2e6769ae" podNamespace="kube-system" podName="coredns-7db6d8ff4d-6bcg9"
May 14 23:50:32.160265 systemd[1]: Created slice kubepods-burstable-pod7a9484dc_7893_489b_be6b_a888b4e25bfd.slice - libcontainer container kubepods-burstable-pod7a9484dc_7893_489b_be6b_a888b4e25bfd.slice.
May 14 23:50:32.177823 systemd[1]: Created slice kubepods-burstable-pod0778d2bb_471a_4ad3_a0fc_2bff2e6769ae.slice - libcontainer container kubepods-burstable-pod0778d2bb_471a_4ad3_a0fc_2bff2e6769ae.slice.
May 14 23:50:32.234607 kubelet[3277]: I0514 23:50:32.234542 3277 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-257v8\" (UniqueName: \"kubernetes.io/projected/7a9484dc-7893-489b-be6b-a888b4e25bfd-kube-api-access-257v8\") pod \"coredns-7db6d8ff4d-dsjq4\" (UID: \"7a9484dc-7893-489b-be6b-a888b4e25bfd\") " pod="kube-system/coredns-7db6d8ff4d-dsjq4"
May 14 23:50:32.234767 kubelet[3277]: I0514 23:50:32.234618 3277 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0778d2bb-471a-4ad3-a0fc-2bff2e6769ae-config-volume\") pod \"coredns-7db6d8ff4d-6bcg9\" (UID: \"0778d2bb-471a-4ad3-a0fc-2bff2e6769ae\") " pod="kube-system/coredns-7db6d8ff4d-6bcg9"
May 14 23:50:32.234767 kubelet[3277]: I0514 23:50:32.234672 3277 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vpcc\" (UniqueName: \"kubernetes.io/projected/0778d2bb-471a-4ad3-a0fc-2bff2e6769ae-kube-api-access-2vpcc\") pod \"coredns-7db6d8ff4d-6bcg9\" (UID: \"0778d2bb-471a-4ad3-a0fc-2bff2e6769ae\") " pod="kube-system/coredns-7db6d8ff4d-6bcg9"
May 14 23:50:32.234767 kubelet[3277]: I0514 23:50:32.234708 3277 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a9484dc-7893-489b-be6b-a888b4e25bfd-config-volume\") pod \"coredns-7db6d8ff4d-dsjq4\" (UID: \"7a9484dc-7893-489b-be6b-a888b4e25bfd\") " pod="kube-system/coredns-7db6d8ff4d-dsjq4"
May 14 23:50:32.473469 containerd[1955]: time="2025-05-14T23:50:32.472783617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-dsjq4,Uid:7a9484dc-7893-489b-be6b-a888b4e25bfd,Namespace:kube-system,Attempt:0,}"
May 14 23:50:32.488806 containerd[1955]: time="2025-05-14T23:50:32.488736189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6bcg9,Uid:0778d2bb-471a-4ad3-a0fc-2bff2e6769ae,Namespace:kube-system,Attempt:0,}"
May 14 23:50:34.882996 (udev-worker)[4256]: Network interface NamePolicy= disabled on kernel command line.
May 14 23:50:34.883755 (udev-worker)[4255]: Network interface NamePolicy= disabled on kernel command line.
May 14 23:50:34.886036 systemd-networkd[1773]: cilium_host: Link UP
May 14 23:50:34.886903 systemd-networkd[1773]: cilium_net: Link UP
May 14 23:50:34.890040 systemd-networkd[1773]: cilium_net: Gained carrier
May 14 23:50:34.890942 systemd-networkd[1773]: cilium_host: Gained carrier
May 14 23:50:35.054498 (udev-worker)[4303]: Network interface NamePolicy= disabled on kernel command line.
May 14 23:50:35.070194 systemd-networkd[1773]: cilium_vxlan: Link UP
May 14 23:50:35.070207 systemd-networkd[1773]: cilium_vxlan: Gained carrier
May 14 23:50:35.161524 systemd-networkd[1773]: cilium_host: Gained IPv6LL
May 14 23:50:35.566137 kernel: NET: Registered PF_ALG protocol family
May 14 23:50:35.569487 systemd-networkd[1773]: cilium_net: Gained IPv6LL
May 14 23:50:36.210264 systemd-networkd[1773]: cilium_vxlan: Gained IPv6LL
May 14 23:50:36.890770 systemd-networkd[1773]: lxc_health: Link UP
May 14 23:50:36.893227 systemd-networkd[1773]: lxc_health: Gained carrier
May 14 23:50:37.307635 kubelet[3277]: I0514 23:50:37.307541 3277 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dcjl7" podStartSLOduration=13.038804355 podStartE2EDuration="22.307520233s" podCreationTimestamp="2025-05-14 23:50:15 +0000 UTC" firstStartedPulling="2025-05-14 23:50:17.396957822 +0000 UTC m=+14.542129214" lastFinishedPulling="2025-05-14 23:50:26.6656737 +0000 UTC m=+23.810845092" observedRunningTime="2025-05-14 23:50:32.509524437 +0000 UTC m=+29.654695841" watchObservedRunningTime="2025-05-14 23:50:37.307520233 +0000 UTC m=+34.452691625"
May 14 23:50:37.581316 systemd-networkd[1773]: lxc375aa36573de: Link UP
May 14 23:50:37.593262 kernel: eth0: renamed from tmp17464
May 14 23:50:37.615693 kernel: eth0: renamed from tmp23731
May 14 23:50:37.614331 systemd-networkd[1773]: lxc264c5a1e12dc: Link UP
May 14 23:50:37.619879 systemd-networkd[1773]: lxc375aa36573de: Gained carrier
May 14 23:50:37.626715 systemd-networkd[1773]: lxc264c5a1e12dc: Gained carrier
May 14 23:50:37.628464 (udev-worker)[4300]: Network interface NamePolicy= disabled on kernel command line.
May 14 23:50:38.897521 systemd-networkd[1773]: lxc_health: Gained IPv6LL
May 14 23:50:39.345391 systemd-networkd[1773]: lxc375aa36573de: Gained IPv6LL
May 14 23:50:39.601325 systemd-networkd[1773]: lxc264c5a1e12dc: Gained IPv6LL
May 14 23:50:42.235362 ntpd[1922]: Listen normally on 7 cilium_host 192.168.0.110:123
May 14 23:50:42.236555 ntpd[1922]: 14 May 23:50:42 ntpd[1922]: Listen normally on 7 cilium_host 192.168.0.110:123
May 14 23:50:42.236555 ntpd[1922]: 14 May 23:50:42 ntpd[1922]: Listen normally on 8 cilium_net [fe80::50f8:89ff:febd:70c6%4]:123
May 14 23:50:42.236555 ntpd[1922]: 14 May 23:50:42 ntpd[1922]: Listen normally on 9 cilium_host [fe80::8884:a6ff:fea2:8d04%5]:123
May 14 23:50:42.236555 ntpd[1922]: 14 May 23:50:42 ntpd[1922]: Listen normally on 10 cilium_vxlan [fe80::5069:57ff:fe24:420d%6]:123
May 14 23:50:42.236555 ntpd[1922]: 14 May 23:50:42 ntpd[1922]: Listen normally on 11 lxc_health [fe80::1892:71ff:fe10:ec0d%8]:123
May 14 23:50:42.236555 ntpd[1922]: 14 May 23:50:42 ntpd[1922]: Listen normally on 12 lxc375aa36573de [fe80::5426:b6ff:fe28:312%10]:123
May 14 23:50:42.236555 ntpd[1922]: 14 May 23:50:42 ntpd[1922]: Listen normally on 13 lxc264c5a1e12dc [fe80::8888:78ff:fef0:6484%12]:123
May 14 23:50:42.235493 ntpd[1922]: Listen normally on 8 cilium_net [fe80::50f8:89ff:febd:70c6%4]:123
May 14 23:50:42.235574 ntpd[1922]: Listen normally on 9 cilium_host [fe80::8884:a6ff:fea2:8d04%5]:123
May 14 23:50:42.235665 ntpd[1922]: Listen normally on 10 cilium_vxlan [fe80::5069:57ff:fe24:420d%6]:123
May 14 23:50:42.235738 ntpd[1922]: Listen normally on 11 lxc_health [fe80::1892:71ff:fe10:ec0d%8]:123
May 14 23:50:42.235806 ntpd[1922]: Listen normally on 12 lxc375aa36573de [fe80::5426:b6ff:fe28:312%10]:123
May 14 23:50:42.235876 ntpd[1922]: Listen normally on 13 lxc264c5a1e12dc [fe80::8888:78ff:fef0:6484%12]:123
May 14 23:50:45.796367 containerd[1955]: time="2025-05-14T23:50:45.796139495Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 14 23:50:45.798268 containerd[1955]: time="2025-05-14T23:50:45.797183603Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 14 23:50:45.798268 containerd[1955]: time="2025-05-14T23:50:45.797850431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 14 23:50:45.801443 containerd[1955]: time="2025-05-14T23:50:45.801208127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 14 23:50:45.813237 containerd[1955]: time="2025-05-14T23:50:45.812525831Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 14 23:50:45.813237 containerd[1955]: time="2025-05-14T23:50:45.812629487Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 14 23:50:45.813237 containerd[1955]: time="2025-05-14T23:50:45.812666723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 14 23:50:45.813237 containerd[1955]: time="2025-05-14T23:50:45.812823203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 14 23:50:45.891364 systemd[1]: Started cri-containerd-17464fc40aa71c4c7e13eb910f37dba80f3392a61170dce3d17a4c2405880f22.scope - libcontainer container 17464fc40aa71c4c7e13eb910f37dba80f3392a61170dce3d17a4c2405880f22.
May 14 23:50:45.897483 systemd[1]: Started cri-containerd-2373195e56879f86a7be35fa3ed7b1d1168e7d5dbae32322e6344a17d7834906.scope - libcontainer container 2373195e56879f86a7be35fa3ed7b1d1168e7d5dbae32322e6344a17d7834906.
May 14 23:50:46.032937 containerd[1955]: time="2025-05-14T23:50:46.032887508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-dsjq4,Uid:7a9484dc-7893-489b-be6b-a888b4e25bfd,Namespace:kube-system,Attempt:0,} returns sandbox id \"17464fc40aa71c4c7e13eb910f37dba80f3392a61170dce3d17a4c2405880f22\""
May 14 23:50:46.044708 containerd[1955]: time="2025-05-14T23:50:46.044457308Z" level=info msg="CreateContainer within sandbox \"17464fc40aa71c4c7e13eb910f37dba80f3392a61170dce3d17a4c2405880f22\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 14 23:50:46.054647 containerd[1955]: time="2025-05-14T23:50:46.054492896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6bcg9,Uid:0778d2bb-471a-4ad3-a0fc-2bff2e6769ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"2373195e56879f86a7be35fa3ed7b1d1168e7d5dbae32322e6344a17d7834906\""
May 14 23:50:46.066591 containerd[1955]: time="2025-05-14T23:50:46.065722376Z" level=info msg="CreateContainer within sandbox \"2373195e56879f86a7be35fa3ed7b1d1168e7d5dbae32322e6344a17d7834906\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 14 23:50:46.103791 containerd[1955]: time="2025-05-14T23:50:46.100941164Z" level=info msg="CreateContainer within sandbox \"17464fc40aa71c4c7e13eb910f37dba80f3392a61170dce3d17a4c2405880f22\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b8b6427f77d585ed093cec0b8f83309fe786dd8487583fc67ff33c9f590574d5\""
May 14 23:50:46.103791 containerd[1955]: time="2025-05-14T23:50:46.102942404Z" level=info msg="StartContainer for \"b8b6427f77d585ed093cec0b8f83309fe786dd8487583fc67ff33c9f590574d5\""
May 14 23:50:46.124908 containerd[1955]: time="2025-05-14T23:50:46.124840160Z" level=info msg="CreateContainer within sandbox \"2373195e56879f86a7be35fa3ed7b1d1168e7d5dbae32322e6344a17d7834906\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"785fa16bc493a01495d8631b9f43c2a9836eaf287c99d2775e96c4ecbfe32b01\""
May 14 23:50:46.127856 containerd[1955]: time="2025-05-14T23:50:46.127678760Z" level=info msg="StartContainer for \"785fa16bc493a01495d8631b9f43c2a9836eaf287c99d2775e96c4ecbfe32b01\""
May 14 23:50:46.202793 systemd[1]: Started cri-containerd-b8b6427f77d585ed093cec0b8f83309fe786dd8487583fc67ff33c9f590574d5.scope - libcontainer container b8b6427f77d585ed093cec0b8f83309fe786dd8487583fc67ff33c9f590574d5.
May 14 23:50:46.237395 systemd[1]: Started cri-containerd-785fa16bc493a01495d8631b9f43c2a9836eaf287c99d2775e96c4ecbfe32b01.scope - libcontainer container 785fa16bc493a01495d8631b9f43c2a9836eaf287c99d2775e96c4ecbfe32b01.
May 14 23:50:46.294043 containerd[1955]: time="2025-05-14T23:50:46.293867865Z" level=info msg="StartContainer for \"b8b6427f77d585ed093cec0b8f83309fe786dd8487583fc67ff33c9f590574d5\" returns successfully"
May 14 23:50:46.315555 containerd[1955]: time="2025-05-14T23:50:46.314636121Z" level=info msg="StartContainer for \"785fa16bc493a01495d8631b9f43c2a9836eaf287c99d2775e96c4ecbfe32b01\" returns successfully"
May 14 23:50:46.541136 kubelet[3277]: I0514 23:50:46.539913 3277 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-6bcg9" podStartSLOduration=31.539888986 podStartE2EDuration="31.539888986s" podCreationTimestamp="2025-05-14 23:50:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:50:46.515861698 +0000 UTC m=+43.661033102" watchObservedRunningTime="2025-05-14 23:50:46.539888986 +0000 UTC m=+43.685060378"
May 14 23:50:50.808644 systemd[1]: Started sshd@7-172.31.28.25:22-139.178.89.65:50532.service - OpenSSH per-connection server daemon (139.178.89.65:50532).
May 14 23:50:51.003825 sshd[4840]: Accepted publickey for core from 139.178.89.65 port 50532 ssh2: RSA SHA256:P5lx8LuVgYRnVINBokzXFUV2F/1CVpmkiH+0ahpdjwk
May 14 23:50:51.006584 sshd-session[4840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:50:51.015185 systemd-logind[1930]: New session 8 of user core.
May 14 23:50:51.022423 systemd[1]: Started session-8.scope - Session 8 of User core.
May 14 23:50:51.285587 sshd[4842]: Connection closed by 139.178.89.65 port 50532
May 14 23:50:51.286474 sshd-session[4840]: pam_unix(sshd:session): session closed for user core
May 14 23:50:51.293080 systemd[1]: sshd@7-172.31.28.25:22-139.178.89.65:50532.service: Deactivated successfully.
May 14 23:50:51.297159 systemd[1]: session-8.scope: Deactivated successfully.
May 14 23:50:51.300393 systemd-logind[1930]: Session 8 logged out. Waiting for processes to exit.
May 14 23:50:51.302532 systemd-logind[1930]: Removed session 8.
May 14 23:50:56.334234 systemd[1]: Started sshd@8-172.31.28.25:22-139.178.89.65:50540.service - OpenSSH per-connection server daemon (139.178.89.65:50540).
May 14 23:50:56.509312 sshd[4857]: Accepted publickey for core from 139.178.89.65 port 50540 ssh2: RSA SHA256:P5lx8LuVgYRnVINBokzXFUV2F/1CVpmkiH+0ahpdjwk
May 14 23:50:56.511773 sshd-session[4857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:50:56.520031 systemd-logind[1930]: New session 9 of user core.
May 14 23:50:56.527370 systemd[1]: Started session-9.scope - Session 9 of User core.
May 14 23:50:56.766719 sshd[4859]: Connection closed by 139.178.89.65 port 50540
May 14 23:50:56.768828 sshd-session[4857]: pam_unix(sshd:session): session closed for user core
May 14 23:50:56.775392 systemd[1]: sshd@8-172.31.28.25:22-139.178.89.65:50540.service: Deactivated successfully.
May 14 23:50:56.780393 systemd[1]: session-9.scope: Deactivated successfully.
May 14 23:50:56.782120 systemd-logind[1930]: Session 9 logged out. Waiting for processes to exit.
May 14 23:50:56.784745 systemd-logind[1930]: Removed session 9.
May 14 23:51:01.812621 systemd[1]: Started sshd@9-172.31.28.25:22-139.178.89.65:40876.service - OpenSSH per-connection server daemon (139.178.89.65:40876).
May 14 23:51:01.990149 sshd[4873]: Accepted publickey for core from 139.178.89.65 port 40876 ssh2: RSA SHA256:P5lx8LuVgYRnVINBokzXFUV2F/1CVpmkiH+0ahpdjwk
May 14 23:51:01.992739 sshd-session[4873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:51:02.001679 systemd-logind[1930]: New session 10 of user core.
May 14 23:51:02.007400 systemd[1]: Started session-10.scope - Session 10 of User core.
May 14 23:51:02.241484 sshd[4875]: Connection closed by 139.178.89.65 port 40876
May 14 23:51:02.243065 sshd-session[4873]: pam_unix(sshd:session): session closed for user core
May 14 23:51:02.249369 systemd[1]: sshd@9-172.31.28.25:22-139.178.89.65:40876.service: Deactivated successfully.
May 14 23:51:02.253420 systemd[1]: session-10.scope: Deactivated successfully.
May 14 23:51:02.254866 systemd-logind[1930]: Session 10 logged out. Waiting for processes to exit.
May 14 23:51:02.257202 systemd-logind[1930]: Removed session 10.
May 14 23:51:07.285619 systemd[1]: Started sshd@10-172.31.28.25:22-139.178.89.65:47900.service - OpenSSH per-connection server daemon (139.178.89.65:47900).
May 14 23:51:07.480308 sshd[4890]: Accepted publickey for core from 139.178.89.65 port 47900 ssh2: RSA SHA256:P5lx8LuVgYRnVINBokzXFUV2F/1CVpmkiH+0ahpdjwk
May 14 23:51:07.482735 sshd-session[4890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:51:07.491528 systemd-logind[1930]: New session 11 of user core.
May 14 23:51:07.500597 systemd[1]: Started session-11.scope - Session 11 of User core.
May 14 23:51:07.757205 sshd[4892]: Connection closed by 139.178.89.65 port 47900
May 14 23:51:07.758955 sshd-session[4890]: pam_unix(sshd:session): session closed for user core
May 14 23:51:07.768587 systemd[1]: sshd@10-172.31.28.25:22-139.178.89.65:47900.service: Deactivated successfully.
May 14 23:51:07.774625 systemd[1]: session-11.scope: Deactivated successfully.
May 14 23:51:07.776694 systemd-logind[1930]: Session 11 logged out. Waiting for processes to exit.
May 14 23:51:07.778508 systemd-logind[1930]: Removed session 11.
May 14 23:51:07.803707 systemd[1]: Started sshd@11-172.31.28.25:22-139.178.89.65:47916.service - OpenSSH per-connection server daemon (139.178.89.65:47916).
May 14 23:51:07.998054 sshd[4905]: Accepted publickey for core from 139.178.89.65 port 47916 ssh2: RSA SHA256:P5lx8LuVgYRnVINBokzXFUV2F/1CVpmkiH+0ahpdjwk
May 14 23:51:08.000694 sshd-session[4905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:51:08.009980 systemd-logind[1930]: New session 12 of user core.
May 14 23:51:08.016358 systemd[1]: Started session-12.scope - Session 12 of User core.
May 14 23:51:08.340130 sshd[4907]: Connection closed by 139.178.89.65 port 47916
May 14 23:51:08.342019 sshd-session[4905]: pam_unix(sshd:session): session closed for user core
May 14 23:51:08.352038 systemd[1]: sshd@11-172.31.28.25:22-139.178.89.65:47916.service: Deactivated successfully.
May 14 23:51:08.359853 systemd[1]: session-12.scope: Deactivated successfully.
May 14 23:51:08.364480 systemd-logind[1930]: Session 12 logged out. Waiting for processes to exit.
May 14 23:51:08.392798 systemd[1]: Started sshd@12-172.31.28.25:22-139.178.89.65:47922.service - OpenSSH per-connection server daemon (139.178.89.65:47922).
May 14 23:51:08.397041 systemd-logind[1930]: Removed session 12.
May 14 23:51:08.583287 sshd[4916]: Accepted publickey for core from 139.178.89.65 port 47922 ssh2: RSA SHA256:P5lx8LuVgYRnVINBokzXFUV2F/1CVpmkiH+0ahpdjwk
May 14 23:51:08.585793 sshd-session[4916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:51:08.594317 systemd-logind[1930]: New session 13 of user core.
May 14 23:51:08.605473 systemd[1]: Started session-13.scope - Session 13 of User core.
May 14 23:51:08.860588 sshd[4919]: Connection closed by 139.178.89.65 port 47922
May 14 23:51:08.862211 sshd-session[4916]: pam_unix(sshd:session): session closed for user core
May 14 23:51:08.868616 systemd[1]: sshd@12-172.31.28.25:22-139.178.89.65:47922.service: Deactivated successfully.
May 14 23:51:08.872898 systemd[1]: session-13.scope: Deactivated successfully.
May 14 23:51:08.874714 systemd-logind[1930]: Session 13 logged out. Waiting for processes to exit.
May 14 23:51:08.876622 systemd-logind[1930]: Removed session 13.
May 14 23:51:13.904624 systemd[1]: Started sshd@13-172.31.28.25:22-139.178.89.65:47934.service - OpenSSH per-connection server daemon (139.178.89.65:47934).
May 14 23:51:14.086038 sshd[4931]: Accepted publickey for core from 139.178.89.65 port 47934 ssh2: RSA SHA256:P5lx8LuVgYRnVINBokzXFUV2F/1CVpmkiH+0ahpdjwk
May 14 23:51:14.088748 sshd-session[4931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:51:14.097436 systemd-logind[1930]: New session 14 of user core.
May 14 23:51:14.104390 systemd[1]: Started session-14.scope - Session 14 of User core.
May 14 23:51:14.351923 sshd[4933]: Connection closed by 139.178.89.65 port 47934
May 14 23:51:14.350298 sshd-session[4931]: pam_unix(sshd:session): session closed for user core
May 14 23:51:14.358682 systemd[1]: sshd@13-172.31.28.25:22-139.178.89.65:47934.service: Deactivated successfully.
May 14 23:51:14.363984 systemd[1]: session-14.scope: Deactivated successfully.
May 14 23:51:14.366015 systemd-logind[1930]: Session 14 logged out. Waiting for processes to exit.
May 14 23:51:14.368885 systemd-logind[1930]: Removed session 14.
May 14 23:51:19.395648 systemd[1]: Started sshd@14-172.31.28.25:22-139.178.89.65:35976.service - OpenSSH per-connection server daemon (139.178.89.65:35976).
May 14 23:51:19.586890 sshd[4949]: Accepted publickey for core from 139.178.89.65 port 35976 ssh2: RSA SHA256:P5lx8LuVgYRnVINBokzXFUV2F/1CVpmkiH+0ahpdjwk
May 14 23:51:19.589534 sshd-session[4949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:51:19.598267 systemd-logind[1930]: New session 15 of user core.
May 14 23:51:19.606345 systemd[1]: Started session-15.scope - Session 15 of User core.
May 14 23:51:19.857875 sshd[4951]: Connection closed by 139.178.89.65 port 35976
May 14 23:51:19.858763 sshd-session[4949]: pam_unix(sshd:session): session closed for user core
May 14 23:51:19.864850 systemd[1]: sshd@14-172.31.28.25:22-139.178.89.65:35976.service: Deactivated successfully.
May 14 23:51:19.869142 systemd[1]: session-15.scope: Deactivated successfully.
May 14 23:51:19.871305 systemd-logind[1930]: Session 15 logged out. Waiting for processes to exit.
May 14 23:51:19.874341 systemd-logind[1930]: Removed session 15.
May 14 23:51:24.904618 systemd[1]: Started sshd@15-172.31.28.25:22-139.178.89.65:35988.service - OpenSSH per-connection server daemon (139.178.89.65:35988).
May 14 23:51:25.090222 sshd[4964]: Accepted publickey for core from 139.178.89.65 port 35988 ssh2: RSA SHA256:P5lx8LuVgYRnVINBokzXFUV2F/1CVpmkiH+0ahpdjwk
May 14 23:51:25.092696 sshd-session[4964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:51:25.101225 systemd-logind[1930]: New session 16 of user core.
May 14 23:51:25.106400 systemd[1]: Started session-16.scope - Session 16 of User core.
May 14 23:51:25.349889 sshd[4967]: Connection closed by 139.178.89.65 port 35988
May 14 23:51:25.350848 sshd-session[4964]: pam_unix(sshd:session): session closed for user core
May 14 23:51:25.356517 systemd-logind[1930]: Session 16 logged out. Waiting for processes to exit.
May 14 23:51:25.359142 systemd[1]: sshd@15-172.31.28.25:22-139.178.89.65:35988.service: Deactivated successfully.
May 14 23:51:25.363078 systemd[1]: session-16.scope: Deactivated successfully.
May 14 23:51:25.366269 systemd-logind[1930]: Removed session 16.
May 14 23:51:25.389640 systemd[1]: Started sshd@16-172.31.28.25:22-139.178.89.65:35996.service - OpenSSH per-connection server daemon (139.178.89.65:35996).
May 14 23:51:25.577529 sshd[4979]: Accepted publickey for core from 139.178.89.65 port 35996 ssh2: RSA SHA256:P5lx8LuVgYRnVINBokzXFUV2F/1CVpmkiH+0ahpdjwk
May 14 23:51:25.580001 sshd-session[4979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:51:25.589573 systemd-logind[1930]: New session 17 of user core.
May 14 23:51:25.597341 systemd[1]: Started session-17.scope - Session 17 of User core.
May 14 23:51:25.898669 sshd[4981]: Connection closed by 139.178.89.65 port 35996
May 14 23:51:25.899269 sshd-session[4979]: pam_unix(sshd:session): session closed for user core
May 14 23:51:25.905853 systemd[1]: sshd@16-172.31.28.25:22-139.178.89.65:35996.service: Deactivated successfully.
May 14 23:51:25.912024 systemd[1]: session-17.scope: Deactivated successfully.
May 14 23:51:25.914714 systemd-logind[1930]: Session 17 logged out. Waiting for processes to exit.
May 14 23:51:25.917240 systemd-logind[1930]: Removed session 17.
May 14 23:51:25.939693 systemd[1]: Started sshd@17-172.31.28.25:22-139.178.89.65:36002.service - OpenSSH per-connection server daemon (139.178.89.65:36002).
May 14 23:51:26.127004 sshd[4990]: Accepted publickey for core from 139.178.89.65 port 36002 ssh2: RSA SHA256:P5lx8LuVgYRnVINBokzXFUV2F/1CVpmkiH+0ahpdjwk
May 14 23:51:26.129527 sshd-session[4990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:51:26.138660 systemd-logind[1930]: New session 18 of user core.
May 14 23:51:26.143380 systemd[1]: Started session-18.scope - Session 18 of User core.
May 14 23:51:28.789935 sshd[4992]: Connection closed by 139.178.89.65 port 36002
May 14 23:51:28.790843 sshd-session[4990]: pam_unix(sshd:session): session closed for user core
May 14 23:51:28.802897 systemd[1]: sshd@17-172.31.28.25:22-139.178.89.65:36002.service: Deactivated successfully.
May 14 23:51:28.810913 systemd[1]: session-18.scope: Deactivated successfully.
May 14 23:51:28.814508 systemd-logind[1930]: Session 18 logged out. Waiting for processes to exit.
May 14 23:51:28.840609 systemd[1]: Started sshd@18-172.31.28.25:22-139.178.89.65:39744.service - OpenSSH per-connection server daemon (139.178.89.65:39744).
May 14 23:51:28.843675 systemd-logind[1930]: Removed session 18.
May 14 23:51:29.030960 sshd[5008]: Accepted publickey for core from 139.178.89.65 port 39744 ssh2: RSA SHA256:P5lx8LuVgYRnVINBokzXFUV2F/1CVpmkiH+0ahpdjwk
May 14 23:51:29.033813 sshd-session[5008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:51:29.042823 systemd-logind[1930]: New session 19 of user core.
May 14 23:51:29.050425 systemd[1]: Started session-19.scope - Session 19 of User core.
May 14 23:51:29.533757 sshd[5011]: Connection closed by 139.178.89.65 port 39744
May 14 23:51:29.534559 sshd-session[5008]: pam_unix(sshd:session): session closed for user core
May 14 23:51:29.541993 systemd[1]: sshd@18-172.31.28.25:22-139.178.89.65:39744.service: Deactivated successfully.
May 14 23:51:29.546881 systemd[1]: session-19.scope: Deactivated successfully.
May 14 23:51:29.548633 systemd-logind[1930]: Session 19 logged out. Waiting for processes to exit.
May 14 23:51:29.550299 systemd-logind[1930]: Removed session 19.
May 14 23:51:29.572619 systemd[1]: Started sshd@19-172.31.28.25:22-139.178.89.65:39748.service - OpenSSH per-connection server daemon (139.178.89.65:39748).
May 14 23:51:29.756279 sshd[5021]: Accepted publickey for core from 139.178.89.65 port 39748 ssh2: RSA SHA256:P5lx8LuVgYRnVINBokzXFUV2F/1CVpmkiH+0ahpdjwk
May 14 23:51:29.760409 sshd-session[5021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:51:29.773143 systemd-logind[1930]: New session 20 of user core.
May 14 23:51:29.778899 systemd[1]: Started session-20.scope - Session 20 of User core.
May 14 23:51:30.019597 sshd[5023]: Connection closed by 139.178.89.65 port 39748
May 14 23:51:30.019399 sshd-session[5021]: pam_unix(sshd:session): session closed for user core
May 14 23:51:30.027935 systemd[1]: sshd@19-172.31.28.25:22-139.178.89.65:39748.service: Deactivated successfully.
May 14 23:51:30.033318 systemd[1]: session-20.scope: Deactivated successfully.
May 14 23:51:30.035261 systemd-logind[1930]: Session 20 logged out. Waiting for processes to exit.
May 14 23:51:30.037925 systemd-logind[1930]: Removed session 20.
May 14 23:51:35.066614 systemd[1]: Started sshd@20-172.31.28.25:22-139.178.89.65:39758.service - OpenSSH per-connection server daemon (139.178.89.65:39758).
May 14 23:51:35.263215 sshd[5035]: Accepted publickey for core from 139.178.89.65 port 39758 ssh2: RSA SHA256:P5lx8LuVgYRnVINBokzXFUV2F/1CVpmkiH+0ahpdjwk
May 14 23:51:35.265759 sshd-session[5035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:51:35.275304 systemd-logind[1930]: New session 21 of user core.
May 14 23:51:35.283368 systemd[1]: Started session-21.scope - Session 21 of User core.
May 14 23:51:35.526262 sshd[5037]: Connection closed by 139.178.89.65 port 39758
May 14 23:51:35.527423 sshd-session[5035]: pam_unix(sshd:session): session closed for user core
May 14 23:51:35.534622 systemd[1]: sshd@20-172.31.28.25:22-139.178.89.65:39758.service: Deactivated successfully.
May 14 23:51:35.539889 systemd[1]: session-21.scope: Deactivated successfully.
May 14 23:51:35.541574 systemd-logind[1930]: Session 21 logged out. Waiting for processes to exit.
May 14 23:51:35.543250 systemd-logind[1930]: Removed session 21.
May 14 23:51:40.575641 systemd[1]: Started sshd@21-172.31.28.25:22-139.178.89.65:36374.service - OpenSSH per-connection server daemon (139.178.89.65:36374).
May 14 23:51:40.763861 sshd[5051]: Accepted publickey for core from 139.178.89.65 port 36374 ssh2: RSA SHA256:P5lx8LuVgYRnVINBokzXFUV2F/1CVpmkiH+0ahpdjwk
May 14 23:51:40.766495 sshd-session[5051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:51:40.776371 systemd-logind[1930]: New session 22 of user core.
May 14 23:51:40.783371 systemd[1]: Started session-22.scope - Session 22 of User core.
May 14 23:51:41.021626 sshd[5053]: Connection closed by 139.178.89.65 port 36374
May 14 23:51:41.022016 sshd-session[5051]: pam_unix(sshd:session): session closed for user core
May 14 23:51:41.029369 systemd[1]: sshd@21-172.31.28.25:22-139.178.89.65:36374.service: Deactivated successfully.
May 14 23:51:41.033665 systemd[1]: session-22.scope: Deactivated successfully.
May 14 23:51:41.036603 systemd-logind[1930]: Session 22 logged out. Waiting for processes to exit.
May 14 23:51:41.038950 systemd-logind[1930]: Removed session 22.
May 14 23:51:46.065617 systemd[1]: Started sshd@22-172.31.28.25:22-139.178.89.65:36386.service - OpenSSH per-connection server daemon (139.178.89.65:36386).
May 14 23:51:46.245580 sshd[5064]: Accepted publickey for core from 139.178.89.65 port 36386 ssh2: RSA SHA256:P5lx8LuVgYRnVINBokzXFUV2F/1CVpmkiH+0ahpdjwk
May 14 23:51:46.248033 sshd-session[5064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:51:46.256504 systemd-logind[1930]: New session 23 of user core.
May 14 23:51:46.268405 systemd[1]: Started session-23.scope - Session 23 of User core.
May 14 23:51:46.506522 sshd[5068]: Connection closed by 139.178.89.65 port 36386
May 14 23:51:46.506318 sshd-session[5064]: pam_unix(sshd:session): session closed for user core
May 14 23:51:46.511884 systemd[1]: sshd@22-172.31.28.25:22-139.178.89.65:36386.service: Deactivated successfully.
May 14 23:51:46.514973 systemd[1]: session-23.scope: Deactivated successfully.
May 14 23:51:46.518847 systemd-logind[1930]: Session 23 logged out. Waiting for processes to exit.
May 14 23:51:46.521670 systemd-logind[1930]: Removed session 23.
May 14 23:51:51.552896 systemd[1]: Started sshd@23-172.31.28.25:22-139.178.89.65:34138.service - OpenSSH per-connection server daemon (139.178.89.65:34138).
May 14 23:51:51.744652 sshd[5080]: Accepted publickey for core from 139.178.89.65 port 34138 ssh2: RSA SHA256:P5lx8LuVgYRnVINBokzXFUV2F/1CVpmkiH+0ahpdjwk
May 14 23:51:51.747172 sshd-session[5080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:51:51.755704 systemd-logind[1930]: New session 24 of user core.
May 14 23:51:51.761397 systemd[1]: Started session-24.scope - Session 24 of User core.
May 14 23:51:52.016345 sshd[5082]: Connection closed by 139.178.89.65 port 34138
May 14 23:51:52.017249 sshd-session[5080]: pam_unix(sshd:session): session closed for user core
May 14 23:51:52.022912 systemd-logind[1930]: Session 24 logged out. Waiting for processes to exit.
May 14 23:51:52.023910 systemd[1]: sshd@23-172.31.28.25:22-139.178.89.65:34138.service: Deactivated successfully.
May 14 23:51:52.027971 systemd[1]: session-24.scope: Deactivated successfully.
May 14 23:51:52.032564 systemd-logind[1930]: Removed session 24.
May 14 23:51:52.059594 systemd[1]: Started sshd@24-172.31.28.25:22-139.178.89.65:34148.service - OpenSSH per-connection server daemon (139.178.89.65:34148).
May 14 23:51:52.251419 sshd[5094]: Accepted publickey for core from 139.178.89.65 port 34148 ssh2: RSA SHA256:P5lx8LuVgYRnVINBokzXFUV2F/1CVpmkiH+0ahpdjwk
May 14 23:51:52.253851 sshd-session[5094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:51:52.264475 systemd-logind[1930]: New session 25 of user core.
May 14 23:51:52.271357 systemd[1]: Started session-25.scope - Session 25 of User core.
May 14 23:51:54.586130 kubelet[3277]: I0514 23:51:54.584568 3277 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-dsjq4" podStartSLOduration=99.584542564 podStartE2EDuration="1m39.584542564s" podCreationTimestamp="2025-05-14 23:50:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:50:46.564858671 +0000 UTC m=+43.710030087" watchObservedRunningTime="2025-05-14 23:51:54.584542564 +0000 UTC m=+111.729713956"
May 14 23:51:54.643496 systemd[1]: run-containerd-runc-k8s.io-ba45e5bd587756c792b866004160609c03b1b2b737757614948644c10a1a9c9a-runc.9wpt6a.mount: Deactivated successfully.
May 14 23:51:54.652210 containerd[1955]: time="2025-05-14T23:51:54.652062017Z" level=info msg="StopContainer for \"8ced7a2cefb09c8509d7e80de7ad3daff62b6368c6aa6ad16ad571d2a34d9e21\" with timeout 30 (s)"
May 14 23:51:54.656541 containerd[1955]: time="2025-05-14T23:51:54.655074197Z" level=info msg="Stop container \"8ced7a2cefb09c8509d7e80de7ad3daff62b6368c6aa6ad16ad571d2a34d9e21\" with signal terminated"
May 14 23:51:54.678195 containerd[1955]: time="2025-05-14T23:51:54.678054221Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 14 23:51:54.687282 systemd[1]: cri-containerd-8ced7a2cefb09c8509d7e80de7ad3daff62b6368c6aa6ad16ad571d2a34d9e21.scope: Deactivated successfully.
May 14 23:51:54.695036 containerd[1955]: time="2025-05-14T23:51:54.694981865Z" level=info msg="StopContainer for \"ba45e5bd587756c792b866004160609c03b1b2b737757614948644c10a1a9c9a\" with timeout 2 (s)"
May 14 23:51:54.695711 containerd[1955]: time="2025-05-14T23:51:54.695670773Z" level=info msg="Stop container \"ba45e5bd587756c792b866004160609c03b1b2b737757614948644c10a1a9c9a\" with signal terminated"
May 14 23:51:54.715930 systemd-networkd[1773]: lxc_health: Link DOWN
May 14 23:51:54.715946 systemd-networkd[1773]: lxc_health: Lost carrier
May 14 23:51:54.749519 systemd[1]: cri-containerd-ba45e5bd587756c792b866004160609c03b1b2b737757614948644c10a1a9c9a.scope: Deactivated successfully.
May 14 23:51:54.750612 systemd[1]: cri-containerd-ba45e5bd587756c792b866004160609c03b1b2b737757614948644c10a1a9c9a.scope: Consumed 14.127s CPU time, 124.1M memory peak, 128K read from disk, 12.9M written to disk.
May 14 23:51:54.771480 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ced7a2cefb09c8509d7e80de7ad3daff62b6368c6aa6ad16ad571d2a34d9e21-rootfs.mount: Deactivated successfully.
May 14 23:51:54.794347 containerd[1955]: time="2025-05-14T23:51:54.793939169Z" level=info msg="shim disconnected" id=8ced7a2cefb09c8509d7e80de7ad3daff62b6368c6aa6ad16ad571d2a34d9e21 namespace=k8s.io
May 14 23:51:54.794347 containerd[1955]: time="2025-05-14T23:51:54.794184881Z" level=warning msg="cleaning up after shim disconnected" id=8ced7a2cefb09c8509d7e80de7ad3daff62b6368c6aa6ad16ad571d2a34d9e21 namespace=k8s.io
May 14 23:51:54.794347 containerd[1955]: time="2025-05-14T23:51:54.794210537Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 23:51:54.806848 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba45e5bd587756c792b866004160609c03b1b2b737757614948644c10a1a9c9a-rootfs.mount: Deactivated successfully.
May 14 23:51:54.820405 containerd[1955]: time="2025-05-14T23:51:54.820307190Z" level=info msg="shim disconnected" id=ba45e5bd587756c792b866004160609c03b1b2b737757614948644c10a1a9c9a namespace=k8s.io
May 14 23:51:54.820405 containerd[1955]: time="2025-05-14T23:51:54.820401126Z" level=warning msg="cleaning up after shim disconnected" id=ba45e5bd587756c792b866004160609c03b1b2b737757614948644c10a1a9c9a namespace=k8s.io
May 14 23:51:54.821005 containerd[1955]: time="2025-05-14T23:51:54.820422882Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 23:51:54.838603 containerd[1955]: time="2025-05-14T23:51:54.838324638Z" level=warning msg="cleanup warnings time=\"2025-05-14T23:51:54Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
May 14 23:51:54.845042 containerd[1955]: time="2025-05-14T23:51:54.844944630Z" level=info msg="StopContainer for \"8ced7a2cefb09c8509d7e80de7ad3daff62b6368c6aa6ad16ad571d2a34d9e21\" returns successfully"
May 14 23:51:54.846631 containerd[1955]: time="2025-05-14T23:51:54.846580278Z" level=info msg="StopPodSandbox for \"3ecbd22f9cd6b58eb4c5c1efd06c03a335e080c67931befd0c330b9702ac52ae\""
May 14 23:51:54.847268 containerd[1955]: time="2025-05-14T23:51:54.847196370Z" level=info msg="Container to stop \"8ced7a2cefb09c8509d7e80de7ad3daff62b6368c6aa6ad16ad571d2a34d9e21\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 14 23:51:54.852723 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3ecbd22f9cd6b58eb4c5c1efd06c03a335e080c67931befd0c330b9702ac52ae-shm.mount: Deactivated successfully.
May 14 23:51:54.869885 containerd[1955]: time="2025-05-14T23:51:54.869828742Z" level=info msg="StopContainer for \"ba45e5bd587756c792b866004160609c03b1b2b737757614948644c10a1a9c9a\" returns successfully"
May 14 23:51:54.871158 containerd[1955]: time="2025-05-14T23:51:54.870696198Z" level=info msg="StopPodSandbox for \"bf1816cdcb22df23ceb94352157953c263c09c210d938412df92aea7da823513\""
May 14 23:51:54.871158 containerd[1955]: time="2025-05-14T23:51:54.870754890Z" level=info msg="Container to stop \"a8ecd5b46dc3de840ebcc8d97f7c5efab5797d43a364266bd5692b8e7eaf1d08\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 14 23:51:54.871158 containerd[1955]: time="2025-05-14T23:51:54.870779034Z" level=info msg="Container to stop \"f3d5277cf92ec1dca3c908a50b6bbcb26d67b6c058dd4663bc09ae6b49ad4341\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 14 23:51:54.871158 containerd[1955]: time="2025-05-14T23:51:54.870800274Z" level=info msg="Container to stop \"7f7a159677d3a903148fd6fd9cba124919cbdd8e8c8d3b73750fb476544f8891\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 14 23:51:54.871158 containerd[1955]: time="2025-05-14T23:51:54.870826602Z" level=info msg="Container to stop \"846a5b314fc68dc8a4ffbf95fbe47b1cf463924f740ccc58206294ae6fd8c51f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 14 23:51:54.871158 containerd[1955]: time="2025-05-14T23:51:54.870846294Z" level=info msg="Container to stop \"ba45e5bd587756c792b866004160609c03b1b2b737757614948644c10a1a9c9a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 14 23:51:54.873950 systemd[1]: cri-containerd-3ecbd22f9cd6b58eb4c5c1efd06c03a335e080c67931befd0c330b9702ac52ae.scope: Deactivated successfully.
May 14 23:51:54.890985 systemd[1]: cri-containerd-bf1816cdcb22df23ceb94352157953c263c09c210d938412df92aea7da823513.scope: Deactivated successfully.
May 14 23:51:54.952854 containerd[1955]: time="2025-05-14T23:51:54.952747050Z" level=info msg="shim disconnected" id=bf1816cdcb22df23ceb94352157953c263c09c210d938412df92aea7da823513 namespace=k8s.io
May 14 23:51:54.954938 containerd[1955]: time="2025-05-14T23:51:54.954625170Z" level=warning msg="cleaning up after shim disconnected" id=bf1816cdcb22df23ceb94352157953c263c09c210d938412df92aea7da823513 namespace=k8s.io
May 14 23:51:54.954938 containerd[1955]: time="2025-05-14T23:51:54.954671334Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 23:51:54.955490 containerd[1955]: time="2025-05-14T23:51:54.953328594Z" level=info msg="shim disconnected" id=3ecbd22f9cd6b58eb4c5c1efd06c03a335e080c67931befd0c330b9702ac52ae namespace=k8s.io
May 14 23:51:54.955490 containerd[1955]: time="2025-05-14T23:51:54.955088214Z" level=warning msg="cleaning up after shim disconnected" id=3ecbd22f9cd6b58eb4c5c1efd06c03a335e080c67931befd0c330b9702ac52ae namespace=k8s.io
May 14 23:51:54.955490 containerd[1955]: time="2025-05-14T23:51:54.955152786Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 23:51:54.982493 containerd[1955]: time="2025-05-14T23:51:54.982435758Z" level=warning msg="cleanup warnings time=\"2025-05-14T23:51:54Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
May 14 23:51:54.984082 containerd[1955]: time="2025-05-14T23:51:54.983880618Z" level=info msg="TearDown network for sandbox \"bf1816cdcb22df23ceb94352157953c263c09c210d938412df92aea7da823513\" successfully"
May 14 23:51:54.984082 containerd[1955]: time="2025-05-14T23:51:54.983928894Z" level=info msg="StopPodSandbox for \"bf1816cdcb22df23ceb94352157953c263c09c210d938412df92aea7da823513\" returns successfully"
May 14 23:51:54.986699 containerd[1955]: time="2025-05-14T23:51:54.986548446Z" level=info msg="TearDown network for sandbox \"3ecbd22f9cd6b58eb4c5c1efd06c03a335e080c67931befd0c330b9702ac52ae\" successfully"
May 14 23:51:54.986699 containerd[1955]: time="2025-05-14T23:51:54.986617818Z" level=info msg="StopPodSandbox for \"3ecbd22f9cd6b58eb4c5c1efd06c03a335e080c67931befd0c330b9702ac52ae\" returns successfully"
May 14 23:51:55.097078 kubelet[3277]: I0514 23:51:55.096917 3277 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0c6745cf-908e-4741-9367-980ed710a49b-host-proc-sys-net\") pod \"0c6745cf-908e-4741-9367-980ed710a49b\" (UID: \"0c6745cf-908e-4741-9367-980ed710a49b\") "
May 14 23:51:55.097078 kubelet[3277]: I0514 23:51:55.096984 3277 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0c6745cf-908e-4741-9367-980ed710a49b-cilium-run\") pod \"0c6745cf-908e-4741-9367-980ed710a49b\" (UID: \"0c6745cf-908e-4741-9367-980ed710a49b\") "
May 14 23:51:55.100142 kubelet[3277]: I0514 23:51:55.097032 3277 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grh4g\" (UniqueName: \"kubernetes.io/projected/ae07a3c6-148f-4676-9a10-4f983071aeb6-kube-api-access-grh4g\") pod \"ae07a3c6-148f-4676-9a10-4f983071aeb6\" (UID: \"ae07a3c6-148f-4676-9a10-4f983071aeb6\") "
May 14 23:51:55.100142 kubelet[3277]: I0514 23:51:55.097968 3277 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c6745cf-908e-4741-9367-980ed710a49b-lib-modules\") pod \"0c6745cf-908e-4741-9367-980ed710a49b\" (UID: \"0c6745cf-908e-4741-9367-980ed710a49b\") "
May 14 23:51:55.100142 kubelet[3277]: I0514 23:51:55.098044 3277 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ae07a3c6-148f-4676-9a10-4f983071aeb6-cilium-config-path\") pod \"ae07a3c6-148f-4676-9a10-4f983071aeb6\" (UID: \"ae07a3c6-148f-4676-9a10-4f983071aeb6\") "
May 14 23:51:55.100142 kubelet[3277]: I0514 23:51:55.098258 3277 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0c6745cf-908e-4741-9367-980ed710a49b-etc-cni-netd\") pod \"0c6745cf-908e-4741-9367-980ed710a49b\" (UID: \"0c6745cf-908e-4741-9367-980ed710a49b\") "
May 14 23:51:55.100142 kubelet[3277]: I0514 23:51:55.098340 3277 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0c6745cf-908e-4741-9367-980ed710a49b-host-proc-sys-kernel\") pod \"0c6745cf-908e-4741-9367-980ed710a49b\" (UID: \"0c6745cf-908e-4741-9367-980ed710a49b\") "
May 14 23:51:55.100142 kubelet[3277]: I0514 23:51:55.098385 3277 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0c6745cf-908e-4741-9367-980ed710a49b-hubble-tls\") pod \"0c6745cf-908e-4741-9367-980ed710a49b\" (UID: \"0c6745cf-908e-4741-9367-980ed710a49b\") "
May 14 23:51:55.100556 kubelet[3277]: I0514 23:51:55.098454 3277 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bz2jr\" (UniqueName: \"kubernetes.io/projected/0c6745cf-908e-4741-9367-980ed710a49b-kube-api-access-bz2jr\") pod \"0c6745cf-908e-4741-9367-980ed710a49b\" (UID: \"0c6745cf-908e-4741-9367-980ed710a49b\") "
May 14 23:51:55.100556 kubelet[3277]: I0514 23:51:55.098525 3277 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0c6745cf-908e-4741-9367-980ed710a49b-clustermesh-secrets\") pod \"0c6745cf-908e-4741-9367-980ed710a49b\" (UID: \"0c6745cf-908e-4741-9367-980ed710a49b\") "
May 14 23:51:55.100556 kubelet[3277]: I0514 23:51:55.098562 3277 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0c6745cf-908e-4741-9367-980ed710a49b-cilium-cgroup\") pod \"0c6745cf-908e-4741-9367-980ed710a49b\" (UID: \"0c6745cf-908e-4741-9367-980ed710a49b\") "
May 14 23:51:55.100556 kubelet[3277]: I0514 23:51:55.098621 3277 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0c6745cf-908e-4741-9367-980ed710a49b-xtables-lock\") pod \"0c6745cf-908e-4741-9367-980ed710a49b\" (UID: \"0c6745cf-908e-4741-9367-980ed710a49b\") "
May 14 23:51:55.100556 kubelet[3277]: I0514 23:51:55.098686 3277 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0c6745cf-908e-4741-9367-980ed710a49b-cilium-config-path\") pod \"0c6745cf-908e-4741-9367-980ed710a49b\" (UID: \"0c6745cf-908e-4741-9367-980ed710a49b\") "
May 14 23:51:55.100556 kubelet[3277]: I0514 23:51:55.098726 3277 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0c6745cf-908e-4741-9367-980ed710a49b-hostproc\") pod \"0c6745cf-908e-4741-9367-980ed710a49b\" (UID: \"0c6745cf-908e-4741-9367-980ed710a49b\") "
May 14 23:51:55.100871 kubelet[3277]: I0514 23:51:55.098787 3277 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0c6745cf-908e-4741-9367-980ed710a49b-cni-path\") pod \"0c6745cf-908e-4741-9367-980ed710a49b\" (UID: \"0c6745cf-908e-4741-9367-980ed710a49b\") "
May 14 23:51:55.100871 kubelet[3277]: I0514 23:51:55.098824 3277 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0c6745cf-908e-4741-9367-980ed710a49b-bpf-maps\") pod \"0c6745cf-908e-4741-9367-980ed710a49b\" (UID: \"0c6745cf-908e-4741-9367-980ed710a49b\") "
May 14 23:51:55.100871 kubelet[3277]: I0514 23:51:55.097423 3277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c6745cf-908e-4741-9367-980ed710a49b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0c6745cf-908e-4741-9367-980ed710a49b" (UID: "0c6745cf-908e-4741-9367-980ed710a49b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 14 23:51:55.100871 kubelet[3277]: I0514 23:51:55.097461 3277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c6745cf-908e-4741-9367-980ed710a49b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0c6745cf-908e-4741-9367-980ed710a49b" (UID: "0c6745cf-908e-4741-9367-980ed710a49b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 14 23:51:55.100871 kubelet[3277]: I0514 23:51:55.098965 3277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c6745cf-908e-4741-9367-980ed710a49b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0c6745cf-908e-4741-9367-980ed710a49b" (UID: "0c6745cf-908e-4741-9367-980ed710a49b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 14 23:51:55.101177 kubelet[3277]: I0514 23:51:55.099146 3277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c6745cf-908e-4741-9367-980ed710a49b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0c6745cf-908e-4741-9367-980ed710a49b" (UID: "0c6745cf-908e-4741-9367-980ed710a49b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 14 23:51:55.107713 kubelet[3277]: I0514 23:51:55.105300 3277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae07a3c6-148f-4676-9a10-4f983071aeb6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ae07a3c6-148f-4676-9a10-4f983071aeb6" (UID: "ae07a3c6-148f-4676-9a10-4f983071aeb6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 14 23:51:55.107713 kubelet[3277]: I0514 23:51:55.105427 3277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c6745cf-908e-4741-9367-980ed710a49b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0c6745cf-908e-4741-9367-980ed710a49b" (UID: "0c6745cf-908e-4741-9367-980ed710a49b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 14 23:51:55.107713 kubelet[3277]: I0514 23:51:55.105468 3277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c6745cf-908e-4741-9367-980ed710a49b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0c6745cf-908e-4741-9367-980ed710a49b" (UID: "0c6745cf-908e-4741-9367-980ed710a49b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 14 23:51:55.110132 kubelet[3277]: I0514 23:51:55.108253 3277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae07a3c6-148f-4676-9a10-4f983071aeb6-kube-api-access-grh4g" (OuterVolumeSpecName: "kube-api-access-grh4g") pod "ae07a3c6-148f-4676-9a10-4f983071aeb6" (UID: "ae07a3c6-148f-4676-9a10-4f983071aeb6"). InnerVolumeSpecName "kube-api-access-grh4g". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 14 23:51:55.110461 kubelet[3277]: I0514 23:51:55.110416 3277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c6745cf-908e-4741-9367-980ed710a49b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0c6745cf-908e-4741-9367-980ed710a49b" (UID: "0c6745cf-908e-4741-9367-980ed710a49b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 14 23:51:55.116216 kubelet[3277]: I0514 23:51:55.113742 3277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c6745cf-908e-4741-9367-980ed710a49b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0c6745cf-908e-4741-9367-980ed710a49b" (UID: "0c6745cf-908e-4741-9367-980ed710a49b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 14 23:51:55.118970 kubelet[3277]: I0514 23:51:55.118917 3277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c6745cf-908e-4741-9367-980ed710a49b-kube-api-access-bz2jr" (OuterVolumeSpecName: "kube-api-access-bz2jr") pod "0c6745cf-908e-4741-9367-980ed710a49b" (UID: "0c6745cf-908e-4741-9367-980ed710a49b"). InnerVolumeSpecName "kube-api-access-bz2jr". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 14 23:51:55.123026 kubelet[3277]: I0514 23:51:55.122948 3277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c6745cf-908e-4741-9367-980ed710a49b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0c6745cf-908e-4741-9367-980ed710a49b" (UID: "0c6745cf-908e-4741-9367-980ed710a49b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 14 23:51:55.123248 kubelet[3277]: I0514 23:51:55.123049 3277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c6745cf-908e-4741-9367-980ed710a49b-hostproc" (OuterVolumeSpecName: "hostproc") pod "0c6745cf-908e-4741-9367-980ed710a49b" (UID: "0c6745cf-908e-4741-9367-980ed710a49b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 14 23:51:55.123248 kubelet[3277]: I0514 23:51:55.123137 3277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c6745cf-908e-4741-9367-980ed710a49b-cni-path" (OuterVolumeSpecName: "cni-path") pod "0c6745cf-908e-4741-9367-980ed710a49b" (UID: "0c6745cf-908e-4741-9367-980ed710a49b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 14 23:51:55.123248 kubelet[3277]: I0514 23:51:55.123181 3277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c6745cf-908e-4741-9367-980ed710a49b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0c6745cf-908e-4741-9367-980ed710a49b" (UID: "0c6745cf-908e-4741-9367-980ed710a49b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 14 23:51:55.126720 kubelet[3277]: I0514 23:51:55.126646 3277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c6745cf-908e-4741-9367-980ed710a49b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0c6745cf-908e-4741-9367-980ed710a49b" (UID: "0c6745cf-908e-4741-9367-980ed710a49b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 14 23:51:55.177560 systemd[1]: Removed slice kubepods-besteffort-podae07a3c6_148f_4676_9a10_4f983071aeb6.slice - libcontainer container kubepods-besteffort-podae07a3c6_148f_4676_9a10_4f983071aeb6.slice.
May 14 23:51:55.181065 systemd[1]: Removed slice kubepods-burstable-pod0c6745cf_908e_4741_9367_980ed710a49b.slice - libcontainer container kubepods-burstable-pod0c6745cf_908e_4741_9367_980ed710a49b.slice.
May 14 23:51:55.181956 systemd[1]: kubepods-burstable-pod0c6745cf_908e_4741_9367_980ed710a49b.slice: Consumed 14.286s CPU time, 124.5M memory peak, 128K read from disk, 12.9M written to disk.
May 14 23:51:55.199483 kubelet[3277]: I0514 23:51:55.199363 3277 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0c6745cf-908e-4741-9367-980ed710a49b-cni-path\") on node \"ip-172-31-28-25\" DevicePath \"\""
May 14 23:51:55.199483 kubelet[3277]: I0514 23:51:55.199406 3277 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0c6745cf-908e-4741-9367-980ed710a49b-bpf-maps\") on node \"ip-172-31-28-25\" DevicePath \"\""
May 14 23:51:55.199483 kubelet[3277]: I0514 23:51:55.199428 3277 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0c6745cf-908e-4741-9367-980ed710a49b-host-proc-sys-net\") on node \"ip-172-31-28-25\" DevicePath \"\""
May 14 23:51:55.199483 kubelet[3277]: I0514 23:51:55.199452 3277 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0c6745cf-908e-4741-9367-980ed710a49b-cilium-run\") on node \"ip-172-31-28-25\" DevicePath \"\""
May 14 23:51:55.199483 kubelet[3277]: I0514 23:51:55.199474 3277 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-grh4g\" (UniqueName: \"kubernetes.io/projected/ae07a3c6-148f-4676-9a10-4f983071aeb6-kube-api-access-grh4g\") on node \"ip-172-31-28-25\" DevicePath \"\""
May 14 23:51:55.199857 kubelet[3277]: I0514 23:51:55.199497 3277 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ae07a3c6-148f-4676-9a10-4f983071aeb6-cilium-config-path\") on node \"ip-172-31-28-25\" DevicePath \"\""
May 14 23:51:55.199857 kubelet[3277]: I0514 23:51:55.199537 3277 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0c6745cf-908e-4741-9367-980ed710a49b-etc-cni-netd\") on node \"ip-172-31-28-25\" DevicePath \"\""
May 14 23:51:55.199857 kubelet[3277]: I0514 23:51:55.199560 3277 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c6745cf-908e-4741-9367-980ed710a49b-lib-modules\") on node \"ip-172-31-28-25\" DevicePath \"\""
May 14 23:51:55.199857 kubelet[3277]: I0514 23:51:55.199581 3277 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0c6745cf-908e-4741-9367-980ed710a49b-host-proc-sys-kernel\") on node \"ip-172-31-28-25\" DevicePath \"\""
May 14 23:51:55.199857 kubelet[3277]: I0514 23:51:55.199600 3277 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0c6745cf-908e-4741-9367-980ed710a49b-hubble-tls\") on node \"ip-172-31-28-25\" DevicePath \"\""
May 14 23:51:55.199857 kubelet[3277]: I0514 23:51:55.199622 3277 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-bz2jr\" (UniqueName: \"kubernetes.io/projected/0c6745cf-908e-4741-9367-980ed710a49b-kube-api-access-bz2jr\") on node \"ip-172-31-28-25\" DevicePath \"\""
May 14 23:51:55.199857 kubelet[3277]: I0514 23:51:55.199642 3277 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0c6745cf-908e-4741-9367-980ed710a49b-clustermesh-secrets\") on node \"ip-172-31-28-25\" DevicePath \"\""
May 14 23:51:55.199857 kubelet[3277]: I0514 23:51:55.199662 3277 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0c6745cf-908e-4741-9367-980ed710a49b-cilium-cgroup\") on node \"ip-172-31-28-25\" DevicePath \"\""
May 14 23:51:55.200293 kubelet[3277]: I0514 23:51:55.199683 3277 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0c6745cf-908e-4741-9367-980ed710a49b-xtables-lock\") on node \"ip-172-31-28-25\" DevicePath \"\""
May 14 23:51:55.200293 kubelet[3277]: I0514 23:51:55.199702 3277 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0c6745cf-908e-4741-9367-980ed710a49b-cilium-config-path\") on node \"ip-172-31-28-25\" DevicePath \"\""
May 14 23:51:55.200293 kubelet[3277]: I0514 23:51:55.199721 3277 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0c6745cf-908e-4741-9367-980ed710a49b-hostproc\") on node \"ip-172-31-28-25\" DevicePath \"\""
May 14 23:51:55.627872 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ecbd22f9cd6b58eb4c5c1efd06c03a335e080c67931befd0c330b9702ac52ae-rootfs.mount: Deactivated successfully.
May 14 23:51:55.628042 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bf1816cdcb22df23ceb94352157953c263c09c210d938412df92aea7da823513-rootfs.mount: Deactivated successfully.
May 14 23:51:55.628209 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bf1816cdcb22df23ceb94352157953c263c09c210d938412df92aea7da823513-shm.mount: Deactivated successfully.
May 14 23:51:55.628365 systemd[1]: var-lib-kubelet-pods-0c6745cf\x2d908e\x2d4741\x2d9367\x2d980ed710a49b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 14 23:51:55.628501 systemd[1]: var-lib-kubelet-pods-0c6745cf\x2d908e\x2d4741\x2d9367\x2d980ed710a49b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 14 23:51:55.628641 systemd[1]: var-lib-kubelet-pods-ae07a3c6\x2d148f\x2d4676\x2d9a10\x2d4f983071aeb6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgrh4g.mount: Deactivated successfully. May 14 23:51:55.628803 systemd[1]: var-lib-kubelet-pods-0c6745cf\x2d908e\x2d4741\x2d9367\x2d980ed710a49b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbz2jr.mount: Deactivated successfully. May 14 23:51:55.684596 kubelet[3277]: I0514 23:51:55.684460 3277 scope.go:117] "RemoveContainer" containerID="ba45e5bd587756c792b866004160609c03b1b2b737757614948644c10a1a9c9a" May 14 23:51:55.691885 containerd[1955]: time="2025-05-14T23:51:55.691282434Z" level=info msg="RemoveContainer for \"ba45e5bd587756c792b866004160609c03b1b2b737757614948644c10a1a9c9a\"" May 14 23:51:55.701799 containerd[1955]: time="2025-05-14T23:51:55.701686434Z" level=info msg="RemoveContainer for \"ba45e5bd587756c792b866004160609c03b1b2b737757614948644c10a1a9c9a\" returns successfully" May 14 23:51:55.702979 kubelet[3277]: I0514 23:51:55.702341 3277 scope.go:117] "RemoveContainer" containerID="846a5b314fc68dc8a4ffbf95fbe47b1cf463924f740ccc58206294ae6fd8c51f" May 14 23:51:55.705131 containerd[1955]: time="2025-05-14T23:51:55.704792490Z" level=info msg="RemoveContainer for \"846a5b314fc68dc8a4ffbf95fbe47b1cf463924f740ccc58206294ae6fd8c51f\"" May 14 23:51:55.712202 containerd[1955]: time="2025-05-14T23:51:55.712136142Z" level=info msg="RemoveContainer for \"846a5b314fc68dc8a4ffbf95fbe47b1cf463924f740ccc58206294ae6fd8c51f\" returns successfully" May 14 23:51:55.712912 kubelet[3277]: I0514 23:51:55.712700 3277 scope.go:117] "RemoveContainer" containerID="7f7a159677d3a903148fd6fd9cba124919cbdd8e8c8d3b73750fb476544f8891" May 14 23:51:55.718045 containerd[1955]: time="2025-05-14T23:51:55.716351610Z" level=info msg="RemoveContainer for \"7f7a159677d3a903148fd6fd9cba124919cbdd8e8c8d3b73750fb476544f8891\"" May 14 23:51:55.730730 containerd[1955]: time="2025-05-14T23:51:55.730659942Z" level=info 
msg="RemoveContainer for \"7f7a159677d3a903148fd6fd9cba124919cbdd8e8c8d3b73750fb476544f8891\" returns successfully" May 14 23:51:55.731425 kubelet[3277]: I0514 23:51:55.731390 3277 scope.go:117] "RemoveContainer" containerID="f3d5277cf92ec1dca3c908a50b6bbcb26d67b6c058dd4663bc09ae6b49ad4341" May 14 23:51:55.737455 containerd[1955]: time="2025-05-14T23:51:55.737402130Z" level=info msg="RemoveContainer for \"f3d5277cf92ec1dca3c908a50b6bbcb26d67b6c058dd4663bc09ae6b49ad4341\"" May 14 23:51:55.744093 containerd[1955]: time="2025-05-14T23:51:55.744041334Z" level=info msg="RemoveContainer for \"f3d5277cf92ec1dca3c908a50b6bbcb26d67b6c058dd4663bc09ae6b49ad4341\" returns successfully" May 14 23:51:55.744707 kubelet[3277]: I0514 23:51:55.744671 3277 scope.go:117] "RemoveContainer" containerID="a8ecd5b46dc3de840ebcc8d97f7c5efab5797d43a364266bd5692b8e7eaf1d08" May 14 23:51:55.750328 containerd[1955]: time="2025-05-14T23:51:55.750149430Z" level=info msg="RemoveContainer for \"a8ecd5b46dc3de840ebcc8d97f7c5efab5797d43a364266bd5692b8e7eaf1d08\"" May 14 23:51:55.757515 containerd[1955]: time="2025-05-14T23:51:55.757451814Z" level=info msg="RemoveContainer for \"a8ecd5b46dc3de840ebcc8d97f7c5efab5797d43a364266bd5692b8e7eaf1d08\" returns successfully" May 14 23:51:55.758200 kubelet[3277]: I0514 23:51:55.757920 3277 scope.go:117] "RemoveContainer" containerID="ba45e5bd587756c792b866004160609c03b1b2b737757614948644c10a1a9c9a" May 14 23:51:55.759008 containerd[1955]: time="2025-05-14T23:51:55.758883342Z" level=error msg="ContainerStatus for \"ba45e5bd587756c792b866004160609c03b1b2b737757614948644c10a1a9c9a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ba45e5bd587756c792b866004160609c03b1b2b737757614948644c10a1a9c9a\": not found" May 14 23:51:55.759873 kubelet[3277]: E0514 23:51:55.759496 3277 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find 
container \"ba45e5bd587756c792b866004160609c03b1b2b737757614948644c10a1a9c9a\": not found" containerID="ba45e5bd587756c792b866004160609c03b1b2b737757614948644c10a1a9c9a" May 14 23:51:55.759873 kubelet[3277]: I0514 23:51:55.759550 3277 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ba45e5bd587756c792b866004160609c03b1b2b737757614948644c10a1a9c9a"} err="failed to get container status \"ba45e5bd587756c792b866004160609c03b1b2b737757614948644c10a1a9c9a\": rpc error: code = NotFound desc = an error occurred when try to find container \"ba45e5bd587756c792b866004160609c03b1b2b737757614948644c10a1a9c9a\": not found" May 14 23:51:55.759873 kubelet[3277]: I0514 23:51:55.759675 3277 scope.go:117] "RemoveContainer" containerID="846a5b314fc68dc8a4ffbf95fbe47b1cf463924f740ccc58206294ae6fd8c51f" May 14 23:51:55.760232 containerd[1955]: time="2025-05-14T23:51:55.760033794Z" level=error msg="ContainerStatus for \"846a5b314fc68dc8a4ffbf95fbe47b1cf463924f740ccc58206294ae6fd8c51f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"846a5b314fc68dc8a4ffbf95fbe47b1cf463924f740ccc58206294ae6fd8c51f\": not found" May 14 23:51:55.760693 kubelet[3277]: E0514 23:51:55.760632 3277 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"846a5b314fc68dc8a4ffbf95fbe47b1cf463924f740ccc58206294ae6fd8c51f\": not found" containerID="846a5b314fc68dc8a4ffbf95fbe47b1cf463924f740ccc58206294ae6fd8c51f" May 14 23:51:55.760908 kubelet[3277]: I0514 23:51:55.760784 3277 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"846a5b314fc68dc8a4ffbf95fbe47b1cf463924f740ccc58206294ae6fd8c51f"} err="failed to get container status \"846a5b314fc68dc8a4ffbf95fbe47b1cf463924f740ccc58206294ae6fd8c51f\": rpc error: code = NotFound desc = an error occurred when try to find 
container \"846a5b314fc68dc8a4ffbf95fbe47b1cf463924f740ccc58206294ae6fd8c51f\": not found" May 14 23:51:55.761156 kubelet[3277]: I0514 23:51:55.760825 3277 scope.go:117] "RemoveContainer" containerID="7f7a159677d3a903148fd6fd9cba124919cbdd8e8c8d3b73750fb476544f8891" May 14 23:51:55.761777 containerd[1955]: time="2025-05-14T23:51:55.761696130Z" level=error msg="ContainerStatus for \"7f7a159677d3a903148fd6fd9cba124919cbdd8e8c8d3b73750fb476544f8891\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7f7a159677d3a903148fd6fd9cba124919cbdd8e8c8d3b73750fb476544f8891\": not found" May 14 23:51:55.762246 kubelet[3277]: E0514 23:51:55.762199 3277 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7f7a159677d3a903148fd6fd9cba124919cbdd8e8c8d3b73750fb476544f8891\": not found" containerID="7f7a159677d3a903148fd6fd9cba124919cbdd8e8c8d3b73750fb476544f8891" May 14 23:51:55.762373 kubelet[3277]: I0514 23:51:55.762291 3277 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7f7a159677d3a903148fd6fd9cba124919cbdd8e8c8d3b73750fb476544f8891"} err="failed to get container status \"7f7a159677d3a903148fd6fd9cba124919cbdd8e8c8d3b73750fb476544f8891\": rpc error: code = NotFound desc = an error occurred when try to find container \"7f7a159677d3a903148fd6fd9cba124919cbdd8e8c8d3b73750fb476544f8891\": not found" May 14 23:51:55.762373 kubelet[3277]: I0514 23:51:55.762349 3277 scope.go:117] "RemoveContainer" containerID="f3d5277cf92ec1dca3c908a50b6bbcb26d67b6c058dd4663bc09ae6b49ad4341" May 14 23:51:55.762752 containerd[1955]: time="2025-05-14T23:51:55.762700410Z" level=error msg="ContainerStatus for \"f3d5277cf92ec1dca3c908a50b6bbcb26d67b6c058dd4663bc09ae6b49ad4341\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"f3d5277cf92ec1dca3c908a50b6bbcb26d67b6c058dd4663bc09ae6b49ad4341\": not found" May 14 23:51:55.763317 kubelet[3277]: E0514 23:51:55.763135 3277 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f3d5277cf92ec1dca3c908a50b6bbcb26d67b6c058dd4663bc09ae6b49ad4341\": not found" containerID="f3d5277cf92ec1dca3c908a50b6bbcb26d67b6c058dd4663bc09ae6b49ad4341" May 14 23:51:55.763317 kubelet[3277]: I0514 23:51:55.763188 3277 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f3d5277cf92ec1dca3c908a50b6bbcb26d67b6c058dd4663bc09ae6b49ad4341"} err="failed to get container status \"f3d5277cf92ec1dca3c908a50b6bbcb26d67b6c058dd4663bc09ae6b49ad4341\": rpc error: code = NotFound desc = an error occurred when try to find container \"f3d5277cf92ec1dca3c908a50b6bbcb26d67b6c058dd4663bc09ae6b49ad4341\": not found" May 14 23:51:55.763317 kubelet[3277]: I0514 23:51:55.763223 3277 scope.go:117] "RemoveContainer" containerID="a8ecd5b46dc3de840ebcc8d97f7c5efab5797d43a364266bd5692b8e7eaf1d08" May 14 23:51:55.764315 containerd[1955]: time="2025-05-14T23:51:55.764138010Z" level=error msg="ContainerStatus for \"a8ecd5b46dc3de840ebcc8d97f7c5efab5797d43a364266bd5692b8e7eaf1d08\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a8ecd5b46dc3de840ebcc8d97f7c5efab5797d43a364266bd5692b8e7eaf1d08\": not found" May 14 23:51:55.764951 kubelet[3277]: E0514 23:51:55.764810 3277 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a8ecd5b46dc3de840ebcc8d97f7c5efab5797d43a364266bd5692b8e7eaf1d08\": not found" containerID="a8ecd5b46dc3de840ebcc8d97f7c5efab5797d43a364266bd5692b8e7eaf1d08" May 14 23:51:55.764951 kubelet[3277]: I0514 23:51:55.764885 3277 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"a8ecd5b46dc3de840ebcc8d97f7c5efab5797d43a364266bd5692b8e7eaf1d08"} err="failed to get container status \"a8ecd5b46dc3de840ebcc8d97f7c5efab5797d43a364266bd5692b8e7eaf1d08\": rpc error: code = NotFound desc = an error occurred when try to find container \"a8ecd5b46dc3de840ebcc8d97f7c5efab5797d43a364266bd5692b8e7eaf1d08\": not found" May 14 23:51:55.765153 kubelet[3277]: I0514 23:51:55.764917 3277 scope.go:117] "RemoveContainer" containerID="8ced7a2cefb09c8509d7e80de7ad3daff62b6368c6aa6ad16ad571d2a34d9e21" May 14 23:51:55.767780 containerd[1955]: time="2025-05-14T23:51:55.767360250Z" level=info msg="RemoveContainer for \"8ced7a2cefb09c8509d7e80de7ad3daff62b6368c6aa6ad16ad571d2a34d9e21\"" May 14 23:51:55.773845 containerd[1955]: time="2025-05-14T23:51:55.773770218Z" level=info msg="RemoveContainer for \"8ced7a2cefb09c8509d7e80de7ad3daff62b6368c6aa6ad16ad571d2a34d9e21\" returns successfully" May 14 23:51:55.774278 kubelet[3277]: I0514 23:51:55.774123 3277 scope.go:117] "RemoveContainer" containerID="8ced7a2cefb09c8509d7e80de7ad3daff62b6368c6aa6ad16ad571d2a34d9e21" May 14 23:51:55.774780 containerd[1955]: time="2025-05-14T23:51:55.774653634Z" level=error msg="ContainerStatus for \"8ced7a2cefb09c8509d7e80de7ad3daff62b6368c6aa6ad16ad571d2a34d9e21\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8ced7a2cefb09c8509d7e80de7ad3daff62b6368c6aa6ad16ad571d2a34d9e21\": not found" May 14 23:51:55.775052 kubelet[3277]: E0514 23:51:55.774990 3277 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8ced7a2cefb09c8509d7e80de7ad3daff62b6368c6aa6ad16ad571d2a34d9e21\": not found" containerID="8ced7a2cefb09c8509d7e80de7ad3daff62b6368c6aa6ad16ad571d2a34d9e21" May 14 23:51:55.775157 kubelet[3277]: I0514 23:51:55.775044 3277 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"8ced7a2cefb09c8509d7e80de7ad3daff62b6368c6aa6ad16ad571d2a34d9e21"} err="failed to get container status \"8ced7a2cefb09c8509d7e80de7ad3daff62b6368c6aa6ad16ad571d2a34d9e21\": rpc error: code = NotFound desc = an error occurred when try to find container \"8ced7a2cefb09c8509d7e80de7ad3daff62b6368c6aa6ad16ad571d2a34d9e21\": not found" May 14 23:51:56.538391 sshd[5096]: Connection closed by 139.178.89.65 port 34148 May 14 23:51:56.538803 sshd-session[5094]: pam_unix(sshd:session): session closed for user core May 14 23:51:56.544775 systemd[1]: sshd@24-172.31.28.25:22-139.178.89.65:34148.service: Deactivated successfully. May 14 23:51:56.548453 systemd[1]: session-25.scope: Deactivated successfully. May 14 23:51:56.548935 systemd[1]: session-25.scope: Consumed 1.575s CPU time, 23.6M memory peak. May 14 23:51:56.552352 systemd-logind[1930]: Session 25 logged out. Waiting for processes to exit. May 14 23:51:56.554282 systemd-logind[1930]: Removed session 25. May 14 23:51:56.577413 systemd[1]: Started sshd@25-172.31.28.25:22-139.178.89.65:45160.service - OpenSSH per-connection server daemon (139.178.89.65:45160). May 14 23:51:56.778696 sshd[5257]: Accepted publickey for core from 139.178.89.65 port 45160 ssh2: RSA SHA256:P5lx8LuVgYRnVINBokzXFUV2F/1CVpmkiH+0ahpdjwk May 14 23:51:56.781268 sshd-session[5257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:51:56.790620 systemd-logind[1930]: New session 26 of user core. May 14 23:51:56.796356 systemd[1]: Started session-26.scope - Session 26 of User core. 
May 14 23:51:57.172853 kubelet[3277]: I0514 23:51:57.171688 3277 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c6745cf-908e-4741-9367-980ed710a49b" path="/var/lib/kubelet/pods/0c6745cf-908e-4741-9367-980ed710a49b/volumes" May 14 23:51:57.174739 kubelet[3277]: I0514 23:51:57.174279 3277 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae07a3c6-148f-4676-9a10-4f983071aeb6" path="/var/lib/kubelet/pods/ae07a3c6-148f-4676-9a10-4f983071aeb6/volumes" May 14 23:51:57.235291 ntpd[1922]: Deleting interface #11 lxc_health, fe80::1892:71ff:fe10:ec0d%8#123, interface stats: received=0, sent=0, dropped=0, active_time=75 secs May 14 23:51:57.235793 ntpd[1922]: 14 May 23:51:57 ntpd[1922]: Deleting interface #11 lxc_health, fe80::1892:71ff:fe10:ec0d%8#123, interface stats: received=0, sent=0, dropped=0, active_time=75 secs May 14 23:51:58.426610 kubelet[3277]: I0514 23:51:58.424061 3277 topology_manager.go:215] "Topology Admit Handler" podUID="1c185d4d-79a7-48c3-8131-04b5c9ad3eff" podNamespace="kube-system" podName="cilium-bwr4k" May 14 23:51:58.426610 kubelet[3277]: E0514 23:51:58.424182 3277 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0c6745cf-908e-4741-9367-980ed710a49b" containerName="clean-cilium-state" May 14 23:51:58.426610 kubelet[3277]: E0514 23:51:58.424204 3277 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0c6745cf-908e-4741-9367-980ed710a49b" containerName="mount-cgroup" May 14 23:51:58.426610 kubelet[3277]: E0514 23:51:58.424219 3277 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0c6745cf-908e-4741-9367-980ed710a49b" containerName="apply-sysctl-overwrites" May 14 23:51:58.426610 kubelet[3277]: E0514 23:51:58.424235 3277 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0c6745cf-908e-4741-9367-980ed710a49b" containerName="mount-bpf-fs" May 14 23:51:58.426610 kubelet[3277]: E0514 23:51:58.424249 3277 cpu_manager.go:395] "RemoveStaleState: 
removing container" podUID="ae07a3c6-148f-4676-9a10-4f983071aeb6" containerName="cilium-operator" May 14 23:51:58.426610 kubelet[3277]: E0514 23:51:58.424264 3277 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0c6745cf-908e-4741-9367-980ed710a49b" containerName="cilium-agent" May 14 23:51:58.426610 kubelet[3277]: I0514 23:51:58.424307 3277 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c6745cf-908e-4741-9367-980ed710a49b" containerName="cilium-agent" May 14 23:51:58.426610 kubelet[3277]: I0514 23:51:58.424323 3277 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae07a3c6-148f-4676-9a10-4f983071aeb6" containerName="cilium-operator" May 14 23:51:58.426416 sshd-session[5257]: pam_unix(sshd:session): session closed for user core May 14 23:51:58.427964 sshd[5259]: Connection closed by 139.178.89.65 port 45160 May 14 23:51:58.439525 systemd[1]: sshd@25-172.31.28.25:22-139.178.89.65:45160.service: Deactivated successfully. May 14 23:51:58.447146 systemd[1]: session-26.scope: Deactivated successfully. May 14 23:51:58.447950 systemd[1]: session-26.scope: Consumed 1.430s CPU time, 25.1M memory peak. May 14 23:51:58.463875 systemd-logind[1930]: Session 26 logged out. Waiting for processes to exit. May 14 23:51:58.478651 systemd[1]: Started sshd@26-172.31.28.25:22-139.178.89.65:45174.service - OpenSSH per-connection server daemon (139.178.89.65:45174). May 14 23:51:58.484684 systemd-logind[1930]: Removed session 26. May 14 23:51:58.499978 systemd[1]: Created slice kubepods-burstable-pod1c185d4d_79a7_48c3_8131_04b5c9ad3eff.slice - libcontainer container kubepods-burstable-pod1c185d4d_79a7_48c3_8131_04b5c9ad3eff.slice. 
May 14 23:51:58.504603 kubelet[3277]: E0514 23:51:58.504509 3277 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 14 23:51:58.521444 kubelet[3277]: I0514 23:51:58.521348 3277 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1c185d4d-79a7-48c3-8131-04b5c9ad3eff-bpf-maps\") pod \"cilium-bwr4k\" (UID: \"1c185d4d-79a7-48c3-8131-04b5c9ad3eff\") " pod="kube-system/cilium-bwr4k" May 14 23:51:58.522080 kubelet[3277]: I0514 23:51:58.522043 3277 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1c185d4d-79a7-48c3-8131-04b5c9ad3eff-hostproc\") pod \"cilium-bwr4k\" (UID: \"1c185d4d-79a7-48c3-8131-04b5c9ad3eff\") " pod="kube-system/cilium-bwr4k" May 14 23:51:58.522838 kubelet[3277]: I0514 23:51:58.522564 3277 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1c185d4d-79a7-48c3-8131-04b5c9ad3eff-host-proc-sys-net\") pod \"cilium-bwr4k\" (UID: \"1c185d4d-79a7-48c3-8131-04b5c9ad3eff\") " pod="kube-system/cilium-bwr4k" May 14 23:51:58.525133 kubelet[3277]: I0514 23:51:58.523279 3277 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1c185d4d-79a7-48c3-8131-04b5c9ad3eff-cilium-ipsec-secrets\") pod \"cilium-bwr4k\" (UID: \"1c185d4d-79a7-48c3-8131-04b5c9ad3eff\") " pod="kube-system/cilium-bwr4k" May 14 23:51:58.525133 kubelet[3277]: I0514 23:51:58.523336 3277 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/1c185d4d-79a7-48c3-8131-04b5c9ad3eff-etc-cni-netd\") pod \"cilium-bwr4k\" (UID: \"1c185d4d-79a7-48c3-8131-04b5c9ad3eff\") " pod="kube-system/cilium-bwr4k" May 14 23:51:58.525133 kubelet[3277]: I0514 23:51:58.523382 3277 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1c185d4d-79a7-48c3-8131-04b5c9ad3eff-cilium-run\") pod \"cilium-bwr4k\" (UID: \"1c185d4d-79a7-48c3-8131-04b5c9ad3eff\") " pod="kube-system/cilium-bwr4k" May 14 23:51:58.525133 kubelet[3277]: I0514 23:51:58.523416 3277 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1c185d4d-79a7-48c3-8131-04b5c9ad3eff-cilium-cgroup\") pod \"cilium-bwr4k\" (UID: \"1c185d4d-79a7-48c3-8131-04b5c9ad3eff\") " pod="kube-system/cilium-bwr4k" May 14 23:51:58.525133 kubelet[3277]: I0514 23:51:58.523453 3277 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1c185d4d-79a7-48c3-8131-04b5c9ad3eff-host-proc-sys-kernel\") pod \"cilium-bwr4k\" (UID: \"1c185d4d-79a7-48c3-8131-04b5c9ad3eff\") " pod="kube-system/cilium-bwr4k" May 14 23:51:58.525133 kubelet[3277]: I0514 23:51:58.523486 3277 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1c185d4d-79a7-48c3-8131-04b5c9ad3eff-hubble-tls\") pod \"cilium-bwr4k\" (UID: \"1c185d4d-79a7-48c3-8131-04b5c9ad3eff\") " pod="kube-system/cilium-bwr4k" May 14 23:51:58.525546 kubelet[3277]: I0514 23:51:58.523523 3277 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1c185d4d-79a7-48c3-8131-04b5c9ad3eff-cni-path\") pod \"cilium-bwr4k\" (UID: 
\"1c185d4d-79a7-48c3-8131-04b5c9ad3eff\") " pod="kube-system/cilium-bwr4k" May 14 23:51:58.525546 kubelet[3277]: I0514 23:51:58.523563 3277 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c185d4d-79a7-48c3-8131-04b5c9ad3eff-lib-modules\") pod \"cilium-bwr4k\" (UID: \"1c185d4d-79a7-48c3-8131-04b5c9ad3eff\") " pod="kube-system/cilium-bwr4k" May 14 23:51:58.525546 kubelet[3277]: I0514 23:51:58.523599 3277 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1c185d4d-79a7-48c3-8131-04b5c9ad3eff-cilium-config-path\") pod \"cilium-bwr4k\" (UID: \"1c185d4d-79a7-48c3-8131-04b5c9ad3eff\") " pod="kube-system/cilium-bwr4k" May 14 23:51:58.525546 kubelet[3277]: I0514 23:51:58.523641 3277 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fctc\" (UniqueName: \"kubernetes.io/projected/1c185d4d-79a7-48c3-8131-04b5c9ad3eff-kube-api-access-6fctc\") pod \"cilium-bwr4k\" (UID: \"1c185d4d-79a7-48c3-8131-04b5c9ad3eff\") " pod="kube-system/cilium-bwr4k" May 14 23:51:58.525546 kubelet[3277]: I0514 23:51:58.523683 3277 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1c185d4d-79a7-48c3-8131-04b5c9ad3eff-xtables-lock\") pod \"cilium-bwr4k\" (UID: \"1c185d4d-79a7-48c3-8131-04b5c9ad3eff\") " pod="kube-system/cilium-bwr4k" May 14 23:51:58.525546 kubelet[3277]: I0514 23:51:58.523738 3277 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1c185d4d-79a7-48c3-8131-04b5c9ad3eff-clustermesh-secrets\") pod \"cilium-bwr4k\" (UID: \"1c185d4d-79a7-48c3-8131-04b5c9ad3eff\") " pod="kube-system/cilium-bwr4k" May 14 23:51:58.721019 
sshd[5269]: Accepted publickey for core from 139.178.89.65 port 45174 ssh2: RSA SHA256:P5lx8LuVgYRnVINBokzXFUV2F/1CVpmkiH+0ahpdjwk May 14 23:51:58.731051 sshd-session[5269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:51:58.744206 systemd-logind[1930]: New session 27 of user core. May 14 23:51:58.752454 systemd[1]: Started session-27.scope - Session 27 of User core. May 14 23:51:58.810684 containerd[1955]: time="2025-05-14T23:51:58.810214053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bwr4k,Uid:1c185d4d-79a7-48c3-8131-04b5c9ad3eff,Namespace:kube-system,Attempt:0,}" May 14 23:51:58.856230 containerd[1955]: time="2025-05-14T23:51:58.855958606Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:51:58.856230 containerd[1955]: time="2025-05-14T23:51:58.856049374Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:51:58.856230 containerd[1955]: time="2025-05-14T23:51:58.856085458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:51:58.856650 containerd[1955]: time="2025-05-14T23:51:58.856285090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:51:58.879381 sshd[5276]: Connection closed by 139.178.89.65 port 45174 May 14 23:51:58.880419 sshd-session[5269]: pam_unix(sshd:session): session closed for user core May 14 23:51:58.889479 systemd[1]: sshd@26-172.31.28.25:22-139.178.89.65:45174.service: Deactivated successfully. May 14 23:51:58.895628 systemd[1]: session-27.scope: Deactivated successfully. May 14 23:51:58.900802 systemd-logind[1930]: Session 27 logged out. Waiting for processes to exit. 
May 14 23:51:58.926444 systemd[1]: Started cri-containerd-820a3c114e51ec435df9d32d1da047caf912ca23089f1cc9ffc310267f356fcc.scope - libcontainer container 820a3c114e51ec435df9d32d1da047caf912ca23089f1cc9ffc310267f356fcc. May 14 23:51:58.930730 systemd[1]: Started sshd@27-172.31.28.25:22-139.178.89.65:45188.service - OpenSSH per-connection server daemon (139.178.89.65:45188). May 14 23:51:58.936264 systemd-logind[1930]: Removed session 27. May 14 23:51:58.988525 containerd[1955]: time="2025-05-14T23:51:58.988016950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bwr4k,Uid:1c185d4d-79a7-48c3-8131-04b5c9ad3eff,Namespace:kube-system,Attempt:0,} returns sandbox id \"820a3c114e51ec435df9d32d1da047caf912ca23089f1cc9ffc310267f356fcc\"" May 14 23:51:58.994056 containerd[1955]: time="2025-05-14T23:51:58.993920794Z" level=info msg="CreateContainer within sandbox \"820a3c114e51ec435df9d32d1da047caf912ca23089f1cc9ffc310267f356fcc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 14 23:51:59.020236 containerd[1955]: time="2025-05-14T23:51:59.020139582Z" level=info msg="CreateContainer within sandbox \"820a3c114e51ec435df9d32d1da047caf912ca23089f1cc9ffc310267f356fcc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ff6458356dcdb426ba6913b82854e74370c93f7db62afbeb59f5e89555dc8388\"" May 14 23:51:59.022886 containerd[1955]: time="2025-05-14T23:51:59.021323478Z" level=info msg="StartContainer for \"ff6458356dcdb426ba6913b82854e74370c93f7db62afbeb59f5e89555dc8388\"" May 14 23:51:59.069448 systemd[1]: Started cri-containerd-ff6458356dcdb426ba6913b82854e74370c93f7db62afbeb59f5e89555dc8388.scope - libcontainer container ff6458356dcdb426ba6913b82854e74370c93f7db62afbeb59f5e89555dc8388. 
May 14 23:51:59.121523 containerd[1955]: time="2025-05-14T23:51:59.121372591Z" level=info msg="StartContainer for \"ff6458356dcdb426ba6913b82854e74370c93f7db62afbeb59f5e89555dc8388\" returns successfully"
May 14 23:51:59.141567 sshd[5309]: Accepted publickey for core from 139.178.89.65 port 45188 ssh2: RSA SHA256:P5lx8LuVgYRnVINBokzXFUV2F/1CVpmkiH+0ahpdjwk
May 14 23:51:59.146144 sshd-session[5309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:51:59.151795 systemd[1]: cri-containerd-ff6458356dcdb426ba6913b82854e74370c93f7db62afbeb59f5e89555dc8388.scope: Deactivated successfully.
May 14 23:51:59.161332 systemd-logind[1930]: New session 28 of user core.
May 14 23:51:59.167964 systemd[1]: Started session-28.scope - Session 28 of User core.
May 14 23:51:59.233568 containerd[1955]: time="2025-05-14T23:51:59.233472247Z" level=info msg="shim disconnected" id=ff6458356dcdb426ba6913b82854e74370c93f7db62afbeb59f5e89555dc8388 namespace=k8s.io
May 14 23:51:59.233568 containerd[1955]: time="2025-05-14T23:51:59.233552923Z" level=warning msg="cleaning up after shim disconnected" id=ff6458356dcdb426ba6913b82854e74370c93f7db62afbeb59f5e89555dc8388 namespace=k8s.io
May 14 23:51:59.233895 containerd[1955]: time="2025-05-14T23:51:59.233576275Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 23:51:59.724499 containerd[1955]: time="2025-05-14T23:51:59.724425850Z" level=info msg="CreateContainer within sandbox \"820a3c114e51ec435df9d32d1da047caf912ca23089f1cc9ffc310267f356fcc\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 14 23:51:59.757824 containerd[1955]: time="2025-05-14T23:51:59.756486658Z" level=info msg="CreateContainer within sandbox \"820a3c114e51ec435df9d32d1da047caf912ca23089f1cc9ffc310267f356fcc\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7f22af02f32d7e0a1501687af5c8429ecf62aeff4de2a4a26b3fbd0962730832\""
May 14 23:51:59.759034 containerd[1955]: time="2025-05-14T23:51:59.758576398Z" level=info msg="StartContainer for \"7f22af02f32d7e0a1501687af5c8429ecf62aeff4de2a4a26b3fbd0962730832\""
May 14 23:51:59.836813 systemd[1]: Started cri-containerd-7f22af02f32d7e0a1501687af5c8429ecf62aeff4de2a4a26b3fbd0962730832.scope - libcontainer container 7f22af02f32d7e0a1501687af5c8429ecf62aeff4de2a4a26b3fbd0962730832.
May 14 23:51:59.893217 containerd[1955]: time="2025-05-14T23:51:59.892876763Z" level=info msg="StartContainer for \"7f22af02f32d7e0a1501687af5c8429ecf62aeff4de2a4a26b3fbd0962730832\" returns successfully"
May 14 23:51:59.905468 systemd[1]: cri-containerd-7f22af02f32d7e0a1501687af5c8429ecf62aeff4de2a4a26b3fbd0962730832.scope: Deactivated successfully.
May 14 23:51:59.947730 containerd[1955]: time="2025-05-14T23:51:59.947606483Z" level=info msg="shim disconnected" id=7f22af02f32d7e0a1501687af5c8429ecf62aeff4de2a4a26b3fbd0962730832 namespace=k8s.io
May 14 23:51:59.948030 containerd[1955]: time="2025-05-14T23:51:59.947681759Z" level=warning msg="cleaning up after shim disconnected" id=7f22af02f32d7e0a1501687af5c8429ecf62aeff4de2a4a26b3fbd0962730832 namespace=k8s.io
May 14 23:51:59.948030 containerd[1955]: time="2025-05-14T23:51:59.947814971Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 23:52:00.648554 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f22af02f32d7e0a1501687af5c8429ecf62aeff4de2a4a26b3fbd0962730832-rootfs.mount: Deactivated successfully.
May 14 23:52:00.726827 containerd[1955]: time="2025-05-14T23:52:00.726652787Z" level=info msg="CreateContainer within sandbox \"820a3c114e51ec435df9d32d1da047caf912ca23089f1cc9ffc310267f356fcc\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 14 23:52:00.759549 containerd[1955]: time="2025-05-14T23:52:00.759447659Z" level=info msg="CreateContainer within sandbox \"820a3c114e51ec435df9d32d1da047caf912ca23089f1cc9ffc310267f356fcc\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1f3b119c53d6f09957f44c8ecb3d8ab8d8021e8e48dcfb6663e87a5a92f09e5b\""
May 14 23:52:00.762260 containerd[1955]: time="2025-05-14T23:52:00.761475023Z" level=info msg="StartContainer for \"1f3b119c53d6f09957f44c8ecb3d8ab8d8021e8e48dcfb6663e87a5a92f09e5b\""
May 14 23:52:00.828922 systemd[1]: Started cri-containerd-1f3b119c53d6f09957f44c8ecb3d8ab8d8021e8e48dcfb6663e87a5a92f09e5b.scope - libcontainer container 1f3b119c53d6f09957f44c8ecb3d8ab8d8021e8e48dcfb6663e87a5a92f09e5b.
May 14 23:52:00.894580 containerd[1955]: time="2025-05-14T23:52:00.894475068Z" level=info msg="StartContainer for \"1f3b119c53d6f09957f44c8ecb3d8ab8d8021e8e48dcfb6663e87a5a92f09e5b\" returns successfully"
May 14 23:52:00.899928 systemd[1]: cri-containerd-1f3b119c53d6f09957f44c8ecb3d8ab8d8021e8e48dcfb6663e87a5a92f09e5b.scope: Deactivated successfully.
May 14 23:52:00.950956 containerd[1955]: time="2025-05-14T23:52:00.950702124Z" level=info msg="shim disconnected" id=1f3b119c53d6f09957f44c8ecb3d8ab8d8021e8e48dcfb6663e87a5a92f09e5b namespace=k8s.io
May 14 23:52:00.950956 containerd[1955]: time="2025-05-14T23:52:00.950775324Z" level=warning msg="cleaning up after shim disconnected" id=1f3b119c53d6f09957f44c8ecb3d8ab8d8021e8e48dcfb6663e87a5a92f09e5b namespace=k8s.io
May 14 23:52:00.950956 containerd[1955]: time="2025-05-14T23:52:00.950795724Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 23:52:01.648458 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f3b119c53d6f09957f44c8ecb3d8ab8d8021e8e48dcfb6663e87a5a92f09e5b-rootfs.mount: Deactivated successfully.
May 14 23:52:01.734026 containerd[1955]: time="2025-05-14T23:52:01.733158492Z" level=info msg="CreateContainer within sandbox \"820a3c114e51ec435df9d32d1da047caf912ca23089f1cc9ffc310267f356fcc\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 14 23:52:01.764084 containerd[1955]: time="2025-05-14T23:52:01.764025864Z" level=info msg="CreateContainer within sandbox \"820a3c114e51ec435df9d32d1da047caf912ca23089f1cc9ffc310267f356fcc\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"85e9d527a6ec548ea5c6c6844f58e7556b3176f99d16c5af1bdb8c082bc91b91\""
May 14 23:52:01.765281 containerd[1955]: time="2025-05-14T23:52:01.765141276Z" level=info msg="StartContainer for \"85e9d527a6ec548ea5c6c6844f58e7556b3176f99d16c5af1bdb8c082bc91b91\""
May 14 23:52:01.825454 systemd[1]: Started cri-containerd-85e9d527a6ec548ea5c6c6844f58e7556b3176f99d16c5af1bdb8c082bc91b91.scope - libcontainer container 85e9d527a6ec548ea5c6c6844f58e7556b3176f99d16c5af1bdb8c082bc91b91.
May 14 23:52:01.905533 systemd[1]: cri-containerd-85e9d527a6ec548ea5c6c6844f58e7556b3176f99d16c5af1bdb8c082bc91b91.scope: Deactivated successfully.
May 14 23:52:01.909603 containerd[1955]: time="2025-05-14T23:52:01.908983957Z" level=info msg="StartContainer for \"85e9d527a6ec548ea5c6c6844f58e7556b3176f99d16c5af1bdb8c082bc91b91\" returns successfully"
May 14 23:52:01.953737 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-85e9d527a6ec548ea5c6c6844f58e7556b3176f99d16c5af1bdb8c082bc91b91-rootfs.mount: Deactivated successfully.
May 14 23:52:01.963707 containerd[1955]: time="2025-05-14T23:52:01.963581257Z" level=info msg="shim disconnected" id=85e9d527a6ec548ea5c6c6844f58e7556b3176f99d16c5af1bdb8c082bc91b91 namespace=k8s.io
May 14 23:52:01.963707 containerd[1955]: time="2025-05-14T23:52:01.963696673Z" level=warning msg="cleaning up after shim disconnected" id=85e9d527a6ec548ea5c6c6844f58e7556b3176f99d16c5af1bdb8c082bc91b91 namespace=k8s.io
May 14 23:52:01.964254 containerd[1955]: time="2025-05-14T23:52:01.963718453Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 23:52:02.743300 containerd[1955]: time="2025-05-14T23:52:02.743145097Z" level=info msg="CreateContainer within sandbox \"820a3c114e51ec435df9d32d1da047caf912ca23089f1cc9ffc310267f356fcc\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 14 23:52:02.773149 containerd[1955]: time="2025-05-14T23:52:02.773053633Z" level=info msg="CreateContainer within sandbox \"820a3c114e51ec435df9d32d1da047caf912ca23089f1cc9ffc310267f356fcc\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ea3cb18c6350ed84fa352fca3e88b729a5e031f9dcc52cbb740dc87ea7f2fe4c\""
May 14 23:52:02.775288 containerd[1955]: time="2025-05-14T23:52:02.774411025Z" level=info msg="StartContainer for \"ea3cb18c6350ed84fa352fca3e88b729a5e031f9dcc52cbb740dc87ea7f2fe4c\""
May 14 23:52:02.836801 systemd[1]: run-containerd-runc-k8s.io-ea3cb18c6350ed84fa352fca3e88b729a5e031f9dcc52cbb740dc87ea7f2fe4c-runc.I7RLc9.mount: Deactivated successfully.
May 14 23:52:02.851627 systemd[1]: Started cri-containerd-ea3cb18c6350ed84fa352fca3e88b729a5e031f9dcc52cbb740dc87ea7f2fe4c.scope - libcontainer container ea3cb18c6350ed84fa352fca3e88b729a5e031f9dcc52cbb740dc87ea7f2fe4c.
May 14 23:52:02.918875 containerd[1955]: time="2025-05-14T23:52:02.918775322Z" level=info msg="StartContainer for \"ea3cb18c6350ed84fa352fca3e88b729a5e031f9dcc52cbb740dc87ea7f2fe4c\" returns successfully"
May 14 23:52:03.133567 containerd[1955]: time="2025-05-14T23:52:03.132964307Z" level=info msg="StopPodSandbox for \"3ecbd22f9cd6b58eb4c5c1efd06c03a335e080c67931befd0c330b9702ac52ae\""
May 14 23:52:03.133567 containerd[1955]: time="2025-05-14T23:52:03.133141943Z" level=info msg="TearDown network for sandbox \"3ecbd22f9cd6b58eb4c5c1efd06c03a335e080c67931befd0c330b9702ac52ae\" successfully"
May 14 23:52:03.133567 containerd[1955]: time="2025-05-14T23:52:03.133166015Z" level=info msg="StopPodSandbox for \"3ecbd22f9cd6b58eb4c5c1efd06c03a335e080c67931befd0c330b9702ac52ae\" returns successfully"
May 14 23:52:03.136133 containerd[1955]: time="2025-05-14T23:52:03.134454455Z" level=info msg="RemovePodSandbox for \"3ecbd22f9cd6b58eb4c5c1efd06c03a335e080c67931befd0c330b9702ac52ae\""
May 14 23:52:03.136133 containerd[1955]: time="2025-05-14T23:52:03.134522651Z" level=info msg="Forcibly stopping sandbox \"3ecbd22f9cd6b58eb4c5c1efd06c03a335e080c67931befd0c330b9702ac52ae\""
May 14 23:52:03.136133 containerd[1955]: time="2025-05-14T23:52:03.134626271Z" level=info msg="TearDown network for sandbox \"3ecbd22f9cd6b58eb4c5c1efd06c03a335e080c67931befd0c330b9702ac52ae\" successfully"
May 14 23:52:03.142987 containerd[1955]: time="2025-05-14T23:52:03.142932035Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3ecbd22f9cd6b58eb4c5c1efd06c03a335e080c67931befd0c330b9702ac52ae\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 14 23:52:03.143324 containerd[1955]: time="2025-05-14T23:52:03.143290595Z" level=info msg="RemovePodSandbox \"3ecbd22f9cd6b58eb4c5c1efd06c03a335e080c67931befd0c330b9702ac52ae\" returns successfully"
May 14 23:52:03.144741 containerd[1955]: time="2025-05-14T23:52:03.144694127Z" level=info msg="StopPodSandbox for \"bf1816cdcb22df23ceb94352157953c263c09c210d938412df92aea7da823513\""
May 14 23:52:03.145414 containerd[1955]: time="2025-05-14T23:52:03.145378439Z" level=info msg="TearDown network for sandbox \"bf1816cdcb22df23ceb94352157953c263c09c210d938412df92aea7da823513\" successfully"
May 14 23:52:03.145609 containerd[1955]: time="2025-05-14T23:52:03.145557527Z" level=info msg="StopPodSandbox for \"bf1816cdcb22df23ceb94352157953c263c09c210d938412df92aea7da823513\" returns successfully"
May 14 23:52:03.146952 containerd[1955]: time="2025-05-14T23:52:03.146875451Z" level=info msg="RemovePodSandbox for \"bf1816cdcb22df23ceb94352157953c263c09c210d938412df92aea7da823513\""
May 14 23:52:03.147208 containerd[1955]: time="2025-05-14T23:52:03.147071807Z" level=info msg="Forcibly stopping sandbox \"bf1816cdcb22df23ceb94352157953c263c09c210d938412df92aea7da823513\""
May 14 23:52:03.147475 containerd[1955]: time="2025-05-14T23:52:03.147351119Z" level=info msg="TearDown network for sandbox \"bf1816cdcb22df23ceb94352157953c263c09c210d938412df92aea7da823513\" successfully"
May 14 23:52:03.154327 containerd[1955]: time="2025-05-14T23:52:03.154276103Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bf1816cdcb22df23ceb94352157953c263c09c210d938412df92aea7da823513\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 14 23:52:03.154659 containerd[1955]: time="2025-05-14T23:52:03.154613375Z" level=info msg="RemovePodSandbox \"bf1816cdcb22df23ceb94352157953c263c09c210d938412df92aea7da823513\" returns successfully"
May 14 23:52:03.714585 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
May 14 23:52:05.703506 systemd[1]: run-containerd-runc-k8s.io-ea3cb18c6350ed84fa352fca3e88b729a5e031f9dcc52cbb740dc87ea7f2fe4c-runc.p7DB9x.mount: Deactivated successfully.
May 14 23:52:07.965584 systemd-networkd[1773]: lxc_health: Link UP
May 14 23:52:07.990948 (udev-worker)[6118]: Network interface NamePolicy= disabled on kernel command line.
May 14 23:52:07.992651 systemd-networkd[1773]: lxc_health: Gained carrier
May 14 23:52:08.850661 kubelet[3277]: I0514 23:52:08.848660 3277 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bwr4k" podStartSLOduration=10.848637811 podStartE2EDuration="10.848637811s" podCreationTimestamp="2025-05-14 23:51:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:52:03.787084622 +0000 UTC m=+120.932256038" watchObservedRunningTime="2025-05-14 23:52:08.848637811 +0000 UTC m=+125.993809203"
May 14 23:52:09.906411 systemd-networkd[1773]: lxc_health: Gained IPv6LL
May 14 23:52:12.235443 ntpd[1922]: Listen normally on 14 lxc_health [fe80::ec50:53ff:fe4e:fec6%14]:123
May 14 23:52:12.235995 ntpd[1922]: 14 May 23:52:12 ntpd[1922]: Listen normally on 14 lxc_health [fe80::ec50:53ff:fe4e:fec6%14]:123
May 14 23:52:12.762977 systemd[1]: run-containerd-runc-k8s.io-ea3cb18c6350ed84fa352fca3e88b729a5e031f9dcc52cbb740dc87ea7f2fe4c-runc.vBqu9K.mount: Deactivated successfully.
May 14 23:52:15.146168 sshd[5364]: Connection closed by 139.178.89.65 port 45188
May 14 23:52:15.147281 sshd-session[5309]: pam_unix(sshd:session): session closed for user core
May 14 23:52:15.155570 systemd[1]: sshd@27-172.31.28.25:22-139.178.89.65:45188.service: Deactivated successfully.
May 14 23:52:15.162676 systemd[1]: session-28.scope: Deactivated successfully.
May 14 23:52:15.167650 systemd-logind[1930]: Session 28 logged out. Waiting for processes to exit.
May 14 23:52:15.171810 systemd-logind[1930]: Removed session 28.
May 14 23:52:29.608698 systemd[1]: cri-containerd-ac6e0301a6b20cacd1422dffcd393adb1646bfd83adef3168213e8f7859c440e.scope: Deactivated successfully.
May 14 23:52:29.611800 systemd[1]: cri-containerd-ac6e0301a6b20cacd1422dffcd393adb1646bfd83adef3168213e8f7859c440e.scope: Consumed 6.456s CPU time, 59.5M memory peak.
May 14 23:52:29.652687 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac6e0301a6b20cacd1422dffcd393adb1646bfd83adef3168213e8f7859c440e-rootfs.mount: Deactivated successfully.
May 14 23:52:29.657615 containerd[1955]: time="2025-05-14T23:52:29.657520251Z" level=info msg="shim disconnected" id=ac6e0301a6b20cacd1422dffcd393adb1646bfd83adef3168213e8f7859c440e namespace=k8s.io
May 14 23:52:29.658510 containerd[1955]: time="2025-05-14T23:52:29.657616491Z" level=warning msg="cleaning up after shim disconnected" id=ac6e0301a6b20cacd1422dffcd393adb1646bfd83adef3168213e8f7859c440e namespace=k8s.io
May 14 23:52:29.658510 containerd[1955]: time="2025-05-14T23:52:29.657638679Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 23:52:29.829624 kubelet[3277]: I0514 23:52:29.829472 3277 scope.go:117] "RemoveContainer" containerID="ac6e0301a6b20cacd1422dffcd393adb1646bfd83adef3168213e8f7859c440e"
May 14 23:52:29.833866 containerd[1955]: time="2025-05-14T23:52:29.833810979Z" level=info msg="CreateContainer within sandbox \"bd11a910d23d34f941d32383992a19208e562347ce9a72963b21fc80e2f570a7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
May 14 23:52:29.860187 containerd[1955]: time="2025-05-14T23:52:29.860014012Z" level=info msg="CreateContainer within sandbox \"bd11a910d23d34f941d32383992a19208e562347ce9a72963b21fc80e2f570a7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"427d46098de25ac7865915166211c305f86b83588966878299a2d48b890a9c1c\""
May 14 23:52:29.861403 containerd[1955]: time="2025-05-14T23:52:29.861203032Z" level=info msg="StartContainer for \"427d46098de25ac7865915166211c305f86b83588966878299a2d48b890a9c1c\""
May 14 23:52:29.918819 systemd[1]: run-containerd-runc-k8s.io-427d46098de25ac7865915166211c305f86b83588966878299a2d48b890a9c1c-runc.D45BPi.mount: Deactivated successfully.
May 14 23:52:29.937459 systemd[1]: Started cri-containerd-427d46098de25ac7865915166211c305f86b83588966878299a2d48b890a9c1c.scope - libcontainer container 427d46098de25ac7865915166211c305f86b83588966878299a2d48b890a9c1c.
May 14 23:52:30.008559 containerd[1955]: time="2025-05-14T23:52:30.008487600Z" level=info msg="StartContainer for \"427d46098de25ac7865915166211c305f86b83588966878299a2d48b890a9c1c\" returns successfully"
May 14 23:52:35.166907 systemd[1]: cri-containerd-b7b595ddcbef7b0a843bbf4827c6ebd02e93bdc4e01667874ccb39b514cf15a3.scope: Deactivated successfully.
May 14 23:52:35.167623 systemd[1]: cri-containerd-b7b595ddcbef7b0a843bbf4827c6ebd02e93bdc4e01667874ccb39b514cf15a3.scope: Consumed 4.362s CPU time, 22.5M memory peak.
May 14 23:52:35.211040 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b7b595ddcbef7b0a843bbf4827c6ebd02e93bdc4e01667874ccb39b514cf15a3-rootfs.mount: Deactivated successfully.
May 14 23:52:35.223630 containerd[1955]: time="2025-05-14T23:52:35.223334706Z" level=info msg="shim disconnected" id=b7b595ddcbef7b0a843bbf4827c6ebd02e93bdc4e01667874ccb39b514cf15a3 namespace=k8s.io
May 14 23:52:35.223630 containerd[1955]: time="2025-05-14T23:52:35.223408530Z" level=warning msg="cleaning up after shim disconnected" id=b7b595ddcbef7b0a843bbf4827c6ebd02e93bdc4e01667874ccb39b514cf15a3 namespace=k8s.io
May 14 23:52:35.223630 containerd[1955]: time="2025-05-14T23:52:35.223431222Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 23:52:35.253044 kubelet[3277]: E0514 23:52:35.252900 3277 controller.go:195] "Failed to update lease" err="Put \"https://172.31.28.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-25?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
May 14 23:52:35.851961 kubelet[3277]: I0514 23:52:35.851893 3277 scope.go:117] "RemoveContainer" containerID="b7b595ddcbef7b0a843bbf4827c6ebd02e93bdc4e01667874ccb39b514cf15a3"
May 14 23:52:35.856009 containerd[1955]: time="2025-05-14T23:52:35.855951669Z" level=info msg="CreateContainer within sandbox \"e9a524c6ffc9279aa03cbc431ae6219cabfb85b30fd68e0cc03a835d8b7d1601\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
May 14 23:52:35.890694 containerd[1955]: time="2025-05-14T23:52:35.890503342Z" level=info msg="CreateContainer within sandbox \"e9a524c6ffc9279aa03cbc431ae6219cabfb85b30fd68e0cc03a835d8b7d1601\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"787e6f719bc6010147572c3e2d7d2a8bcfe5fcbbc8dcbbaf5302cf65eaec5552\""
May 14 23:52:35.891481 containerd[1955]: time="2025-05-14T23:52:35.891176062Z" level=info msg="StartContainer for \"787e6f719bc6010147572c3e2d7d2a8bcfe5fcbbc8dcbbaf5302cf65eaec5552\""
May 14 23:52:35.945401 systemd[1]: Started cri-containerd-787e6f719bc6010147572c3e2d7d2a8bcfe5fcbbc8dcbbaf5302cf65eaec5552.scope - libcontainer container 787e6f719bc6010147572c3e2d7d2a8bcfe5fcbbc8dcbbaf5302cf65eaec5552.
May 14 23:52:36.007153 containerd[1955]: time="2025-05-14T23:52:36.007057998Z" level=info msg="StartContainer for \"787e6f719bc6010147572c3e2d7d2a8bcfe5fcbbc8dcbbaf5302cf65eaec5552\" returns successfully"
May 14 23:52:45.254153 kubelet[3277]: E0514 23:52:45.254010 3277 controller.go:195] "Failed to update lease" err="Put \"https://172.31.28.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-25?timeout=10s\": context deadline exceeded"