Feb 13 16:05:16.165793 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Feb 13 16:05:16.165837 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Feb 13 14:34:20 -00 2025
Feb 13 16:05:16.165863 kernel: KASLR disabled due to lack of seed
Feb 13 16:05:16.165879 kernel: efi: EFI v2.7 by EDK II
Feb 13 16:05:16.165895 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18
Feb 13 16:05:16.165910 kernel: ACPI: Early table checksum verification disabled
Feb 13 16:05:16.165927 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Feb 13 16:05:16.165960 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Feb 13 16:05:16.165977 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 13 16:05:16.165993 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Feb 13 16:05:16.166015 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 13 16:05:16.166031 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Feb 13 16:05:16.166046 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Feb 13 16:05:16.166062 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Feb 13 16:05:16.166080 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 13 16:05:16.166101 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Feb 13 16:05:16.166119 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Feb 13 16:05:16.166164 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Feb 13 16:05:16.166186 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Feb 13 16:05:16.166203 kernel: printk: bootconsole [uart0] enabled
Feb 13 16:05:16.166219 kernel: NUMA: Failed to initialise from firmware
Feb 13 16:05:16.166235 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 16:05:16.166252 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Feb 13 16:05:16.166268 kernel: Zone ranges:
Feb 13 16:05:16.166284 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Feb 13 16:05:16.166300 kernel: DMA32 empty
Feb 13 16:05:16.166322 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Feb 13 16:05:16.166339 kernel: Movable zone start for each node
Feb 13 16:05:16.166355 kernel: Early memory node ranges
Feb 13 16:05:16.166371 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Feb 13 16:05:16.166387 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Feb 13 16:05:16.166403 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Feb 13 16:05:16.166419 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Feb 13 16:05:16.166435 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Feb 13 16:05:16.166451 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Feb 13 16:05:16.166467 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Feb 13 16:05:16.166483 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Feb 13 16:05:16.166499 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 16:05:16.166520 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Feb 13 16:05:16.166537 kernel: psci: probing for conduit method from ACPI.
Feb 13 16:05:16.166561 kernel: psci: PSCIv1.0 detected in firmware.
Feb 13 16:05:16.166578 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 16:05:16.166596 kernel: psci: Trusted OS migration not required
Feb 13 16:05:16.166617 kernel: psci: SMC Calling Convention v1.1
Feb 13 16:05:16.166636 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 16:05:16.166653 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 16:05:16.166670 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 13 16:05:16.166687 kernel: Detected PIPT I-cache on CPU0
Feb 13 16:05:16.166705 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 16:05:16.166722 kernel: CPU features: detected: Spectre-v2
Feb 13 16:05:16.166739 kernel: CPU features: detected: Spectre-v3a
Feb 13 16:05:16.166756 kernel: CPU features: detected: Spectre-BHB
Feb 13 16:05:16.166773 kernel: CPU features: detected: ARM erratum 1742098
Feb 13 16:05:16.166791 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Feb 13 16:05:16.166812 kernel: alternatives: applying boot alternatives
Feb 13 16:05:16.166832 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=55866785c450f887021047c4ba00d104a5882975060a5fc692d64491b0d81886
Feb 13 16:05:16.166851 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 16:05:16.166868 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 16:05:16.166885 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 16:05:16.166903 kernel: Fallback order for Node 0: 0
Feb 13 16:05:16.166920 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Feb 13 16:05:16.166937 kernel: Policy zone: Normal
Feb 13 16:05:16.166954 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 16:05:16.166971 kernel: software IO TLB: area num 2.
Feb 13 16:05:16.166989 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Feb 13 16:05:16.167011 kernel: Memory: 3820216K/4030464K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 210248K reserved, 0K cma-reserved)
Feb 13 16:05:16.167029 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 16:05:16.167046 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 16:05:16.167064 kernel: rcu: RCU event tracing is enabled.
Feb 13 16:05:16.167082 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 16:05:16.167099 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 16:05:16.167117 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 16:05:16.167149 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 16:05:16.167172 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 16:05:16.167190 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 16:05:16.167207 kernel: GICv3: 96 SPIs implemented
Feb 13 16:05:16.167230 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 16:05:16.167248 kernel: Root IRQ handler: gic_handle_irq
Feb 13 16:05:16.167265 kernel: GICv3: GICv3 features: 16 PPIs
Feb 13 16:05:16.167282 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Feb 13 16:05:16.167299 kernel: ITS [mem 0x10080000-0x1009ffff]
Feb 13 16:05:16.167316 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 16:05:16.167334 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 16:05:16.167351 kernel: GICv3: using LPI property table @0x00000004000d0000
Feb 13 16:05:16.167368 kernel: ITS: Using hypervisor restricted LPI range [128]
Feb 13 16:05:16.167385 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Feb 13 16:05:16.167402 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 16:05:16.167420 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Feb 13 16:05:16.167442 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Feb 13 16:05:16.167459 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Feb 13 16:05:16.167477 kernel: Console: colour dummy device 80x25
Feb 13 16:05:16.167494 kernel: printk: console [tty1] enabled
Feb 13 16:05:16.167512 kernel: ACPI: Core revision 20230628
Feb 13 16:05:16.167530 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Feb 13 16:05:16.167547 kernel: pid_max: default: 32768 minimum: 301
Feb 13 16:05:16.167565 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 16:05:16.167583 kernel: landlock: Up and running.
Feb 13 16:05:16.167605 kernel: SELinux: Initializing.
Feb 13 16:05:16.167623 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 16:05:16.167640 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 16:05:16.167658 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 16:05:16.167675 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 16:05:16.167693 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 16:05:16.167711 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 16:05:16.167729 kernel: Platform MSI: ITS@0x10080000 domain created
Feb 13 16:05:16.167746 kernel: PCI/MSI: ITS@0x10080000 domain created
Feb 13 16:05:16.167768 kernel: Remapping and enabling EFI services.
Feb 13 16:05:16.167786 kernel: smp: Bringing up secondary CPUs ...
Feb 13 16:05:16.167803 kernel: Detected PIPT I-cache on CPU1
Feb 13 16:05:16.167821 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Feb 13 16:05:16.167838 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Feb 13 16:05:16.167856 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Feb 13 16:05:16.167873 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 16:05:16.167890 kernel: SMP: Total of 2 processors activated.
Feb 13 16:05:16.167908 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 16:05:16.167930 kernel: CPU features: detected: 32-bit EL1 Support
Feb 13 16:05:16.167948 kernel: CPU features: detected: CRC32 instructions
Feb 13 16:05:16.167965 kernel: CPU: All CPU(s) started at EL1
Feb 13 16:05:16.167995 kernel: alternatives: applying system-wide alternatives
Feb 13 16:05:16.168018 kernel: devtmpfs: initialized
Feb 13 16:05:16.168037 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 16:05:16.168055 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 16:05:16.168073 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 16:05:16.168091 kernel: SMBIOS 3.0.0 present.
Feb 13 16:05:16.168109 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Feb 13 16:05:16.168132 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 16:05:16.168179 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 16:05:16.168198 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 16:05:16.168216 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 16:05:16.168235 kernel: audit: initializing netlink subsys (disabled)
Feb 13 16:05:16.168253 kernel: audit: type=2000 audit(0.286:1): state=initialized audit_enabled=0 res=1
Feb 13 16:05:16.168271 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 16:05:16.168296 kernel: cpuidle: using governor menu
Feb 13 16:05:16.168314 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 16:05:16.168333 kernel: ASID allocator initialised with 65536 entries
Feb 13 16:05:16.168351 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 16:05:16.168369 kernel: Serial: AMBA PL011 UART driver
Feb 13 16:05:16.168388 kernel: Modules: 17520 pages in range for non-PLT usage
Feb 13 16:05:16.168406 kernel: Modules: 509040 pages in range for PLT usage
Feb 13 16:05:16.168424 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 16:05:16.168443 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 16:05:16.168466 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 16:05:16.168484 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 16:05:16.168503 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 16:05:16.168521 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 16:05:16.168539 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 16:05:16.168557 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 16:05:16.168576 kernel: ACPI: Added _OSI(Module Device)
Feb 13 16:05:16.168594 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 16:05:16.168612 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 16:05:16.168655 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 16:05:16.168676 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 16:05:16.168694 kernel: ACPI: Interpreter enabled
Feb 13 16:05:16.168713 kernel: ACPI: Using GIC for interrupt routing
Feb 13 16:05:16.168731 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 16:05:16.168749 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Feb 13 16:05:16.169044 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 16:05:16.170203 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 16:05:16.170436 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 16:05:16.170677 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Feb 13 16:05:16.170889 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Feb 13 16:05:16.170916 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Feb 13 16:05:16.170935 kernel: acpiphp: Slot [1] registered
Feb 13 16:05:16.170954 kernel: acpiphp: Slot [2] registered
Feb 13 16:05:16.170973 kernel: acpiphp: Slot [3] registered
Feb 13 16:05:16.170991 kernel: acpiphp: Slot [4] registered
Feb 13 16:05:16.171016 kernel: acpiphp: Slot [5] registered
Feb 13 16:05:16.171035 kernel: acpiphp: Slot [6] registered
Feb 13 16:05:16.171053 kernel: acpiphp: Slot [7] registered
Feb 13 16:05:16.171071 kernel: acpiphp: Slot [8] registered
Feb 13 16:05:16.171090 kernel: acpiphp: Slot [9] registered
Feb 13 16:05:16.171108 kernel: acpiphp: Slot [10] registered
Feb 13 16:05:16.171126 kernel: acpiphp: Slot [11] registered
Feb 13 16:05:16.171172 kernel: acpiphp: Slot [12] registered
Feb 13 16:05:16.171193 kernel: acpiphp: Slot [13] registered
Feb 13 16:05:16.171212 kernel: acpiphp: Slot [14] registered
Feb 13 16:05:16.171237 kernel: acpiphp: Slot [15] registered
Feb 13 16:05:16.171256 kernel: acpiphp: Slot [16] registered
Feb 13 16:05:16.171275 kernel: acpiphp: Slot [17] registered
Feb 13 16:05:16.171293 kernel: acpiphp: Slot [18] registered
Feb 13 16:05:16.171311 kernel: acpiphp: Slot [19] registered
Feb 13 16:05:16.171330 kernel: acpiphp: Slot [20] registered
Feb 13 16:05:16.171348 kernel: acpiphp: Slot [21] registered
Feb 13 16:05:16.171366 kernel: acpiphp: Slot [22] registered
Feb 13 16:05:16.171385 kernel: acpiphp: Slot [23] registered
Feb 13 16:05:16.171429 kernel: acpiphp: Slot [24] registered
Feb 13 16:05:16.171449 kernel: acpiphp: Slot [25] registered
Feb 13 16:05:16.171467 kernel: acpiphp: Slot [26] registered
Feb 13 16:05:16.171486 kernel: acpiphp: Slot [27] registered
Feb 13 16:05:16.171504 kernel: acpiphp: Slot [28] registered
Feb 13 16:05:16.171522 kernel: acpiphp: Slot [29] registered
Feb 13 16:05:16.171540 kernel: acpiphp: Slot [30] registered
Feb 13 16:05:16.171558 kernel: acpiphp: Slot [31] registered
Feb 13 16:05:16.171576 kernel: PCI host bridge to bus 0000:00
Feb 13 16:05:16.171811 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Feb 13 16:05:16.172003 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 16:05:16.177110 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Feb 13 16:05:16.177383 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Feb 13 16:05:16.177636 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Feb 13 16:05:16.177871 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Feb 13 16:05:16.178080 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Feb 13 16:05:16.181513 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 13 16:05:16.181747 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Feb 13 16:05:16.181966 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 16:05:16.183502 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 13 16:05:16.183837 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Feb 13 16:05:16.184043 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Feb 13 16:05:16.185360 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Feb 13 16:05:16.185585 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 16:05:16.185789 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Feb 13 16:05:16.185995 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Feb 13 16:05:16.188315 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Feb 13 16:05:16.188574 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Feb 13 16:05:16.188811 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Feb 13 16:05:16.189017 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Feb 13 16:05:16.189228 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 16:05:16.189417 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Feb 13 16:05:16.189443 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 16:05:16.189463 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 16:05:16.189495 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 16:05:16.189516 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 16:05:16.189535 kernel: iommu: Default domain type: Translated
Feb 13 16:05:16.189554 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 16:05:16.189580 kernel: efivars: Registered efivars operations
Feb 13 16:05:16.189598 kernel: vgaarb: loaded
Feb 13 16:05:16.189617 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 16:05:16.189635 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 16:05:16.189653 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 16:05:16.189672 kernel: pnp: PnP ACPI init
Feb 13 16:05:16.189889 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Feb 13 16:05:16.189917 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 16:05:16.189941 kernel: NET: Registered PF_INET protocol family
Feb 13 16:05:16.189961 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 16:05:16.189980 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 16:05:16.189999 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 16:05:16.190017 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 16:05:16.190036 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 16:05:16.190054 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 16:05:16.190073 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 16:05:16.190091 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 16:05:16.190115 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 16:05:16.190163 kernel: PCI: CLS 0 bytes, default 64
Feb 13 16:05:16.190189 kernel: kvm [1]: HYP mode not available
Feb 13 16:05:16.190208 kernel: Initialise system trusted keyrings
Feb 13 16:05:16.190227 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 16:05:16.190246 kernel: Key type asymmetric registered
Feb 13 16:05:16.190266 kernel: Asymmetric key parser 'x509' registered
Feb 13 16:05:16.190285 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 16:05:16.190303 kernel: io scheduler mq-deadline registered
Feb 13 16:05:16.190329 kernel: io scheduler kyber registered
Feb 13 16:05:16.190349 kernel: io scheduler bfq registered
Feb 13 16:05:16.190597 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Feb 13 16:05:16.190630 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 16:05:16.190649 kernel: ACPI: button: Power Button [PWRB]
Feb 13 16:05:16.190668 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Feb 13 16:05:16.190687 kernel: ACPI: button: Sleep Button [SLPB]
Feb 13 16:05:16.190705 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 16:05:16.190732 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Feb 13 16:05:16.190948 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Feb 13 16:05:16.190975 kernel: printk: console [ttyS0] disabled
Feb 13 16:05:16.190994 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Feb 13 16:05:16.191013 kernel: printk: console [ttyS0] enabled
Feb 13 16:05:16.191031 kernel: printk: bootconsole [uart0] disabled
Feb 13 16:05:16.191050 kernel: thunder_xcv, ver 1.0
Feb 13 16:05:16.191069 kernel: thunder_bgx, ver 1.0
Feb 13 16:05:16.191087 kernel: nicpf, ver 1.0
Feb 13 16:05:16.191111 kernel: nicvf, ver 1.0
Feb 13 16:05:16.193457 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 16:05:16.193679 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T16:05:15 UTC (1739462715)
Feb 13 16:05:16.193706 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 16:05:16.193725 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Feb 13 16:05:16.193744 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 16:05:16.193763 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 16:05:16.193782 kernel: NET: Registered PF_INET6 protocol family
Feb 13 16:05:16.193810 kernel: Segment Routing with IPv6
Feb 13 16:05:16.193830 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 16:05:16.193848 kernel: NET: Registered PF_PACKET protocol family
Feb 13 16:05:16.193867 kernel: Key type dns_resolver registered
Feb 13 16:05:16.193885 kernel: registered taskstats version 1
Feb 13 16:05:16.193903 kernel: Loading compiled-in X.509 certificates
Feb 13 16:05:16.193922 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: d3f151cc07005f6a29244b13ac54c8677429c8f5'
Feb 13 16:05:16.193940 kernel: Key type .fscrypt registered
Feb 13 16:05:16.193958 kernel: Key type fscrypt-provisioning registered
Feb 13 16:05:16.193982 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 16:05:16.194001 kernel: ima: Allocated hash algorithm: sha1
Feb 13 16:05:16.194019 kernel: ima: No architecture policies found
Feb 13 16:05:16.194037 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 16:05:16.194055 kernel: clk: Disabling unused clocks
Feb 13 16:05:16.194073 kernel: Freeing unused kernel memory: 39360K
Feb 13 16:05:16.194092 kernel: Run /init as init process
Feb 13 16:05:16.194110 kernel: with arguments:
Feb 13 16:05:16.194128 kernel: /init
Feb 13 16:05:16.194188 kernel: with environment:
Feb 13 16:05:16.194215 kernel: HOME=/
Feb 13 16:05:16.194234 kernel: TERM=linux
Feb 13 16:05:16.194252 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 16:05:16.194275 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 16:05:16.194298 systemd[1]: Detected virtualization amazon.
Feb 13 16:05:16.194319 systemd[1]: Detected architecture arm64.
Feb 13 16:05:16.194339 systemd[1]: Running in initrd.
Feb 13 16:05:16.194364 systemd[1]: No hostname configured, using default hostname.
Feb 13 16:05:16.194385 systemd[1]: Hostname set to .
Feb 13 16:05:16.194406 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 16:05:16.194426 systemd[1]: Queued start job for default target initrd.target.
Feb 13 16:05:16.194446 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 16:05:16.194467 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 16:05:16.194488 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 16:05:16.194509 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 16:05:16.194534 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 16:05:16.194555 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 16:05:16.194579 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 16:05:16.194600 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 16:05:16.194620 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 16:05:16.194640 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 16:05:16.194660 systemd[1]: Reached target paths.target - Path Units.
Feb 13 16:05:16.194686 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 16:05:16.194706 systemd[1]: Reached target swap.target - Swaps.
Feb 13 16:05:16.194727 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 16:05:16.194747 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 16:05:16.194768 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 16:05:16.194789 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 16:05:16.194810 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 16:05:16.194830 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 16:05:16.194850 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 16:05:16.194876 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 16:05:16.194896 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 16:05:16.194917 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 16:05:16.194938 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 16:05:16.194958 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 16:05:16.194978 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 16:05:16.194998 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 16:05:16.195018 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 16:05:16.195043 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 16:05:16.195107 systemd-journald[251]: Collecting audit messages is disabled.
Feb 13 16:05:16.195369 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 16:05:16.195392 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 16:05:16.195419 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 16:05:16.195441 systemd-journald[251]: Journal started
Feb 13 16:05:16.195479 systemd-journald[251]: Runtime Journal (/run/log/journal/ec297de3a81e88edca699200a00f3d48) is 8.0M, max 75.3M, 67.3M free.
Feb 13 16:05:16.177271 systemd-modules-load[252]: Inserted module 'overlay'
Feb 13 16:05:16.210327 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 16:05:16.210398 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 16:05:16.220198 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 16:05:16.227595 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 16:05:16.227635 kernel: Bridge firewalling registered
Feb 13 16:05:16.226345 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 16:05:16.227120 systemd-modules-load[252]: Inserted module 'br_netfilter'
Feb 13 16:05:16.235176 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 16:05:16.248456 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 16:05:16.262490 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 16:05:16.267320 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 16:05:16.279372 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 16:05:16.302921 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 16:05:16.308953 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 16:05:16.330175 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 16:05:16.333516 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 16:05:16.348517 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 16:05:16.357815 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 16:05:16.377192 dracut-cmdline[286]: dracut-dracut-053
Feb 13 16:05:16.384128 dracut-cmdline[286]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=55866785c450f887021047c4ba00d104a5882975060a5fc692d64491b0d81886
Feb 13 16:05:16.445462 systemd-resolved[288]: Positive Trust Anchors:
Feb 13 16:05:16.445498 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 16:05:16.445559 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 16:05:16.526168 kernel: SCSI subsystem initialized
Feb 13 16:05:16.532177 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 16:05:16.544177 kernel: iscsi: registered transport (tcp)
Feb 13 16:05:16.566617 kernel: iscsi: registered transport (qla4xxx)
Feb 13 16:05:16.566691 kernel: QLogic iSCSI HBA Driver
Feb 13 16:05:16.665172 kernel: random: crng init done
Feb 13 16:05:16.665347 systemd-resolved[288]: Defaulting to hostname 'linux'.
Feb 13 16:05:16.668698 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 16:05:16.671386 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 16:05:16.698218 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 16:05:16.707419 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 16:05:16.746899 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 16:05:16.746974 kernel: device-mapper: uevent: version 1.0.3
Feb 13 16:05:16.748724 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 16:05:16.830160 kernel: raid6: neonx8 gen() 6738 MB/s
Feb 13 16:05:16.832180 kernel: raid6: neonx4 gen() 6514 MB/s
Feb 13 16:05:16.848169 kernel: raid6: neonx2 gen() 5412 MB/s
Feb 13 16:05:16.865170 kernel: raid6: neonx1 gen() 3937 MB/s
Feb 13 16:05:16.882169 kernel: raid6: int64x8 gen() 3808 MB/s
Feb 13 16:05:16.899168 kernel: raid6: int64x4 gen() 3694 MB/s
Feb 13 16:05:16.916170 kernel: raid6: int64x2 gen() 3563 MB/s
Feb 13 16:05:16.933922 kernel: raid6: int64x1 gen() 2772 MB/s
Feb 13 16:05:16.933955 kernel: raid6: using algorithm neonx8 gen() 6738 MB/s
Feb 13 16:05:16.951920 kernel: raid6: .... xor() 4880 MB/s, rmw enabled
Feb 13 16:05:16.951963 kernel: raid6: using neon recovery algorithm
Feb 13 16:05:16.960322 kernel: xor: measuring software checksum speed
Feb 13 16:05:16.960380 kernel: 8regs : 10976 MB/sec
Feb 13 16:05:16.961411 kernel: 32regs : 11949 MB/sec
Feb 13 16:05:16.962595 kernel: arm64_neon : 9581 MB/sec
Feb 13 16:05:16.962627 kernel: xor: using function: 32regs (11949 MB/sec)
Feb 13 16:05:17.048181 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 16:05:17.066981 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 16:05:17.076536 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 16:05:17.115911 systemd-udevd[470]: Using default interface naming scheme 'v255'.
Feb 13 16:05:17.124371 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 16:05:17.136958 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 16:05:17.168640 dracut-pre-trigger[472]: rd.md=0: removing MD RAID activation
Feb 13 16:05:17.223272 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 16:05:17.230441 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 16:05:17.342022 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 16:05:17.364444 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 16:05:17.415244 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 16:05:17.427947 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 16:05:17.433624 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 16:05:17.437683 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 16:05:17.460418 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 16:05:17.501211 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 16:05:17.534176 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 16:05:17.534246 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Feb 13 16:05:17.580819 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 13 16:05:17.581425 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 13 16:05:17.581668 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:dd:0a:a5:70:d3
Feb 13 16:05:17.558075 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 16:05:17.558347 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 16:05:17.561197 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 16:05:17.564247 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 16:05:17.564525 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 16:05:17.580856 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 16:05:17.587061 (udev-worker)[518]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 16:05:17.605599 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 16:05:17.633182 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Feb 13 16:05:17.635165 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 13 16:05:17.642588 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 16:05:17.648338 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 13 16:05:17.653581 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 16:05:17.666244 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 16:05:17.666310 kernel: GPT:9289727 != 16777215
Feb 13 16:05:17.666345 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 16:05:17.666371 kernel: GPT:9289727 != 16777215
Feb 13 16:05:17.667881 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 16:05:17.667934 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 16:05:17.688411 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 16:05:17.743296 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (522)
Feb 13 16:05:17.771459 kernel: BTRFS: device fsid 39fc2625-8d65-490f-9a1f-39e365051e19 devid 1 transid 40 /dev/nvme0n1p3 scanned by (udev-worker) (530)
Feb 13 16:05:17.865131 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Feb 13 16:05:17.882067 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Feb 13 16:05:17.910411 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 16:05:17.923475 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Feb 13 16:05:17.923845 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Feb 13 16:05:17.940511 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 16:05:17.952817 disk-uuid[661]: Primary Header is updated.
Feb 13 16:05:17.952817 disk-uuid[661]: Secondary Entries is updated.
Feb 13 16:05:17.952817 disk-uuid[661]: Secondary Header is updated.
Feb 13 16:05:17.967167 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 16:05:17.975165 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 16:05:18.982290 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 16:05:18.985174 disk-uuid[662]: The operation has completed successfully.
Feb 13 16:05:19.165221 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 16:05:19.165772 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 16:05:19.216470 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 16:05:19.235318 sh[920]: Success
Feb 13 16:05:19.259239 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 16:05:19.373883 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 16:05:19.380737 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 16:05:19.388279 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 16:05:19.422471 kernel: BTRFS info (device dm-0): first mount of filesystem 39fc2625-8d65-490f-9a1f-39e365051e19
Feb 13 16:05:19.422545 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 16:05:19.422573 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 16:05:19.424157 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 16:05:19.425336 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 16:05:19.550180 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 16:05:19.574061 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 16:05:19.575669 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 16:05:19.595487 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 16:05:19.598412 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 16:05:19.637471 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c8afbf79-805d-40d9-b4c9-cafa51441c41
Feb 13 16:05:19.637545 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 16:05:19.639255 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 16:05:19.647179 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 16:05:19.666923 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 16:05:19.669195 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem c8afbf79-805d-40d9-b4c9-cafa51441c41
Feb 13 16:05:19.681062 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 16:05:19.693600 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 16:05:19.779260 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 16:05:19.795418 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 16:05:19.841011 systemd-networkd[1112]: lo: Link UP
Feb 13 16:05:19.841034 systemd-networkd[1112]: lo: Gained carrier
Feb 13 16:05:19.845994 systemd-networkd[1112]: Enumeration completed
Feb 13 16:05:19.847840 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 16:05:19.852076 systemd-networkd[1112]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 16:05:19.852095 systemd-networkd[1112]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 16:05:19.860055 systemd-networkd[1112]: eth0: Link UP
Feb 13 16:05:19.860075 systemd-networkd[1112]: eth0: Gained carrier
Feb 13 16:05:19.860092 systemd-networkd[1112]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 16:05:19.860353 systemd[1]: Reached target network.target - Network.
Feb 13 16:05:19.881228 systemd-networkd[1112]: eth0: DHCPv4 address 172.31.24.10/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 16:05:20.031208 ignition[1037]: Ignition 2.19.0
Feb 13 16:05:20.031707 ignition[1037]: Stage: fetch-offline
Feb 13 16:05:20.032283 ignition[1037]: no configs at "/usr/lib/ignition/base.d"
Feb 13 16:05:20.032307 ignition[1037]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 16:05:20.032874 ignition[1037]: Ignition finished successfully
Feb 13 16:05:20.042129 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 16:05:20.051416 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 16:05:20.087340 ignition[1123]: Ignition 2.19.0
Feb 13 16:05:20.087361 ignition[1123]: Stage: fetch
Feb 13 16:05:20.088499 ignition[1123]: no configs at "/usr/lib/ignition/base.d"
Feb 13 16:05:20.088524 ignition[1123]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 16:05:20.088710 ignition[1123]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 16:05:20.101818 ignition[1123]: PUT result: OK
Feb 13 16:05:20.104579 ignition[1123]: parsed url from cmdline: ""
Feb 13 16:05:20.104609 ignition[1123]: no config URL provided
Feb 13 16:05:20.104628 ignition[1123]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 16:05:20.104653 ignition[1123]: no config at "/usr/lib/ignition/user.ign"
Feb 13 16:05:20.104686 ignition[1123]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 16:05:20.106573 ignition[1123]: PUT result: OK
Feb 13 16:05:20.118485 unknown[1123]: fetched base config from "system"
Feb 13 16:05:20.107576 ignition[1123]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 13 16:05:20.118502 unknown[1123]: fetched base config from "system"
Feb 13 16:05:20.110415 ignition[1123]: GET result: OK
Feb 13 16:05:20.118516 unknown[1123]: fetched user config from "aws"
Feb 13 16:05:20.110563 ignition[1123]: parsing config with SHA512: 70c9bc37ae0f86bf677cf1e421bafb2d704fb680a2ccecde6493bf5fd7b247cc1dae7ccd68f4ab874049f41b9464e00665441fa78b3643e38181b6c4a46c6b0f
Feb 13 16:05:20.129241 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 16:05:20.122035 ignition[1123]: fetch: fetch complete
Feb 13 16:05:20.122049 ignition[1123]: fetch: fetch passed
Feb 13 16:05:20.122180 ignition[1123]: Ignition finished successfully
Feb 13 16:05:20.154425 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 16:05:20.181866 ignition[1129]: Ignition 2.19.0
Feb 13 16:05:20.181894 ignition[1129]: Stage: kargs
Feb 13 16:05:20.182568 ignition[1129]: no configs at "/usr/lib/ignition/base.d"
Feb 13 16:05:20.182593 ignition[1129]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 16:05:20.182747 ignition[1129]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 16:05:20.185893 ignition[1129]: PUT result: OK
Feb 13 16:05:20.196099 ignition[1129]: kargs: kargs passed
Feb 13 16:05:20.196438 ignition[1129]: Ignition finished successfully
Feb 13 16:05:20.201522 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 16:05:20.218393 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 16:05:20.239665 ignition[1135]: Ignition 2.19.0
Feb 13 16:05:20.239693 ignition[1135]: Stage: disks
Feb 13 16:05:20.240848 ignition[1135]: no configs at "/usr/lib/ignition/base.d"
Feb 13 16:05:20.240874 ignition[1135]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 16:05:20.241022 ignition[1135]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 16:05:20.242780 ignition[1135]: PUT result: OK
Feb 13 16:05:20.252090 ignition[1135]: disks: disks passed
Feb 13 16:05:20.252290 ignition[1135]: Ignition finished successfully
Feb 13 16:05:20.256547 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 16:05:20.261337 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 16:05:20.265585 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 16:05:20.267783 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 16:05:20.269585 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 16:05:20.271413 systemd[1]: Reached target basic.target - Basic System.
Feb 13 16:05:20.286462 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 16:05:20.327750 systemd-fsck[1143]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 16:05:20.333921 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 16:05:20.344427 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 16:05:20.439242 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 1daf3470-d909-4a02-84d2-f6d9b0a5b55c r/w with ordered data mode. Quota mode: none.
Feb 13 16:05:20.441586 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 16:05:20.444900 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 16:05:20.468335 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 16:05:20.483456 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 16:05:20.487910 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 16:05:20.488006 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 16:05:20.488053 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 16:05:20.503887 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 16:05:20.517469 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 16:05:20.528186 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1162)
Feb 13 16:05:20.532658 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c8afbf79-805d-40d9-b4c9-cafa51441c41
Feb 13 16:05:20.532709 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 16:05:20.532736 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 16:05:20.539172 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 16:05:20.542325 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 16:05:20.927064 initrd-setup-root[1187]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 16:05:20.947095 initrd-setup-root[1194]: cut: /sysroot/etc/group: No such file or directory
Feb 13 16:05:20.955740 initrd-setup-root[1201]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 16:05:20.964510 initrd-setup-root[1208]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 16:05:21.229359 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 16:05:21.239387 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 16:05:21.243394 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 16:05:21.267199 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 16:05:21.269485 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem c8afbf79-805d-40d9-b4c9-cafa51441c41
Feb 13 16:05:21.304658 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 16:05:21.314847 ignition[1276]: INFO : Ignition 2.19.0
Feb 13 16:05:21.314847 ignition[1276]: INFO : Stage: mount
Feb 13 16:05:21.318086 ignition[1276]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 16:05:21.318086 ignition[1276]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 16:05:21.318086 ignition[1276]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 16:05:21.324872 ignition[1276]: INFO : PUT result: OK
Feb 13 16:05:21.329008 ignition[1276]: INFO : mount: mount passed
Feb 13 16:05:21.341429 ignition[1276]: INFO : Ignition finished successfully
Feb 13 16:05:21.331996 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 16:05:21.349494 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 16:05:21.360310 systemd-networkd[1112]: eth0: Gained IPv6LL
Feb 13 16:05:21.447614 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 16:05:21.482177 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1288)
Feb 13 16:05:21.485958 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c8afbf79-805d-40d9-b4c9-cafa51441c41
Feb 13 16:05:21.486006 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 16:05:21.486033 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 16:05:21.492172 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 16:05:21.495555 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 16:05:21.529736 ignition[1305]: INFO : Ignition 2.19.0 Feb 13 16:05:21.532568 ignition[1305]: INFO : Stage: files Feb 13 16:05:21.532568 ignition[1305]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 16:05:21.532568 ignition[1305]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 16:05:21.532568 ignition[1305]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 16:05:21.540704 ignition[1305]: INFO : PUT result: OK Feb 13 16:05:21.544599 ignition[1305]: DEBUG : files: compiled without relabeling support, skipping Feb 13 16:05:21.549194 ignition[1305]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 16:05:21.549194 ignition[1305]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 16:05:21.567095 ignition[1305]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 16:05:21.569784 ignition[1305]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 16:05:21.569784 ignition[1305]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 16:05:21.569467 unknown[1305]: wrote ssh authorized keys file for user: core Feb 13 16:05:21.576622 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 13 16:05:21.579888 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 13 16:05:21.584194 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 13 16:05:21.584194 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Feb 13 16:05:21.682411 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 13 16:05:21.859334 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 13 16:05:21.862846 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 16:05:21.862846 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Feb 13 16:05:22.338516 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 13 16:05:22.499362 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 16:05:22.502701 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Feb 13 16:05:22.506081 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 16:05:22.509310 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 16:05:22.512661 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 16:05:22.515768 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 
16:05:22.521411 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 16:05:22.521411 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 16:05:22.521411 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 16:05:22.521411 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 16:05:22.521411 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 16:05:22.521411 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Feb 13 16:05:22.521411 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Feb 13 16:05:22.521411 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Feb 13 16:05:22.521411 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Feb 13 16:05:22.972626 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Feb 13 16:05:23.271602 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Feb 13 16:05:23.275647 ignition[1305]: INFO : files: op(d): [started] processing unit "containerd.service" Feb 13 16:05:23.275647 ignition[1305]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 13 16:05:23.275647 ignition[1305]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 13 16:05:23.275647 ignition[1305]: INFO : files: op(d): [finished] processing unit "containerd.service" Feb 13 16:05:23.275647 ignition[1305]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Feb 13 16:05:23.275647 ignition[1305]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 16:05:23.304819 ignition[1305]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 16:05:23.304819 ignition[1305]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Feb 13 16:05:23.304819 ignition[1305]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Feb 13 16:05:23.304819 ignition[1305]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 16:05:23.304819 ignition[1305]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 16:05:23.304819 ignition[1305]: INFO : files: createResultFile: 
createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 16:05:23.304819 ignition[1305]: INFO : files: files passed Feb 13 16:05:23.304819 ignition[1305]: INFO : Ignition finished successfully Feb 13 16:05:23.289206 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 16:05:23.316925 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 16:05:23.331650 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 16:05:23.351800 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 16:05:23.354009 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 16:05:23.366007 initrd-setup-root-after-ignition[1334]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 16:05:23.369381 initrd-setup-root-after-ignition[1334]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 16:05:23.372316 initrd-setup-root-after-ignition[1338]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 16:05:23.376824 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 16:05:23.381545 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 16:05:23.396375 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 16:05:23.457008 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 16:05:23.457260 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 16:05:23.461742 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 16:05:23.464448 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 16:05:23.466406 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 16:05:23.486513 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 16:05:23.513114 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 16:05:23.528560 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 16:05:23.551358 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 16:05:23.555670 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 16:05:23.557270 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 16:05:23.557520 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 16:05:23.557746 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 16:05:23.558411 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 16:05:23.558723 systemd[1]: Stopped target basic.target - Basic System. Feb 13 16:05:23.559015 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 16:05:23.559343 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 16:05:23.559606 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 16:05:23.559908 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 16:05:23.560217 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 16:05:23.560798 systemd[1]: Stopped target sysinit.target - System Initialization. 
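The Ignition "files" stage above is declarative config made concrete: plain files (install.sh, nginx.yaml, the NFS manifests), downloaded artifacts (helm, cilium, the kubernetes sysext image), an SSH key for the "core" user, a systemd drop-in, and unit presets, all written under /sysroot before the real root is switched to. The drop-in written by op(e) is just a file at <unit>.d/<name>.conf; a minimal Python sketch of that convention (the drop-in body below is a hypothetical placeholder, since the log records only the path, not the contents):

import pathlib

# Path convention from op(e) above: /etc/systemd/system/<unit>.d/<name>.conf
dropin_dir = pathlib.Path("/etc/systemd/system/containerd.service.d")
dropin_dir.mkdir(parents=True, exist_ok=True)

# Hypothetical body: the log shows only the file name 10-use-cgroupfs.conf,
# which suggests it switches containerd to the cgroupfs cgroup driver.
(dropin_dir / "10-use-cgroupfs.conf").write_text("[Service]\n# ...\n")

Because Ignition runs in the initramfs, no daemon-reload is needed: systemd reads the drop-in when it starts units after switching root.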
Feb 13 16:05:23.561103 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 16:05:23.561674 systemd[1]: Stopped target swap.target - Swaps. Feb 13 16:05:23.561918 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 16:05:23.562116 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 16:05:23.563182 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 16:05:23.563490 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 16:05:23.563709 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 16:05:23.584068 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 16:05:23.584301 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 16:05:23.584514 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 16:05:23.590788 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 16:05:23.591047 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 16:05:23.595119 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 16:05:23.595429 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 16:05:23.620639 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 16:05:23.649447 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 16:05:23.655302 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 16:05:23.656004 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 16:05:23.661872 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 16:05:23.662111 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 16:05:23.672689 ignition[1358]: INFO : Ignition 2.19.0 Feb 13 16:05:23.672689 ignition[1358]: INFO : Stage: umount Feb 13 16:05:23.677608 ignition[1358]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 16:05:23.677608 ignition[1358]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 16:05:23.677608 ignition[1358]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 16:05:23.684694 ignition[1358]: INFO : PUT result: OK Feb 13 16:05:23.689758 ignition[1358]: INFO : umount: umount passed Feb 13 16:05:23.692332 ignition[1358]: INFO : Ignition finished successfully Feb 13 16:05:23.690787 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 16:05:23.698256 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 16:05:23.701086 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 16:05:23.701304 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 16:05:23.704597 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 16:05:23.704770 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 16:05:23.709846 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 16:05:23.711519 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 16:05:23.715258 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 16:05:23.715360 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 16:05:23.723698 systemd[1]: Stopped target network.target - Network. Feb 13 16:05:23.726860 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
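Each Ignition stage (files above, umount here) begins with a PUT to http://169.254.169.254/latest/api/token: the IMDSv2 handshake that trades a PUT for a short-lived session token, which subsequent metadata GETs must present as a header. A minimal stdlib sketch of the same flow (the 6-hour TTL is an arbitrary choice; the agents in this log pin dated API paths such as /2021-01-03/, while /latest/ behaves the same):

import urllib.request

IMDS = "http://169.254.169.254"

# Step 1: PUT for a session token (the "PUT ... attempt #1" lines above).
req = urllib.request.Request(
    f"{IMDS}/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
token = urllib.request.urlopen(req, timeout=2).read().decode()

# Step 2: GET metadata, presenting the token.
req = urllib.request.Request(
    f"{IMDS}/latest/meta-data/instance-id",
    headers={"X-aws-ec2-metadata-token": token},
)
print(urllib.request.urlopen(req, timeout=2).read().decode())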
Feb 13 16:05:23.728694 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 16:05:23.731345 systemd[1]: Stopped target paths.target - Path Units. Feb 13 16:05:23.743017 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 16:05:23.746422 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 16:05:23.755734 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 16:05:23.757386 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 16:05:23.759185 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 16:05:23.759268 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 16:05:23.761096 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 16:05:23.761188 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 16:05:23.763034 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 16:05:23.763119 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 16:05:23.765509 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 16:05:23.765991 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 16:05:23.771220 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 16:05:23.773752 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 16:05:23.794201 systemd-networkd[1112]: eth0: DHCPv6 lease lost Feb 13 16:05:23.797910 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 16:05:23.799677 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 16:05:23.799895 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 16:05:23.804863 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 16:05:23.805601 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 16:05:23.810930 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 16:05:23.813616 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 16:05:23.820533 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 16:05:23.820684 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 16:05:23.823431 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 16:05:23.823543 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 16:05:23.838453 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 16:05:23.842431 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 16:05:23.842564 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 16:05:23.851792 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 16:05:23.851898 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 16:05:23.854508 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 16:05:23.854589 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 16:05:23.857076 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 16:05:23.857174 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 16:05:23.859805 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Feb 13 16:05:23.892875 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 16:05:23.895187 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 16:05:23.903248 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 16:05:23.903543 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 16:05:23.911533 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 16:05:23.911624 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 16:05:23.915249 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 16:05:23.915601 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 16:05:23.919307 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 16:05:23.919396 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 16:05:23.923063 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 16:05:23.923175 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 16:05:23.945527 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 16:05:23.947839 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 16:05:23.947953 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 16:05:23.950355 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 16:05:23.950438 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 16:05:23.952689 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 16:05:23.952767 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 16:05:23.955053 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 16:05:23.955128 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 16:05:23.957925 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 16:05:23.958235 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 16:05:23.977700 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 16:05:23.977882 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 16:05:23.983087 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 16:05:24.013452 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 16:05:24.055937 systemd[1]: Switching root. Feb 13 16:05:24.092066 systemd-journald[251]: Journal stopped Feb 13 16:05:26.605273 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). 
Feb 13 16:05:26.605401 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 16:05:26.605452 kernel: SELinux: policy capability open_perms=1 Feb 13 16:05:26.605493 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 16:05:26.605530 kernel: SELinux: policy capability always_check_network=0 Feb 13 16:05:26.605562 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 16:05:26.605595 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 16:05:26.605627 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 16:05:26.605659 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 16:05:26.605687 kernel: audit: type=1403 audit(1739462724.814:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 16:05:26.605728 systemd[1]: Successfully loaded SELinux policy in 58.109ms. Feb 13 16:05:26.605782 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.664ms. Feb 13 16:05:26.605815 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 16:05:26.605851 systemd[1]: Detected virtualization amazon. Feb 13 16:05:26.605892 systemd[1]: Detected architecture arm64. Feb 13 16:05:26.605924 systemd[1]: Detected first boot. Feb 13 16:05:26.605958 systemd[1]: Initializing machine ID from VM UUID. Feb 13 16:05:26.605990 zram_generator::config[1419]: No configuration found. Feb 13 16:05:26.606026 systemd[1]: Populated /etc with preset unit settings. Feb 13 16:05:26.606057 systemd[1]: Queued start job for default target multi-user.target. Feb 13 16:05:26.606090 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Feb 13 16:05:26.606127 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 16:05:26.608268 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 16:05:26.608312 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 16:05:26.608347 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 16:05:26.608382 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 16:05:26.608415 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 16:05:26.608446 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 16:05:26.608478 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 16:05:26.608510 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 16:05:26.608549 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 16:05:26.608601 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 16:05:26.608635 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 16:05:26.608669 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 16:05:26.608702 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
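The block above is the first systemd in the real root coming up: SELinux policy loaded and /dev, /run relabeled, virtualization probed as "amazon", first boot detected, and the machine ID derived from the VM's UUID. Those same facts can be read back from a running system; a small sketch, assuming selinuxfs is mounted at its usual path and systemd-detect-virt is on PATH:

import pathlib, subprocess

# "1" when enforcing, "0" when permissive.
enforce = pathlib.Path("/sys/fs/selinux/enforce")
if enforce.exists():
    print("selinux enforcing:", enforce.read_text().strip())

# The ID produced by "Initializing machine ID from VM UUID" above.
print("machine-id:", pathlib.Path("/etc/machine-id").read_text().strip())

# Should print "amazon" on this instance, matching the detection above.
virt = subprocess.run(["systemd-detect-virt"], capture_output=True, text=True)
print("virt:", virt.stdout.strip())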
Feb 13 16:05:26.608732 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 16:05:26.608764 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 16:05:26.608795 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 16:05:26.608827 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 16:05:26.608864 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 16:05:26.608898 systemd[1]: Reached target slices.target - Slice Units. Feb 13 16:05:26.608931 systemd[1]: Reached target swap.target - Swaps. Feb 13 16:05:26.608962 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 16:05:26.608993 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 16:05:26.609025 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 16:05:26.609055 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 16:05:26.609087 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 16:05:26.609121 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 16:05:26.609175 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 16:05:26.609208 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 16:05:26.609238 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 16:05:26.609268 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 16:05:26.609298 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 16:05:26.609335 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 16:05:26.609365 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 16:05:26.609395 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 16:05:26.609432 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 16:05:26.609465 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 16:05:26.609498 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 16:05:26.609527 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 16:05:26.609557 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 16:05:26.609587 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 16:05:26.609616 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 16:05:26.609646 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 16:05:26.609677 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 16:05:26.609711 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 16:05:26.609747 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 13 16:05:26.609780 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Feb 13 16:05:26.609809 systemd[1]: Starting systemd-journald.service - Journal Service... 
Feb 13 16:05:26.609838 kernel: loop: module loaded Feb 13 16:05:26.609867 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 16:05:26.609897 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 16:05:26.609926 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 16:05:26.609960 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 16:05:26.609991 kernel: fuse: init (API version 7.39) Feb 13 16:05:26.610023 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 16:05:26.610052 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 16:05:26.610084 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 16:05:26.610113 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 16:05:26.612652 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 16:05:26.612702 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 16:05:26.612733 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 16:05:26.612770 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 16:05:26.612801 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 16:05:26.612886 systemd-journald[1519]: Collecting audit messages is disabled. Feb 13 16:05:26.612939 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 16:05:26.612969 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 16:05:26.613000 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 16:05:26.613029 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 16:05:26.613065 systemd-journald[1519]: Journal started Feb 13 16:05:26.613114 systemd-journald[1519]: Runtime Journal (/run/log/journal/ec297de3a81e88edca699200a00f3d48) is 8.0M, max 75.3M, 67.3M free. Feb 13 16:05:26.617249 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 16:05:26.617317 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 16:05:26.624862 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 16:05:26.628702 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 16:05:26.629064 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 16:05:26.633131 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 16:05:26.636983 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 16:05:26.640104 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 16:05:26.659780 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 16:05:26.667183 kernel: ACPI: bus type drm_connector registered Feb 13 16:05:26.669733 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 16:05:26.675551 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 16:05:26.689446 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 16:05:26.701354 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 16:05:26.712475 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
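systemd-journald above starts with a volatile runtime journal under /run/log/journal (8 MiB used, 75.3 MiB cap) that is flushed to persistent storage later in the boot. Journal entries are structured records rather than plain lines; one way to consume them without extra dependencies is journalctl's JSON output, sketched here:

import json, subprocess

# -b: current boot only; -o json: one JSON object per line; -n 10: last ten.
out = subprocess.run(
    ["journalctl", "-b", "-o", "json", "-n", "10"],
    capture_output=True, text=True, check=True,
).stdout

for line in out.splitlines():
    entry = json.loads(line)
    print(entry.get("_PID"), entry.get("SYSLOG_IDENTIFIER"), entry.get("MESSAGE"))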
Feb 13 16:05:26.717314 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 16:05:26.734609 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 16:05:26.740646 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 16:05:26.745184 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 16:05:26.754708 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 16:05:26.756892 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 16:05:26.772506 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 16:05:26.783097 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 16:05:26.793466 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 16:05:26.799345 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 16:05:26.834016 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 16:05:26.837711 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 16:05:26.849303 systemd-journald[1519]: Time spent on flushing to /var/log/journal/ec297de3a81e88edca699200a00f3d48 is 101.460ms for 901 entries. Feb 13 16:05:26.849303 systemd-journald[1519]: System Journal (/var/log/journal/ec297de3a81e88edca699200a00f3d48) is 8.0M, max 195.6M, 187.6M free. Feb 13 16:05:26.970312 systemd-journald[1519]: Received client request to flush runtime journal. Feb 13 16:05:26.889950 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 16:05:26.903464 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 16:05:26.909338 systemd-tmpfiles[1571]: ACLs are not supported, ignoring. Feb 13 16:05:26.909365 systemd-tmpfiles[1571]: ACLs are not supported, ignoring. Feb 13 16:05:26.928043 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 16:05:26.940468 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 16:05:26.947927 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 16:05:26.966540 udevadm[1580]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 16:05:26.972867 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 16:05:27.019898 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 16:05:27.032512 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 16:05:27.064327 systemd-tmpfiles[1593]: ACLs are not supported, ignoring. Feb 13 16:05:27.064933 systemd-tmpfiles[1593]: ACLs are not supported, ignoring. Feb 13 16:05:27.078423 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 16:05:27.768241 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 16:05:27.778831 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Feb 13 16:05:27.842381 systemd-udevd[1599]: Using default interface naming scheme 'v255'. Feb 13 16:05:27.891335 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 16:05:27.903388 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 16:05:27.933392 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 16:05:28.026186 (udev-worker)[1600]: Network interface NamePolicy= disabled on kernel command line. Feb 13 16:05:28.087666 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Feb 13 16:05:28.116384 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 16:05:28.288456 systemd-networkd[1603]: lo: Link UP Feb 13 16:05:28.289444 systemd-networkd[1603]: lo: Gained carrier Feb 13 16:05:28.293224 systemd-networkd[1603]: Enumeration completed Feb 13 16:05:28.294158 systemd-networkd[1603]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 16:05:28.294268 systemd-networkd[1603]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 16:05:28.296060 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 16:05:28.299048 systemd-networkd[1603]: eth0: Link UP Feb 13 16:05:28.299708 systemd-networkd[1603]: eth0: Gained carrier Feb 13 16:05:28.299747 systemd-networkd[1603]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 16:05:28.311555 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 16:05:28.328445 systemd-networkd[1603]: eth0: DHCPv4 address 172.31.24.10/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 13 16:05:28.332881 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 16:05:28.339516 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1616) Feb 13 16:05:28.522738 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 16:05:28.553787 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 16:05:28.584560 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Feb 13 16:05:28.594430 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 16:05:28.631184 lvm[1728]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 16:05:28.669893 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 16:05:28.672887 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 16:05:28.683441 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 16:05:28.701734 lvm[1731]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 16:05:28.742820 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 16:05:28.746746 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 16:05:28.749435 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 16:05:28.749659 systemd[1]: Reached target local-fs.target - Local File Systems. 
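The DHCPv4 lease above gives eth0 the address 172.31.24.10/20 with gateway 172.31.16.1, both handed out by 172.31.16.1. The /20 prefix is what makes the gateway on-link; worked out with the standard-library ipaddress module:

import ipaddress

iface = ipaddress.ip_interface("172.31.24.10/20")
gateway = ipaddress.ip_address("172.31.16.1")

print(iface.network)                # 172.31.16.0/20
print(gateway in iface.network)     # True: the gateway is on-link
print(iface.network.num_addresses)  # 4096 addresses in a /20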
Feb 13 16:05:28.751946 systemd[1]: Reached target machines.target - Containers. Feb 13 16:05:28.755955 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 16:05:28.764463 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 16:05:28.775505 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 16:05:28.778697 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 16:05:28.783426 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 16:05:28.795601 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 16:05:28.811521 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 16:05:28.821209 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 16:05:28.848661 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 16:05:28.856023 kernel: loop0: detected capacity change from 0 to 114432 Feb 13 16:05:28.870298 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 16:05:28.872041 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 16:05:28.960553 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 16:05:28.994182 kernel: loop1: detected capacity change from 0 to 114328 Feb 13 16:05:29.090180 kernel: loop2: detected capacity change from 0 to 194512 Feb 13 16:05:29.195208 kernel: loop3: detected capacity change from 0 to 52536 Feb 13 16:05:29.245187 kernel: loop4: detected capacity change from 0 to 114432 Feb 13 16:05:29.257188 kernel: loop5: detected capacity change from 0 to 114328 Feb 13 16:05:29.271206 kernel: loop6: detected capacity change from 0 to 194512 Feb 13 16:05:29.304187 kernel: loop7: detected capacity change from 0 to 52536 Feb 13 16:05:29.321257 (sd-merge)[1752]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Feb 13 16:05:29.322192 (sd-merge)[1752]: Merged extensions into '/usr'. Feb 13 16:05:29.330855 systemd[1]: Reloading requested from client PID 1739 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 16:05:29.330890 systemd[1]: Reloading... Feb 13 16:05:29.449192 zram_generator::config[1783]: No configuration found. Feb 13 16:05:29.721568 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 16:05:29.871345 systemd[1]: Reloading finished in 539 ms. Feb 13 16:05:29.899101 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 16:05:29.914658 systemd[1]: Starting ensure-sysext.service... Feb 13 16:05:29.926645 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 16:05:29.948358 systemd[1]: Reloading requested from client PID 1837 ('systemctl') (unit ensure-sysext.service)... Feb 13 16:05:29.948392 systemd[1]: Reloading... Feb 13 16:05:29.992373 systemd-tmpfiles[1838]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
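The loop0-loop7 devices and (sd-merge) lines above are systemd-sysext merging the four extension images Ignition staged earlier (containerd-flatcar, docker-flatcar, kubernetes, oem-ami): each .raw is a squashfs attached to a loop device and overlaid onto /usr, followed by the daemon reload logged above. The merged state can be inspected afterwards; a sketch using the systemd-sysext CLI plus a listing of the activation symlinks:

import pathlib, subprocess

# Shows each hierarchy (e.g. /usr) and the extensions merged into it.
print(subprocess.run(["systemd-sysext", "status"],
                     capture_output=True, text=True).stdout)

# Activation links as written by Ignition, e.g.
# kubernetes.raw -> /opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw
for link in sorted(pathlib.Path("/etc/extensions").iterdir()):
    print(link.name, "->", link.resolve())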
Feb 13 16:05:29.993096 systemd-tmpfiles[1838]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 16:05:29.995856 systemd-tmpfiles[1838]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 16:05:29.996775 systemd-tmpfiles[1838]: ACLs are not supported, ignoring. Feb 13 16:05:29.997930 systemd-tmpfiles[1838]: ACLs are not supported, ignoring. Feb 13 16:05:30.017294 systemd-tmpfiles[1838]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 16:05:30.017484 systemd-tmpfiles[1838]: Skipping /boot Feb 13 16:05:30.043394 systemd-tmpfiles[1838]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 16:05:30.043422 systemd-tmpfiles[1838]: Skipping /boot Feb 13 16:05:30.152166 zram_generator::config[1872]: No configuration found. Feb 13 16:05:30.195706 ldconfig[1735]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 16:05:30.257353 systemd-networkd[1603]: eth0: Gained IPv6LL Feb 13 16:05:30.393557 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 16:05:30.541911 systemd[1]: Reloading finished in 592 ms. Feb 13 16:05:30.571350 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 16:05:30.574716 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 16:05:30.583161 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 16:05:30.604547 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 16:05:30.611458 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 16:05:30.622433 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 16:05:30.636385 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 16:05:30.648061 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 16:05:30.672007 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 16:05:30.681855 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 16:05:30.702579 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 16:05:30.720226 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 16:05:30.724973 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 16:05:30.729543 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 16:05:30.738004 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 16:05:30.743175 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 16:05:30.757854 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 16:05:30.759021 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 16:05:30.772582 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Feb 13 16:05:30.783817 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 16:05:30.792986 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 16:05:30.795432 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 16:05:30.810760 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 16:05:30.821675 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 16:05:30.822054 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 16:05:30.833377 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 16:05:30.834924 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 16:05:30.844778 augenrules[1966]: No rules Feb 13 16:05:30.847881 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 16:05:30.865489 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 16:05:30.870742 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 16:05:30.876651 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 16:05:30.892024 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 16:05:30.910596 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 16:05:30.912939 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 16:05:30.913596 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 16:05:30.920430 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 16:05:30.920804 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 16:05:30.928305 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 16:05:30.948803 systemd[1]: Finished ensure-sysext.service. Feb 13 16:05:30.957536 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 16:05:30.957903 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 16:05:30.966804 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 16:05:30.969197 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 16:05:30.988705 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 16:05:30.989292 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 16:05:30.994077 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 16:05:31.007976 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 16:05:31.008172 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 16:05:31.008249 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 16:05:31.021223 systemd-resolved[1936]: Positive Trust Anchors: Feb 13 16:05:31.021253 systemd-resolved[1936]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 16:05:31.021329 systemd-resolved[1936]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 16:05:31.030619 systemd-resolved[1936]: Defaulting to hostname 'linux'. Feb 13 16:05:31.034044 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 16:05:31.036271 systemd[1]: Reached target network.target - Network. Feb 13 16:05:31.037972 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 16:05:31.040131 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 16:05:31.042366 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 16:05:31.044504 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 16:05:31.046873 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 16:05:31.049545 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 16:05:31.051756 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 16:05:31.054089 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 16:05:31.056517 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 16:05:31.056591 systemd[1]: Reached target paths.target - Path Units. Feb 13 16:05:31.058257 systemd[1]: Reached target timers.target - Timer Units. Feb 13 16:05:31.061359 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 16:05:31.065982 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 16:05:31.070600 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 16:05:31.081027 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 16:05:31.085879 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 16:05:31.088795 systemd[1]: Reached target basic.target - Basic System. Feb 13 16:05:31.091042 systemd[1]: System is tainted: cgroupsv1 Feb 13 16:05:31.091113 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 16:05:31.091302 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 16:05:31.098392 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 16:05:31.103946 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 16:05:31.118393 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 16:05:31.124831 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 16:05:31.139518 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
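systemd-resolved's "Positive Trust Anchors" entry above is the DNSSEC root trust anchor: a DS record for the root zone (owner "."), key tag 20326 (the 2017 root KSK), algorithm 8 (RSA/SHA-256), digest type 2 (SHA-256, hence the 64-hex-character digest). Pulled apart field by field:

# The record exactly as logged by systemd-resolved above.
record = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

owner, _cls, rtype, key_tag, algorithm, digest_type, digest = record.split()
assert rtype == "DS" and len(digest) == 64  # SHA-256 = 32 bytes = 64 hex chars
print(f"key tag {key_tag}, algorithm {algorithm} (RSASHA256), "
      f"digest type {digest_type} (SHA-256)")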
Feb 13 16:05:31.142106 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 16:05:31.148388 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:05:31.157999 jq[2005]: false Feb 13 16:05:31.164089 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 16:05:31.182894 systemd[1]: Started ntpd.service - Network Time Service. Feb 13 16:05:31.194710 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 16:05:31.209948 dbus-daemon[2004]: [system] SELinux support is enabled Feb 13 16:05:31.216315 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 16:05:31.225053 dbus-daemon[2004]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1603 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 13 16:05:31.243288 systemd[1]: Starting setup-oem.service - Setup OEM... Feb 13 16:05:31.263076 extend-filesystems[2006]: Found loop4 Feb 13 16:05:31.272411 extend-filesystems[2006]: Found loop5 Feb 13 16:05:31.272411 extend-filesystems[2006]: Found loop6 Feb 13 16:05:31.272411 extend-filesystems[2006]: Found loop7 Feb 13 16:05:31.272411 extend-filesystems[2006]: Found nvme0n1 Feb 13 16:05:31.272411 extend-filesystems[2006]: Found nvme0n1p1 Feb 13 16:05:31.272411 extend-filesystems[2006]: Found nvme0n1p2 Feb 13 16:05:31.272411 extend-filesystems[2006]: Found nvme0n1p3 Feb 13 16:05:31.272411 extend-filesystems[2006]: Found usr Feb 13 16:05:31.272411 extend-filesystems[2006]: Found nvme0n1p4 Feb 13 16:05:31.272411 extend-filesystems[2006]: Found nvme0n1p6 Feb 13 16:05:31.272411 extend-filesystems[2006]: Found nvme0n1p7 Feb 13 16:05:31.272411 extend-filesystems[2006]: Found nvme0n1p9 Feb 13 16:05:31.272411 extend-filesystems[2006]: Checking size of /dev/nvme0n1p9 Feb 13 16:05:31.266436 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 16:05:31.320660 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 16:05:31.338270 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 16:05:31.342099 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 16:05:31.362878 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 16:05:31.368579 ntpd[2011]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 13:58:42 UTC 2025 (1): Starting Feb 13 16:05:31.371764 ntpd[2011]: 13 Feb 16:05:31 ntpd[2011]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 13:58:42 UTC 2025 (1): Starting Feb 13 16:05:31.371764 ntpd[2011]: 13 Feb 16:05:31 ntpd[2011]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 16:05:31.371764 ntpd[2011]: 13 Feb 16:05:31 ntpd[2011]: ---------------------------------------------------- Feb 13 16:05:31.371764 ntpd[2011]: 13 Feb 16:05:31 ntpd[2011]: ntp-4 is maintained by Network Time Foundation, Feb 13 16:05:31.371764 ntpd[2011]: 13 Feb 16:05:31 ntpd[2011]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 16:05:31.371764 ntpd[2011]: 13 Feb 16:05:31 ntpd[2011]: corporation. 
Support and training for ntp-4 are Feb 13 16:05:31.371764 ntpd[2011]: 13 Feb 16:05:31 ntpd[2011]: available at https://www.nwtime.org/support Feb 13 16:05:31.371764 ntpd[2011]: 13 Feb 16:05:31 ntpd[2011]: ---------------------------------------------------- Feb 13 16:05:31.368639 ntpd[2011]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 16:05:31.420753 ntpd[2011]: 13 Feb 16:05:31 ntpd[2011]: proto: precision = 0.096 usec (-23) Feb 13 16:05:31.420753 ntpd[2011]: 13 Feb 16:05:31 ntpd[2011]: basedate set to 2025-02-01 Feb 13 16:05:31.420753 ntpd[2011]: 13 Feb 16:05:31 ntpd[2011]: gps base set to 2025-02-02 (week 2352) Feb 13 16:05:31.420753 ntpd[2011]: 13 Feb 16:05:31 ntpd[2011]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 16:05:31.420753 ntpd[2011]: 13 Feb 16:05:31 ntpd[2011]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 16:05:31.420753 ntpd[2011]: 13 Feb 16:05:31 ntpd[2011]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 16:05:31.420753 ntpd[2011]: 13 Feb 16:05:31 ntpd[2011]: Listen normally on 3 eth0 172.31.24.10:123 Feb 13 16:05:31.420753 ntpd[2011]: 13 Feb 16:05:31 ntpd[2011]: Listen normally on 4 lo [::1]:123 Feb 13 16:05:31.420753 ntpd[2011]: 13 Feb 16:05:31 ntpd[2011]: Listen normally on 5 eth0 [fe80::4dd:aff:fea5:70d3%2]:123 Feb 13 16:05:31.420753 ntpd[2011]: 13 Feb 16:05:31 ntpd[2011]: Listening on routing socket on fd #22 for interface updates Feb 13 16:05:31.420753 ntpd[2011]: 13 Feb 16:05:31 ntpd[2011]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 16:05:31.420753 ntpd[2011]: 13 Feb 16:05:31 ntpd[2011]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 16:05:31.391205 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 16:05:31.368660 ntpd[2011]: ---------------------------------------------------- Feb 13 16:05:31.398599 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 16:05:31.368680 ntpd[2011]: ntp-4 is maintained by Network Time Foundation, Feb 13 16:05:31.368698 ntpd[2011]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 16:05:31.368717 ntpd[2011]: corporation. Support and training for ntp-4 are Feb 13 16:05:31.368736 ntpd[2011]: available at https://www.nwtime.org/support Feb 13 16:05:31.368754 ntpd[2011]: ---------------------------------------------------- Feb 13 16:05:31.386878 ntpd[2011]: proto: precision = 0.096 usec (-23) Feb 13 16:05:31.388785 ntpd[2011]: basedate set to 2025-02-01 Feb 13 16:05:31.436385 extend-filesystems[2006]: Resized partition /dev/nvme0n1p9 Feb 13 16:05:31.388816 ntpd[2011]: gps base set to 2025-02-02 (week 2352) Feb 13 16:05:31.393780 ntpd[2011]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 16:05:31.393854 ntpd[2011]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 16:05:31.394118 ntpd[2011]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 16:05:31.395337 ntpd[2011]: Listen normally on 3 eth0 172.31.24.10:123 Feb 13 16:05:31.395421 ntpd[2011]: Listen normally on 4 lo [::1]:123 Feb 13 16:05:31.395508 ntpd[2011]: Listen normally on 5 eth0 [fe80::4dd:aff:fea5:70d3%2]:123 Feb 13 16:05:31.395582 ntpd[2011]: Listening on routing socket on fd #22 for interface updates Feb 13 16:05:31.401869 ntpd[2011]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 16:05:31.401919 ntpd[2011]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 16:05:31.448725 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
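ntpd above binds UDP port 123 on lo and eth0 and starts in the kernel's unsynchronized state (the TIME_ERROR 0x41 reports) until it can discipline the clock. The wire format is compact enough to query by hand; a minimal SNTP client sketch (the server name here is an arbitrary public pool, not taken from this host's configuration):

import socket, struct, time

NTP_UNIX_DELTA = 2208988800  # seconds from 1900-01-01 (NTP epoch) to 1970-01-01

packet = bytearray(48)
packet[0] = (0 << 6) | (4 << 3) | 3  # LI=0, version=4, mode=3 (client)

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
    s.settimeout(5)
    s.sendto(packet, ("pool.ntp.org", 123))
    data, _ = s.recvfrom(48)

# Transmit timestamp: 32-bit seconds field at byte offset 40.
secs = struct.unpack("!I", data[40:44])[0] - NTP_UNIX_DELTA
print(time.strftime("%Y-%m-%d %H:%M:%SZ", time.gmtime(secs)))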
Feb 13 16:05:31.455382 extend-filesystems[2045]: resize2fs 1.47.1 (20-May-2024) Feb 13 16:05:31.473263 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 13 16:05:31.455893 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 16:05:31.461414 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 16:05:31.461942 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 16:05:31.479261 coreos-metadata[2002]: Feb 13 16:05:31.476 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 16:05:31.479261 coreos-metadata[2002]: Feb 13 16:05:31.476 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Feb 13 16:05:31.479261 coreos-metadata[2002]: Feb 13 16:05:31.476 INFO Fetch successful Feb 13 16:05:31.479261 coreos-metadata[2002]: Feb 13 16:05:31.476 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Feb 13 16:05:31.479261 coreos-metadata[2002]: Feb 13 16:05:31.476 INFO Fetch successful Feb 13 16:05:31.479261 coreos-metadata[2002]: Feb 13 16:05:31.476 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Feb 13 16:05:31.478660 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 16:05:31.498168 coreos-metadata[2002]: Feb 13 16:05:31.480 INFO Fetch successful Feb 13 16:05:31.498168 coreos-metadata[2002]: Feb 13 16:05:31.480 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Feb 13 16:05:31.498168 coreos-metadata[2002]: Feb 13 16:05:31.495 INFO Fetch successful Feb 13 16:05:31.498168 coreos-metadata[2002]: Feb 13 16:05:31.495 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Feb 13 16:05:31.498168 coreos-metadata[2002]: Feb 13 16:05:31.495 INFO Fetch failed with 404: resource not found Feb 13 16:05:31.498168 coreos-metadata[2002]: Feb 13 16:05:31.495 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Feb 13 16:05:31.486192 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 16:05:31.498686 jq[2036]: true Feb 13 16:05:31.513017 coreos-metadata[2002]: Feb 13 16:05:31.501 INFO Fetch successful Feb 13 16:05:31.513017 coreos-metadata[2002]: Feb 13 16:05:31.501 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Feb 13 16:05:31.513017 coreos-metadata[2002]: Feb 13 16:05:31.504 INFO Fetch successful Feb 13 16:05:31.513017 coreos-metadata[2002]: Feb 13 16:05:31.504 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Feb 13 16:05:31.513017 coreos-metadata[2002]: Feb 13 16:05:31.512 INFO Fetch successful Feb 13 16:05:31.513017 coreos-metadata[2002]: Feb 13 16:05:31.512 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Feb 13 16:05:31.515747 coreos-metadata[2002]: Feb 13 16:05:31.514 INFO Fetch successful Feb 13 16:05:31.515747 coreos-metadata[2002]: Feb 13 16:05:31.514 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Feb 13 16:05:31.518754 coreos-metadata[2002]: Feb 13 16:05:31.516 INFO Fetch successful Feb 13 16:05:31.534733 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Feb 13 16:05:31.548496 (ntainerd)[2052]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 16:05:31.635474 jq[2057]: true Feb 13 16:05:31.640998 tar[2048]: linux-arm64/helm Feb 13 16:05:31.638560 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 16:05:31.638644 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 16:05:31.641239 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 16:05:31.641287 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 16:05:31.654632 dbus-daemon[2004]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 16:05:31.697736 update_engine[2033]: I20250213 16:05:31.696986 2033 main.cc:92] Flatcar Update Engine starting Feb 13 16:05:31.712637 update_engine[2033]: I20250213 16:05:31.712548 2033 update_check_scheduler.cc:74] Next update check in 9m47s Feb 13 16:05:31.724298 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 13 16:05:31.720210 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Feb 13 16:05:31.724864 systemd[1]: Finished setup-oem.service - Setup OEM. Feb 13 16:05:31.727251 systemd[1]: Started update-engine.service - Update Engine. Feb 13 16:05:31.753102 extend-filesystems[2045]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 13 16:05:31.753102 extend-filesystems[2045]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 16:05:31.753102 extend-filesystems[2045]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Feb 13 16:05:31.764688 extend-filesystems[2006]: Resized filesystem in /dev/nvme0n1p9 Feb 13 16:05:31.758429 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Feb 13 16:05:31.775665 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 16:05:31.783708 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 16:05:31.790220 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 16:05:31.790777 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 16:05:31.864812 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 16:05:31.868878 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 16:05:31.888939 systemd-logind[2030]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 16:05:31.888990 systemd-logind[2030]: Watching system buttons on /dev/input/event1 (Sleep Button) Feb 13 16:05:31.891411 systemd-logind[2030]: New seat seat0. Feb 13 16:05:31.900116 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 16:05:32.017216 bash[2122]: Updated "/home/core/.ssh/authorized_keys" Feb 13 16:05:32.025050 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 16:05:32.046772 systemd[1]: Starting sshkeys.service... 
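The resize2fs run above grows the root ext4 filesystem on-line, while mounted at /, from 553472 to 1489915 blocks of 4 KiB, expanding the filesystem to fill its partition. The arithmetic:

BLOCK = 4096  # "(4k) blocks" per the resize2fs output above

before = 553_472 * BLOCK    # 2,267,021,312 bytes
after = 1_489_915 * BLOCK   # 6,102,691,840 bytes

for label, n in (("before", before), ("after", after)):
    print(f"{label}: {n:,} bytes = {n / 2**30:.2f} GiB")  # 2.11 -> 5.68 GiB

The resize itself amounts to running resize2fs against the mounted device (/dev/nvme0n1p9 here); ext4 supports growing in place, which is why no unmount or reboot appears in the log.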
Feb 13 16:05:32.118749 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (2105) Feb 13 16:05:32.151219 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 16:05:32.160668 locksmithd[2091]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 16:05:32.163412 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 16:05:32.296071 amazon-ssm-agent[2089]: Initializing new seelog logger Feb 13 16:05:32.296071 amazon-ssm-agent[2089]: New Seelog Logger Creation Complete Feb 13 16:05:32.296071 amazon-ssm-agent[2089]: 2025/02/13 16:05:32 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 16:05:32.296071 amazon-ssm-agent[2089]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 16:05:32.296071 amazon-ssm-agent[2089]: 2025/02/13 16:05:32 processing appconfig overrides Feb 13 16:05:32.311132 amazon-ssm-agent[2089]: 2025/02/13 16:05:32 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 16:05:32.311132 amazon-ssm-agent[2089]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 16:05:32.311132 amazon-ssm-agent[2089]: 2025/02/13 16:05:32 processing appconfig overrides Feb 13 16:05:32.311132 amazon-ssm-agent[2089]: 2025/02/13 16:05:32 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 16:05:32.311132 amazon-ssm-agent[2089]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 16:05:32.311132 amazon-ssm-agent[2089]: 2025/02/13 16:05:32 processing appconfig overrides Feb 13 16:05:32.311132 amazon-ssm-agent[2089]: 2025-02-13 16:05:32 INFO Proxy environment variables: Feb 13 16:05:32.317382 amazon-ssm-agent[2089]: 2025/02/13 16:05:32 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 16:05:32.317382 amazon-ssm-agent[2089]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 16:05:32.317382 amazon-ssm-agent[2089]: 2025/02/13 16:05:32 processing appconfig overrides Feb 13 16:05:32.319274 containerd[2052]: time="2025-02-13T16:05:32.318102791Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 16:05:32.424068 amazon-ssm-agent[2089]: 2025-02-13 16:05:32 INFO https_proxy: Feb 13 16:05:32.496121 containerd[2052]: time="2025-02-13T16:05:32.495642912Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 16:05:32.502385 containerd[2052]: time="2025-02-13T16:05:32.502295352Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 16:05:32.502385 containerd[2052]: time="2025-02-13T16:05:32.502373064Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 16:05:32.502571 containerd[2052]: time="2025-02-13T16:05:32.502409232Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 16:05:32.504173 containerd[2052]: time="2025-02-13T16:05:32.503670684Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Feb 13 16:05:32.504173 containerd[2052]: time="2025-02-13T16:05:32.503942268Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 16:05:32.505673 containerd[2052]: time="2025-02-13T16:05:32.504893580Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 16:05:32.505673 containerd[2052]: time="2025-02-13T16:05:32.504973956Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 16:05:32.508340 containerd[2052]: time="2025-02-13T16:05:32.506578776Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 16:05:32.508340 containerd[2052]: time="2025-02-13T16:05:32.506640588Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 16:05:32.508340 containerd[2052]: time="2025-02-13T16:05:32.506793924Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 16:05:32.508340 containerd[2052]: time="2025-02-13T16:05:32.506933352Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 16:05:32.508340 containerd[2052]: time="2025-02-13T16:05:32.507523656Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 16:05:32.511431 containerd[2052]: time="2025-02-13T16:05:32.509822664Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 16:05:32.511431 containerd[2052]: time="2025-02-13T16:05:32.510793020Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 16:05:32.511431 containerd[2052]: time="2025-02-13T16:05:32.510836712Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 16:05:32.514623 containerd[2052]: time="2025-02-13T16:05:32.513312924Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 16:05:32.514623 containerd[2052]: time="2025-02-13T16:05:32.513463224Z" level=info msg="metadata content store policy set" policy=shared Feb 13 16:05:32.520638 amazon-ssm-agent[2089]: 2025-02-13 16:05:32 INFO http_proxy: Feb 13 16:05:32.526175 containerd[2052]: time="2025-02-13T16:05:32.522010332Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 16:05:32.526175 containerd[2052]: time="2025-02-13T16:05:32.522128136Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 16:05:32.526175 containerd[2052]: time="2025-02-13T16:05:32.522191424Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Feb 13 16:05:32.526175 containerd[2052]: time="2025-02-13T16:05:32.522227904Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 16:05:32.526175 containerd[2052]: time="2025-02-13T16:05:32.522271128Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 16:05:32.526175 containerd[2052]: time="2025-02-13T16:05:32.522573540Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 16:05:32.526175 containerd[2052]: time="2025-02-13T16:05:32.524353560Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 16:05:32.526175 containerd[2052]: time="2025-02-13T16:05:32.524659416Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 16:05:32.526175 containerd[2052]: time="2025-02-13T16:05:32.524697312Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 16:05:32.526175 containerd[2052]: time="2025-02-13T16:05:32.524728332Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 16:05:32.526175 containerd[2052]: time="2025-02-13T16:05:32.524760648Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 16:05:32.526175 containerd[2052]: time="2025-02-13T16:05:32.524791548Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 16:05:32.526175 containerd[2052]: time="2025-02-13T16:05:32.524823768Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 16:05:32.526175 containerd[2052]: time="2025-02-13T16:05:32.524856060Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 16:05:32.526845 containerd[2052]: time="2025-02-13T16:05:32.524890188Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 16:05:32.526845 containerd[2052]: time="2025-02-13T16:05:32.524923848Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 16:05:32.526845 containerd[2052]: time="2025-02-13T16:05:32.524954796Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 16:05:32.526845 containerd[2052]: time="2025-02-13T16:05:32.524983260Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 16:05:32.526845 containerd[2052]: time="2025-02-13T16:05:32.525023532Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 16:05:32.526845 containerd[2052]: time="2025-02-13T16:05:32.525069636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 16:05:32.526845 containerd[2052]: time="2025-02-13T16:05:32.525100440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 16:05:32.529866 containerd[2052]: time="2025-02-13T16:05:32.525131604Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Feb 13 16:05:32.529866 containerd[2052]: time="2025-02-13T16:05:32.527331672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 16:05:32.529866 containerd[2052]: time="2025-02-13T16:05:32.527369580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 16:05:32.529866 containerd[2052]: time="2025-02-13T16:05:32.527403900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 16:05:32.529866 containerd[2052]: time="2025-02-13T16:05:32.527435400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 16:05:32.529866 containerd[2052]: time="2025-02-13T16:05:32.527468340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 16:05:32.529866 containerd[2052]: time="2025-02-13T16:05:32.527505600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 16:05:32.529866 containerd[2052]: time="2025-02-13T16:05:32.527534604Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 16:05:32.529866 containerd[2052]: time="2025-02-13T16:05:32.527568900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 16:05:32.529866 containerd[2052]: time="2025-02-13T16:05:32.527598336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 16:05:32.529866 containerd[2052]: time="2025-02-13T16:05:32.527645400Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 16:05:32.529866 containerd[2052]: time="2025-02-13T16:05:32.527694588Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 16:05:32.529866 containerd[2052]: time="2025-02-13T16:05:32.527724252Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 16:05:32.529866 containerd[2052]: time="2025-02-13T16:05:32.527767776Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 16:05:32.530543 containerd[2052]: time="2025-02-13T16:05:32.527876364Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 16:05:32.530543 containerd[2052]: time="2025-02-13T16:05:32.527912412Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 16:05:32.530543 containerd[2052]: time="2025-02-13T16:05:32.527939208Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 16:05:32.530543 containerd[2052]: time="2025-02-13T16:05:32.527970900Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 16:05:32.530543 containerd[2052]: time="2025-02-13T16:05:32.527995968Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 16:05:32.530543 containerd[2052]: time="2025-02-13T16:05:32.529543224Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Feb 13 16:05:32.530543 containerd[2052]: time="2025-02-13T16:05:32.529573464Z" level=info msg="NRI interface is disabled by configuration." Feb 13 16:05:32.530543 containerd[2052]: time="2025-02-13T16:05:32.529604520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 16:05:32.537218 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 16:05:32.542299 containerd[2052]: time="2025-02-13T16:05:32.530128668Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 16:05:32.542299 containerd[2052]: time="2025-02-13T16:05:32.532816188Z" level=info msg="Connect containerd service" Feb 13 16:05:32.542299 containerd[2052]: time="2025-02-13T16:05:32.532937772Z" level=info msg="using legacy CRI server" Feb 13 16:05:32.542299 containerd[2052]: time="2025-02-13T16:05:32.532959180Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 16:05:32.542299 containerd[2052]: time="2025-02-13T16:05:32.533109312Z" level=info msg="Get image filesystem path 
\"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 16:05:32.542299 containerd[2052]: time="2025-02-13T16:05:32.534062676Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 16:05:32.542299 containerd[2052]: time="2025-02-13T16:05:32.536336952Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 16:05:32.542299 containerd[2052]: time="2025-02-13T16:05:32.536447472Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 16:05:32.542299 containerd[2052]: time="2025-02-13T16:05:32.536647092Z" level=info msg="Start subscribing containerd event" Feb 13 16:05:32.542299 containerd[2052]: time="2025-02-13T16:05:32.536713284Z" level=info msg="Start recovering state" Feb 13 16:05:32.542299 containerd[2052]: time="2025-02-13T16:05:32.536839524Z" level=info msg="Start event monitor" Feb 13 16:05:32.542299 containerd[2052]: time="2025-02-13T16:05:32.536863836Z" level=info msg="Start snapshots syncer" Feb 13 16:05:32.542299 containerd[2052]: time="2025-02-13T16:05:32.536885460Z" level=info msg="Start cni network conf syncer for default" Feb 13 16:05:32.542299 containerd[2052]: time="2025-02-13T16:05:32.536903796Z" level=info msg="Start streaming server" Feb 13 16:05:32.561173 containerd[2052]: time="2025-02-13T16:05:32.558342516Z" level=info msg="containerd successfully booted in 0.246251s" Feb 13 16:05:32.610241 coreos-metadata[2149]: Feb 13 16:05:32.610 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 16:05:32.618186 coreos-metadata[2149]: Feb 13 16:05:32.613 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Feb 13 16:05:32.618186 coreos-metadata[2149]: Feb 13 16:05:32.617 INFO Fetch successful Feb 13 16:05:32.618186 coreos-metadata[2149]: Feb 13 16:05:32.617 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 13 16:05:32.620165 coreos-metadata[2149]: Feb 13 16:05:32.619 INFO Fetch successful Feb 13 16:05:32.621946 amazon-ssm-agent[2089]: 2025-02-13 16:05:32 INFO no_proxy: Feb 13 16:05:32.626149 unknown[2149]: wrote ssh authorized keys file for user: core Feb 13 16:05:32.699699 dbus-daemon[2004]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 16:05:32.699989 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Feb 13 16:05:32.709902 dbus-daemon[2004]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2084 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 16:05:32.722165 update-ssh-keys[2230]: Updated "/home/core/.ssh/authorized_keys" Feb 13 16:05:32.722590 amazon-ssm-agent[2089]: 2025-02-13 16:05:32 INFO Checking if agent identity type OnPrem can be assumed Feb 13 16:05:32.724534 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 16:05:32.739674 systemd[1]: Starting polkit.service - Authorization Manager... Feb 13 16:05:32.746595 systemd[1]: Finished sshkeys.service. 
Feb 13 16:05:32.805112 polkitd[2236]: Started polkitd version 121 Feb 13 16:05:32.822462 amazon-ssm-agent[2089]: 2025-02-13 16:05:32 INFO Checking if agent identity type EC2 can be assumed Feb 13 16:05:32.841325 polkitd[2236]: Loading rules from directory /etc/polkit-1/rules.d Feb 13 16:05:32.841629 polkitd[2236]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 13 16:05:32.844434 polkitd[2236]: Finished loading, compiling and executing 2 rules Feb 13 16:05:32.846993 dbus-daemon[2004]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 13 16:05:32.847312 systemd[1]: Started polkit.service - Authorization Manager. Feb 13 16:05:32.850079 polkitd[2236]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 16:05:32.917922 systemd-hostnamed[2084]: Hostname set to (transient) Feb 13 16:05:32.918906 systemd-resolved[1936]: System hostname changed to 'ip-172-31-24-10'. Feb 13 16:05:32.923903 amazon-ssm-agent[2089]: 2025-02-13 16:05:32 INFO Agent will take identity from EC2 Feb 13 16:05:32.924011 sshd_keygen[2053]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 16:05:33.024222 amazon-ssm-agent[2089]: 2025-02-13 16:05:32 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 16:05:33.056598 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 16:05:33.075750 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 16:05:33.119897 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 16:05:33.120444 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 16:05:33.123495 amazon-ssm-agent[2089]: 2025-02-13 16:05:32 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 16:05:33.137689 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 16:05:33.206846 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 16:05:33.225493 amazon-ssm-agent[2089]: 2025-02-13 16:05:32 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 16:05:33.225774 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 16:05:33.239710 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 16:05:33.243383 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 16:05:33.323021 amazon-ssm-agent[2089]: 2025-02-13 16:05:32 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Feb 13 16:05:33.422357 amazon-ssm-agent[2089]: 2025-02-13 16:05:32 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Feb 13 16:05:33.524157 amazon-ssm-agent[2089]: 2025-02-13 16:05:32 INFO [amazon-ssm-agent] Starting Core Agent Feb 13 16:05:33.621523 amazon-ssm-agent[2089]: 2025-02-13 16:05:32 INFO [amazon-ssm-agent] registrar detected. Attempting registration Feb 13 16:05:33.621523 amazon-ssm-agent[2089]: 2025-02-13 16:05:32 INFO [Registrar] Starting registrar module Feb 13 16:05:33.621523 amazon-ssm-agent[2089]: 2025-02-13 16:05:32 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Feb 13 16:05:33.621523 amazon-ssm-agent[2089]: 2025-02-13 16:05:33 INFO [EC2Identity] EC2 registration was successful. 
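The sshd-keygen step above created the RSA, ECDSA and ED25519 host keys on first boot. The operation is roughly equivalent to `ssh-keygen -A`, which generates any missing default host keys under /etc/ssh (a sketch, assuming root and OpenSSH installed):

    import subprocess

    # -A: for each default host key type, create the key if it does not
    # already exist; this is a no-op on subsequent boots.
    subprocess.run(["ssh-keygen", "-A"], check=True)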
Feb 13 16:05:33.621523 amazon-ssm-agent[2089]: 2025-02-13 16:05:33 INFO [CredentialRefresher] credentialRefresher has started Feb 13 16:05:33.621820 amazon-ssm-agent[2089]: 2025-02-13 16:05:33 INFO [CredentialRefresher] Starting credentials refresher loop Feb 13 16:05:33.622900 amazon-ssm-agent[2089]: 2025-02-13 16:05:33 INFO EC2RoleProvider Successfully connected with instance profile role credentials Feb 13 16:05:33.623074 amazon-ssm-agent[2089]: 2025-02-13 16:05:33 INFO [CredentialRefresher] Next credential rotation will be in 32.158301180433334 minutes Feb 13 16:05:33.624928 tar[2048]: linux-arm64/LICENSE Feb 13 16:05:33.624928 tar[2048]: linux-arm64/README.md Feb 13 16:05:33.646629 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 16:05:33.920519 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:05:33.923650 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 16:05:33.928847 systemd[1]: Startup finished in 10.148s (kernel) + 9.170s (userspace) = 19.319s. Feb 13 16:05:33.933923 (kubelet)[2292]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 16:05:34.662945 amazon-ssm-agent[2089]: 2025-02-13 16:05:34 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Feb 13 16:05:34.764291 amazon-ssm-agent[2089]: 2025-02-13 16:05:34 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2303) started Feb 13 16:05:34.865593 amazon-ssm-agent[2089]: 2025-02-13 16:05:34 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Feb 13 16:05:34.983046 kubelet[2292]: E0213 16:05:34.982788 2292 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 16:05:34.988305 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 16:05:34.989276 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 16:05:38.658203 systemd-resolved[1936]: Clock change detected. Flushing caches. Feb 13 16:05:39.466220 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 16:05:39.476557 systemd[1]: Started sshd@0-172.31.24.10:22-139.178.68.195:35348.service - OpenSSH per-connection server daemon (139.178.68.195:35348). Feb 13 16:05:39.703558 sshd[2316]: Accepted publickey for core from 139.178.68.195 port 35348 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:05:39.707702 sshd[2316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:05:39.724460 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 16:05:39.739518 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 16:05:39.744214 systemd-logind[2030]: New session 1 of user core. Feb 13 16:05:39.765604 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 16:05:39.778192 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Feb 13 16:05:39.791622 (systemd)[2322]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 16:05:40.003455 systemd[2322]: Queued start job for default target default.target. Feb 13 16:05:40.004151 systemd[2322]: Created slice app.slice - User Application Slice. Feb 13 16:05:40.004205 systemd[2322]: Reached target paths.target - Paths. Feb 13 16:05:40.004238 systemd[2322]: Reached target timers.target - Timers. Feb 13 16:05:40.012299 systemd[2322]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 16:05:40.028427 systemd[2322]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 16:05:40.028565 systemd[2322]: Reached target sockets.target - Sockets. Feb 13 16:05:40.028597 systemd[2322]: Reached target basic.target - Basic System. Feb 13 16:05:40.028690 systemd[2322]: Reached target default.target - Main User Target. Feb 13 16:05:40.028753 systemd[2322]: Startup finished in 226ms. Feb 13 16:05:40.029704 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 16:05:40.036969 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 16:05:40.186085 systemd[1]: Started sshd@1-172.31.24.10:22-139.178.68.195:35350.service - OpenSSH per-connection server daemon (139.178.68.195:35350). Feb 13 16:05:40.361524 sshd[2334]: Accepted publickey for core from 139.178.68.195 port 35350 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:05:40.364019 sshd[2334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:05:40.372902 systemd-logind[2030]: New session 2 of user core. Feb 13 16:05:40.379714 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 16:05:40.508409 sshd[2334]: pam_unix(sshd:session): session closed for user core Feb 13 16:05:40.515616 systemd-logind[2030]: Session 2 logged out. Waiting for processes to exit. Feb 13 16:05:40.516762 systemd[1]: sshd@1-172.31.24.10:22-139.178.68.195:35350.service: Deactivated successfully. Feb 13 16:05:40.521880 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 16:05:40.523688 systemd-logind[2030]: Removed session 2. Feb 13 16:05:40.540626 systemd[1]: Started sshd@2-172.31.24.10:22-139.178.68.195:35358.service - OpenSSH per-connection server daemon (139.178.68.195:35358). Feb 13 16:05:40.710550 sshd[2342]: Accepted publickey for core from 139.178.68.195 port 35358 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:05:40.713229 sshd[2342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:05:40.720592 systemd-logind[2030]: New session 3 of user core. Feb 13 16:05:40.733689 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 16:05:40.854429 sshd[2342]: pam_unix(sshd:session): session closed for user core Feb 13 16:05:40.860559 systemd[1]: sshd@2-172.31.24.10:22-139.178.68.195:35358.service: Deactivated successfully. Feb 13 16:05:40.866881 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 16:05:40.868734 systemd-logind[2030]: Session 3 logged out. Waiting for processes to exit. Feb 13 16:05:40.870878 systemd-logind[2030]: Removed session 3. Feb 13 16:05:40.887546 systemd[1]: Started sshd@3-172.31.24.10:22-139.178.68.195:35372.service - OpenSSH per-connection server daemon (139.178.68.195:35372). 
Feb 13 16:05:41.055429 sshd[2350]: Accepted publickey for core from 139.178.68.195 port 35372 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:05:41.058096 sshd[2350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:05:41.066193 systemd-logind[2030]: New session 4 of user core. Feb 13 16:05:41.075569 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 16:05:41.203570 sshd[2350]: pam_unix(sshd:session): session closed for user core Feb 13 16:05:41.210339 systemd-logind[2030]: Session 4 logged out. Waiting for processes to exit. Feb 13 16:05:41.211366 systemd[1]: sshd@3-172.31.24.10:22-139.178.68.195:35372.service: Deactivated successfully. Feb 13 16:05:41.216609 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 16:05:41.218158 systemd-logind[2030]: Removed session 4. Feb 13 16:05:41.233603 systemd[1]: Started sshd@4-172.31.24.10:22-139.178.68.195:35380.service - OpenSSH per-connection server daemon (139.178.68.195:35380). Feb 13 16:05:41.410566 sshd[2358]: Accepted publickey for core from 139.178.68.195 port 35380 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:05:41.412521 sshd[2358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:05:41.419888 systemd-logind[2030]: New session 5 of user core. Feb 13 16:05:41.432698 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 16:05:41.569322 sudo[2362]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 16:05:41.570000 sudo[2362]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 16:05:41.590030 sudo[2362]: pam_unix(sudo:session): session closed for user root Feb 13 16:05:41.613281 sshd[2358]: pam_unix(sshd:session): session closed for user core Feb 13 16:05:41.620814 systemd[1]: sshd@4-172.31.24.10:22-139.178.68.195:35380.service: Deactivated successfully. Feb 13 16:05:41.625923 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 16:05:41.626004 systemd-logind[2030]: Session 5 logged out. Waiting for processes to exit. Feb 13 16:05:41.630179 systemd-logind[2030]: Removed session 5. Feb 13 16:05:41.644637 systemd[1]: Started sshd@5-172.31.24.10:22-139.178.68.195:35392.service - OpenSSH per-connection server daemon (139.178.68.195:35392). Feb 13 16:05:41.823008 sshd[2367]: Accepted publickey for core from 139.178.68.195 port 35392 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:05:41.825621 sshd[2367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:05:41.832904 systemd-logind[2030]: New session 6 of user core. Feb 13 16:05:41.843563 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 16:05:41.949327 sudo[2372]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 16:05:41.949959 sudo[2372]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 16:05:41.955818 sudo[2372]: pam_unix(sudo:session): session closed for user root Feb 13 16:05:41.965744 sudo[2371]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 13 16:05:41.966600 sudo[2371]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 16:05:41.988631 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
Feb 13 16:05:42.004007 auditctl[2375]: No rules Feb 13 16:05:42.004840 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 16:05:42.005391 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Feb 13 16:05:42.018779 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 16:05:42.061811 augenrules[2394]: No rules Feb 13 16:05:42.064839 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 16:05:42.068590 sudo[2371]: pam_unix(sudo:session): session closed for user root Feb 13 16:05:42.091867 sshd[2367]: pam_unix(sshd:session): session closed for user core Feb 13 16:05:42.099919 systemd[1]: sshd@5-172.31.24.10:22-139.178.68.195:35392.service: Deactivated successfully. Feb 13 16:05:42.101512 systemd-logind[2030]: Session 6 logged out. Waiting for processes to exit. Feb 13 16:05:42.105893 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 16:05:42.108088 systemd-logind[2030]: Removed session 6. Feb 13 16:05:42.131578 systemd[1]: Started sshd@6-172.31.24.10:22-139.178.68.195:35396.service - OpenSSH per-connection server daemon (139.178.68.195:35396). Feb 13 16:05:42.296077 sshd[2403]: Accepted publickey for core from 139.178.68.195 port 35396 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:05:42.298552 sshd[2403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:05:42.306947 systemd-logind[2030]: New session 7 of user core. Feb 13 16:05:42.313686 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 16:05:42.417742 sudo[2407]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 16:05:42.418481 sudo[2407]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 16:05:42.957976 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 16:05:42.959637 (dockerd)[2422]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 16:05:43.422477 dockerd[2422]: time="2025-02-13T16:05:43.422288047Z" level=info msg="Starting up" Feb 13 16:05:43.585982 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2512283843-merged.mount: Deactivated successfully. Feb 13 16:05:43.837168 dockerd[2422]: time="2025-02-13T16:05:43.837092013Z" level=info msg="Loading containers: start." Feb 13 16:05:44.040180 kernel: Initializing XFRM netlink socket Feb 13 16:05:44.120858 (udev-worker)[2444]: Network interface NamePolicy= disabled on kernel command line. Feb 13 16:05:44.206934 systemd-networkd[1603]: docker0: Link UP Feb 13 16:05:44.232438 dockerd[2422]: time="2025-02-13T16:05:44.232377067Z" level=info msg="Loading containers: done." 
Feb 13 16:05:44.262172 dockerd[2422]: time="2025-02-13T16:05:44.261495247Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 16:05:44.262172 dockerd[2422]: time="2025-02-13T16:05:44.261713491Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Feb 13 16:05:44.262172 dockerd[2422]: time="2025-02-13T16:05:44.261900967Z" level=info msg="Daemon has completed initialization" Feb 13 16:05:44.311742 dockerd[2422]: time="2025-02-13T16:05:44.311641568Z" level=info msg="API listen on /run/docker.sock" Feb 13 16:05:44.312122 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 16:05:45.440260 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 16:05:45.452968 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:05:45.552146 containerd[2052]: time="2025-02-13T16:05:45.551642914Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.14\"" Feb 13 16:05:45.801522 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:05:45.814764 (kubelet)[2580]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 16:05:45.904744 kubelet[2580]: E0213 16:05:45.904646 2580 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 16:05:45.914467 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 16:05:45.915061 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 16:05:46.284479 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2179247426.mount: Deactivated successfully. 
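Once dockerd logs "API listen on /run/docker.sock", the daemon can be probed over that unix socket; /_ping answers OK as soon as initialization completes. A stdlib-only Python sketch (no docker-py dependency; requires permission to open the socket):

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """Minimal HTTP client over a unix socket, enough for the Docker API."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self.unix_path)
            self.sock = s

    conn = UnixHTTPConnection("/run/docker.sock")
    conn.request("GET", "/_ping")
    print(conn.getresponse().read())  # b'OK' once the daemon is up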
Feb 13 16:05:48.062784 containerd[2052]: time="2025-02-13T16:05:48.062703526Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:48.064485 containerd[2052]: time="2025-02-13T16:05:48.064362634Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.14: active requests=0, bytes read=32205861" Feb 13 16:05:48.065708 containerd[2052]: time="2025-02-13T16:05:48.065623210Z" level=info msg="ImageCreate event name:\"sha256:c136612236eb39fcac4abea395de985f019cf87f72cc1afd828fb78de88a649f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:48.071479 containerd[2052]: time="2025-02-13T16:05:48.071417758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1432b456b21015c99783d2b3a2010873fb67bf946c89d45e6d356449e083dcfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:48.074789 containerd[2052]: time="2025-02-13T16:05:48.073922830Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.14\" with image id \"sha256:c136612236eb39fcac4abea395de985f019cf87f72cc1afd828fb78de88a649f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.14\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1432b456b21015c99783d2b3a2010873fb67bf946c89d45e6d356449e083dcfb\", size \"32202661\" in 2.522209332s" Feb 13 16:05:48.074789 containerd[2052]: time="2025-02-13T16:05:48.073984474Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.14\" returns image reference \"sha256:c136612236eb39fcac4abea395de985f019cf87f72cc1afd828fb78de88a649f\"" Feb 13 16:05:48.111177 containerd[2052]: time="2025-02-13T16:05:48.111082762Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.14\"" Feb 13 16:05:50.382229 containerd[2052]: time="2025-02-13T16:05:50.382168946Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:50.384593 containerd[2052]: time="2025-02-13T16:05:50.384536378Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.14: active requests=0, bytes read=29383091" Feb 13 16:05:50.385794 containerd[2052]: time="2025-02-13T16:05:50.385727642Z" level=info msg="ImageCreate event name:\"sha256:582085ec6cd04751293bebad40e35d6b2066b81f6e5868a9db60b8127ca7921d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:50.392487 containerd[2052]: time="2025-02-13T16:05:50.392426798Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:23ccdb5e7e2c317f5727652ef7e64ef91ead34a3c73dfa9c3ab23b3a5028e280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:50.395853 containerd[2052]: time="2025-02-13T16:05:50.395798330Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.14\" with image id \"sha256:582085ec6cd04751293bebad40e35d6b2066b81f6e5868a9db60b8127ca7921d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.14\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:23ccdb5e7e2c317f5727652ef7e64ef91ead34a3c73dfa9c3ab23b3a5028e280\", size \"30786820\" in 2.284350156s" Feb 13 16:05:50.396061 containerd[2052]: time="2025-02-13T16:05:50.395857490Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.14\" returns image reference \"sha256:582085ec6cd04751293bebad40e35d6b2066b81f6e5868a9db60b8127ca7921d\"" Feb 13 
16:05:50.434264 containerd[2052]: time="2025-02-13T16:05:50.434150690Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.14\"" Feb 13 16:05:51.734928 containerd[2052]: time="2025-02-13T16:05:51.734863324Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:51.736983 containerd[2052]: time="2025-02-13T16:05:51.736931296Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.14: active requests=0, bytes read=15766980" Feb 13 16:05:51.737668 containerd[2052]: time="2025-02-13T16:05:51.737588236Z" level=info msg="ImageCreate event name:\"sha256:dfb84ea1121ad6a9ceccfe5078af3eee1b27b8d2b2e93d6449d11e1526dbeff8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:51.743352 containerd[2052]: time="2025-02-13T16:05:51.743257612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf0046be3eb6c4831b6b2a1b3e24f18e27778663890144478f11a82622b48c48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:51.746011 containerd[2052]: time="2025-02-13T16:05:51.745642121Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.14\" with image id \"sha256:dfb84ea1121ad6a9ceccfe5078af3eee1b27b8d2b2e93d6449d11e1526dbeff8\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.14\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf0046be3eb6c4831b6b2a1b3e24f18e27778663890144478f11a82622b48c48\", size \"17170727\" in 1.311308059s" Feb 13 16:05:51.746011 containerd[2052]: time="2025-02-13T16:05:51.745697213Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.14\" returns image reference \"sha256:dfb84ea1121ad6a9ceccfe5078af3eee1b27b8d2b2e93d6449d11e1526dbeff8\"" Feb 13 16:05:51.784947 containerd[2052]: time="2025-02-13T16:05:51.784816709Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\"" Feb 13 16:05:53.072763 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2413181412.mount: Deactivated successfully. 
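The pull records carry enough data to estimate transfer rate: "bytes read" divided by the reported pull duration. For the kube-scheduler image above that works out to about 12 MB/s:

    # Numbers taken verbatim from the kube-scheduler pull record above.
    bytes_read = 15_766_980   # "active requests=0, bytes read=15766980"
    seconds = 1.311308059     # "... in 1.311308059s"
    print(f"{bytes_read / seconds / 1e6:.1f} MB/s")  # -> 12.0 MB/s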
Feb 13 16:05:53.552002 containerd[2052]: time="2025-02-13T16:05:53.551944757Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:53.554319 containerd[2052]: time="2025-02-13T16:05:53.554263277Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.14: active requests=0, bytes read=25273375" Feb 13 16:05:53.555494 containerd[2052]: time="2025-02-13T16:05:53.555420869Z" level=info msg="ImageCreate event name:\"sha256:8acaac6288aef2fbe5821a7539f95a6043513e648e6ffaf6a545a93fa77fe8c8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:53.559851 containerd[2052]: time="2025-02-13T16:05:53.559781586Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:53.562483 containerd[2052]: time="2025-02-13T16:05:53.562423818Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.14\" with image id \"sha256:8acaac6288aef2fbe5821a7539f95a6043513e648e6ffaf6a545a93fa77fe8c8\", repo tag \"registry.k8s.io/kube-proxy:v1.29.14\", repo digest \"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\", size \"25272394\" in 1.777547529s" Feb 13 16:05:53.562623 containerd[2052]: time="2025-02-13T16:05:53.562479762Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\" returns image reference \"sha256:8acaac6288aef2fbe5821a7539f95a6043513e648e6ffaf6a545a93fa77fe8c8\"" Feb 13 16:05:53.600281 containerd[2052]: time="2025-02-13T16:05:53.600223050Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 16:05:54.205713 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1827649517.mount: Deactivated successfully. 
Feb 13 16:05:55.670447 containerd[2052]: time="2025-02-13T16:05:55.670388540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:55.700230 containerd[2052]: time="2025-02-13T16:05:55.700155356Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Feb 13 16:05:55.745365 containerd[2052]: time="2025-02-13T16:05:55.745297280Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:55.789130 containerd[2052]: time="2025-02-13T16:05:55.788878161Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:55.791180 containerd[2052]: time="2025-02-13T16:05:55.790623081Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 2.190333359s" Feb 13 16:05:55.791180 containerd[2052]: time="2025-02-13T16:05:55.790683633Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 16:05:55.835675 containerd[2052]: time="2025-02-13T16:05:55.835406253Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 16:05:55.940269 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 16:05:55.947467 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:05:56.960621 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:05:56.977752 (kubelet)[2739]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 16:05:57.075486 kubelet[2739]: E0213 16:05:57.075075 2739 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 16:05:57.082426 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 16:05:57.083127 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 16:05:57.091331 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2427404263.mount: Deactivated successfully. 
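Each kubelet crash above has the same root cause: /var/lib/kubelet/config.yaml does not exist yet, because no control-plane bootstrap (normally kubeadm init/join) has run on this node. For illustration only, this is roughly the kind of file whose presence ends the crash loop; the field values here are assumptions, not what kubeadm would actually generate:

    import pathlib
    import textwrap

    # Minimal, illustrative KubeletConfiguration; kubeadm writes the real one.
    CONFIG = textwrap.dedent("""\
        apiVersion: kubelet.config.k8s.io/v1beta1
        kind: KubeletConfiguration
        authentication:
          anonymous:
            enabled: false
        clusterDomain: cluster.local
        clusterDNS:
          - 10.96.0.10
        """)

    path = pathlib.Path("/var/lib/kubelet/config.yaml")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(CONFIG)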
Feb 13 16:05:57.103193 containerd[2052]: time="2025-02-13T16:05:57.102354127Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:57.104997 containerd[2052]: time="2025-02-13T16:05:57.104929903Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Feb 13 16:05:57.107640 containerd[2052]: time="2025-02-13T16:05:57.107567023Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:57.112778 containerd[2052]: time="2025-02-13T16:05:57.112686439Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:57.114539 containerd[2052]: time="2025-02-13T16:05:57.114355555Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 1.278890226s" Feb 13 16:05:57.114539 containerd[2052]: time="2025-02-13T16:05:57.114410251Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 13 16:05:57.151351 containerd[2052]: time="2025-02-13T16:05:57.151016071Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Feb 13 16:05:57.751479 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1171776882.mount: Deactivated successfully. Feb 13 16:06:00.390877 containerd[2052]: time="2025-02-13T16:06:00.390817427Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:00.393744 containerd[2052]: time="2025-02-13T16:06:00.393697043Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200786" Feb 13 16:06:00.394311 containerd[2052]: time="2025-02-13T16:06:00.394273883Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:00.400382 containerd[2052]: time="2025-02-13T16:06:00.400332275Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:00.402979 containerd[2052]: time="2025-02-13T16:06:00.402917568Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 3.251843441s" Feb 13 16:06:00.403076 containerd[2052]: time="2025-02-13T16:06:00.402975732Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Feb 13 16:06:03.240460 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Feb 13 16:06:07.190386 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Feb 13 16:06:07.199910 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:06:07.519390 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:06:07.534692 (kubelet)[2881]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 16:06:07.623823 kubelet[2881]: E0213 16:06:07.623757 2881 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 16:06:07.631042 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 16:06:07.631549 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 16:06:07.668184 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:06:07.678615 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:06:07.727025 systemd[1]: Reloading requested from client PID 2898 ('systemctl') (unit session-7.scope)... Feb 13 16:06:07.727056 systemd[1]: Reloading... Feb 13 16:06:07.934162 zram_generator::config[2938]: No configuration found. Feb 13 16:06:08.206084 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 16:06:08.372369 systemd[1]: Reloading finished in 644 ms. Feb 13 16:06:08.455872 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 16:06:08.456080 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 16:06:08.456816 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:06:08.468039 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:06:08.754445 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:06:08.772780 (kubelet)[3013]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 16:06:08.860117 kubelet[3013]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 16:06:08.860117 kubelet[3013]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 16:06:08.860117 kubelet[3013]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
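The "restart counter is at 3" lines are systemd's Restart= logic re-launching the failing kubelet; the counter is also exposed as a unit property, which is handy when watching a crash loop (NRestarts needs a reasonably recent systemd, which Flatcar ships):

    import subprocess

    # Query the restart counter and last exit status of the unit.
    out = subprocess.run(
        ["systemctl", "show", "-p", "NRestarts,ExecMainStatus",
         "kubelet.service"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(out)  # e.g. NRestarts=3 and ExecMainStatus=1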
Feb 13 16:06:08.860712 kubelet[3013]: I0213 16:06:08.860221 3013 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 16:06:09.795579 kubelet[3013]: I0213 16:06:09.795520 3013 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Feb 13 16:06:09.795579 kubelet[3013]: I0213 16:06:09.795571 3013 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 16:06:09.795932 kubelet[3013]: I0213 16:06:09.795894 3013 server.go:919] "Client rotation is on, will bootstrap in background" Feb 13 16:06:09.829530 kubelet[3013]: I0213 16:06:09.829287 3013 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 16:06:09.830416 kubelet[3013]: E0213 16:06:09.830295 3013 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.24.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.24.10:6443: connect: connection refused Feb 13 16:06:09.845669 kubelet[3013]: I0213 16:06:09.845183 3013 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 16:06:09.845962 kubelet[3013]: I0213 16:06:09.845903 3013 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 16:06:09.846375 kubelet[3013]: I0213 16:06:09.846329 3013 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 16:06:09.846549 kubelet[3013]: I0213 16:06:09.846381 3013 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 16:06:09.846549 kubelet[3013]: I0213 16:06:09.846404 3013 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 16:06:09.847653 kubelet[3013]: I0213 16:06:09.847604 3013 state_mem.go:36] "Initialized new in-memory state store" Feb 13 16:06:09.852984 kubelet[3013]: I0213 16:06:09.852935 3013 kubelet.go:396] "Attempting to sync node with API server" Feb 13 16:06:09.852984 
kubelet[3013]: I0213 16:06:09.852988 3013 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 16:06:09.854199 kubelet[3013]: I0213 16:06:09.853045 3013 kubelet.go:312] "Adding apiserver pod source" Feb 13 16:06:09.854199 kubelet[3013]: I0213 16:06:09.853078 3013 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 16:06:09.857191 kubelet[3013]: W0213 16:06:09.856603 3013 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.24.10:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.24.10:6443: connect: connection refused Feb 13 16:06:09.857191 kubelet[3013]: E0213 16:06:09.856683 3013 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.24.10:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.24.10:6443: connect: connection refused Feb 13 16:06:09.857191 kubelet[3013]: W0213 16:06:09.857053 3013 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.24.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-10&limit=500&resourceVersion=0": dial tcp 172.31.24.10:6443: connect: connection refused Feb 13 16:06:09.857191 kubelet[3013]: E0213 16:06:09.857143 3013 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.24.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-10&limit=500&resourceVersion=0": dial tcp 172.31.24.10:6443: connect: connection refused Feb 13 16:06:09.858013 kubelet[3013]: I0213 16:06:09.857676 3013 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 16:06:09.858339 kubelet[3013]: I0213 16:06:09.858314 3013 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 16:06:09.858542 kubelet[3013]: W0213 16:06:09.858521 3013 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
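Every reflector list/watch failure above is the same underlying condition: nothing is listening on 172.31.24.10:6443 yet, because the kube-apiserver static pod has not started. A minimal sketch of that connectivity check, reusing the address from the log; the probe loop is illustrative, not client-go code:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Until the apiserver container is running, every dial fails with
	// "connect: connection refused", exactly as in the reflector errors.
	for attempt := 0; attempt < 5; attempt++ {
		conn, err := net.DialTimeout("tcp", "172.31.24.10:6443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver not ready:", err)
			time.Sleep(time.Second)
			continue
		}
		conn.Close()
		fmt.Println("apiserver reachable")
		return
	}
}
```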
Feb 13 16:06:09.860037 kubelet[3013]: I0213 16:06:09.859995 3013 server.go:1256] "Started kubelet" Feb 13 16:06:09.863821 kubelet[3013]: I0213 16:06:09.863422 3013 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 16:06:09.865225 kubelet[3013]: I0213 16:06:09.864796 3013 server.go:461] "Adding debug handlers to kubelet server" Feb 13 16:06:09.865751 kubelet[3013]: I0213 16:06:09.865706 3013 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 16:06:09.866275 kubelet[3013]: I0213 16:06:09.866249 3013 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 16:06:09.870318 kubelet[3013]: I0213 16:06:09.870258 3013 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 16:06:09.872224 kubelet[3013]: E0213 16:06:09.872175 3013 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.24.10:6443/api/v1/namespaces/default/events\": dial tcp 172.31.24.10:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-24-10.1823d02c518513ea default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-24-10,UID:ip-172-31-24-10,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-24-10,},FirstTimestamp:2025-02-13 16:06:09.859957738 +0000 UTC m=+1.080111474,LastTimestamp:2025-02-13 16:06:09.859957738 +0000 UTC m=+1.080111474,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-24-10,}" Feb 13 16:06:09.879998 kubelet[3013]: I0213 16:06:09.879850 3013 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 16:06:09.881215 kubelet[3013]: I0213 16:06:09.881075 3013 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 13 16:06:09.881505 kubelet[3013]: I0213 16:06:09.881482 3013 reconciler_new.go:29] "Reconciler: start to sync state" Feb 13 16:06:09.883242 kubelet[3013]: W0213 16:06:09.882559 3013 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.24.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.10:6443: connect: connection refused Feb 13 16:06:09.883242 kubelet[3013]: E0213 16:06:09.882641 3013 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.24.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.10:6443: connect: connection refused Feb 13 16:06:09.883242 kubelet[3013]: E0213 16:06:09.882770 3013 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-10?timeout=10s\": dial tcp 172.31.24.10:6443: connect: connection refused" interval="200ms" Feb 13 16:06:09.883242 kubelet[3013]: E0213 16:06:09.882955 3013 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 16:06:09.885276 kubelet[3013]: I0213 16:06:09.884585 3013 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 16:06:09.887030 kubelet[3013]: I0213 16:06:09.886975 3013 factory.go:221] Registration of the containerd container factory successfully Feb 13 16:06:09.887266 kubelet[3013]: I0213 16:06:09.887247 3013 factory.go:221] Registration of the systemd container factory successfully Feb 13 16:06:09.914910 kubelet[3013]: I0213 16:06:09.914853 3013 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 16:06:09.917713 kubelet[3013]: I0213 16:06:09.917281 3013 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 16:06:09.917713 kubelet[3013]: I0213 16:06:09.917326 3013 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 16:06:09.917713 kubelet[3013]: I0213 16:06:09.917360 3013 kubelet.go:2329] "Starting kubelet main sync loop" Feb 13 16:06:09.917713 kubelet[3013]: E0213 16:06:09.917447 3013 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 16:06:09.935024 kubelet[3013]: W0213 16:06:09.934956 3013 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.24.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.10:6443: connect: connection refused Feb 13 16:06:09.937245 kubelet[3013]: E0213 16:06:09.937211 3013 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.24.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.10:6443: connect: connection refused Feb 13 16:06:09.947330 kubelet[3013]: I0213 16:06:09.947224 3013 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 16:06:09.947330 kubelet[3013]: I0213 16:06:09.947291 3013 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 16:06:09.947519 kubelet[3013]: I0213 16:06:09.947354 3013 state_mem.go:36] "Initialized new in-memory state store" Feb 13 16:06:09.950178 kubelet[3013]: I0213 16:06:09.950140 3013 policy_none.go:49] "None policy: Start" Feb 13 16:06:09.951196 kubelet[3013]: I0213 16:06:09.951170 3013 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 16:06:09.951942 kubelet[3013]: I0213 16:06:09.951505 3013 state_mem.go:35] "Initializing new in-memory state store" Feb 13 16:06:09.959541 kubelet[3013]: I0213 16:06:09.959503 3013 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 16:06:09.961158 kubelet[3013]: I0213 16:06:09.960022 3013 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 16:06:09.970668 kubelet[3013]: E0213 16:06:09.970637 3013 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-24-10\" not found" Feb 13 16:06:09.983064 kubelet[3013]: I0213 16:06:09.983022 3013 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-24-10" Feb 13 16:06:09.983629 kubelet[3013]: E0213 16:06:09.983595 3013 kubelet_node_status.go:96] "Unable to register 
node with API server" err="Post \"https://172.31.24.10:6443/api/v1/nodes\": dial tcp 172.31.24.10:6443: connect: connection refused" node="ip-172-31-24-10" Feb 13 16:06:10.018177 kubelet[3013]: I0213 16:06:10.017825 3013 topology_manager.go:215] "Topology Admit Handler" podUID="1ea266933c2bbc5cb7d107c0212ecc7f" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-24-10" Feb 13 16:06:10.020007 kubelet[3013]: I0213 16:06:10.019959 3013 topology_manager.go:215] "Topology Admit Handler" podUID="91e70ef5388fb1c9a46b50accf883fab" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-24-10" Feb 13 16:06:10.022593 kubelet[3013]: I0213 16:06:10.022539 3013 topology_manager.go:215] "Topology Admit Handler" podUID="78b6957c8db9c3a5a81fa97c052dc6b0" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-24-10" Feb 13 16:06:10.082361 kubelet[3013]: I0213 16:06:10.082213 3013 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1ea266933c2bbc5cb7d107c0212ecc7f-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-10\" (UID: \"1ea266933c2bbc5cb7d107c0212ecc7f\") " pod="kube-system/kube-apiserver-ip-172-31-24-10" Feb 13 16:06:10.082361 kubelet[3013]: I0213 16:06:10.082289 3013 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/91e70ef5388fb1c9a46b50accf883fab-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-10\" (UID: \"91e70ef5388fb1c9a46b50accf883fab\") " pod="kube-system/kube-controller-manager-ip-172-31-24-10" Feb 13 16:06:10.082361 kubelet[3013]: I0213 16:06:10.082337 3013 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/91e70ef5388fb1c9a46b50accf883fab-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-10\" (UID: \"91e70ef5388fb1c9a46b50accf883fab\") " pod="kube-system/kube-controller-manager-ip-172-31-24-10" Feb 13 16:06:10.082592 kubelet[3013]: I0213 16:06:10.082399 3013 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/91e70ef5388fb1c9a46b50accf883fab-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-10\" (UID: \"91e70ef5388fb1c9a46b50accf883fab\") " pod="kube-system/kube-controller-manager-ip-172-31-24-10" Feb 13 16:06:10.082592 kubelet[3013]: I0213 16:06:10.082449 3013 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/91e70ef5388fb1c9a46b50accf883fab-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-10\" (UID: \"91e70ef5388fb1c9a46b50accf883fab\") " pod="kube-system/kube-controller-manager-ip-172-31-24-10" Feb 13 16:06:10.082592 kubelet[3013]: I0213 16:06:10.082494 3013 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1ea266933c2bbc5cb7d107c0212ecc7f-ca-certs\") pod \"kube-apiserver-ip-172-31-24-10\" (UID: \"1ea266933c2bbc5cb7d107c0212ecc7f\") " pod="kube-system/kube-apiserver-ip-172-31-24-10" Feb 13 16:06:10.082592 kubelet[3013]: I0213 16:06:10.082540 3013 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/1ea266933c2bbc5cb7d107c0212ecc7f-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-10\" (UID: \"1ea266933c2bbc5cb7d107c0212ecc7f\") " pod="kube-system/kube-apiserver-ip-172-31-24-10" Feb 13 16:06:10.082592 kubelet[3013]: I0213 16:06:10.082585 3013 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/91e70ef5388fb1c9a46b50accf883fab-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-10\" (UID: \"91e70ef5388fb1c9a46b50accf883fab\") " pod="kube-system/kube-controller-manager-ip-172-31-24-10" Feb 13 16:06:10.082834 kubelet[3013]: I0213 16:06:10.082628 3013 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/78b6957c8db9c3a5a81fa97c052dc6b0-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-10\" (UID: \"78b6957c8db9c3a5a81fa97c052dc6b0\") " pod="kube-system/kube-scheduler-ip-172-31-24-10" Feb 13 16:06:10.082834 kubelet[3013]: E0213 16:06:10.083306 3013 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-10?timeout=10s\": dial tcp 172.31.24.10:6443: connect: connection refused" interval="400ms" Feb 13 16:06:10.186695 kubelet[3013]: I0213 16:06:10.186646 3013 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-24-10" Feb 13 16:06:10.187227 kubelet[3013]: E0213 16:06:10.187197 3013 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.24.10:6443/api/v1/nodes\": dial tcp 172.31.24.10:6443: connect: connection refused" node="ip-172-31-24-10" Feb 13 16:06:10.329849 containerd[2052]: time="2025-02-13T16:06:10.329777253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-10,Uid:1ea266933c2bbc5cb7d107c0212ecc7f,Namespace:kube-system,Attempt:0,}" Feb 13 16:06:10.336683 containerd[2052]: time="2025-02-13T16:06:10.336556233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-10,Uid:91e70ef5388fb1c9a46b50accf883fab,Namespace:kube-system,Attempt:0,}" Feb 13 16:06:10.340902 containerd[2052]: time="2025-02-13T16:06:10.340813425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-10,Uid:78b6957c8db9c3a5a81fa97c052dc6b0,Namespace:kube-system,Attempt:0,}" Feb 13 16:06:10.484085 kubelet[3013]: E0213 16:06:10.484032 3013 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-10?timeout=10s\": dial tcp 172.31.24.10:6443: connect: connection refused" interval="800ms" Feb 13 16:06:10.589914 kubelet[3013]: I0213 16:06:10.589771 3013 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-24-10" Feb 13 16:06:10.590520 kubelet[3013]: E0213 16:06:10.590463 3013 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.24.10:6443/api/v1/nodes\": dial tcp 172.31.24.10:6443: connect: connection refused" node="ip-172-31-24-10" Feb 13 16:06:10.724519 kubelet[3013]: W0213 16:06:10.724435 3013 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.24.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-10&limit=500&resourceVersion=0": dial tcp 172.31.24.10:6443: 
connect: connection refused Feb 13 16:06:10.724519 kubelet[3013]: E0213 16:06:10.724525 3013 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.24.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-10&limit=500&resourceVersion=0": dial tcp 172.31.24.10:6443: connect: connection refused Feb 13 16:06:10.825029 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2187676769.mount: Deactivated successfully. Feb 13 16:06:10.833761 containerd[2052]: time="2025-02-13T16:06:10.833691671Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 16:06:10.835508 containerd[2052]: time="2025-02-13T16:06:10.835432595Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 16:06:10.837525 containerd[2052]: time="2025-02-13T16:06:10.837291467Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 16:06:10.837525 containerd[2052]: time="2025-02-13T16:06:10.837354755Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Feb 13 16:06:10.838255 containerd[2052]: time="2025-02-13T16:06:10.838188947Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 16:06:10.840129 containerd[2052]: time="2025-02-13T16:06:10.839934947Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 16:06:10.840685 containerd[2052]: time="2025-02-13T16:06:10.840634247Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 16:06:10.848538 containerd[2052]: time="2025-02-13T16:06:10.848454899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 16:06:10.850636 containerd[2052]: time="2025-02-13T16:06:10.850271699Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 509.294954ms" Feb 13 16:06:10.853827 containerd[2052]: time="2025-02-13T16:06:10.853764791Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 523.852346ms" Feb 13 16:06:10.861291 containerd[2052]: time="2025-02-13T16:06:10.861224555Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag 
\"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 524.313986ms" Feb 13 16:06:11.020718 containerd[2052]: time="2025-02-13T16:06:11.020024780Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:06:11.022746 containerd[2052]: time="2025-02-13T16:06:11.021876140Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:06:11.022746 containerd[2052]: time="2025-02-13T16:06:11.021945992Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:06:11.022746 containerd[2052]: time="2025-02-13T16:06:11.021988940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:11.022746 containerd[2052]: time="2025-02-13T16:06:11.022255640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:11.022746 containerd[2052]: time="2025-02-13T16:06:11.021756932Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:06:11.022746 containerd[2052]: time="2025-02-13T16:06:11.021851612Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:06:11.022746 containerd[2052]: time="2025-02-13T16:06:11.021877112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:11.022746 containerd[2052]: time="2025-02-13T16:06:11.022026632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:11.024145 containerd[2052]: time="2025-02-13T16:06:11.024047456Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:06:11.024380 containerd[2052]: time="2025-02-13T16:06:11.024325904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:11.024969 containerd[2052]: time="2025-02-13T16:06:11.024814712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:11.155196 kubelet[3013]: W0213 16:06:11.154994 3013 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.24.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.10:6443: connect: connection refused Feb 13 16:06:11.156362 kubelet[3013]: E0213 16:06:11.155993 3013 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.24.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.10:6443: connect: connection refused Feb 13 16:06:11.168071 containerd[2052]: time="2025-02-13T16:06:11.168008565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-10,Uid:78b6957c8db9c3a5a81fa97c052dc6b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"4742b6ae6b2c178c19016a1e1a0477b080a60d4099dcd460951d7f8efd929f51\"" Feb 13 16:06:11.176663 containerd[2052]: time="2025-02-13T16:06:11.176550801Z" level=info msg="CreateContainer within sandbox \"4742b6ae6b2c178c19016a1e1a0477b080a60d4099dcd460951d7f8efd929f51\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 16:06:11.189079 containerd[2052]: time="2025-02-13T16:06:11.189026469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-10,Uid:1ea266933c2bbc5cb7d107c0212ecc7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd112c2e5fcfe12cd441f3a9dc19a5d722274c68d38dafef4b77cb5633abc832\"" Feb 13 16:06:11.198924 containerd[2052]: time="2025-02-13T16:06:11.198861945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-10,Uid:91e70ef5388fb1c9a46b50accf883fab,Namespace:kube-system,Attempt:0,} returns sandbox id \"0564c672062320f739f865357a8819409135f636ddb1dc075f35a4e34efaa909\"" Feb 13 16:06:11.201215 containerd[2052]: time="2025-02-13T16:06:11.201124113Z" level=info msg="CreateContainer within sandbox \"dd112c2e5fcfe12cd441f3a9dc19a5d722274c68d38dafef4b77cb5633abc832\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 16:06:11.211140 containerd[2052]: time="2025-02-13T16:06:11.209468385Z" level=info msg="CreateContainer within sandbox \"4742b6ae6b2c178c19016a1e1a0477b080a60d4099dcd460951d7f8efd929f51\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ecd42d3e9acbf1c01da90a3fc1157f1b79d85ebea342689afc331839f3f2631c\"" Feb 13 16:06:11.211336 containerd[2052]: time="2025-02-13T16:06:11.211096917Z" level=info msg="StartContainer for \"ecd42d3e9acbf1c01da90a3fc1157f1b79d85ebea342689afc331839f3f2631c\"" Feb 13 16:06:11.211662 containerd[2052]: time="2025-02-13T16:06:11.211304577Z" level=info msg="CreateContainer within sandbox \"0564c672062320f739f865357a8819409135f636ddb1dc075f35a4e34efaa909\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 16:06:11.224925 containerd[2052]: time="2025-02-13T16:06:11.224832957Z" level=info msg="CreateContainer within sandbox \"dd112c2e5fcfe12cd441f3a9dc19a5d722274c68d38dafef4b77cb5633abc832\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a9f5f867a253a3b42b572d5b88ab455d7adbf924f87f555e1afe1ce8345deba4\"" Feb 13 16:06:11.225970 containerd[2052]: time="2025-02-13T16:06:11.225925509Z" level=info msg="StartContainer for \"a9f5f867a253a3b42b572d5b88ab455d7adbf924f87f555e1afe1ce8345deba4\"" 
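The containerd entries above trace the CRI call order for each static pod: RunPodSandbox returns a sandbox id, CreateContainer places a named container inside that sandbox, and StartContainer runs it ("returns successfully" below). A simplified sketch of that sequence; the interface and fake runtime are stand-ins for illustration, not the real CRI gRPC API:

```go
package main

import "fmt"

// Simplified stand-in for the CRI RuntimeService call order.
type runtimeService interface {
	RunPodSandbox(pod string) (string, error)
	CreateContainer(sandboxID, name string) (string, error)
	StartContainer(containerID string) error
}

type fakeRuntime struct{}

func (fakeRuntime) RunPodSandbox(pod string) (string, error) { return "sandbox-" + pod, nil }

func (fakeRuntime) CreateContainer(sb, name string) (string, error) { return sb + "/" + name, nil }

func (fakeRuntime) StartContainer(id string) error {
	fmt.Println("StartContainer for", id, "returns successfully")
	return nil
}

// startStaticPod performs the three CRI steps in the order visible in the log.
func startStaticPod(rs runtimeService, pod, container string) error {
	sb, err := rs.RunPodSandbox(pod)
	if err != nil {
		return fmt.Errorf("RunPodSandbox %s: %w", pod, err)
	}
	id, err := rs.CreateContainer(sb, container)
	if err != nil {
		return fmt.Errorf("CreateContainer %s: %w", container, err)
	}
	return rs.StartContainer(id)
}

func main() {
	_ = startStaticPod(fakeRuntime{}, "kube-scheduler-ip-172-31-24-10", "kube-scheduler")
}
```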
Feb 13 16:06:11.241296 containerd[2052]: time="2025-02-13T16:06:11.241224669Z" level=info msg="CreateContainer within sandbox \"0564c672062320f739f865357a8819409135f636ddb1dc075f35a4e34efaa909\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e6ff32b1ee03167cb67a72fd95dcf89d17945ec13ab54a460493c7f4304abc07\"" Feb 13 16:06:11.244624 containerd[2052]: time="2025-02-13T16:06:11.244526037Z" level=info msg="StartContainer for \"e6ff32b1ee03167cb67a72fd95dcf89d17945ec13ab54a460493c7f4304abc07\"" Feb 13 16:06:11.285299 kubelet[3013]: E0213 16:06:11.285257 3013 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-10?timeout=10s\": dial tcp 172.31.24.10:6443: connect: connection refused" interval="1.6s" Feb 13 16:06:11.349601 kubelet[3013]: W0213 16:06:11.349518 3013 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.24.10:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.24.10:6443: connect: connection refused Feb 13 16:06:11.350320 kubelet[3013]: E0213 16:06:11.350199 3013 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.24.10:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.24.10:6443: connect: connection refused Feb 13 16:06:11.396007 kubelet[3013]: I0213 16:06:11.395960 3013 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-24-10" Feb 13 16:06:11.400148 kubelet[3013]: E0213 16:06:11.399761 3013 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.24.10:6443/api/v1/nodes\": dial tcp 172.31.24.10:6443: connect: connection refused" node="ip-172-31-24-10" Feb 13 16:06:11.437632 containerd[2052]: time="2025-02-13T16:06:11.436742134Z" level=info msg="StartContainer for \"ecd42d3e9acbf1c01da90a3fc1157f1b79d85ebea342689afc331839f3f2631c\" returns successfully" Feb 13 16:06:11.446146 containerd[2052]: time="2025-02-13T16:06:11.445423078Z" level=info msg="StartContainer for \"a9f5f867a253a3b42b572d5b88ab455d7adbf924f87f555e1afe1ce8345deba4\" returns successfully" Feb 13 16:06:11.482147 kubelet[3013]: W0213 16:06:11.480925 3013 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.24.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.10:6443: connect: connection refused Feb 13 16:06:11.482147 kubelet[3013]: E0213 16:06:11.481032 3013 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.24.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.10:6443: connect: connection refused Feb 13 16:06:11.499988 kubelet[3013]: E0213 16:06:11.499921 3013 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.24.10:6443/api/v1/namespaces/default/events\": dial tcp 172.31.24.10:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-24-10.1823d02c518513ea default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-24-10,UID:ip-172-31-24-10,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-24-10,},FirstTimestamp:2025-02-13 16:06:09.859957738 +0000 UTC m=+1.080111474,LastTimestamp:2025-02-13 16:06:09.859957738 +0000 UTC m=+1.080111474,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-24-10,}" Feb 13 16:06:11.502540 containerd[2052]: time="2025-02-13T16:06:11.502470443Z" level=info msg="StartContainer for \"e6ff32b1ee03167cb67a72fd95dcf89d17945ec13ab54a460493c7f4304abc07\" returns successfully" Feb 13 16:06:13.003827 kubelet[3013]: I0213 16:06:13.003777 3013 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-24-10" Feb 13 16:06:14.364030 kubelet[3013]: E0213 16:06:14.363970 3013 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-24-10\" not found" node="ip-172-31-24-10" Feb 13 16:06:14.438254 kubelet[3013]: I0213 16:06:14.438192 3013 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-24-10" Feb 13 16:06:14.860543 kubelet[3013]: I0213 16:06:14.860472 3013 apiserver.go:52] "Watching apiserver" Feb 13 16:06:14.882977 kubelet[3013]: I0213 16:06:14.882406 3013 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 13 16:06:14.981007 kubelet[3013]: E0213 16:06:14.980871 3013 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-24-10\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-24-10" Feb 13 16:06:16.897710 update_engine[2033]: I20250213 16:06:16.897622 2033 update_attempter.cc:509] Updating boot flags... Feb 13 16:06:16.978174 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3299) Feb 13 16:06:17.223259 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3302) Feb 13 16:06:17.506944 systemd[1]: Reloading requested from client PID 3470 ('systemctl') (unit session-7.scope)... Feb 13 16:06:17.506971 systemd[1]: Reloading... Feb 13 16:06:17.583032 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3302) Feb 13 16:06:17.884689 zram_generator::config[3578]: No configuration found. Feb 13 16:06:18.216737 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 16:06:18.405368 systemd[1]: Reloading finished in 897 ms. Feb 13 16:06:18.513009 kubelet[3013]: I0213 16:06:18.512090 3013 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 16:06:18.512488 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:06:18.544407 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 16:06:18.545062 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:06:18.553638 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:06:18.859843 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
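Across the lease-controller retries above, the interval doubles on each failure: 200ms, 400ms, 800ms, then 1.6s. A minimal sketch of that doubling backoff, assuming a fixed initial interval and omitting the cap and jitter that the real client-go backoff applies:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Matches the sequence in the log: 200ms -> 400ms -> 800ms -> 1.6s.
	interval := 200 * time.Millisecond
	for attempt := 1; attempt <= 4; attempt++ {
		fmt.Printf("attempt %d failed, will retry, interval=%s\n", attempt, interval)
		interval *= 2
	}
}
```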
Feb 13 16:06:18.880875 (kubelet)[3663]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 16:06:18.985906 kubelet[3663]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 16:06:18.987131 kubelet[3663]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 16:06:18.987131 kubelet[3663]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 16:06:18.987131 kubelet[3663]: I0213 16:06:18.986568 3663 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 16:06:18.994909 kubelet[3663]: I0213 16:06:18.994846 3663 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Feb 13 16:06:18.994909 kubelet[3663]: I0213 16:06:18.994898 3663 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 16:06:18.995477 kubelet[3663]: I0213 16:06:18.995435 3663 server.go:919] "Client rotation is on, will bootstrap in background" Feb 13 16:06:18.999761 kubelet[3663]: I0213 16:06:18.999329 3663 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 16:06:19.003815 kubelet[3663]: I0213 16:06:19.003092 3663 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 16:06:19.026558 kubelet[3663]: I0213 16:06:19.026492 3663 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 16:06:19.027330 sudo[3677]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 16:06:19.028246 sudo[3677]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 16:06:19.029001 kubelet[3663]: I0213 16:06:19.028305 3663 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 16:06:19.029001 kubelet[3663]: I0213 16:06:19.028637 3663 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 16:06:19.029001 kubelet[3663]: I0213 16:06:19.028704 3663 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 16:06:19.029001 kubelet[3663]: I0213 16:06:19.028733 3663 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 16:06:19.029001 kubelet[3663]: I0213 16:06:19.028799 3663 state_mem.go:36] "Initialized new in-memory state store" Feb 13 16:06:19.029626 kubelet[3663]: I0213 16:06:19.029601 3663 kubelet.go:396] "Attempting to sync node with API server" Feb 13 16:06:19.030356 kubelet[3663]: I0213 16:06:19.030328 3663 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 16:06:19.030618 kubelet[3663]: I0213 16:06:19.030597 3663 kubelet.go:312] "Adding apiserver pod source" Feb 13 16:06:19.032322 kubelet[3663]: I0213 16:06:19.031185 3663 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 16:06:19.034440 kubelet[3663]: I0213 16:06:19.034390 3663 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 16:06:19.034750 kubelet[3663]: I0213 16:06:19.034716 3663 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 16:06:19.040703 kubelet[3663]: I0213 16:06:19.040643 3663 server.go:1256] "Started kubelet" Feb 13 16:06:19.050321 kubelet[3663]: I0213 16:06:19.050262 3663 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 16:06:19.059174 kubelet[3663]: I0213 16:06:19.059080 3663 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 
16:06:19.066904 kubelet[3663]: I0213 16:06:19.066594 3663 server.go:461] "Adding debug handlers to kubelet server" Feb 13 16:06:19.075588 kubelet[3663]: I0213 16:06:19.075548 3663 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 16:06:19.078016 kubelet[3663]: I0213 16:06:19.077978 3663 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 16:06:19.078657 kubelet[3663]: I0213 16:06:19.078221 3663 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 16:06:19.078657 kubelet[3663]: I0213 16:06:19.078267 3663 kubelet.go:2329] "Starting kubelet main sync loop" Feb 13 16:06:19.078657 kubelet[3663]: E0213 16:06:19.078350 3663 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 16:06:19.082353 kubelet[3663]: I0213 16:06:19.082188 3663 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 16:06:19.082547 kubelet[3663]: I0213 16:06:19.082510 3663 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 16:06:19.096586 kubelet[3663]: I0213 16:06:19.095812 3663 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 16:06:19.100083 kubelet[3663]: I0213 16:06:19.100006 3663 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 13 16:06:19.100418 kubelet[3663]: I0213 16:06:19.100385 3663 reconciler_new.go:29] "Reconciler: start to sync state" Feb 13 16:06:19.121900 kubelet[3663]: I0213 16:06:19.121386 3663 factory.go:221] Registration of the systemd container factory successfully Feb 13 16:06:19.121900 kubelet[3663]: I0213 16:06:19.121529 3663 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 16:06:19.131358 kubelet[3663]: I0213 16:06:19.129591 3663 factory.go:221] Registration of the containerd container factory successfully Feb 13 16:06:19.143880 kubelet[3663]: E0213 16:06:19.143827 3663 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 16:06:19.179281 kubelet[3663]: E0213 16:06:19.179231 3663 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 16:06:19.212825 kubelet[3663]: I0213 16:06:19.212785 3663 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-24-10" Feb 13 16:06:19.237916 kubelet[3663]: I0213 16:06:19.237597 3663 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-24-10" Feb 13 16:06:19.237916 kubelet[3663]: I0213 16:06:19.237707 3663 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-24-10" Feb 13 16:06:19.313141 kubelet[3663]: I0213 16:06:19.313073 3663 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 16:06:19.313267 kubelet[3663]: I0213 16:06:19.313125 3663 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 16:06:19.313267 kubelet[3663]: I0213 16:06:19.313197 3663 state_mem.go:36] "Initialized new in-memory state store" Feb 13 16:06:19.314289 kubelet[3663]: I0213 16:06:19.314256 3663 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 16:06:19.314374 kubelet[3663]: I0213 16:06:19.314349 3663 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 16:06:19.315161 kubelet[3663]: I0213 16:06:19.315091 3663 policy_none.go:49] "None policy: Start" Feb 13 16:06:19.318410 kubelet[3663]: I0213 16:06:19.316731 3663 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 16:06:19.318410 kubelet[3663]: I0213 16:06:19.316782 3663 state_mem.go:35] "Initializing new in-memory state store" Feb 13 16:06:19.318410 kubelet[3663]: I0213 16:06:19.317030 3663 state_mem.go:75] "Updated machine memory state" Feb 13 16:06:19.322487 kubelet[3663]: I0213 16:06:19.322438 3663 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 16:06:19.323903 kubelet[3663]: I0213 16:06:19.323873 3663 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 16:06:19.381935 kubelet[3663]: I0213 16:06:19.381813 3663 topology_manager.go:215] "Topology Admit Handler" podUID="1ea266933c2bbc5cb7d107c0212ecc7f" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-24-10" Feb 13 16:06:19.383065 kubelet[3663]: I0213 16:06:19.383039 3663 topology_manager.go:215] "Topology Admit Handler" podUID="91e70ef5388fb1c9a46b50accf883fab" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-24-10" Feb 13 16:06:19.383329 kubelet[3663]: I0213 16:06:19.383308 3663 topology_manager.go:215] "Topology Admit Handler" podUID="78b6957c8db9c3a5a81fa97c052dc6b0" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-24-10" Feb 13 16:06:19.402821 kubelet[3663]: I0213 16:06:19.402613 3663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1ea266933c2bbc5cb7d107c0212ecc7f-ca-certs\") pod \"kube-apiserver-ip-172-31-24-10\" (UID: \"1ea266933c2bbc5cb7d107c0212ecc7f\") " pod="kube-system/kube-apiserver-ip-172-31-24-10" Feb 13 16:06:19.403043 kubelet[3663]: I0213 16:06:19.403018 3663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1ea266933c2bbc5cb7d107c0212ecc7f-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-10\" (UID: \"1ea266933c2bbc5cb7d107c0212ecc7f\") " 
pod="kube-system/kube-apiserver-ip-172-31-24-10" Feb 13 16:06:19.403255 kubelet[3663]: I0213 16:06:19.403232 3663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/91e70ef5388fb1c9a46b50accf883fab-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-10\" (UID: \"91e70ef5388fb1c9a46b50accf883fab\") " pod="kube-system/kube-controller-manager-ip-172-31-24-10" Feb 13 16:06:19.403395 kubelet[3663]: I0213 16:06:19.403375 3663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/91e70ef5388fb1c9a46b50accf883fab-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-10\" (UID: \"91e70ef5388fb1c9a46b50accf883fab\") " pod="kube-system/kube-controller-manager-ip-172-31-24-10" Feb 13 16:06:19.403528 kubelet[3663]: I0213 16:06:19.403510 3663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/91e70ef5388fb1c9a46b50accf883fab-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-10\" (UID: \"91e70ef5388fb1c9a46b50accf883fab\") " pod="kube-system/kube-controller-manager-ip-172-31-24-10" Feb 13 16:06:19.403666 kubelet[3663]: I0213 16:06:19.403647 3663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/91e70ef5388fb1c9a46b50accf883fab-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-10\" (UID: \"91e70ef5388fb1c9a46b50accf883fab\") " pod="kube-system/kube-controller-manager-ip-172-31-24-10" Feb 13 16:06:19.405605 kubelet[3663]: I0213 16:06:19.403818 3663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/91e70ef5388fb1c9a46b50accf883fab-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-10\" (UID: \"91e70ef5388fb1c9a46b50accf883fab\") " pod="kube-system/kube-controller-manager-ip-172-31-24-10" Feb 13 16:06:19.405923 kubelet[3663]: I0213 16:06:19.405878 3663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/78b6957c8db9c3a5a81fa97c052dc6b0-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-10\" (UID: \"78b6957c8db9c3a5a81fa97c052dc6b0\") " pod="kube-system/kube-scheduler-ip-172-31-24-10" Feb 13 16:06:19.406130 kubelet[3663]: I0213 16:06:19.406086 3663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1ea266933c2bbc5cb7d107c0212ecc7f-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-10\" (UID: \"1ea266933c2bbc5cb7d107c0212ecc7f\") " pod="kube-system/kube-apiserver-ip-172-31-24-10" Feb 13 16:06:20.004716 sudo[3677]: pam_unix(sudo:session): session closed for user root Feb 13 16:06:20.045922 kubelet[3663]: I0213 16:06:20.045559 3663 apiserver.go:52] "Watching apiserver" Feb 13 16:06:20.101380 kubelet[3663]: I0213 16:06:20.101267 3663 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 13 16:06:20.250130 kubelet[3663]: I0213 16:06:20.250061 3663 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-24-10" podStartSLOduration=1.249999222 
podStartE2EDuration="1.249999222s" podCreationTimestamp="2025-02-13 16:06:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:06:20.230634402 +0000 UTC m=+1.341824312" watchObservedRunningTime="2025-02-13 16:06:20.249999222 +0000 UTC m=+1.361189096" Feb 13 16:06:20.270268 kubelet[3663]: I0213 16:06:20.269359 3663 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-24-10" podStartSLOduration=1.26929893 podStartE2EDuration="1.26929893s" podCreationTimestamp="2025-02-13 16:06:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:06:20.25351887 +0000 UTC m=+1.364708768" watchObservedRunningTime="2025-02-13 16:06:20.26929893 +0000 UTC m=+1.380488816" Feb 13 16:06:20.292131 kubelet[3663]: I0213 16:06:20.291328 3663 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-24-10" podStartSLOduration=1.29127573 podStartE2EDuration="1.29127573s" podCreationTimestamp="2025-02-13 16:06:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:06:20.270978186 +0000 UTC m=+1.382168084" watchObservedRunningTime="2025-02-13 16:06:20.29127573 +0000 UTC m=+1.402465640" Feb 13 16:06:23.018770 sudo[2407]: pam_unix(sudo:session): session closed for user root Feb 13 16:06:23.042279 sshd[2403]: pam_unix(sshd:session): session closed for user core Feb 13 16:06:23.050915 systemd[1]: sshd@6-172.31.24.10:22-139.178.68.195:35396.service: Deactivated successfully. Feb 13 16:06:23.055927 systemd-logind[2030]: Session 7 logged out. Waiting for processes to exit. Feb 13 16:06:23.057285 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 16:06:23.060916 systemd-logind[2030]: Removed session 7. Feb 13 16:06:30.890645 kubelet[3663]: I0213 16:06:30.890364 3663 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 16:06:30.897264 containerd[2052]: time="2025-02-13T16:06:30.892859443Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
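The podStartSLOduration figures above are simply observedRunningTime minus podCreationTimestamp: for the scheduler pod, 16:06:20.249999222 minus 16:06:19 gives roughly 1.25s. A sketch of that arithmetic with the timestamps copied from the log; the RFC 3339 layout is an adaptation for parsing, since the log prints them in Go's default time format:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	created, _ := time.Parse(time.RFC3339, "2025-02-13T16:06:19Z")
	observed, _ := time.Parse(time.RFC3339Nano, "2025-02-13T16:06:20.249999222Z")
	// Prints 1.249999222s, matching podStartSLOduration in the log.
	fmt.Println("podStartSLOduration =", observed.Sub(created))
}
```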
Feb 13 16:06:30.897882 kubelet[3663]: I0213 16:06:30.894353 3663 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 16:06:31.700174 kubelet[3663]: I0213 16:06:31.700078 3663 topology_manager.go:215] "Topology Admit Handler" podUID="78c92f1b-de45-4229-b1bf-ab1dc4925b2c" podNamespace="kube-system" podName="kube-proxy-5fvvx" Feb 13 16:06:31.727521 kubelet[3663]: I0213 16:06:31.727447 3663 topology_manager.go:215] "Topology Admit Handler" podUID="771a3d43-4f24-491b-8c40-ec927a06293c" podNamespace="kube-system" podName="cilium-t87xp" Feb 13 16:06:31.778411 kubelet[3663]: I0213 16:06:31.778347 3663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/771a3d43-4f24-491b-8c40-ec927a06293c-host-proc-sys-kernel\") pod \"cilium-t87xp\" (UID: \"771a3d43-4f24-491b-8c40-ec927a06293c\") " pod="kube-system/cilium-t87xp" Feb 13 16:06:31.778587 kubelet[3663]: I0213 16:06:31.778428 3663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/771a3d43-4f24-491b-8c40-ec927a06293c-hubble-tls\") pod \"cilium-t87xp\" (UID: \"771a3d43-4f24-491b-8c40-ec927a06293c\") " pod="kube-system/cilium-t87xp" Feb 13 16:06:31.778587 kubelet[3663]: I0213 16:06:31.778484 3663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22hbw\" (UniqueName: \"kubernetes.io/projected/78c92f1b-de45-4229-b1bf-ab1dc4925b2c-kube-api-access-22hbw\") pod \"kube-proxy-5fvvx\" (UID: \"78c92f1b-de45-4229-b1bf-ab1dc4925b2c\") " pod="kube-system/kube-proxy-5fvvx" Feb 13 16:06:31.778587 kubelet[3663]: I0213 16:06:31.778531 3663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/771a3d43-4f24-491b-8c40-ec927a06293c-lib-modules\") pod \"cilium-t87xp\" (UID: \"771a3d43-4f24-491b-8c40-ec927a06293c\") " pod="kube-system/cilium-t87xp" Feb 13 16:06:31.778587 kubelet[3663]: I0213 16:06:31.778575 3663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/771a3d43-4f24-491b-8c40-ec927a06293c-clustermesh-secrets\") pod \"cilium-t87xp\" (UID: \"771a3d43-4f24-491b-8c40-ec927a06293c\") " pod="kube-system/cilium-t87xp" Feb 13 16:06:31.779580 kubelet[3663]: I0213 16:06:31.778632 3663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/771a3d43-4f24-491b-8c40-ec927a06293c-xtables-lock\") pod \"cilium-t87xp\" (UID: \"771a3d43-4f24-491b-8c40-ec927a06293c\") " pod="kube-system/cilium-t87xp" Feb 13 16:06:31.779580 kubelet[3663]: I0213 16:06:31.778682 3663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz5n4\" (UniqueName: \"kubernetes.io/projected/771a3d43-4f24-491b-8c40-ec927a06293c-kube-api-access-qz5n4\") pod \"cilium-t87xp\" (UID: \"771a3d43-4f24-491b-8c40-ec927a06293c\") " pod="kube-system/cilium-t87xp" Feb 13 16:06:31.779580 kubelet[3663]: I0213 16:06:31.778727 3663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/78c92f1b-de45-4229-b1bf-ab1dc4925b2c-kube-proxy\") pod \"kube-proxy-5fvvx\" (UID: 
\"78c92f1b-de45-4229-b1bf-ab1dc4925b2c\") " pod="kube-system/kube-proxy-5fvvx" Feb 13 16:06:31.779580 kubelet[3663]: I0213 16:06:31.778776 3663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/771a3d43-4f24-491b-8c40-ec927a06293c-etc-cni-netd\") pod \"cilium-t87xp\" (UID: \"771a3d43-4f24-491b-8c40-ec927a06293c\") " pod="kube-system/cilium-t87xp" Feb 13 16:06:31.779580 kubelet[3663]: I0213 16:06:31.778826 3663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/78c92f1b-de45-4229-b1bf-ab1dc4925b2c-lib-modules\") pod \"kube-proxy-5fvvx\" (UID: \"78c92f1b-de45-4229-b1bf-ab1dc4925b2c\") " pod="kube-system/kube-proxy-5fvvx" Feb 13 16:06:31.779580 kubelet[3663]: I0213 16:06:31.778870 3663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/771a3d43-4f24-491b-8c40-ec927a06293c-bpf-maps\") pod \"cilium-t87xp\" (UID: \"771a3d43-4f24-491b-8c40-ec927a06293c\") " pod="kube-system/cilium-t87xp" Feb 13 16:06:31.779972 kubelet[3663]: I0213 16:06:31.778911 3663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/771a3d43-4f24-491b-8c40-ec927a06293c-cni-path\") pod \"cilium-t87xp\" (UID: \"771a3d43-4f24-491b-8c40-ec927a06293c\") " pod="kube-system/cilium-t87xp" Feb 13 16:06:31.779972 kubelet[3663]: I0213 16:06:31.778976 3663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/78c92f1b-de45-4229-b1bf-ab1dc4925b2c-xtables-lock\") pod \"kube-proxy-5fvvx\" (UID: \"78c92f1b-de45-4229-b1bf-ab1dc4925b2c\") " pod="kube-system/kube-proxy-5fvvx" Feb 13 16:06:31.779972 kubelet[3663]: I0213 16:06:31.779018 3663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/771a3d43-4f24-491b-8c40-ec927a06293c-cilium-run\") pod \"cilium-t87xp\" (UID: \"771a3d43-4f24-491b-8c40-ec927a06293c\") " pod="kube-system/cilium-t87xp" Feb 13 16:06:31.779972 kubelet[3663]: I0213 16:06:31.779065 3663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/771a3d43-4f24-491b-8c40-ec927a06293c-cilium-cgroup\") pod \"cilium-t87xp\" (UID: \"771a3d43-4f24-491b-8c40-ec927a06293c\") " pod="kube-system/cilium-t87xp" Feb 13 16:06:31.779972 kubelet[3663]: I0213 16:06:31.779129 3663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/771a3d43-4f24-491b-8c40-ec927a06293c-hostproc\") pod \"cilium-t87xp\" (UID: \"771a3d43-4f24-491b-8c40-ec927a06293c\") " pod="kube-system/cilium-t87xp" Feb 13 16:06:31.779972 kubelet[3663]: I0213 16:06:31.779182 3663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/771a3d43-4f24-491b-8c40-ec927a06293c-cilium-config-path\") pod \"cilium-t87xp\" (UID: \"771a3d43-4f24-491b-8c40-ec927a06293c\") " pod="kube-system/cilium-t87xp" Feb 13 16:06:31.780314 kubelet[3663]: I0213 16:06:31.779230 3663 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/771a3d43-4f24-491b-8c40-ec927a06293c-host-proc-sys-net\") pod \"cilium-t87xp\" (UID: \"771a3d43-4f24-491b-8c40-ec927a06293c\") " pod="kube-system/cilium-t87xp" Feb 13 16:06:31.917127 kubelet[3663]: I0213 16:06:31.915875 3663 topology_manager.go:215] "Topology Admit Handler" podUID="cbbd3ef1-70d4-40bf-a12b-5c5f0a032526" podNamespace="kube-system" podName="cilium-operator-5cc964979-8xflc" Feb 13 16:06:31.985883 kubelet[3663]: I0213 16:06:31.985745 3663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cbbd3ef1-70d4-40bf-a12b-5c5f0a032526-cilium-config-path\") pod \"cilium-operator-5cc964979-8xflc\" (UID: \"cbbd3ef1-70d4-40bf-a12b-5c5f0a032526\") " pod="kube-system/cilium-operator-5cc964979-8xflc" Feb 13 16:06:31.993147 kubelet[3663]: I0213 16:06:31.989502 3663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q4qc\" (UniqueName: \"kubernetes.io/projected/cbbd3ef1-70d4-40bf-a12b-5c5f0a032526-kube-api-access-6q4qc\") pod \"cilium-operator-5cc964979-8xflc\" (UID: \"cbbd3ef1-70d4-40bf-a12b-5c5f0a032526\") " pod="kube-system/cilium-operator-5cc964979-8xflc" Feb 13 16:06:32.015600 containerd[2052]: time="2025-02-13T16:06:32.015305753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5fvvx,Uid:78c92f1b-de45-4229-b1bf-ab1dc4925b2c,Namespace:kube-system,Attempt:0,}" Feb 13 16:06:32.053135 containerd[2052]: time="2025-02-13T16:06:32.049453193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t87xp,Uid:771a3d43-4f24-491b-8c40-ec927a06293c,Namespace:kube-system,Attempt:0,}" Feb 13 16:06:32.127310 containerd[2052]: time="2025-02-13T16:06:32.126814289Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:06:32.127310 containerd[2052]: time="2025-02-13T16:06:32.126904241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:06:32.127310 containerd[2052]: time="2025-02-13T16:06:32.127003325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:32.127310 containerd[2052]: time="2025-02-13T16:06:32.126010049Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:06:32.127310 containerd[2052]: time="2025-02-13T16:06:32.127228397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:32.127974 containerd[2052]: time="2025-02-13T16:06:32.127405337Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:06:32.127974 containerd[2052]: time="2025-02-13T16:06:32.127720169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:32.128748 containerd[2052]: time="2025-02-13T16:06:32.128268557Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:32.225673 containerd[2052]: time="2025-02-13T16:06:32.225547194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t87xp,Uid:771a3d43-4f24-491b-8c40-ec927a06293c,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e236e1130b50a46fb4326927e14d389785df3ee59acae0082a008fcc298d530\"" Feb 13 16:06:32.234246 containerd[2052]: time="2025-02-13T16:06:32.233653194Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 16:06:32.236602 containerd[2052]: time="2025-02-13T16:06:32.236436546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5fvvx,Uid:78c92f1b-de45-4229-b1bf-ab1dc4925b2c,Namespace:kube-system,Attempt:0,} returns sandbox id \"f9a09804f984bbb5169ab29bc10dbf2ce6ed04dd3221e52561169290b368028b\"" Feb 13 16:06:32.248593 containerd[2052]: time="2025-02-13T16:06:32.248514006Z" level=info msg="CreateContainer within sandbox \"f9a09804f984bbb5169ab29bc10dbf2ce6ed04dd3221e52561169290b368028b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 16:06:32.249025 containerd[2052]: time="2025-02-13T16:06:32.248970558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-8xflc,Uid:cbbd3ef1-70d4-40bf-a12b-5c5f0a032526,Namespace:kube-system,Attempt:0,}" Feb 13 16:06:32.296585 containerd[2052]: time="2025-02-13T16:06:32.296511534Z" level=info msg="CreateContainer within sandbox \"f9a09804f984bbb5169ab29bc10dbf2ce6ed04dd3221e52561169290b368028b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b126be9b8ed3281b2af980dc86e50767d31c77a8ed7a8132a506c0d6421df36f\"" Feb 13 16:06:32.299289 containerd[2052]: time="2025-02-13T16:06:32.299211510Z" level=info msg="StartContainer for \"b126be9b8ed3281b2af980dc86e50767d31c77a8ed7a8132a506c0d6421df36f\"" Feb 13 16:06:32.304209 containerd[2052]: time="2025-02-13T16:06:32.303991974Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:06:32.304209 containerd[2052]: time="2025-02-13T16:06:32.304150242Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:06:32.305932 containerd[2052]: time="2025-02-13T16:06:32.304476966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:32.307567 containerd[2052]: time="2025-02-13T16:06:32.306460806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:32.418477 containerd[2052]: time="2025-02-13T16:06:32.418403647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-8xflc,Uid:cbbd3ef1-70d4-40bf-a12b-5c5f0a032526,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc92d1a23a7eaedcebf9baa19afd27c2ed93e281a2db3ca413369f0cc0143874\"" Feb 13 16:06:32.433222 containerd[2052]: time="2025-02-13T16:06:32.433146919Z" level=info msg="StartContainer for \"b126be9b8ed3281b2af980dc86e50767d31c77a8ed7a8132a506c0d6421df36f\" returns successfully" Feb 13 16:06:37.729613 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3935386006.mount: Deactivated successfully. 
Feb 13 16:06:40.264955 containerd[2052]: time="2025-02-13T16:06:40.264885290Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:40.267504 containerd[2052]: time="2025-02-13T16:06:40.267300950Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Feb 13 16:06:40.269060 containerd[2052]: time="2025-02-13T16:06:40.268922930Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:40.276690 containerd[2052]: time="2025-02-13T16:06:40.276625958Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.042784664s" Feb 13 16:06:40.277185 containerd[2052]: time="2025-02-13T16:06:40.277144670Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 13 16:06:40.282611 containerd[2052]: time="2025-02-13T16:06:40.281046290Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 16:06:40.286070 containerd[2052]: time="2025-02-13T16:06:40.285970850Z" level=info msg="CreateContainer within sandbox \"9e236e1130b50a46fb4326927e14d389785df3ee59acae0082a008fcc298d530\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 16:06:40.312863 containerd[2052]: time="2025-02-13T16:06:40.312791918Z" level=info msg="CreateContainer within sandbox \"9e236e1130b50a46fb4326927e14d389785df3ee59acae0082a008fcc298d530\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"eaf6c3f32e4df4d6b652a2633b94aa43460119c9396859243eea4d2bbc6ee24b\"" Feb 13 16:06:40.314896 containerd[2052]: time="2025-02-13T16:06:40.314820854Z" level=info msg="StartContainer for \"eaf6c3f32e4df4d6b652a2633b94aa43460119c9396859243eea4d2bbc6ee24b\"" Feb 13 16:06:40.407068 containerd[2052]: time="2025-02-13T16:06:40.406882490Z" level=info msg="StartContainer for \"eaf6c3f32e4df4d6b652a2633b94aa43460119c9396859243eea4d2bbc6ee24b\" returns successfully" Feb 13 16:06:41.304081 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eaf6c3f32e4df4d6b652a2633b94aa43460119c9396859243eea4d2bbc6ee24b-rootfs.mount: Deactivated successfully. 
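The cilium image pull that began at 16:06:32 completes above in a reported 8.042784664s, and the two containerd `time=` stamps bracket it almost exactly. A quick check of that arithmetic (a sketch; the stamps are log-emission times, so the computed gap only approximates containerd's internal timer):

```python
from datetime import datetime

FMT = "%Y-%m-%dT%H:%M:%S.%f%z"  # datetime's %f accepts at most 6 digits

def parse(ts: str) -> datetime:
    # containerd logs 9 fractional digits plus "Z": trim ns -> us.
    head, frac = ts.split(".")
    return datetime.strptime(f"{head}.{frac[:6]}+0000", FMT)

start = parse("2025-02-13T16:06:32.233653194Z")  # PullImage request above
done  = parse("2025-02-13T16:06:40.276625958Z")  # Pulled event above
print(done - start)  # 0:00:08.042972 -- ~0.2 ms above the logged 8.042784664s
```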
Feb 13 16:06:41.308667 kubelet[3663]: I0213 16:06:41.306309 3663 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-5fvvx" podStartSLOduration=10.306174927 podStartE2EDuration="10.306174927s" podCreationTimestamp="2025-02-13 16:06:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:06:33.263506147 +0000 UTC m=+14.374696021" watchObservedRunningTime="2025-02-13 16:06:41.306174927 +0000 UTC m=+22.417364801" Feb 13 16:06:41.642520 containerd[2052]: time="2025-02-13T16:06:41.642303676Z" level=info msg="shim disconnected" id=eaf6c3f32e4df4d6b652a2633b94aa43460119c9396859243eea4d2bbc6ee24b namespace=k8s.io Feb 13 16:06:41.643249 containerd[2052]: time="2025-02-13T16:06:41.643053400Z" level=warning msg="cleaning up after shim disconnected" id=eaf6c3f32e4df4d6b652a2633b94aa43460119c9396859243eea4d2bbc6ee24b namespace=k8s.io Feb 13 16:06:41.643249 containerd[2052]: time="2025-02-13T16:06:41.643172308Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:06:41.665953 containerd[2052]: time="2025-02-13T16:06:41.665842780Z" level=warning msg="cleanup warnings time=\"2025-02-13T16:06:41Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 16:06:42.182165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3289988328.mount: Deactivated successfully. Feb 13 16:06:42.297531 containerd[2052]: time="2025-02-13T16:06:42.297458920Z" level=info msg="CreateContainer within sandbox \"9e236e1130b50a46fb4326927e14d389785df3ee59acae0082a008fcc298d530\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 16:06:42.338606 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1389180244.mount: Deactivated successfully. Feb 13 16:06:42.346413 containerd[2052]: time="2025-02-13T16:06:42.346339096Z" level=info msg="CreateContainer within sandbox \"9e236e1130b50a46fb4326927e14d389785df3ee59acae0082a008fcc298d530\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ba420b258e5b0c3917de5b512c17756d08857bea1c468b0906887c945b329f24\"" Feb 13 16:06:42.348553 containerd[2052]: time="2025-02-13T16:06:42.348208456Z" level=info msg="StartContainer for \"ba420b258e5b0c3917de5b512c17756d08857bea1c468b0906887c945b329f24\"" Feb 13 16:06:42.430846 systemd[1]: run-containerd-runc-k8s.io-ba420b258e5b0c3917de5b512c17756d08857bea1c468b0906887c945b329f24-runc.AnU73N.mount: Deactivated successfully. Feb 13 16:06:42.502320 containerd[2052]: time="2025-02-13T16:06:42.500943941Z" level=info msg="StartContainer for \"ba420b258e5b0c3917de5b512c17756d08857bea1c468b0906887c945b329f24\" returns successfully" Feb 13 16:06:42.522545 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 16:06:42.525194 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 16:06:42.525316 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 16:06:42.535306 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 16:06:42.588152 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
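The cluster of shim messages above is the exit signature of a short-lived container: the mount-cgroup step finishes, its shim disconnects, containerd cleans up (including an apparently benign "failed to remove runc container ... exit status 255" warning), and systemd drops the task's rootfs mount. The same pattern repeats below for apply-sysctl-overwrites, mount-bpf-fs and clean-cilium-state, which read as cilium's sequential init steps. A sketch tallying those exits per container id, assuming `journal` holds the raw text:

```python
import re
from collections import Counter

# One "shim disconnected" per id is the expected pattern for
# deliberately short-lived containers like the cilium init steps.
SHIM = re.compile(r'msg="shim disconnected" id=([0-9a-f]+)')

def shim_exits(journal: str) -> Counter:
    return Counter(SHIM.findall(journal))

# e.g. shim_exits(journal)["eaf6c3f32e4d..."] == 1 for mount-cgroup
```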
Feb 13 16:06:42.641694 containerd[2052]: time="2025-02-13T16:06:42.641514989Z" level=info msg="shim disconnected" id=ba420b258e5b0c3917de5b512c17756d08857bea1c468b0906887c945b329f24 namespace=k8s.io Feb 13 16:06:42.642088 containerd[2052]: time="2025-02-13T16:06:42.641908181Z" level=warning msg="cleaning up after shim disconnected" id=ba420b258e5b0c3917de5b512c17756d08857bea1c468b0906887c945b329f24 namespace=k8s.io Feb 13 16:06:42.642088 containerd[2052]: time="2025-02-13T16:06:42.641936009Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:06:43.068672 containerd[2052]: time="2025-02-13T16:06:43.068596047Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:43.070627 containerd[2052]: time="2025-02-13T16:06:43.070558083Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Feb 13 16:06:43.073150 containerd[2052]: time="2025-02-13T16:06:43.073050543Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:43.076351 containerd[2052]: time="2025-02-13T16:06:43.076282347Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.795138821s" Feb 13 16:06:43.076780 containerd[2052]: time="2025-02-13T16:06:43.076348875Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 13 16:06:43.082919 containerd[2052]: time="2025-02-13T16:06:43.082868056Z" level=info msg="CreateContainer within sandbox \"cc92d1a23a7eaedcebf9baa19afd27c2ed93e281a2db3ca413369f0cc0143874\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 16:06:43.105641 containerd[2052]: time="2025-02-13T16:06:43.105511936Z" level=info msg="CreateContainer within sandbox \"cc92d1a23a7eaedcebf9baa19afd27c2ed93e281a2db3ca413369f0cc0143874\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"5749bbca1a3299eabdc142dc3be2d11d0ab42238e1b6db65a8d207334354fd4b\"" Feb 13 16:06:43.106514 containerd[2052]: time="2025-02-13T16:06:43.106394224Z" level=info msg="StartContainer for \"5749bbca1a3299eabdc142dc3be2d11d0ab42238e1b6db65a8d207334354fd4b\"" Feb 13 16:06:43.190676 containerd[2052]: time="2025-02-13T16:06:43.190501768Z" level=info msg="StartContainer for \"5749bbca1a3299eabdc142dc3be2d11d0ab42238e1b6db65a8d207334354fd4b\" returns successfully" Feb 13 16:06:43.313732 containerd[2052]: time="2025-02-13T16:06:43.313350089Z" level=info msg="CreateContainer within sandbox \"9e236e1130b50a46fb4326927e14d389785df3ee59acae0082a008fcc298d530\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 16:06:43.350942 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-ba420b258e5b0c3917de5b512c17756d08857bea1c468b0906887c945b329f24-rootfs.mount: Deactivated successfully. Feb 13 16:06:43.357473 containerd[2052]: time="2025-02-13T16:06:43.356896481Z" level=info msg="CreateContainer within sandbox \"9e236e1130b50a46fb4326927e14d389785df3ee59acae0082a008fcc298d530\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a68fa05f4b98fa2d19d21d212661b3060b9d9e8741b9bdb92e273ad506cb1de2\"" Feb 13 16:06:43.359933 containerd[2052]: time="2025-02-13T16:06:43.359751821Z" level=info msg="StartContainer for \"a68fa05f4b98fa2d19d21d212661b3060b9d9e8741b9bdb92e273ad506cb1de2\"" Feb 13 16:06:43.420160 kubelet[3663]: I0213 16:06:43.418890 3663 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-8xflc" podStartSLOduration=1.762819237 podStartE2EDuration="12.418805513s" podCreationTimestamp="2025-02-13 16:06:31 +0000 UTC" firstStartedPulling="2025-02-13 16:06:32.420654787 +0000 UTC m=+13.531844661" lastFinishedPulling="2025-02-13 16:06:43.076641063 +0000 UTC m=+24.187830937" observedRunningTime="2025-02-13 16:06:43.418559537 +0000 UTC m=+24.529749423" watchObservedRunningTime="2025-02-13 16:06:43.418805513 +0000 UTC m=+24.529995387" Feb 13 16:06:43.480421 systemd[1]: run-containerd-runc-k8s.io-a68fa05f4b98fa2d19d21d212661b3060b9d9e8741b9bdb92e273ad506cb1de2-runc.XZGAAo.mount: Deactivated successfully. Feb 13 16:06:43.612490 containerd[2052]: time="2025-02-13T16:06:43.611497914Z" level=info msg="StartContainer for \"a68fa05f4b98fa2d19d21d212661b3060b9d9e8741b9bdb92e273ad506cb1de2\" returns successfully" Feb 13 16:06:43.813551 containerd[2052]: time="2025-02-13T16:06:43.811504519Z" level=info msg="shim disconnected" id=a68fa05f4b98fa2d19d21d212661b3060b9d9e8741b9bdb92e273ad506cb1de2 namespace=k8s.io Feb 13 16:06:43.813551 containerd[2052]: time="2025-02-13T16:06:43.812498851Z" level=warning msg="cleaning up after shim disconnected" id=a68fa05f4b98fa2d19d21d212661b3060b9d9e8741b9bdb92e273ad506cb1de2 namespace=k8s.io Feb 13 16:06:43.813551 containerd[2052]: time="2025-02-13T16:06:43.812543215Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:06:44.335966 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a68fa05f4b98fa2d19d21d212661b3060b9d9e8741b9bdb92e273ad506cb1de2-rootfs.mount: Deactivated successfully. 
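The pod_startup_latency_tracker entry above encodes a relationship the two durations only imply: podStartSLOduration looks to be podStartE2EDuration minus the image-pull window (lastFinishedPulling − firstStartedPulling), and the cilium-operator-5cc964979-8xflc figures confirm it to the nanosecond. Checking with the exact logged values (both pull stamps fall inside the same 16:06 minute, so subtracting the seconds fields is safe):

```python
from decimal import Decimal

e2e  = Decimal("12.418805513")      # podStartE2EDuration from the log
pull = (Decimal("43.076641063")     # 16:06:43.076641063 lastFinishedPulling
        - Decimal("32.420654787"))  # 16:06:32.420654787 firstStartedPulling
print(e2e - pull)  # 1.762819237 == the logged podStartSLOduration
```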
Feb 13 16:06:44.362524 containerd[2052]: time="2025-02-13T16:06:44.362458122Z" level=info msg="CreateContainer within sandbox \"9e236e1130b50a46fb4326927e14d389785df3ee59acae0082a008fcc298d530\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 16:06:44.404127 containerd[2052]: time="2025-02-13T16:06:44.403916250Z" level=info msg="CreateContainer within sandbox \"9e236e1130b50a46fb4326927e14d389785df3ee59acae0082a008fcc298d530\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"50af7d626780b0e95ec03e58db38cea168015badcd3e8011056f2ed2a6353832\"" Feb 13 16:06:44.438841 containerd[2052]: time="2025-02-13T16:06:44.435384486Z" level=info msg="StartContainer for \"50af7d626780b0e95ec03e58db38cea168015badcd3e8011056f2ed2a6353832\"" Feb 13 16:06:44.723269 containerd[2052]: time="2025-02-13T16:06:44.723176900Z" level=info msg="StartContainer for \"50af7d626780b0e95ec03e58db38cea168015badcd3e8011056f2ed2a6353832\" returns successfully" Feb 13 16:06:44.766639 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-50af7d626780b0e95ec03e58db38cea168015badcd3e8011056f2ed2a6353832-rootfs.mount: Deactivated successfully. Feb 13 16:06:44.774768 containerd[2052]: time="2025-02-13T16:06:44.774525644Z" level=info msg="shim disconnected" id=50af7d626780b0e95ec03e58db38cea168015badcd3e8011056f2ed2a6353832 namespace=k8s.io Feb 13 16:06:44.775482 containerd[2052]: time="2025-02-13T16:06:44.775416836Z" level=warning msg="cleaning up after shim disconnected" id=50af7d626780b0e95ec03e58db38cea168015badcd3e8011056f2ed2a6353832 namespace=k8s.io Feb 13 16:06:44.775597 containerd[2052]: time="2025-02-13T16:06:44.775489352Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:06:45.351759 containerd[2052]: time="2025-02-13T16:06:45.351643315Z" level=info msg="CreateContainer within sandbox \"9e236e1130b50a46fb4326927e14d389785df3ee59acae0082a008fcc298d530\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 16:06:45.379786 containerd[2052]: time="2025-02-13T16:06:45.379717303Z" level=info msg="CreateContainer within sandbox \"9e236e1130b50a46fb4326927e14d389785df3ee59acae0082a008fcc298d530\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"27082640d8d9e781e81d1c6defb01d82b5ea87be3241085236372ebb48b60737\"" Feb 13 16:06:45.382146 containerd[2052]: time="2025-02-13T16:06:45.381605083Z" level=info msg="StartContainer for \"27082640d8d9e781e81d1c6defb01d82b5ea87be3241085236372ebb48b60737\"" Feb 13 16:06:45.489510 containerd[2052]: time="2025-02-13T16:06:45.489430471Z" level=info msg="StartContainer for \"27082640d8d9e781e81d1c6defb01d82b5ea87be3241085236372ebb48b60737\" returns successfully" Feb 13 16:06:45.663243 kubelet[3663]: I0213 16:06:45.661963 3663 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 16:06:45.724148 kubelet[3663]: I0213 16:06:45.718567 3663 topology_manager.go:215] "Topology Admit Handler" podUID="07013af2-9cdc-4e05-8e68-08a20b05136f" podNamespace="kube-system" podName="coredns-76f75df574-wvd6t" Feb 13 16:06:45.741620 kubelet[3663]: I0213 16:06:45.741557 3663 topology_manager.go:215] "Topology Admit Handler" podUID="ee5cbf4b-0b19-4900-98f6-c7a3afcd7f71" podNamespace="kube-system" podName="coredns-76f75df574-x8ksr" Feb 13 16:06:45.804171 kubelet[3663]: I0213 16:06:45.802698 3663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq6pv\" (UniqueName: 
\"kubernetes.io/projected/07013af2-9cdc-4e05-8e68-08a20b05136f-kube-api-access-gq6pv\") pod \"coredns-76f75df574-wvd6t\" (UID: \"07013af2-9cdc-4e05-8e68-08a20b05136f\") " pod="kube-system/coredns-76f75df574-wvd6t" Feb 13 16:06:45.804171 kubelet[3663]: I0213 16:06:45.802788 3663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr2jj\" (UniqueName: \"kubernetes.io/projected/ee5cbf4b-0b19-4900-98f6-c7a3afcd7f71-kube-api-access-rr2jj\") pod \"coredns-76f75df574-x8ksr\" (UID: \"ee5cbf4b-0b19-4900-98f6-c7a3afcd7f71\") " pod="kube-system/coredns-76f75df574-x8ksr" Feb 13 16:06:45.804171 kubelet[3663]: I0213 16:06:45.802915 3663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/07013af2-9cdc-4e05-8e68-08a20b05136f-config-volume\") pod \"coredns-76f75df574-wvd6t\" (UID: \"07013af2-9cdc-4e05-8e68-08a20b05136f\") " pod="kube-system/coredns-76f75df574-wvd6t" Feb 13 16:06:45.804171 kubelet[3663]: I0213 16:06:45.802974 3663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee5cbf4b-0b19-4900-98f6-c7a3afcd7f71-config-volume\") pod \"coredns-76f75df574-x8ksr\" (UID: \"ee5cbf4b-0b19-4900-98f6-c7a3afcd7f71\") " pod="kube-system/coredns-76f75df574-x8ksr" Feb 13 16:06:46.039250 containerd[2052]: time="2025-02-13T16:06:46.039176118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-wvd6t,Uid:07013af2-9cdc-4e05-8e68-08a20b05136f,Namespace:kube-system,Attempt:0,}" Feb 13 16:06:46.059482 containerd[2052]: time="2025-02-13T16:06:46.059410434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-x8ksr,Uid:ee5cbf4b-0b19-4900-98f6-c7a3afcd7f71,Namespace:kube-system,Attempt:0,}" Feb 13 16:06:46.430499 kubelet[3663]: I0213 16:06:46.428623 3663 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-t87xp" podStartSLOduration=7.379265688 podStartE2EDuration="15.428559908s" podCreationTimestamp="2025-02-13 16:06:31 +0000 UTC" firstStartedPulling="2025-02-13 16:06:32.231507174 +0000 UTC m=+13.342697048" lastFinishedPulling="2025-02-13 16:06:40.280801382 +0000 UTC m=+21.391991268" observedRunningTime="2025-02-13 16:06:46.425039216 +0000 UTC m=+27.536229114" watchObservedRunningTime="2025-02-13 16:06:46.428559908 +0000 UTC m=+27.539749782" Feb 13 16:06:48.339085 systemd-networkd[1603]: cilium_host: Link UP Feb 13 16:06:48.341126 (udev-worker)[4451]: Network interface NamePolicy= disabled on kernel command line. Feb 13 16:06:48.342459 systemd-networkd[1603]: cilium_net: Link UP Feb 13 16:06:48.343284 systemd-networkd[1603]: cilium_net: Gained carrier Feb 13 16:06:48.343619 systemd-networkd[1603]: cilium_host: Gained carrier Feb 13 16:06:48.346533 (udev-worker)[4450]: Network interface NamePolicy= disabled on kernel command line. Feb 13 16:06:48.367747 systemd-networkd[1603]: cilium_host: Gained IPv6LL Feb 13 16:06:48.403314 systemd-networkd[1603]: cilium_net: Gained IPv6LL Feb 13 16:06:48.520201 (udev-worker)[4494]: Network interface NamePolicy= disabled on kernel command line. 
Feb 13 16:06:48.531245 systemd-networkd[1603]: cilium_vxlan: Link UP Feb 13 16:06:48.531260 systemd-networkd[1603]: cilium_vxlan: Gained carrier Feb 13 16:06:49.020364 kernel: NET: Registered PF_ALG protocol family Feb 13 16:06:50.096909 systemd-networkd[1603]: cilium_vxlan: Gained IPv6LL Feb 13 16:06:50.371888 (udev-worker)[4496]: Network interface NamePolicy= disabled on kernel command line. Feb 13 16:06:50.388175 systemd-networkd[1603]: lxc_health: Link UP Feb 13 16:06:50.388873 systemd-networkd[1603]: lxc_health: Gained carrier Feb 13 16:06:50.672544 systemd-networkd[1603]: lxc2b0404f2299a: Link UP Feb 13 16:06:50.677541 kernel: eth0: renamed from tmp63b22 Feb 13 16:06:50.681202 systemd-networkd[1603]: lxc2b0404f2299a: Gained carrier Feb 13 16:06:51.140196 systemd-networkd[1603]: lxc3329762932c3: Link UP Feb 13 16:06:51.147170 kernel: eth0: renamed from tmp10ef7 Feb 13 16:06:51.159515 systemd-networkd[1603]: lxc3329762932c3: Gained carrier Feb 13 16:06:51.824364 systemd-networkd[1603]: lxc_health: Gained IPv6LL Feb 13 16:06:52.656399 systemd-networkd[1603]: lxc2b0404f2299a: Gained IPv6LL Feb 13 16:06:53.040807 systemd-networkd[1603]: lxc3329762932c3: Gained IPv6LL Feb 13 16:06:55.657417 ntpd[2011]: Listen normally on 6 cilium_host 192.168.0.242:123 Feb 13 16:06:55.658730 ntpd[2011]: 13 Feb 16:06:55 ntpd[2011]: Listen normally on 6 cilium_host 192.168.0.242:123 Feb 13 16:06:55.658730 ntpd[2011]: 13 Feb 16:06:55 ntpd[2011]: Listen normally on 7 cilium_net [fe80::10d3:b9ff:fecd:3e02%4]:123 Feb 13 16:06:55.658730 ntpd[2011]: 13 Feb 16:06:55 ntpd[2011]: Listen normally on 8 cilium_host [fe80::cc0c:7eff:fe77:5159%5]:123 Feb 13 16:06:55.658730 ntpd[2011]: 13 Feb 16:06:55 ntpd[2011]: Listen normally on 9 cilium_vxlan [fe80::f6:c5ff:fe47:2d7e%6]:123 Feb 13 16:06:55.658730 ntpd[2011]: 13 Feb 16:06:55 ntpd[2011]: Listen normally on 10 lxc_health [fe80::44ca:f2ff:fe9d:5296%8]:123 Feb 13 16:06:55.658730 ntpd[2011]: 13 Feb 16:06:55 ntpd[2011]: Listen normally on 11 lxc2b0404f2299a [fe80::5c0f:f4ff:fe3a:12fa%10]:123 Feb 13 16:06:55.658730 ntpd[2011]: 13 Feb 16:06:55 ntpd[2011]: Listen normally on 12 lxc3329762932c3 [fe80::84a:15ff:feba:b7cf%12]:123 Feb 13 16:06:55.657545 ntpd[2011]: Listen normally on 7 cilium_net [fe80::10d3:b9ff:fecd:3e02%4]:123 Feb 13 16:06:55.657630 ntpd[2011]: Listen normally on 8 cilium_host [fe80::cc0c:7eff:fe77:5159%5]:123 Feb 13 16:06:55.657700 ntpd[2011]: Listen normally on 9 cilium_vxlan [fe80::f6:c5ff:fe47:2d7e%6]:123 Feb 13 16:06:55.657768 ntpd[2011]: Listen normally on 10 lxc_health [fe80::44ca:f2ff:fe9d:5296%8]:123 Feb 13 16:06:55.657835 ntpd[2011]: Listen normally on 11 lxc2b0404f2299a [fe80::5c0f:f4ff:fe3a:12fa%10]:123 Feb 13 16:06:55.657903 ntpd[2011]: Listen normally on 12 lxc3329762932c3 [fe80::84a:15ff:feba:b7cf%12]:123 Feb 13 16:06:59.080168 containerd[2052]: time="2025-02-13T16:06:59.073855495Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:06:59.084273 containerd[2052]: time="2025-02-13T16:06:59.081687895Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:06:59.084273 containerd[2052]: time="2025-02-13T16:06:59.081746755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:59.084273 containerd[2052]: time="2025-02-13T16:06:59.081947635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:59.239162 containerd[2052]: time="2025-02-13T16:06:59.236746976Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:06:59.239162 containerd[2052]: time="2025-02-13T16:06:59.236841116Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:06:59.239162 containerd[2052]: time="2025-02-13T16:06:59.236867576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:59.239162 containerd[2052]: time="2025-02-13T16:06:59.237049652Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:59.293869 containerd[2052]: time="2025-02-13T16:06:59.293760368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-x8ksr,Uid:ee5cbf4b-0b19-4900-98f6-c7a3afcd7f71,Namespace:kube-system,Attempt:0,} returns sandbox id \"63b22abd08a26aadad74761faac29d32218965800d51c930edacbe80542d9a1c\"" Feb 13 16:06:59.313994 containerd[2052]: time="2025-02-13T16:06:59.313915016Z" level=info msg="CreateContainer within sandbox \"63b22abd08a26aadad74761faac29d32218965800d51c930edacbe80542d9a1c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 16:06:59.364965 containerd[2052]: time="2025-02-13T16:06:59.364724012Z" level=info msg="CreateContainer within sandbox \"63b22abd08a26aadad74761faac29d32218965800d51c930edacbe80542d9a1c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a6c749fa109fc4bd1b5d7fb2c7df68c23403f8869864f3166799bc7977d82303\"" Feb 13 16:06:59.372959 containerd[2052]: time="2025-02-13T16:06:59.372773240Z" level=info msg="StartContainer for \"a6c749fa109fc4bd1b5d7fb2c7df68c23403f8869864f3166799bc7977d82303\"" Feb 13 16:06:59.432135 containerd[2052]: time="2025-02-13T16:06:59.431009877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-wvd6t,Uid:07013af2-9cdc-4e05-8e68-08a20b05136f,Namespace:kube-system,Attempt:0,} returns sandbox id \"10ef752ee6b8572bc011e5996e71ac221d3ba2ea2d0f0bcc1adb068c8b6f2f02\"" Feb 13 16:06:59.454136 containerd[2052]: time="2025-02-13T16:06:59.451070745Z" level=info msg="CreateContainer within sandbox \"10ef752ee6b8572bc011e5996e71ac221d3ba2ea2d0f0bcc1adb068c8b6f2f02\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 16:06:59.486919 containerd[2052]: time="2025-02-13T16:06:59.486844713Z" level=info msg="CreateContainer within sandbox \"10ef752ee6b8572bc011e5996e71ac221d3ba2ea2d0f0bcc1adb068c8b6f2f02\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"59d3ad83a7b70ff2813dde3c675789d8815d7aaa5783b9be0bedafd50701171a\"" Feb 13 16:06:59.488858 containerd[2052]: time="2025-02-13T16:06:59.488441601Z" level=info msg="StartContainer for \"59d3ad83a7b70ff2813dde3c675789d8815d7aaa5783b9be0bedafd50701171a\"" Feb 13 16:06:59.563157 containerd[2052]: time="2025-02-13T16:06:59.562808241Z" level=info msg="StartContainer for \"a6c749fa109fc4bd1b5d7fb2c7df68c23403f8869864f3166799bc7977d82303\" returns successfully" Feb 13 16:06:59.644065 containerd[2052]: 
time="2025-02-13T16:06:59.639656554Z" level=info msg="StartContainer for \"59d3ad83a7b70ff2813dde3c675789d8815d7aaa5783b9be0bedafd50701171a\" returns successfully" Feb 13 16:07:00.499253 kubelet[3663]: I0213 16:07:00.497923 3663 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-wvd6t" podStartSLOduration=29.497861578 podStartE2EDuration="29.497861578s" podCreationTimestamp="2025-02-13 16:06:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:07:00.47534155 +0000 UTC m=+41.586531460" watchObservedRunningTime="2025-02-13 16:07:00.497861578 +0000 UTC m=+41.609051452" Feb 13 16:07:00.502664 kubelet[3663]: I0213 16:07:00.500954 3663 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-x8ksr" podStartSLOduration=29.500865454 podStartE2EDuration="29.500865454s" podCreationTimestamp="2025-02-13 16:06:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:07:00.495803542 +0000 UTC m=+41.606993440" watchObservedRunningTime="2025-02-13 16:07:00.500865454 +0000 UTC m=+41.612055436" Feb 13 16:07:03.813595 systemd[1]: Started sshd@7-172.31.24.10:22-139.178.68.195:44700.service - OpenSSH per-connection server daemon (139.178.68.195:44700). Feb 13 16:07:03.998970 sshd[5023]: Accepted publickey for core from 139.178.68.195 port 44700 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:07:04.001627 sshd[5023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:07:04.012753 systemd-logind[2030]: New session 8 of user core. Feb 13 16:07:04.018725 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 16:07:04.288548 sshd[5023]: pam_unix(sshd:session): session closed for user core Feb 13 16:07:04.295589 systemd-logind[2030]: Session 8 logged out. Waiting for processes to exit. Feb 13 16:07:04.297783 systemd[1]: sshd@7-172.31.24.10:22-139.178.68.195:44700.service: Deactivated successfully. Feb 13 16:07:04.302574 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 16:07:04.305382 systemd-logind[2030]: Removed session 8. Feb 13 16:07:09.320047 systemd[1]: Started sshd@8-172.31.24.10:22-139.178.68.195:48920.service - OpenSSH per-connection server daemon (139.178.68.195:48920). Feb 13 16:07:09.501554 sshd[5038]: Accepted publickey for core from 139.178.68.195 port 48920 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:07:09.504394 sshd[5038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:07:09.512961 systemd-logind[2030]: New session 9 of user core. Feb 13 16:07:09.522562 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 16:07:09.787437 sshd[5038]: pam_unix(sshd:session): session closed for user core Feb 13 16:07:09.794335 systemd-logind[2030]: Session 9 logged out. Waiting for processes to exit. Feb 13 16:07:09.796257 systemd[1]: sshd@8-172.31.24.10:22-139.178.68.195:48920.service: Deactivated successfully. Feb 13 16:07:09.801761 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 16:07:09.803795 systemd-logind[2030]: Removed session 9. Feb 13 16:07:14.820605 systemd[1]: Started sshd@9-172.31.24.10:22-139.178.68.195:48936.service - OpenSSH per-connection server daemon (139.178.68.195:48936). 
Feb 13 16:07:15.002816 sshd[5052]: Accepted publickey for core from 139.178.68.195 port 48936 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:07:15.005889 sshd[5052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:07:15.013979 systemd-logind[2030]: New session 10 of user core. Feb 13 16:07:15.024716 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 16:07:15.269382 sshd[5052]: pam_unix(sshd:session): session closed for user core Feb 13 16:07:15.276059 systemd[1]: sshd@9-172.31.24.10:22-139.178.68.195:48936.service: Deactivated successfully. Feb 13 16:07:15.281745 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 16:07:15.282893 systemd-logind[2030]: Session 10 logged out. Waiting for processes to exit. Feb 13 16:07:15.287372 systemd-logind[2030]: Removed session 10. Feb 13 16:07:20.300662 systemd[1]: Started sshd@10-172.31.24.10:22-139.178.68.195:39342.service - OpenSSH per-connection server daemon (139.178.68.195:39342). Feb 13 16:07:20.481619 sshd[5069]: Accepted publickey for core from 139.178.68.195 port 39342 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:07:20.484257 sshd[5069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:07:20.492707 systemd-logind[2030]: New session 11 of user core. Feb 13 16:07:20.499694 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 16:07:20.758321 sshd[5069]: pam_unix(sshd:session): session closed for user core Feb 13 16:07:20.763685 systemd[1]: sshd@10-172.31.24.10:22-139.178.68.195:39342.service: Deactivated successfully. Feb 13 16:07:20.771693 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 16:07:20.776555 systemd-logind[2030]: Session 11 logged out. Waiting for processes to exit. Feb 13 16:07:20.778222 systemd-logind[2030]: Removed session 11. Feb 13 16:07:20.789613 systemd[1]: Started sshd@11-172.31.24.10:22-139.178.68.195:39356.service - OpenSSH per-connection server daemon (139.178.68.195:39356). Feb 13 16:07:20.974711 sshd[5084]: Accepted publickey for core from 139.178.68.195 port 39356 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:07:20.978090 sshd[5084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:07:20.988057 systemd-logind[2030]: New session 12 of user core. Feb 13 16:07:21.000692 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 16:07:21.316834 sshd[5084]: pam_unix(sshd:session): session closed for user core Feb 13 16:07:21.330394 systemd[1]: sshd@11-172.31.24.10:22-139.178.68.195:39356.service: Deactivated successfully. Feb 13 16:07:21.349552 systemd-logind[2030]: Session 12 logged out. Waiting for processes to exit. Feb 13 16:07:21.350602 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 16:07:21.368587 systemd[1]: Started sshd@12-172.31.24.10:22-139.178.68.195:39372.service - OpenSSH per-connection server daemon (139.178.68.195:39372). Feb 13 16:07:21.370195 systemd-logind[2030]: Removed session 12. Feb 13 16:07:21.545812 sshd[5096]: Accepted publickey for core from 139.178.68.195 port 39372 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:07:21.548515 sshd[5096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:07:21.557594 systemd-logind[2030]: New session 13 of user core. Feb 13 16:07:21.565629 systemd[1]: Started session-13.scope - Session 13 of User core. 
Feb 13 16:07:21.817442 sshd[5096]: pam_unix(sshd:session): session closed for user core Feb 13 16:07:21.825031 systemd[1]: sshd@12-172.31.24.10:22-139.178.68.195:39372.service: Deactivated successfully. Feb 13 16:07:21.832810 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 16:07:21.835054 systemd-logind[2030]: Session 13 logged out. Waiting for processes to exit. Feb 13 16:07:21.836930 systemd-logind[2030]: Removed session 13. Feb 13 16:07:26.849603 systemd[1]: Started sshd@13-172.31.24.10:22-139.178.68.195:32984.service - OpenSSH per-connection server daemon (139.178.68.195:32984). Feb 13 16:07:27.033799 sshd[5110]: Accepted publickey for core from 139.178.68.195 port 32984 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:07:27.036543 sshd[5110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:07:27.045035 systemd-logind[2030]: New session 14 of user core. Feb 13 16:07:27.050706 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 16:07:27.302235 sshd[5110]: pam_unix(sshd:session): session closed for user core Feb 13 16:07:27.309408 systemd[1]: sshd@13-172.31.24.10:22-139.178.68.195:32984.service: Deactivated successfully. Feb 13 16:07:27.316193 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 16:07:27.316507 systemd-logind[2030]: Session 14 logged out. Waiting for processes to exit. Feb 13 16:07:27.319488 systemd-logind[2030]: Removed session 14. Feb 13 16:07:32.332631 systemd[1]: Started sshd@14-172.31.24.10:22-139.178.68.195:33000.service - OpenSSH per-connection server daemon (139.178.68.195:33000). Feb 13 16:07:32.512523 sshd[5125]: Accepted publickey for core from 139.178.68.195 port 33000 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:07:32.515320 sshd[5125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:07:32.523066 systemd-logind[2030]: New session 15 of user core. Feb 13 16:07:32.529573 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 16:07:32.774978 sshd[5125]: pam_unix(sshd:session): session closed for user core Feb 13 16:07:32.780416 systemd-logind[2030]: Session 15 logged out. Waiting for processes to exit. Feb 13 16:07:32.783926 systemd[1]: sshd@14-172.31.24.10:22-139.178.68.195:33000.service: Deactivated successfully. Feb 13 16:07:32.790439 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 16:07:32.793806 systemd-logind[2030]: Removed session 15. Feb 13 16:07:37.813766 systemd[1]: Started sshd@15-172.31.24.10:22-139.178.68.195:46202.service - OpenSSH per-connection server daemon (139.178.68.195:46202). Feb 13 16:07:37.981900 sshd[5141]: Accepted publickey for core from 139.178.68.195 port 46202 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:07:37.984553 sshd[5141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:07:37.993123 systemd-logind[2030]: New session 16 of user core. Feb 13 16:07:37.999733 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 16:07:38.249325 sshd[5141]: pam_unix(sshd:session): session closed for user core Feb 13 16:07:38.255901 systemd[1]: sshd@15-172.31.24.10:22-139.178.68.195:46202.service: Deactivated successfully. Feb 13 16:07:38.262263 systemd-logind[2030]: Session 16 logged out. Waiting for processes to exit. Feb 13 16:07:38.263324 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 16:07:38.266748 systemd-logind[2030]: Removed session 16. 
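The sshd entries from here on repeat one open/close cycle per session. Pairing them by process id yields per-session durations; a sketch, assuming `journal` holds the raw text and borrowing the year from the containerd stamps, since the syslog-style stamps omit it:

```python
import re
from datetime import datetime

STAMP = r'(\w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d{6})'
OPEN  = re.compile(STAMP + r' sshd\[(\d+)\]: pam_unix\(sshd:session\): session opened')
CLOSE = re.compile(STAMP + r' sshd\[(\d+)\]: pam_unix\(sshd:session\): session closed')

def parse(ts: str) -> datetime:
    return datetime.strptime(f"2025 {ts}", "%Y %b %d %H:%M:%S.%f")

def session_durations(journal: str) -> None:
    opened = {pid: parse(ts) for ts, pid in OPEN.findall(journal)}
    for ts, pid in CLOSE.findall(journal):
        if pid in opened:
            print(f"sshd[{pid}]: {(parse(ts) - opened[pid]).total_seconds():.3f}s")

# e.g. sshd[5023] (session 8): opened 16:07:04.001627,
# closed 16:07:04.288548 -> printed as 0.287s
```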
Feb 13 16:07:43.279618 systemd[1]: Started sshd@16-172.31.24.10:22-139.178.68.195:46204.service - OpenSSH per-connection server daemon (139.178.68.195:46204). Feb 13 16:07:43.452464 sshd[5156]: Accepted publickey for core from 139.178.68.195 port 46204 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:07:43.455130 sshd[5156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:07:43.463972 systemd-logind[2030]: New session 17 of user core. Feb 13 16:07:43.469804 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 16:07:43.717443 sshd[5156]: pam_unix(sshd:session): session closed for user core Feb 13 16:07:43.722575 systemd[1]: sshd@16-172.31.24.10:22-139.178.68.195:46204.service: Deactivated successfully. Feb 13 16:07:43.731842 systemd-logind[2030]: Session 17 logged out. Waiting for processes to exit. Feb 13 16:07:43.732754 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 16:07:43.735842 systemd-logind[2030]: Removed session 17. Feb 13 16:07:43.747594 systemd[1]: Started sshd@17-172.31.24.10:22-139.178.68.195:46206.service - OpenSSH per-connection server daemon (139.178.68.195:46206). Feb 13 16:07:43.922727 sshd[5169]: Accepted publickey for core from 139.178.68.195 port 46206 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:07:43.925420 sshd[5169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:07:43.935153 systemd-logind[2030]: New session 18 of user core. Feb 13 16:07:43.941898 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 16:07:44.232254 sshd[5169]: pam_unix(sshd:session): session closed for user core Feb 13 16:07:44.239642 systemd[1]: sshd@17-172.31.24.10:22-139.178.68.195:46206.service: Deactivated successfully. Feb 13 16:07:44.246890 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 16:07:44.248694 systemd-logind[2030]: Session 18 logged out. Waiting for processes to exit. Feb 13 16:07:44.250488 systemd-logind[2030]: Removed session 18. Feb 13 16:07:44.263646 systemd[1]: Started sshd@18-172.31.24.10:22-139.178.68.195:46222.service - OpenSSH per-connection server daemon (139.178.68.195:46222). Feb 13 16:07:44.445491 sshd[5181]: Accepted publickey for core from 139.178.68.195 port 46222 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:07:44.448138 sshd[5181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:07:44.457200 systemd-logind[2030]: New session 19 of user core. Feb 13 16:07:44.463244 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 16:07:47.014687 sshd[5181]: pam_unix(sshd:session): session closed for user core Feb 13 16:07:47.028765 systemd[1]: sshd@18-172.31.24.10:22-139.178.68.195:46222.service: Deactivated successfully. Feb 13 16:07:47.044866 systemd-logind[2030]: Session 19 logged out. Waiting for processes to exit. Feb 13 16:07:47.046808 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 16:07:47.059361 systemd[1]: Started sshd@19-172.31.24.10:22-139.178.68.195:46256.service - OpenSSH per-connection server daemon (139.178.68.195:46256). Feb 13 16:07:47.065583 systemd-logind[2030]: Removed session 19. 
Feb 13 16:07:47.243554 sshd[5200]: Accepted publickey for core from 139.178.68.195 port 46256 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:07:47.246172 sshd[5200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:07:47.254269 systemd-logind[2030]: New session 20 of user core. Feb 13 16:07:47.266078 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 16:07:47.755934 sshd[5200]: pam_unix(sshd:session): session closed for user core Feb 13 16:07:47.762976 systemd[1]: sshd@19-172.31.24.10:22-139.178.68.195:46256.service: Deactivated successfully. Feb 13 16:07:47.770195 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 16:07:47.772266 systemd-logind[2030]: Session 20 logged out. Waiting for processes to exit. Feb 13 16:07:47.774501 systemd-logind[2030]: Removed session 20. Feb 13 16:07:47.786634 systemd[1]: Started sshd@20-172.31.24.10:22-139.178.68.195:46266.service - OpenSSH per-connection server daemon (139.178.68.195:46266). Feb 13 16:07:47.965858 sshd[5212]: Accepted publickey for core from 139.178.68.195 port 46266 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:07:47.968760 sshd[5212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:07:47.976469 systemd-logind[2030]: New session 21 of user core. Feb 13 16:07:47.981879 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 16:07:48.225857 sshd[5212]: pam_unix(sshd:session): session closed for user core Feb 13 16:07:48.232001 systemd[1]: sshd@20-172.31.24.10:22-139.178.68.195:46266.service: Deactivated successfully. Feb 13 16:07:48.240656 systemd-logind[2030]: Session 21 logged out. Waiting for processes to exit. Feb 13 16:07:48.242435 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 16:07:48.244481 systemd-logind[2030]: Removed session 21. Feb 13 16:07:53.258594 systemd[1]: Started sshd@21-172.31.24.10:22-139.178.68.195:46270.service - OpenSSH per-connection server daemon (139.178.68.195:46270). Feb 13 16:07:53.430657 sshd[5227]: Accepted publickey for core from 139.178.68.195 port 46270 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:07:53.433446 sshd[5227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:07:53.441906 systemd-logind[2030]: New session 22 of user core. Feb 13 16:07:53.448591 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 16:07:53.686723 sshd[5227]: pam_unix(sshd:session): session closed for user core Feb 13 16:07:53.693255 systemd-logind[2030]: Session 22 logged out. Waiting for processes to exit. Feb 13 16:07:53.693619 systemd[1]: sshd@21-172.31.24.10:22-139.178.68.195:46270.service: Deactivated successfully. Feb 13 16:07:53.704084 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 16:07:53.706813 systemd-logind[2030]: Removed session 22. Feb 13 16:07:58.719634 systemd[1]: Started sshd@22-172.31.24.10:22-139.178.68.195:52898.service - OpenSSH per-connection server daemon (139.178.68.195:52898). Feb 13 16:07:58.893520 sshd[5244]: Accepted publickey for core from 139.178.68.195 port 52898 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:07:58.893402 sshd[5244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:07:58.904535 systemd-logind[2030]: New session 23 of user core. Feb 13 16:07:58.912588 systemd[1]: Started session-23.scope - Session 23 of User core. 
Feb 13 16:07:59.172644 sshd[5244]: pam_unix(sshd:session): session closed for user core Feb 13 16:07:59.179897 systemd[1]: sshd@22-172.31.24.10:22-139.178.68.195:52898.service: Deactivated successfully. Feb 13 16:07:59.189650 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 16:07:59.191633 systemd-logind[2030]: Session 23 logged out. Waiting for processes to exit. Feb 13 16:07:59.193450 systemd-logind[2030]: Removed session 23. Feb 13 16:08:04.205019 systemd[1]: Started sshd@23-172.31.24.10:22-139.178.68.195:52910.service - OpenSSH per-connection server daemon (139.178.68.195:52910). Feb 13 16:08:04.378655 sshd[5261]: Accepted publickey for core from 139.178.68.195 port 52910 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:08:04.381594 sshd[5261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:08:04.390006 systemd-logind[2030]: New session 24 of user core. Feb 13 16:08:04.394982 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 16:08:04.634466 sshd[5261]: pam_unix(sshd:session): session closed for user core Feb 13 16:08:04.640319 systemd[1]: sshd@23-172.31.24.10:22-139.178.68.195:52910.service: Deactivated successfully. Feb 13 16:08:04.648553 systemd-logind[2030]: Session 24 logged out. Waiting for processes to exit. Feb 13 16:08:04.649629 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 16:08:04.653899 systemd-logind[2030]: Removed session 24. Feb 13 16:08:09.664679 systemd[1]: Started sshd@24-172.31.24.10:22-139.178.68.195:46914.service - OpenSSH per-connection server daemon (139.178.68.195:46914). Feb 13 16:08:09.842242 sshd[5274]: Accepted publickey for core from 139.178.68.195 port 46914 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:08:09.844787 sshd[5274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:08:09.852241 systemd-logind[2030]: New session 25 of user core. Feb 13 16:08:09.857648 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 16:08:10.107476 sshd[5274]: pam_unix(sshd:session): session closed for user core Feb 13 16:08:10.112219 systemd[1]: sshd@24-172.31.24.10:22-139.178.68.195:46914.service: Deactivated successfully. Feb 13 16:08:10.119430 systemd-logind[2030]: Session 25 logged out. Waiting for processes to exit. Feb 13 16:08:10.122633 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 16:08:10.125072 systemd-logind[2030]: Removed session 25. Feb 13 16:08:10.134660 systemd[1]: Started sshd@25-172.31.24.10:22-139.178.68.195:46916.service - OpenSSH per-connection server daemon (139.178.68.195:46916). Feb 13 16:08:10.314813 sshd[5288]: Accepted publickey for core from 139.178.68.195 port 46916 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:08:10.317192 sshd[5288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:08:10.325431 systemd-logind[2030]: New session 26 of user core. Feb 13 16:08:10.336066 systemd[1]: Started session-26.scope - Session 26 of User core. 
Feb 13 16:08:12.746562 containerd[2052]: time="2025-02-13T16:08:12.746340321Z" level=info msg="StopContainer for \"5749bbca1a3299eabdc142dc3be2d11d0ab42238e1b6db65a8d207334354fd4b\" with timeout 30 (s)" Feb 13 16:08:12.750360 containerd[2052]: time="2025-02-13T16:08:12.749945409Z" level=info msg="Stop container \"5749bbca1a3299eabdc142dc3be2d11d0ab42238e1b6db65a8d207334354fd4b\" with signal terminated" Feb 13 16:08:12.798354 systemd[1]: run-containerd-runc-k8s.io-27082640d8d9e781e81d1c6defb01d82b5ea87be3241085236372ebb48b60737-runc.my3L3Z.mount: Deactivated successfully. Feb 13 16:08:12.818699 containerd[2052]: time="2025-02-13T16:08:12.817673157Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 16:08:12.831012 containerd[2052]: time="2025-02-13T16:08:12.830933145Z" level=info msg="StopContainer for \"27082640d8d9e781e81d1c6defb01d82b5ea87be3241085236372ebb48b60737\" with timeout 2 (s)" Feb 13 16:08:12.832509 containerd[2052]: time="2025-02-13T16:08:12.832442025Z" level=info msg="Stop container \"27082640d8d9e781e81d1c6defb01d82b5ea87be3241085236372ebb48b60737\" with signal terminated" Feb 13 16:08:12.850334 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5749bbca1a3299eabdc142dc3be2d11d0ab42238e1b6db65a8d207334354fd4b-rootfs.mount: Deactivated successfully. Feb 13 16:08:12.867639 systemd-networkd[1603]: lxc_health: Link DOWN Feb 13 16:08:12.867659 systemd-networkd[1603]: lxc_health: Lost carrier Feb 13 16:08:12.889135 containerd[2052]: time="2025-02-13T16:08:12.885782422Z" level=info msg="shim disconnected" id=5749bbca1a3299eabdc142dc3be2d11d0ab42238e1b6db65a8d207334354fd4b namespace=k8s.io Feb 13 16:08:12.889135 containerd[2052]: time="2025-02-13T16:08:12.888447262Z" level=warning msg="cleaning up after shim disconnected" id=5749bbca1a3299eabdc142dc3be2d11d0ab42238e1b6db65a8d207334354fd4b namespace=k8s.io Feb 13 16:08:12.889135 containerd[2052]: time="2025-02-13T16:08:12.888476218Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:08:12.930904 containerd[2052]: time="2025-02-13T16:08:12.929179966Z" level=info msg="StopContainer for \"5749bbca1a3299eabdc142dc3be2d11d0ab42238e1b6db65a8d207334354fd4b\" returns successfully" Feb 13 16:08:12.931391 containerd[2052]: time="2025-02-13T16:08:12.931277950Z" level=info msg="StopPodSandbox for \"cc92d1a23a7eaedcebf9baa19afd27c2ed93e281a2db3ca413369f0cc0143874\"" Feb 13 16:08:12.931391 containerd[2052]: time="2025-02-13T16:08:12.931357366Z" level=info msg="Container to stop \"5749bbca1a3299eabdc142dc3be2d11d0ab42238e1b6db65a8d207334354fd4b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 16:08:12.936288 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cc92d1a23a7eaedcebf9baa19afd27c2ed93e281a2db3ca413369f0cc0143874-shm.mount: Deactivated successfully. Feb 13 16:08:12.946580 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-27082640d8d9e781e81d1c6defb01d82b5ea87be3241085236372ebb48b60737-rootfs.mount: Deactivated successfully. 
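The teardown above stops the operator container with a 30-second budget but the cilium agent with only 2 seconds, each signalled with SIGTERM ("signal terminated") first; presumably those budgets reflect the two pods' termination grace periods, though that mapping is not stated in the log. A sketch extracting the budgets exactly as logged:

```python
import re

STOP = re.compile(r'StopContainer for \\"([0-9a-f]+)\\" with timeout (\d+) \(s\)')

def stop_budgets(journal: str) -> dict[str, int]:
    return {cid: int(t) for cid, t in STOP.findall(journal)}

# Against the entries above: {'5749bbca1a32...': 30, '27082640d8d9...': 2}
```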
Feb 13 16:08:12.962144 containerd[2052]: time="2025-02-13T16:08:12.960505462Z" level=info msg="shim disconnected" id=27082640d8d9e781e81d1c6defb01d82b5ea87be3241085236372ebb48b60737 namespace=k8s.io Feb 13 16:08:12.962144 containerd[2052]: time="2025-02-13T16:08:12.960595498Z" level=warning msg="cleaning up after shim disconnected" id=27082640d8d9e781e81d1c6defb01d82b5ea87be3241085236372ebb48b60737 namespace=k8s.io Feb 13 16:08:12.962144 containerd[2052]: time="2025-02-13T16:08:12.960618070Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:08:13.005336 containerd[2052]: time="2025-02-13T16:08:13.004076766Z" level=info msg="StopContainer for \"27082640d8d9e781e81d1c6defb01d82b5ea87be3241085236372ebb48b60737\" returns successfully" Feb 13 16:08:13.006255 containerd[2052]: time="2025-02-13T16:08:13.005604042Z" level=info msg="StopPodSandbox for \"9e236e1130b50a46fb4326927e14d389785df3ee59acae0082a008fcc298d530\"" Feb 13 16:08:13.006255 containerd[2052]: time="2025-02-13T16:08:13.005665854Z" level=info msg="Container to stop \"ba420b258e5b0c3917de5b512c17756d08857bea1c468b0906887c945b329f24\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 16:08:13.006255 containerd[2052]: time="2025-02-13T16:08:13.005691378Z" level=info msg="Container to stop \"a68fa05f4b98fa2d19d21d212661b3060b9d9e8741b9bdb92e273ad506cb1de2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 16:08:13.006255 containerd[2052]: time="2025-02-13T16:08:13.005714190Z" level=info msg="Container to stop \"eaf6c3f32e4df4d6b652a2633b94aa43460119c9396859243eea4d2bbc6ee24b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 16:08:13.006255 containerd[2052]: time="2025-02-13T16:08:13.005753430Z" level=info msg="Container to stop \"50af7d626780b0e95ec03e58db38cea168015badcd3e8011056f2ed2a6353832\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 16:08:13.006255 containerd[2052]: time="2025-02-13T16:08:13.005784030Z" level=info msg="Container to stop \"27082640d8d9e781e81d1c6defb01d82b5ea87be3241085236372ebb48b60737\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 16:08:13.012131 containerd[2052]: time="2025-02-13T16:08:13.011830722Z" level=info msg="shim disconnected" id=cc92d1a23a7eaedcebf9baa19afd27c2ed93e281a2db3ca413369f0cc0143874 namespace=k8s.io Feb 13 16:08:13.012131 containerd[2052]: time="2025-02-13T16:08:13.011902542Z" level=warning msg="cleaning up after shim disconnected" id=cc92d1a23a7eaedcebf9baa19afd27c2ed93e281a2db3ca413369f0cc0143874 namespace=k8s.io Feb 13 16:08:13.012131 containerd[2052]: time="2025-02-13T16:08:13.011923326Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:08:13.045976 containerd[2052]: time="2025-02-13T16:08:13.045688974Z" level=info msg="TearDown network for sandbox \"cc92d1a23a7eaedcebf9baa19afd27c2ed93e281a2db3ca413369f0cc0143874\" successfully" Feb 13 16:08:13.045976 containerd[2052]: time="2025-02-13T16:08:13.045750570Z" level=info msg="StopPodSandbox for \"cc92d1a23a7eaedcebf9baa19afd27c2ed93e281a2db3ca413369f0cc0143874\" returns successfully" Feb 13 16:08:13.078782 containerd[2052]: time="2025-02-13T16:08:13.078664975Z" level=info msg="shim disconnected" id=9e236e1130b50a46fb4326927e14d389785df3ee59acae0082a008fcc298d530 namespace=k8s.io Feb 13 16:08:13.078782 containerd[2052]: time="2025-02-13T16:08:13.078740935Z" level=warning msg="cleaning up after shim disconnected" 
id=9e236e1130b50a46fb4326927e14d389785df3ee59acae0082a008fcc298d530 namespace=k8s.io Feb 13 16:08:13.078782 containerd[2052]: time="2025-02-13T16:08:13.078765883Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:08:13.107506 containerd[2052]: time="2025-02-13T16:08:13.107293963Z" level=info msg="TearDown network for sandbox \"9e236e1130b50a46fb4326927e14d389785df3ee59acae0082a008fcc298d530\" successfully" Feb 13 16:08:13.107506 containerd[2052]: time="2025-02-13T16:08:13.107367379Z" level=info msg="StopPodSandbox for \"9e236e1130b50a46fb4326927e14d389785df3ee59acae0082a008fcc298d530\" returns successfully" Feb 13 16:08:13.167134 kubelet[3663]: I0213 16:08:13.165268 3663 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/771a3d43-4f24-491b-8c40-ec927a06293c-host-proc-sys-kernel\") pod \"771a3d43-4f24-491b-8c40-ec927a06293c\" (UID: \"771a3d43-4f24-491b-8c40-ec927a06293c\") " Feb 13 16:08:13.167134 kubelet[3663]: I0213 16:08:13.165334 3663 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/771a3d43-4f24-491b-8c40-ec927a06293c-lib-modules\") pod \"771a3d43-4f24-491b-8c40-ec927a06293c\" (UID: \"771a3d43-4f24-491b-8c40-ec927a06293c\") " Feb 13 16:08:13.167134 kubelet[3663]: I0213 16:08:13.165379 3663 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/771a3d43-4f24-491b-8c40-ec927a06293c-host-proc-sys-net\") pod \"771a3d43-4f24-491b-8c40-ec927a06293c\" (UID: \"771a3d43-4f24-491b-8c40-ec927a06293c\") " Feb 13 16:08:13.167134 kubelet[3663]: I0213 16:08:13.165426 3663 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/771a3d43-4f24-491b-8c40-ec927a06293c-hubble-tls\") pod \"771a3d43-4f24-491b-8c40-ec927a06293c\" (UID: \"771a3d43-4f24-491b-8c40-ec927a06293c\") " Feb 13 16:08:13.167134 kubelet[3663]: I0213 16:08:13.165468 3663 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/771a3d43-4f24-491b-8c40-ec927a06293c-xtables-lock\") pod \"771a3d43-4f24-491b-8c40-ec927a06293c\" (UID: \"771a3d43-4f24-491b-8c40-ec927a06293c\") " Feb 13 16:08:13.167134 kubelet[3663]: I0213 16:08:13.165507 3663 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/771a3d43-4f24-491b-8c40-ec927a06293c-cilium-cgroup\") pod \"771a3d43-4f24-491b-8c40-ec927a06293c\" (UID: \"771a3d43-4f24-491b-8c40-ec927a06293c\") " Feb 13 16:08:13.168029 kubelet[3663]: I0213 16:08:13.165513 3663 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/771a3d43-4f24-491b-8c40-ec927a06293c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "771a3d43-4f24-491b-8c40-ec927a06293c" (UID: "771a3d43-4f24-491b-8c40-ec927a06293c"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 16:08:13.168029 kubelet[3663]: I0213 16:08:13.165556 3663 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/771a3d43-4f24-491b-8c40-ec927a06293c-clustermesh-secrets\") pod \"771a3d43-4f24-491b-8c40-ec927a06293c\" (UID: \"771a3d43-4f24-491b-8c40-ec927a06293c\") " Feb 13 16:08:13.168029 kubelet[3663]: I0213 16:08:13.165580 3663 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/771a3d43-4f24-491b-8c40-ec927a06293c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "771a3d43-4f24-491b-8c40-ec927a06293c" (UID: "771a3d43-4f24-491b-8c40-ec927a06293c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 16:08:13.168029 kubelet[3663]: I0213 16:08:13.165597 3663 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/771a3d43-4f24-491b-8c40-ec927a06293c-etc-cni-netd\") pod \"771a3d43-4f24-491b-8c40-ec927a06293c\" (UID: \"771a3d43-4f24-491b-8c40-ec927a06293c\") " Feb 13 16:08:13.168029 kubelet[3663]: I0213 16:08:13.165624 3663 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/771a3d43-4f24-491b-8c40-ec927a06293c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "771a3d43-4f24-491b-8c40-ec927a06293c" (UID: "771a3d43-4f24-491b-8c40-ec927a06293c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 16:08:13.168377 kubelet[3663]: I0213 16:08:13.165639 3663 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/771a3d43-4f24-491b-8c40-ec927a06293c-cni-path\") pod \"771a3d43-4f24-491b-8c40-ec927a06293c\" (UID: \"771a3d43-4f24-491b-8c40-ec927a06293c\") " Feb 13 16:08:13.168377 kubelet[3663]: I0213 16:08:13.165663 3663 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/771a3d43-4f24-491b-8c40-ec927a06293c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "771a3d43-4f24-491b-8c40-ec927a06293c" (UID: "771a3d43-4f24-491b-8c40-ec927a06293c"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 16:08:13.168377 kubelet[3663]: I0213 16:08:13.165716 3663 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/771a3d43-4f24-491b-8c40-ec927a06293c-hostproc\") pod \"771a3d43-4f24-491b-8c40-ec927a06293c\" (UID: \"771a3d43-4f24-491b-8c40-ec927a06293c\") " Feb 13 16:08:13.168377 kubelet[3663]: I0213 16:08:13.165757 3663 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/771a3d43-4f24-491b-8c40-ec927a06293c-bpf-maps\") pod \"771a3d43-4f24-491b-8c40-ec927a06293c\" (UID: \"771a3d43-4f24-491b-8c40-ec927a06293c\") " Feb 13 16:08:13.168377 kubelet[3663]: I0213 16:08:13.165801 3663 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/771a3d43-4f24-491b-8c40-ec927a06293c-cilium-config-path\") pod \"771a3d43-4f24-491b-8c40-ec927a06293c\" (UID: \"771a3d43-4f24-491b-8c40-ec927a06293c\") " Feb 13 16:08:13.168377 kubelet[3663]: I0213 16:08:13.165840 3663 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/771a3d43-4f24-491b-8c40-ec927a06293c-cilium-run\") pod \"771a3d43-4f24-491b-8c40-ec927a06293c\" (UID: \"771a3d43-4f24-491b-8c40-ec927a06293c\") " Feb 13 16:08:13.168746 kubelet[3663]: I0213 16:08:13.165883 3663 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qz5n4\" (UniqueName: \"kubernetes.io/projected/771a3d43-4f24-491b-8c40-ec927a06293c-kube-api-access-qz5n4\") pod \"771a3d43-4f24-491b-8c40-ec927a06293c\" (UID: \"771a3d43-4f24-491b-8c40-ec927a06293c\") " Feb 13 16:08:13.168746 kubelet[3663]: I0213 16:08:13.165930 3663 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cbbd3ef1-70d4-40bf-a12b-5c5f0a032526-cilium-config-path\") pod \"cbbd3ef1-70d4-40bf-a12b-5c5f0a032526\" (UID: \"cbbd3ef1-70d4-40bf-a12b-5c5f0a032526\") " Feb 13 16:08:13.168746 kubelet[3663]: I0213 16:08:13.165973 3663 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6q4qc\" (UniqueName: \"kubernetes.io/projected/cbbd3ef1-70d4-40bf-a12b-5c5f0a032526-kube-api-access-6q4qc\") pod \"cbbd3ef1-70d4-40bf-a12b-5c5f0a032526\" (UID: \"cbbd3ef1-70d4-40bf-a12b-5c5f0a032526\") " Feb 13 16:08:13.168746 kubelet[3663]: I0213 16:08:13.166031 3663 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/771a3d43-4f24-491b-8c40-ec927a06293c-host-proc-sys-kernel\") on node \"ip-172-31-24-10\" DevicePath \"\"" Feb 13 16:08:13.168746 kubelet[3663]: I0213 16:08:13.166058 3663 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/771a3d43-4f24-491b-8c40-ec927a06293c-lib-modules\") on node \"ip-172-31-24-10\" DevicePath \"\"" Feb 13 16:08:13.168746 kubelet[3663]: I0213 16:08:13.166085 3663 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/771a3d43-4f24-491b-8c40-ec927a06293c-host-proc-sys-net\") on node \"ip-172-31-24-10\" DevicePath \"\"" Feb 13 16:08:13.168746 kubelet[3663]: I0213 16:08:13.166143 3663 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/771a3d43-4f24-491b-8c40-ec927a06293c-xtables-lock\") on node \"ip-172-31-24-10\" DevicePath \"\"" Feb 13 16:08:13.170705 kubelet[3663]: I0213 16:08:13.170640 3663 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/771a3d43-4f24-491b-8c40-ec927a06293c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "771a3d43-4f24-491b-8c40-ec927a06293c" (UID: "771a3d43-4f24-491b-8c40-ec927a06293c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 16:08:13.170866 kubelet[3663]: I0213 16:08:13.170723 3663 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/771a3d43-4f24-491b-8c40-ec927a06293c-cni-path" (OuterVolumeSpecName: "cni-path") pod "771a3d43-4f24-491b-8c40-ec927a06293c" (UID: "771a3d43-4f24-491b-8c40-ec927a06293c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 16:08:13.170866 kubelet[3663]: I0213 16:08:13.170765 3663 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/771a3d43-4f24-491b-8c40-ec927a06293c-hostproc" (OuterVolumeSpecName: "hostproc") pod "771a3d43-4f24-491b-8c40-ec927a06293c" (UID: "771a3d43-4f24-491b-8c40-ec927a06293c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 16:08:13.170866 kubelet[3663]: I0213 16:08:13.170804 3663 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/771a3d43-4f24-491b-8c40-ec927a06293c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "771a3d43-4f24-491b-8c40-ec927a06293c" (UID: "771a3d43-4f24-491b-8c40-ec927a06293c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 16:08:13.171030 kubelet[3663]: I0213 16:08:13.170946 3663 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/771a3d43-4f24-491b-8c40-ec927a06293c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "771a3d43-4f24-491b-8c40-ec927a06293c" (UID: "771a3d43-4f24-491b-8c40-ec927a06293c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 16:08:13.173899 kubelet[3663]: I0213 16:08:13.173838 3663 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/771a3d43-4f24-491b-8c40-ec927a06293c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "771a3d43-4f24-491b-8c40-ec927a06293c" (UID: "771a3d43-4f24-491b-8c40-ec927a06293c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 16:08:13.174328 kubelet[3663]: I0213 16:08:13.174294 3663 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/771a3d43-4f24-491b-8c40-ec927a06293c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "771a3d43-4f24-491b-8c40-ec927a06293c" (UID: "771a3d43-4f24-491b-8c40-ec927a06293c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 16:08:13.180479 kubelet[3663]: I0213 16:08:13.180321 3663 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/771a3d43-4f24-491b-8c40-ec927a06293c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "771a3d43-4f24-491b-8c40-ec927a06293c" (UID: "771a3d43-4f24-491b-8c40-ec927a06293c"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 16:08:13.185404 kubelet[3663]: I0213 16:08:13.185351 3663 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbbd3ef1-70d4-40bf-a12b-5c5f0a032526-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cbbd3ef1-70d4-40bf-a12b-5c5f0a032526" (UID: "cbbd3ef1-70d4-40bf-a12b-5c5f0a032526"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 16:08:13.185816 kubelet[3663]: I0213 16:08:13.185696 3663 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/771a3d43-4f24-491b-8c40-ec927a06293c-kube-api-access-qz5n4" (OuterVolumeSpecName: "kube-api-access-qz5n4") pod "771a3d43-4f24-491b-8c40-ec927a06293c" (UID: "771a3d43-4f24-491b-8c40-ec927a06293c"). InnerVolumeSpecName "kube-api-access-qz5n4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 16:08:13.186433 kubelet[3663]: I0213 16:08:13.186393 3663 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/771a3d43-4f24-491b-8c40-ec927a06293c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "771a3d43-4f24-491b-8c40-ec927a06293c" (UID: "771a3d43-4f24-491b-8c40-ec927a06293c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 16:08:13.186781 kubelet[3663]: I0213 16:08:13.186717 3663 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbbd3ef1-70d4-40bf-a12b-5c5f0a032526-kube-api-access-6q4qc" (OuterVolumeSpecName: "kube-api-access-6q4qc") pod "cbbd3ef1-70d4-40bf-a12b-5c5f0a032526" (UID: "cbbd3ef1-70d4-40bf-a12b-5c5f0a032526"). InnerVolumeSpecName "kube-api-access-6q4qc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 16:08:13.267526 kubelet[3663]: I0213 16:08:13.267344 3663 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/771a3d43-4f24-491b-8c40-ec927a06293c-hubble-tls\") on node \"ip-172-31-24-10\" DevicePath \"\"" Feb 13 16:08:13.267526 kubelet[3663]: I0213 16:08:13.267405 3663 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/771a3d43-4f24-491b-8c40-ec927a06293c-cilium-cgroup\") on node \"ip-172-31-24-10\" DevicePath \"\"" Feb 13 16:08:13.267526 kubelet[3663]: I0213 16:08:13.267451 3663 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/771a3d43-4f24-491b-8c40-ec927a06293c-clustermesh-secrets\") on node \"ip-172-31-24-10\" DevicePath \"\"" Feb 13 16:08:13.267526 kubelet[3663]: I0213 16:08:13.267486 3663 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/771a3d43-4f24-491b-8c40-ec927a06293c-etc-cni-netd\") on node \"ip-172-31-24-10\" DevicePath \"\"" Feb 13 16:08:13.267526 kubelet[3663]: I0213 16:08:13.267510 3663 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/771a3d43-4f24-491b-8c40-ec927a06293c-cni-path\") on node \"ip-172-31-24-10\" DevicePath \"\"" Feb 13 16:08:13.267526 kubelet[3663]: I0213 16:08:13.267533 3663 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/771a3d43-4f24-491b-8c40-ec927a06293c-hostproc\") on node \"ip-172-31-24-10\" DevicePath \"\"" Feb 13 16:08:13.268274 kubelet[3663]: I0213 16:08:13.267556 3663 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/771a3d43-4f24-491b-8c40-ec927a06293c-bpf-maps\") on node \"ip-172-31-24-10\" DevicePath \"\"" Feb 13 16:08:13.268274 kubelet[3663]: I0213 16:08:13.267580 3663 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/771a3d43-4f24-491b-8c40-ec927a06293c-cilium-config-path\") on node \"ip-172-31-24-10\" DevicePath \"\"" Feb 13 16:08:13.268274 kubelet[3663]: I0213 16:08:13.267603 3663 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/771a3d43-4f24-491b-8c40-ec927a06293c-cilium-run\") on node \"ip-172-31-24-10\" DevicePath \"\"" Feb 13 16:08:13.268274 kubelet[3663]: I0213 16:08:13.267627 3663 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-qz5n4\" (UniqueName: \"kubernetes.io/projected/771a3d43-4f24-491b-8c40-ec927a06293c-kube-api-access-qz5n4\") on node \"ip-172-31-24-10\" DevicePath \"\"" Feb 13 16:08:13.268274 kubelet[3663]: I0213 16:08:13.267650 3663 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cbbd3ef1-70d4-40bf-a12b-5c5f0a032526-cilium-config-path\") on node \"ip-172-31-24-10\" DevicePath \"\"" Feb 13 16:08:13.268274 kubelet[3663]: I0213 16:08:13.267675 3663 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-6q4qc\" (UniqueName: \"kubernetes.io/projected/cbbd3ef1-70d4-40bf-a12b-5c5f0a032526-kube-api-access-6q4qc\") on node \"ip-172-31-24-10\" DevicePath \"\"" Feb 13 16:08:13.648881 kubelet[3663]: I0213 16:08:13.648671 3663 scope.go:117] "RemoveContainer" containerID="27082640d8d9e781e81d1c6defb01d82b5ea87be3241085236372ebb48b60737" Feb 
13 16:08:13.655722 containerd[2052]: time="2025-02-13T16:08:13.655225305Z" level=info msg="RemoveContainer for \"27082640d8d9e781e81d1c6defb01d82b5ea87be3241085236372ebb48b60737\"" Feb 13 16:08:13.674747 containerd[2052]: time="2025-02-13T16:08:13.674681217Z" level=info msg="RemoveContainer for \"27082640d8d9e781e81d1c6defb01d82b5ea87be3241085236372ebb48b60737\" returns successfully" Feb 13 16:08:13.677591 kubelet[3663]: I0213 16:08:13.675456 3663 scope.go:117] "RemoveContainer" containerID="50af7d626780b0e95ec03e58db38cea168015badcd3e8011056f2ed2a6353832" Feb 13 16:08:13.684829 containerd[2052]: time="2025-02-13T16:08:13.682634458Z" level=info msg="RemoveContainer for \"50af7d626780b0e95ec03e58db38cea168015badcd3e8011056f2ed2a6353832\"" Feb 13 16:08:13.690275 containerd[2052]: time="2025-02-13T16:08:13.690176254Z" level=info msg="RemoveContainer for \"50af7d626780b0e95ec03e58db38cea168015badcd3e8011056f2ed2a6353832\" returns successfully" Feb 13 16:08:13.691228 kubelet[3663]: I0213 16:08:13.690766 3663 scope.go:117] "RemoveContainer" containerID="a68fa05f4b98fa2d19d21d212661b3060b9d9e8741b9bdb92e273ad506cb1de2" Feb 13 16:08:13.693553 containerd[2052]: time="2025-02-13T16:08:13.693496042Z" level=info msg="RemoveContainer for \"a68fa05f4b98fa2d19d21d212661b3060b9d9e8741b9bdb92e273ad506cb1de2\"" Feb 13 16:08:13.700584 containerd[2052]: time="2025-02-13T16:08:13.700524934Z" level=info msg="RemoveContainer for \"a68fa05f4b98fa2d19d21d212661b3060b9d9e8741b9bdb92e273ad506cb1de2\" returns successfully" Feb 13 16:08:13.702426 kubelet[3663]: I0213 16:08:13.701863 3663 scope.go:117] "RemoveContainer" containerID="ba420b258e5b0c3917de5b512c17756d08857bea1c468b0906887c945b329f24" Feb 13 16:08:13.707830 containerd[2052]: time="2025-02-13T16:08:13.707770666Z" level=info msg="RemoveContainer for \"ba420b258e5b0c3917de5b512c17756d08857bea1c468b0906887c945b329f24\"" Feb 13 16:08:13.714120 containerd[2052]: time="2025-02-13T16:08:13.714040438Z" level=info msg="RemoveContainer for \"ba420b258e5b0c3917de5b512c17756d08857bea1c468b0906887c945b329f24\" returns successfully" Feb 13 16:08:13.714518 kubelet[3663]: I0213 16:08:13.714402 3663 scope.go:117] "RemoveContainer" containerID="eaf6c3f32e4df4d6b652a2633b94aa43460119c9396859243eea4d2bbc6ee24b" Feb 13 16:08:13.716625 containerd[2052]: time="2025-02-13T16:08:13.716569774Z" level=info msg="RemoveContainer for \"eaf6c3f32e4df4d6b652a2633b94aa43460119c9396859243eea4d2bbc6ee24b\"" Feb 13 16:08:13.722407 containerd[2052]: time="2025-02-13T16:08:13.722352298Z" level=info msg="RemoveContainer for \"eaf6c3f32e4df4d6b652a2633b94aa43460119c9396859243eea4d2bbc6ee24b\" returns successfully" Feb 13 16:08:13.723082 kubelet[3663]: I0213 16:08:13.722937 3663 scope.go:117] "RemoveContainer" containerID="27082640d8d9e781e81d1c6defb01d82b5ea87be3241085236372ebb48b60737" Feb 13 16:08:13.723780 containerd[2052]: time="2025-02-13T16:08:13.723710950Z" level=error msg="ContainerStatus for \"27082640d8d9e781e81d1c6defb01d82b5ea87be3241085236372ebb48b60737\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"27082640d8d9e781e81d1c6defb01d82b5ea87be3241085236372ebb48b60737\": not found" Feb 13 16:08:13.724456 kubelet[3663]: E0213 16:08:13.724230 3663 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"27082640d8d9e781e81d1c6defb01d82b5ea87be3241085236372ebb48b60737\": not found" 
containerID="27082640d8d9e781e81d1c6defb01d82b5ea87be3241085236372ebb48b60737" Feb 13 16:08:13.724456 kubelet[3663]: I0213 16:08:13.724369 3663 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"27082640d8d9e781e81d1c6defb01d82b5ea87be3241085236372ebb48b60737"} err="failed to get container status \"27082640d8d9e781e81d1c6defb01d82b5ea87be3241085236372ebb48b60737\": rpc error: code = NotFound desc = an error occurred when try to find container \"27082640d8d9e781e81d1c6defb01d82b5ea87be3241085236372ebb48b60737\": not found" Feb 13 16:08:13.724456 kubelet[3663]: I0213 16:08:13.724396 3663 scope.go:117] "RemoveContainer" containerID="50af7d626780b0e95ec03e58db38cea168015badcd3e8011056f2ed2a6353832" Feb 13 16:08:13.725389 containerd[2052]: time="2025-02-13T16:08:13.725329546Z" level=error msg="ContainerStatus for \"50af7d626780b0e95ec03e58db38cea168015badcd3e8011056f2ed2a6353832\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"50af7d626780b0e95ec03e58db38cea168015badcd3e8011056f2ed2a6353832\": not found" Feb 13 16:08:13.725732 kubelet[3663]: E0213 16:08:13.725632 3663 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"50af7d626780b0e95ec03e58db38cea168015badcd3e8011056f2ed2a6353832\": not found" containerID="50af7d626780b0e95ec03e58db38cea168015badcd3e8011056f2ed2a6353832" Feb 13 16:08:13.725732 kubelet[3663]: I0213 16:08:13.725698 3663 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"50af7d626780b0e95ec03e58db38cea168015badcd3e8011056f2ed2a6353832"} err="failed to get container status \"50af7d626780b0e95ec03e58db38cea168015badcd3e8011056f2ed2a6353832\": rpc error: code = NotFound desc = an error occurred when try to find container \"50af7d626780b0e95ec03e58db38cea168015badcd3e8011056f2ed2a6353832\": not found" Feb 13 16:08:13.725732 kubelet[3663]: I0213 16:08:13.725723 3663 scope.go:117] "RemoveContainer" containerID="a68fa05f4b98fa2d19d21d212661b3060b9d9e8741b9bdb92e273ad506cb1de2" Feb 13 16:08:13.726542 containerd[2052]: time="2025-02-13T16:08:13.726368470Z" level=error msg="ContainerStatus for \"a68fa05f4b98fa2d19d21d212661b3060b9d9e8741b9bdb92e273ad506cb1de2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a68fa05f4b98fa2d19d21d212661b3060b9d9e8741b9bdb92e273ad506cb1de2\": not found" Feb 13 16:08:13.726744 kubelet[3663]: E0213 16:08:13.726692 3663 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a68fa05f4b98fa2d19d21d212661b3060b9d9e8741b9bdb92e273ad506cb1de2\": not found" containerID="a68fa05f4b98fa2d19d21d212661b3060b9d9e8741b9bdb92e273ad506cb1de2" Feb 13 16:08:13.726925 kubelet[3663]: I0213 16:08:13.726752 3663 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a68fa05f4b98fa2d19d21d212661b3060b9d9e8741b9bdb92e273ad506cb1de2"} err="failed to get container status \"a68fa05f4b98fa2d19d21d212661b3060b9d9e8741b9bdb92e273ad506cb1de2\": rpc error: code = NotFound desc = an error occurred when try to find container \"a68fa05f4b98fa2d19d21d212661b3060b9d9e8741b9bdb92e273ad506cb1de2\": not found" Feb 13 16:08:13.726925 kubelet[3663]: I0213 16:08:13.726775 3663 scope.go:117] "RemoveContainer" 
containerID="ba420b258e5b0c3917de5b512c17756d08857bea1c468b0906887c945b329f24" Feb 13 16:08:13.727236 containerd[2052]: time="2025-02-13T16:08:13.727089118Z" level=error msg="ContainerStatus for \"ba420b258e5b0c3917de5b512c17756d08857bea1c468b0906887c945b329f24\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ba420b258e5b0c3917de5b512c17756d08857bea1c468b0906887c945b329f24\": not found" Feb 13 16:08:13.727383 kubelet[3663]: E0213 16:08:13.727317 3663 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ba420b258e5b0c3917de5b512c17756d08857bea1c468b0906887c945b329f24\": not found" containerID="ba420b258e5b0c3917de5b512c17756d08857bea1c468b0906887c945b329f24" Feb 13 16:08:13.727383 kubelet[3663]: I0213 16:08:13.727374 3663 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ba420b258e5b0c3917de5b512c17756d08857bea1c468b0906887c945b329f24"} err="failed to get container status \"ba420b258e5b0c3917de5b512c17756d08857bea1c468b0906887c945b329f24\": rpc error: code = NotFound desc = an error occurred when try to find container \"ba420b258e5b0c3917de5b512c17756d08857bea1c468b0906887c945b329f24\": not found" Feb 13 16:08:13.727726 kubelet[3663]: I0213 16:08:13.727397 3663 scope.go:117] "RemoveContainer" containerID="eaf6c3f32e4df4d6b652a2633b94aa43460119c9396859243eea4d2bbc6ee24b" Feb 13 16:08:13.727796 containerd[2052]: time="2025-02-13T16:08:13.727746538Z" level=error msg="ContainerStatus for \"eaf6c3f32e4df4d6b652a2633b94aa43460119c9396859243eea4d2bbc6ee24b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eaf6c3f32e4df4d6b652a2633b94aa43460119c9396859243eea4d2bbc6ee24b\": not found" Feb 13 16:08:13.728407 kubelet[3663]: E0213 16:08:13.728191 3663 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eaf6c3f32e4df4d6b652a2633b94aa43460119c9396859243eea4d2bbc6ee24b\": not found" containerID="eaf6c3f32e4df4d6b652a2633b94aa43460119c9396859243eea4d2bbc6ee24b" Feb 13 16:08:13.728407 kubelet[3663]: I0213 16:08:13.728262 3663 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eaf6c3f32e4df4d6b652a2633b94aa43460119c9396859243eea4d2bbc6ee24b"} err="failed to get container status \"eaf6c3f32e4df4d6b652a2633b94aa43460119c9396859243eea4d2bbc6ee24b\": rpc error: code = NotFound desc = an error occurred when try to find container \"eaf6c3f32e4df4d6b652a2633b94aa43460119c9396859243eea4d2bbc6ee24b\": not found" Feb 13 16:08:13.728407 kubelet[3663]: I0213 16:08:13.728287 3663 scope.go:117] "RemoveContainer" containerID="5749bbca1a3299eabdc142dc3be2d11d0ab42238e1b6db65a8d207334354fd4b" Feb 13 16:08:13.731152 containerd[2052]: time="2025-02-13T16:08:13.730781638Z" level=info msg="RemoveContainer for \"5749bbca1a3299eabdc142dc3be2d11d0ab42238e1b6db65a8d207334354fd4b\"" Feb 13 16:08:13.736666 containerd[2052]: time="2025-02-13T16:08:13.736612738Z" level=info msg="RemoveContainer for \"5749bbca1a3299eabdc142dc3be2d11d0ab42238e1b6db65a8d207334354fd4b\" returns successfully" Feb 13 16:08:13.737002 kubelet[3663]: I0213 16:08:13.736948 3663 scope.go:117] "RemoveContainer" containerID="5749bbca1a3299eabdc142dc3be2d11d0ab42238e1b6db65a8d207334354fd4b" Feb 13 16:08:13.737664 containerd[2052]: time="2025-02-13T16:08:13.737315434Z" 
level=error msg="ContainerStatus for \"5749bbca1a3299eabdc142dc3be2d11d0ab42238e1b6db65a8d207334354fd4b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5749bbca1a3299eabdc142dc3be2d11d0ab42238e1b6db65a8d207334354fd4b\": not found" Feb 13 16:08:13.737779 kubelet[3663]: E0213 16:08:13.737549 3663 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5749bbca1a3299eabdc142dc3be2d11d0ab42238e1b6db65a8d207334354fd4b\": not found" containerID="5749bbca1a3299eabdc142dc3be2d11d0ab42238e1b6db65a8d207334354fd4b" Feb 13 16:08:13.737779 kubelet[3663]: I0213 16:08:13.737602 3663 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5749bbca1a3299eabdc142dc3be2d11d0ab42238e1b6db65a8d207334354fd4b"} err="failed to get container status \"5749bbca1a3299eabdc142dc3be2d11d0ab42238e1b6db65a8d207334354fd4b\": rpc error: code = NotFound desc = an error occurred when try to find container \"5749bbca1a3299eabdc142dc3be2d11d0ab42238e1b6db65a8d207334354fd4b\": not found" Feb 13 16:08:13.774595 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cc92d1a23a7eaedcebf9baa19afd27c2ed93e281a2db3ca413369f0cc0143874-rootfs.mount: Deactivated successfully. Feb 13 16:08:13.775159 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e236e1130b50a46fb4326927e14d389785df3ee59acae0082a008fcc298d530-rootfs.mount: Deactivated successfully. Feb 13 16:08:13.775394 systemd[1]: var-lib-kubelet-pods-cbbd3ef1\x2d70d4\x2d40bf\x2da12b\x2d5c5f0a032526-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6q4qc.mount: Deactivated successfully. Feb 13 16:08:13.775637 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9e236e1130b50a46fb4326927e14d389785df3ee59acae0082a008fcc298d530-shm.mount: Deactivated successfully. Feb 13 16:08:13.775857 systemd[1]: var-lib-kubelet-pods-771a3d43\x2d4f24\x2d491b\x2d8c40\x2dec927a06293c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqz5n4.mount: Deactivated successfully. Feb 13 16:08:13.776071 systemd[1]: var-lib-kubelet-pods-771a3d43\x2d4f24\x2d491b\x2d8c40\x2dec927a06293c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 16:08:13.776816 systemd[1]: var-lib-kubelet-pods-771a3d43\x2d4f24\x2d491b\x2d8c40\x2dec927a06293c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 16:08:14.378640 kubelet[3663]: E0213 16:08:14.378580 3663 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 16:08:14.686746 sshd[5288]: pam_unix(sshd:session): session closed for user core Feb 13 16:08:14.692851 systemd[1]: sshd@25-172.31.24.10:22-139.178.68.195:46916.service: Deactivated successfully. Feb 13 16:08:14.693420 systemd-logind[2030]: Session 26 logged out. Waiting for processes to exit. Feb 13 16:08:14.702435 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 16:08:14.707480 systemd-logind[2030]: Removed session 26. Feb 13 16:08:14.717639 systemd[1]: Started sshd@26-172.31.24.10:22-139.178.68.195:46926.service - OpenSSH per-connection server daemon (139.178.68.195:46926). 
Feb 13 16:08:14.899394 sshd[5458]: Accepted publickey for core from 139.178.68.195 port 46926 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:08:14.902044 sshd[5458]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:08:14.909930 systemd-logind[2030]: New session 27 of user core. Feb 13 16:08:14.918724 systemd[1]: Started session-27.scope - Session 27 of User core. Feb 13 16:08:15.087310 kubelet[3663]: I0213 16:08:15.087241 3663 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="771a3d43-4f24-491b-8c40-ec927a06293c" path="/var/lib/kubelet/pods/771a3d43-4f24-491b-8c40-ec927a06293c/volumes" Feb 13 16:08:15.089157 kubelet[3663]: I0213 16:08:15.089082 3663 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="cbbd3ef1-70d4-40bf-a12b-5c5f0a032526" path="/var/lib/kubelet/pods/cbbd3ef1-70d4-40bf-a12b-5c5f0a032526/volumes" Feb 13 16:08:15.657459 ntpd[2011]: Deleting interface #10 lxc_health, fe80::44ca:f2ff:fe9d:5296%8#123, interface stats: received=0, sent=0, dropped=0, active_time=80 secs Feb 13 16:08:15.658054 ntpd[2011]: 13 Feb 16:08:15 ntpd[2011]: Deleting interface #10 lxc_health, fe80::44ca:f2ff:fe9d:5296%8#123, interface stats: received=0, sent=0, dropped=0, active_time=80 secs Feb 13 16:08:16.079619 kubelet[3663]: E0213 16:08:16.079452 3663 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-wvd6t" podUID="07013af2-9cdc-4e05-8e68-08a20b05136f" Feb 13 16:08:16.448489 sshd[5458]: pam_unix(sshd:session): session closed for user core Feb 13 16:08:16.461982 systemd[1]: sshd@26-172.31.24.10:22-139.178.68.195:46926.service: Deactivated successfully. 
Feb 13 16:08:16.472595 kubelet[3663]: I0213 16:08:16.467268 3663 topology_manager.go:215] "Topology Admit Handler" podUID="55e63509-a044-4bf7-805e-93fb4b7d4bde" podNamespace="kube-system" podName="cilium-rdth9" Feb 13 16:08:16.472595 kubelet[3663]: E0213 16:08:16.471701 3663 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="771a3d43-4f24-491b-8c40-ec927a06293c" containerName="mount-cgroup" Feb 13 16:08:16.472595 kubelet[3663]: E0213 16:08:16.471746 3663 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="771a3d43-4f24-491b-8c40-ec927a06293c" containerName="clean-cilium-state" Feb 13 16:08:16.472595 kubelet[3663]: E0213 16:08:16.471790 3663 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="771a3d43-4f24-491b-8c40-ec927a06293c" containerName="apply-sysctl-overwrites" Feb 13 16:08:16.472595 kubelet[3663]: E0213 16:08:16.471812 3663 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cbbd3ef1-70d4-40bf-a12b-5c5f0a032526" containerName="cilium-operator" Feb 13 16:08:16.472595 kubelet[3663]: E0213 16:08:16.471834 3663 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="771a3d43-4f24-491b-8c40-ec927a06293c" containerName="mount-bpf-fs" Feb 13 16:08:16.472595 kubelet[3663]: E0213 16:08:16.471874 3663 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="771a3d43-4f24-491b-8c40-ec927a06293c" containerName="cilium-agent" Feb 13 16:08:16.472595 kubelet[3663]: I0213 16:08:16.471926 3663 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbbd3ef1-70d4-40bf-a12b-5c5f0a032526" containerName="cilium-operator" Feb 13 16:08:16.472595 kubelet[3663]: I0213 16:08:16.471969 3663 memory_manager.go:354] "RemoveStaleState removing state" podUID="771a3d43-4f24-491b-8c40-ec927a06293c" containerName="cilium-agent" Feb 13 16:08:16.487624 systemd[1]: session-27.scope: Deactivated successfully. Feb 13 16:08:16.499454 systemd-logind[2030]: Session 27 logged out. Waiting for processes to exit. Feb 13 16:08:16.520039 systemd[1]: Started sshd@27-172.31.24.10:22-139.178.68.195:36328.service - OpenSSH per-connection server daemon (139.178.68.195:36328). Feb 13 16:08:16.533410 systemd-logind[2030]: Removed session 27. 
Feb 13 16:08:16.590397 kubelet[3663]: I0213 16:08:16.590331 3663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/55e63509-a044-4bf7-805e-93fb4b7d4bde-cilium-config-path\") pod \"cilium-rdth9\" (UID: \"55e63509-a044-4bf7-805e-93fb4b7d4bde\") " pod="kube-system/cilium-rdth9" Feb 13 16:08:16.590532 kubelet[3663]: I0213 16:08:16.590417 3663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/55e63509-a044-4bf7-805e-93fb4b7d4bde-host-proc-sys-kernel\") pod \"cilium-rdth9\" (UID: \"55e63509-a044-4bf7-805e-93fb4b7d4bde\") " pod="kube-system/cilium-rdth9" Feb 13 16:08:16.590532 kubelet[3663]: I0213 16:08:16.590464 3663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/55e63509-a044-4bf7-805e-93fb4b7d4bde-hostproc\") pod \"cilium-rdth9\" (UID: \"55e63509-a044-4bf7-805e-93fb4b7d4bde\") " pod="kube-system/cilium-rdth9" Feb 13 16:08:16.590532 kubelet[3663]: I0213 16:08:16.590513 3663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/55e63509-a044-4bf7-805e-93fb4b7d4bde-cilium-run\") pod \"cilium-rdth9\" (UID: \"55e63509-a044-4bf7-805e-93fb4b7d4bde\") " pod="kube-system/cilium-rdth9" Feb 13 16:08:16.590734 kubelet[3663]: I0213 16:08:16.590555 3663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/55e63509-a044-4bf7-805e-93fb4b7d4bde-bpf-maps\") pod \"cilium-rdth9\" (UID: \"55e63509-a044-4bf7-805e-93fb4b7d4bde\") " pod="kube-system/cilium-rdth9" Feb 13 16:08:16.590734 kubelet[3663]: I0213 16:08:16.590602 3663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/55e63509-a044-4bf7-805e-93fb4b7d4bde-lib-modules\") pod \"cilium-rdth9\" (UID: \"55e63509-a044-4bf7-805e-93fb4b7d4bde\") " pod="kube-system/cilium-rdth9" Feb 13 16:08:16.590734 kubelet[3663]: I0213 16:08:16.590646 3663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/55e63509-a044-4bf7-805e-93fb4b7d4bde-cilium-ipsec-secrets\") pod \"cilium-rdth9\" (UID: \"55e63509-a044-4bf7-805e-93fb4b7d4bde\") " pod="kube-system/cilium-rdth9" Feb 13 16:08:16.590734 kubelet[3663]: I0213 16:08:16.590688 3663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/55e63509-a044-4bf7-805e-93fb4b7d4bde-host-proc-sys-net\") pod \"cilium-rdth9\" (UID: \"55e63509-a044-4bf7-805e-93fb4b7d4bde\") " pod="kube-system/cilium-rdth9" Feb 13 16:08:16.590734 kubelet[3663]: I0213 16:08:16.590733 3663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xnst\" (UniqueName: \"kubernetes.io/projected/55e63509-a044-4bf7-805e-93fb4b7d4bde-kube-api-access-2xnst\") pod \"cilium-rdth9\" (UID: \"55e63509-a044-4bf7-805e-93fb4b7d4bde\") " pod="kube-system/cilium-rdth9" Feb 13 16:08:16.590997 kubelet[3663]: I0213 16:08:16.590780 3663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/55e63509-a044-4bf7-805e-93fb4b7d4bde-cilium-cgroup\") pod \"cilium-rdth9\" (UID: \"55e63509-a044-4bf7-805e-93fb4b7d4bde\") " pod="kube-system/cilium-rdth9" Feb 13 16:08:16.590997 kubelet[3663]: I0213 16:08:16.590822 3663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/55e63509-a044-4bf7-805e-93fb4b7d4bde-xtables-lock\") pod \"cilium-rdth9\" (UID: \"55e63509-a044-4bf7-805e-93fb4b7d4bde\") " pod="kube-system/cilium-rdth9" Feb 13 16:08:16.590997 kubelet[3663]: I0213 16:08:16.590864 3663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/55e63509-a044-4bf7-805e-93fb4b7d4bde-cni-path\") pod \"cilium-rdth9\" (UID: \"55e63509-a044-4bf7-805e-93fb4b7d4bde\") " pod="kube-system/cilium-rdth9" Feb 13 16:08:16.590997 kubelet[3663]: I0213 16:08:16.590910 3663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/55e63509-a044-4bf7-805e-93fb4b7d4bde-etc-cni-netd\") pod \"cilium-rdth9\" (UID: \"55e63509-a044-4bf7-805e-93fb4b7d4bde\") " pod="kube-system/cilium-rdth9" Feb 13 16:08:16.590997 kubelet[3663]: I0213 16:08:16.590955 3663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/55e63509-a044-4bf7-805e-93fb4b7d4bde-hubble-tls\") pod \"cilium-rdth9\" (UID: \"55e63509-a044-4bf7-805e-93fb4b7d4bde\") " pod="kube-system/cilium-rdth9" Feb 13 16:08:16.590997 kubelet[3663]: I0213 16:08:16.590999 3663 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/55e63509-a044-4bf7-805e-93fb4b7d4bde-clustermesh-secrets\") pod \"cilium-rdth9\" (UID: \"55e63509-a044-4bf7-805e-93fb4b7d4bde\") " pod="kube-system/cilium-rdth9" Feb 13 16:08:16.804720 sshd[5472]: Accepted publickey for core from 139.178.68.195 port 36328 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:08:16.810263 sshd[5472]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:08:16.820161 containerd[2052]: time="2025-02-13T16:08:16.818570005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rdth9,Uid:55e63509-a044-4bf7-805e-93fb4b7d4bde,Namespace:kube-system,Attempt:0,}" Feb 13 16:08:16.825646 systemd-logind[2030]: New session 28 of user core. Feb 13 16:08:16.834806 systemd[1]: Started session-28.scope - Session 28 of User core. Feb 13 16:08:16.885141 containerd[2052]: time="2025-02-13T16:08:16.884730013Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:08:16.885141 containerd[2052]: time="2025-02-13T16:08:16.884834701Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:08:16.885346 containerd[2052]: time="2025-02-13T16:08:16.884894845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:08:16.886467 containerd[2052]: time="2025-02-13T16:08:16.886353097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:08:16.946573 containerd[2052]: time="2025-02-13T16:08:16.946515218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rdth9,Uid:55e63509-a044-4bf7-805e-93fb4b7d4bde,Namespace:kube-system,Attempt:0,} returns sandbox id \"7741a0f590fccc79ff01189e53d0274868b736caec65dacf3c65b45209528202\"" Feb 13 16:08:16.952973 containerd[2052]: time="2025-02-13T16:08:16.952883126Z" level=info msg="CreateContainer within sandbox \"7741a0f590fccc79ff01189e53d0274868b736caec65dacf3c65b45209528202\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 16:08:16.977947 containerd[2052]: time="2025-02-13T16:08:16.977858270Z" level=info msg="CreateContainer within sandbox \"7741a0f590fccc79ff01189e53d0274868b736caec65dacf3c65b45209528202\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b07654cf761ee96e1bda20b41cecef293a1461d36925ef8db5f2fda96320d8e9\"" Feb 13 16:08:16.979180 containerd[2052]: time="2025-02-13T16:08:16.979094090Z" level=info msg="StartContainer for \"b07654cf761ee96e1bda20b41cecef293a1461d36925ef8db5f2fda96320d8e9\"" Feb 13 16:08:16.983939 sshd[5472]: pam_unix(sshd:session): session closed for user core Feb 13 16:08:16.992296 systemd[1]: sshd@27-172.31.24.10:22-139.178.68.195:36328.service: Deactivated successfully. Feb 13 16:08:17.002526 systemd[1]: session-28.scope: Deactivated successfully. Feb 13 16:08:17.003772 systemd-logind[2030]: Session 28 logged out. Waiting for processes to exit. Feb 13 16:08:17.020691 systemd[1]: Started sshd@28-172.31.24.10:22-139.178.68.195:36334.service - OpenSSH per-connection server daemon (139.178.68.195:36334). Feb 13 16:08:17.023252 systemd-logind[2030]: Removed session 28. Feb 13 16:08:17.093392 containerd[2052]: time="2025-02-13T16:08:17.093008374Z" level=info msg="StartContainer for \"b07654cf761ee96e1bda20b41cecef293a1461d36925ef8db5f2fda96320d8e9\" returns successfully" Feb 13 16:08:17.160160 containerd[2052]: time="2025-02-13T16:08:17.159374531Z" level=info msg="shim disconnected" id=b07654cf761ee96e1bda20b41cecef293a1461d36925ef8db5f2fda96320d8e9 namespace=k8s.io Feb 13 16:08:17.160160 containerd[2052]: time="2025-02-13T16:08:17.159656327Z" level=warning msg="cleaning up after shim disconnected" id=b07654cf761ee96e1bda20b41cecef293a1461d36925ef8db5f2fda96320d8e9 namespace=k8s.io Feb 13 16:08:17.160160 containerd[2052]: time="2025-02-13T16:08:17.159713567Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:08:17.215176 sshd[5536]: Accepted publickey for core from 139.178.68.195 port 36334 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:08:17.216737 sshd[5536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:08:17.224902 systemd-logind[2030]: New session 29 of user core. Feb 13 16:08:17.232659 systemd[1]: Started session-29.scope - Session 29 of User core. 
Feb 13 16:08:17.693650 containerd[2052]: time="2025-02-13T16:08:17.693585349Z" level=info msg="CreateContainer within sandbox \"7741a0f590fccc79ff01189e53d0274868b736caec65dacf3c65b45209528202\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 16:08:17.729072 containerd[2052]: time="2025-02-13T16:08:17.728958602Z" level=info msg="CreateContainer within sandbox \"7741a0f590fccc79ff01189e53d0274868b736caec65dacf3c65b45209528202\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0680c608b61b5a69c19f1f6e9e74c7a7683250f77fafe0b775ad6e7e10e7b9ee\"" Feb 13 16:08:17.730605 containerd[2052]: time="2025-02-13T16:08:17.730446950Z" level=info msg="StartContainer for \"0680c608b61b5a69c19f1f6e9e74c7a7683250f77fafe0b775ad6e7e10e7b9ee\"" Feb 13 16:08:17.838589 containerd[2052]: time="2025-02-13T16:08:17.838396010Z" level=info msg="StartContainer for \"0680c608b61b5a69c19f1f6e9e74c7a7683250f77fafe0b775ad6e7e10e7b9ee\" returns successfully" Feb 13 16:08:17.889578 containerd[2052]: time="2025-02-13T16:08:17.889063298Z" level=info msg="shim disconnected" id=0680c608b61b5a69c19f1f6e9e74c7a7683250f77fafe0b775ad6e7e10e7b9ee namespace=k8s.io Feb 13 16:08:17.889578 containerd[2052]: time="2025-02-13T16:08:17.889200422Z" level=warning msg="cleaning up after shim disconnected" id=0680c608b61b5a69c19f1f6e9e74c7a7683250f77fafe0b775ad6e7e10e7b9ee namespace=k8s.io Feb 13 16:08:17.889578 containerd[2052]: time="2025-02-13T16:08:17.889247030Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:08:18.079416 kubelet[3663]: E0213 16:08:18.079026 3663 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-wvd6t" podUID="07013af2-9cdc-4e05-8e68-08a20b05136f" Feb 13 16:08:18.694430 containerd[2052]: time="2025-02-13T16:08:18.694136798Z" level=info msg="CreateContainer within sandbox \"7741a0f590fccc79ff01189e53d0274868b736caec65dacf3c65b45209528202\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 16:08:18.710172 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0680c608b61b5a69c19f1f6e9e74c7a7683250f77fafe0b775ad6e7e10e7b9ee-rootfs.mount: Deactivated successfully. 
Feb 13 16:08:18.733217 containerd[2052]: time="2025-02-13T16:08:18.732059703Z" level=info msg="CreateContainer within sandbox \"7741a0f590fccc79ff01189e53d0274868b736caec65dacf3c65b45209528202\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"af8abfcb8cfdb92feb323fb80d0db420532549f821c588c13cd8e355e2834a4b\"" Feb 13 16:08:18.735265 containerd[2052]: time="2025-02-13T16:08:18.734819679Z" level=info msg="StartContainer for \"af8abfcb8cfdb92feb323fb80d0db420532549f821c588c13cd8e355e2834a4b\"" Feb 13 16:08:18.841894 containerd[2052]: time="2025-02-13T16:08:18.841355607Z" level=info msg="StartContainer for \"af8abfcb8cfdb92feb323fb80d0db420532549f821c588c13cd8e355e2834a4b\" returns successfully" Feb 13 16:08:18.894708 containerd[2052]: time="2025-02-13T16:08:18.894625407Z" level=info msg="shim disconnected" id=af8abfcb8cfdb92feb323fb80d0db420532549f821c588c13cd8e355e2834a4b namespace=k8s.io Feb 13 16:08:18.894708 containerd[2052]: time="2025-02-13T16:08:18.894703215Z" level=warning msg="cleaning up after shim disconnected" id=af8abfcb8cfdb92feb323fb80d0db420532549f821c588c13cd8e355e2834a4b namespace=k8s.io Feb 13 16:08:18.896550 containerd[2052]: time="2025-02-13T16:08:18.894725463Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:08:19.135232 containerd[2052]: time="2025-02-13T16:08:19.134854033Z" level=info msg="StopPodSandbox for \"cc92d1a23a7eaedcebf9baa19afd27c2ed93e281a2db3ca413369f0cc0143874\"" Feb 13 16:08:19.135232 containerd[2052]: time="2025-02-13T16:08:19.134993677Z" level=info msg="TearDown network for sandbox \"cc92d1a23a7eaedcebf9baa19afd27c2ed93e281a2db3ca413369f0cc0143874\" successfully" Feb 13 16:08:19.135232 containerd[2052]: time="2025-02-13T16:08:19.135016657Z" level=info msg="StopPodSandbox for \"cc92d1a23a7eaedcebf9baa19afd27c2ed93e281a2db3ca413369f0cc0143874\" returns successfully" Feb 13 16:08:19.136628 containerd[2052]: time="2025-02-13T16:08:19.136551421Z" level=info msg="RemovePodSandbox for \"cc92d1a23a7eaedcebf9baa19afd27c2ed93e281a2db3ca413369f0cc0143874\"" Feb 13 16:08:19.136628 containerd[2052]: time="2025-02-13T16:08:19.136604617Z" level=info msg="Forcibly stopping sandbox \"cc92d1a23a7eaedcebf9baa19afd27c2ed93e281a2db3ca413369f0cc0143874\"" Feb 13 16:08:19.136842 containerd[2052]: time="2025-02-13T16:08:19.136710517Z" level=info msg="TearDown network for sandbox \"cc92d1a23a7eaedcebf9baa19afd27c2ed93e281a2db3ca413369f0cc0143874\" successfully" Feb 13 16:08:19.142852 containerd[2052]: time="2025-02-13T16:08:19.142736821Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cc92d1a23a7eaedcebf9baa19afd27c2ed93e281a2db3ca413369f0cc0143874\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 16:08:19.142994 containerd[2052]: time="2025-02-13T16:08:19.142874605Z" level=info msg="RemovePodSandbox \"cc92d1a23a7eaedcebf9baa19afd27c2ed93e281a2db3ca413369f0cc0143874\" returns successfully" Feb 13 16:08:19.143725 containerd[2052]: time="2025-02-13T16:08:19.143666689Z" level=info msg="StopPodSandbox for \"9e236e1130b50a46fb4326927e14d389785df3ee59acae0082a008fcc298d530\"" Feb 13 16:08:19.144048 containerd[2052]: time="2025-02-13T16:08:19.143948581Z" level=info msg="TearDown network for sandbox \"9e236e1130b50a46fb4326927e14d389785df3ee59acae0082a008fcc298d530\" successfully" Feb 13 16:08:19.144147 containerd[2052]: time="2025-02-13T16:08:19.144047953Z" level=info msg="StopPodSandbox for \"9e236e1130b50a46fb4326927e14d389785df3ee59acae0082a008fcc298d530\" returns successfully" Feb 13 16:08:19.144748 containerd[2052]: time="2025-02-13T16:08:19.144693097Z" level=info msg="RemovePodSandbox for \"9e236e1130b50a46fb4326927e14d389785df3ee59acae0082a008fcc298d530\"" Feb 13 16:08:19.144869 containerd[2052]: time="2025-02-13T16:08:19.144745021Z" level=info msg="Forcibly stopping sandbox \"9e236e1130b50a46fb4326927e14d389785df3ee59acae0082a008fcc298d530\"" Feb 13 16:08:19.144869 containerd[2052]: time="2025-02-13T16:08:19.144853501Z" level=info msg="TearDown network for sandbox \"9e236e1130b50a46fb4326927e14d389785df3ee59acae0082a008fcc298d530\" successfully" Feb 13 16:08:19.150697 containerd[2052]: time="2025-02-13T16:08:19.150623521Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9e236e1130b50a46fb4326927e14d389785df3ee59acae0082a008fcc298d530\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 16:08:19.150840 containerd[2052]: time="2025-02-13T16:08:19.150715885Z" level=info msg="RemovePodSandbox \"9e236e1130b50a46fb4326927e14d389785df3ee59acae0082a008fcc298d530\" returns successfully" Feb 13 16:08:19.379750 kubelet[3663]: E0213 16:08:19.379696 3663 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 16:08:19.707719 containerd[2052]: time="2025-02-13T16:08:19.707596419Z" level=info msg="CreateContainer within sandbox \"7741a0f590fccc79ff01189e53d0274868b736caec65dacf3c65b45209528202\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 16:08:19.710472 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-af8abfcb8cfdb92feb323fb80d0db420532549f821c588c13cd8e355e2834a4b-rootfs.mount: Deactivated successfully. Feb 13 16:08:19.747011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2087168503.mount: Deactivated successfully. 
Feb 13 16:08:19.749699 containerd[2052]: time="2025-02-13T16:08:19.747776620Z" level=info msg="CreateContainer within sandbox \"7741a0f590fccc79ff01189e53d0274868b736caec65dacf3c65b45209528202\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0f7c9a54a4f6ff2459171c94fa0056f0d8a4430619b1e4f72211a5dbbe0e2490\"" Feb 13 16:08:19.750301 containerd[2052]: time="2025-02-13T16:08:19.750256156Z" level=info msg="StartContainer for \"0f7c9a54a4f6ff2459171c94fa0056f0d8a4430619b1e4f72211a5dbbe0e2490\"" Feb 13 16:08:19.849463 containerd[2052]: time="2025-02-13T16:08:19.849394204Z" level=info msg="StartContainer for \"0f7c9a54a4f6ff2459171c94fa0056f0d8a4430619b1e4f72211a5dbbe0e2490\" returns successfully" Feb 13 16:08:19.901733 containerd[2052]: time="2025-02-13T16:08:19.901644208Z" level=info msg="shim disconnected" id=0f7c9a54a4f6ff2459171c94fa0056f0d8a4430619b1e4f72211a5dbbe0e2490 namespace=k8s.io Feb 13 16:08:19.901733 containerd[2052]: time="2025-02-13T16:08:19.901723696Z" level=warning msg="cleaning up after shim disconnected" id=0f7c9a54a4f6ff2459171c94fa0056f0d8a4430619b1e4f72211a5dbbe0e2490 namespace=k8s.io Feb 13 16:08:19.902163 containerd[2052]: time="2025-02-13T16:08:19.901743640Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:08:19.921817 containerd[2052]: time="2025-02-13T16:08:19.921635693Z" level=warning msg="cleanup warnings time=\"2025-02-13T16:08:19Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 16:08:20.080000 kubelet[3663]: E0213 16:08:20.079324 3663 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-wvd6t" podUID="07013af2-9cdc-4e05-8e68-08a20b05136f" Feb 13 16:08:20.713987 containerd[2052]: time="2025-02-13T16:08:20.713927956Z" level=info msg="CreateContainer within sandbox \"7741a0f590fccc79ff01189e53d0274868b736caec65dacf3c65b45209528202\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 16:08:20.714958 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f7c9a54a4f6ff2459171c94fa0056f0d8a4430619b1e4f72211a5dbbe0e2490-rootfs.mount: Deactivated successfully. 
Feb 13 16:08:20.751727 containerd[2052]: time="2025-02-13T16:08:20.751652549Z" level=info msg="CreateContainer within sandbox \"7741a0f590fccc79ff01189e53d0274868b736caec65dacf3c65b45209528202\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a0d2af26e63c05d398da06dd6ec4496f65b96fc2b01abcd04e2c9e08daf34129\"" Feb 13 16:08:20.756192 containerd[2052]: time="2025-02-13T16:08:20.756063593Z" level=info msg="StartContainer for \"a0d2af26e63c05d398da06dd6ec4496f65b96fc2b01abcd04e2c9e08daf34129\"" Feb 13 16:08:20.862918 containerd[2052]: time="2025-02-13T16:08:20.862678625Z" level=info msg="StartContainer for \"a0d2af26e63c05d398da06dd6ec4496f65b96fc2b01abcd04e2c9e08daf34129\" returns successfully" Feb 13 16:08:21.471223 kubelet[3663]: I0213 16:08:21.471075 3663 setters.go:568] "Node became not ready" node="ip-172-31-24-10" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T16:08:21Z","lastTransitionTime":"2025-02-13T16:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Feb 13 16:08:21.635324 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Feb 13 16:08:21.716586 systemd[1]: run-containerd-runc-k8s.io-a0d2af26e63c05d398da06dd6ec4496f65b96fc2b01abcd04e2c9e08daf34129-runc.J5boPd.mount: Deactivated successfully. Feb 13 16:08:22.079683 kubelet[3663]: E0213 16:08:22.079198 3663 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-wvd6t" podUID="07013af2-9cdc-4e05-8e68-08a20b05136f" Feb 13 16:08:24.080497 kubelet[3663]: E0213 16:08:24.078939 3663 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-wvd6t" podUID="07013af2-9cdc-4e05-8e68-08a20b05136f" Feb 13 16:08:25.836952 systemd-networkd[1603]: lxc_health: Link UP Feb 13 16:08:25.854469 systemd-networkd[1603]: lxc_health: Gained carrier Feb 13 16:08:25.857200 (udev-worker)[6313]: Network interface NamePolicy= disabled on kernel command line. 
Feb 13 16:08:26.861276 kubelet[3663]: I0213 16:08:26.859444 3663 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-rdth9" podStartSLOduration=10.859094483 podStartE2EDuration="10.859094483s" podCreationTimestamp="2025-02-13 16:08:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:08:21.758678214 +0000 UTC m=+122.869868112" watchObservedRunningTime="2025-02-13 16:08:26.859094483 +0000 UTC m=+127.970284369" Feb 13 16:08:27.442345 systemd-networkd[1603]: lxc_health: Gained IPv6LL Feb 13 16:08:28.767409 kubelet[3663]: E0213 16:08:28.767351 3663 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:55328->127.0.0.1:38737: write tcp 127.0.0.1:55328->127.0.0.1:38737: write: broken pipe Feb 13 16:08:28.768021 kubelet[3663]: E0213 16:08:28.767209 3663 upgradeaware.go:439] Error proxying data from backend to client: write tcp 172.31.24.10:10250->172.31.24.10:47790: write: broken pipe Feb 13 16:08:29.657761 ntpd[2011]: Listen normally on 13 lxc_health [fe80::9849:d2ff:fe24:97a4%14]:123 Feb 13 16:08:29.658331 ntpd[2011]: 13 Feb 16:08:29 ntpd[2011]: Listen normally on 13 lxc_health [fe80::9849:d2ff:fe24:97a4%14]:123 Feb 13 16:08:33.340520 sshd[5536]: pam_unix(sshd:session): session closed for user core Feb 13 16:08:33.353473 systemd[1]: sshd@28-172.31.24.10:22-139.178.68.195:36334.service: Deactivated successfully. Feb 13 16:08:33.366670 systemd-logind[2030]: Session 29 logged out. Waiting for processes to exit. Feb 13 16:08:33.368179 systemd[1]: session-29.scope: Deactivated successfully. Feb 13 16:08:33.372244 systemd-logind[2030]: Removed session 29.