Mar 17 17:37:02.189922 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Mar 17 17:37:02.189966 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Mon Mar 17 16:11:40 -00 2025
Mar 17 17:37:02.189991 kernel: KASLR disabled due to lack of seed
Mar 17 17:37:02.190006 kernel: efi: EFI v2.7 by EDK II
Mar 17 17:37:02.190022 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a736a98 MEMRESERVE=0x78557598
Mar 17 17:37:02.190037 kernel: secureboot: Secure boot disabled
Mar 17 17:37:02.190054 kernel: ACPI: Early table checksum verification disabled
Mar 17 17:37:02.190069 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Mar 17 17:37:02.190084 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Mar 17 17:37:02.190099 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Mar 17 17:37:02.190119 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Mar 17 17:37:02.190134 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Mar 17 17:37:02.190149 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Mar 17 17:37:02.190164 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Mar 17 17:37:02.190182 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Mar 17 17:37:02.190202 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Mar 17 17:37:02.190219 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Mar 17 17:37:02.190235 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Mar 17 17:37:02.190275 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Mar 17 17:37:02.190292 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Mar 17 17:37:02.190308 kernel: printk: bootconsole [uart0] enabled
Mar 17 17:37:02.190324 kernel: NUMA: Failed to initialise from firmware
Mar 17 17:37:02.190340 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Mar 17 17:37:02.190357 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Mar 17 17:37:02.190372 kernel: Zone ranges:
Mar 17 17:37:02.190388 kernel:   DMA      [mem 0x0000000040000000-0x00000000ffffffff]
Mar 17 17:37:02.190410 kernel:   DMA32    empty
Mar 17 17:37:02.190426 kernel:   Normal   [mem 0x0000000100000000-0x00000004b5ffffff]
Mar 17 17:37:02.190442 kernel: Movable zone start for each node
Mar 17 17:37:02.190457 kernel: Early memory node ranges
Mar 17 17:37:02.190473 kernel:   node   0: [mem 0x0000000040000000-0x000000007862ffff]
Mar 17 17:37:02.190489 kernel:   node   0: [mem 0x0000000078630000-0x000000007863ffff]
Mar 17 17:37:02.190505 kernel:   node   0: [mem 0x0000000078640000-0x00000000786effff]
Mar 17 17:37:02.190520 kernel:   node   0: [mem 0x00000000786f0000-0x000000007872ffff]
Mar 17 17:37:02.190536 kernel:   node   0: [mem 0x0000000078730000-0x000000007bbfffff]
Mar 17 17:37:02.190552 kernel:   node   0: [mem 0x000000007bc00000-0x000000007bfdffff]
Mar 17 17:37:02.190567 kernel:   node   0: [mem 0x000000007bfe0000-0x000000007fffffff]
Mar 17 17:37:02.190583 kernel:   node   0: [mem 0x0000000400000000-0x00000004b5ffffff]
Mar 17 17:37:02.190603 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Mar 17 17:37:02.190620 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Mar 17 17:37:02.190643 kernel: psci: probing for conduit method from ACPI.
Mar 17 17:37:02.190659 kernel: psci: PSCIv1.0 detected in firmware.
Mar 17 17:37:02.190676 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 17 17:37:02.190697 kernel: psci: Trusted OS migration not required
Mar 17 17:37:02.190714 kernel: psci: SMC Calling Convention v1.1
Mar 17 17:37:02.190731 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Mar 17 17:37:02.190747 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Mar 17 17:37:02.190764 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 17 17:37:02.190781 kernel: Detected PIPT I-cache on CPU0
Mar 17 17:37:02.190798 kernel: CPU features: detected: GIC system register CPU interface
Mar 17 17:37:02.190814 kernel: CPU features: detected: Spectre-v2
Mar 17 17:37:02.190831 kernel: CPU features: detected: Spectre-v3a
Mar 17 17:37:02.190847 kernel: CPU features: detected: Spectre-BHB
Mar 17 17:37:02.190864 kernel: CPU features: detected: ARM erratum 1742098
Mar 17 17:37:02.190881 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Mar 17 17:37:02.190902 kernel: alternatives: applying boot alternatives
Mar 17 17:37:02.190921 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f8298a09e890fc732131b7281e24befaf65b596eb5216e969c8eca4cab4a2b3a
Mar 17 17:37:02.190939 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 17:37:02.190955 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 17 17:37:02.190972 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 17:37:02.190989 kernel: Fallback order for Node 0: 0
Mar 17 17:37:02.191006 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Mar 17 17:37:02.191022 kernel: Policy zone: Normal
Mar 17 17:37:02.191039 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 17:37:02.191055 kernel: software IO TLB: area num 2.
Mar 17 17:37:02.191076 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Mar 17 17:37:02.191094 kernel: Memory: 3821240K/4030464K available (10304K kernel code, 2186K rwdata, 8096K rodata, 38336K init, 897K bss, 209224K reserved, 0K cma-reserved)
Mar 17 17:37:02.191111 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 17 17:37:02.191129 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 17 17:37:02.191147 kernel: rcu: RCU event tracing is enabled.
Mar 17 17:37:02.191165 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 17 17:37:02.191182 kernel: Trampoline variant of Tasks RCU enabled.
Mar 17 17:37:02.191201 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 17:37:02.191218 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 17:37:02.191234 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 17 17:37:02.191273 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 17 17:37:02.191297 kernel: GICv3: 96 SPIs implemented
Mar 17 17:37:02.191314 kernel: GICv3: 0 Extended SPIs implemented
Mar 17 17:37:02.191330 kernel: Root IRQ handler: gic_handle_irq
Mar 17 17:37:02.191347 kernel: GICv3: GICv3 features: 16 PPIs
Mar 17 17:37:02.191364 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Mar 17 17:37:02.191380 kernel: ITS [mem 0x10080000-0x1009ffff]
Mar 17 17:37:02.191397 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Mar 17 17:37:02.191414 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Mar 17 17:37:02.191431 kernel: GICv3: using LPI property table @0x00000004000d0000
Mar 17 17:37:02.192715 kernel: ITS: Using hypervisor restricted LPI range [128]
Mar 17 17:37:02.193033 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Mar 17 17:37:02.194224 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 17 17:37:02.194274 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Mar 17 17:37:02.194292 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Mar 17 17:37:02.194309 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Mar 17 17:37:02.194326 kernel: Console: colour dummy device 80x25
Mar 17 17:37:02.194343 kernel: printk: console [tty1] enabled
Mar 17 17:37:02.194361 kernel: ACPI: Core revision 20230628
Mar 17 17:37:02.194378 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Mar 17 17:37:02.194396 kernel: pid_max: default: 32768 minimum: 301
Mar 17 17:37:02.194413 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 17 17:37:02.194430 kernel: landlock: Up and running.
Mar 17 17:37:02.194453 kernel: SELinux: Initializing.
Mar 17 17:37:02.194471 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 17:37:02.194488 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 17:37:02.194505 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 17:37:02.194522 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 17:37:02.194540 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 17:37:02.194558 kernel: rcu: Max phase no-delay instances is 400.
Mar 17 17:37:02.194575 kernel: Platform MSI: ITS@0x10080000 domain created
Mar 17 17:37:02.194596 kernel: PCI/MSI: ITS@0x10080000 domain created
Mar 17 17:37:02.194613 kernel: Remapping and enabling EFI services.
Mar 17 17:37:02.194630 kernel: smp: Bringing up secondary CPUs ...
Mar 17 17:37:02.194647 kernel: Detected PIPT I-cache on CPU1
Mar 17 17:37:02.194664 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Mar 17 17:37:02.194682 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Mar 17 17:37:02.194699 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Mar 17 17:37:02.194716 kernel: smp: Brought up 1 node, 2 CPUs
Mar 17 17:37:02.194734 kernel: SMP: Total of 2 processors activated.
Mar 17 17:37:02.194752 kernel: CPU features: detected: 32-bit EL0 Support
Mar 17 17:37:02.194774 kernel: CPU features: detected: 32-bit EL1 Support
Mar 17 17:37:02.194792 kernel: CPU features: detected: CRC32 instructions
Mar 17 17:37:02.194821 kernel: CPU: All CPU(s) started at EL1
Mar 17 17:37:02.194843 kernel: alternatives: applying system-wide alternatives
Mar 17 17:37:02.194860 kernel: devtmpfs: initialized
Mar 17 17:37:02.194878 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 17:37:02.194896 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 17 17:37:02.194914 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 17:37:02.194932 kernel: SMBIOS 3.0.0 present.
Mar 17 17:37:02.194954 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Mar 17 17:37:02.194971 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 17:37:02.194989 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 17 17:37:02.195007 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 17 17:37:02.195025 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 17 17:37:02.195043 kernel: audit: initializing netlink subsys (disabled)
Mar 17 17:37:02.195061 kernel: audit: type=2000 audit(0.218:1): state=initialized audit_enabled=0 res=1
Mar 17 17:37:02.195083 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 17:37:02.195101 kernel: cpuidle: using governor menu
Mar 17 17:37:02.195119 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 17 17:37:02.195136 kernel: ASID allocator initialised with 65536 entries
Mar 17 17:37:02.195154 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 17:37:02.195172 kernel: Serial: AMBA PL011 UART driver
Mar 17 17:37:02.195189 kernel: Modules: 17760 pages in range for non-PLT usage
Mar 17 17:37:02.195207 kernel: Modules: 509280 pages in range for PLT usage
Mar 17 17:37:02.195225 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 17 17:37:02.195265 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 17 17:37:02.195286 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 17 17:37:02.195304 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 17 17:37:02.195322 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 17:37:02.195339 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 17 17:37:02.195357 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 17 17:37:02.195374 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 17 17:37:02.195392 kernel: ACPI: Added _OSI(Module Device)
Mar 17 17:37:02.195410 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 17:37:02.195433 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 17:37:02.195451 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 17:37:02.195469 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 17:37:02.195487 kernel: ACPI: Interpreter enabled
Mar 17 17:37:02.195504 kernel: ACPI: Using GIC for interrupt routing
Mar 17 17:37:02.195523 kernel: ACPI: MCFG table detected, 1 entries
Mar 17 17:37:02.195540 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Mar 17 17:37:02.195911 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 17 17:37:02.196125 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 17 17:37:02.199107 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 17 17:37:02.199419 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Mar 17 17:37:02.199623 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Mar 17 17:37:02.199647 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Mar 17 17:37:02.199666 kernel: acpiphp: Slot [1] registered
Mar 17 17:37:02.199685 kernel: acpiphp: Slot [2] registered
Mar 17 17:37:02.199702 kernel: acpiphp: Slot [3] registered
Mar 17 17:37:02.199730 kernel: acpiphp: Slot [4] registered
Mar 17 17:37:02.199748 kernel: acpiphp: Slot [5] registered
Mar 17 17:37:02.199766 kernel: acpiphp: Slot [6] registered
Mar 17 17:37:02.199784 kernel: acpiphp: Slot [7] registered
Mar 17 17:37:02.199801 kernel: acpiphp: Slot [8] registered
Mar 17 17:37:02.199818 kernel: acpiphp: Slot [9] registered
Mar 17 17:37:02.199836 kernel: acpiphp: Slot [10] registered
Mar 17 17:37:02.199854 kernel: acpiphp: Slot [11] registered
Mar 17 17:37:02.199871 kernel: acpiphp: Slot [12] registered
Mar 17 17:37:02.199889 kernel: acpiphp: Slot [13] registered
Mar 17 17:37:02.199912 kernel: acpiphp: Slot [14] registered
Mar 17 17:37:02.199929 kernel: acpiphp: Slot [15] registered
Mar 17 17:37:02.199947 kernel: acpiphp: Slot [16] registered
Mar 17 17:37:02.199965 kernel: acpiphp: Slot [17] registered
Mar 17 17:37:02.199982 kernel: acpiphp: Slot [18] registered
Mar 17 17:37:02.200000 kernel: acpiphp: Slot [19] registered
Mar 17 17:37:02.200017 kernel: acpiphp: Slot [20] registered
Mar 17 17:37:02.200035 kernel: acpiphp: Slot [21] registered
Mar 17 17:37:02.200053 kernel: acpiphp: Slot [22] registered
Mar 17 17:37:02.200075 kernel: acpiphp: Slot [23] registered
Mar 17 17:37:02.200093 kernel: acpiphp: Slot [24] registered
Mar 17 17:37:02.200110 kernel: acpiphp: Slot [25] registered
Mar 17 17:37:02.200128 kernel: acpiphp: Slot [26] registered
Mar 17 17:37:02.200145 kernel: acpiphp: Slot [27] registered
Mar 17 17:37:02.200163 kernel: acpiphp: Slot [28] registered
Mar 17 17:37:02.200181 kernel: acpiphp: Slot [29] registered
Mar 17 17:37:02.200198 kernel: acpiphp: Slot [30] registered
Mar 17 17:37:02.200216 kernel: acpiphp: Slot [31] registered
Mar 17 17:37:02.200233 kernel: PCI host bridge to bus 0000:00
Mar 17 17:37:02.200601 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Mar 17 17:37:02.200815 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Mar 17 17:37:02.201001 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Mar 17 17:37:02.201214 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Mar 17 17:37:02.201471 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Mar 17 17:37:02.201710 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Mar 17 17:37:02.201963 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Mar 17 17:37:02.202190 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Mar 17 17:37:02.202431 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Mar 17 17:37:02.202676 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Mar 17 17:37:02.202934 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Mar 17 17:37:02.203174 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Mar 17 17:37:02.203416 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Mar 17 17:37:02.203634 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Mar 17 17:37:02.203864 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Mar 17 17:37:02.204080 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Mar 17 17:37:02.204310 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Mar 17 17:37:02.204546 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Mar 17 17:37:02.204807 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Mar 17 17:37:02.205021 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Mar 17 17:37:02.207267 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Mar 17 17:37:02.207464 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Mar 17 17:37:02.207653 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Mar 17 17:37:02.207678 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Mar 17 17:37:02.207697 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Mar 17 17:37:02.207715 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Mar 17 17:37:02.207733 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Mar 17 17:37:02.207751 kernel: iommu: Default domain type: Translated
Mar 17 17:37:02.207781 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 17 17:37:02.207799 kernel: efivars: Registered efivars operations
Mar 17 17:37:02.207817 kernel: vgaarb: loaded
Mar 17 17:37:02.207834 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 17 17:37:02.207852 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 17:37:02.207870 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 17:37:02.207888 kernel: pnp: PnP ACPI init
Mar 17 17:37:02.208107 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Mar 17 17:37:02.208138 kernel: pnp: PnP ACPI: found 1 devices
Mar 17 17:37:02.208157 kernel: NET: Registered PF_INET protocol family
Mar 17 17:37:02.208175 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 17 17:37:02.208193 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 17 17:37:02.208211 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 17:37:02.208229 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 17:37:02.208407 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 17 17:37:02.208429 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 17 17:37:02.208447 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 17:37:02.208472 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 17:37:02.208490 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 17:37:02.208507 kernel: PCI: CLS 0 bytes, default 64
Mar 17 17:37:02.208525 kernel: kvm [1]: HYP mode not available
Mar 17 17:37:02.208542 kernel: Initialise system trusted keyrings
Mar 17 17:37:02.208561 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 17 17:37:02.208579 kernel: Key type asymmetric registered
Mar 17 17:37:02.208596 kernel: Asymmetric key parser 'x509' registered
Mar 17 17:37:02.208613 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 17 17:37:02.208636 kernel: io scheduler mq-deadline registered
Mar 17 17:37:02.208654 kernel: io scheduler kyber registered
Mar 17 17:37:02.208689 kernel: io scheduler bfq registered
Mar 17 17:37:02.208911 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Mar 17 17:37:02.208938 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Mar 17 17:37:02.208956 kernel: ACPI: button: Power Button [PWRB]
Mar 17 17:37:02.208974 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Mar 17 17:37:02.208992 kernel: ACPI: button: Sleep Button [SLPB]
Mar 17 17:37:02.209016 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 17:37:02.209035 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Mar 17 17:37:02.209256 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Mar 17 17:37:02.209284 kernel: printk: console [ttyS0] disabled
Mar 17 17:37:02.209303 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Mar 17 17:37:02.209321 kernel: printk: console [ttyS0] enabled
Mar 17 17:37:02.209339 kernel: printk: bootconsole [uart0] disabled
Mar 17 17:37:02.209356 kernel: thunder_xcv, ver 1.0
Mar 17 17:37:02.209374 kernel: thunder_bgx, ver 1.0
Mar 17 17:37:02.209392 kernel: nicpf, ver 1.0
Mar 17 17:37:02.209416 kernel: nicvf, ver 1.0
Mar 17 17:37:02.209628 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 17 17:37:02.209819 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-17T17:37:01 UTC (1742233021)
Mar 17 17:37:02.209844 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 17 17:37:02.209862 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Mar 17 17:37:02.209880 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 17 17:37:02.209898 kernel: watchdog: Hard watchdog permanently disabled
Mar 17 17:37:02.209921 kernel: NET: Registered PF_INET6 protocol family
Mar 17 17:37:02.209939 kernel: Segment Routing with IPv6
Mar 17 17:37:02.209957 kernel: In-situ OAM (IOAM) with IPv6
Mar 17 17:37:02.209975 kernel: NET: Registered PF_PACKET protocol family
Mar 17 17:37:02.209992 kernel: Key type dns_resolver registered
Mar 17 17:37:02.210010 kernel: registered taskstats version 1
Mar 17 17:37:02.210027 kernel: Loading compiled-in X.509 certificates
Mar 17 17:37:02.210046 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: f4ff2820cf7379ce82b759137d15b536f0a99b51'
Mar 17 17:37:02.210063 kernel: Key type .fscrypt registered
Mar 17 17:37:02.210081 kernel: Key type fscrypt-provisioning registered
Mar 17 17:37:02.210104 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 17 17:37:02.210121 kernel: ima: Allocated hash algorithm: sha1
Mar 17 17:37:02.210139 kernel: ima: No architecture policies found
Mar 17 17:37:02.210157 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 17 17:37:02.210174 kernel: clk: Disabling unused clocks
Mar 17 17:37:02.210192 kernel: Freeing unused kernel memory: 38336K
Mar 17 17:37:02.210210 kernel: Run /init as init process
Mar 17 17:37:02.210227 kernel:   with arguments:
Mar 17 17:37:02.211477 kernel:     /init
Mar 17 17:37:02.211513 kernel:   with environment:
Mar 17 17:37:02.211531 kernel:     HOME=/
Mar 17 17:37:02.211549 kernel:     TERM=linux
Mar 17 17:37:02.211567 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 17 17:37:02.211587 systemd[1]: Successfully made /usr/ read-only.
Mar 17 17:37:02.211612 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 17 17:37:02.211633 systemd[1]: Detected virtualization amazon.
Mar 17 17:37:02.211657 systemd[1]: Detected architecture arm64.
Mar 17 17:37:02.211676 systemd[1]: Running in initrd.
Mar 17 17:37:02.211695 systemd[1]: No hostname configured, using default hostname.
Mar 17 17:37:02.211716 systemd[1]: Hostname set to .
Mar 17 17:37:02.211735 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 17:37:02.211754 systemd[1]: Queued start job for default target initrd.target.
Mar 17 17:37:02.211773 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:37:02.211793 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:37:02.211814 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 17 17:37:02.211839 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 17 17:37:02.211859 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 17 17:37:02.211880 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 17 17:37:02.211902 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 17 17:37:02.211922 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 17 17:37:02.211941 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:37:02.211965 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:37:02.211985 systemd[1]: Reached target paths.target - Path Units.
Mar 17 17:37:02.212004 systemd[1]: Reached target slices.target - Slice Units.
Mar 17 17:37:02.212024 systemd[1]: Reached target swap.target - Swaps.
Mar 17 17:37:02.212043 systemd[1]: Reached target timers.target - Timer Units.
Mar 17 17:37:02.212063 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 17:37:02.212083 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 17:37:02.212103 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 17 17:37:02.212122 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 17 17:37:02.212147 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:37:02.212166 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:37:02.212186 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:37:02.212206 systemd[1]: Reached target sockets.target - Socket Units.
Mar 17 17:37:02.212225 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 17 17:37:02.212265 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 17 17:37:02.212288 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 17 17:37:02.212308 systemd[1]: Starting systemd-fsck-usr.service...
Mar 17 17:37:02.212334 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 17 17:37:02.212354 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 17 17:37:02.212373 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:37:02.212393 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 17 17:37:02.212412 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:37:02.212433 systemd[1]: Finished systemd-fsck-usr.service.
Mar 17 17:37:02.212458 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 17 17:37:02.212478 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:37:02.212539 systemd-journald[251]: Collecting audit messages is disabled.
Mar 17 17:37:02.212586 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:37:02.212607 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 17 17:37:02.212625 kernel: Bridge firewalling registered
Mar 17 17:37:02.212645 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 17 17:37:02.212665 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:37:02.212704 systemd-journald[251]: Journal started
Mar 17 17:37:02.212747 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2cd8ea8ed61a09e6135a218dc9c948) is 8M, max 75.3M, 67.3M free.
Mar 17 17:37:02.160296 systemd-modules-load[252]: Inserted module 'overlay'
Mar 17 17:37:02.196720 systemd-modules-load[252]: Inserted module 'br_netfilter'
Mar 17 17:37:02.229937 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:37:02.239316 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 17 17:37:02.245712 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 17 17:37:02.251603 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:37:02.258520 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 17 17:37:02.273290 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:37:02.291745 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 17 17:37:02.297316 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:37:02.303010 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:37:02.310816 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 17 17:37:02.337207 dracut-cmdline[287]: dracut-dracut-053
Mar 17 17:37:02.345704 dracut-cmdline[287]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f8298a09e890fc732131b7281e24befaf65b596eb5216e969c8eca4cab4a2b3a
Mar 17 17:37:02.409632 systemd-resolved[290]: Positive Trust Anchors:
Mar 17 17:37:02.409659 systemd-resolved[290]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 17:37:02.409720 systemd-resolved[290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 17 17:37:02.513279 kernel: SCSI subsystem initialized
Mar 17 17:37:02.521278 kernel: Loading iSCSI transport class v2.0-870.
Mar 17 17:37:02.533281 kernel: iscsi: registered transport (tcp)
Mar 17 17:37:02.555276 kernel: iscsi: registered transport (qla4xxx)
Mar 17 17:37:02.555361 kernel: QLogic iSCSI HBA Driver
Mar 17 17:37:02.653279 kernel: random: crng init done
Mar 17 17:37:02.653652 systemd-resolved[290]: Defaulting to hostname 'linux'.
Mar 17 17:37:02.656924 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 17 17:37:02.657666 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:37:02.690965 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 17 17:37:02.700569 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 17 17:37:02.741345 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 17 17:37:02.741421 kernel: device-mapper: uevent: version 1.0.3
Mar 17 17:37:02.741447 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 17 17:37:02.807286 kernel: raid6: neonx8 gen() 6609 MB/s
Mar 17 17:37:02.824273 kernel: raid6: neonx4 gen() 6549 MB/s
Mar 17 17:37:02.841272 kernel: raid6: neonx2 gen() 5438 MB/s
Mar 17 17:37:02.858272 kernel: raid6: neonx1 gen() 3952 MB/s
Mar 17 17:37:02.875272 kernel: raid6: int64x8 gen() 3634 MB/s
Mar 17 17:37:02.892271 kernel: raid6: int64x4 gen() 3713 MB/s
Mar 17 17:37:02.909272 kernel: raid6: int64x2 gen() 3609 MB/s
Mar 17 17:37:02.927078 kernel: raid6: int64x1 gen() 2768 MB/s
Mar 17 17:37:02.927110 kernel: raid6: using algorithm neonx8 gen() 6609 MB/s
Mar 17 17:37:02.945049 kernel: raid6: .... xor() 4762 MB/s, rmw enabled
Mar 17 17:37:02.945087 kernel: raid6: using neon recovery algorithm
Mar 17 17:37:02.953089 kernel: xor: measuring software checksum speed
Mar 17 17:37:02.953149 kernel: 8regs : 12919 MB/sec
Mar 17 17:37:02.954272 kernel: 32regs : 11995 MB/sec
Mar 17 17:37:02.956300 kernel: arm64_neon : 8974 MB/sec
Mar 17 17:37:02.956333 kernel: xor: using function: 8regs (12919 MB/sec)
Mar 17 17:37:03.038289 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 17 17:37:03.057487 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 17:37:03.069536 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:37:03.109446 systemd-udevd[472]: Using default interface naming scheme 'v255'.
Mar 17 17:37:03.120662 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:37:03.130498 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 17 17:37:03.169542 dracut-pre-trigger[477]: rd.md=0: removing MD RAID activation
Mar 17 17:37:03.225539 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 17:37:03.235510 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 17:37:03.355544 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:37:03.373534 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 17 17:37:03.411333 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 17 17:37:03.416939 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 17:37:03.426388 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:37:03.428736 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 17:37:03.441550 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 17 17:37:03.479108 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 17:37:03.553922 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Mar 17 17:37:03.553985 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Mar 17 17:37:03.579505 kernel: ena 0000:00:05.0: ENA device version: 0.10
Mar 17 17:37:03.579772 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Mar 17 17:37:03.580006 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:89:9c:b6:f8:4f
Mar 17 17:37:03.585420 (udev-worker)[519]: Network interface NamePolicy= disabled on kernel command line.
Mar 17 17:37:03.593905 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 17:37:03.612950 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Mar 17 17:37:03.612991 kernel: nvme nvme0: pci function 0000:00:04.0
Mar 17 17:37:03.594167 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:37:03.596820 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:37:03.599356 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:37:03.599645 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:37:03.632338 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Mar 17 17:37:03.615789 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:37:03.635060 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:37:03.641980 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 17 17:37:03.652302 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 17 17:37:03.652383 kernel: GPT:9289727 != 16777215
Mar 17 17:37:03.655557 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 17 17:37:03.656358 kernel: GPT:9289727 != 16777215
Mar 17 17:37:03.657363 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 17 17:37:03.658287 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 17 17:37:03.667562 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:37:03.679604 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:37:03.730498 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:37:03.769292 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (544)
Mar 17 17:37:03.785336 kernel: BTRFS: device fsid 5ecee764-de70-4de1-8711-3798360e0d13 devid 1 transid 39 /dev/nvme0n1p3 scanned by (udev-worker) (519)
Mar 17 17:37:03.905161 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 17 17:37:03.930960 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Mar 17 17:37:03.955066 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Mar 17 17:37:03.978430 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Mar 17 17:37:03.980802 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Mar 17 17:37:03.996559 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 17 17:37:04.009358 disk-uuid[662]: Primary Header is updated.
Mar 17 17:37:04.009358 disk-uuid[662]: Secondary Entries is updated.
Mar 17 17:37:04.009358 disk-uuid[662]: Secondary Header is updated.
Mar 17 17:37:04.020277 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 17 17:37:04.028303 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 17 17:37:05.035019 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 17 17:37:05.036983 disk-uuid[663]: The operation has completed successfully.
Mar 17 17:37:05.242309 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 17 17:37:05.244296 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 17 17:37:05.314487 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 17 17:37:05.323122 sh[922]: Success
Mar 17 17:37:05.341275 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Mar 17 17:37:05.454065 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 17 17:37:05.465481 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 17 17:37:05.468939 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 17 17:37:05.509733 kernel: BTRFS info (device dm-0): first mount of filesystem 5ecee764-de70-4de1-8711-3798360e0d13
Mar 17 17:37:05.509795 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:37:05.513279 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 17 17:37:05.513315 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 17 17:37:05.513341 kernel: BTRFS info (device dm-0): using free space tree
Mar 17 17:37:05.574287 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Mar 17 17:37:05.602557 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 17 17:37:05.606462 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 17 17:37:05.624487 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 17 17:37:05.631618 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 17 17:37:05.665282 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45
Mar 17 17:37:05.665364 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:37:05.666715 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 17 17:37:05.674312 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 17 17:37:05.689801 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 17 17:37:05.692501 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45
Mar 17 17:37:05.703144 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 17 17:37:05.714698 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 17 17:37:05.818144 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 17:37:05.833501 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 17 17:37:05.889139 systemd-networkd[1117]: lo: Link UP
Mar 17 17:37:05.889162 systemd-networkd[1117]: lo: Gained carrier
Mar 17 17:37:05.894222 systemd-networkd[1117]: Enumeration completed
Mar 17 17:37:05.894886 systemd-networkd[1117]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:37:05.894894 systemd-networkd[1117]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 17:37:05.896550 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 17 17:37:05.906170 systemd-networkd[1117]: eth0: Link UP
Mar 17 17:37:05.906183 systemd-networkd[1117]: eth0: Gained carrier
Mar 17 17:37:05.906200 systemd-networkd[1117]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:37:05.922115 systemd[1]: Reached target network.target - Network.
Mar 17 17:37:05.939334 systemd-networkd[1117]: eth0: DHCPv4 address 172.31.25.124/20, gateway 172.31.16.1 acquired from 172.31.16.1
Mar 17 17:37:06.111690 ignition[1026]: Ignition 2.20.0
Mar 17 17:37:06.111720 ignition[1026]: Stage: fetch-offline
Mar 17 17:37:06.112152 ignition[1026]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:37:06.112177 ignition[1026]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 17:37:06.114874 ignition[1026]: Ignition finished successfully
Mar 17 17:37:06.121861 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 17:37:06.132603 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 17 17:37:06.159344 ignition[1127]: Ignition 2.20.0
Mar 17 17:37:06.159372 ignition[1127]: Stage: fetch
Mar 17 17:37:06.160972 ignition[1127]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:37:06.161028 ignition[1127]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 17:37:06.161684 ignition[1127]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 17:37:06.193139 ignition[1127]: PUT result: OK
Mar 17 17:37:06.196915 ignition[1127]: parsed url from cmdline: ""
Mar 17 17:37:06.196933 ignition[1127]: no config URL provided
Mar 17 17:37:06.196949 ignition[1127]: reading system config file "/usr/lib/ignition/user.ign"
Mar 17 17:37:06.197141 ignition[1127]: no config at "/usr/lib/ignition/user.ign"
Mar 17 17:37:06.197176 ignition[1127]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 17:37:06.200850 ignition[1127]: PUT result: OK
Mar 17 17:37:06.200929 ignition[1127]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Mar 17 17:37:06.205064 ignition[1127]: GET result: OK
Mar 17 17:37:06.205235 ignition[1127]: parsing config with SHA512: e6902003fa9d43ac4c3e326c233d94c6b7185747da260001c202337a13ec122243c7c13f55f4cb343817d0c4fe3d1277548e8cf86e4c9fc85c9435907d3ed579
Mar 17 17:37:06.218163 unknown[1127]: fetched base config from "system"
Mar 17 17:37:06.218437 unknown[1127]: fetched base config from "system"
Mar 17 17:37:06.219486 ignition[1127]: fetch: fetch complete
Mar 17 17:37:06.218466 unknown[1127]: fetched user config from "aws"
Mar 17 17:37:06.219498 ignition[1127]: fetch: fetch passed
Mar 17 17:37:06.219589 ignition[1127]: Ignition finished successfully
Mar 17 17:37:06.230712 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 17 17:37:06.242545 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 17 17:37:06.272626 ignition[1134]: Ignition 2.20.0
Mar 17 17:37:06.272670 ignition[1134]: Stage: kargs
Mar 17 17:37:06.274301 ignition[1134]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:37:06.274356 ignition[1134]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 17:37:06.275419 ignition[1134]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 17:37:06.279309 ignition[1134]: PUT result: OK
Mar 17 17:37:06.285343 ignition[1134]: kargs: kargs passed
Mar 17 17:37:06.285485 ignition[1134]: Ignition finished successfully
Mar 17 17:37:06.290225 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 17 17:37:06.304080 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 17 17:37:06.325545 ignition[1140]: Ignition 2.20.0
Mar 17 17:37:06.325566 ignition[1140]: Stage: disks
Mar 17 17:37:06.326111 ignition[1140]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:37:06.326145 ignition[1140]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 17:37:06.326831 ignition[1140]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 17:37:06.330383 ignition[1140]: PUT result: OK
Mar 17 17:37:06.340110 ignition[1140]: disks: disks passed
Mar 17 17:37:06.341551 ignition[1140]: Ignition finished successfully
Mar 17 17:37:06.345106 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 17 17:37:06.349050 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 17 17:37:06.353495 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 17 17:37:06.356111 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 17:37:06.358025 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 17 17:37:06.360411 systemd[1]: Reached target basic.target - Basic System.
Mar 17 17:37:06.376503 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 17 17:37:06.429308 systemd-fsck[1148]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 17 17:37:06.434892 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 17 17:37:06.512442 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 17 17:37:06.607294 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 3914ef65-c5cd-468c-8ee7-964383d8e9e2 r/w with ordered data mode. Quota mode: none.
Mar 17 17:37:06.608470 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 17 17:37:06.612429 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 17 17:37:06.628425 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:37:06.644490 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 17 17:37:06.650828 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 17 17:37:06.650939 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 17 17:37:06.651067 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:37:06.659646 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 17 17:37:06.667438 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 17 17:37:06.688086 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1167)
Mar 17 17:37:06.691812 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45
Mar 17 17:37:06.691862 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:37:06.693091 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 17 17:37:06.707289 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 17 17:37:06.710098 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:37:07.045299 initrd-setup-root[1191]: cut: /sysroot/etc/passwd: No such file or directory
Mar 17 17:37:07.054696 initrd-setup-root[1198]: cut: /sysroot/etc/group: No such file or directory
Mar 17 17:37:07.063056 initrd-setup-root[1205]: cut: /sysroot/etc/shadow: No such file or directory
Mar 17 17:37:07.071724 initrd-setup-root[1212]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 17 17:37:07.407557 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 17 17:37:07.420409 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 17 17:37:07.426459 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 17 17:37:07.443277 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45
Mar 17 17:37:07.486805 ignition[1280]: INFO : Ignition 2.20.0
Mar 17 17:37:07.489430 ignition[1280]: INFO : Stage: mount
Mar 17 17:37:07.489430 ignition[1280]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:37:07.489430 ignition[1280]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 17:37:07.489430 ignition[1280]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 17:37:07.487957 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 17 17:37:07.499838 ignition[1280]: INFO : PUT result: OK
Mar 17 17:37:07.508342 ignition[1280]: INFO : mount: mount passed
Mar 17 17:37:07.508342 ignition[1280]: INFO : Ignition finished successfully
Mar 17 17:37:07.508062 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 17 17:37:07.514509 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 17 17:37:07.530546 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 17 17:37:07.538524 systemd-networkd[1117]: eth0: Gained IPv6LL
Mar 17 17:37:07.552636 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:37:07.581942 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1292)
Mar 17 17:37:07.582006 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45
Mar 17 17:37:07.582032 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:37:07.584625 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 17 17:37:07.590280 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 17 17:37:07.593335 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:37:07.627535 ignition[1309]: INFO : Ignition 2.20.0
Mar 17 17:37:07.627535 ignition[1309]: INFO : Stage: files
Mar 17 17:37:07.630681 ignition[1309]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:37:07.630681 ignition[1309]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 17:37:07.630681 ignition[1309]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 17:37:07.637303 ignition[1309]: INFO : PUT result: OK
Mar 17 17:37:07.642650 ignition[1309]: DEBUG : files: compiled without relabeling support, skipping
Mar 17 17:37:07.664415 ignition[1309]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 17 17:37:07.664415 ignition[1309]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 17 17:37:07.685609 ignition[1309]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 17 17:37:07.688428 ignition[1309]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 17 17:37:07.691400 unknown[1309]: wrote ssh authorized keys file for user: core
Mar 17 17:37:07.693648 ignition[1309]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 17 17:37:07.707012 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Mar 17 17:37:07.710727 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Mar 17 17:37:07.911897 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 17 17:37:08.549274 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Mar 17 17:37:08.553104 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 17:37:08.553104 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Mar 17 17:37:09.016212 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 17 17:37:09.167300 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 17:37:09.167300 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 17 17:37:09.174399 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 17 17:37:09.174399 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 17:37:09.174399 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 17:37:09.174399 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 17:37:09.174399 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 17:37:09.174399 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 17:37:09.174399 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 17:37:09.197128 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:37:09.197128 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:37:09.197128 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Mar 17 17:37:09.197128 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Mar 17 17:37:09.197128 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Mar 17 17:37:09.197128 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
Mar 17 17:37:09.589396 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 17 17:37:09.931649 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Mar 17 17:37:09.931649 ignition[1309]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 17 17:37:09.939413 ignition[1309]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 17:37:09.939413 ignition[1309]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 17:37:09.939413 ignition[1309]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 17 17:37:09.939413 ignition[1309]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Mar 17 17:37:09.939413 ignition[1309]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Mar 17 17:37:09.939413 ignition[1309]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:37:09.939413 ignition[1309]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:37:09.939413 ignition[1309]: INFO : files: files passed
Mar 17 17:37:09.939413 ignition[1309]: INFO : Ignition finished successfully
Mar 17 17:37:09.964646 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 17 17:37:09.983674 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 17 17:37:09.990395 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 17 17:37:10.000443 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 17 17:37:10.000681 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 17 17:37:10.040018 initrd-setup-root-after-ignition[1337]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:37:10.040018 initrd-setup-root-after-ignition[1337]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:37:10.048690 initrd-setup-root-after-ignition[1341]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:37:10.054103 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 17:37:10.059890 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 17 17:37:10.070627 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 17 17:37:10.124938 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 17 17:37:10.125137 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 17 17:37:10.128047 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 17 17:37:10.130093 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 17 17:37:10.132099 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 17 17:37:10.141980 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 17 17:37:10.181576 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 17 17:37:10.195505 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 17 17:37:10.228204 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:37:10.232985 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:37:10.250737 systemd[1]: Stopped target timers.target - Timer Units.
Mar 17 17:37:10.252856 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 17 17:37:10.253171 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 17 17:37:10.260725 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 17 17:37:10.263015 systemd[1]: Stopped target basic.target - Basic System.
Mar 17 17:37:10.268204 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 17 17:37:10.270643 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:37:10.276957 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 17 17:37:10.279461 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 17 17:37:10.285327 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 17:37:10.288299 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 17 17:37:10.294068 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 17 17:37:10.296351 systemd[1]: Stopped target swap.target - Swaps.
Mar 17 17:37:10.301101 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 17 17:37:10.301521 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 17:37:10.307608 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:37:10.309973 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:37:10.316198 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 17 17:37:10.320201 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:37:10.322931 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 17 17:37:10.323425 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 17 17:37:10.331290 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 17 17:37:10.331715 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 17:37:10.338605 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 17 17:37:10.339017 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 17 17:37:10.361444 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 17 17:37:10.369348 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 17 17:37:10.371101 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 17 17:37:10.371430 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:37:10.379812 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 17 17:37:10.380059 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 17:37:10.401908 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 17 17:37:10.403835 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 17 17:37:10.425969 ignition[1361]: INFO : Ignition 2.20.0
Mar 17 17:37:10.425969 ignition[1361]: INFO : Stage: umount
Mar 17 17:37:10.425969 ignition[1361]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:37:10.425969 ignition[1361]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 17:37:10.425969 ignition[1361]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 17:37:10.448800 ignition[1361]: INFO : PUT result: OK
Mar 17 17:37:10.451587 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 17 17:37:10.457114 ignition[1361]: INFO : umount: umount passed
Mar 17 17:37:10.458851 ignition[1361]: INFO : Ignition finished successfully
Mar 17 17:37:10.463125 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 17 17:37:10.463545 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 17 17:37:10.468893 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 17 17:37:10.469170 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 17 17:37:10.473907 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 17 17:37:10.474067 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 17 17:37:10.476130 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 17 17:37:10.476231 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 17 17:37:10.477624 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 17 17:37:10.477708 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 17 17:37:10.478309 systemd[1]: Stopped target network.target - Network.
Mar 17 17:37:10.478550 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 17 17:37:10.478629 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 17:37:10.478895 systemd[1]: Stopped target paths.target - Path Units.
Mar 17 17:37:10.479153 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 17 17:37:10.484723 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:37:10.484806 systemd[1]: Stopped target slices.target - Slice Units.
Mar 17 17:37:10.484860 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 17 17:37:10.484965 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 17 17:37:10.485036 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 17:37:10.485146 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 17 17:37:10.485207 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 17:37:10.485399 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 17 17:37:10.485479 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 17 17:37:10.497568 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 17 17:37:10.497660 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 17 17:37:10.499669 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 17 17:37:10.499748 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 17 17:37:10.501966 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 17 17:37:10.504108 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 17 17:37:10.562299 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 17 17:37:10.562701 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 17 17:37:10.586428 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 17 17:37:10.589230 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 17 17:37:10.589465 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 17 17:37:10.597371 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 17 17:37:10.598519 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 17 17:37:10.598636 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:37:10.616912 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 17 17:37:10.618696 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 17 17:37:10.618807 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 17:37:10.621207 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 17 17:37:10.621329 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:37:10.626514 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 17 17:37:10.626613 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:37:10.629116 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 17 17:37:10.629199 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:37:10.648360 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:37:10.665698 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 17 17:37:10.665850 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 17 17:37:10.679033 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 17 17:37:10.681414 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:37:10.687748 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 17 17:37:10.687886 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:37:10.690561 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 17 17:37:10.690633 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:37:10.697796 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 17 17:37:10.697904 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 17:37:10.700187 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 17 17:37:10.700301 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 17 17:37:10.713599 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 17:37:10.713703 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:37:10.725537 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 17 17:37:10.728759 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 17 17:37:10.728880 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:37:10.731819 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 17 17:37:10.731905 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 17 17:37:10.745874 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 17 17:37:10.745976 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:37:10.748358 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:37:10.748437 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:37:10.760581 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 17 17:37:10.760749 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 17 17:37:10.761501 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 17 17:37:10.761715 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 17 17:37:10.779175 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 17 17:37:10.779538 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 17 17:37:10.785969 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 17 17:37:10.795602 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 17 17:37:10.815429 systemd[1]: Switching root.
Mar 17 17:37:10.854156 systemd-journald[251]: Journal stopped
Mar 17 17:37:13.175594 systemd-journald[251]: Received SIGTERM from PID 1 (systemd).
Mar 17 17:37:13.175729 kernel: SELinux: policy capability network_peer_controls=1
Mar 17 17:37:13.175773 kernel: SELinux: policy capability open_perms=1
Mar 17 17:37:13.175819 kernel: SELinux: policy capability extended_socket_class=1
Mar 17 17:37:13.175858 kernel: SELinux: policy capability always_check_network=0
Mar 17 17:37:13.175887 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 17 17:37:13.175917 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 17 17:37:13.175946 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 17 17:37:13.175981 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 17 17:37:13.176011 kernel: audit: type=1403 audit(1742233031.394:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 17 17:37:13.176042 systemd[1]: Successfully loaded SELinux policy in 92.839ms.
Mar 17 17:37:13.176085 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 24.184ms.
Mar 17 17:37:13.176118 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 17 17:37:13.176152 systemd[1]: Detected virtualization amazon.
Mar 17 17:37:13.176184 systemd[1]: Detected architecture arm64.
Mar 17 17:37:13.176215 systemd[1]: Detected first boot.
Mar 17 17:37:13.176269 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 17:37:13.176312 zram_generator::config[1406]: No configuration found.
Mar 17 17:37:13.176353 kernel: NET: Registered PF_VSOCK protocol family
Mar 17 17:37:13.176386 systemd[1]: Populated /etc with preset unit settings.
Mar 17 17:37:13.176417 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 17 17:37:13.176449 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 17 17:37:13.176483 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 17 17:37:13.176513 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 17 17:37:13.176545 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 17 17:37:13.176581 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 17 17:37:13.176645 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 17 17:37:13.176678 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 17 17:37:13.176711 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 17 17:37:13.176743 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 17 17:37:13.176774 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 17 17:37:13.176805 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 17 17:37:13.176836 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:37:13.176865 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:37:13.176899 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 17 17:37:13.176929 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 17 17:37:13.176958 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 17 17:37:13.176987 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 17 17:37:13.177018 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 17 17:37:13.177052 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:37:13.177082 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 17 17:37:13.177112 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 17 17:37:13.177148 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 17 17:37:13.177179 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 17 17:37:13.177209 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:37:13.177283 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 17:37:13.177322 systemd[1]: Reached target slices.target - Slice Units.
Mar 17 17:37:13.177353 systemd[1]: Reached target swap.target - Swaps.
Mar 17 17:37:13.177382 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 17 17:37:13.177412 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 17 17:37:13.177444 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 17 17:37:13.177557 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:37:13.178166 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:37:13.178790 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:37:13.179401 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 17 17:37:13.179474 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 17 17:37:13.179510 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 17 17:37:13.179541 systemd[1]: Mounting media.mount - External Media Directory...
Mar 17 17:37:13.179570 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 17 17:37:13.179600 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 17 17:37:13.179638 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 17 17:37:13.179675 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 17 17:37:13.179707 systemd[1]: Reached target machines.target - Containers.
Mar 17 17:37:13.179736 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 17 17:37:13.179766 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:37:13.179796 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 17 17:37:13.179828 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 17 17:37:13.179856 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:37:13.179889 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 17 17:37:13.179918 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:37:13.179946 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 17 17:37:13.179974 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:37:13.180006 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 17 17:37:13.180035 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 17 17:37:13.180064 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 17 17:37:13.180092 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 17 17:37:13.180125 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 17 17:37:13.180157 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 17 17:37:13.180186 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 17 17:37:13.180214 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 17 17:37:13.180262 kernel: fuse: init (API version 7.39)
Mar 17 17:37:13.180299 kernel: loop: module loaded
Mar 17 17:37:13.180328 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 17 17:37:13.180412 systemd-journald[1489]: Collecting audit messages is disabled.
Mar 17 17:37:13.180483 kernel: ACPI: bus type drm_connector registered
Mar 17 17:37:13.180515 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 17 17:37:13.180544 systemd-journald[1489]: Journal started
Mar 17 17:37:13.180594 systemd-journald[1489]: Runtime Journal (/run/log/journal/ec2cd8ea8ed61a09e6135a218dc9c948) is 8M, max 75.3M, 67.3M free.
Mar 17 17:37:12.703439 systemd[1]: Queued start job for default target multi-user.target.
Mar 17 17:37:12.717498 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Mar 17 17:37:12.718376 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 17 17:37:13.195548 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 17 17:37:13.195625 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 17:37:13.204435 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 17 17:37:13.204522 systemd[1]: Stopped verity-setup.service.
Mar 17 17:37:13.220319 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 17 17:37:13.226846 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 17 17:37:13.231443 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 17 17:37:13.233893 systemd[1]: Mounted media.mount - External Media Directory.
Mar 17 17:37:13.236608 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 17 17:37:13.238980 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 17 17:37:13.241406 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 17 17:37:13.243949 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:37:13.247132 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 17 17:37:13.247569 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 17 17:37:13.252125 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:37:13.252541 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:37:13.255649 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 17:37:13.256048 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 17 17:37:13.259181 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:37:13.259613 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:37:13.262746 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 17 17:37:13.263144 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 17 17:37:13.267108 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:37:13.267812 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:37:13.273673 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:37:13.276561 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 17 17:37:13.285839 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 17 17:37:13.319913 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 17 17:37:13.333435 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 17 17:37:13.339483 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 17 17:37:13.343624 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 17 17:37:13.343710 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 17:37:13.347763 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 17 17:37:13.360481 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 17 17:37:13.371742 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 17 17:37:13.373976 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:37:13.382534 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 17 17:37:13.389558 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 17 17:37:13.391836 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 17:37:13.395616 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 17 17:37:13.397747 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 17:37:13.400518 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:37:13.410670 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 17 17:37:13.415537 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 17 17:37:13.421375 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 17 17:37:13.425544 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 17 17:37:13.428300 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 17 17:37:13.446633 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 17 17:37:13.450651 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 17 17:37:13.514107 systemd-journald[1489]: Time spent on flushing to /var/log/journal/ec2cd8ea8ed61a09e6135a218dc9c948 is 108.700ms for 925 entries.
Mar 17 17:37:13.514107 systemd-journald[1489]: System Journal (/var/log/journal/ec2cd8ea8ed61a09e6135a218dc9c948) is 8M, max 195.6M, 187.6M free.
Mar 17 17:37:13.674085 systemd-journald[1489]: Received client request to flush runtime journal.
Mar 17 17:37:13.674172 kernel: loop0: detected capacity change from 0 to 123192
Mar 17 17:37:13.531188 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 17 17:37:13.534636 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 17 17:37:13.554564 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 17 17:37:13.561478 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:37:13.605346 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:37:13.619138 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 17 17:37:13.631807 systemd-tmpfiles[1540]: ACLs are not supported, ignoring.
Mar 17 17:37:13.631832 systemd-tmpfiles[1540]: ACLs are not supported, ignoring.
Mar 17 17:37:13.674452 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 17 17:37:13.680423 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 17 17:37:13.694278 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 17 17:37:13.695667 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 17 17:37:13.698608 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 17 17:37:13.712411 udevadm[1554]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 17 17:37:13.722135 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 17 17:37:13.737797 kernel: loop1: detected capacity change from 0 to 53784
Mar 17 17:37:13.805870 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 17 17:37:13.819840 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 17 17:37:13.844292 kernel: loop2: detected capacity change from 0 to 201592
Mar 17 17:37:13.879096 systemd-tmpfiles[1566]: ACLs are not supported, ignoring.
Mar 17 17:37:13.879128 systemd-tmpfiles[1566]: ACLs are not supported, ignoring.
Mar 17 17:37:13.891977 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:37:13.991383 kernel: loop3: detected capacity change from 0 to 113512
Mar 17 17:37:14.073105 kernel: loop4: detected capacity change from 0 to 123192
Mar 17 17:37:14.098291 kernel: loop5: detected capacity change from 0 to 53784
Mar 17 17:37:14.117675 kernel: loop6: detected capacity change from 0 to 201592
Mar 17 17:37:14.148349 kernel: loop7: detected capacity change from 0 to 113512
Mar 17 17:37:14.163322 (sd-merge)[1571]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Mar 17 17:37:14.164903 (sd-merge)[1571]: Merged extensions into '/usr'.
Mar 17 17:37:14.173686 systemd[1]: Reload requested from client PID 1539 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 17 17:37:14.173889 systemd[1]: Reloading...
Mar 17 17:37:14.363280 zram_generator::config[1606]: No configuration found.
Mar 17 17:37:14.704674 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:37:14.862597 systemd[1]: Reloading finished in 686 ms.
Mar 17 17:37:14.891001 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 17 17:37:14.905346 systemd[1]: Starting ensure-sysext.service...
Mar 17 17:37:14.910589 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 17 17:37:14.928470 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 17 17:37:14.948904 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:37:14.951593 systemd[1]: Reload requested from client PID 1650 ('systemctl') (unit ensure-sysext.service)...
Mar 17 17:37:14.951618 systemd[1]: Reloading...
Mar 17 17:37:15.018402 systemd-tmpfiles[1651]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 17 17:37:15.018920 systemd-tmpfiles[1651]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 17 17:37:15.020751 systemd-tmpfiles[1651]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 17 17:37:15.023455 systemd-tmpfiles[1651]: ACLs are not supported, ignoring.
Mar 17 17:37:15.023624 systemd-tmpfiles[1651]: ACLs are not supported, ignoring.
Mar 17 17:37:15.044587 systemd-tmpfiles[1651]: Detected autofs mount point /boot during canonicalization of boot.
Mar 17 17:37:15.044628 systemd-tmpfiles[1651]: Skipping /boot
Mar 17 17:37:15.066150 systemd-udevd[1653]: Using default interface naming scheme 'v255'.
Mar 17 17:37:15.113165 systemd-tmpfiles[1651]: Detected autofs mount point /boot during canonicalization of boot.
Mar 17 17:37:15.113196 systemd-tmpfiles[1651]: Skipping /boot
Mar 17 17:37:15.218315 zram_generator::config[1694]: No configuration found.
Mar 17 17:37:15.235197 ldconfig[1534]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 17 17:37:15.408886 (udev-worker)[1690]: Network interface NamePolicy= disabled on kernel command line.
Mar 17 17:37:15.526270 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1686)
Mar 17 17:37:15.610943 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:37:15.799577 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 17 17:37:15.800159 systemd[1]: Reloading finished in 847 ms.
Mar 17 17:37:15.817823 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:37:15.820859 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 17 17:37:15.853916 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:37:15.904442 systemd[1]: Finished ensure-sysext.service.
Mar 17 17:37:15.937996 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 17 17:37:15.960829 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 17 17:37:15.969610 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 17 17:37:15.981607 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 17 17:37:15.984102 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:37:15.987148 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 17 17:37:15.993227 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:37:16.002847 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 17 17:37:16.009102 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:37:16.014616 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:37:16.016971 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:37:16.024380 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 17 17:37:16.026679 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 17 17:37:16.030257 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 17 17:37:16.041568 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 17 17:37:16.052618 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 17 17:37:16.055591 systemd[1]: Reached target time-set.target - System Time Set.
Mar 17 17:37:16.064035 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 17 17:37:16.073589 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:37:16.082613 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:37:16.083478 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:37:16.089979 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:37:16.091754 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:37:16.109909 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 17:37:16.124042 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:37:16.126404 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:37:16.129206 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 17:37:16.140541 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 17:37:16.141456 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 17 17:37:16.152794 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 17 17:37:16.167211 lvm[1853]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 17:37:16.194132 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 17 17:37:16.203101 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 17 17:37:16.227990 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 17 17:37:16.230745 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 17 17:37:16.252018 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 17 17:37:16.254655 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:37:16.268618 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 17 17:37:16.285429 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 17 17:37:16.297209 augenrules[1898]: No rules
Mar 17 17:37:16.300706 lvm[1893]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 17:37:16.298674 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 17 17:37:16.307063 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 17 17:37:16.307526 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 17 17:37:16.346087 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 17 17:37:16.357343 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 17 17:37:16.372016 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 17 17:37:16.416100 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:37:16.507441 systemd-networkd[1861]: lo: Link UP
Mar 17 17:37:16.507456 systemd-networkd[1861]: lo: Gained carrier
Mar 17 17:37:16.511199 systemd-networkd[1861]: Enumeration completed
Mar 17 17:37:16.511618 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 17 17:37:16.515996 systemd-networkd[1861]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:37:16.516019 systemd-networkd[1861]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 17:37:16.518303 systemd-resolved[1864]: Positive Trust Anchors:
Mar 17 17:37:16.518337 systemd-resolved[1864]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 17:37:16.518401 systemd-resolved[1864]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 17 17:37:16.520412 systemd-networkd[1861]: eth0: Link UP
Mar 17 17:37:16.520747 systemd-networkd[1861]: eth0: Gained carrier
Mar 17 17:37:16.520785 systemd-networkd[1861]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:37:16.521526 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 17 17:37:16.530373 systemd-networkd[1861]: eth0: DHCPv4 address 172.31.25.124/20, gateway 172.31.16.1 acquired from 172.31.16.1
Mar 17 17:37:16.534606 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 17 17:37:16.548197 systemd-resolved[1864]: Defaulting to hostname 'linux'.
Mar 17 17:37:16.552087 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 17 17:37:16.554440 systemd[1]: Reached target network.target - Network.
Mar 17 17:37:16.556202 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:37:16.558458 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 17 17:37:16.560612 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 17 17:37:16.562957 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 17 17:37:16.565601 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 17 17:37:16.567844 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 17 17:37:16.570222 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 17 17:37:16.572575 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 17 17:37:16.572644 systemd[1]: Reached target paths.target - Path Units.
Mar 17 17:37:16.574374 systemd[1]: Reached target timers.target - Timer Units.
Mar 17 17:37:16.578467 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 17 17:37:16.583028 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 17 17:37:16.591405 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 17 17:37:16.594455 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 17 17:37:16.596790 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 17 17:37:16.608505 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 17 17:37:16.611208 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 17 17:37:16.615189 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 17 17:37:16.617991 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 17 17:37:16.621345 systemd[1]: Reached target sockets.target - Socket Units.
Mar 17 17:37:16.623435 systemd[1]: Reached target basic.target - Basic System.
Mar 17 17:37:16.625492 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 17 17:37:16.625560 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 17 17:37:16.640408 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 17 17:37:16.645598 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 17 17:37:16.651608 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 17 17:37:16.662704 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 17 17:37:16.668414 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 17 17:37:16.670478 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 17 17:37:16.675401 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 17 17:37:16.689276 jq[1925]: false
Mar 17 17:37:16.697722 systemd[1]: Started ntpd.service - Network Time Service.
Mar 17 17:37:16.705781 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 17 17:37:16.720460 systemd[1]: Starting setup-oem.service - Setup OEM...
Mar 17 17:37:16.732195 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 17 17:37:16.740622 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 17 17:37:16.770565 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 17 17:37:16.776913 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 17 17:37:16.779909 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 17 17:37:16.782592 systemd[1]: Starting update-engine.service - Update Engine...
Mar 17 17:37:16.791968 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 17 17:37:16.805790 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 17 17:37:16.806303 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 17 17:37:16.850801 extend-filesystems[1926]: Found loop4
Mar 17 17:37:16.850801 extend-filesystems[1926]: Found loop5
Mar 17 17:37:16.850801 extend-filesystems[1926]: Found loop6
Mar 17 17:37:16.850801 extend-filesystems[1926]: Found loop7
Mar 17 17:37:16.850801 extend-filesystems[1926]: Found nvme0n1
Mar 17 17:37:16.850801 extend-filesystems[1926]: Found nvme0n1p1
Mar 17 17:37:16.850801 extend-filesystems[1926]: Found nvme0n1p2
Mar 17 17:37:16.850801 extend-filesystems[1926]: Found nvme0n1p3
Mar 17 17:37:16.850801 extend-filesystems[1926]: Found usr
Mar 17 17:37:16.850801 extend-filesystems[1926]: Found nvme0n1p4
Mar 17 17:37:16.850801 extend-filesystems[1926]: Found nvme0n1p6
Mar 17 17:37:16.850801 extend-filesystems[1926]: Found nvme0n1p7
Mar 17 17:37:16.850801 extend-filesystems[1926]: Found nvme0n1p9
Mar 17 17:37:16.850801 extend-filesystems[1926]: Checking size of /dev/nvme0n1p9
Mar 17 17:37:16.929786 update_engine[1938]: I20250317 17:37:16.890637  1938 main.cc:92] Flatcar Update Engine starting
Mar 17 17:37:16.899024 dbus-daemon[1924]: [system] SELinux support is enabled
Mar 17 17:37:16.865766 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 17 17:37:16.918534 dbus-daemon[1924]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1861 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Mar 17 17:37:16.866192 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 17 17:37:16.925164 dbus-daemon[1924]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 17 17:37:16.899324 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 17 17:37:16.922616 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 17 17:37:16.922666 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 17 17:37:16.938490 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 17 17:37:16.938536 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 17 17:37:16.950010 tar[1942]: linux-arm64/LICENSE
Mar 17 17:37:16.993434 tar[1942]: linux-arm64/helm
Mar 17 17:37:16.993546 update_engine[1938]: I20250317 17:37:16.967825  1938 update_check_scheduler.cc:74] Next update check in 11m23s
Mar 17 17:37:16.960577 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Mar 17 17:37:16.964656 systemd[1]: Started update-engine.service - Update Engine.
Mar 17 17:37:16.993046 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 17 17:37:17.000566 extend-filesystems[1926]: Resized partition /dev/nvme0n1p9
Mar 17 17:37:16.999364 systemd[1]: motdgen.service: Deactivated successfully.
Mar 17 17:37:16.999867 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 17 17:37:17.030877 jq[1939]: true
Mar 17 17:37:17.032885 ntpd[1928]: ntpd 4.2.8p17@1.4004-o Mon Mar 17 15:34:16 UTC 2025 (1): Starting
Mar 17 17:37:17.038716 ntpd[1928]: 17 Mar 17:37:17 ntpd[1928]: ntpd 4.2.8p17@1.4004-o Mon Mar 17 15:34:16 UTC 2025 (1): Starting
Mar 17 17:37:17.038716 ntpd[1928]: 17 Mar 17:37:17 ntpd[1928]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 17 17:37:17.038716 ntpd[1928]: 17 Mar 17:37:17 ntpd[1928]: ----------------------------------------------------
Mar 17 17:37:17.038716 ntpd[1928]: 17 Mar 17:37:17 ntpd[1928]: ntp-4 is maintained by Network Time Foundation,
Mar 17 17:37:17.038716 ntpd[1928]: 17 Mar 17:37:17 ntpd[1928]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 17 17:37:17.038716 ntpd[1928]: 17 Mar 17:37:17 ntpd[1928]: corporation. Support and training for ntp-4 are
Mar 17 17:37:17.038716 ntpd[1928]: 17 Mar 17:37:17 ntpd[1928]: available at https://www.nwtime.org/support
Mar 17 17:37:17.039190 extend-filesystems[1971]: resize2fs 1.47.1 (20-May-2024)
Mar 17 17:37:17.038120 ntpd[1928]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 17 17:37:17.044837 ntpd[1928]: 17 Mar 17:37:17 ntpd[1928]: ----------------------------------------------------
Mar 17 17:37:17.044837 ntpd[1928]: 17 Mar 17:37:17 ntpd[1928]: proto: precision = 0.096 usec (-23)
Mar 17 17:37:17.038140 ntpd[1928]: ----------------------------------------------------
Mar 17 17:37:17.038159 ntpd[1928]: ntp-4 is maintained by Network Time Foundation,
Mar 17 17:37:17.038177 ntpd[1928]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 17 17:37:17.038195 ntpd[1928]: corporation. Support and training for ntp-4 are
Mar 17 17:37:17.038213 ntpd[1928]: available at https://www.nwtime.org/support
Mar 17 17:37:17.038232 ntpd[1928]: ----------------------------------------------------
Mar 17 17:37:17.044154 ntpd[1928]: proto: precision = 0.096 usec (-23)
Mar 17 17:37:17.050284 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Mar 17 17:37:17.047766 ntpd[1928]: basedate set to 2025-03-05
Mar 17 17:37:17.050441 ntpd[1928]: 17 Mar 17:37:17 ntpd[1928]: basedate set to 2025-03-05
Mar 17 17:37:17.050441 ntpd[1928]: 17 Mar 17:37:17 ntpd[1928]: gps base set to 2025-03-09 (week 2357)
Mar 17 17:37:17.047810 ntpd[1928]: gps base set to 2025-03-09 (week 2357)
Mar 17 17:37:17.051682 ntpd[1928]: Listen and drop on 0 v6wildcard [::]:123
Mar 17 17:37:17.053430 ntpd[1928]: 17 Mar 17:37:17 ntpd[1928]: Listen and drop on 0 v6wildcard [::]:123
Mar 17 17:37:17.053430 ntpd[1928]: 17 Mar 17:37:17 ntpd[1928]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 17 17:37:17.053430 ntpd[1928]: 17 Mar 17:37:17 ntpd[1928]: Listen normally on 2 lo 127.0.0.1:123
Mar 17 17:37:17.053430 ntpd[1928]: 17 Mar 17:37:17 ntpd[1928]: Listen normally on 3 eth0 172.31.25.124:123
Mar 17 17:37:17.053430 ntpd[1928]: 17 Mar 17:37:17 ntpd[1928]: Listen normally on 4 lo [::1]:123
Mar 17 17:37:17.053430 ntpd[1928]: 17 Mar 17:37:17 ntpd[1928]: bind(21) AF_INET6 fe80::489:9cff:feb6:f84f%2#123 flags 0x11 failed: Cannot assign requested address
Mar 17 17:37:17.051766 ntpd[1928]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 17 17:37:17.052013 ntpd[1928]: Listen normally on 2 lo 127.0.0.1:123
Mar 17 17:37:17.052073 ntpd[1928]: Listen normally on 3 eth0 172.31.25.124:123
Mar 17 17:37:17.052143 ntpd[1928]: Listen normally on 4 lo [::1]:123
Mar 17 17:37:17.052214 ntpd[1928]: bind(21) AF_INET6 fe80::489:9cff:feb6:f84f%2#123 flags 0x11 failed: Cannot assign requested address
Mar 17 17:37:17.054037 ntpd[1928]: unable to create socket on eth0 (5) for fe80::489:9cff:feb6:f84f%2#123
Mar 17 17:37:17.054216 ntpd[1928]: 17 Mar 17:37:17 ntpd[1928]: unable to create socket on eth0 (5) for fe80::489:9cff:feb6:f84f%2#123
Mar 17 17:37:17.054455 ntpd[1928]: failed to init interface for address fe80::489:9cff:feb6:f84f%2
Mar 17 17:37:17.054571 ntpd[1928]: 17 Mar 17:37:17 ntpd[1928]: failed to init interface for address fe80::489:9cff:feb6:f84f%2
Mar 17 17:37:17.054742 ntpd[1928]: Listening on routing socket on fd #21 for interface updates
Mar 17 17:37:17.055379 ntpd[1928]: 17 Mar 17:37:17 ntpd[1928]: Listening on routing socket on fd #21 for interface updates
Mar 17 17:37:17.063600 (ntainerd)[1966]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 17 17:37:17.067460 jq[1972]: true
Mar 17 17:37:17.089178 ntpd[1928]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 17 17:37:17.089383 ntpd[1928]: 17 Mar 17:37:17 ntpd[1928]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 17 17:37:17.089383 ntpd[1928]: 17 Mar 17:37:17 ntpd[1928]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 17 17:37:17.089258 ntpd[1928]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 17 17:37:17.151117 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Mar 17 17:37:17.168929 systemd[1]: Finished setup-oem.service - Setup OEM.
Mar 17 17:37:17.173987 extend-filesystems[1971]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Mar 17 17:37:17.173987 extend-filesystems[1971]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 17 17:37:17.173987 extend-filesystems[1971]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Mar 17 17:37:17.200083 extend-filesystems[1926]: Resized filesystem in /dev/nvme0n1p9
Mar 17 17:37:17.180918 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 17 17:37:17.181925 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 17 17:37:17.214461 bash[2000]: Updated "/home/core/.ssh/authorized_keys"
Mar 17 17:37:17.220067 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 17 17:37:17.242329 coreos-metadata[1923]: Mar 17 17:37:17.242 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Mar 17 17:37:17.242329 coreos-metadata[1923]: Mar 17 17:37:17.242 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Mar 17 17:37:17.242329 coreos-metadata[1923]: Mar 17 17:37:17.242 INFO Fetch successful
Mar 17 17:37:17.242329 coreos-metadata[1923]: Mar 17 17:37:17.242 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Mar 17 17:37:17.293075 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1708)
Mar 17 17:37:17.289170 systemd[1]: Starting sshkeys.service...
Mar 17 17:37:17.293318 coreos-metadata[1923]: Mar 17 17:37:17.244 INFO Fetch successful
Mar 17 17:37:17.293318 coreos-metadata[1923]: Mar 17 17:37:17.244 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Mar 17 17:37:17.293318 coreos-metadata[1923]: Mar 17 17:37:17.252 INFO Fetch successful
Mar 17 17:37:17.293318 coreos-metadata[1923]: Mar 17 17:37:17.252 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Mar 17 17:37:17.293318 coreos-metadata[1923]: Mar 17 17:37:17.254 INFO Fetch successful
Mar 17 17:37:17.293318 coreos-metadata[1923]: Mar 17 17:37:17.254 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Mar 17 17:37:17.293318 coreos-metadata[1923]: Mar 17 17:37:17.256 INFO Fetch failed with 404: resource not found
Mar 17 17:37:17.293318 coreos-metadata[1923]: Mar 17 17:37:17.256 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Mar 17 17:37:17.293318 coreos-metadata[1923]: Mar 17 17:37:17.259 INFO Fetch successful
Mar 17 17:37:17.293318 coreos-metadata[1923]: Mar 17 17:37:17.259 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Mar 17 17:37:17.293318 coreos-metadata[1923]: Mar 17 17:37:17.266 INFO Fetch successful
Mar 17 17:37:17.293318 coreos-metadata[1923]: Mar 17 17:37:17.266 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Mar 17 17:37:17.293318 coreos-metadata[1923]: Mar 17 17:37:17.272 INFO Fetch successful
Mar 17 17:37:17.293318 coreos-metadata[1923]: Mar 17 17:37:17.272 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Mar 17 17:37:17.293318 coreos-metadata[1923]: Mar 17 17:37:17.274 INFO Fetch successful
Mar 17 17:37:17.293318 coreos-metadata[1923]: Mar 17 17:37:17.274 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Mar 17 17:37:17.293318 coreos-metadata[1923]: Mar 17 17:37:17.276 INFO Fetch successful
Mar 17 17:37:17.378619 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 17 17:37:17.389535 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Mar 17 17:37:17.403924 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Mar 17 17:37:17.433394 systemd-logind[1935]: Watching system buttons on /dev/input/event0 (Power Button)
Mar 17 17:37:17.434085 systemd-logind[1935]: Watching system buttons on /dev/input/event1 (Sleep Button)
Mar 17 17:37:17.436991 systemd-logind[1935]: New seat seat0.
Mar 17 17:37:17.440532 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 17 17:37:17.446909 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 17 17:37:17.456001 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 17 17:37:17.595631 locksmithd[1965]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 17 17:37:17.735511 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Mar 17 17:37:17.736118 dbus-daemon[1924]: [system] Successfully activated service 'org.freedesktop.hostname1'
Mar 17 17:37:17.748500 dbus-daemon[1924]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1962 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Mar 17 17:37:17.769137 systemd[1]: Starting polkit.service - Authorization Manager...
Mar 17 17:37:17.794096 polkitd[2094]: Started polkitd version 121
Mar 17 17:37:17.806006 polkitd[2094]: Loading rules from directory /etc/polkit-1/rules.d
Mar 17 17:37:17.806152 polkitd[2094]: Loading rules from directory /usr/share/polkit-1/rules.d
Mar 17 17:37:17.810346 systemd[1]: Started polkit.service - Authorization Manager.
Mar 17 17:37:17.809323 polkitd[2094]: Finished loading, compiling and executing 2 rules
Mar 17 17:37:17.810056 dbus-daemon[1924]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Mar 17 17:37:17.810947 polkitd[2094]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Mar 17 17:37:17.842897 coreos-metadata[2020]: Mar 17 17:37:17.842 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Mar 17 17:37:17.851962 coreos-metadata[2020]: Mar 17 17:37:17.845 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Mar 17 17:37:17.852616 coreos-metadata[2020]: Mar 17 17:37:17.852 INFO Fetch successful
Mar 17 17:37:17.852703 coreos-metadata[2020]: Mar 17 17:37:17.852 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Mar 17 17:37:17.855156 coreos-metadata[2020]: Mar 17 17:37:17.855 INFO Fetch successful
Mar 17 17:37:17.862488 unknown[2020]: wrote ssh authorized keys file for user: core
Mar 17 17:37:17.894864 systemd-hostnamed[1962]: Hostname set to (transient)
Mar 17 17:37:17.896817 systemd-resolved[1864]: System hostname changed to 'ip-172-31-25-124'.
Mar 17 17:37:17.953058 update-ssh-keys[2107]: Updated "/home/core/.ssh/authorized_keys"
Mar 17 17:37:17.966326 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Mar 17 17:37:17.985820 systemd[1]: Finished sshkeys.service.
Mar 17 17:37:18.005786 containerd[1966]: time="2025-03-17T17:37:18.005657781Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Mar 17 17:37:18.041501 ntpd[1928]: bind(24) AF_INET6 fe80::489:9cff:feb6:f84f%2#123 flags 0x11 failed: Cannot assign requested address
Mar 17 17:37:18.041571 ntpd[1928]: unable to create socket on eth0 (6) for fe80::489:9cff:feb6:f84f%2#123
Mar 17 17:37:18.042006 ntpd[1928]: 17 Mar 17:37:18 ntpd[1928]: bind(24) AF_INET6 fe80::489:9cff:feb6:f84f%2#123 flags 0x11 failed: Cannot assign requested address
Mar 17 17:37:18.042006 ntpd[1928]: 17 Mar 17:37:18 ntpd[1928]: unable to create socket on eth0 (6) for fe80::489:9cff:feb6:f84f%2#123
Mar 17 17:37:18.042006 ntpd[1928]: 17 Mar 17:37:18 ntpd[1928]: failed to init interface for address fe80::489:9cff:feb6:f84f%2
Mar 17 17:37:18.041600 ntpd[1928]: failed to init interface for address fe80::489:9cff:feb6:f84f%2
Mar 17 17:37:18.123266 containerd[1966]: time="2025-03-17T17:37:18.120869014Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:37:18.126546 containerd[1966]: time="2025-03-17T17:37:18.126470038Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:37:18.126546 containerd[1966]: time="2025-03-17T17:37:18.126538738Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 17 17:37:18.126719 containerd[1966]: time="2025-03-17T17:37:18.126575182Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 17 17:37:18.126912 containerd[1966]: time="2025-03-17T17:37:18.126871786Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 17 17:37:18.126967 containerd[1966]: time="2025-03-17T17:37:18.126919198Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 17 17:37:18.127087 containerd[1966]: time="2025-03-17T17:37:18.127045522Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:37:18.127150 containerd[1966]: time="2025-03-17T17:37:18.127083442Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:37:18.129561 containerd[1966]: time="2025-03-17T17:37:18.129502642Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:37:18.129561 containerd[1966]: time="2025-03-17T17:37:18.129556330Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 17 17:37:18.129715 containerd[1966]: time="2025-03-17T17:37:18.129591346Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:37:18.129715 containerd[1966]: time="2025-03-17T17:37:18.129614926Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 17 17:37:18.129880 containerd[1966]: time="2025-03-17T17:37:18.129840310Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:37:18.130312 containerd[1966]: time="2025-03-17T17:37:18.130271194Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:37:18.130577 containerd[1966]: time="2025-03-17T17:37:18.130532434Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:37:18.130631 containerd[1966]: time="2025-03-17T17:37:18.130575694Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 17 17:37:18.130784 containerd[1966]: time="2025-03-17T17:37:18.130747066Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 17 17:37:18.130893 containerd[1966]: time="2025-03-17T17:37:18.130856842Z" level=info msg="metadata content store policy set" policy=shared
Mar 17 17:37:18.141174 containerd[1966]: time="2025-03-17T17:37:18.141096574Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 17 17:37:18.141346 containerd[1966]: time="2025-03-17T17:37:18.141227362Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 17 17:37:18.141346 containerd[1966]: time="2025-03-17T17:37:18.141331522Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 17 17:37:18.141433 containerd[1966]: time="2025-03-17T17:37:18.141368542Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 17 17:37:18.141508 containerd[1966]: time="2025-03-17T17:37:18.141428698Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 17 17:37:18.143288 containerd[1966]: time="2025-03-17T17:37:18.142589698Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 17 17:37:18.143565 containerd[1966]: time="2025-03-17T17:37:18.143327050Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 17 17:37:18.143734 containerd[1966]: time="2025-03-17T17:37:18.143671534Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 17 17:37:18.143788 containerd[1966]: time="2025-03-17T17:37:18.143739706Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 17 17:37:18.143835 containerd[1966]: time="2025-03-17T17:37:18.143802838Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 17 17:37:18.143879 containerd[1966]: time="2025-03-17T17:37:18.143838934Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 17 17:37:18.143947 containerd[1966]: time="2025-03-17T17:37:18.143894554Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 17 17:37:18.143947 containerd[1966]: time="2025-03-17T17:37:18.143928586Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 17 17:37:18.146355 containerd[1966]: time="2025-03-17T17:37:18.146303074Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 17 17:37:18.146442 containerd[1966]: time="2025-03-17T17:37:18.146381818Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 17 17:37:18.146488 containerd[1966]: time="2025-03-17T17:37:18.146417278Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 17 17:37:18.146488 containerd[1966]: time="2025-03-17T17:37:18.146477290Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 17 17:37:18.146591 containerd[1966]: time="2025-03-17T17:37:18.146507146Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 17 17:37:18.146690 containerd[1966]: time="2025-03-17T17:37:18.146626306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 17 17:37:18.146744 containerd[1966]: time="2025-03-17T17:37:18.146696866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 17 17:37:18.146744 containerd[1966]: time="2025-03-17T17:37:18.146729626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 17 17:37:18.146848 containerd[1966]: time="2025-03-17T17:37:18.146772598Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 17 17:37:18.146848 containerd[1966]: time="2025-03-17T17:37:18.146820418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 17 17:37:18.146929 containerd[1966]: time="2025-03-17T17:37:18.146860006Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 17 17:37:18.146929 containerd[1966]: time="2025-03-17T17:37:18.146888674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 17 17:37:18.146929 containerd[1966]: time="2025-03-17T17:37:18.146918302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 17 17:37:18.147054 containerd[1966]: time="2025-03-17T17:37:18.146948554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 17 17:37:18.147054 containerd[1966]: time="2025-03-17T17:37:18.146983246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 17 17:37:18.147054 containerd[1966]: time="2025-03-17T17:37:18.147010942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 17 17:37:18.147054 containerd[1966]: time="2025-03-17T17:37:18.147039190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 17 17:37:18.147204 containerd[1966]: time="2025-03-17T17:37:18.147068398Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 17 17:37:18.147204 containerd[1966]: time="2025-03-17T17:37:18.147099202Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 17 17:37:18.147204 containerd[1966]: time="2025-03-17T17:37:18.147147826Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 17 17:37:18.147204 containerd[1966]: time="2025-03-17T17:37:18.147179758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 17 17:37:18.147420 containerd[1966]: time="2025-03-17T17:37:18.147205978Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 17 17:37:18.147420 containerd[1966]: time="2025-03-17T17:37:18.147381358Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 17 17:37:18.147504 containerd[1966]: time="2025-03-17T17:37:18.147420022Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 17 17:37:18.147504 containerd[1966]: time="2025-03-17T17:37:18.147446662Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 17 17:37:18.147504 containerd[1966]: time="2025-03-17T17:37:18.147475210Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 17 17:37:18.147504 containerd[1966]: time="2025-03-17T17:37:18.147497230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 17 17:37:18.147683 containerd[1966]: time="2025-03-17T17:37:18.147524830Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 17 17:37:18.147683 containerd[1966]: time="2025-03-17T17:37:18.147548110Z" level=info msg="NRI interface is disabled by configuration."
Mar 17 17:37:18.147683 containerd[1966]: time="2025-03-17T17:37:18.147572494Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 17 17:37:18.150285 containerd[1966]: time="2025-03-17T17:37:18.148082746Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 17:37:18.150285 containerd[1966]: time="2025-03-17T17:37:18.148179238Z" level=info msg="Connect containerd service" Mar 17 17:37:18.150285 containerd[1966]: time="2025-03-17T17:37:18.148804270Z" level=info msg="using legacy CRI server" Mar 17 17:37:18.150285 containerd[1966]: time="2025-03-17T17:37:18.148825426Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 17 17:37:18.150285 containerd[1966]: time="2025-03-17T17:37:18.149161846Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 17:37:18.150690 containerd[1966]: time="2025-03-17T17:37:18.150451402Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 17:37:18.150690 containerd[1966]: time="2025-03-17T17:37:18.150650350Z" level=info msg="Start subscribing containerd event" Mar 17 17:37:18.150805 containerd[1966]: time="2025-03-17T17:37:18.150714358Z" level=info msg="Start recovering state" Mar 17 17:37:18.150851 containerd[1966]: time="2025-03-17T17:37:18.150827242Z" level=info msg="Start event monitor" Mar 17 17:37:18.150896 containerd[1966]: time="2025-03-17T17:37:18.150850054Z" level=info msg="Start 
snapshots syncer" Mar 17 17:37:18.150896 containerd[1966]: time="2025-03-17T17:37:18.150874846Z" level=info msg="Start cni network conf syncer for default" Mar 17 17:37:18.150994 containerd[1966]: time="2025-03-17T17:37:18.150893782Z" level=info msg="Start streaming server" Mar 17 17:37:18.160276 containerd[1966]: time="2025-03-17T17:37:18.153386926Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 17 17:37:18.160276 containerd[1966]: time="2025-03-17T17:37:18.153540778Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 17 17:37:18.160276 containerd[1966]: time="2025-03-17T17:37:18.153670582Z" level=info msg="containerd successfully booted in 0.153615s" Mar 17 17:37:18.154430 systemd[1]: Started containerd.service - containerd container runtime. Mar 17 17:37:18.226432 systemd-networkd[1861]: eth0: Gained IPv6LL Mar 17 17:37:18.231993 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 17 17:37:18.237942 systemd[1]: Reached target network-online.target - Network is Online. Mar 17 17:37:18.253832 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Mar 17 17:37:18.261181 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:37:18.276688 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 17 17:37:18.380362 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 17 17:37:18.405417 amazon-ssm-agent[2129]: Initializing new seelog logger Mar 17 17:37:18.406704 amazon-ssm-agent[2129]: New Seelog Logger Creation Complete Mar 17 17:37:18.406704 amazon-ssm-agent[2129]: 2025/03/17 17:37:18 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 17 17:37:18.406704 amazon-ssm-agent[2129]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Mar 17 17:37:18.411735 amazon-ssm-agent[2129]: 2025/03/17 17:37:18 processing appconfig overrides Mar 17 17:37:18.411735 amazon-ssm-agent[2129]: 2025/03/17 17:37:18 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 17 17:37:18.411735 amazon-ssm-agent[2129]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 17 17:37:18.411735 amazon-ssm-agent[2129]: 2025/03/17 17:37:18 processing appconfig overrides Mar 17 17:37:18.411735 amazon-ssm-agent[2129]: 2025/03/17 17:37:18 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 17 17:37:18.411735 amazon-ssm-agent[2129]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 17 17:37:18.414153 amazon-ssm-agent[2129]: 2025/03/17 17:37:18 processing appconfig overrides Mar 17 17:37:18.414153 amazon-ssm-agent[2129]: 2025-03-17 17:37:18 INFO Proxy environment variables: Mar 17 17:37:18.417547 amazon-ssm-agent[2129]: 2025/03/17 17:37:18 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 17 17:37:18.417547 amazon-ssm-agent[2129]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 17 17:37:18.417721 amazon-ssm-agent[2129]: 2025/03/17 17:37:18 processing appconfig overrides Mar 17 17:37:18.515479 amazon-ssm-agent[2129]: 2025-03-17 17:37:18 INFO no_proxy: Mar 17 17:37:18.617126 amazon-ssm-agent[2129]: 2025-03-17 17:37:18 INFO https_proxy: Mar 17 17:37:18.721402 amazon-ssm-agent[2129]: 2025-03-17 17:37:18 INFO http_proxy: Mar 17 17:37:18.819050 amazon-ssm-agent[2129]: 2025-03-17 17:37:18 INFO Checking if agent identity type OnPrem can be assumed Mar 17 17:37:18.918262 amazon-ssm-agent[2129]: 2025-03-17 17:37:18 INFO Checking if agent identity type EC2 can be assumed Mar 17 17:37:18.955836 tar[1942]: linux-arm64/README.md Mar 17 17:37:18.992391 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Mar 17 17:37:19.017314 amazon-ssm-agent[2129]: 2025-03-17 17:37:18 INFO Agent will take identity from EC2 Mar 17 17:37:19.116196 amazon-ssm-agent[2129]: 2025-03-17 17:37:18 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 17 17:37:19.215698 amazon-ssm-agent[2129]: 2025-03-17 17:37:18 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 17 17:37:19.315825 amazon-ssm-agent[2129]: 2025-03-17 17:37:18 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 17 17:37:19.415096 amazon-ssm-agent[2129]: 2025-03-17 17:37:18 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Mar 17 17:37:19.515825 amazon-ssm-agent[2129]: 2025-03-17 17:37:18 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Mar 17 17:37:19.596148 amazon-ssm-agent[2129]: 2025-03-17 17:37:18 INFO [amazon-ssm-agent] Starting Core Agent Mar 17 17:37:19.596148 amazon-ssm-agent[2129]: 2025-03-17 17:37:18 INFO [amazon-ssm-agent] registrar detected. Attempting registration Mar 17 17:37:19.596148 amazon-ssm-agent[2129]: 2025-03-17 17:37:18 INFO [Registrar] Starting registrar module Mar 17 17:37:19.596148 amazon-ssm-agent[2129]: 2025-03-17 17:37:18 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Mar 17 17:37:19.596148 amazon-ssm-agent[2129]: 2025-03-17 17:37:19 INFO [EC2Identity] EC2 registration was successful. 
Mar 17 17:37:19.596148 amazon-ssm-agent[2129]: 2025-03-17 17:37:19 INFO [CredentialRefresher] credentialRefresher has started Mar 17 17:37:19.596148 amazon-ssm-agent[2129]: 2025-03-17 17:37:19 INFO [CredentialRefresher] Starting credentials refresher loop Mar 17 17:37:19.596148 amazon-ssm-agent[2129]: 2025-03-17 17:37:19 INFO EC2RoleProvider Successfully connected with instance profile role credentials Mar 17 17:37:19.614834 amazon-ssm-agent[2129]: 2025-03-17 17:37:19 INFO [CredentialRefresher] Next credential rotation will be in 32.19165712256667 minutes Mar 17 17:37:20.052261 sshd_keygen[1959]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 17 17:37:20.093693 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 17 17:37:20.106747 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 17 17:37:20.116792 systemd[1]: Started sshd@0-172.31.25.124:22-147.75.109.163:45376.service - OpenSSH per-connection server daemon (147.75.109.163:45376). Mar 17 17:37:20.134034 systemd[1]: issuegen.service: Deactivated successfully. Mar 17 17:37:20.134877 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 17 17:37:20.150661 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 17 17:37:20.178898 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 17 17:37:20.190876 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 17 17:37:20.203590 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 17 17:37:20.206065 systemd[1]: Reached target getty.target - Login Prompts. Mar 17 17:37:20.433847 sshd[2160]: Accepted publickey for core from 147.75.109.163 port 45376 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4 Mar 17 17:37:20.437482 sshd-session[2160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:37:20.450974 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Mar 17 17:37:20.467834 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 17 17:37:20.491574 systemd-logind[1935]: New session 1 of user core. Mar 17 17:37:20.502551 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 17 17:37:20.513808 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 17 17:37:20.534133 (systemd)[2172]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 17:37:20.540301 systemd-logind[1935]: New session c1 of user core. Mar 17 17:37:20.648603 amazon-ssm-agent[2129]: 2025-03-17 17:37:20 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Mar 17 17:37:20.658634 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:37:20.662773 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 17 17:37:20.676861 (kubelet)[2184]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:37:20.749944 amazon-ssm-agent[2129]: 2025-03-17 17:37:20 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2185) started Mar 17 17:37:20.849617 amazon-ssm-agent[2129]: 2025-03-17 17:37:20 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Mar 17 17:37:20.891677 systemd[2172]: Queued start job for default target default.target. Mar 17 17:37:20.898495 systemd[2172]: Created slice app.slice - User Application Slice. Mar 17 17:37:20.898567 systemd[2172]: Reached target paths.target - Paths. Mar 17 17:37:20.898656 systemd[2172]: Reached target timers.target - Timers. Mar 17 17:37:20.910591 systemd[2172]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Mar 17 17:37:20.930316 systemd[2172]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 17 17:37:20.932448 systemd[2172]: Reached target sockets.target - Sockets. Mar 17 17:37:20.932541 systemd[2172]: Reached target basic.target - Basic System. Mar 17 17:37:20.932654 systemd[2172]: Reached target default.target - Main User Target. Mar 17 17:37:20.932717 systemd[2172]: Startup finished in 376ms. Mar 17 17:37:20.932982 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 17 17:37:20.945709 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 17 17:37:20.950918 systemd[1]: Startup finished in 1.074s (kernel) + 9.570s (initrd) + 9.647s (userspace) = 20.291s. Mar 17 17:37:21.040174 ntpd[1928]: Listen normally on 7 eth0 [fe80::489:9cff:feb6:f84f%2]:123 Mar 17 17:37:21.040726 ntpd[1928]: 17 Mar 17:37:21 ntpd[1928]: Listen normally on 7 eth0 [fe80::489:9cff:feb6:f84f%2]:123 Mar 17 17:37:21.121765 systemd[1]: Started sshd@1-172.31.25.124:22-147.75.109.163:45392.service - OpenSSH per-connection server daemon (147.75.109.163:45392). Mar 17 17:37:21.307324 sshd[2208]: Accepted publickey for core from 147.75.109.163 port 45392 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4 Mar 17 17:37:21.310835 sshd-session[2208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:37:21.321800 systemd-logind[1935]: New session 2 of user core. Mar 17 17:37:21.328522 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 17 17:37:21.458178 sshd[2210]: Connection closed by 147.75.109.163 port 45392 Mar 17 17:37:21.458992 sshd-session[2208]: pam_unix(sshd:session): session closed for user core Mar 17 17:37:21.465048 systemd[1]: sshd@1-172.31.25.124:22-147.75.109.163:45392.service: Deactivated successfully. Mar 17 17:37:21.471659 systemd[1]: session-2.scope: Deactivated successfully. Mar 17 17:37:21.475864 systemd-logind[1935]: Session 2 logged out. Waiting for processes to exit. 
Mar 17 17:37:21.480651 systemd-logind[1935]: Removed session 2. Mar 17 17:37:21.503859 systemd[1]: Started sshd@2-172.31.25.124:22-147.75.109.163:45402.service - OpenSSH per-connection server daemon (147.75.109.163:45402). Mar 17 17:37:21.694813 sshd[2216]: Accepted publickey for core from 147.75.109.163 port 45402 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4 Mar 17 17:37:21.697747 sshd-session[2216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:37:21.707768 systemd-logind[1935]: New session 3 of user core. Mar 17 17:37:21.719567 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 17 17:37:21.839585 sshd[2219]: Connection closed by 147.75.109.163 port 45402 Mar 17 17:37:21.841942 sshd-session[2216]: pam_unix(sshd:session): session closed for user core Mar 17 17:37:21.847948 systemd[1]: sshd@2-172.31.25.124:22-147.75.109.163:45402.service: Deactivated successfully. Mar 17 17:37:21.852891 systemd[1]: session-3.scope: Deactivated successfully. Mar 17 17:37:21.854794 systemd-logind[1935]: Session 3 logged out. Waiting for processes to exit. Mar 17 17:37:21.857584 systemd-logind[1935]: Removed session 3. Mar 17 17:37:21.882890 systemd[1]: Started sshd@3-172.31.25.124:22-147.75.109.163:45414.service - OpenSSH per-connection server daemon (147.75.109.163:45414). Mar 17 17:37:21.907398 kubelet[2184]: E0317 17:37:21.907339 2184 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:37:21.912980 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:37:21.914388 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 17 17:37:21.915129 systemd[1]: kubelet.service: Consumed 1.318s CPU time, 249.7M memory peak. Mar 17 17:37:22.082710 sshd[2225]: Accepted publickey for core from 147.75.109.163 port 45414 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4 Mar 17 17:37:22.085107 sshd-session[2225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:37:22.093343 systemd-logind[1935]: New session 4 of user core. Mar 17 17:37:22.104528 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 17 17:37:22.233281 sshd[2229]: Connection closed by 147.75.109.163 port 45414 Mar 17 17:37:22.233419 sshd-session[2225]: pam_unix(sshd:session): session closed for user core Mar 17 17:37:22.239694 systemd[1]: sshd@3-172.31.25.124:22-147.75.109.163:45414.service: Deactivated successfully. Mar 17 17:37:22.244490 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 17:37:22.246332 systemd-logind[1935]: Session 4 logged out. Waiting for processes to exit. Mar 17 17:37:22.248799 systemd-logind[1935]: Removed session 4. Mar 17 17:37:22.273770 systemd[1]: Started sshd@4-172.31.25.124:22-147.75.109.163:45420.service - OpenSSH per-connection server daemon (147.75.109.163:45420). Mar 17 17:37:22.463352 sshd[2235]: Accepted publickey for core from 147.75.109.163 port 45420 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4 Mar 17 17:37:22.465974 sshd-session[2235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:37:22.476658 systemd-logind[1935]: New session 5 of user core. Mar 17 17:37:22.482551 systemd[1]: Started session-5.scope - Session 5 of User core. 
Mar 17 17:37:22.623315 sudo[2238]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 17 17:37:22.623950 sudo[2238]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:37:22.639925 sudo[2238]: pam_unix(sudo:session): session closed for user root Mar 17 17:37:22.663503 sshd[2237]: Connection closed by 147.75.109.163 port 45420 Mar 17 17:37:22.664664 sshd-session[2235]: pam_unix(sshd:session): session closed for user core Mar 17 17:37:22.671534 systemd[1]: sshd@4-172.31.25.124:22-147.75.109.163:45420.service: Deactivated successfully. Mar 17 17:37:22.674788 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 17:37:22.676539 systemd-logind[1935]: Session 5 logged out. Waiting for processes to exit. Mar 17 17:37:22.678629 systemd-logind[1935]: Removed session 5. Mar 17 17:37:22.713725 systemd[1]: Started sshd@5-172.31.25.124:22-147.75.109.163:45432.service - OpenSSH per-connection server daemon (147.75.109.163:45432). Mar 17 17:37:22.891283 sshd[2244]: Accepted publickey for core from 147.75.109.163 port 45432 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4 Mar 17 17:37:22.894196 sshd-session[2244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:37:22.902840 systemd-logind[1935]: New session 6 of user core. Mar 17 17:37:22.909509 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 17 17:37:23.014625 sudo[2248]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 17 17:37:23.015789 sudo[2248]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:37:23.022350 sudo[2248]: pam_unix(sudo:session): session closed for user root Mar 17 17:37:23.033164 sudo[2247]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 17 17:37:23.033843 sudo[2247]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:37:23.057805 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 17:37:23.105873 augenrules[2270]: No rules Mar 17 17:37:23.108532 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:37:23.110365 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:37:23.114399 sudo[2247]: pam_unix(sudo:session): session closed for user root Mar 17 17:37:23.137513 sshd[2246]: Connection closed by 147.75.109.163 port 45432 Mar 17 17:37:23.138558 sshd-session[2244]: pam_unix(sshd:session): session closed for user core Mar 17 17:37:23.144310 systemd-logind[1935]: Session 6 logged out. Waiting for processes to exit. Mar 17 17:37:23.144518 systemd[1]: sshd@5-172.31.25.124:22-147.75.109.163:45432.service: Deactivated successfully. Mar 17 17:37:23.148137 systemd[1]: session-6.scope: Deactivated successfully. Mar 17 17:37:23.152694 systemd-logind[1935]: Removed session 6. Mar 17 17:37:23.178742 systemd[1]: Started sshd@6-172.31.25.124:22-147.75.109.163:45438.service - OpenSSH per-connection server daemon (147.75.109.163:45438). 
Mar 17 17:37:23.358874 sshd[2279]: Accepted publickey for core from 147.75.109.163 port 45438 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4 Mar 17 17:37:23.361491 sshd-session[2279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:37:23.370526 systemd-logind[1935]: New session 7 of user core. Mar 17 17:37:23.377538 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 17 17:37:23.482150 sudo[2282]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 17:37:23.483419 sudo[2282]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:37:24.266868 (dockerd)[2300]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 17 17:37:24.267962 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 17 17:37:24.729394 dockerd[2300]: time="2025-03-17T17:37:24.728980207Z" level=info msg="Starting up" Mar 17 17:37:24.909689 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3372588767-merged.mount: Deactivated successfully. Mar 17 17:37:25.059338 dockerd[2300]: time="2025-03-17T17:37:25.058917878Z" level=info msg="Loading containers: start." Mar 17 17:37:25.328329 kernel: Initializing XFRM netlink socket Mar 17 17:37:25.375421 (udev-worker)[2325]: Network interface NamePolicy= disabled on kernel command line. Mar 17 17:37:25.467656 systemd-networkd[1861]: docker0: Link UP Mar 17 17:37:25.508636 dockerd[2300]: time="2025-03-17T17:37:25.508585466Z" level=info msg="Loading containers: done." 
Mar 17 17:37:25.539382 dockerd[2300]: time="2025-03-17T17:37:25.539326818Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 17 17:37:25.540198 dockerd[2300]: time="2025-03-17T17:37:25.539832931Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Mar 17 17:37:25.540198 dockerd[2300]: time="2025-03-17T17:37:25.540064562Z" level=info msg="Daemon has completed initialization" Mar 17 17:37:25.601552 dockerd[2300]: time="2025-03-17T17:37:25.600876309Z" level=info msg="API listen on /run/docker.sock" Mar 17 17:37:25.600997 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 17 17:37:25.901293 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3391841869-merged.mount: Deactivated successfully. Mar 17 17:37:26.738724 containerd[1966]: time="2025-03-17T17:37:26.738670590Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.3\"" Mar 17 17:37:27.454192 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount939295932.mount: Deactivated successfully. 
Mar 17 17:37:29.415229 containerd[1966]: time="2025-03-17T17:37:29.415148559Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:37:29.417289 containerd[1966]: time="2025-03-17T17:37:29.417184308Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.3: active requests=0, bytes read=26231950" Mar 17 17:37:29.418506 containerd[1966]: time="2025-03-17T17:37:29.418446523Z" level=info msg="ImageCreate event name:\"sha256:25dd33975ea35cef2fa9b105778dbe3369de267e9ddf81427b7b82e98ff374e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:37:29.424588 containerd[1966]: time="2025-03-17T17:37:29.424527346Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:279e45cf07e4f56925c3c5237179eb63616788426a96e94df5fedf728b18926e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:37:29.427179 containerd[1966]: time="2025-03-17T17:37:29.426937238Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.3\" with image id \"sha256:25dd33975ea35cef2fa9b105778dbe3369de267e9ddf81427b7b82e98ff374e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:279e45cf07e4f56925c3c5237179eb63616788426a96e94df5fedf728b18926e\", size \"26228750\" in 2.688207243s" Mar 17 17:37:29.427179 containerd[1966]: time="2025-03-17T17:37:29.426993138Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.3\" returns image reference \"sha256:25dd33975ea35cef2fa9b105778dbe3369de267e9ddf81427b7b82e98ff374e5\"" Mar 17 17:37:29.428809 containerd[1966]: time="2025-03-17T17:37:29.428496530Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.3\"" Mar 17 17:37:31.513868 containerd[1966]: time="2025-03-17T17:37:31.513812657Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.3\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:37:31.516031 containerd[1966]: time="2025-03-17T17:37:31.515900417Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.3: active requests=0, bytes read=22530032" Mar 17 17:37:31.517208 containerd[1966]: time="2025-03-17T17:37:31.516577963Z" level=info msg="ImageCreate event name:\"sha256:9e29b4db8c5cdf9970961ed3a47137ea71ad067643b8e5cccb58085f22a9b315\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:37:31.522306 containerd[1966]: time="2025-03-17T17:37:31.522199028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:54456a96a1bbdc35dcc2e70fcc1355bf655af67694e40b650ac12e83521f6411\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:37:31.524759 containerd[1966]: time="2025-03-17T17:37:31.524568088Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.3\" with image id \"sha256:9e29b4db8c5cdf9970961ed3a47137ea71ad067643b8e5cccb58085f22a9b315\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:54456a96a1bbdc35dcc2e70fcc1355bf655af67694e40b650ac12e83521f6411\", size \"23970828\" in 2.096013258s" Mar 17 17:37:31.524759 containerd[1966]: time="2025-03-17T17:37:31.524622019Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.3\" returns image reference \"sha256:9e29b4db8c5cdf9970961ed3a47137ea71ad067643b8e5cccb58085f22a9b315\"" Mar 17 17:37:31.525287 containerd[1966]: time="2025-03-17T17:37:31.525222824Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.3\"" Mar 17 17:37:32.027312 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 17 17:37:32.039583 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:37:32.365297 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 17 17:37:32.380995 (kubelet)[2557]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:37:32.464122 kubelet[2557]: E0317 17:37:32.464023 2557 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:37:32.470186 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:37:32.470541 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:37:32.471396 systemd[1]: kubelet.service: Consumed 321ms CPU time, 100.4M memory peak. Mar 17 17:37:33.116886 containerd[1966]: time="2025-03-17T17:37:33.116793182Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:37:33.118986 containerd[1966]: time="2025-03-17T17:37:33.118905662Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.3: active requests=0, bytes read=17482561" Mar 17 17:37:33.119830 containerd[1966]: time="2025-03-17T17:37:33.119348299Z" level=info msg="ImageCreate event name:\"sha256:6b8dfebcc65dc9d4765a91d2923c304e13beca7111c57dfc99f1c3267a6e9f30\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:37:33.124957 containerd[1966]: time="2025-03-17T17:37:33.124875033Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:aafae2e3a8d65bc6dc3a0c6095c24bc72b1ff608e1417f0f5e860ce4a61c27df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:37:33.127405 containerd[1966]: time="2025-03-17T17:37:33.127199335Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.3\" with image id 
\"sha256:6b8dfebcc65dc9d4765a91d2923c304e13beca7111c57dfc99f1c3267a6e9f30\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:aafae2e3a8d65bc6dc3a0c6095c24bc72b1ff608e1417f0f5e860ce4a61c27df\", size \"18923375\" in 1.601779625s" Mar 17 17:37:33.127405 containerd[1966]: time="2025-03-17T17:37:33.127270374Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.3\" returns image reference \"sha256:6b8dfebcc65dc9d4765a91d2923c304e13beca7111c57dfc99f1c3267a6e9f30\"" Mar 17 17:37:33.128188 containerd[1966]: time="2025-03-17T17:37:33.128148721Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\"" Mar 17 17:37:34.455752 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount359606033.mount: Deactivated successfully. Mar 17 17:37:35.002310 containerd[1966]: time="2025-03-17T17:37:35.001584628Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:37:35.003544 containerd[1966]: time="2025-03-17T17:37:35.003460950Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.3: active requests=0, bytes read=27370095" Mar 17 17:37:35.005057 containerd[1966]: time="2025-03-17T17:37:35.004983719Z" level=info msg="ImageCreate event name:\"sha256:2a637602f3e88e76046aa1a75bccdb37b25b2fcba99a380412e2c27ccd55c547\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:37:35.008539 containerd[1966]: time="2025-03-17T17:37:35.008442935Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:37:35.010317 containerd[1966]: time="2025-03-17T17:37:35.009965752Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.3\" with image id \"sha256:2a637602f3e88e76046aa1a75bccdb37b25b2fcba99a380412e2c27ccd55c547\", repo tag 
\"registry.k8s.io/kube-proxy:v1.32.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3\", size \"27369114\" in 1.881592963s" Mar 17 17:37:35.010317 containerd[1966]: time="2025-03-17T17:37:35.010017786Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\" returns image reference \"sha256:2a637602f3e88e76046aa1a75bccdb37b25b2fcba99a380412e2c27ccd55c547\"" Mar 17 17:37:35.011006 containerd[1966]: time="2025-03-17T17:37:35.010967317Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Mar 17 17:37:35.631689 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3431219541.mount: Deactivated successfully. Mar 17 17:37:36.826740 containerd[1966]: time="2025-03-17T17:37:36.826493459Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:37:36.828717 containerd[1966]: time="2025-03-17T17:37:36.828630815Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Mar 17 17:37:36.829126 containerd[1966]: time="2025-03-17T17:37:36.829053931Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:37:36.834972 containerd[1966]: time="2025-03-17T17:37:36.834879770Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:37:36.837463 containerd[1966]: time="2025-03-17T17:37:36.837417442Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.82611834s" Mar 17 17:37:36.837748 containerd[1966]: time="2025-03-17T17:37:36.837605348Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Mar 17 17:37:36.838928 containerd[1966]: time="2025-03-17T17:37:36.838651851Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 17 17:37:37.377632 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2944622365.mount: Deactivated successfully. Mar 17 17:37:37.384668 containerd[1966]: time="2025-03-17T17:37:37.384381848Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:37:37.385919 containerd[1966]: time="2025-03-17T17:37:37.385854912Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Mar 17 17:37:37.386842 containerd[1966]: time="2025-03-17T17:37:37.386758928Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:37:37.390987 containerd[1966]: time="2025-03-17T17:37:37.390888451Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:37:37.393842 containerd[1966]: time="2025-03-17T17:37:37.392697444Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 553.992117ms" Mar 17 
17:37:37.393842 containerd[1966]: time="2025-03-17T17:37:37.392748373Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Mar 17 17:37:37.393842 containerd[1966]: time="2025-03-17T17:37:37.393630082Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Mar 17 17:37:38.051607 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount792120644.mount: Deactivated successfully. Mar 17 17:37:41.914287 containerd[1966]: time="2025-03-17T17:37:41.912715130Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:37:41.915829 containerd[1966]: time="2025-03-17T17:37:41.915765519Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812429" Mar 17 17:37:41.918022 containerd[1966]: time="2025-03-17T17:37:41.917982391Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:37:41.924592 containerd[1966]: time="2025-03-17T17:37:41.924540464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:37:41.927151 containerd[1966]: time="2025-03-17T17:37:41.927101824Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 4.533428881s" Mar 17 17:37:41.927347 containerd[1966]: time="2025-03-17T17:37:41.927318653Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image 
reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Mar 17 17:37:42.527346 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 17 17:37:42.537414 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:37:42.881890 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:37:42.894008 (kubelet)[2715]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:37:42.971122 kubelet[2715]: E0317 17:37:42.971059 2715 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:37:42.977867 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:37:42.978492 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:37:42.980425 systemd[1]: kubelet.service: Consumed 284ms CPU time, 102.1M memory peak. Mar 17 17:37:47.929959 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Mar 17 17:37:50.171613 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:37:50.172540 systemd[1]: kubelet.service: Consumed 284ms CPU time, 102.1M memory peak. Mar 17 17:37:50.180755 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:37:50.234565 systemd[1]: Reload requested from client PID 2732 ('systemctl') (unit session-7.scope)... Mar 17 17:37:50.234599 systemd[1]: Reloading... Mar 17 17:37:50.493277 zram_generator::config[2780]: No configuration found. 
Mar 17 17:37:50.733521 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:37:50.957617 systemd[1]: Reloading finished in 722 ms. Mar 17 17:37:51.042580 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:37:51.057079 (kubelet)[2830]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:37:51.061431 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:37:51.063323 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 17:37:51.065404 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:37:51.065653 systemd[1]: kubelet.service: Consumed 228ms CPU time, 90.1M memory peak. Mar 17 17:37:51.076739 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:37:51.406560 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:37:51.423821 (kubelet)[2843]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:37:51.521687 kubelet[2843]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:37:51.521687 kubelet[2843]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 17 17:37:51.521687 kubelet[2843]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:37:51.522221 kubelet[2843]: I0317 17:37:51.521797 2843 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 17:37:53.337271 kubelet[2843]: I0317 17:37:53.336060 2843 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Mar 17 17:37:53.337271 kubelet[2843]: I0317 17:37:53.336109 2843 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 17:37:53.337271 kubelet[2843]: I0317 17:37:53.336574 2843 server.go:954] "Client rotation is on, will bootstrap in background" Mar 17 17:37:53.380076 kubelet[2843]: E0317 17:37:53.380011 2843 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.25.124:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.25.124:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:37:53.383861 kubelet[2843]: I0317 17:37:53.383817 2843 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 17:37:53.396421 kubelet[2843]: E0317 17:37:53.396371 2843 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 17 17:37:53.396717 kubelet[2843]: I0317 17:37:53.396693 2843 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 17 17:37:53.402639 kubelet[2843]: I0317 17:37:53.402588 2843 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 17:37:53.404788 kubelet[2843]: I0317 17:37:53.404734 2843 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 17:37:53.405195 kubelet[2843]: I0317 17:37:53.404913 2843 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-25-124","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 17 17:37:53.405491 kubelet[2843]: I0317 17:37:53.405468 2843 topology_manager.go:138] "Creating topology manager with none 
policy" Mar 17 17:37:53.405591 kubelet[2843]: I0317 17:37:53.405574 2843 container_manager_linux.go:304] "Creating device plugin manager" Mar 17 17:37:53.406171 kubelet[2843]: I0317 17:37:53.405880 2843 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:37:53.411427 kubelet[2843]: I0317 17:37:53.411391 2843 kubelet.go:446] "Attempting to sync node with API server" Mar 17 17:37:53.411598 kubelet[2843]: I0317 17:37:53.411576 2843 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 17:37:53.411709 kubelet[2843]: I0317 17:37:53.411690 2843 kubelet.go:352] "Adding apiserver pod source" Mar 17 17:37:53.411831 kubelet[2843]: I0317 17:37:53.411810 2843 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 17:37:53.414599 kubelet[2843]: W0317 17:37:53.414519 2843 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.25.124:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-124&limit=500&resourceVersion=0": dial tcp 172.31.25.124:6443: connect: connection refused Mar 17 17:37:53.414763 kubelet[2843]: E0317 17:37:53.414635 2843 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.25.124:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-124&limit=500&resourceVersion=0\": dial tcp 172.31.25.124:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:37:53.416396 kubelet[2843]: W0317 17:37:53.416114 2843 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.25.124:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.25.124:6443: connect: connection refused Mar 17 17:37:53.416396 kubelet[2843]: E0317 17:37:53.416211 2843 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.Service: failed to list *v1.Service: Get \"https://172.31.25.124:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.25.124:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:37:53.419283 kubelet[2843]: I0317 17:37:53.417150 2843 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 17:37:53.419283 kubelet[2843]: I0317 17:37:53.417938 2843 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 17:37:53.419283 kubelet[2843]: W0317 17:37:53.418047 2843 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 17 17:37:53.420042 kubelet[2843]: I0317 17:37:53.420007 2843 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 17 17:37:53.420198 kubelet[2843]: I0317 17:37:53.420178 2843 server.go:1287] "Started kubelet" Mar 17 17:37:53.427758 kubelet[2843]: E0317 17:37:53.427509 2843 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.25.124:6443/api/v1/namespaces/default/events\": dial tcp 172.31.25.124:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-25-124.182da7bfe12510b6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-25-124,UID:ip-172-31-25-124,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-25-124,},FirstTimestamp:2025-03-17 17:37:53.420144822 +0000 UTC m=+1.988547444,LastTimestamp:2025-03-17 17:37:53.420144822 +0000 UTC m=+1.988547444,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-25-124,}" Mar 17 17:37:53.428007 kubelet[2843]: I0317 17:37:53.427926 2843 
ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 17:37:53.428745 kubelet[2843]: I0317 17:37:53.428694 2843 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 17:37:53.428848 kubelet[2843]: I0317 17:37:53.428803 2843 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 17:37:53.430391 kubelet[2843]: I0317 17:37:53.430356 2843 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 17:37:53.430835 kubelet[2843]: I0317 17:37:53.430795 2843 server.go:490] "Adding debug handlers to kubelet server" Mar 17 17:37:53.433273 kubelet[2843]: I0317 17:37:53.433197 2843 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 17 17:37:53.436917 kubelet[2843]: E0317 17:37:53.436860 2843 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-25-124\" not found" Mar 17 17:37:53.437067 kubelet[2843]: I0317 17:37:53.436933 2843 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 17 17:37:53.437323 kubelet[2843]: I0317 17:37:53.437285 2843 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 17:37:53.437394 kubelet[2843]: I0317 17:37:53.437385 2843 reconciler.go:26] "Reconciler: start to sync state" Mar 17 17:37:53.438020 kubelet[2843]: W0317 17:37:53.437931 2843 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.25.124:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.25.124:6443: connect: connection refused Mar 17 17:37:53.438158 kubelet[2843]: E0317 17:37:53.438024 2843 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://172.31.25.124:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.25.124:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:37:53.439086 kubelet[2843]: E0317 17:37:53.439003 2843 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-124?timeout=10s\": dial tcp 172.31.25.124:6443: connect: connection refused" interval="200ms" Mar 17 17:37:53.439943 kubelet[2843]: E0317 17:37:53.439896 2843 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 17:37:53.442141 kubelet[2843]: I0317 17:37:53.442094 2843 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 17:37:53.447205 kubelet[2843]: I0317 17:37:53.447147 2843 factory.go:221] Registration of the containerd container factory successfully Mar 17 17:37:53.447205 kubelet[2843]: I0317 17:37:53.447191 2843 factory.go:221] Registration of the systemd container factory successfully Mar 17 17:37:53.472223 kubelet[2843]: I0317 17:37:53.472157 2843 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 17:37:53.474886 kubelet[2843]: I0317 17:37:53.474424 2843 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 17:37:53.474886 kubelet[2843]: I0317 17:37:53.474473 2843 status_manager.go:227] "Starting to sync pod status with apiserver" Mar 17 17:37:53.474886 kubelet[2843]: I0317 17:37:53.474510 2843 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 17 17:37:53.474886 kubelet[2843]: I0317 17:37:53.474525 2843 kubelet.go:2388] "Starting kubelet main sync loop" Mar 17 17:37:53.474886 kubelet[2843]: E0317 17:37:53.474592 2843 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 17:37:53.489732 kubelet[2843]: W0317 17:37:53.489554 2843 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.25.124:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.25.124:6443: connect: connection refused Mar 17 17:37:53.489732 kubelet[2843]: E0317 17:37:53.489661 2843 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.25.124:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.25.124:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:37:53.500177 kubelet[2843]: I0317 17:37:53.500126 2843 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 17 17:37:53.500177 kubelet[2843]: I0317 17:37:53.500157 2843 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 17 17:37:53.500409 kubelet[2843]: I0317 17:37:53.500191 2843 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:37:53.502409 kubelet[2843]: I0317 17:37:53.502363 2843 policy_none.go:49] "None policy: Start" Mar 17 17:37:53.502409 kubelet[2843]: I0317 17:37:53.502405 2843 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 17 17:37:53.502555 kubelet[2843]: I0317 17:37:53.502429 2843 state_mem.go:35] "Initializing new in-memory state store" Mar 17 17:37:53.511990 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Mar 17 17:37:53.530075 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 17 17:37:53.538218 kubelet[2843]: E0317 17:37:53.537357 2843 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-25-124\" not found" Mar 17 17:37:53.537467 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 17 17:37:53.551636 kubelet[2843]: I0317 17:37:53.550806 2843 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 17:37:53.551636 kubelet[2843]: I0317 17:37:53.551129 2843 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 17 17:37:53.551636 kubelet[2843]: I0317 17:37:53.551153 2843 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 17:37:53.551636 kubelet[2843]: I0317 17:37:53.551489 2843 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 17:37:53.554307 kubelet[2843]: E0317 17:37:53.554121 2843 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 17 17:37:53.554307 kubelet[2843]: E0317 17:37:53.554191 2843 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-25-124\" not found" Mar 17 17:37:53.594920 systemd[1]: Created slice kubepods-burstable-pod5fee038753295d0fc5148672d6c06328.slice - libcontainer container kubepods-burstable-pod5fee038753295d0fc5148672d6c06328.slice. 
Mar 17 17:37:53.614498 kubelet[2843]: E0317 17:37:53.614190 2843 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-124\" not found" node="ip-172-31-25-124" Mar 17 17:37:53.619945 systemd[1]: Created slice kubepods-burstable-podd062b80ef5d6287a5ff7483dd8b9d365.slice - libcontainer container kubepods-burstable-podd062b80ef5d6287a5ff7483dd8b9d365.slice. Mar 17 17:37:53.630667 kubelet[2843]: E0317 17:37:53.630604 2843 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-124\" not found" node="ip-172-31-25-124" Mar 17 17:37:53.636109 systemd[1]: Created slice kubepods-burstable-podfec878be0d104966d6ccedc80c6c0e17.slice - libcontainer container kubepods-burstable-podfec878be0d104966d6ccedc80c6c0e17.slice. Mar 17 17:37:53.639899 kubelet[2843]: E0317 17:37:53.639833 2843 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-124?timeout=10s\": dial tcp 172.31.25.124:6443: connect: connection refused" interval="400ms" Mar 17 17:37:53.641549 kubelet[2843]: E0317 17:37:53.641510 2843 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-124\" not found" node="ip-172-31-25-124" Mar 17 17:37:53.654756 kubelet[2843]: I0317 17:37:53.654700 2843 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-25-124" Mar 17 17:37:53.655396 kubelet[2843]: E0317 17:37:53.655332 2843 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.25.124:6443/api/v1/nodes\": dial tcp 172.31.25.124:6443: connect: connection refused" node="ip-172-31-25-124" Mar 17 17:37:53.738114 kubelet[2843]: I0317 17:37:53.737954 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec878be0d104966d6ccedc80c6c0e17-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-25-124\" (UID: \"fec878be0d104966d6ccedc80c6c0e17\") " pod="kube-system/kube-controller-manager-ip-172-31-25-124" Mar 17 17:37:53.738114 kubelet[2843]: I0317 17:37:53.738026 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec878be0d104966d6ccedc80c6c0e17-k8s-certs\") pod \"kube-controller-manager-ip-172-31-25-124\" (UID: \"fec878be0d104966d6ccedc80c6c0e17\") " pod="kube-system/kube-controller-manager-ip-172-31-25-124" Mar 17 17:37:53.738114 kubelet[2843]: I0317 17:37:53.738073 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec878be0d104966d6ccedc80c6c0e17-kubeconfig\") pod \"kube-controller-manager-ip-172-31-25-124\" (UID: \"fec878be0d104966d6ccedc80c6c0e17\") " pod="kube-system/kube-controller-manager-ip-172-31-25-124" Mar 17 17:37:53.738114 kubelet[2843]: I0317 17:37:53.738114 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5fee038753295d0fc5148672d6c06328-kubeconfig\") pod \"kube-scheduler-ip-172-31-25-124\" (UID: \"5fee038753295d0fc5148672d6c06328\") " pod="kube-system/kube-scheduler-ip-172-31-25-124" Mar 17 17:37:53.738461 kubelet[2843]: I0317 17:37:53.738152 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d062b80ef5d6287a5ff7483dd8b9d365-k8s-certs\") pod \"kube-apiserver-ip-172-31-25-124\" (UID: \"d062b80ef5d6287a5ff7483dd8b9d365\") " pod="kube-system/kube-apiserver-ip-172-31-25-124" Mar 17 17:37:53.738461 kubelet[2843]: I0317 17:37:53.738189 2843 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d062b80ef5d6287a5ff7483dd8b9d365-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-25-124\" (UID: \"d062b80ef5d6287a5ff7483dd8b9d365\") " pod="kube-system/kube-apiserver-ip-172-31-25-124" Mar 17 17:37:53.738461 kubelet[2843]: I0317 17:37:53.738225 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec878be0d104966d6ccedc80c6c0e17-ca-certs\") pod \"kube-controller-manager-ip-172-31-25-124\" (UID: \"fec878be0d104966d6ccedc80c6c0e17\") " pod="kube-system/kube-controller-manager-ip-172-31-25-124" Mar 17 17:37:53.738461 kubelet[2843]: I0317 17:37:53.738291 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec878be0d104966d6ccedc80c6c0e17-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-25-124\" (UID: \"fec878be0d104966d6ccedc80c6c0e17\") " pod="kube-system/kube-controller-manager-ip-172-31-25-124" Mar 17 17:37:53.738461 kubelet[2843]: I0317 17:37:53.738330 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d062b80ef5d6287a5ff7483dd8b9d365-ca-certs\") pod \"kube-apiserver-ip-172-31-25-124\" (UID: \"d062b80ef5d6287a5ff7483dd8b9d365\") " pod="kube-system/kube-apiserver-ip-172-31-25-124" Mar 17 17:37:53.858358 kubelet[2843]: I0317 17:37:53.858208 2843 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-25-124" Mar 17 17:37:53.859495 kubelet[2843]: E0317 17:37:53.859439 2843 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.25.124:6443/api/v1/nodes\": dial tcp 172.31.25.124:6443: connect: connection refused" 
node="ip-172-31-25-124" Mar 17 17:37:53.917358 containerd[1966]: time="2025-03-17T17:37:53.916931915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-25-124,Uid:5fee038753295d0fc5148672d6c06328,Namespace:kube-system,Attempt:0,}" Mar 17 17:37:53.932999 containerd[1966]: time="2025-03-17T17:37:53.932943453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-25-124,Uid:d062b80ef5d6287a5ff7483dd8b9d365,Namespace:kube-system,Attempt:0,}" Mar 17 17:37:53.943363 containerd[1966]: time="2025-03-17T17:37:53.943290199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-25-124,Uid:fec878be0d104966d6ccedc80c6c0e17,Namespace:kube-system,Attempt:0,}" Mar 17 17:37:54.040560 kubelet[2843]: E0317 17:37:54.040486 2843 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-124?timeout=10s\": dial tcp 172.31.25.124:6443: connect: connection refused" interval="800ms" Mar 17 17:37:54.262231 kubelet[2843]: I0317 17:37:54.262099 2843 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-25-124" Mar 17 17:37:54.263140 kubelet[2843]: E0317 17:37:54.263090 2843 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.25.124:6443/api/v1/nodes\": dial tcp 172.31.25.124:6443: connect: connection refused" node="ip-172-31-25-124" Mar 17 17:37:54.349360 kubelet[2843]: W0317 17:37:54.349226 2843 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.25.124:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.25.124:6443: connect: connection refused Mar 17 17:37:54.349360 kubelet[2843]: E0317 17:37:54.349311 2843 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.25.124:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.25.124:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:37:54.467003 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount331472866.mount: Deactivated successfully. Mar 17 17:37:54.476309 containerd[1966]: time="2025-03-17T17:37:54.475549680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:37:54.479391 containerd[1966]: time="2025-03-17T17:37:54.479306537Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Mar 17 17:37:54.487566 containerd[1966]: time="2025-03-17T17:37:54.487403096Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:37:54.490359 containerd[1966]: time="2025-03-17T17:37:54.490227459Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:37:54.493056 containerd[1966]: time="2025-03-17T17:37:54.492861215Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 17:37:54.495125 containerd[1966]: time="2025-03-17T17:37:54.495055420Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:37:54.498159 kubelet[2843]: W0317 17:37:54.498012 2843 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://172.31.25.124:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.25.124:6443: connect: connection refused Mar 17 17:37:54.498159 kubelet[2843]: E0317 17:37:54.498113 2843 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.25.124:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.25.124:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:37:54.499332 containerd[1966]: time="2025-03-17T17:37:54.498563513Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 17:37:54.499489 containerd[1966]: time="2025-03-17T17:37:54.499433540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:37:54.503326 containerd[1966]: time="2025-03-17T17:37:54.503264943Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 559.817249ms" Mar 17 17:37:54.507003 containerd[1966]: time="2025-03-17T17:37:54.506947339Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 589.903541ms" Mar 17 17:37:54.555035 containerd[1966]: time="2025-03-17T17:37:54.554978831Z" 
level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 621.742624ms" Mar 17 17:37:54.707519 kubelet[2843]: W0317 17:37:54.707364 2843 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.25.124:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.25.124:6443: connect: connection refused Mar 17 17:37:54.707519 kubelet[2843]: E0317 17:37:54.707462 2843 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.25.124:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.25.124:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:37:54.729543 kubelet[2843]: W0317 17:37:54.729385 2843 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.25.124:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-124&limit=500&resourceVersion=0": dial tcp 172.31.25.124:6443: connect: connection refused Mar 17 17:37:54.729543 kubelet[2843]: E0317 17:37:54.729486 2843 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.25.124:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-124&limit=500&resourceVersion=0\": dial tcp 172.31.25.124:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:37:54.773132 containerd[1966]: time="2025-03-17T17:37:54.772789091Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:37:54.773132 containerd[1966]: time="2025-03-17T17:37:54.773082038Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:37:54.773613 containerd[1966]: time="2025-03-17T17:37:54.773123603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:37:54.773986 containerd[1966]: time="2025-03-17T17:37:54.773821691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:37:54.781022 containerd[1966]: time="2025-03-17T17:37:54.780371373Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:37:54.783622 containerd[1966]: time="2025-03-17T17:37:54.780493510Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:37:54.783863 containerd[1966]: time="2025-03-17T17:37:54.783132212Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:37:54.783863 containerd[1966]: time="2025-03-17T17:37:54.783272418Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:37:54.783863 containerd[1966]: time="2025-03-17T17:37:54.783328775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:37:54.783863 containerd[1966]: time="2025-03-17T17:37:54.783475296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:37:54.784941 containerd[1966]: time="2025-03-17T17:37:54.784830533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:37:54.789292 containerd[1966]: time="2025-03-17T17:37:54.785978199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:37:54.830567 systemd[1]: Started cri-containerd-5eb4255955936db326166ae2e27b99741bb9bc620aba19bd386ff339f53a5178.scope - libcontainer container 5eb4255955936db326166ae2e27b99741bb9bc620aba19bd386ff339f53a5178. Mar 17 17:37:54.842213 kubelet[2843]: E0317 17:37:54.841410 2843 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-124?timeout=10s\": dial tcp 172.31.25.124:6443: connect: connection refused" interval="1.6s" Mar 17 17:37:54.850698 systemd[1]: Started cri-containerd-00fbc1148dec3fc39517077abd5dc74d195440fbd59d4ccf90136f4d8a928f34.scope - libcontainer container 00fbc1148dec3fc39517077abd5dc74d195440fbd59d4ccf90136f4d8a928f34. Mar 17 17:37:54.853801 systemd[1]: Started cri-containerd-66e41917000de716053cc96c6ae6edc8304cf4a125d253b3c5150a6d19246175.scope - libcontainer container 66e41917000de716053cc96c6ae6edc8304cf4a125d253b3c5150a6d19246175. 
Mar 17 17:37:54.968287 containerd[1966]: time="2025-03-17T17:37:54.967870409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-25-124,Uid:5fee038753295d0fc5148672d6c06328,Namespace:kube-system,Attempt:0,} returns sandbox id \"5eb4255955936db326166ae2e27b99741bb9bc620aba19bd386ff339f53a5178\"" Mar 17 17:37:54.974856 containerd[1966]: time="2025-03-17T17:37:54.974789707Z" level=info msg="CreateContainer within sandbox \"5eb4255955936db326166ae2e27b99741bb9bc620aba19bd386ff339f53a5178\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 17 17:37:54.976523 containerd[1966]: time="2025-03-17T17:37:54.976461578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-25-124,Uid:fec878be0d104966d6ccedc80c6c0e17,Namespace:kube-system,Attempt:0,} returns sandbox id \"00fbc1148dec3fc39517077abd5dc74d195440fbd59d4ccf90136f4d8a928f34\"" Mar 17 17:37:54.983222 containerd[1966]: time="2025-03-17T17:37:54.983173869Z" level=info msg="CreateContainer within sandbox \"00fbc1148dec3fc39517077abd5dc74d195440fbd59d4ccf90136f4d8a928f34\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 17 17:37:54.996992 containerd[1966]: time="2025-03-17T17:37:54.996900160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-25-124,Uid:d062b80ef5d6287a5ff7483dd8b9d365,Namespace:kube-system,Attempt:0,} returns sandbox id \"66e41917000de716053cc96c6ae6edc8304cf4a125d253b3c5150a6d19246175\"" Mar 17 17:37:55.010411 containerd[1966]: time="2025-03-17T17:37:55.010046647Z" level=info msg="CreateContainer within sandbox \"66e41917000de716053cc96c6ae6edc8304cf4a125d253b3c5150a6d19246175\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 17 17:37:55.035062 containerd[1966]: time="2025-03-17T17:37:55.034982520Z" level=info msg="CreateContainer within sandbox \"00fbc1148dec3fc39517077abd5dc74d195440fbd59d4ccf90136f4d8a928f34\" for 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1e3097e7d104b423fea82f23599496fb10a7ba3bcd3c41db49f654d8ad7a1f9e\"" Mar 17 17:37:55.036454 containerd[1966]: time="2025-03-17T17:37:55.036397596Z" level=info msg="StartContainer for \"1e3097e7d104b423fea82f23599496fb10a7ba3bcd3c41db49f654d8ad7a1f9e\"" Mar 17 17:37:55.040207 containerd[1966]: time="2025-03-17T17:37:55.039943916Z" level=info msg="CreateContainer within sandbox \"5eb4255955936db326166ae2e27b99741bb9bc620aba19bd386ff339f53a5178\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2383259a3abdc644c07bf4e5e42385978bec6cc2848651e14168c0c2857b23f8\"" Mar 17 17:37:55.042305 containerd[1966]: time="2025-03-17T17:37:55.041061050Z" level=info msg="StartContainer for \"2383259a3abdc644c07bf4e5e42385978bec6cc2848651e14168c0c2857b23f8\"" Mar 17 17:37:55.059155 containerd[1966]: time="2025-03-17T17:37:55.059088275Z" level=info msg="CreateContainer within sandbox \"66e41917000de716053cc96c6ae6edc8304cf4a125d253b3c5150a6d19246175\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"15289bd4a834c5b46055fc46cb899fe6c447481e30b0394c65ea34a62b09498a\"" Mar 17 17:37:55.062364 containerd[1966]: time="2025-03-17T17:37:55.062306484Z" level=info msg="StartContainer for \"15289bd4a834c5b46055fc46cb899fe6c447481e30b0394c65ea34a62b09498a\"" Mar 17 17:37:55.066640 kubelet[2843]: I0317 17:37:55.066599 2843 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-25-124" Mar 17 17:37:55.067375 kubelet[2843]: E0317 17:37:55.067313 2843 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.25.124:6443/api/v1/nodes\": dial tcp 172.31.25.124:6443: connect: connection refused" node="ip-172-31-25-124" Mar 17 17:37:55.098602 systemd[1]: Started cri-containerd-1e3097e7d104b423fea82f23599496fb10a7ba3bcd3c41db49f654d8ad7a1f9e.scope - libcontainer container 
1e3097e7d104b423fea82f23599496fb10a7ba3bcd3c41db49f654d8ad7a1f9e. Mar 17 17:37:55.140550 systemd[1]: Started cri-containerd-2383259a3abdc644c07bf4e5e42385978bec6cc2848651e14168c0c2857b23f8.scope - libcontainer container 2383259a3abdc644c07bf4e5e42385978bec6cc2848651e14168c0c2857b23f8. Mar 17 17:37:55.161607 systemd[1]: Started cri-containerd-15289bd4a834c5b46055fc46cb899fe6c447481e30b0394c65ea34a62b09498a.scope - libcontainer container 15289bd4a834c5b46055fc46cb899fe6c447481e30b0394c65ea34a62b09498a. Mar 17 17:37:55.259002 containerd[1966]: time="2025-03-17T17:37:55.258833732Z" level=info msg="StartContainer for \"1e3097e7d104b423fea82f23599496fb10a7ba3bcd3c41db49f654d8ad7a1f9e\" returns successfully" Mar 17 17:37:55.289642 containerd[1966]: time="2025-03-17T17:37:55.289061178Z" level=info msg="StartContainer for \"2383259a3abdc644c07bf4e5e42385978bec6cc2848651e14168c0c2857b23f8\" returns successfully" Mar 17 17:37:55.298300 containerd[1966]: time="2025-03-17T17:37:55.296345302Z" level=info msg="StartContainer for \"15289bd4a834c5b46055fc46cb899fe6c447481e30b0394c65ea34a62b09498a\" returns successfully" Mar 17 17:37:55.411428 kubelet[2843]: E0317 17:37:55.411228 2843 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.25.124:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.25.124:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:37:55.504117 kubelet[2843]: E0317 17:37:55.504076 2843 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-124\" not found" node="ip-172-31-25-124" Mar 17 17:37:55.512184 kubelet[2843]: E0317 17:37:55.511712 2843 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-124\" not 
found" node="ip-172-31-25-124" Mar 17 17:37:55.516879 kubelet[2843]: E0317 17:37:55.516416 2843 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-124\" not found" node="ip-172-31-25-124" Mar 17 17:37:56.519980 kubelet[2843]: E0317 17:37:56.519928 2843 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-124\" not found" node="ip-172-31-25-124" Mar 17 17:37:56.523666 kubelet[2843]: E0317 17:37:56.521005 2843 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-124\" not found" node="ip-172-31-25-124" Mar 17 17:37:56.670218 kubelet[2843]: I0317 17:37:56.669828 2843 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-25-124" Mar 17 17:37:57.523594 kubelet[2843]: E0317 17:37:57.523361 2843 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-124\" not found" node="ip-172-31-25-124" Mar 17 17:37:58.922330 kubelet[2843]: E0317 17:37:58.922264 2843 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-25-124\" not found" node="ip-172-31-25-124" Mar 17 17:37:59.040038 kubelet[2843]: I0317 17:37:59.039977 2843 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-25-124" Mar 17 17:37:59.041131 kubelet[2843]: I0317 17:37:59.041055 2843 kubelet_node_status.go:79] "Successfully registered node" node="ip-172-31-25-124" Mar 17 17:37:59.135145 kubelet[2843]: E0317 17:37:59.135090 2843 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-25-124\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-25-124" Mar 17 17:37:59.135145 kubelet[2843]: I0317 17:37:59.135138 2843 kubelet.go:3200] 
"Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-25-124" Mar 17 17:37:59.149160 kubelet[2843]: E0317 17:37:59.149093 2843 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-25-124\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-25-124" Mar 17 17:37:59.149160 kubelet[2843]: I0317 17:37:59.149142 2843 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-25-124" Mar 17 17:37:59.159628 kubelet[2843]: E0317 17:37:59.159564 2843 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-25-124\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-25-124" Mar 17 17:37:59.418835 kubelet[2843]: I0317 17:37:59.418771 2843 apiserver.go:52] "Watching apiserver" Mar 17 17:37:59.443311 kubelet[2843]: I0317 17:37:59.443231 2843 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 17:37:59.573681 kubelet[2843]: I0317 17:37:59.572582 2843 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-25-124" Mar 17 17:37:59.586208 kubelet[2843]: E0317 17:37:59.584483 2843 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-25-124\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-25-124" Mar 17 17:38:00.952459 systemd[1]: Reload requested from client PID 3120 ('systemctl') (unit session-7.scope)... Mar 17 17:38:00.952495 systemd[1]: Reloading... Mar 17 17:38:01.168282 zram_generator::config[3171]: No configuration found. 
Mar 17 17:38:01.415330 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:38:01.672491 systemd[1]: Reloading finished in 719 ms. Mar 17 17:38:01.731411 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:38:01.744851 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 17:38:01.745449 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:38:01.745550 systemd[1]: kubelet.service: Consumed 2.663s CPU time, 122.3M memory peak. Mar 17 17:38:01.754796 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:38:02.124547 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:38:02.136883 (kubelet)[3225]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:38:02.236777 kubelet[3225]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:38:02.236777 kubelet[3225]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 17 17:38:02.236777 kubelet[3225]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 17 17:38:02.236777 kubelet[3225]: I0317 17:38:02.236309 3225 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 17:38:02.260622 kubelet[3225]: I0317 17:38:02.260575 3225 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Mar 17 17:38:02.261295 kubelet[3225]: I0317 17:38:02.260806 3225 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 17:38:02.261688 kubelet[3225]: I0317 17:38:02.261657 3225 server.go:954] "Client rotation is on, will bootstrap in background" Mar 17 17:38:02.264407 kubelet[3225]: I0317 17:38:02.264368 3225 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 17 17:38:02.269402 kubelet[3225]: I0317 17:38:02.269343 3225 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 17:38:02.275907 kubelet[3225]: E0317 17:38:02.275851 3225 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 17 17:38:02.275907 kubelet[3225]: I0317 17:38:02.275907 3225 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 17 17:38:02.285679 kubelet[3225]: I0317 17:38:02.285619 3225 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 17:38:02.288781 kubelet[3225]: I0317 17:38:02.285997 3225 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 17:38:02.288781 kubelet[3225]: I0317 17:38:02.286048 3225 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-25-124","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 17 17:38:02.288781 kubelet[3225]: I0317 17:38:02.286402 3225 topology_manager.go:138] "Creating topology manager with none 
policy" Mar 17 17:38:02.288781 kubelet[3225]: I0317 17:38:02.286422 3225 container_manager_linux.go:304] "Creating device plugin manager" Mar 17 17:38:02.289126 kubelet[3225]: I0317 17:38:02.286500 3225 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:38:02.289126 kubelet[3225]: I0317 17:38:02.286705 3225 kubelet.go:446] "Attempting to sync node with API server" Mar 17 17:38:02.289126 kubelet[3225]: I0317 17:38:02.286727 3225 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 17:38:02.289126 kubelet[3225]: I0317 17:38:02.286759 3225 kubelet.go:352] "Adding apiserver pod source" Mar 17 17:38:02.289126 kubelet[3225]: I0317 17:38:02.286779 3225 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 17:38:02.290148 sudo[3239]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 17 17:38:02.290861 sudo[3239]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 17 17:38:02.298475 kubelet[3225]: I0317 17:38:02.298414 3225 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 17:38:02.304754 kubelet[3225]: I0317 17:38:02.304462 3225 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 17:38:02.310946 kubelet[3225]: I0317 17:38:02.310902 3225 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 17 17:38:02.312884 kubelet[3225]: I0317 17:38:02.312838 3225 server.go:1287] "Started kubelet" Mar 17 17:38:02.339413 kubelet[3225]: I0317 17:38:02.338737 3225 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 17:38:02.341784 kubelet[3225]: I0317 17:38:02.341702 3225 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 17:38:02.342818 kubelet[3225]: I0317 17:38:02.342232 3225 server.go:243] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 17:38:02.359905 kubelet[3225]: I0317 17:38:02.359834 3225 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 17:38:02.369002 kubelet[3225]: I0317 17:38:02.368841 3225 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 17 17:38:02.370201 kubelet[3225]: I0317 17:38:02.370164 3225 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 17 17:38:02.371656 kubelet[3225]: E0317 17:38:02.371616 3225 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-25-124\" not found" Mar 17 17:38:02.393601 kubelet[3225]: I0317 17:38:02.373861 3225 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 17:38:02.394532 kubelet[3225]: I0317 17:38:02.377830 3225 server.go:490] "Adding debug handlers to kubelet server" Mar 17 17:38:02.398195 kubelet[3225]: I0317 17:38:02.392029 3225 reconciler.go:26] "Reconciler: start to sync state" Mar 17 17:38:02.404720 kubelet[3225]: E0317 17:38:02.404678 3225 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 17:38:02.406450 kubelet[3225]: I0317 17:38:02.406018 3225 factory.go:221] Registration of the systemd container factory successfully Mar 17 17:38:02.407655 kubelet[3225]: I0317 17:38:02.407035 3225 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 17:38:02.417071 kubelet[3225]: I0317 17:38:02.416798 3225 factory.go:221] Registration of the containerd container factory successfully Mar 17 17:38:02.422675 kubelet[3225]: I0317 17:38:02.422622 3225 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Mar 17 17:38:02.426222 kubelet[3225]: I0317 17:38:02.425484 3225 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 17:38:02.426222 kubelet[3225]: I0317 17:38:02.425527 3225 status_manager.go:227] "Starting to sync pod status with apiserver" Mar 17 17:38:02.426222 kubelet[3225]: I0317 17:38:02.425559 3225 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 17 17:38:02.426222 kubelet[3225]: I0317 17:38:02.425573 3225 kubelet.go:2388] "Starting kubelet main sync loop" Mar 17 17:38:02.426222 kubelet[3225]: E0317 17:38:02.425633 3225 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 17:38:02.526105 kubelet[3225]: E0317 17:38:02.525930 3225 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 17 17:38:02.558585 kubelet[3225]: I0317 17:38:02.558525 3225 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 17 17:38:02.558585 kubelet[3225]: I0317 17:38:02.558560 3225 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 17 17:38:02.558773 kubelet[3225]: I0317 17:38:02.558596 3225 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:38:02.561274 kubelet[3225]: I0317 17:38:02.559624 3225 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 17 17:38:02.561274 kubelet[3225]: I0317 17:38:02.559691 3225 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 17 17:38:02.561274 kubelet[3225]: I0317 17:38:02.559950 3225 policy_none.go:49] "None policy: Start" Mar 17 17:38:02.561274 kubelet[3225]: I0317 17:38:02.559975 3225 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 17 17:38:02.561274 kubelet[3225]: I0317 17:38:02.559999 3225 state_mem.go:35] "Initializing new in-memory state store" Mar 17 17:38:02.561274 
kubelet[3225]: I0317 17:38:02.560413 3225 state_mem.go:75] "Updated machine memory state" Mar 17 17:38:02.572167 kubelet[3225]: I0317 17:38:02.572106 3225 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 17:38:02.572457 kubelet[3225]: I0317 17:38:02.572413 3225 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 17 17:38:02.572543 kubelet[3225]: I0317 17:38:02.572447 3225 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 17:38:02.577568 kubelet[3225]: I0317 17:38:02.576478 3225 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 17:38:02.580512 kubelet[3225]: E0317 17:38:02.580080 3225 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 17 17:38:02.660380 update_engine[1938]: I20250317 17:38:02.660151 1938 update_attempter.cc:509] Updating boot flags... 
Mar 17 17:38:02.705289 kubelet[3225]: I0317 17:38:02.702631 3225 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-25-124" Mar 17 17:38:02.726960 kubelet[3225]: I0317 17:38:02.726898 3225 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-25-124" Mar 17 17:38:02.727744 kubelet[3225]: I0317 17:38:02.727688 3225 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-25-124" Mar 17 17:38:02.728416 kubelet[3225]: I0317 17:38:02.728209 3225 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-25-124" Mar 17 17:38:02.738973 kubelet[3225]: I0317 17:38:02.738893 3225 kubelet_node_status.go:125] "Node was previously registered" node="ip-172-31-25-124" Mar 17 17:38:02.739445 kubelet[3225]: I0317 17:38:02.739275 3225 kubelet_node_status.go:79] "Successfully registered node" node="ip-172-31-25-124" Mar 17 17:38:02.800401 kubelet[3225]: I0317 17:38:02.800057 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec878be0d104966d6ccedc80c6c0e17-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-25-124\" (UID: \"fec878be0d104966d6ccedc80c6c0e17\") " pod="kube-system/kube-controller-manager-ip-172-31-25-124" Mar 17 17:38:02.800401 kubelet[3225]: I0317 17:38:02.800148 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec878be0d104966d6ccedc80c6c0e17-k8s-certs\") pod \"kube-controller-manager-ip-172-31-25-124\" (UID: \"fec878be0d104966d6ccedc80c6c0e17\") " pod="kube-system/kube-controller-manager-ip-172-31-25-124" Mar 17 17:38:02.800401 kubelet[3225]: I0317 17:38:02.800199 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/fec878be0d104966d6ccedc80c6c0e17-kubeconfig\") pod \"kube-controller-manager-ip-172-31-25-124\" (UID: \"fec878be0d104966d6ccedc80c6c0e17\") " pod="kube-system/kube-controller-manager-ip-172-31-25-124" Mar 17 17:38:02.800401 kubelet[3225]: I0317 17:38:02.800313 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec878be0d104966d6ccedc80c6c0e17-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-25-124\" (UID: \"fec878be0d104966d6ccedc80c6c0e17\") " pod="kube-system/kube-controller-manager-ip-172-31-25-124" Mar 17 17:38:02.803103 kubelet[3225]: I0317 17:38:02.801479 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5fee038753295d0fc5148672d6c06328-kubeconfig\") pod \"kube-scheduler-ip-172-31-25-124\" (UID: \"5fee038753295d0fc5148672d6c06328\") " pod="kube-system/kube-scheduler-ip-172-31-25-124" Mar 17 17:38:02.803103 kubelet[3225]: I0317 17:38:02.801564 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d062b80ef5d6287a5ff7483dd8b9d365-ca-certs\") pod \"kube-apiserver-ip-172-31-25-124\" (UID: \"d062b80ef5d6287a5ff7483dd8b9d365\") " pod="kube-system/kube-apiserver-ip-172-31-25-124" Mar 17 17:38:02.803103 kubelet[3225]: I0317 17:38:02.801628 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d062b80ef5d6287a5ff7483dd8b9d365-k8s-certs\") pod \"kube-apiserver-ip-172-31-25-124\" (UID: \"d062b80ef5d6287a5ff7483dd8b9d365\") " pod="kube-system/kube-apiserver-ip-172-31-25-124" Mar 17 17:38:02.803103 kubelet[3225]: I0317 17:38:02.801673 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d062b80ef5d6287a5ff7483dd8b9d365-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-25-124\" (UID: \"d062b80ef5d6287a5ff7483dd8b9d365\") " pod="kube-system/kube-apiserver-ip-172-31-25-124" Mar 17 17:38:02.803103 kubelet[3225]: I0317 17:38:02.801743 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec878be0d104966d6ccedc80c6c0e17-ca-certs\") pod \"kube-controller-manager-ip-172-31-25-124\" (UID: \"fec878be0d104966d6ccedc80c6c0e17\") " pod="kube-system/kube-controller-manager-ip-172-31-25-124" Mar 17 17:38:02.838452 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3276) Mar 17 17:38:03.289237 kubelet[3225]: I0317 17:38:03.289182 3225 apiserver.go:52] "Watching apiserver" Mar 17 17:38:03.378663 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3276) Mar 17 17:38:03.395679 kubelet[3225]: I0317 17:38:03.395617 3225 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 17:38:03.480540 kubelet[3225]: I0317 17:38:03.480504 3225 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-25-124" Mar 17 17:38:03.528215 kubelet[3225]: E0317 17:38:03.528156 3225 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-25-124\" already exists" pod="kube-system/kube-apiserver-ip-172-31-25-124" Mar 17 17:38:03.618340 kubelet[3225]: I0317 17:38:03.616561 3225 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-25-124" podStartSLOduration=1.6165381779999999 podStartE2EDuration="1.616538178s" podCreationTimestamp="2025-03-17 17:38:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:38:03.534908879 +0000 UTC m=+1.389450141" watchObservedRunningTime="2025-03-17 17:38:03.616538178 +0000 UTC m=+1.471079428" Mar 17 17:38:03.681727 kubelet[3225]: I0317 17:38:03.681503 3225 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-25-124" podStartSLOduration=1.681480241 podStartE2EDuration="1.681480241s" podCreationTimestamp="2025-03-17 17:38:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:38:03.621614483 +0000 UTC m=+1.476155745" watchObservedRunningTime="2025-03-17 17:38:03.681480241 +0000 UTC m=+1.536021479" Mar 17 17:38:03.781397 kubelet[3225]: I0317 17:38:03.780024 3225 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-25-124" podStartSLOduration=1.780002018 podStartE2EDuration="1.780002018s" podCreationTimestamp="2025-03-17 17:38:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:38:03.685681356 +0000 UTC m=+1.540222630" watchObservedRunningTime="2025-03-17 17:38:03.780002018 +0000 UTC m=+1.634543268" Mar 17 17:38:03.803430 sudo[3239]: pam_unix(sudo:session): session closed for user root Mar 17 17:38:05.505711 kubelet[3225]: I0317 17:38:05.505647 3225 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 17 17:38:05.507024 containerd[1966]: time="2025-03-17T17:38:05.506435691Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 17 17:38:05.508939 kubelet[3225]: I0317 17:38:05.508338 3225 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 17 17:38:05.723782 kubelet[3225]: I0317 17:38:05.721727 3225 status_manager.go:890] "Failed to get status for pod" podUID="388d4b9e-3743-47a7-981a-d0d5eff2558d" pod="kube-system/kube-proxy-6mxcj" err="pods \"kube-proxy-6mxcj\" is forbidden: User \"system:node:ip-172-31-25-124\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-25-124' and this object" Mar 17 17:38:05.723782 kubelet[3225]: W0317 17:38:05.721855 3225 reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-25-124" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-25-124' and this object Mar 17 17:38:05.723782 kubelet[3225]: E0317 17:38:05.721898 3225 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ip-172-31-25-124\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-25-124' and this object" logger="UnhandledError" Mar 17 17:38:05.723782 kubelet[3225]: W0317 17:38:05.721974 3225 reflector.go:569] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-25-124" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-25-124' and this object Mar 17 17:38:05.723782 kubelet[3225]: E0317 17:38:05.721999 3225 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: 
failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ip-172-31-25-124\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-25-124' and this object" logger="UnhandledError" Mar 17 17:38:05.729590 systemd[1]: Created slice kubepods-besteffort-pod388d4b9e_3743_47a7_981a_d0d5eff2558d.slice - libcontainer container kubepods-besteffort-pod388d4b9e_3743_47a7_981a_d0d5eff2558d.slice. Mar 17 17:38:05.732986 kubelet[3225]: I0317 17:38:05.732912 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/388d4b9e-3743-47a7-981a-d0d5eff2558d-xtables-lock\") pod \"kube-proxy-6mxcj\" (UID: \"388d4b9e-3743-47a7-981a-d0d5eff2558d\") " pod="kube-system/kube-proxy-6mxcj" Mar 17 17:38:05.733145 kubelet[3225]: I0317 17:38:05.732997 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzwmh\" (UniqueName: \"kubernetes.io/projected/388d4b9e-3743-47a7-981a-d0d5eff2558d-kube-api-access-wzwmh\") pod \"kube-proxy-6mxcj\" (UID: \"388d4b9e-3743-47a7-981a-d0d5eff2558d\") " pod="kube-system/kube-proxy-6mxcj" Mar 17 17:38:05.733145 kubelet[3225]: I0317 17:38:05.733046 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/388d4b9e-3743-47a7-981a-d0d5eff2558d-kube-proxy\") pod \"kube-proxy-6mxcj\" (UID: \"388d4b9e-3743-47a7-981a-d0d5eff2558d\") " pod="kube-system/kube-proxy-6mxcj" Mar 17 17:38:05.733145 kubelet[3225]: I0317 17:38:05.733083 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/388d4b9e-3743-47a7-981a-d0d5eff2558d-lib-modules\") pod \"kube-proxy-6mxcj\" (UID: \"388d4b9e-3743-47a7-981a-d0d5eff2558d\") " 
pod="kube-system/kube-proxy-6mxcj" Mar 17 17:38:05.782192 systemd[1]: Created slice kubepods-burstable-pod5e2b31e8_67d5_49ed_945b_41f7327688c7.slice - libcontainer container kubepods-burstable-pod5e2b31e8_67d5_49ed_945b_41f7327688c7.slice. Mar 17 17:38:05.834925 kubelet[3225]: I0317 17:38:05.834438 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5e2b31e8-67d5-49ed-945b-41f7327688c7-bpf-maps\") pod \"cilium-kw79c\" (UID: \"5e2b31e8-67d5-49ed-945b-41f7327688c7\") " pod="kube-system/cilium-kw79c" Mar 17 17:38:05.834925 kubelet[3225]: I0317 17:38:05.834515 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kg96\" (UniqueName: \"kubernetes.io/projected/5e2b31e8-67d5-49ed-945b-41f7327688c7-kube-api-access-6kg96\") pod \"cilium-kw79c\" (UID: \"5e2b31e8-67d5-49ed-945b-41f7327688c7\") " pod="kube-system/cilium-kw79c" Mar 17 17:38:05.834925 kubelet[3225]: I0317 17:38:05.834554 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5e2b31e8-67d5-49ed-945b-41f7327688c7-cni-path\") pod \"cilium-kw79c\" (UID: \"5e2b31e8-67d5-49ed-945b-41f7327688c7\") " pod="kube-system/cilium-kw79c" Mar 17 17:38:05.834925 kubelet[3225]: I0317 17:38:05.834647 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5e2b31e8-67d5-49ed-945b-41f7327688c7-etc-cni-netd\") pod \"cilium-kw79c\" (UID: \"5e2b31e8-67d5-49ed-945b-41f7327688c7\") " pod="kube-system/cilium-kw79c" Mar 17 17:38:05.834925 kubelet[3225]: I0317 17:38:05.834689 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5e2b31e8-67d5-49ed-945b-41f7327688c7-hostproc\") pod 
\"cilium-kw79c\" (UID: \"5e2b31e8-67d5-49ed-945b-41f7327688c7\") " pod="kube-system/cilium-kw79c" Mar 17 17:38:05.834925 kubelet[3225]: I0317 17:38:05.834725 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5e2b31e8-67d5-49ed-945b-41f7327688c7-host-proc-sys-net\") pod \"cilium-kw79c\" (UID: \"5e2b31e8-67d5-49ed-945b-41f7327688c7\") " pod="kube-system/cilium-kw79c" Mar 17 17:38:05.835398 kubelet[3225]: I0317 17:38:05.834760 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5e2b31e8-67d5-49ed-945b-41f7327688c7-hubble-tls\") pod \"cilium-kw79c\" (UID: \"5e2b31e8-67d5-49ed-945b-41f7327688c7\") " pod="kube-system/cilium-kw79c" Mar 17 17:38:05.835920 kubelet[3225]: I0317 17:38:05.834815 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5e2b31e8-67d5-49ed-945b-41f7327688c7-cilium-cgroup\") pod \"cilium-kw79c\" (UID: \"5e2b31e8-67d5-49ed-945b-41f7327688c7\") " pod="kube-system/cilium-kw79c" Mar 17 17:38:05.836131 kubelet[3225]: I0317 17:38:05.835968 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5e2b31e8-67d5-49ed-945b-41f7327688c7-xtables-lock\") pod \"cilium-kw79c\" (UID: \"5e2b31e8-67d5-49ed-945b-41f7327688c7\") " pod="kube-system/cilium-kw79c" Mar 17 17:38:05.836131 kubelet[3225]: I0317 17:38:05.836061 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5e2b31e8-67d5-49ed-945b-41f7327688c7-cilium-run\") pod \"cilium-kw79c\" (UID: \"5e2b31e8-67d5-49ed-945b-41f7327688c7\") " pod="kube-system/cilium-kw79c" Mar 17 17:38:05.836131 kubelet[3225]: 
I0317 17:38:05.836099 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5e2b31e8-67d5-49ed-945b-41f7327688c7-clustermesh-secrets\") pod \"cilium-kw79c\" (UID: \"5e2b31e8-67d5-49ed-945b-41f7327688c7\") " pod="kube-system/cilium-kw79c" Mar 17 17:38:05.836325 kubelet[3225]: I0317 17:38:05.836135 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5e2b31e8-67d5-49ed-945b-41f7327688c7-cilium-config-path\") pod \"cilium-kw79c\" (UID: \"5e2b31e8-67d5-49ed-945b-41f7327688c7\") " pod="kube-system/cilium-kw79c" Mar 17 17:38:05.836325 kubelet[3225]: I0317 17:38:05.836192 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5e2b31e8-67d5-49ed-945b-41f7327688c7-host-proc-sys-kernel\") pod \"cilium-kw79c\" (UID: \"5e2b31e8-67d5-49ed-945b-41f7327688c7\") " pod="kube-system/cilium-kw79c" Mar 17 17:38:05.836325 kubelet[3225]: I0317 17:38:05.836279 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5e2b31e8-67d5-49ed-945b-41f7327688c7-lib-modules\") pod \"cilium-kw79c\" (UID: \"5e2b31e8-67d5-49ed-945b-41f7327688c7\") " pod="kube-system/cilium-kw79c" Mar 17 17:38:06.382856 sudo[2282]: pam_unix(sudo:session): session closed for user root Mar 17 17:38:06.406414 sshd[2281]: Connection closed by 147.75.109.163 port 45438 Mar 17 17:38:06.408521 sshd-session[2279]: pam_unix(sshd:session): session closed for user core Mar 17 17:38:06.415625 systemd[1]: sshd@6-172.31.25.124:22-147.75.109.163:45438.service: Deactivated successfully. Mar 17 17:38:06.422856 systemd[1]: session-7.scope: Deactivated successfully. 
Mar 17 17:38:06.423272 systemd[1]: session-7.scope: Consumed 11.736s CPU time, 264.3M memory peak. Mar 17 17:38:06.429146 systemd-logind[1935]: Session 7 logged out. Waiting for processes to exit. Mar 17 17:38:06.432305 systemd-logind[1935]: Removed session 7. Mar 17 17:38:06.610735 systemd[1]: Created slice kubepods-besteffort-podb2716884_45db_436e_9a89_642ac6bbcb3f.slice - libcontainer container kubepods-besteffort-podb2716884_45db_436e_9a89_642ac6bbcb3f.slice. Mar 17 17:38:06.642868 kubelet[3225]: I0317 17:38:06.642727 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5j84\" (UniqueName: \"kubernetes.io/projected/b2716884-45db-436e-9a89-642ac6bbcb3f-kube-api-access-b5j84\") pod \"cilium-operator-6c4d7847fc-ldfgc\" (UID: \"b2716884-45db-436e-9a89-642ac6bbcb3f\") " pod="kube-system/cilium-operator-6c4d7847fc-ldfgc" Mar 17 17:38:06.644234 kubelet[3225]: I0317 17:38:06.644061 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b2716884-45db-436e-9a89-642ac6bbcb3f-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-ldfgc\" (UID: \"b2716884-45db-436e-9a89-642ac6bbcb3f\") " pod="kube-system/cilium-operator-6c4d7847fc-ldfgc" Mar 17 17:38:06.920875 containerd[1966]: time="2025-03-17T17:38:06.920717301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-ldfgc,Uid:b2716884-45db-436e-9a89-642ac6bbcb3f,Namespace:kube-system,Attempt:0,}" Mar 17 17:38:06.949292 containerd[1966]: time="2025-03-17T17:38:06.948616291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6mxcj,Uid:388d4b9e-3743-47a7-981a-d0d5eff2558d,Namespace:kube-system,Attempt:0,}" Mar 17 17:38:06.984625 containerd[1966]: time="2025-03-17T17:38:06.984215114Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:38:06.984625 containerd[1966]: time="2025-03-17T17:38:06.984403968Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:38:06.984625 containerd[1966]: time="2025-03-17T17:38:06.984435244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:38:06.985356 containerd[1966]: time="2025-03-17T17:38:06.985124605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:38:06.994498 containerd[1966]: time="2025-03-17T17:38:06.994413503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kw79c,Uid:5e2b31e8-67d5-49ed-945b-41f7327688c7,Namespace:kube-system,Attempt:0,}" Mar 17 17:38:07.050569 systemd[1]: Started cri-containerd-5c0dad5a0ed6995fc6bd9bf356236cc399b7b65730fa985f61ebd2144eda987b.scope - libcontainer container 5c0dad5a0ed6995fc6bd9bf356236cc399b7b65730fa985f61ebd2144eda987b. Mar 17 17:38:07.058429 containerd[1966]: time="2025-03-17T17:38:07.057919540Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:38:07.058429 containerd[1966]: time="2025-03-17T17:38:07.058050142Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:38:07.058429 containerd[1966]: time="2025-03-17T17:38:07.058089065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:38:07.061771 containerd[1966]: time="2025-03-17T17:38:07.060668770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:38:07.066482 containerd[1966]: time="2025-03-17T17:38:07.065617007Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:38:07.066482 containerd[1966]: time="2025-03-17T17:38:07.065735554Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:38:07.066482 containerd[1966]: time="2025-03-17T17:38:07.065761703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:38:07.066482 containerd[1966]: time="2025-03-17T17:38:07.065902486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:38:07.122638 systemd[1]: Started cri-containerd-e7b05e074fda80875c2dd0aa5a944c20f66e876f0df03803985b13ce1e6fce46.scope - libcontainer container e7b05e074fda80875c2dd0aa5a944c20f66e876f0df03803985b13ce1e6fce46. Mar 17 17:38:07.141609 systemd[1]: Started cri-containerd-b6762dce8a89d456ee16d87bfdc1097c37f0b0a2fa0e6070a4ea8ae0129ba33f.scope - libcontainer container b6762dce8a89d456ee16d87bfdc1097c37f0b0a2fa0e6070a4ea8ae0129ba33f. 
Mar 17 17:38:07.221647 containerd[1966]: time="2025-03-17T17:38:07.221476115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-ldfgc,Uid:b2716884-45db-436e-9a89-642ac6bbcb3f,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c0dad5a0ed6995fc6bd9bf356236cc399b7b65730fa985f61ebd2144eda987b\"" Mar 17 17:38:07.226035 containerd[1966]: time="2025-03-17T17:38:07.225803053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kw79c,Uid:5e2b31e8-67d5-49ed-945b-41f7327688c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6762dce8a89d456ee16d87bfdc1097c37f0b0a2fa0e6070a4ea8ae0129ba33f\"" Mar 17 17:38:07.232208 containerd[1966]: time="2025-03-17T17:38:07.231998846Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 17 17:38:07.240622 containerd[1966]: time="2025-03-17T17:38:07.240552881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6mxcj,Uid:388d4b9e-3743-47a7-981a-d0d5eff2558d,Namespace:kube-system,Attempt:0,} returns sandbox id \"e7b05e074fda80875c2dd0aa5a944c20f66e876f0df03803985b13ce1e6fce46\"" Mar 17 17:38:07.247675 containerd[1966]: time="2025-03-17T17:38:07.247412689Z" level=info msg="CreateContainer within sandbox \"e7b05e074fda80875c2dd0aa5a944c20f66e876f0df03803985b13ce1e6fce46\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 17:38:07.267307 containerd[1966]: time="2025-03-17T17:38:07.267204640Z" level=info msg="CreateContainer within sandbox \"e7b05e074fda80875c2dd0aa5a944c20f66e876f0df03803985b13ce1e6fce46\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c7639e46dd39c9dacf1b1fdd1b3de497906242fec21905f3877570da8d226e91\"" Mar 17 17:38:07.270295 containerd[1966]: time="2025-03-17T17:38:07.268485116Z" level=info msg="StartContainer for \"c7639e46dd39c9dacf1b1fdd1b3de497906242fec21905f3877570da8d226e91\"" Mar 17 17:38:07.321577 
systemd[1]: Started cri-containerd-c7639e46dd39c9dacf1b1fdd1b3de497906242fec21905f3877570da8d226e91.scope - libcontainer container c7639e46dd39c9dacf1b1fdd1b3de497906242fec21905f3877570da8d226e91. Mar 17 17:38:07.387337 containerd[1966]: time="2025-03-17T17:38:07.387230808Z" level=info msg="StartContainer for \"c7639e46dd39c9dacf1b1fdd1b3de497906242fec21905f3877570da8d226e91\" returns successfully" Mar 17 17:38:08.942523 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3106235314.mount: Deactivated successfully. Mar 17 17:38:09.046068 kubelet[3225]: I0317 17:38:09.045930 3225 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6mxcj" podStartSLOduration=4.045613284 podStartE2EDuration="4.045613284s" podCreationTimestamp="2025-03-17 17:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:38:07.525391547 +0000 UTC m=+5.379932797" watchObservedRunningTime="2025-03-17 17:38:09.045613284 +0000 UTC m=+6.900154534" Mar 17 17:38:09.568594 containerd[1966]: time="2025-03-17T17:38:09.568331236Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:38:09.569965 containerd[1966]: time="2025-03-17T17:38:09.569872807Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Mar 17 17:38:09.571112 containerd[1966]: time="2025-03-17T17:38:09.571069781Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:38:09.575536 containerd[1966]: time="2025-03-17T17:38:09.575461804Z" level=info msg="Pulled image 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.343392639s" Mar 17 17:38:09.575536 containerd[1966]: time="2025-03-17T17:38:09.575523371Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Mar 17 17:38:09.577290 containerd[1966]: time="2025-03-17T17:38:09.577213551Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 17 17:38:09.581615 containerd[1966]: time="2025-03-17T17:38:09.581556746Z" level=info msg="CreateContainer within sandbox \"5c0dad5a0ed6995fc6bd9bf356236cc399b7b65730fa985f61ebd2144eda987b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 17 17:38:09.615352 containerd[1966]: time="2025-03-17T17:38:09.615218184Z" level=info msg="CreateContainer within sandbox \"5c0dad5a0ed6995fc6bd9bf356236cc399b7b65730fa985f61ebd2144eda987b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"1fd32e14491d3e530bed9c36f090cc261eedd4982693c617de53f4645abecb2d\"" Mar 17 17:38:09.616159 containerd[1966]: time="2025-03-17T17:38:09.616116894Z" level=info msg="StartContainer for \"1fd32e14491d3e530bed9c36f090cc261eedd4982693c617de53f4645abecb2d\"" Mar 17 17:38:09.676547 systemd[1]: Started cri-containerd-1fd32e14491d3e530bed9c36f090cc261eedd4982693c617de53f4645abecb2d.scope - libcontainer container 1fd32e14491d3e530bed9c36f090cc261eedd4982693c617de53f4645abecb2d. 
Mar 17 17:38:09.724911 containerd[1966]: time="2025-03-17T17:38:09.724837888Z" level=info msg="StartContainer for \"1fd32e14491d3e530bed9c36f090cc261eedd4982693c617de53f4645abecb2d\" returns successfully" Mar 17 17:38:27.072554 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount655828652.mount: Deactivated successfully. Mar 17 17:38:29.658658 containerd[1966]: time="2025-03-17T17:38:29.658583722Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:38:29.660344 containerd[1966]: time="2025-03-17T17:38:29.660289307Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Mar 17 17:38:29.661344 containerd[1966]: time="2025-03-17T17:38:29.661231394Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:38:29.664805 containerd[1966]: time="2025-03-17T17:38:29.664595691Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 20.087301592s" Mar 17 17:38:29.664805 containerd[1966]: time="2025-03-17T17:38:29.664654520Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Mar 17 17:38:29.670756 containerd[1966]: time="2025-03-17T17:38:29.670684102Z" level=info 
msg="CreateContainer within sandbox \"b6762dce8a89d456ee16d87bfdc1097c37f0b0a2fa0e6070a4ea8ae0129ba33f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 17:38:29.692314 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1710888512.mount: Deactivated successfully. Mar 17 17:38:29.697706 containerd[1966]: time="2025-03-17T17:38:29.697619335Z" level=info msg="CreateContainer within sandbox \"b6762dce8a89d456ee16d87bfdc1097c37f0b0a2fa0e6070a4ea8ae0129ba33f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"500425624f9a1f05ebd84b241c02feb6d6132f5e5e6f5556031988cac87e4ccf\"" Mar 17 17:38:29.699106 containerd[1966]: time="2025-03-17T17:38:29.699037688Z" level=info msg="StartContainer for \"500425624f9a1f05ebd84b241c02feb6d6132f5e5e6f5556031988cac87e4ccf\"" Mar 17 17:38:29.761556 systemd[1]: Started cri-containerd-500425624f9a1f05ebd84b241c02feb6d6132f5e5e6f5556031988cac87e4ccf.scope - libcontainer container 500425624f9a1f05ebd84b241c02feb6d6132f5e5e6f5556031988cac87e4ccf. Mar 17 17:38:29.812045 containerd[1966]: time="2025-03-17T17:38:29.811987675Z" level=info msg="StartContainer for \"500425624f9a1f05ebd84b241c02feb6d6132f5e5e6f5556031988cac87e4ccf\" returns successfully" Mar 17 17:38:29.834551 systemd[1]: cri-containerd-500425624f9a1f05ebd84b241c02feb6d6132f5e5e6f5556031988cac87e4ccf.scope: Deactivated successfully. 
Mar 17 17:38:30.496462 containerd[1966]: time="2025-03-17T17:38:30.496025436Z" level=info msg="shim disconnected" id=500425624f9a1f05ebd84b241c02feb6d6132f5e5e6f5556031988cac87e4ccf namespace=k8s.io
Mar 17 17:38:30.496724 containerd[1966]: time="2025-03-17T17:38:30.496459069Z" level=warning msg="cleaning up after shim disconnected" id=500425624f9a1f05ebd84b241c02feb6d6132f5e5e6f5556031988cac87e4ccf namespace=k8s.io
Mar 17 17:38:30.496724 containerd[1966]: time="2025-03-17T17:38:30.496503851Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:38:30.577566 containerd[1966]: time="2025-03-17T17:38:30.577510471Z" level=info msg="CreateContainer within sandbox \"b6762dce8a89d456ee16d87bfdc1097c37f0b0a2fa0e6070a4ea8ae0129ba33f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 17 17:38:30.609384 containerd[1966]: time="2025-03-17T17:38:30.607715010Z" level=info msg="CreateContainer within sandbox \"b6762dce8a89d456ee16d87bfdc1097c37f0b0a2fa0e6070a4ea8ae0129ba33f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9c79abc7c8b9cdb172fe74856924a14a74235c26242f0f41fb249528b1cbebc9\""
Mar 17 17:38:30.609932 containerd[1966]: time="2025-03-17T17:38:30.609859017Z" level=info msg="StartContainer for \"9c79abc7c8b9cdb172fe74856924a14a74235c26242f0f41fb249528b1cbebc9\""
Mar 17 17:38:30.612908 kubelet[3225]: I0317 17:38:30.612380 3225 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-ldfgc" podStartSLOduration=22.264732675 podStartE2EDuration="24.612356781s" podCreationTimestamp="2025-03-17 17:38:06 +0000 UTC" firstStartedPulling="2025-03-17 17:38:07.229184987 +0000 UTC m=+5.083726225" lastFinishedPulling="2025-03-17 17:38:09.576809093 +0000 UTC m=+7.431350331" observedRunningTime="2025-03-17 17:38:10.619524774 +0000 UTC m=+8.474066048" watchObservedRunningTime="2025-03-17 17:38:30.612356781 +0000 UTC m=+28.466898019"
Mar 17 17:38:30.659815 systemd[1]: Started cri-containerd-9c79abc7c8b9cdb172fe74856924a14a74235c26242f0f41fb249528b1cbebc9.scope - libcontainer container 9c79abc7c8b9cdb172fe74856924a14a74235c26242f0f41fb249528b1cbebc9.
Mar 17 17:38:30.688692 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-500425624f9a1f05ebd84b241c02feb6d6132f5e5e6f5556031988cac87e4ccf-rootfs.mount: Deactivated successfully.
Mar 17 17:38:30.713148 containerd[1966]: time="2025-03-17T17:38:30.713069437Z" level=info msg="StartContainer for \"9c79abc7c8b9cdb172fe74856924a14a74235c26242f0f41fb249528b1cbebc9\" returns successfully"
Mar 17 17:38:30.736656 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 17 17:38:30.737816 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:38:30.738401 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:38:30.752439 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:38:30.757883 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 17 17:38:30.758825 systemd[1]: cri-containerd-9c79abc7c8b9cdb172fe74856924a14a74235c26242f0f41fb249528b1cbebc9.scope: Deactivated successfully.
Mar 17 17:38:30.802632 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c79abc7c8b9cdb172fe74856924a14a74235c26242f0f41fb249528b1cbebc9-rootfs.mount: Deactivated successfully.
Mar 17 17:38:30.808376 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:38:30.816143 containerd[1966]: time="2025-03-17T17:38:30.816027383Z" level=info msg="shim disconnected" id=9c79abc7c8b9cdb172fe74856924a14a74235c26242f0f41fb249528b1cbebc9 namespace=k8s.io
Mar 17 17:38:30.816143 containerd[1966]: time="2025-03-17T17:38:30.816126865Z" level=warning msg="cleaning up after shim disconnected" id=9c79abc7c8b9cdb172fe74856924a14a74235c26242f0f41fb249528b1cbebc9 namespace=k8s.io
Mar 17 17:38:30.816143 containerd[1966]: time="2025-03-17T17:38:30.816148404Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:38:31.578936 containerd[1966]: time="2025-03-17T17:38:31.578714397Z" level=info msg="CreateContainer within sandbox \"b6762dce8a89d456ee16d87bfdc1097c37f0b0a2fa0e6070a4ea8ae0129ba33f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 17 17:38:31.606613 containerd[1966]: time="2025-03-17T17:38:31.606483519Z" level=info msg="CreateContainer within sandbox \"b6762dce8a89d456ee16d87bfdc1097c37f0b0a2fa0e6070a4ea8ae0129ba33f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7a489adda3c3d81933b462d032fd5bd9e0ae051098eab7e74d550dc1abdf7c6f\""
Mar 17 17:38:31.611087 containerd[1966]: time="2025-03-17T17:38:31.609128537Z" level=info msg="StartContainer for \"7a489adda3c3d81933b462d032fd5bd9e0ae051098eab7e74d550dc1abdf7c6f\""
Mar 17 17:38:31.663935 systemd[1]: Started cri-containerd-7a489adda3c3d81933b462d032fd5bd9e0ae051098eab7e74d550dc1abdf7c6f.scope - libcontainer container 7a489adda3c3d81933b462d032fd5bd9e0ae051098eab7e74d550dc1abdf7c6f.
Mar 17 17:38:31.727313 containerd[1966]: time="2025-03-17T17:38:31.727221967Z" level=info msg="StartContainer for \"7a489adda3c3d81933b462d032fd5bd9e0ae051098eab7e74d550dc1abdf7c6f\" returns successfully"
Mar 17 17:38:31.732137 systemd[1]: cri-containerd-7a489adda3c3d81933b462d032fd5bd9e0ae051098eab7e74d550dc1abdf7c6f.scope: Deactivated successfully.
Mar 17 17:38:31.775594 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a489adda3c3d81933b462d032fd5bd9e0ae051098eab7e74d550dc1abdf7c6f-rootfs.mount: Deactivated successfully.
Mar 17 17:38:31.778850 containerd[1966]: time="2025-03-17T17:38:31.778511059Z" level=info msg="shim disconnected" id=7a489adda3c3d81933b462d032fd5bd9e0ae051098eab7e74d550dc1abdf7c6f namespace=k8s.io
Mar 17 17:38:31.778850 containerd[1966]: time="2025-03-17T17:38:31.778587549Z" level=warning msg="cleaning up after shim disconnected" id=7a489adda3c3d81933b462d032fd5bd9e0ae051098eab7e74d550dc1abdf7c6f namespace=k8s.io
Mar 17 17:38:31.778850 containerd[1966]: time="2025-03-17T17:38:31.778610445Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:38:32.590742 containerd[1966]: time="2025-03-17T17:38:32.590510462Z" level=info msg="CreateContainer within sandbox \"b6762dce8a89d456ee16d87bfdc1097c37f0b0a2fa0e6070a4ea8ae0129ba33f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 17 17:38:32.618852 containerd[1966]: time="2025-03-17T17:38:32.618673044Z" level=info msg="CreateContainer within sandbox \"b6762dce8a89d456ee16d87bfdc1097c37f0b0a2fa0e6070a4ea8ae0129ba33f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c4b82d1a578ea5d89a88b5ce5037f3bc898c6801443cdb3b4b1b9e1726f75b23\""
Mar 17 17:38:32.625226 containerd[1966]: time="2025-03-17T17:38:32.621515897Z" level=info msg="StartContainer for \"c4b82d1a578ea5d89a88b5ce5037f3bc898c6801443cdb3b4b1b9e1726f75b23\""
Mar 17 17:38:32.674578 systemd[1]: Started cri-containerd-c4b82d1a578ea5d89a88b5ce5037f3bc898c6801443cdb3b4b1b9e1726f75b23.scope - libcontainer container c4b82d1a578ea5d89a88b5ce5037f3bc898c6801443cdb3b4b1b9e1726f75b23.
Mar 17 17:38:32.729416 systemd[1]: cri-containerd-c4b82d1a578ea5d89a88b5ce5037f3bc898c6801443cdb3b4b1b9e1726f75b23.scope: Deactivated successfully.
Mar 17 17:38:32.737256 containerd[1966]: time="2025-03-17T17:38:32.737187477Z" level=info msg="StartContainer for \"c4b82d1a578ea5d89a88b5ce5037f3bc898c6801443cdb3b4b1b9e1726f75b23\" returns successfully"
Mar 17 17:38:32.780143 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c4b82d1a578ea5d89a88b5ce5037f3bc898c6801443cdb3b4b1b9e1726f75b23-rootfs.mount: Deactivated successfully.
Mar 17 17:38:32.786687 containerd[1966]: time="2025-03-17T17:38:32.786609131Z" level=info msg="shim disconnected" id=c4b82d1a578ea5d89a88b5ce5037f3bc898c6801443cdb3b4b1b9e1726f75b23 namespace=k8s.io
Mar 17 17:38:32.786687 containerd[1966]: time="2025-03-17T17:38:32.786684781Z" level=warning msg="cleaning up after shim disconnected" id=c4b82d1a578ea5d89a88b5ce5037f3bc898c6801443cdb3b4b1b9e1726f75b23 namespace=k8s.io
Mar 17 17:38:32.787046 containerd[1966]: time="2025-03-17T17:38:32.786705251Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:38:33.594547 containerd[1966]: time="2025-03-17T17:38:33.593801908Z" level=info msg="CreateContainer within sandbox \"b6762dce8a89d456ee16d87bfdc1097c37f0b0a2fa0e6070a4ea8ae0129ba33f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 17 17:38:33.620475 containerd[1966]: time="2025-03-17T17:38:33.619741327Z" level=info msg="CreateContainer within sandbox \"b6762dce8a89d456ee16d87bfdc1097c37f0b0a2fa0e6070a4ea8ae0129ba33f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4dd048a3751cb597b97901d59b2654f9300267753df80bd2603637277e71812b\""
Mar 17 17:38:33.626655 containerd[1966]: time="2025-03-17T17:38:33.622210853Z" level=info msg="StartContainer for \"4dd048a3751cb597b97901d59b2654f9300267753df80bd2603637277e71812b\""
Mar 17 17:38:33.694564 systemd[1]: Started cri-containerd-4dd048a3751cb597b97901d59b2654f9300267753df80bd2603637277e71812b.scope - libcontainer container 4dd048a3751cb597b97901d59b2654f9300267753df80bd2603637277e71812b.
Mar 17 17:38:33.748738 containerd[1966]: time="2025-03-17T17:38:33.748556764Z" level=info msg="StartContainer for \"4dd048a3751cb597b97901d59b2654f9300267753df80bd2603637277e71812b\" returns successfully"
Mar 17 17:38:33.974403 kubelet[3225]: I0317 17:38:33.973080 3225 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
Mar 17 17:38:34.038653 systemd[1]: Created slice kubepods-burstable-pod5554ff47_5a94_4f15_be53_7f1a82a56d8d.slice - libcontainer container kubepods-burstable-pod5554ff47_5a94_4f15_be53_7f1a82a56d8d.slice.
Mar 17 17:38:34.059779 systemd[1]: Created slice kubepods-burstable-pod923ceb12_c7d5_4226_8fde_d2a87e414406.slice - libcontainer container kubepods-burstable-pod923ceb12_c7d5_4226_8fde_d2a87e414406.slice.
Mar 17 17:38:34.150164 kubelet[3225]: I0317 17:38:34.149801 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5554ff47-5a94-4f15-be53-7f1a82a56d8d-config-volume\") pod \"coredns-668d6bf9bc-rz2mb\" (UID: \"5554ff47-5a94-4f15-be53-7f1a82a56d8d\") " pod="kube-system/coredns-668d6bf9bc-rz2mb"
Mar 17 17:38:34.150164 kubelet[3225]: I0317 17:38:34.149868 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pgp6\" (UniqueName: \"kubernetes.io/projected/923ceb12-c7d5-4226-8fde-d2a87e414406-kube-api-access-7pgp6\") pod \"coredns-668d6bf9bc-kq5x4\" (UID: \"923ceb12-c7d5-4226-8fde-d2a87e414406\") " pod="kube-system/coredns-668d6bf9bc-kq5x4"
Mar 17 17:38:34.150164 kubelet[3225]: I0317 17:38:34.149910 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/923ceb12-c7d5-4226-8fde-d2a87e414406-config-volume\") pod \"coredns-668d6bf9bc-kq5x4\" (UID: \"923ceb12-c7d5-4226-8fde-d2a87e414406\") " pod="kube-system/coredns-668d6bf9bc-kq5x4"
Mar 17 17:38:34.150164 kubelet[3225]: I0317 17:38:34.149956 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v84bx\" (UniqueName: \"kubernetes.io/projected/5554ff47-5a94-4f15-be53-7f1a82a56d8d-kube-api-access-v84bx\") pod \"coredns-668d6bf9bc-rz2mb\" (UID: \"5554ff47-5a94-4f15-be53-7f1a82a56d8d\") " pod="kube-system/coredns-668d6bf9bc-rz2mb"
Mar 17 17:38:34.354204 containerd[1966]: time="2025-03-17T17:38:34.354126045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rz2mb,Uid:5554ff47-5a94-4f15-be53-7f1a82a56d8d,Namespace:kube-system,Attempt:0,}"
Mar 17 17:38:34.385406 containerd[1966]: time="2025-03-17T17:38:34.385325137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kq5x4,Uid:923ceb12-c7d5-4226-8fde-d2a87e414406,Namespace:kube-system,Attempt:0,}"
Mar 17 17:38:36.759192 (udev-worker)[4205]: Network interface NamePolicy= disabled on kernel command line.
Mar 17 17:38:36.761807 (udev-worker)[4204]: Network interface NamePolicy= disabled on kernel command line.
Mar 17 17:38:36.761997 systemd-networkd[1861]: cilium_host: Link UP
Mar 17 17:38:36.762372 systemd-networkd[1861]: cilium_net: Link UP
Mar 17 17:38:36.764184 systemd-networkd[1861]: cilium_net: Gained carrier
Mar 17 17:38:36.765875 systemd-networkd[1861]: cilium_host: Gained carrier
Mar 17 17:38:36.940710 systemd-networkd[1861]: cilium_vxlan: Link UP
Mar 17 17:38:36.940729 systemd-networkd[1861]: cilium_vxlan: Gained carrier
Mar 17 17:38:36.988263 systemd-networkd[1861]: cilium_net: Gained IPv6LL
Mar 17 17:38:37.116003 systemd-networkd[1861]: cilium_host: Gained IPv6LL
Mar 17 17:38:37.433283 kernel: NET: Registered PF_ALG protocol family
Mar 17 17:38:38.763123 systemd-networkd[1861]: lxc_health: Link UP
Mar 17 17:38:38.776135 systemd-networkd[1861]: lxc_health: Gained carrier
Mar 17 17:38:38.802492 systemd-networkd[1861]: cilium_vxlan: Gained IPv6LL
Mar 17 17:38:39.036343 kubelet[3225]: I0317 17:38:39.035679 3225 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kw79c" podStartSLOduration=11.599128471 podStartE2EDuration="34.035654772s" podCreationTimestamp="2025-03-17 17:38:05 +0000 UTC" firstStartedPulling="2025-03-17 17:38:07.230144867 +0000 UTC m=+5.084686105" lastFinishedPulling="2025-03-17 17:38:29.666671168 +0000 UTC m=+27.521212406" observedRunningTime="2025-03-17 17:38:34.65661849 +0000 UTC m=+32.511159752" watchObservedRunningTime="2025-03-17 17:38:39.035654772 +0000 UTC m=+36.890195998"
Mar 17 17:38:39.473313 kernel: eth0: renamed from tmp7d81e
Mar 17 17:38:39.486021 systemd-networkd[1861]: lxcc9a38e4524de: Link UP
Mar 17 17:38:39.486944 systemd-networkd[1861]: lxcc9a38e4524de: Gained carrier
Mar 17 17:38:39.514162 (udev-worker)[4574]: Network interface NamePolicy= disabled on kernel command line.
Mar 17 17:38:39.521068 systemd-networkd[1861]: lxc03d68877607b: Link UP
Mar 17 17:38:39.539291 kernel: eth0: renamed from tmpd3dd2
Mar 17 17:38:39.560513 systemd-networkd[1861]: lxc03d68877607b: Gained carrier
Mar 17 17:38:40.466996 systemd-networkd[1861]: lxc_health: Gained IPv6LL
Mar 17 17:38:40.914913 systemd-networkd[1861]: lxcc9a38e4524de: Gained IPv6LL
Mar 17 17:38:41.298521 systemd-networkd[1861]: lxc03d68877607b: Gained IPv6LL
Mar 17 17:38:44.040292 ntpd[1928]: Listen normally on 8 cilium_host 192.168.0.30:123
Mar 17 17:38:44.040831 ntpd[1928]: 17 Mar 17:38:44 ntpd[1928]: Listen normally on 8 cilium_host 192.168.0.30:123
Mar 17 17:38:44.040831 ntpd[1928]: 17 Mar 17:38:44 ntpd[1928]: Listen normally on 9 cilium_net [fe80::6c9a:3dff:fe89:4628%4]:123
Mar 17 17:38:44.040831 ntpd[1928]: 17 Mar 17:38:44 ntpd[1928]: Listen normally on 10 cilium_host [fe80::88ee:dcff:fea1:aa19%5]:123
Mar 17 17:38:44.040831 ntpd[1928]: 17 Mar 17:38:44 ntpd[1928]: Listen normally on 11 cilium_vxlan [fe80::827:d1ff:fe0d:451c%6]:123
Mar 17 17:38:44.040831 ntpd[1928]: 17 Mar 17:38:44 ntpd[1928]: Listen normally on 12 lxc_health [fe80::cbd:a1ff:fe33:67b3%8]:123
Mar 17 17:38:44.040831 ntpd[1928]: 17 Mar 17:38:44 ntpd[1928]: Listen normally on 13 lxcc9a38e4524de [fe80::406a:c2ff:fe3b:f57b%10]:123
Mar 17 17:38:44.040831 ntpd[1928]: 17 Mar 17:38:44 ntpd[1928]: Listen normally on 14 lxc03d68877607b [fe80::c46:96ff:fe24:7965%12]:123
Mar 17 17:38:44.040443 ntpd[1928]: Listen normally on 9 cilium_net [fe80::6c9a:3dff:fe89:4628%4]:123
Mar 17 17:38:44.040527 ntpd[1928]: Listen normally on 10 cilium_host [fe80::88ee:dcff:fea1:aa19%5]:123
Mar 17 17:38:44.040596 ntpd[1928]: Listen normally on 11 cilium_vxlan [fe80::827:d1ff:fe0d:451c%6]:123
Mar 17 17:38:44.040663 ntpd[1928]: Listen normally on 12 lxc_health [fe80::cbd:a1ff:fe33:67b3%8]:123
Mar 17 17:38:44.040728 ntpd[1928]: Listen normally on 13 lxcc9a38e4524de [fe80::406a:c2ff:fe3b:f57b%10]:123
Mar 17 17:38:44.040796 ntpd[1928]: Listen normally on 14 lxc03d68877607b [fe80::c46:96ff:fe24:7965%12]:123
Mar 17 17:38:47.674679 containerd[1966]: time="2025-03-17T17:38:47.673704459Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:38:47.674679 containerd[1966]: time="2025-03-17T17:38:47.673823042Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:38:47.674679 containerd[1966]: time="2025-03-17T17:38:47.673860981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:38:47.678629 containerd[1966]: time="2025-03-17T17:38:47.678219615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:38:47.735881 systemd[1]: run-containerd-runc-k8s.io-d3dd23a3c5d12c894cc1dbc18870e601c8473fae812ade3fcfca498ac5294cf2-runc.S2aw75.mount: Deactivated successfully.
Mar 17 17:38:47.753545 systemd[1]: Started cri-containerd-d3dd23a3c5d12c894cc1dbc18870e601c8473fae812ade3fcfca498ac5294cf2.scope - libcontainer container d3dd23a3c5d12c894cc1dbc18870e601c8473fae812ade3fcfca498ac5294cf2.
Mar 17 17:38:47.780829 containerd[1966]: time="2025-03-17T17:38:47.779783965Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:38:47.780829 containerd[1966]: time="2025-03-17T17:38:47.779896185Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:38:47.780829 containerd[1966]: time="2025-03-17T17:38:47.779948771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:38:47.784320 containerd[1966]: time="2025-03-17T17:38:47.783311580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:38:47.845612 systemd[1]: Started cri-containerd-7d81eab850a7f4603eb3e97b1d844fc678a964f93c81eb0cb0c8d82c6d353921.scope - libcontainer container 7d81eab850a7f4603eb3e97b1d844fc678a964f93c81eb0cb0c8d82c6d353921.
Mar 17 17:38:47.903951 containerd[1966]: time="2025-03-17T17:38:47.903886890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kq5x4,Uid:923ceb12-c7d5-4226-8fde-d2a87e414406,Namespace:kube-system,Attempt:0,} returns sandbox id \"d3dd23a3c5d12c894cc1dbc18870e601c8473fae812ade3fcfca498ac5294cf2\""
Mar 17 17:38:47.915571 containerd[1966]: time="2025-03-17T17:38:47.915390571Z" level=info msg="CreateContainer within sandbox \"d3dd23a3c5d12c894cc1dbc18870e601c8473fae812ade3fcfca498ac5294cf2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 17 17:38:47.944990 containerd[1966]: time="2025-03-17T17:38:47.944605131Z" level=info msg="CreateContainer within sandbox \"d3dd23a3c5d12c894cc1dbc18870e601c8473fae812ade3fcfca498ac5294cf2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c380ed96c3c17467b070c417f5b8a504f1ae99e6a36b4c764b45b18433edc19d\""
Mar 17 17:38:47.946885 containerd[1966]: time="2025-03-17T17:38:47.946513965Z" level=info msg="StartContainer for \"c380ed96c3c17467b070c417f5b8a504f1ae99e6a36b4c764b45b18433edc19d\""
Mar 17 17:38:47.994941 containerd[1966]: time="2025-03-17T17:38:47.994689914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rz2mb,Uid:5554ff47-5a94-4f15-be53-7f1a82a56d8d,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d81eab850a7f4603eb3e97b1d844fc678a964f93c81eb0cb0c8d82c6d353921\""
Mar 17 17:38:48.013264 containerd[1966]: time="2025-03-17T17:38:48.012705505Z" level=info msg="CreateContainer within sandbox \"7d81eab850a7f4603eb3e97b1d844fc678a964f93c81eb0cb0c8d82c6d353921\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 17 17:38:48.045526 systemd[1]: Started cri-containerd-c380ed96c3c17467b070c417f5b8a504f1ae99e6a36b4c764b45b18433edc19d.scope - libcontainer container c380ed96c3c17467b070c417f5b8a504f1ae99e6a36b4c764b45b18433edc19d.
Mar 17 17:38:48.050007 containerd[1966]: time="2025-03-17T17:38:48.049868421Z" level=info msg="CreateContainer within sandbox \"7d81eab850a7f4603eb3e97b1d844fc678a964f93c81eb0cb0c8d82c6d353921\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"19bb0ddaa5daf6dcf29e548eab209040f69aeae0f887d45a3b4beb6464dfb05e\""
Mar 17 17:38:48.052962 containerd[1966]: time="2025-03-17T17:38:48.052793935Z" level=info msg="StartContainer for \"19bb0ddaa5daf6dcf29e548eab209040f69aeae0f887d45a3b4beb6464dfb05e\""
Mar 17 17:38:48.114605 systemd[1]: Started cri-containerd-19bb0ddaa5daf6dcf29e548eab209040f69aeae0f887d45a3b4beb6464dfb05e.scope - libcontainer container 19bb0ddaa5daf6dcf29e548eab209040f69aeae0f887d45a3b4beb6464dfb05e.
Mar 17 17:38:48.169341 containerd[1966]: time="2025-03-17T17:38:48.167480543Z" level=info msg="StartContainer for \"c380ed96c3c17467b070c417f5b8a504f1ae99e6a36b4c764b45b18433edc19d\" returns successfully"
Mar 17 17:38:48.219887 containerd[1966]: time="2025-03-17T17:38:48.219581096Z" level=info msg="StartContainer for \"19bb0ddaa5daf6dcf29e548eab209040f69aeae0f887d45a3b4beb6464dfb05e\" returns successfully"
Mar 17 17:38:48.705109 kubelet[3225]: I0317 17:38:48.704999 3225 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-rz2mb" podStartSLOduration=42.70497656 podStartE2EDuration="42.70497656s" podCreationTimestamp="2025-03-17 17:38:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:38:48.672819522 +0000 UTC m=+46.527360772" watchObservedRunningTime="2025-03-17 17:38:48.70497656 +0000 UTC m=+46.559517798"
Mar 17 17:38:48.727620 kubelet[3225]: I0317 17:38:48.726079 3225 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-kq5x4" podStartSLOduration=42.726056239 podStartE2EDuration="42.726056239s" podCreationTimestamp="2025-03-17 17:38:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:38:48.724667025 +0000 UTC m=+46.579208275" watchObservedRunningTime="2025-03-17 17:38:48.726056239 +0000 UTC m=+46.580597477"
Mar 17 17:38:49.400752 systemd[1]: Started sshd@7-172.31.25.124:22-147.75.109.163:41428.service - OpenSSH per-connection server daemon (147.75.109.163:41428).
Mar 17 17:38:49.586988 sshd[4783]: Accepted publickey for core from 147.75.109.163 port 41428 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4
Mar 17 17:38:49.589561 sshd-session[4783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:38:49.598796 systemd-logind[1935]: New session 8 of user core.
Mar 17 17:38:49.602539 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 17 17:38:49.882892 sshd[4785]: Connection closed by 147.75.109.163 port 41428
Mar 17 17:38:49.883407 sshd-session[4783]: pam_unix(sshd:session): session closed for user core
Mar 17 17:38:49.890287 systemd[1]: sshd@7-172.31.25.124:22-147.75.109.163:41428.service: Deactivated successfully.
Mar 17 17:38:49.894172 systemd[1]: session-8.scope: Deactivated successfully.
Mar 17 17:38:49.896587 systemd-logind[1935]: Session 8 logged out. Waiting for processes to exit.
Mar 17 17:38:49.898532 systemd-logind[1935]: Removed session 8.
Mar 17 17:38:54.927776 systemd[1]: Started sshd@8-172.31.25.124:22-147.75.109.163:44306.service - OpenSSH per-connection server daemon (147.75.109.163:44306).
Mar 17 17:38:55.116424 sshd[4803]: Accepted publickey for core from 147.75.109.163 port 44306 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4
Mar 17 17:38:55.119027 sshd-session[4803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:38:55.127175 systemd-logind[1935]: New session 9 of user core.
Mar 17 17:38:55.133497 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 17 17:38:55.380080 sshd[4805]: Connection closed by 147.75.109.163 port 44306
Mar 17 17:38:55.380964 sshd-session[4803]: pam_unix(sshd:session): session closed for user core
Mar 17 17:38:55.386862 systemd[1]: sshd@8-172.31.25.124:22-147.75.109.163:44306.service: Deactivated successfully.
Mar 17 17:38:55.393117 systemd[1]: session-9.scope: Deactivated successfully.
Mar 17 17:38:55.394823 systemd-logind[1935]: Session 9 logged out. Waiting for processes to exit.
Mar 17 17:38:55.396587 systemd-logind[1935]: Removed session 9.
Mar 17 17:39:00.423773 systemd[1]: Started sshd@9-172.31.25.124:22-147.75.109.163:44308.service - OpenSSH per-connection server daemon (147.75.109.163:44308).
Mar 17 17:39:00.615394 sshd[4818]: Accepted publickey for core from 147.75.109.163 port 44308 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4
Mar 17 17:39:00.618627 sshd-session[4818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:39:00.627643 systemd-logind[1935]: New session 10 of user core.
Mar 17 17:39:00.634532 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 17 17:39:00.885692 sshd[4820]: Connection closed by 147.75.109.163 port 44308
Mar 17 17:39:00.885346 sshd-session[4818]: pam_unix(sshd:session): session closed for user core
Mar 17 17:39:00.891689 systemd[1]: sshd@9-172.31.25.124:22-147.75.109.163:44308.service: Deactivated successfully.
Mar 17 17:39:00.896029 systemd[1]: session-10.scope: Deactivated successfully.
Mar 17 17:39:00.900398 systemd-logind[1935]: Session 10 logged out. Waiting for processes to exit.
Mar 17 17:39:00.902600 systemd-logind[1935]: Removed session 10.
Mar 17 17:39:05.929775 systemd[1]: Started sshd@10-172.31.25.124:22-147.75.109.163:55514.service - OpenSSH per-connection server daemon (147.75.109.163:55514).
Mar 17 17:39:06.125339 sshd[4835]: Accepted publickey for core from 147.75.109.163 port 55514 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4
Mar 17 17:39:06.127821 sshd-session[4835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:39:06.135712 systemd-logind[1935]: New session 11 of user core.
Mar 17 17:39:06.145500 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 17 17:39:06.406359 sshd[4837]: Connection closed by 147.75.109.163 port 55514
Mar 17 17:39:06.407273 sshd-session[4835]: pam_unix(sshd:session): session closed for user core
Mar 17 17:39:06.414756 systemd[1]: sshd@10-172.31.25.124:22-147.75.109.163:55514.service: Deactivated successfully.
Mar 17 17:39:06.418983 systemd[1]: session-11.scope: Deactivated successfully.
Mar 17 17:39:06.421388 systemd-logind[1935]: Session 11 logged out. Waiting for processes to exit.
Mar 17 17:39:06.423957 systemd-logind[1935]: Removed session 11.
Mar 17 17:39:11.447874 systemd[1]: Started sshd@11-172.31.25.124:22-147.75.109.163:55516.service - OpenSSH per-connection server daemon (147.75.109.163:55516).
Mar 17 17:39:11.642550 sshd[4855]: Accepted publickey for core from 147.75.109.163 port 55516 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4
Mar 17 17:39:11.645014 sshd-session[4855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:39:11.653924 systemd-logind[1935]: New session 12 of user core.
Mar 17 17:39:11.659515 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 17 17:39:11.903971 sshd[4857]: Connection closed by 147.75.109.163 port 55516
Mar 17 17:39:11.905269 sshd-session[4855]: pam_unix(sshd:session): session closed for user core
Mar 17 17:39:11.911029 systemd[1]: sshd@11-172.31.25.124:22-147.75.109.163:55516.service: Deactivated successfully.
Mar 17 17:39:11.915073 systemd[1]: session-12.scope: Deactivated successfully.
Mar 17 17:39:11.919720 systemd-logind[1935]: Session 12 logged out. Waiting for processes to exit.
Mar 17 17:39:11.921631 systemd-logind[1935]: Removed session 12.
Mar 17 17:39:11.946738 systemd[1]: Started sshd@12-172.31.25.124:22-147.75.109.163:55526.service - OpenSSH per-connection server daemon (147.75.109.163:55526).
Mar 17 17:39:12.128297 sshd[4870]: Accepted publickey for core from 147.75.109.163 port 55526 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4
Mar 17 17:39:12.130750 sshd-session[4870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:39:12.140468 systemd-logind[1935]: New session 13 of user core.
Mar 17 17:39:12.148511 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 17 17:39:12.478083 sshd[4872]: Connection closed by 147.75.109.163 port 55526
Mar 17 17:39:12.478700 sshd-session[4870]: pam_unix(sshd:session): session closed for user core
Mar 17 17:39:12.490785 systemd[1]: sshd@12-172.31.25.124:22-147.75.109.163:55526.service: Deactivated successfully.
Mar 17 17:39:12.502100 systemd[1]: session-13.scope: Deactivated successfully.
Mar 17 17:39:12.504713 systemd-logind[1935]: Session 13 logged out. Waiting for processes to exit.
Mar 17 17:39:12.540548 systemd[1]: Started sshd@13-172.31.25.124:22-147.75.109.163:55536.service - OpenSSH per-connection server daemon (147.75.109.163:55536).
Mar 17 17:39:12.542674 systemd-logind[1935]: Removed session 13.
Mar 17 17:39:12.723380 sshd[4881]: Accepted publickey for core from 147.75.109.163 port 55536 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4
Mar 17 17:39:12.725589 sshd-session[4881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:39:12.733300 systemd-logind[1935]: New session 14 of user core.
Mar 17 17:39:12.744588 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 17 17:39:13.001377 sshd[4884]: Connection closed by 147.75.109.163 port 55536
Mar 17 17:39:13.002207 sshd-session[4881]: pam_unix(sshd:session): session closed for user core
Mar 17 17:39:13.008860 systemd[1]: sshd@13-172.31.25.124:22-147.75.109.163:55536.service: Deactivated successfully.
Mar 17 17:39:13.012558 systemd[1]: session-14.scope: Deactivated successfully.
Mar 17 17:39:13.014989 systemd-logind[1935]: Session 14 logged out. Waiting for processes to exit.
Mar 17 17:39:13.016876 systemd-logind[1935]: Removed session 14.
Mar 17 17:39:18.045776 systemd[1]: Started sshd@14-172.31.25.124:22-147.75.109.163:36800.service - OpenSSH per-connection server daemon (147.75.109.163:36800).
Mar 17 17:39:18.238293 sshd[4897]: Accepted publickey for core from 147.75.109.163 port 36800 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4
Mar 17 17:39:18.240718 sshd-session[4897]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:39:18.249913 systemd-logind[1935]: New session 15 of user core.
Mar 17 17:39:18.259557 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 17 17:39:18.505383 sshd[4899]: Connection closed by 147.75.109.163 port 36800
Mar 17 17:39:18.504290 sshd-session[4897]: pam_unix(sshd:session): session closed for user core
Mar 17 17:39:18.511152 systemd[1]: sshd@14-172.31.25.124:22-147.75.109.163:36800.service: Deactivated successfully.
Mar 17 17:39:18.511740 systemd-logind[1935]: Session 15 logged out. Waiting for processes to exit.
Mar 17 17:39:18.516043 systemd[1]: session-15.scope: Deactivated successfully.
Mar 17 17:39:18.521612 systemd-logind[1935]: Removed session 15.
Mar 17 17:39:23.547758 systemd[1]: Started sshd@15-172.31.25.124:22-147.75.109.163:36804.service - OpenSSH per-connection server daemon (147.75.109.163:36804).
Mar 17 17:39:23.734283 sshd[4912]: Accepted publickey for core from 147.75.109.163 port 36804 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4
Mar 17 17:39:23.736707 sshd-session[4912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:39:23.754365 systemd-logind[1935]: New session 16 of user core.
Mar 17 17:39:23.759537 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 17 17:39:24.006398 sshd[4914]: Connection closed by 147.75.109.163 port 36804
Mar 17 17:39:24.007461 sshd-session[4912]: pam_unix(sshd:session): session closed for user core
Mar 17 17:39:24.014014 systemd[1]: sshd@15-172.31.25.124:22-147.75.109.163:36804.service: Deactivated successfully.
Mar 17 17:39:24.018335 systemd[1]: session-16.scope: Deactivated successfully.
Mar 17 17:39:24.020702 systemd-logind[1935]: Session 16 logged out. Waiting for processes to exit.
Mar 17 17:39:24.022770 systemd-logind[1935]: Removed session 16.
Mar 17 17:39:29.051769 systemd[1]: Started sshd@16-172.31.25.124:22-147.75.109.163:40294.service - OpenSSH per-connection server daemon (147.75.109.163:40294).
Mar 17 17:39:29.230580 sshd[4926]: Accepted publickey for core from 147.75.109.163 port 40294 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4
Mar 17 17:39:29.233036 sshd-session[4926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:39:29.241193 systemd-logind[1935]: New session 17 of user core.
Mar 17 17:39:29.250533 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 17 17:39:29.490286 sshd[4928]: Connection closed by 147.75.109.163 port 40294
Mar 17 17:39:29.491669 sshd-session[4926]: pam_unix(sshd:session): session closed for user core
Mar 17 17:39:29.502006 systemd[1]: sshd@16-172.31.25.124:22-147.75.109.163:40294.service: Deactivated successfully.
Mar 17 17:39:29.507394 systemd[1]: session-17.scope: Deactivated successfully.
Mar 17 17:39:29.509533 systemd-logind[1935]: Session 17 logged out. Waiting for processes to exit.
Mar 17 17:39:29.511214 systemd-logind[1935]: Removed session 17.
Mar 17 17:39:34.532780 systemd[1]: Started sshd@17-172.31.25.124:22-147.75.109.163:44000.service - OpenSSH per-connection server daemon (147.75.109.163:44000).
Mar 17 17:39:34.723193 sshd[4939]: Accepted publickey for core from 147.75.109.163 port 44000 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4
Mar 17 17:39:34.725670 sshd-session[4939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:39:34.734087 systemd-logind[1935]: New session 18 of user core.
Mar 17 17:39:34.741565 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 17 17:39:34.985834 sshd[4941]: Connection closed by 147.75.109.163 port 44000
Mar 17 17:39:34.986731 sshd-session[4939]: pam_unix(sshd:session): session closed for user core
Mar 17 17:39:34.995360 systemd[1]: sshd@17-172.31.25.124:22-147.75.109.163:44000.service: Deactivated successfully.
Mar 17 17:39:34.999556 systemd[1]: session-18.scope: Deactivated successfully.
Mar 17 17:39:35.001476 systemd-logind[1935]: Session 18 logged out. Waiting for processes to exit.
Mar 17 17:39:35.003809 systemd-logind[1935]: Removed session 18.
Mar 17 17:39:35.032735 systemd[1]: Started sshd@18-172.31.25.124:22-147.75.109.163:44006.service - OpenSSH per-connection server daemon (147.75.109.163:44006).
Mar 17 17:39:35.217662 sshd[4953]: Accepted publickey for core from 147.75.109.163 port 44006 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4
Mar 17 17:39:35.219535 sshd-session[4953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:39:35.227374 systemd-logind[1935]: New session 19 of user core.
Mar 17 17:39:35.238522 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 17 17:39:35.533320 sshd[4955]: Connection closed by 147.75.109.163 port 44006
Mar 17 17:39:35.533634 sshd-session[4953]: pam_unix(sshd:session): session closed for user core
Mar 17 17:39:35.540900 systemd[1]: sshd@18-172.31.25.124:22-147.75.109.163:44006.service: Deactivated successfully.
Mar 17 17:39:35.545940 systemd[1]: session-19.scope: Deactivated successfully.
Mar 17 17:39:35.547973 systemd-logind[1935]: Session 19 logged out. Waiting for processes to exit.
Mar 17 17:39:35.549768 systemd-logind[1935]: Removed session 19.
Mar 17 17:39:35.574769 systemd[1]: Started sshd@19-172.31.25.124:22-147.75.109.163:44008.service - OpenSSH per-connection server daemon (147.75.109.163:44008).
Mar 17 17:39:35.755772 sshd[4965]: Accepted publickey for core from 147.75.109.163 port 44008 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4
Mar 17 17:39:35.758710 sshd-session[4965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:39:35.766530 systemd-logind[1935]: New session 20 of user core.
Mar 17 17:39:35.774541 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 17 17:39:37.090337 sshd[4967]: Connection closed by 147.75.109.163 port 44008
Mar 17 17:39:37.089506 sshd-session[4965]: pam_unix(sshd:session): session closed for user core
Mar 17 17:39:37.103449 systemd[1]: sshd@19-172.31.25.124:22-147.75.109.163:44008.service: Deactivated successfully.
Mar 17 17:39:37.112470 systemd[1]: session-20.scope: Deactivated successfully.
Mar 17 17:39:37.118574 systemd-logind[1935]: Session 20 logged out. Waiting for processes to exit.
Mar 17 17:39:37.143939 systemd[1]: Started sshd@20-172.31.25.124:22-147.75.109.163:44018.service - OpenSSH per-connection server daemon (147.75.109.163:44018).
Mar 17 17:39:37.145640 systemd-logind[1935]: Removed session 20.
Mar 17 17:39:37.331939 sshd[4983]: Accepted publickey for core from 147.75.109.163 port 44018 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4
Mar 17 17:39:37.334947 sshd-session[4983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:39:37.343026 systemd-logind[1935]: New session 21 of user core.
Mar 17 17:39:37.355536 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 17 17:39:37.843585 sshd[4986]: Connection closed by 147.75.109.163 port 44018
Mar 17 17:39:37.844068 sshd-session[4983]: pam_unix(sshd:session): session closed for user core
Mar 17 17:39:37.851773 systemd[1]: sshd@20-172.31.25.124:22-147.75.109.163:44018.service: Deactivated successfully.
Mar 17 17:39:37.856984 systemd[1]: session-21.scope: Deactivated successfully.
Mar 17 17:39:37.860645 systemd-logind[1935]: Session 21 logged out. Waiting for processes to exit.
Mar 17 17:39:37.863128 systemd-logind[1935]: Removed session 21.
Mar 17 17:39:37.888766 systemd[1]: Started sshd@21-172.31.25.124:22-147.75.109.163:44034.service - OpenSSH per-connection server daemon (147.75.109.163:44034).
Mar 17 17:39:38.075505 sshd[4998]: Accepted publickey for core from 147.75.109.163 port 44034 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4
Mar 17 17:39:38.078014 sshd-session[4998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:39:38.087959 systemd-logind[1935]: New session 22 of user core.
Mar 17 17:39:38.096526 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 17 17:39:38.345044 sshd[5000]: Connection closed by 147.75.109.163 port 44034
Mar 17 17:39:38.345925 sshd-session[4998]: pam_unix(sshd:session): session closed for user core
Mar 17 17:39:38.352269 systemd[1]: sshd@21-172.31.25.124:22-147.75.109.163:44034.service: Deactivated successfully.
Mar 17 17:39:38.357936 systemd[1]: session-22.scope: Deactivated successfully.
Mar 17 17:39:38.359959 systemd-logind[1935]: Session 22 logged out. Waiting for processes to exit.
Mar 17 17:39:38.362439 systemd-logind[1935]: Removed session 22.
Mar 17 17:39:43.395716 systemd[1]: Started sshd@22-172.31.25.124:22-147.75.109.163:44044.service - OpenSSH per-connection server daemon (147.75.109.163:44044).
Mar 17 17:39:43.576537 sshd[5012]: Accepted publickey for core from 147.75.109.163 port 44044 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4
Mar 17 17:39:43.579113 sshd-session[5012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:39:43.587968 systemd-logind[1935]: New session 23 of user core.
Mar 17 17:39:43.597725 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 17 17:39:43.843329 sshd[5014]: Connection closed by 147.75.109.163 port 44044
Mar 17 17:39:43.844145 sshd-session[5012]: pam_unix(sshd:session): session closed for user core
Mar 17 17:39:43.850544 systemd[1]: sshd@22-172.31.25.124:22-147.75.109.163:44044.service: Deactivated successfully.
Mar 17 17:39:43.854845 systemd[1]: session-23.scope: Deactivated successfully.
Mar 17 17:39:43.856773 systemd-logind[1935]: Session 23 logged out. Waiting for processes to exit.
Mar 17 17:39:43.859279 systemd-logind[1935]: Removed session 23.
Mar 17 17:39:48.892769 systemd[1]: Started sshd@23-172.31.25.124:22-147.75.109.163:47672.service - OpenSSH per-connection server daemon (147.75.109.163:47672).
Mar 17 17:39:49.103065 sshd[5028]: Accepted publickey for core from 147.75.109.163 port 47672 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4
Mar 17 17:39:49.105753 sshd-session[5028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:39:49.114109 systemd-logind[1935]: New session 24 of user core.
Mar 17 17:39:49.121525 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 17 17:39:49.375454 sshd[5030]: Connection closed by 147.75.109.163 port 47672
Mar 17 17:39:49.376506 sshd-session[5028]: pam_unix(sshd:session): session closed for user core
Mar 17 17:39:49.381640 systemd[1]: sshd@23-172.31.25.124:22-147.75.109.163:47672.service: Deactivated successfully.
Mar 17 17:39:49.385155 systemd[1]: session-24.scope: Deactivated successfully.
Mar 17 17:39:49.389679 systemd-logind[1935]: Session 24 logged out. Waiting for processes to exit.
Mar 17 17:39:49.392200 systemd-logind[1935]: Removed session 24.
Mar 17 17:39:54.416768 systemd[1]: Started sshd@24-172.31.25.124:22-147.75.109.163:38680.service - OpenSSH per-connection server daemon (147.75.109.163:38680).
Mar 17 17:39:54.611472 sshd[5042]: Accepted publickey for core from 147.75.109.163 port 38680 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4
Mar 17 17:39:54.614471 sshd-session[5042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:39:54.623920 systemd-logind[1935]: New session 25 of user core.
Mar 17 17:39:54.629496 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 17 17:39:54.884019 sshd[5044]: Connection closed by 147.75.109.163 port 38680
Mar 17 17:39:54.882968 sshd-session[5042]: pam_unix(sshd:session): session closed for user core
Mar 17 17:39:54.889780 systemd-logind[1935]: Session 25 logged out. Waiting for processes to exit.
Mar 17 17:39:54.891486 systemd[1]: sshd@24-172.31.25.124:22-147.75.109.163:38680.service: Deactivated successfully.
Mar 17 17:39:54.895825 systemd[1]: session-25.scope: Deactivated successfully.
Mar 17 17:39:54.898363 systemd-logind[1935]: Removed session 25.
Mar 17 17:39:59.934760 systemd[1]: Started sshd@25-172.31.25.124:22-147.75.109.163:38684.service - OpenSSH per-connection server daemon (147.75.109.163:38684).
Mar 17 17:40:00.123004 sshd[5056]: Accepted publickey for core from 147.75.109.163 port 38684 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4
Mar 17 17:40:00.125574 sshd-session[5056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:40:00.134347 systemd-logind[1935]: New session 26 of user core.
Mar 17 17:40:00.141534 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 17 17:40:00.384552 sshd[5058]: Connection closed by 147.75.109.163 port 38684
Mar 17 17:40:00.384428 sshd-session[5056]: pam_unix(sshd:session): session closed for user core
Mar 17 17:40:00.389828 systemd[1]: session-26.scope: Deactivated successfully.
Mar 17 17:40:00.391235 systemd[1]: sshd@25-172.31.25.124:22-147.75.109.163:38684.service: Deactivated successfully.
Mar 17 17:40:00.397875 systemd-logind[1935]: Session 26 logged out. Waiting for processes to exit.
Mar 17 17:40:00.399577 systemd-logind[1935]: Removed session 26.
Mar 17 17:40:00.427026 systemd[1]: Started sshd@26-172.31.25.124:22-147.75.109.163:38696.service - OpenSSH per-connection server daemon (147.75.109.163:38696).
Mar 17 17:40:00.616428 sshd[5070]: Accepted publickey for core from 147.75.109.163 port 38696 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4
Mar 17 17:40:00.619102 sshd-session[5070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:40:00.627344 systemd-logind[1935]: New session 27 of user core.
Mar 17 17:40:00.641545 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 17 17:40:04.252196 containerd[1966]: time="2025-03-17T17:40:04.252132003Z" level=info msg="StopContainer for \"1fd32e14491d3e530bed9c36f090cc261eedd4982693c617de53f4645abecb2d\" with timeout 30 (s)"
Mar 17 17:40:04.255330 containerd[1966]: time="2025-03-17T17:40:04.253604763Z" level=info msg="Stop container \"1fd32e14491d3e530bed9c36f090cc261eedd4982693c617de53f4645abecb2d\" with signal terminated"
Mar 17 17:40:04.279628 containerd[1966]: time="2025-03-17T17:40:04.279544755Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 17 17:40:04.307257 containerd[1966]: time="2025-03-17T17:40:04.307169247Z" level=info msg="StopContainer for \"4dd048a3751cb597b97901d59b2654f9300267753df80bd2603637277e71812b\" with timeout 2 (s)"
Mar 17 17:40:04.307979 containerd[1966]: time="2025-03-17T17:40:04.307699131Z" level=info msg="Stop container \"4dd048a3751cb597b97901d59b2654f9300267753df80bd2603637277e71812b\" with signal terminated"
Mar 17 17:40:04.321515 systemd[1]: cri-containerd-1fd32e14491d3e530bed9c36f090cc261eedd4982693c617de53f4645abecb2d.scope: Deactivated successfully.
Mar 17 17:40:04.341792 systemd-networkd[1861]: lxc_health: Link DOWN
Mar 17 17:40:04.341828 systemd-networkd[1861]: lxc_health: Lost carrier
Mar 17 17:40:04.377992 systemd[1]: cri-containerd-4dd048a3751cb597b97901d59b2654f9300267753df80bd2603637277e71812b.scope: Deactivated successfully.
Mar 17 17:40:04.378641 systemd[1]: cri-containerd-4dd048a3751cb597b97901d59b2654f9300267753df80bd2603637277e71812b.scope: Consumed 14.239s CPU time, 123.4M memory peak, 144K read from disk, 12.9M written to disk.
Mar 17 17:40:04.400928 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1fd32e14491d3e530bed9c36f090cc261eedd4982693c617de53f4645abecb2d-rootfs.mount: Deactivated successfully.
Mar 17 17:40:04.413909 containerd[1966]: time="2025-03-17T17:40:04.413519740Z" level=info msg="shim disconnected" id=1fd32e14491d3e530bed9c36f090cc261eedd4982693c617de53f4645abecb2d namespace=k8s.io
Mar 17 17:40:04.413909 containerd[1966]: time="2025-03-17T17:40:04.413836372Z" level=warning msg="cleaning up after shim disconnected" id=1fd32e14491d3e530bed9c36f090cc261eedd4982693c617de53f4645abecb2d namespace=k8s.io
Mar 17 17:40:04.413909 containerd[1966]: time="2025-03-17T17:40:04.413862220Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:40:04.434150 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4dd048a3751cb597b97901d59b2654f9300267753df80bd2603637277e71812b-rootfs.mount: Deactivated successfully.
Mar 17 17:40:04.440793 containerd[1966]: time="2025-03-17T17:40:04.440719648Z" level=info msg="shim disconnected" id=4dd048a3751cb597b97901d59b2654f9300267753df80bd2603637277e71812b namespace=k8s.io
Mar 17 17:40:04.441986 containerd[1966]: time="2025-03-17T17:40:04.441908596Z" level=warning msg="cleaning up after shim disconnected" id=4dd048a3751cb597b97901d59b2654f9300267753df80bd2603637277e71812b namespace=k8s.io
Mar 17 17:40:04.442376 containerd[1966]: time="2025-03-17T17:40:04.442338760Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:40:04.453607 containerd[1966]: time="2025-03-17T17:40:04.453453568Z" level=warning msg="cleanup warnings time=\"2025-03-17T17:40:04Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 17 17:40:04.458683 containerd[1966]: time="2025-03-17T17:40:04.458573968Z" level=info msg="StopContainer for \"1fd32e14491d3e530bed9c36f090cc261eedd4982693c617de53f4645abecb2d\" returns successfully"
Mar 17 17:40:04.460443 containerd[1966]: time="2025-03-17T17:40:04.460205656Z" level=info msg="StopPodSandbox for \"5c0dad5a0ed6995fc6bd9bf356236cc399b7b65730fa985f61ebd2144eda987b\""
Mar 17 17:40:04.460443 containerd[1966]: time="2025-03-17T17:40:04.460292524Z" level=info msg="Container to stop \"1fd32e14491d3e530bed9c36f090cc261eedd4982693c617de53f4645abecb2d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 17:40:04.466625 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5c0dad5a0ed6995fc6bd9bf356236cc399b7b65730fa985f61ebd2144eda987b-shm.mount: Deactivated successfully.
Mar 17 17:40:04.477341 systemd[1]: cri-containerd-5c0dad5a0ed6995fc6bd9bf356236cc399b7b65730fa985f61ebd2144eda987b.scope: Deactivated successfully.
Mar 17 17:40:04.505487 containerd[1966]: time="2025-03-17T17:40:04.503949076Z" level=info msg="StopContainer for \"4dd048a3751cb597b97901d59b2654f9300267753df80bd2603637277e71812b\" returns successfully"
Mar 17 17:40:04.507276 containerd[1966]: time="2025-03-17T17:40:04.506886820Z" level=info msg="StopPodSandbox for \"b6762dce8a89d456ee16d87bfdc1097c37f0b0a2fa0e6070a4ea8ae0129ba33f\""
Mar 17 17:40:04.507276 containerd[1966]: time="2025-03-17T17:40:04.506962084Z" level=info msg="Container to stop \"c4b82d1a578ea5d89a88b5ce5037f3bc898c6801443cdb3b4b1b9e1726f75b23\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 17:40:04.507276 containerd[1966]: time="2025-03-17T17:40:04.506990824Z" level=info msg="Container to stop \"9c79abc7c8b9cdb172fe74856924a14a74235c26242f0f41fb249528b1cbebc9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 17:40:04.507276 containerd[1966]: time="2025-03-17T17:40:04.507014872Z" level=info msg="Container to stop \"7a489adda3c3d81933b462d032fd5bd9e0ae051098eab7e74d550dc1abdf7c6f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 17:40:04.507276 containerd[1966]: time="2025-03-17T17:40:04.507036952Z" level=info msg="Container to stop \"4dd048a3751cb597b97901d59b2654f9300267753df80bd2603637277e71812b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 17:40:04.507276 containerd[1966]: time="2025-03-17T17:40:04.507058432Z" level=info msg="Container to stop \"500425624f9a1f05ebd84b241c02feb6d6132f5e5e6f5556031988cac87e4ccf\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 17:40:04.511659 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b6762dce8a89d456ee16d87bfdc1097c37f0b0a2fa0e6070a4ea8ae0129ba33f-shm.mount: Deactivated successfully.
Mar 17 17:40:04.524980 systemd[1]: cri-containerd-b6762dce8a89d456ee16d87bfdc1097c37f0b0a2fa0e6070a4ea8ae0129ba33f.scope: Deactivated successfully.
Mar 17 17:40:04.550044 containerd[1966]: time="2025-03-17T17:40:04.549840520Z" level=info msg="shim disconnected" id=5c0dad5a0ed6995fc6bd9bf356236cc399b7b65730fa985f61ebd2144eda987b namespace=k8s.io
Mar 17 17:40:04.550044 containerd[1966]: time="2025-03-17T17:40:04.549920344Z" level=warning msg="cleaning up after shim disconnected" id=5c0dad5a0ed6995fc6bd9bf356236cc399b7b65730fa985f61ebd2144eda987b namespace=k8s.io
Mar 17 17:40:04.550044 containerd[1966]: time="2025-03-17T17:40:04.549944404Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:40:04.580712 containerd[1966]: time="2025-03-17T17:40:04.580381517Z" level=info msg="shim disconnected" id=b6762dce8a89d456ee16d87bfdc1097c37f0b0a2fa0e6070a4ea8ae0129ba33f namespace=k8s.io
Mar 17 17:40:04.580712 containerd[1966]: time="2025-03-17T17:40:04.580455065Z" level=warning msg="cleaning up after shim disconnected" id=b6762dce8a89d456ee16d87bfdc1097c37f0b0a2fa0e6070a4ea8ae0129ba33f namespace=k8s.io
Mar 17 17:40:04.580712 containerd[1966]: time="2025-03-17T17:40:04.580474925Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:40:04.584474 containerd[1966]: time="2025-03-17T17:40:04.584317301Z" level=warning msg="cleanup warnings time=\"2025-03-17T17:40:04Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 17 17:40:04.587949 containerd[1966]: time="2025-03-17T17:40:04.587897717Z" level=info msg="TearDown network for sandbox \"5c0dad5a0ed6995fc6bd9bf356236cc399b7b65730fa985f61ebd2144eda987b\" successfully"
Mar 17 17:40:04.588451 containerd[1966]: time="2025-03-17T17:40:04.588116561Z" level=info msg="StopPodSandbox for \"5c0dad5a0ed6995fc6bd9bf356236cc399b7b65730fa985f61ebd2144eda987b\" returns successfully"
Mar 17 17:40:04.613729 containerd[1966]: time="2025-03-17T17:40:04.613637285Z" level=info msg="TearDown network for sandbox \"b6762dce8a89d456ee16d87bfdc1097c37f0b0a2fa0e6070a4ea8ae0129ba33f\" successfully"
Mar 17 17:40:04.613729 containerd[1966]: time="2025-03-17T17:40:04.613691909Z" level=info msg="StopPodSandbox for \"b6762dce8a89d456ee16d87bfdc1097c37f0b0a2fa0e6070a4ea8ae0129ba33f\" returns successfully"
Mar 17 17:40:04.732200 kubelet[3225]: I0317 17:40:04.730998 3225 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5e2b31e8-67d5-49ed-945b-41f7327688c7-clustermesh-secrets\") pod \"5e2b31e8-67d5-49ed-945b-41f7327688c7\" (UID: \"5e2b31e8-67d5-49ed-945b-41f7327688c7\") "
Mar 17 17:40:04.732200 kubelet[3225]: I0317 17:40:04.731066 3225 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5e2b31e8-67d5-49ed-945b-41f7327688c7-host-proc-sys-kernel\") pod \"5e2b31e8-67d5-49ed-945b-41f7327688c7\" (UID: \"5e2b31e8-67d5-49ed-945b-41f7327688c7\") "
Mar 17 17:40:04.732200 kubelet[3225]: I0317 17:40:04.731107 3225 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5e2b31e8-67d5-49ed-945b-41f7327688c7-etc-cni-netd\") pod \"5e2b31e8-67d5-49ed-945b-41f7327688c7\" (UID: \"5e2b31e8-67d5-49ed-945b-41f7327688c7\") "
Mar 17 17:40:04.732200 kubelet[3225]: I0317 17:40:04.731145 3225 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5e2b31e8-67d5-49ed-945b-41f7327688c7-cni-path\") pod \"5e2b31e8-67d5-49ed-945b-41f7327688c7\" (UID: \"5e2b31e8-67d5-49ed-945b-41f7327688c7\") "
Mar 17 17:40:04.732200 kubelet[3225]: I0317 17:40:04.731178 3225 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5e2b31e8-67d5-49ed-945b-41f7327688c7-host-proc-sys-net\") pod \"5e2b31e8-67d5-49ed-945b-41f7327688c7\" (UID: \"5e2b31e8-67d5-49ed-945b-41f7327688c7\") "
Mar 17 17:40:04.732200 kubelet[3225]: I0317 17:40:04.731211 3225 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5e2b31e8-67d5-49ed-945b-41f7327688c7-cilium-cgroup\") pod \"5e2b31e8-67d5-49ed-945b-41f7327688c7\" (UID: \"5e2b31e8-67d5-49ed-945b-41f7327688c7\") "
Mar 17 17:40:04.733145 kubelet[3225]: I0317 17:40:04.731263 3225 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5e2b31e8-67d5-49ed-945b-41f7327688c7-xtables-lock\") pod \"5e2b31e8-67d5-49ed-945b-41f7327688c7\" (UID: \"5e2b31e8-67d5-49ed-945b-41f7327688c7\") "
Mar 17 17:40:04.733145 kubelet[3225]: I0317 17:40:04.731301 3225 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5e2b31e8-67d5-49ed-945b-41f7327688c7-hostproc\") pod \"5e2b31e8-67d5-49ed-945b-41f7327688c7\" (UID: \"5e2b31e8-67d5-49ed-945b-41f7327688c7\") "
Mar 17 17:40:04.733145 kubelet[3225]: I0317 17:40:04.731339 3225 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5e2b31e8-67d5-49ed-945b-41f7327688c7-hubble-tls\") pod \"5e2b31e8-67d5-49ed-945b-41f7327688c7\" (UID: \"5e2b31e8-67d5-49ed-945b-41f7327688c7\") "
Mar 17 17:40:04.733145 kubelet[3225]: I0317 17:40:04.731370 3225 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5e2b31e8-67d5-49ed-945b-41f7327688c7-lib-modules\") pod \"5e2b31e8-67d5-49ed-945b-41f7327688c7\" (UID: \"5e2b31e8-67d5-49ed-945b-41f7327688c7\") "
Mar 17 17:40:04.733145 kubelet[3225]: I0317 17:40:04.731402 3225 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5e2b31e8-67d5-49ed-945b-41f7327688c7-cilium-run\") pod \"5e2b31e8-67d5-49ed-945b-41f7327688c7\" (UID: \"5e2b31e8-67d5-49ed-945b-41f7327688c7\") "
Mar 17 17:40:04.733145 kubelet[3225]: I0317 17:40:04.731441 3225 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5e2b31e8-67d5-49ed-945b-41f7327688c7-cilium-config-path\") pod \"5e2b31e8-67d5-49ed-945b-41f7327688c7\" (UID: \"5e2b31e8-67d5-49ed-945b-41f7327688c7\") "
Mar 17 17:40:04.733537 kubelet[3225]: I0317 17:40:04.731478 3225 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b2716884-45db-436e-9a89-642ac6bbcb3f-cilium-config-path\") pod \"b2716884-45db-436e-9a89-642ac6bbcb3f\" (UID: \"b2716884-45db-436e-9a89-642ac6bbcb3f\") "
Mar 17 17:40:04.733537 kubelet[3225]: I0317 17:40:04.731514 3225 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kg96\" (UniqueName: \"kubernetes.io/projected/5e2b31e8-67d5-49ed-945b-41f7327688c7-kube-api-access-6kg96\") pod \"5e2b31e8-67d5-49ed-945b-41f7327688c7\" (UID: \"5e2b31e8-67d5-49ed-945b-41f7327688c7\") "
Mar 17 17:40:04.733537 kubelet[3225]: I0317 17:40:04.731552 3225 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5j84\" (UniqueName: \"kubernetes.io/projected/b2716884-45db-436e-9a89-642ac6bbcb3f-kube-api-access-b5j84\") pod \"b2716884-45db-436e-9a89-642ac6bbcb3f\" (UID: \"b2716884-45db-436e-9a89-642ac6bbcb3f\") "
Mar 17 17:40:04.733537 kubelet[3225]: I0317 17:40:04.731597 3225 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5e2b31e8-67d5-49ed-945b-41f7327688c7-bpf-maps\") pod \"5e2b31e8-67d5-49ed-945b-41f7327688c7\" (UID: \"5e2b31e8-67d5-49ed-945b-41f7327688c7\") "
Mar 17 17:40:04.733537 kubelet[3225]: I0317 17:40:04.731707 3225 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e2b31e8-67d5-49ed-945b-41f7327688c7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5e2b31e8-67d5-49ed-945b-41f7327688c7" (UID: "5e2b31e8-67d5-49ed-945b-41f7327688c7"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 17 17:40:04.735429 kubelet[3225]: I0317 17:40:04.733934 3225 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e2b31e8-67d5-49ed-945b-41f7327688c7-hostproc" (OuterVolumeSpecName: "hostproc") pod "5e2b31e8-67d5-49ed-945b-41f7327688c7" (UID: "5e2b31e8-67d5-49ed-945b-41f7327688c7"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 17 17:40:04.735429 kubelet[3225]: I0317 17:40:04.734013 3225 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e2b31e8-67d5-49ed-945b-41f7327688c7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5e2b31e8-67d5-49ed-945b-41f7327688c7" (UID: "5e2b31e8-67d5-49ed-945b-41f7327688c7"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 17 17:40:04.735429 kubelet[3225]: I0317 17:40:04.734052 3225 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e2b31e8-67d5-49ed-945b-41f7327688c7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5e2b31e8-67d5-49ed-945b-41f7327688c7" (UID: "5e2b31e8-67d5-49ed-945b-41f7327688c7"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 17 17:40:04.735429 kubelet[3225]: I0317 17:40:04.734134 3225 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e2b31e8-67d5-49ed-945b-41f7327688c7-cni-path" (OuterVolumeSpecName: "cni-path") pod "5e2b31e8-67d5-49ed-945b-41f7327688c7" (UID: "5e2b31e8-67d5-49ed-945b-41f7327688c7"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 17 17:40:04.735429 kubelet[3225]: I0317 17:40:04.734169 3225 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e2b31e8-67d5-49ed-945b-41f7327688c7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5e2b31e8-67d5-49ed-945b-41f7327688c7" (UID: "5e2b31e8-67d5-49ed-945b-41f7327688c7"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 17 17:40:04.736000 kubelet[3225]: I0317 17:40:04.734223 3225 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e2b31e8-67d5-49ed-945b-41f7327688c7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5e2b31e8-67d5-49ed-945b-41f7327688c7" (UID: "5e2b31e8-67d5-49ed-945b-41f7327688c7"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 17 17:40:04.736000 kubelet[3225]: I0317 17:40:04.734303 3225 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e2b31e8-67d5-49ed-945b-41f7327688c7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5e2b31e8-67d5-49ed-945b-41f7327688c7" (UID: "5e2b31e8-67d5-49ed-945b-41f7327688c7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 17 17:40:04.740947 kubelet[3225]: I0317 17:40:04.740777 3225 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e2b31e8-67d5-49ed-945b-41f7327688c7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5e2b31e8-67d5-49ed-945b-41f7327688c7" (UID: "5e2b31e8-67d5-49ed-945b-41f7327688c7"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 17 17:40:04.743665 kubelet[3225]: I0317 17:40:04.743209 3225 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e2b31e8-67d5-49ed-945b-41f7327688c7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5e2b31e8-67d5-49ed-945b-41f7327688c7" (UID: "5e2b31e8-67d5-49ed-945b-41f7327688c7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 17 17:40:04.748973 kubelet[3225]: I0317 17:40:04.748818 3225 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e2b31e8-67d5-49ed-945b-41f7327688c7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5e2b31e8-67d5-49ed-945b-41f7327688c7" (UID: "5e2b31e8-67d5-49ed-945b-41f7327688c7"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 17 17:40:04.748973 kubelet[3225]: I0317 17:40:04.748915 3225 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e2b31e8-67d5-49ed-945b-41f7327688c7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5e2b31e8-67d5-49ed-945b-41f7327688c7" (UID: "5e2b31e8-67d5-49ed-945b-41f7327688c7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 17 17:40:04.748973 kubelet[3225]: I0317 17:40:04.748955 3225 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e2b31e8-67d5-49ed-945b-41f7327688c7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5e2b31e8-67d5-49ed-945b-41f7327688c7" (UID: "5e2b31e8-67d5-49ed-945b-41f7327688c7"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 17 17:40:04.750937 kubelet[3225]: I0317 17:40:04.750472 3225 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2716884-45db-436e-9a89-642ac6bbcb3f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b2716884-45db-436e-9a89-642ac6bbcb3f" (UID: "b2716884-45db-436e-9a89-642ac6bbcb3f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 17 17:40:04.754610 kubelet[3225]: I0317 17:40:04.754541 3225 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e2b31e8-67d5-49ed-945b-41f7327688c7-kube-api-access-6kg96" (OuterVolumeSpecName: "kube-api-access-6kg96") pod "5e2b31e8-67d5-49ed-945b-41f7327688c7" (UID: "5e2b31e8-67d5-49ed-945b-41f7327688c7"). InnerVolumeSpecName "kube-api-access-6kg96". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 17 17:40:04.756173 kubelet[3225]: I0317 17:40:04.756044 3225 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2716884-45db-436e-9a89-642ac6bbcb3f-kube-api-access-b5j84" (OuterVolumeSpecName: "kube-api-access-b5j84") pod "b2716884-45db-436e-9a89-642ac6bbcb3f" (UID: "b2716884-45db-436e-9a89-642ac6bbcb3f"). InnerVolumeSpecName "kube-api-access-b5j84". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 17 17:40:04.832272 kubelet[3225]: I0317 17:40:04.831859 3225 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5e2b31e8-67d5-49ed-945b-41f7327688c7-host-proc-sys-kernel\") on node \"ip-172-31-25-124\" DevicePath \"\""
Mar 17 17:40:04.832272 kubelet[3225]: I0317 17:40:04.831910 3225 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5e2b31e8-67d5-49ed-945b-41f7327688c7-clustermesh-secrets\") on node \"ip-172-31-25-124\" DevicePath \"\""
Mar 17 17:40:04.832272 kubelet[3225]: I0317 17:40:04.831936 3225 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5e2b31e8-67d5-49ed-945b-41f7327688c7-etc-cni-netd\") on node \"ip-172-31-25-124\" DevicePath \"\""
Mar 17 17:40:04.832272 kubelet[3225]: I0317 17:40:04.831958 3225 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5e2b31e8-67d5-49ed-945b-41f7327688c7-cni-path\") on node \"ip-172-31-25-124\" DevicePath \"\""
Mar 17 17:40:04.832272 kubelet[3225]: I0317 17:40:04.831979 3225 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5e2b31e8-67d5-49ed-945b-41f7327688c7-host-proc-sys-net\") on node \"ip-172-31-25-124\" DevicePath \"\""
Mar 17 17:40:04.832272 kubelet[3225]: I0317 17:40:04.832000 3225 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5e2b31e8-67d5-49ed-945b-41f7327688c7-cilium-cgroup\") on node \"ip-172-31-25-124\" DevicePath \"\""
Mar 17 17:40:04.832272 kubelet[3225]: I0317 17:40:04.832019 3225 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5e2b31e8-67d5-49ed-945b-41f7327688c7-hostproc\") on node \"ip-172-31-25-124\" DevicePath \"\""
Mar 17 17:40:04.832272 kubelet[3225]: I0317 17:40:04.832039 3225 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5e2b31e8-67d5-49ed-945b-41f7327688c7-hubble-tls\") on node \"ip-172-31-25-124\" DevicePath \"\""
Mar 17 17:40:04.832761 kubelet[3225]: I0317 17:40:04.832059 3225 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5e2b31e8-67d5-49ed-945b-41f7327688c7-lib-modules\") on node \"ip-172-31-25-124\" DevicePath \"\""
Mar 17 17:40:04.832761 kubelet[3225]: I0317 17:40:04.832079 3225 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5e2b31e8-67d5-49ed-945b-41f7327688c7-xtables-lock\") on node \"ip-172-31-25-124\" DevicePath \"\""
Mar 17 17:40:04.832761 kubelet[3225]: I0317 17:40:04.832098 3225 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5e2b31e8-67d5-49ed-945b-41f7327688c7-cilium-run\") on node \"ip-172-31-25-124\" DevicePath \"\""
Mar 17 17:40:04.832761 kubelet[3225]: I0317 17:40:04.832118 3225 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5e2b31e8-67d5-49ed-945b-41f7327688c7-cilium-config-path\") on node \"ip-172-31-25-124\" DevicePath \"\""
Mar 17 17:40:04.832761 kubelet[3225]: I0317 17:40:04.832138 3225 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\"
(UniqueName: \"kubernetes.io/configmap/b2716884-45db-436e-9a89-642ac6bbcb3f-cilium-config-path\") on node \"ip-172-31-25-124\" DevicePath \"\"" Mar 17 17:40:04.832761 kubelet[3225]: I0317 17:40:04.832160 3225 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6kg96\" (UniqueName: \"kubernetes.io/projected/5e2b31e8-67d5-49ed-945b-41f7327688c7-kube-api-access-6kg96\") on node \"ip-172-31-25-124\" DevicePath \"\"" Mar 17 17:40:04.832761 kubelet[3225]: I0317 17:40:04.832183 3225 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b5j84\" (UniqueName: \"kubernetes.io/projected/b2716884-45db-436e-9a89-642ac6bbcb3f-kube-api-access-b5j84\") on node \"ip-172-31-25-124\" DevicePath \"\"" Mar 17 17:40:04.832761 kubelet[3225]: I0317 17:40:04.832205 3225 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5e2b31e8-67d5-49ed-945b-41f7327688c7-bpf-maps\") on node \"ip-172-31-25-124\" DevicePath \"\"" Mar 17 17:40:04.855485 kubelet[3225]: I0317 17:40:04.855418 3225 scope.go:117] "RemoveContainer" containerID="4dd048a3751cb597b97901d59b2654f9300267753df80bd2603637277e71812b" Mar 17 17:40:04.859805 containerd[1966]: time="2025-03-17T17:40:04.859721754Z" level=info msg="RemoveContainer for \"4dd048a3751cb597b97901d59b2654f9300267753df80bd2603637277e71812b\"" Mar 17 17:40:04.875619 containerd[1966]: time="2025-03-17T17:40:04.874112382Z" level=info msg="RemoveContainer for \"4dd048a3751cb597b97901d59b2654f9300267753df80bd2603637277e71812b\" returns successfully" Mar 17 17:40:04.875565 systemd[1]: Removed slice kubepods-burstable-pod5e2b31e8_67d5_49ed_945b_41f7327688c7.slice - libcontainer container kubepods-burstable-pod5e2b31e8_67d5_49ed_945b_41f7327688c7.slice. 
Mar 17 17:40:04.875955 kubelet[3225]: I0317 17:40:04.875434 3225 scope.go:117] "RemoveContainer" containerID="c4b82d1a578ea5d89a88b5ce5037f3bc898c6801443cdb3b4b1b9e1726f75b23" Mar 17 17:40:04.875777 systemd[1]: kubepods-burstable-pod5e2b31e8_67d5_49ed_945b_41f7327688c7.slice: Consumed 14.387s CPU time, 123.8M memory peak, 144K read from disk, 12.9M written to disk. Mar 17 17:40:04.880669 containerd[1966]: time="2025-03-17T17:40:04.880425426Z" level=info msg="RemoveContainer for \"c4b82d1a578ea5d89a88b5ce5037f3bc898c6801443cdb3b4b1b9e1726f75b23\"" Mar 17 17:40:04.882748 systemd[1]: Removed slice kubepods-besteffort-podb2716884_45db_436e_9a89_642ac6bbcb3f.slice - libcontainer container kubepods-besteffort-podb2716884_45db_436e_9a89_642ac6bbcb3f.slice. Mar 17 17:40:04.888960 containerd[1966]: time="2025-03-17T17:40:04.888843558Z" level=info msg="RemoveContainer for \"c4b82d1a578ea5d89a88b5ce5037f3bc898c6801443cdb3b4b1b9e1726f75b23\" returns successfully" Mar 17 17:40:04.891144 kubelet[3225]: I0317 17:40:04.891087 3225 scope.go:117] "RemoveContainer" containerID="7a489adda3c3d81933b462d032fd5bd9e0ae051098eab7e74d550dc1abdf7c6f" Mar 17 17:40:04.893659 containerd[1966]: time="2025-03-17T17:40:04.893499774Z" level=info msg="RemoveContainer for \"7a489adda3c3d81933b462d032fd5bd9e0ae051098eab7e74d550dc1abdf7c6f\"" Mar 17 17:40:04.897920 containerd[1966]: time="2025-03-17T17:40:04.897862350Z" level=info msg="RemoveContainer for \"7a489adda3c3d81933b462d032fd5bd9e0ae051098eab7e74d550dc1abdf7c6f\" returns successfully" Mar 17 17:40:04.898263 kubelet[3225]: I0317 17:40:04.898198 3225 scope.go:117] "RemoveContainer" containerID="9c79abc7c8b9cdb172fe74856924a14a74235c26242f0f41fb249528b1cbebc9" Mar 17 17:40:04.902361 containerd[1966]: time="2025-03-17T17:40:04.902236938Z" level=info msg="RemoveContainer for \"9c79abc7c8b9cdb172fe74856924a14a74235c26242f0f41fb249528b1cbebc9\"" Mar 17 17:40:04.912812 containerd[1966]: time="2025-03-17T17:40:04.912687042Z" level=info 
msg="RemoveContainer for \"9c79abc7c8b9cdb172fe74856924a14a74235c26242f0f41fb249528b1cbebc9\" returns successfully" Mar 17 17:40:04.913750 kubelet[3225]: I0317 17:40:04.913177 3225 scope.go:117] "RemoveContainer" containerID="500425624f9a1f05ebd84b241c02feb6d6132f5e5e6f5556031988cac87e4ccf" Mar 17 17:40:04.917842 containerd[1966]: time="2025-03-17T17:40:04.917783814Z" level=info msg="RemoveContainer for \"500425624f9a1f05ebd84b241c02feb6d6132f5e5e6f5556031988cac87e4ccf\"" Mar 17 17:40:04.922331 containerd[1966]: time="2025-03-17T17:40:04.922229166Z" level=info msg="RemoveContainer for \"500425624f9a1f05ebd84b241c02feb6d6132f5e5e6f5556031988cac87e4ccf\" returns successfully" Mar 17 17:40:04.922974 kubelet[3225]: I0317 17:40:04.922821 3225 scope.go:117] "RemoveContainer" containerID="4dd048a3751cb597b97901d59b2654f9300267753df80bd2603637277e71812b" Mar 17 17:40:04.923673 containerd[1966]: time="2025-03-17T17:40:04.923613342Z" level=error msg="ContainerStatus for \"4dd048a3751cb597b97901d59b2654f9300267753df80bd2603637277e71812b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4dd048a3751cb597b97901d59b2654f9300267753df80bd2603637277e71812b\": not found" Mar 17 17:40:04.927299 kubelet[3225]: E0317 17:40:04.924786 3225 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4dd048a3751cb597b97901d59b2654f9300267753df80bd2603637277e71812b\": not found" containerID="4dd048a3751cb597b97901d59b2654f9300267753df80bd2603637277e71812b" Mar 17 17:40:04.928667 kubelet[3225]: I0317 17:40:04.928364 3225 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4dd048a3751cb597b97901d59b2654f9300267753df80bd2603637277e71812b"} err="failed to get container status \"4dd048a3751cb597b97901d59b2654f9300267753df80bd2603637277e71812b\": rpc error: code = NotFound desc = an error occurred when try to find 
container \"4dd048a3751cb597b97901d59b2654f9300267753df80bd2603637277e71812b\": not found" Mar 17 17:40:04.928667 kubelet[3225]: I0317 17:40:04.928511 3225 scope.go:117] "RemoveContainer" containerID="c4b82d1a578ea5d89a88b5ce5037f3bc898c6801443cdb3b4b1b9e1726f75b23" Mar 17 17:40:04.932848 containerd[1966]: time="2025-03-17T17:40:04.931144590Z" level=error msg="ContainerStatus for \"c4b82d1a578ea5d89a88b5ce5037f3bc898c6801443cdb3b4b1b9e1726f75b23\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c4b82d1a578ea5d89a88b5ce5037f3bc898c6801443cdb3b4b1b9e1726f75b23\": not found" Mar 17 17:40:04.933014 kubelet[3225]: E0317 17:40:04.931476 3225 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c4b82d1a578ea5d89a88b5ce5037f3bc898c6801443cdb3b4b1b9e1726f75b23\": not found" containerID="c4b82d1a578ea5d89a88b5ce5037f3bc898c6801443cdb3b4b1b9e1726f75b23" Mar 17 17:40:04.933014 kubelet[3225]: I0317 17:40:04.932682 3225 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c4b82d1a578ea5d89a88b5ce5037f3bc898c6801443cdb3b4b1b9e1726f75b23"} err="failed to get container status \"c4b82d1a578ea5d89a88b5ce5037f3bc898c6801443cdb3b4b1b9e1726f75b23\": rpc error: code = NotFound desc = an error occurred when try to find container \"c4b82d1a578ea5d89a88b5ce5037f3bc898c6801443cdb3b4b1b9e1726f75b23\": not found" Mar 17 17:40:04.933014 kubelet[3225]: I0317 17:40:04.932759 3225 scope.go:117] "RemoveContainer" containerID="7a489adda3c3d81933b462d032fd5bd9e0ae051098eab7e74d550dc1abdf7c6f" Mar 17 17:40:04.937824 containerd[1966]: time="2025-03-17T17:40:04.934288710Z" level=error msg="ContainerStatus for \"7a489adda3c3d81933b462d032fd5bd9e0ae051098eab7e74d550dc1abdf7c6f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"7a489adda3c3d81933b462d032fd5bd9e0ae051098eab7e74d550dc1abdf7c6f\": not found" Mar 17 17:40:04.937824 containerd[1966]: time="2025-03-17T17:40:04.935462370Z" level=error msg="ContainerStatus for \"9c79abc7c8b9cdb172fe74856924a14a74235c26242f0f41fb249528b1cbebc9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9c79abc7c8b9cdb172fe74856924a14a74235c26242f0f41fb249528b1cbebc9\": not found" Mar 17 17:40:04.938040 kubelet[3225]: E0317 17:40:04.934684 3225 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7a489adda3c3d81933b462d032fd5bd9e0ae051098eab7e74d550dc1abdf7c6f\": not found" containerID="7a489adda3c3d81933b462d032fd5bd9e0ae051098eab7e74d550dc1abdf7c6f" Mar 17 17:40:04.938040 kubelet[3225]: I0317 17:40:04.934739 3225 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7a489adda3c3d81933b462d032fd5bd9e0ae051098eab7e74d550dc1abdf7c6f"} err="failed to get container status \"7a489adda3c3d81933b462d032fd5bd9e0ae051098eab7e74d550dc1abdf7c6f\": rpc error: code = NotFound desc = an error occurred when try to find container \"7a489adda3c3d81933b462d032fd5bd9e0ae051098eab7e74d550dc1abdf7c6f\": not found" Mar 17 17:40:04.938040 kubelet[3225]: I0317 17:40:04.934778 3225 scope.go:117] "RemoveContainer" containerID="9c79abc7c8b9cdb172fe74856924a14a74235c26242f0f41fb249528b1cbebc9" Mar 17 17:40:04.938040 kubelet[3225]: E0317 17:40:04.935784 3225 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9c79abc7c8b9cdb172fe74856924a14a74235c26242f0f41fb249528b1cbebc9\": not found" containerID="9c79abc7c8b9cdb172fe74856924a14a74235c26242f0f41fb249528b1cbebc9" Mar 17 17:40:04.938040 kubelet[3225]: I0317 17:40:04.935831 3225 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"9c79abc7c8b9cdb172fe74856924a14a74235c26242f0f41fb249528b1cbebc9"} err="failed to get container status \"9c79abc7c8b9cdb172fe74856924a14a74235c26242f0f41fb249528b1cbebc9\": rpc error: code = NotFound desc = an error occurred when try to find container \"9c79abc7c8b9cdb172fe74856924a14a74235c26242f0f41fb249528b1cbebc9\": not found" Mar 17 17:40:04.938040 kubelet[3225]: I0317 17:40:04.935866 3225 scope.go:117] "RemoveContainer" containerID="500425624f9a1f05ebd84b241c02feb6d6132f5e5e6f5556031988cac87e4ccf" Mar 17 17:40:04.940272 containerd[1966]: time="2025-03-17T17:40:04.939316158Z" level=error msg="ContainerStatus for \"500425624f9a1f05ebd84b241c02feb6d6132f5e5e6f5556031988cac87e4ccf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"500425624f9a1f05ebd84b241c02feb6d6132f5e5e6f5556031988cac87e4ccf\": not found" Mar 17 17:40:04.940898 kubelet[3225]: E0317 17:40:04.940674 3225 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"500425624f9a1f05ebd84b241c02feb6d6132f5e5e6f5556031988cac87e4ccf\": not found" containerID="500425624f9a1f05ebd84b241c02feb6d6132f5e5e6f5556031988cac87e4ccf" Mar 17 17:40:04.940898 kubelet[3225]: I0317 17:40:04.940727 3225 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"500425624f9a1f05ebd84b241c02feb6d6132f5e5e6f5556031988cac87e4ccf"} err="failed to get container status \"500425624f9a1f05ebd84b241c02feb6d6132f5e5e6f5556031988cac87e4ccf\": rpc error: code = NotFound desc = an error occurred when try to find container \"500425624f9a1f05ebd84b241c02feb6d6132f5e5e6f5556031988cac87e4ccf\": not found" Mar 17 17:40:04.940898 kubelet[3225]: I0317 17:40:04.940764 3225 scope.go:117] "RemoveContainer" containerID="1fd32e14491d3e530bed9c36f090cc261eedd4982693c617de53f4645abecb2d" Mar 17 17:40:04.943269 containerd[1966]: 
time="2025-03-17T17:40:04.943198218Z" level=info msg="RemoveContainer for \"1fd32e14491d3e530bed9c36f090cc261eedd4982693c617de53f4645abecb2d\"" Mar 17 17:40:04.948385 containerd[1966]: time="2025-03-17T17:40:04.948314694Z" level=info msg="RemoveContainer for \"1fd32e14491d3e530bed9c36f090cc261eedd4982693c617de53f4645abecb2d\" returns successfully" Mar 17 17:40:04.948949 kubelet[3225]: I0317 17:40:04.948786 3225 scope.go:117] "RemoveContainer" containerID="1fd32e14491d3e530bed9c36f090cc261eedd4982693c617de53f4645abecb2d" Mar 17 17:40:04.949464 containerd[1966]: time="2025-03-17T17:40:04.949399098Z" level=error msg="ContainerStatus for \"1fd32e14491d3e530bed9c36f090cc261eedd4982693c617de53f4645abecb2d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1fd32e14491d3e530bed9c36f090cc261eedd4982693c617de53f4645abecb2d\": not found" Mar 17 17:40:04.949655 kubelet[3225]: E0317 17:40:04.949612 3225 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1fd32e14491d3e530bed9c36f090cc261eedd4982693c617de53f4645abecb2d\": not found" containerID="1fd32e14491d3e530bed9c36f090cc261eedd4982693c617de53f4645abecb2d" Mar 17 17:40:04.949749 kubelet[3225]: I0317 17:40:04.949670 3225 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1fd32e14491d3e530bed9c36f090cc261eedd4982693c617de53f4645abecb2d"} err="failed to get container status \"1fd32e14491d3e530bed9c36f090cc261eedd4982693c617de53f4645abecb2d\": rpc error: code = NotFound desc = an error occurred when try to find container \"1fd32e14491d3e530bed9c36f090cc261eedd4982693c617de53f4645abecb2d\": not found" Mar 17 17:40:05.227554 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6762dce8a89d456ee16d87bfdc1097c37f0b0a2fa0e6070a4ea8ae0129ba33f-rootfs.mount: Deactivated successfully. 
Mar 17 17:40:05.227761 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5c0dad5a0ed6995fc6bd9bf356236cc399b7b65730fa985f61ebd2144eda987b-rootfs.mount: Deactivated successfully. Mar 17 17:40:05.227913 systemd[1]: var-lib-kubelet-pods-b2716884\x2d45db\x2d436e\x2d9a89\x2d642ac6bbcb3f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2db5j84.mount: Deactivated successfully. Mar 17 17:40:05.228051 systemd[1]: var-lib-kubelet-pods-5e2b31e8\x2d67d5\x2d49ed\x2d945b\x2d41f7327688c7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6kg96.mount: Deactivated successfully. Mar 17 17:40:05.228196 systemd[1]: var-lib-kubelet-pods-5e2b31e8\x2d67d5\x2d49ed\x2d945b\x2d41f7327688c7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 17 17:40:05.228404 systemd[1]: var-lib-kubelet-pods-5e2b31e8\x2d67d5\x2d49ed\x2d945b\x2d41f7327688c7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 17 17:40:06.155138 sshd[5072]: Connection closed by 147.75.109.163 port 38696 Mar 17 17:40:06.156088 sshd-session[5070]: pam_unix(sshd:session): session closed for user core Mar 17 17:40:06.163590 systemd[1]: sshd@26-172.31.25.124:22-147.75.109.163:38696.service: Deactivated successfully. Mar 17 17:40:06.167601 systemd[1]: session-27.scope: Deactivated successfully. Mar 17 17:40:06.168599 systemd[1]: session-27.scope: Consumed 2.821s CPU time, 25.8M memory peak. Mar 17 17:40:06.169991 systemd-logind[1935]: Session 27 logged out. Waiting for processes to exit. Mar 17 17:40:06.171907 systemd-logind[1935]: Removed session 27. Mar 17 17:40:06.200996 systemd[1]: Started sshd@27-172.31.25.124:22-147.75.109.163:46274.service - OpenSSH per-connection server daemon (147.75.109.163:46274). 
Mar 17 17:40:06.386006 sshd[5231]: Accepted publickey for core from 147.75.109.163 port 46274 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4 Mar 17 17:40:06.388470 sshd-session[5231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:40:06.396577 systemd-logind[1935]: New session 28 of user core. Mar 17 17:40:06.405560 systemd[1]: Started session-28.scope - Session 28 of User core. Mar 17 17:40:06.431855 kubelet[3225]: I0317 17:40:06.431790 3225 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e2b31e8-67d5-49ed-945b-41f7327688c7" path="/var/lib/kubelet/pods/5e2b31e8-67d5-49ed-945b-41f7327688c7/volumes" Mar 17 17:40:06.433915 kubelet[3225]: I0317 17:40:06.433856 3225 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2716884-45db-436e-9a89-642ac6bbcb3f" path="/var/lib/kubelet/pods/b2716884-45db-436e-9a89-642ac6bbcb3f/volumes" Mar 17 17:40:07.040228 ntpd[1928]: Deleting interface #12 lxc_health, fe80::cbd:a1ff:fe33:67b3%8#123, interface stats: received=0, sent=0, dropped=0, active_time=83 secs Mar 17 17:40:07.040832 ntpd[1928]: 17 Mar 17:40:07 ntpd[1928]: Deleting interface #12 lxc_health, fe80::cbd:a1ff:fe33:67b3%8#123, interface stats: received=0, sent=0, dropped=0, active_time=83 secs Mar 17 17:40:07.618807 kubelet[3225]: E0317 17:40:07.618416 3225 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 17:40:08.564645 sshd[5233]: Connection closed by 147.75.109.163 port 46274 Mar 17 17:40:08.569934 sshd-session[5231]: pam_unix(sshd:session): session closed for user core Mar 17 17:40:08.579520 systemd[1]: sshd@27-172.31.25.124:22-147.75.109.163:46274.service: Deactivated successfully. 
Mar 17 17:40:08.581313 kubelet[3225]: I0317 17:40:08.580618 3225 memory_manager.go:355] "RemoveStaleState removing state" podUID="5e2b31e8-67d5-49ed-945b-41f7327688c7" containerName="cilium-agent" Mar 17 17:40:08.581313 kubelet[3225]: I0317 17:40:08.580665 3225 memory_manager.go:355] "RemoveStaleState removing state" podUID="b2716884-45db-436e-9a89-642ac6bbcb3f" containerName="cilium-operator" Mar 17 17:40:08.586373 systemd[1]: session-28.scope: Deactivated successfully. Mar 17 17:40:08.586798 systemd[1]: session-28.scope: Consumed 1.950s CPU time, 23.6M memory peak. Mar 17 17:40:08.590307 systemd-logind[1935]: Session 28 logged out. Waiting for processes to exit. Mar 17 17:40:08.618750 systemd[1]: Started sshd@28-172.31.25.124:22-147.75.109.163:46288.service - OpenSSH per-connection server daemon (147.75.109.163:46288). Mar 17 17:40:08.624187 systemd-logind[1935]: Removed session 28. Mar 17 17:40:08.650020 systemd[1]: Created slice kubepods-burstable-pod3bf7e7ed_0c0c_488f_857e_f7ef210da8b5.slice - libcontainer container kubepods-burstable-pod3bf7e7ed_0c0c_488f_857e_f7ef210da8b5.slice. 
Mar 17 17:40:08.658802 kubelet[3225]: I0317 17:40:08.658294 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3bf7e7ed-0c0c-488f-857e-f7ef210da8b5-hubble-tls\") pod \"cilium-mslz6\" (UID: \"3bf7e7ed-0c0c-488f-857e-f7ef210da8b5\") " pod="kube-system/cilium-mslz6" Mar 17 17:40:08.658802 kubelet[3225]: I0317 17:40:08.658359 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3bf7e7ed-0c0c-488f-857e-f7ef210da8b5-cilium-run\") pod \"cilium-mslz6\" (UID: \"3bf7e7ed-0c0c-488f-857e-f7ef210da8b5\") " pod="kube-system/cilium-mslz6" Mar 17 17:40:08.658802 kubelet[3225]: I0317 17:40:08.658401 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3bf7e7ed-0c0c-488f-857e-f7ef210da8b5-bpf-maps\") pod \"cilium-mslz6\" (UID: \"3bf7e7ed-0c0c-488f-857e-f7ef210da8b5\") " pod="kube-system/cilium-mslz6" Mar 17 17:40:08.658802 kubelet[3225]: I0317 17:40:08.658440 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3bf7e7ed-0c0c-488f-857e-f7ef210da8b5-cni-path\") pod \"cilium-mslz6\" (UID: \"3bf7e7ed-0c0c-488f-857e-f7ef210da8b5\") " pod="kube-system/cilium-mslz6" Mar 17 17:40:08.658802 kubelet[3225]: I0317 17:40:08.658486 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3bf7e7ed-0c0c-488f-857e-f7ef210da8b5-cilium-config-path\") pod \"cilium-mslz6\" (UID: \"3bf7e7ed-0c0c-488f-857e-f7ef210da8b5\") " pod="kube-system/cilium-mslz6" Mar 17 17:40:08.658802 kubelet[3225]: I0317 17:40:08.658523 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3bf7e7ed-0c0c-488f-857e-f7ef210da8b5-cilium-cgroup\") pod \"cilium-mslz6\" (UID: \"3bf7e7ed-0c0c-488f-857e-f7ef210da8b5\") " pod="kube-system/cilium-mslz6" Mar 17 17:40:08.661220 kubelet[3225]: I0317 17:40:08.658560 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3bf7e7ed-0c0c-488f-857e-f7ef210da8b5-lib-modules\") pod \"cilium-mslz6\" (UID: \"3bf7e7ed-0c0c-488f-857e-f7ef210da8b5\") " pod="kube-system/cilium-mslz6" Mar 17 17:40:08.661220 kubelet[3225]: I0317 17:40:08.658597 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3bf7e7ed-0c0c-488f-857e-f7ef210da8b5-host-proc-sys-kernel\") pod \"cilium-mslz6\" (UID: \"3bf7e7ed-0c0c-488f-857e-f7ef210da8b5\") " pod="kube-system/cilium-mslz6" Mar 17 17:40:08.661220 kubelet[3225]: I0317 17:40:08.658633 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77zmz\" (UniqueName: \"kubernetes.io/projected/3bf7e7ed-0c0c-488f-857e-f7ef210da8b5-kube-api-access-77zmz\") pod \"cilium-mslz6\" (UID: \"3bf7e7ed-0c0c-488f-857e-f7ef210da8b5\") " pod="kube-system/cilium-mslz6" Mar 17 17:40:08.661220 kubelet[3225]: I0317 17:40:08.658669 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3bf7e7ed-0c0c-488f-857e-f7ef210da8b5-etc-cni-netd\") pod \"cilium-mslz6\" (UID: \"3bf7e7ed-0c0c-488f-857e-f7ef210da8b5\") " pod="kube-system/cilium-mslz6" Mar 17 17:40:08.661220 kubelet[3225]: I0317 17:40:08.658709 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/3bf7e7ed-0c0c-488f-857e-f7ef210da8b5-host-proc-sys-net\") pod \"cilium-mslz6\" (UID: \"3bf7e7ed-0c0c-488f-857e-f7ef210da8b5\") " pod="kube-system/cilium-mslz6" Mar 17 17:40:08.661647 kubelet[3225]: I0317 17:40:08.658755 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3bf7e7ed-0c0c-488f-857e-f7ef210da8b5-cilium-ipsec-secrets\") pod \"cilium-mslz6\" (UID: \"3bf7e7ed-0c0c-488f-857e-f7ef210da8b5\") " pod="kube-system/cilium-mslz6" Mar 17 17:40:08.661647 kubelet[3225]: I0317 17:40:08.658824 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3bf7e7ed-0c0c-488f-857e-f7ef210da8b5-xtables-lock\") pod \"cilium-mslz6\" (UID: \"3bf7e7ed-0c0c-488f-857e-f7ef210da8b5\") " pod="kube-system/cilium-mslz6" Mar 17 17:40:08.661647 kubelet[3225]: I0317 17:40:08.658869 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3bf7e7ed-0c0c-488f-857e-f7ef210da8b5-hostproc\") pod \"cilium-mslz6\" (UID: \"3bf7e7ed-0c0c-488f-857e-f7ef210da8b5\") " pod="kube-system/cilium-mslz6" Mar 17 17:40:08.661647 kubelet[3225]: I0317 17:40:08.658905 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3bf7e7ed-0c0c-488f-857e-f7ef210da8b5-clustermesh-secrets\") pod \"cilium-mslz6\" (UID: \"3bf7e7ed-0c0c-488f-857e-f7ef210da8b5\") " pod="kube-system/cilium-mslz6" Mar 17 17:40:08.842520 sshd[5245]: Accepted publickey for core from 147.75.109.163 port 46288 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4 Mar 17 17:40:08.845149 sshd-session[5245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:40:08.853046 
systemd-logind[1935]: New session 29 of user core. Mar 17 17:40:08.862811 systemd[1]: Started session-29.scope - Session 29 of User core. Mar 17 17:40:08.967832 containerd[1966]: time="2025-03-17T17:40:08.967771642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mslz6,Uid:3bf7e7ed-0c0c-488f-857e-f7ef210da8b5,Namespace:kube-system,Attempt:0,}" Mar 17 17:40:08.989491 sshd[5253]: Connection closed by 147.75.109.163 port 46288 Mar 17 17:40:08.991091 sshd-session[5245]: pam_unix(sshd:session): session closed for user core Mar 17 17:40:08.999568 systemd[1]: sshd@28-172.31.25.124:22-147.75.109.163:46288.service: Deactivated successfully. Mar 17 17:40:09.005613 systemd[1]: session-29.scope: Deactivated successfully. Mar 17 17:40:09.010505 systemd-logind[1935]: Session 29 logged out. Waiting for processes to exit. Mar 17 17:40:09.015831 containerd[1966]: time="2025-03-17T17:40:09.015339571Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:40:09.015831 containerd[1966]: time="2025-03-17T17:40:09.015480079Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:40:09.015831 containerd[1966]: time="2025-03-17T17:40:09.015520255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:40:09.019279 containerd[1966]: time="2025-03-17T17:40:09.016536715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:40:09.042791 systemd[1]: Started sshd@29-172.31.25.124:22-147.75.109.163:46294.service - OpenSSH per-connection server daemon (147.75.109.163:46294). Mar 17 17:40:09.045313 systemd-logind[1935]: Removed session 29. 
Mar 17 17:40:09.063389 systemd[1]: Started cri-containerd-20a7c50662df3eb20930f8619a0c0a655b2492dc743f40aad408cca0656bcf8e.scope - libcontainer container 20a7c50662df3eb20930f8619a0c0a655b2492dc743f40aad408cca0656bcf8e. Mar 17 17:40:09.112531 containerd[1966]: time="2025-03-17T17:40:09.112190599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mslz6,Uid:3bf7e7ed-0c0c-488f-857e-f7ef210da8b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"20a7c50662df3eb20930f8619a0c0a655b2492dc743f40aad408cca0656bcf8e\"" Mar 17 17:40:09.119429 containerd[1966]: time="2025-03-17T17:40:09.119333911Z" level=info msg="CreateContainer within sandbox \"20a7c50662df3eb20930f8619a0c0a655b2492dc743f40aad408cca0656bcf8e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 17:40:09.134741 containerd[1966]: time="2025-03-17T17:40:09.134683999Z" level=info msg="CreateContainer within sandbox \"20a7c50662df3eb20930f8619a0c0a655b2492dc743f40aad408cca0656bcf8e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"60858ca79021e69702807317fb6a0e813877f964f3e2ffc64e465227c2296e95\"" Mar 17 17:40:09.136056 containerd[1966]: time="2025-03-17T17:40:09.135783691Z" level=info msg="StartContainer for \"60858ca79021e69702807317fb6a0e813877f964f3e2ffc64e465227c2296e95\"" Mar 17 17:40:09.180564 systemd[1]: Started cri-containerd-60858ca79021e69702807317fb6a0e813877f964f3e2ffc64e465227c2296e95.scope - libcontainer container 60858ca79021e69702807317fb6a0e813877f964f3e2ffc64e465227c2296e95. Mar 17 17:40:09.230378 containerd[1966]: time="2025-03-17T17:40:09.228624812Z" level=info msg="StartContainer for \"60858ca79021e69702807317fb6a0e813877f964f3e2ffc64e465227c2296e95\" returns successfully" Mar 17 17:40:09.244407 systemd[1]: cri-containerd-60858ca79021e69702807317fb6a0e813877f964f3e2ffc64e465227c2296e95.scope: Deactivated successfully. 
Mar 17 17:40:09.261011 sshd[5281]: Accepted publickey for core from 147.75.109.163 port 46294 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4
Mar 17 17:40:09.265932 sshd-session[5281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:40:09.280234 systemd-logind[1935]: New session 30 of user core.
Mar 17 17:40:09.285598 systemd[1]: Started session-30.scope - Session 30 of User core.
Mar 17 17:40:09.321894 containerd[1966]: time="2025-03-17T17:40:09.321756308Z" level=info msg="shim disconnected" id=60858ca79021e69702807317fb6a0e813877f964f3e2ffc64e465227c2296e95 namespace=k8s.io
Mar 17 17:40:09.322228 containerd[1966]: time="2025-03-17T17:40:09.321882476Z" level=warning msg="cleaning up after shim disconnected" id=60858ca79021e69702807317fb6a0e813877f964f3e2ffc64e465227c2296e95 namespace=k8s.io
Mar 17 17:40:09.322228 containerd[1966]: time="2025-03-17T17:40:09.321930020Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:40:09.890492 containerd[1966]: time="2025-03-17T17:40:09.890201963Z" level=info msg="CreateContainer within sandbox \"20a7c50662df3eb20930f8619a0c0a655b2492dc743f40aad408cca0656bcf8e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 17 17:40:09.919707 containerd[1966]: time="2025-03-17T17:40:09.919600499Z" level=info msg="CreateContainer within sandbox \"20a7c50662df3eb20930f8619a0c0a655b2492dc743f40aad408cca0656bcf8e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"afc8bfe2d9834b40782d67d51b304b825d7fb2e9dac55c843daa365dfbb12725\""
Mar 17 17:40:09.920703 containerd[1966]: time="2025-03-17T17:40:09.920534147Z" level=info msg="StartContainer for \"afc8bfe2d9834b40782d67d51b304b825d7fb2e9dac55c843daa365dfbb12725\""
Mar 17 17:40:09.980271 systemd[1]: Started cri-containerd-afc8bfe2d9834b40782d67d51b304b825d7fb2e9dac55c843daa365dfbb12725.scope - libcontainer container afc8bfe2d9834b40782d67d51b304b825d7fb2e9dac55c843daa365dfbb12725.
Mar 17 17:40:10.036674 containerd[1966]: time="2025-03-17T17:40:10.036598760Z" level=info msg="StartContainer for \"afc8bfe2d9834b40782d67d51b304b825d7fb2e9dac55c843daa365dfbb12725\" returns successfully"
Mar 17 17:40:10.047617 systemd[1]: cri-containerd-afc8bfe2d9834b40782d67d51b304b825d7fb2e9dac55c843daa365dfbb12725.scope: Deactivated successfully.
Mar 17 17:40:10.083414 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-afc8bfe2d9834b40782d67d51b304b825d7fb2e9dac55c843daa365dfbb12725-rootfs.mount: Deactivated successfully.
Mar 17 17:40:10.093285 containerd[1966]: time="2025-03-17T17:40:10.093002048Z" level=info msg="shim disconnected" id=afc8bfe2d9834b40782d67d51b304b825d7fb2e9dac55c843daa365dfbb12725 namespace=k8s.io
Mar 17 17:40:10.093285 containerd[1966]: time="2025-03-17T17:40:10.093155780Z" level=warning msg="cleaning up after shim disconnected" id=afc8bfe2d9834b40782d67d51b304b825d7fb2e9dac55c843daa365dfbb12725 namespace=k8s.io
Mar 17 17:40:10.093285 containerd[1966]: time="2025-03-17T17:40:10.093177944Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:40:10.895743 containerd[1966]: time="2025-03-17T17:40:10.895580580Z" level=info msg="CreateContainer within sandbox \"20a7c50662df3eb20930f8619a0c0a655b2492dc743f40aad408cca0656bcf8e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 17 17:40:10.932898 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2573201747.mount: Deactivated successfully.
Mar 17 17:40:10.937594 containerd[1966]: time="2025-03-17T17:40:10.937484988Z" level=info msg="CreateContainer within sandbox \"20a7c50662df3eb20930f8619a0c0a655b2492dc743f40aad408cca0656bcf8e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8f6d8a179989b112064867076807e1376fe839b7aba6de8632c12eed6bc605a8\""
Mar 17 17:40:10.940323 containerd[1966]: time="2025-03-17T17:40:10.938614092Z" level=info msg="StartContainer for \"8f6d8a179989b112064867076807e1376fe839b7aba6de8632c12eed6bc605a8\""
Mar 17 17:40:11.007579 systemd[1]: Started cri-containerd-8f6d8a179989b112064867076807e1376fe839b7aba6de8632c12eed6bc605a8.scope - libcontainer container 8f6d8a179989b112064867076807e1376fe839b7aba6de8632c12eed6bc605a8.
Mar 17 17:40:11.077159 containerd[1966]: time="2025-03-17T17:40:11.077067273Z" level=info msg="StartContainer for \"8f6d8a179989b112064867076807e1376fe839b7aba6de8632c12eed6bc605a8\" returns successfully"
Mar 17 17:40:11.079588 systemd[1]: cri-containerd-8f6d8a179989b112064867076807e1376fe839b7aba6de8632c12eed6bc605a8.scope: Deactivated successfully.
Mar 17 17:40:11.121949 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f6d8a179989b112064867076807e1376fe839b7aba6de8632c12eed6bc605a8-rootfs.mount: Deactivated successfully.
Mar 17 17:40:11.130448 containerd[1966]: time="2025-03-17T17:40:11.130085781Z" level=info msg="shim disconnected" id=8f6d8a179989b112064867076807e1376fe839b7aba6de8632c12eed6bc605a8 namespace=k8s.io
Mar 17 17:40:11.130704 containerd[1966]: time="2025-03-17T17:40:11.130434681Z" level=warning msg="cleaning up after shim disconnected" id=8f6d8a179989b112064867076807e1376fe839b7aba6de8632c12eed6bc605a8 namespace=k8s.io
Mar 17 17:40:11.130704 containerd[1966]: time="2025-03-17T17:40:11.130478997Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:40:11.900979 containerd[1966]: time="2025-03-17T17:40:11.900865897Z" level=info msg="CreateContainer within sandbox \"20a7c50662df3eb20930f8619a0c0a655b2492dc743f40aad408cca0656bcf8e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 17 17:40:11.939079 containerd[1966]: time="2025-03-17T17:40:11.938912461Z" level=info msg="CreateContainer within sandbox \"20a7c50662df3eb20930f8619a0c0a655b2492dc743f40aad408cca0656bcf8e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"08c30c7c0420dc9c73a25bcf9bfb79b28e3668372c4c9db802536960b813fdd2\""
Mar 17 17:40:11.940924 containerd[1966]: time="2025-03-17T17:40:11.940859617Z" level=info msg="StartContainer for \"08c30c7c0420dc9c73a25bcf9bfb79b28e3668372c4c9db802536960b813fdd2\""
Mar 17 17:40:12.001553 systemd[1]: Started cri-containerd-08c30c7c0420dc9c73a25bcf9bfb79b28e3668372c4c9db802536960b813fdd2.scope - libcontainer container 08c30c7c0420dc9c73a25bcf9bfb79b28e3668372c4c9db802536960b813fdd2.
Mar 17 17:40:12.045905 systemd[1]: cri-containerd-08c30c7c0420dc9c73a25bcf9bfb79b28e3668372c4c9db802536960b813fdd2.scope: Deactivated successfully.
Mar 17 17:40:12.051454 containerd[1966]: time="2025-03-17T17:40:12.050737450Z" level=info msg="StartContainer for \"08c30c7c0420dc9c73a25bcf9bfb79b28e3668372c4c9db802536960b813fdd2\" returns successfully"
Mar 17 17:40:12.085930 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-08c30c7c0420dc9c73a25bcf9bfb79b28e3668372c4c9db802536960b813fdd2-rootfs.mount: Deactivated successfully.
Mar 17 17:40:12.093795 containerd[1966]: time="2025-03-17T17:40:12.093698698Z" level=info msg="shim disconnected" id=08c30c7c0420dc9c73a25bcf9bfb79b28e3668372c4c9db802536960b813fdd2 namespace=k8s.io
Mar 17 17:40:12.094458 containerd[1966]: time="2025-03-17T17:40:12.094380490Z" level=warning msg="cleaning up after shim disconnected" id=08c30c7c0420dc9c73a25bcf9bfb79b28e3668372c4c9db802536960b813fdd2 namespace=k8s.io
Mar 17 17:40:12.094458 containerd[1966]: time="2025-03-17T17:40:12.094408870Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:40:12.427281 kubelet[3225]: E0317 17:40:12.426323 3225 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-rz2mb" podUID="5554ff47-5a94-4f15-be53-7f1a82a56d8d"
Mar 17 17:40:12.427281 kubelet[3225]: E0317 17:40:12.426902 3225 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-kq5x4" podUID="923ceb12-c7d5-4226-8fde-d2a87e414406"
Mar 17 17:40:12.619914 kubelet[3225]: E0317 17:40:12.619844 3225 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 17 17:40:12.910470 containerd[1966]: time="2025-03-17T17:40:12.910292762Z" level=info msg="CreateContainer within sandbox \"20a7c50662df3eb20930f8619a0c0a655b2492dc743f40aad408cca0656bcf8e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 17 17:40:12.952579 containerd[1966]: time="2025-03-17T17:40:12.952494266Z" level=info msg="CreateContainer within sandbox \"20a7c50662df3eb20930f8619a0c0a655b2492dc743f40aad408cca0656bcf8e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3ec9d3d9dc96454fd5ce9c93fa5e8d3cb1938a93a41c675eb415b191c3414263\""
Mar 17 17:40:12.953289 containerd[1966]: time="2025-03-17T17:40:12.953207582Z" level=info msg="StartContainer for \"3ec9d3d9dc96454fd5ce9c93fa5e8d3cb1938a93a41c675eb415b191c3414263\""
Mar 17 17:40:13.033573 systemd[1]: Started cri-containerd-3ec9d3d9dc96454fd5ce9c93fa5e8d3cb1938a93a41c675eb415b191c3414263.scope - libcontainer container 3ec9d3d9dc96454fd5ce9c93fa5e8d3cb1938a93a41c675eb415b191c3414263.
Mar 17 17:40:13.095901 containerd[1966]: time="2025-03-17T17:40:13.095664767Z" level=info msg="StartContainer for \"3ec9d3d9dc96454fd5ce9c93fa5e8d3cb1938a93a41c675eb415b191c3414263\" returns successfully"
Mar 17 17:40:13.957115 kubelet[3225]: I0317 17:40:13.956211 3225 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mslz6" podStartSLOduration=5.956187267 podStartE2EDuration="5.956187267s" podCreationTimestamp="2025-03-17 17:40:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:40:13.953375451 +0000 UTC m=+131.807916713" watchObservedRunningTime="2025-03-17 17:40:13.956187267 +0000 UTC m=+131.810728505"
Mar 17 17:40:13.980314 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Mar 17 17:40:14.427207 kubelet[3225]: E0317 17:40:14.426533 3225 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-rz2mb" podUID="5554ff47-5a94-4f15-be53-7f1a82a56d8d"
Mar 17 17:40:14.427412 kubelet[3225]: E0317 17:40:14.427281 3225 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-kq5x4" podUID="923ceb12-c7d5-4226-8fde-d2a87e414406"
Mar 17 17:40:16.272748 kubelet[3225]: I0317 17:40:16.271178 3225 setters.go:602] "Node became not ready" node="ip-172-31-25-124" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-17T17:40:16Z","lastTransitionTime":"2025-03-17T17:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 17 17:40:16.432948 kubelet[3225]: E0317 17:40:16.432871 3225 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-kq5x4" podUID="923ceb12-c7d5-4226-8fde-d2a87e414406"
Mar 17 17:40:16.433873 kubelet[3225]: E0317 17:40:16.433800 3225 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-rz2mb" podUID="5554ff47-5a94-4f15-be53-7f1a82a56d8d"
Mar 17 17:40:18.252288 (udev-worker)[6091]: Network interface NamePolicy= disabled on kernel command line.
Mar 17 17:40:18.256811 (udev-worker)[6092]: Network interface NamePolicy= disabled on kernel command line.
Mar 17 17:40:18.272482 systemd-networkd[1861]: lxc_health: Link UP
Mar 17 17:40:18.307881 systemd-networkd[1861]: lxc_health: Gained carrier
Mar 17 17:40:20.115005 systemd-networkd[1861]: lxc_health: Gained IPv6LL
Mar 17 17:40:22.616309 systemd[1]: run-containerd-runc-k8s.io-3ec9d3d9dc96454fd5ce9c93fa5e8d3cb1938a93a41c675eb415b191c3414263-runc.HbldAZ.mount: Deactivated successfully.
Mar 17 17:40:23.040347 ntpd[1928]: Listen normally on 15 lxc_health [fe80::78bb:18ff:feb2:5447%14]:123
Mar 17 17:40:23.040874 ntpd[1928]: 17 Mar 17:40:23 ntpd[1928]: Listen normally on 15 lxc_health [fe80::78bb:18ff:feb2:5447%14]:123
Mar 17 17:40:25.069280 sshd[5351]: Connection closed by 147.75.109.163 port 46294
Mar 17 17:40:25.068124 sshd-session[5281]: pam_unix(sshd:session): session closed for user core
Mar 17 17:40:25.076694 systemd[1]: sshd@29-172.31.25.124:22-147.75.109.163:46294.service: Deactivated successfully.
Mar 17 17:40:25.085172 systemd[1]: session-30.scope: Deactivated successfully.
Mar 17 17:40:25.089671 systemd-logind[1935]: Session 30 logged out. Waiting for processes to exit.
Mar 17 17:40:25.094418 systemd-logind[1935]: Removed session 30.
Mar 17 17:40:51.416956 systemd[1]: cri-containerd-1e3097e7d104b423fea82f23599496fb10a7ba3bcd3c41db49f654d8ad7a1f9e.scope: Deactivated successfully.
Mar 17 17:40:51.417563 systemd[1]: cri-containerd-1e3097e7d104b423fea82f23599496fb10a7ba3bcd3c41db49f654d8ad7a1f9e.scope: Consumed 4.282s CPU time, 55.4M memory peak.
Mar 17 17:40:51.457891 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e3097e7d104b423fea82f23599496fb10a7ba3bcd3c41db49f654d8ad7a1f9e-rootfs.mount: Deactivated successfully.
Mar 17 17:40:51.467457 containerd[1966]: time="2025-03-17T17:40:51.467291846Z" level=info msg="shim disconnected" id=1e3097e7d104b423fea82f23599496fb10a7ba3bcd3c41db49f654d8ad7a1f9e namespace=k8s.io
Mar 17 17:40:51.468373 containerd[1966]: time="2025-03-17T17:40:51.467423546Z" level=warning msg="cleaning up after shim disconnected" id=1e3097e7d104b423fea82f23599496fb10a7ba3bcd3c41db49f654d8ad7a1f9e namespace=k8s.io
Mar 17 17:40:51.468373 containerd[1966]: time="2025-03-17T17:40:51.467584286Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:40:51.491066 containerd[1966]: time="2025-03-17T17:40:51.489044246Z" level=warning msg="cleanup warnings time=\"2025-03-17T17:40:51Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 17 17:40:52.018060 kubelet[3225]: I0317 17:40:52.017997 3225 scope.go:117] "RemoveContainer" containerID="1e3097e7d104b423fea82f23599496fb10a7ba3bcd3c41db49f654d8ad7a1f9e"
Mar 17 17:40:52.021813 containerd[1966]: time="2025-03-17T17:40:52.021595656Z" level=info msg="CreateContainer within sandbox \"00fbc1148dec3fc39517077abd5dc74d195440fbd59d4ccf90136f4d8a928f34\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Mar 17 17:40:52.050702 containerd[1966]: time="2025-03-17T17:40:52.050582100Z" level=info msg="CreateContainer within sandbox \"00fbc1148dec3fc39517077abd5dc74d195440fbd59d4ccf90136f4d8a928f34\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"121d8b28781ba01cce3773736a58f51ddecd73ba02820b74aa1b3252f4c164e0\""
Mar 17 17:40:52.051698 containerd[1966]: time="2025-03-17T17:40:52.051637236Z" level=info msg="StartContainer for \"121d8b28781ba01cce3773736a58f51ddecd73ba02820b74aa1b3252f4c164e0\""
Mar 17 17:40:52.110558 systemd[1]: Started cri-containerd-121d8b28781ba01cce3773736a58f51ddecd73ba02820b74aa1b3252f4c164e0.scope - libcontainer container 121d8b28781ba01cce3773736a58f51ddecd73ba02820b74aa1b3252f4c164e0.
Mar 17 17:40:52.182340 containerd[1966]: time="2025-03-17T17:40:52.182222257Z" level=info msg="StartContainer for \"121d8b28781ba01cce3773736a58f51ddecd73ba02820b74aa1b3252f4c164e0\" returns successfully"
Mar 17 17:40:55.333730 kubelet[3225]: E0317 17:40:55.333648 3225 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-124?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 17 17:40:56.389919 systemd[1]: cri-containerd-2383259a3abdc644c07bf4e5e42385978bec6cc2848651e14168c0c2857b23f8.scope: Deactivated successfully.
Mar 17 17:40:56.392056 systemd[1]: cri-containerd-2383259a3abdc644c07bf4e5e42385978bec6cc2848651e14168c0c2857b23f8.scope: Consumed 5.460s CPU time, 22.7M memory peak.
Mar 17 17:40:56.440135 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2383259a3abdc644c07bf4e5e42385978bec6cc2848651e14168c0c2857b23f8-rootfs.mount: Deactivated successfully.
Mar 17 17:40:56.455072 containerd[1966]: time="2025-03-17T17:40:56.454980978Z" level=info msg="shim disconnected" id=2383259a3abdc644c07bf4e5e42385978bec6cc2848651e14168c0c2857b23f8 namespace=k8s.io
Mar 17 17:40:56.455072 containerd[1966]: time="2025-03-17T17:40:56.455060370Z" level=warning msg="cleaning up after shim disconnected" id=2383259a3abdc644c07bf4e5e42385978bec6cc2848651e14168c0c2857b23f8 namespace=k8s.io
Mar 17 17:40:56.455809 containerd[1966]: time="2025-03-17T17:40:56.455083038Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:40:57.035454 kubelet[3225]: I0317 17:40:57.035167 3225 scope.go:117] "RemoveContainer" containerID="2383259a3abdc644c07bf4e5e42385978bec6cc2848651e14168c0c2857b23f8"
Mar 17 17:40:57.038613 containerd[1966]: time="2025-03-17T17:40:57.038557193Z" level=info msg="CreateContainer within sandbox \"5eb4255955936db326166ae2e27b99741bb9bc620aba19bd386ff339f53a5178\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Mar 17 17:40:57.073780 containerd[1966]: time="2025-03-17T17:40:57.073708685Z" level=info msg="CreateContainer within sandbox \"5eb4255955936db326166ae2e27b99741bb9bc620aba19bd386ff339f53a5178\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"6aa7271e3eac3eacbaa43765f70cc39da4778f8ce81c2d95c6864172e8dcc7fa\""
Mar 17 17:40:57.074740 containerd[1966]: time="2025-03-17T17:40:57.074654609Z" level=info msg="StartContainer for \"6aa7271e3eac3eacbaa43765f70cc39da4778f8ce81c2d95c6864172e8dcc7fa\""
Mar 17 17:40:57.123561 systemd[1]: Started cri-containerd-6aa7271e3eac3eacbaa43765f70cc39da4778f8ce81c2d95c6864172e8dcc7fa.scope - libcontainer container 6aa7271e3eac3eacbaa43765f70cc39da4778f8ce81c2d95c6864172e8dcc7fa.
Mar 17 17:40:57.187790 containerd[1966]: time="2025-03-17T17:40:57.187714914Z" level=info msg="StartContainer for \"6aa7271e3eac3eacbaa43765f70cc39da4778f8ce81c2d95c6864172e8dcc7fa\" returns successfully"
Mar 17 17:41:02.419257 containerd[1966]: time="2025-03-17T17:41:02.419186316Z" level=info msg="StopPodSandbox for \"5c0dad5a0ed6995fc6bd9bf356236cc399b7b65730fa985f61ebd2144eda987b\""
Mar 17 17:41:02.419931 containerd[1966]: time="2025-03-17T17:41:02.419360352Z" level=info msg="TearDown network for sandbox \"5c0dad5a0ed6995fc6bd9bf356236cc399b7b65730fa985f61ebd2144eda987b\" successfully"
Mar 17 17:41:02.419931 containerd[1966]: time="2025-03-17T17:41:02.419384856Z" level=info msg="StopPodSandbox for \"5c0dad5a0ed6995fc6bd9bf356236cc399b7b65730fa985f61ebd2144eda987b\" returns successfully"
Mar 17 17:41:02.420473 containerd[1966]: time="2025-03-17T17:41:02.420412428Z" level=info msg="RemovePodSandbox for \"5c0dad5a0ed6995fc6bd9bf356236cc399b7b65730fa985f61ebd2144eda987b\""
Mar 17 17:41:02.420555 containerd[1966]: time="2025-03-17T17:41:02.420485484Z" level=info msg="Forcibly stopping sandbox \"5c0dad5a0ed6995fc6bd9bf356236cc399b7b65730fa985f61ebd2144eda987b\""
Mar 17 17:41:02.420772 containerd[1966]: time="2025-03-17T17:41:02.420737160Z" level=info msg="TearDown network for sandbox \"5c0dad5a0ed6995fc6bd9bf356236cc399b7b65730fa985f61ebd2144eda987b\" successfully"
Mar 17 17:41:02.428258 containerd[1966]: time="2025-03-17T17:41:02.427851528Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5c0dad5a0ed6995fc6bd9bf356236cc399b7b65730fa985f61ebd2144eda987b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:41:02.428258 containerd[1966]: time="2025-03-17T17:41:02.427997244Z" level=info msg="RemovePodSandbox \"5c0dad5a0ed6995fc6bd9bf356236cc399b7b65730fa985f61ebd2144eda987b\" returns successfully"
Mar 17 17:41:02.428705 containerd[1966]: time="2025-03-17T17:41:02.428667000Z" level=info msg="StopPodSandbox for \"b6762dce8a89d456ee16d87bfdc1097c37f0b0a2fa0e6070a4ea8ae0129ba33f\""
Mar 17 17:41:02.429788 containerd[1966]: time="2025-03-17T17:41:02.429385644Z" level=info msg="TearDown network for sandbox \"b6762dce8a89d456ee16d87bfdc1097c37f0b0a2fa0e6070a4ea8ae0129ba33f\" successfully"
Mar 17 17:41:02.429788 containerd[1966]: time="2025-03-17T17:41:02.429431124Z" level=info msg="StopPodSandbox for \"b6762dce8a89d456ee16d87bfdc1097c37f0b0a2fa0e6070a4ea8ae0129ba33f\" returns successfully"
Mar 17 17:41:02.430289 containerd[1966]: time="2025-03-17T17:41:02.430222620Z" level=info msg="RemovePodSandbox for \"b6762dce8a89d456ee16d87bfdc1097c37f0b0a2fa0e6070a4ea8ae0129ba33f\""
Mar 17 17:41:02.430402 containerd[1966]: time="2025-03-17T17:41:02.430290396Z" level=info msg="Forcibly stopping sandbox \"b6762dce8a89d456ee16d87bfdc1097c37f0b0a2fa0e6070a4ea8ae0129ba33f\""
Mar 17 17:41:02.430402 containerd[1966]: time="2025-03-17T17:41:02.430389480Z" level=info msg="TearDown network for sandbox \"b6762dce8a89d456ee16d87bfdc1097c37f0b0a2fa0e6070a4ea8ae0129ba33f\" successfully"
Mar 17 17:41:02.436702 containerd[1966]: time="2025-03-17T17:41:02.436629276Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b6762dce8a89d456ee16d87bfdc1097c37f0b0a2fa0e6070a4ea8ae0129ba33f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:41:02.436871 containerd[1966]: time="2025-03-17T17:41:02.436765932Z" level=info msg="RemovePodSandbox \"b6762dce8a89d456ee16d87bfdc1097c37f0b0a2fa0e6070a4ea8ae0129ba33f\" returns successfully"
Mar 17 17:41:05.335012 kubelet[3225]: E0317 17:41:05.334473 3225 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-124?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"