Feb 13 19:50:20.224195 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Feb 13 19:50:20.224242 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Feb 13 18:13:29 -00 2025
Feb 13 19:50:20.224268 kernel: KASLR disabled due to lack of seed
Feb 13 19:50:20.224285 kernel: efi: EFI v2.7 by EDK II
Feb 13 19:50:20.224301 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18
Feb 13 19:50:20.224316 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:50:20.224334 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Feb 13 19:50:20.224349 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Feb 13 19:50:20.224367 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 13 19:50:20.224382 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Feb 13 19:50:20.224403 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 13 19:50:20.224419 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Feb 13 19:50:20.224434 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Feb 13 19:50:20.224450 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Feb 13 19:50:20.224468 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 13 19:50:20.224489 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Feb 13 19:50:20.224507 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Feb 13 19:50:20.224523 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Feb 13 19:50:20.224539 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Feb 13 19:50:20.224555 kernel: printk: bootconsole [uart0] enabled
Feb 13 19:50:20.224572 kernel: NUMA: Failed to initialise from firmware
Feb 13 19:50:20.224588 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 19:50:20.224604 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Feb 13 19:50:20.224621 kernel: Zone ranges:
Feb 13 19:50:20.224637 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Feb 13 19:50:20.224653 kernel: DMA32 empty
Feb 13 19:50:20.224674 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Feb 13 19:50:20.224690 kernel: Movable zone start for each node
Feb 13 19:50:20.224706 kernel: Early memory node ranges
Feb 13 19:50:20.224722 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Feb 13 19:50:20.224738 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Feb 13 19:50:20.224754 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Feb 13 19:50:20.224771 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Feb 13 19:50:20.224787 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Feb 13 19:50:20.224803 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Feb 13 19:50:20.224820 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Feb 13 19:50:20.224836 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Feb 13 19:50:20.224852 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 19:50:20.224873 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Feb 13 19:50:20.224891 kernel: psci: probing for conduit method from ACPI.
Feb 13 19:50:20.224915 kernel: psci: PSCIv1.0 detected in firmware.
Feb 13 19:50:20.224933 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 19:50:20.225018 kernel: psci: Trusted OS migration not required
Feb 13 19:50:20.225048 kernel: psci: SMC Calling Convention v1.1
Feb 13 19:50:20.225067 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 19:50:20.225084 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 19:50:20.225102 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 13 19:50:20.225119 kernel: Detected PIPT I-cache on CPU0
Feb 13 19:50:20.225136 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 19:50:20.225155 kernel: CPU features: detected: Spectre-v2
Feb 13 19:50:20.225172 kernel: CPU features: detected: Spectre-v3a
Feb 13 19:50:20.225189 kernel: CPU features: detected: Spectre-BHB
Feb 13 19:50:20.225206 kernel: CPU features: detected: ARM erratum 1742098
Feb 13 19:50:20.225224 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Feb 13 19:50:20.225247 kernel: alternatives: applying boot alternatives
Feb 13 19:50:20.225269 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7
Feb 13 19:50:20.225287 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:50:20.225305 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 19:50:20.225322 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:50:20.225339 kernel: Fallback order for Node 0: 0
Feb 13 19:50:20.225357 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Feb 13 19:50:20.225374 kernel: Policy zone: Normal
Feb 13 19:50:20.225391 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:50:20.225409 kernel: software IO TLB: area num 2.
Feb 13 19:50:20.225426 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Feb 13 19:50:20.225449 kernel: Memory: 3820216K/4030464K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 210248K reserved, 0K cma-reserved)
Feb 13 19:50:20.225466 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 19:50:20.225483 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:50:20.225502 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:50:20.225520 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 19:50:20.225538 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:50:20.225556 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:50:20.225573 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:50:20.225590 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 19:50:20.225607 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 19:50:20.225624 kernel: GICv3: 96 SPIs implemented
Feb 13 19:50:20.225645 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 19:50:20.225663 kernel: Root IRQ handler: gic_handle_irq
Feb 13 19:50:20.225679 kernel: GICv3: GICv3 features: 16 PPIs
Feb 13 19:50:20.225697 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Feb 13 19:50:20.225713 kernel: ITS [mem 0x10080000-0x1009ffff]
Feb 13 19:50:20.225731 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 19:50:20.225748 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 19:50:20.225765 kernel: GICv3: using LPI property table @0x00000004000d0000
Feb 13 19:50:20.225782 kernel: ITS: Using hypervisor restricted LPI range [128]
Feb 13 19:50:20.225800 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Feb 13 19:50:20.225817 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:50:20.225834 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Feb 13 19:50:20.225855 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Feb 13 19:50:20.225873 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Feb 13 19:50:20.225890 kernel: Console: colour dummy device 80x25
Feb 13 19:50:20.225908 kernel: printk: console [tty1] enabled
Feb 13 19:50:20.225926 kernel: ACPI: Core revision 20230628
Feb 13 19:50:20.228010 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Feb 13 19:50:20.228043 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:50:20.228062 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:50:20.228080 kernel: landlock: Up and running.
Feb 13 19:50:20.228111 kernel: SELinux: Initializing.
Feb 13 19:50:20.228130 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:50:20.228148 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:50:20.228165 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:50:20.228183 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:50:20.228201 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:50:20.228220 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:50:20.228238 kernel: Platform MSI: ITS@0x10080000 domain created
Feb 13 19:50:20.228255 kernel: PCI/MSI: ITS@0x10080000 domain created
Feb 13 19:50:20.228277 kernel: Remapping and enabling EFI services.
Feb 13 19:50:20.228295 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:50:20.228313 kernel: Detected PIPT I-cache on CPU1
Feb 13 19:50:20.228330 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Feb 13 19:50:20.228348 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Feb 13 19:50:20.228365 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Feb 13 19:50:20.228383 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 19:50:20.228400 kernel: SMP: Total of 2 processors activated.
Feb 13 19:50:20.228418 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 19:50:20.228439 kernel: CPU features: detected: 32-bit EL1 Support
Feb 13 19:50:20.228457 kernel: CPU features: detected: CRC32 instructions
Feb 13 19:50:20.228475 kernel: CPU: All CPU(s) started at EL1
Feb 13 19:50:20.228504 kernel: alternatives: applying system-wide alternatives
Feb 13 19:50:20.228527 kernel: devtmpfs: initialized
Feb 13 19:50:20.228545 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:50:20.228564 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 19:50:20.228582 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:50:20.228600 kernel: SMBIOS 3.0.0 present.
Feb 13 19:50:20.228618 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Feb 13 19:50:20.228641 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:50:20.228660 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 19:50:20.228678 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 19:50:20.228698 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 19:50:20.228716 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:50:20.228735 kernel: audit: type=2000 audit(0.287:1): state=initialized audit_enabled=0 res=1
Feb 13 19:50:20.228753 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:50:20.228776 kernel: cpuidle: using governor menu
Feb 13 19:50:20.228795 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 19:50:20.228813 kernel: ASID allocator initialised with 65536 entries
Feb 13 19:50:20.228832 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:50:20.228851 kernel: Serial: AMBA PL011 UART driver
Feb 13 19:50:20.228869 kernel: Modules: 17520 pages in range for non-PLT usage
Feb 13 19:50:20.228888 kernel: Modules: 509040 pages in range for PLT usage
Feb 13 19:50:20.228906 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:50:20.228924 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:50:20.228981 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 19:50:20.229020 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 19:50:20.229043 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:50:20.229062 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:50:20.229081 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 19:50:20.229100 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 19:50:20.229119 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:50:20.229138 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:50:20.229157 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:50:20.229184 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:50:20.229204 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 19:50:20.229223 kernel: ACPI: Interpreter enabled
Feb 13 19:50:20.229242 kernel: ACPI: Using GIC for interrupt routing
Feb 13 19:50:20.229260 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 19:50:20.229279 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Feb 13 19:50:20.229630 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:50:20.229850 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 19:50:20.232316 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 19:50:20.232553 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Feb 13 19:50:20.232760 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Feb 13 19:50:20.232788 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Feb 13 19:50:20.232808 kernel: acpiphp: Slot [1] registered
Feb 13 19:50:20.232827 kernel: acpiphp: Slot [2] registered
Feb 13 19:50:20.232845 kernel: acpiphp: Slot [3] registered
Feb 13 19:50:20.232864 kernel: acpiphp: Slot [4] registered
Feb 13 19:50:20.232899 kernel: acpiphp: Slot [5] registered
Feb 13 19:50:20.232918 kernel: acpiphp: Slot [6] registered
Feb 13 19:50:20.232980 kernel: acpiphp: Slot [7] registered
Feb 13 19:50:20.233021 kernel: acpiphp: Slot [8] registered
Feb 13 19:50:20.233043 kernel: acpiphp: Slot [9] registered
Feb 13 19:50:20.233062 kernel: acpiphp: Slot [10] registered
Feb 13 19:50:20.233081 kernel: acpiphp: Slot [11] registered
Feb 13 19:50:20.233099 kernel: acpiphp: Slot [12] registered
Feb 13 19:50:20.233118 kernel: acpiphp: Slot [13] registered
Feb 13 19:50:20.233136 kernel: acpiphp: Slot [14] registered
Feb 13 19:50:20.233163 kernel: acpiphp: Slot [15] registered
Feb 13 19:50:20.233182 kernel: acpiphp: Slot [16] registered
Feb 13 19:50:20.233200 kernel: acpiphp: Slot [17] registered
Feb 13 19:50:20.233219 kernel: acpiphp: Slot [18] registered
Feb 13 19:50:20.233237 kernel: acpiphp: Slot [19] registered
Feb 13 19:50:20.233255 kernel: acpiphp: Slot [20] registered
Feb 13 19:50:20.233274 kernel: acpiphp: Slot [21] registered
Feb 13 19:50:20.233292 kernel: acpiphp: Slot [22] registered
Feb 13 19:50:20.233310 kernel: acpiphp: Slot [23] registered
Feb 13 19:50:20.233333 kernel: acpiphp: Slot [24] registered
Feb 13 19:50:20.233352 kernel: acpiphp: Slot [25] registered
Feb 13 19:50:20.233370 kernel: acpiphp: Slot [26] registered
Feb 13 19:50:20.233388 kernel: acpiphp: Slot [27] registered
Feb 13 19:50:20.233407 kernel: acpiphp: Slot [28] registered
Feb 13 19:50:20.233425 kernel: acpiphp: Slot [29] registered
Feb 13 19:50:20.233444 kernel: acpiphp: Slot [30] registered
Feb 13 19:50:20.233462 kernel: acpiphp: Slot [31] registered
Feb 13 19:50:20.233481 kernel: PCI host bridge to bus 0000:00
Feb 13 19:50:20.233717 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Feb 13 19:50:20.233914 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 19:50:20.236393 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Feb 13 19:50:20.236602 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Feb 13 19:50:20.236849 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Feb 13 19:50:20.237166 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Feb 13 19:50:20.237384 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Feb 13 19:50:20.237614 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 13 19:50:20.237827 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Feb 13 19:50:20.240169 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 19:50:20.240427 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 13 19:50:20.240643 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Feb 13 19:50:20.240872 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Feb 13 19:50:20.241181 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Feb 13 19:50:20.241412 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 19:50:20.241629 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Feb 13 19:50:20.241838 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Feb 13 19:50:20.245542 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Feb 13 19:50:20.245770 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Feb 13 19:50:20.246019 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Feb 13 19:50:20.246222 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Feb 13 19:50:20.246403 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 19:50:20.246583 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Feb 13 19:50:20.246609 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 19:50:20.246630 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 19:50:20.246650 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 19:50:20.246670 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 19:50:20.246689 kernel: iommu: Default domain type: Translated
Feb 13 19:50:20.246709 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 19:50:20.246735 kernel: efivars: Registered efivars operations
Feb 13 19:50:20.246755 kernel: vgaarb: loaded
Feb 13 19:50:20.246773 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 19:50:20.246792 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:50:20.246812 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:50:20.246831 kernel: pnp: PnP ACPI init
Feb 13 19:50:20.247245 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Feb 13 19:50:20.247277 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 19:50:20.247303 kernel: NET: Registered PF_INET protocol family
Feb 13 19:50:20.247322 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:50:20.247341 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 19:50:20.247360 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:50:20.247379 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:50:20.247397 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 19:50:20.247415 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 19:50:20.247434 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:50:20.247452 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:50:20.247475 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:50:20.247493 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:50:20.247512 kernel: kvm [1]: HYP mode not available
Feb 13 19:50:20.247530 kernel: Initialise system trusted keyrings
Feb 13 19:50:20.247548 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 19:50:20.247567 kernel: Key type asymmetric registered
Feb 13 19:50:20.247585 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:50:20.247604 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 19:50:20.247622 kernel: io scheduler mq-deadline registered
Feb 13 19:50:20.247645 kernel: io scheduler kyber registered
Feb 13 19:50:20.247664 kernel: io scheduler bfq registered
Feb 13 19:50:20.247870 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Feb 13 19:50:20.247897 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 19:50:20.247916 kernel: ACPI: button: Power Button [PWRB]
Feb 13 19:50:20.247952 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Feb 13 19:50:20.247977 kernel: ACPI: button: Sleep Button [SLPB]
Feb 13 19:50:20.247996 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:50:20.248022 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Feb 13 19:50:20.248233 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Feb 13 19:50:20.248260 kernel: printk: console [ttyS0] disabled
Feb 13 19:50:20.248280 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Feb 13 19:50:20.248298 kernel: printk: console [ttyS0] enabled
Feb 13 19:50:20.248317 kernel: printk: bootconsole [uart0] disabled
Feb 13 19:50:20.248336 kernel: thunder_xcv, ver 1.0
Feb 13 19:50:20.248354 kernel: thunder_bgx, ver 1.0
Feb 13 19:50:20.248373 kernel: nicpf, ver 1.0
Feb 13 19:50:20.248397 kernel: nicvf, ver 1.0
Feb 13 19:50:20.248629 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 19:50:20.248828 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T19:50:19 UTC (1739476219)
Feb 13 19:50:20.248855 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 19:50:20.248875 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Feb 13 19:50:20.248893 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 19:50:20.248912 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 19:50:20.248931 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:50:20.248988 kernel: Segment Routing with IPv6
Feb 13 19:50:20.249026 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:50:20.249047 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:50:20.249066 kernel: Key type dns_resolver registered
Feb 13 19:50:20.249085 kernel: registered taskstats version 1
Feb 13 19:50:20.249103 kernel: Loading compiled-in X.509 certificates
Feb 13 19:50:20.249122 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 8bd805622262697b24b0fa7c407ae82c4289ceec'
Feb 13 19:50:20.249141 kernel: Key type .fscrypt registered
Feb 13 19:50:20.249159 kernel: Key type fscrypt-provisioning registered
Feb 13 19:50:20.249183 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:50:20.249202 kernel: ima: Allocated hash algorithm: sha1
Feb 13 19:50:20.249220 kernel: ima: No architecture policies found
Feb 13 19:50:20.249239 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 19:50:20.249257 kernel: clk: Disabling unused clocks
Feb 13 19:50:20.249275 kernel: Freeing unused kernel memory: 39360K
Feb 13 19:50:20.249294 kernel: Run /init as init process
Feb 13 19:50:20.249312 kernel: with arguments:
Feb 13 19:50:20.249330 kernel: /init
Feb 13 19:50:20.249348 kernel: with environment:
Feb 13 19:50:20.249371 kernel: HOME=/
Feb 13 19:50:20.249389 kernel: TERM=linux
Feb 13 19:50:20.249408 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 19:50:20.249430 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:50:20.249454 systemd[1]: Detected virtualization amazon.
Feb 13 19:50:20.249475 systemd[1]: Detected architecture arm64.
Feb 13 19:50:20.249494 systemd[1]: Running in initrd.
Feb 13 19:50:20.249518 systemd[1]: No hostname configured, using default hostname.
Feb 13 19:50:20.249538 systemd[1]: Hostname set to .
Feb 13 19:50:20.249559 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:50:20.249579 systemd[1]: Queued start job for default target initrd.target.
Feb 13 19:50:20.249599 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:50:20.249619 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:50:20.249641 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 19:50:20.249662 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:50:20.249687 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 19:50:20.249708 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 19:50:20.249732 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 19:50:20.249753 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 19:50:20.249773 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:50:20.249793 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:50:20.249814 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:50:20.249838 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:50:20.249859 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:50:20.249879 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:50:20.249899 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:50:20.249920 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:50:20.250004 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:50:20.250030 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 19:50:20.250051 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:50:20.250071 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:50:20.250099 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:50:20.250119 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:50:20.250140 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 19:50:20.250160 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:50:20.250180 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 19:50:20.250200 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 19:50:20.250220 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:50:20.250241 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:50:20.250266 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:50:20.250286 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 19:50:20.250307 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:50:20.250327 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 19:50:20.250386 systemd-journald[251]: Collecting audit messages is disabled.
Feb 13 19:50:20.250436 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:50:20.250463 systemd-journald[251]: Journal started
Feb 13 19:50:20.250515 systemd-journald[251]: Runtime Journal (/run/log/journal/ec277e4ac9ca84bea7b656e30d67055a) is 8.0M, max 75.3M, 67.3M free.
Feb 13 19:50:20.221366 systemd-modules-load[252]: Inserted module 'overlay'
Feb 13 19:50:20.258976 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:50:20.259095 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 19:50:20.266981 kernel: Bridge firewalling registered
Feb 13 19:50:20.266979 systemd-modules-load[252]: Inserted module 'br_netfilter'
Feb 13 19:50:20.269915 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:50:20.275163 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:50:20.280436 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:50:20.297672 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:50:20.306903 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:50:20.311233 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:50:20.314697 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:50:20.365077 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:50:20.372826 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:50:20.381417 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:50:20.392871 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 19:50:20.397099 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:50:20.420393 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:50:20.431562 dracut-cmdline[286]: dracut-dracut-053
Feb 13 19:50:20.438481 dracut-cmdline[286]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7
Feb 13 19:50:20.503442 systemd-resolved[290]: Positive Trust Anchors:
Feb 13 19:50:20.503476 systemd-resolved[290]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:50:20.503536 systemd-resolved[290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:50:20.646993 kernel: SCSI subsystem initialized
Feb 13 19:50:20.655079 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 19:50:20.667082 kernel: iscsi: registered transport (tcp)
Feb 13 19:50:20.690282 kernel: iscsi: registered transport (qla4xxx)
Feb 13 19:50:20.690368 kernel: QLogic iSCSI HBA Driver
Feb 13 19:50:20.754005 kernel: random: crng init done
Feb 13 19:50:20.754689 systemd-resolved[290]: Defaulting to hostname 'linux'.
Feb 13 19:50:20.756917 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:50:20.765813 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:50:20.793101 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:50:20.805285 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 19:50:20.840668 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 19:50:20.840744 kernel: device-mapper: uevent: version 1.0.3
Feb 13 19:50:20.840772 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 19:50:20.910995 kernel: raid6: neonx8 gen() 6786 MB/s
Feb 13 19:50:20.927970 kernel: raid6: neonx4 gen() 6622 MB/s
Feb 13 19:50:20.944970 kernel: raid6: neonx2 gen() 5494 MB/s
Feb 13 19:50:20.961971 kernel: raid6: neonx1 gen() 3978 MB/s
Feb 13 19:50:20.978971 kernel: raid6: int64x8 gen() 3832 MB/s
Feb 13 19:50:20.995972 kernel: raid6: int64x4 gen() 3735 MB/s
Feb 13 19:50:21.012971 kernel: raid6: int64x2 gen() 3619 MB/s
Feb 13 19:50:21.030728 kernel: raid6: int64x1 gen() 2770 MB/s
Feb 13 19:50:21.030782 kernel: raid6: using algorithm neonx8 gen() 6786 MB/s
Feb 13 19:50:21.048700 kernel: raid6: .... xor() 4789 MB/s, rmw enabled
Feb 13 19:50:21.048742 kernel: raid6: using neon recovery algorithm
Feb 13 19:50:21.057462 kernel: xor: measuring software checksum speed
Feb 13 19:50:21.057540 kernel: 8regs : 10963 MB/sec
Feb 13 19:50:21.058573 kernel: 32regs : 11932 MB/sec
Feb 13 19:50:21.059756 kernel: arm64_neon : 9579 MB/sec
Feb 13 19:50:21.059788 kernel: xor: using function: 32regs (11932 MB/sec)
Feb 13 19:50:21.145011 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 19:50:21.168875 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:50:21.179475 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:50:21.220845 systemd-udevd[471]: Using default interface naming scheme 'v255'.
Feb 13 19:50:21.229892 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:50:21.242353 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 19:50:21.295617 dracut-pre-trigger[478]: rd.md=0: removing MD RAID activation
Feb 13 19:50:21.357560 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:50:21.367286 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:50:21.490459 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:50:21.504932 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 19:50:21.555142 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:50:21.559767 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:50:21.575577 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:50:21.594794 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:50:21.616537 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 19:50:21.677969 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:50:21.698208 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 19:50:21.698286 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Feb 13 19:50:21.739212 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 13 19:50:21.739482 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 13 19:50:21.739713 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:9d:26:5e:a3:0b
Feb 13 19:50:21.724229 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:50:21.724490 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:50:21.728222 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:50:21.730321 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:50:21.730587 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:50:21.732797 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:50:21.750810 (udev-worker)[534]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:50:21.766313 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:50:21.793980 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Feb 13 19:50:21.794068 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 13 19:50:21.804070 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 13 19:50:21.808048 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:50:21.819839 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 19:50:21.819916 kernel: GPT:9289727 != 16777215
Feb 13 19:50:21.819978 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 19:50:21.820007 kernel: GPT:9289727 != 16777215
Feb 13 19:50:21.820513 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 19:50:21.822281 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:50:21.824464 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:50:21.852099 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:50:21.937007 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (523)
Feb 13 19:50:21.977604 kernel: BTRFS: device fsid 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6 devid 1 transid 40 /dev/nvme0n1p3 scanned by (udev-worker) (516)
Feb 13 19:50:22.008364 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Feb 13 19:50:22.054470 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Feb 13 19:50:22.089466 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 19:50:22.115603 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Feb 13 19:50:22.115924 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Feb 13 19:50:22.140439 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 19:50:22.157351 disk-uuid[663]: Primary Header is updated.
Feb 13 19:50:22.157351 disk-uuid[663]: Secondary Entries is updated.
Feb 13 19:50:22.157351 disk-uuid[663]: Secondary Header is updated.
Feb 13 19:50:22.168038 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:50:22.173701 kernel: GPT:disk_guids don't match.
Feb 13 19:50:22.173779 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 19:50:22.174640 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:50:22.184475 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:50:23.183977 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:50:23.185687 disk-uuid[664]: The operation has completed successfully.
Feb 13 19:50:23.384964 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 19:50:23.385179 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 19:50:23.444697 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 19:50:23.456024 sh[1008]: Success
Feb 13 19:50:23.484019 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 19:50:23.595820 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 19:50:23.603392 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 19:50:23.610663 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 19:50:23.656996 kernel: BTRFS info (device dm-0): first mount of filesystem 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6
Feb 13 19:50:23.657061 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:50:23.657088 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 19:50:23.659911 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 19:50:23.659961 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 19:50:23.784992 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 19:50:23.811519 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 19:50:23.815979 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 19:50:23.825471 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 19:50:23.838505 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 19:50:23.872909 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 19:50:23.873023 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:50:23.874478 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 19:50:23.883005 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 19:50:23.904684 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 19:50:23.907548 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 19:50:23.920151 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 19:50:23.929379 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 19:50:24.043987 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:50:24.061348 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:50:24.109511 systemd-networkd[1200]: lo: Link UP
Feb 13 19:50:24.109526 systemd-networkd[1200]: lo: Gained carrier
Feb 13 19:50:24.112459 systemd-networkd[1200]: Enumeration completed
Feb 13 19:50:24.112608 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:50:24.113407 systemd-networkd[1200]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:50:24.113414 systemd-networkd[1200]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:50:24.120073 systemd[1]: Reached target network.target - Network.
Feb 13 19:50:24.121331 systemd-networkd[1200]: eth0: Link UP
Feb 13 19:50:24.121339 systemd-networkd[1200]: eth0: Gained carrier
Feb 13 19:50:24.121357 systemd-networkd[1200]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:50:24.145101 systemd-networkd[1200]: eth0: DHCPv4 address 172.31.30.61/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 19:50:24.340774 ignition[1120]: Ignition 2.19.0
Feb 13 19:50:24.340802 ignition[1120]: Stage: fetch-offline
Feb 13 19:50:24.342442 ignition[1120]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:50:24.342466 ignition[1120]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:50:24.347604 ignition[1120]: Ignition finished successfully
Feb 13 19:50:24.351362 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:50:24.359303 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 19:50:24.396482 ignition[1210]: Ignition 2.19.0
Feb 13 19:50:24.396507 ignition[1210]: Stage: fetch
Feb 13 19:50:24.397382 ignition[1210]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:50:24.397410 ignition[1210]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:50:24.397652 ignition[1210]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:50:24.419377 ignition[1210]: PUT result: OK
Feb 13 19:50:24.422965 ignition[1210]: parsed url from cmdline: ""
Feb 13 19:50:24.422985 ignition[1210]: no config URL provided
Feb 13 19:50:24.423003 ignition[1210]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 19:50:24.423033 ignition[1210]: no config at "/usr/lib/ignition/user.ign"
Feb 13 19:50:24.423069 ignition[1210]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:50:24.424960 ignition[1210]: PUT result: OK
Feb 13 19:50:24.425507 ignition[1210]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 13 19:50:24.428853 ignition[1210]: GET result: OK
Feb 13 19:50:24.429020 ignition[1210]: parsing config with SHA512: d1b14c8dfd0c94aadbb4e11d9767f5849069b07ecb4e84a7bca691f69c35574c35af6b3288262e4be33a8e946e31eea1ee9ee1ca16089db306f061ff7dfcdb06
Feb 13 19:50:24.443294 unknown[1210]: fetched base config from "system"
Feb 13 19:50:24.443316 unknown[1210]: fetched base config from "system"
Feb 13 19:50:24.443330 unknown[1210]: fetched user config from "aws"
Feb 13 19:50:24.448855 ignition[1210]: fetch: fetch complete
Feb 13 19:50:24.448869 ignition[1210]: fetch: fetch passed
Feb 13 19:50:24.449029 ignition[1210]: Ignition finished successfully
Feb 13 19:50:24.457281 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 19:50:24.469328 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 19:50:24.509650 ignition[1216]: Ignition 2.19.0
Feb 13 19:50:24.509681 ignition[1216]: Stage: kargs
Feb 13 19:50:24.510892 ignition[1216]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:50:24.510921 ignition[1216]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:50:24.511392 ignition[1216]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:50:24.514318 ignition[1216]: PUT result: OK
Feb 13 19:50:24.522877 ignition[1216]: kargs: kargs passed
Feb 13 19:50:24.523537 ignition[1216]: Ignition finished successfully
Feb 13 19:50:24.528830 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 19:50:24.545459 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 19:50:24.570620 ignition[1222]: Ignition 2.19.0
Feb 13 19:50:24.570642 ignition[1222]: Stage: disks
Feb 13 19:50:24.571462 ignition[1222]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:50:24.571486 ignition[1222]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:50:24.571634 ignition[1222]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:50:24.575316 ignition[1222]: PUT result: OK
Feb 13 19:50:24.585586 ignition[1222]: disks: disks passed
Feb 13 19:50:24.585696 ignition[1222]: Ignition finished successfully
Feb 13 19:50:24.591015 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 19:50:24.593491 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 19:50:24.597286 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:50:24.601413 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:50:24.605319 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:50:24.608909 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:50:24.625336 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 19:50:24.666321 systemd-fsck[1231]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 19:50:24.672222 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 19:50:24.682193 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 19:50:24.780991 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 9957d679-c6c4-49f4-b1b2-c3c1f3ba5699 r/w with ordered data mode. Quota mode: none.
Feb 13 19:50:24.781551 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 19:50:24.785099 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:50:24.801204 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:50:24.814443 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 19:50:24.816710 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 19:50:24.816855 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 19:50:24.816927 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:50:24.849987 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1250)
Feb 13 19:50:24.850061 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 19:50:24.854661 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:50:24.857096 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 19:50:24.857719 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 19:50:24.880330 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 19:50:24.890020 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 19:50:24.892774 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:50:25.198859 initrd-setup-root[1275]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 19:50:25.220671 initrd-setup-root[1282]: cut: /sysroot/etc/group: No such file or directory
Feb 13 19:50:25.243226 initrd-setup-root[1289]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 19:50:25.251671 initrd-setup-root[1296]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 19:50:25.559766 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 19:50:25.570167 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 19:50:25.577299 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 19:50:25.593242 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 19:50:25.596403 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 19:50:25.645039 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 19:50:25.656748 ignition[1364]: INFO : Ignition 2.19.0
Feb 13 19:50:25.656748 ignition[1364]: INFO : Stage: mount
Feb 13 19:50:25.660244 ignition[1364]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:50:25.660244 ignition[1364]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:50:25.660244 ignition[1364]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:50:25.667413 ignition[1364]: INFO : PUT result: OK
Feb 13 19:50:25.672364 ignition[1364]: INFO : mount: mount passed
Feb 13 19:50:25.674332 ignition[1364]: INFO : Ignition finished successfully
Feb 13 19:50:25.678399 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 19:50:25.697268 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 19:50:25.790512 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:50:25.825147 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1375)
Feb 13 19:50:25.825232 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 19:50:25.828381 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:50:25.828430 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 19:50:25.835346 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 19:50:25.837912 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:50:25.883522 ignition[1392]: INFO : Ignition 2.19.0
Feb 13 19:50:25.883522 ignition[1392]: INFO : Stage: files
Feb 13 19:50:25.887618 ignition[1392]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:50:25.887618 ignition[1392]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:50:25.887618 ignition[1392]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:50:25.894295 ignition[1392]: INFO : PUT result: OK
Feb 13 19:50:25.898990 ignition[1392]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 19:50:25.902053 ignition[1392]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 19:50:25.902053 ignition[1392]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 19:50:25.908685 ignition[1392]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 19:50:25.911553 ignition[1392]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 19:50:25.911553 ignition[1392]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 19:50:25.911447 unknown[1392]: wrote ssh authorized keys file for user: core
Feb 13 19:50:25.925716 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 19:50:25.931256 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 19:50:26.032139 systemd-networkd[1200]: eth0: Gained IPv6LL
Feb 13 19:50:26.042769 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 19:50:26.197183 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 19:50:26.197183 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 19:50:26.197183 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Feb 13 19:50:26.698362 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 19:50:26.847846 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 19:50:26.847846 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 19:50:26.855306 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 19:50:26.855306 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:50:26.855306 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:50:26.855306 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:50:26.855306 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:50:26.855306 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:50:26.855306 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:50:26.855306 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:50:26.855306 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:50:26.855306 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:50:26.855306 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:50:26.855306 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:50:26.855306 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Feb 13 19:50:27.139541 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 13 19:50:27.492008 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:50:27.492008 ignition[1392]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Feb 13 19:50:27.499134 ignition[1392]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:50:27.499134 ignition[1392]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:50:27.499134 ignition[1392]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Feb 13 19:50:27.499134 ignition[1392]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 19:50:27.499134 ignition[1392]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 19:50:27.499134 ignition[1392]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:50:27.499134 ignition[1392]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:50:27.499134 ignition[1392]: INFO : files: files passed
Feb 13 19:50:27.499134 ignition[1392]: INFO : Ignition finished successfully
Feb 13 19:50:27.504160 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 19:50:27.531443 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 19:50:27.539286 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 19:50:27.557956 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 19:50:27.558482 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 19:50:27.572857 initrd-setup-root-after-ignition[1420]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:50:27.572857 initrd-setup-root-after-ignition[1420]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:50:27.580527 initrd-setup-root-after-ignition[1424]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:50:27.588073 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:50:27.593446 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 19:50:27.604248 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 19:50:27.673864 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 19:50:27.676015 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 19:50:27.679039 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 19:50:27.682487 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 19:50:27.684455 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 19:50:27.702413 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 19:50:27.732043 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:50:27.750413 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 19:50:27.773063 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:50:27.773463 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:50:27.774249 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 19:50:27.774921 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 19:50:27.775191 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:50:27.776384 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 19:50:27.777148 systemd[1]: Stopped target basic.target - Basic System. Feb 13 19:50:27.777847 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 19:50:27.778592 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:50:27.779315 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 19:50:27.780056 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 19:50:27.780775 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:50:27.781540 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 19:50:27.782275 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 19:50:27.782957 systemd[1]: Stopped target swap.target - Swaps. Feb 13 19:50:27.783639 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 19:50:27.783852 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:50:27.785904 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:50:27.786710 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:50:27.787367 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Feb 13 19:50:27.834420 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:50:27.838959 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 19:50:27.839446 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 19:50:27.862468 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 19:50:27.862726 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:50:27.865221 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 19:50:27.865419 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 19:50:27.883770 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 19:50:27.891580 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 19:50:27.893569 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 19:50:27.894027 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:50:27.901579 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 19:50:27.901843 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:50:27.921607 ignition[1444]: INFO : Ignition 2.19.0 Feb 13 19:50:27.924228 ignition[1444]: INFO : Stage: umount Feb 13 19:50:27.924228 ignition[1444]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:50:27.924228 ignition[1444]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 19:50:27.924228 ignition[1444]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 19:50:27.938316 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 19:50:27.938826 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 19:50:27.945329 ignition[1444]: INFO : PUT result: OK Feb 13 19:50:27.951286 ignition[1444]: INFO : umount: umount passed Feb 13 19:50:27.953067 ignition[1444]: INFO : Ignition finished successfully Feb 13 19:50:27.958092 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 19:50:27.959482 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 19:50:27.965990 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 19:50:27.966128 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 19:50:27.968267 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 19:50:27.968383 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 19:50:27.971024 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 19:50:27.971118 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 19:50:27.986104 systemd[1]: Stopped target network.target - Network. Feb 13 19:50:27.987880 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 19:50:27.989480 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:50:27.995863 systemd[1]: Stopped target paths.target - Path Units. Feb 13 19:50:28.000285 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 19:50:28.003999 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:50:28.014506 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 19:50:28.016667 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 19:50:28.018219 systemd[1]: iscsid.socket: Deactivated successfully. 
Feb 13 19:50:28.018392 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:50:28.019705 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 19:50:28.019774 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:50:28.020356 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 19:50:28.020446 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 19:50:28.022420 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 19:50:28.022502 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 19:50:28.023578 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 19:50:28.024207 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 19:50:28.026730 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 19:50:28.050079 systemd-networkd[1200]: eth0: DHCPv6 lease lost Feb 13 19:50:28.051822 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 19:50:28.052610 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 19:50:28.059118 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 19:50:28.059395 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 19:50:28.072189 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 19:50:28.072312 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:50:28.084747 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 19:50:28.093530 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 19:50:28.093722 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:50:28.096502 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:50:28.096593 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:50:28.098715 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 19:50:28.098802 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 19:50:28.101765 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 19:50:28.101846 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:50:28.157144 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:50:28.173708 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 19:50:28.176159 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 19:50:28.186887 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 19:50:28.189184 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 19:50:28.196423 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 19:50:28.199564 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:50:28.204751 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 19:50:28.205913 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 19:50:28.208392 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 19:50:28.208543 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 19:50:28.209454 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Feb 13 19:50:28.209522 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:50:28.209930 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 19:50:28.210083 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:50:28.210608 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 19:50:28.210689 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 19:50:28.211450 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:50:28.211523 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:50:28.224413 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 19:50:28.247153 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 19:50:28.247282 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:50:28.249709 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 19:50:28.249790 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:50:28.252341 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 19:50:28.252421 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:50:28.255633 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:50:28.255719 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:50:28.295545 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 19:50:28.297048 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 19:50:28.300367 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 19:50:28.326376 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 19:50:28.346082 systemd[1]: Switching root. Feb 13 19:50:28.400224 systemd-journald[251]: Journal stopped Feb 13 19:50:30.827502 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). Feb 13 19:50:30.827622 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 19:50:30.827667 kernel: SELinux: policy capability open_perms=1 Feb 13 19:50:30.827699 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 19:50:30.827736 kernel: SELinux: policy capability always_check_network=0 Feb 13 19:50:30.827766 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 19:50:30.827796 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 19:50:30.827826 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 19:50:30.827857 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 19:50:30.827887 kernel: audit: type=1403 audit(1739476228.965:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 19:50:30.827927 systemd[1]: Successfully loaded SELinux policy in 75.611ms. Feb 13 19:50:30.828023 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.745ms. Feb 13 19:50:30.828069 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 19:50:30.828117 systemd[1]: Detected virtualization amazon. 
Feb 13 19:50:30.828155 systemd[1]: Detected architecture arm64. Feb 13 19:50:30.828187 systemd[1]: Detected first boot. Feb 13 19:50:30.828228 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:50:30.828264 zram_generator::config[1487]: No configuration found. Feb 13 19:50:30.828303 systemd[1]: Populated /etc with preset unit settings. Feb 13 19:50:30.828340 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 19:50:30.828371 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 19:50:30.828405 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 19:50:30.828440 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 19:50:30.828475 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 19:50:30.828508 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 19:50:30.828537 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 19:50:30.828567 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 19:50:30.828605 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 19:50:30.828643 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 19:50:30.828681 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 19:50:30.828713 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:50:30.828743 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:50:30.828774 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 19:50:30.828804 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 19:50:30.828839 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 19:50:30.828871 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:50:30.828901 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 19:50:30.828933 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:50:30.829004 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 19:50:30.829035 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 19:50:30.829068 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 19:50:30.829101 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 19:50:30.829133 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:50:30.829167 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:50:30.829201 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:50:30.829233 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:50:30.829268 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 19:50:30.829298 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 19:50:30.829330 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:50:30.829360 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Feb 13 19:50:30.829392 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:50:30.829423 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 19:50:30.829455 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 19:50:30.829485 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 19:50:30.829517 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 19:50:30.829558 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 19:50:30.829588 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 19:50:30.829620 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 19:50:30.829650 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 19:50:30.829683 systemd[1]: Reached target machines.target - Containers. Feb 13 19:50:30.829716 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 19:50:30.829747 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:50:30.829779 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:50:30.829809 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 19:50:30.829844 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:50:30.829874 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:50:30.829903 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:50:30.829932 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 19:50:30.830020 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:50:30.830057 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 19:50:30.830088 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 19:50:30.830119 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 19:50:30.830156 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 19:50:30.830187 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 19:50:30.830224 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:50:30.830258 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:50:30.830289 kernel: loop: module loaded Feb 13 19:50:30.830321 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 19:50:30.830354 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 19:50:30.830383 kernel: fuse: init (API version 7.39) Feb 13 19:50:30.830411 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:50:30.830448 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 19:50:30.830477 systemd[1]: Stopped verity-setup.service. Feb 13 19:50:30.830572 systemd-journald[1569]: Collecting audit messages is disabled. Feb 13 19:50:30.830651 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Feb 13 19:50:30.830687 systemd-journald[1569]: Journal started Feb 13 19:50:30.830735 systemd-journald[1569]: Runtime Journal (/run/log/journal/ec277e4ac9ca84bea7b656e30d67055a) is 8.0M, max 75.3M, 67.3M free. Feb 13 19:50:30.282759 systemd[1]: Queued start job for default target multi-user.target. Feb 13 19:50:30.346958 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Feb 13 19:50:30.347749 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 19:50:30.840335 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:50:30.841709 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 19:50:30.846814 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 19:50:30.851644 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 19:50:30.856610 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 19:50:30.861633 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 19:50:30.873307 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:50:30.879332 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 19:50:30.879668 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 19:50:30.885563 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:50:30.887189 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:50:30.892760 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:50:30.894338 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:50:30.900474 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 19:50:30.900837 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 19:50:30.907538 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:50:30.911089 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:50:30.916732 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:50:30.922109 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 19:50:30.927955 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 19:50:30.950977 kernel: ACPI: bus type drm_connector registered Feb 13 19:50:30.953228 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:50:30.955091 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:50:30.968932 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 19:50:30.984389 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 19:50:30.995363 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 19:50:31.013448 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 19:50:31.015876 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 19:50:31.015997 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:50:31.023538 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 19:50:31.036484 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
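systemd-journald reports its runtime journal budget above (8.0M used out of a 75.3M cap under /run/log/journal). The same accounting can be pulled from a running system; a small sketch using stock journalctl verbs:

journalctl --disk-usage   # total bytes held by active and archived journal files
journalctl --header       # per-file metadata, including the machine-ID directory seen above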
Feb 13 19:50:31.048353 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 19:50:31.051421 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:50:31.060316 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 19:50:31.066371 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 19:50:31.069295 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:50:31.081901 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 19:50:31.084366 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:50:31.090380 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:50:31.096399 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 19:50:31.106316 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 19:50:31.116163 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 19:50:31.118898 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 19:50:31.133650 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 19:50:31.156125 systemd-journald[1569]: Time spent on flushing to /var/log/journal/ec277e4ac9ca84bea7b656e30d67055a is 53.357ms for 914 entries. Feb 13 19:50:31.156125 systemd-journald[1569]: System Journal (/var/log/journal/ec277e4ac9ca84bea7b656e30d67055a) is 8.0M, max 195.6M, 187.6M free. Feb 13 19:50:31.244703 systemd-journald[1569]: Received client request to flush runtime journal. Feb 13 19:50:31.244894 kernel: loop0: detected capacity change from 0 to 114432 Feb 13 19:50:31.211508 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 19:50:31.214563 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 19:50:31.228511 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 19:50:31.250862 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 19:50:31.269843 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 19:50:31.302092 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:50:31.329059 kernel: loop1: detected capacity change from 0 to 52536 Feb 13 19:50:31.332696 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:50:31.350465 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 19:50:31.354581 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 19:50:31.355979 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 19:50:31.360210 systemd-tmpfiles[1617]: ACLs are not supported, ignoring. Feb 13 19:50:31.360235 systemd-tmpfiles[1617]: ACLs are not supported, ignoring. Feb 13 19:50:31.384055 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:50:31.399582 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Feb 13 19:50:31.420278 udevadm[1633]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 19:50:31.449014 kernel: loop2: detected capacity change from 0 to 194096 Feb 13 19:50:31.497929 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 19:50:31.511842 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:50:31.519000 kernel: loop3: detected capacity change from 0 to 114328 Feb 13 19:50:31.555721 systemd-tmpfiles[1641]: ACLs are not supported, ignoring. Feb 13 19:50:31.556338 systemd-tmpfiles[1641]: ACLs are not supported, ignoring. Feb 13 19:50:31.569209 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:50:31.639381 kernel: loop4: detected capacity change from 0 to 114432 Feb 13 19:50:31.680172 kernel: loop5: detected capacity change from 0 to 52536 Feb 13 19:50:31.690986 kernel: loop6: detected capacity change from 0 to 194096 Feb 13 19:50:31.724303 kernel: loop7: detected capacity change from 0 to 114328 Feb 13 19:50:31.740300 (sd-merge)[1645]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Feb 13 19:50:31.742000 (sd-merge)[1645]: Merged extensions into '/usr'. Feb 13 19:50:31.753206 systemd[1]: Reloading requested from client PID 1616 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 19:50:31.753501 systemd[1]: Reloading... Feb 13 19:50:31.947981 zram_generator::config[1674]: No configuration found. Feb 13 19:50:32.356075 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:50:32.526897 systemd[1]: Reloading finished in 772 ms. Feb 13 19:50:32.566654 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 19:50:32.570651 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 19:50:32.586334 systemd[1]: Starting ensure-sysext.service... Feb 13 19:50:32.598274 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:50:32.606692 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:50:32.638199 systemd[1]: Reloading requested from client PID 1723 ('systemctl') (unit ensure-sysext.service)... Feb 13 19:50:32.638241 systemd[1]: Reloading... Feb 13 19:50:32.654883 systemd-tmpfiles[1724]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 19:50:32.655607 systemd-tmpfiles[1724]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 19:50:32.657589 systemd-tmpfiles[1724]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 19:50:32.658179 systemd-tmpfiles[1724]: ACLs are not supported, ignoring. Feb 13 19:50:32.658316 systemd-tmpfiles[1724]: ACLs are not supported, ignoring. Feb 13 19:50:32.666656 systemd-tmpfiles[1724]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:50:32.666689 systemd-tmpfiles[1724]: Skipping /boot Feb 13 19:50:32.695239 systemd-tmpfiles[1724]: Detected autofs mount point /boot during canonicalization of boot. 
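The (sd-merge) lines above are systemd-sysext composing the 'containerd-flatcar', 'docker-flatcar', 'kubernetes', and 'oem-ami' images into /usr, followed by a daemon reload. On a booted host the same mechanism can be inspected and re-run; a sketch using standard systemd-sysext verbs:

systemd-sysext status         # which hierarchies have extensions merged, and from which images
ls -l /etc/extensions         # kubernetes.raw symlink placed by Ignition earlier in this log
sudo systemd-sysext refresh   # unmerge and re-merge after images are added or removed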
Feb 13 19:50:32.695266 systemd-tmpfiles[1724]: Skipping /boot Feb 13 19:50:32.779230 systemd-udevd[1725]: Using default interface naming scheme 'v255'. Feb 13 19:50:32.830865 zram_generator::config[1749]: No configuration found. Feb 13 19:50:33.044179 ldconfig[1611]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 19:50:33.113214 (udev-worker)[1769]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:50:33.293340 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:50:33.299984 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1763) Feb 13 19:50:33.481227 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 19:50:33.481817 systemd[1]: Reloading finished in 842 ms. Feb 13 19:50:33.518630 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:50:33.523575 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 19:50:33.580900 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:50:33.658230 systemd[1]: Finished ensure-sysext.service. Feb 13 19:50:33.687149 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 19:50:33.700385 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Feb 13 19:50:33.711422 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 19:50:33.723548 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 19:50:33.727386 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:50:33.731389 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 19:50:33.735613 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:50:33.742113 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:50:33.749386 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:50:33.762301 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:50:33.764551 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:50:33.771583 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 19:50:33.783544 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 19:50:33.792184 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:50:33.801789 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:50:33.804792 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 19:50:33.811153 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 19:50:33.836292 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:50:33.839852 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Feb 13 19:50:33.843081 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:50:33.847140 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:50:33.862249 lvm[1924]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:50:33.906193 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 19:50:33.909394 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:50:33.911249 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:50:33.953067 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:50:33.954766 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:50:33.960920 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:50:33.961356 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:50:33.964848 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:50:33.973785 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 19:50:33.982722 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 19:50:33.993396 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 19:50:34.000217 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:50:34.011620 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 19:50:34.020601 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 19:50:34.036271 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 19:50:34.076448 lvm[1957]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:50:34.098566 augenrules[1964]: No rules Feb 13 19:50:34.108092 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 19:50:34.139125 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 19:50:34.144894 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 19:50:34.161853 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:50:34.178023 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 19:50:34.181101 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 19:50:34.195544 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 19:50:34.271543 systemd-networkd[1933]: lo: Link UP Feb 13 19:50:34.272220 systemd-networkd[1933]: lo: Gained carrier Feb 13 19:50:34.275202 systemd-networkd[1933]: Enumeration completed Feb 13 19:50:34.275851 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:50:34.278821 systemd-networkd[1933]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:50:34.279122 systemd-networkd[1933]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 13 19:50:34.281515 systemd-networkd[1933]: eth0: Link UP Feb 13 19:50:34.282161 systemd-networkd[1933]: eth0: Gained carrier Feb 13 19:50:34.282312 systemd-networkd[1933]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:50:34.287320 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 19:50:34.291054 systemd-networkd[1933]: eth0: DHCPv4 address 172.31.30.61/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 13 19:50:34.307280 systemd-resolved[1936]: Positive Trust Anchors: Feb 13 19:50:34.307324 systemd-resolved[1936]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:50:34.307388 systemd-resolved[1936]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:50:34.317080 systemd-resolved[1936]: Defaulting to hostname 'linux'. Feb 13 19:50:34.320439 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:50:34.322921 systemd[1]: Reached target network.target - Network. Feb 13 19:50:34.324806 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:50:34.327106 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:50:34.329412 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 19:50:34.331823 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 19:50:34.334418 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 19:50:34.336588 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 19:50:34.338915 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 19:50:34.341348 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 19:50:34.341410 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:50:34.343089 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:50:34.347539 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 19:50:34.353681 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 19:50:34.367885 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 19:50:34.371839 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 19:50:34.374348 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:50:34.376339 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:50:34.378157 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:50:34.378212 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
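With systemd-networkd and systemd-resolved up, the state logged above (DHCPv4 lease 172.31.30.61/20 from 172.31.16.1 on eth0, fallback hostname 'linux') can be confirmed from userspace; a sketch with the usual query tools:

networkctl status eth0   # carrier, addresses, and the DHCPv4 lease shown above
resolvectl status        # per-link DNS configuration resolved is using
hostnamectl              # static/transient hostname (replaced once systemd-hostnamed runs, later in this log)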
Feb 13 19:50:34.385336 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 19:50:34.397580 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 19:50:34.404353 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 19:50:34.410478 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 19:50:34.416344 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 19:50:34.420195 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 19:50:34.429455 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 19:50:34.435307 systemd[1]: Started ntpd.service - Network Time Service. Feb 13 19:50:34.445059 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 19:50:34.450469 systemd[1]: Starting setup-oem.service - Setup OEM... Feb 13 19:50:34.457272 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 19:50:34.461814 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 19:50:34.478489 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 19:50:34.481833 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 19:50:34.484166 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 19:50:34.489307 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 19:50:34.496405 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 19:50:34.513225 jq[1988]: false Feb 13 19:50:34.534676 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 19:50:34.535367 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 19:50:34.586353 dbus-daemon[1987]: [system] SELinux support is enabled Feb 13 19:50:34.588656 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 19:50:34.596431 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:50:34.597588 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 19:50:34.601173 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 19:50:34.601215 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Feb 13 19:50:34.619637 dbus-daemon[1987]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1933 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Feb 13 19:50:34.637861 dbus-daemon[1987]: [system] Successfully activated service 'org.freedesktop.systemd1'
Feb 13 19:50:34.643613 ntpd[1991]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:35:09 UTC 2025 (1): Starting
Feb 13 19:50:34.643688 ntpd[1991]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Feb 13 19:50:34.643708 ntpd[1991]: ----------------------------------------------------
Feb 13 19:50:34.643729 ntpd[1991]: ntp-4 is maintained by Network Time Foundation,
Feb 13 19:50:34.643749 ntpd[1991]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Feb 13 19:50:34.643769 ntpd[1991]: corporation. Support and training for ntp-4 are
Feb 13 19:50:34.643787 ntpd[1991]: available at https://www.nwtime.org/support
Feb 13 19:50:34.643805 ntpd[1991]: ----------------------------------------------------
Feb 13 19:50:34.648336 ntpd[1991]: proto: precision = 0.096 usec (-23)
Feb 13 19:50:34.650022 ntpd[1991]: basedate set to 2025-02-01
Feb 13 19:50:34.650060 ntpd[1991]: gps base set to 2025-02-02 (week 2352)
Feb 13 19:50:34.653862 ntpd[1991]: Listen and drop on 0 v6wildcard [::]:123
Feb 13 19:50:34.653973 ntpd[1991]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Feb 13 19:50:34.654258 ntpd[1991]: Listen normally on 2 lo 127.0.0.1:123
Feb 13 19:50:34.654321 ntpd[1991]: Listen normally on 3 eth0 172.31.30.61:123
Feb 13 19:50:34.654394 ntpd[1991]: Listen normally on 4 lo [::1]:123
Feb 13 19:50:34.654467 ntpd[1991]: bind(21) AF_INET6 fe80::49d:26ff:fe5e:a30b%2#123 flags 0x11 failed: Cannot assign requested address
Feb 13 19:50:34.654507 ntpd[1991]: unable to create socket on eth0 (5) for fe80::49d:26ff:fe5e:a30b%2#123
Feb 13 19:50:34.654535 ntpd[1991]: failed to init interface for address fe80::49d:26ff:fe5e:a30b%2
Feb 13 19:50:34.654589 ntpd[1991]: Listening on routing socket on fd #21 for interface updates
Feb 13 19:50:34.659754 ntpd[1991]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 19:50:34.659807 ntpd[1991]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 19:50:34.664003 ntpd[1991]: 13 Feb 19:50:34 ntpd[1991]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:35:09 UTC 2025 (1): Starting
Feb 13 19:50:34.664003 ntpd[1991]: 13 Feb 19:50:34 ntpd[1991]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Feb 13 19:50:34.664003 ntpd[1991]: 13 Feb 19:50:34 ntpd[1991]: ----------------------------------------------------
Feb 13 19:50:34.664003 ntpd[1991]: 13 Feb 19:50:34 ntpd[1991]: ntp-4 is maintained by Network Time Foundation,
Feb 13 19:50:34.664003 ntpd[1991]: 13 Feb 19:50:34 ntpd[1991]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Feb 13 19:50:34.664003 ntpd[1991]: 13 Feb 19:50:34 ntpd[1991]: corporation. Support and training for ntp-4 are
Feb 13 19:50:34.664003 ntpd[1991]: 13 Feb 19:50:34 ntpd[1991]: available at https://www.nwtime.org/support
Feb 13 19:50:34.664003 ntpd[1991]: 13 Feb 19:50:34 ntpd[1991]: ----------------------------------------------------
Feb 13 19:50:34.664003 ntpd[1991]: 13 Feb 19:50:34 ntpd[1991]: proto: precision = 0.096 usec (-23)
Feb 13 19:50:34.664003 ntpd[1991]: 13 Feb 19:50:34 ntpd[1991]: basedate set to 2025-02-01
Feb 13 19:50:34.664003 ntpd[1991]: 13 Feb 19:50:34 ntpd[1991]: gps base set to 2025-02-02 (week 2352)
Feb 13 19:50:34.664003 ntpd[1991]: 13 Feb 19:50:34 ntpd[1991]: Listen and drop on 0 v6wildcard [::]:123
Feb 13 19:50:34.664003 ntpd[1991]: 13 Feb 19:50:34 ntpd[1991]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Feb 13 19:50:34.664003 ntpd[1991]: 13 Feb 19:50:34 ntpd[1991]: Listen normally on 2 lo 127.0.0.1:123
Feb 13 19:50:34.664003 ntpd[1991]: 13 Feb 19:50:34 ntpd[1991]: Listen normally on 3 eth0 172.31.30.61:123
Feb 13 19:50:34.664003 ntpd[1991]: 13 Feb 19:50:34 ntpd[1991]: Listen normally on 4 lo [::1]:123
Feb 13 19:50:34.664003 ntpd[1991]: 13 Feb 19:50:34 ntpd[1991]: bind(21) AF_INET6 fe80::49d:26ff:fe5e:a30b%2#123 flags 0x11 failed: Cannot assign requested address
Feb 13 19:50:34.664003 ntpd[1991]: 13 Feb 19:50:34 ntpd[1991]: unable to create socket on eth0 (5) for fe80::49d:26ff:fe5e:a30b%2#123
Feb 13 19:50:34.664003 ntpd[1991]: 13 Feb 19:50:34 ntpd[1991]: failed to init interface for address fe80::49d:26ff:fe5e:a30b%2
Feb 13 19:50:34.664003 ntpd[1991]: 13 Feb 19:50:34 ntpd[1991]: Listening on routing socket on fd #21 for interface updates
Feb 13 19:50:34.664003 ntpd[1991]: 13 Feb 19:50:34 ntpd[1991]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 19:50:34.664003 ntpd[1991]: 13 Feb 19:50:34 ntpd[1991]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 19:50:34.674349 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 19:50:34.675537 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 19:50:34.681481 jq[1999]: true
Feb 13 19:50:34.702485 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 19:50:34.702820 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 19:50:34.717897 extend-filesystems[1989]: Found loop4
Feb 13 19:50:34.717897 extend-filesystems[1989]: Found loop5
Feb 13 19:50:34.717897 extend-filesystems[1989]: Found loop6
Feb 13 19:50:34.717897 extend-filesystems[1989]: Found loop7
Feb 13 19:50:34.717897 extend-filesystems[1989]: Found nvme0n1
Feb 13 19:50:34.717897 extend-filesystems[1989]: Found nvme0n1p1
Feb 13 19:50:34.717897 extend-filesystems[1989]: Found nvme0n1p2
Feb 13 19:50:34.717897 extend-filesystems[1989]: Found nvme0n1p3
Feb 13 19:50:34.717897 extend-filesystems[1989]: Found usr
Feb 13 19:50:34.717897 extend-filesystems[1989]: Found nvme0n1p4
Feb 13 19:50:34.717897 extend-filesystems[1989]: Found nvme0n1p6
Feb 13 19:50:34.717897 extend-filesystems[1989]: Found nvme0n1p7
Feb 13 19:50:34.717897 extend-filesystems[1989]: Found nvme0n1p9
Feb 13 19:50:34.717897 extend-filesystems[1989]: Checking size of /dev/nvme0n1p9
Feb 13 19:50:34.757383 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Feb 13 19:50:34.806861 (ntainerd)[2023]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 19:50:34.811487 tar[2014]: linux-arm64/helm
Feb 13 19:50:34.812462 jq[2024]: true
Feb 13 19:50:34.835584 update_engine[1997]: I20250213 19:50:34.822881 1997 main.cc:92] Flatcar Update Engine starting
Feb 13 19:50:34.847260 systemd[1]: Finished setup-oem.service - Setup OEM.
Feb 13 19:50:34.868723 systemd[1]: Started update-engine.service - Update Engine.
Feb 13 19:50:34.877544 update_engine[1997]: I20250213 19:50:34.870596 1997 update_check_scheduler.cc:74] Next update check in 11m43s
Feb 13 19:50:34.885976 extend-filesystems[1989]: Resized partition /dev/nvme0n1p9
Feb 13 19:50:34.899973 extend-filesystems[2040]: resize2fs 1.47.1 (20-May-2024)
Feb 13 19:50:34.913735 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Feb 13 19:50:34.965545 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Feb 13 19:50:34.997210 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Feb 13 19:50:35.030653 systemd-logind[1996]: Watching system buttons on /dev/input/event0 (Power Button)
Feb 13 19:50:35.030717 systemd-logind[1996]: Watching system buttons on /dev/input/event1 (Sleep Button)
Feb 13 19:50:35.031446 systemd-logind[1996]: New seat seat0.
Feb 13 19:50:35.037026 systemd[1]: Started systemd-logind.service - User Login Management.
Feb 13 19:50:35.062705 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Feb 13 19:50:35.087392 extend-filesystems[2040]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Feb 13 19:50:35.087392 extend-filesystems[2040]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 13 19:50:35.087392 extend-filesystems[2040]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Feb 13 19:50:35.093723 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 13 19:50:35.094232 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Feb 13 19:50:35.149373 extend-filesystems[1989]: Resized filesystem in /dev/nvme0n1p9
Feb 13 19:50:35.163291 coreos-metadata[1986]: Feb 13 19:50:35.158 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Feb 13 19:50:35.172270 coreos-metadata[1986]: Feb 13 19:50:35.165 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Feb 13 19:50:35.174503 coreos-metadata[1986]: Feb 13 19:50:35.174 INFO Fetch successful
Feb 13 19:50:35.174503 coreos-metadata[1986]: Feb 13 19:50:35.174 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Feb 13 19:50:35.182965 coreos-metadata[1986]: Feb 13 19:50:35.180 INFO Fetch successful
Feb 13 19:50:35.182965 coreos-metadata[1986]: Feb 13 19:50:35.180 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Feb 13 19:50:35.186299 coreos-metadata[1986]: Feb 13 19:50:35.185 INFO Fetch successful
Feb 13 19:50:35.186299 coreos-metadata[1986]: Feb 13 19:50:35.186 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Feb 13 19:50:35.187183 coreos-metadata[1986]: Feb 13 19:50:35.186 INFO Fetch successful
Feb 13 19:50:35.187183 coreos-metadata[1986]: Feb 13 19:50:35.187 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Feb 13 19:50:35.197318 coreos-metadata[1986]: Feb 13 19:50:35.193 INFO Fetch failed with 404: resource not found
Feb 13 19:50:35.197318 coreos-metadata[1986]: Feb 13 19:50:35.193 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Feb 13 19:50:35.197620 bash[2069]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 19:50:35.201102 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Feb 13 19:50:35.210012 coreos-metadata[1986]: Feb 13 19:50:35.200 INFO Fetch successful
Feb 13 19:50:35.210012 coreos-metadata[1986]: Feb 13 19:50:35.200 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
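The extend-filesystems entries above are an on-line grow of the root filesystem: the partition is enlarged first ("Resized partition /dev/nvme0n1p9"), then resize2fs 1.47.1 expands the mounted ext4 volume from 553472 to 1489915 4k blocks. A rough manual equivalent on the same log-derived device, assuming the partition has already been grown:

lsblk /dev/nvme0n1              # confirm p9 is the enlarged root partition
sudo resize2fs /dev/nvme0n1p9   # with no size argument, grows the mounted ext4 fs to fill the partition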
Feb 13 19:50:35.218018 coreos-metadata[1986]: Feb 13 19:50:35.214 INFO Fetch successful
Feb 13 19:50:35.218018 coreos-metadata[1986]: Feb 13 19:50:35.215 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Feb 13 19:50:35.220964 coreos-metadata[1986]: Feb 13 19:50:35.218 INFO Fetch successful
Feb 13 19:50:35.220964 coreos-metadata[1986]: Feb 13 19:50:35.218 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Feb 13 19:50:35.223680 coreos-metadata[1986]: Feb 13 19:50:35.223 INFO Fetch successful
Feb 13 19:50:35.223680 coreos-metadata[1986]: Feb 13 19:50:35.223 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Feb 13 19:50:35.227973 coreos-metadata[1986]: Feb 13 19:50:35.226 INFO Fetch successful
Feb 13 19:50:35.240750 systemd[1]: Starting sshkeys.service...
Feb 13 19:50:35.297120 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1763)
Feb 13 19:50:35.304370 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Feb 13 19:50:35.326325 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Feb 13 19:50:35.429006 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Feb 13 19:50:35.434767 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Feb 13 19:50:35.508982 containerd[2023]: time="2025-02-13T19:50:35.506767175Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Feb 13 19:50:35.554207 dbus-daemon[1987]: [system] Successfully activated service 'org.freedesktop.hostname1'
Feb 13 19:50:35.554645 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Feb 13 19:50:35.562263 dbus-daemon[1987]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2032 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Feb 13 19:50:35.583900 systemd[1]: Starting polkit.service - Authorization Manager...
Feb 13 19:50:35.646591 ntpd[1991]: bind(24) AF_INET6 fe80::49d:26ff:fe5e:a30b%2#123 flags 0x11 failed: Cannot assign requested address
Feb 13 19:50:35.648266 ntpd[1991]: unable to create socket on eth0 (6) for fe80::49d:26ff:fe5e:a30b%2#123
Feb 13 19:50:35.648299 ntpd[1991]: failed to init interface for address fe80::49d:26ff:fe5e:a30b%2
Feb 13 19:50:35.648620 ntpd[1991]: 13 Feb 19:50:35 ntpd[1991]: bind(24) AF_INET6 fe80::49d:26ff:fe5e:a30b%2#123 flags 0x11 failed: Cannot assign requested address
Feb 13 19:50:35.648620 ntpd[1991]: 13 Feb 19:50:35 ntpd[1991]: unable to create socket on eth0 (6) for fe80::49d:26ff:fe5e:a30b%2#123
Feb 13 19:50:35.648620 ntpd[1991]: 13 Feb 19:50:35 ntpd[1991]: failed to init interface for address fe80::49d:26ff:fe5e:a30b%2
Feb 13 19:50:35.651493 polkitd[2124]: Started polkitd version 121
Feb 13 19:50:35.676423 polkitd[2124]: Loading rules from directory /etc/polkit-1/rules.d
Feb 13 19:50:35.678460 polkitd[2124]: Loading rules from directory /usr/share/polkit-1/rules.d
Feb 13 19:50:35.681212 polkitd[2124]: Finished loading, compiling and executing 2 rules
Feb 13 19:50:35.682421 dbus-daemon[1987]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Feb 13 19:50:35.682693 systemd[1]: Started polkit.service - Authorization Manager.
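coreos-metadata's "Putting .../latest/api/token" followed by the meta-data fetches above is the IMDSv2 session flow. Replayed by hand it looks like the sketch below (real AWS endpoint and the same 2021-01-03 API version the agent used; the 21600s TTL is an arbitrary choice):

TOKEN=$(curl -sf -X PUT http://169.254.169.254/latest/api/token \
  -H 'X-aws-ec2-metadata-token-ttl-seconds: 21600')
curl -sf -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key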
Feb 13 19:50:35.688888 polkitd[2124]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 19:50:35.714307 systemd-hostnamed[2032]: Hostname set to <ip-172-31-30-61> (transient) Feb 13 19:50:35.714308 systemd-resolved[1936]: System hostname changed to 'ip-172-31-30-61'. Feb 13 19:50:35.724180 locksmithd[2039]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:50:35.798050 containerd[2023]: time="2025-02-13T19:50:35.796057308Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:50:35.812569 containerd[2023]: time="2025-02-13T19:50:35.810044365Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:50:35.812569 containerd[2023]: time="2025-02-13T19:50:35.810117469Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:50:35.812569 containerd[2023]: time="2025-02-13T19:50:35.810154837Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 19:50:35.812569 containerd[2023]: time="2025-02-13T19:50:35.810493201Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 19:50:35.812569 containerd[2023]: time="2025-02-13T19:50:35.810528841Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 19:50:35.812569 containerd[2023]: time="2025-02-13T19:50:35.810647209Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:50:35.812569 containerd[2023]: time="2025-02-13T19:50:35.810679201Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:50:35.812569 containerd[2023]: time="2025-02-13T19:50:35.810986485Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:50:35.812569 containerd[2023]: time="2025-02-13T19:50:35.811020517Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 19:50:35.812569 containerd[2023]: time="2025-02-13T19:50:35.811058701Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:50:35.812569 containerd[2023]: time="2025-02-13T19:50:35.811085881Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:50:35.813190 containerd[2023]: time="2025-02-13T19:50:35.811252645Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:50:35.813190 containerd[2023]: time="2025-02-13T19:50:35.811635781Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..."
type=io.containerd.snapshotter.v1 Feb 13 19:50:35.813190 containerd[2023]: time="2025-02-13T19:50:35.811839793Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:50:35.813190 containerd[2023]: time="2025-02-13T19:50:35.811870537Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 19:50:35.820964 containerd[2023]: time="2025-02-13T19:50:35.819245965Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 19:50:35.820964 containerd[2023]: time="2025-02-13T19:50:35.819385477Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:50:35.823187 coreos-metadata[2080]: Feb 13 19:50:35.823 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 19:50:35.826926 coreos-metadata[2080]: Feb 13 19:50:35.826 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Feb 13 19:50:35.826926 coreos-metadata[2080]: Feb 13 19:50:35.826 INFO Fetch successful Feb 13 19:50:35.826926 coreos-metadata[2080]: Feb 13 19:50:35.826 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 13 19:50:35.826926 coreos-metadata[2080]: Feb 13 19:50:35.827 INFO Fetch successful Feb 13 19:50:35.833257 containerd[2023]: time="2025-02-13T19:50:35.829865653Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 19:50:35.833257 containerd[2023]: time="2025-02-13T19:50:35.831642745Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 19:50:35.833257 containerd[2023]: time="2025-02-13T19:50:35.831697393Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 19:50:35.833257 containerd[2023]: time="2025-02-13T19:50:35.831757561Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 19:50:35.833257 containerd[2023]: time="2025-02-13T19:50:35.831795397Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 19:50:35.833257 containerd[2023]: time="2025-02-13T19:50:35.832097725Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 19:50:35.836694 unknown[2080]: wrote ssh authorized keys file for user: core Feb 13 19:50:35.838899 containerd[2023]: time="2025-02-13T19:50:35.836867725Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 19:50:35.839889 containerd[2023]: time="2025-02-13T19:50:35.839812933Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 19:50:35.840040 containerd[2023]: time="2025-02-13T19:50:35.839888017Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 19:50:35.840040 containerd[2023]: time="2025-02-13T19:50:35.839921485Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 19:50:35.840040 containerd[2023]: time="2025-02-13T19:50:35.839987521Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Feb 13 19:50:35.840040 containerd[2023]: time="2025-02-13T19:50:35.840023257Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 19:50:35.840258 containerd[2023]: time="2025-02-13T19:50:35.840060673Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 19:50:35.840258 containerd[2023]: time="2025-02-13T19:50:35.840106933Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:50:35.840258 containerd[2023]: time="2025-02-13T19:50:35.840149893Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 19:50:35.840258 containerd[2023]: time="2025-02-13T19:50:35.840189169Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:50:35.840258 containerd[2023]: time="2025-02-13T19:50:35.840218521Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 19:50:35.840258 containerd[2023]: time="2025-02-13T19:50:35.840247417Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 19:50:35.840502 containerd[2023]: time="2025-02-13T19:50:35.840289237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 19:50:35.840502 containerd[2023]: time="2025-02-13T19:50:35.840334957Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:50:35.840502 containerd[2023]: time="2025-02-13T19:50:35.840365437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:50:35.840502 containerd[2023]: time="2025-02-13T19:50:35.840401725Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 19:50:35.840502 containerd[2023]: time="2025-02-13T19:50:35.840431845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:50:35.840502 containerd[2023]: time="2025-02-13T19:50:35.840462841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:50:35.840502 containerd[2023]: time="2025-02-13T19:50:35.840490597Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 19:50:35.840808 containerd[2023]: time="2025-02-13T19:50:35.840527845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 19:50:35.840808 containerd[2023]: time="2025-02-13T19:50:35.840565249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:50:35.840808 containerd[2023]: time="2025-02-13T19:50:35.840599161Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:50:35.840808 containerd[2023]: time="2025-02-13T19:50:35.840639385Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:50:35.840808 containerd[2023]: time="2025-02-13T19:50:35.840668665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Feb 13 19:50:35.840808 containerd[2023]: time="2025-02-13T19:50:35.840697405Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:50:35.840808 containerd[2023]: time="2025-02-13T19:50:35.840731641Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:50:35.840808 containerd[2023]: time="2025-02-13T19:50:35.840779617Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:50:35.841201 containerd[2023]: time="2025-02-13T19:50:35.840813721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:50:35.841201 containerd[2023]: time="2025-02-13T19:50:35.840841969Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:50:35.850978 containerd[2023]: time="2025-02-13T19:50:35.849164749Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 19:50:35.850978 containerd[2023]: time="2025-02-13T19:50:35.849427357Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:50:35.850978 containerd[2023]: time="2025-02-13T19:50:35.849457837Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:50:35.850978 containerd[2023]: time="2025-02-13T19:50:35.849492733Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:50:35.850978 containerd[2023]: time="2025-02-13T19:50:35.849520333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:50:35.850978 containerd[2023]: time="2025-02-13T19:50:35.849557005Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:50:35.850978 containerd[2023]: time="2025-02-13T19:50:35.849583897Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:50:35.850978 containerd[2023]: time="2025-02-13T19:50:35.849616081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 19:50:35.851466 containerd[2023]: time="2025-02-13T19:50:35.850308541Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:50:35.851466 containerd[2023]: time="2025-02-13T19:50:35.850424065Z" level=info msg="Connect containerd service" Feb 13 19:50:35.851466 containerd[2023]: time="2025-02-13T19:50:35.850487917Z" level=info msg="using legacy CRI server" Feb 13 19:50:35.851466 containerd[2023]: time="2025-02-13T19:50:35.850506217Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:50:35.851466 containerd[2023]: time="2025-02-13T19:50:35.850659865Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:50:35.857122 containerd[2023]: time="2025-02-13T19:50:35.857023645Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:50:35.863968 
containerd[2023]: time="2025-02-13T19:50:35.860473885Z" level=info msg="Start subscribing containerd event" Feb 13 19:50:35.870008 containerd[2023]: time="2025-02-13T19:50:35.868042429Z" level=info msg="Start recovering state" Feb 13 19:50:35.870008 containerd[2023]: time="2025-02-13T19:50:35.868261057Z" level=info msg="Start event monitor" Feb 13 19:50:35.870008 containerd[2023]: time="2025-02-13T19:50:35.868340701Z" level=info msg="Start snapshots syncer" Feb 13 19:50:35.870008 containerd[2023]: time="2025-02-13T19:50:35.868369081Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:50:35.870008 containerd[2023]: time="2025-02-13T19:50:35.868399021Z" level=info msg="Start streaming server" Feb 13 19:50:35.873719 containerd[2023]: time="2025-02-13T19:50:35.873538045Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:50:35.873979 containerd[2023]: time="2025-02-13T19:50:35.873807385Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:50:35.884566 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:50:35.890239 containerd[2023]: time="2025-02-13T19:50:35.886833385Z" level=info msg="containerd successfully booted in 0.384830s" Feb 13 19:50:35.921672 update-ssh-keys[2177]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:50:35.924220 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 19:50:35.937708 systemd[1]: Finished sshkeys.service. Feb 13 19:50:36.208476 systemd-networkd[1933]: eth0: Gained IPv6LL Feb 13 19:50:36.220627 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:50:36.228251 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:50:36.248846 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Feb 13 19:50:36.269227 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:50:36.278997 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:50:36.422184 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:50:36.447617 amazon-ssm-agent[2193]: Initializing new seelog logger Feb 13 19:50:36.449220 amazon-ssm-agent[2193]: New Seelog Logger Creation Complete Feb 13 19:50:36.452574 amazon-ssm-agent[2193]: 2025/02/13 19:50:36 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:50:36.452574 amazon-ssm-agent[2193]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:50:36.452574 amazon-ssm-agent[2193]: 2025/02/13 19:50:36 processing appconfig overrides Feb 13 19:50:36.452574 amazon-ssm-agent[2193]: 2025/02/13 19:50:36 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:50:36.452574 amazon-ssm-agent[2193]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:50:36.452574 amazon-ssm-agent[2193]: 2025/02/13 19:50:36 processing appconfig overrides Feb 13 19:50:36.452574 amazon-ssm-agent[2193]: 2025/02/13 19:50:36 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:50:36.452574 amazon-ssm-agent[2193]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
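The containerd error above ("no network config found in /etc/cni/net.d") is expected at this stage: the CRI plugin starts before any CNI network has been installed, and pod networking stays down until something drops a config into that directory. A hypothetical sketch of such a file, written as JSON (a valid subset of the conflist format); the network name, bridge name and subnet are made-up placeholders, not anything Flatcar or a particular CNI plugin actually installs:

    import json, pathlib

    # Hypothetical example only: on a real node the CNI plugin
    # (flannel, cilium, ...) installs its own conflist here.
    conf = {
        "cniVersion": "0.4.0",
        "name": "examplenet",              # placeholder network name
        "plugins": [{
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "10.88.0.0/16",  # placeholder pod subnet
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        }],
    }
    path = pathlib.Path("/etc/cni/net.d/10-examplenet.conflist")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(conf, indent=2))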
Feb 13 19:50:36.452574 amazon-ssm-agent[2193]: 2025/02/13 19:50:36 processing appconfig overrides Feb 13 19:50:36.452574 amazon-ssm-agent[2193]: 2025-02-13 19:50:36 INFO Proxy environment variables: Feb 13 19:50:36.460377 amazon-ssm-agent[2193]: 2025/02/13 19:50:36 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:50:36.460522 amazon-ssm-agent[2193]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:50:36.460820 amazon-ssm-agent[2193]: 2025/02/13 19:50:36 processing appconfig overrides Feb 13 19:50:36.558120 amazon-ssm-agent[2193]: 2025-02-13 19:50:36 INFO https_proxy: Feb 13 19:50:36.659096 tar[2014]: linux-arm64/LICENSE Feb 13 19:50:36.659096 tar[2014]: linux-arm64/README.md Feb 13 19:50:36.666581 amazon-ssm-agent[2193]: 2025-02-13 19:50:36 INFO http_proxy: Feb 13 19:50:36.706231 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 19:50:36.762088 amazon-ssm-agent[2193]: 2025-02-13 19:50:36 INFO no_proxy: Feb 13 19:50:36.861016 amazon-ssm-agent[2193]: 2025-02-13 19:50:36 INFO Checking if agent identity type OnPrem can be assumed Feb 13 19:50:36.958417 amazon-ssm-agent[2193]: 2025-02-13 19:50:36 INFO Checking if agent identity type EC2 can be assumed Feb 13 19:50:37.058223 amazon-ssm-agent[2193]: 2025-02-13 19:50:36 INFO Agent will take identity from EC2 Feb 13 19:50:37.157593 amazon-ssm-agent[2193]: 2025-02-13 19:50:36 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 19:50:37.257015 amazon-ssm-agent[2193]: 2025-02-13 19:50:36 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 19:50:37.356085 amazon-ssm-agent[2193]: 2025-02-13 19:50:36 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 19:50:37.374395 sshd_keygen[2020]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:50:37.456151 amazon-ssm-agent[2193]: 2025-02-13 19:50:36 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Feb 13 19:50:37.461729 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 19:50:37.484520 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:50:37.498535 systemd[1]: Started sshd@0-172.31.30.61:22-139.178.89.65:53076.service - OpenSSH per-connection server daemon (139.178.89.65:53076). Feb 13 19:50:37.514433 amazon-ssm-agent[2193]: 2025-02-13 19:50:36 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Feb 13 19:50:37.514735 amazon-ssm-agent[2193]: 2025-02-13 19:50:36 INFO [amazon-ssm-agent] Starting Core Agent Feb 13 19:50:37.516438 amazon-ssm-agent[2193]: 2025-02-13 19:50:36 INFO [amazon-ssm-agent] registrar detected. Attempting registration Feb 13 19:50:37.516438 amazon-ssm-agent[2193]: 2025-02-13 19:50:36 INFO [Registrar] Starting registrar module Feb 13 19:50:37.516438 amazon-ssm-agent[2193]: 2025-02-13 19:50:36 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Feb 13 19:50:37.516438 amazon-ssm-agent[2193]: 2025-02-13 19:50:37 INFO [EC2Identity] EC2 registration was successful. Feb 13 19:50:37.516438 amazon-ssm-agent[2193]: 2025-02-13 19:50:37 INFO [CredentialRefresher] credentialRefresher has started Feb 13 19:50:37.516438 amazon-ssm-agent[2193]: 2025-02-13 19:50:37 INFO [CredentialRefresher] Starting credentials refresher loop Feb 13 19:50:37.516438 amazon-ssm-agent[2193]: 2025-02-13 19:50:37 INFO EC2RoleProvider Successfully connected with instance profile role credentials Feb 13 19:50:37.517437 systemd[1]: issuegen.service: Deactivated successfully. 
Feb 13 19:50:37.519826 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:50:37.531588 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:50:37.554854 amazon-ssm-agent[2193]: 2025-02-13 19:50:37 INFO [CredentialRefresher] Next credential rotation will be in 30.891628929366668 minutes Feb 13 19:50:37.581127 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:50:37.597421 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:50:37.611501 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 19:50:37.615619 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:50:37.764322 sshd[2224]: Accepted publickey for core from 139.178.89.65 port 53076 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:37.768279 sshd[2224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:37.788171 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:50:37.802643 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:50:37.812598 systemd-logind[1996]: New session 1 of user core. Feb 13 19:50:37.850925 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:50:37.866682 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:50:37.886520 (systemd)[2235]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:50:37.921426 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:50:37.925029 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:50:37.943881 (kubelet)[2244]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:50:38.119437 systemd[2235]: Queued start job for default target default.target. Feb 13 19:50:38.129543 systemd[2235]: Created slice app.slice - User Application Slice. Feb 13 19:50:38.129870 systemd[2235]: Reached target paths.target - Paths. Feb 13 19:50:38.129909 systemd[2235]: Reached target timers.target - Timers. Feb 13 19:50:38.134285 systemd[2235]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:50:38.185339 systemd[2235]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:50:38.185586 systemd[2235]: Reached target sockets.target - Sockets. Feb 13 19:50:38.185619 systemd[2235]: Reached target basic.target - Basic System. Feb 13 19:50:38.185701 systemd[2235]: Reached target default.target - Main User Target. Feb 13 19:50:38.185765 systemd[2235]: Startup finished in 284ms. Feb 13 19:50:38.186119 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:50:38.202281 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:50:38.205408 systemd[1]: Startup finished in 1.241s (kernel) + 9.137s (initrd) + 9.313s (userspace) = 19.692s. Feb 13 19:50:38.381847 systemd[1]: Started sshd@1-172.31.30.61:22-139.178.89.65:53956.service - OpenSSH per-connection server daemon (139.178.89.65:53956). 
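The "Accepted publickey ... SHA256:H27J0U/..." entries identify the client key by its OpenSSH SHA256 fingerprint: the unpadded base64 of a SHA-256 digest over the raw public-key blob. A short sketch reproducing the format from an authorized_keys-style line:

    import base64, hashlib, sys

    # Compute an OpenSSH-style fingerprint ("SHA256:...") from a public
    # key line such as "ssh-rsa AAAA... comment".
    def fingerprint(pubkey_line: str) -> str:
        blob = base64.b64decode(pubkey_line.split()[1])
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    print(fingerprint(open(sys.argv[1]).read()))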
Feb 13 19:50:38.551301 amazon-ssm-agent[2193]: 2025-02-13 19:50:38 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Feb 13 19:50:38.571011 sshd[2260]: Accepted publickey for core from 139.178.89.65 port 53956 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:38.571457 sshd[2260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:38.592482 systemd-logind[1996]: New session 2 of user core. Feb 13 19:50:38.599274 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 19:50:38.644537 ntpd[1991]: Listen normally on 7 eth0 [fe80::49d:26ff:fe5e:a30b%2]:123 Feb 13 19:50:38.645323 ntpd[1991]: 13 Feb 19:50:38 ntpd[1991]: Listen normally on 7 eth0 [fe80::49d:26ff:fe5e:a30b%2]:123 Feb 13 19:50:38.652393 amazon-ssm-agent[2193]: 2025-02-13 19:50:38 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2263) started Feb 13 19:50:38.742077 sshd[2260]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:38.754660 amazon-ssm-agent[2193]: 2025-02-13 19:50:38 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Feb 13 19:50:38.760595 systemd[1]: sshd@1-172.31.30.61:22-139.178.89.65:53956.service: Deactivated successfully. Feb 13 19:50:38.768542 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:50:38.771238 systemd-logind[1996]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:50:38.790125 systemd[1]: Started sshd@2-172.31.30.61:22-139.178.89.65:53964.service - OpenSSH per-connection server daemon (139.178.89.65:53964). Feb 13 19:50:38.793867 systemd-logind[1996]: Removed session 2. Feb 13 19:50:38.986885 sshd[2275]: Accepted publickey for core from 139.178.89.65 port 53964 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:38.990185 sshd[2275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:39.003297 systemd-logind[1996]: New session 3 of user core. Feb 13 19:50:39.011403 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:50:39.068898 kubelet[2244]: E0213 19:50:39.068837 2244 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:50:39.073712 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:50:39.074094 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:50:39.074770 systemd[1]: kubelet.service: Consumed 1.376s CPU time. Feb 13 19:50:39.138136 sshd[2275]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:39.145343 systemd[1]: sshd@2-172.31.30.61:22-139.178.89.65:53964.service: Deactivated successfully. Feb 13 19:50:39.150881 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:50:39.152894 systemd-logind[1996]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:50:39.154804 systemd-logind[1996]: Removed session 3. Feb 13 19:50:39.184517 systemd[1]: Started sshd@3-172.31.30.61:22-139.178.89.65:53972.service - OpenSSH per-connection server daemon (139.178.89.65:53972). 
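The kubelet failure above comes down to the missing /var/lib/kubelet/config.yaml: on a kubeadm-managed node that file is generated by "kubeadm init" or "kubeadm join", so the exit-code-1 failures and systemd's restart loop are expected until one of those runs. For illustration only, a minimal stand-in could be produced like this; JSON is a subset of YAML, and cgroupDriver and staticPodPath are real KubeletConfiguration (v1beta1) fields, but the values are placeholders rather than what kubeadm would write:

    import json, pathlib

    # Illustration only: kubeadm normally generates this file itself.
    cfg = {
        "apiVersion": "kubelet.config.k8s.io/v1beta1",
        "kind": "KubeletConfiguration",
        "cgroupDriver": "systemd",        # matches SystemdCgroup:true in the CRI config above
        "staticPodPath": "/etc/kubernetes/manifests",
    }
    p = pathlib.Path("/var/lib/kubelet/config.yaml")
    p.parent.mkdir(parents=True, exist_ok=True)
    p.write_text(json.dumps(cfg, indent=2))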
Feb 13 19:50:39.353063 sshd[2288]: Accepted publickey for core from 139.178.89.65 port 53972 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:39.356265 sshd[2288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:39.367299 systemd-logind[1996]: New session 4 of user core. Feb 13 19:50:39.374325 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:50:39.504897 sshd[2288]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:39.510539 systemd[1]: sshd@3-172.31.30.61:22-139.178.89.65:53972.service: Deactivated successfully. Feb 13 19:50:39.514172 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:50:39.518310 systemd-logind[1996]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:50:39.520101 systemd-logind[1996]: Removed session 4. Feb 13 19:50:39.545556 systemd[1]: Started sshd@4-172.31.30.61:22-139.178.89.65:53982.service - OpenSSH per-connection server daemon (139.178.89.65:53982). Feb 13 19:50:39.719799 sshd[2295]: Accepted publickey for core from 139.178.89.65 port 53982 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:39.722427 sshd[2295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:39.731354 systemd-logind[1996]: New session 5 of user core. Feb 13 19:50:39.742214 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 19:50:39.880395 sudo[2298]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:50:39.881764 sudo[2298]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:50:39.904742 sudo[2298]: pam_unix(sudo:session): session closed for user root Feb 13 19:50:39.929225 sshd[2295]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:39.938896 systemd[1]: sshd@4-172.31.30.61:22-139.178.89.65:53982.service: Deactivated successfully. Feb 13 19:50:39.943282 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:50:39.945181 systemd-logind[1996]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:50:39.947068 systemd-logind[1996]: Removed session 5. Feb 13 19:50:39.966520 systemd[1]: Started sshd@5-172.31.30.61:22-139.178.89.65:53996.service - OpenSSH per-connection server daemon (139.178.89.65:53996). Feb 13 19:50:40.144412 sshd[2303]: Accepted publickey for core from 139.178.89.65 port 53996 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:40.147097 sshd[2303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:40.155442 systemd-logind[1996]: New session 6 of user core. Feb 13 19:50:40.165243 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 19:50:40.269736 sudo[2307]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:50:40.270411 sudo[2307]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:50:40.276883 sudo[2307]: pam_unix(sudo:session): session closed for user root Feb 13 19:50:40.287683 sudo[2306]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 13 19:50:40.288395 sudo[2306]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:50:40.317477 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
Feb 13 19:50:40.321849 auditctl[2310]: No rules Feb 13 19:50:40.324471 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:50:40.326045 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Feb 13 19:50:40.338668 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 19:50:40.385995 augenrules[2328]: No rules Feb 13 19:50:40.389271 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 19:50:40.391674 sudo[2306]: pam_unix(sudo:session): session closed for user root Feb 13 19:50:40.415417 sshd[2303]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:40.421783 systemd[1]: sshd@5-172.31.30.61:22-139.178.89.65:53996.service: Deactivated successfully. Feb 13 19:50:40.425508 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:50:40.429305 systemd-logind[1996]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:50:40.431742 systemd-logind[1996]: Removed session 6. Feb 13 19:50:40.455681 systemd[1]: Started sshd@6-172.31.30.61:22-139.178.89.65:53998.service - OpenSSH per-connection server daemon (139.178.89.65:53998). Feb 13 19:50:40.628912 sshd[2336]: Accepted publickey for core from 139.178.89.65 port 53998 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:40.631866 sshd[2336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:40.642328 systemd-logind[1996]: New session 7 of user core. Feb 13 19:50:40.649231 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:50:40.754103 sudo[2339]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:50:40.754716 sudo[2339]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:50:41.359480 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 19:50:41.377460 (dockerd)[2355]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 19:50:41.855023 dockerd[2355]: time="2025-02-13T19:50:41.854609899Z" level=info msg="Starting up" Feb 13 19:50:42.115504 dockerd[2355]: time="2025-02-13T19:50:42.115352136Z" level=info msg="Loading containers: start." Feb 13 19:50:42.315004 kernel: Initializing XFRM netlink socket Feb 13 19:50:42.387282 (udev-worker)[2379]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:50:42.490546 systemd-networkd[1933]: docker0: Link UP Feb 13 19:50:42.519340 dockerd[2355]: time="2025-02-13T19:50:42.519278497Z" level=info msg="Loading containers: done." Feb 13 19:50:42.547212 dockerd[2355]: time="2025-02-13T19:50:42.546465616Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 19:50:42.547212 dockerd[2355]: time="2025-02-13T19:50:42.546613542Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Feb 13 19:50:42.547212 dockerd[2355]: time="2025-02-13T19:50:42.546800500Z" level=info msg="Daemon has completed initialization" Feb 13 19:50:42.598065 dockerd[2355]: time="2025-02-13T19:50:42.597640087Z" level=info msg="API listen on /run/docker.sock" Feb 13 19:50:42.598795 systemd[1]: Started docker.service - Docker Application Container Engine. 
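With dockerd reporting "API listen on /run/docker.sock", the daemon can be probed over that unix socket; /_ping is a real, unversioned Docker API endpoint that returns OK once initialization has completed. A stdlib sketch speaking raw HTTP/1.0 to it:

    import socket

    # Ping dockerd over its unix socket; expects "HTTP/1.0 200 OK"
    # with body "OK" once the daemon is up.
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect("/run/docker.sock")
    s.sendall(b"GET /_ping HTTP/1.0\r\nHost: docker\r\n\r\n")
    resp = b""
    while chunk := s.recv(4096):
        resp += chunk
    s.close()
    print(resp.decode(errors="replace").splitlines()[0])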
Feb 13 19:50:43.939407 containerd[2023]: time="2025-02-13T19:50:43.938775669Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\"" Feb 13 19:50:44.693221 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2376198740.mount: Deactivated successfully. Feb 13 19:50:46.539109 containerd[2023]: time="2025-02-13T19:50:46.539049634Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:46.542133 containerd[2023]: time="2025-02-13T19:50:46.542046992Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=29865207" Feb 13 19:50:46.542650 containerd[2023]: time="2025-02-13T19:50:46.542575376Z" level=info msg="ImageCreate event name:\"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:46.548675 containerd[2023]: time="2025-02-13T19:50:46.548546188Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:46.551090 containerd[2023]: time="2025-02-13T19:50:46.551030950Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"29862007\" in 2.612179439s" Feb 13 19:50:46.551371 containerd[2023]: time="2025-02-13T19:50:46.551291973Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\"" Feb 13 19:50:46.594495 containerd[2023]: time="2025-02-13T19:50:46.594404282Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\"" Feb 13 19:50:48.776340 containerd[2023]: time="2025-02-13T19:50:48.776156973Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:48.778505 containerd[2023]: time="2025-02-13T19:50:48.778427535Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=26898594" Feb 13 19:50:48.779712 containerd[2023]: time="2025-02-13T19:50:48.779609010Z" level=info msg="ImageCreate event name:\"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:48.788986 containerd[2023]: time="2025-02-13T19:50:48.787963301Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:48.790201 containerd[2023]: time="2025-02-13T19:50:48.790142018Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"28302323\" in 
2.195648483s" Feb 13 19:50:48.790318 containerd[2023]: time="2025-02-13T19:50:48.790207967Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\"" Feb 13 19:50:48.835407 containerd[2023]: time="2025-02-13T19:50:48.835311700Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\"" Feb 13 19:50:49.324662 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 19:50:49.333384 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:50:49.660915 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:50:49.679903 (kubelet)[2577]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:50:49.769182 kubelet[2577]: E0213 19:50:49.769022 2577 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:50:49.776376 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:50:49.776722 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:50:50.352820 containerd[2023]: time="2025-02-13T19:50:50.352634324Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:50.354694 containerd[2023]: time="2025-02-13T19:50:50.354628833Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=16164934" Feb 13 19:50:50.355964 containerd[2023]: time="2025-02-13T19:50:50.355900545Z" level=info msg="ImageCreate event name:\"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:50.362794 containerd[2023]: time="2025-02-13T19:50:50.362697033Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:50.365472 containerd[2023]: time="2025-02-13T19:50:50.365194641Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"17568681\" in 1.529814988s" Feb 13 19:50:50.365472 containerd[2023]: time="2025-02-13T19:50:50.365263364Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\"" Feb 13 19:50:50.405496 containerd[2023]: time="2025-02-13T19:50:50.405425763Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 19:50:51.635350 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2551449609.mount: Deactivated successfully. 
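The "Pulled image" entries carry enough data to work out effective pull throughput. For the kube-apiserver image above, 29862007 bytes arrived in 2.612179439s:

    # Effective pull rate from the byte count and wall time containerd logged.
    size_bytes = 29_862_007        # kube-apiserver image, from the log
    elapsed_s = 2.612179439
    print(f"{size_bytes / elapsed_s / 1e6:.1f} MB/s")   # ~11.4 MB/s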
Feb 13 19:50:52.118091 containerd[2023]: time="2025-02-13T19:50:52.117488867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:52.119973 containerd[2023]: time="2025-02-13T19:50:52.119651736Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=25663370" Feb 13 19:50:52.121350 containerd[2023]: time="2025-02-13T19:50:52.121275607Z" level=info msg="ImageCreate event name:\"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:52.125084 containerd[2023]: time="2025-02-13T19:50:52.125018490Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:52.126998 containerd[2023]: time="2025-02-13T19:50:52.126549807Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"25662389\" in 1.721057182s" Feb 13 19:50:52.126998 containerd[2023]: time="2025-02-13T19:50:52.126605599Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\"" Feb 13 19:50:52.167348 containerd[2023]: time="2025-02-13T19:50:52.167275804Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 19:50:52.768227 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1829085105.mount: Deactivated successfully. 
Feb 13 19:50:53.944455 containerd[2023]: time="2025-02-13T19:50:53.943423959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:53.946824 containerd[2023]: time="2025-02-13T19:50:53.946709605Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Feb 13 19:50:53.947290 containerd[2023]: time="2025-02-13T19:50:53.947244532Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:53.953396 containerd[2023]: time="2025-02-13T19:50:53.953325103Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:53.956998 containerd[2023]: time="2025-02-13T19:50:53.956822751Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.789474514s" Feb 13 19:50:53.956998 containerd[2023]: time="2025-02-13T19:50:53.956897428Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 19:50:54.000743 containerd[2023]: time="2025-02-13T19:50:54.000109748Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 19:50:54.491004 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1084236054.mount: Deactivated successfully. 
Feb 13 19:50:54.499643 containerd[2023]: time="2025-02-13T19:50:54.499564778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:54.502160 containerd[2023]: time="2025-02-13T19:50:54.502095739Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Feb 13 19:50:54.503582 containerd[2023]: time="2025-02-13T19:50:54.503493658Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:54.507859 containerd[2023]: time="2025-02-13T19:50:54.507758009Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:54.509867 containerd[2023]: time="2025-02-13T19:50:54.509594158Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 509.390356ms" Feb 13 19:50:54.509867 containerd[2023]: time="2025-02-13T19:50:54.509651415Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 13 19:50:54.549265 containerd[2023]: time="2025-02-13T19:50:54.548867970Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Feb 13 19:50:55.095140 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2628479293.mount: Deactivated successfully. Feb 13 19:50:58.042155 containerd[2023]: time="2025-02-13T19:50:58.042068958Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:58.068091 containerd[2023]: time="2025-02-13T19:50:58.067999349Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472" Feb 13 19:50:58.113795 containerd[2023]: time="2025-02-13T19:50:58.113354031Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:58.148839 containerd[2023]: time="2025-02-13T19:50:58.148778310Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:58.152853 containerd[2023]: time="2025-02-13T19:50:58.152358488Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.603436551s" Feb 13 19:50:58.152853 containerd[2023]: time="2025-02-13T19:50:58.152421783Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Feb 13 19:51:00.028034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
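kubelet.service is now on its second scheduled restart. The counter systemd keeps for this can be read back through the NRestarts service property (present in systemd 235 and later); a small sketch, assuming systemctl is on PATH:

    import subprocess

    # Query systemd's restart counter for the unit.
    out = subprocess.run(
        ["systemctl", "show", "kubelet.service", "--property=NRestarts"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(out)   # e.g. "NRestarts=2", matching the log above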
Feb 13 19:51:00.041225 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:51:00.410459 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:51:00.415255 (kubelet)[2775]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:51:00.498091 kubelet[2775]: E0213 19:51:00.498008 2775 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:51:00.503583 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:51:00.504447 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:51:05.750970 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 13 19:51:07.370786 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:51:07.379479 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:51:07.439667 systemd[1]: Reloading requested from client PID 2793 ('systemctl') (unit session-7.scope)... Feb 13 19:51:07.439695 systemd[1]: Reloading... Feb 13 19:51:07.668025 zram_generator::config[2836]: No configuration found. Feb 13 19:51:07.958009 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:51:08.139520 systemd[1]: Reloading finished in 699 ms. Feb 13 19:51:08.251809 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 19:51:08.252111 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 19:51:08.254053 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:51:08.262720 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:51:08.569201 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:51:08.587779 (kubelet)[2897]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:51:08.686597 kubelet[2897]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:51:08.686597 kubelet[2897]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:51:08.686597 kubelet[2897]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 19:51:08.686597 kubelet[2897]: I0213 19:51:08.685087 2897 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:51:11.569737 kubelet[2897]: I0213 19:51:11.569689 2897 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 19:51:11.570566 kubelet[2897]: I0213 19:51:11.570334 2897 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:51:11.570928 kubelet[2897]: I0213 19:51:11.570857 2897 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 19:51:11.598399 kubelet[2897]: E0213 19:51:11.598335 2897 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.30.61:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.30.61:6443: connect: connection refused Feb 13 19:51:11.599157 kubelet[2897]: I0213 19:51:11.598908 2897 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:51:11.616416 kubelet[2897]: I0213 19:51:11.616378 2897 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 19:51:11.619199 kubelet[2897]: I0213 19:51:11.619135 2897 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:51:11.619658 kubelet[2897]: I0213 19:51:11.619350 2897 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-30-61","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 19:51:11.620418 kubelet[2897]: I0213 19:51:11.619873 2897 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:51:11.620418 kubelet[2897]: I0213 19:51:11.619899 2897 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 19:51:11.620418 kubelet[2897]: I0213 19:51:11.620165 2897 state_mem.go:36] "Initialized new in-memory state 
store" Feb 13 19:51:11.621681 kubelet[2897]: I0213 19:51:11.621655 2897 kubelet.go:400] "Attempting to sync node with API server" Feb 13 19:51:11.621813 kubelet[2897]: I0213 19:51:11.621793 2897 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:51:11.622068 kubelet[2897]: I0213 19:51:11.622048 2897 kubelet.go:312] "Adding apiserver pod source" Feb 13 19:51:11.622202 kubelet[2897]: I0213 19:51:11.622181 2897 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:51:11.623928 kubelet[2897]: W0213 19:51:11.623833 2897 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.30.61:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-61&limit=500&resourceVersion=0": dial tcp 172.31.30.61:6443: connect: connection refused Feb 13 19:51:11.624098 kubelet[2897]: E0213 19:51:11.623956 2897 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.30.61:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-61&limit=500&resourceVersion=0": dial tcp 172.31.30.61:6443: connect: connection refused Feb 13 19:51:11.624648 kubelet[2897]: W0213 19:51:11.624563 2897 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.30.61:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.61:6443: connect: connection refused Feb 13 19:51:11.624752 kubelet[2897]: E0213 19:51:11.624652 2897 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.30.61:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.61:6443: connect: connection refused Feb 13 19:51:11.625553 kubelet[2897]: I0213 19:51:11.625506 2897 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 19:51:11.626981 kubelet[2897]: I0213 19:51:11.625870 2897 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:51:11.626981 kubelet[2897]: W0213 19:51:11.625976 2897 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 13 19:51:11.627152 kubelet[2897]: I0213 19:51:11.627053 2897 server.go:1264] "Started kubelet" Feb 13 19:51:11.634604 kubelet[2897]: I0213 19:51:11.634538 2897 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:51:11.635260 kubelet[2897]: E0213 19:51:11.635005 2897 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.30.61:6443/api/v1/namespaces/default/events\": dial tcp 172.31.30.61:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-30-61.1823dc73f18c8eba default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-30-61,UID:ip-172-31-30-61,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-30-61,},FirstTimestamp:2025-02-13 19:51:11.627013818 +0000 UTC m=+3.031204835,LastTimestamp:2025-02-13 19:51:11.627013818 +0000 UTC m=+3.031204835,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-30-61,}" Feb 13 19:51:11.638357 kubelet[2897]: I0213 19:51:11.638295 2897 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:51:11.640290 kubelet[2897]: I0213 19:51:11.640249 2897 server.go:455] "Adding debug handlers to kubelet server" Feb 13 19:51:11.642201 kubelet[2897]: I0213 19:51:11.642115 2897 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:51:11.642710 kubelet[2897]: I0213 19:51:11.642679 2897 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:51:11.645663 kubelet[2897]: I0213 19:51:11.645605 2897 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 19:51:11.646287 kubelet[2897]: I0213 19:51:11.646258 2897 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:51:11.648724 kubelet[2897]: I0213 19:51:11.648685 2897 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:51:11.650400 kubelet[2897]: E0213 19:51:11.650262 2897 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-61?timeout=10s\": dial tcp 172.31.30.61:6443: connect: connection refused" interval="200ms" Feb 13 19:51:11.650892 kubelet[2897]: W0213 19:51:11.650813 2897 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.30.61:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.61:6443: connect: connection refused Feb 13 19:51:11.651095 kubelet[2897]: E0213 19:51:11.651070 2897 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.30.61:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.61:6443: connect: connection refused Feb 13 19:51:11.651534 kubelet[2897]: I0213 19:51:11.651501 2897 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:51:11.651841 kubelet[2897]: I0213 19:51:11.651805 2897 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:51:11.655421 kubelet[2897]: I0213 19:51:11.654985 2897 
factory.go:221] Registration of the containerd container factory successfully Feb 13 19:51:11.685154 kubelet[2897]: E0213 19:51:11.685095 2897 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:51:11.687907 kubelet[2897]: I0213 19:51:11.687636 2897 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:51:11.694512 kubelet[2897]: I0213 19:51:11.694290 2897 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:51:11.694512 kubelet[2897]: I0213 19:51:11.694396 2897 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:51:11.694512 kubelet[2897]: I0213 19:51:11.694441 2897 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 19:51:11.694512 kubelet[2897]: E0213 19:51:11.694513 2897 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:51:11.696504 kubelet[2897]: W0213 19:51:11.695986 2897 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.30.61:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.61:6443: connect: connection refused Feb 13 19:51:11.696504 kubelet[2897]: E0213 19:51:11.696081 2897 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.30.61:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.61:6443: connect: connection refused Feb 13 19:51:11.707659 kubelet[2897]: I0213 19:51:11.706894 2897 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:51:11.707659 kubelet[2897]: I0213 19:51:11.707009 2897 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:51:11.707659 kubelet[2897]: I0213 19:51:11.707071 2897 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:51:11.711523 kubelet[2897]: I0213 19:51:11.711422 2897 policy_none.go:49] "None policy: Start" Feb 13 19:51:11.712860 kubelet[2897]: I0213 19:51:11.712789 2897 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:51:11.712860 kubelet[2897]: I0213 19:51:11.712845 2897 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:51:11.723347 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 19:51:11.742392 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 19:51:11.749177 kubelet[2897]: I0213 19:51:11.749112 2897 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-61" Feb 13 19:51:11.749980 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
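The kubepods-*.slice units created above (and the per-pod slices that follow) use systemd's escaped naming scheme, which is reproducible directly from what this log shows — "-" is systemd's hierarchy separator, so dashes inside pod UIDs are escaped to underscores:

```go
package main

import (
	"fmt"
	"strings"
)

// podSlice reproduces the slice names visible in this transcript:
// "kubepods-<qos>-pod<uid>.slice", with dashes in the UID replaced by
// underscores because systemd treats "-" as a parent/child separator
// (kubepods-burstable.slice is a child of kubepods.slice).
func podSlice(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	// Static pod UIDs are manifest hashes with no dashes, so they pass through:
	fmt.Println(podSlice("burstable", "e09b4eed7166e26c0a7966af78f6b9c7"))
	// API-created pods show the escaping, e.g. cilium-tq9lw later in this log:
	fmt.Println(podSlice("burstable", "997007d6-5e4e-4700-9480-ddf89d70f8e6"))
}
```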
Feb 13 19:51:11.751553 kubelet[2897]: E0213 19:51:11.751479 2897 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.30.61:6443/api/v1/nodes\": dial tcp 172.31.30.61:6443: connect: connection refused" node="ip-172-31-30-61" Feb 13 19:51:11.761193 kubelet[2897]: I0213 19:51:11.761129 2897 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:51:11.761542 kubelet[2897]: I0213 19:51:11.761466 2897 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:51:11.761829 kubelet[2897]: I0213 19:51:11.761691 2897 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:51:11.765886 kubelet[2897]: E0213 19:51:11.765795 2897 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-30-61\" not found" Feb 13 19:51:11.796008 kubelet[2897]: I0213 19:51:11.795533 2897 topology_manager.go:215] "Topology Admit Handler" podUID="e09b4eed7166e26c0a7966af78f6b9c7" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-30-61" Feb 13 19:51:11.799176 kubelet[2897]: I0213 19:51:11.799063 2897 topology_manager.go:215] "Topology Admit Handler" podUID="493572c1cdb53aea770af4abff8ef406" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-30-61" Feb 13 19:51:11.804215 kubelet[2897]: I0213 19:51:11.804151 2897 topology_manager.go:215] "Topology Admit Handler" podUID="2cc199f74d398e4c7af420c64d31bbf6" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-30-61" Feb 13 19:51:11.821222 systemd[1]: Created slice kubepods-burstable-pode09b4eed7166e26c0a7966af78f6b9c7.slice - libcontainer container kubepods-burstable-pode09b4eed7166e26c0a7966af78f6b9c7.slice. 
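The VerifyControllerAttachedVolume lines that follow are the kubelet wiring plain hostPath volumes declared in the static pod manifests under /etc/kubernetes/manifests. A reconstruction of one of them as a client-go object — the host path and type here are assumptions, since the log records only the volume name, pod, and UID:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Illustrative values: the "k8s-certs" name matches the log; the path
	// and type are what a kubeadm-style manifest typically uses, not read
	// from this host.
	hostPathType := corev1.HostPathDirectoryOrCreate
	vol := corev1.Volume{
		Name: "k8s-certs",
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{
				Path: "/etc/kubernetes/pki",
				Type: &hostPathType,
			},
		},
	}
	fmt.Printf("%s -> %s\n", vol.Name, vol.VolumeSource.HostPath.Path)
}
```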
Feb 13 19:51:11.849723 kubelet[2897]: I0213 19:51:11.849406 2897 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e09b4eed7166e26c0a7966af78f6b9c7-ca-certs\") pod \"kube-apiserver-ip-172-31-30-61\" (UID: \"e09b4eed7166e26c0a7966af78f6b9c7\") " pod="kube-system/kube-apiserver-ip-172-31-30-61" Feb 13 19:51:11.849723 kubelet[2897]: I0213 19:51:11.849471 2897 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e09b4eed7166e26c0a7966af78f6b9c7-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-61\" (UID: \"e09b4eed7166e26c0a7966af78f6b9c7\") " pod="kube-system/kube-apiserver-ip-172-31-30-61" Feb 13 19:51:11.849723 kubelet[2897]: I0213 19:51:11.849514 2897 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/493572c1cdb53aea770af4abff8ef406-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-61\" (UID: \"493572c1cdb53aea770af4abff8ef406\") " pod="kube-system/kube-controller-manager-ip-172-31-30-61" Feb 13 19:51:11.850044 kubelet[2897]: I0213 19:51:11.849774 2897 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2cc199f74d398e4c7af420c64d31bbf6-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-61\" (UID: \"2cc199f74d398e4c7af420c64d31bbf6\") " pod="kube-system/kube-scheduler-ip-172-31-30-61" Feb 13 19:51:11.850044 kubelet[2897]: I0213 19:51:11.849873 2897 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e09b4eed7166e26c0a7966af78f6b9c7-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-61\" (UID: \"e09b4eed7166e26c0a7966af78f6b9c7\") " pod="kube-system/kube-apiserver-ip-172-31-30-61" Feb 13 19:51:11.850044 kubelet[2897]: I0213 19:51:11.850007 2897 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/493572c1cdb53aea770af4abff8ef406-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-61\" (UID: \"493572c1cdb53aea770af4abff8ef406\") " pod="kube-system/kube-controller-manager-ip-172-31-30-61" Feb 13 19:51:11.850208 kubelet[2897]: I0213 19:51:11.850082 2897 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/493572c1cdb53aea770af4abff8ef406-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-61\" (UID: \"493572c1cdb53aea770af4abff8ef406\") " pod="kube-system/kube-controller-manager-ip-172-31-30-61" Feb 13 19:51:11.850208 kubelet[2897]: I0213 19:51:11.850171 2897 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/493572c1cdb53aea770af4abff8ef406-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-61\" (UID: \"493572c1cdb53aea770af4abff8ef406\") " pod="kube-system/kube-controller-manager-ip-172-31-30-61" Feb 13 19:51:11.850327 kubelet[2897]: I0213 19:51:11.850273 2897 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/493572c1cdb53aea770af4abff8ef406-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ip-172-31-30-61\" (UID: \"493572c1cdb53aea770af4abff8ef406\") " pod="kube-system/kube-controller-manager-ip-172-31-30-61" Feb 13 19:51:11.852721 systemd[1]: Created slice kubepods-burstable-pod493572c1cdb53aea770af4abff8ef406.slice - libcontainer container kubepods-burstable-pod493572c1cdb53aea770af4abff8ef406.slice. Feb 13 19:51:11.853681 kubelet[2897]: E0213 19:51:11.853457 2897 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-61?timeout=10s\": dial tcp 172.31.30.61:6443: connect: connection refused" interval="400ms" Feb 13 19:51:11.866150 systemd[1]: Created slice kubepods-burstable-pod2cc199f74d398e4c7af420c64d31bbf6.slice - libcontainer container kubepods-burstable-pod2cc199f74d398e4c7af420c64d31bbf6.slice. Feb 13 19:51:11.954701 kubelet[2897]: I0213 19:51:11.954664 2897 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-61" Feb 13 19:51:11.955496 kubelet[2897]: E0213 19:51:11.955429 2897 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.30.61:6443/api/v1/nodes\": dial tcp 172.31.30.61:6443: connect: connection refused" node="ip-172-31-30-61" Feb 13 19:51:12.144705 containerd[2023]: time="2025-02-13T19:51:12.144591977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-61,Uid:e09b4eed7166e26c0a7966af78f6b9c7,Namespace:kube-system,Attempt:0,}" Feb 13 19:51:12.160373 containerd[2023]: time="2025-02-13T19:51:12.160193691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-61,Uid:493572c1cdb53aea770af4abff8ef406,Namespace:kube-system,Attempt:0,}" Feb 13 19:51:12.171850 containerd[2023]: time="2025-02-13T19:51:12.171204753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-61,Uid:2cc199f74d398e4c7af420c64d31bbf6,Namespace:kube-system,Attempt:0,}" Feb 13 19:51:12.254840 kubelet[2897]: E0213 19:51:12.254773 2897 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-61?timeout=10s\": dial tcp 172.31.30.61:6443: connect: connection refused" interval="800ms" Feb 13 19:51:12.358661 kubelet[2897]: I0213 19:51:12.358604 2897 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-61" Feb 13 19:51:12.359380 kubelet[2897]: E0213 19:51:12.359310 2897 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.30.61:6443/api/v1/nodes\": dial tcp 172.31.30.61:6443: connect: connection refused" node="ip-172-31-30-61" Feb 13 19:51:12.585694 kubelet[2897]: W0213 19:51:12.585544 2897 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.30.61:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-61&limit=500&resourceVersion=0": dial tcp 172.31.30.61:6443: connect: connection refused Feb 13 19:51:12.586723 kubelet[2897]: E0213 19:51:12.585749 2897 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.30.61:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-61&limit=500&resourceVersion=0": dial tcp 172.31.30.61:6443: connect: connection refused Feb 13 19:51:12.677083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2160684149.mount: 
Deactivated successfully. Feb 13 19:51:12.692111 containerd[2023]: time="2025-02-13T19:51:12.691677067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:51:12.698234 containerd[2023]: time="2025-02-13T19:51:12.698165134Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Feb 13 19:51:12.700540 containerd[2023]: time="2025-02-13T19:51:12.700477261Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:51:12.702653 containerd[2023]: time="2025-02-13T19:51:12.702601423Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:51:12.705015 containerd[2023]: time="2025-02-13T19:51:12.704884003Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:51:12.707709 containerd[2023]: time="2025-02-13T19:51:12.707645359Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:51:12.709192 containerd[2023]: time="2025-02-13T19:51:12.709117931Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:51:12.712993 containerd[2023]: time="2025-02-13T19:51:12.712854667Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:51:12.718824 containerd[2023]: time="2025-02-13T19:51:12.717908400Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 573.198645ms" Feb 13 19:51:12.722381 containerd[2023]: time="2025-02-13T19:51:12.722294805Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 550.964996ms" Feb 13 19:51:12.741744 containerd[2023]: time="2025-02-13T19:51:12.741680269Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 581.373781ms" Feb 13 19:51:12.944722 containerd[2023]: time="2025-02-13T19:51:12.944137484Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:51:12.944722 containerd[2023]: time="2025-02-13T19:51:12.944265144Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:51:12.944722 containerd[2023]: time="2025-02-13T19:51:12.944303455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:12.944722 containerd[2023]: time="2025-02-13T19:51:12.944458813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:12.955445 containerd[2023]: time="2025-02-13T19:51:12.954712633Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:51:12.955445 containerd[2023]: time="2025-02-13T19:51:12.954825513Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:51:12.955445 containerd[2023]: time="2025-02-13T19:51:12.954881305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:12.955445 containerd[2023]: time="2025-02-13T19:51:12.955262592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:12.958490 containerd[2023]: time="2025-02-13T19:51:12.957905052Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:51:12.958490 containerd[2023]: time="2025-02-13T19:51:12.958069463Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:51:12.958490 containerd[2023]: time="2025-02-13T19:51:12.958104184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:12.961475 containerd[2023]: time="2025-02-13T19:51:12.960926675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:13.009103 systemd[1]: Started cri-containerd-adb4481f90c78b1d66f36e864a1efdb2a34c77522ee90b3f92d89795e6c8e2fd.scope - libcontainer container adb4481f90c78b1d66f36e864a1efdb2a34c77522ee90b3f92d89795e6c8e2fd. Feb 13 19:51:13.023436 systemd[1]: Started cri-containerd-ce0c0214ee9a02073e3cae8ae1a36a1895eafae41c4b1217eb321192c3c2e534.scope - libcontainer container ce0c0214ee9a02073e3cae8ae1a36a1895eafae41c4b1217eb321192c3c2e534. Feb 13 19:51:13.054254 systemd[1]: Started cri-containerd-175a7d6fbcf7dd3a1c50b3b601dbcb28654683e7263995b8c92d04bb2a62e476.scope - libcontainer container 175a7d6fbcf7dd3a1c50b3b601dbcb28654683e7263995b8c92d04bb2a62e476. 
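Each pause:3.8 pull above completes in roughly 0.55–0.58s and resolves to the same 268403-byte image before the runc.v2 shims start. The equivalent pull through containerd's Go client looks like the sketch below; the socket path is the conventional one (an assumption for this host), and "k8s.io" is the namespace the CRI plugin keeps its images and containers under:

```go
package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// CRI-managed resources live in the "k8s.io" containerd namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Same sandbox image as the log; at ~268 kB the pull is quick, hence
	// the sub-600ms timings recorded above.
	img, err := client.Pull(ctx, "registry.k8s.io/pause:3.8", containerd.WithPullUnpack)
	if err != nil {
		panic(err)
	}
	fmt.Println("pulled", img.Name())
}
```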
Feb 13 19:51:13.057040 kubelet[2897]: E0213 19:51:13.055896 2897 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-61?timeout=10s\": dial tcp 172.31.30.61:6443: connect: connection refused" interval="1.6s" Feb 13 19:51:13.099860 kubelet[2897]: W0213 19:51:13.099724 2897 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.30.61:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.61:6443: connect: connection refused Feb 13 19:51:13.099860 kubelet[2897]: E0213 19:51:13.099825 2897 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.30.61:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.61:6443: connect: connection refused Feb 13 19:51:13.140182 kubelet[2897]: W0213 19:51:13.139812 2897 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.30.61:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.61:6443: connect: connection refused Feb 13 19:51:13.140852 kubelet[2897]: E0213 19:51:13.140416 2897 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.30.61:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.61:6443: connect: connection refused Feb 13 19:51:13.164726 kubelet[2897]: I0213 19:51:13.164684 2897 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-61" Feb 13 19:51:13.165459 kubelet[2897]: E0213 19:51:13.165412 2897 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.30.61:6443/api/v1/nodes\": dial tcp 172.31.30.61:6443: connect: connection refused" node="ip-172-31-30-61" Feb 13 19:51:13.167434 containerd[2023]: time="2025-02-13T19:51:13.167210706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-61,Uid:2cc199f74d398e4c7af420c64d31bbf6,Namespace:kube-system,Attempt:0,} returns sandbox id \"175a7d6fbcf7dd3a1c50b3b601dbcb28654683e7263995b8c92d04bb2a62e476\"" Feb 13 19:51:13.171159 containerd[2023]: time="2025-02-13T19:51:13.170706457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-61,Uid:e09b4eed7166e26c0a7966af78f6b9c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce0c0214ee9a02073e3cae8ae1a36a1895eafae41c4b1217eb321192c3c2e534\"" Feb 13 19:51:13.182898 containerd[2023]: time="2025-02-13T19:51:13.182492616Z" level=info msg="CreateContainer within sandbox \"175a7d6fbcf7dd3a1c50b3b601dbcb28654683e7263995b8c92d04bb2a62e476\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 19:51:13.184868 containerd[2023]: time="2025-02-13T19:51:13.184819066Z" level=info msg="CreateContainer within sandbox \"ce0c0214ee9a02073e3cae8ae1a36a1895eafae41c4b1217eb321192c3c2e534\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 19:51:13.188367 containerd[2023]: time="2025-02-13T19:51:13.188168368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-61,Uid:493572c1cdb53aea770af4abff8ef406,Namespace:kube-system,Attempt:0,} returns sandbox id \"adb4481f90c78b1d66f36e864a1efdb2a34c77522ee90b3f92d89795e6c8e2fd\"" Feb 13 19:51:13.195212 containerd[2023]: 
time="2025-02-13T19:51:13.194615806Z" level=info msg="CreateContainer within sandbox \"adb4481f90c78b1d66f36e864a1efdb2a34c77522ee90b3f92d89795e6c8e2fd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 19:51:13.236181 containerd[2023]: time="2025-02-13T19:51:13.235925907Z" level=info msg="CreateContainer within sandbox \"175a7d6fbcf7dd3a1c50b3b601dbcb28654683e7263995b8c92d04bb2a62e476\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a87f4565dac0c1ac7d1c637894cb3563e71792937a266c98771a197837e551d4\"" Feb 13 19:51:13.237161 containerd[2023]: time="2025-02-13T19:51:13.237095424Z" level=info msg="StartContainer for \"a87f4565dac0c1ac7d1c637894cb3563e71792937a266c98771a197837e551d4\"" Feb 13 19:51:13.245599 containerd[2023]: time="2025-02-13T19:51:13.245518978Z" level=info msg="CreateContainer within sandbox \"adb4481f90c78b1d66f36e864a1efdb2a34c77522ee90b3f92d89795e6c8e2fd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"75a5465dde650d7b0431e11fa21e3d6c7bd23561660d954427e8723cf87e8c02\"" Feb 13 19:51:13.250082 containerd[2023]: time="2025-02-13T19:51:13.249933116Z" level=info msg="StartContainer for \"75a5465dde650d7b0431e11fa21e3d6c7bd23561660d954427e8723cf87e8c02\"" Feb 13 19:51:13.251681 containerd[2023]: time="2025-02-13T19:51:13.251509611Z" level=info msg="CreateContainer within sandbox \"ce0c0214ee9a02073e3cae8ae1a36a1895eafae41c4b1217eb321192c3c2e534\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7607aa28ce1ef8a4509f87e411014fc27abeea26df473a9643446569e57bff20\"" Feb 13 19:51:13.251992 kubelet[2897]: W0213 19:51:13.251519 2897 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.30.61:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.61:6443: connect: connection refused Feb 13 19:51:13.251992 kubelet[2897]: E0213 19:51:13.251627 2897 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.30.61:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.61:6443: connect: connection refused Feb 13 19:51:13.254497 containerd[2023]: time="2025-02-13T19:51:13.254192100Z" level=info msg="StartContainer for \"7607aa28ce1ef8a4509f87e411014fc27abeea26df473a9643446569e57bff20\"" Feb 13 19:51:13.306508 systemd[1]: Started cri-containerd-a87f4565dac0c1ac7d1c637894cb3563e71792937a266c98771a197837e551d4.scope - libcontainer container a87f4565dac0c1ac7d1c637894cb3563e71792937a266c98771a197837e551d4. Feb 13 19:51:13.349246 systemd[1]: Started cri-containerd-75a5465dde650d7b0431e11fa21e3d6c7bd23561660d954427e8723cf87e8c02.scope - libcontainer container 75a5465dde650d7b0431e11fa21e3d6c7bd23561660d954427e8723cf87e8c02. Feb 13 19:51:13.374283 systemd[1]: Started cri-containerd-7607aa28ce1ef8a4509f87e411014fc27abeea26df473a9643446569e57bff20.scope - libcontainer container 7607aa28ce1ef8a4509f87e411014fc27abeea26df473a9643446569e57bff20. 
Feb 13 19:51:13.507314 containerd[2023]: time="2025-02-13T19:51:13.506566794Z" level=info msg="StartContainer for \"a87f4565dac0c1ac7d1c637894cb3563e71792937a266c98771a197837e551d4\" returns successfully" Feb 13 19:51:13.507314 containerd[2023]: time="2025-02-13T19:51:13.506566746Z" level=info msg="StartContainer for \"7607aa28ce1ef8a4509f87e411014fc27abeea26df473a9643446569e57bff20\" returns successfully" Feb 13 19:51:13.517421 containerd[2023]: time="2025-02-13T19:51:13.517211698Z" level=info msg="StartContainer for \"75a5465dde650d7b0431e11fa21e3d6c7bd23561660d954427e8723cf87e8c02\" returns successfully" Feb 13 19:51:13.631368 kubelet[2897]: E0213 19:51:13.631308 2897 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.30.61:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.30.61:6443: connect: connection refused Feb 13 19:51:14.769092 kubelet[2897]: I0213 19:51:14.769045 2897 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-61" Feb 13 19:51:18.371460 kubelet[2897]: I0213 19:51:18.371024 2897 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-30-61" Feb 13 19:51:18.464370 kubelet[2897]: E0213 19:51:18.464308 2897 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Feb 13 19:51:18.628449 kubelet[2897]: I0213 19:51:18.627858 2897 apiserver.go:52] "Watching apiserver" Feb 13 19:51:18.643064 kubelet[2897]: E0213 19:51:18.642993 2897 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-30-61\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-30-61" Feb 13 19:51:18.647605 kubelet[2897]: I0213 19:51:18.647521 2897 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:51:20.520932 update_engine[1997]: I20250213 19:51:20.520026 1997 update_attempter.cc:509] Updating boot flags... Feb 13 19:51:20.558579 systemd[1]: Reloading requested from client PID 3181 ('systemctl') (unit session-7.scope)... Feb 13 19:51:20.558620 systemd[1]: Reloading... Feb 13 19:51:20.714016 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3210) Feb 13 19:51:20.867250 zram_generator::config[3274]: No configuration found. Feb 13 19:51:21.125333 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3209) Feb 13 19:51:21.275107 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:51:21.486008 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3209) Feb 13 19:51:21.554964 systemd[1]: Reloading finished in 995 ms. Feb 13 19:51:21.782842 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:51:21.818297 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:51:21.821227 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:51:21.821421 systemd[1]: kubelet.service: Consumed 3.828s CPU time, 111.4M memory peak, 0B memory swap peak. 
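Reading the "Failed to ensure lease exists, will retry" lines in order shows the retry interval doubling — 200ms, 400ms, 800ms, 1.6s, and finally 3.2s above — until the kube-node-lease namespace exists. A minimal sketch of that policy (the doubling is observed in the log; the cap is an assumption, not visible here):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Observed in this transcript: 200ms -> 400ms -> 800ms -> 1.6s -> 3.2s.
	interval := 200 * time.Millisecond
	const maxInterval = 7 * time.Second // assumed cap; never reached in this log

	for attempt := 1; attempt <= 5; attempt++ {
		fmt.Printf("attempt %d failed, retrying in %v\n", attempt, interval)
		interval *= 2
		if interval > maxInterval {
			interval = maxInterval
		}
	}
}
```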
Feb 13 19:51:21.835677 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:51:22.238362 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:51:22.261439 (kubelet)[3545]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:51:22.375154 kubelet[3545]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:51:22.375154 kubelet[3545]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:51:22.375154 kubelet[3545]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:51:22.375761 kubelet[3545]: I0213 19:51:22.375341 3545 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:51:22.388369 kubelet[3545]: I0213 19:51:22.387801 3545 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 19:51:22.388369 kubelet[3545]: I0213 19:51:22.387883 3545 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:51:22.388369 kubelet[3545]: I0213 19:51:22.388366 3545 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 19:51:22.392106 kubelet[3545]: I0213 19:51:22.391805 3545 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 19:51:22.398451 kubelet[3545]: I0213 19:51:22.396677 3545 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:51:22.442708 kubelet[3545]: I0213 19:51:22.441933 3545 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:51:22.444999 kubelet[3545]: I0213 19:51:22.444207 3545 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:51:22.445549 kubelet[3545]: I0213 19:51:22.445206 3545 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-30-61","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 19:51:22.446996 kubelet[3545]: I0213 19:51:22.446022 3545 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:51:22.446996 kubelet[3545]: I0213 19:51:22.446058 3545 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 19:51:22.446996 kubelet[3545]: I0213 19:51:22.446147 3545 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:51:22.446996 kubelet[3545]: I0213 19:51:22.446392 3545 kubelet.go:400] "Attempting to sync node with API server" Feb 13 19:51:22.446996 kubelet[3545]: I0213 19:51:22.446425 3545 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:51:22.446996 kubelet[3545]: I0213 19:51:22.446485 3545 kubelet.go:312] "Adding apiserver pod source" Feb 13 19:51:22.448790 kubelet[3545]: I0213 19:51:22.448741 3545 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:51:22.459084 sudo[3559]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 19:51:22.459735 sudo[3559]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 19:51:22.475779 kubelet[3545]: I0213 19:51:22.470880 3545 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 19:51:22.475779 kubelet[3545]: I0213 19:51:22.471216 3545 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:51:22.475779 kubelet[3545]: I0213 19:51:22.472140 3545 server.go:1264] "Started kubelet" Feb 13 19:51:22.492095 kubelet[3545]: I0213 19:51:22.491970 3545 fs_resource_analyzer.go:67] "Starting FS 
ResourceAnalyzer" Feb 13 19:51:22.512116 kubelet[3545]: I0213 19:51:22.512029 3545 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:51:22.522267 kubelet[3545]: I0213 19:51:22.521361 3545 server.go:455] "Adding debug handlers to kubelet server" Feb 13 19:51:22.527416 kubelet[3545]: I0213 19:51:22.512310 3545 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:51:22.533001 kubelet[3545]: I0213 19:51:22.531763 3545 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:51:22.534326 kubelet[3545]: I0213 19:51:22.532090 3545 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 19:51:22.540021 kubelet[3545]: I0213 19:51:22.532113 3545 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:51:22.544719 kubelet[3545]: I0213 19:51:22.536410 3545 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:51:22.544919 kubelet[3545]: I0213 19:51:22.544877 3545 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:51:22.545152 kubelet[3545]: I0213 19:51:22.545108 3545 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:51:22.551330 kubelet[3545]: E0213 19:51:22.551256 3545 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:51:22.554032 kubelet[3545]: I0213 19:51:22.553122 3545 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:51:22.621648 kubelet[3545]: I0213 19:51:22.620619 3545 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:51:22.657977 kubelet[3545]: I0213 19:51:22.654192 3545 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 19:51:22.659230 kubelet[3545]: I0213 19:51:22.659181 3545 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:51:22.659712 kubelet[3545]: I0213 19:51:22.659687 3545 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 19:51:22.660018 kubelet[3545]: E0213 19:51:22.659920 3545 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:51:22.681982 kubelet[3545]: I0213 19:51:22.681501 3545 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-61" Feb 13 19:51:22.761080 kubelet[3545]: E0213 19:51:22.760867 3545 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:51:22.782880 kubelet[3545]: I0213 19:51:22.782455 3545 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-30-61" Feb 13 19:51:22.786581 kubelet[3545]: I0213 19:51:22.786496 3545 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-30-61" Feb 13 19:51:22.864009 kubelet[3545]: I0213 19:51:22.863972 3545 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:51:22.864341 kubelet[3545]: I0213 19:51:22.864283 3545 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:51:22.864648 kubelet[3545]: I0213 19:51:22.864540 3545 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:51:22.866869 kubelet[3545]: I0213 19:51:22.866213 3545 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 19:51:22.866869 kubelet[3545]: I0213 19:51:22.866250 3545 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 19:51:22.866869 kubelet[3545]: I0213 19:51:22.866286 3545 policy_none.go:49] "None policy: Start" Feb 13 19:51:22.871535 kubelet[3545]: I0213 19:51:22.870906 3545 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:51:22.871535 kubelet[3545]: I0213 19:51:22.871028 3545 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:51:22.871535 kubelet[3545]: I0213 19:51:22.871389 3545 state_mem.go:75] "Updated machine memory state" Feb 13 19:51:22.911862 kubelet[3545]: I0213 19:51:22.911799 3545 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:51:22.919144 kubelet[3545]: I0213 19:51:22.919002 3545 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:51:22.923476 kubelet[3545]: I0213 19:51:22.922690 3545 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:51:22.963788 kubelet[3545]: I0213 19:51:22.963705 3545 topology_manager.go:215] "Topology Admit Handler" podUID="e09b4eed7166e26c0a7966af78f6b9c7" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-30-61" Feb 13 19:51:22.964834 kubelet[3545]: I0213 19:51:22.963929 3545 topology_manager.go:215] "Topology Admit Handler" podUID="493572c1cdb53aea770af4abff8ef406" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-30-61" Feb 13 19:51:22.964834 kubelet[3545]: I0213 19:51:22.964047 3545 topology_manager.go:215] "Topology Admit Handler" podUID="2cc199f74d398e4c7af420c64d31bbf6" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-30-61" Feb 13 19:51:23.054906 kubelet[3545]: I0213 19:51:23.054740 3545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/493572c1cdb53aea770af4abff8ef406-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-61\" (UID: \"493572c1cdb53aea770af4abff8ef406\") " pod="kube-system/kube-controller-manager-ip-172-31-30-61" Feb 13 19:51:23.054906 kubelet[3545]: I0213 19:51:23.054813 3545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e09b4eed7166e26c0a7966af78f6b9c7-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-61\" (UID: \"e09b4eed7166e26c0a7966af78f6b9c7\") " pod="kube-system/kube-apiserver-ip-172-31-30-61" Feb 13 19:51:23.054906 kubelet[3545]: I0213 19:51:23.054857 3545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e09b4eed7166e26c0a7966af78f6b9c7-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-61\" (UID: \"e09b4eed7166e26c0a7966af78f6b9c7\") " pod="kube-system/kube-apiserver-ip-172-31-30-61" Feb 13 19:51:23.054906 kubelet[3545]: I0213 19:51:23.054900 3545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/493572c1cdb53aea770af4abff8ef406-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-61\" (UID: \"493572c1cdb53aea770af4abff8ef406\") " pod="kube-system/kube-controller-manager-ip-172-31-30-61" Feb 13 19:51:23.055311 kubelet[3545]: I0213 19:51:23.054964 3545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/493572c1cdb53aea770af4abff8ef406-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-61\" (UID: \"493572c1cdb53aea770af4abff8ef406\") " pod="kube-system/kube-controller-manager-ip-172-31-30-61" Feb 13 19:51:23.055311 kubelet[3545]: I0213 19:51:23.055006 3545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/493572c1cdb53aea770af4abff8ef406-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-61\" (UID: \"493572c1cdb53aea770af4abff8ef406\") " pod="kube-system/kube-controller-manager-ip-172-31-30-61" Feb 13 19:51:23.055311 kubelet[3545]: I0213 19:51:23.055280 3545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/493572c1cdb53aea770af4abff8ef406-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-61\" (UID: \"493572c1cdb53aea770af4abff8ef406\") " pod="kube-system/kube-controller-manager-ip-172-31-30-61" Feb 13 19:51:23.055469 kubelet[3545]: I0213 19:51:23.055322 3545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2cc199f74d398e4c7af420c64d31bbf6-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-61\" (UID: \"2cc199f74d398e4c7af420c64d31bbf6\") " pod="kube-system/kube-scheduler-ip-172-31-30-61" Feb 13 19:51:23.055469 kubelet[3545]: I0213 19:51:23.055362 3545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e09b4eed7166e26c0a7966af78f6b9c7-ca-certs\") pod \"kube-apiserver-ip-172-31-30-61\" (UID: \"e09b4eed7166e26c0a7966af78f6b9c7\") " pod="kube-system/kube-apiserver-ip-172-31-30-61" Feb 13 19:51:23.450104 kubelet[3545]: 
I0213 19:51:23.450031 3545 apiserver.go:52] "Watching apiserver" Feb 13 19:51:23.543831 kubelet[3545]: I0213 19:51:23.543282 3545 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:51:23.553581 sudo[3559]: pam_unix(sudo:session): session closed for user root Feb 13 19:51:23.784013 kubelet[3545]: E0213 19:51:23.783199 3545 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-30-61\" already exists" pod="kube-system/kube-apiserver-ip-172-31-30-61" Feb 13 19:51:23.851064 kubelet[3545]: I0213 19:51:23.850932 3545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-30-61" podStartSLOduration=1.850913162 podStartE2EDuration="1.850913162s" podCreationTimestamp="2025-02-13 19:51:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:51:23.818441386 +0000 UTC m=+1.549144805" watchObservedRunningTime="2025-02-13 19:51:23.850913162 +0000 UTC m=+1.581616569" Feb 13 19:51:23.870380 kubelet[3545]: I0213 19:51:23.870288 3545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-30-61" podStartSLOduration=1.870267195 podStartE2EDuration="1.870267195s" podCreationTimestamp="2025-02-13 19:51:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:51:23.852377102 +0000 UTC m=+1.583080521" watchObservedRunningTime="2025-02-13 19:51:23.870267195 +0000 UTC m=+1.600970602" Feb 13 19:51:24.015929 kubelet[3545]: I0213 19:51:24.015839 3545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-30-61" podStartSLOduration=2.015819788 podStartE2EDuration="2.015819788s" podCreationTimestamp="2025-02-13 19:51:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:51:23.872359913 +0000 UTC m=+1.603063344" watchObservedRunningTime="2025-02-13 19:51:24.015819788 +0000 UTC m=+1.746523219" Feb 13 19:51:26.411263 sudo[2339]: pam_unix(sudo:session): session closed for user root Feb 13 19:51:26.435314 sshd[2336]: pam_unix(sshd:session): session closed for user core Feb 13 19:51:26.442650 systemd[1]: sshd@6-172.31.30.61:22-139.178.89.65:53998.service: Deactivated successfully. Feb 13 19:51:26.445750 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:51:26.446113 systemd[1]: session-7.scope: Consumed 13.403s CPU time, 188.2M memory peak, 0B memory swap peak. Feb 13 19:51:26.447615 systemd-logind[1996]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:51:26.450385 systemd-logind[1996]: Removed session 7. Feb 13 19:51:35.760246 kubelet[3545]: I0213 19:51:35.760195 3545 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 19:51:35.761893 kubelet[3545]: I0213 19:51:35.761165 3545 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 19:51:35.762030 containerd[2023]: time="2025-02-13T19:51:35.760685035Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
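The "Updating runtime config through cri with podcidr" line above is a single CRI call handing the node's PodCIDR to containerd, which then keeps waiting for a CNI config ("No cni config template is specified...") until Cilium, deployed below, drops one in. A sketch of that call, with the same socket assumption as the earlier CRI example:

```go
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Mirrors the logged update: CIDR="192.168.0.0/24".
	_, err = runtimeapi.NewRuntimeServiceClient(conn).UpdateRuntimeConfig(
		context.Background(),
		&runtimeapi.UpdateRuntimeConfigRequest{
			RuntimeConfig: &runtimeapi.RuntimeConfig{
				NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
			},
		})
	fmt.Println("UpdateRuntimeConfig err:", err)
}
```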
Feb 13 19:51:36.487729 kubelet[3545]: I0213 19:51:36.487605 3545 topology_manager.go:215] "Topology Admit Handler" podUID="93bee23d-a489-4d4c-91ea-20e678c427ac" podNamespace="kube-system" podName="kube-proxy-5q4ph"
Feb 13 19:51:36.508476 systemd[1]: Created slice kubepods-besteffort-pod93bee23d_a489_4d4c_91ea_20e678c427ac.slice - libcontainer container kubepods-besteffort-pod93bee23d_a489_4d4c_91ea_20e678c427ac.slice.
Feb 13 19:51:36.527082 kubelet[3545]: I0213 19:51:36.524988 3545 topology_manager.go:215] "Topology Admit Handler" podUID="997007d6-5e4e-4700-9480-ddf89d70f8e6" podNamespace="kube-system" podName="cilium-tq9lw"
Feb 13 19:51:36.539310 kubelet[3545]: I0213 19:51:36.539253 3545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/997007d6-5e4e-4700-9480-ddf89d70f8e6-xtables-lock\") pod \"cilium-tq9lw\" (UID: \"997007d6-5e4e-4700-9480-ddf89d70f8e6\") " pod="kube-system/cilium-tq9lw"
Feb 13 19:51:36.539799 kubelet[3545]: I0213 19:51:36.539579 3545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/997007d6-5e4e-4700-9480-ddf89d70f8e6-bpf-maps\") pod \"cilium-tq9lw\" (UID: \"997007d6-5e4e-4700-9480-ddf89d70f8e6\") " pod="kube-system/cilium-tq9lw"
Feb 13 19:51:36.539799 kubelet[3545]: I0213 19:51:36.539662 3545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/997007d6-5e4e-4700-9480-ddf89d70f8e6-hostproc\") pod \"cilium-tq9lw\" (UID: \"997007d6-5e4e-4700-9480-ddf89d70f8e6\") " pod="kube-system/cilium-tq9lw"
Feb 13 19:51:36.539799 kubelet[3545]: I0213 19:51:36.539727 3545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/997007d6-5e4e-4700-9480-ddf89d70f8e6-cni-path\") pod \"cilium-tq9lw\" (UID: \"997007d6-5e4e-4700-9480-ddf89d70f8e6\") " pod="kube-system/cilium-tq9lw"
Feb 13 19:51:36.540311 kubelet[3545]: I0213 19:51:36.540049 3545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/997007d6-5e4e-4700-9480-ddf89d70f8e6-cilium-config-path\") pod \"cilium-tq9lw\" (UID: \"997007d6-5e4e-4700-9480-ddf89d70f8e6\") " pod="kube-system/cilium-tq9lw"
Feb 13 19:51:36.540311 kubelet[3545]: I0213 19:51:36.540112 3545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/997007d6-5e4e-4700-9480-ddf89d70f8e6-cilium-cgroup\") pod \"cilium-tq9lw\" (UID: \"997007d6-5e4e-4700-9480-ddf89d70f8e6\") " pod="kube-system/cilium-tq9lw"
Feb 13 19:51:36.540311 kubelet[3545]: I0213 19:51:36.540268 3545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/997007d6-5e4e-4700-9480-ddf89d70f8e6-host-proc-sys-net\") pod \"cilium-tq9lw\" (UID: \"997007d6-5e4e-4700-9480-ddf89d70f8e6\") " pod="kube-system/cilium-tq9lw"
Feb 13 19:51:36.541056 kubelet[3545]: I0213 19:51:36.540726 3545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/997007d6-5e4e-4700-9480-ddf89d70f8e6-etc-cni-netd\") pod \"cilium-tq9lw\" (UID: \"997007d6-5e4e-4700-9480-ddf89d70f8e6\") " pod="kube-system/cilium-tq9lw"
Feb 13 19:51:36.541056 kubelet[3545]: I0213 19:51:36.540985 3545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzdnk\" (UniqueName: \"kubernetes.io/projected/997007d6-5e4e-4700-9480-ddf89d70f8e6-kube-api-access-lzdnk\") pod \"cilium-tq9lw\" (UID: \"997007d6-5e4e-4700-9480-ddf89d70f8e6\") " pod="kube-system/cilium-tq9lw"
Feb 13 19:51:36.542440 kubelet[3545]: I0213 19:51:36.542185 3545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/93bee23d-a489-4d4c-91ea-20e678c427ac-xtables-lock\") pod \"kube-proxy-5q4ph\" (UID: \"93bee23d-a489-4d4c-91ea-20e678c427ac\") " pod="kube-system/kube-proxy-5q4ph"
Feb 13 19:51:36.542440 kubelet[3545]: I0213 19:51:36.542362 3545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/93bee23d-a489-4d4c-91ea-20e678c427ac-lib-modules\") pod \"kube-proxy-5q4ph\" (UID: \"93bee23d-a489-4d4c-91ea-20e678c427ac\") " pod="kube-system/kube-proxy-5q4ph"
Feb 13 19:51:36.542440 kubelet[3545]: I0213 19:51:36.542403 3545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/997007d6-5e4e-4700-9480-ddf89d70f8e6-clustermesh-secrets\") pod \"cilium-tq9lw\" (UID: \"997007d6-5e4e-4700-9480-ddf89d70f8e6\") " pod="kube-system/cilium-tq9lw"
Feb 13 19:51:36.543335 kubelet[3545]: I0213 19:51:36.543027 3545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/93bee23d-a489-4d4c-91ea-20e678c427ac-kube-proxy\") pod \"kube-proxy-5q4ph\" (UID: \"93bee23d-a489-4d4c-91ea-20e678c427ac\") " pod="kube-system/kube-proxy-5q4ph"
Feb 13 19:51:36.543335 kubelet[3545]: I0213 19:51:36.543174 3545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/997007d6-5e4e-4700-9480-ddf89d70f8e6-lib-modules\") pod \"cilium-tq9lw\" (UID: \"997007d6-5e4e-4700-9480-ddf89d70f8e6\") " pod="kube-system/cilium-tq9lw"
Feb 13 19:51:36.543335 kubelet[3545]: I0213 19:51:36.543222 3545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/997007d6-5e4e-4700-9480-ddf89d70f8e6-host-proc-sys-kernel\") pod \"cilium-tq9lw\" (UID: \"997007d6-5e4e-4700-9480-ddf89d70f8e6\") " pod="kube-system/cilium-tq9lw"
Feb 13 19:51:36.543335 kubelet[3545]: I0213 19:51:36.543285 3545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/997007d6-5e4e-4700-9480-ddf89d70f8e6-hubble-tls\") pod \"cilium-tq9lw\" (UID: \"997007d6-5e4e-4700-9480-ddf89d70f8e6\") " pod="kube-system/cilium-tq9lw"
Feb 13 19:51:36.544777 kubelet[3545]: I0213 19:51:36.543675 3545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngrvg\" (UniqueName: \"kubernetes.io/projected/93bee23d-a489-4d4c-91ea-20e678c427ac-kube-api-access-ngrvg\") pod \"kube-proxy-5q4ph\" (UID: \"93bee23d-a489-4d4c-91ea-20e678c427ac\") " pod="kube-system/kube-proxy-5q4ph"
Feb 13 19:51:36.544777 kubelet[3545]: I0213 19:51:36.543757 3545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/997007d6-5e4e-4700-9480-ddf89d70f8e6-cilium-run\") pod \"cilium-tq9lw\" (UID: \"997007d6-5e4e-4700-9480-ddf89d70f8e6\") " pod="kube-system/cilium-tq9lw"
Feb 13 19:51:36.548815 systemd[1]: Created slice kubepods-burstable-pod997007d6_5e4e_4700_9480_ddf89d70f8e6.slice - libcontainer container kubepods-burstable-pod997007d6_5e4e_4700_9480_ddf89d70f8e6.slice.
Feb 13 19:51:36.827363 containerd[2023]: time="2025-02-13T19:51:36.825487830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5q4ph,Uid:93bee23d-a489-4d4c-91ea-20e678c427ac,Namespace:kube-system,Attempt:0,}"
Feb 13 19:51:36.864919 containerd[2023]: time="2025-02-13T19:51:36.861615157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tq9lw,Uid:997007d6-5e4e-4700-9480-ddf89d70f8e6,Namespace:kube-system,Attempt:0,}"
Feb 13 19:51:36.891338 kubelet[3545]: I0213 19:51:36.888057 3545 topology_manager.go:215] "Topology Admit Handler" podUID="32a79e45-696d-4f0b-8ffd-47c888e2c44a" podNamespace="kube-system" podName="cilium-operator-599987898-sk7ws"
Feb 13 19:51:36.917490 systemd[1]: Created slice kubepods-besteffort-pod32a79e45_696d_4f0b_8ffd_47c888e2c44a.slice - libcontainer container kubepods-besteffort-pod32a79e45_696d_4f0b_8ffd_47c888e2c44a.slice.
Feb 13 19:51:36.948970 kubelet[3545]: I0213 19:51:36.948397 3545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwdvq\" (UniqueName: \"kubernetes.io/projected/32a79e45-696d-4f0b-8ffd-47c888e2c44a-kube-api-access-qwdvq\") pod \"cilium-operator-599987898-sk7ws\" (UID: \"32a79e45-696d-4f0b-8ffd-47c888e2c44a\") " pod="kube-system/cilium-operator-599987898-sk7ws"
Feb 13 19:51:36.948970 kubelet[3545]: I0213 19:51:36.948469 3545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/32a79e45-696d-4f0b-8ffd-47c888e2c44a-cilium-config-path\") pod \"cilium-operator-599987898-sk7ws\" (UID: \"32a79e45-696d-4f0b-8ffd-47c888e2c44a\") " pod="kube-system/cilium-operator-599987898-sk7ws"
Feb 13 19:51:36.963434 containerd[2023]: time="2025-02-13T19:51:36.962228211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:51:36.963434 containerd[2023]: time="2025-02-13T19:51:36.962344273Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:51:36.963434 containerd[2023]: time="2025-02-13T19:51:36.962393245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:51:36.969512 containerd[2023]: time="2025-02-13T19:51:36.967780938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:51:37.038394 containerd[2023]: time="2025-02-13T19:51:37.038202600Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:51:37.038394 containerd[2023]: time="2025-02-13T19:51:37.038317509Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:51:37.042176 containerd[2023]: time="2025-02-13T19:51:37.038356637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:51:37.042176 containerd[2023]: time="2025-02-13T19:51:37.040997476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:51:37.113292 systemd[1]: Started cri-containerd-72f2666bb3a7bdd87b3d66f1282f303e8d299ba5fa32656f17b7e66a7f7d6ffa.scope - libcontainer container 72f2666bb3a7bdd87b3d66f1282f303e8d299ba5fa32656f17b7e66a7f7d6ffa.
Feb 13 19:51:37.170414 systemd[1]: Started cri-containerd-5f69cd8f4e720b91f4ff07562316ab78504ab35fd3c623ee2b92c4ce3fe7d27a.scope - libcontainer container 5f69cd8f4e720b91f4ff07562316ab78504ab35fd3c623ee2b92c4ce3fe7d27a.
Feb 13 19:51:37.227190 containerd[2023]: time="2025-02-13T19:51:37.227134900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-sk7ws,Uid:32a79e45-696d-4f0b-8ffd-47c888e2c44a,Namespace:kube-system,Attempt:0,}"
Feb 13 19:51:37.267407 containerd[2023]: time="2025-02-13T19:51:37.267212093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tq9lw,Uid:997007d6-5e4e-4700-9480-ddf89d70f8e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f69cd8f4e720b91f4ff07562316ab78504ab35fd3c623ee2b92c4ce3fe7d27a\""
Feb 13 19:51:37.277790 containerd[2023]: time="2025-02-13T19:51:37.277731390Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Feb 13 19:51:37.298140 containerd[2023]: time="2025-02-13T19:51:37.297700237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5q4ph,Uid:93bee23d-a489-4d4c-91ea-20e678c427ac,Namespace:kube-system,Attempt:0,} returns sandbox id \"72f2666bb3a7bdd87b3d66f1282f303e8d299ba5fa32656f17b7e66a7f7d6ffa\""
Feb 13 19:51:37.313416 containerd[2023]: time="2025-02-13T19:51:37.313169188Z" level=info msg="CreateContainer within sandbox \"72f2666bb3a7bdd87b3d66f1282f303e8d299ba5fa32656f17b7e66a7f7d6ffa\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 13 19:51:37.331507 containerd[2023]: time="2025-02-13T19:51:37.331344483Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:51:37.331507 containerd[2023]: time="2025-02-13T19:51:37.331459045Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:51:37.332636 containerd[2023]: time="2025-02-13T19:51:37.332526150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:51:37.333143 containerd[2023]: time="2025-02-13T19:51:37.332980733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:51:37.365198 containerd[2023]: time="2025-02-13T19:51:37.364376207Z" level=info msg="CreateContainer within sandbox \"72f2666bb3a7bdd87b3d66f1282f303e8d299ba5fa32656f17b7e66a7f7d6ffa\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ff46598e7838b0039c7fc3f75340c4c728cbd727b6b55ee3fb8e1d1931005725\""
Feb 13 19:51:37.366568 containerd[2023]: time="2025-02-13T19:51:37.366492985Z" level=info msg="StartContainer for \"ff46598e7838b0039c7fc3f75340c4c728cbd727b6b55ee3fb8e1d1931005725\""
Feb 13 19:51:37.369431 systemd[1]: Started cri-containerd-64c594c12a529b9fb313b07a2450a2966b5633a629b92a03d5d0b6f10b69a7b7.scope - libcontainer container 64c594c12a529b9fb313b07a2450a2966b5633a629b92a03d5d0b6f10b69a7b7.
Feb 13 19:51:37.447288 systemd[1]: Started cri-containerd-ff46598e7838b0039c7fc3f75340c4c728cbd727b6b55ee3fb8e1d1931005725.scope - libcontainer container ff46598e7838b0039c7fc3f75340c4c728cbd727b6b55ee3fb8e1d1931005725.
Feb 13 19:51:37.479250 containerd[2023]: time="2025-02-13T19:51:37.478712888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-sk7ws,Uid:32a79e45-696d-4f0b-8ffd-47c888e2c44a,Namespace:kube-system,Attempt:0,} returns sandbox id \"64c594c12a529b9fb313b07a2450a2966b5633a629b92a03d5d0b6f10b69a7b7\""
Feb 13 19:51:37.531035 containerd[2023]: time="2025-02-13T19:51:37.530750230Z" level=info msg="StartContainer for \"ff46598e7838b0039c7fc3f75340c4c728cbd727b6b55ee3fb8e1d1931005725\" returns successfully"
Feb 13 19:51:37.833089 kubelet[3545]: I0213 19:51:37.832816 3545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5q4ph" podStartSLOduration=1.832795271 podStartE2EDuration="1.832795271s" podCreationTimestamp="2025-02-13 19:51:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:51:37.832438753 +0000 UTC m=+15.563142172" watchObservedRunningTime="2025-02-13 19:51:37.832795271 +0000 UTC m=+15.563498678"
Feb 13 19:51:44.156960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1751856299.mount: Deactivated successfully.
Feb 13 19:51:46.604131 containerd[2023]: time="2025-02-13T19:51:46.604061950Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:51:46.606682 containerd[2023]: time="2025-02-13T19:51:46.606617067Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Feb 13 19:51:46.608370 containerd[2023]: time="2025-02-13T19:51:46.608134229Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:51:46.611602 containerd[2023]: time="2025-02-13T19:51:46.611404699Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 9.333088329s"
Feb 13 19:51:46.611602 containerd[2023]: time="2025-02-13T19:51:46.611468691Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Feb 13 19:51:46.616683 containerd[2023]: time="2025-02-13T19:51:46.615989827Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Feb 13 19:51:46.619976 containerd[2023]: time="2025-02-13T19:51:46.619446474Z" level=info msg="CreateContainer within sandbox \"5f69cd8f4e720b91f4ff07562316ab78504ab35fd3c623ee2b92c4ce3fe7d27a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 19:51:46.640359 containerd[2023]: time="2025-02-13T19:51:46.640299384Z" level=info msg="CreateContainer within sandbox \"5f69cd8f4e720b91f4ff07562316ab78504ab35fd3c623ee2b92c4ce3fe7d27a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b6d0fe4408ae14530bcc5e048f93ee09737fabe1dd1fc215ad2f46bffe87f2ff\""
Feb 13 19:51:46.643641 containerd[2023]: time="2025-02-13T19:51:46.643411555Z" level=info msg="StartContainer for \"b6d0fe4408ae14530bcc5e048f93ee09737fabe1dd1fc215ad2f46bffe87f2ff\""
Feb 13 19:51:46.646684 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount854613424.mount: Deactivated successfully.
Feb 13 19:51:46.728412 systemd[1]: Started cri-containerd-b6d0fe4408ae14530bcc5e048f93ee09737fabe1dd1fc215ad2f46bffe87f2ff.scope - libcontainer container b6d0fe4408ae14530bcc5e048f93ee09737fabe1dd1fc215ad2f46bffe87f2ff.
Feb 13 19:51:46.783305 containerd[2023]: time="2025-02-13T19:51:46.782202609Z" level=info msg="StartContainer for \"b6d0fe4408ae14530bcc5e048f93ee09737fabe1dd1fc215ad2f46bffe87f2ff\" returns successfully"
Feb 13 19:51:46.811351 systemd[1]: cri-containerd-b6d0fe4408ae14530bcc5e048f93ee09737fabe1dd1fc215ad2f46bffe87f2ff.scope: Deactivated successfully.
Feb 13 19:51:47.632432 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6d0fe4408ae14530bcc5e048f93ee09737fabe1dd1fc215ad2f46bffe87f2ff-rootfs.mount: Deactivated successfully.
Feb 13 19:51:48.201738 containerd[2023]: time="2025-02-13T19:51:48.201488463Z" level=info msg="shim disconnected" id=b6d0fe4408ae14530bcc5e048f93ee09737fabe1dd1fc215ad2f46bffe87f2ff namespace=k8s.io
Feb 13 19:51:48.201738 containerd[2023]: time="2025-02-13T19:51:48.201686502Z" level=warning msg="cleaning up after shim disconnected" id=b6d0fe4408ae14530bcc5e048f93ee09737fabe1dd1fc215ad2f46bffe87f2ff namespace=k8s.io
Feb 13 19:51:48.201738 containerd[2023]: time="2025-02-13T19:51:48.201708749Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:51:48.868195 containerd[2023]: time="2025-02-13T19:51:48.868067098Z" level=info msg="CreateContainer within sandbox \"5f69cd8f4e720b91f4ff07562316ab78504ab35fd3c623ee2b92c4ce3fe7d27a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 19:51:48.928314 containerd[2023]: time="2025-02-13T19:51:48.928205225Z" level=info msg="CreateContainer within sandbox \"5f69cd8f4e720b91f4ff07562316ab78504ab35fd3c623ee2b92c4ce3fe7d27a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ba8e068aaed4318a9d0e0ec160b8c7834fc1dcd0c4aa7b74589d336eda6c25af\""
Feb 13 19:51:48.929438 containerd[2023]: time="2025-02-13T19:51:48.929379772Z" level=info msg="StartContainer for \"ba8e068aaed4318a9d0e0ec160b8c7834fc1dcd0c4aa7b74589d336eda6c25af\""
Feb 13 19:51:48.992293 systemd[1]: Started cri-containerd-ba8e068aaed4318a9d0e0ec160b8c7834fc1dcd0c4aa7b74589d336eda6c25af.scope - libcontainer container ba8e068aaed4318a9d0e0ec160b8c7834fc1dcd0c4aa7b74589d336eda6c25af.
Feb 13 19:51:49.054991 containerd[2023]: time="2025-02-13T19:51:49.053407840Z" level=info msg="StartContainer for \"ba8e068aaed4318a9d0e0ec160b8c7834fc1dcd0c4aa7b74589d336eda6c25af\" returns successfully"
Feb 13 19:51:49.077802 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 19:51:49.078639 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:51:49.078763 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:51:49.089040 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:51:49.090293 systemd[1]: cri-containerd-ba8e068aaed4318a9d0e0ec160b8c7834fc1dcd0c4aa7b74589d336eda6c25af.scope: Deactivated successfully.
Feb 13 19:51:49.138449 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:51:49.161789 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba8e068aaed4318a9d0e0ec160b8c7834fc1dcd0c4aa7b74589d336eda6c25af-rootfs.mount: Deactivated successfully.
Feb 13 19:51:49.171452 containerd[2023]: time="2025-02-13T19:51:49.171322104Z" level=info msg="shim disconnected" id=ba8e068aaed4318a9d0e0ec160b8c7834fc1dcd0c4aa7b74589d336eda6c25af namespace=k8s.io
Feb 13 19:51:49.171715 containerd[2023]: time="2025-02-13T19:51:49.171456931Z" level=warning msg="cleaning up after shim disconnected" id=ba8e068aaed4318a9d0e0ec160b8c7834fc1dcd0c4aa7b74589d336eda6c25af namespace=k8s.io
Feb 13 19:51:49.171715 containerd[2023]: time="2025-02-13T19:51:49.171479839Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:51:49.876693 containerd[2023]: time="2025-02-13T19:51:49.876506033Z" level=info msg="CreateContainer within sandbox \"5f69cd8f4e720b91f4ff07562316ab78504ab35fd3c623ee2b92c4ce3fe7d27a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 19:51:49.944774 containerd[2023]: time="2025-02-13T19:51:49.943300118Z" level=info msg="CreateContainer within sandbox \"5f69cd8f4e720b91f4ff07562316ab78504ab35fd3c623ee2b92c4ce3fe7d27a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"61c91fd40d19fe52e349f6fd69bcdf9ddc4285cbeb6eca48f77723d4b6bae037\""
Feb 13 19:51:49.948061 containerd[2023]: time="2025-02-13T19:51:49.947988437Z" level=info msg="StartContainer for \"61c91fd40d19fe52e349f6fd69bcdf9ddc4285cbeb6eca48f77723d4b6bae037\""
Feb 13 19:51:50.053906 systemd[1]: Started cri-containerd-61c91fd40d19fe52e349f6fd69bcdf9ddc4285cbeb6eca48f77723d4b6bae037.scope - libcontainer container 61c91fd40d19fe52e349f6fd69bcdf9ddc4285cbeb6eca48f77723d4b6bae037.
Feb 13 19:51:50.155585 containerd[2023]: time="2025-02-13T19:51:50.153398120Z" level=info msg="StartContainer for \"61c91fd40d19fe52e349f6fd69bcdf9ddc4285cbeb6eca48f77723d4b6bae037\" returns successfully"
Feb 13 19:51:50.164284 systemd[1]: cri-containerd-61c91fd40d19fe52e349f6fd69bcdf9ddc4285cbeb6eca48f77723d4b6bae037.scope: Deactivated successfully.
Feb 13 19:51:50.308518 containerd[2023]: time="2025-02-13T19:51:50.308435789Z" level=info msg="shim disconnected" id=61c91fd40d19fe52e349f6fd69bcdf9ddc4285cbeb6eca48f77723d4b6bae037 namespace=k8s.io
Feb 13 19:51:50.311594 containerd[2023]: time="2025-02-13T19:51:50.311522483Z" level=warning msg="cleaning up after shim disconnected" id=61c91fd40d19fe52e349f6fd69bcdf9ddc4285cbeb6eca48f77723d4b6bae037 namespace=k8s.io
Feb 13 19:51:50.311594 containerd[2023]: time="2025-02-13T19:51:50.311568874Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:51:50.476750 containerd[2023]: time="2025-02-13T19:51:50.476174305Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:51:50.478611 containerd[2023]: time="2025-02-13T19:51:50.478535297Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Feb 13 19:51:50.481239 containerd[2023]: time="2025-02-13T19:51:50.481152977Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:51:50.485134 containerd[2023]: time="2025-02-13T19:51:50.484626373Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.867964246s"
Feb 13 19:51:50.485134 containerd[2023]: time="2025-02-13T19:51:50.484772030Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Feb 13 19:51:50.490518 containerd[2023]: time="2025-02-13T19:51:50.490441515Z" level=info msg="CreateContainer within sandbox \"64c594c12a529b9fb313b07a2450a2966b5633a629b92a03d5d0b6f10b69a7b7\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb 13 19:51:50.517394 containerd[2023]: time="2025-02-13T19:51:50.517257757Z" level=info msg="CreateContainer within sandbox \"64c594c12a529b9fb313b07a2450a2966b5633a629b92a03d5d0b6f10b69a7b7\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"28fdd83dbc5cbda0a6d9418cc835b08871719ab80ea683cfa291c7efd6daae78\""
Feb 13 19:51:50.518702 containerd[2023]: time="2025-02-13T19:51:50.518375071Z" level=info msg="StartContainer for \"28fdd83dbc5cbda0a6d9418cc835b08871719ab80ea683cfa291c7efd6daae78\""
Feb 13 19:51:50.572260 systemd[1]: Started cri-containerd-28fdd83dbc5cbda0a6d9418cc835b08871719ab80ea683cfa291c7efd6daae78.scope - libcontainer container 28fdd83dbc5cbda0a6d9418cc835b08871719ab80ea683cfa291c7efd6daae78.
Feb 13 19:51:50.624784 containerd[2023]: time="2025-02-13T19:51:50.624690783Z" level=info msg="StartContainer for \"28fdd83dbc5cbda0a6d9418cc835b08871719ab80ea683cfa291c7efd6daae78\" returns successfully"
Feb 13 19:51:50.893564 containerd[2023]: time="2025-02-13T19:51:50.893460235Z" level=info msg="CreateContainer within sandbox \"5f69cd8f4e720b91f4ff07562316ab78504ab35fd3c623ee2b92c4ce3fe7d27a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 19:51:50.905931 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-61c91fd40d19fe52e349f6fd69bcdf9ddc4285cbeb6eca48f77723d4b6bae037-rootfs.mount: Deactivated successfully.
Feb 13 19:51:50.946686 containerd[2023]: time="2025-02-13T19:51:50.946321921Z" level=info msg="CreateContainer within sandbox \"5f69cd8f4e720b91f4ff07562316ab78504ab35fd3c623ee2b92c4ce3fe7d27a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ca85a615ab7eb5bba51f3dbb485fce6721852fd8f5f3c5af18e39efaffb48124\""
Feb 13 19:51:50.952622 containerd[2023]: time="2025-02-13T19:51:50.952368575Z" level=info msg="StartContainer for \"ca85a615ab7eb5bba51f3dbb485fce6721852fd8f5f3c5af18e39efaffb48124\""
Feb 13 19:51:51.071417 systemd[1]: Started cri-containerd-ca85a615ab7eb5bba51f3dbb485fce6721852fd8f5f3c5af18e39efaffb48124.scope - libcontainer container ca85a615ab7eb5bba51f3dbb485fce6721852fd8f5f3c5af18e39efaffb48124.
Feb 13 19:51:51.089562 kubelet[3545]: I0213 19:51:51.086240 3545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-sk7ws" podStartSLOduration=2.08360981 podStartE2EDuration="15.086198773s" podCreationTimestamp="2025-02-13 19:51:36 +0000 UTC" firstStartedPulling="2025-02-13 19:51:37.483712678 +0000 UTC m=+15.214416122" lastFinishedPulling="2025-02-13 19:51:50.48630169 +0000 UTC m=+28.217005085" observedRunningTime="2025-02-13 19:51:50.943295208 +0000 UTC m=+28.673998651" watchObservedRunningTime="2025-02-13 19:51:51.086198773 +0000 UTC m=+28.816902180"
Feb 13 19:51:51.216551 containerd[2023]: time="2025-02-13T19:51:51.215070830Z" level=info msg="StartContainer for \"ca85a615ab7eb5bba51f3dbb485fce6721852fd8f5f3c5af18e39efaffb48124\" returns successfully"
Feb 13 19:51:51.217776 systemd[1]: cri-containerd-ca85a615ab7eb5bba51f3dbb485fce6721852fd8f5f3c5af18e39efaffb48124.scope: Deactivated successfully.
Feb 13 19:51:51.323724 containerd[2023]: time="2025-02-13T19:51:51.323370726Z" level=info msg="shim disconnected" id=ca85a615ab7eb5bba51f3dbb485fce6721852fd8f5f3c5af18e39efaffb48124 namespace=k8s.io
Feb 13 19:51:51.323724 containerd[2023]: time="2025-02-13T19:51:51.323449161Z" level=warning msg="cleaning up after shim disconnected" id=ca85a615ab7eb5bba51f3dbb485fce6721852fd8f5f3c5af18e39efaffb48124 namespace=k8s.io
Feb 13 19:51:51.323724 containerd[2023]: time="2025-02-13T19:51:51.323471144Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:51:51.901079 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca85a615ab7eb5bba51f3dbb485fce6721852fd8f5f3c5af18e39efaffb48124-rootfs.mount: Deactivated successfully.
Feb 13 19:51:51.911517 containerd[2023]: time="2025-02-13T19:51:51.911414044Z" level=info msg="CreateContainer within sandbox \"5f69cd8f4e720b91f4ff07562316ab78504ab35fd3c623ee2b92c4ce3fe7d27a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 19:51:51.958704 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2774278572.mount: Deactivated successfully.
Feb 13 19:51:51.988597 containerd[2023]: time="2025-02-13T19:51:51.988437373Z" level=info msg="CreateContainer within sandbox \"5f69cd8f4e720b91f4ff07562316ab78504ab35fd3c623ee2b92c4ce3fe7d27a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"468b6e1dc23f1d3e07f4fd50301281416bd2ae024f08133e47a8a46d70c0eff8\""
Feb 13 19:51:51.993543 containerd[2023]: time="2025-02-13T19:51:51.990690407Z" level=info msg="StartContainer for \"468b6e1dc23f1d3e07f4fd50301281416bd2ae024f08133e47a8a46d70c0eff8\""
Feb 13 19:51:52.124327 systemd[1]: Started cri-containerd-468b6e1dc23f1d3e07f4fd50301281416bd2ae024f08133e47a8a46d70c0eff8.scope - libcontainer container 468b6e1dc23f1d3e07f4fd50301281416bd2ae024f08133e47a8a46d70c0eff8.
Feb 13 19:51:52.288140 containerd[2023]: time="2025-02-13T19:51:52.287906371Z" level=info msg="StartContainer for \"468b6e1dc23f1d3e07f4fd50301281416bd2ae024f08133e47a8a46d70c0eff8\" returns successfully"
Feb 13 19:51:52.686016 kubelet[3545]: I0213 19:51:52.685886 3545 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Feb 13 19:51:52.760077 kubelet[3545]: I0213 19:51:52.760006 3545 topology_manager.go:215] "Topology Admit Handler" podUID="02db176c-94ac-4bd7-9ded-2526ab9b67fc" podNamespace="kube-system" podName="coredns-7db6d8ff4d-zbx54"
Feb 13 19:51:52.778768 systemd[1]: Created slice kubepods-burstable-pod02db176c_94ac_4bd7_9ded_2526ab9b67fc.slice - libcontainer container kubepods-burstable-pod02db176c_94ac_4bd7_9ded_2526ab9b67fc.slice.
Feb 13 19:51:52.783573 kubelet[3545]: I0213 19:51:52.783503 3545 topology_manager.go:215] "Topology Admit Handler" podUID="71ba84da-414f-4c3b-8969-436637ea143d" podNamespace="kube-system" podName="coredns-7db6d8ff4d-w9xtt"
Feb 13 19:51:52.809110 systemd[1]: Created slice kubepods-burstable-pod71ba84da_414f_4c3b_8969_436637ea143d.slice - libcontainer container kubepods-burstable-pod71ba84da_414f_4c3b_8969_436637ea143d.slice.
Feb 13 19:51:52.863723 kubelet[3545]: I0213 19:51:52.863639 3545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02db176c-94ac-4bd7-9ded-2526ab9b67fc-config-volume\") pod \"coredns-7db6d8ff4d-zbx54\" (UID: \"02db176c-94ac-4bd7-9ded-2526ab9b67fc\") " pod="kube-system/coredns-7db6d8ff4d-zbx54"
Feb 13 19:51:52.868133 kubelet[3545]: I0213 19:51:52.868066 3545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddh89\" (UniqueName: \"kubernetes.io/projected/71ba84da-414f-4c3b-8969-436637ea143d-kube-api-access-ddh89\") pod \"coredns-7db6d8ff4d-w9xtt\" (UID: \"71ba84da-414f-4c3b-8969-436637ea143d\") " pod="kube-system/coredns-7db6d8ff4d-w9xtt"
Feb 13 19:51:52.868318 kubelet[3545]: I0213 19:51:52.868218 3545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v2db\" (UniqueName: \"kubernetes.io/projected/02db176c-94ac-4bd7-9ded-2526ab9b67fc-kube-api-access-4v2db\") pod \"coredns-7db6d8ff4d-zbx54\" (UID: \"02db176c-94ac-4bd7-9ded-2526ab9b67fc\") " pod="kube-system/coredns-7db6d8ff4d-zbx54"
Feb 13 19:51:52.868318 kubelet[3545]: I0213 19:51:52.868352 3545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/71ba84da-414f-4c3b-8969-436637ea143d-config-volume\") pod \"coredns-7db6d8ff4d-w9xtt\" (UID: \"71ba84da-414f-4c3b-8969-436637ea143d\") " pod="kube-system/coredns-7db6d8ff4d-w9xtt"
Feb 13 19:51:53.094233 containerd[2023]: time="2025-02-13T19:51:53.090614486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zbx54,Uid:02db176c-94ac-4bd7-9ded-2526ab9b67fc,Namespace:kube-system,Attempt:0,}"
Feb 13 19:51:53.122323 containerd[2023]: time="2025-02-13T19:51:53.120448928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-w9xtt,Uid:71ba84da-414f-4c3b-8969-436637ea143d,Namespace:kube-system,Attempt:0,}"
Feb 13 19:51:55.808659 systemd-networkd[1933]: cilium_host: Link UP
Feb 13 19:51:55.812559 (udev-worker)[4346]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:51:55.814911 systemd-networkd[1933]: cilium_net: Link UP
Feb 13 19:51:55.815706 (udev-worker)[4344]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:51:55.816323 systemd-networkd[1933]: cilium_net: Gained carrier
Feb 13 19:51:55.816740 systemd-networkd[1933]: cilium_host: Gained carrier
Feb 13 19:51:55.829304 systemd-networkd[1933]: cilium_net: Gained IPv6LL
Feb 13 19:51:56.022890 (udev-worker)[4391]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:51:56.049031 systemd-networkd[1933]: cilium_vxlan: Link UP
Feb 13 19:51:56.049239 systemd-networkd[1933]: cilium_vxlan: Gained carrier
Feb 13 19:51:56.601997 kernel: NET: Registered PF_ALG protocol family
Feb 13 19:51:56.656205 systemd-networkd[1933]: cilium_host: Gained IPv6LL
Feb 13 19:51:57.436886 systemd[1]: Started sshd@7-172.31.30.61:22-139.178.89.65:52174.service - OpenSSH per-connection server daemon (139.178.89.65:52174).
Feb 13 19:51:57.621995 sshd[4557]: Accepted publickey for core from 139.178.89.65 port 52174 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4
Feb 13 19:51:57.624670 sshd[4557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:51:57.635423 systemd-logind[1996]: New session 8 of user core.
Feb 13 19:51:57.645423 systemd[1]: Started session-8.scope - Session 8 of User core.
Feb 13 19:51:57.990145 sshd[4557]: pam_unix(sshd:session): session closed for user core
Feb 13 19:51:57.998780 systemd[1]: sshd@7-172.31.30.61:22-139.178.89.65:52174.service: Deactivated successfully.
Feb 13 19:51:58.000164 systemd-networkd[1933]: cilium_vxlan: Gained IPv6LL
Feb 13 19:51:58.008147 systemd[1]: session-8.scope: Deactivated successfully.
Feb 13 19:51:58.015665 systemd-logind[1996]: Session 8 logged out. Waiting for processes to exit.
Feb 13 19:51:58.022441 systemd-logind[1996]: Removed session 8.
Feb 13 19:51:58.339233 systemd-networkd[1933]: lxc_health: Link UP
Feb 13 19:51:58.346326 systemd-networkd[1933]: lxc_health: Gained carrier
Feb 13 19:51:58.761369 systemd-networkd[1933]: lxca630486d50e3: Link UP
Feb 13 19:51:58.764993 kernel: eth0: renamed from tmp48225
Feb 13 19:51:58.772412 systemd-networkd[1933]: lxca630486d50e3: Gained carrier
Feb 13 19:51:58.841659 systemd-networkd[1933]: lxcf6a19a201248: Link UP
Feb 13 19:51:58.857018 kernel: eth0: renamed from tmp6d100
Feb 13 19:51:58.873283 systemd-networkd[1933]: lxcf6a19a201248: Gained carrier
Feb 13 19:51:58.944575 kubelet[3545]: I0213 19:51:58.943190 3545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-tq9lw" podStartSLOduration=13.604597432 podStartE2EDuration="22.943168998s" podCreationTimestamp="2025-02-13 19:51:36 +0000 UTC" firstStartedPulling="2025-02-13 19:51:37.275387278 +0000 UTC m=+15.006090685" lastFinishedPulling="2025-02-13 19:51:46.61395876 +0000 UTC m=+24.344662251" observedRunningTime="2025-02-13 19:51:52.991739169 +0000 UTC m=+30.722442612" watchObservedRunningTime="2025-02-13 19:51:58.943168998 +0000 UTC m=+36.673872405"
Feb 13 19:51:59.729234 systemd-networkd[1933]: lxc_health: Gained IPv6LL
Feb 13 19:52:00.176189 systemd-networkd[1933]: lxca630486d50e3: Gained IPv6LL
Feb 13 19:52:00.178743 systemd-networkd[1933]: lxcf6a19a201248: Gained IPv6LL
Feb 13 19:52:02.644708 ntpd[1991]: Listen normally on 8 cilium_host 192.168.0.148:123
Feb 13 19:52:02.645524 ntpd[1991]: 13 Feb 19:52:02 ntpd[1991]: Listen normally on 8 cilium_host 192.168.0.148:123
Feb 13 19:52:02.645524 ntpd[1991]: 13 Feb 19:52:02 ntpd[1991]: Listen normally on 9 cilium_net [fe80::588f:1bff:fe8c:9877%4]:123
Feb 13 19:52:02.645524 ntpd[1991]: 13 Feb 19:52:02 ntpd[1991]: Listen normally on 10 cilium_host [fe80::3cfd:9aff:fe6e:ec3e%5]:123
Feb 13 19:52:02.645524 ntpd[1991]: 13 Feb 19:52:02 ntpd[1991]: Listen normally on 11 cilium_vxlan [fe80::5cad:aff:fec8:8918%6]:123
Feb 13 19:52:02.645524 ntpd[1991]: 13 Feb 19:52:02 ntpd[1991]: Listen normally on 12 lxc_health [fe80::74e6:a6ff:fe88:3c3a%8]:123
Feb 13 19:52:02.645524 ntpd[1991]: 13 Feb 19:52:02 ntpd[1991]: Listen normally on 13 lxca630486d50e3 [fe80::76:bfff:fe6d:c540%10]:123
Feb 13 19:52:02.645524 ntpd[1991]: 13 Feb 19:52:02 ntpd[1991]: Listen normally on 14 lxcf6a19a201248 [fe80::84a4:8ff:fed3:8b88%12]:123
Feb 13 19:52:02.644932 ntpd[1991]: Listen normally on 9 cilium_net [fe80::588f:1bff:fe8c:9877%4]:123
Feb 13 19:52:02.645137 ntpd[1991]: Listen normally on 10 cilium_host [fe80::3cfd:9aff:fe6e:ec3e%5]:123
Feb 13 19:52:02.645251 ntpd[1991]: Listen normally on 11 cilium_vxlan [fe80::5cad:aff:fec8:8918%6]:123
Feb 13 19:52:02.645336 ntpd[1991]: Listen normally on 12 lxc_health [fe80::74e6:a6ff:fe88:3c3a%8]:123
Feb 13 19:52:02.645413 ntpd[1991]: Listen normally on 13 lxca630486d50e3 [fe80::76:bfff:fe6d:c540%10]:123
Feb 13 19:52:02.645487 ntpd[1991]: Listen normally on 14 lxcf6a19a201248 [fe80::84a4:8ff:fed3:8b88%12]:123
Feb 13 19:52:03.044754 systemd[1]: Started sshd@8-172.31.30.61:22-139.178.89.65:52190.service - OpenSSH per-connection server daemon (139.178.89.65:52190).
Feb 13 19:52:03.256229 sshd[4760]: Accepted publickey for core from 139.178.89.65 port 52190 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4
Feb 13 19:52:03.260364 sshd[4760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:52:03.274079 systemd-logind[1996]: New session 9 of user core.
Feb 13 19:52:03.288384 systemd[1]: Started session-9.scope - Session 9 of User core.
Feb 13 19:52:03.633885 sshd[4760]: pam_unix(sshd:session): session closed for user core
Feb 13 19:52:03.644123 systemd[1]: sshd@8-172.31.30.61:22-139.178.89.65:52190.service: Deactivated successfully.
Feb 13 19:52:03.652736 systemd[1]: session-9.scope: Deactivated successfully.
Feb 13 19:52:03.660634 systemd-logind[1996]: Session 9 logged out. Waiting for processes to exit.
Feb 13 19:52:03.664594 systemd-logind[1996]: Removed session 9.
Feb 13 19:52:08.686626 systemd[1]: Started sshd@9-172.31.30.61:22-139.178.89.65:37554.service - OpenSSH per-connection server daemon (139.178.89.65:37554).
Feb 13 19:52:08.889465 sshd[4779]: Accepted publickey for core from 139.178.89.65 port 37554 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4
Feb 13 19:52:08.896052 sshd[4779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:52:08.914253 systemd-logind[1996]: New session 10 of user core.
Feb 13 19:52:08.924364 systemd[1]: Started session-10.scope - Session 10 of User core.
Feb 13 19:52:09.314604 sshd[4779]: pam_unix(sshd:session): session closed for user core
Feb 13 19:52:09.328481 systemd[1]: sshd@9-172.31.30.61:22-139.178.89.65:37554.service: Deactivated successfully.
Feb 13 19:52:09.337013 systemd[1]: session-10.scope: Deactivated successfully.
Feb 13 19:52:09.345047 systemd-logind[1996]: Session 10 logged out. Waiting for processes to exit.
Feb 13 19:52:09.353119 systemd-logind[1996]: Removed session 10.
Feb 13 19:52:09.663465 containerd[2023]: time="2025-02-13T19:52:09.661569755Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:52:09.663465 containerd[2023]: time="2025-02-13T19:52:09.661671890Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:52:09.663465 containerd[2023]: time="2025-02-13T19:52:09.661720310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:52:09.663465 containerd[2023]: time="2025-02-13T19:52:09.661917977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:52:09.688746 containerd[2023]: time="2025-02-13T19:52:09.684518888Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:52:09.690502 containerd[2023]: time="2025-02-13T19:52:09.690095530Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:52:09.690502 containerd[2023]: time="2025-02-13T19:52:09.690294842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:52:09.693016 containerd[2023]: time="2025-02-13T19:52:09.690757049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:52:09.762675 systemd[1]: run-containerd-runc-k8s.io-482259277f32999281d6b33ab3d85de47068fba14eb1ed2ea3cb0d61ae6fc3c9-runc.tRc6nd.mount: Deactivated successfully.
Feb 13 19:52:09.796306 systemd[1]: Started cri-containerd-482259277f32999281d6b33ab3d85de47068fba14eb1ed2ea3cb0d61ae6fc3c9.scope - libcontainer container 482259277f32999281d6b33ab3d85de47068fba14eb1ed2ea3cb0d61ae6fc3c9.
Feb 13 19:52:09.832312 systemd[1]: Started cri-containerd-6d10010c03f7b3e5ab3d92e474e02143f4b42dca6619a2e4a604b7024e69d11c.scope - libcontainer container 6d10010c03f7b3e5ab3d92e474e02143f4b42dca6619a2e4a604b7024e69d11c.
Feb 13 19:52:10.020746 containerd[2023]: time="2025-02-13T19:52:10.018677109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zbx54,Uid:02db176c-94ac-4bd7-9ded-2526ab9b67fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"482259277f32999281d6b33ab3d85de47068fba14eb1ed2ea3cb0d61ae6fc3c9\""
Feb 13 19:52:10.032419 containerd[2023]: time="2025-02-13T19:52:10.032343924Z" level=info msg="CreateContainer within sandbox \"482259277f32999281d6b33ab3d85de47068fba14eb1ed2ea3cb0d61ae6fc3c9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 19:52:10.037715 containerd[2023]: time="2025-02-13T19:52:10.037611052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-w9xtt,Uid:71ba84da-414f-4c3b-8969-436637ea143d,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d10010c03f7b3e5ab3d92e474e02143f4b42dca6619a2e4a604b7024e69d11c\""
Feb 13 19:52:10.054347 containerd[2023]: time="2025-02-13T19:52:10.054225435Z" level=info msg="CreateContainer within sandbox \"6d10010c03f7b3e5ab3d92e474e02143f4b42dca6619a2e4a604b7024e69d11c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 19:52:10.095619 containerd[2023]: time="2025-02-13T19:52:10.095456320Z" level=info msg="CreateContainer within sandbox \"482259277f32999281d6b33ab3d85de47068fba14eb1ed2ea3cb0d61ae6fc3c9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e400bd296232845193e1679bc95b25174f39fd89695c1557fd36474d37895970\""
Feb 13 19:52:10.097804 containerd[2023]: time="2025-02-13T19:52:10.097303612Z" level=info msg="StartContainer for \"e400bd296232845193e1679bc95b25174f39fd89695c1557fd36474d37895970\""
Feb 13 19:52:10.108781 containerd[2023]: time="2025-02-13T19:52:10.108697090Z" level=info msg="CreateContainer within sandbox \"6d10010c03f7b3e5ab3d92e474e02143f4b42dca6619a2e4a604b7024e69d11c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d76a361c9c8cff41d3758f7044158ad9a22f341d9bac9bb1efcb9467f861858e\""
Feb 13 19:52:10.114920 containerd[2023]: time="2025-02-13T19:52:10.114777564Z" level=info msg="StartContainer for \"d76a361c9c8cff41d3758f7044158ad9a22f341d9bac9bb1efcb9467f861858e\""
Feb 13 19:52:10.201519 systemd[1]: Started cri-containerd-d76a361c9c8cff41d3758f7044158ad9a22f341d9bac9bb1efcb9467f861858e.scope - libcontainer container d76a361c9c8cff41d3758f7044158ad9a22f341d9bac9bb1efcb9467f861858e.
Feb 13 19:52:10.253311 systemd[1]: Started cri-containerd-e400bd296232845193e1679bc95b25174f39fd89695c1557fd36474d37895970.scope - libcontainer container e400bd296232845193e1679bc95b25174f39fd89695c1557fd36474d37895970.
Feb 13 19:52:10.342280 containerd[2023]: time="2025-02-13T19:52:10.341888224Z" level=info msg="StartContainer for \"d76a361c9c8cff41d3758f7044158ad9a22f341d9bac9bb1efcb9467f861858e\" returns successfully"
Feb 13 19:52:10.369407 containerd[2023]: time="2025-02-13T19:52:10.369123223Z" level=info msg="StartContainer for \"e400bd296232845193e1679bc95b25174f39fd89695c1557fd36474d37895970\" returns successfully"
Feb 13 19:52:11.033330 kubelet[3545]: I0213 19:52:11.031644 3545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-zbx54" podStartSLOduration=35.031608641 podStartE2EDuration="35.031608641s" podCreationTimestamp="2025-02-13 19:51:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:52:11.031030072 +0000 UTC m=+48.761733503" watchObservedRunningTime="2025-02-13 19:52:11.031608641 +0000 UTC m=+48.762312048"
Feb 13 19:52:11.084132 kubelet[3545]: I0213 19:52:11.083191 3545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-w9xtt" podStartSLOduration=35.083166307 podStartE2EDuration="35.083166307s" podCreationTimestamp="2025-02-13 19:51:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:52:11.079415285 +0000 UTC m=+48.810118704" watchObservedRunningTime="2025-02-13 19:52:11.083166307 +0000 UTC m=+48.813869714"
Feb 13 19:52:14.362891 systemd[1]: Started sshd@10-172.31.30.61:22-139.178.89.65:37570.service - OpenSSH per-connection server daemon (139.178.89.65:37570).
Feb 13 19:52:14.550018 sshd[4963]: Accepted publickey for core from 139.178.89.65 port 37570 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4
Feb 13 19:52:14.553768 sshd[4963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:52:14.563640 systemd-logind[1996]: New session 11 of user core.
Feb 13 19:52:14.573331 systemd[1]: Started session-11.scope - Session 11 of User core.
Feb 13 19:52:14.854282 sshd[4963]: pam_unix(sshd:session): session closed for user core
Feb 13 19:52:14.863909 systemd[1]: sshd@10-172.31.30.61:22-139.178.89.65:37570.service: Deactivated successfully.
Feb 13 19:52:14.863920 systemd-logind[1996]: Session 11 logged out. Waiting for processes to exit.
Feb 13 19:52:14.869860 systemd[1]: session-11.scope: Deactivated successfully.
Feb 13 19:52:14.891361 systemd-logind[1996]: Removed session 11.
Feb 13 19:52:14.901610 systemd[1]: Started sshd@11-172.31.30.61:22-139.178.89.65:45446.service - OpenSSH per-connection server daemon (139.178.89.65:45446).
Feb 13 19:52:15.095697 sshd[4977]: Accepted publickey for core from 139.178.89.65 port 45446 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4
Feb 13 19:52:15.099539 sshd[4977]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:52:15.114056 systemd-logind[1996]: New session 12 of user core.
Feb 13 19:52:15.122548 systemd[1]: Started session-12.scope - Session 12 of User core.
Feb 13 19:52:15.483778 sshd[4977]: pam_unix(sshd:session): session closed for user core
Feb 13 19:52:15.499614 systemd[1]: sshd@11-172.31.30.61:22-139.178.89.65:45446.service: Deactivated successfully.
Feb 13 19:52:15.511363 systemd[1]: session-12.scope: Deactivated successfully.
Feb 13 19:52:15.538101 systemd-logind[1996]: Session 12 logged out. Waiting for processes to exit.
Feb 13 19:52:15.549450 systemd[1]: Started sshd@12-172.31.30.61:22-139.178.89.65:45458.service - OpenSSH per-connection server daemon (139.178.89.65:45458).
Feb 13 19:52:15.557744 systemd-logind[1996]: Removed session 12.
Feb 13 19:52:15.760014 sshd[4988]: Accepted publickey for core from 139.178.89.65 port 45458 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4
Feb 13 19:52:15.763602 sshd[4988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:52:15.773331 systemd-logind[1996]: New session 13 of user core.
Feb 13 19:52:15.786354 systemd[1]: Started session-13.scope - Session 13 of User core.
Feb 13 19:52:16.075768 sshd[4988]: pam_unix(sshd:session): session closed for user core
Feb 13 19:52:16.087522 systemd[1]: sshd@12-172.31.30.61:22-139.178.89.65:45458.service: Deactivated successfully.
Feb 13 19:52:16.094593 systemd[1]: session-13.scope: Deactivated successfully.
Feb 13 19:52:16.097455 systemd-logind[1996]: Session 13 logged out. Waiting for processes to exit.
Feb 13 19:52:16.101430 systemd-logind[1996]: Removed session 13.
Feb 13 19:52:21.119630 systemd[1]: Started sshd@13-172.31.30.61:22-139.178.89.65:45464.service - OpenSSH per-connection server daemon (139.178.89.65:45464).
Feb 13 19:52:21.317793 sshd[5001]: Accepted publickey for core from 139.178.89.65 port 45464 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4
Feb 13 19:52:21.322557 sshd[5001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:52:21.333346 systemd-logind[1996]: New session 14 of user core.
Feb 13 19:52:21.340275 systemd[1]: Started session-14.scope - Session 14 of User core.
Feb 13 19:52:21.607354 sshd[5001]: pam_unix(sshd:session): session closed for user core
Feb 13 19:52:21.615471 systemd[1]: sshd@13-172.31.30.61:22-139.178.89.65:45464.service: Deactivated successfully.
Feb 13 19:52:21.623191 systemd[1]: session-14.scope: Deactivated successfully.
Feb 13 19:52:21.624770 systemd-logind[1996]: Session 14 logged out. Waiting for processes to exit.
Feb 13 19:52:21.628306 systemd-logind[1996]: Removed session 14.
Feb 13 19:52:26.658540 systemd[1]: Started sshd@14-172.31.30.61:22-139.178.89.65:42092.service - OpenSSH per-connection server daemon (139.178.89.65:42092).
Feb 13 19:52:26.831470 sshd[5017]: Accepted publickey for core from 139.178.89.65 port 42092 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4
Feb 13 19:52:26.834934 sshd[5017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:52:26.845642 systemd-logind[1996]: New session 15 of user core.
Feb 13 19:52:26.856505 systemd[1]: Started session-15.scope - Session 15 of User core.
Feb 13 19:52:27.128459 sshd[5017]: pam_unix(sshd:session): session closed for user core
Feb 13 19:52:27.136041 systemd[1]: sshd@14-172.31.30.61:22-139.178.89.65:42092.service: Deactivated successfully.
Feb 13 19:52:27.140297 systemd[1]: session-15.scope: Deactivated successfully.
Feb 13 19:52:27.143263 systemd-logind[1996]: Session 15 logged out. Waiting for processes to exit.
Feb 13 19:52:27.145517 systemd-logind[1996]: Removed session 15.
Feb 13 19:52:32.177607 systemd[1]: Started sshd@15-172.31.30.61:22-139.178.89.65:42094.service - OpenSSH per-connection server daemon (139.178.89.65:42094).
Feb 13 19:52:32.357447 sshd[5030]: Accepted publickey for core from 139.178.89.65 port 42094 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4
Feb 13 19:52:32.360711 sshd[5030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:52:32.370048 systemd-logind[1996]: New session 16 of user core.
Feb 13 19:52:32.376263 systemd[1]: Started session-16.scope - Session 16 of User core.
Feb 13 19:52:32.640164 sshd[5030]: pam_unix(sshd:session): session closed for user core
Feb 13 19:52:32.647197 systemd-logind[1996]: Session 16 logged out. Waiting for processes to exit.
Feb 13 19:52:32.648257 systemd[1]: sshd@15-172.31.30.61:22-139.178.89.65:42094.service: Deactivated successfully.
Feb 13 19:52:32.652719 systemd[1]: session-16.scope: Deactivated successfully.
Feb 13 19:52:32.662033 systemd-logind[1996]: Removed session 16.
Feb 13 19:52:32.682604 systemd[1]: Started sshd@16-172.31.30.61:22-139.178.89.65:42104.service - OpenSSH per-connection server daemon (139.178.89.65:42104).
Feb 13 19:52:32.867148 sshd[5043]: Accepted publickey for core from 139.178.89.65 port 42104 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4
Feb 13 19:52:32.870538 sshd[5043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:52:32.880764 systemd-logind[1996]: New session 17 of user core.
Feb 13 19:52:32.893328 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 13 19:52:33.226641 sshd[5043]: pam_unix(sshd:session): session closed for user core
Feb 13 19:52:33.233490 systemd[1]: sshd@16-172.31.30.61:22-139.178.89.65:42104.service: Deactivated successfully.
Feb 13 19:52:33.237433 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 19:52:33.239022 systemd-logind[1996]: Session 17 logged out. Waiting for processes to exit.
Feb 13 19:52:33.241535 systemd-logind[1996]: Removed session 17.
Feb 13 19:52:33.264562 systemd[1]: Started sshd@17-172.31.30.61:22-139.178.89.65:42116.service - OpenSSH per-connection server daemon (139.178.89.65:42116).
Feb 13 19:52:33.450317 sshd[5054]: Accepted publickey for core from 139.178.89.65 port 42116 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4
Feb 13 19:52:33.454088 sshd[5054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:52:33.465069 systemd-logind[1996]: New session 18 of user core.
Feb 13 19:52:33.473547 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 19:52:36.494023 sshd[5054]: pam_unix(sshd:session): session closed for user core
Feb 13 19:52:36.506159 systemd[1]: sshd@17-172.31.30.61:22-139.178.89.65:42116.service: Deactivated successfully.
Feb 13 19:52:36.516617 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 19:52:36.520853 systemd[1]: session-18.scope: Consumed 1.085s CPU time.
Feb 13 19:52:36.525495 systemd-logind[1996]: Session 18 logged out. Waiting for processes to exit.
Feb 13 19:52:36.561609 systemd[1]: Started sshd@18-172.31.30.61:22-139.178.89.65:55764.service - OpenSSH per-connection server daemon (139.178.89.65:55764).
Feb 13 19:52:36.565223 systemd-logind[1996]: Removed session 18.
Feb 13 19:52:36.766860 sshd[5073]: Accepted publickey for core from 139.178.89.65 port 55764 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4
Feb 13 19:52:36.770354 sshd[5073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:52:36.781112 systemd-logind[1996]: New session 19 of user core.
Feb 13 19:52:36.787198 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 13 19:52:37.303891 sshd[5073]: pam_unix(sshd:session): session closed for user core
Feb 13 19:52:37.310445 systemd[1]: sshd@18-172.31.30.61:22-139.178.89.65:55764.service: Deactivated successfully.
Feb 13 19:52:37.316035 systemd[1]: session-19.scope: Deactivated successfully.
Feb 13 19:52:37.319807 systemd-logind[1996]: Session 19 logged out. Waiting for processes to exit.
Feb 13 19:52:37.321991 systemd-logind[1996]: Removed session 19.
Feb 13 19:52:37.341727 systemd[1]: Started sshd@19-172.31.30.61:22-139.178.89.65:55772.service - OpenSSH per-connection server daemon (139.178.89.65:55772).
Feb 13 19:52:37.525847 sshd[5085]: Accepted publickey for core from 139.178.89.65 port 55772 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4
Feb 13 19:52:37.529628 sshd[5085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:52:37.538742 systemd-logind[1996]: New session 20 of user core.
Feb 13 19:52:37.550300 systemd[1]: Started session-20.scope - Session 20 of User core.
Feb 13 19:52:37.803158 sshd[5085]: pam_unix(sshd:session): session closed for user core
Feb 13 19:52:37.809645 systemd-logind[1996]: Session 20 logged out. Waiting for processes to exit.
Feb 13 19:52:37.810855 systemd[1]: sshd@19-172.31.30.61:22-139.178.89.65:55772.service: Deactivated successfully.
Feb 13 19:52:37.815904 systemd[1]: session-20.scope: Deactivated successfully.
Feb 13 19:52:37.821160 systemd-logind[1996]: Removed session 20.
Feb 13 19:52:42.849625 systemd[1]: Started sshd@20-172.31.30.61:22-139.178.89.65:55786.service - OpenSSH per-connection server daemon (139.178.89.65:55786).
Feb 13 19:52:43.040884 sshd[5102]: Accepted publickey for core from 139.178.89.65 port 55786 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4
Feb 13 19:52:43.045164 sshd[5102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:52:43.058483 systemd-logind[1996]: New session 21 of user core.
Feb 13 19:52:43.070392 systemd[1]: Started session-21.scope - Session 21 of User core.
Feb 13 19:52:43.338498 sshd[5102]: pam_unix(sshd:session): session closed for user core
Feb 13 19:52:43.346215 systemd[1]: sshd@20-172.31.30.61:22-139.178.89.65:55786.service: Deactivated successfully.
Feb 13 19:52:43.350235 systemd[1]: session-21.scope: Deactivated successfully.
Feb 13 19:52:43.355703 systemd-logind[1996]: Session 21 logged out. Waiting for processes to exit.
Feb 13 19:52:43.358259 systemd-logind[1996]: Removed session 21.
Feb 13 19:52:48.380711 systemd[1]: Started sshd@21-172.31.30.61:22-139.178.89.65:46940.service - OpenSSH per-connection server daemon (139.178.89.65:46940).
Feb 13 19:52:48.560559 sshd[5119]: Accepted publickey for core from 139.178.89.65 port 46940 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4
Feb 13 19:52:48.564022 sshd[5119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:52:48.573349 systemd-logind[1996]: New session 22 of user core.
Feb 13 19:52:48.582231 systemd[1]: Started session-22.scope - Session 22 of User core.
Feb 13 19:52:48.839378 sshd[5119]: pam_unix(sshd:session): session closed for user core
Feb 13 19:52:48.847187 systemd[1]: sshd@21-172.31.30.61:22-139.178.89.65:46940.service: Deactivated successfully.
Feb 13 19:52:48.853331 systemd[1]: session-22.scope: Deactivated successfully.
Feb 13 19:52:48.855217 systemd-logind[1996]: Session 22 logged out. Waiting for processes to exit.
Feb 13 19:52:48.858336 systemd-logind[1996]: Removed session 22.
Feb 13 19:52:53.884735 systemd[1]: Started sshd@22-172.31.30.61:22-139.178.89.65:46954.service - OpenSSH per-connection server daemon (139.178.89.65:46954).
Feb 13 19:52:54.075369 sshd[5131]: Accepted publickey for core from 139.178.89.65 port 46954 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4
Feb 13 19:52:54.078551 sshd[5131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:52:54.098353 systemd-logind[1996]: New session 23 of user core.
Feb 13 19:52:54.106437 systemd[1]: Started session-23.scope - Session 23 of User core.
Feb 13 19:52:54.383846 sshd[5131]: pam_unix(sshd:session): session closed for user core
Feb 13 19:52:54.393141 systemd[1]: sshd@22-172.31.30.61:22-139.178.89.65:46954.service: Deactivated successfully.
Feb 13 19:52:54.398593 systemd[1]: session-23.scope: Deactivated successfully.
Feb 13 19:52:54.401329 systemd-logind[1996]: Session 23 logged out. Waiting for processes to exit.
Feb 13 19:52:54.404896 systemd-logind[1996]: Removed session 23.
Feb 13 19:52:59.438442 systemd[1]: Started sshd@23-172.31.30.61:22-139.178.89.65:37954.service - OpenSSH per-connection server daemon (139.178.89.65:37954).
Feb 13 19:52:59.606348 sshd[5143]: Accepted publickey for core from 139.178.89.65 port 37954 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4
Feb 13 19:52:59.610093 sshd[5143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:52:59.619334 systemd-logind[1996]: New session 24 of user core.
Feb 13 19:52:59.630470 systemd[1]: Started session-24.scope - Session 24 of User core.
Feb 13 19:52:59.880354 sshd[5143]: pam_unix(sshd:session): session closed for user core
Feb 13 19:52:59.889548 systemd[1]: sshd@23-172.31.30.61:22-139.178.89.65:37954.service: Deactivated successfully.
Feb 13 19:52:59.894270 systemd[1]: session-24.scope: Deactivated successfully.
Feb 13 19:52:59.896651 systemd-logind[1996]: Session 24 logged out. Waiting for processes to exit.
Feb 13 19:52:59.900036 systemd-logind[1996]: Removed session 24.
Feb 13 19:52:59.915537 systemd[1]: Started sshd@24-172.31.30.61:22-139.178.89.65:37960.service - OpenSSH per-connection server daemon (139.178.89.65:37960).
Feb 13 19:53:00.098354 sshd[5156]: Accepted publickey for core from 139.178.89.65 port 37960 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4
Feb 13 19:53:00.102364 sshd[5156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:53:00.113395 systemd-logind[1996]: New session 25 of user core.
Feb 13 19:53:00.122503 systemd[1]: Started session-25.scope - Session 25 of User core.
Feb 13 19:53:02.484979 containerd[2023]: time="2025-02-13T19:53:02.483333513Z" level=info msg="StopContainer for \"28fdd83dbc5cbda0a6d9418cc835b08871719ab80ea683cfa291c7efd6daae78\" with timeout 30 (s)"
Feb 13 19:53:02.492416 containerd[2023]: time="2025-02-13T19:53:02.491243450Z" level=info msg="Stop container \"28fdd83dbc5cbda0a6d9418cc835b08871719ab80ea683cfa291c7efd6daae78\" with signal terminated"
Feb 13 19:53:02.541604 systemd[1]: cri-containerd-28fdd83dbc5cbda0a6d9418cc835b08871719ab80ea683cfa291c7efd6daae78.scope: Deactivated successfully.
Feb 13 19:53:02.575433 containerd[2023]: time="2025-02-13T19:53:02.575057961Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 19:53:02.612300 containerd[2023]: time="2025-02-13T19:53:02.610878750Z" level=info msg="StopContainer for \"468b6e1dc23f1d3e07f4fd50301281416bd2ae024f08133e47a8a46d70c0eff8\" with timeout 2 (s)"
Feb 13 19:53:02.615407 containerd[2023]: time="2025-02-13T19:53:02.615263245Z" level=info msg="Stop container \"468b6e1dc23f1d3e07f4fd50301281416bd2ae024f08133e47a8a46d70c0eff8\" with signal terminated"
Feb 13 19:53:02.631553 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-28fdd83dbc5cbda0a6d9418cc835b08871719ab80ea683cfa291c7efd6daae78-rootfs.mount: Deactivated successfully.
Feb 13 19:53:02.640456 systemd-networkd[1933]: lxc_health: Link DOWN
Feb 13 19:53:02.640473 systemd-networkd[1933]: lxc_health: Lost carrier
Feb 13 19:53:02.660798 containerd[2023]: time="2025-02-13T19:53:02.660328451Z" level=info msg="shim disconnected" id=28fdd83dbc5cbda0a6d9418cc835b08871719ab80ea683cfa291c7efd6daae78 namespace=k8s.io
Feb 13 19:53:02.661805 containerd[2023]: time="2025-02-13T19:53:02.660881808Z" level=warning msg="cleaning up after shim disconnected" id=28fdd83dbc5cbda0a6d9418cc835b08871719ab80ea683cfa291c7efd6daae78 namespace=k8s.io
Feb 13 19:53:02.661805 containerd[2023]: time="2025-02-13T19:53:02.661088167Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:53:02.669684 systemd[1]: cri-containerd-468b6e1dc23f1d3e07f4fd50301281416bd2ae024f08133e47a8a46d70c0eff8.scope: Deactivated successfully.
Feb 13 19:53:02.670798 systemd[1]: cri-containerd-468b6e1dc23f1d3e07f4fd50301281416bd2ae024f08133e47a8a46d70c0eff8.scope: Consumed 17.214s CPU time.
Feb 13 19:53:02.710862 containerd[2023]: time="2025-02-13T19:53:02.710589337Z" level=info msg="StopContainer for \"28fdd83dbc5cbda0a6d9418cc835b08871719ab80ea683cfa291c7efd6daae78\" returns successfully"
Feb 13 19:53:02.712640 containerd[2023]: time="2025-02-13T19:53:02.712280683Z" level=info msg="StopPodSandbox for \"64c594c12a529b9fb313b07a2450a2966b5633a629b92a03d5d0b6f10b69a7b7\""
Feb 13 19:53:02.712640 containerd[2023]: time="2025-02-13T19:53:02.712354111Z" level=info msg="Container to stop \"28fdd83dbc5cbda0a6d9418cc835b08871719ab80ea683cfa291c7efd6daae78\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:53:02.717866 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-64c594c12a529b9fb313b07a2450a2966b5633a629b92a03d5d0b6f10b69a7b7-shm.mount: Deactivated successfully.
Feb 13 19:53:02.737080 systemd[1]: cri-containerd-64c594c12a529b9fb313b07a2450a2966b5633a629b92a03d5d0b6f10b69a7b7.scope: Deactivated successfully.
Feb 13 19:53:02.746886 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-468b6e1dc23f1d3e07f4fd50301281416bd2ae024f08133e47a8a46d70c0eff8-rootfs.mount: Deactivated successfully. Feb 13 19:53:02.758345 containerd[2023]: time="2025-02-13T19:53:02.758159463Z" level=info msg="shim disconnected" id=468b6e1dc23f1d3e07f4fd50301281416bd2ae024f08133e47a8a46d70c0eff8 namespace=k8s.io Feb 13 19:53:02.758741 containerd[2023]: time="2025-02-13T19:53:02.758706468Z" level=warning msg="cleaning up after shim disconnected" id=468b6e1dc23f1d3e07f4fd50301281416bd2ae024f08133e47a8a46d70c0eff8 namespace=k8s.io Feb 13 19:53:02.758989 containerd[2023]: time="2025-02-13T19:53:02.758958222Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:53:02.795179 containerd[2023]: time="2025-02-13T19:53:02.795076520Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:53:02Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 19:53:02.803335 containerd[2023]: time="2025-02-13T19:53:02.803073345Z" level=info msg="StopContainer for \"468b6e1dc23f1d3e07f4fd50301281416bd2ae024f08133e47a8a46d70c0eff8\" returns successfully" Feb 13 19:53:02.804733 containerd[2023]: time="2025-02-13T19:53:02.804646299Z" level=info msg="StopPodSandbox for \"5f69cd8f4e720b91f4ff07562316ab78504ab35fd3c623ee2b92c4ce3fe7d27a\"" Feb 13 19:53:02.804932 containerd[2023]: time="2025-02-13T19:53:02.804748494Z" level=info msg="Container to stop \"ba8e068aaed4318a9d0e0ec160b8c7834fc1dcd0c4aa7b74589d336eda6c25af\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:53:02.804932 containerd[2023]: time="2025-02-13T19:53:02.804780166Z" level=info msg="Container to stop \"ca85a615ab7eb5bba51f3dbb485fce6721852fd8f5f3c5af18e39efaffb48124\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:53:02.804932 containerd[2023]: time="2025-02-13T19:53:02.804806771Z" level=info msg="Container to stop \"b6d0fe4408ae14530bcc5e048f93ee09737fabe1dd1fc215ad2f46bffe87f2ff\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:53:02.804932 containerd[2023]: time="2025-02-13T19:53:02.804836462Z" level=info msg="Container to stop \"61c91fd40d19fe52e349f6fd69bcdf9ddc4285cbeb6eca48f77723d4b6bae037\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:53:02.804932 containerd[2023]: time="2025-02-13T19:53:02.804861206Z" level=info msg="Container to stop \"468b6e1dc23f1d3e07f4fd50301281416bd2ae024f08133e47a8a46d70c0eff8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:53:02.816013 containerd[2023]: time="2025-02-13T19:53:02.814990164Z" level=info msg="shim disconnected" id=64c594c12a529b9fb313b07a2450a2966b5633a629b92a03d5d0b6f10b69a7b7 namespace=k8s.io Feb 13 19:53:02.816013 containerd[2023]: time="2025-02-13T19:53:02.815257538Z" level=warning msg="cleaning up after shim disconnected" id=64c594c12a529b9fb313b07a2450a2966b5633a629b92a03d5d0b6f10b69a7b7 namespace=k8s.io Feb 13 19:53:02.816013 containerd[2023]: time="2025-02-13T19:53:02.815280397Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:53:02.821621 systemd[1]: cri-containerd-5f69cd8f4e720b91f4ff07562316ab78504ab35fd3c623ee2b92c4ce3fe7d27a.scope: Deactivated successfully. 
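The StopContainer sequences above follow the usual graceful-stop protocol: send SIGTERM ("with signal terminated"), wait up to the per-container timeout (30 s for the operator, 2 s for the agent), force-kill on expiry, then let the shim exit and its rootfs mount be cleaned up. A sketch of the same flow against the containerd Go client, assuming the k8s.io namespace; the socket path and container ID are placeholders:

```go
package main

import (
	"context"
	"log"
	"syscall"
	"time"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func stopWithTimeout(id string, timeout time.Duration) error {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		return err
	}
	defer client.Close()

	// Kubernetes containers live in the "k8s.io" containerd namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	container, err := client.LoadContainer(ctx, id)
	if err != nil {
		return err
	}
	task, err := container.Task(ctx, nil)
	if err != nil {
		return err
	}
	exitCh, err := task.Wait(ctx)
	if err != nil {
		return err
	}
	// Graceful phase: "Stop container ... with signal terminated".
	if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
		return err
	}
	select {
	case <-exitCh:
		// Exited within the grace period.
	case <-time.After(timeout):
		// Timeout expired: force-kill, then wait for the exit event.
		if err := task.Kill(ctx, syscall.SIGKILL); err != nil {
			return err
		}
		<-exitCh
	}
	// Deleting the task is what lets the shim exit and the rootfs unmount.
	_, err = task.Delete(ctx)
	return err
}

func main() {
	// Placeholder ID; a real ID would come from the CRI, as in the log.
	if err := stopWithTimeout("28fdd83dbc5c...", 30*time.Second); err != nil {
		log.Fatal(err)
	}
}
```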
Feb 13 19:53:02.865252 containerd[2023]: time="2025-02-13T19:53:02.865141076Z" level=info msg="TearDown network for sandbox \"64c594c12a529b9fb313b07a2450a2966b5633a629b92a03d5d0b6f10b69a7b7\" successfully" Feb 13 19:53:02.865252 containerd[2023]: time="2025-02-13T19:53:02.865226534Z" level=info msg="StopPodSandbox for \"64c594c12a529b9fb313b07a2450a2966b5633a629b92a03d5d0b6f10b69a7b7\" returns successfully" Feb 13 19:53:02.890305 containerd[2023]: time="2025-02-13T19:53:02.890085714Z" level=info msg="shim disconnected" id=5f69cd8f4e720b91f4ff07562316ab78504ab35fd3c623ee2b92c4ce3fe7d27a namespace=k8s.io Feb 13 19:53:02.890931 containerd[2023]: time="2025-02-13T19:53:02.890320971Z" level=warning msg="cleaning up after shim disconnected" id=5f69cd8f4e720b91f4ff07562316ab78504ab35fd3c623ee2b92c4ce3fe7d27a namespace=k8s.io Feb 13 19:53:02.890931 containerd[2023]: time="2025-02-13T19:53:02.890348765Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:53:02.898534 kubelet[3545]: I0213 19:53:02.898449 3545 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/32a79e45-696d-4f0b-8ffd-47c888e2c44a-cilium-config-path\") pod \"32a79e45-696d-4f0b-8ffd-47c888e2c44a\" (UID: \"32a79e45-696d-4f0b-8ffd-47c888e2c44a\") " Feb 13 19:53:02.898534 kubelet[3545]: I0213 19:53:02.898549 3545 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwdvq\" (UniqueName: \"kubernetes.io/projected/32a79e45-696d-4f0b-8ffd-47c888e2c44a-kube-api-access-qwdvq\") pod \"32a79e45-696d-4f0b-8ffd-47c888e2c44a\" (UID: \"32a79e45-696d-4f0b-8ffd-47c888e2c44a\") " Feb 13 19:53:02.911066 kubelet[3545]: I0213 19:53:02.907785 3545 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32a79e45-696d-4f0b-8ffd-47c888e2c44a-kube-api-access-qwdvq" (OuterVolumeSpecName: "kube-api-access-qwdvq") pod "32a79e45-696d-4f0b-8ffd-47c888e2c44a" (UID: "32a79e45-696d-4f0b-8ffd-47c888e2c44a"). InnerVolumeSpecName "kube-api-access-qwdvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 19:53:02.917314 kubelet[3545]: I0213 19:53:02.917185 3545 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32a79e45-696d-4f0b-8ffd-47c888e2c44a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "32a79e45-696d-4f0b-8ffd-47c888e2c44a" (UID: "32a79e45-696d-4f0b-8ffd-47c888e2c44a"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 19:53:02.923490 containerd[2023]: time="2025-02-13T19:53:02.923405330Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:53:02Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 19:53:02.925316 containerd[2023]: time="2025-02-13T19:53:02.925121936Z" level=info msg="TearDown network for sandbox \"5f69cd8f4e720b91f4ff07562316ab78504ab35fd3c623ee2b92c4ce3fe7d27a\" successfully" Feb 13 19:53:02.925316 containerd[2023]: time="2025-02-13T19:53:02.925193143Z" level=info msg="StopPodSandbox for \"5f69cd8f4e720b91f4ff07562316ab78504ab35fd3c623ee2b92c4ce3fe7d27a\" returns successfully" Feb 13 19:53:02.958972 kubelet[3545]: E0213 19:53:02.957798 3545 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 19:53:02.999720 kubelet[3545]: I0213 19:53:02.999529 3545 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/997007d6-5e4e-4700-9480-ddf89d70f8e6-cilium-run\") pod \"997007d6-5e4e-4700-9480-ddf89d70f8e6\" (UID: \"997007d6-5e4e-4700-9480-ddf89d70f8e6\") " Feb 13 19:53:02.999720 kubelet[3545]: I0213 19:53:02.999616 3545 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/997007d6-5e4e-4700-9480-ddf89d70f8e6-hubble-tls\") pod \"997007d6-5e4e-4700-9480-ddf89d70f8e6\" (UID: \"997007d6-5e4e-4700-9480-ddf89d70f8e6\") " Feb 13 19:53:02.999720 kubelet[3545]: I0213 19:53:02.999659 3545 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/997007d6-5e4e-4700-9480-ddf89d70f8e6-xtables-lock\") pod \"997007d6-5e4e-4700-9480-ddf89d70f8e6\" (UID: \"997007d6-5e4e-4700-9480-ddf89d70f8e6\") " Feb 13 19:53:03.001309 kubelet[3545]: I0213 19:53:03.001201 3545 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/997007d6-5e4e-4700-9480-ddf89d70f8e6-etc-cni-netd\") pod \"997007d6-5e4e-4700-9480-ddf89d70f8e6\" (UID: \"997007d6-5e4e-4700-9480-ddf89d70f8e6\") " Feb 13 19:53:03.001514 kubelet[3545]: I0213 19:53:03.001373 3545 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/997007d6-5e4e-4700-9480-ddf89d70f8e6-cni-path\") pod \"997007d6-5e4e-4700-9480-ddf89d70f8e6\" (UID: \"997007d6-5e4e-4700-9480-ddf89d70f8e6\") " Feb 13 19:53:03.001514 kubelet[3545]: I0213 19:53:03.001425 3545 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/997007d6-5e4e-4700-9480-ddf89d70f8e6-cilium-config-path\") pod \"997007d6-5e4e-4700-9480-ddf89d70f8e6\" (UID: \"997007d6-5e4e-4700-9480-ddf89d70f8e6\") " Feb 13 19:53:03.001514 kubelet[3545]: I0213 19:53:03.001477 3545 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzdnk\" (UniqueName: \"kubernetes.io/projected/997007d6-5e4e-4700-9480-ddf89d70f8e6-kube-api-access-lzdnk\") pod \"997007d6-5e4e-4700-9480-ddf89d70f8e6\" (UID: \"997007d6-5e4e-4700-9480-ddf89d70f8e6\") " Feb 13 19:53:03.001687 kubelet[3545]: I0213 19:53:03.001528 3545 
reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/997007d6-5e4e-4700-9480-ddf89d70f8e6-clustermesh-secrets\") pod \"997007d6-5e4e-4700-9480-ddf89d70f8e6\" (UID: \"997007d6-5e4e-4700-9480-ddf89d70f8e6\") " Feb 13 19:53:03.001687 kubelet[3545]: I0213 19:53:03.001567 3545 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/997007d6-5e4e-4700-9480-ddf89d70f8e6-bpf-maps\") pod \"997007d6-5e4e-4700-9480-ddf89d70f8e6\" (UID: \"997007d6-5e4e-4700-9480-ddf89d70f8e6\") " Feb 13 19:53:03.001687 kubelet[3545]: I0213 19:53:03.001601 3545 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/997007d6-5e4e-4700-9480-ddf89d70f8e6-host-proc-sys-kernel\") pod \"997007d6-5e4e-4700-9480-ddf89d70f8e6\" (UID: \"997007d6-5e4e-4700-9480-ddf89d70f8e6\") " Feb 13 19:53:03.001687 kubelet[3545]: I0213 19:53:03.001634 3545 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/997007d6-5e4e-4700-9480-ddf89d70f8e6-lib-modules\") pod \"997007d6-5e4e-4700-9480-ddf89d70f8e6\" (UID: \"997007d6-5e4e-4700-9480-ddf89d70f8e6\") " Feb 13 19:53:03.001687 kubelet[3545]: I0213 19:53:03.001674 3545 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/997007d6-5e4e-4700-9480-ddf89d70f8e6-hostproc\") pod \"997007d6-5e4e-4700-9480-ddf89d70f8e6\" (UID: \"997007d6-5e4e-4700-9480-ddf89d70f8e6\") " Feb 13 19:53:03.002021 kubelet[3545]: I0213 19:53:03.001711 3545 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/997007d6-5e4e-4700-9480-ddf89d70f8e6-cilium-cgroup\") pod \"997007d6-5e4e-4700-9480-ddf89d70f8e6\" (UID: \"997007d6-5e4e-4700-9480-ddf89d70f8e6\") " Feb 13 19:53:03.002021 kubelet[3545]: I0213 19:53:03.001744 3545 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/997007d6-5e4e-4700-9480-ddf89d70f8e6-host-proc-sys-net\") pod \"997007d6-5e4e-4700-9480-ddf89d70f8e6\" (UID: \"997007d6-5e4e-4700-9480-ddf89d70f8e6\") " Feb 13 19:53:03.002021 kubelet[3545]: I0213 19:53:03.001816 3545 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/32a79e45-696d-4f0b-8ffd-47c888e2c44a-cilium-config-path\") on node \"ip-172-31-30-61\" DevicePath \"\"" Feb 13 19:53:03.002021 kubelet[3545]: I0213 19:53:03.001839 3545 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-qwdvq\" (UniqueName: \"kubernetes.io/projected/32a79e45-696d-4f0b-8ffd-47c888e2c44a-kube-api-access-qwdvq\") on node \"ip-172-31-30-61\" DevicePath \"\"" Feb 13 19:53:03.002021 kubelet[3545]: I0213 19:53:03.001889 3545 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/997007d6-5e4e-4700-9480-ddf89d70f8e6-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "997007d6-5e4e-4700-9480-ddf89d70f8e6" (UID: "997007d6-5e4e-4700-9480-ddf89d70f8e6"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:53:03.002021 kubelet[3545]: I0213 19:53:03.001971 3545 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/997007d6-5e4e-4700-9480-ddf89d70f8e6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "997007d6-5e4e-4700-9480-ddf89d70f8e6" (UID: "997007d6-5e4e-4700-9480-ddf89d70f8e6"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:53:03.002338 kubelet[3545]: I0213 19:53:03.002012 3545 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/997007d6-5e4e-4700-9480-ddf89d70f8e6-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "997007d6-5e4e-4700-9480-ddf89d70f8e6" (UID: "997007d6-5e4e-4700-9480-ddf89d70f8e6"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:53:03.002338 kubelet[3545]: I0213 19:53:03.002050 3545 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/997007d6-5e4e-4700-9480-ddf89d70f8e6-cni-path" (OuterVolumeSpecName: "cni-path") pod "997007d6-5e4e-4700-9480-ddf89d70f8e6" (UID: "997007d6-5e4e-4700-9480-ddf89d70f8e6"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:53:03.005352 kubelet[3545]: I0213 19:53:03.004313 3545 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/997007d6-5e4e-4700-9480-ddf89d70f8e6-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "997007d6-5e4e-4700-9480-ddf89d70f8e6" (UID: "997007d6-5e4e-4700-9480-ddf89d70f8e6"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:53:03.005352 kubelet[3545]: I0213 19:53:03.004400 3545 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/997007d6-5e4e-4700-9480-ddf89d70f8e6-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "997007d6-5e4e-4700-9480-ddf89d70f8e6" (UID: "997007d6-5e4e-4700-9480-ddf89d70f8e6"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:53:03.008318 kubelet[3545]: I0213 19:53:03.007168 3545 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/997007d6-5e4e-4700-9480-ddf89d70f8e6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "997007d6-5e4e-4700-9480-ddf89d70f8e6" (UID: "997007d6-5e4e-4700-9480-ddf89d70f8e6"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:53:03.008318 kubelet[3545]: I0213 19:53:03.007248 3545 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/997007d6-5e4e-4700-9480-ddf89d70f8e6-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "997007d6-5e4e-4700-9480-ddf89d70f8e6" (UID: "997007d6-5e4e-4700-9480-ddf89d70f8e6"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:53:03.008318 kubelet[3545]: I0213 19:53:03.007323 3545 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/997007d6-5e4e-4700-9480-ddf89d70f8e6-hostproc" (OuterVolumeSpecName: "hostproc") pod "997007d6-5e4e-4700-9480-ddf89d70f8e6" (UID: "997007d6-5e4e-4700-9480-ddf89d70f8e6"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:53:03.008318 kubelet[3545]: I0213 19:53:03.007358 3545 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/997007d6-5e4e-4700-9480-ddf89d70f8e6-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "997007d6-5e4e-4700-9480-ddf89d70f8e6" (UID: "997007d6-5e4e-4700-9480-ddf89d70f8e6"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:53:03.015559 kubelet[3545]: I0213 19:53:03.014466 3545 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/997007d6-5e4e-4700-9480-ddf89d70f8e6-kube-api-access-lzdnk" (OuterVolumeSpecName: "kube-api-access-lzdnk") pod "997007d6-5e4e-4700-9480-ddf89d70f8e6" (UID: "997007d6-5e4e-4700-9480-ddf89d70f8e6"). InnerVolumeSpecName "kube-api-access-lzdnk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 19:53:03.016006 kubelet[3545]: I0213 19:53:03.015749 3545 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/997007d6-5e4e-4700-9480-ddf89d70f8e6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "997007d6-5e4e-4700-9480-ddf89d70f8e6" (UID: "997007d6-5e4e-4700-9480-ddf89d70f8e6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 19:53:03.017787 kubelet[3545]: I0213 19:53:03.017487 3545 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/997007d6-5e4e-4700-9480-ddf89d70f8e6-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "997007d6-5e4e-4700-9480-ddf89d70f8e6" (UID: "997007d6-5e4e-4700-9480-ddf89d70f8e6"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 19:53:03.019866 kubelet[3545]: I0213 19:53:03.019739 3545 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/997007d6-5e4e-4700-9480-ddf89d70f8e6-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "997007d6-5e4e-4700-9480-ddf89d70f8e6" (UID: "997007d6-5e4e-4700-9480-ddf89d70f8e6"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 19:53:03.102218 kubelet[3545]: I0213 19:53:03.102107 3545 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/997007d6-5e4e-4700-9480-ddf89d70f8e6-xtables-lock\") on node \"ip-172-31-30-61\" DevicePath \"\"" Feb 13 19:53:03.102218 kubelet[3545]: I0213 19:53:03.102157 3545 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/997007d6-5e4e-4700-9480-ddf89d70f8e6-cni-path\") on node \"ip-172-31-30-61\" DevicePath \"\"" Feb 13 19:53:03.102218 kubelet[3545]: I0213 19:53:03.102182 3545 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/997007d6-5e4e-4700-9480-ddf89d70f8e6-cilium-config-path\") on node \"ip-172-31-30-61\" DevicePath \"\"" Feb 13 19:53:03.102218 kubelet[3545]: I0213 19:53:03.102207 3545 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/997007d6-5e4e-4700-9480-ddf89d70f8e6-etc-cni-netd\") on node \"ip-172-31-30-61\" DevicePath \"\"" Feb 13 19:53:03.102218 kubelet[3545]: I0213 19:53:03.102231 3545 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-lzdnk\" (UniqueName: \"kubernetes.io/projected/997007d6-5e4e-4700-9480-ddf89d70f8e6-kube-api-access-lzdnk\") on node \"ip-172-31-30-61\" DevicePath \"\"" Feb 13 19:53:03.102632 kubelet[3545]: I0213 19:53:03.102254 3545 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/997007d6-5e4e-4700-9480-ddf89d70f8e6-clustermesh-secrets\") on node \"ip-172-31-30-61\" DevicePath \"\"" Feb 13 19:53:03.102632 kubelet[3545]: I0213 19:53:03.102276 3545 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/997007d6-5e4e-4700-9480-ddf89d70f8e6-bpf-maps\") on node \"ip-172-31-30-61\" DevicePath \"\"" Feb 13 19:53:03.102632 kubelet[3545]: I0213 19:53:03.102294 3545 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/997007d6-5e4e-4700-9480-ddf89d70f8e6-host-proc-sys-kernel\") on node \"ip-172-31-30-61\" DevicePath \"\"" Feb 13 19:53:03.102632 kubelet[3545]: I0213 19:53:03.102315 3545 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/997007d6-5e4e-4700-9480-ddf89d70f8e6-lib-modules\") on node \"ip-172-31-30-61\" DevicePath \"\"" Feb 13 19:53:03.102632 kubelet[3545]: I0213 19:53:03.102333 3545 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/997007d6-5e4e-4700-9480-ddf89d70f8e6-hostproc\") on node \"ip-172-31-30-61\" DevicePath \"\"" Feb 13 19:53:03.102632 kubelet[3545]: I0213 19:53:03.102352 3545 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/997007d6-5e4e-4700-9480-ddf89d70f8e6-cilium-cgroup\") on node \"ip-172-31-30-61\" DevicePath \"\"" Feb 13 19:53:03.102632 kubelet[3545]: I0213 19:53:03.102372 3545 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/997007d6-5e4e-4700-9480-ddf89d70f8e6-host-proc-sys-net\") on node \"ip-172-31-30-61\" DevicePath \"\"" Feb 13 19:53:03.102632 kubelet[3545]: I0213 19:53:03.102391 3545 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/997007d6-5e4e-4700-9480-ddf89d70f8e6-cilium-run\") on node \"ip-172-31-30-61\" DevicePath \"\"" Feb 13 19:53:03.103119 kubelet[3545]: I0213 19:53:03.102409 3545 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/997007d6-5e4e-4700-9480-ddf89d70f8e6-hubble-tls\") on node \"ip-172-31-30-61\" DevicePath \"\"" Feb 13 19:53:03.180903 kubelet[3545]: I0213 19:53:03.180829 3545 scope.go:117] "RemoveContainer" containerID="28fdd83dbc5cbda0a6d9418cc835b08871719ab80ea683cfa291c7efd6daae78" Feb 13 19:53:03.190586 containerd[2023]: time="2025-02-13T19:53:03.190499284Z" level=info msg="RemoveContainer for \"28fdd83dbc5cbda0a6d9418cc835b08871719ab80ea683cfa291c7efd6daae78\"" Feb 13 19:53:03.199069 systemd[1]: Removed slice kubepods-besteffort-pod32a79e45_696d_4f0b_8ffd_47c888e2c44a.slice - libcontainer container kubepods-besteffort-pod32a79e45_696d_4f0b_8ffd_47c888e2c44a.slice. Feb 13 19:53:03.206185 containerd[2023]: time="2025-02-13T19:53:03.206056419Z" level=info msg="RemoveContainer for \"28fdd83dbc5cbda0a6d9418cc835b08871719ab80ea683cfa291c7efd6daae78\" returns successfully" Feb 13 19:53:03.206992 kubelet[3545]: I0213 19:53:03.206654 3545 scope.go:117] "RemoveContainer" containerID="28fdd83dbc5cbda0a6d9418cc835b08871719ab80ea683cfa291c7efd6daae78" Feb 13 19:53:03.207860 containerd[2023]: time="2025-02-13T19:53:03.207774177Z" level=error msg="ContainerStatus for \"28fdd83dbc5cbda0a6d9418cc835b08871719ab80ea683cfa291c7efd6daae78\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"28fdd83dbc5cbda0a6d9418cc835b08871719ab80ea683cfa291c7efd6daae78\": not found" Feb 13 19:53:03.209806 kubelet[3545]: E0213 19:53:03.208081 3545 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"28fdd83dbc5cbda0a6d9418cc835b08871719ab80ea683cfa291c7efd6daae78\": not found" containerID="28fdd83dbc5cbda0a6d9418cc835b08871719ab80ea683cfa291c7efd6daae78" Feb 13 19:53:03.209806 kubelet[3545]: I0213 19:53:03.208137 3545 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"28fdd83dbc5cbda0a6d9418cc835b08871719ab80ea683cfa291c7efd6daae78"} err="failed to get container status \"28fdd83dbc5cbda0a6d9418cc835b08871719ab80ea683cfa291c7efd6daae78\": rpc error: code = NotFound desc = an error occurred when try to find container \"28fdd83dbc5cbda0a6d9418cc835b08871719ab80ea683cfa291c7efd6daae78\": not found" Feb 13 19:53:03.209806 kubelet[3545]: I0213 19:53:03.208303 3545 scope.go:117] "RemoveContainer" containerID="468b6e1dc23f1d3e07f4fd50301281416bd2ae024f08133e47a8a46d70c0eff8" Feb 13 19:53:03.216181 containerd[2023]: time="2025-02-13T19:53:03.215381599Z" level=info msg="RemoveContainer for \"468b6e1dc23f1d3e07f4fd50301281416bd2ae024f08133e47a8a46d70c0eff8\"" Feb 13 19:53:03.229404 containerd[2023]: time="2025-02-13T19:53:03.228718044Z" level=info msg="RemoveContainer for \"468b6e1dc23f1d3e07f4fd50301281416bd2ae024f08133e47a8a46d70c0eff8\" returns successfully" Feb 13 19:53:03.233088 systemd[1]: Removed slice kubepods-burstable-pod997007d6_5e4e_4700_9480_ddf89d70f8e6.slice - libcontainer container kubepods-burstable-pod997007d6_5e4e_4700_9480_ddf89d70f8e6.slice. Feb 13 19:53:03.233360 systemd[1]: kubepods-burstable-pod997007d6_5e4e_4700_9480_ddf89d70f8e6.slice: Consumed 17.377s CPU time. 
Feb 13 19:53:03.236980 kubelet[3545]: I0213 19:53:03.235202 3545 scope.go:117] "RemoveContainer" containerID="ca85a615ab7eb5bba51f3dbb485fce6721852fd8f5f3c5af18e39efaffb48124" Feb 13 19:53:03.252037 containerd[2023]: time="2025-02-13T19:53:03.251505480Z" level=info msg="RemoveContainer for \"ca85a615ab7eb5bba51f3dbb485fce6721852fd8f5f3c5af18e39efaffb48124\"" Feb 13 19:53:03.261530 containerd[2023]: time="2025-02-13T19:53:03.261439292Z" level=info msg="RemoveContainer for \"ca85a615ab7eb5bba51f3dbb485fce6721852fd8f5f3c5af18e39efaffb48124\" returns successfully" Feb 13 19:53:03.263147 kubelet[3545]: I0213 19:53:03.262136 3545 scope.go:117] "RemoveContainer" containerID="61c91fd40d19fe52e349f6fd69bcdf9ddc4285cbeb6eca48f77723d4b6bae037" Feb 13 19:53:03.269135 containerd[2023]: time="2025-02-13T19:53:03.269010888Z" level=info msg="RemoveContainer for \"61c91fd40d19fe52e349f6fd69bcdf9ddc4285cbeb6eca48f77723d4b6bae037\"" Feb 13 19:53:03.277619 containerd[2023]: time="2025-02-13T19:53:03.277498434Z" level=info msg="RemoveContainer for \"61c91fd40d19fe52e349f6fd69bcdf9ddc4285cbeb6eca48f77723d4b6bae037\" returns successfully" Feb 13 19:53:03.278292 kubelet[3545]: I0213 19:53:03.278073 3545 scope.go:117] "RemoveContainer" containerID="ba8e068aaed4318a9d0e0ec160b8c7834fc1dcd0c4aa7b74589d336eda6c25af" Feb 13 19:53:03.283229 containerd[2023]: time="2025-02-13T19:53:03.283048964Z" level=info msg="RemoveContainer for \"ba8e068aaed4318a9d0e0ec160b8c7834fc1dcd0c4aa7b74589d336eda6c25af\"" Feb 13 19:53:03.291965 containerd[2023]: time="2025-02-13T19:53:03.291860864Z" level=info msg="RemoveContainer for \"ba8e068aaed4318a9d0e0ec160b8c7834fc1dcd0c4aa7b74589d336eda6c25af\" returns successfully" Feb 13 19:53:03.292517 kubelet[3545]: I0213 19:53:03.292456 3545 scope.go:117] "RemoveContainer" containerID="b6d0fe4408ae14530bcc5e048f93ee09737fabe1dd1fc215ad2f46bffe87f2ff" Feb 13 19:53:03.295562 containerd[2023]: time="2025-02-13T19:53:03.295514062Z" level=info msg="RemoveContainer for \"b6d0fe4408ae14530bcc5e048f93ee09737fabe1dd1fc215ad2f46bffe87f2ff\"" Feb 13 19:53:03.301962 containerd[2023]: time="2025-02-13T19:53:03.301806682Z" level=info msg="RemoveContainer for \"b6d0fe4408ae14530bcc5e048f93ee09737fabe1dd1fc215ad2f46bffe87f2ff\" returns successfully" Feb 13 19:53:03.302416 kubelet[3545]: I0213 19:53:03.302355 3545 scope.go:117] "RemoveContainer" containerID="468b6e1dc23f1d3e07f4fd50301281416bd2ae024f08133e47a8a46d70c0eff8" Feb 13 19:53:03.303447 containerd[2023]: time="2025-02-13T19:53:03.303377896Z" level=error msg="ContainerStatus for \"468b6e1dc23f1d3e07f4fd50301281416bd2ae024f08133e47a8a46d70c0eff8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"468b6e1dc23f1d3e07f4fd50301281416bd2ae024f08133e47a8a46d70c0eff8\": not found" Feb 13 19:53:03.303820 kubelet[3545]: E0213 19:53:03.303743 3545 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"468b6e1dc23f1d3e07f4fd50301281416bd2ae024f08133e47a8a46d70c0eff8\": not found" containerID="468b6e1dc23f1d3e07f4fd50301281416bd2ae024f08133e47a8a46d70c0eff8" Feb 13 19:53:03.303967 kubelet[3545]: I0213 19:53:03.303804 3545 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"468b6e1dc23f1d3e07f4fd50301281416bd2ae024f08133e47a8a46d70c0eff8"} err="failed to get container status \"468b6e1dc23f1d3e07f4fd50301281416bd2ae024f08133e47a8a46d70c0eff8\": rpc error: code = NotFound desc = 
an error occurred when try to find container \"468b6e1dc23f1d3e07f4fd50301281416bd2ae024f08133e47a8a46d70c0eff8\": not found" Feb 13 19:53:03.303967 kubelet[3545]: I0213 19:53:03.303844 3545 scope.go:117] "RemoveContainer" containerID="ca85a615ab7eb5bba51f3dbb485fce6721852fd8f5f3c5af18e39efaffb48124" Feb 13 19:53:03.304858 kubelet[3545]: E0213 19:53:03.304822 3545 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ca85a615ab7eb5bba51f3dbb485fce6721852fd8f5f3c5af18e39efaffb48124\": not found" containerID="ca85a615ab7eb5bba51f3dbb485fce6721852fd8f5f3c5af18e39efaffb48124" Feb 13 19:53:03.304985 containerd[2023]: time="2025-02-13T19:53:03.304513783Z" level=error msg="ContainerStatus for \"ca85a615ab7eb5bba51f3dbb485fce6721852fd8f5f3c5af18e39efaffb48124\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ca85a615ab7eb5bba51f3dbb485fce6721852fd8f5f3c5af18e39efaffb48124\": not found" Feb 13 19:53:03.305090 kubelet[3545]: I0213 19:53:03.304868 3545 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ca85a615ab7eb5bba51f3dbb485fce6721852fd8f5f3c5af18e39efaffb48124"} err="failed to get container status \"ca85a615ab7eb5bba51f3dbb485fce6721852fd8f5f3c5af18e39efaffb48124\": rpc error: code = NotFound desc = an error occurred when try to find container \"ca85a615ab7eb5bba51f3dbb485fce6721852fd8f5f3c5af18e39efaffb48124\": not found" Feb 13 19:53:03.305090 kubelet[3545]: I0213 19:53:03.304907 3545 scope.go:117] "RemoveContainer" containerID="61c91fd40d19fe52e349f6fd69bcdf9ddc4285cbeb6eca48f77723d4b6bae037" Feb 13 19:53:03.305469 containerd[2023]: time="2025-02-13T19:53:03.305388685Z" level=error msg="ContainerStatus for \"61c91fd40d19fe52e349f6fd69bcdf9ddc4285cbeb6eca48f77723d4b6bae037\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"61c91fd40d19fe52e349f6fd69bcdf9ddc4285cbeb6eca48f77723d4b6bae037\": not found" Feb 13 19:53:03.305698 kubelet[3545]: E0213 19:53:03.305604 3545 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"61c91fd40d19fe52e349f6fd69bcdf9ddc4285cbeb6eca48f77723d4b6bae037\": not found" containerID="61c91fd40d19fe52e349f6fd69bcdf9ddc4285cbeb6eca48f77723d4b6bae037" Feb 13 19:53:03.305698 kubelet[3545]: I0213 19:53:03.305658 3545 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"61c91fd40d19fe52e349f6fd69bcdf9ddc4285cbeb6eca48f77723d4b6bae037"} err="failed to get container status \"61c91fd40d19fe52e349f6fd69bcdf9ddc4285cbeb6eca48f77723d4b6bae037\": rpc error: code = NotFound desc = an error occurred when try to find container \"61c91fd40d19fe52e349f6fd69bcdf9ddc4285cbeb6eca48f77723d4b6bae037\": not found" Feb 13 19:53:03.305698 kubelet[3545]: I0213 19:53:03.305696 3545 scope.go:117] "RemoveContainer" containerID="ba8e068aaed4318a9d0e0ec160b8c7834fc1dcd0c4aa7b74589d336eda6c25af" Feb 13 19:53:03.306563 containerd[2023]: time="2025-02-13T19:53:03.306428140Z" level=error msg="ContainerStatus for \"ba8e068aaed4318a9d0e0ec160b8c7834fc1dcd0c4aa7b74589d336eda6c25af\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ba8e068aaed4318a9d0e0ec160b8c7834fc1dcd0c4aa7b74589d336eda6c25af\": not found" Feb 13 19:53:03.306773 kubelet[3545]: E0213 19:53:03.306731 3545 
remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ba8e068aaed4318a9d0e0ec160b8c7834fc1dcd0c4aa7b74589d336eda6c25af\": not found" containerID="ba8e068aaed4318a9d0e0ec160b8c7834fc1dcd0c4aa7b74589d336eda6c25af" Feb 13 19:53:03.306904 kubelet[3545]: I0213 19:53:03.306784 3545 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ba8e068aaed4318a9d0e0ec160b8c7834fc1dcd0c4aa7b74589d336eda6c25af"} err="failed to get container status \"ba8e068aaed4318a9d0e0ec160b8c7834fc1dcd0c4aa7b74589d336eda6c25af\": rpc error: code = NotFound desc = an error occurred when try to find container \"ba8e068aaed4318a9d0e0ec160b8c7834fc1dcd0c4aa7b74589d336eda6c25af\": not found" Feb 13 19:53:03.306904 kubelet[3545]: I0213 19:53:03.306837 3545 scope.go:117] "RemoveContainer" containerID="b6d0fe4408ae14530bcc5e048f93ee09737fabe1dd1fc215ad2f46bffe87f2ff" Feb 13 19:53:03.307292 containerd[2023]: time="2025-02-13T19:53:03.307224894Z" level=error msg="ContainerStatus for \"b6d0fe4408ae14530bcc5e048f93ee09737fabe1dd1fc215ad2f46bffe87f2ff\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b6d0fe4408ae14530bcc5e048f93ee09737fabe1dd1fc215ad2f46bffe87f2ff\": not found" Feb 13 19:53:03.307651 kubelet[3545]: E0213 19:53:03.307458 3545 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b6d0fe4408ae14530bcc5e048f93ee09737fabe1dd1fc215ad2f46bffe87f2ff\": not found" containerID="b6d0fe4408ae14530bcc5e048f93ee09737fabe1dd1fc215ad2f46bffe87f2ff" Feb 13 19:53:03.307651 kubelet[3545]: I0213 19:53:03.307497 3545 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b6d0fe4408ae14530bcc5e048f93ee09737fabe1dd1fc215ad2f46bffe87f2ff"} err="failed to get container status \"b6d0fe4408ae14530bcc5e048f93ee09737fabe1dd1fc215ad2f46bffe87f2ff\": rpc error: code = NotFound desc = an error occurred when try to find container \"b6d0fe4408ae14530bcc5e048f93ee09737fabe1dd1fc215ad2f46bffe87f2ff\": not found" Feb 13 19:53:03.531796 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-64c594c12a529b9fb313b07a2450a2966b5633a629b92a03d5d0b6f10b69a7b7-rootfs.mount: Deactivated successfully. Feb 13 19:53:03.532183 systemd[1]: var-lib-kubelet-pods-32a79e45\x2d696d\x2d4f0b\x2d8ffd\x2d47c888e2c44a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqwdvq.mount: Deactivated successfully. Feb 13 19:53:03.532417 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f69cd8f4e720b91f4ff07562316ab78504ab35fd3c623ee2b92c4ce3fe7d27a-rootfs.mount: Deactivated successfully. Feb 13 19:53:03.532579 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5f69cd8f4e720b91f4ff07562316ab78504ab35fd3c623ee2b92c4ce3fe7d27a-shm.mount: Deactivated successfully. Feb 13 19:53:03.532748 systemd[1]: var-lib-kubelet-pods-997007d6\x2d5e4e\x2d4700\x2d9480\x2dddf89d70f8e6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlzdnk.mount: Deactivated successfully. Feb 13 19:53:03.532901 systemd[1]: var-lib-kubelet-pods-997007d6\x2d5e4e\x2d4700\x2d9480\x2dddf89d70f8e6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
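Each RemoveContainer above is immediately followed by a ContainerStatus probe that fails with NotFound; kubelet logs this at error level but treats it as "already deleted", keeping container removal idempotent. A stand-in for that pattern over the CRI API (endpoint and container ID are placeholders):

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/status"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	ctx := context.Background()
	id := "468b6e1dc23f..." // placeholder container ID

	if _, err := rt.RemoveContainer(ctx,
		&runtimeapi.RemoveContainerRequest{ContainerId: id}); err != nil {
		log.Fatal(err)
	}
	// The follow-up status probe now returns NotFound. As in the kubelet
	// log above, that is treated as successful deletion, not as a failure.
	_, err = rt.ContainerStatus(ctx,
		&runtimeapi.ContainerStatusRequest{ContainerId: id})
	switch {
	case status.Code(err) == codes.NotFound:
		log.Printf("container %s already gone: treating delete as done", id)
	case err != nil:
		log.Fatal(err)
	}
}
```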
Feb 13 19:53:03.533836 systemd[1]: var-lib-kubelet-pods-997007d6\x2d5e4e\x2d4700\x2d9480\x2dddf89d70f8e6-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 19:53:04.420384 sshd[5156]: pam_unix(sshd:session): session closed for user core Feb 13 19:53:04.431610 systemd[1]: sshd@24-172.31.30.61:22-139.178.89.65:37960.service: Deactivated successfully. Feb 13 19:53:04.438497 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 19:53:04.439121 systemd[1]: session-25.scope: Consumed 1.641s CPU time. Feb 13 19:53:04.441252 systemd-logind[1996]: Session 25 logged out. Waiting for processes to exit. Feb 13 19:53:04.468771 systemd[1]: Started sshd@25-172.31.30.61:22-139.178.89.65:37976.service - OpenSSH per-connection server daemon (139.178.89.65:37976). Feb 13 19:53:04.474654 systemd-logind[1996]: Removed session 25. Feb 13 19:53:04.644617 ntpd[1991]: Deleting interface #12 lxc_health, fe80::74e6:a6ff:fe88:3c3a%8#123, interface stats: received=0, sent=0, dropped=0, active_time=62 secs Feb 13 19:53:04.645760 ntpd[1991]: 13 Feb 19:53:04 ntpd[1991]: Deleting interface #12 lxc_health, fe80::74e6:a6ff:fe88:3c3a%8#123, interface stats: received=0, sent=0, dropped=0, active_time=62 secs Feb 13 19:53:04.657753 sshd[5320]: Accepted publickey for core from 139.178.89.65 port 37976 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:53:04.662321 sshd[5320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:53:04.668568 kubelet[3545]: I0213 19:53:04.668346 3545 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32a79e45-696d-4f0b-8ffd-47c888e2c44a" path="/var/lib/kubelet/pods/32a79e45-696d-4f0b-8ffd-47c888e2c44a/volumes" Feb 13 19:53:04.673666 kubelet[3545]: I0213 19:53:04.672641 3545 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="997007d6-5e4e-4700-9480-ddf89d70f8e6" path="/var/lib/kubelet/pods/997007d6-5e4e-4700-9480-ddf89d70f8e6/volumes" Feb 13 19:53:04.678409 systemd-logind[1996]: New session 26 of user core. Feb 13 19:53:04.686341 systemd[1]: Started session-26.scope - Session 26 of User core. Feb 13 19:53:04.848843 kubelet[3545]: I0213 19:53:04.848721 3545 setters.go:580] "Node became not ready" node="ip-172-31-30-61" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T19:53:04Z","lastTransitionTime":"2025-02-13T19:53:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Feb 13 19:53:06.624019 sshd[5320]: pam_unix(sshd:session): session closed for user core Feb 13 19:53:06.634901 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 19:53:06.636577 systemd[1]: session-26.scope: Consumed 1.663s CPU time. Feb 13 19:53:06.639379 systemd[1]: sshd@25-172.31.30.61:22-139.178.89.65:37976.service: Deactivated successfully. Feb 13 19:53:06.657557 systemd-logind[1996]: Session 26 logged out. Waiting for processes to exit. Feb 13 19:53:06.697732 systemd[1]: Started sshd@26-172.31.30.61:22-139.178.89.65:48850.service - OpenSSH per-connection server daemon (139.178.89.65:48850). Feb 13 19:53:06.703102 systemd-logind[1996]: Removed session 26. 
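The \x2d and \x7e runs in the .mount unit names above are systemd unit-name escaping: a filesystem path becomes a unit name by mapping / to - and escaping literal dashes, tildes, and other reserved characters as \xNN. The go-systemd unit package exposes the same transform; a small illustration, with the path taken from the log lines above:

```go
package main

import (
	"fmt"

	"github.com/coreos/go-systemd/v22/unit"
)

func main() {
	// Pod volume directory from the kubelet teardown above.
	path := "/var/lib/kubelet/pods/997007d6-5e4e-4700-9480-ddf89d70f8e6/volumes/kubernetes.io~projected/hubble-tls"

	// '/' maps to '-', while literal '-' and '~' are escaped as \x2d and
	// \x7e, reproducing the .mount unit name seen in the log.
	fmt.Println(unit.UnitNamePathEscape(path) + ".mount")
}
```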
Feb 13 19:53:06.710207 kubelet[3545]: I0213 19:53:06.710097 3545 topology_manager.go:215] "Topology Admit Handler" podUID="73dff9de-efd9-4613-84e5-808e8d835227" podNamespace="kube-system" podName="cilium-ng5l5" Feb 13 19:53:06.712056 kubelet[3545]: E0213 19:53:06.710272 3545 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="997007d6-5e4e-4700-9480-ddf89d70f8e6" containerName="mount-cgroup" Feb 13 19:53:06.712056 kubelet[3545]: E0213 19:53:06.710305 3545 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="997007d6-5e4e-4700-9480-ddf89d70f8e6" containerName="mount-bpf-fs" Feb 13 19:53:06.712056 kubelet[3545]: E0213 19:53:06.710325 3545 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="997007d6-5e4e-4700-9480-ddf89d70f8e6" containerName="cilium-agent" Feb 13 19:53:06.712056 kubelet[3545]: E0213 19:53:06.710343 3545 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="997007d6-5e4e-4700-9480-ddf89d70f8e6" containerName="apply-sysctl-overwrites" Feb 13 19:53:06.712056 kubelet[3545]: E0213 19:53:06.710360 3545 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="32a79e45-696d-4f0b-8ffd-47c888e2c44a" containerName="cilium-operator" Feb 13 19:53:06.712056 kubelet[3545]: E0213 19:53:06.710376 3545 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="997007d6-5e4e-4700-9480-ddf89d70f8e6" containerName="clean-cilium-state" Feb 13 19:53:06.712056 kubelet[3545]: I0213 19:53:06.710454 3545 memory_manager.go:354] "RemoveStaleState removing state" podUID="997007d6-5e4e-4700-9480-ddf89d70f8e6" containerName="cilium-agent" Feb 13 19:53:06.712056 kubelet[3545]: I0213 19:53:06.710473 3545 memory_manager.go:354] "RemoveStaleState removing state" podUID="32a79e45-696d-4f0b-8ffd-47c888e2c44a" containerName="cilium-operator" Feb 13 19:53:06.731537 kubelet[3545]: W0213 19:53:06.731418 3545 reflector.go:547] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-30-61" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-30-61' and this object Feb 13 19:53:06.731537 kubelet[3545]: E0213 19:53:06.731494 3545 reflector.go:150] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-30-61" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-30-61' and this object Feb 13 19:53:06.732563 kubelet[3545]: W0213 19:53:06.732348 3545 reflector.go:547] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-30-61" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-30-61' and this object Feb 13 19:53:06.732563 kubelet[3545]: E0213 19:53:06.732412 3545 reflector.go:150] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-30-61" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-30-61' and this object Feb 13 19:53:06.732563 kubelet[3545]: W0213 19:53:06.732533 3545 reflector.go:547] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets 
"cilium-ipsec-keys" is forbidden: User "system:node:ip-172-31-30-61" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-30-61' and this object Feb 13 19:53:06.732563 kubelet[3545]: E0213 19:53:06.732567 3545 reflector.go:150] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ip-172-31-30-61" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-30-61' and this object Feb 13 19:53:06.749327 kubelet[3545]: W0213 19:53:06.749234 3545 reflector.go:547] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-30-61" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-30-61' and this object Feb 13 19:53:06.749327 kubelet[3545]: E0213 19:53:06.749319 3545 reflector.go:150] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-30-61" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-30-61' and this object Feb 13 19:53:06.753890 systemd[1]: Created slice kubepods-burstable-pod73dff9de_efd9_4613_84e5_808e8d835227.slice - libcontainer container kubepods-burstable-pod73dff9de_efd9_4613_84e5_808e8d835227.slice. Feb 13 19:53:06.836346 kubelet[3545]: I0213 19:53:06.836237 3545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/73dff9de-efd9-4613-84e5-808e8d835227-hostproc\") pod \"cilium-ng5l5\" (UID: \"73dff9de-efd9-4613-84e5-808e8d835227\") " pod="kube-system/cilium-ng5l5" Feb 13 19:53:06.836497 kubelet[3545]: I0213 19:53:06.836359 3545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/73dff9de-efd9-4613-84e5-808e8d835227-cni-path\") pod \"cilium-ng5l5\" (UID: \"73dff9de-efd9-4613-84e5-808e8d835227\") " pod="kube-system/cilium-ng5l5" Feb 13 19:53:06.836497 kubelet[3545]: I0213 19:53:06.836399 3545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/73dff9de-efd9-4613-84e5-808e8d835227-cilium-ipsec-secrets\") pod \"cilium-ng5l5\" (UID: \"73dff9de-efd9-4613-84e5-808e8d835227\") " pod="kube-system/cilium-ng5l5" Feb 13 19:53:06.836497 kubelet[3545]: I0213 19:53:06.836445 3545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/73dff9de-efd9-4613-84e5-808e8d835227-cilium-config-path\") pod \"cilium-ng5l5\" (UID: \"73dff9de-efd9-4613-84e5-808e8d835227\") " pod="kube-system/cilium-ng5l5" Feb 13 19:53:06.836497 kubelet[3545]: I0213 19:53:06.836486 3545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/73dff9de-efd9-4613-84e5-808e8d835227-clustermesh-secrets\") pod \"cilium-ng5l5\" (UID: \"73dff9de-efd9-4613-84e5-808e8d835227\") " pod="kube-system/cilium-ng5l5" Feb 13 19:53:06.836776 kubelet[3545]: I0213 19:53:06.836522 
3545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmkq9\" (UniqueName: \"kubernetes.io/projected/73dff9de-efd9-4613-84e5-808e8d835227-kube-api-access-qmkq9\") pod \"cilium-ng5l5\" (UID: \"73dff9de-efd9-4613-84e5-808e8d835227\") " pod="kube-system/cilium-ng5l5" Feb 13 19:53:06.836776 kubelet[3545]: I0213 19:53:06.836600 3545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/73dff9de-efd9-4613-84e5-808e8d835227-cilium-run\") pod \"cilium-ng5l5\" (UID: \"73dff9de-efd9-4613-84e5-808e8d835227\") " pod="kube-system/cilium-ng5l5" Feb 13 19:53:06.836776 kubelet[3545]: I0213 19:53:06.836644 3545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/73dff9de-efd9-4613-84e5-808e8d835227-etc-cni-netd\") pod \"cilium-ng5l5\" (UID: \"73dff9de-efd9-4613-84e5-808e8d835227\") " pod="kube-system/cilium-ng5l5" Feb 13 19:53:06.836776 kubelet[3545]: I0213 19:53:06.836679 3545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/73dff9de-efd9-4613-84e5-808e8d835227-hubble-tls\") pod \"cilium-ng5l5\" (UID: \"73dff9de-efd9-4613-84e5-808e8d835227\") " pod="kube-system/cilium-ng5l5" Feb 13 19:53:06.836776 kubelet[3545]: I0213 19:53:06.836718 3545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/73dff9de-efd9-4613-84e5-808e8d835227-cilium-cgroup\") pod \"cilium-ng5l5\" (UID: \"73dff9de-efd9-4613-84e5-808e8d835227\") " pod="kube-system/cilium-ng5l5" Feb 13 19:53:06.836776 kubelet[3545]: I0213 19:53:06.836752 3545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/73dff9de-efd9-4613-84e5-808e8d835227-lib-modules\") pod \"cilium-ng5l5\" (UID: \"73dff9de-efd9-4613-84e5-808e8d835227\") " pod="kube-system/cilium-ng5l5" Feb 13 19:53:06.837212 kubelet[3545]: I0213 19:53:06.836794 3545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/73dff9de-efd9-4613-84e5-808e8d835227-host-proc-sys-net\") pod \"cilium-ng5l5\" (UID: \"73dff9de-efd9-4613-84e5-808e8d835227\") " pod="kube-system/cilium-ng5l5" Feb 13 19:53:06.837212 kubelet[3545]: I0213 19:53:06.836831 3545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/73dff9de-efd9-4613-84e5-808e8d835227-bpf-maps\") pod \"cilium-ng5l5\" (UID: \"73dff9de-efd9-4613-84e5-808e8d835227\") " pod="kube-system/cilium-ng5l5" Feb 13 19:53:06.837212 kubelet[3545]: I0213 19:53:06.836867 3545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/73dff9de-efd9-4613-84e5-808e8d835227-host-proc-sys-kernel\") pod \"cilium-ng5l5\" (UID: \"73dff9de-efd9-4613-84e5-808e8d835227\") " pod="kube-system/cilium-ng5l5" Feb 13 19:53:06.837212 kubelet[3545]: I0213 19:53:06.836923 3545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/73dff9de-efd9-4613-84e5-808e8d835227-xtables-lock\") pod \"cilium-ng5l5\" (UID: \"73dff9de-efd9-4613-84e5-808e8d835227\") " pod="kube-system/cilium-ng5l5" Feb 13 19:53:06.934147 sshd[5331]: Accepted publickey for core from 139.178.89.65 port 48850 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:53:06.937740 sshd[5331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:53:06.961467 systemd-logind[1996]: New session 27 of user core. Feb 13 19:53:06.965442 systemd[1]: Started session-27.scope - Session 27 of User core. Feb 13 19:53:07.104601 sshd[5331]: pam_unix(sshd:session): session closed for user core Feb 13 19:53:07.113858 systemd[1]: sshd@26-172.31.30.61:22-139.178.89.65:48850.service: Deactivated successfully. Feb 13 19:53:07.120063 systemd[1]: session-27.scope: Deactivated successfully. Feb 13 19:53:07.123639 systemd-logind[1996]: Session 27 logged out. Waiting for processes to exit. Feb 13 19:53:07.146874 systemd[1]: Started sshd@27-172.31.30.61:22-139.178.89.65:48866.service - OpenSSH per-connection server daemon (139.178.89.65:48866). Feb 13 19:53:07.149431 systemd-logind[1996]: Removed session 27. Feb 13 19:53:07.331471 sshd[5340]: Accepted publickey for core from 139.178.89.65 port 48866 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:53:07.335090 sshd[5340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:53:07.344819 systemd-logind[1996]: New session 28 of user core. Feb 13 19:53:07.355279 systemd[1]: Started session-28.scope - Session 28 of User core. Feb 13 19:53:07.939332 kubelet[3545]: E0213 19:53:07.939231 3545 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Feb 13 19:53:07.940931 kubelet[3545]: E0213 19:53:07.939415 3545 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/73dff9de-efd9-4613-84e5-808e8d835227-cilium-config-path podName:73dff9de-efd9-4613-84e5-808e8d835227 nodeName:}" failed. No retries permitted until 2025-02-13 19:53:08.439344489 +0000 UTC m=+106.170047896 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/73dff9de-efd9-4613-84e5-808e8d835227-cilium-config-path") pod "cilium-ng5l5" (UID: "73dff9de-efd9-4613-84e5-808e8d835227") : failed to sync configmap cache: timed out waiting for the condition Feb 13 19:53:07.940931 kubelet[3545]: E0213 19:53:07.939249 3545 secret.go:194] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition Feb 13 19:53:07.940931 kubelet[3545]: E0213 19:53:07.940022 3545 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73dff9de-efd9-4613-84e5-808e8d835227-cilium-ipsec-secrets podName:73dff9de-efd9-4613-84e5-808e8d835227 nodeName:}" failed. No retries permitted until 2025-02-13 19:53:08.439921773 +0000 UTC m=+106.170625180 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/73dff9de-efd9-4613-84e5-808e8d835227-cilium-ipsec-secrets") pod "cilium-ng5l5" (UID: "73dff9de-efd9-4613-84e5-808e8d835227") : failed to sync secret cache: timed out waiting for the condition Feb 13 19:53:07.940931 kubelet[3545]: E0213 19:53:07.940433 3545 projected.go:269] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Feb 13 19:53:07.940931 kubelet[3545]: E0213 19:53:07.940482 3545 projected.go:200] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-ng5l5: failed to sync secret cache: timed out waiting for the condition Feb 13 19:53:07.941358 kubelet[3545]: E0213 19:53:07.940599 3545 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/73dff9de-efd9-4613-84e5-808e8d835227-hubble-tls podName:73dff9de-efd9-4613-84e5-808e8d835227 nodeName:}" failed. No retries permitted until 2025-02-13 19:53:08.440569017 +0000 UTC m=+106.171272412 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/73dff9de-efd9-4613-84e5-808e8d835227-hubble-tls") pod "cilium-ng5l5" (UID: "73dff9de-efd9-4613-84e5-808e8d835227") : failed to sync secret cache: timed out waiting for the condition Feb 13 19:53:07.960220 kubelet[3545]: E0213 19:53:07.959788 3545 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 19:53:08.573189 containerd[2023]: time="2025-02-13T19:53:08.573035352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ng5l5,Uid:73dff9de-efd9-4613-84e5-808e8d835227,Namespace:kube-system,Attempt:0,}" Feb 13 19:53:08.626150 containerd[2023]: time="2025-02-13T19:53:08.625789152Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:53:08.626150 containerd[2023]: time="2025-02-13T19:53:08.625896702Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:53:08.626150 containerd[2023]: time="2025-02-13T19:53:08.625933921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:53:08.627799 containerd[2023]: time="2025-02-13T19:53:08.627477232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:53:08.687310 systemd[1]: Started cri-containerd-0edba0b3471c39027ced583b0683ed97df0c09ceaac0302ca6b68921b02da3dd.scope - libcontainer container 0edba0b3471c39027ced583b0683ed97df0c09ceaac0302ca6b68921b02da3dd. 
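The MountVolume.SetUp failures above are the expected cache-sync race right after pod admission: the new pod's secrets and configmaps only just became readable by this node (hence the earlier "forbidden" reflector errors), so kubelet parks each mount operation and retries after durationBeforeRetry (500ms here, growing on repeated failures). A rough stand-in for that retry shape using the apimachinery wait helpers; mountVolume is a hypothetical placeholder for the real operation:

```go
package main

import (
	"errors"
	"log"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// mountVolume is a hypothetical stand-in for MountVolume.SetUp; it fails
// until the watch-driven secret/configmap caches have synced.
func mountVolume() error {
	return errors.New("failed to sync secret cache: timed out waiting for the condition")
}

func main() {
	// 500ms initial delay, doubling per attempt, matching the shape of the
	// durationBeforeRetry values in the kubelet log above.
	backoff := wait.Backoff{Duration: 500 * time.Millisecond, Factor: 2.0, Steps: 6}
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		if err := mountVolume(); err != nil {
			log.Printf("retrying: %v", err)
			return false, nil // transient failure: try again after backoff
		}
		return true, nil
	})
	if err != nil {
		log.Fatalf("giving up: %v", err)
	}
}
```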
Feb 13 19:53:08.746456 containerd[2023]: time="2025-02-13T19:53:08.746345025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ng5l5,Uid:73dff9de-efd9-4613-84e5-808e8d835227,Namespace:kube-system,Attempt:0,} returns sandbox id \"0edba0b3471c39027ced583b0683ed97df0c09ceaac0302ca6b68921b02da3dd\"" Feb 13 19:53:08.758352 containerd[2023]: time="2025-02-13T19:53:08.758022169Z" level=info msg="CreateContainer within sandbox \"0edba0b3471c39027ced583b0683ed97df0c09ceaac0302ca6b68921b02da3dd\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 19:53:08.784450 containerd[2023]: time="2025-02-13T19:53:08.784346693Z" level=info msg="CreateContainer within sandbox \"0edba0b3471c39027ced583b0683ed97df0c09ceaac0302ca6b68921b02da3dd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3ffb590ba342dd7a6333c08d205cb354a8e3ff8670f5bf9fd9db8aba58f46b70\"" Feb 13 19:53:08.786527 containerd[2023]: time="2025-02-13T19:53:08.786393163Z" level=info msg="StartContainer for \"3ffb590ba342dd7a6333c08d205cb354a8e3ff8670f5bf9fd9db8aba58f46b70\"" Feb 13 19:53:08.848338 systemd[1]: Started cri-containerd-3ffb590ba342dd7a6333c08d205cb354a8e3ff8670f5bf9fd9db8aba58f46b70.scope - libcontainer container 3ffb590ba342dd7a6333c08d205cb354a8e3ff8670f5bf9fd9db8aba58f46b70. Feb 13 19:53:08.923860 containerd[2023]: time="2025-02-13T19:53:08.923499067Z" level=info msg="StartContainer for \"3ffb590ba342dd7a6333c08d205cb354a8e3ff8670f5bf9fd9db8aba58f46b70\" returns successfully" Feb 13 19:53:08.948622 systemd[1]: cri-containerd-3ffb590ba342dd7a6333c08d205cb354a8e3ff8670f5bf9fd9db8aba58f46b70.scope: Deactivated successfully. Feb 13 19:53:09.017256 containerd[2023]: time="2025-02-13T19:53:09.016926830Z" level=info msg="shim disconnected" id=3ffb590ba342dd7a6333c08d205cb354a8e3ff8670f5bf9fd9db8aba58f46b70 namespace=k8s.io Feb 13 19:53:09.017256 containerd[2023]: time="2025-02-13T19:53:09.017127054Z" level=warning msg="cleaning up after shim disconnected" id=3ffb590ba342dd7a6333c08d205cb354a8e3ff8670f5bf9fd9db8aba58f46b70 namespace=k8s.io Feb 13 19:53:09.017256 containerd[2023]: time="2025-02-13T19:53:09.017194072Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:53:09.254290 containerd[2023]: time="2025-02-13T19:53:09.253529566Z" level=info msg="CreateContainer within sandbox \"0edba0b3471c39027ced583b0683ed97df0c09ceaac0302ca6b68921b02da3dd\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 19:53:09.285000 containerd[2023]: time="2025-02-13T19:53:09.284628528Z" level=info msg="CreateContainer within sandbox \"0edba0b3471c39027ced583b0683ed97df0c09ceaac0302ca6b68921b02da3dd\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"219ca5f395d79ab1f7be3f0834d7b1a629f26eca690b80ffe0b1aa16fc2e37ee\"" Feb 13 19:53:09.289088 containerd[2023]: time="2025-02-13T19:53:09.288339487Z" level=info msg="StartContainer for \"219ca5f395d79ab1f7be3f0834d7b1a629f26eca690b80ffe0b1aa16fc2e37ee\"" Feb 13 19:53:09.344312 systemd[1]: Started cri-containerd-219ca5f395d79ab1f7be3f0834d7b1a629f26eca690b80ffe0b1aa16fc2e37ee.scope - libcontainer container 219ca5f395d79ab1f7be3f0834d7b1a629f26eca690b80ffe0b1aa16fc2e37ee. 
Feb 13 19:53:09.406153 containerd[2023]: time="2025-02-13T19:53:09.406074189Z" level=info msg="StartContainer for \"219ca5f395d79ab1f7be3f0834d7b1a629f26eca690b80ffe0b1aa16fc2e37ee\" returns successfully"
Feb 13 19:53:09.420645 systemd[1]: cri-containerd-219ca5f395d79ab1f7be3f0834d7b1a629f26eca690b80ffe0b1aa16fc2e37ee.scope: Deactivated successfully.
Feb 13 19:53:09.498798 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-219ca5f395d79ab1f7be3f0834d7b1a629f26eca690b80ffe0b1aa16fc2e37ee-rootfs.mount: Deactivated successfully.
Feb 13 19:53:09.511138 containerd[2023]: time="2025-02-13T19:53:09.510907472Z" level=info msg="shim disconnected" id=219ca5f395d79ab1f7be3f0834d7b1a629f26eca690b80ffe0b1aa16fc2e37ee namespace=k8s.io
Feb 13 19:53:09.511138 containerd[2023]: time="2025-02-13T19:53:09.511026908Z" level=warning msg="cleaning up after shim disconnected" id=219ca5f395d79ab1f7be3f0834d7b1a629f26eca690b80ffe0b1aa16fc2e37ee namespace=k8s.io
Feb 13 19:53:09.511138 containerd[2023]: time="2025-02-13T19:53:09.511051256Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:53:10.253922 containerd[2023]: time="2025-02-13T19:53:10.253822730Z" level=info msg="CreateContainer within sandbox \"0edba0b3471c39027ced583b0683ed97df0c09ceaac0302ca6b68921b02da3dd\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 19:53:10.297148 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1594159455.mount: Deactivated successfully.
Feb 13 19:53:10.304241 containerd[2023]: time="2025-02-13T19:53:10.304159938Z" level=info msg="CreateContainer within sandbox \"0edba0b3471c39027ced583b0683ed97df0c09ceaac0302ca6b68921b02da3dd\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e7dc1baade85f785a31dc8e6f24e692147d25035abed4c55741a0d901941aab4\""
Feb 13 19:53:10.306070 containerd[2023]: time="2025-02-13T19:53:10.305254657Z" level=info msg="StartContainer for \"e7dc1baade85f785a31dc8e6f24e692147d25035abed4c55741a0d901941aab4\""
Feb 13 19:53:10.362323 systemd[1]: Started cri-containerd-e7dc1baade85f785a31dc8e6f24e692147d25035abed4c55741a0d901941aab4.scope - libcontainer container e7dc1baade85f785a31dc8e6f24e692147d25035abed4c55741a0d901941aab4.
Feb 13 19:53:10.443541 containerd[2023]: time="2025-02-13T19:53:10.443438231Z" level=info msg="StartContainer for \"e7dc1baade85f785a31dc8e6f24e692147d25035abed4c55741a0d901941aab4\" returns successfully"
Feb 13 19:53:10.454181 systemd[1]: cri-containerd-e7dc1baade85f785a31dc8e6f24e692147d25035abed4c55741a0d901941aab4.scope: Deactivated successfully.
Feb 13 19:53:10.522746 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e7dc1baade85f785a31dc8e6f24e692147d25035abed4c55741a0d901941aab4-rootfs.mount: Deactivated successfully.
Feb 13 19:53:10.530958 containerd[2023]: time="2025-02-13T19:53:10.530473724Z" level=info msg="shim disconnected" id=e7dc1baade85f785a31dc8e6f24e692147d25035abed4c55741a0d901941aab4 namespace=k8s.io
Feb 13 19:53:10.531775 containerd[2023]: time="2025-02-13T19:53:10.530912231Z" level=warning msg="cleaning up after shim disconnected" id=e7dc1baade85f785a31dc8e6f24e692147d25035abed4c55741a0d901941aab4 namespace=k8s.io
Feb 13 19:53:10.531775 containerd[2023]: time="2025-02-13T19:53:10.531221193Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:53:11.266987 containerd[2023]: time="2025-02-13T19:53:11.264888469Z" level=info msg="CreateContainer within sandbox \"0edba0b3471c39027ced583b0683ed97df0c09ceaac0302ca6b68921b02da3dd\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 19:53:11.304826 containerd[2023]: time="2025-02-13T19:53:11.303510366Z" level=info msg="CreateContainer within sandbox \"0edba0b3471c39027ced583b0683ed97df0c09ceaac0302ca6b68921b02da3dd\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b432bc73c731be62c1dfbc4dd59f9c8ebaf159eeb82b1805a90499a543f0affc\""
Feb 13 19:53:11.307866 containerd[2023]: time="2025-02-13T19:53:11.307747632Z" level=info msg="StartContainer for \"b432bc73c731be62c1dfbc4dd59f9c8ebaf159eeb82b1805a90499a543f0affc\""
Feb 13 19:53:11.391296 systemd[1]: Started cri-containerd-b432bc73c731be62c1dfbc4dd59f9c8ebaf159eeb82b1805a90499a543f0affc.scope - libcontainer container b432bc73c731be62c1dfbc4dd59f9c8ebaf159eeb82b1805a90499a543f0affc.
Feb 13 19:53:11.456188 systemd[1]: cri-containerd-b432bc73c731be62c1dfbc4dd59f9c8ebaf159eeb82b1805a90499a543f0affc.scope: Deactivated successfully.
Feb 13 19:53:11.464986 containerd[2023]: time="2025-02-13T19:53:11.462894364Z" level=info msg="StartContainer for \"b432bc73c731be62c1dfbc4dd59f9c8ebaf159eeb82b1805a90499a543f0affc\" returns successfully"
Feb 13 19:53:11.519547 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b432bc73c731be62c1dfbc4dd59f9c8ebaf159eeb82b1805a90499a543f0affc-rootfs.mount: Deactivated successfully.
Feb 13 19:53:11.524151 containerd[2023]: time="2025-02-13T19:53:11.522667496Z" level=info msg="shim disconnected" id=b432bc73c731be62c1dfbc4dd59f9c8ebaf159eeb82b1805a90499a543f0affc namespace=k8s.io
Feb 13 19:53:11.524151 containerd[2023]: time="2025-02-13T19:53:11.522754731Z" level=warning msg="cleaning up after shim disconnected" id=b432bc73c731be62c1dfbc4dd59f9c8ebaf159eeb82b1805a90499a543f0affc namespace=k8s.io
Feb 13 19:53:11.524151 containerd[2023]: time="2025-02-13T19:53:11.522782957Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:53:12.272267 containerd[2023]: time="2025-02-13T19:53:12.272190063Z" level=info msg="CreateContainer within sandbox \"0edba0b3471c39027ced583b0683ed97df0c09ceaac0302ca6b68921b02da3dd\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 19:53:12.305811 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount951088730.mount: Deactivated successfully.
Feb 13 19:53:12.312672 containerd[2023]: time="2025-02-13T19:53:12.312585043Z" level=info msg="CreateContainer within sandbox \"0edba0b3471c39027ced583b0683ed97df0c09ceaac0302ca6b68921b02da3dd\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"16fb3c543e871b0b0b668b247135dcc340f3c3188db0207c603bd2268bac8bc5\""
Feb 13 19:53:12.313493 containerd[2023]: time="2025-02-13T19:53:12.313437757Z" level=info msg="StartContainer for \"16fb3c543e871b0b0b668b247135dcc340f3c3188db0207c603bd2268bac8bc5\""
Feb 13 19:53:12.380494 systemd[1]: Started cri-containerd-16fb3c543e871b0b0b668b247135dcc340f3c3188db0207c603bd2268bac8bc5.scope - libcontainer container 16fb3c543e871b0b0b668b247135dcc340f3c3188db0207c603bd2268bac8bc5.
Feb 13 19:53:12.455088 containerd[2023]: time="2025-02-13T19:53:12.454222995Z" level=info msg="StartContainer for \"16fb3c543e871b0b0b668b247135dcc340f3c3188db0207c603bd2268bac8bc5\" returns successfully"
Feb 13 19:53:13.336990 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Feb 13 19:53:17.905239 (udev-worker)[6174]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:53:17.911280 (udev-worker)[6175]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:53:17.918318 systemd-networkd[1933]: lxc_health: Link UP
Feb 13 19:53:17.941485 systemd-networkd[1933]: lxc_health: Gained carrier
Feb 13 19:53:18.618027 kubelet[3545]: I0213 19:53:18.617853 3545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ng5l5" podStartSLOduration=12.617829743 podStartE2EDuration="12.617829743s" podCreationTimestamp="2025-02-13 19:53:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:53:13.320441854 +0000 UTC m=+111.051145285" watchObservedRunningTime="2025-02-13 19:53:18.617829743 +0000 UTC m=+116.348533150"
Feb 13 19:53:19.922073 systemd-networkd[1933]: lxc_health: Gained IPv6LL
Feb 13 19:53:20.868787 systemd[1]: run-containerd-runc-k8s.io-16fb3c543e871b0b0b668b247135dcc340f3c3188db0207c603bd2268bac8bc5-runc.mUy7O4.mount: Deactivated successfully.
Feb 13 19:53:22.555253 containerd[2023]: time="2025-02-13T19:53:22.555113254Z" level=info msg="StopPodSandbox for \"5f69cd8f4e720b91f4ff07562316ab78504ab35fd3c623ee2b92c4ce3fe7d27a\""
Feb 13 19:53:22.555916 containerd[2023]: time="2025-02-13T19:53:22.555373714Z" level=info msg="TearDown network for sandbox \"5f69cd8f4e720b91f4ff07562316ab78504ab35fd3c623ee2b92c4ce3fe7d27a\" successfully"
Feb 13 19:53:22.555916 containerd[2023]: time="2025-02-13T19:53:22.555404950Z" level=info msg="StopPodSandbox for \"5f69cd8f4e720b91f4ff07562316ab78504ab35fd3c623ee2b92c4ce3fe7d27a\" returns successfully"
Feb 13 19:53:22.557091 containerd[2023]: time="2025-02-13T19:53:22.556796638Z" level=info msg="RemovePodSandbox for \"5f69cd8f4e720b91f4ff07562316ab78504ab35fd3c623ee2b92c4ce3fe7d27a\""
Feb 13 19:53:22.557335 containerd[2023]: time="2025-02-13T19:53:22.557220994Z" level=info msg="Forcibly stopping sandbox \"5f69cd8f4e720b91f4ff07562316ab78504ab35fd3c623ee2b92c4ce3fe7d27a\""
Feb 13 19:53:22.557532 containerd[2023]: time="2025-02-13T19:53:22.557413042Z" level=info msg="TearDown network for sandbox \"5f69cd8f4e720b91f4ff07562316ab78504ab35fd3c623ee2b92c4ce3fe7d27a\" successfully"
Feb 13 19:53:22.567276 containerd[2023]: time="2025-02-13T19:53:22.565983358Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5f69cd8f4e720b91f4ff07562316ab78504ab35fd3c623ee2b92c4ce3fe7d27a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:53:22.567276 containerd[2023]: time="2025-02-13T19:53:22.566090530Z" level=info msg="RemovePodSandbox \"5f69cd8f4e720b91f4ff07562316ab78504ab35fd3c623ee2b92c4ce3fe7d27a\" returns successfully"
Feb 13 19:53:22.567276 containerd[2023]: time="2025-02-13T19:53:22.567027202Z" level=info msg="StopPodSandbox for \"64c594c12a529b9fb313b07a2450a2966b5633a629b92a03d5d0b6f10b69a7b7\""
Feb 13 19:53:22.567688 containerd[2023]: time="2025-02-13T19:53:22.567327442Z" level=info msg="TearDown network for sandbox \"64c594c12a529b9fb313b07a2450a2966b5633a629b92a03d5d0b6f10b69a7b7\" successfully"
Feb 13 19:53:22.567688 containerd[2023]: time="2025-02-13T19:53:22.567382858Z" level=info msg="StopPodSandbox for \"64c594c12a529b9fb313b07a2450a2966b5633a629b92a03d5d0b6f10b69a7b7\" returns successfully"
Feb 13 19:53:22.568869 containerd[2023]: time="2025-02-13T19:53:22.568795354Z" level=info msg="RemovePodSandbox for \"64c594c12a529b9fb313b07a2450a2966b5633a629b92a03d5d0b6f10b69a7b7\""
Feb 13 19:53:22.569047 containerd[2023]: time="2025-02-13T19:53:22.568883818Z" level=info msg="Forcibly stopping sandbox \"64c594c12a529b9fb313b07a2450a2966b5633a629b92a03d5d0b6f10b69a7b7\""
Feb 13 19:53:22.569106 containerd[2023]: time="2025-02-13T19:53:22.569065678Z" level=info msg="TearDown network for sandbox \"64c594c12a529b9fb313b07a2450a2966b5633a629b92a03d5d0b6f10b69a7b7\" successfully"
Feb 13 19:53:22.580996 containerd[2023]: time="2025-02-13T19:53:22.579003202Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"64c594c12a529b9fb313b07a2450a2966b5633a629b92a03d5d0b6f10b69a7b7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:53:22.580996 containerd[2023]: time="2025-02-13T19:53:22.579117346Z" level=info msg="RemovePodSandbox \"64c594c12a529b9fb313b07a2450a2966b5633a629b92a03d5d0b6f10b69a7b7\" returns successfully"
Feb 13 19:53:22.644663 ntpd[1991]: Listen normally on 15 lxc_health [fe80::464:a7ff:fe88:8edd%14]:123
Feb 13 19:53:22.646269 ntpd[1991]: 13 Feb 19:53:22 ntpd[1991]: Listen normally on 15 lxc_health [fe80::464:a7ff:fe88:8edd%14]:123
Feb 13 19:53:23.412637 sshd[5340]: pam_unix(sshd:session): session closed for user core
Feb 13 19:53:23.422769 systemd[1]: sshd@27-172.31.30.61:22-139.178.89.65:48866.service: Deactivated successfully.
Feb 13 19:53:23.422867 systemd-logind[1996]: Session 28 logged out. Waiting for processes to exit.
Feb 13 19:53:23.431801 systemd[1]: session-28.scope: Deactivated successfully.
Feb 13 19:53:23.439621 systemd-logind[1996]: Removed session 28.