Feb 13 19:02:04.192568 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Feb 13 19:02:04.192627 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Thu Feb 13 17:39:57 -00 2025
Feb 13 19:02:04.192655 kernel: KASLR disabled due to lack of seed
Feb 13 19:02:04.192672 kernel: efi: EFI v2.7 by EDK II
Feb 13 19:02:04.192688 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a736a98 MEMRESERVE=0x78557598
Feb 13 19:02:04.192704 kernel: secureboot: Secure boot disabled
Feb 13 19:02:04.192722 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:02:04.192738 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Feb 13 19:02:04.192755 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Feb 13 19:02:04.192771 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 13 19:02:04.192792 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Feb 13 19:02:04.192831 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 13 19:02:04.192851 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Feb 13 19:02:04.192868 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Feb 13 19:02:04.192887 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Feb 13 19:02:04.192911 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 13 19:02:04.192929 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Feb 13 19:02:04.192946 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Feb 13 19:02:04.192963 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Feb 13 19:02:04.192981 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Feb 13 19:02:04.192998 kernel: printk: bootconsole [uart0] enabled
Feb 13 19:02:04.193014 kernel: NUMA: Failed to initialise from firmware
Feb 13 19:02:04.193032 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 19:02:04.193048 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Feb 13 19:02:04.193064 kernel: Zone ranges:
Feb 13 19:02:04.193114 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Feb 13 19:02:04.193138 kernel: DMA32 empty
Feb 13 19:02:04.193155 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Feb 13 19:02:04.193172 kernel: Movable zone start for each node
Feb 13 19:02:04.193188 kernel: Early memory node ranges
Feb 13 19:02:04.193205 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Feb 13 19:02:04.193225 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Feb 13 19:02:04.193242 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Feb 13 19:02:04.193258 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Feb 13 19:02:04.193274 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Feb 13 19:02:04.193291 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Feb 13 19:02:04.193307 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Feb 13 19:02:04.193323 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Feb 13 19:02:04.193344 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 19:02:04.193361 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Feb 13 19:02:04.193385 kernel: psci: probing for conduit method from ACPI.
Feb 13 19:02:04.193402 kernel: psci: PSCIv1.0 detected in firmware.
Feb 13 19:02:04.193420 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 19:02:04.193441 kernel: psci: Trusted OS migration not required
Feb 13 19:02:04.193458 kernel: psci: SMC Calling Convention v1.1
Feb 13 19:02:04.193475 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 19:02:04.193493 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 19:02:04.193511 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 13 19:02:04.193528 kernel: Detected PIPT I-cache on CPU0
Feb 13 19:02:04.193546 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 19:02:04.193563 kernel: CPU features: detected: Spectre-v2
Feb 13 19:02:04.193580 kernel: CPU features: detected: Spectre-v3a
Feb 13 19:02:04.193597 kernel: CPU features: detected: Spectre-BHB
Feb 13 19:02:04.193618 kernel: CPU features: detected: ARM erratum 1742098
Feb 13 19:02:04.193637 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Feb 13 19:02:04.193659 kernel: alternatives: applying boot alternatives
Feb 13 19:02:04.193678 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f06bad36699a22ae88c1968cd72b62b3503d97da521712e50a4b744320b1ba33
Feb 13 19:02:04.193697 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:02:04.193714 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 19:02:04.193732 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:02:04.193749 kernel: Fallback order for Node 0: 0
Feb 13 19:02:04.193766 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Feb 13 19:02:04.193783 kernel: Policy zone: Normal
Feb 13 19:02:04.193800 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:02:04.193817 kernel: software IO TLB: area num 2.
Feb 13 19:02:04.193839 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Feb 13 19:02:04.193857 kernel: Memory: 3821240K/4030464K available (10304K kernel code, 2186K rwdata, 8092K rodata, 38336K init, 897K bss, 209224K reserved, 0K cma-reserved)
Feb 13 19:02:04.193874 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 19:02:04.193892 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:02:04.193910 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:02:04.193928 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 19:02:04.193945 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:02:04.193963 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:02:04.193981 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:02:04.193998 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 19:02:04.194015 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 19:02:04.194037 kernel: GICv3: 96 SPIs implemented
Feb 13 19:02:04.194054 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 19:02:04.194071 kernel: Root IRQ handler: gic_handle_irq
Feb 13 19:02:04.199317 kernel: GICv3: GICv3 features: 16 PPIs
Feb 13 19:02:04.199338 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Feb 13 19:02:04.199356 kernel: ITS [mem 0x10080000-0x1009ffff]
Feb 13 19:02:04.199374 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 19:02:04.199392 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 19:02:04.199410 kernel: GICv3: using LPI property table @0x00000004000d0000
Feb 13 19:02:04.199427 kernel: ITS: Using hypervisor restricted LPI range [128]
Feb 13 19:02:04.199446 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Feb 13 19:02:04.199463 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:02:04.199490 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Feb 13 19:02:04.199508 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Feb 13 19:02:04.199526 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Feb 13 19:02:04.199544 kernel: Console: colour dummy device 80x25
Feb 13 19:02:04.199561 kernel: printk: console [tty1] enabled
Feb 13 19:02:04.199597 kernel: ACPI: Core revision 20230628
Feb 13 19:02:04.199619 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Feb 13 19:02:04.199637 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:02:04.199655 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:02:04.199673 kernel: landlock: Up and running.
Feb 13 19:02:04.199697 kernel: SELinux: Initializing.
Feb 13 19:02:04.199715 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:02:04.199734 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:02:04.199751 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:02:04.199769 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:02:04.199786 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:02:04.199805 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:02:04.199822 kernel: Platform MSI: ITS@0x10080000 domain created
Feb 13 19:02:04.199845 kernel: PCI/MSI: ITS@0x10080000 domain created
Feb 13 19:02:04.199863 kernel: Remapping and enabling EFI services.
Feb 13 19:02:04.199880 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:02:04.199898 kernel: Detected PIPT I-cache on CPU1
Feb 13 19:02:04.199916 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Feb 13 19:02:04.199934 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Feb 13 19:02:04.199951 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Feb 13 19:02:04.199968 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 19:02:04.199986 kernel: SMP: Total of 2 processors activated.
Feb 13 19:02:04.200003 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 19:02:04.200025 kernel: CPU features: detected: 32-bit EL1 Support
Feb 13 19:02:04.200043 kernel: CPU features: detected: CRC32 instructions
Feb 13 19:02:04.200071 kernel: CPU: All CPU(s) started at EL1
Feb 13 19:02:04.200119 kernel: alternatives: applying system-wide alternatives
Feb 13 19:02:04.200138 kernel: devtmpfs: initialized
Feb 13 19:02:04.200157 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:02:04.200175 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 19:02:04.200194 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:02:04.200212 kernel: SMBIOS 3.0.0 present.
Feb 13 19:02:04.200235 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Feb 13 19:02:04.200254 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:02:04.200272 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 19:02:04.200290 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 19:02:04.200309 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 19:02:04.200327 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:02:04.200345 kernel: audit: type=2000 audit(0.222:1): state=initialized audit_enabled=0 res=1
Feb 13 19:02:04.200368 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:02:04.200387 kernel: cpuidle: using governor menu
Feb 13 19:02:04.200405 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 19:02:04.200423 kernel: ASID allocator initialised with 65536 entries
Feb 13 19:02:04.200441 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:02:04.200459 kernel: Serial: AMBA PL011 UART driver
Feb 13 19:02:04.200478 kernel: Modules: 17760 pages in range for non-PLT usage
Feb 13 19:02:04.200496 kernel: Modules: 509280 pages in range for PLT usage
Feb 13 19:02:04.200514 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:02:04.200537 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:02:04.200556 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 19:02:04.200575 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 19:02:04.200593 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:02:04.200611 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:02:04.200630 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 19:02:04.200648 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 19:02:04.200666 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:02:04.200684 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:02:04.200708 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:02:04.200727 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:02:04.200746 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 19:02:04.200764 kernel: ACPI: Interpreter enabled
Feb 13 19:02:04.200782 kernel: ACPI: Using GIC for interrupt routing
Feb 13 19:02:04.200800 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 19:02:04.200838 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Feb 13 19:02:04.202391 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:02:04.202667 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 19:02:04.202893 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 19:02:04.203136 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Feb 13 19:02:04.203388 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Feb 13 19:02:04.203415 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Feb 13 19:02:04.203434 kernel: acpiphp: Slot [1] registered
Feb 13 19:02:04.203453 kernel: acpiphp: Slot [2] registered
Feb 13 19:02:04.203472 kernel: acpiphp: Slot [3] registered
Feb 13 19:02:04.203498 kernel: acpiphp: Slot [4] registered
Feb 13 19:02:04.203517 kernel: acpiphp: Slot [5] registered
Feb 13 19:02:04.203536 kernel: acpiphp: Slot [6] registered
Feb 13 19:02:04.203554 kernel: acpiphp: Slot [7] registered
Feb 13 19:02:04.203572 kernel: acpiphp: Slot [8] registered
Feb 13 19:02:04.203591 kernel: acpiphp: Slot [9] registered
Feb 13 19:02:04.203609 kernel: acpiphp: Slot [10] registered
Feb 13 19:02:04.203627 kernel: acpiphp: Slot [11] registered
Feb 13 19:02:04.203645 kernel: acpiphp: Slot [12] registered
Feb 13 19:02:04.203664 kernel: acpiphp: Slot [13] registered
Feb 13 19:02:04.203687 kernel: acpiphp: Slot [14] registered
Feb 13 19:02:04.203705 kernel: acpiphp: Slot [15] registered
Feb 13 19:02:04.203724 kernel: acpiphp: Slot [16] registered
Feb 13 19:02:04.203743 kernel: acpiphp: Slot [17] registered
Feb 13 19:02:04.203761 kernel: acpiphp: Slot [18] registered
Feb 13 19:02:04.203779 kernel: acpiphp: Slot [19] registered
Feb 13 19:02:04.203797 kernel: acpiphp: Slot [20] registered
Feb 13 19:02:04.203815 kernel: acpiphp: Slot [21] registered
Feb 13 19:02:04.203833 kernel: acpiphp: Slot [22] registered
Feb 13 19:02:04.203856 kernel: acpiphp: Slot [23] registered
Feb 13 19:02:04.203875 kernel: acpiphp: Slot [24] registered
Feb 13 19:02:04.203894 kernel: acpiphp: Slot [25] registered
Feb 13 19:02:04.203912 kernel: acpiphp: Slot [26] registered
Feb 13 19:02:04.203930 kernel: acpiphp: Slot [27] registered
Feb 13 19:02:04.203948 kernel: acpiphp: Slot [28] registered
Feb 13 19:02:04.203966 kernel: acpiphp: Slot [29] registered
Feb 13 19:02:04.203984 kernel: acpiphp: Slot [30] registered
Feb 13 19:02:04.204002 kernel: acpiphp: Slot [31] registered
Feb 13 19:02:04.204020 kernel: PCI host bridge to bus 0000:00
Feb 13 19:02:04.204359 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Feb 13 19:02:04.204578 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 19:02:04.204772 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Feb 13 19:02:04.204988 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Feb 13 19:02:04.207024 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Feb 13 19:02:04.207326 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Feb 13 19:02:04.207547 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Feb 13 19:02:04.207786 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 13 19:02:04.207998 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Feb 13 19:02:04.208282 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 19:02:04.208531 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 13 19:02:04.208746 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Feb 13 19:02:04.208984 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Feb 13 19:02:04.212378 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Feb 13 19:02:04.212635 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 19:02:04.212891 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Feb 13 19:02:04.214744 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Feb 13 19:02:04.214992 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Feb 13 19:02:04.215291 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Feb 13 19:02:04.215510 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Feb 13 19:02:04.215712 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Feb 13 19:02:04.215893 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 19:02:04.216297 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Feb 13 19:02:04.218407 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 19:02:04.218443 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 19:02:04.218464 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 19:02:04.218483 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 19:02:04.218502 kernel: iommu: Default domain type: Translated
Feb 13 19:02:04.218531 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 19:02:04.218550 kernel: efivars: Registered efivars operations
Feb 13 19:02:04.218569 kernel: vgaarb: loaded
Feb 13 19:02:04.218588 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 19:02:04.218607 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:02:04.218626 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:02:04.218644 kernel: pnp: PnP ACPI init
Feb 13 19:02:04.218890 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Feb 13 19:02:04.218923 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 19:02:04.218942 kernel: NET: Registered PF_INET protocol family
Feb 13 19:02:04.218961 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:02:04.218980 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 19:02:04.218998 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:02:04.219017 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:02:04.219035 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 19:02:04.219053 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 19:02:04.219072 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:02:04.219121 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:02:04.219140 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:02:04.219158 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:02:04.219177 kernel: kvm [1]: HYP mode not available
Feb 13 19:02:04.219195 kernel: Initialise system trusted keyrings
Feb 13 19:02:04.219214 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 19:02:04.219232 kernel: Key type asymmetric registered
Feb 13 19:02:04.219250 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:02:04.219268 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 19:02:04.219291 kernel: io scheduler mq-deadline registered
Feb 13 19:02:04.219309 kernel: io scheduler kyber registered
Feb 13 19:02:04.219327 kernel: io scheduler bfq registered
Feb 13 19:02:04.219545 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Feb 13 19:02:04.219571 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 19:02:04.219590 kernel: ACPI: button: Power Button [PWRB]
Feb 13 19:02:04.219609 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Feb 13 19:02:04.219627 kernel: ACPI: button: Sleep Button [SLPB]
Feb 13 19:02:04.219650 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:02:04.219670 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Feb 13 19:02:04.219875 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Feb 13 19:02:04.219901 kernel: printk: console [ttyS0] disabled
Feb 13 19:02:04.219920 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Feb 13 19:02:04.219938 kernel: printk: console [ttyS0] enabled
Feb 13 19:02:04.219956 kernel: printk: bootconsole [uart0] disabled
Feb 13 19:02:04.219974 kernel: thunder_xcv, ver 1.0
Feb 13 19:02:04.219992 kernel: thunder_bgx, ver 1.0
Feb 13 19:02:04.220010 kernel: nicpf, ver 1.0
Feb 13 19:02:04.220034 kernel: nicvf, ver 1.0
Feb 13 19:02:04.221991 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 19:02:04.222237 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T19:02:03 UTC (1739473323)
Feb 13 19:02:04.222264 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 19:02:04.222284 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Feb 13 19:02:04.222303 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 19:02:04.222322 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 19:02:04.222350 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:02:04.222372 kernel: Segment Routing with IPv6
Feb 13 19:02:04.222390 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:02:04.222408 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:02:04.222426 kernel: Key type dns_resolver registered
Feb 13 19:02:04.222445 kernel: registered taskstats version 1
Feb 13 19:02:04.222464 kernel: Loading compiled-in X.509 certificates
Feb 13 19:02:04.222482 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 58bec1a0c6b8a133d1af4ea745973da0351f7027'
Feb 13 19:02:04.222501 kernel: Key type .fscrypt registered
Feb 13 19:02:04.222518 kernel: Key type fscrypt-provisioning registered
Feb 13 19:02:04.222542 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:02:04.222561 kernel: ima: Allocated hash algorithm: sha1
Feb 13 19:02:04.222579 kernel: ima: No architecture policies found
Feb 13 19:02:04.222597 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 19:02:04.222615 kernel: clk: Disabling unused clocks
Feb 13 19:02:04.222634 kernel: Freeing unused kernel memory: 38336K
Feb 13 19:02:04.222652 kernel: Run /init as init process
Feb 13 19:02:04.222671 kernel: with arguments:
Feb 13 19:02:04.222729 kernel: /init
Feb 13 19:02:04.222822 kernel: with environment:
Feb 13 19:02:04.222842 kernel: HOME=/
Feb 13 19:02:04.222861 kernel: TERM=linux
Feb 13 19:02:04.222879 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 19:02:04.222901 systemd[1]: Successfully made /usr/ read-only.
Feb 13 19:02:04.222925 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Feb 13 19:02:04.222947 systemd[1]: Detected virtualization amazon.
Feb 13 19:02:04.222972 systemd[1]: Detected architecture arm64.
Feb 13 19:02:04.222992 systemd[1]: Running in initrd.
Feb 13 19:02:04.223011 systemd[1]: No hostname configured, using default hostname.
Feb 13 19:02:04.223032 systemd[1]: Hostname set to .
Feb 13 19:02:04.223051 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:02:04.223072 systemd[1]: Queued start job for default target initrd.target.
Feb 13 19:02:04.223115 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:02:04.223136 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:02:04.223158 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 19:02:04.223185 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:02:04.223206 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 19:02:04.223228 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 19:02:04.223250 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 19:02:04.223271 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 19:02:04.223291 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:02:04.223317 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:02:04.223337 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:02:04.223358 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:02:04.223378 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:02:04.223398 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:02:04.223418 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:02:04.223439 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:02:04.223459 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:02:04.223479 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Feb 13 19:02:04.223504 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:02:04.223524 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:02:04.223544 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:02:04.223564 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:02:04.223584 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 19:02:04.223604 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:02:04.223624 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 19:02:04.223644 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 19:02:04.223669 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:02:04.223743 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:02:04.223770 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:02:04.223790 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 19:02:04.223811 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:02:04.223832 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 19:02:04.223863 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:02:04.226233 systemd-journald[251]: Collecting audit messages is disabled.
Feb 13 19:02:04.226281 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 19:02:04.226309 kernel: Bridge firewalling registered
Feb 13 19:02:04.226330 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:02:04.226355 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:02:04.226380 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:02:04.226404 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:02:04.226424 systemd-journald[251]: Journal started
Feb 13 19:02:04.226465 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2f5e99e964c09e7d149057da107f15) is 8M, max 75.3M, 67.3M free.
Feb 13 19:02:04.154336 systemd-modules-load[252]: Inserted module 'overlay'
Feb 13 19:02:04.232160 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:02:04.187623 systemd-modules-load[252]: Inserted module 'br_netfilter'
Feb 13 19:02:04.236624 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:02:04.247368 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:02:04.262437 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:02:04.269016 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:02:04.302237 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:02:04.311749 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:02:04.316895 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:02:04.326577 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 19:02:04.354369 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:02:04.382741 dracut-cmdline[288]: dracut-dracut-053
Feb 13 19:02:04.392593 dracut-cmdline[288]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f06bad36699a22ae88c1968cd72b62b3503d97da521712e50a4b744320b1ba33
Feb 13 19:02:04.436433 systemd-resolved[289]: Positive Trust Anchors:
Feb 13 19:02:04.436470 systemd-resolved[289]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:02:04.436535 systemd-resolved[289]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:02:04.572117 kernel: SCSI subsystem initialized
Feb 13 19:02:04.579208 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 19:02:04.592216 kernel: iscsi: registered transport (tcp)
Feb 13 19:02:04.614489 kernel: iscsi: registered transport (qla4xxx)
Feb 13 19:02:04.614562 kernel: QLogic iSCSI HBA Driver
Feb 13 19:02:04.669118 kernel: random: crng init done
Feb 13 19:02:04.669509 systemd-resolved[289]: Defaulting to hostname 'linux'.
Feb 13 19:02:04.671292 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:02:04.675354 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:02:04.704153 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:02:04.714359 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 19:02:04.756491 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 19:02:04.756572 kernel: device-mapper: uevent: version 1.0.3
Feb 13 19:02:04.756599 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 19:02:04.823131 kernel: raid6: neonx8 gen() 6546 MB/s
Feb 13 19:02:04.840114 kernel: raid6: neonx4 gen() 6515 MB/s
Feb 13 19:02:04.857117 kernel: raid6: neonx2 gen() 5414 MB/s
Feb 13 19:02:04.874112 kernel: raid6: neonx1 gen() 3922 MB/s
Feb 13 19:02:04.891114 kernel: raid6: int64x8 gen() 3584 MB/s
Feb 13 19:02:04.908114 kernel: raid6: int64x4 gen() 3660 MB/s
Feb 13 19:02:04.925112 kernel: raid6: int64x2 gen() 3539 MB/s
Feb 13 19:02:04.942871 kernel: raid6: int64x1 gen() 2753 MB/s
Feb 13 19:02:04.942905 kernel: raid6: using algorithm neonx8 gen() 6546 MB/s
Feb 13 19:02:04.960861 kernel: raid6: .... xor() 4762 MB/s, rmw enabled
Feb 13 19:02:04.960915 kernel: raid6: using neon recovery algorithm
Feb 13 19:02:04.968922 kernel: xor: measuring software checksum speed
Feb 13 19:02:04.968977 kernel: 8regs : 12943 MB/sec
Feb 13 19:02:04.970112 kernel: 32regs : 12042 MB/sec
Feb 13 19:02:04.972110 kernel: arm64_neon : 8956 MB/sec
Feb 13 19:02:04.972144 kernel: xor: using function: 8regs (12943 MB/sec)
Feb 13 19:02:05.055173 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 19:02:05.073205 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:02:05.084393 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:02:05.119621 systemd-udevd[471]: Using default interface naming scheme 'v255'.
Feb 13 19:02:05.129291 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:02:05.147408 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 19:02:05.182923 dracut-pre-trigger[476]: rd.md=0: removing MD RAID activation
Feb 13 19:02:05.238279 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:02:05.249502 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:02:05.360956 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:02:05.381517 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 19:02:05.414443 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:02:05.414932 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:02:05.418121 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:02:05.418229 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:02:05.452697 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 19:02:05.482633 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:02:05.549092 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 19:02:05.549174 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Feb 13 19:02:05.572171 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 13 19:02:05.572443 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 13 19:02:05.572684 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:21:fb:92:99:87
Feb 13 19:02:05.586742 (udev-worker)[529]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:02:05.603383 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:02:05.603624 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:02:05.625072 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Feb 13 19:02:05.625130 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 13 19:02:05.613145 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:02:05.625035 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:02:05.625346 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:02:05.628190 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:02:05.644308 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 13 19:02:05.653105 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 19:02:05.653181 kernel: GPT:9289727 != 16777215
Feb 13 19:02:05.653206 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 19:02:05.653242 kernel: GPT:9289727 != 16777215
Feb 13 19:02:05.654951 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 19:02:05.655006 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:02:05.656653 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:02:05.683152 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:02:05.706476 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:02:05.751746 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:02:05.783182 kernel: BTRFS: device fsid 4fff035f-dd55-45d8-9bb7-2a61f21b22d5 devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (537)
Feb 13 19:02:05.800149 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (526)
Feb 13 19:02:05.899289 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Feb 13 19:02:05.925356 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Feb 13 19:02:05.966995 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 19:02:05.987550 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Feb 13 19:02:05.987718 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Feb 13 19:02:06.009402 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 19:02:06.022065 disk-uuid[663]: Primary Header is updated.
Feb 13 19:02:06.022065 disk-uuid[663]: Secondary Entries is updated.
Feb 13 19:02:06.022065 disk-uuid[663]: Secondary Header is updated.
Feb 13 19:02:06.033119 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:02:06.043115 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:02:07.054116 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:02:07.056440 disk-uuid[664]: The operation has completed successfully.
Feb 13 19:02:07.232898 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 19:02:07.235184 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 19:02:07.342413 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 19:02:07.359199 sh[923]: Success
Feb 13 19:02:07.378206 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 19:02:07.476796 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 19:02:07.481029 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 19:02:07.491270 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 19:02:07.534766 kernel: BTRFS info (device dm-0): first mount of filesystem 4fff035f-dd55-45d8-9bb7-2a61f21b22d5
Feb 13 19:02:07.534829 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:02:07.534865 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 19:02:07.536461 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 19:02:07.537688 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 19:02:07.644116 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 19:02:07.656519 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 19:02:07.660429 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 19:02:07.674331 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 19:02:07.681400 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 19:02:07.715954 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d
Feb 13 19:02:07.716039 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:02:07.716070 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 19:02:07.724159 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 19:02:07.744208 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 19:02:07.747057 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d
Feb 13 19:02:07.759058 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 19:02:07.769402 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 19:02:07.884915 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:02:07.910341 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:02:07.958973 systemd-networkd[1117]: lo: Link UP
Feb 13 19:02:07.958987 systemd-networkd[1117]: lo: Gained carrier
Feb 13 19:02:07.962915 systemd-networkd[1117]: Enumeration completed
Feb 13 19:02:07.963057 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:02:07.964842 systemd-networkd[1117]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:02:07.964849 systemd-networkd[1117]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:02:07.965301 systemd[1]: Reached target network.target - Network.
Feb 13 19:02:07.971939 systemd-networkd[1117]: eth0: Link UP
Feb 13 19:02:07.971946 systemd-networkd[1117]: eth0: Gained carrier
Feb 13 19:02:07.971964 systemd-networkd[1117]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:02:08.003164 systemd-networkd[1117]: eth0: DHCPv4 address 172.31.18.242/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 19:02:08.134578 ignition[1029]: Ignition 2.20.0
Feb 13 19:02:08.134600 ignition[1029]: Stage: fetch-offline
Feb 13 19:02:08.140125 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:02:08.135021 ignition[1029]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:02:08.135045 ignition[1029]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:02:08.135536 ignition[1029]: Ignition finished successfully
Feb 13 19:02:08.155681 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 19:02:08.182478 ignition[1127]: Ignition 2.20.0
Feb 13 19:02:08.182510 ignition[1127]: Stage: fetch
Feb 13 19:02:08.183318 ignition[1127]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:02:08.183345 ignition[1127]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:02:08.183541 ignition[1127]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:02:08.202241 ignition[1127]: PUT result: OK
Feb 13 19:02:08.205483 ignition[1127]: parsed url from cmdline: ""
Feb 13 19:02:08.205499 ignition[1127]: no config URL provided
Feb 13 19:02:08.205513 ignition[1127]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 19:02:08.205538 ignition[1127]: no config at "/usr/lib/ignition/user.ign"
Feb 13 19:02:08.205580 ignition[1127]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:02:08.207181 ignition[1127]: PUT result: OK
Feb 13 19:02:08.207259 ignition[1127]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 13 19:02:08.211465 ignition[1127]: GET result: OK
Feb 13 19:02:08.211614 ignition[1127]: parsing config with SHA512: cb62bc320d57f7ae6952c7d9389f83c185e0e298b8331e05077155d411195ad7d3de1801f3de753818ed069ff3fe2569e49ae7356989cc62995c56ac0c4c91c7
Feb 13 19:02:08.225020 unknown[1127]: fetched base config from "system"
Feb 13 19:02:08.225047 unknown[1127]: fetched base config from "system"
Feb 13 19:02:08.226371 ignition[1127]: fetch: fetch complete
Feb 13 19:02:08.225061 unknown[1127]: fetched user config from "aws"
Feb 13 19:02:08.226384 ignition[1127]: fetch: fetch passed
Feb 13 19:02:08.231954 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 19:02:08.226473 ignition[1127]: Ignition finished successfully
Feb 13 19:02:08.244441 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 19:02:08.274136 ignition[1134]: Ignition 2.20.0
Feb 13 19:02:08.274633 ignition[1134]: Stage: kargs
Feb 13 19:02:08.275286 ignition[1134]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:02:08.275321 ignition[1134]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:02:08.275498 ignition[1134]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:02:08.278299 ignition[1134]: PUT result: OK
Feb 13 19:02:08.288314 ignition[1134]: kargs: kargs passed
Feb 13 19:02:08.288468 ignition[1134]: Ignition finished successfully
Feb 13 19:02:08.295135 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 19:02:08.306450 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 19:02:08.333183 ignition[1140]: Ignition 2.20.0
Feb 13 19:02:08.333204 ignition[1140]: Stage: disks
Feb 13 19:02:08.333758 ignition[1140]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:02:08.333783 ignition[1140]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:02:08.333934 ignition[1140]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:02:08.336683 ignition[1140]: PUT result: OK
Feb 13 19:02:08.345919 ignition[1140]: disks: disks passed
Feb 13 19:02:08.346004 ignition[1140]: Ignition finished successfully
Feb 13 19:02:08.351776 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 19:02:08.352546 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 19:02:08.355463 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:02:08.355747 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:02:08.356047 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:02:08.356655 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:02:08.372426 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 19:02:08.427741 systemd-fsck[1148]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 19:02:08.435560 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 19:02:08.541311 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 19:02:08.623374 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 24882d04-b1a5-4a27-95f1-925956e69b18 r/w with ordered data mode. Quota mode: none.
Feb 13 19:02:08.624565 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 19:02:08.625600 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:02:08.640270 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:02:08.646285 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 19:02:08.650428 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 19:02:08.650514 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 19:02:08.650560 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:02:08.669536 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 19:02:08.680401 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 19:02:08.690138 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1167)
Feb 13 19:02:08.693928 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d
Feb 13 19:02:08.693973 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:02:08.695612 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 19:02:08.711109 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 19:02:08.713841 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:02:09.075316 initrd-setup-root[1191]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 19:02:09.084679 initrd-setup-root[1198]: cut: /sysroot/etc/group: No such file or directory
Feb 13 19:02:09.092544 initrd-setup-root[1205]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 19:02:09.100016 initrd-setup-root[1212]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 19:02:09.325944 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 19:02:09.333334 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 19:02:09.337605 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 19:02:09.367129 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d
Feb 13 19:02:09.402029 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 19:02:09.410246 ignition[1280]: INFO : Ignition 2.20.0 Feb 13 19:02:09.412235 ignition[1280]: INFO : Stage: mount Feb 13 19:02:09.414048 ignition[1280]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:02:09.414048 ignition[1280]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 19:02:09.418480 ignition[1280]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 19:02:09.422422 ignition[1280]: INFO : PUT result: OK Feb 13 19:02:09.427112 ignition[1280]: INFO : mount: mount passed Feb 13 19:02:09.428970 ignition[1280]: INFO : Ignition finished successfully Feb 13 19:02:09.433160 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 19:02:09.446783 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 19:02:09.532466 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 19:02:09.545492 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 19:02:09.570126 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1291) Feb 13 19:02:09.573911 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d Feb 13 19:02:09.573958 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:02:09.573984 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 13 19:02:09.581114 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 13 19:02:09.584129 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 19:02:09.589231 systemd-networkd[1117]: eth0: Gained IPv6LL Feb 13 19:02:09.618475 ignition[1308]: INFO : Ignition 2.20.0 Feb 13 19:02:09.618475 ignition[1308]: INFO : Stage: files Feb 13 19:02:09.621993 ignition[1308]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:02:09.621993 ignition[1308]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 19:02:09.621993 ignition[1308]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 19:02:09.629509 ignition[1308]: INFO : PUT result: OK Feb 13 19:02:09.633335 ignition[1308]: DEBUG : files: compiled without relabeling support, skipping Feb 13 19:02:09.646054 ignition[1308]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 19:02:09.646054 ignition[1308]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 19:02:09.679023 ignition[1308]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 19:02:09.681965 ignition[1308]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 19:02:09.684912 unknown[1308]: wrote ssh authorized keys file for user: core Feb 13 19:02:09.689174 ignition[1308]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 19:02:09.689174 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Feb 13 19:02:09.689174 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Feb 13 19:02:09.773924 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 19:02:10.013133 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Feb 13 19:02:10.013133 ignition[1308]: INFO : files: createFilesystemsFiles: 
createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 19:02:10.020006 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Feb 13 19:02:10.374578 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 13 19:02:10.517530 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 19:02:10.517530 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Feb 13 19:02:10.527641 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 19:02:10.527641 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 19:02:10.527641 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 19:02:10.527641 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 19:02:10.527641 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 19:02:10.527641 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 19:02:10.527641 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 19:02:10.527641 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:02:10.527641 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:02:10.527641 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Feb 13 19:02:10.527641 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Feb 13 19:02:10.527641 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Feb 13 19:02:10.527641 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1 Feb 13 19:02:10.921541 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Feb 13 19:02:11.259509 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Feb 13 19:02:11.263545 ignition[1308]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Feb 13 19:02:11.266043 ignition[1308]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 19:02:11.266043 ignition[1308]: INFO : files: 
Feb 13 19:02:11.314450 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 19:02:11.322070 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 19:02:11.328384 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 19:02:11.328591 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 19:02:11.361251 initrd-setup-root-after-ignition[1336]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:02:11.361251 initrd-setup-root-after-ignition[1336]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:02:11.368135 initrd-setup-root-after-ignition[1340]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:02:11.372465 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:02:11.379220 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 19:02:11.388385 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 19:02:11.437616 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 19:02:11.438193 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 19:02:11.445239 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 19:02:11.447196 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 19:02:11.449155 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 19:02:11.457373 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 19:02:11.495398 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:02:11.507391 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 19:02:11.532487 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:02:11.537652 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:02:11.540747 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 19:02:11.542854 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 19:02:11.543732 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:02:11.553609 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 19:02:11.555897 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 19:02:11.561183 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 19:02:11.563504 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:02:11.569557 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 19:02:11.572046 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 19:02:11.577762 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:02:11.581547 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 19:02:11.587570 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 19:02:11.589751 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 19:02:11.594750 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 19:02:11.595762 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:02:11.601037 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:02:11.605285 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:02:11.607717 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 19:02:11.611453 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:02:11.614592 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 19:02:11.614808 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:02:11.622188 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 19:02:11.622669 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:02:11.624590 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 19:02:11.624803 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 19:02:11.646237 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 19:02:11.648052 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 19:02:11.648515 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:02:11.676387 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 19:02:11.680232 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 19:02:11.681851 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:02:11.693240 ignition[1360]: INFO : Ignition 2.20.0
Feb 13 19:02:11.693240 ignition[1360]: INFO : Stage: umount
Feb 13 19:02:11.693240 ignition[1360]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:02:11.693240 ignition[1360]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:02:11.693240 ignition[1360]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:02:11.707286 ignition[1360]: INFO : PUT result: OK
Feb 13 19:02:11.698172 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 19:02:11.699976 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:02:11.714517 ignition[1360]: INFO : umount: umount passed
Feb 13 19:02:11.714517 ignition[1360]: INFO : Ignition finished successfully
Feb 13 19:02:11.725381 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 19:02:11.727346 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 19:02:11.735804 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 19:02:11.737601 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 19:02:11.737867 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 19:02:11.744205 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 19:02:11.744382 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 19:02:11.746770 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 19:02:11.746882 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 19:02:11.749992 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 19:02:11.750098 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 19:02:11.754122 systemd[1]: Stopped target network.target - Network.
Feb 13 19:02:11.759112 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 19:02:11.759226 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:02:11.762367 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 19:02:11.770031 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 19:02:11.771944 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:02:11.774840 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 19:02:11.776562 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 19:02:11.778398 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 19:02:11.778475 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:02:11.780403 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 19:02:11.780469 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:02:11.782407 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 19:02:11.782495 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 19:02:11.784403 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 19:02:11.784488 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 19:02:11.787239 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 19:02:11.801267 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 19:02:11.805528 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 19:02:11.805740 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 19:02:11.812315 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Feb 13 19:02:11.813363 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 19:02:11.813480 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:02:11.849894 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 19:02:11.851976 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 19:02:11.852330 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:02:11.860371 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:02:11.863995 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 19:02:11.869196 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 19:02:11.886201 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Feb 13 19:02:11.886904 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 19:02:11.887109 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 19:02:11.909742 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 19:02:11.912138 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:02:11.923008 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 19:02:11.923173 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:02:11.926302 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 19:02:11.926379 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:02:11.936176 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 19:02:11.936290 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:02:11.942582 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 19:02:11.942706 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:02:11.945647 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:02:11.945739 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:02:11.957849 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 19:02:11.957959 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 19:02:11.973368 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 19:02:11.976941 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 19:02:11.977060 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:02:11.979660 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 19:02:11.979745 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:02:11.992980 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 19:02:11.993241 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:02:12.000067 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 19:02:12.000172 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:02:12.004718 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:02:12.004823 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:02:12.010566 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 13 19:02:12.010686 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Feb 13 19:02:12.010800 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Feb 13 19:02:12.010895 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Feb 13 19:02:12.012453 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 19:02:12.012768 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 19:02:12.037867 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 19:02:12.038247 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 19:02:12.045670 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 19:02:12.057352 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 19:02:12.074596 systemd[1]: Switching root.
Feb 13 19:02:12.110485 systemd-journald[251]: Journal stopped
Feb 13 19:02:14.113982 systemd-journald[251]: Received SIGTERM from PID 1 (systemd).
Feb 13 19:02:14.120867 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 19:02:14.120935 kernel: SELinux: policy capability open_perms=1
Feb 13 19:02:14.120967 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 19:02:14.120997 kernel: SELinux: policy capability always_check_network=0
Feb 13 19:02:14.121028 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 19:02:14.121059 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 19:02:14.121141 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 19:02:14.121175 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 19:02:14.121206 kernel: audit: type=1403 audit(1739473332.423:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 19:02:14.121248 systemd[1]: Successfully loaded SELinux policy in 49.225ms.
Feb 13 19:02:14.121302 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 24.277ms.
Feb 13 19:02:14.121337 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Feb 13 19:02:14.121369 systemd[1]: Detected virtualization amazon.
Feb 13 19:02:14.121403 systemd[1]: Detected architecture arm64.
Feb 13 19:02:14.121434 systemd[1]: Detected first boot.
Feb 13 19:02:14.121470 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:02:14.121501 zram_generator::config[1406]: No configuration found.
Feb 13 19:02:14.121533 kernel: NET: Registered PF_VSOCK protocol family
Feb 13 19:02:14.121564 systemd[1]: Populated /etc with preset unit settings.
Feb 13 19:02:14.121597 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Feb 13 19:02:14.121627 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 19:02:14.121656 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 19:02:14.121685 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 19:02:14.121720 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 19:02:14.121750 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 19:02:14.121783 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 19:02:14.121812 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 19:02:14.121843 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 19:02:14.121876 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 19:02:14.121906 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 19:02:14.121938 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 19:02:14.121968 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:02:14.122002 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:02:14.122031 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 19:02:14.122072 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 19:02:14.122125 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 19:02:14.122159 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:02:14.122201 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 19:02:14.122231 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:02:14.122260 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 19:02:14.122295 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 19:02:14.122325 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:02:14.122356 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 19:02:14.122388 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:02:14.122419 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:02:14.122447 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:02:14.122477 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:02:14.122506 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 19:02:14.122542 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 19:02:14.122573 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Feb 13 19:02:14.122602 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:02:14.122633 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:02:14.122662 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:02:14.122691 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 19:02:14.122724 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 19:02:14.122753 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 19:02:14.122782 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 19:02:14.122815 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 19:02:14.122850 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 19:02:14.122881 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 19:02:14.122915 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 19:02:14.122950 systemd[1]: Reached target machines.target - Containers.
Feb 13 19:02:14.122979 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 19:02:14.123010 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:02:14.123042 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:02:14.123071 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 19:02:14.139230 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:02:14.139268 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:02:14.139300 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:02:14.139331 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 19:02:14.139360 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:02:14.139390 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 19:02:14.139422 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 19:02:14.139458 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 19:02:14.139495 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 19:02:14.139524 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 19:02:14.139557 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 19:02:14.139590 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:02:14.139620 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:02:14.139649 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 19:02:14.139681 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 19:02:14.139713 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Feb 13 19:02:14.139747 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:02:14.139784 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 19:02:14.139814 systemd[1]: Stopped verity-setup.service.
Feb 13 19:02:14.139846 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 19:02:14.139877 kernel: fuse: init (API version 7.39)
Feb 13 19:02:14.139906 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 19:02:14.139940 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 19:02:14.139969 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 19:02:14.139998 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 19:02:14.140026 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 19:02:14.140056 kernel: loop: module loaded
Feb 13 19:02:14.141285 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:02:14.141361 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 19:02:14.141396 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 19:02:14.141429 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:02:14.141468 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:02:14.141498 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:02:14.141529 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:02:14.141558 kernel: ACPI: bus type drm_connector registered
Feb 13 19:02:14.141588 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 19:02:14.141624 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 19:02:14.141657 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:02:14.141688 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:02:14.141721 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:02:14.141757 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:02:14.141788 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:02:14.141823 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 19:02:14.141861 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 19:02:14.141957 systemd-journald[1496]: Collecting audit messages is disabled.
Feb 13 19:02:14.142026 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 19:02:14.142057 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 19:02:14.149180 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:02:14.149246 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Feb 13 19:02:14.149278 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 19:02:14.149309 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 19:02:14.149339 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:02:14.149378 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 19:02:14.149415 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:02:14.149448 systemd-journald[1496]: Journal started
Feb 13 19:02:14.149506 systemd-journald[1496]: Runtime Journal (/run/log/journal/ec2f5e99e964c09e7d149057da107f15) is 8M, max 75.3M, 67.3M free.
Feb 13 19:02:13.502678 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 19:02:13.516686 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Feb 13 19:02:13.517634 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 19:02:14.170371 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 19:02:14.170456 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:02:14.180502 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:02:14.191148 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 19:02:14.201561 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:02:14.206030 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 19:02:14.210192 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 19:02:14.213681 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
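[annotation] The runtime journal cap above (8M used, max 75.3M) is journald's default sizing against the size of /run. If explicit limits are wanted instead, they go in a journald drop-in; a sketch using standard journald.conf options (no such file exists on this host):

    # /etc/systemd/journald.conf.d/10-size.conf (hypothetical)
    [Journal]
    RuntimeMaxUse=64M
    SystemMaxUse=256M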
Feb 13 19:02:14.220339 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 19:02:14.222917 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 19:02:14.226532 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 19:02:14.238951 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 19:02:14.285825 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 19:02:14.289667 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 19:02:14.306526 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 19:02:14.314111 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Feb 13 19:02:14.320445 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 19:02:14.339447 kernel: loop0: detected capacity change from 0 to 53784
Feb 13 19:02:14.358786 systemd-journald[1496]: Time spent on flushing to /var/log/journal/ec2f5e99e964c09e7d149057da107f15 is 69.317ms for 926 entries.
Feb 13 19:02:14.358786 systemd-journald[1496]: System Journal (/var/log/journal/ec2f5e99e964c09e7d149057da107f15) is 8M, max 195.6M, 187.6M free.
Feb 13 19:02:14.459766 systemd-journald[1496]: Received client request to flush runtime journal.
Feb 13 19:02:14.459859 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 19:02:14.367748 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:02:14.470463 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Feb 13 19:02:14.480231 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 19:02:14.497350 kernel: loop1: detected capacity change from 0 to 123192
Feb 13 19:02:14.517342 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 19:02:14.523567 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 19:02:14.535386 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:02:14.576121 kernel: loop2: detected capacity change from 0 to 201592
Feb 13 19:02:14.617733 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:02:14.632988 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 19:02:14.642357 systemd-tmpfiles[1559]: ACLs are not supported, ignoring.
Feb 13 19:02:14.642397 systemd-tmpfiles[1559]: ACLs are not supported, ignoring.
Feb 13 19:02:14.660970 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:02:14.674754 udevadm[1563]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 13 19:02:14.732199 kernel: loop3: detected capacity change from 0 to 113512
Feb 13 19:02:14.802756 kernel: loop4: detected capacity change from 0 to 53784
Feb 13 19:02:14.824053 kernel: loop5: detected capacity change from 0 to 123192
Feb 13 19:02:14.850151 kernel: loop6: detected capacity change from 0 to 201592
Feb 13 19:02:14.892796 kernel: loop7: detected capacity change from 0 to 113512
Feb 13 19:02:14.918458 (sd-merge)[1568]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Feb 13 19:02:14.920504 (sd-merge)[1568]: Merged extensions into '/usr'.
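[annotation] The (sd-merge) lines record systemd-sysext overlaying the listed extension images onto /usr; the loopN capacity changes above are those images being attached. On a running system the merge state can be inspected and re-applied with the standard tool, for example:

    # show which extension images are merged and from where
    systemd-sysext status
    # re-scan /etc/extensions and /var/lib/extensions and re-apply
    systemd-sysext refresh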
Feb 13 19:02:14.930981 systemd[1]: Reload requested from client PID 1522 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 19:02:14.931017 systemd[1]: Reloading...
Feb 13 19:02:15.094749 ldconfig[1518]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 19:02:15.165110 zram_generator::config[1599]: No configuration found.
Feb 13 19:02:15.422760 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:02:15.579579 systemd[1]: Reloading finished in 645 ms.
Feb 13 19:02:15.605544 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 19:02:15.608322 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 19:02:15.611463 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 19:02:15.630356 systemd[1]: Starting ensure-sysext.service...
Feb 13 19:02:15.634488 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:02:15.647645 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:02:15.673451 systemd[1]: Reload requested from client PID 1649 ('systemctl') (unit ensure-sysext.service)...
Feb 13 19:02:15.673477 systemd[1]: Reloading...
Feb 13 19:02:15.719642 systemd-tmpfiles[1650]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 19:02:15.722347 systemd-tmpfiles[1650]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 19:02:15.725397 systemd-tmpfiles[1650]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 19:02:15.725957 systemd-tmpfiles[1650]: ACLs are not supported, ignoring.
Feb 13 19:02:15.726125 systemd-tmpfiles[1650]: ACLs are not supported, ignoring.
Feb 13 19:02:15.737503 systemd-tmpfiles[1650]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:02:15.737529 systemd-tmpfiles[1650]: Skipping /boot
Feb 13 19:02:15.764195 systemd-udevd[1651]: Using default interface naming scheme 'v255'.
Feb 13 19:02:15.775966 systemd-tmpfiles[1650]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:02:15.775997 systemd-tmpfiles[1650]: Skipping /boot
Feb 13 19:02:15.813923 zram_generator::config[1680]: No configuration found.
Feb 13 19:02:16.069834 (udev-worker)[1692]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:02:16.279916 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:02:16.312119 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1702)
Feb 13 19:02:16.471813 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Feb 13 19:02:16.472855 systemd[1]: Reloading finished in 798 ms.
Feb 13 19:02:16.502055 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:02:16.535433 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:02:16.594495 systemd[1]: Finished ensure-sysext.service.
Feb 13 19:02:16.621732 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 19:02:16.652241 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 19:02:16.660426 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 19:02:16.674014 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 19:02:16.676531 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:02:16.686189 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 19:02:16.696492 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:02:16.703178 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:02:16.710457 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:02:16.717454 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:02:16.721171 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:02:16.732521 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 19:02:16.734829 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 19:02:16.748467 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 19:02:16.756393 lvm[1850]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:02:16.766868 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:02:16.772820 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:02:16.774888 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 19:02:16.790034 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 19:02:16.796439 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:02:16.801735 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:02:16.802230 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:02:16.805430 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:02:16.805822 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:02:16.816816 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:02:16.845979 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:02:16.846420 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:02:16.863176 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 19:02:16.879662 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:02:16.881173 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:02:16.884618 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:02:16.900374 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 19:02:16.921653 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 19:02:16.935225 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 19:02:16.938956 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 19:02:16.958925 augenrules[1889]: No rules
Feb 13 19:02:16.962940 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 19:02:16.963677 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 19:02:16.967636 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:02:16.979058 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 19:02:16.994567 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 19:02:16.997844 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 19:02:17.003393 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 19:02:17.016315 lvm[1898]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:02:17.046035 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 19:02:17.060302 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 19:02:17.068826 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 19:02:17.115228 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:02:17.189769 systemd-networkd[1866]: lo: Link UP
Feb 13 19:02:17.189789 systemd-networkd[1866]: lo: Gained carrier
Feb 13 19:02:17.192628 systemd-networkd[1866]: Enumeration completed
Feb 13 19:02:17.192834 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:02:17.195520 systemd-networkd[1866]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:02:17.195542 systemd-networkd[1866]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:02:17.197751 systemd-networkd[1866]: eth0: Link UP
Feb 13 19:02:17.198036 systemd-networkd[1866]: eth0: Gained carrier
Feb 13 19:02:17.198069 systemd-networkd[1866]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:02:17.206537 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Feb 13 19:02:17.208823 systemd-networkd[1866]: eth0: DHCPv4 address 172.31.18.242/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 19:02:17.218429 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
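[annotation] eth0 was matched by the stock catch-all unit /usr/lib/systemd/network/zz-default.network and configured via DHCP (172.31.18.242/20, gateway 172.31.16.1). In essence such a unit is a match-everything DHCP stanza; a sketch of that shape, abridged rather than a verbatim copy of the shipped file:

    [Match]
    Name=*

    [Network]
    DHCP=yes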
Feb 13 19:02:17.237443 systemd-resolved[1868]: Positive Trust Anchors:
Feb 13 19:02:17.237482 systemd-resolved[1868]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:02:17.237545 systemd-resolved[1868]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:02:17.245423 systemd-resolved[1868]: Defaulting to hostname 'linux'.
Feb 13 19:02:17.249152 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Feb 13 19:02:17.253411 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:02:17.255641 systemd[1]: Reached target network.target - Network.
Feb 13 19:02:17.257307 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:02:17.259529 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:02:17.261646 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 19:02:17.263967 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 19:02:17.266605 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 19:02:17.268873 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 19:02:17.271204 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 19:02:17.273526 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 19:02:17.273573 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:02:17.275292 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:02:17.277859 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 19:02:17.282549 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 19:02:17.288640 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Feb 13 19:02:17.291404 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Feb 13 19:02:17.293820 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Feb 13 19:02:17.308207 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 19:02:17.311703 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Feb 13 19:02:17.315121 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 19:02:17.317347 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:02:17.319245 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:02:17.321120 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 19:02:17.321184 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 19:02:17.327281 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 19:02:17.337589 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Feb 13 19:02:17.344390 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 19:02:17.354336 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 19:02:17.375646 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 19:02:17.377650 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 19:02:17.381713 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 19:02:17.391445 systemd[1]: Started ntpd.service - Network Time Service.
Feb 13 19:02:17.397463 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Feb 13 19:02:17.406299 systemd[1]: Starting setup-oem.service - Setup OEM...
Feb 13 19:02:17.417770 jq[1922]: false
Feb 13 19:02:17.423398 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 19:02:17.433227 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 19:02:17.443352 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 19:02:17.448136 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 13 19:02:17.449071 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 19:02:17.454432 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 19:02:17.463216 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 19:02:17.472882 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 19:02:17.475225 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
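[annotation] prepare-helm.service, written by Ignition earlier in the boot, starts here. Only its name and description ("Unpack helm to /opt/bin") appear in the log, but the tar[1950] lines further below, listing linux-arm64/LICENSE and linux-arm64/helm, suggest a oneshot of roughly this shape (hypothetical body, not the actual unit file):

    [Unit]
    Description=Unpack helm to /opt/bin
    ConditionPathExists=/opt/helm-v3.17.0-linux-arm64.tar.gz

    [Service]
    Type=oneshot
    RemainAfterExit=true
    # -v makes tar print each member, matching the tar[1950] log lines
    ExecStart=/usr/bin/tar -C /opt/bin --strip-components=1 -xzvf /opt/helm-v3.17.0-linux-arm64.tar.gz

    [Install]
    WantedBy=multi-user.target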
Feb 13 19:02:17.512250 coreos-metadata[1920]: Feb 13 19:02:17.510 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Feb 13 19:02:17.541024 coreos-metadata[1920]: Feb 13 19:02:17.519 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Feb 13 19:02:17.541024 coreos-metadata[1920]: Feb 13 19:02:17.523 INFO Fetch successful
Feb 13 19:02:17.541024 coreos-metadata[1920]: Feb 13 19:02:17.523 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Feb 13 19:02:17.541024 coreos-metadata[1920]: Feb 13 19:02:17.527 INFO Fetch successful
Feb 13 19:02:17.541024 coreos-metadata[1920]: Feb 13 19:02:17.527 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Feb 13 19:02:17.541024 coreos-metadata[1920]: Feb 13 19:02:17.530 INFO Fetch successful
Feb 13 19:02:17.541024 coreos-metadata[1920]: Feb 13 19:02:17.530 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Feb 13 19:02:17.541024 coreos-metadata[1920]: Feb 13 19:02:17.531 INFO Fetch successful
Feb 13 19:02:17.541024 coreos-metadata[1920]: Feb 13 19:02:17.531 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Feb 13 19:02:17.541024 coreos-metadata[1920]: Feb 13 19:02:17.534 INFO Fetch failed with 404: resource not found
Feb 13 19:02:17.541024 coreos-metadata[1920]: Feb 13 19:02:17.534 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Feb 13 19:02:17.541024 coreos-metadata[1920]: Feb 13 19:02:17.535 INFO Fetch successful
Feb 13 19:02:17.541024 coreos-metadata[1920]: Feb 13 19:02:17.535 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Feb 13 19:02:17.541024 coreos-metadata[1920]: Feb 13 19:02:17.539 INFO Fetch successful
Feb 13 19:02:17.541024 coreos-metadata[1920]: Feb 13 19:02:17.539 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Feb 13 19:02:17.533698 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 19:02:17.536209 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 19:02:17.544897 coreos-metadata[1920]: Feb 13 19:02:17.542 INFO Fetch successful
Feb 13 19:02:17.544897 coreos-metadata[1920]: Feb 13 19:02:17.542 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Feb 13 19:02:17.544897 coreos-metadata[1920]: Feb 13 19:02:17.544 INFO Fetch successful
Feb 13 19:02:17.544897 coreos-metadata[1920]: Feb 13 19:02:17.544 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Feb 13 19:02:17.553433 coreos-metadata[1920]: Feb 13 19:02:17.546 INFO Fetch successful
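[annotation] The "Putting .../api/token" line is the IMDSv2 handshake: coreos-metadata (like Ignition before it) first PUTs for a session token, then presents that token with every metadata GET. The same exchange done by hand:

    # obtain an IMDSv2 session token (TTL in seconds)
    TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
      -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
    # present the token when reading metadata, e.g. the instance ID
    curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
      "http://169.254.169.254/latest/meta-data/instance-id"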
Feb 13 19:02:17.556555 extend-filesystems[1923]: Found loop4
Feb 13 19:02:17.556555 extend-filesystems[1923]: Found loop5
Feb 13 19:02:17.556555 extend-filesystems[1923]: Found loop6
Feb 13 19:02:17.556555 extend-filesystems[1923]: Found loop7
Feb 13 19:02:17.556555 extend-filesystems[1923]: Found nvme0n1
Feb 13 19:02:17.600098 extend-filesystems[1923]: Found nvme0n1p1
Feb 13 19:02:17.600098 extend-filesystems[1923]: Found nvme0n1p2
Feb 13 19:02:17.600098 extend-filesystems[1923]: Found nvme0n1p3
Feb 13 19:02:17.600098 extend-filesystems[1923]: Found usr
Feb 13 19:02:17.600098 extend-filesystems[1923]: Found nvme0n1p4
Feb 13 19:02:17.600098 extend-filesystems[1923]: Found nvme0n1p6
Feb 13 19:02:17.600098 extend-filesystems[1923]: Found nvme0n1p7
Feb 13 19:02:17.600098 extend-filesystems[1923]: Found nvme0n1p9
Feb 13 19:02:17.600098 extend-filesystems[1923]: Checking size of /dev/nvme0n1p9
Feb 13 19:02:17.606033 dbus-daemon[1921]: [system] SELinux support is enabled
Feb 13 19:02:17.648964 jq[1935]: true
Feb 13 19:02:17.631105 dbus-daemon[1921]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1866 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Feb 13 19:02:17.638803 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 19:02:17.645960 (ntainerd)[1954]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 19:02:17.647564 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 13 19:02:17.647612 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Feb 13 19:02:17.650289 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 13 19:02:17.650325 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Feb 13 19:02:17.663710 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 19:02:17.664205 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 19:02:17.695602 extend-filesystems[1923]: Resized partition /dev/nvme0n1p9
Feb 13 19:02:17.693013 dbus-daemon[1921]: [system] Successfully activated service 'org.freedesktop.systemd1'
Feb 13 19:02:17.702339 extend-filesystems[1967]: resize2fs 1.47.1 (20-May-2024)
Feb 13 19:02:17.706283 jq[1953]: true
Feb 13 19:02:17.710868 update_engine[1934]: I20250213 19:02:17.710346 1934 main.cc:92] Flatcar Update Engine starting
Feb 13 19:02:17.715422 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Feb 13 19:02:17.753220 update_engine[1934]: I20250213 19:02:17.750757 1934 update_check_scheduler.cc:74] Next update check in 9m12s
Feb 13 19:02:17.755899 systemd[1]: Started update-engine.service - Update Engine.
Feb 13 19:02:17.759688 tar[1950]: linux-arm64/LICENSE
Feb 13 19:02:17.763884 tar[1950]: linux-arm64/helm
Feb 13 19:02:17.767498 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Feb 13 19:02:17.773395 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Feb 13 19:02:17.783144 ntpd[1927]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:02:48 UTC 2025 (1): Starting
Feb 13 19:02:17.793200 ntpd[1927]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Feb 13 19:02:17.793920 ntpd[1927]: 13 Feb 19:02:17 ntpd[1927]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:02:48 UTC 2025 (1): Starting
Feb 13 19:02:17.793920 ntpd[1927]: 13 Feb 19:02:17 ntpd[1927]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Feb 13 19:02:17.793920 ntpd[1927]: 13 Feb 19:02:17 ntpd[1927]: ----------------------------------------------------
Feb 13 19:02:17.793920 ntpd[1927]: 13 Feb 19:02:17 ntpd[1927]: ntp-4 is maintained by Network Time Foundation,
Feb 13 19:02:17.793920 ntpd[1927]: 13 Feb 19:02:17 ntpd[1927]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Feb 13 19:02:17.793920 ntpd[1927]: 13 Feb 19:02:17 ntpd[1927]: corporation. Support and training for ntp-4 are
Feb 13 19:02:17.793920 ntpd[1927]: 13 Feb 19:02:17 ntpd[1927]: available at https://www.nwtime.org/support
Feb 13 19:02:17.793920 ntpd[1927]: 13 Feb 19:02:17 ntpd[1927]: ----------------------------------------------------
Feb 13 19:02:17.793221 ntpd[1927]: ----------------------------------------------------
Feb 13 19:02:17.793240 ntpd[1927]: ntp-4 is maintained by Network Time Foundation,
Feb 13 19:02:17.793258 ntpd[1927]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Feb 13 19:02:17.793275 ntpd[1927]: corporation. Support and training for ntp-4 are
Feb 13 19:02:17.811499 ntpd[1927]: 13 Feb 19:02:17 ntpd[1927]: proto: precision = 0.096 usec (-23)
Feb 13 19:02:17.793292 ntpd[1927]: available at https://www.nwtime.org/support
Feb 13 19:02:17.793309 ntpd[1927]: ----------------------------------------------------
Feb 13 19:02:17.810981 ntpd[1927]: proto: precision = 0.096 usec (-23)
Feb 13 19:02:17.833642 ntpd[1927]: 13 Feb 19:02:17 ntpd[1927]: basedate set to 2025-02-01
Feb 13 19:02:17.833642 ntpd[1927]: 13 Feb 19:02:17 ntpd[1927]: gps base set to 2025-02-02 (week 2352)
Feb 13 19:02:17.830719 ntpd[1927]: basedate set to 2025-02-01
Feb 13 19:02:17.826235 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Feb 13 19:02:17.830759 ntpd[1927]: gps base set to 2025-02-02 (week 2352)
Feb 13 19:02:17.830602 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Feb 13 19:02:17.846607 systemd[1]: Finished setup-oem.service - Setup OEM.
Feb 13 19:02:17.851441 ntpd[1927]: Listen and drop on 0 v6wildcard [::]:123
Feb 13 19:02:17.851939 ntpd[1927]: 13 Feb 19:02:17 ntpd[1927]: Listen and drop on 0 v6wildcard [::]:123
Feb 13 19:02:17.851939 ntpd[1927]: 13 Feb 19:02:17 ntpd[1927]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Feb 13 19:02:17.851539 ntpd[1927]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Feb 13 19:02:17.855823 ntpd[1927]: Listen normally on 2 lo 127.0.0.1:123
Feb 13 19:02:17.860154 ntpd[1927]: 13 Feb 19:02:17 ntpd[1927]: Listen normally on 2 lo 127.0.0.1:123
Feb 13 19:02:17.860154 ntpd[1927]: 13 Feb 19:02:17 ntpd[1927]: Listen normally on 3 eth0 172.31.18.242:123
Feb 13 19:02:17.860154 ntpd[1927]: 13 Feb 19:02:17 ntpd[1927]: Listen normally on 4 lo [::1]:123
Feb 13 19:02:17.860154 ntpd[1927]: 13 Feb 19:02:17 ntpd[1927]: bind(21) AF_INET6 fe80::421:fbff:fe92:9987%2#123 flags 0x11 failed: Cannot assign requested address
Feb 13 19:02:17.860154 ntpd[1927]: 13 Feb 19:02:17 ntpd[1927]: unable to create socket on eth0 (5) for fe80::421:fbff:fe92:9987%2#123
Feb 13 19:02:17.860154 ntpd[1927]: 13 Feb 19:02:17 ntpd[1927]: failed to init interface for address fe80::421:fbff:fe92:9987%2
Feb 13 19:02:17.860154 ntpd[1927]: 13 Feb 19:02:17 ntpd[1927]: Listening on routing socket on fd #21 for interface updates
Feb 13 19:02:17.856408 ntpd[1927]: Listen normally on 3 eth0 172.31.18.242:123
Feb 13 19:02:17.856486 ntpd[1927]: Listen normally on 4 lo [::1]:123
Feb 13 19:02:17.856569 ntpd[1927]: bind(21) AF_INET6 fe80::421:fbff:fe92:9987%2#123 flags 0x11 failed: Cannot assign requested address
Feb 13 19:02:17.856607 ntpd[1927]: unable to create socket on eth0 (5) for fe80::421:fbff:fe92:9987%2#123
Feb 13 19:02:17.856634 ntpd[1927]: failed to init interface for address fe80::421:fbff:fe92:9987%2
Feb 13 19:02:17.856690 ntpd[1927]: Listening on routing socket on fd #21 for interface updates
Feb 13 19:02:17.881059 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1692)
Feb 13 19:02:17.886115 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Feb 13 19:02:17.887413 ntpd[1927]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 19:02:17.893478 ntpd[1927]: 13 Feb 19:02:17 ntpd[1927]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 19:02:17.893478 ntpd[1927]: 13 Feb 19:02:17 ntpd[1927]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 19:02:17.892737 ntpd[1927]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 19:02:17.922202 extend-filesystems[1967]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Feb 13 19:02:17.922202 extend-filesystems[1967]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 13 19:02:17.922202 extend-filesystems[1967]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Feb 13 19:02:17.915733 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 13 19:02:17.941817 extend-filesystems[1923]: Resized filesystem in /dev/nvme0n1p9
Feb 13 19:02:17.919332 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Feb 13 19:02:17.998547 bash[2015]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 19:02:17.999934 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Feb 13 19:02:18.055028 systemd[1]: Starting sshkeys.service...
Feb 13 19:02:18.077300 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
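[annotation] The extend-filesystems lines above are an online grow of the root ext4 filesystem: the nvme0n1p9 partition was enlarged first, then resize2fs expanded the mounted filesystem from 553472 to 1489915 4k blocks. The manual equivalent of that second step:

    # grow a mounted ext4 filesystem to fill its (already enlarged) partition
    resize2fs /dev/nvme0n1p9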
Feb 13 19:02:18.098204 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 19:02:18.107194 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 19:02:18.224668 systemd-logind[1933]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 19:02:18.224722 systemd-logind[1933]: Watching system buttons on /dev/input/event1 (Sleep Button) Feb 13 19:02:18.225139 systemd-logind[1933]: New seat seat0. Feb 13 19:02:18.230227 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:02:18.452603 locksmithd[1984]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:02:18.475002 containerd[1954]: time="2025-02-13T19:02:18.472437178Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 19:02:18.537732 coreos-metadata[2041]: Feb 13 19:02:18.536 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 19:02:18.541576 coreos-metadata[2041]: Feb 13 19:02:18.541 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Feb 13 19:02:18.548236 coreos-metadata[2041]: Feb 13 19:02:18.548 INFO Fetch successful Feb 13 19:02:18.548236 coreos-metadata[2041]: Feb 13 19:02:18.548 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 13 19:02:18.551174 coreos-metadata[2041]: Feb 13 19:02:18.550 INFO Fetch successful Feb 13 19:02:18.554940 unknown[2041]: wrote ssh authorized keys file for user: core Feb 13 19:02:18.602542 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Feb 13 19:02:18.606441 dbus-daemon[1921]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 19:02:18.611691 dbus-daemon[1921]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1983 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 19:02:18.624794 systemd[1]: Starting polkit.service - Authorization Manager... Feb 13 19:02:18.633441 update-ssh-keys[2101]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:02:18.631957 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 19:02:18.637839 systemd[1]: Finished sshkeys.service. Feb 13 19:02:18.704038 polkitd[2106]: Started polkitd version 121 Feb 13 19:02:18.738020 containerd[1954]: time="2025-02-13T19:02:18.737399591Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:02:18.751474 containerd[1954]: time="2025-02-13T19:02:18.751394099Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:02:18.751474 containerd[1954]: time="2025-02-13T19:02:18.751465607Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:02:18.751652 containerd[1954]: time="2025-02-13T19:02:18.751503731Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 19:02:18.752167 containerd[1954]: time="2025-02-13T19:02:18.751796747Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Feb 13 19:02:18.752167 containerd[1954]: time="2025-02-13T19:02:18.751851491Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 19:02:18.752167 containerd[1954]: time="2025-02-13T19:02:18.751982195Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:02:18.752167 containerd[1954]: time="2025-02-13T19:02:18.752010491Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:02:18.752686 containerd[1954]: time="2025-02-13T19:02:18.752460947Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:02:18.752686 containerd[1954]: time="2025-02-13T19:02:18.752506403Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 19:02:18.752686 containerd[1954]: time="2025-02-13T19:02:18.752539463Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:02:18.752686 containerd[1954]: time="2025-02-13T19:02:18.752564087Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:02:18.752892 containerd[1954]: time="2025-02-13T19:02:18.752729555Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:02:18.757572 containerd[1954]: time="2025-02-13T19:02:18.757224119Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:02:18.757572 containerd[1954]: time="2025-02-13T19:02:18.757535291Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:02:18.757572 containerd[1954]: time="2025-02-13T19:02:18.757565735Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 19:02:18.757788 containerd[1954]: time="2025-02-13T19:02:18.757763855Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 19:02:18.758523 containerd[1954]: time="2025-02-13T19:02:18.757860791Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:02:18.758416 polkitd[2106]: Loading rules from directory /etc/polkit-1/rules.d Feb 13 19:02:18.758526 polkitd[2106]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 13 19:02:18.762138 polkitd[2106]: Finished loading, compiling and executing 2 rules Feb 13 19:02:18.765808 dbus-daemon[1921]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 13 19:02:18.767016 systemd[1]: Started polkit.service - Authorization Manager. Feb 13 19:02:18.771833 containerd[1954]: time="2025-02-13T19:02:18.771571751Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." 
type=io.containerd.gc.v1 Feb 13 19:02:18.771833 containerd[1954]: time="2025-02-13T19:02:18.771676175Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 19:02:18.771833 containerd[1954]: time="2025-02-13T19:02:18.771715691Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 19:02:18.771833 containerd[1954]: time="2025-02-13T19:02:18.771753443Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 19:02:18.771833 containerd[1954]: time="2025-02-13T19:02:18.771788279Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 19:02:18.772856 containerd[1954]: time="2025-02-13T19:02:18.772112987Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 19:02:18.772856 containerd[1954]: time="2025-02-13T19:02:18.772546835Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 19:02:18.772856 containerd[1954]: time="2025-02-13T19:02:18.772770311Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 19:02:18.772856 containerd[1954]: time="2025-02-13T19:02:18.772809011Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 19:02:18.772856 containerd[1954]: time="2025-02-13T19:02:18.772843235Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 19:02:18.773325 containerd[1954]: time="2025-02-13T19:02:18.772884071Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 19:02:18.773325 containerd[1954]: time="2025-02-13T19:02:18.772914755Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 19:02:18.773325 containerd[1954]: time="2025-02-13T19:02:18.772945187Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 19:02:18.773325 containerd[1954]: time="2025-02-13T19:02:18.772976711Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:02:18.773325 containerd[1954]: time="2025-02-13T19:02:18.773010131Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 19:02:18.773325 containerd[1954]: time="2025-02-13T19:02:18.773039567Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:02:18.775203 polkitd[2106]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 19:02:18.778693 containerd[1954]: time="2025-02-13T19:02:18.773070911Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 19:02:18.778693 containerd[1954]: time="2025-02-13T19:02:18.778453367Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 19:02:18.778693 containerd[1954]: time="2025-02-13T19:02:18.778511255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Feb 13 19:02:18.778693 containerd[1954]: time="2025-02-13T19:02:18.778544771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:02:18.778693 containerd[1954]: time="2025-02-13T19:02:18.778573871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:02:18.778693 containerd[1954]: time="2025-02-13T19:02:18.778607555Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 19:02:18.778693 containerd[1954]: time="2025-02-13T19:02:18.778638371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:02:18.778693 containerd[1954]: time="2025-02-13T19:02:18.778668131Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:02:18.778693 containerd[1954]: time="2025-02-13T19:02:18.778696847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 19:02:18.780317 containerd[1954]: time="2025-02-13T19:02:18.778728947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 19:02:18.780317 containerd[1954]: time="2025-02-13T19:02:18.778759139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:02:18.780317 containerd[1954]: time="2025-02-13T19:02:18.778792859Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:02:18.780317 containerd[1954]: time="2025-02-13T19:02:18.778822211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:02:18.780317 containerd[1954]: time="2025-02-13T19:02:18.778852979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 19:02:18.780317 containerd[1954]: time="2025-02-13T19:02:18.778882103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:02:18.780317 containerd[1954]: time="2025-02-13T19:02:18.778915043Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:02:18.780317 containerd[1954]: time="2025-02-13T19:02:18.778961279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:02:18.780317 containerd[1954]: time="2025-02-13T19:02:18.779011031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:02:18.780317 containerd[1954]: time="2025-02-13T19:02:18.779044499Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:02:18.780317 containerd[1954]: time="2025-02-13T19:02:18.780213407Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 19:02:18.780317 containerd[1954]: time="2025-02-13T19:02:18.780265151Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:02:18.780317 containerd[1954]: time="2025-02-13T19:02:18.780290903Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Feb 13 19:02:18.780923 containerd[1954]: time="2025-02-13T19:02:18.780319139Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:02:18.780923 containerd[1954]: time="2025-02-13T19:02:18.780341903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:02:18.780923 containerd[1954]: time="2025-02-13T19:02:18.780386783Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:02:18.780923 containerd[1954]: time="2025-02-13T19:02:18.780411335Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:02:18.780923 containerd[1954]: time="2025-02-13T19:02:18.780440975Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 19:02:18.781678 containerd[1954]: time="2025-02-13T19:02:18.780979787Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:02:18.785798 
containerd[1954]: time="2025-02-13T19:02:18.785370539Z" level=info msg="Connect containerd service" Feb 13 19:02:18.785798 containerd[1954]: time="2025-02-13T19:02:18.785504075Z" level=info msg="using legacy CRI server" Feb 13 19:02:18.785798 containerd[1954]: time="2025-02-13T19:02:18.785527451Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:02:18.785798 containerd[1954]: time="2025-02-13T19:02:18.785763059Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:02:18.787605 containerd[1954]: time="2025-02-13T19:02:18.787542923Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:02:18.788415 containerd[1954]: time="2025-02-13T19:02:18.787836155Z" level=info msg="Start subscribing containerd event" Feb 13 19:02:18.788415 containerd[1954]: time="2025-02-13T19:02:18.787924691Z" level=info msg="Start recovering state" Feb 13 19:02:18.788415 containerd[1954]: time="2025-02-13T19:02:18.788057507Z" level=info msg="Start event monitor" Feb 13 19:02:18.788415 containerd[1954]: time="2025-02-13T19:02:18.788110751Z" level=info msg="Start snapshots syncer" Feb 13 19:02:18.788415 containerd[1954]: time="2025-02-13T19:02:18.788135303Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:02:18.788415 containerd[1954]: time="2025-02-13T19:02:18.788156447Z" level=info msg="Start streaming server" Feb 13 19:02:18.795307 containerd[1954]: time="2025-02-13T19:02:18.792181259Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:02:18.795307 containerd[1954]: time="2025-02-13T19:02:18.792357371Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:02:18.795307 containerd[1954]: time="2025-02-13T19:02:18.792476879Z" level=info msg="containerd successfully booted in 0.332742s" Feb 13 19:02:18.792609 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:02:18.806956 ntpd[1927]: bind(24) AF_INET6 fe80::421:fbff:fe92:9987%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 19:02:18.807017 ntpd[1927]: unable to create socket on eth0 (6) for fe80::421:fbff:fe92:9987%2#123 Feb 13 19:02:18.807045 ntpd[1927]: failed to init interface for address fe80::421:fbff:fe92:9987%2 Feb 13 19:02:18.827501 systemd-hostnamed[1983]: Hostname set to <ip-172-31-18-242> (transient) Feb 13 19:02:18.827792 systemd-resolved[1868]: System hostname changed to 'ip-172-31-18-242'. Feb 13 19:02:18.934281 systemd-networkd[1866]: eth0: Gained IPv6LL Feb 13 19:02:18.944946 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:02:18.949731 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:02:18.967704 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. 
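containerd starts with "no network config found in /etc/cni/net.d", which is expected before any CNI plugin is installed; the CRI plugin keeps a conf syncer watching that directory. For illustration, a minimal bridge conflist that would satisfy the syncer, assuming the standard bridge/portmap binaries exist under the configured /opt/cni/bin (the network name and subnet are invented for the example, not taken from this host):

sudo mkdir -p /etc/cni/net.d
cat <<'EOF' | sudo tee /etc/cni/net.d/10-examplenet.conflist
{
  "cniVersion": "0.3.1",
  "name": "examplenet",
  "plugins": [
    { "type": "bridge", "bridge": "cni0", "isGateway": true, "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" } },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF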
Feb 13 19:02:18.980679 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:02:18.990888 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:02:19.107201 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:02:19.134119 amazon-ssm-agent[2126]: Initializing new seelog logger Feb 13 19:02:19.136130 amazon-ssm-agent[2126]: New Seelog Logger Creation Complete Feb 13 19:02:19.136130 amazon-ssm-agent[2126]: 2025/02/13 19:02:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:02:19.136130 amazon-ssm-agent[2126]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:02:19.136130 amazon-ssm-agent[2126]: 2025/02/13 19:02:19 processing appconfig overrides Feb 13 19:02:19.139132 amazon-ssm-agent[2126]: 2025/02/13 19:02:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:02:19.139132 amazon-ssm-agent[2126]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:02:19.139132 amazon-ssm-agent[2126]: 2025/02/13 19:02:19 processing appconfig overrides Feb 13 19:02:19.139132 amazon-ssm-agent[2126]: 2025/02/13 19:02:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:02:19.139132 amazon-ssm-agent[2126]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:02:19.139132 amazon-ssm-agent[2126]: 2025/02/13 19:02:19 processing appconfig overrides Feb 13 19:02:19.141115 amazon-ssm-agent[2126]: 2025-02-13 19:02:19 INFO Proxy environment variables: Feb 13 19:02:19.144649 amazon-ssm-agent[2126]: 2025/02/13 19:02:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:02:19.144649 amazon-ssm-agent[2126]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:02:19.144649 amazon-ssm-agent[2126]: 2025/02/13 19:02:19 processing appconfig overrides Feb 13 19:02:19.239466 amazon-ssm-agent[2126]: 2025-02-13 19:02:19 INFO http_proxy: Feb 13 19:02:19.252942 sshd_keygen[1972]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:02:19.339356 amazon-ssm-agent[2126]: 2025-02-13 19:02:19 INFO no_proxy: Feb 13 19:02:19.362624 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 19:02:19.379650 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:02:19.390686 systemd[1]: Started sshd@0-172.31.18.242:22-139.178.89.65:34658.service - OpenSSH per-connection server daemon (139.178.89.65:34658). Feb 13 19:02:19.428411 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:02:19.430189 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:02:19.437707 amazon-ssm-agent[2126]: 2025-02-13 19:02:19 INFO https_proxy: Feb 13 19:02:19.447559 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:02:19.514164 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:02:19.527663 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:02:19.535734 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 19:02:19.538328 amazon-ssm-agent[2126]: 2025-02-13 19:02:19 INFO Checking if agent identity type OnPrem can be assumed Feb 13 19:02:19.539247 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:02:19.582526 tar[1950]: linux-arm64/README.md Feb 13 19:02:19.618965 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
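The sshd_keygen entry above creates the machine's host keys on first boot. The equivalent manual invocation is a one-liner (a sketch; -A generates any missing host keys of the default types under /etc/ssh):

# Generate any missing host keys (RSA, ECDSA, ED25519) in /etc/ssh.
sudo ssh-keygen -A
# Print the fingerprint of each resulting public host key.
for k in /etc/ssh/ssh_host_*_key.pub; do ssh-keygen -lf "$k"; done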
Feb 13 19:02:19.636399 amazon-ssm-agent[2126]: 2025-02-13 19:02:19 INFO Checking if agent identity type EC2 can be assumed Feb 13 19:02:19.698549 sshd[2151]: Accepted publickey for core from 139.178.89.65 port 34658 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:02:19.703971 sshd-session[2151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:02:19.722417 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:02:19.733609 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:02:19.740998 amazon-ssm-agent[2126]: 2025-02-13 19:02:19 INFO Agent will take identity from EC2 Feb 13 19:02:19.760717 systemd-logind[1933]: New session 1 of user core. Feb 13 19:02:19.781978 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:02:19.796808 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:02:19.816168 (systemd)[2167]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:02:19.822031 systemd-logind[1933]: New session c1 of user core. Feb 13 19:02:19.832961 amazon-ssm-agent[2126]: 2025-02-13 19:02:19 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 19:02:19.832961 amazon-ssm-agent[2126]: 2025-02-13 19:02:19 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 19:02:19.833159 amazon-ssm-agent[2126]: 2025-02-13 19:02:19 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 19:02:19.833924 amazon-ssm-agent[2126]: 2025-02-13 19:02:19 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Feb 13 19:02:19.833924 amazon-ssm-agent[2126]: 2025-02-13 19:02:19 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Feb 13 19:02:19.833924 amazon-ssm-agent[2126]: 2025-02-13 19:02:19 INFO [amazon-ssm-agent] Starting Core Agent Feb 13 19:02:19.833924 amazon-ssm-agent[2126]: 2025-02-13 19:02:19 INFO [amazon-ssm-agent] registrar detected. Attempting registration Feb 13 19:02:19.833924 amazon-ssm-agent[2126]: 2025-02-13 19:02:19 INFO [Registrar] Starting registrar module Feb 13 19:02:19.833924 amazon-ssm-agent[2126]: 2025-02-13 19:02:19 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Feb 13 19:02:19.833924 amazon-ssm-agent[2126]: 2025-02-13 19:02:19 INFO [EC2Identity] EC2 registration was successful. Feb 13 19:02:19.833924 amazon-ssm-agent[2126]: 2025-02-13 19:02:19 INFO [CredentialRefresher] credentialRefresher has started Feb 13 19:02:19.833924 amazon-ssm-agent[2126]: 2025-02-13 19:02:19 INFO [CredentialRefresher] Starting credentials refresher loop Feb 13 19:02:19.833924 amazon-ssm-agent[2126]: 2025-02-13 19:02:19 INFO EC2RoleProvider Successfully connected with instance profile role credentials Feb 13 19:02:19.841136 amazon-ssm-agent[2126]: 2025-02-13 19:02:19 INFO [CredentialRefresher] Next credential rotation will be in 30.399978949 minutes Feb 13 19:02:20.124787 systemd[2167]: Queued start job for default target default.target. Feb 13 19:02:20.135124 systemd[2167]: Created slice app.slice - User Application Slice. Feb 13 19:02:20.135190 systemd[2167]: Reached target paths.target - Paths. Feb 13 19:02:20.135402 systemd[2167]: Reached target timers.target - Timers. Feb 13 19:02:20.138042 systemd[2167]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:02:20.177990 systemd[2167]: Listening on dbus.socket - D-Bus User Message Bus Socket. 
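sshd logs the SHA256 fingerprint of the public key it accepted for core. To check which authorized key that is, fingerprint the entries in the user's authorized_keys file (one line of output per key):

# Should print a SHA256:... value matching the 'Accepted publickey' entry above.
ssh-keygen -lf /home/core/.ssh/authorized_keys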
Feb 13 19:02:20.178286 systemd[2167]: Reached target sockets.target - Sockets. Feb 13 19:02:20.178388 systemd[2167]: Reached target basic.target - Basic System. Feb 13 19:02:20.178473 systemd[2167]: Reached target default.target - Main User Target. Feb 13 19:02:20.178533 systemd[2167]: Startup finished in 342ms. Feb 13 19:02:20.178817 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:02:20.189393 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:02:20.353747 systemd[1]: Started sshd@1-172.31.18.242:22-139.178.89.65:34664.service - OpenSSH per-connection server daemon (139.178.89.65:34664). Feb 13 19:02:20.551650 sshd[2179]: Accepted publickey for core from 139.178.89.65 port 34664 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:02:20.554263 sshd-session[2179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:02:20.562414 systemd-logind[1933]: New session 2 of user core. Feb 13 19:02:20.573380 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 19:02:20.718719 sshd[2181]: Connection closed by 139.178.89.65 port 34664 Feb 13 19:02:20.721290 sshd-session[2179]: pam_unix(sshd:session): session closed for user core Feb 13 19:02:20.727639 systemd[1]: sshd@1-172.31.18.242:22-139.178.89.65:34664.service: Deactivated successfully. Feb 13 19:02:20.731045 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:02:20.735189 systemd-logind[1933]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:02:20.737531 systemd-logind[1933]: Removed session 2. Feb 13 19:02:20.763577 systemd[1]: Started sshd@2-172.31.18.242:22-139.178.89.65:34678.service - OpenSSH per-connection server daemon (139.178.89.65:34678). Feb 13 19:02:20.815997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:02:20.822439 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:02:20.828844 systemd[1]: Startup finished in 1.091s (kernel) + 8.638s (initrd) + 8.452s (userspace) = 18.182s. Feb 13 19:02:20.833406 (kubelet)[2193]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:02:20.898211 amazon-ssm-agent[2126]: 2025-02-13 19:02:20 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Feb 13 19:02:20.993011 sshd[2187]: Accepted publickey for core from 139.178.89.65 port 34678 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:02:20.996588 sshd-session[2187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:02:20.998778 amazon-ssm-agent[2126]: 2025-02-13 19:02:20 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2200) started Feb 13 19:02:21.010025 systemd-logind[1933]: New session 3 of user core. Feb 13 19:02:21.020625 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:02:21.100289 amazon-ssm-agent[2126]: 2025-02-13 19:02:20 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Feb 13 19:02:21.154545 sshd[2205]: Connection closed by 139.178.89.65 port 34678 Feb 13 19:02:21.155425 sshd-session[2187]: pam_unix(sshd:session): session closed for user core Feb 13 19:02:21.162759 systemd[1]: sshd@2-172.31.18.242:22-139.178.89.65:34678.service: Deactivated successfully. 
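systemd reports startup finished in 1.091s (kernel) + 8.638s (initrd) + 8.452s (userspace). After boot, systemd-analyze can reproduce that summary and attribute the userspace share to individual units:

systemd-analyze                                   # kernel/initrd/userspace totals
systemd-analyze blame | head -n 10                # slowest units first
systemd-analyze critical-chain multi-user.target  # dependency path to the default target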
Feb 13 19:02:21.167035 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:02:21.169181 systemd-logind[1933]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:02:21.172064 systemd-logind[1933]: Removed session 3. Feb 13 19:02:21.760010 kubelet[2193]: E0213 19:02:21.759901 2193 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:02:21.764359 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:02:21.764695 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:02:21.765426 systemd[1]: kubelet.service: Consumed 1.297s CPU time, 251.5M memory peak. Feb 13 19:02:21.806964 ntpd[1927]: Listen normally on 7 eth0 [fe80::421:fbff:fe92:9987%2]:123 Feb 13 19:02:24.378611 systemd-resolved[1868]: Clock change detected. Flushing caches. Feb 13 19:02:30.774006 systemd[1]: Started sshd@3-172.31.18.242:22-139.178.89.65:37180.service - OpenSSH per-connection server daemon (139.178.89.65:37180). Feb 13 19:02:30.957912 sshd[2223]: Accepted publickey for core from 139.178.89.65 port 37180 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:02:30.960382 sshd-session[2223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:02:30.969851 systemd-logind[1933]: New session 4 of user core. Feb 13 19:02:30.975788 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:02:31.102490 sshd[2225]: Connection closed by 139.178.89.65 port 37180 Feb 13 19:02:31.103605 sshd-session[2223]: pam_unix(sshd:session): session closed for user core Feb 13 19:02:31.110623 systemd[1]: sshd@3-172.31.18.242:22-139.178.89.65:37180.service: Deactivated successfully. Feb 13 19:02:31.113933 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:02:31.115431 systemd-logind[1933]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:02:31.117432 systemd-logind[1933]: Removed session 4. Feb 13 19:02:31.145002 systemd[1]: Started sshd@4-172.31.18.242:22-139.178.89.65:37190.service - OpenSSH per-connection server daemon (139.178.89.65:37190). Feb 13 19:02:31.331692 sshd[2231]: Accepted publickey for core from 139.178.89.65 port 37190 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:02:31.334059 sshd-session[2231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:02:31.338514 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 19:02:31.347876 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:02:31.352401 systemd-logind[1933]: New session 5 of user core. Feb 13 19:02:31.366480 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 19:02:31.495207 sshd[2236]: Connection closed by 139.178.89.65 port 37190 Feb 13 19:02:31.495784 sshd-session[2231]: pam_unix(sshd:session): session closed for user core Feb 13 19:02:31.503104 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:02:31.505206 systemd-logind[1933]: Session 5 logged out. Waiting for processes to exit. 
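The kubelet exits because /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-provisioned node that file is written by kubeadm init or kubeadm join, so this failure, and the scheduled restarts that follow, are expected until one of those runs. For illustration only, a skeletal KubeletConfiguration of the kind that lands in that path (placeholder content, not what kubeadm would generate for this node):

cat <<'EOF' | sudo tee /var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
# kubeadm additionally fills in cluster DNS, TLS material, and more.
EOF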
Feb 13 19:02:31.507325 systemd[1]: sshd@4-172.31.18.242:22-139.178.89.65:37190.service: Deactivated successfully. Feb 13 19:02:31.514546 systemd-logind[1933]: Removed session 5. Feb 13 19:02:31.541197 systemd[1]: Started sshd@5-172.31.18.242:22-139.178.89.65:37192.service - OpenSSH per-connection server daemon (139.178.89.65:37192). Feb 13 19:02:31.690791 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:02:31.696028 (kubelet)[2249]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:02:31.740336 sshd[2242]: Accepted publickey for core from 139.178.89.65 port 37192 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:02:31.743270 sshd-session[2242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:02:31.754247 systemd-logind[1933]: New session 6 of user core. Feb 13 19:02:31.761802 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 19:02:31.778005 kubelet[2249]: E0213 19:02:31.777880 2249 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:02:31.784892 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:02:31.785218 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:02:31.786641 systemd[1]: kubelet.service: Consumed 290ms CPU time, 102.5M memory peak. Feb 13 19:02:31.897008 sshd[2256]: Connection closed by 139.178.89.65 port 37192 Feb 13 19:02:31.896282 sshd-session[2242]: pam_unix(sshd:session): session closed for user core Feb 13 19:02:31.901197 systemd[1]: sshd@5-172.31.18.242:22-139.178.89.65:37192.service: Deactivated successfully. Feb 13 19:02:31.904170 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:02:31.906983 systemd-logind[1933]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:02:31.909080 systemd-logind[1933]: Removed session 6. Feb 13 19:02:31.937018 systemd[1]: Started sshd@6-172.31.18.242:22-139.178.89.65:37202.service - OpenSSH per-connection server daemon (139.178.89.65:37202). Feb 13 19:02:32.119069 sshd[2263]: Accepted publickey for core from 139.178.89.65 port 37202 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:02:32.121808 sshd-session[2263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:02:32.131738 systemd-logind[1933]: New session 7 of user core. Feb 13 19:02:32.137788 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:02:32.256314 sudo[2266]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:02:32.256964 sudo[2266]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:02:32.272796 sudo[2266]: pam_unix(sudo:session): session closed for user root Feb 13 19:02:32.295810 sshd[2265]: Connection closed by 139.178.89.65 port 37202 Feb 13 19:02:32.296983 sshd-session[2263]: pam_unix(sshd:session): session closed for user core Feb 13 19:02:32.303967 systemd[1]: sshd@6-172.31.18.242:22-139.178.89.65:37202.service: Deactivated successfully. Feb 13 19:02:32.307084 systemd[1]: session-7.scope: Deactivated successfully. 
Feb 13 19:02:32.309127 systemd-logind[1933]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:02:32.311432 systemd-logind[1933]: Removed session 7. Feb 13 19:02:32.342986 systemd[1]: Started sshd@7-172.31.18.242:22-139.178.89.65:37216.service - OpenSSH per-connection server daemon (139.178.89.65:37216). Feb 13 19:02:32.524725 sshd[2272]: Accepted publickey for core from 139.178.89.65 port 37216 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:02:32.527096 sshd-session[2272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:02:32.534546 systemd-logind[1933]: New session 8 of user core. Feb 13 19:02:32.542747 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 19:02:32.647276 sudo[2276]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:02:32.648227 sudo[2276]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:02:32.654632 sudo[2276]: pam_unix(sudo:session): session closed for user root Feb 13 19:02:32.664714 sudo[2275]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 19:02:32.665340 sudo[2275]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:02:32.684179 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:02:32.743984 augenrules[2298]: No rules Feb 13 19:02:32.746315 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:02:32.746964 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:02:32.749437 sudo[2275]: pam_unix(sudo:session): session closed for user root Feb 13 19:02:32.773022 sshd[2274]: Connection closed by 139.178.89.65 port 37216 Feb 13 19:02:32.773876 sshd-session[2272]: pam_unix(sshd:session): session closed for user core Feb 13 19:02:32.780809 systemd[1]: sshd@7-172.31.18.242:22-139.178.89.65:37216.service: Deactivated successfully. Feb 13 19:02:32.783885 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 19:02:32.785310 systemd-logind[1933]: Session 8 logged out. Waiting for processes to exit. Feb 13 19:02:32.787615 systemd-logind[1933]: Removed session 8. Feb 13 19:02:32.819963 systemd[1]: Started sshd@8-172.31.18.242:22-139.178.89.65:37230.service - OpenSSH per-connection server daemon (139.178.89.65:37230). Feb 13 19:02:32.999278 sshd[2307]: Accepted publickey for core from 139.178.89.65 port 37230 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:02:33.001813 sshd-session[2307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:02:33.012907 systemd-logind[1933]: New session 9 of user core. Feb 13 19:02:33.018784 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 19:02:33.121383 sudo[2310]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:02:33.122032 sudo[2310]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:02:33.677095 (dockerd)[2326]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 19:02:33.677700 systemd[1]: Starting docker.service - Docker Application Container Engine... 
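The audit-rules entries above show augenrules, which merges /etc/audit/rules.d/*.rules into the kernel's audit rule set, correctly reporting "No rules" after the two rule files were removed. A sketch of adding a rule back and reloading (the watch rule is illustrative, not from this host):

# Watch writes and attribute changes to /etc/passwd.
echo '-w /etc/passwd -p wa -k passwd_changes' | sudo tee /etc/audit/rules.d/90-example.rules
sudo augenrules --load   # rebuild the merged rule set and load it
sudo auditctl -l         # list the rules now active in the kernel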
Feb 13 19:02:34.025146 dockerd[2326]: time="2025-02-13T19:02:34.024460707Z" level=info msg="Starting up" Feb 13 19:02:34.138341 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3387361511-merged.mount: Deactivated successfully. Feb 13 19:02:34.170575 dockerd[2326]: time="2025-02-13T19:02:34.170511124Z" level=info msg="Loading containers: start." Feb 13 19:02:34.407592 kernel: Initializing XFRM netlink socket Feb 13 19:02:34.441683 (udev-worker)[2349]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:02:34.539575 systemd-networkd[1866]: docker0: Link UP Feb 13 19:02:34.576904 dockerd[2326]: time="2025-02-13T19:02:34.576829122Z" level=info msg="Loading containers: done." Feb 13 19:02:34.602634 dockerd[2326]: time="2025-02-13T19:02:34.602558826Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 19:02:34.602853 dockerd[2326]: time="2025-02-13T19:02:34.602707158Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Feb 13 19:02:34.602975 dockerd[2326]: time="2025-02-13T19:02:34.602926626Z" level=info msg="Daemon has completed initialization" Feb 13 19:02:34.659783 dockerd[2326]: time="2025-02-13T19:02:34.659672179Z" level=info msg="API listen on /run/docker.sock" Feb 13 19:02:34.660143 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 19:02:35.584394 containerd[1954]: time="2025-02-13T19:02:35.584334487Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\"" Feb 13 19:02:36.204151 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2376345298.mount: Deactivated successfully. 
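Once dockerd logs "API listen on /run/docker.sock" the daemon is usable. A quick check of the storage driver the warning above refers to and of the docker0 bridge that came up:

docker info --format '{{.Driver}}'   # expect: overlay2
ip addr show docker0                 # bridge created while loading containers
docker run --rm hello-world          # end-to-end smoke test (pulls a tiny image)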
Feb 13 19:02:37.603368 containerd[1954]: time="2025-02-13T19:02:37.603310365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:37.609480 containerd[1954]: time="2025-02-13T19:02:37.609408333Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.2: active requests=0, bytes read=26218236" Feb 13 19:02:37.611468 containerd[1954]: time="2025-02-13T19:02:37.611398413Z" level=info msg="ImageCreate event name:\"sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:37.616930 containerd[1954]: time="2025-02-13T19:02:37.616878561Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:37.619463 containerd[1954]: time="2025-02-13T19:02:37.619219005Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.2\" with image id \"sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\", size \"26215036\" in 2.034818866s" Feb 13 19:02:37.619463 containerd[1954]: time="2025-02-13T19:02:37.619272885Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\" returns image reference \"sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32\"" Feb 13 19:02:37.620582 containerd[1954]: time="2025-02-13T19:02:37.620536473Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\"" Feb 13 19:02:39.197530 containerd[1954]: time="2025-02-13T19:02:39.197432817Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:39.201323 containerd[1954]: time="2025-02-13T19:02:39.201259593Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.2: active requests=0, bytes read=22528145" Feb 13 19:02:39.202803 containerd[1954]: time="2025-02-13T19:02:39.202758165Z" level=info msg="ImageCreate event name:\"sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:39.208214 containerd[1954]: time="2025-02-13T19:02:39.208162281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:39.210541 containerd[1954]: time="2025-02-13T19:02:39.210460341Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.2\" with image id \"sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\", size \"23968941\" in 1.5897164s" Feb 13 19:02:39.210681 containerd[1954]: time="2025-02-13T19:02:39.210541161Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\" returns image reference \"sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d\"" Feb 13 19:02:39.211227 
containerd[1954]: time="2025-02-13T19:02:39.211144989Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\"" Feb 13 19:02:40.575078 containerd[1954]: time="2025-02-13T19:02:40.574998756Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:40.577146 containerd[1954]: time="2025-02-13T19:02:40.576597048Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.2: active requests=0, bytes read=17480800" Feb 13 19:02:40.578201 containerd[1954]: time="2025-02-13T19:02:40.578127828Z" level=info msg="ImageCreate event name:\"sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:40.586589 containerd[1954]: time="2025-02-13T19:02:40.586532076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:40.588576 containerd[1954]: time="2025-02-13T19:02:40.588489228Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.2\" with image id \"sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\", size \"18921614\" in 1.377284839s" Feb 13 19:02:40.588576 containerd[1954]: time="2025-02-13T19:02:40.588568392Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\" returns image reference \"sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911\"" Feb 13 19:02:40.589638 containerd[1954]: time="2025-02-13T19:02:40.589595688Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\"" Feb 13 19:02:41.915922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2057846647.mount: Deactivated successfully. Feb 13 19:02:41.918051 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 19:02:41.927822 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:02:42.278941 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:02:42.291256 (kubelet)[2592]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:02:42.381564 kubelet[2592]: E0213 19:02:42.380705 2592 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:02:42.386812 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:02:42.387145 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:02:42.388393 systemd[1]: kubelet.service: Consumed 296ms CPU time, 99.8M memory peak. 
Feb 13 19:02:42.660952 containerd[1954]: time="2025-02-13T19:02:42.660293810Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:42.662457 containerd[1954]: time="2025-02-13T19:02:42.662113442Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.2: active requests=0, bytes read=27363382" Feb 13 19:02:42.663453 containerd[1954]: time="2025-02-13T19:02:42.663360506Z" level=info msg="ImageCreate event name:\"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:42.667204 containerd[1954]: time="2025-02-13T19:02:42.667084598Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:42.669126 containerd[1954]: time="2025-02-13T19:02:42.668771402Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.2\" with image id \"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\", repo tag \"registry.k8s.io/kube-proxy:v1.32.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\", size \"27362401\" in 2.079113686s" Feb 13 19:02:42.669126 containerd[1954]: time="2025-02-13T19:02:42.668829866Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\" returns image reference \"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\"" Feb 13 19:02:42.670155 containerd[1954]: time="2025-02-13T19:02:42.670115966Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Feb 13 19:02:43.254247 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1339248956.mount: Deactivated successfully. 
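The PullImage entries are containerd's CRI plugin fetching the Kubernetes control-plane images, typically driven by kubeadm's preflight pull. The same operation can be issued by hand with crictl, assuming it is installed and pointed at the containerd socket:

export CONTAINER_RUNTIME_ENDPOINT=unix:///run/containerd/containerd.sock
crictl pull registry.k8s.io/kube-proxy:v1.32.2
crictl images | grep kube-proxy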
Feb 13 19:02:44.424779 containerd[1954]: time="2025-02-13T19:02:44.424691403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:44.427008 containerd[1954]: time="2025-02-13T19:02:44.426936819Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Feb 13 19:02:44.427946 containerd[1954]: time="2025-02-13T19:02:44.427363959Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:44.433626 containerd[1954]: time="2025-02-13T19:02:44.433491867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:44.436442 containerd[1954]: time="2025-02-13T19:02:44.436188507Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.765903209s" Feb 13 19:02:44.436442 containerd[1954]: time="2025-02-13T19:02:44.436247787Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Feb 13 19:02:44.437020 containerd[1954]: time="2025-02-13T19:02:44.436859703Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 19:02:44.887841 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3571766861.mount: Deactivated successfully. 
Feb 13 19:02:44.892797 containerd[1954]: time="2025-02-13T19:02:44.892729589Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:44.894097 containerd[1954]: time="2025-02-13T19:02:44.894011117Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Feb 13 19:02:44.896541 containerd[1954]: time="2025-02-13T19:02:44.895742129Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:44.905748 containerd[1954]: time="2025-02-13T19:02:44.905674133Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:44.909815 containerd[1954]: time="2025-02-13T19:02:44.909755441Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 472.839458ms" Feb 13 19:02:44.910047 containerd[1954]: time="2025-02-13T19:02:44.910014437Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Feb 13 19:02:44.910913 containerd[1954]: time="2025-02-13T19:02:44.910849841Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Feb 13 19:02:45.456074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3612370151.mount: Deactivated successfully. Feb 13 19:02:47.714289 containerd[1954]: time="2025-02-13T19:02:47.714062587Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:47.716436 containerd[1954]: time="2025-02-13T19:02:47.716368411Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812429" Feb 13 19:02:47.717373 containerd[1954]: time="2025-02-13T19:02:47.716847103Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:47.728483 containerd[1954]: time="2025-02-13T19:02:47.728389867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:47.730759 containerd[1954]: time="2025-02-13T19:02:47.730711579Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.819800562s" Feb 13 19:02:47.731061 containerd[1954]: time="2025-02-13T19:02:47.730919959Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Feb 13 19:02:48.428192 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
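Note that pause:3.10 was just pulled even though the CRI configuration dumped earlier lists SandboxImage registry.k8s.io/pause:3.8; kubeadm pulls the pause version it pairs with, independent of containerd's default. To see which sandbox image containerd itself would use:

# Print the merged containerd configuration and pick out the sandbox image line.
containerd config dump | grep sandbox_image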
Feb 13 19:02:52.442710 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Feb 13 19:02:52.452668 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:02:52.780833 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:02:52.794016 (kubelet)[2741]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:02:52.872009 kubelet[2741]: E0213 19:02:52.870854 2741 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:02:52.883216 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:02:52.883576 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:02:52.886635 systemd[1]: kubelet.service: Consumed 273ms CPU time, 101.6M memory peak. Feb 13 19:02:55.739635 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:02:55.740411 systemd[1]: kubelet.service: Consumed 273ms CPU time, 101.6M memory peak. Feb 13 19:02:55.752346 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:02:55.814854 systemd[1]: Reload requested from client PID 2755 ('systemctl') (unit session-9.scope)... Feb 13 19:02:55.814886 systemd[1]: Reloading... Feb 13 19:02:56.087543 zram_generator::config[2800]: No configuration found. Feb 13 19:02:56.310166 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:02:56.534534 systemd[1]: Reloading finished in 718 ms. Feb 13 19:02:56.631788 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:02:56.641607 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:02:56.644608 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:02:56.645044 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:02:56.645127 systemd[1]: kubelet.service: Consumed 214ms CPU time, 89.8M memory peak. Feb 13 19:02:56.657029 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:02:56.942021 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:02:56.958087 (kubelet)[2865]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:02:57.032474 kubelet[2865]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:02:57.032474 kubelet[2865]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 19:02:57.032474 kubelet[2865]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
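The crash loop above (restart counter at 3, exit status 1) is kubelet failing its very first step: /var/lib/kubelet/config.yaml does not exist yet, since the file is normally written later by bootstrap tooling such as kubeadm. A minimal sketch reproducing that failure mode, not kubelet's actual startup code:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	const path = "/var/lib/kubelet/config.yaml"
	if _, err := os.ReadFile(path); err != nil {
		// Same class of error as the log above ("no such file or directory");
		// systemd then schedules another restart of the unit.
		fmt.Fprintf(os.Stderr, "failed to load kubelet config file %q: %v\n", path, err)
		os.Exit(1)
	}
	fmt.Println("kubelet config present; kubelet can start")
}
```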
Feb 13 19:02:57.034535 kubelet[2865]: I0213 19:02:57.033280 2865 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:02:57.828537 kubelet[2865]: I0213 19:02:57.826808 2865 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 19:02:57.828537 kubelet[2865]: I0213 19:02:57.826860 2865 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:02:57.828537 kubelet[2865]: I0213 19:02:57.827668 2865 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 19:02:57.875898 kubelet[2865]: E0213 19:02:57.875821 2865 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.18.242:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.18.242:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:02:57.879102 kubelet[2865]: I0213 19:02:57.879045 2865 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:02:57.892531 kubelet[2865]: E0213 19:02:57.892458 2865 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:02:57.892692 kubelet[2865]: I0213 19:02:57.892537 2865 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:02:57.899112 kubelet[2865]: I0213 19:02:57.899021 2865 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:02:57.899546 kubelet[2865]: I0213 19:02:57.899473 2865 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:02:57.899843 kubelet[2865]: I0213 19:02:57.899545 2865 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-242","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:02:57.900010 kubelet[2865]: I0213 19:02:57.899872 2865 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:02:57.900010 kubelet[2865]: I0213 19:02:57.899892 2865 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 19:02:57.900168 kubelet[2865]: I0213 19:02:57.900137 2865 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:02:57.905961 kubelet[2865]: I0213 19:02:57.905921 2865 kubelet.go:446] "Attempting to sync node with API server" Feb 13 19:02:57.905961 kubelet[2865]: I0213 19:02:57.905966 2865 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:02:57.907739 kubelet[2865]: I0213 19:02:57.906003 2865 kubelet.go:352] "Adding apiserver pod source" Feb 13 19:02:57.907739 kubelet[2865]: I0213 19:02:57.906023 2865 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:02:57.918270 kubelet[2865]: W0213 19:02:57.917962 2865 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.18.242:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.18.242:6443: connect: connection refused Feb 13 19:02:57.918270 kubelet[2865]: E0213 19:02:57.918050 2865 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.18.242:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.18.242:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:02:57.918270 kubelet[2865]: W0213 
19:02:57.918164 2865 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.18.242:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-242&limit=500&resourceVersion=0": dial tcp 172.31.18.242:6443: connect: connection refused Feb 13 19:02:57.918270 kubelet[2865]: E0213 19:02:57.918216 2865 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.18.242:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-242&limit=500&resourceVersion=0\": dial tcp 172.31.18.242:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:02:57.919292 kubelet[2865]: I0213 19:02:57.919213 2865 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:02:57.920132 kubelet[2865]: I0213 19:02:57.920083 2865 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:02:57.920225 kubelet[2865]: W0213 19:02:57.920208 2865 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 19:02:57.921406 kubelet[2865]: I0213 19:02:57.921370 2865 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 19:02:57.921560 kubelet[2865]: I0213 19:02:57.921424 2865 server.go:1287] "Started kubelet" Feb 13 19:02:57.927213 kubelet[2865]: I0213 19:02:57.926191 2865 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:02:57.927213 kubelet[2865]: I0213 19:02:57.926228 2865 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:02:57.937149 kubelet[2865]: I0213 19:02:57.937105 2865 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:02:57.938989 kubelet[2865]: I0213 19:02:57.938883 2865 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:02:57.939326 kubelet[2865]: I0213 19:02:57.939287 2865 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:02:57.941403 kubelet[2865]: E0213 19:02:57.941132 2865 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.18.242:6443/api/v1/namespaces/default/events\": dial tcp 172.31.18.242:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-18-242.1823d9d23340f2a2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-242,UID:ip-172-31-18-242,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-242,},FirstTimestamp:2025-02-13 19:02:57.92139741 +0000 UTC m=+0.957080646,LastTimestamp:2025-02-13 19:02:57.92139741 +0000 UTC m=+0.957080646,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-242,}" Feb 13 19:02:57.943213 kubelet[2865]: I0213 19:02:57.941700 2865 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 19:02:57.943213 kubelet[2865]: E0213 19:02:57.941921 2865 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-18-242\" not found" Feb 13 19:02:57.943213 kubelet[2865]: I0213 19:02:57.943035 
2865 server.go:490] "Adding debug handlers to kubelet server" Feb 13 19:02:57.946132 kubelet[2865]: I0213 19:02:57.946093 2865 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:02:57.946523 kubelet[2865]: I0213 19:02:57.946469 2865 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:02:57.947948 kubelet[2865]: W0213 19:02:57.947873 2865 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.18.242:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.242:6443: connect: connection refused Feb 13 19:02:57.948250 kubelet[2865]: E0213 19:02:57.948169 2865 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.18.242:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.18.242:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:02:57.948536 kubelet[2865]: E0213 19:02:57.948467 2865 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.242:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-242?timeout=10s\": dial tcp 172.31.18.242:6443: connect: connection refused" interval="200ms" Feb 13 19:02:57.949445 kubelet[2865]: I0213 19:02:57.949407 2865 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:02:57.949950 kubelet[2865]: I0213 19:02:57.949911 2865 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:02:57.953188 kubelet[2865]: I0213 19:02:57.953153 2865 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:02:57.965175 kubelet[2865]: E0213 19:02:57.965025 2865 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:02:57.968212 kubelet[2865]: I0213 19:02:57.966964 2865 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:02:57.969520 kubelet[2865]: I0213 19:02:57.969447 2865 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:02:57.969520 kubelet[2865]: I0213 19:02:57.969518 2865 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 19:02:57.969681 kubelet[2865]: I0213 19:02:57.969555 2865 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
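The lease-controller error above retries with interval="200ms", and the later occurrences of the same error in this log show the interval doubling to 400ms, 800ms, and 1.6s while the API server stays unreachable. A minimal sketch of that doubling backoff; the real controller presumably also caps the interval, which this toy does not:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	interval := 200 * time.Millisecond
	for attempt := 1; attempt <= 4; attempt++ {
		// Prints 200ms, 400ms, 800ms, 1.6s — the sequence observed in the log.
		fmt.Printf("attempt %d: will retry in %s\n", attempt, interval)
		interval *= 2
	}
}
```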
Feb 13 19:02:57.969681 kubelet[2865]: I0213 19:02:57.969571 2865 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 19:02:57.969681 kubelet[2865]: E0213 19:02:57.969638 2865 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:02:57.980092 kubelet[2865]: W0213 19:02:57.979470 2865 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.18.242:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.242:6443: connect: connection refused Feb 13 19:02:57.980732 kubelet[2865]: E0213 19:02:57.980223 2865 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.18.242:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.18.242:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:02:57.995613 kubelet[2865]: I0213 19:02:57.995564 2865 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 19:02:57.995613 kubelet[2865]: I0213 19:02:57.995596 2865 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 19:02:57.995803 kubelet[2865]: I0213 19:02:57.995628 2865 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:02:57.997894 kubelet[2865]: I0213 19:02:57.997845 2865 policy_none.go:49] "None policy: Start" Feb 13 19:02:57.997894 kubelet[2865]: I0213 19:02:57.997888 2865 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 19:02:57.998048 kubelet[2865]: I0213 19:02:57.997912 2865 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:02:58.007415 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 19:02:58.025908 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 19:02:58.032690 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 19:02:58.042761 kubelet[2865]: E0213 19:02:58.042697 2865 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-18-242\" not found" Feb 13 19:02:58.046527 kubelet[2865]: I0213 19:02:58.045208 2865 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:02:58.046527 kubelet[2865]: I0213 19:02:58.045540 2865 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:02:58.046527 kubelet[2865]: I0213 19:02:58.045561 2865 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:02:58.047016 kubelet[2865]: I0213 19:02:58.046989 2865 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:02:58.049776 kubelet[2865]: E0213 19:02:58.049725 2865 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Feb 13 19:02:58.049929 kubelet[2865]: E0213 19:02:58.049799 2865 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-18-242\" not found" Feb 13 19:02:58.087483 systemd[1]: Created slice kubepods-burstable-pod0f040acd057fce887e2943e19a1ab1ff.slice - libcontainer container kubepods-burstable-pod0f040acd057fce887e2943e19a1ab1ff.slice. 
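The slice names created above encode the pod QoS class and UID (kubepods-burstable-pod0f040acd….slice under kubepods-burstable.slice). A minimal sketch of how such a unit name can be derived; this is an illustration of the naming pattern in the log, not kubelet's cgroup-manager code, and the real systemd driver additionally escapes characters systemd cannot accept:

```go
package main

import "fmt"

// podSlice builds a systemd slice name from a QoS class and pod UID,
// mirroring the names visible in the log above.
func podSlice(qos, uid string) string {
	if qos == "guaranteed" { // guaranteed pods sit directly under kubepods.slice
		return fmt.Sprintf("kubepods-pod%s.slice", uid)
	}
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, uid)
}

func main() {
	// UID taken from the kube-apiserver static pod above.
	fmt.Println(podSlice("burstable", "0f040acd057fce887e2943e19a1ab1ff"))
}
```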
Feb 13 19:02:58.106534 kubelet[2865]: E0213 19:02:58.105928 2865 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-242\" not found" node="ip-172-31-18-242" Feb 13 19:02:58.114008 systemd[1]: Created slice kubepods-burstable-pod158d34cd1a5e876b8151ae16b3273950.slice - libcontainer container kubepods-burstable-pod158d34cd1a5e876b8151ae16b3273950.slice. Feb 13 19:02:58.119958 kubelet[2865]: E0213 19:02:58.118487 2865 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-242\" not found" node="ip-172-31-18-242" Feb 13 19:02:58.123588 systemd[1]: Created slice kubepods-burstable-pod4d77e00a962d6b20d381a540f78a2fdd.slice - libcontainer container kubepods-burstable-pod4d77e00a962d6b20d381a540f78a2fdd.slice. Feb 13 19:02:58.127535 kubelet[2865]: E0213 19:02:58.127441 2865 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-242\" not found" node="ip-172-31-18-242" Feb 13 19:02:58.147379 kubelet[2865]: I0213 19:02:58.147325 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0f040acd057fce887e2943e19a1ab1ff-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-242\" (UID: \"0f040acd057fce887e2943e19a1ab1ff\") " pod="kube-system/kube-apiserver-ip-172-31-18-242" Feb 13 19:02:58.147379 kubelet[2865]: I0213 19:02:58.147389 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0f040acd057fce887e2943e19a1ab1ff-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-242\" (UID: \"0f040acd057fce887e2943e19a1ab1ff\") " pod="kube-system/kube-apiserver-ip-172-31-18-242" Feb 13 19:02:58.147598 kubelet[2865]: I0213 19:02:58.147432 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/158d34cd1a5e876b8151ae16b3273950-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-242\" (UID: \"158d34cd1a5e876b8151ae16b3273950\") " pod="kube-system/kube-controller-manager-ip-172-31-18-242" Feb 13 19:02:58.147598 kubelet[2865]: I0213 19:02:58.147468 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/158d34cd1a5e876b8151ae16b3273950-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-242\" (UID: \"158d34cd1a5e876b8151ae16b3273950\") " pod="kube-system/kube-controller-manager-ip-172-31-18-242" Feb 13 19:02:58.147598 kubelet[2865]: I0213 19:02:58.147532 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/158d34cd1a5e876b8151ae16b3273950-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-242\" (UID: \"158d34cd1a5e876b8151ae16b3273950\") " pod="kube-system/kube-controller-manager-ip-172-31-18-242" Feb 13 19:02:58.147598 kubelet[2865]: I0213 19:02:58.147582 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0f040acd057fce887e2943e19a1ab1ff-ca-certs\") pod \"kube-apiserver-ip-172-31-18-242\" (UID: \"0f040acd057fce887e2943e19a1ab1ff\") " pod="kube-system/kube-apiserver-ip-172-31-18-242" Feb 13 
19:02:58.147791 kubelet[2865]: I0213 19:02:58.147618 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/158d34cd1a5e876b8151ae16b3273950-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-242\" (UID: \"158d34cd1a5e876b8151ae16b3273950\") " pod="kube-system/kube-controller-manager-ip-172-31-18-242" Feb 13 19:02:58.147791 kubelet[2865]: I0213 19:02:58.147657 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/158d34cd1a5e876b8151ae16b3273950-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-242\" (UID: \"158d34cd1a5e876b8151ae16b3273950\") " pod="kube-system/kube-controller-manager-ip-172-31-18-242" Feb 13 19:02:58.147791 kubelet[2865]: I0213 19:02:58.147694 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4d77e00a962d6b20d381a540f78a2fdd-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-242\" (UID: \"4d77e00a962d6b20d381a540f78a2fdd\") " pod="kube-system/kube-scheduler-ip-172-31-18-242" Feb 13 19:02:58.150314 kubelet[2865]: E0213 19:02:58.149813 2865 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.242:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-242?timeout=10s\": dial tcp 172.31.18.242:6443: connect: connection refused" interval="400ms" Feb 13 19:02:58.151848 kubelet[2865]: I0213 19:02:58.151801 2865 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-18-242" Feb 13 19:02:58.152713 kubelet[2865]: E0213 19:02:58.152641 2865 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.18.242:6443/api/v1/nodes\": dial tcp 172.31.18.242:6443: connect: connection refused" node="ip-172-31-18-242" Feb 13 19:02:58.355092 kubelet[2865]: I0213 19:02:58.354852 2865 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-18-242" Feb 13 19:02:58.355874 kubelet[2865]: E0213 19:02:58.355365 2865 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.18.242:6443/api/v1/nodes\": dial tcp 172.31.18.242:6443: connect: connection refused" node="ip-172-31-18-242" Feb 13 19:02:58.407983 containerd[1954]: time="2025-02-13T19:02:58.407843705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-242,Uid:0f040acd057fce887e2943e19a1ab1ff,Namespace:kube-system,Attempt:0,}" Feb 13 19:02:58.419969 containerd[1954]: time="2025-02-13T19:02:58.419846273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-242,Uid:158d34cd1a5e876b8151ae16b3273950,Namespace:kube-system,Attempt:0,}" Feb 13 19:02:58.428858 containerd[1954]: time="2025-02-13T19:02:58.428779613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-242,Uid:4d77e00a962d6b20d381a540f78a2fdd,Namespace:kube-system,Attempt:0,}" Feb 13 19:02:58.551159 kubelet[2865]: E0213 19:02:58.551050 2865 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.242:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-242?timeout=10s\": dial tcp 172.31.18.242:6443: connect: connection refused" interval="800ms" Feb 13 19:02:58.758583 kubelet[2865]: I0213 19:02:58.758523 2865 
kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-18-242" Feb 13 19:02:58.759014 kubelet[2865]: E0213 19:02:58.758968 2865 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.18.242:6443/api/v1/nodes\": dial tcp 172.31.18.242:6443: connect: connection refused" node="ip-172-31-18-242" Feb 13 19:02:58.873415 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2570847168.mount: Deactivated successfully. Feb 13 19:02:58.882160 containerd[1954]: time="2025-02-13T19:02:58.882094987Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:02:58.886633 containerd[1954]: time="2025-02-13T19:02:58.886531951Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Feb 13 19:02:58.887406 containerd[1954]: time="2025-02-13T19:02:58.887358247Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:02:58.890297 containerd[1954]: time="2025-02-13T19:02:58.890238355Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:02:58.893061 containerd[1954]: time="2025-02-13T19:02:58.892788151Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:02:58.893061 containerd[1954]: time="2025-02-13T19:02:58.892971571Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:02:58.893322 containerd[1954]: time="2025-02-13T19:02:58.893269495Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:02:58.900686 kubelet[2865]: W0213 19:02:58.900222 2865 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.18.242:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.242:6443: connect: connection refused Feb 13 19:02:58.900686 kubelet[2865]: E0213 19:02:58.900288 2865 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.18.242:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.18.242:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:02:58.901104 containerd[1954]: time="2025-02-13T19:02:58.901023403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:02:58.903067 containerd[1954]: time="2025-02-13T19:02:58.902771335Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 
473.883758ms" Feb 13 19:02:58.907151 containerd[1954]: time="2025-02-13T19:02:58.907077187Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 499.122554ms" Feb 13 19:02:58.918051 containerd[1954]: time="2025-02-13T19:02:58.917988403Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 498.034178ms" Feb 13 19:02:58.935349 kubelet[2865]: W0213 19:02:58.935196 2865 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.18.242:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.242:6443: connect: connection refused Feb 13 19:02:58.935349 kubelet[2865]: E0213 19:02:58.935279 2865 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.18.242:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.18.242:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:02:59.043491 kubelet[2865]: W0213 19:02:59.042613 2865 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.18.242:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.18.242:6443: connect: connection refused Feb 13 19:02:59.043491 kubelet[2865]: E0213 19:02:59.042715 2865 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.18.242:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.18.242:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:02:59.094883 containerd[1954]: time="2025-02-13T19:02:59.093675328Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:02:59.094883 containerd[1954]: time="2025-02-13T19:02:59.093800368Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:02:59.094883 containerd[1954]: time="2025-02-13T19:02:59.093839404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:02:59.094883 containerd[1954]: time="2025-02-13T19:02:59.094033096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:02:59.110079 containerd[1954]: time="2025-02-13T19:02:59.109281784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:02:59.110079 containerd[1954]: time="2025-02-13T19:02:59.109405144Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:02:59.110079 containerd[1954]: time="2025-02-13T19:02:59.109448176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:02:59.112583 containerd[1954]: time="2025-02-13T19:02:59.111269572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:02:59.114807 containerd[1954]: time="2025-02-13T19:02:59.114640660Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:02:59.115067 containerd[1954]: time="2025-02-13T19:02:59.114988576Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:02:59.115244 containerd[1954]: time="2025-02-13T19:02:59.115173952Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:02:59.116517 containerd[1954]: time="2025-02-13T19:02:59.116337496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:02:59.163879 systemd[1]: Started cri-containerd-fe3c0eb543094f8ed5ff11f41df3a1f0b230642ee1285cbbc72170bc411c0472.scope - libcontainer container fe3c0eb543094f8ed5ff11f41df3a1f0b230642ee1285cbbc72170bc411c0472. Feb 13 19:02:59.182338 systemd[1]: Started cri-containerd-9b123823a2a53d615fa241bd8a3f943abcfd0fea5cfbaa4c107efc5bc09704af.scope - libcontainer container 9b123823a2a53d615fa241bd8a3f943abcfd0fea5cfbaa4c107efc5bc09704af. Feb 13 19:02:59.186824 systemd[1]: Started cri-containerd-fe48c793b3d60ac33a9cfa14672c92b1141fbca87d58e817f6a4bafb31b3410b.scope - libcontainer container fe48c793b3d60ac33a9cfa14672c92b1141fbca87d58e817f6a4bafb31b3410b. 
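The entries that follow show the strict ordering of the CRI flow for each static pod: RunPodSandbox returns a sandbox id (the pause container), CreateContainer targets that sandbox, and StartContainer runs the result. A minimal stub sketch of that ordering only; these are local placeholder functions, not the real CRI RuntimeService client:

```go
package main

import "fmt"

// Stubs standing in for the CRI RuntimeService calls named in the log.
func runPodSandbox(pod string) string { return "sandbox-" + pod }

func createContainer(sandboxID, name string) string { return "ctr-" + name }

func startContainer(id string) { fmt.Println("StartContainer", id, "returns successfully") }

func main() {
	for _, pod := range []string{"kube-apiserver", "kube-controller-manager", "kube-scheduler"} {
		sb := runPodSandbox(pod)        // the pause sandbox comes up first
		ctr := createContainer(sb, pod) // the real container is created inside it
		startContainer(ctr)             // and only then started
	}
}
```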
Feb 13 19:02:59.308172 containerd[1954]: time="2025-02-13T19:02:59.307173833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-242,Uid:0f040acd057fce887e2943e19a1ab1ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b123823a2a53d615fa241bd8a3f943abcfd0fea5cfbaa4c107efc5bc09704af\"" Feb 13 19:02:59.311774 containerd[1954]: time="2025-02-13T19:02:59.311708129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-242,Uid:158d34cd1a5e876b8151ae16b3273950,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe3c0eb543094f8ed5ff11f41df3a1f0b230642ee1285cbbc72170bc411c0472\"" Feb 13 19:02:59.318968 containerd[1954]: time="2025-02-13T19:02:59.318622073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-242,Uid:4d77e00a962d6b20d381a540f78a2fdd,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe48c793b3d60ac33a9cfa14672c92b1141fbca87d58e817f6a4bafb31b3410b\"" Feb 13 19:02:59.321564 containerd[1954]: time="2025-02-13T19:02:59.321462401Z" level=info msg="CreateContainer within sandbox \"9b123823a2a53d615fa241bd8a3f943abcfd0fea5cfbaa4c107efc5bc09704af\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 19:02:59.323279 containerd[1954]: time="2025-02-13T19:02:59.323090297Z" level=info msg="CreateContainer within sandbox \"fe3c0eb543094f8ed5ff11f41df3a1f0b230642ee1285cbbc72170bc411c0472\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 19:02:59.327460 containerd[1954]: time="2025-02-13T19:02:59.327375341Z" level=info msg="CreateContainer within sandbox \"fe48c793b3d60ac33a9cfa14672c92b1141fbca87d58e817f6a4bafb31b3410b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 19:02:59.339099 kubelet[2865]: W0213 19:02:59.338511 2865 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.18.242:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-242&limit=500&resourceVersion=0": dial tcp 172.31.18.242:6443: connect: connection refused Feb 13 19:02:59.339099 kubelet[2865]: E0213 19:02:59.338739 2865 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.18.242:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-242&limit=500&resourceVersion=0\": dial tcp 172.31.18.242:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:02:59.352407 kubelet[2865]: E0213 19:02:59.352352 2865 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.242:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-242?timeout=10s\": dial tcp 172.31.18.242:6443: connect: connection refused" interval="1.6s" Feb 13 19:02:59.355390 containerd[1954]: time="2025-02-13T19:02:59.355303097Z" level=info msg="CreateContainer within sandbox \"fe48c793b3d60ac33a9cfa14672c92b1141fbca87d58e817f6a4bafb31b3410b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a5144e026665b6d44e6e8f85e6f48481a8196f530f700136c70961b0c5df19c6\"" Feb 13 19:02:59.358535 containerd[1954]: time="2025-02-13T19:02:59.357292661Z" level=info msg="StartContainer for \"a5144e026665b6d44e6e8f85e6f48481a8196f530f700136c70961b0c5df19c6\"" Feb 13 19:02:59.363683 containerd[1954]: time="2025-02-13T19:02:59.363621833Z" level=info msg="CreateContainer within sandbox 
\"9b123823a2a53d615fa241bd8a3f943abcfd0fea5cfbaa4c107efc5bc09704af\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d530aa4cfd9ca306b8d1c3182a060e4f54f339e577376233f32857dae28c4f98\"" Feb 13 19:02:59.364846 containerd[1954]: time="2025-02-13T19:02:59.364764161Z" level=info msg="CreateContainer within sandbox \"fe3c0eb543094f8ed5ff11f41df3a1f0b230642ee1285cbbc72170bc411c0472\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0668d3e7662749a35d891830b2ab5cdeb9cf74478d149a9792e8fd825510e3e6\"" Feb 13 19:02:59.365372 containerd[1954]: time="2025-02-13T19:02:59.365334845Z" level=info msg="StartContainer for \"d530aa4cfd9ca306b8d1c3182a060e4f54f339e577376233f32857dae28c4f98\"" Feb 13 19:02:59.365884 containerd[1954]: time="2025-02-13T19:02:59.365402081Z" level=info msg="StartContainer for \"0668d3e7662749a35d891830b2ab5cdeb9cf74478d149a9792e8fd825510e3e6\"" Feb 13 19:02:59.430280 systemd[1]: Started cri-containerd-a5144e026665b6d44e6e8f85e6f48481a8196f530f700136c70961b0c5df19c6.scope - libcontainer container a5144e026665b6d44e6e8f85e6f48481a8196f530f700136c70961b0c5df19c6. Feb 13 19:02:59.453834 systemd[1]: Started cri-containerd-0668d3e7662749a35d891830b2ab5cdeb9cf74478d149a9792e8fd825510e3e6.scope - libcontainer container 0668d3e7662749a35d891830b2ab5cdeb9cf74478d149a9792e8fd825510e3e6. Feb 13 19:02:59.457810 systemd[1]: Started cri-containerd-d530aa4cfd9ca306b8d1c3182a060e4f54f339e577376233f32857dae28c4f98.scope - libcontainer container d530aa4cfd9ca306b8d1c3182a060e4f54f339e577376233f32857dae28c4f98. Feb 13 19:02:59.564351 kubelet[2865]: I0213 19:02:59.564202 2865 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-18-242" Feb 13 19:02:59.565429 kubelet[2865]: E0213 19:02:59.565308 2865 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.18.242:6443/api/v1/nodes\": dial tcp 172.31.18.242:6443: connect: connection refused" node="ip-172-31-18-242" Feb 13 19:02:59.580396 containerd[1954]: time="2025-02-13T19:02:59.580329726Z" level=info msg="StartContainer for \"a5144e026665b6d44e6e8f85e6f48481a8196f530f700136c70961b0c5df19c6\" returns successfully" Feb 13 19:02:59.592789 containerd[1954]: time="2025-02-13T19:02:59.590465370Z" level=info msg="StartContainer for \"d530aa4cfd9ca306b8d1c3182a060e4f54f339e577376233f32857dae28c4f98\" returns successfully" Feb 13 19:02:59.596356 containerd[1954]: time="2025-02-13T19:02:59.596178558Z" level=info msg="StartContainer for \"0668d3e7662749a35d891830b2ab5cdeb9cf74478d149a9792e8fd825510e3e6\" returns successfully" Feb 13 19:03:00.003555 kubelet[2865]: E0213 19:03:00.003484 2865 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-242\" not found" node="ip-172-31-18-242" Feb 13 19:03:00.006927 kubelet[2865]: E0213 19:03:00.006875 2865 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-242\" not found" node="ip-172-31-18-242" Feb 13 19:03:00.013883 kubelet[2865]: E0213 19:03:00.013835 2865 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-242\" not found" node="ip-172-31-18-242" Feb 13 19:03:01.015526 kubelet[2865]: E0213 19:03:01.015438 2865 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-242\" not found" node="ip-172-31-18-242" Feb 13 
19:03:01.016167 kubelet[2865]: E0213 19:03:01.015999 2865 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-242\" not found" node="ip-172-31-18-242" Feb 13 19:03:01.168417 kubelet[2865]: I0213 19:03:01.168371 2865 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-18-242" Feb 13 19:03:01.287843 kubelet[2865]: E0213 19:03:01.287369 2865 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-242\" not found" node="ip-172-31-18-242" Feb 13 19:03:02.019114 kubelet[2865]: E0213 19:03:02.019054 2865 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-242\" not found" node="ip-172-31-18-242" Feb 13 19:03:02.221562 update_engine[1934]: I20250213 19:03:02.220558 1934 update_attempter.cc:509] Updating boot flags... Feb 13 19:03:02.360576 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3153) Feb 13 19:03:02.893543 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3156) Feb 13 19:03:03.364630 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3156) Feb 13 19:03:04.699780 kubelet[2865]: E0213 19:03:04.699722 2865 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-18-242\" not found" node="ip-172-31-18-242" Feb 13 19:03:04.824936 kubelet[2865]: E0213 19:03:04.824779 2865 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-18-242.1823d9d23340f2a2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-242,UID:ip-172-31-18-242,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-242,},FirstTimestamp:2025-02-13 19:02:57.92139741 +0000 UTC m=+0.957080646,LastTimestamp:2025-02-13 19:02:57.92139741 +0000 UTC m=+0.957080646,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-242,}" Feb 13 19:03:04.867863 kubelet[2865]: I0213 19:03:04.867721 2865 kubelet_node_status.go:79] "Successfully registered node" node="ip-172-31-18-242" Feb 13 19:03:04.867863 kubelet[2865]: E0213 19:03:04.867787 2865 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"ip-172-31-18-242\": node \"ip-172-31-18-242\" not found" Feb 13 19:03:04.907529 kubelet[2865]: E0213 19:03:04.907335 2865 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-18-242.1823d9d235da4ece default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-242,UID:ip-172-31-18-242,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-18-242,},FirstTimestamp:2025-02-13 19:02:57.965002446 +0000 UTC m=+1.000685682,LastTimestamp:2025-02-13 19:02:57.965002446 +0000 UTC m=+1.000685682,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-242,}" Feb 13 19:03:04.914647 kubelet[2865]: I0213 19:03:04.914576 2865 apiserver.go:52] "Watching apiserver" Feb 13 19:03:04.944465 kubelet[2865]: I0213 19:03:04.944407 2865 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-18-242" Feb 13 19:03:04.946674 kubelet[2865]: I0213 19:03:04.946587 2865 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:03:04.969369 kubelet[2865]: E0213 19:03:04.968751 2865 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-18-242\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-18-242" Feb 13 19:03:04.969369 kubelet[2865]: I0213 19:03:04.968801 2865 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-18-242" Feb 13 19:03:04.978847 kubelet[2865]: E0213 19:03:04.978763 2865 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-18-242\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-18-242" Feb 13 19:03:04.978847 kubelet[2865]: I0213 19:03:04.978810 2865 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-18-242" Feb 13 19:03:04.984568 kubelet[2865]: E0213 19:03:04.984357 2865 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-18-242.1823d9d2379c217a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-242,UID:ip-172-31-18-242,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-172-31-18-242 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-172-31-18-242,},FirstTimestamp:2025-02-13 19:02:57.994482042 +0000 UTC m=+1.030165230,LastTimestamp:2025-02-13 19:02:57.994482042 +0000 UTC m=+1.030165230,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-242,}" Feb 13 19:03:04.984861 kubelet[2865]: E0213 19:03:04.984810 2865 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-18-242\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-18-242" Feb 13 19:03:05.043053 kubelet[2865]: E0213 19:03:05.042907 2865 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-18-242.1823d9d2379cae26 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-242,UID:ip-172-31-18-242,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-172-31-18-242 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-172-31-18-242,},FirstTimestamp:2025-02-13 19:02:57.994518054 +0000 UTC m=+1.030201242,LastTimestamp:2025-02-13 19:02:57.994518054 +0000 UTC m=+1.030201242,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-242,}" Feb 13 19:03:06.933906 systemd[1]: Reload requested from 
client PID 3409 ('systemctl') (unit session-9.scope)... Feb 13 19:03:06.933937 systemd[1]: Reloading... Feb 13 19:03:07.119660 zram_generator::config[3457]: No configuration found. Feb 13 19:03:07.355091 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:03:07.617468 systemd[1]: Reloading finished in 682 ms. Feb 13 19:03:07.662128 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:03:07.678249 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:03:07.678810 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:03:07.678945 systemd[1]: kubelet.service: Consumed 1.741s CPU time, 122.7M memory peak. Feb 13 19:03:07.692979 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:03:08.018793 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:03:08.028119 (kubelet)[3514]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:03:08.111636 kubelet[3514]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:03:08.113339 kubelet[3514]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 19:03:08.113339 kubelet[3514]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:03:08.113339 kubelet[3514]: I0213 19:03:08.112416 3514 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:03:08.129527 kubelet[3514]: I0213 19:03:08.127074 3514 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 19:03:08.129527 kubelet[3514]: I0213 19:03:08.127119 3514 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:03:08.129527 kubelet[3514]: I0213 19:03:08.127641 3514 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 19:03:08.133026 kubelet[3514]: I0213 19:03:08.132990 3514 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 19:03:08.140073 kubelet[3514]: I0213 19:03:08.139710 3514 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:03:08.147899 kubelet[3514]: E0213 19:03:08.147810 3514 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:03:08.147899 kubelet[3514]: I0213 19:03:08.147886 3514 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:03:08.156025 kubelet[3514]: I0213 19:03:08.155823 3514 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:03:08.156786 kubelet[3514]: I0213 19:03:08.156471 3514 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:03:08.157227 kubelet[3514]: I0213 19:03:08.156578 3514 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-242","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:03:08.157227 kubelet[3514]: I0213 19:03:08.157069 3514 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:03:08.157227 kubelet[3514]: I0213 19:03:08.157089 3514 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 19:03:08.157227 kubelet[3514]: I0213 19:03:08.157168 3514 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:03:08.158697 kubelet[3514]: I0213 19:03:08.157847 3514 kubelet.go:446] "Attempting to sync node with API server" Feb 13 19:03:08.158882 kubelet[3514]: I0213 19:03:08.158849 3514 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:03:08.159055 kubelet[3514]: I0213 19:03:08.159017 3514 kubelet.go:352] "Adding apiserver pod source" Feb 13 19:03:08.159210 kubelet[3514]: I0213 19:03:08.159190 3514 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:03:08.171039 sudo[3528]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 19:03:08.173814 sudo[3528]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 19:03:08.208463 kubelet[3514]: I0213 19:03:08.206111 3514 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:03:08.208463 kubelet[3514]: I0213 19:03:08.207037 3514 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:03:08.211409 kubelet[3514]: I0213 19:03:08.211360 3514 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 19:03:08.212212 kubelet[3514]: I0213 19:03:08.212066 3514 
server.go:1287] "Started kubelet" Feb 13 19:03:08.219551 kubelet[3514]: I0213 19:03:08.216463 3514 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:03:08.219551 kubelet[3514]: I0213 19:03:08.216947 3514 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:03:08.219551 kubelet[3514]: I0213 19:03:08.217036 3514 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:03:08.224459 kubelet[3514]: I0213 19:03:08.224422 3514 server.go:490] "Adding debug handlers to kubelet server" Feb 13 19:03:08.227968 kubelet[3514]: I0213 19:03:08.224913 3514 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:03:08.235107 kubelet[3514]: I0213 19:03:08.225216 3514 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:03:08.237332 kubelet[3514]: I0213 19:03:08.236967 3514 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 19:03:08.238176 kubelet[3514]: I0213 19:03:08.238147 3514 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:03:08.238642 kubelet[3514]: I0213 19:03:08.238617 3514 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:03:08.241289 kubelet[3514]: I0213 19:03:08.241233 3514 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:03:08.242713 kubelet[3514]: I0213 19:03:08.242623 3514 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:03:08.252187 kubelet[3514]: E0213 19:03:08.252146 3514 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:03:08.253920 kubelet[3514]: I0213 19:03:08.253864 3514 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:03:08.297368 kubelet[3514]: I0213 19:03:08.295699 3514 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:03:08.306968 kubelet[3514]: I0213 19:03:08.306395 3514 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:03:08.306968 kubelet[3514]: I0213 19:03:08.306439 3514 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 19:03:08.306968 kubelet[3514]: I0213 19:03:08.306471 3514 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
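The "Systemd watchdog is not enabled" line above reflects the sd_notify protocol: systemd exports a watchdog interval to the service only when the unit sets WatchdogSec=, and keep-alive pings are datagrams sent to NOTIFY_SOCKET. A minimal sketch of that check and a single ping, under the assumption that the standard environment variables are the only signaling involved; this is not kubelet's watchdog code:

```go
package main

import (
	"fmt"
	"net"
	"os"
)

func main() {
	// WATCHDOG_USEC is only set when the unit configures WatchdogSec=;
	// kubelet's check above found it absent, hence "watchdog is not enabled".
	if os.Getenv("WATCHDOG_USEC") == "" {
		fmt.Println("systemd watchdog not enabled; skipping health pings")
		return
	}
	addr := &net.UnixAddr{Name: os.Getenv("NOTIFY_SOCKET"), Net: "unixgram"}
	conn, err := net.DialUnix("unixgram", nil, addr)
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	// One keep-alive ping; a real service repeats this well within the interval.
	if _, err := conn.Write([]byte("WATCHDOG=1")); err != nil {
		panic(err)
	}
}
```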
Feb 13 19:03:08.306968 kubelet[3514]: I0213 19:03:08.306486 3514 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 19:03:08.306968 kubelet[3514]: E0213 19:03:08.306591 3514 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:03:08.406804 kubelet[3514]: E0213 19:03:08.406749 3514 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:03:08.425535 kubelet[3514]: I0213 19:03:08.422556 3514 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 19:03:08.425535 kubelet[3514]: I0213 19:03:08.422586 3514 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 19:03:08.425535 kubelet[3514]: I0213 19:03:08.422620 3514 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:03:08.425535 kubelet[3514]: I0213 19:03:08.422877 3514 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 19:03:08.425535 kubelet[3514]: I0213 19:03:08.422898 3514 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 19:03:08.425535 kubelet[3514]: I0213 19:03:08.422934 3514 policy_none.go:49] "None policy: Start" Feb 13 19:03:08.425535 kubelet[3514]: I0213 19:03:08.422951 3514 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 19:03:08.425535 kubelet[3514]: I0213 19:03:08.422970 3514 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:03:08.425535 kubelet[3514]: I0213 19:03:08.423164 3514 state_mem.go:75] "Updated machine memory state" Feb 13 19:03:08.443284 kubelet[3514]: I0213 19:03:08.443248 3514 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:03:08.445424 kubelet[3514]: I0213 19:03:08.445297 3514 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:03:08.446861 kubelet[3514]: I0213 19:03:08.445641 3514 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:03:08.447961 kubelet[3514]: I0213 19:03:08.447472 3514 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:03:08.451589 kubelet[3514]: E0213 19:03:08.450223 3514 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Feb 13 19:03:08.564976 kubelet[3514]: I0213 19:03:08.564842 3514 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-18-242" Feb 13 19:03:08.587341 kubelet[3514]: I0213 19:03:08.587206 3514 kubelet_node_status.go:125] "Node was previously registered" node="ip-172-31-18-242" Feb 13 19:03:08.588537 kubelet[3514]: I0213 19:03:08.588472 3514 kubelet_node_status.go:79] "Successfully registered node" node="ip-172-31-18-242" Feb 13 19:03:08.607960 kubelet[3514]: I0213 19:03:08.607923 3514 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-18-242" Feb 13 19:03:08.609203 kubelet[3514]: I0213 19:03:08.608657 3514 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-18-242" Feb 13 19:03:08.609918 kubelet[3514]: I0213 19:03:08.608864 3514 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-18-242" Feb 13 19:03:08.645208 kubelet[3514]: I0213 19:03:08.644690 3514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0f040acd057fce887e2943e19a1ab1ff-ca-certs\") pod \"kube-apiserver-ip-172-31-18-242\" (UID: \"0f040acd057fce887e2943e19a1ab1ff\") " pod="kube-system/kube-apiserver-ip-172-31-18-242" Feb 13 19:03:08.645208 kubelet[3514]: I0213 19:03:08.644751 3514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0f040acd057fce887e2943e19a1ab1ff-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-242\" (UID: \"0f040acd057fce887e2943e19a1ab1ff\") " pod="kube-system/kube-apiserver-ip-172-31-18-242" Feb 13 19:03:08.645208 kubelet[3514]: I0213 19:03:08.644789 3514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/158d34cd1a5e876b8151ae16b3273950-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-242\" (UID: \"158d34cd1a5e876b8151ae16b3273950\") " pod="kube-system/kube-controller-manager-ip-172-31-18-242" Feb 13 19:03:08.645208 kubelet[3514]: I0213 19:03:08.644829 3514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4d77e00a962d6b20d381a540f78a2fdd-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-242\" (UID: \"4d77e00a962d6b20d381a540f78a2fdd\") " pod="kube-system/kube-scheduler-ip-172-31-18-242" Feb 13 19:03:08.645208 kubelet[3514]: I0213 19:03:08.644871 3514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0f040acd057fce887e2943e19a1ab1ff-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-242\" (UID: \"0f040acd057fce887e2943e19a1ab1ff\") " pod="kube-system/kube-apiserver-ip-172-31-18-242" Feb 13 19:03:08.646382 kubelet[3514]: I0213 19:03:08.644913 3514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/158d34cd1a5e876b8151ae16b3273950-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-242\" (UID: \"158d34cd1a5e876b8151ae16b3273950\") " pod="kube-system/kube-controller-manager-ip-172-31-18-242" Feb 13 19:03:08.646382 kubelet[3514]: I0213 19:03:08.644981 3514 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/158d34cd1a5e876b8151ae16b3273950-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-242\" (UID: \"158d34cd1a5e876b8151ae16b3273950\") " pod="kube-system/kube-controller-manager-ip-172-31-18-242" Feb 13 19:03:08.646382 kubelet[3514]: I0213 19:03:08.645018 3514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/158d34cd1a5e876b8151ae16b3273950-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-242\" (UID: \"158d34cd1a5e876b8151ae16b3273950\") " pod="kube-system/kube-controller-manager-ip-172-31-18-242" Feb 13 19:03:08.646382 kubelet[3514]: I0213 19:03:08.645055 3514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/158d34cd1a5e876b8151ae16b3273950-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-242\" (UID: \"158d34cd1a5e876b8151ae16b3273950\") " pod="kube-system/kube-controller-manager-ip-172-31-18-242" Feb 13 19:03:09.114801 sudo[3528]: pam_unix(sudo:session): session closed for user root Feb 13 19:03:09.164990 kubelet[3514]: I0213 19:03:09.164914 3514 apiserver.go:52] "Watching apiserver" Feb 13 19:03:09.242311 kubelet[3514]: I0213 19:03:09.238694 3514 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:03:09.385964 kubelet[3514]: I0213 19:03:09.385832 3514 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-18-242" Feb 13 19:03:09.401796 kubelet[3514]: E0213 19:03:09.401735 3514 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-18-242\" already exists" pod="kube-system/kube-apiserver-ip-172-31-18-242" Feb 13 19:03:09.424322 kubelet[3514]: I0213 19:03:09.424211 3514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-18-242" podStartSLOduration=1.423745935 podStartE2EDuration="1.423745935s" podCreationTimestamp="2025-02-13 19:03:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:03:09.423638523 +0000 UTC m=+1.388473136" watchObservedRunningTime="2025-02-13 19:03:09.423745935 +0000 UTC m=+1.388580536" Feb 13 19:03:09.457640 kubelet[3514]: I0213 19:03:09.457560 3514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-18-242" podStartSLOduration=1.457537479 podStartE2EDuration="1.457537479s" podCreationTimestamp="2025-02-13 19:03:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:03:09.439674687 +0000 UTC m=+1.404509336" watchObservedRunningTime="2025-02-13 19:03:09.457537479 +0000 UTC m=+1.422372104" Feb 13 19:03:09.480790 kubelet[3514]: I0213 19:03:09.480700 3514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-18-242" podStartSLOduration=1.480679384 podStartE2EDuration="1.480679384s" podCreationTimestamp="2025-02-13 19:03:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:03:09.461624235 +0000 UTC 
m=+1.426458872" watchObservedRunningTime="2025-02-13 19:03:09.480679384 +0000 UTC m=+1.445514009" Feb 13 19:03:11.783873 sudo[2310]: pam_unix(sudo:session): session closed for user root Feb 13 19:03:11.807140 sshd[2309]: Connection closed by 139.178.89.65 port 37230 Feb 13 19:03:11.808012 sshd-session[2307]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:11.814570 systemd[1]: sshd@8-172.31.18.242:22-139.178.89.65:37230.service: Deactivated successfully. Feb 13 19:03:11.821292 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 19:03:11.822108 systemd[1]: session-9.scope: Consumed 11.808s CPU time, 262.7M memory peak. Feb 13 19:03:11.825367 systemd-logind[1933]: Session 9 logged out. Waiting for processes to exit. Feb 13 19:03:11.827759 systemd-logind[1933]: Removed session 9. Feb 13 19:03:13.546218 kubelet[3514]: I0213 19:03:13.546098 3514 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 19:03:13.549658 containerd[1954]: time="2025-02-13T19:03:13.548635052Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 19:03:13.550242 kubelet[3514]: I0213 19:03:13.549226 3514 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 19:03:14.500617 systemd[1]: Created slice kubepods-besteffort-pod52f966fe_5c13_46a6_a7a3_edf1c9c5e2e9.slice - libcontainer container kubepods-besteffort-pod52f966fe_5c13_46a6_a7a3_edf1c9c5e2e9.slice. Feb 13 19:03:14.532432 systemd[1]: Created slice kubepods-burstable-pode5b0f5ca_3437_4fda_aea0_800c870fc242.slice - libcontainer container kubepods-burstable-pode5b0f5ca_3437_4fda_aea0_800c870fc242.slice. Feb 13 19:03:14.581697 kubelet[3514]: I0213 19:03:14.581511 3514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b74wd\" (UniqueName: \"kubernetes.io/projected/e5b0f5ca-3437-4fda-aea0-800c870fc242-kube-api-access-b74wd\") pod \"cilium-vzbhp\" (UID: \"e5b0f5ca-3437-4fda-aea0-800c870fc242\") " pod="kube-system/cilium-vzbhp" Feb 13 19:03:14.581697 kubelet[3514]: I0213 19:03:14.581583 3514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e5b0f5ca-3437-4fda-aea0-800c870fc242-clustermesh-secrets\") pod \"cilium-vzbhp\" (UID: \"e5b0f5ca-3437-4fda-aea0-800c870fc242\") " pod="kube-system/cilium-vzbhp" Feb 13 19:03:14.581697 kubelet[3514]: I0213 19:03:14.581629 3514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e5b0f5ca-3437-4fda-aea0-800c870fc242-host-proc-sys-kernel\") pod \"cilium-vzbhp\" (UID: \"e5b0f5ca-3437-4fda-aea0-800c870fc242\") " pod="kube-system/cilium-vzbhp" Feb 13 19:03:14.581697 kubelet[3514]: I0213 19:03:14.581693 3514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/52f966fe-5c13-46a6-a7a3-edf1c9c5e2e9-xtables-lock\") pod \"kube-proxy-mlnxk\" (UID: \"52f966fe-5c13-46a6-a7a3-edf1c9c5e2e9\") " pod="kube-system/kube-proxy-mlnxk" Feb 13 19:03:14.582459 kubelet[3514]: I0213 19:03:14.581734 3514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e5b0f5ca-3437-4fda-aea0-800c870fc242-hostproc\") pod \"cilium-vzbhp\" 
(UID: \"e5b0f5ca-3437-4fda-aea0-800c870fc242\") " pod="kube-system/cilium-vzbhp" Feb 13 19:03:14.582459 kubelet[3514]: I0213 19:03:14.581771 3514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e5b0f5ca-3437-4fda-aea0-800c870fc242-cilium-cgroup\") pod \"cilium-vzbhp\" (UID: \"e5b0f5ca-3437-4fda-aea0-800c870fc242\") " pod="kube-system/cilium-vzbhp" Feb 13 19:03:14.582459 kubelet[3514]: I0213 19:03:14.581807 3514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e5b0f5ca-3437-4fda-aea0-800c870fc242-hubble-tls\") pod \"cilium-vzbhp\" (UID: \"e5b0f5ca-3437-4fda-aea0-800c870fc242\") " pod="kube-system/cilium-vzbhp" Feb 13 19:03:14.582459 kubelet[3514]: I0213 19:03:14.581844 3514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e5b0f5ca-3437-4fda-aea0-800c870fc242-cilium-run\") pod \"cilium-vzbhp\" (UID: \"e5b0f5ca-3437-4fda-aea0-800c870fc242\") " pod="kube-system/cilium-vzbhp" Feb 13 19:03:14.582459 kubelet[3514]: I0213 19:03:14.581882 3514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/52f966fe-5c13-46a6-a7a3-edf1c9c5e2e9-lib-modules\") pod \"kube-proxy-mlnxk\" (UID: \"52f966fe-5c13-46a6-a7a3-edf1c9c5e2e9\") " pod="kube-system/kube-proxy-mlnxk" Feb 13 19:03:14.582459 kubelet[3514]: I0213 19:03:14.581949 3514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e5b0f5ca-3437-4fda-aea0-800c870fc242-etc-cni-netd\") pod \"cilium-vzbhp\" (UID: \"e5b0f5ca-3437-4fda-aea0-800c870fc242\") " pod="kube-system/cilium-vzbhp" Feb 13 19:03:14.584043 kubelet[3514]: I0213 19:03:14.581986 3514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e5b0f5ca-3437-4fda-aea0-800c870fc242-lib-modules\") pod \"cilium-vzbhp\" (UID: \"e5b0f5ca-3437-4fda-aea0-800c870fc242\") " pod="kube-system/cilium-vzbhp" Feb 13 19:03:14.584043 kubelet[3514]: I0213 19:03:14.582024 3514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e5b0f5ca-3437-4fda-aea0-800c870fc242-cilium-config-path\") pod \"cilium-vzbhp\" (UID: \"e5b0f5ca-3437-4fda-aea0-800c870fc242\") " pod="kube-system/cilium-vzbhp" Feb 13 19:03:14.584043 kubelet[3514]: I0213 19:03:14.582059 3514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e5b0f5ca-3437-4fda-aea0-800c870fc242-host-proc-sys-net\") pod \"cilium-vzbhp\" (UID: \"e5b0f5ca-3437-4fda-aea0-800c870fc242\") " pod="kube-system/cilium-vzbhp" Feb 13 19:03:14.584043 kubelet[3514]: I0213 19:03:14.582099 3514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e5b0f5ca-3437-4fda-aea0-800c870fc242-bpf-maps\") pod \"cilium-vzbhp\" (UID: \"e5b0f5ca-3437-4fda-aea0-800c870fc242\") " pod="kube-system/cilium-vzbhp" Feb 13 19:03:14.584043 kubelet[3514]: I0213 19:03:14.582151 3514 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e5b0f5ca-3437-4fda-aea0-800c870fc242-xtables-lock\") pod \"cilium-vzbhp\" (UID: \"e5b0f5ca-3437-4fda-aea0-800c870fc242\") " pod="kube-system/cilium-vzbhp" Feb 13 19:03:14.584043 kubelet[3514]: I0213 19:03:14.582200 3514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/52f966fe-5c13-46a6-a7a3-edf1c9c5e2e9-kube-proxy\") pod \"kube-proxy-mlnxk\" (UID: \"52f966fe-5c13-46a6-a7a3-edf1c9c5e2e9\") " pod="kube-system/kube-proxy-mlnxk" Feb 13 19:03:14.584349 kubelet[3514]: I0213 19:03:14.582236 3514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcxrj\" (UniqueName: \"kubernetes.io/projected/52f966fe-5c13-46a6-a7a3-edf1c9c5e2e9-kube-api-access-jcxrj\") pod \"kube-proxy-mlnxk\" (UID: \"52f966fe-5c13-46a6-a7a3-edf1c9c5e2e9\") " pod="kube-system/kube-proxy-mlnxk" Feb 13 19:03:14.584349 kubelet[3514]: I0213 19:03:14.582276 3514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e5b0f5ca-3437-4fda-aea0-800c870fc242-cni-path\") pod \"cilium-vzbhp\" (UID: \"e5b0f5ca-3437-4fda-aea0-800c870fc242\") " pod="kube-system/cilium-vzbhp" Feb 13 19:03:14.632658 systemd[1]: Created slice kubepods-besteffort-pod0d0ec3f6_0dd1_4d52_b4c4_ce2056d6e24b.slice - libcontainer container kubepods-besteffort-pod0d0ec3f6_0dd1_4d52_b4c4_ce2056d6e24b.slice. Feb 13 19:03:14.682788 kubelet[3514]: I0213 19:03:14.682646 3514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gblfj\" (UniqueName: \"kubernetes.io/projected/0d0ec3f6-0dd1-4d52-b4c4-ce2056d6e24b-kube-api-access-gblfj\") pod \"cilium-operator-6c4d7847fc-kpvxk\" (UID: \"0d0ec3f6-0dd1-4d52-b4c4-ce2056d6e24b\") " pod="kube-system/cilium-operator-6c4d7847fc-kpvxk" Feb 13 19:03:14.688300 kubelet[3514]: I0213 19:03:14.684842 3514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0d0ec3f6-0dd1-4d52-b4c4-ce2056d6e24b-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-kpvxk\" (UID: \"0d0ec3f6-0dd1-4d52-b4c4-ce2056d6e24b\") " pod="kube-system/cilium-operator-6c4d7847fc-kpvxk" Feb 13 19:03:14.819333 containerd[1954]: time="2025-02-13T19:03:14.818599906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mlnxk,Uid:52f966fe-5c13-46a6-a7a3-edf1c9c5e2e9,Namespace:kube-system,Attempt:0,}" Feb 13 19:03:14.840482 containerd[1954]: time="2025-02-13T19:03:14.840428338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vzbhp,Uid:e5b0f5ca-3437-4fda-aea0-800c870fc242,Namespace:kube-system,Attempt:0,}" Feb 13 19:03:14.858015 containerd[1954]: time="2025-02-13T19:03:14.857630842Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:03:14.858015 containerd[1954]: time="2025-02-13T19:03:14.857745298Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:03:14.858015 containerd[1954]: time="2025-02-13T19:03:14.857782846Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:03:14.859314 containerd[1954]: time="2025-02-13T19:03:14.858663094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:03:14.896942 systemd[1]: Started cri-containerd-e88bef65eb8ba1d18e6c12a9a1029107df02928089472b1a014d249d3e7fbce9.scope - libcontainer container e88bef65eb8ba1d18e6c12a9a1029107df02928089472b1a014d249d3e7fbce9. Feb 13 19:03:14.899459 containerd[1954]: time="2025-02-13T19:03:14.898631062Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:03:14.899696 containerd[1954]: time="2025-02-13T19:03:14.899433694Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:03:14.899696 containerd[1954]: time="2025-02-13T19:03:14.899488906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:03:14.901577 containerd[1954]: time="2025-02-13T19:03:14.901125778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:03:14.941928 containerd[1954]: time="2025-02-13T19:03:14.941875823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-kpvxk,Uid:0d0ec3f6-0dd1-4d52-b4c4-ce2056d6e24b,Namespace:kube-system,Attempt:0,}" Feb 13 19:03:14.951699 systemd[1]: Started cri-containerd-e1bf4fae499833cced2a99c104deb1d04571536c2e1c740c22c29468f9ac546b.scope - libcontainer container e1bf4fae499833cced2a99c104deb1d04571536c2e1c740c22c29468f9ac546b. Feb 13 19:03:14.975041 containerd[1954]: time="2025-02-13T19:03:14.974984639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mlnxk,Uid:52f966fe-5c13-46a6-a7a3-edf1c9c5e2e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"e88bef65eb8ba1d18e6c12a9a1029107df02928089472b1a014d249d3e7fbce9\"" Feb 13 19:03:14.983715 containerd[1954]: time="2025-02-13T19:03:14.983139059Z" level=info msg="CreateContainer within sandbox \"e88bef65eb8ba1d18e6c12a9a1029107df02928089472b1a014d249d3e7fbce9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:03:15.026233 containerd[1954]: time="2025-02-13T19:03:15.026162995Z" level=info msg="CreateContainer within sandbox \"e88bef65eb8ba1d18e6c12a9a1029107df02928089472b1a014d249d3e7fbce9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"43b0313d19838fa984dedc73b971123b5f160c831d419f9cad93931dc51645c9\"" Feb 13 19:03:15.027695 containerd[1954]: time="2025-02-13T19:03:15.027645943Z" level=info msg="StartContainer for \"43b0313d19838fa984dedc73b971123b5f160c831d419f9cad93931dc51645c9\"" Feb 13 19:03:15.031451 containerd[1954]: time="2025-02-13T19:03:15.029778151Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:03:15.031451 containerd[1954]: time="2025-02-13T19:03:15.030417115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:03:15.031451 containerd[1954]: time="2025-02-13T19:03:15.030671467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:03:15.033241 containerd[1954]: time="2025-02-13T19:03:15.033024127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:03:15.038760 containerd[1954]: time="2025-02-13T19:03:15.038693851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vzbhp,Uid:e5b0f5ca-3437-4fda-aea0-800c870fc242,Namespace:kube-system,Attempt:0,} returns sandbox id \"e1bf4fae499833cced2a99c104deb1d04571536c2e1c740c22c29468f9ac546b\"" Feb 13 19:03:15.045024 containerd[1954]: time="2025-02-13T19:03:15.044967895Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 19:03:15.097837 systemd[1]: Started cri-containerd-870e4888ec287653a690ac276677dd546169f220eb225647230f0412d93190d4.scope - libcontainer container 870e4888ec287653a690ac276677dd546169f220eb225647230f0412d93190d4. Feb 13 19:03:15.115814 systemd[1]: Started cri-containerd-43b0313d19838fa984dedc73b971123b5f160c831d419f9cad93931dc51645c9.scope - libcontainer container 43b0313d19838fa984dedc73b971123b5f160c831d419f9cad93931dc51645c9. Feb 13 19:03:15.211932 containerd[1954]: time="2025-02-13T19:03:15.211711388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-kpvxk,Uid:0d0ec3f6-0dd1-4d52-b4c4-ce2056d6e24b,Namespace:kube-system,Attempt:0,} returns sandbox id \"870e4888ec287653a690ac276677dd546169f220eb225647230f0412d93190d4\"" Feb 13 19:03:15.225883 containerd[1954]: time="2025-02-13T19:03:15.225714488Z" level=info msg="StartContainer for \"43b0313d19838fa984dedc73b971123b5f160c831d419f9cad93931dc51645c9\" returns successfully" Feb 13 19:03:18.346751 kubelet[3514]: I0213 19:03:18.345990 3514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mlnxk" podStartSLOduration=4.34596714 podStartE2EDuration="4.34596714s" podCreationTimestamp="2025-02-13 19:03:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:03:15.448449897 +0000 UTC m=+7.413284534" watchObservedRunningTime="2025-02-13 19:03:18.34596714 +0000 UTC m=+10.310801789" Feb 13 19:03:21.588131 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2631471750.mount: Deactivated successfully. 
Feb 13 19:03:23.984988 containerd[1954]: time="2025-02-13T19:03:23.984926144Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:03:23.986533 containerd[1954]: time="2025-02-13T19:03:23.986413772Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Feb 13 19:03:23.989574 containerd[1954]: time="2025-02-13T19:03:23.989523476Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:03:23.992487 containerd[1954]: time="2025-02-13T19:03:23.992028464Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.946993405s" Feb 13 19:03:23.992487 containerd[1954]: time="2025-02-13T19:03:23.992084612Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 13 19:03:23.994340 containerd[1954]: time="2025-02-13T19:03:23.994286636Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 19:03:23.998030 containerd[1954]: time="2025-02-13T19:03:23.997818668Z" level=info msg="CreateContainer within sandbox \"e1bf4fae499833cced2a99c104deb1d04571536c2e1c740c22c29468f9ac546b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 19:03:24.016065 containerd[1954]: time="2025-02-13T19:03:24.015927412Z" level=info msg="CreateContainer within sandbox \"e1bf4fae499833cced2a99c104deb1d04571536c2e1c740c22c29468f9ac546b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"85d3eca1da98a2e6faa11d5f9c9f9fd87dbcf850e7c3b318f9822b4e33216cec\"" Feb 13 19:03:24.019099 containerd[1954]: time="2025-02-13T19:03:24.017734780Z" level=info msg="StartContainer for \"85d3eca1da98a2e6faa11d5f9c9f9fd87dbcf850e7c3b318f9822b4e33216cec\"" Feb 13 19:03:24.080930 systemd[1]: Started cri-containerd-85d3eca1da98a2e6faa11d5f9c9f9fd87dbcf850e7c3b318f9822b4e33216cec.scope - libcontainer container 85d3eca1da98a2e6faa11d5f9c9f9fd87dbcf850e7c3b318f9822b4e33216cec. Feb 13 19:03:24.133964 containerd[1954]: time="2025-02-13T19:03:24.133893412Z" level=info msg="StartContainer for \"85d3eca1da98a2e6faa11d5f9c9f9fd87dbcf850e7c3b318f9822b4e33216cec\" returns successfully" Feb 13 19:03:24.156372 systemd[1]: cri-containerd-85d3eca1da98a2e6faa11d5f9c9f9fd87dbcf850e7c3b318f9822b4e33216cec.scope: Deactivated successfully. Feb 13 19:03:24.195050 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-85d3eca1da98a2e6faa11d5f9c9f9fd87dbcf850e7c3b318f9822b4e33216cec-rootfs.mount: Deactivated successfully. 
Feb 13 19:03:25.461031 containerd[1954]: time="2025-02-13T19:03:25.460949983Z" level=info msg="shim disconnected" id=85d3eca1da98a2e6faa11d5f9c9f9fd87dbcf850e7c3b318f9822b4e33216cec namespace=k8s.io Feb 13 19:03:25.461031 containerd[1954]: time="2025-02-13T19:03:25.461024695Z" level=warning msg="cleaning up after shim disconnected" id=85d3eca1da98a2e6faa11d5f9c9f9fd87dbcf850e7c3b318f9822b4e33216cec namespace=k8s.io Feb 13 19:03:25.461790 containerd[1954]: time="2025-02-13T19:03:25.461045371Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:03:26.469028 containerd[1954]: time="2025-02-13T19:03:26.468951908Z" level=info msg="CreateContainer within sandbox \"e1bf4fae499833cced2a99c104deb1d04571536c2e1c740c22c29468f9ac546b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 19:03:26.500605 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount945113864.mount: Deactivated successfully. Feb 13 19:03:26.504833 containerd[1954]: time="2025-02-13T19:03:26.501327008Z" level=info msg="CreateContainer within sandbox \"e1bf4fae499833cced2a99c104deb1d04571536c2e1c740c22c29468f9ac546b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a406f243f1041a8b9e2dcf8a4538dbbc880070daca1081d286f2dc4382721450\"" Feb 13 19:03:26.507182 containerd[1954]: time="2025-02-13T19:03:26.507130808Z" level=info msg="StartContainer for \"a406f243f1041a8b9e2dcf8a4538dbbc880070daca1081d286f2dc4382721450\"" Feb 13 19:03:26.566781 systemd[1]: Started cri-containerd-a406f243f1041a8b9e2dcf8a4538dbbc880070daca1081d286f2dc4382721450.scope - libcontainer container a406f243f1041a8b9e2dcf8a4538dbbc880070daca1081d286f2dc4382721450. Feb 13 19:03:26.616638 containerd[1954]: time="2025-02-13T19:03:26.616455249Z" level=info msg="StartContainer for \"a406f243f1041a8b9e2dcf8a4538dbbc880070daca1081d286f2dc4382721450\" returns successfully" Feb 13 19:03:26.648085 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:03:26.648860 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:03:26.649278 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:03:26.657346 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:03:26.664634 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 13 19:03:26.666415 systemd[1]: cri-containerd-a406f243f1041a8b9e2dcf8a4538dbbc880070daca1081d286f2dc4382721450.scope: Deactivated successfully. Feb 13 19:03:26.700786 containerd[1954]: time="2025-02-13T19:03:26.700666917Z" level=info msg="shim disconnected" id=a406f243f1041a8b9e2dcf8a4538dbbc880070daca1081d286f2dc4382721450 namespace=k8s.io Feb 13 19:03:26.701616 containerd[1954]: time="2025-02-13T19:03:26.700801473Z" level=warning msg="cleaning up after shim disconnected" id=a406f243f1041a8b9e2dcf8a4538dbbc880070daca1081d286f2dc4382721450 namespace=k8s.io Feb 13 19:03:26.701616 containerd[1954]: time="2025-02-13T19:03:26.700827825Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:03:26.709040 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Feb 13 19:03:26.726355 containerd[1954]: time="2025-02-13T19:03:26.725669697Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:03:26Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 19:03:27.489220 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a406f243f1041a8b9e2dcf8a4538dbbc880070daca1081d286f2dc4382721450-rootfs.mount: Deactivated successfully. Feb 13 19:03:27.500277 containerd[1954]: time="2025-02-13T19:03:27.499346841Z" level=info msg="CreateContainer within sandbox \"e1bf4fae499833cced2a99c104deb1d04571536c2e1c740c22c29468f9ac546b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 19:03:27.588280 containerd[1954]: time="2025-02-13T19:03:27.588195669Z" level=info msg="CreateContainer within sandbox \"e1bf4fae499833cced2a99c104deb1d04571536c2e1c740c22c29468f9ac546b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0e04be7c32f998f663f109efc4a1f6aa715c77affd9a63041f3c232f7b9dc992\"" Feb 13 19:03:27.591532 containerd[1954]: time="2025-02-13T19:03:27.590870661Z" level=info msg="StartContainer for \"0e04be7c32f998f663f109efc4a1f6aa715c77affd9a63041f3c232f7b9dc992\"" Feb 13 19:03:27.747829 systemd[1]: Started cri-containerd-0e04be7c32f998f663f109efc4a1f6aa715c77affd9a63041f3c232f7b9dc992.scope - libcontainer container 0e04be7c32f998f663f109efc4a1f6aa715c77affd9a63041f3c232f7b9dc992. Feb 13 19:03:27.839726 containerd[1954]: time="2025-02-13T19:03:27.839051327Z" level=info msg="StartContainer for \"0e04be7c32f998f663f109efc4a1f6aa715c77affd9a63041f3c232f7b9dc992\" returns successfully" Feb 13 19:03:27.844382 systemd[1]: cri-containerd-0e04be7c32f998f663f109efc4a1f6aa715c77affd9a63041f3c232f7b9dc992.scope: Deactivated successfully. 
Feb 13 19:03:27.927231 containerd[1954]: time="2025-02-13T19:03:27.926928923Z" level=info msg="shim disconnected" id=0e04be7c32f998f663f109efc4a1f6aa715c77affd9a63041f3c232f7b9dc992 namespace=k8s.io Feb 13 19:03:27.927641 containerd[1954]: time="2025-02-13T19:03:27.927224495Z" level=warning msg="cleaning up after shim disconnected" id=0e04be7c32f998f663f109efc4a1f6aa715c77affd9a63041f3c232f7b9dc992 namespace=k8s.io Feb 13 19:03:27.927641 containerd[1954]: time="2025-02-13T19:03:27.927593447Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:03:28.412272 containerd[1954]: time="2025-02-13T19:03:28.412041358Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:03:28.413615 containerd[1954]: time="2025-02-13T19:03:28.413521318Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Feb 13 19:03:28.414000 containerd[1954]: time="2025-02-13T19:03:28.413932342Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:03:28.416924 containerd[1954]: time="2025-02-13T19:03:28.416735122Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 4.422386878s" Feb 13 19:03:28.416924 containerd[1954]: time="2025-02-13T19:03:28.416788786Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 13 19:03:28.421888 containerd[1954]: time="2025-02-13T19:03:28.421712422Z" level=info msg="CreateContainer within sandbox \"870e4888ec287653a690ac276677dd546169f220eb225647230f0412d93190d4\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 19:03:28.449282 containerd[1954]: time="2025-02-13T19:03:28.449198830Z" level=info msg="CreateContainer within sandbox \"870e4888ec287653a690ac276677dd546169f220eb225647230f0412d93190d4\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c34f440694a7f692b45ffc89bdb7523796898b2afe4874057414be23bd1bceb6\"" Feb 13 19:03:28.450055 containerd[1954]: time="2025-02-13T19:03:28.449984386Z" level=info msg="StartContainer for \"c34f440694a7f692b45ffc89bdb7523796898b2afe4874057414be23bd1bceb6\"" Feb 13 19:03:28.490484 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1422350889.mount: Deactivated successfully. 
Feb 13 19:03:28.513218 containerd[1954]: time="2025-02-13T19:03:28.513016330Z" level=info msg="CreateContainer within sandbox \"e1bf4fae499833cced2a99c104deb1d04571536c2e1c740c22c29468f9ac546b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 19:03:28.529826 systemd[1]: Started cri-containerd-c34f440694a7f692b45ffc89bdb7523796898b2afe4874057414be23bd1bceb6.scope - libcontainer container c34f440694a7f692b45ffc89bdb7523796898b2afe4874057414be23bd1bceb6. Feb 13 19:03:28.565110 containerd[1954]: time="2025-02-13T19:03:28.565033150Z" level=info msg="CreateContainer within sandbox \"e1bf4fae499833cced2a99c104deb1d04571536c2e1c740c22c29468f9ac546b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"02ca31979781439cb1ebdef85cb9cc68d9337844496f583426265acb14d751e1\"" Feb 13 19:03:28.566291 containerd[1954]: time="2025-02-13T19:03:28.566106094Z" level=info msg="StartContainer for \"02ca31979781439cb1ebdef85cb9cc68d9337844496f583426265acb14d751e1\"" Feb 13 19:03:28.640682 systemd[1]: Started cri-containerd-02ca31979781439cb1ebdef85cb9cc68d9337844496f583426265acb14d751e1.scope - libcontainer container 02ca31979781439cb1ebdef85cb9cc68d9337844496f583426265acb14d751e1. Feb 13 19:03:28.658668 containerd[1954]: time="2025-02-13T19:03:28.658353935Z" level=info msg="StartContainer for \"c34f440694a7f692b45ffc89bdb7523796898b2afe4874057414be23bd1bceb6\" returns successfully" Feb 13 19:03:28.702553 systemd[1]: cri-containerd-02ca31979781439cb1ebdef85cb9cc68d9337844496f583426265acb14d751e1.scope: Deactivated successfully. Feb 13 19:03:28.718334 containerd[1954]: time="2025-02-13T19:03:28.718150391Z" level=info msg="StartContainer for \"02ca31979781439cb1ebdef85cb9cc68d9337844496f583426265acb14d751e1\" returns successfully" Feb 13 19:03:28.849781 containerd[1954]: time="2025-02-13T19:03:28.849359796Z" level=info msg="shim disconnected" id=02ca31979781439cb1ebdef85cb9cc68d9337844496f583426265acb14d751e1 namespace=k8s.io Feb 13 19:03:28.849781 containerd[1954]: time="2025-02-13T19:03:28.849466272Z" level=warning msg="cleaning up after shim disconnected" id=02ca31979781439cb1ebdef85cb9cc68d9337844496f583426265acb14d751e1 namespace=k8s.io Feb 13 19:03:28.851822 containerd[1954]: time="2025-02-13T19:03:28.849515988Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:03:29.491378 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-02ca31979781439cb1ebdef85cb9cc68d9337844496f583426265acb14d751e1-rootfs.mount: Deactivated successfully. Feb 13 19:03:29.525131 containerd[1954]: time="2025-02-13T19:03:29.525055799Z" level=info msg="CreateContainer within sandbox \"e1bf4fae499833cced2a99c104deb1d04571536c2e1c740c22c29468f9ac546b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 19:03:29.555118 containerd[1954]: time="2025-02-13T19:03:29.555043175Z" level=info msg="CreateContainer within sandbox \"e1bf4fae499833cced2a99c104deb1d04571536c2e1c740c22c29468f9ac546b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ec6b220468da37771788d554f9a5c6dc390642a47ca4da00726057e5adb9f24b\"" Feb 13 19:03:29.557109 containerd[1954]: time="2025-02-13T19:03:29.557048591Z" level=info msg="StartContainer for \"ec6b220468da37771788d554f9a5c6dc390642a47ca4da00726057e5adb9f24b\"" Feb 13 19:03:29.663839 systemd[1]: Started cri-containerd-ec6b220468da37771788d554f9a5c6dc390642a47ca4da00726057e5adb9f24b.scope - libcontainer container ec6b220468da37771788d554f9a5c6dc390642a47ca4da00726057e5adb9f24b. 
Feb 13 19:03:29.808157 containerd[1954]: time="2025-02-13T19:03:29.807990828Z" level=info msg="StartContainer for \"ec6b220468da37771788d554f9a5c6dc390642a47ca4da00726057e5adb9f24b\" returns successfully" Feb 13 19:03:29.861053 kubelet[3514]: I0213 19:03:29.860835 3514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-kpvxk" podStartSLOduration=2.656268227 podStartE2EDuration="15.860811013s" podCreationTimestamp="2025-02-13 19:03:14 +0000 UTC" firstStartedPulling="2025-02-13 19:03:15.213876428 +0000 UTC m=+7.178711041" lastFinishedPulling="2025-02-13 19:03:28.418419226 +0000 UTC m=+20.383253827" observedRunningTime="2025-02-13 19:03:29.68588952 +0000 UTC m=+21.650724145" watchObservedRunningTime="2025-02-13 19:03:29.860811013 +0000 UTC m=+21.825645638" Feb 13 19:03:30.272577 kubelet[3514]: I0213 19:03:30.270336 3514 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Feb 13 19:03:30.419697 systemd[1]: Created slice kubepods-burstable-pod4fe0b713_4e84_46f7_bbcb_cc6deeeadb84.slice - libcontainer container kubepods-burstable-pod4fe0b713_4e84_46f7_bbcb_cc6deeeadb84.slice. Feb 13 19:03:30.425337 kubelet[3514]: I0213 19:03:30.425295 3514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l4fl\" (UniqueName: \"kubernetes.io/projected/4fe0b713-4e84-46f7-bbcb-cc6deeeadb84-kube-api-access-9l4fl\") pod \"coredns-668d6bf9bc-hzn8p\" (UID: \"4fe0b713-4e84-46f7-bbcb-cc6deeeadb84\") " pod="kube-system/coredns-668d6bf9bc-hzn8p" Feb 13 19:03:30.426806 kubelet[3514]: I0213 19:03:30.426675 3514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4fe0b713-4e84-46f7-bbcb-cc6deeeadb84-config-volume\") pod \"coredns-668d6bf9bc-hzn8p\" (UID: \"4fe0b713-4e84-46f7-bbcb-cc6deeeadb84\") " pod="kube-system/coredns-668d6bf9bc-hzn8p" Feb 13 19:03:30.435821 systemd[1]: Created slice kubepods-burstable-pode1d96c86_6dba_492d_9640_adc956b1cef8.slice - libcontainer container kubepods-burstable-pode1d96c86_6dba_492d_9640_adc956b1cef8.slice. 
Feb 13 19:03:30.527673 kubelet[3514]: I0213 19:03:30.527371 3514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wx9c\" (UniqueName: \"kubernetes.io/projected/e1d96c86-6dba-492d-9640-adc956b1cef8-kube-api-access-8wx9c\") pod \"coredns-668d6bf9bc-cl7z2\" (UID: \"e1d96c86-6dba-492d-9640-adc956b1cef8\") " pod="kube-system/coredns-668d6bf9bc-cl7z2" Feb 13 19:03:30.527673 kubelet[3514]: I0213 19:03:30.527461 3514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e1d96c86-6dba-492d-9640-adc956b1cef8-config-volume\") pod \"coredns-668d6bf9bc-cl7z2\" (UID: \"e1d96c86-6dba-492d-9640-adc956b1cef8\") " pod="kube-system/coredns-668d6bf9bc-cl7z2" Feb 13 19:03:30.575028 kubelet[3514]: I0213 19:03:30.574455 3514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vzbhp" podStartSLOduration=7.623039571 podStartE2EDuration="16.574430568s" podCreationTimestamp="2025-02-13 19:03:14 +0000 UTC" firstStartedPulling="2025-02-13 19:03:15.042024595 +0000 UTC m=+7.006859196" lastFinishedPulling="2025-02-13 19:03:23.993415592 +0000 UTC m=+15.958250193" observedRunningTime="2025-02-13 19:03:30.57431622 +0000 UTC m=+22.539150905" watchObservedRunningTime="2025-02-13 19:03:30.574430568 +0000 UTC m=+22.539265193" Feb 13 19:03:30.734133 containerd[1954]: time="2025-02-13T19:03:30.733688629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hzn8p,Uid:4fe0b713-4e84-46f7-bbcb-cc6deeeadb84,Namespace:kube-system,Attempt:0,}" Feb 13 19:03:30.743982 containerd[1954]: time="2025-02-13T19:03:30.743530429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cl7z2,Uid:e1d96c86-6dba-492d-9640-adc956b1cef8,Namespace:kube-system,Attempt:0,}" Feb 13 19:03:33.236478 (udev-worker)[4313]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:03:33.243309 systemd-networkd[1866]: cilium_host: Link UP Feb 13 19:03:33.244868 systemd-networkd[1866]: cilium_net: Link UP Feb 13 19:03:33.247309 (udev-worker)[4314]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:03:33.248821 systemd-networkd[1866]: cilium_net: Gained carrier Feb 13 19:03:33.249281 systemd-networkd[1866]: cilium_host: Gained carrier Feb 13 19:03:33.422021 systemd-networkd[1866]: cilium_vxlan: Link UP Feb 13 19:03:33.422035 systemd-networkd[1866]: cilium_vxlan: Gained carrier Feb 13 19:03:33.736847 systemd-networkd[1866]: cilium_host: Gained IPv6LL Feb 13 19:03:33.903560 kernel: NET: Registered PF_ALG protocol family Feb 13 19:03:34.024820 systemd-networkd[1866]: cilium_net: Gained IPv6LL Feb 13 19:03:35.214170 systemd-networkd[1866]: lxc_health: Link UP Feb 13 19:03:35.234798 systemd-networkd[1866]: lxc_health: Gained carrier Feb 13 19:03:35.305106 systemd-networkd[1866]: cilium_vxlan: Gained IPv6LL Feb 13 19:03:35.826628 kernel: eth0: renamed from tmp11165 Feb 13 19:03:35.836570 systemd-networkd[1866]: lxc2c4eb3a65418: Link UP Feb 13 19:03:35.839268 systemd-networkd[1866]: lxc2c4eb3a65418: Gained carrier Feb 13 19:03:35.881564 kernel: eth0: renamed from tmpb0b91 Feb 13 19:03:35.883375 systemd-networkd[1866]: lxc604482753047: Link UP Feb 13 19:03:35.888125 (udev-worker)[4360]: Network interface NamePolicy= disabled on kernel command line. 
Feb 13 19:03:35.893782 systemd-networkd[1866]: lxc604482753047: Gained carrier Feb 13 19:03:36.328721 systemd-networkd[1866]: lxc_health: Gained IPv6LL Feb 13 19:03:36.968728 systemd-networkd[1866]: lxc604482753047: Gained IPv6LL Feb 13 19:03:37.224797 systemd-networkd[1866]: lxc2c4eb3a65418: Gained IPv6LL Feb 13 19:03:39.378444 ntpd[1927]: Listen normally on 8 cilium_host 192.168.0.192:123 Feb 13 19:03:39.378592 ntpd[1927]: Listen normally on 9 cilium_net [fe80::589c:35ff:fe7a:b46%4]:123 Feb 13 19:03:39.378672 ntpd[1927]: Listen normally on 10 cilium_host [fe80::8426:50ff:fe51:692e%5]:123 Feb 13 19:03:39.378737 ntpd[1927]: Listen normally on 11 cilium_vxlan [fe80::c4c8:11ff:fe0b:3f58%6]:123 Feb 13 19:03:39.378801 ntpd[1927]: Listen normally on 12 lxc_health [fe80::2c6b:50ff:feb5:9497%8]:123 Feb 13 19:03:39.378865 ntpd[1927]: Listen normally on 13 lxc2c4eb3a65418 [fe80::4403:2bff:fe00:d889%10]:123 Feb 13 19:03:39.378937 ntpd[1927]: Listen normally on 14 lxc604482753047 [fe80::3860:91ff:fe4b:170c%12]:123 Feb 13 19:03:39.722630 kubelet[3514]: I0213 19:03:39.721356 3514 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:03:44.032412 containerd[1954]: time="2025-02-13T19:03:44.031368659Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:03:44.032412 containerd[1954]: time="2025-02-13T19:03:44.031472999Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:03:44.035297 containerd[1954]: time="2025-02-13T19:03:44.032555987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:03:44.035790 containerd[1954]: time="2025-02-13T19:03:44.035596091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:03:44.117634 systemd[1]: Started cri-containerd-b0b913a58c669556a027c38da19aaff4529cd40a726a941c78231817bcfd945f.scope - libcontainer container b0b913a58c669556a027c38da19aaff4529cd40a726a941c78231817bcfd945f. Feb 13 19:03:44.161555 containerd[1954]: time="2025-02-13T19:03:44.160895688Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:03:44.161555 containerd[1954]: time="2025-02-13T19:03:44.161012448Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:03:44.162628 containerd[1954]: time="2025-02-13T19:03:44.161832108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:03:44.162628 containerd[1954]: time="2025-02-13T19:03:44.162090912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:03:44.208910 systemd[1]: Started cri-containerd-111651c31bac0c30c04cbc4fe511bb3ebd24bbe60b668529970ce1ab10433adc.scope - libcontainer container 111651c31bac0c30c04cbc4fe511bb3ebd24bbe60b668529970ce1ab10433adc. Feb 13 19:03:44.294309 containerd[1954]: time="2025-02-13T19:03:44.294200544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cl7z2,Uid:e1d96c86-6dba-492d-9640-adc956b1cef8,Namespace:kube-system,Attempt:0,} returns sandbox id \"b0b913a58c669556a027c38da19aaff4529cd40a726a941c78231817bcfd945f\"" Feb 13 19:03:44.302769 containerd[1954]: time="2025-02-13T19:03:44.302685000Z" level=info msg="CreateContainer within sandbox \"b0b913a58c669556a027c38da19aaff4529cd40a726a941c78231817bcfd945f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:03:44.347713 containerd[1954]: time="2025-02-13T19:03:44.347594209Z" level=info msg="CreateContainer within sandbox \"b0b913a58c669556a027c38da19aaff4529cd40a726a941c78231817bcfd945f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"53cef4d28712acdf6da2f0220432dba1ec34ea9338cc14b110c79255fa13673c\"" Feb 13 19:03:44.350549 containerd[1954]: time="2025-02-13T19:03:44.349319797Z" level=info msg="StartContainer for \"53cef4d28712acdf6da2f0220432dba1ec34ea9338cc14b110c79255fa13673c\"" Feb 13 19:03:44.406790 containerd[1954]: time="2025-02-13T19:03:44.405785641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hzn8p,Uid:4fe0b713-4e84-46f7-bbcb-cc6deeeadb84,Namespace:kube-system,Attempt:0,} returns sandbox id \"111651c31bac0c30c04cbc4fe511bb3ebd24bbe60b668529970ce1ab10433adc\"" Feb 13 19:03:44.417790 containerd[1954]: time="2025-02-13T19:03:44.417708097Z" level=info msg="CreateContainer within sandbox \"111651c31bac0c30c04cbc4fe511bb3ebd24bbe60b668529970ce1ab10433adc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:03:44.442808 systemd[1]: Started cri-containerd-53cef4d28712acdf6da2f0220432dba1ec34ea9338cc14b110c79255fa13673c.scope - libcontainer container 53cef4d28712acdf6da2f0220432dba1ec34ea9338cc14b110c79255fa13673c. Feb 13 19:03:44.459778 containerd[1954]: time="2025-02-13T19:03:44.459403009Z" level=info msg="CreateContainer within sandbox \"111651c31bac0c30c04cbc4fe511bb3ebd24bbe60b668529970ce1ab10433adc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d1d75d13ad2994d6d578884007876105063674864d6a225ac100b8405e928304\"" Feb 13 19:03:44.461433 containerd[1954]: time="2025-02-13T19:03:44.461359453Z" level=info msg="StartContainer for \"d1d75d13ad2994d6d578884007876105063674864d6a225ac100b8405e928304\"" Feb 13 19:03:44.531989 systemd[1]: Started cri-containerd-d1d75d13ad2994d6d578884007876105063674864d6a225ac100b8405e928304.scope - libcontainer container d1d75d13ad2994d6d578884007876105063674864d6a225ac100b8405e928304. 
Feb 13 19:03:44.569723 containerd[1954]: time="2025-02-13T19:03:44.568992074Z" level=info msg="StartContainer for \"53cef4d28712acdf6da2f0220432dba1ec34ea9338cc14b110c79255fa13673c\" returns successfully" Feb 13 19:03:44.666791 containerd[1954]: time="2025-02-13T19:03:44.666591542Z" level=info msg="StartContainer for \"d1d75d13ad2994d6d578884007876105063674864d6a225ac100b8405e928304\" returns successfully" Feb 13 19:03:45.618471 kubelet[3514]: I0213 19:03:45.617662 3514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-hzn8p" podStartSLOduration=31.617614851 podStartE2EDuration="31.617614851s" podCreationTimestamp="2025-02-13 19:03:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:03:45.612383619 +0000 UTC m=+37.577218268" watchObservedRunningTime="2025-02-13 19:03:45.617614851 +0000 UTC m=+37.582449584" Feb 13 19:03:45.618471 kubelet[3514]: I0213 19:03:45.618221 3514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-cl7z2" podStartSLOduration=31.617984535 podStartE2EDuration="31.617984535s" podCreationTimestamp="2025-02-13 19:03:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:03:44.612002366 +0000 UTC m=+36.576836979" watchObservedRunningTime="2025-02-13 19:03:45.617984535 +0000 UTC m=+37.582819160" Feb 13 19:03:55.899040 systemd[1]: Started sshd@9-172.31.18.242:22-139.178.89.65:46730.service - OpenSSH per-connection server daemon (139.178.89.65:46730). Feb 13 19:03:56.077348 sshd[4895]: Accepted publickey for core from 139.178.89.65 port 46730 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:03:56.080538 sshd-session[4895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:56.088430 systemd-logind[1933]: New session 10 of user core. Feb 13 19:03:56.098741 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 19:03:56.380589 sshd[4897]: Connection closed by 139.178.89.65 port 46730 Feb 13 19:03:56.382015 sshd-session[4895]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:56.390458 systemd[1]: sshd@9-172.31.18.242:22-139.178.89.65:46730.service: Deactivated successfully. Feb 13 19:03:56.394222 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 19:03:56.395834 systemd-logind[1933]: Session 10 logged out. Waiting for processes to exit. Feb 13 19:03:56.400094 systemd-logind[1933]: Removed session 10. Feb 13 19:04:01.421022 systemd[1]: Started sshd@10-172.31.18.242:22-139.178.89.65:46742.service - OpenSSH per-connection server daemon (139.178.89.65:46742). Feb 13 19:04:01.613972 sshd[4911]: Accepted publickey for core from 139.178.89.65 port 46742 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:04:01.616445 sshd-session[4911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:04:01.624907 systemd-logind[1933]: New session 11 of user core. Feb 13 19:04:01.634826 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 19:04:01.874091 sshd[4913]: Connection closed by 139.178.89.65 port 46742 Feb 13 19:04:01.875175 sshd-session[4911]: pam_unix(sshd:session): session closed for user core Feb 13 19:04:01.881475 systemd[1]: sshd@10-172.31.18.242:22-139.178.89.65:46742.service: Deactivated successfully. 
Feb 13 19:04:01.884835 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 19:04:01.886254 systemd-logind[1933]: Session 11 logged out. Waiting for processes to exit. Feb 13 19:04:01.888826 systemd-logind[1933]: Removed session 11. Feb 13 19:04:06.919962 systemd[1]: Started sshd@11-172.31.18.242:22-139.178.89.65:43696.service - OpenSSH per-connection server daemon (139.178.89.65:43696). Feb 13 19:04:07.103560 sshd[4926]: Accepted publickey for core from 139.178.89.65 port 43696 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:04:07.106066 sshd-session[4926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:04:07.114927 systemd-logind[1933]: New session 12 of user core. Feb 13 19:04:07.125851 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 19:04:07.365708 sshd[4928]: Connection closed by 139.178.89.65 port 43696 Feb 13 19:04:07.367177 sshd-session[4926]: pam_unix(sshd:session): session closed for user core Feb 13 19:04:07.373416 systemd-logind[1933]: Session 12 logged out. Waiting for processes to exit. Feb 13 19:04:07.376318 systemd[1]: sshd@11-172.31.18.242:22-139.178.89.65:43696.service: Deactivated successfully. Feb 13 19:04:07.381185 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 19:04:07.383383 systemd-logind[1933]: Removed session 12. Feb 13 19:04:12.412068 systemd[1]: Started sshd@12-172.31.18.242:22-139.178.89.65:43702.service - OpenSSH per-connection server daemon (139.178.89.65:43702). Feb 13 19:04:12.602801 sshd[4942]: Accepted publickey for core from 139.178.89.65 port 43702 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:04:12.605371 sshd-session[4942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:04:12.613529 systemd-logind[1933]: New session 13 of user core. Feb 13 19:04:12.618737 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 19:04:12.857231 sshd[4944]: Connection closed by 139.178.89.65 port 43702 Feb 13 19:04:12.858336 sshd-session[4942]: pam_unix(sshd:session): session closed for user core Feb 13 19:04:12.863044 systemd[1]: sshd@12-172.31.18.242:22-139.178.89.65:43702.service: Deactivated successfully. Feb 13 19:04:12.867528 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 19:04:12.872151 systemd-logind[1933]: Session 13 logged out. Waiting for processes to exit. Feb 13 19:04:12.874109 systemd-logind[1933]: Removed session 13. Feb 13 19:04:12.898089 systemd[1]: Started sshd@13-172.31.18.242:22-139.178.89.65:43712.service - OpenSSH per-connection server daemon (139.178.89.65:43712). Feb 13 19:04:13.082692 sshd[4957]: Accepted publickey for core from 139.178.89.65 port 43712 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:04:13.086086 sshd-session[4957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:04:13.095131 systemd-logind[1933]: New session 14 of user core. Feb 13 19:04:13.102786 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 19:04:13.423557 sshd[4959]: Connection closed by 139.178.89.65 port 43712 Feb 13 19:04:13.423929 sshd-session[4957]: pam_unix(sshd:session): session closed for user core Feb 13 19:04:13.432155 systemd[1]: sshd@13-172.31.18.242:22-139.178.89.65:43712.service: Deactivated successfully. Feb 13 19:04:13.445637 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 19:04:13.448209 systemd-logind[1933]: Session 14 logged out. 
Waiting for processes to exit. Feb 13 19:04:13.476695 systemd[1]: Started sshd@14-172.31.18.242:22-139.178.89.65:43716.service - OpenSSH per-connection server daemon (139.178.89.65:43716). Feb 13 19:04:13.478772 systemd-logind[1933]: Removed session 14. Feb 13 19:04:13.681050 sshd[4968]: Accepted publickey for core from 139.178.89.65 port 43716 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:04:13.683589 sshd-session[4968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:04:13.692045 systemd-logind[1933]: New session 15 of user core. Feb 13 19:04:13.700815 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 19:04:13.943971 sshd[4971]: Connection closed by 139.178.89.65 port 43716 Feb 13 19:04:13.945048 sshd-session[4968]: pam_unix(sshd:session): session closed for user core Feb 13 19:04:13.952260 systemd[1]: sshd@14-172.31.18.242:22-139.178.89.65:43716.service: Deactivated successfully. Feb 13 19:04:13.957295 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 19:04:13.959815 systemd-logind[1933]: Session 15 logged out. Waiting for processes to exit. Feb 13 19:04:13.962012 systemd-logind[1933]: Removed session 15. Feb 13 19:04:18.989955 systemd[1]: Started sshd@15-172.31.18.242:22-139.178.89.65:39756.service - OpenSSH per-connection server daemon (139.178.89.65:39756). Feb 13 19:04:19.178959 sshd[4985]: Accepted publickey for core from 139.178.89.65 port 39756 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:04:19.181466 sshd-session[4985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:04:19.190457 systemd-logind[1933]: New session 16 of user core. Feb 13 19:04:19.194776 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 19:04:19.440673 sshd[4987]: Connection closed by 139.178.89.65 port 39756 Feb 13 19:04:19.441732 sshd-session[4985]: pam_unix(sshd:session): session closed for user core Feb 13 19:04:19.447287 systemd[1]: sshd@15-172.31.18.242:22-139.178.89.65:39756.service: Deactivated successfully. Feb 13 19:04:19.451447 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 19:04:19.454736 systemd-logind[1933]: Session 16 logged out. Waiting for processes to exit. Feb 13 19:04:19.457176 systemd-logind[1933]: Removed session 16. Feb 13 19:04:24.489818 systemd[1]: Started sshd@16-172.31.18.242:22-139.178.89.65:39758.service - OpenSSH per-connection server daemon (139.178.89.65:39758). Feb 13 19:04:24.674307 sshd[5000]: Accepted publickey for core from 139.178.89.65 port 39758 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:04:24.676830 sshd-session[5000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:04:24.684862 systemd-logind[1933]: New session 17 of user core. Feb 13 19:04:24.691859 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 19:04:24.940133 sshd[5002]: Connection closed by 139.178.89.65 port 39758 Feb 13 19:04:24.941256 sshd-session[5000]: pam_unix(sshd:session): session closed for user core Feb 13 19:04:24.948049 systemd[1]: sshd@16-172.31.18.242:22-139.178.89.65:39758.service: Deactivated successfully. Feb 13 19:04:24.951421 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 19:04:24.953355 systemd-logind[1933]: Session 17 logged out. Waiting for processes to exit. Feb 13 19:04:24.955957 systemd-logind[1933]: Removed session 17. 
Feb 13 19:04:29.985010 systemd[1]: Started sshd@17-172.31.18.242:22-139.178.89.65:46498.service - OpenSSH per-connection server daemon (139.178.89.65:46498). Feb 13 19:04:30.170370 sshd[5015]: Accepted publickey for core from 139.178.89.65 port 46498 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:04:30.173006 sshd-session[5015]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:04:30.183680 systemd-logind[1933]: New session 18 of user core. Feb 13 19:04:30.191864 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 19:04:30.437487 sshd[5017]: Connection closed by 139.178.89.65 port 46498 Feb 13 19:04:30.438429 sshd-session[5015]: pam_unix(sshd:session): session closed for user core Feb 13 19:04:30.445325 systemd[1]: sshd@17-172.31.18.242:22-139.178.89.65:46498.service: Deactivated successfully. Feb 13 19:04:30.449579 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 19:04:30.451839 systemd-logind[1933]: Session 18 logged out. Waiting for processes to exit. Feb 13 19:04:30.453556 systemd-logind[1933]: Removed session 18. Feb 13 19:04:35.479009 systemd[1]: Started sshd@18-172.31.18.242:22-139.178.89.65:46892.service - OpenSSH per-connection server daemon (139.178.89.65:46892). Feb 13 19:04:35.662716 sshd[5028]: Accepted publickey for core from 139.178.89.65 port 46892 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:04:35.666707 sshd-session[5028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:04:35.677078 systemd-logind[1933]: New session 19 of user core. Feb 13 19:04:35.686787 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 19:04:35.928097 sshd[5031]: Connection closed by 139.178.89.65 port 46892 Feb 13 19:04:35.927777 sshd-session[5028]: pam_unix(sshd:session): session closed for user core Feb 13 19:04:35.934561 systemd[1]: sshd@18-172.31.18.242:22-139.178.89.65:46892.service: Deactivated successfully. Feb 13 19:04:35.938565 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 19:04:35.944117 systemd-logind[1933]: Session 19 logged out. Waiting for processes to exit. Feb 13 19:04:35.946144 systemd-logind[1933]: Removed session 19. Feb 13 19:04:35.970040 systemd[1]: Started sshd@19-172.31.18.242:22-139.178.89.65:46894.service - OpenSSH per-connection server daemon (139.178.89.65:46894). Feb 13 19:04:36.167255 sshd[5042]: Accepted publickey for core from 139.178.89.65 port 46894 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:04:36.169766 sshd-session[5042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:04:36.179303 systemd-logind[1933]: New session 20 of user core. Feb 13 19:04:36.187808 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 19:04:36.484703 sshd[5044]: Connection closed by 139.178.89.65 port 46894 Feb 13 19:04:36.485567 sshd-session[5042]: pam_unix(sshd:session): session closed for user core Feb 13 19:04:36.490784 systemd[1]: sshd@19-172.31.18.242:22-139.178.89.65:46894.service: Deactivated successfully. Feb 13 19:04:36.495419 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 19:04:36.499111 systemd-logind[1933]: Session 20 logged out. Waiting for processes to exit. Feb 13 19:04:36.501814 systemd-logind[1933]: Removed session 20. Feb 13 19:04:36.532023 systemd[1]: Started sshd@20-172.31.18.242:22-139.178.89.65:46900.service - OpenSSH per-connection server daemon (139.178.89.65:46900). 
Feb 13 19:04:36.727266 sshd[5054]: Accepted publickey for core from 139.178.89.65 port 46900 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:04:36.729777 sshd-session[5054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:04:36.737822 systemd-logind[1933]: New session 21 of user core. Feb 13 19:04:36.746791 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 19:04:38.031906 sshd[5056]: Connection closed by 139.178.89.65 port 46900 Feb 13 19:04:38.032794 sshd-session[5054]: pam_unix(sshd:session): session closed for user core Feb 13 19:04:38.044093 systemd-logind[1933]: Session 21 logged out. Waiting for processes to exit. Feb 13 19:04:38.046421 systemd[1]: sshd@20-172.31.18.242:22-139.178.89.65:46900.service: Deactivated successfully. Feb 13 19:04:38.056237 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 19:04:38.077101 systemd-logind[1933]: Removed session 21. Feb 13 19:04:38.089017 systemd[1]: Started sshd@21-172.31.18.242:22-139.178.89.65:46906.service - OpenSSH per-connection server daemon (139.178.89.65:46906). Feb 13 19:04:38.276370 sshd[5072]: Accepted publickey for core from 139.178.89.65 port 46906 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:04:38.279084 sshd-session[5072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:04:38.288989 systemd-logind[1933]: New session 22 of user core. Feb 13 19:04:38.308317 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 19:04:38.792187 sshd[5075]: Connection closed by 139.178.89.65 port 46906 Feb 13 19:04:38.792806 sshd-session[5072]: pam_unix(sshd:session): session closed for user core Feb 13 19:04:38.798731 systemd[1]: sshd@21-172.31.18.242:22-139.178.89.65:46906.service: Deactivated successfully. Feb 13 19:04:38.802688 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 19:04:38.807010 systemd-logind[1933]: Session 22 logged out. Waiting for processes to exit. Feb 13 19:04:38.809446 systemd-logind[1933]: Removed session 22. Feb 13 19:04:38.835026 systemd[1]: Started sshd@22-172.31.18.242:22-139.178.89.65:46908.service - OpenSSH per-connection server daemon (139.178.89.65:46908). Feb 13 19:04:39.020766 sshd[5085]: Accepted publickey for core from 139.178.89.65 port 46908 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:04:39.023202 sshd-session[5085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:04:39.032587 systemd-logind[1933]: New session 23 of user core. Feb 13 19:04:39.041757 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 19:04:39.281725 sshd[5087]: Connection closed by 139.178.89.65 port 46908 Feb 13 19:04:39.282465 sshd-session[5085]: pam_unix(sshd:session): session closed for user core Feb 13 19:04:39.289196 systemd[1]: sshd@22-172.31.18.242:22-139.178.89.65:46908.service: Deactivated successfully. Feb 13 19:04:39.293153 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 19:04:39.296994 systemd-logind[1933]: Session 23 logged out. Waiting for processes to exit. Feb 13 19:04:39.299059 systemd-logind[1933]: Removed session 23. Feb 13 19:04:44.326995 systemd[1]: Started sshd@23-172.31.18.242:22-139.178.89.65:46910.service - OpenSSH per-connection server daemon (139.178.89.65:46910). 
Feb 13 19:04:44.513613 sshd[5099]: Accepted publickey for core from 139.178.89.65 port 46910 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:04:44.516134 sshd-session[5099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:04:44.524801 systemd-logind[1933]: New session 24 of user core. Feb 13 19:04:44.531790 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 19:04:44.780798 sshd[5101]: Connection closed by 139.178.89.65 port 46910 Feb 13 19:04:44.782005 sshd-session[5099]: pam_unix(sshd:session): session closed for user core Feb 13 19:04:44.791762 systemd[1]: sshd@23-172.31.18.242:22-139.178.89.65:46910.service: Deactivated successfully. Feb 13 19:04:44.800049 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 19:04:44.802704 systemd-logind[1933]: Session 24 logged out. Waiting for processes to exit. Feb 13 19:04:44.805583 systemd-logind[1933]: Removed session 24. Feb 13 19:04:49.823044 systemd[1]: Started sshd@24-172.31.18.242:22-139.178.89.65:52776.service - OpenSSH per-connection server daemon (139.178.89.65:52776). Feb 13 19:04:50.020931 sshd[5116]: Accepted publickey for core from 139.178.89.65 port 52776 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:04:50.022869 sshd-session[5116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:04:50.034715 systemd-logind[1933]: New session 25 of user core. Feb 13 19:04:50.040908 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 19:04:50.286681 sshd[5118]: Connection closed by 139.178.89.65 port 52776 Feb 13 19:04:50.287852 sshd-session[5116]: pam_unix(sshd:session): session closed for user core Feb 13 19:04:50.294661 systemd[1]: sshd@24-172.31.18.242:22-139.178.89.65:52776.service: Deactivated successfully. Feb 13 19:04:50.298872 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 19:04:50.300459 systemd-logind[1933]: Session 25 logged out. Waiting for processes to exit. Feb 13 19:04:50.302330 systemd-logind[1933]: Removed session 25. Feb 13 19:04:55.329056 systemd[1]: Started sshd@25-172.31.18.242:22-139.178.89.65:59998.service - OpenSSH per-connection server daemon (139.178.89.65:59998). Feb 13 19:04:55.518334 sshd[5130]: Accepted publickey for core from 139.178.89.65 port 59998 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:04:55.521010 sshd-session[5130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:04:55.531119 systemd-logind[1933]: New session 26 of user core. Feb 13 19:04:55.539833 systemd[1]: Started session-26.scope - Session 26 of User core. Feb 13 19:04:55.790877 sshd[5132]: Connection closed by 139.178.89.65 port 59998 Feb 13 19:04:55.791791 sshd-session[5130]: pam_unix(sshd:session): session closed for user core Feb 13 19:04:55.797234 systemd[1]: sshd@25-172.31.18.242:22-139.178.89.65:59998.service: Deactivated successfully. Feb 13 19:04:55.801097 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 19:04:55.806331 systemd-logind[1933]: Session 26 logged out. Waiting for processes to exit. Feb 13 19:04:55.808273 systemd-logind[1933]: Removed session 26. Feb 13 19:05:00.840989 systemd[1]: Started sshd@26-172.31.18.242:22-139.178.89.65:60006.service - OpenSSH per-connection server daemon (139.178.89.65:60006). 
Feb 13 19:05:01.026345 sshd[5145]: Accepted publickey for core from 139.178.89.65 port 60006 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:05:01.029256 sshd-session[5145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:05:01.037072 systemd-logind[1933]: New session 27 of user core. Feb 13 19:05:01.045787 systemd[1]: Started session-27.scope - Session 27 of User core. Feb 13 19:05:01.285562 sshd[5147]: Connection closed by 139.178.89.65 port 60006 Feb 13 19:05:01.286315 sshd-session[5145]: pam_unix(sshd:session): session closed for user core Feb 13 19:05:01.292222 systemd[1]: sshd@26-172.31.18.242:22-139.178.89.65:60006.service: Deactivated successfully. Feb 13 19:05:01.297319 systemd[1]: session-27.scope: Deactivated successfully. Feb 13 19:05:01.301133 systemd-logind[1933]: Session 27 logged out. Waiting for processes to exit. Feb 13 19:05:01.303091 systemd-logind[1933]: Removed session 27. Feb 13 19:05:01.326020 systemd[1]: Started sshd@27-172.31.18.242:22-139.178.89.65:60016.service - OpenSSH per-connection server daemon (139.178.89.65:60016). Feb 13 19:05:01.521912 sshd[5158]: Accepted publickey for core from 139.178.89.65 port 60016 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:05:01.524437 sshd-session[5158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:05:01.534529 systemd-logind[1933]: New session 28 of user core. Feb 13 19:05:01.543779 systemd[1]: Started session-28.scope - Session 28 of User core. Feb 13 19:05:04.023457 containerd[1954]: time="2025-02-13T19:05:04.023368132Z" level=info msg="StopContainer for \"c34f440694a7f692b45ffc89bdb7523796898b2afe4874057414be23bd1bceb6\" with timeout 30 (s)" Feb 13 19:05:04.028971 systemd[1]: run-containerd-runc-k8s.io-ec6b220468da37771788d554f9a5c6dc390642a47ca4da00726057e5adb9f24b-runc.eURPwS.mount: Deactivated successfully. Feb 13 19:05:04.034757 containerd[1954]: time="2025-02-13T19:05:04.034682609Z" level=info msg="Stop container \"c34f440694a7f692b45ffc89bdb7523796898b2afe4874057414be23bd1bceb6\" with signal terminated" Feb 13 19:05:04.056881 containerd[1954]: time="2025-02-13T19:05:04.056799761Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:05:04.066896 systemd[1]: cri-containerd-c34f440694a7f692b45ffc89bdb7523796898b2afe4874057414be23bd1bceb6.scope: Deactivated successfully. Feb 13 19:05:04.080024 containerd[1954]: time="2025-02-13T19:05:04.079731641Z" level=info msg="StopContainer for \"ec6b220468da37771788d554f9a5c6dc390642a47ca4da00726057e5adb9f24b\" with timeout 2 (s)" Feb 13 19:05:04.082346 containerd[1954]: time="2025-02-13T19:05:04.082085957Z" level=info msg="Stop container \"ec6b220468da37771788d554f9a5c6dc390642a47ca4da00726057e5adb9f24b\" with signal terminated" Feb 13 19:05:04.100934 systemd-networkd[1866]: lxc_health: Link DOWN Feb 13 19:05:04.100953 systemd-networkd[1866]: lxc_health: Lost carrier Feb 13 19:05:04.128360 systemd[1]: cri-containerd-ec6b220468da37771788d554f9a5c6dc390642a47ca4da00726057e5adb9f24b.scope: Deactivated successfully. Feb 13 19:05:04.129130 systemd[1]: cri-containerd-ec6b220468da37771788d554f9a5c6dc390642a47ca4da00726057e5adb9f24b.scope: Consumed 14.266s CPU time, 125.3M memory peak, 144K read from disk, 12.9M written to disk. 
Feb 13 19:05:04.159762 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c34f440694a7f692b45ffc89bdb7523796898b2afe4874057414be23bd1bceb6-rootfs.mount: Deactivated successfully. Feb 13 19:05:04.188130 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec6b220468da37771788d554f9a5c6dc390642a47ca4da00726057e5adb9f24b-rootfs.mount: Deactivated successfully. Feb 13 19:05:04.188928 containerd[1954]: time="2025-02-13T19:05:04.188303153Z" level=info msg="shim disconnected" id=c34f440694a7f692b45ffc89bdb7523796898b2afe4874057414be23bd1bceb6 namespace=k8s.io Feb 13 19:05:04.188928 containerd[1954]: time="2025-02-13T19:05:04.188389829Z" level=warning msg="cleaning up after shim disconnected" id=c34f440694a7f692b45ffc89bdb7523796898b2afe4874057414be23bd1bceb6 namespace=k8s.io Feb 13 19:05:04.188928 containerd[1954]: time="2025-02-13T19:05:04.188410553Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:05:04.197550 containerd[1954]: time="2025-02-13T19:05:04.196463729Z" level=info msg="shim disconnected" id=ec6b220468da37771788d554f9a5c6dc390642a47ca4da00726057e5adb9f24b namespace=k8s.io Feb 13 19:05:04.197550 containerd[1954]: time="2025-02-13T19:05:04.196733441Z" level=warning msg="cleaning up after shim disconnected" id=ec6b220468da37771788d554f9a5c6dc390642a47ca4da00726057e5adb9f24b namespace=k8s.io Feb 13 19:05:04.197550 containerd[1954]: time="2025-02-13T19:05:04.196756337Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:05:04.223074 containerd[1954]: time="2025-02-13T19:05:04.222995717Z" level=info msg="StopContainer for \"c34f440694a7f692b45ffc89bdb7523796898b2afe4874057414be23bd1bceb6\" returns successfully" Feb 13 19:05:04.225327 containerd[1954]: time="2025-02-13T19:05:04.225263273Z" level=info msg="StopPodSandbox for \"870e4888ec287653a690ac276677dd546169f220eb225647230f0412d93190d4\"" Feb 13 19:05:04.225540 containerd[1954]: time="2025-02-13T19:05:04.225334241Z" level=info msg="Container to stop \"c34f440694a7f692b45ffc89bdb7523796898b2afe4874057414be23bd1bceb6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:05:04.229783 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-870e4888ec287653a690ac276677dd546169f220eb225647230f0412d93190d4-shm.mount: Deactivated successfully. 
Feb 13 19:05:04.239546 containerd[1954]: time="2025-02-13T19:05:04.238308474Z" level=info msg="StopContainer for \"ec6b220468da37771788d554f9a5c6dc390642a47ca4da00726057e5adb9f24b\" returns successfully" Feb 13 19:05:04.239546 containerd[1954]: time="2025-02-13T19:05:04.239343978Z" level=info msg="StopPodSandbox for \"e1bf4fae499833cced2a99c104deb1d04571536c2e1c740c22c29468f9ac546b\"" Feb 13 19:05:04.239546 containerd[1954]: time="2025-02-13T19:05:04.239393370Z" level=info msg="Container to stop \"a406f243f1041a8b9e2dcf8a4538dbbc880070daca1081d286f2dc4382721450\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:05:04.239546 containerd[1954]: time="2025-02-13T19:05:04.239437890Z" level=info msg="Container to stop \"02ca31979781439cb1ebdef85cb9cc68d9337844496f583426265acb14d751e1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:05:04.239546 containerd[1954]: time="2025-02-13T19:05:04.239463258Z" level=info msg="Container to stop \"ec6b220468da37771788d554f9a5c6dc390642a47ca4da00726057e5adb9f24b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:05:04.239546 containerd[1954]: time="2025-02-13T19:05:04.239491326Z" level=info msg="Container to stop \"85d3eca1da98a2e6faa11d5f9c9f9fd87dbcf850e7c3b318f9822b4e33216cec\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:05:04.239546 containerd[1954]: time="2025-02-13T19:05:04.239545782Z" level=info msg="Container to stop \"0e04be7c32f998f663f109efc4a1f6aa715c77affd9a63041f3c232f7b9dc992\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:05:04.243251 systemd[1]: cri-containerd-870e4888ec287653a690ac276677dd546169f220eb225647230f0412d93190d4.scope: Deactivated successfully. Feb 13 19:05:04.258247 systemd[1]: cri-containerd-e1bf4fae499833cced2a99c104deb1d04571536c2e1c740c22c29468f9ac546b.scope: Deactivated successfully. 
Feb 13 19:05:04.310760 containerd[1954]: time="2025-02-13T19:05:04.310022034Z" level=info msg="shim disconnected" id=870e4888ec287653a690ac276677dd546169f220eb225647230f0412d93190d4 namespace=k8s.io Feb 13 19:05:04.313988 containerd[1954]: time="2025-02-13T19:05:04.313915074Z" level=warning msg="cleaning up after shim disconnected" id=870e4888ec287653a690ac276677dd546169f220eb225647230f0412d93190d4 namespace=k8s.io Feb 13 19:05:04.314384 containerd[1954]: time="2025-02-13T19:05:04.311839986Z" level=info msg="shim disconnected" id=e1bf4fae499833cced2a99c104deb1d04571536c2e1c740c22c29468f9ac546b namespace=k8s.io Feb 13 19:05:04.314600 containerd[1954]: time="2025-02-13T19:05:04.314550834Z" level=warning msg="cleaning up after shim disconnected" id=e1bf4fae499833cced2a99c104deb1d04571536c2e1c740c22c29468f9ac546b namespace=k8s.io Feb 13 19:05:04.314736 containerd[1954]: time="2025-02-13T19:05:04.314708310Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:05:04.314987 containerd[1954]: time="2025-02-13T19:05:04.314954334Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:05:04.347129 containerd[1954]: time="2025-02-13T19:05:04.347061138Z" level=info msg="TearDown network for sandbox \"e1bf4fae499833cced2a99c104deb1d04571536c2e1c740c22c29468f9ac546b\" successfully" Feb 13 19:05:04.347129 containerd[1954]: time="2025-02-13T19:05:04.347113590Z" level=info msg="StopPodSandbox for \"e1bf4fae499833cced2a99c104deb1d04571536c2e1c740c22c29468f9ac546b\" returns successfully" Feb 13 19:05:04.353452 containerd[1954]: time="2025-02-13T19:05:04.352916430Z" level=info msg="TearDown network for sandbox \"870e4888ec287653a690ac276677dd546169f220eb225647230f0412d93190d4\" successfully" Feb 13 19:05:04.353452 containerd[1954]: time="2025-02-13T19:05:04.352993278Z" level=info msg="StopPodSandbox for \"870e4888ec287653a690ac276677dd546169f220eb225647230f0412d93190d4\" returns successfully" Feb 13 19:05:04.450025 kubelet[3514]: I0213 19:05:04.449958 3514 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e5b0f5ca-3437-4fda-aea0-800c870fc242-xtables-lock\") pod \"e5b0f5ca-3437-4fda-aea0-800c870fc242\" (UID: \"e5b0f5ca-3437-4fda-aea0-800c870fc242\") " Feb 13 19:05:04.450638 kubelet[3514]: I0213 19:05:04.450038 3514 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e5b0f5ca-3437-4fda-aea0-800c870fc242-clustermesh-secrets\") pod \"e5b0f5ca-3437-4fda-aea0-800c870fc242\" (UID: \"e5b0f5ca-3437-4fda-aea0-800c870fc242\") " Feb 13 19:05:04.450638 kubelet[3514]: I0213 19:05:04.450083 3514 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e5b0f5ca-3437-4fda-aea0-800c870fc242-cilium-config-path\") pod \"e5b0f5ca-3437-4fda-aea0-800c870fc242\" (UID: \"e5b0f5ca-3437-4fda-aea0-800c870fc242\") " Feb 13 19:05:04.450638 kubelet[3514]: I0213 19:05:04.450119 3514 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e5b0f5ca-3437-4fda-aea0-800c870fc242-bpf-maps\") pod \"e5b0f5ca-3437-4fda-aea0-800c870fc242\" (UID: \"e5b0f5ca-3437-4fda-aea0-800c870fc242\") " Feb 13 19:05:04.450638 kubelet[3514]: I0213 19:05:04.450157 3514 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/0d0ec3f6-0dd1-4d52-b4c4-ce2056d6e24b-cilium-config-path\") pod \"0d0ec3f6-0dd1-4d52-b4c4-ce2056d6e24b\" (UID: \"0d0ec3f6-0dd1-4d52-b4c4-ce2056d6e24b\") " Feb 13 19:05:04.450638 kubelet[3514]: I0213 19:05:04.450195 3514 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e5b0f5ca-3437-4fda-aea0-800c870fc242-hostproc\") pod \"e5b0f5ca-3437-4fda-aea0-800c870fc242\" (UID: \"e5b0f5ca-3437-4fda-aea0-800c870fc242\") " Feb 13 19:05:04.450638 kubelet[3514]: I0213 19:05:04.450228 3514 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e5b0f5ca-3437-4fda-aea0-800c870fc242-host-proc-sys-kernel\") pod \"e5b0f5ca-3437-4fda-aea0-800c870fc242\" (UID: \"e5b0f5ca-3437-4fda-aea0-800c870fc242\") " Feb 13 19:05:04.450972 kubelet[3514]: I0213 19:05:04.450264 3514 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e5b0f5ca-3437-4fda-aea0-800c870fc242-hubble-tls\") pod \"e5b0f5ca-3437-4fda-aea0-800c870fc242\" (UID: \"e5b0f5ca-3437-4fda-aea0-800c870fc242\") " Feb 13 19:05:04.450972 kubelet[3514]: I0213 19:05:04.450297 3514 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e5b0f5ca-3437-4fda-aea0-800c870fc242-cilium-run\") pod \"e5b0f5ca-3437-4fda-aea0-800c870fc242\" (UID: \"e5b0f5ca-3437-4fda-aea0-800c870fc242\") " Feb 13 19:05:04.450972 kubelet[3514]: I0213 19:05:04.450328 3514 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e5b0f5ca-3437-4fda-aea0-800c870fc242-lib-modules\") pod \"e5b0f5ca-3437-4fda-aea0-800c870fc242\" (UID: \"e5b0f5ca-3437-4fda-aea0-800c870fc242\") " Feb 13 19:05:04.450972 kubelet[3514]: I0213 19:05:04.450363 3514 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e5b0f5ca-3437-4fda-aea0-800c870fc242-host-proc-sys-net\") pod \"e5b0f5ca-3437-4fda-aea0-800c870fc242\" (UID: \"e5b0f5ca-3437-4fda-aea0-800c870fc242\") " Feb 13 19:05:04.450972 kubelet[3514]: I0213 19:05:04.450397 3514 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e5b0f5ca-3437-4fda-aea0-800c870fc242-cni-path\") pod \"e5b0f5ca-3437-4fda-aea0-800c870fc242\" (UID: \"e5b0f5ca-3437-4fda-aea0-800c870fc242\") " Feb 13 19:05:04.450972 kubelet[3514]: I0213 19:05:04.450430 3514 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e5b0f5ca-3437-4fda-aea0-800c870fc242-cilium-cgroup\") pod \"e5b0f5ca-3437-4fda-aea0-800c870fc242\" (UID: \"e5b0f5ca-3437-4fda-aea0-800c870fc242\") " Feb 13 19:05:04.451281 kubelet[3514]: I0213 19:05:04.450463 3514 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e5b0f5ca-3437-4fda-aea0-800c870fc242-etc-cni-netd\") pod \"e5b0f5ca-3437-4fda-aea0-800c870fc242\" (UID: \"e5b0f5ca-3437-4fda-aea0-800c870fc242\") " Feb 13 19:05:04.451281 kubelet[3514]: I0213 19:05:04.450531 3514 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gblfj\" (UniqueName: 
\"kubernetes.io/projected/0d0ec3f6-0dd1-4d52-b4c4-ce2056d6e24b-kube-api-access-gblfj\") pod \"0d0ec3f6-0dd1-4d52-b4c4-ce2056d6e24b\" (UID: \"0d0ec3f6-0dd1-4d52-b4c4-ce2056d6e24b\") " Feb 13 19:05:04.451281 kubelet[3514]: I0213 19:05:04.450577 3514 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b74wd\" (UniqueName: \"kubernetes.io/projected/e5b0f5ca-3437-4fda-aea0-800c870fc242-kube-api-access-b74wd\") pod \"e5b0f5ca-3437-4fda-aea0-800c870fc242\" (UID: \"e5b0f5ca-3437-4fda-aea0-800c870fc242\") " Feb 13 19:05:04.454550 kubelet[3514]: I0213 19:05:04.453543 3514 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5b0f5ca-3437-4fda-aea0-800c870fc242-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e5b0f5ca-3437-4fda-aea0-800c870fc242" (UID: "e5b0f5ca-3437-4fda-aea0-800c870fc242"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:05:04.454550 kubelet[3514]: I0213 19:05:04.453665 3514 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5b0f5ca-3437-4fda-aea0-800c870fc242-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e5b0f5ca-3437-4fda-aea0-800c870fc242" (UID: "e5b0f5ca-3437-4fda-aea0-800c870fc242"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:05:04.454550 kubelet[3514]: I0213 19:05:04.453707 3514 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5b0f5ca-3437-4fda-aea0-800c870fc242-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e5b0f5ca-3437-4fda-aea0-800c870fc242" (UID: "e5b0f5ca-3437-4fda-aea0-800c870fc242"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:05:04.454550 kubelet[3514]: I0213 19:05:04.453746 3514 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5b0f5ca-3437-4fda-aea0-800c870fc242-cni-path" (OuterVolumeSpecName: "cni-path") pod "e5b0f5ca-3437-4fda-aea0-800c870fc242" (UID: "e5b0f5ca-3437-4fda-aea0-800c870fc242"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:05:04.454550 kubelet[3514]: I0213 19:05:04.453783 3514 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5b0f5ca-3437-4fda-aea0-800c870fc242-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e5b0f5ca-3437-4fda-aea0-800c870fc242" (UID: "e5b0f5ca-3437-4fda-aea0-800c870fc242"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:05:04.454914 kubelet[3514]: I0213 19:05:04.453820 3514 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5b0f5ca-3437-4fda-aea0-800c870fc242-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e5b0f5ca-3437-4fda-aea0-800c870fc242" (UID: "e5b0f5ca-3437-4fda-aea0-800c870fc242"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:05:04.455144 kubelet[3514]: I0213 19:05:04.455097 3514 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5b0f5ca-3437-4fda-aea0-800c870fc242-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e5b0f5ca-3437-4fda-aea0-800c870fc242" (UID: "e5b0f5ca-3437-4fda-aea0-800c870fc242"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:05:04.461002 kubelet[3514]: I0213 19:05:04.460950 3514 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5b0f5ca-3437-4fda-aea0-800c870fc242-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e5b0f5ca-3437-4fda-aea0-800c870fc242" (UID: "e5b0f5ca-3437-4fda-aea0-800c870fc242"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 13 19:05:04.463579 kubelet[3514]: I0213 19:05:04.461282 3514 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5b0f5ca-3437-4fda-aea0-800c870fc242-kube-api-access-b74wd" (OuterVolumeSpecName: "kube-api-access-b74wd") pod "e5b0f5ca-3437-4fda-aea0-800c870fc242" (UID: "e5b0f5ca-3437-4fda-aea0-800c870fc242"). InnerVolumeSpecName "kube-api-access-b74wd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 13 19:05:04.463901 kubelet[3514]: I0213 19:05:04.462736 3514 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5b0f5ca-3437-4fda-aea0-800c870fc242-hostproc" (OuterVolumeSpecName: "hostproc") pod "e5b0f5ca-3437-4fda-aea0-800c870fc242" (UID: "e5b0f5ca-3437-4fda-aea0-800c870fc242"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:05:04.463987 kubelet[3514]: I0213 19:05:04.462786 3514 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5b0f5ca-3437-4fda-aea0-800c870fc242-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e5b0f5ca-3437-4fda-aea0-800c870fc242" (UID: "e5b0f5ca-3437-4fda-aea0-800c870fc242"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:05:04.463987 kubelet[3514]: I0213 19:05:04.462822 3514 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5b0f5ca-3437-4fda-aea0-800c870fc242-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e5b0f5ca-3437-4fda-aea0-800c870fc242" (UID: "e5b0f5ca-3437-4fda-aea0-800c870fc242"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:05:04.465627 kubelet[3514]: I0213 19:05:04.465559 3514 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5b0f5ca-3437-4fda-aea0-800c870fc242-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e5b0f5ca-3437-4fda-aea0-800c870fc242" (UID: "e5b0f5ca-3437-4fda-aea0-800c870fc242"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 13 19:05:04.467896 kubelet[3514]: I0213 19:05:04.467826 3514 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5b0f5ca-3437-4fda-aea0-800c870fc242-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e5b0f5ca-3437-4fda-aea0-800c870fc242" (UID: "e5b0f5ca-3437-4fda-aea0-800c870fc242"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 13 19:05:04.468043 kubelet[3514]: I0213 19:05:04.467837 3514 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d0ec3f6-0dd1-4d52-b4c4-ce2056d6e24b-kube-api-access-gblfj" (OuterVolumeSpecName: "kube-api-access-gblfj") pod "0d0ec3f6-0dd1-4d52-b4c4-ce2056d6e24b" (UID: "0d0ec3f6-0dd1-4d52-b4c4-ce2056d6e24b"). InnerVolumeSpecName "kube-api-access-gblfj". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 13 19:05:04.471478 kubelet[3514]: I0213 19:05:04.471394 3514 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d0ec3f6-0dd1-4d52-b4c4-ce2056d6e24b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0d0ec3f6-0dd1-4d52-b4c4-ce2056d6e24b" (UID: "0d0ec3f6-0dd1-4d52-b4c4-ce2056d6e24b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 13 19:05:04.551578 kubelet[3514]: I0213 19:05:04.551487 3514 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b74wd\" (UniqueName: \"kubernetes.io/projected/e5b0f5ca-3437-4fda-aea0-800c870fc242-kube-api-access-b74wd\") on node \"ip-172-31-18-242\" DevicePath \"\"" Feb 13 19:05:04.551578 kubelet[3514]: I0213 19:05:04.551578 3514 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e5b0f5ca-3437-4fda-aea0-800c870fc242-xtables-lock\") on node \"ip-172-31-18-242\" DevicePath \"\"" Feb 13 19:05:04.551792 kubelet[3514]: I0213 19:05:04.551605 3514 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e5b0f5ca-3437-4fda-aea0-800c870fc242-cilium-config-path\") on node \"ip-172-31-18-242\" DevicePath \"\"" Feb 13 19:05:04.551792 kubelet[3514]: I0213 19:05:04.551633 3514 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e5b0f5ca-3437-4fda-aea0-800c870fc242-bpf-maps\") on node \"ip-172-31-18-242\" DevicePath \"\"" Feb 13 19:05:04.551792 kubelet[3514]: I0213 19:05:04.551655 3514 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0d0ec3f6-0dd1-4d52-b4c4-ce2056d6e24b-cilium-config-path\") on node \"ip-172-31-18-242\" DevicePath \"\"" Feb 13 19:05:04.551792 kubelet[3514]: I0213 19:05:04.551678 3514 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e5b0f5ca-3437-4fda-aea0-800c870fc242-clustermesh-secrets\") on node \"ip-172-31-18-242\" DevicePath \"\"" Feb 13 19:05:04.551792 kubelet[3514]: I0213 19:05:04.551700 3514 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e5b0f5ca-3437-4fda-aea0-800c870fc242-hostproc\") on node \"ip-172-31-18-242\" DevicePath \"\"" Feb 13 19:05:04.551792 kubelet[3514]: I0213 19:05:04.551719 3514 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e5b0f5ca-3437-4fda-aea0-800c870fc242-hubble-tls\") on node \"ip-172-31-18-242\" DevicePath \"\"" Feb 13 19:05:04.551792 kubelet[3514]: I0213 19:05:04.551739 3514 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e5b0f5ca-3437-4fda-aea0-800c870fc242-cilium-run\") on node \"ip-172-31-18-242\" DevicePath \"\"" Feb 13 19:05:04.551792 kubelet[3514]: I0213 19:05:04.551759 3514 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e5b0f5ca-3437-4fda-aea0-800c870fc242-lib-modules\") on node \"ip-172-31-18-242\" DevicePath \"\"" Feb 13 19:05:04.552187 kubelet[3514]: I0213 19:05:04.551779 3514 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e5b0f5ca-3437-4fda-aea0-800c870fc242-host-proc-sys-net\") on node \"ip-172-31-18-242\" DevicePath 
\"\"" Feb 13 19:05:04.552187 kubelet[3514]: I0213 19:05:04.551803 3514 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e5b0f5ca-3437-4fda-aea0-800c870fc242-host-proc-sys-kernel\") on node \"ip-172-31-18-242\" DevicePath \"\"" Feb 13 19:05:04.552187 kubelet[3514]: I0213 19:05:04.551823 3514 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e5b0f5ca-3437-4fda-aea0-800c870fc242-cni-path\") on node \"ip-172-31-18-242\" DevicePath \"\"" Feb 13 19:05:04.552187 kubelet[3514]: I0213 19:05:04.551843 3514 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e5b0f5ca-3437-4fda-aea0-800c870fc242-cilium-cgroup\") on node \"ip-172-31-18-242\" DevicePath \"\"" Feb 13 19:05:04.552187 kubelet[3514]: I0213 19:05:04.551863 3514 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e5b0f5ca-3437-4fda-aea0-800c870fc242-etc-cni-netd\") on node \"ip-172-31-18-242\" DevicePath \"\"" Feb 13 19:05:04.552187 kubelet[3514]: I0213 19:05:04.551905 3514 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gblfj\" (UniqueName: \"kubernetes.io/projected/0d0ec3f6-0dd1-4d52-b4c4-ce2056d6e24b-kube-api-access-gblfj\") on node \"ip-172-31-18-242\" DevicePath \"\"" Feb 13 19:05:04.799608 kubelet[3514]: I0213 19:05:04.799375 3514 scope.go:117] "RemoveContainer" containerID="c34f440694a7f692b45ffc89bdb7523796898b2afe4874057414be23bd1bceb6" Feb 13 19:05:04.804952 containerd[1954]: time="2025-02-13T19:05:04.804885788Z" level=info msg="RemoveContainer for \"c34f440694a7f692b45ffc89bdb7523796898b2afe4874057414be23bd1bceb6\"" Feb 13 19:05:04.815675 systemd[1]: Removed slice kubepods-besteffort-pod0d0ec3f6_0dd1_4d52_b4c4_ce2056d6e24b.slice - libcontainer container kubepods-besteffort-pod0d0ec3f6_0dd1_4d52_b4c4_ce2056d6e24b.slice. 
Feb 13 19:05:04.822766 containerd[1954]: time="2025-02-13T19:05:04.822704816Z" level=info msg="RemoveContainer for \"c34f440694a7f692b45ffc89bdb7523796898b2afe4874057414be23bd1bceb6\" returns successfully" Feb 13 19:05:04.825606 kubelet[3514]: I0213 19:05:04.824416 3514 scope.go:117] "RemoveContainer" containerID="c34f440694a7f692b45ffc89bdb7523796898b2afe4874057414be23bd1bceb6" Feb 13 19:05:04.827326 containerd[1954]: time="2025-02-13T19:05:04.827261456Z" level=error msg="ContainerStatus for \"c34f440694a7f692b45ffc89bdb7523796898b2afe4874057414be23bd1bceb6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c34f440694a7f692b45ffc89bdb7523796898b2afe4874057414be23bd1bceb6\": not found" Feb 13 19:05:04.829335 kubelet[3514]: E0213 19:05:04.829258 3514 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c34f440694a7f692b45ffc89bdb7523796898b2afe4874057414be23bd1bceb6\": not found" containerID="c34f440694a7f692b45ffc89bdb7523796898b2afe4874057414be23bd1bceb6" Feb 13 19:05:04.829858 kubelet[3514]: I0213 19:05:04.829610 3514 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c34f440694a7f692b45ffc89bdb7523796898b2afe4874057414be23bd1bceb6"} err="failed to get container status \"c34f440694a7f692b45ffc89bdb7523796898b2afe4874057414be23bd1bceb6\": rpc error: code = NotFound desc = an error occurred when try to find container \"c34f440694a7f692b45ffc89bdb7523796898b2afe4874057414be23bd1bceb6\": not found" Feb 13 19:05:04.830913 kubelet[3514]: I0213 19:05:04.830231 3514 scope.go:117] "RemoveContainer" containerID="ec6b220468da37771788d554f9a5c6dc390642a47ca4da00726057e5adb9f24b" Feb 13 19:05:04.830252 systemd[1]: Removed slice kubepods-burstable-pode5b0f5ca_3437_4fda_aea0_800c870fc242.slice - libcontainer container kubepods-burstable-pode5b0f5ca_3437_4fda_aea0_800c870fc242.slice. Feb 13 19:05:04.830527 systemd[1]: kubepods-burstable-pode5b0f5ca_3437_4fda_aea0_800c870fc242.slice: Consumed 14.418s CPU time, 125.7M memory peak, 144K read from disk, 12.9M written to disk. 
Feb 13 19:05:04.836489 containerd[1954]: time="2025-02-13T19:05:04.836003877Z" level=info msg="RemoveContainer for \"ec6b220468da37771788d554f9a5c6dc390642a47ca4da00726057e5adb9f24b\"" Feb 13 19:05:04.853084 containerd[1954]: time="2025-02-13T19:05:04.852916965Z" level=info msg="RemoveContainer for \"ec6b220468da37771788d554f9a5c6dc390642a47ca4da00726057e5adb9f24b\" returns successfully" Feb 13 19:05:04.854291 kubelet[3514]: I0213 19:05:04.854066 3514 scope.go:117] "RemoveContainer" containerID="02ca31979781439cb1ebdef85cb9cc68d9337844496f583426265acb14d751e1" Feb 13 19:05:04.860478 containerd[1954]: time="2025-02-13T19:05:04.860407029Z" level=info msg="RemoveContainer for \"02ca31979781439cb1ebdef85cb9cc68d9337844496f583426265acb14d751e1\"" Feb 13 19:05:04.873138 containerd[1954]: time="2025-02-13T19:05:04.872813325Z" level=info msg="RemoveContainer for \"02ca31979781439cb1ebdef85cb9cc68d9337844496f583426265acb14d751e1\" returns successfully" Feb 13 19:05:04.874947 kubelet[3514]: I0213 19:05:04.874088 3514 scope.go:117] "RemoveContainer" containerID="0e04be7c32f998f663f109efc4a1f6aa715c77affd9a63041f3c232f7b9dc992" Feb 13 19:05:04.879816 containerd[1954]: time="2025-02-13T19:05:04.879754545Z" level=info msg="RemoveContainer for \"0e04be7c32f998f663f109efc4a1f6aa715c77affd9a63041f3c232f7b9dc992\"" Feb 13 19:05:04.888264 containerd[1954]: time="2025-02-13T19:05:04.888189117Z" level=info msg="RemoveContainer for \"0e04be7c32f998f663f109efc4a1f6aa715c77affd9a63041f3c232f7b9dc992\" returns successfully" Feb 13 19:05:04.888596 kubelet[3514]: I0213 19:05:04.888538 3514 scope.go:117] "RemoveContainer" containerID="a406f243f1041a8b9e2dcf8a4538dbbc880070daca1081d286f2dc4382721450" Feb 13 19:05:04.890345 containerd[1954]: time="2025-02-13T19:05:04.890289861Z" level=info msg="RemoveContainer for \"a406f243f1041a8b9e2dcf8a4538dbbc880070daca1081d286f2dc4382721450\"" Feb 13 19:05:04.896391 containerd[1954]: time="2025-02-13T19:05:04.896342505Z" level=info msg="RemoveContainer for \"a406f243f1041a8b9e2dcf8a4538dbbc880070daca1081d286f2dc4382721450\" returns successfully" Feb 13 19:05:04.896870 kubelet[3514]: I0213 19:05:04.896840 3514 scope.go:117] "RemoveContainer" containerID="85d3eca1da98a2e6faa11d5f9c9f9fd87dbcf850e7c3b318f9822b4e33216cec" Feb 13 19:05:04.903585 containerd[1954]: time="2025-02-13T19:05:04.901482057Z" level=info msg="RemoveContainer for \"85d3eca1da98a2e6faa11d5f9c9f9fd87dbcf850e7c3b318f9822b4e33216cec\"" Feb 13 19:05:04.912316 containerd[1954]: time="2025-02-13T19:05:04.912264813Z" level=info msg="RemoveContainer for \"85d3eca1da98a2e6faa11d5f9c9f9fd87dbcf850e7c3b318f9822b4e33216cec\" returns successfully" Feb 13 19:05:04.912914 kubelet[3514]: I0213 19:05:04.912778 3514 scope.go:117] "RemoveContainer" containerID="ec6b220468da37771788d554f9a5c6dc390642a47ca4da00726057e5adb9f24b" Feb 13 19:05:04.913436 containerd[1954]: time="2025-02-13T19:05:04.913336761Z" level=error msg="ContainerStatus for \"ec6b220468da37771788d554f9a5c6dc390642a47ca4da00726057e5adb9f24b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ec6b220468da37771788d554f9a5c6dc390642a47ca4da00726057e5adb9f24b\": not found" Feb 13 19:05:04.913884 kubelet[3514]: E0213 19:05:04.913670 3514 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ec6b220468da37771788d554f9a5c6dc390642a47ca4da00726057e5adb9f24b\": not found" 
containerID="ec6b220468da37771788d554f9a5c6dc390642a47ca4da00726057e5adb9f24b" Feb 13 19:05:04.913884 kubelet[3514]: I0213 19:05:04.913719 3514 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ec6b220468da37771788d554f9a5c6dc390642a47ca4da00726057e5adb9f24b"} err="failed to get container status \"ec6b220468da37771788d554f9a5c6dc390642a47ca4da00726057e5adb9f24b\": rpc error: code = NotFound desc = an error occurred when try to find container \"ec6b220468da37771788d554f9a5c6dc390642a47ca4da00726057e5adb9f24b\": not found" Feb 13 19:05:04.913884 kubelet[3514]: I0213 19:05:04.913755 3514 scope.go:117] "RemoveContainer" containerID="02ca31979781439cb1ebdef85cb9cc68d9337844496f583426265acb14d751e1" Feb 13 19:05:04.914308 containerd[1954]: time="2025-02-13T19:05:04.914135181Z" level=error msg="ContainerStatus for \"02ca31979781439cb1ebdef85cb9cc68d9337844496f583426265acb14d751e1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"02ca31979781439cb1ebdef85cb9cc68d9337844496f583426265acb14d751e1\": not found" Feb 13 19:05:04.915052 containerd[1954]: time="2025-02-13T19:05:04.914902737Z" level=error msg="ContainerStatus for \"0e04be7c32f998f663f109efc4a1f6aa715c77affd9a63041f3c232f7b9dc992\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0e04be7c32f998f663f109efc4a1f6aa715c77affd9a63041f3c232f7b9dc992\": not found" Feb 13 19:05:04.915155 kubelet[3514]: E0213 19:05:04.914476 3514 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"02ca31979781439cb1ebdef85cb9cc68d9337844496f583426265acb14d751e1\": not found" containerID="02ca31979781439cb1ebdef85cb9cc68d9337844496f583426265acb14d751e1" Feb 13 19:05:04.915155 kubelet[3514]: I0213 19:05:04.914553 3514 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"02ca31979781439cb1ebdef85cb9cc68d9337844496f583426265acb14d751e1"} err="failed to get container status \"02ca31979781439cb1ebdef85cb9cc68d9337844496f583426265acb14d751e1\": rpc error: code = NotFound desc = an error occurred when try to find container \"02ca31979781439cb1ebdef85cb9cc68d9337844496f583426265acb14d751e1\": not found" Feb 13 19:05:04.915155 kubelet[3514]: I0213 19:05:04.914586 3514 scope.go:117] "RemoveContainer" containerID="0e04be7c32f998f663f109efc4a1f6aa715c77affd9a63041f3c232f7b9dc992" Feb 13 19:05:04.915155 kubelet[3514]: E0213 19:05:04.915132 3514 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0e04be7c32f998f663f109efc4a1f6aa715c77affd9a63041f3c232f7b9dc992\": not found" containerID="0e04be7c32f998f663f109efc4a1f6aa715c77affd9a63041f3c232f7b9dc992" Feb 13 19:05:04.915382 kubelet[3514]: I0213 19:05:04.915170 3514 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0e04be7c32f998f663f109efc4a1f6aa715c77affd9a63041f3c232f7b9dc992"} err="failed to get container status \"0e04be7c32f998f663f109efc4a1f6aa715c77affd9a63041f3c232f7b9dc992\": rpc error: code = NotFound desc = an error occurred when try to find container \"0e04be7c32f998f663f109efc4a1f6aa715c77affd9a63041f3c232f7b9dc992\": not found" Feb 13 19:05:04.915382 kubelet[3514]: I0213 19:05:04.915205 3514 scope.go:117] "RemoveContainer" 
containerID="a406f243f1041a8b9e2dcf8a4538dbbc880070daca1081d286f2dc4382721450" Feb 13 19:05:04.915731 containerd[1954]: time="2025-02-13T19:05:04.915666045Z" level=error msg="ContainerStatus for \"a406f243f1041a8b9e2dcf8a4538dbbc880070daca1081d286f2dc4382721450\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a406f243f1041a8b9e2dcf8a4538dbbc880070daca1081d286f2dc4382721450\": not found" Feb 13 19:05:04.916342 kubelet[3514]: E0213 19:05:04.916296 3514 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a406f243f1041a8b9e2dcf8a4538dbbc880070daca1081d286f2dc4382721450\": not found" containerID="a406f243f1041a8b9e2dcf8a4538dbbc880070daca1081d286f2dc4382721450" Feb 13 19:05:04.916445 kubelet[3514]: I0213 19:05:04.916353 3514 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a406f243f1041a8b9e2dcf8a4538dbbc880070daca1081d286f2dc4382721450"} err="failed to get container status \"a406f243f1041a8b9e2dcf8a4538dbbc880070daca1081d286f2dc4382721450\": rpc error: code = NotFound desc = an error occurred when try to find container \"a406f243f1041a8b9e2dcf8a4538dbbc880070daca1081d286f2dc4382721450\": not found" Feb 13 19:05:04.916445 kubelet[3514]: I0213 19:05:04.916394 3514 scope.go:117] "RemoveContainer" containerID="85d3eca1da98a2e6faa11d5f9c9f9fd87dbcf850e7c3b318f9822b4e33216cec" Feb 13 19:05:04.917030 containerd[1954]: time="2025-02-13T19:05:04.916945173Z" level=error msg="ContainerStatus for \"85d3eca1da98a2e6faa11d5f9c9f9fd87dbcf850e7c3b318f9822b4e33216cec\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"85d3eca1da98a2e6faa11d5f9c9f9fd87dbcf850e7c3b318f9822b4e33216cec\": not found" Feb 13 19:05:04.917490 kubelet[3514]: E0213 19:05:04.917150 3514 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"85d3eca1da98a2e6faa11d5f9c9f9fd87dbcf850e7c3b318f9822b4e33216cec\": not found" containerID="85d3eca1da98a2e6faa11d5f9c9f9fd87dbcf850e7c3b318f9822b4e33216cec" Feb 13 19:05:04.917490 kubelet[3514]: I0213 19:05:04.917191 3514 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"85d3eca1da98a2e6faa11d5f9c9f9fd87dbcf850e7c3b318f9822b4e33216cec"} err="failed to get container status \"85d3eca1da98a2e6faa11d5f9c9f9fd87dbcf850e7c3b318f9822b4e33216cec\": rpc error: code = NotFound desc = an error occurred when try to find container \"85d3eca1da98a2e6faa11d5f9c9f9fd87dbcf850e7c3b318f9822b4e33216cec\": not found" Feb 13 19:05:05.010548 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-870e4888ec287653a690ac276677dd546169f220eb225647230f0412d93190d4-rootfs.mount: Deactivated successfully. Feb 13 19:05:05.010721 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e1bf4fae499833cced2a99c104deb1d04571536c2e1c740c22c29468f9ac546b-rootfs.mount: Deactivated successfully. Feb 13 19:05:05.010863 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e1bf4fae499833cced2a99c104deb1d04571536c2e1c740c22c29468f9ac546b-shm.mount: Deactivated successfully. Feb 13 19:05:05.011004 systemd[1]: var-lib-kubelet-pods-0d0ec3f6\x2d0dd1\x2d4d52\x2db4c4\x2dce2056d6e24b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgblfj.mount: Deactivated successfully. 
Feb 13 19:05:05.011138 systemd[1]: var-lib-kubelet-pods-e5b0f5ca\x2d3437\x2d4fda\x2daea0\x2d800c870fc242-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2db74wd.mount: Deactivated successfully.
Feb 13 19:05:05.011277 systemd[1]: var-lib-kubelet-pods-e5b0f5ca\x2d3437\x2d4fda\x2daea0\x2d800c870fc242-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 13 19:05:05.011431 systemd[1]: var-lib-kubelet-pods-e5b0f5ca\x2d3437\x2d4fda\x2daea0\x2d800c870fc242-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 13 19:05:05.946426 sshd[5160]: Connection closed by 139.178.89.65 port 60016
Feb 13 19:05:05.947660 sshd-session[5158]: pam_unix(sshd:session): session closed for user core
Feb 13 19:05:05.953420 systemd[1]: sshd@27-172.31.18.242:22-139.178.89.65:60016.service: Deactivated successfully.
Feb 13 19:05:05.954135 systemd-logind[1933]: Session 28 logged out. Waiting for processes to exit.
Feb 13 19:05:05.958042 systemd[1]: session-28.scope: Deactivated successfully.
Feb 13 19:05:05.958586 systemd[1]: session-28.scope: Consumed 1.716s CPU time, 23.3M memory peak.
Feb 13 19:05:05.962088 systemd-logind[1933]: Removed session 28.
Feb 13 19:05:05.988973 systemd[1]: Started sshd@28-172.31.18.242:22-139.178.89.65:42912.service - OpenSSH per-connection server daemon (139.178.89.65:42912).
Feb 13 19:05:06.172416 sshd[5324]: Accepted publickey for core from 139.178.89.65 port 42912 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg
Feb 13 19:05:06.174918 sshd-session[5324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:05:06.183789 systemd-logind[1933]: New session 29 of user core.
Feb 13 19:05:06.193051 systemd[1]: Started session-29.scope - Session 29 of User core.
Feb 13 19:05:06.314561 kubelet[3514]: I0213 19:05:06.314390 3514 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d0ec3f6-0dd1-4d52-b4c4-ce2056d6e24b" path="/var/lib/kubelet/pods/0d0ec3f6-0dd1-4d52-b4c4-ce2056d6e24b/volumes"
Feb 13 19:05:06.317528 kubelet[3514]: I0213 19:05:06.316867 3514 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5b0f5ca-3437-4fda-aea0-800c870fc242" path="/var/lib/kubelet/pods/e5b0f5ca-3437-4fda-aea0-800c870fc242/volumes"
Feb 13 19:05:06.378546 ntpd[1927]: Deleting interface #12 lxc_health, fe80::2c6b:50ff:feb5:9497%8#123, interface stats: received=0, sent=0, dropped=0, active_time=87 secs
Feb 13 19:05:06.379246 ntpd[1927]: 13 Feb 19:05:06 ntpd[1927]: Deleting interface #12 lxc_health, fe80::2c6b:50ff:feb5:9497%8#123, interface stats: received=0, sent=0, dropped=0, active_time=87 secs
Feb 13 19:05:07.509732 sshd[5326]: Connection closed by 139.178.89.65 port 42912
Feb 13 19:05:07.514374 sshd-session[5324]: pam_unix(sshd:session): session closed for user core
Feb 13 19:05:07.524884 systemd-logind[1933]: Session 29 logged out. Waiting for processes to exit.
Feb 13 19:05:07.527641 systemd[1]: sshd@28-172.31.18.242:22-139.178.89.65:42912.service: Deactivated successfully.
Feb 13 19:05:07.539579 kubelet[3514]: I0213 19:05:07.536528 3514 memory_manager.go:355] "RemoveStaleState removing state" podUID="e5b0f5ca-3437-4fda-aea0-800c870fc242" containerName="cilium-agent"
Feb 13 19:05:07.539579 kubelet[3514]: I0213 19:05:07.536619 3514 memory_manager.go:355] "RemoveStaleState removing state" podUID="0d0ec3f6-0dd1-4d52-b4c4-ce2056d6e24b" containerName="cilium-operator"
Feb 13 19:05:07.539198 systemd[1]: session-29.scope: Deactivated successfully.
Feb 13 19:05:07.542998 systemd[1]: session-29.scope: Consumed 1.114s CPU time, 22.2M memory peak.
Feb 13 19:05:07.570513 systemd-logind[1933]: Removed session 29.
Feb 13 19:05:07.576697 systemd[1]: Started sshd@29-172.31.18.242:22-139.178.89.65:42916.service - OpenSSH per-connection server daemon (139.178.89.65:42916).
Feb 13 19:05:07.601883 systemd[1]: Created slice kubepods-burstable-poddd904aff_6b48_4176_8099_4b70c7e9088a.slice - libcontainer container kubepods-burstable-poddd904aff_6b48_4176_8099_4b70c7e9088a.slice.
Feb 13 19:05:07.672882 kubelet[3514]: I0213 19:05:07.672629 3514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dd904aff-6b48-4176-8099-4b70c7e9088a-hubble-tls\") pod \"cilium-gmlmm\" (UID: \"dd904aff-6b48-4176-8099-4b70c7e9088a\") " pod="kube-system/cilium-gmlmm"
Feb 13 19:05:07.674885 kubelet[3514]: I0213 19:05:07.673736 3514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dd904aff-6b48-4176-8099-4b70c7e9088a-cilium-cgroup\") pod \"cilium-gmlmm\" (UID: \"dd904aff-6b48-4176-8099-4b70c7e9088a\") " pod="kube-system/cilium-gmlmm"
Feb 13 19:05:07.674885 kubelet[3514]: I0213 19:05:07.673804 3514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dd904aff-6b48-4176-8099-4b70c7e9088a-cilium-config-path\") pod \"cilium-gmlmm\" (UID: \"dd904aff-6b48-4176-8099-4b70c7e9088a\") " pod="kube-system/cilium-gmlmm"
Feb 13 19:05:07.674885 kubelet[3514]: I0213 19:05:07.673845 3514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dd904aff-6b48-4176-8099-4b70c7e9088a-host-proc-sys-net\") pod \"cilium-gmlmm\" (UID: \"dd904aff-6b48-4176-8099-4b70c7e9088a\") " pod="kube-system/cilium-gmlmm"
Feb 13 19:05:07.674885 kubelet[3514]: I0213 19:05:07.673881 3514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4nft\" (UniqueName: \"kubernetes.io/projected/dd904aff-6b48-4176-8099-4b70c7e9088a-kube-api-access-b4nft\") pod \"cilium-gmlmm\" (UID: \"dd904aff-6b48-4176-8099-4b70c7e9088a\") " pod="kube-system/cilium-gmlmm"
Feb 13 19:05:07.674885 kubelet[3514]: I0213 19:05:07.673917 3514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dd904aff-6b48-4176-8099-4b70c7e9088a-cilium-run\") pod \"cilium-gmlmm\" (UID: \"dd904aff-6b48-4176-8099-4b70c7e9088a\") " pod="kube-system/cilium-gmlmm"
Feb 13 19:05:07.674885 kubelet[3514]: I0213 19:05:07.673955 3514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dd904aff-6b48-4176-8099-4b70c7e9088a-bpf-maps\") pod \"cilium-gmlmm\" (UID: \"dd904aff-6b48-4176-8099-4b70c7e9088a\") " pod="kube-system/cilium-gmlmm"
Feb 13 19:05:07.675275 kubelet[3514]: I0213 19:05:07.673993 3514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dd904aff-6b48-4176-8099-4b70c7e9088a-hostproc\") pod \"cilium-gmlmm\" (UID: \"dd904aff-6b48-4176-8099-4b70c7e9088a\") " pod="kube-system/cilium-gmlmm"
Feb 13 19:05:07.675275 kubelet[3514]: I0213 19:05:07.674033 3514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dd904aff-6b48-4176-8099-4b70c7e9088a-cni-path\") pod \"cilium-gmlmm\" (UID: \"dd904aff-6b48-4176-8099-4b70c7e9088a\") " pod="kube-system/cilium-gmlmm"
Feb 13 19:05:07.675275 kubelet[3514]: I0213 19:05:07.674073 3514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/dd904aff-6b48-4176-8099-4b70c7e9088a-cilium-ipsec-secrets\") pod \"cilium-gmlmm\" (UID: \"dd904aff-6b48-4176-8099-4b70c7e9088a\") " pod="kube-system/cilium-gmlmm"
Feb 13 19:05:07.675275 kubelet[3514]: I0213 19:05:07.674114 3514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd904aff-6b48-4176-8099-4b70c7e9088a-lib-modules\") pod \"cilium-gmlmm\" (UID: \"dd904aff-6b48-4176-8099-4b70c7e9088a\") " pod="kube-system/cilium-gmlmm"
Feb 13 19:05:07.675275 kubelet[3514]: I0213 19:05:07.674151 3514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dd904aff-6b48-4176-8099-4b70c7e9088a-xtables-lock\") pod \"cilium-gmlmm\" (UID: \"dd904aff-6b48-4176-8099-4b70c7e9088a\") " pod="kube-system/cilium-gmlmm"
Feb 13 19:05:07.675275 kubelet[3514]: I0213 19:05:07.674189 3514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dd904aff-6b48-4176-8099-4b70c7e9088a-etc-cni-netd\") pod \"cilium-gmlmm\" (UID: \"dd904aff-6b48-4176-8099-4b70c7e9088a\") " pod="kube-system/cilium-gmlmm"
Feb 13 19:05:07.676790 kubelet[3514]: I0213 19:05:07.674224 3514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dd904aff-6b48-4176-8099-4b70c7e9088a-clustermesh-secrets\") pod \"cilium-gmlmm\" (UID: \"dd904aff-6b48-4176-8099-4b70c7e9088a\") " pod="kube-system/cilium-gmlmm"
Feb 13 19:05:07.676790 kubelet[3514]: I0213 19:05:07.674273 3514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dd904aff-6b48-4176-8099-4b70c7e9088a-host-proc-sys-kernel\") pod \"cilium-gmlmm\" (UID: \"dd904aff-6b48-4176-8099-4b70c7e9088a\") " pod="kube-system/cilium-gmlmm"
Feb 13 19:05:07.812633 sshd[5335]: Accepted publickey for core from 139.178.89.65 port 42916 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg
Feb 13 19:05:07.822164 sshd-session[5335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:05:07.839179 systemd-logind[1933]: New session 30 of user core.
Feb 13 19:05:07.848760 systemd[1]: Started session-30.scope - Session 30 of User core.
Feb 13 19:05:07.910309 containerd[1954]: time="2025-02-13T19:05:07.910236348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gmlmm,Uid:dd904aff-6b48-4176-8099-4b70c7e9088a,Namespace:kube-system,Attempt:0,}"
Feb 13 19:05:07.957837 containerd[1954]: time="2025-02-13T19:05:07.957633348Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:05:07.957837 containerd[1954]: time="2025-02-13T19:05:07.957717060Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:05:07.957837 containerd[1954]: time="2025-02-13T19:05:07.957754200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:05:07.958391 containerd[1954]: time="2025-02-13T19:05:07.957901128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:05:07.972550 sshd[5342]: Connection closed by 139.178.89.65 port 42916
Feb 13 19:05:07.973344 sshd-session[5335]: pam_unix(sshd:session): session closed for user core
Feb 13 19:05:07.982235 systemd[1]: sshd@29-172.31.18.242:22-139.178.89.65:42916.service: Deactivated successfully.
Feb 13 19:05:07.989928 systemd[1]: session-30.scope: Deactivated successfully.
Feb 13 19:05:07.992062 systemd-logind[1933]: Session 30 logged out. Waiting for processes to exit.
Feb 13 19:05:08.023837 systemd[1]: Started cri-containerd-8892dc5ca618bacf52525c39efd5c3d7a531b09bb76e94ceec26fece4712cc0b.scope - libcontainer container 8892dc5ca618bacf52525c39efd5c3d7a531b09bb76e94ceec26fece4712cc0b.
Feb 13 19:05:08.028453 systemd[1]: Started sshd@30-172.31.18.242:22-139.178.89.65:42928.service - OpenSSH per-connection server daemon (139.178.89.65:42928).
Feb 13 19:05:08.031354 systemd-logind[1933]: Removed session 30.
Feb 13 19:05:08.089386 containerd[1954]: time="2025-02-13T19:05:08.088538097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gmlmm,Uid:dd904aff-6b48-4176-8099-4b70c7e9088a,Namespace:kube-system,Attempt:0,} returns sandbox id \"8892dc5ca618bacf52525c39efd5c3d7a531b09bb76e94ceec26fece4712cc0b\""
Feb 13 19:05:08.097079 containerd[1954]: time="2025-02-13T19:05:08.097018173Z" level=info msg="CreateContainer within sandbox \"8892dc5ca618bacf52525c39efd5c3d7a531b09bb76e94ceec26fece4712cc0b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 19:05:08.120921 containerd[1954]: time="2025-02-13T19:05:08.120820713Z" level=info msg="CreateContainer within sandbox \"8892dc5ca618bacf52525c39efd5c3d7a531b09bb76e94ceec26fece4712cc0b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4b626e84a872bdf4a58f8524ce2ab2b552d261dbc4af1b80a39513f5e7254213\""
Feb 13 19:05:08.123545 containerd[1954]: time="2025-02-13T19:05:08.121980057Z" level=info msg="StartContainer for \"4b626e84a872bdf4a58f8524ce2ab2b552d261dbc4af1b80a39513f5e7254213\""
Feb 13 19:05:08.171488 systemd[1]: Started cri-containerd-4b626e84a872bdf4a58f8524ce2ab2b552d261dbc4af1b80a39513f5e7254213.scope - libcontainer container 4b626e84a872bdf4a58f8524ce2ab2b552d261dbc4af1b80a39513f5e7254213.
Feb 13 19:05:08.224472 containerd[1954]: time="2025-02-13T19:05:08.224306973Z" level=info msg="StartContainer for \"4b626e84a872bdf4a58f8524ce2ab2b552d261dbc4af1b80a39513f5e7254213\" returns successfully"
Feb 13 19:05:08.242331 systemd[1]: cri-containerd-4b626e84a872bdf4a58f8524ce2ab2b552d261dbc4af1b80a39513f5e7254213.scope: Deactivated successfully.
Feb 13 19:05:08.256315 containerd[1954]: time="2025-02-13T19:05:08.256099845Z" level=info msg="StopPodSandbox for \"e1bf4fae499833cced2a99c104deb1d04571536c2e1c740c22c29468f9ac546b\""
Feb 13 19:05:08.256697 containerd[1954]: time="2025-02-13T19:05:08.256564077Z" level=info msg="TearDown network for sandbox \"e1bf4fae499833cced2a99c104deb1d04571536c2e1c740c22c29468f9ac546b\" successfully"
Feb 13 19:05:08.256697 containerd[1954]: time="2025-02-13T19:05:08.256620993Z" level=info msg="StopPodSandbox for \"e1bf4fae499833cced2a99c104deb1d04571536c2e1c740c22c29468f9ac546b\" returns successfully"
Feb 13 19:05:08.258549 containerd[1954]: time="2025-02-13T19:05:08.257727813Z" level=info msg="RemovePodSandbox for \"e1bf4fae499833cced2a99c104deb1d04571536c2e1c740c22c29468f9ac546b\""
Feb 13 19:05:08.258549 containerd[1954]: time="2025-02-13T19:05:08.257781573Z" level=info msg="Forcibly stopping sandbox \"e1bf4fae499833cced2a99c104deb1d04571536c2e1c740c22c29468f9ac546b\""
Feb 13 19:05:08.258549 containerd[1954]: time="2025-02-13T19:05:08.257888277Z" level=info msg="TearDown network for sandbox \"e1bf4fae499833cced2a99c104deb1d04571536c2e1c740c22c29468f9ac546b\" successfully"
Feb 13 19:05:08.265898 containerd[1954]: time="2025-02-13T19:05:08.265826134Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e1bf4fae499833cced2a99c104deb1d04571536c2e1c740c22c29468f9ac546b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:05:08.266104 containerd[1954]: time="2025-02-13T19:05:08.265912330Z" level=info msg="RemovePodSandbox \"e1bf4fae499833cced2a99c104deb1d04571536c2e1c740c22c29468f9ac546b\" returns successfully"
Feb 13 19:05:08.267774 containerd[1954]: time="2025-02-13T19:05:08.267727018Z" level=info msg="StopPodSandbox for \"870e4888ec287653a690ac276677dd546169f220eb225647230f0412d93190d4\""
Feb 13 19:05:08.268225 containerd[1954]: time="2025-02-13T19:05:08.268165534Z" level=info msg="TearDown network for sandbox \"870e4888ec287653a690ac276677dd546169f220eb225647230f0412d93190d4\" successfully"
Feb 13 19:05:08.268675 sshd[5374]: Accepted publickey for core from 139.178.89.65 port 42928 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg
Feb 13 19:05:08.269137 containerd[1954]: time="2025-02-13T19:05:08.268597690Z" level=info msg="StopPodSandbox for \"870e4888ec287653a690ac276677dd546169f220eb225647230f0412d93190d4\" returns successfully"
Feb 13 19:05:08.271901 sshd-session[5374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:05:08.274065 containerd[1954]: time="2025-02-13T19:05:08.273443242Z" level=info msg="RemovePodSandbox for \"870e4888ec287653a690ac276677dd546169f220eb225647230f0412d93190d4\""
Feb 13 19:05:08.274065 containerd[1954]: time="2025-02-13T19:05:08.273776854Z" level=info msg="Forcibly stopping sandbox \"870e4888ec287653a690ac276677dd546169f220eb225647230f0412d93190d4\""
Feb 13 19:05:08.274065 containerd[1954]: time="2025-02-13T19:05:08.273902830Z" level=info msg="TearDown network for sandbox \"870e4888ec287653a690ac276677dd546169f220eb225647230f0412d93190d4\" successfully"
Feb 13 19:05:08.284113 containerd[1954]: time="2025-02-13T19:05:08.283191466Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"870e4888ec287653a690ac276677dd546169f220eb225647230f0412d93190d4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:05:08.284113 containerd[1954]: time="2025-02-13T19:05:08.283300330Z" level=info msg="RemovePodSandbox \"870e4888ec287653a690ac276677dd546169f220eb225647230f0412d93190d4\" returns successfully"
Feb 13 19:05:08.287554 systemd-logind[1933]: New session 31 of user core.
Feb 13 19:05:08.289817 systemd[1]: Started session-31.scope - Session 31 of User core.
Feb 13 19:05:08.322457 containerd[1954]: time="2025-02-13T19:05:08.322341394Z" level=info msg="shim disconnected" id=4b626e84a872bdf4a58f8524ce2ab2b552d261dbc4af1b80a39513f5e7254213 namespace=k8s.io
Feb 13 19:05:08.322457 containerd[1954]: time="2025-02-13T19:05:08.322443082Z" level=warning msg="cleaning up after shim disconnected" id=4b626e84a872bdf4a58f8524ce2ab2b552d261dbc4af1b80a39513f5e7254213 namespace=k8s.io
Feb 13 19:05:08.322996 containerd[1954]: time="2025-02-13T19:05:08.322465162Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:05:08.344969 containerd[1954]: time="2025-02-13T19:05:08.344752522Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:05:08Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Feb 13 19:05:08.505278 kubelet[3514]: E0213 19:05:08.504936 3514 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 19:05:08.841831 containerd[1954]: time="2025-02-13T19:05:08.841689876Z" level=info msg="CreateContainer within sandbox \"8892dc5ca618bacf52525c39efd5c3d7a531b09bb76e94ceec26fece4712cc0b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 19:05:08.874477 containerd[1954]: time="2025-02-13T19:05:08.874418749Z" level=info msg="CreateContainer within sandbox \"8892dc5ca618bacf52525c39efd5c3d7a531b09bb76e94ceec26fece4712cc0b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e2855d3c92d9f91b0644ddd59a9f23488f1f6e42b3d4411d8bfb3c86eaca39d3\""
Feb 13 19:05:08.875898 containerd[1954]: time="2025-02-13T19:05:08.875788321Z" level=info msg="StartContainer for \"e2855d3c92d9f91b0644ddd59a9f23488f1f6e42b3d4411d8bfb3c86eaca39d3\""
Feb 13 19:05:08.951820 systemd[1]: Started cri-containerd-e2855d3c92d9f91b0644ddd59a9f23488f1f6e42b3d4411d8bfb3c86eaca39d3.scope - libcontainer container e2855d3c92d9f91b0644ddd59a9f23488f1f6e42b3d4411d8bfb3c86eaca39d3.
Feb 13 19:05:09.002176 containerd[1954]: time="2025-02-13T19:05:09.001852509Z" level=info msg="StartContainer for \"e2855d3c92d9f91b0644ddd59a9f23488f1f6e42b3d4411d8bfb3c86eaca39d3\" returns successfully"
Feb 13 19:05:09.014510 systemd[1]: cri-containerd-e2855d3c92d9f91b0644ddd59a9f23488f1f6e42b3d4411d8bfb3c86eaca39d3.scope: Deactivated successfully.
Feb 13 19:05:09.059526 containerd[1954]: time="2025-02-13T19:05:09.058762461Z" level=info msg="shim disconnected" id=e2855d3c92d9f91b0644ddd59a9f23488f1f6e42b3d4411d8bfb3c86eaca39d3 namespace=k8s.io
Feb 13 19:05:09.059526 containerd[1954]: time="2025-02-13T19:05:09.058899261Z" level=warning msg="cleaning up after shim disconnected" id=e2855d3c92d9f91b0644ddd59a9f23488f1f6e42b3d4411d8bfb3c86eaca39d3 namespace=k8s.io
Feb 13 19:05:09.059526 containerd[1954]: time="2025-02-13T19:05:09.058920861Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:05:09.782669 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e2855d3c92d9f91b0644ddd59a9f23488f1f6e42b3d4411d8bfb3c86eaca39d3-rootfs.mount: Deactivated successfully.
Feb 13 19:05:09.849871 containerd[1954]: time="2025-02-13T19:05:09.849806137Z" level=info msg="CreateContainer within sandbox \"8892dc5ca618bacf52525c39efd5c3d7a531b09bb76e94ceec26fece4712cc0b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 19:05:09.888533 containerd[1954]: time="2025-02-13T19:05:09.888433178Z" level=info msg="CreateContainer within sandbox \"8892dc5ca618bacf52525c39efd5c3d7a531b09bb76e94ceec26fece4712cc0b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bc3b5bd22f599e3c309aa126515cd808661534068a1b14758cee9d0fb2eeefd3\""
Feb 13 19:05:09.895021 containerd[1954]: time="2025-02-13T19:05:09.892674398Z" level=info msg="StartContainer for \"bc3b5bd22f599e3c309aa126515cd808661534068a1b14758cee9d0fb2eeefd3\""
Feb 13 19:05:09.957831 systemd[1]: Started cri-containerd-bc3b5bd22f599e3c309aa126515cd808661534068a1b14758cee9d0fb2eeefd3.scope - libcontainer container bc3b5bd22f599e3c309aa126515cd808661534068a1b14758cee9d0fb2eeefd3.
Feb 13 19:05:10.015819 containerd[1954]: time="2025-02-13T19:05:10.015746326Z" level=info msg="StartContainer for \"bc3b5bd22f599e3c309aa126515cd808661534068a1b14758cee9d0fb2eeefd3\" returns successfully"
Feb 13 19:05:10.018977 systemd[1]: cri-containerd-bc3b5bd22f599e3c309aa126515cd808661534068a1b14758cee9d0fb2eeefd3.scope: Deactivated successfully.
Feb 13 19:05:10.067601 containerd[1954]: time="2025-02-13T19:05:10.067343842Z" level=info msg="shim disconnected" id=bc3b5bd22f599e3c309aa126515cd808661534068a1b14758cee9d0fb2eeefd3 namespace=k8s.io
Feb 13 19:05:10.067601 containerd[1954]: time="2025-02-13T19:05:10.067435534Z" level=warning msg="cleaning up after shim disconnected" id=bc3b5bd22f599e3c309aa126515cd808661534068a1b14758cee9d0fb2eeefd3 namespace=k8s.io
Feb 13 19:05:10.067601 containerd[1954]: time="2025-02-13T19:05:10.067458490Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:05:10.782763 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc3b5bd22f599e3c309aa126515cd808661534068a1b14758cee9d0fb2eeefd3-rootfs.mount: Deactivated successfully.
Feb 13 19:05:10.862632 containerd[1954]: time="2025-02-13T19:05:10.862398926Z" level=info msg="CreateContainer within sandbox \"8892dc5ca618bacf52525c39efd5c3d7a531b09bb76e94ceec26fece4712cc0b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 19:05:10.902481 containerd[1954]: time="2025-02-13T19:05:10.902152863Z" level=info msg="CreateContainer within sandbox \"8892dc5ca618bacf52525c39efd5c3d7a531b09bb76e94ceec26fece4712cc0b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"370dcc9483a17c90c67a8cf1ac08d027a42a9862d071e81d46fe9fd59211cc36\""
Feb 13 19:05:10.906853 containerd[1954]: time="2025-02-13T19:05:10.906767655Z" level=info msg="StartContainer for \"370dcc9483a17c90c67a8cf1ac08d027a42a9862d071e81d46fe9fd59211cc36\""
Feb 13 19:05:10.987826 systemd[1]: Started cri-containerd-370dcc9483a17c90c67a8cf1ac08d027a42a9862d071e81d46fe9fd59211cc36.scope - libcontainer container 370dcc9483a17c90c67a8cf1ac08d027a42a9862d071e81d46fe9fd59211cc36.
Feb 13 19:05:11.048654 containerd[1954]: time="2025-02-13T19:05:11.048387215Z" level=info msg="StartContainer for \"370dcc9483a17c90c67a8cf1ac08d027a42a9862d071e81d46fe9fd59211cc36\" returns successfully"
Feb 13 19:05:11.055725 systemd[1]: cri-containerd-370dcc9483a17c90c67a8cf1ac08d027a42a9862d071e81d46fe9fd59211cc36.scope: Deactivated successfully.
Feb 13 19:05:11.138368 containerd[1954]: time="2025-02-13T19:05:11.138056832Z" level=info msg="shim disconnected" id=370dcc9483a17c90c67a8cf1ac08d027a42a9862d071e81d46fe9fd59211cc36 namespace=k8s.io
Feb 13 19:05:11.138368 containerd[1954]: time="2025-02-13T19:05:11.138129840Z" level=warning msg="cleaning up after shim disconnected" id=370dcc9483a17c90c67a8cf1ac08d027a42a9862d071e81d46fe9fd59211cc36 namespace=k8s.io
Feb 13 19:05:11.138368 containerd[1954]: time="2025-02-13T19:05:11.138151164Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:05:11.184283 containerd[1954]: time="2025-02-13T19:05:11.184208640Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:05:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Feb 13 19:05:11.349334 kubelet[3514]: I0213 19:05:11.346651 3514 setters.go:602] "Node became not ready" node="ip-172-31-18-242" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T19:05:11Z","lastTransitionTime":"2025-02-13T19:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 13 19:05:11.782871 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-370dcc9483a17c90c67a8cf1ac08d027a42a9862d071e81d46fe9fd59211cc36-rootfs.mount: Deactivated successfully.
Feb 13 19:05:11.867796 containerd[1954]: time="2025-02-13T19:05:11.867624651Z" level=info msg="CreateContainer within sandbox \"8892dc5ca618bacf52525c39efd5c3d7a531b09bb76e94ceec26fece4712cc0b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 19:05:11.906566 containerd[1954]: time="2025-02-13T19:05:11.906359392Z" level=info msg="CreateContainer within sandbox \"8892dc5ca618bacf52525c39efd5c3d7a531b09bb76e94ceec26fece4712cc0b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"08052f4fed5c4d135cd4d3d1ddb5678cf587394d94629a07446190fd39bbe447\""
Feb 13 19:05:11.908055 containerd[1954]: time="2025-02-13T19:05:11.907861420Z" level=info msg="StartContainer for \"08052f4fed5c4d135cd4d3d1ddb5678cf587394d94629a07446190fd39bbe447\""
Feb 13 19:05:11.972828 systemd[1]: Started cri-containerd-08052f4fed5c4d135cd4d3d1ddb5678cf587394d94629a07446190fd39bbe447.scope - libcontainer container 08052f4fed5c4d135cd4d3d1ddb5678cf587394d94629a07446190fd39bbe447.
Feb 13 19:05:12.029473 containerd[1954]: time="2025-02-13T19:05:12.029396724Z" level=info msg="StartContainer for \"08052f4fed5c4d135cd4d3d1ddb5678cf587394d94629a07446190fd39bbe447\" returns successfully"
Feb 13 19:05:12.848553 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Feb 13 19:05:12.925012 kubelet[3514]: I0213 19:05:12.924918 3514 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gmlmm" podStartSLOduration=5.924897329 podStartE2EDuration="5.924897329s" podCreationTimestamp="2025-02-13 19:05:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:05:12.923902181 +0000 UTC m=+124.888736806" watchObservedRunningTime="2025-02-13 19:05:12.924897329 +0000 UTC m=+124.889731942"
Feb 13 19:05:17.140237 (udev-worker)[6187]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:05:17.142157 (udev-worker)[6188]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:05:17.145194 systemd-networkd[1866]: lxc_health: Link UP
Feb 13 19:05:17.169189 systemd-networkd[1866]: lxc_health: Gained carrier
Feb 13 19:05:18.344734 systemd-networkd[1866]: lxc_health: Gained IPv6LL
Feb 13 19:05:19.507626 kubelet[3514]: E0213 19:05:19.507547 3514 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:48368->127.0.0.1:39307: read tcp 127.0.0.1:48368->127.0.0.1:39307: read: connection reset by peer
Feb 13 19:05:19.508213 kubelet[3514]: E0213 19:05:19.507729 3514 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:48368->127.0.0.1:39307: write tcp 127.0.0.1:48368->127.0.0.1:39307: write: broken pipe
Feb 13 19:05:20.378572 ntpd[1927]: Listen normally on 15 lxc_health [fe80::8022:d2ff:fea7:f421%14]:123
Feb 13 19:05:20.379268 ntpd[1927]: 13 Feb 19:05:20 ntpd[1927]: Listen normally on 15 lxc_health [fe80::8022:d2ff:fea7:f421%14]:123
Feb 13 19:05:23.994135 systemd[1]: run-containerd-runc-k8s.io-08052f4fed5c4d135cd4d3d1ddb5678cf587394d94629a07446190fd39bbe447-runc.2I5m74.mount: Deactivated successfully.
Feb 13 19:05:24.094350 kubelet[3514]: E0213 19:05:24.093567 3514 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:48386->127.0.0.1:39307: write tcp 127.0.0.1:48386->127.0.0.1:39307: write: connection reset by peer
Feb 13 19:05:24.126257 sshd[5441]: Connection closed by 139.178.89.65 port 42928
Feb 13 19:05:24.127231 sshd-session[5374]: pam_unix(sshd:session): session closed for user core
Feb 13 19:05:24.134736 systemd[1]: sshd@30-172.31.18.242:22-139.178.89.65:42928.service: Deactivated successfully.
Feb 13 19:05:24.141413 systemd[1]: session-31.scope: Deactivated successfully.
Feb 13 19:05:24.147596 systemd-logind[1933]: Session 31 logged out. Waiting for processes to exit.
Feb 13 19:05:24.149965 systemd-logind[1933]: Removed session 31.
Feb 13 19:05:39.330233 systemd[1]: cri-containerd-0668d3e7662749a35d891830b2ab5cdeb9cf74478d149a9792e8fd825510e3e6.scope: Deactivated successfully.
Feb 13 19:05:39.331638 systemd[1]: cri-containerd-0668d3e7662749a35d891830b2ab5cdeb9cf74478d149a9792e8fd825510e3e6.scope: Consumed 4.649s CPU time, 53.7M memory peak.
Feb 13 19:05:39.371081 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0668d3e7662749a35d891830b2ab5cdeb9cf74478d149a9792e8fd825510e3e6-rootfs.mount: Deactivated successfully.
Feb 13 19:05:39.382355 containerd[1954]: time="2025-02-13T19:05:39.382266244Z" level=info msg="shim disconnected" id=0668d3e7662749a35d891830b2ab5cdeb9cf74478d149a9792e8fd825510e3e6 namespace=k8s.io
Feb 13 19:05:39.382355 containerd[1954]: time="2025-02-13T19:05:39.382347784Z" level=warning msg="cleaning up after shim disconnected" id=0668d3e7662749a35d891830b2ab5cdeb9cf74478d149a9792e8fd825510e3e6 namespace=k8s.io
Feb 13 19:05:39.383143 containerd[1954]: time="2025-02-13T19:05:39.382369768Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:05:39.960357 kubelet[3514]: I0213 19:05:39.959596 3514 scope.go:117] "RemoveContainer" containerID="0668d3e7662749a35d891830b2ab5cdeb9cf74478d149a9792e8fd825510e3e6"
Feb 13 19:05:39.964051 containerd[1954]: time="2025-02-13T19:05:39.964002187Z" level=info msg="CreateContainer within sandbox \"fe3c0eb543094f8ed5ff11f41df3a1f0b230642ee1285cbbc72170bc411c0472\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Feb 13 19:05:39.991694 containerd[1954]: time="2025-02-13T19:05:39.991565491Z" level=info msg="CreateContainer within sandbox \"fe3c0eb543094f8ed5ff11f41df3a1f0b230642ee1285cbbc72170bc411c0472\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"f8ca7a1087a200ede988080d3656005cdd416355ec7f04b2404271c687861202\""
Feb 13 19:05:39.992661 containerd[1954]: time="2025-02-13T19:05:39.992249575Z" level=info msg="StartContainer for \"f8ca7a1087a200ede988080d3656005cdd416355ec7f04b2404271c687861202\""
Feb 13 19:05:40.043801 systemd[1]: Started cri-containerd-f8ca7a1087a200ede988080d3656005cdd416355ec7f04b2404271c687861202.scope - libcontainer container f8ca7a1087a200ede988080d3656005cdd416355ec7f04b2404271c687861202.
Feb 13 19:05:40.116186 containerd[1954]: time="2025-02-13T19:05:40.115428484Z" level=info msg="StartContainer for \"f8ca7a1087a200ede988080d3656005cdd416355ec7f04b2404271c687861202\" returns successfully"
Feb 13 19:05:40.918896 kubelet[3514]: E0213 19:05:40.918819 3514 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.242:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-242?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 13 19:05:44.271731 systemd[1]: cri-containerd-a5144e026665b6d44e6e8f85e6f48481a8196f530f700136c70961b0c5df19c6.scope: Deactivated successfully.
Feb 13 19:05:44.276268 systemd[1]: cri-containerd-a5144e026665b6d44e6e8f85e6f48481a8196f530f700136c70961b0c5df19c6.scope: Consumed 5.529s CPU time, 24.4M memory peak.
Feb 13 19:05:44.316789 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a5144e026665b6d44e6e8f85e6f48481a8196f530f700136c70961b0c5df19c6-rootfs.mount: Deactivated successfully.
Feb 13 19:05:44.334883 containerd[1954]: time="2025-02-13T19:05:44.334810161Z" level=info msg="shim disconnected" id=a5144e026665b6d44e6e8f85e6f48481a8196f530f700136c70961b0c5df19c6 namespace=k8s.io
Feb 13 19:05:44.335718 containerd[1954]: time="2025-02-13T19:05:44.335435865Z" level=warning msg="cleaning up after shim disconnected" id=a5144e026665b6d44e6e8f85e6f48481a8196f530f700136c70961b0c5df19c6 namespace=k8s.io
Feb 13 19:05:44.335718 containerd[1954]: time="2025-02-13T19:05:44.335464089Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:05:44.978198 kubelet[3514]: I0213 19:05:44.978133 3514 scope.go:117] "RemoveContainer" containerID="a5144e026665b6d44e6e8f85e6f48481a8196f530f700136c70961b0c5df19c6"
Feb 13 19:05:44.981530 containerd[1954]: time="2025-02-13T19:05:44.980995836Z" level=info msg="CreateContainer within sandbox \"fe48c793b3d60ac33a9cfa14672c92b1141fbca87d58e817f6a4bafb31b3410b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Feb 13 19:05:45.009048 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount938582168.mount: Deactivated successfully.
Feb 13 19:05:45.012435 containerd[1954]: time="2025-02-13T19:05:45.012360572Z" level=info msg="CreateContainer within sandbox \"fe48c793b3d60ac33a9cfa14672c92b1141fbca87d58e817f6a4bafb31b3410b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"8ab29bf70f2f1ab8903e8d5a25747a29aa364c1acddb2f2d3045d28265d163cb\""
Feb 13 19:05:45.013623 containerd[1954]: time="2025-02-13T19:05:45.013082276Z" level=info msg="StartContainer for \"8ab29bf70f2f1ab8903e8d5a25747a29aa364c1acddb2f2d3045d28265d163cb\""
Feb 13 19:05:45.081805 systemd[1]: Started cri-containerd-8ab29bf70f2f1ab8903e8d5a25747a29aa364c1acddb2f2d3045d28265d163cb.scope - libcontainer container 8ab29bf70f2f1ab8903e8d5a25747a29aa364c1acddb2f2d3045d28265d163cb.
Feb 13 19:05:45.146953 containerd[1954]: time="2025-02-13T19:05:45.146645865Z" level=info msg="StartContainer for \"8ab29bf70f2f1ab8903e8d5a25747a29aa364c1acddb2f2d3045d28265d163cb\" returns successfully"
Feb 13 19:05:45.315947 systemd[1]: run-containerd-runc-k8s.io-8ab29bf70f2f1ab8903e8d5a25747a29aa364c1acddb2f2d3045d28265d163cb-runc.gkN4m7.mount: Deactivated successfully.
Feb 13 19:05:50.920121 kubelet[3514]: E0213 19:05:50.919575 3514 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.242:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-242?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"