Feb 13 16:04:29.177445 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Feb 13 16:04:29.177491 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Feb 13 14:34:20 -00 2025 Feb 13 16:04:29.177516 kernel: KASLR disabled due to lack of seed Feb 13 16:04:29.177533 kernel: efi: EFI v2.7 by EDK II Feb 13 16:04:29.177549 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18 Feb 13 16:04:29.177564 kernel: ACPI: Early table checksum verification disabled Feb 13 16:04:29.177581 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Feb 13 16:04:29.177597 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Feb 13 16:04:29.177612 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Feb 13 16:04:29.177628 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Feb 13 16:04:29.177649 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Feb 13 16:04:29.177664 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Feb 13 16:04:29.177680 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Feb 13 16:04:29.177695 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Feb 13 16:04:29.177714 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Feb 13 16:04:29.177735 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Feb 13 16:04:29.177752 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Feb 13 16:04:29.177769 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Feb 13 16:04:29.177785 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Feb 13 16:04:29.177801 kernel: printk: bootconsole [uart0] enabled Feb 13 16:04:29.177818 kernel: NUMA: Failed to initialise from firmware Feb 13 16:04:29.177834 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Feb 13 16:04:29.177851 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff] Feb 13 16:04:29.177867 kernel: Zone ranges: Feb 13 16:04:29.177884 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Feb 13 16:04:29.177900 kernel: DMA32 empty Feb 13 16:04:29.177920 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Feb 13 16:04:29.177937 kernel: Movable zone start for each node Feb 13 16:04:29.177953 kernel: Early memory node ranges Feb 13 16:04:29.177969 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Feb 13 16:04:29.177985 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Feb 13 16:04:29.178002 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Feb 13 16:04:29.178018 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Feb 13 16:04:29.178035 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Feb 13 16:04:29.178051 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Feb 13 16:04:29.178067 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Feb 13 16:04:29.178083 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Feb 13 16:04:29.178099 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Feb 13 16:04:29.178120 kernel: On 
node 0, zone Normal: 8192 pages in unavailable ranges Feb 13 16:04:29.178137 kernel: psci: probing for conduit method from ACPI. Feb 13 16:04:29.178161 kernel: psci: PSCIv1.0 detected in firmware. Feb 13 16:04:29.178179 kernel: psci: Using standard PSCI v0.2 function IDs Feb 13 16:04:29.178196 kernel: psci: Trusted OS migration not required Feb 13 16:04:29.178218 kernel: psci: SMC Calling Convention v1.1 Feb 13 16:04:29.178236 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Feb 13 16:04:29.178253 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Feb 13 16:04:29.178294 kernel: pcpu-alloc: [0] 0 [0] 1 Feb 13 16:04:29.178313 kernel: Detected PIPT I-cache on CPU0 Feb 13 16:04:29.178331 kernel: CPU features: detected: GIC system register CPU interface Feb 13 16:04:29.178349 kernel: CPU features: detected: Spectre-v2 Feb 13 16:04:29.178366 kernel: CPU features: detected: Spectre-v3a Feb 13 16:04:29.178385 kernel: CPU features: detected: Spectre-BHB Feb 13 16:04:29.178402 kernel: CPU features: detected: ARM erratum 1742098 Feb 13 16:04:29.178419 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Feb 13 16:04:29.178444 kernel: alternatives: applying boot alternatives Feb 13 16:04:29.178465 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=55866785c450f887021047c4ba00d104a5882975060a5fc692d64491b0d81886 Feb 13 16:04:29.178484 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 16:04:29.178502 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 13 16:04:29.178519 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 16:04:29.178537 kernel: Fallback order for Node 0: 0 Feb 13 16:04:29.178554 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Feb 13 16:04:29.178571 kernel: Policy zone: Normal Feb 13 16:04:29.178588 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 16:04:29.178606 kernel: software IO TLB: area num 2. Feb 13 16:04:29.178624 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Feb 13 16:04:29.178647 kernel: Memory: 3820216K/4030464K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 210248K reserved, 0K cma-reserved) Feb 13 16:04:29.178665 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 13 16:04:29.178683 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 16:04:29.178702 kernel: rcu: RCU event tracing is enabled. Feb 13 16:04:29.178721 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 13 16:04:29.178764 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 16:04:29.178822 kernel: Tracing variant of Tasks RCU enabled. Feb 13 16:04:29.178859 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Feb 13 16:04:29.178880 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 13 16:04:29.178897 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Feb 13 16:04:29.178914 kernel: GICv3: 96 SPIs implemented Feb 13 16:04:29.178937 kernel: GICv3: 0 Extended SPIs implemented Feb 13 16:04:29.178955 kernel: Root IRQ handler: gic_handle_irq Feb 13 16:04:29.178972 kernel: GICv3: GICv3 features: 16 PPIs Feb 13 16:04:29.178990 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Feb 13 16:04:29.179007 kernel: ITS [mem 0x10080000-0x1009ffff] Feb 13 16:04:29.179024 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1) Feb 13 16:04:29.179042 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1) Feb 13 16:04:29.179059 kernel: GICv3: using LPI property table @0x00000004000d0000 Feb 13 16:04:29.179076 kernel: ITS: Using hypervisor restricted LPI range [128] Feb 13 16:04:29.179093 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000 Feb 13 16:04:29.179111 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 13 16:04:29.179128 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Feb 13 16:04:29.179151 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Feb 13 16:04:29.179176 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Feb 13 16:04:29.179194 kernel: Console: colour dummy device 80x25 Feb 13 16:04:29.179212 kernel: printk: console [tty1] enabled Feb 13 16:04:29.179230 kernel: ACPI: Core revision 20230628 Feb 13 16:04:29.179249 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Feb 13 16:04:29.179321 kernel: pid_max: default: 32768 minimum: 301 Feb 13 16:04:29.179343 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 16:04:29.179361 kernel: landlock: Up and running. Feb 13 16:04:29.179385 kernel: SELinux: Initializing. Feb 13 16:04:29.179404 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 16:04:29.179422 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 16:04:29.179440 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 16:04:29.179458 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 16:04:29.179476 kernel: rcu: Hierarchical SRCU implementation. Feb 13 16:04:29.179495 kernel: rcu: Max phase no-delay instances is 400. Feb 13 16:04:29.179513 kernel: Platform MSI: ITS@0x10080000 domain created Feb 13 16:04:29.179531 kernel: PCI/MSI: ITS@0x10080000 domain created Feb 13 16:04:29.179553 kernel: Remapping and enabling EFI services. Feb 13 16:04:29.179572 kernel: smp: Bringing up secondary CPUs ... Feb 13 16:04:29.179589 kernel: Detected PIPT I-cache on CPU1 Feb 13 16:04:29.179607 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Feb 13 16:04:29.179625 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000 Feb 13 16:04:29.179643 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Feb 13 16:04:29.179661 kernel: smp: Brought up 1 node, 2 CPUs Feb 13 16:04:29.179678 kernel: SMP: Total of 2 processors activated. 
Feb 13 16:04:29.179696 kernel: CPU features: detected: 32-bit EL0 Support Feb 13 16:04:29.179718 kernel: CPU features: detected: 32-bit EL1 Support Feb 13 16:04:29.179736 kernel: CPU features: detected: CRC32 instructions Feb 13 16:04:29.179754 kernel: CPU: All CPU(s) started at EL1 Feb 13 16:04:29.179784 kernel: alternatives: applying system-wide alternatives Feb 13 16:04:29.179807 kernel: devtmpfs: initialized Feb 13 16:04:29.179826 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 16:04:29.179844 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 13 16:04:29.179863 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 16:04:29.179881 kernel: SMBIOS 3.0.0 present. Feb 13 16:04:29.179899 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Feb 13 16:04:29.179923 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 16:04:29.179942 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Feb 13 16:04:29.179960 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Feb 13 16:04:29.179979 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Feb 13 16:04:29.179998 kernel: audit: initializing netlink subsys (disabled) Feb 13 16:04:29.180017 kernel: audit: type=2000 audit(0.287:1): state=initialized audit_enabled=0 res=1 Feb 13 16:04:29.180035 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 16:04:29.180058 kernel: cpuidle: using governor menu Feb 13 16:04:29.180077 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Feb 13 16:04:29.180095 kernel: ASID allocator initialised with 65536 entries Feb 13 16:04:29.180114 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 16:04:29.180133 kernel: Serial: AMBA PL011 UART driver Feb 13 16:04:29.180151 kernel: Modules: 17520 pages in range for non-PLT usage Feb 13 16:04:29.180169 kernel: Modules: 509040 pages in range for PLT usage Feb 13 16:04:29.180188 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 16:04:29.180207 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 16:04:29.180230 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Feb 13 16:04:29.180248 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Feb 13 16:04:29.180289 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 16:04:29.180310 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 16:04:29.180329 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Feb 13 16:04:29.180348 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Feb 13 16:04:29.180367 kernel: ACPI: Added _OSI(Module Device) Feb 13 16:04:29.180385 kernel: ACPI: Added _OSI(Processor Device) Feb 13 16:04:29.180404 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 16:04:29.180429 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 16:04:29.180448 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 16:04:29.180467 kernel: ACPI: Interpreter enabled Feb 13 16:04:29.180485 kernel: ACPI: Using GIC for interrupt routing Feb 13 16:04:29.180504 kernel: ACPI: MCFG table detected, 1 entries Feb 13 16:04:29.180523 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Feb 13 16:04:29.180816 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 13 16:04:29.181545 kernel: acpi 
PNP0A08:00: _OSC: platform does not support [LTR] Feb 13 16:04:29.181767 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Feb 13 16:04:29.181965 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Feb 13 16:04:29.182161 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Feb 13 16:04:29.182186 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Feb 13 16:04:29.182206 kernel: acpiphp: Slot [1] registered Feb 13 16:04:29.182224 kernel: acpiphp: Slot [2] registered Feb 13 16:04:29.182243 kernel: acpiphp: Slot [3] registered Feb 13 16:04:29.182280 kernel: acpiphp: Slot [4] registered Feb 13 16:04:29.182956 kernel: acpiphp: Slot [5] registered Feb 13 16:04:29.182995 kernel: acpiphp: Slot [6] registered Feb 13 16:04:29.183042 kernel: acpiphp: Slot [7] registered Feb 13 16:04:29.183063 kernel: acpiphp: Slot [8] registered Feb 13 16:04:29.183082 kernel: acpiphp: Slot [9] registered Feb 13 16:04:29.183100 kernel: acpiphp: Slot [10] registered Feb 13 16:04:29.183119 kernel: acpiphp: Slot [11] registered Feb 13 16:04:29.183139 kernel: acpiphp: Slot [12] registered Feb 13 16:04:29.183158 kernel: acpiphp: Slot [13] registered Feb 13 16:04:29.183177 kernel: acpiphp: Slot [14] registered Feb 13 16:04:29.183202 kernel: acpiphp: Slot [15] registered Feb 13 16:04:29.183221 kernel: acpiphp: Slot [16] registered Feb 13 16:04:29.183240 kernel: acpiphp: Slot [17] registered Feb 13 16:04:29.183348 kernel: acpiphp: Slot [18] registered Feb 13 16:04:29.183645 kernel: acpiphp: Slot [19] registered Feb 13 16:04:29.183988 kernel: acpiphp: Slot [20] registered Feb 13 16:04:29.184629 kernel: acpiphp: Slot [21] registered Feb 13 16:04:29.184727 kernel: acpiphp: Slot [22] registered Feb 13 16:04:29.184750 kernel: acpiphp: Slot [23] registered Feb 13 16:04:29.184775 kernel: acpiphp: Slot [24] registered Feb 13 16:04:29.184794 kernel: acpiphp: Slot [25] registered Feb 13 16:04:29.184813 kernel: acpiphp: Slot [26] registered Feb 13 16:04:29.184832 kernel: acpiphp: Slot [27] registered Feb 13 16:04:29.184850 kernel: acpiphp: Slot [28] registered Feb 13 16:04:29.184868 kernel: acpiphp: Slot [29] registered Feb 13 16:04:29.184887 kernel: acpiphp: Slot [30] registered Feb 13 16:04:29.184905 kernel: acpiphp: Slot [31] registered Feb 13 16:04:29.184924 kernel: PCI host bridge to bus 0000:00 Feb 13 16:04:29.185175 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Feb 13 16:04:29.185437 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Feb 13 16:04:29.185628 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Feb 13 16:04:29.185813 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Feb 13 16:04:29.186046 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Feb 13 16:04:29.186335 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Feb 13 16:04:29.186714 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Feb 13 16:04:29.188483 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Feb 13 16:04:29.188783 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Feb 13 16:04:29.189007 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 13 16:04:29.189253 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Feb 13 16:04:29.189546 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Feb 13 16:04:29.189757 kernel: pci 0000:00:05.0: reg 0x18: [mem 
0x80000000-0x800fffff pref] Feb 13 16:04:29.189972 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] Feb 13 16:04:29.190178 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 13 16:04:29.190454 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Feb 13 16:04:29.190659 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Feb 13 16:04:29.190860 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Feb 13 16:04:29.191059 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Feb 13 16:04:29.191274 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Feb 13 16:04:29.191478 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Feb 13 16:04:29.191668 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Feb 13 16:04:29.191857 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Feb 13 16:04:29.191883 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Feb 13 16:04:29.191903 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Feb 13 16:04:29.191923 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Feb 13 16:04:29.191942 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Feb 13 16:04:29.191961 kernel: iommu: Default domain type: Translated Feb 13 16:04:29.191980 kernel: iommu: DMA domain TLB invalidation policy: strict mode Feb 13 16:04:29.192006 kernel: efivars: Registered efivars operations Feb 13 16:04:29.192024 kernel: vgaarb: loaded Feb 13 16:04:29.192043 kernel: clocksource: Switched to clocksource arch_sys_counter Feb 13 16:04:29.192062 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 16:04:29.192081 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 16:04:29.192100 kernel: pnp: PnP ACPI init Feb 13 16:04:29.192345 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Feb 13 16:04:29.192375 kernel: pnp: PnP ACPI: found 1 devices Feb 13 16:04:29.192401 kernel: NET: Registered PF_INET protocol family Feb 13 16:04:29.192421 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 16:04:29.192440 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 13 16:04:29.192460 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 16:04:29.192479 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 13 16:04:29.192498 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Feb 13 16:04:29.192517 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 13 16:04:29.192536 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 16:04:29.192555 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 16:04:29.192579 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 16:04:29.192598 kernel: PCI: CLS 0 bytes, default 64 Feb 13 16:04:29.192616 kernel: kvm [1]: HYP mode not available Feb 13 16:04:29.192635 kernel: Initialise system trusted keyrings Feb 13 16:04:29.192654 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 13 16:04:29.192673 kernel: Key type asymmetric registered Feb 13 16:04:29.192691 kernel: Asymmetric key parser 'x509' registered Feb 13 16:04:29.192710 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Feb 13 16:04:29.192729 kernel: io scheduler mq-deadline registered Feb 13 
16:04:29.192753 kernel: io scheduler kyber registered Feb 13 16:04:29.192772 kernel: io scheduler bfq registered Feb 13 16:04:29.193000 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Feb 13 16:04:29.193029 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Feb 13 16:04:29.193050 kernel: ACPI: button: Power Button [PWRB] Feb 13 16:04:29.193070 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Feb 13 16:04:29.193089 kernel: ACPI: button: Sleep Button [SLPB] Feb 13 16:04:29.193109 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 16:04:29.193135 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Feb 13 16:04:29.193424 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Feb 13 16:04:29.193455 kernel: printk: console [ttyS0] disabled Feb 13 16:04:29.193475 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Feb 13 16:04:29.193494 kernel: printk: console [ttyS0] enabled Feb 13 16:04:29.193513 kernel: printk: bootconsole [uart0] disabled Feb 13 16:04:29.193532 kernel: thunder_xcv, ver 1.0 Feb 13 16:04:29.193551 kernel: thunder_bgx, ver 1.0 Feb 13 16:04:29.193569 kernel: nicpf, ver 1.0 Feb 13 16:04:29.193594 kernel: nicvf, ver 1.0 Feb 13 16:04:29.193812 kernel: rtc-efi rtc-efi.0: registered as rtc0 Feb 13 16:04:29.194012 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T16:04:28 UTC (1739462668) Feb 13 16:04:29.194038 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 16:04:29.194059 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Feb 13 16:04:29.194078 kernel: watchdog: Delayed init of the lockup detector failed: -19 Feb 13 16:04:29.194097 kernel: watchdog: Hard watchdog permanently disabled Feb 13 16:04:29.194115 kernel: NET: Registered PF_INET6 protocol family Feb 13 16:04:29.194139 kernel: Segment Routing with IPv6 Feb 13 16:04:29.194158 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 16:04:29.194177 kernel: NET: Registered PF_PACKET protocol family Feb 13 16:04:29.194196 kernel: Key type dns_resolver registered Feb 13 16:04:29.194214 kernel: registered taskstats version 1 Feb 13 16:04:29.194233 kernel: Loading compiled-in X.509 certificates Feb 13 16:04:29.194252 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: d3f151cc07005f6a29244b13ac54c8677429c8f5' Feb 13 16:04:29.194294 kernel: Key type .fscrypt registered Feb 13 16:04:29.194314 kernel: Key type fscrypt-provisioning registered Feb 13 16:04:29.194339 kernel: ima: No TPM chip found, activating TPM-bypass! 
Feb 13 16:04:29.194358 kernel: ima: Allocated hash algorithm: sha1 Feb 13 16:04:29.194377 kernel: ima: No architecture policies found Feb 13 16:04:29.194396 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Feb 13 16:04:29.194414 kernel: clk: Disabling unused clocks Feb 13 16:04:29.194433 kernel: Freeing unused kernel memory: 39360K Feb 13 16:04:29.194452 kernel: Run /init as init process Feb 13 16:04:29.194470 kernel: with arguments: Feb 13 16:04:29.194488 kernel: /init Feb 13 16:04:29.194507 kernel: with environment: Feb 13 16:04:29.194530 kernel: HOME=/ Feb 13 16:04:29.194549 kernel: TERM=linux Feb 13 16:04:29.194567 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 16:04:29.194591 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 16:04:29.194615 systemd[1]: Detected virtualization amazon. Feb 13 16:04:29.194635 systemd[1]: Detected architecture arm64. Feb 13 16:04:29.194655 systemd[1]: Running in initrd. Feb 13 16:04:29.194680 systemd[1]: No hostname configured, using default hostname. Feb 13 16:04:29.194701 systemd[1]: Hostname set to . Feb 13 16:04:29.194721 systemd[1]: Initializing machine ID from VM UUID. Feb 13 16:04:29.194741 systemd[1]: Queued start job for default target initrd.target. Feb 13 16:04:29.194762 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 16:04:29.194782 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 16:04:29.194804 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 16:04:29.194825 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 16:04:29.194851 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 16:04:29.194872 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 16:04:29.194896 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 16:04:29.194918 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 16:04:29.194938 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 16:04:29.194959 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 16:04:29.194980 systemd[1]: Reached target paths.target - Path Units. Feb 13 16:04:29.195005 systemd[1]: Reached target slices.target - Slice Units. Feb 13 16:04:29.195025 systemd[1]: Reached target swap.target - Swaps. Feb 13 16:04:29.195045 systemd[1]: Reached target timers.target - Timer Units. Feb 13 16:04:29.195066 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 16:04:29.195087 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 16:04:29.195107 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 16:04:29.195128 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 16:04:29.195148 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Feb 13 16:04:29.195168 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 16:04:29.195194 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 16:04:29.195215 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 16:04:29.195235 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 16:04:29.195255 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 16:04:29.195300 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 16:04:29.195321 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 16:04:29.195342 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 16:04:29.195363 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 16:04:29.195389 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 16:04:29.195410 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 16:04:29.195431 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 16:04:29.195493 systemd-journald[250]: Collecting audit messages is disabled. Feb 13 16:04:29.195542 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 16:04:29.195564 systemd-journald[250]: Journal started Feb 13 16:04:29.195601 systemd-journald[250]: Runtime Journal (/run/log/journal/ec28f0561b67766a667448c737aa96aa) is 8.0M, max 75.3M, 67.3M free. Feb 13 16:04:29.205224 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 16:04:29.179227 systemd-modules-load[251]: Inserted module 'overlay' Feb 13 16:04:29.226276 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 16:04:29.235333 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 16:04:29.237904 systemd-modules-load[251]: Inserted module 'br_netfilter' Feb 13 16:04:29.239985 kernel: Bridge firewalling registered Feb 13 16:04:29.240623 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 16:04:29.246019 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 16:04:29.261896 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 16:04:29.267242 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 16:04:29.282717 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 16:04:29.299811 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 16:04:29.308606 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 16:04:29.343526 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 16:04:29.349241 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 16:04:29.359620 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 16:04:29.373355 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 16:04:29.378010 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 16:04:29.391105 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Feb 13 16:04:29.418794 dracut-cmdline[284]: dracut-dracut-053 Feb 13 16:04:29.425143 dracut-cmdline[284]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=55866785c450f887021047c4ba00d104a5882975060a5fc692d64491b0d81886 Feb 13 16:04:29.479875 systemd-resolved[288]: Positive Trust Anchors: Feb 13 16:04:29.481757 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 16:04:29.481824 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 16:04:29.560300 kernel: SCSI subsystem initialized Feb 13 16:04:29.567302 kernel: Loading iSCSI transport class v2.0-870. Feb 13 16:04:29.580303 kernel: iscsi: registered transport (tcp) Feb 13 16:04:29.602321 kernel: iscsi: registered transport (qla4xxx) Feb 13 16:04:29.602393 kernel: QLogic iSCSI HBA Driver Feb 13 16:04:29.687305 kernel: random: crng init done Feb 13 16:04:29.687696 systemd-resolved[288]: Defaulting to hostname 'linux'. Feb 13 16:04:29.691512 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 16:04:29.696543 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 16:04:29.716424 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 16:04:29.726634 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 16:04:29.774539 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 16:04:29.774621 kernel: device-mapper: uevent: version 1.0.3 Feb 13 16:04:29.774650 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 16:04:29.841312 kernel: raid6: neonx8 gen() 6604 MB/s Feb 13 16:04:29.858292 kernel: raid6: neonx4 gen() 6437 MB/s Feb 13 16:04:29.875292 kernel: raid6: neonx2 gen() 5377 MB/s Feb 13 16:04:29.892293 kernel: raid6: neonx1 gen() 3914 MB/s Feb 13 16:04:29.909299 kernel: raid6: int64x8 gen() 3782 MB/s Feb 13 16:04:29.926291 kernel: raid6: int64x4 gen() 3675 MB/s Feb 13 16:04:29.943292 kernel: raid6: int64x2 gen() 3552 MB/s Feb 13 16:04:29.961082 kernel: raid6: int64x1 gen() 2762 MB/s Feb 13 16:04:29.961115 kernel: raid6: using algorithm neonx8 gen() 6604 MB/s Feb 13 16:04:29.979048 kernel: raid6: .... 
xor() 4920 MB/s, rmw enabled Feb 13 16:04:29.979085 kernel: raid6: using neon recovery algorithm Feb 13 16:04:29.987529 kernel: xor: measuring software checksum speed Feb 13 16:04:29.987585 kernel: 8regs : 10699 MB/sec Feb 13 16:04:29.988640 kernel: 32regs : 11945 MB/sec Feb 13 16:04:29.989827 kernel: arm64_neon : 9564 MB/sec Feb 13 16:04:29.989859 kernel: xor: using function: 32regs (11945 MB/sec) Feb 13 16:04:30.073311 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 16:04:30.093113 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 16:04:30.102624 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 16:04:30.147380 systemd-udevd[470]: Using default interface naming scheme 'v255'. Feb 13 16:04:30.156997 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 16:04:30.170193 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 16:04:30.207675 dracut-pre-trigger[475]: rd.md=0: removing MD RAID activation Feb 13 16:04:30.263191 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 16:04:30.281203 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 16:04:30.390777 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 16:04:30.406446 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 16:04:30.465086 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 16:04:30.473891 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 16:04:30.479017 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 16:04:30.483706 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 16:04:30.494935 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 16:04:30.542059 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 16:04:30.605396 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Feb 13 16:04:30.605463 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Feb 13 16:04:30.626458 kernel: ena 0000:00:05.0: ENA device version: 0.10 Feb 13 16:04:30.626740 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Feb 13 16:04:30.626982 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:ef:18:c2:cf:9b Feb 13 16:04:30.629332 (udev-worker)[540]: Network interface NamePolicy= disabled on kernel command line. Feb 13 16:04:30.631036 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 16:04:30.636406 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 16:04:30.654097 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 16:04:30.658584 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 16:04:30.658915 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 16:04:30.663594 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 16:04:30.682963 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Feb 13 16:04:30.706309 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Feb 13 16:04:30.708400 kernel: nvme nvme0: pci function 0000:00:04.0 Feb 13 16:04:30.720291 kernel: nvme nvme0: 2/0/0 default/read/poll queues Feb 13 16:04:30.729508 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 16:04:30.729591 kernel: GPT:9289727 != 16777215 Feb 13 16:04:30.729618 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 16:04:30.729646 kernel: GPT:9289727 != 16777215 Feb 13 16:04:30.729672 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 16:04:30.729697 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 16:04:30.745411 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 16:04:30.757495 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 16:04:30.819592 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 16:04:30.826311 kernel: BTRFS: device fsid 39fc2625-8d65-490f-9a1f-39e365051e19 devid 1 transid 40 /dev/nvme0n1p3 scanned by (udev-worker) (521) Feb 13 16:04:30.875356 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (520) Feb 13 16:04:30.883784 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Feb 13 16:04:30.947637 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Feb 13 16:04:31.000652 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Feb 13 16:04:31.016529 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Feb 13 16:04:31.018939 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Feb 13 16:04:31.044668 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 16:04:31.060711 disk-uuid[661]: Primary Header is updated. Feb 13 16:04:31.060711 disk-uuid[661]: Secondary Entries is updated. Feb 13 16:04:31.060711 disk-uuid[661]: Secondary Header is updated. Feb 13 16:04:31.070334 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 16:04:31.076304 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 16:04:31.085311 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 16:04:32.096357 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 16:04:32.097818 disk-uuid[662]: The operation has completed successfully. Feb 13 16:04:32.277735 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 16:04:32.277940 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 16:04:32.333583 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 16:04:32.341652 sh[1006]: Success Feb 13 16:04:32.366594 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 13 16:04:32.483722 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 16:04:32.501504 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 16:04:32.505797 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Feb 13 16:04:32.553527 kernel: BTRFS info (device dm-0): first mount of filesystem 39fc2625-8d65-490f-9a1f-39e365051e19 Feb 13 16:04:32.553606 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Feb 13 16:04:32.555335 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 16:04:32.556629 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 16:04:32.557756 kernel: BTRFS info (device dm-0): using free space tree Feb 13 16:04:32.589321 kernel: BTRFS info (device dm-0): enabling ssd optimizations Feb 13 16:04:32.603516 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 16:04:32.607924 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 16:04:32.618596 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 16:04:32.625418 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 16:04:32.660164 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c8afbf79-805d-40d9-b4c9-cafa51441c41 Feb 13 16:04:32.660248 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Feb 13 16:04:32.661530 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 13 16:04:32.669391 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 13 16:04:32.686683 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 16:04:32.691424 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem c8afbf79-805d-40d9-b4c9-cafa51441c41 Feb 13 16:04:32.702560 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 16:04:32.717582 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 16:04:32.863403 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 16:04:32.882084 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 16:04:32.906907 ignition[1113]: Ignition 2.19.0 Feb 13 16:04:32.906935 ignition[1113]: Stage: fetch-offline Feb 13 16:04:32.908631 ignition[1113]: no configs at "/usr/lib/ignition/base.d" Feb 13 16:04:32.913228 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 16:04:32.908675 ignition[1113]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 16:04:32.909751 ignition[1113]: Ignition finished successfully Feb 13 16:04:32.957487 systemd-networkd[1205]: lo: Link UP Feb 13 16:04:32.957511 systemd-networkd[1205]: lo: Gained carrier Feb 13 16:04:32.961938 systemd-networkd[1205]: Enumeration completed Feb 13 16:04:32.963634 systemd-networkd[1205]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 16:04:32.963641 systemd-networkd[1205]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 16:04:32.964163 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 16:04:32.971508 systemd-networkd[1205]: eth0: Link UP Feb 13 16:04:32.971517 systemd-networkd[1205]: eth0: Gained carrier Feb 13 16:04:32.971537 systemd-networkd[1205]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 16:04:32.997595 systemd[1]: Reached target network.target - Network. 
Feb 13 16:04:33.007552 systemd-networkd[1205]: eth0: DHCPv4 address 172.31.19.223/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 13 16:04:33.007668 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Feb 13 16:04:33.038571 ignition[1209]: Ignition 2.19.0 Feb 13 16:04:33.038599 ignition[1209]: Stage: fetch Feb 13 16:04:33.040424 ignition[1209]: no configs at "/usr/lib/ignition/base.d" Feb 13 16:04:33.040471 ignition[1209]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 16:04:33.040920 ignition[1209]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 16:04:33.050590 ignition[1209]: PUT result: OK Feb 13 16:04:33.063354 ignition[1209]: parsed url from cmdline: "" Feb 13 16:04:33.063434 ignition[1209]: no config URL provided Feb 13 16:04:33.063454 ignition[1209]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 16:04:33.063483 ignition[1209]: no config at "/usr/lib/ignition/user.ign" Feb 13 16:04:33.063519 ignition[1209]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 16:04:33.069591 ignition[1209]: PUT result: OK Feb 13 16:04:33.069917 ignition[1209]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Feb 13 16:04:33.073558 ignition[1209]: GET result: OK Feb 13 16:04:33.073720 ignition[1209]: parsing config with SHA512: 1368df2ae1d0320b76cda4c2c22a821f3b063d7b92a176ed32d5d473f58df61cc469e8cba4260e831174287923458a58e23b5a93415434a6b0721da16ecae0ea Feb 13 16:04:33.084929 unknown[1209]: fetched base config from "system" Feb 13 16:04:33.084955 unknown[1209]: fetched base config from "system" Feb 13 16:04:33.084970 unknown[1209]: fetched user config from "aws" Feb 13 16:04:33.092604 ignition[1209]: fetch: fetch complete Feb 13 16:04:33.092636 ignition[1209]: fetch: fetch passed Feb 13 16:04:33.092751 ignition[1209]: Ignition finished successfully Feb 13 16:04:33.099350 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 16:04:33.120677 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 16:04:33.151081 ignition[1216]: Ignition 2.19.0 Feb 13 16:04:33.153520 ignition[1216]: Stage: kargs Feb 13 16:04:33.154381 ignition[1216]: no configs at "/usr/lib/ignition/base.d" Feb 13 16:04:33.154413 ignition[1216]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 16:04:33.154587 ignition[1216]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 16:04:33.157468 ignition[1216]: PUT result: OK Feb 13 16:04:33.168582 ignition[1216]: kargs: kargs passed Feb 13 16:04:33.168956 ignition[1216]: Ignition finished successfully Feb 13 16:04:33.176106 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 16:04:33.186570 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 16:04:33.220740 ignition[1223]: Ignition 2.19.0 Feb 13 16:04:33.220763 ignition[1223]: Stage: disks Feb 13 16:04:33.221549 ignition[1223]: no configs at "/usr/lib/ignition/base.d" Feb 13 16:04:33.221578 ignition[1223]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 16:04:33.221748 ignition[1223]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 16:04:33.225599 ignition[1223]: PUT result: OK Feb 13 16:04:33.237386 ignition[1223]: disks: disks passed Feb 13 16:04:33.237500 ignition[1223]: Ignition finished successfully Feb 13 16:04:33.242238 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 16:04:33.246293 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Feb 13 16:04:33.248911 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 16:04:33.255519 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 16:04:33.257580 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 16:04:33.259658 systemd[1]: Reached target basic.target - Basic System. Feb 13 16:04:33.274704 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 16:04:33.321739 systemd-fsck[1231]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 16:04:33.330551 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 16:04:33.341481 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 16:04:33.443310 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 1daf3470-d909-4a02-84d2-f6d9b0a5b55c r/w with ordered data mode. Quota mode: none. Feb 13 16:04:33.444377 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 16:04:33.446940 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 16:04:33.464413 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 16:04:33.470671 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 16:04:33.474800 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 16:04:33.478247 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 16:04:33.478324 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 16:04:33.496289 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1250) Feb 13 16:04:33.503505 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c8afbf79-805d-40d9-b4c9-cafa51441c41 Feb 13 16:04:33.503578 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Feb 13 16:04:33.503606 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 13 16:04:33.505506 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 16:04:33.522208 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 16:04:33.528417 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 13 16:04:33.530430 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 16:04:33.644549 initrd-setup-root[1275]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 16:04:33.654057 initrd-setup-root[1282]: cut: /sysroot/etc/group: No such file or directory Feb 13 16:04:33.663001 initrd-setup-root[1289]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 16:04:33.671803 initrd-setup-root[1296]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 16:04:33.824699 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 16:04:33.835490 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 16:04:33.849626 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 16:04:33.865970 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 16:04:33.870292 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem c8afbf79-805d-40d9-b4c9-cafa51441c41 Feb 13 16:04:33.906361 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Feb 13 16:04:33.917757 ignition[1363]: INFO : Ignition 2.19.0 Feb 13 16:04:33.917757 ignition[1363]: INFO : Stage: mount Feb 13 16:04:33.921130 ignition[1363]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 16:04:33.921130 ignition[1363]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 16:04:33.925371 ignition[1363]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 16:04:33.928768 ignition[1363]: INFO : PUT result: OK Feb 13 16:04:33.933916 ignition[1363]: INFO : mount: mount passed Feb 13 16:04:33.933916 ignition[1363]: INFO : Ignition finished successfully Feb 13 16:04:33.937584 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 16:04:33.948533 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 16:04:33.982696 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 16:04:34.006306 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1375) Feb 13 16:04:34.009924 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c8afbf79-805d-40d9-b4c9-cafa51441c41 Feb 13 16:04:34.009989 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Feb 13 16:04:34.010016 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 13 16:04:34.017317 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 13 16:04:34.019414 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 16:04:34.059534 ignition[1392]: INFO : Ignition 2.19.0 Feb 13 16:04:34.059534 ignition[1392]: INFO : Stage: files Feb 13 16:04:34.062821 ignition[1392]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 16:04:34.062821 ignition[1392]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 16:04:34.062821 ignition[1392]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 16:04:34.070180 ignition[1392]: INFO : PUT result: OK Feb 13 16:04:34.073861 ignition[1392]: DEBUG : files: compiled without relabeling support, skipping Feb 13 16:04:34.077203 ignition[1392]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 16:04:34.077203 ignition[1392]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 16:04:34.084440 ignition[1392]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 16:04:34.087214 ignition[1392]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 16:04:34.090140 unknown[1392]: wrote ssh authorized keys file for user: core Feb 13 16:04:34.092303 ignition[1392]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 16:04:34.096806 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 13 16:04:34.096806 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Feb 13 16:04:34.196237 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 16:04:34.291399 systemd-networkd[1205]: eth0: Gained IPv6LL Feb 13 16:04:34.349606 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 13 16:04:34.353412 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 16:04:34.353412 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Feb 13 16:04:34.829168 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 13 16:04:34.947324 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 16:04:34.947324 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Feb 13 16:04:34.954406 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 16:04:34.954406 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 16:04:34.954406 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 16:04:34.954406 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 16:04:34.954406 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 16:04:34.954406 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 16:04:34.954406 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 16:04:34.954406 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 16:04:34.954406 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 16:04:34.954406 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Feb 13 16:04:34.988653 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Feb 13 16:04:34.988653 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Feb 13 16:04:34.988653 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 Feb 13 16:04:35.368762 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Feb 13 16:04:35.670196 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Feb 13 16:04:35.670196 ignition[1392]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Feb 13 16:04:35.677753 ignition[1392]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 16:04:35.677753 ignition[1392]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 16:04:35.677753 ignition[1392]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Feb 13 16:04:35.677753 ignition[1392]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Feb 13 16:04:35.677753 ignition[1392]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 16:04:35.677753 ignition[1392]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 16:04:35.677753 ignition[1392]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 16:04:35.677753 ignition[1392]: INFO : files: files passed Feb 13 16:04:35.677753 ignition[1392]: INFO : Ignition finished successfully Feb 13 16:04:35.705339 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 16:04:35.721669 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 16:04:35.728557 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 16:04:35.746090 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 16:04:35.748083 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 16:04:35.764447 initrd-setup-root-after-ignition[1420]: grep: Feb 13 16:04:35.764447 initrd-setup-root-after-ignition[1424]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 16:04:35.769725 initrd-setup-root-after-ignition[1420]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 16:04:35.769725 initrd-setup-root-after-ignition[1420]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 16:04:35.774576 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 16:04:35.784455 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 16:04:35.795567 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 16:04:35.847469 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 16:04:35.847885 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 16:04:35.855568 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 16:04:35.859471 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 16:04:35.859737 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 16:04:35.876540 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 16:04:35.903400 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 16:04:35.924683 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 16:04:35.950344 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 16:04:35.953233 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 16:04:35.968092 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 16:04:35.971559 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 16:04:35.971789 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 16:04:35.976490 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 16:04:35.984489 systemd[1]: Stopped target basic.target - Basic System. Feb 13 16:04:35.986631 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 16:04:35.992335 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 16:04:35.994904 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 16:04:36.001060 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 16:04:36.003213 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 16:04:36.005697 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 16:04:36.008000 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 16:04:36.017835 systemd[1]: Stopped target swap.target - Swaps. Feb 13 16:04:36.019533 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 16:04:36.019761 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 16:04:36.022210 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 16:04:36.024597 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 16:04:36.027029 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 16:04:36.032516 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 16:04:36.037068 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 16:04:36.037340 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 16:04:36.038313 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 16:04:36.038535 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 16:04:36.039246 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 16:04:36.039926 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 16:04:36.067656 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 16:04:36.071572 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 16:04:36.073549 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 16:04:36.073915 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 16:04:36.082203 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 16:04:36.084255 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 16:04:36.099471 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 16:04:36.103431 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 16:04:36.125320 ignition[1444]: INFO : Ignition 2.19.0 Feb 13 16:04:36.125320 ignition[1444]: INFO : Stage: umount Feb 13 16:04:36.131096 ignition[1444]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 16:04:36.131096 ignition[1444]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 16:04:36.135407 ignition[1444]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 16:04:36.139254 ignition[1444]: INFO : PUT result: OK Feb 13 16:04:36.142679 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 16:04:36.145731 ignition[1444]: INFO : umount: umount passed Feb 13 16:04:36.145731 ignition[1444]: INFO : Ignition finished successfully Feb 13 16:04:36.151728 systemd[1]: ignition-mount.service: Deactivated successfully. 
Feb 13 16:04:36.151976 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 16:04:36.155779 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 16:04:36.155877 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 16:04:36.158010 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 16:04:36.158619 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 16:04:36.163348 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 16:04:36.163437 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 16:04:36.176804 systemd[1]: Stopped target network.target - Network. Feb 13 16:04:36.178491 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 16:04:36.178588 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 16:04:36.180844 systemd[1]: Stopped target paths.target - Path Units. Feb 13 16:04:36.182540 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 16:04:36.191955 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 16:04:36.194482 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 16:04:36.200555 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 16:04:36.202453 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 16:04:36.202533 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 16:04:36.204483 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 16:04:36.204556 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 16:04:36.206544 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 16:04:36.206627 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 16:04:36.208689 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 16:04:36.208801 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 16:04:36.213387 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 16:04:36.225301 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 16:04:36.237360 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 16:04:36.238013 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 16:04:36.241860 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 16:04:36.242037 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 16:04:36.253375 systemd-networkd[1205]: eth0: DHCPv6 lease lost Feb 13 16:04:36.257370 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 16:04:36.260420 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 16:04:36.265126 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 16:04:36.265442 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 16:04:36.272553 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 16:04:36.272659 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 16:04:36.297440 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 16:04:36.299656 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 16:04:36.299779 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Feb 13 16:04:36.302969 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 16:04:36.303067 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 16:04:36.305864 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 16:04:36.306898 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 16:04:36.309197 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 16:04:36.309361 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 16:04:36.313958 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 16:04:36.352963 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 16:04:36.353332 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 16:04:36.356701 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 16:04:36.356829 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 16:04:36.361182 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 16:04:36.361307 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 16:04:36.365564 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 16:04:36.365660 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 16:04:36.368450 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 16:04:36.368534 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 16:04:36.375775 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 16:04:36.375870 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 16:04:36.396534 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 16:04:36.398821 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 16:04:36.398935 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 16:04:36.401569 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 16:04:36.401650 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 16:04:36.404225 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 16:04:36.404321 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 16:04:36.406909 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 16:04:36.406991 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 16:04:36.410200 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 16:04:36.410661 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 16:04:36.460903 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 16:04:36.462550 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 16:04:36.466523 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 16:04:36.485541 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 16:04:36.502706 systemd[1]: Switching root. Feb 13 16:04:36.547039 systemd-journald[250]: Journal stopped Feb 13 16:04:38.505616 systemd-journald[250]: Received SIGTERM from PID 1 (systemd). 
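Every entry in this capture shares one console shape, "Mon DD HH:MM:SS.ffffff source[pid]: message". A throwaway parser for captures like this one (the pattern is an approximation of what is printed here, not a documented format; kernel lines omit the [pid], and a few sources such as "(sd-merge)" add punctuation):

    import re

    ENTRY = re.compile(
        r"(?P<ts>[A-Z][a-z]{2} +\d{1,2} \d{2}:\d{2}:\d{2}\.\d{6}) "
        r"(?P<source>[\w@.()\-]+)(?:\[(?P<pid>\d+)\])?: (?P<msg>.*)"
    )

    sample = "Feb 13 16:04:36.547039 systemd-journald[250]: Journal stopped"
    m = ENTRY.match(sample)
    if m:
        print(m.group("ts"), "|", m.group("source"), "|", m.group("msg"))
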
Feb 13 16:04:38.505743 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 16:04:38.505787 kernel: SELinux: policy capability open_perms=1 Feb 13 16:04:38.505819 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 16:04:38.505856 kernel: SELinux: policy capability always_check_network=0 Feb 13 16:04:38.505895 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 16:04:38.505927 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 16:04:38.505957 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 16:04:38.505989 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 16:04:38.506020 kernel: audit: type=1403 audit(1739462676.917:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 16:04:38.506054 systemd[1]: Successfully loaded SELinux policy in 51.996ms. Feb 13 16:04:38.506105 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.074ms. Feb 13 16:04:38.506145 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 16:04:38.506178 systemd[1]: Detected virtualization amazon. Feb 13 16:04:38.506209 systemd[1]: Detected architecture arm64. Feb 13 16:04:38.506244 systemd[1]: Detected first boot. Feb 13 16:04:38.506306 systemd[1]: Initializing machine ID from VM UUID. Feb 13 16:04:38.506343 zram_generator::config[1486]: No configuration found. Feb 13 16:04:38.506377 systemd[1]: Populated /etc with preset unit settings. Feb 13 16:04:38.506410 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 16:04:38.506444 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 16:04:38.506481 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 16:04:38.506515 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 16:04:38.506549 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 16:04:38.506584 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 16:04:38.506616 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 16:04:38.506648 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 16:04:38.506681 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 16:04:38.506713 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 16:04:38.506747 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 16:04:38.506781 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 16:04:38.506812 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 16:04:38.506842 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 16:04:38.506874 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 16:04:38.506906 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
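The systemd 255 feature string above ("+PAM +AUDIT ... -SYSVINIT") encodes compile-time options as +NAME/-NAME tokens. Splitting it back out is a one-liner each way (the string is copied from the log line above; the variable names are mine):

    features = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT "
                "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 "
                "-IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY "
                "-P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD "
                "-BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT "
                "default-hierarchy=unified")

    enabled = {f[1:] for f in features.split() if f.startswith("+")}
    disabled = {f[1:] for f in features.split() if f.startswith("-")}
    print("SELINUX" in enabled, "APPARMOR" in disabled)  # True True
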
Feb 13 16:04:38.506939 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 16:04:38.506969 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 16:04:38.507001 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 16:04:38.507036 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 16:04:38.507066 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 16:04:38.507099 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 16:04:38.507130 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 16:04:38.507161 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 16:04:38.507193 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 16:04:38.507225 systemd[1]: Reached target slices.target - Slice Units. Feb 13 16:04:38.513319 systemd[1]: Reached target swap.target - Swaps. Feb 13 16:04:38.513399 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 16:04:38.513432 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 16:04:38.513463 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 16:04:38.513494 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 16:04:38.513528 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 16:04:38.513561 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 16:04:38.513594 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 16:04:38.513625 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 16:04:38.513655 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 16:04:38.513692 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 16:04:38.513722 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 16:04:38.513752 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 16:04:38.513783 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 16:04:38.513816 systemd[1]: Reached target machines.target - Containers. Feb 13 16:04:38.513846 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 16:04:38.513876 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 16:04:38.513907 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 16:04:38.513942 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 16:04:38.513974 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 16:04:38.514008 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 16:04:38.514038 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 16:04:38.514071 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 16:04:38.514103 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Feb 13 16:04:38.514136 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 16:04:38.514179 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 16:04:38.514209 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 16:04:38.514243 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 16:04:38.519868 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 16:04:38.519913 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 16:04:38.519948 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 16:04:38.519981 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 16:04:38.520012 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 16:04:38.520054 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 16:04:38.520086 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 16:04:38.520119 systemd[1]: Stopped verity-setup.service. Feb 13 16:04:38.520157 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 16:04:38.520189 kernel: fuse: init (API version 7.39) Feb 13 16:04:38.520220 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 16:04:38.520250 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 16:04:38.520313 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 16:04:38.520344 kernel: ACPI: bus type drm_connector registered Feb 13 16:04:38.520374 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 16:04:38.520404 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 16:04:38.520434 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 16:04:38.520470 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 16:04:38.520500 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 16:04:38.520530 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 16:04:38.520560 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 16:04:38.520594 kernel: loop: module loaded Feb 13 16:04:38.520623 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 16:04:38.520655 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 16:04:38.520727 systemd-journald[1571]: Collecting audit messages is disabled. Feb 13 16:04:38.520779 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 16:04:38.520811 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 16:04:38.520844 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 16:04:38.520875 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 16:04:38.520910 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 16:04:38.520941 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 16:04:38.520974 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 16:04:38.521004 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Feb 13 16:04:38.521033 systemd-journald[1571]: Journal started Feb 13 16:04:38.521086 systemd-journald[1571]: Runtime Journal (/run/log/journal/ec28f0561b67766a667448c737aa96aa) is 8.0M, max 75.3M, 67.3M free. Feb 13 16:04:37.924758 systemd[1]: Queued start job for default target multi-user.target. Feb 13 16:04:38.534192 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 16:04:37.953874 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Feb 13 16:04:37.954686 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 16:04:38.528885 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 16:04:38.533044 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 16:04:38.548394 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 16:04:38.560812 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 16:04:38.565449 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 16:04:38.565505 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 16:04:38.572674 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 16:04:38.584054 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 16:04:38.595526 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 16:04:38.598631 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 16:04:38.608707 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 16:04:38.616154 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 16:04:38.618481 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 16:04:38.623652 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 16:04:38.625887 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 16:04:38.631721 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 16:04:38.645721 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 16:04:38.653643 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 16:04:38.663005 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 16:04:38.665721 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 16:04:38.668650 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 16:04:38.673939 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 16:04:38.723770 systemd-journald[1571]: Time spent on flushing to /var/log/journal/ec28f0561b67766a667448c737aa96aa is 125.121ms for 913 entries. Feb 13 16:04:38.723770 systemd-journald[1571]: System Journal (/var/log/journal/ec28f0561b67766a667448c737aa96aa) is 8.0M, max 195.6M, 187.6M free. Feb 13 16:04:38.869234 systemd-journald[1571]: Received client request to flush runtime journal. 
Feb 13 16:04:38.869451 kernel: loop0: detected capacity change from 0 to 114328 Feb 13 16:04:38.869492 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 16:04:38.794580 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 16:04:38.797783 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 16:04:38.801823 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 16:04:38.815677 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 16:04:38.823589 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 16:04:38.881005 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 16:04:38.897095 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 16:04:38.920753 udevadm[1626]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 16:04:38.924306 kernel: loop1: detected capacity change from 0 to 52536 Feb 13 16:04:38.926830 systemd-tmpfiles[1599]: ACLs are not supported, ignoring. Feb 13 16:04:38.926871 systemd-tmpfiles[1599]: ACLs are not supported, ignoring. Feb 13 16:04:38.928886 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 16:04:38.936787 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 16:04:38.945939 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 16:04:38.961636 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 16:04:39.045332 kernel: loop2: detected capacity change from 0 to 189592 Feb 13 16:04:39.070563 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 16:04:39.081700 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 16:04:39.144681 systemd-tmpfiles[1639]: ACLs are not supported, ignoring. Feb 13 16:04:39.145385 systemd-tmpfiles[1639]: ACLs are not supported, ignoring. Feb 13 16:04:39.154191 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 16:04:39.233452 kernel: loop3: detected capacity change from 0 to 114432 Feb 13 16:04:39.285300 kernel: loop4: detected capacity change from 0 to 114328 Feb 13 16:04:39.319296 kernel: loop5: detected capacity change from 0 to 52536 Feb 13 16:04:39.335392 kernel: loop6: detected capacity change from 0 to 189592 Feb 13 16:04:39.376316 kernel: loop7: detected capacity change from 0 to 114432 Feb 13 16:04:39.400898 (sd-merge)[1644]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Feb 13 16:04:39.402394 (sd-merge)[1644]: Merged extensions into '/usr'. Feb 13 16:04:39.411968 systemd[1]: Reloading requested from client PID 1598 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 16:04:39.412005 systemd[1]: Reloading... Feb 13 16:04:39.551299 ldconfig[1593]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 16:04:39.609807 zram_generator::config[1667]: No configuration found. Feb 13 16:04:39.879834 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Feb 13 16:04:39.999799 systemd[1]: Reloading finished in 586 ms. Feb 13 16:04:40.039325 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 16:04:40.042090 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 16:04:40.044958 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 16:04:40.066602 systemd[1]: Starting ensure-sysext.service... Feb 13 16:04:40.077590 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 16:04:40.082689 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 16:04:40.106221 systemd[1]: Reloading requested from client PID 1724 ('systemctl') (unit ensure-sysext.service)... Feb 13 16:04:40.106279 systemd[1]: Reloading... Feb 13 16:04:40.135011 systemd-tmpfiles[1725]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 16:04:40.135703 systemd-tmpfiles[1725]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 16:04:40.138203 systemd-tmpfiles[1725]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 16:04:40.138991 systemd-tmpfiles[1725]: ACLs are not supported, ignoring. Feb 13 16:04:40.139307 systemd-tmpfiles[1725]: ACLs are not supported, ignoring. Feb 13 16:04:40.146980 systemd-tmpfiles[1725]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 16:04:40.147007 systemd-tmpfiles[1725]: Skipping /boot Feb 13 16:04:40.176437 systemd-tmpfiles[1725]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 16:04:40.176466 systemd-tmpfiles[1725]: Skipping /boot Feb 13 16:04:40.191860 systemd-udevd[1726]: Using default interface naming scheme 'v255'. Feb 13 16:04:40.322341 zram_generator::config[1760]: No configuration found. Feb 13 16:04:40.404857 (udev-worker)[1764]: Network interface NamePolicy= disabled on kernel command line. Feb 13 16:04:40.691626 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 16:04:40.743515 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1763) Feb 13 16:04:40.854504 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 16:04:40.855814 systemd[1]: Reloading finished in 748 ms. Feb 13 16:04:40.884616 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 16:04:40.897448 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 16:04:40.980866 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 16:04:40.988030 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 16:04:40.992871 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 16:04:41.002834 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 16:04:41.008894 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 16:04:41.022438 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
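The sd-merge lines above ("Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'" and "Merged extensions into '/usr'") pick up the /etc/extensions/kubernetes.raw link that Ignition wrote during the files stage (op(a) earlier in this log). A sketch of that activation layout, using the paths from this log (the script is purely illustrative, not a Flatcar or systemd tool; the merge itself is performed by systemd-sysext):

    import os

    # Paths taken from the Ignition op(a)/op(b) entries earlier in this log.
    target = "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
    link = "/etc/extensions/kubernetes.raw"

    os.makedirs(os.path.dirname(link), exist_ok=True)
    if not os.path.islink(link):
        os.symlink(target, link)  # equivalent of Ignition's "writing link" op
    # systemd-sysext.service (the "Merged extensions into '/usr'" lines above)
    # then overlays the image's /usr tree onto the host; `systemd-sysext merge`
    # does the same on demand.
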
Feb 13 16:04:41.025696 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 16:04:41.033740 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 16:04:41.043851 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 16:04:41.053814 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 16:04:41.063240 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 16:04:41.072082 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 16:04:41.081590 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 16:04:41.085048 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 16:04:41.087368 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 16:04:41.130059 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 16:04:41.130456 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 16:04:41.162317 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Feb 13 16:04:41.170124 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 16:04:41.170517 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 16:04:41.187116 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 16:04:41.195810 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 16:04:41.210868 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 16:04:41.222749 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 16:04:41.228779 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 16:04:41.239825 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 16:04:41.241996 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 16:04:41.245795 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 16:04:41.248189 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 16:04:41.257204 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 16:04:41.265413 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 16:04:41.270157 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 16:04:41.275793 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 16:04:41.277381 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 16:04:41.286342 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 16:04:41.287582 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 16:04:41.298981 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 16:04:41.301067 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 16:04:41.305430 systemd[1]: Finished ensure-sysext.service. Feb 13 16:04:41.306567 lvm[1946]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Feb 13 16:04:41.323812 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 16:04:41.337744 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 16:04:41.339690 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 16:04:41.341224 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 16:04:41.354067 augenrules[1965]: No rules Feb 13 16:04:41.360044 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 16:04:41.375934 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 16:04:41.376340 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 16:04:41.379489 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 16:04:41.387158 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 16:04:41.403786 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 16:04:41.406294 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 16:04:41.408383 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 16:04:41.430317 lvm[1974]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 16:04:41.436451 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 16:04:41.472873 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 16:04:41.482379 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 16:04:41.494991 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 16:04:41.600173 systemd-networkd[1929]: lo: Link UP Feb 13 16:04:41.600765 systemd-networkd[1929]: lo: Gained carrier Feb 13 16:04:41.603907 systemd-networkd[1929]: Enumeration completed Feb 13 16:04:41.604106 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 16:04:41.606396 systemd-resolved[1933]: Positive Trust Anchors: Feb 13 16:04:41.606420 systemd-resolved[1933]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 16:04:41.606483 systemd-resolved[1933]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 16:04:41.607185 systemd-networkd[1929]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 16:04:41.607205 systemd-networkd[1929]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 13 16:04:41.610373 systemd-networkd[1929]: eth0: Link UP Feb 13 16:04:41.610698 systemd-networkd[1929]: eth0: Gained carrier Feb 13 16:04:41.610732 systemd-networkd[1929]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 16:04:41.615686 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 16:04:41.621386 systemd-networkd[1929]: eth0: DHCPv4 address 172.31.19.223/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 13 16:04:41.628816 systemd-resolved[1933]: Defaulting to hostname 'linux'. Feb 13 16:04:41.636743 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 16:04:41.639289 systemd[1]: Reached target network.target - Network. Feb 13 16:04:41.641558 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 16:04:41.643860 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 16:04:41.646077 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 16:04:41.653317 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 16:04:41.655974 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 16:04:41.658454 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 16:04:41.660923 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 16:04:41.663377 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 16:04:41.663431 systemd[1]: Reached target paths.target - Path Units. Feb 13 16:04:41.665454 systemd[1]: Reached target timers.target - Timer Units. Feb 13 16:04:41.668758 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 16:04:41.673567 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 16:04:41.691591 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 16:04:41.694858 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 16:04:41.697388 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 16:04:41.699570 systemd[1]: Reached target basic.target - Basic System. Feb 13 16:04:41.701537 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 16:04:41.701592 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 16:04:41.721197 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 16:04:41.725995 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 16:04:41.731641 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 16:04:41.742504 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 16:04:41.747568 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 16:04:41.750461 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 16:04:41.755182 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
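The DHCPv4 lease above (172.31.19.223/20 with gateway 172.31.16.1) can be sanity-checked with stdlib arithmetic: the gateway is the first host of the /20. A worked example on exactly the logged values:

    import ipaddress

    iface = ipaddress.ip_interface("172.31.19.223/20")
    print(iface.network)                # 172.31.16.0/20
    print(next(iface.network.hosts()))  # 172.31.16.1 -> matches the lease
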
Feb 13 16:04:41.765061 systemd[1]: Started ntpd.service - Network Time Service. Feb 13 16:04:41.791329 jq[1994]: false Feb 13 16:04:41.798817 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 16:04:41.810301 systemd[1]: Starting setup-oem.service - Setup OEM... Feb 13 16:04:41.823954 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 16:04:41.833831 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 16:04:41.845879 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 16:04:41.850772 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 16:04:41.853107 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 16:04:41.857596 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 16:04:41.868541 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 16:04:41.880997 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 16:04:41.884521 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 16:04:41.961187 ntpd[1997]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 13:58:42 UTC 2025 (1): Starting Feb 13 16:04:41.961250 ntpd[1997]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 16:04:41.961811 ntpd[1997]: 13 Feb 16:04:41 ntpd[1997]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 13:58:42 UTC 2025 (1): Starting Feb 13 16:04:41.961811 ntpd[1997]: 13 Feb 16:04:41 ntpd[1997]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 16:04:41.961811 ntpd[1997]: 13 Feb 16:04:41 ntpd[1997]: ---------------------------------------------------- Feb 13 16:04:41.961811 ntpd[1997]: 13 Feb 16:04:41 ntpd[1997]: ntp-4 is maintained by Network Time Foundation, Feb 13 16:04:41.961811 ntpd[1997]: 13 Feb 16:04:41 ntpd[1997]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 16:04:41.961811 ntpd[1997]: 13 Feb 16:04:41 ntpd[1997]: corporation. Support and training for ntp-4 are Feb 13 16:04:41.961811 ntpd[1997]: 13 Feb 16:04:41 ntpd[1997]: available at https://www.nwtime.org/support Feb 13 16:04:41.961811 ntpd[1997]: 13 Feb 16:04:41 ntpd[1997]: ---------------------------------------------------- Feb 13 16:04:41.961350 ntpd[1997]: ---------------------------------------------------- Feb 13 16:04:41.961372 ntpd[1997]: ntp-4 is maintained by Network Time Foundation, Feb 13 16:04:41.961392 ntpd[1997]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 16:04:41.961411 ntpd[1997]: corporation. Support and training for ntp-4 are Feb 13 16:04:41.961430 ntpd[1997]: available at https://www.nwtime.org/support Feb 13 16:04:41.961450 ntpd[1997]: ---------------------------------------------------- Feb 13 16:04:41.967644 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Feb 13 16:04:41.973108 ntpd[1997]: proto: precision = 0.108 usec (-23) Feb 13 16:04:41.994838 ntpd[1997]: 13 Feb 16:04:41 ntpd[1997]: proto: precision = 0.108 usec (-23) Feb 13 16:04:41.994838 ntpd[1997]: 13 Feb 16:04:41 ntpd[1997]: basedate set to 2025-02-01 Feb 13 16:04:41.994838 ntpd[1997]: 13 Feb 16:04:41 ntpd[1997]: gps base set to 2025-02-02 (week 2352) Feb 13 16:04:41.994838 ntpd[1997]: 13 Feb 16:04:41 ntpd[1997]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 16:04:41.994838 ntpd[1997]: 13 Feb 16:04:41 ntpd[1997]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 16:04:41.994838 ntpd[1997]: 13 Feb 16:04:41 ntpd[1997]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 16:04:41.994838 ntpd[1997]: 13 Feb 16:04:41 ntpd[1997]: Listen normally on 3 eth0 172.31.19.223:123 Feb 13 16:04:41.994838 ntpd[1997]: 13 Feb 16:04:41 ntpd[1997]: Listen normally on 4 lo [::1]:123 Feb 13 16:04:41.994838 ntpd[1997]: 13 Feb 16:04:41 ntpd[1997]: bind(21) AF_INET6 fe80::4ef:18ff:fec2:cf9b%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 16:04:41.994838 ntpd[1997]: 13 Feb 16:04:41 ntpd[1997]: unable to create socket on eth0 (5) for fe80::4ef:18ff:fec2:cf9b%2#123 Feb 13 16:04:41.994838 ntpd[1997]: 13 Feb 16:04:41 ntpd[1997]: failed to init interface for address fe80::4ef:18ff:fec2:cf9b%2 Feb 13 16:04:41.994838 ntpd[1997]: 13 Feb 16:04:41 ntpd[1997]: Listening on routing socket on fd #21 for interface updates Feb 13 16:04:41.970370 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 16:04:41.977783 ntpd[1997]: basedate set to 2025-02-01 Feb 13 16:04:41.980017 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 16:04:41.977815 ntpd[1997]: gps base set to 2025-02-02 (week 2352) Feb 13 16:04:41.984511 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 16:04:41.979357 dbus-daemon[1993]: [system] SELinux support is enabled Feb 13 16:04:42.021054 extend-filesystems[1995]: Found loop4 Feb 13 16:04:42.021054 extend-filesystems[1995]: Found loop5 Feb 13 16:04:42.021054 extend-filesystems[1995]: Found loop6 Feb 13 16:04:42.021054 extend-filesystems[1995]: Found loop7 Feb 13 16:04:42.021054 extend-filesystems[1995]: Found nvme0n1 Feb 13 16:04:42.021054 extend-filesystems[1995]: Found nvme0n1p1 Feb 13 16:04:42.021054 extend-filesystems[1995]: Found nvme0n1p2 Feb 13 16:04:42.021054 extend-filesystems[1995]: Found nvme0n1p3 Feb 13 16:04:42.021054 extend-filesystems[1995]: Found usr Feb 13 16:04:42.021054 extend-filesystems[1995]: Found nvme0n1p4 Feb 13 16:04:42.021054 extend-filesystems[1995]: Found nvme0n1p6 Feb 13 16:04:42.021054 extend-filesystems[1995]: Found nvme0n1p7 Feb 13 16:04:42.021054 extend-filesystems[1995]: Found nvme0n1p9 Feb 13 16:04:42.007190 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 16:04:41.987162 ntpd[1997]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 16:04:42.118841 jq[2007]: true Feb 13 16:04:42.119013 tar[2011]: linux-arm64/helm Feb 13 16:04:42.137563 extend-filesystems[1995]: Checking size of /dev/nvme0n1p9 Feb 13 16:04:42.141444 ntpd[1997]: 13 Feb 16:04:42 ntpd[1997]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 16:04:42.141444 ntpd[1997]: 13 Feb 16:04:42 ntpd[1997]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 16:04:42.007666 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Feb 13 16:04:41.987242 ntpd[1997]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 16:04:42.021139 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 16:04:41.987573 ntpd[1997]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 16:04:42.021238 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 16:04:41.987636 ntpd[1997]: Listen normally on 3 eth0 172.31.19.223:123 Feb 13 16:04:42.026489 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 16:04:41.987707 ntpd[1997]: Listen normally on 4 lo [::1]:123 Feb 13 16:04:42.026531 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 16:04:41.987779 ntpd[1997]: bind(21) AF_INET6 fe80::4ef:18ff:fec2:cf9b%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 16:04:42.047558 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Feb 13 16:04:41.987816 ntpd[1997]: unable to create socket on eth0 (5) for fe80::4ef:18ff:fec2:cf9b%2#123 Feb 13 16:04:42.128855 (ntainerd)[2030]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 16:04:41.987844 ntpd[1997]: failed to init interface for address fe80::4ef:18ff:fec2:cf9b%2 Feb 13 16:04:42.167470 extend-filesystems[1995]: Resized partition /dev/nvme0n1p9 Feb 13 16:04:41.987894 ntpd[1997]: Listening on routing socket on fd #21 for interface updates Feb 13 16:04:42.027244 ntpd[1997]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 16:04:42.027734 ntpd[1997]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 16:04:42.031509 dbus-daemon[1993]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1929 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 13 16:04:42.032858 dbus-daemon[1993]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 16:04:42.199528 update_engine[2005]: I20250213 16:04:42.191651 2005 main.cc:92] Flatcar Update Engine starting Feb 13 16:04:42.200596 systemd[1]: Finished setup-oem.service - Setup OEM. Feb 13 16:04:42.209491 extend-filesystems[2046]: resize2fs 1.47.1 (20-May-2024) Feb 13 16:04:42.222934 update_engine[2005]: I20250213 16:04:42.222213 2005 update_check_scheduler.cc:74] Next update check in 8m24s Feb 13 16:04:42.222618 systemd[1]: Started update-engine.service - Update Engine. Feb 13 16:04:42.241034 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
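ntpd above binds its sockets (the IPv6 link-local bind fails because eth0 has not yet gained its IPv6LL address at this point) and reports TIME_ERROR until it can exchange packets with a server. For contrast, a bare-bones SNTP query fits in a few lines; this sketch is unrelated to ntpd's implementation, and the server name is a placeholder:

    import socket, struct, time

    NTP_EPOCH_OFFSET = 2_208_988_800  # seconds between the 1900 and 1970 epochs

    def sntp_time(server: str = "pool.ntp.org") -> float:
        pkt = b"\x1b" + 47 * b"\0"  # LI=0, VN=3, Mode=3 (client request)
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.settimeout(2)
            s.sendto(pkt, (server, 123))
            data, _ = s.recvfrom(512)
        secs = struct.unpack("!I", data[40:44])[0]  # transmit timestamp, seconds
        return secs - NTP_EPOCH_OFFSET

    print(time.ctime(sntp_time()))
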
Feb 13 16:04:42.243624 jq[2039]: true Feb 13 16:04:42.244847 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 13 16:04:42.336109 coreos-metadata[1992]: Feb 13 16:04:42.335 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 16:04:42.345832 coreos-metadata[1992]: Feb 13 16:04:42.345 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Feb 13 16:04:42.350218 coreos-metadata[1992]: Feb 13 16:04:42.349 INFO Fetch successful Feb 13 16:04:42.350218 coreos-metadata[1992]: Feb 13 16:04:42.349 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Feb 13 16:04:42.353298 coreos-metadata[1992]: Feb 13 16:04:42.352 INFO Fetch successful Feb 13 16:04:42.353298 coreos-metadata[1992]: Feb 13 16:04:42.353 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Feb 13 16:04:42.368493 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 13 16:04:42.368616 coreos-metadata[1992]: Feb 13 16:04:42.361 INFO Fetch successful Feb 13 16:04:42.368616 coreos-metadata[1992]: Feb 13 16:04:42.361 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Feb 13 16:04:42.368616 coreos-metadata[1992]: Feb 13 16:04:42.363 INFO Fetch successful Feb 13 16:04:42.368616 coreos-metadata[1992]: Feb 13 16:04:42.363 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Feb 13 16:04:42.369346 coreos-metadata[1992]: Feb 13 16:04:42.368 INFO Fetch failed with 404: resource not found Feb 13 16:04:42.369346 coreos-metadata[1992]: Feb 13 16:04:42.369 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Feb 13 16:04:42.369346 coreos-metadata[1992]: Feb 13 16:04:42.369 INFO Fetch successful Feb 13 16:04:42.369346 coreos-metadata[1992]: Feb 13 16:04:42.369 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Feb 13 16:04:42.384782 coreos-metadata[1992]: Feb 13 16:04:42.369 INFO Fetch successful Feb 13 16:04:42.384782 coreos-metadata[1992]: Feb 13 16:04:42.370 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Feb 13 16:04:42.384782 coreos-metadata[1992]: Feb 13 16:04:42.375 INFO Fetch successful Feb 13 16:04:42.384782 coreos-metadata[1992]: Feb 13 16:04:42.375 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Feb 13 16:04:42.384782 coreos-metadata[1992]: Feb 13 16:04:42.378 INFO Fetch successful Feb 13 16:04:42.384782 coreos-metadata[1992]: Feb 13 16:04:42.378 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Feb 13 16:04:42.384782 coreos-metadata[1992]: Feb 13 16:04:42.382 INFO Fetch successful Feb 13 16:04:42.385121 extend-filesystems[2046]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 13 16:04:42.385121 extend-filesystems[2046]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 16:04:42.385121 extend-filesystems[2046]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Feb 13 16:04:42.434986 extend-filesystems[1995]: Resized filesystem in /dev/nvme0n1p9 Feb 13 16:04:42.387393 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 16:04:42.387747 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
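The EXT4 messages above record the online resize of nvme0n1p9 from 553472 to 1489915 blocks at a 4k block size; quick arithmetic on the logged numbers:

    BLOCK = 4096
    old, new = 553_472, 1_489_915
    print(f"{old * BLOCK / 2**30:.2f} GiB -> {new * BLOCK / 2**30:.2f} GiB")
    # 2.11 GiB -> 5.68 GiB
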
Feb 13 16:04:42.485578 systemd-logind[2004]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 16:04:42.488710 systemd-logind[2004]: Watching system buttons on /dev/input/event1 (Sleep Button) Feb 13 16:04:42.489629 systemd-logind[2004]: New seat seat0. Feb 13 16:04:42.498525 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 16:04:42.503562 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 16:04:42.523777 bash[2079]: Updated "/home/core/.ssh/authorized_keys" Feb 13 16:04:42.509977 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 16:04:42.527488 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 16:04:42.540973 systemd[1]: Starting sshkeys.service... Feb 13 16:04:42.548298 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1763) Feb 13 16:04:42.582760 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 16:04:42.616024 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 16:04:42.735988 dbus-daemon[1993]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 16:04:42.738386 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Feb 13 16:04:42.749245 dbus-daemon[1993]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2036 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 16:04:42.759546 systemd[1]: Starting polkit.service - Authorization Manager... Feb 13 16:04:42.788280 coreos-metadata[2093]: Feb 13 16:04:42.788 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 16:04:42.796451 coreos-metadata[2093]: Feb 13 16:04:42.796 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Feb 13 16:04:42.800858 coreos-metadata[2093]: Feb 13 16:04:42.800 INFO Fetch successful Feb 13 16:04:42.800858 coreos-metadata[2093]: Feb 13 16:04:42.800 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 13 16:04:42.803493 systemd-networkd[1929]: eth0: Gained IPv6LL Feb 13 16:04:42.804702 coreos-metadata[2093]: Feb 13 16:04:42.804 INFO Fetch successful Feb 13 16:04:42.817712 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 16:04:42.818434 unknown[2093]: wrote ssh authorized keys file for user: core Feb 13 16:04:42.824569 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 16:04:42.836543 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Feb 13 16:04:42.849087 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:04:42.868917 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 16:04:42.944595 polkitd[2112]: Started polkitd version 121 Feb 13 16:04:43.008305 update-ssh-keys[2122]: Updated "/home/core/.ssh/authorized_keys" Feb 13 16:04:43.012190 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 16:04:43.029356 systemd[1]: Finished sshkeys.service. 
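Both coreos-metadata units above (instance metadata at 16:04:42.33, SSH keys at 16:04:42.78) speak IMDSv2 to 169.254.169.254: a PUT to /latest/api/token first, then token-authenticated GETs under the 2021-01-03 path. The single 404 on meta-data/ipv6 just means the instance has no IPv6 address assigned. The same fetches are reproducible with curl from the instance (a sketch):

    # IMDSv2: obtain a short-lived session token, then pass it on every request.
    TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
      -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
    curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
      http://169.254.169.254/2021-01-03/meta-data/instance-id
    # The key that ended up in /home/core/.ssh/authorized_keys:
    curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
      http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key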
Feb 13 16:04:43.026211 polkitd[2112]: Loading rules from directory /etc/polkit-1/rules.d Feb 13 16:04:43.026393 polkitd[2112]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 13 16:04:43.037419 polkitd[2112]: Finished loading, compiling and executing 2 rules Feb 13 16:04:43.044173 dbus-daemon[1993]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 13 16:04:43.044506 systemd[1]: Started polkit.service - Authorization Manager. Feb 13 16:04:43.045239 polkitd[2112]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 16:04:43.084626 amazon-ssm-agent[2119]: Initializing new seelog logger Feb 13 16:04:43.098247 amazon-ssm-agent[2119]: New Seelog Logger Creation Complete Feb 13 16:04:43.098247 amazon-ssm-agent[2119]: 2025/02/13 16:04:43 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 16:04:43.098247 amazon-ssm-agent[2119]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 16:04:43.098247 amazon-ssm-agent[2119]: 2025/02/13 16:04:43 processing appconfig overrides Feb 13 16:04:43.098247 amazon-ssm-agent[2119]: 2025/02/13 16:04:43 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 16:04:43.098247 amazon-ssm-agent[2119]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 16:04:43.098247 amazon-ssm-agent[2119]: 2025/02/13 16:04:43 processing appconfig overrides Feb 13 16:04:43.106687 amazon-ssm-agent[2119]: 2025-02-13 16:04:43 INFO Proxy environment variables: Feb 13 16:04:43.111696 amazon-ssm-agent[2119]: 2025/02/13 16:04:43 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 16:04:43.111696 amazon-ssm-agent[2119]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 16:04:43.114288 amazon-ssm-agent[2119]: 2025/02/13 16:04:43 processing appconfig overrides Feb 13 16:04:43.133989 amazon-ssm-agent[2119]: 2025/02/13 16:04:43 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 16:04:43.133989 amazon-ssm-agent[2119]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 16:04:43.133989 amazon-ssm-agent[2119]: 2025/02/13 16:04:43 processing appconfig overrides Feb 13 16:04:43.141209 locksmithd[2047]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 16:04:43.146375 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 16:04:43.184695 systemd-hostnamed[2036]: Hostname set to (transient) Feb 13 16:04:43.186233 systemd-resolved[1933]: System hostname changed to 'ip-172-31-19-223'. Feb 13 16:04:43.213157 amazon-ssm-agent[2119]: 2025-02-13 16:04:43 INFO http_proxy: Feb 13 16:04:43.226469 containerd[2030]: time="2025-02-13T16:04:43.226337637Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 16:04:43.318851 amazon-ssm-agent[2119]: 2025-02-13 16:04:43 INFO no_proxy: Feb 13 16:04:43.380854 containerd[2030]: time="2025-02-13T16:04:43.380750145Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 16:04:43.390375 containerd[2030]: time="2025-02-13T16:04:43.389250165Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 16:04:43.390375 containerd[2030]: time="2025-02-13T16:04:43.389361237Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 16:04:43.390375 containerd[2030]: time="2025-02-13T16:04:43.389397501Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 16:04:43.390375 containerd[2030]: time="2025-02-13T16:04:43.389689965Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 16:04:43.390375 containerd[2030]: time="2025-02-13T16:04:43.389722569Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 16:04:43.390375 containerd[2030]: time="2025-02-13T16:04:43.389851101Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 16:04:43.390375 containerd[2030]: time="2025-02-13T16:04:43.389881065Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 16:04:43.390375 containerd[2030]: time="2025-02-13T16:04:43.390161505Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 16:04:43.390375 containerd[2030]: time="2025-02-13T16:04:43.390196113Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 16:04:43.390375 containerd[2030]: time="2025-02-13T16:04:43.390227229Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 16:04:43.393904 containerd[2030]: time="2025-02-13T16:04:43.390252201Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 16:04:43.393904 containerd[2030]: time="2025-02-13T16:04:43.392996673Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 16:04:43.396947 containerd[2030]: time="2025-02-13T16:04:43.394461777Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 16:04:43.396947 containerd[2030]: time="2025-02-13T16:04:43.396522393Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 16:04:43.396947 containerd[2030]: time="2025-02-13T16:04:43.396570945Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 16:04:43.396947 containerd[2030]: time="2025-02-13T16:04:43.396786117Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Feb 13 16:04:43.396947 containerd[2030]: time="2025-02-13T16:04:43.396887481Z" level=info msg="metadata content store policy set" policy=shared Feb 13 16:04:43.410312 containerd[2030]: time="2025-02-13T16:04:43.408051310Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 16:04:43.410312 containerd[2030]: time="2025-02-13T16:04:43.408162754Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 16:04:43.410312 containerd[2030]: time="2025-02-13T16:04:43.408200866Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 16:04:43.410312 containerd[2030]: time="2025-02-13T16:04:43.408242110Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 16:04:43.410312 containerd[2030]: time="2025-02-13T16:04:43.408330718Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 16:04:43.410312 containerd[2030]: time="2025-02-13T16:04:43.408637282Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 16:04:43.410312 containerd[2030]: time="2025-02-13T16:04:43.409044526Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 16:04:43.410312 containerd[2030]: time="2025-02-13T16:04:43.409347586Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 16:04:43.410312 containerd[2030]: time="2025-02-13T16:04:43.409393090Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 16:04:43.410312 containerd[2030]: time="2025-02-13T16:04:43.409426966Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 16:04:43.410312 containerd[2030]: time="2025-02-13T16:04:43.409461574Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 16:04:43.410312 containerd[2030]: time="2025-02-13T16:04:43.409496962Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 16:04:43.410312 containerd[2030]: time="2025-02-13T16:04:43.409528258Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 16:04:43.410312 containerd[2030]: time="2025-02-13T16:04:43.409559506Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 16:04:43.410976 containerd[2030]: time="2025-02-13T16:04:43.409591006Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 16:04:43.410976 containerd[2030]: time="2025-02-13T16:04:43.409623346Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 16:04:43.410976 containerd[2030]: time="2025-02-13T16:04:43.409652566Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 16:04:43.410976 containerd[2030]: time="2025-02-13T16:04:43.409682854Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Feb 13 16:04:43.410976 containerd[2030]: time="2025-02-13T16:04:43.409726234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 16:04:43.410976 containerd[2030]: time="2025-02-13T16:04:43.409758838Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 16:04:43.410976 containerd[2030]: time="2025-02-13T16:04:43.409789210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 16:04:43.410976 containerd[2030]: time="2025-02-13T16:04:43.409831090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 16:04:43.410976 containerd[2030]: time="2025-02-13T16:04:43.409863682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 16:04:43.410976 containerd[2030]: time="2025-02-13T16:04:43.409894402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 16:04:43.410976 containerd[2030]: time="2025-02-13T16:04:43.409937242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 16:04:43.410976 containerd[2030]: time="2025-02-13T16:04:43.409972834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 16:04:43.410976 containerd[2030]: time="2025-02-13T16:04:43.410005330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 16:04:43.410976 containerd[2030]: time="2025-02-13T16:04:43.410040994Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 16:04:43.411628 containerd[2030]: time="2025-02-13T16:04:43.410069866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 16:04:43.411628 containerd[2030]: time="2025-02-13T16:04:43.410098546Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 16:04:43.411628 containerd[2030]: time="2025-02-13T16:04:43.410130706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 16:04:43.411628 containerd[2030]: time="2025-02-13T16:04:43.410167126Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 16:04:43.411628 containerd[2030]: time="2025-02-13T16:04:43.410222578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 16:04:43.419553 containerd[2030]: time="2025-02-13T16:04:43.410255146Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 16:04:43.419553 containerd[2030]: time="2025-02-13T16:04:43.417673354Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 16:04:43.419553 containerd[2030]: time="2025-02-13T16:04:43.417812674Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 16:04:43.419553 containerd[2030]: time="2025-02-13T16:04:43.418102834Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 16:04:43.419553 containerd[2030]: time="2025-02-13T16:04:43.418146898Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 16:04:43.419553 containerd[2030]: time="2025-02-13T16:04:43.418182466Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 16:04:43.419553 containerd[2030]: time="2025-02-13T16:04:43.418207750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 16:04:43.419553 containerd[2030]: time="2025-02-13T16:04:43.418242562Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 16:04:43.419553 containerd[2030]: time="2025-02-13T16:04:43.418290106Z" level=info msg="NRI interface is disabled by configuration." Feb 13 16:04:43.419553 containerd[2030]: time="2025-02-13T16:04:43.418324462Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 16:04:43.420065 amazon-ssm-agent[2119]: 2025-02-13 16:04:43 INFO https_proxy: Feb 13 16:04:43.420128 containerd[2030]: time="2025-02-13T16:04:43.418852258Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] 
ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 16:04:43.421437 containerd[2030]: time="2025-02-13T16:04:43.420818266Z" level=info msg="Connect containerd service" Feb 13 16:04:43.421437 containerd[2030]: time="2025-02-13T16:04:43.420909466Z" level=info msg="using legacy CRI server" Feb 13 16:04:43.421437 containerd[2030]: time="2025-02-13T16:04:43.420928966Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 16:04:43.421437 containerd[2030]: time="2025-02-13T16:04:43.421096978Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 16:04:43.429021 containerd[2030]: time="2025-02-13T16:04:43.426746902Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 16:04:43.429021 containerd[2030]: time="2025-02-13T16:04:43.427630606Z" level=info msg="Start subscribing containerd event" Feb 13 16:04:43.429021 containerd[2030]: time="2025-02-13T16:04:43.427711342Z" level=info msg="Start recovering state" Feb 13 16:04:43.429021 containerd[2030]: time="2025-02-13T16:04:43.427835614Z" level=info msg="Start event monitor" Feb 13 16:04:43.429021 containerd[2030]: time="2025-02-13T16:04:43.427860322Z" level=info msg="Start snapshots syncer" Feb 13 16:04:43.429021 containerd[2030]: time="2025-02-13T16:04:43.427883830Z" level=info msg="Start cni network conf syncer for default" Feb 13 16:04:43.429021 containerd[2030]: time="2025-02-13T16:04:43.427907278Z" level=info msg="Start streaming server" Feb 13 16:04:43.431174 containerd[2030]: time="2025-02-13T16:04:43.431124154Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 16:04:43.434282 containerd[2030]: time="2025-02-13T16:04:43.432308842Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 16:04:43.447842 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 16:04:43.448925 containerd[2030]: time="2025-02-13T16:04:43.448062562Z" level=info msg="containerd successfully booted in 0.230032s" Feb 13 16:04:43.523093 sshd_keygen[2023]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 16:04:43.523692 amazon-ssm-agent[2119]: 2025-02-13 16:04:43 INFO Checking if agent identity type OnPrem can be assumed Feb 13 16:04:43.598415 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 16:04:43.616701 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 16:04:43.627566 amazon-ssm-agent[2119]: 2025-02-13 16:04:43 INFO Checking if agent identity type EC2 can be assumed Feb 13 16:04:43.629797 systemd[1]: Started sshd@0-172.31.19.223:22-139.178.68.195:60222.service - OpenSSH per-connection server daemon (139.178.68.195:60222). Feb 13 16:04:43.687023 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 16:04:43.687502 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 16:04:43.704893 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
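containerd boots in 0.23s with the overlayfs snapshotter and SystemdCgroup:true for the runc runtime, and its one error is benign at this stage: the CRI plugin finds no CNI config because no pod network add-on has been installed yet; the node will stay NotReady until something writes a config into /etc/cni/net.d. Two quick checks, assuming ctr is on the PATH and using the socket path from the log (a sketch):

    # Empty until a CNI plugin (flannel, Calico, ...) drops its config here.
    ls /etc/cni/net.d
    # containerd answers on the socket it just announced.
    sudo ctr --address /run/containerd/containerd.sock version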
Feb 13 16:04:43.726363 amazon-ssm-agent[2119]: 2025-02-13 16:04:43 INFO Agent will take identity from EC2 Feb 13 16:04:43.770754 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 16:04:43.787948 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 16:04:43.805929 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 16:04:43.811145 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 16:04:43.825418 amazon-ssm-agent[2119]: 2025-02-13 16:04:43 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 16:04:43.896905 sshd[2224]: Accepted publickey for core from 139.178.68.195 port 60222 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:04:43.904092 sshd[2224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:04:43.925854 amazon-ssm-agent[2119]: 2025-02-13 16:04:43 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 16:04:43.937684 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 16:04:43.949802 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 16:04:43.967251 systemd-logind[2004]: New session 1 of user core. Feb 13 16:04:44.003824 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 16:04:44.023951 amazon-ssm-agent[2119]: 2025-02-13 16:04:43 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 16:04:44.023957 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 16:04:44.047994 (systemd)[2237]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 16:04:44.124862 amazon-ssm-agent[2119]: 2025-02-13 16:04:43 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Feb 13 16:04:44.231350 amazon-ssm-agent[2119]: 2025-02-13 16:04:43 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Feb 13 16:04:44.280439 tar[2011]: linux-arm64/LICENSE Feb 13 16:04:44.281002 tar[2011]: linux-arm64/README.md Feb 13 16:04:44.322912 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 16:04:44.332345 amazon-ssm-agent[2119]: 2025-02-13 16:04:43 INFO [amazon-ssm-agent] Starting Core Agent Feb 13 16:04:44.388936 systemd[2237]: Queued start job for default target default.target. Feb 13 16:04:44.399950 systemd[2237]: Created slice app.slice - User Application Slice. Feb 13 16:04:44.400025 systemd[2237]: Reached target paths.target - Paths. Feb 13 16:04:44.400060 systemd[2237]: Reached target timers.target - Timers. Feb 13 16:04:44.408707 systemd[2237]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 16:04:44.432112 amazon-ssm-agent[2119]: 2025-02-13 16:04:43 INFO [amazon-ssm-agent] registrar detected. Attempting registration Feb 13 16:04:44.437955 systemd[2237]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 16:04:44.438254 systemd[2237]: Reached target sockets.target - Sockets. Feb 13 16:04:44.438455 systemd[2237]: Reached target basic.target - Basic System. Feb 13 16:04:44.438567 systemd[2237]: Reached target default.target - Main User Target. Feb 13 16:04:44.438638 systemd[2237]: Startup finished in 368ms. Feb 13 16:04:44.439452 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 16:04:44.453612 systemd[1]: Started session-1.scope - Session 1 of User core. 
Feb 13 16:04:44.532465 amazon-ssm-agent[2119]: 2025-02-13 16:04:43 INFO [Registrar] Starting registrar module Feb 13 16:04:44.627552 systemd[1]: Started sshd@1-172.31.19.223:22-139.178.68.195:60230.service - OpenSSH per-connection server daemon (139.178.68.195:60230). Feb 13 16:04:44.633026 amazon-ssm-agent[2119]: 2025-02-13 16:04:43 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Feb 13 16:04:44.824375 sshd[2251]: Accepted publickey for core from 139.178.68.195 port 60230 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:04:44.825277 sshd[2251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:04:44.834346 systemd-logind[2004]: New session 2 of user core. Feb 13 16:04:44.840514 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 16:04:44.961976 ntpd[1997]: Listen normally on 6 eth0 [fe80::4ef:18ff:fec2:cf9b%2]:123 Feb 13 16:04:44.962702 ntpd[1997]: 13 Feb 16:04:44 ntpd[1997]: Listen normally on 6 eth0 [fe80::4ef:18ff:fec2:cf9b%2]:123 Feb 13 16:04:44.978211 sshd[2251]: pam_unix(sshd:session): session closed for user core Feb 13 16:04:44.985928 systemd[1]: sshd@1-172.31.19.223:22-139.178.68.195:60230.service: Deactivated successfully. Feb 13 16:04:44.992671 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 16:04:44.998359 systemd-logind[2004]: Session 2 logged out. Waiting for processes to exit. Feb 13 16:04:45.021791 systemd[1]: Started sshd@2-172.31.19.223:22-139.178.68.195:60242.service - OpenSSH per-connection server daemon (139.178.68.195:60242). Feb 13 16:04:45.026802 systemd-logind[2004]: Removed session 2. Feb 13 16:04:45.085489 amazon-ssm-agent[2119]: 2025-02-13 16:04:45 INFO [EC2Identity] EC2 registration was successful. Feb 13 16:04:45.124807 amazon-ssm-agent[2119]: 2025-02-13 16:04:45 INFO [CredentialRefresher] credentialRefresher has started Feb 13 16:04:45.125055 amazon-ssm-agent[2119]: 2025-02-13 16:04:45 INFO [CredentialRefresher] Starting credentials refresher loop Feb 13 16:04:45.125609 amazon-ssm-agent[2119]: 2025-02-13 16:04:45 INFO EC2RoleProvider Successfully connected with instance profile role credentials Feb 13 16:04:45.186498 amazon-ssm-agent[2119]: 2025-02-13 16:04:45 INFO [CredentialRefresher] Next credential rotation will be in 31.9249784141 minutes Feb 13 16:04:45.206835 sshd[2259]: Accepted publickey for core from 139.178.68.195 port 60242 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:04:45.209975 sshd[2259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:04:45.218946 systemd-logind[2004]: New session 3 of user core. Feb 13 16:04:45.229558 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 16:04:45.363643 sshd[2259]: pam_unix(sshd:session): session closed for user core Feb 13 16:04:45.368709 systemd[1]: sshd@2-172.31.19.223:22-139.178.68.195:60242.service: Deactivated successfully. Feb 13 16:04:45.372918 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 16:04:45.376172 systemd-logind[2004]: Session 3 logged out. Waiting for processes to exit. Feb 13 16:04:45.378174 systemd-logind[2004]: Removed session 3. Feb 13 16:04:45.772072 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:04:45.775384 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 16:04:45.781443 systemd[1]: Startup finished in 1.145s (kernel) + 8.117s (initrd) + 8.913s (userspace) = 18.177s. 
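The "Startup finished" line closing this boot, 1.145s kernel + 8.117s initrd + 8.913s userspace = 18.177s, is systemd's own accounting, and systemd-analyze can reproduce and break it down after the fact:

    # Prints the same kernel/initrd/userspace split as the log line above.
    systemd-analyze
    # The units that consumed the most wall-clock time during boot.
    systemd-analyze blame | head -n 10
    # The dependency chain that gated multi-user.target.
    systemd-analyze critical-chain multi-user.target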
Feb 13 16:04:45.787680 (kubelet)[2270]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 16:04:46.154619 amazon-ssm-agent[2119]: 2025-02-13 16:04:46 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Feb 13 16:04:46.255749 amazon-ssm-agent[2119]: 2025-02-13 16:04:46 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2280) started Feb 13 16:04:46.358229 amazon-ssm-agent[2119]: 2025-02-13 16:04:46 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Feb 13 16:04:46.896924 kubelet[2270]: E0213 16:04:46.896858 2270 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 16:04:46.900303 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 16:04:46.900633 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 16:04:46.901147 systemd[1]: kubelet.service: Consumed 1.280s CPU time. Feb 13 16:04:48.782288 systemd-resolved[1933]: Clock change detected. Flushing caches. Feb 13 16:04:55.228217 systemd[1]: Started sshd@3-172.31.19.223:22-139.178.68.195:53590.service - OpenSSH per-connection server daemon (139.178.68.195:53590). Feb 13 16:04:55.395308 sshd[2293]: Accepted publickey for core from 139.178.68.195 port 53590 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:04:55.397891 sshd[2293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:04:55.405886 systemd-logind[2004]: New session 4 of user core. Feb 13 16:04:55.413013 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 16:04:55.542672 sshd[2293]: pam_unix(sshd:session): session closed for user core Feb 13 16:04:55.547490 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 16:04:55.549040 systemd[1]: sshd@3-172.31.19.223:22-139.178.68.195:53590.service: Deactivated successfully. Feb 13 16:04:55.554593 systemd-logind[2004]: Session 4 logged out. Waiting for processes to exit. Feb 13 16:04:55.556340 systemd-logind[2004]: Removed session 4. Feb 13 16:04:55.587234 systemd[1]: Started sshd@4-172.31.19.223:22-139.178.68.195:53600.service - OpenSSH per-connection server daemon (139.178.68.195:53600). Feb 13 16:04:55.753542 sshd[2300]: Accepted publickey for core from 139.178.68.195 port 53600 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:04:55.756098 sshd[2300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:04:55.764118 systemd-logind[2004]: New session 5 of user core. Feb 13 16:04:55.774053 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 16:04:55.895011 sshd[2300]: pam_unix(sshd:session): session closed for user core Feb 13 16:04:55.901814 systemd[1]: sshd@4-172.31.19.223:22-139.178.68.195:53600.service: Deactivated successfully. Feb 13 16:04:55.906597 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 16:04:55.908906 systemd-logind[2004]: Session 5 logged out. Waiting for processes to exit. Feb 13 16:04:55.910855 systemd-logind[2004]: Removed session 5. 
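The kubelet crash at the top of this stretch (repeated at 16:04:57, 16:05:07, and 16:05:18 below) is the normal state of a node that has not joined a cluster yet: /var/lib/kubelet/config.yaml does not exist until bootstrap tooling writes it, which kubeadm does during 'kubeadm init' or 'kubeadm join' (the unit's unset KUBELET_KUBEADM_ARGS variable suggests a kubeadm-style setup). systemd simply keeps scheduling restarts until the file appears. Two checks, as a sketch:

    # Missing, hence the "no such file or directory" in kubelet's error.
    ls -l /var/lib/kubelet/config.yaml
    # Shows the exit-code failures and scheduled restarts seen in this log.
    systemctl status kubelet --no-pager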
Feb 13 16:04:55.932887 systemd[1]: Started sshd@5-172.31.19.223:22-139.178.68.195:53614.service - OpenSSH per-connection server daemon (139.178.68.195:53614). Feb 13 16:04:56.114380 sshd[2307]: Accepted publickey for core from 139.178.68.195 port 53614 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:04:56.116999 sshd[2307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:04:56.124358 systemd-logind[2004]: New session 6 of user core. Feb 13 16:04:56.134291 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 16:04:56.261816 sshd[2307]: pam_unix(sshd:session): session closed for user core Feb 13 16:04:56.267266 systemd[1]: sshd@5-172.31.19.223:22-139.178.68.195:53614.service: Deactivated successfully. Feb 13 16:04:56.270653 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 16:04:56.275278 systemd-logind[2004]: Session 6 logged out. Waiting for processes to exit. Feb 13 16:04:56.277263 systemd-logind[2004]: Removed session 6. Feb 13 16:04:56.302278 systemd[1]: Started sshd@6-172.31.19.223:22-139.178.68.195:53628.service - OpenSSH per-connection server daemon (139.178.68.195:53628). Feb 13 16:04:56.466121 sshd[2314]: Accepted publickey for core from 139.178.68.195 port 53628 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:04:56.468691 sshd[2314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:04:56.476029 systemd-logind[2004]: New session 7 of user core. Feb 13 16:04:56.488019 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 16:04:56.602354 sudo[2317]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 16:04:56.603058 sudo[2317]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 16:04:56.617939 sudo[2317]: pam_unix(sudo:session): session closed for user root Feb 13 16:04:56.641216 sshd[2314]: pam_unix(sshd:session): session closed for user core Feb 13 16:04:56.647577 systemd-logind[2004]: Session 7 logged out. Waiting for processes to exit. Feb 13 16:04:56.649307 systemd[1]: sshd@6-172.31.19.223:22-139.178.68.195:53628.service: Deactivated successfully. Feb 13 16:04:56.653405 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 16:04:56.656336 systemd-logind[2004]: Removed session 7. Feb 13 16:04:56.678262 systemd[1]: Started sshd@7-172.31.19.223:22-139.178.68.195:42742.service - OpenSSH per-connection server daemon (139.178.68.195:42742). Feb 13 16:04:56.788619 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 16:04:56.795182 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:04:56.855593 sshd[2322]: Accepted publickey for core from 139.178.68.195 port 42742 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:04:56.859067 sshd[2322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:04:56.871898 systemd-logind[2004]: New session 8 of user core. Feb 13 16:04:56.883090 systemd[1]: Started session-8.scope - Session 8 of User core. 
Feb 13 16:04:56.997659 sudo[2329]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 16:04:56.998342 sudo[2329]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 16:04:57.009646 sudo[2329]: pam_unix(sudo:session): session closed for user root Feb 13 16:04:57.021921 sudo[2328]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 13 16:04:57.022598 sudo[2328]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 16:04:57.050331 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Feb 13 16:04:57.058552 auditctl[2332]: No rules Feb 13 16:04:57.059275 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 16:04:57.059664 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Feb 13 16:04:57.076400 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 16:04:57.135219 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:04:57.137568 augenrules[2356]: No rules Feb 13 16:04:57.139597 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 16:04:57.142148 sudo[2328]: pam_unix(sudo:session): session closed for user root Feb 13 16:04:57.143996 (kubelet)[2354]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 16:04:57.167603 sshd[2322]: pam_unix(sshd:session): session closed for user core Feb 13 16:04:57.176256 systemd[1]: sshd@7-172.31.19.223:22-139.178.68.195:42742.service: Deactivated successfully. Feb 13 16:04:57.183160 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 16:04:57.185227 systemd-logind[2004]: Session 8 logged out. Waiting for processes to exit. Feb 13 16:04:57.203124 systemd-logind[2004]: Removed session 8. Feb 13 16:04:57.214504 systemd[1]: Started sshd@8-172.31.19.223:22-139.178.68.195:42746.service - OpenSSH per-connection server daemon (139.178.68.195:42746). Feb 13 16:04:57.241341 kubelet[2354]: E0213 16:04:57.241248 2354 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 16:04:57.248818 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 16:04:57.249182 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 16:04:57.383089 sshd[2369]: Accepted publickey for core from 139.178.68.195 port 42746 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:04:57.385649 sshd[2369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:04:57.394198 systemd-logind[2004]: New session 9 of user core. Feb 13 16:04:57.404016 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 16:04:57.507691 sudo[2373]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 16:04:57.508901 sudo[2373]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 16:04:57.957452 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Feb 13 16:04:57.957528 (dockerd)[2388]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 16:04:58.312303 dockerd[2388]: time="2025-02-13T16:04:58.312218083Z" level=info msg="Starting up" Feb 13 16:04:58.421071 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3735810599-merged.mount: Deactivated successfully. Feb 13 16:04:58.453372 dockerd[2388]: time="2025-02-13T16:04:58.452987744Z" level=info msg="Loading containers: start." Feb 13 16:04:58.606836 kernel: Initializing XFRM netlink socket Feb 13 16:04:58.641169 (udev-worker)[2411]: Network interface NamePolicy= disabled on kernel command line. Feb 13 16:04:58.724338 systemd-networkd[1929]: docker0: Link UP Feb 13 16:04:58.751270 dockerd[2388]: time="2025-02-13T16:04:58.751119177Z" level=info msg="Loading containers: done." Feb 13 16:04:58.773742 dockerd[2388]: time="2025-02-13T16:04:58.773663397Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 16:04:58.774013 dockerd[2388]: time="2025-02-13T16:04:58.773842953Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Feb 13 16:04:58.774077 dockerd[2388]: time="2025-02-13T16:04:58.774046581Z" level=info msg="Daemon has completed initialization" Feb 13 16:04:58.834117 dockerd[2388]: time="2025-02-13T16:04:58.833930169Z" level=info msg="API listen on /run/docker.sock" Feb 13 16:04:58.834237 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 16:04:59.953869 containerd[2030]: time="2025-02-13T16:04:59.953748515Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\"" Feb 13 16:05:00.640300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3379196507.mount: Deactivated successfully. 
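dockerd comes up against the already-running containerd, picks the overlay2 storage driver, and starts serving on /run/docker.sock; immediately afterwards containerd begins pulling the Kubernetes control-plane images. A one-line sanity check against the fresh daemon (a sketch, assuming the docker CLI is installed alongside the service):

    # Should print "26.1.0 overlay2", matching the "Docker daemon" log line.
    sudo docker info --format '{{.ServerVersion}} {{.Driver}}'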
Feb 13 16:05:03.131412 containerd[2030]: time="2025-02-13T16:05:03.131134055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:03.133315 containerd[2030]: time="2025-02-13T16:05:03.133246787Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.6: active requests=0, bytes read=25620375" Feb 13 16:05:03.135368 containerd[2030]: time="2025-02-13T16:05:03.135281999Z" level=info msg="ImageCreate event name:\"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:03.141016 containerd[2030]: time="2025-02-13T16:05:03.140916203Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:03.143675 containerd[2030]: time="2025-02-13T16:05:03.143387891Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.6\" with image id \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\", size \"25617175\" in 3.189545128s" Feb 13 16:05:03.143675 containerd[2030]: time="2025-02-13T16:05:03.143446739Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\" returns image reference \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\"" Feb 13 16:05:03.145043 containerd[2030]: time="2025-02-13T16:05:03.144986411Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\"" Feb 13 16:05:05.533811 containerd[2030]: time="2025-02-13T16:05:05.532171311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:05.534987 containerd[2030]: time="2025-02-13T16:05:05.534942207Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.6: active requests=0, bytes read=22471773" Feb 13 16:05:05.537015 containerd[2030]: time="2025-02-13T16:05:05.536975775Z" level=info msg="ImageCreate event name:\"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:05.542563 containerd[2030]: time="2025-02-13T16:05:05.542508303Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:05.544924 containerd[2030]: time="2025-02-13T16:05:05.544860975Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.6\" with image id \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\", size \"23875502\" in 2.399811276s" Feb 13 16:05:05.545208 containerd[2030]: time="2025-02-13T16:05:05.544921395Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\" returns image reference \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\"" Feb 13 16:05:05.545674 
containerd[2030]: time="2025-02-13T16:05:05.545610951Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\"" Feb 13 16:05:07.289108 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 16:05:07.300236 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:05:07.661058 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:05:07.673550 (kubelet)[2596]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 16:05:07.771746 kubelet[2596]: E0213 16:05:07.771524 2596 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 16:05:07.776701 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 16:05:07.777307 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 16:05:08.022980 containerd[2030]: time="2025-02-13T16:05:08.022345011Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:08.024989 containerd[2030]: time="2025-02-13T16:05:08.024930771Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.6: active requests=0, bytes read=17024540" Feb 13 16:05:08.025678 containerd[2030]: time="2025-02-13T16:05:08.025593339Z" level=info msg="ImageCreate event name:\"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:08.031441 containerd[2030]: time="2025-02-13T16:05:08.031387347Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:08.033942 containerd[2030]: time="2025-02-13T16:05:08.033722763Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.6\" with image id \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\", size \"18428287\" in 2.488048416s" Feb 13 16:05:08.033942 containerd[2030]: time="2025-02-13T16:05:08.033805275Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\" returns image reference \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\"" Feb 13 16:05:08.034572 containerd[2030]: time="2025-02-13T16:05:08.034529523Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\"" Feb 13 16:05:09.352020 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3537983264.mount: Deactivated successfully. 
Feb 13 16:05:09.949940 containerd[2030]: time="2025-02-13T16:05:09.949883001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:09.951893 containerd[2030]: time="2025-02-13T16:05:09.951837933Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=26769256" Feb 13 16:05:09.952171 containerd[2030]: time="2025-02-13T16:05:09.952031997Z" level=info msg="ImageCreate event name:\"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:09.955806 containerd[2030]: time="2025-02-13T16:05:09.955684617Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:09.957614 containerd[2030]: time="2025-02-13T16:05:09.957391125Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"26768275\" in 1.922794822s" Feb 13 16:05:09.957614 containerd[2030]: time="2025-02-13T16:05:09.957448773Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\"" Feb 13 16:05:09.958651 containerd[2030]: time="2025-02-13T16:05:09.958342953Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 16:05:10.556524 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount900318592.mount: Deactivated successfully. 
Feb 13 16:05:11.683139 containerd[2030]: time="2025-02-13T16:05:11.683049537Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:11.686048 containerd[2030]: time="2025-02-13T16:05:11.685978557Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Feb 13 16:05:11.688716 containerd[2030]: time="2025-02-13T16:05:11.688643661Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:11.694836 containerd[2030]: time="2025-02-13T16:05:11.694723449Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:11.697206 containerd[2030]: time="2025-02-13T16:05:11.696988581Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.738589432s" Feb 13 16:05:11.697206 containerd[2030]: time="2025-02-13T16:05:11.697049241Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 16:05:11.698183 containerd[2030]: time="2025-02-13T16:05:11.697930845Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 16:05:12.313997 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1285266032.mount: Deactivated successfully. 
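Each pull above follows the same CRI pattern: a PullImage request, ImageCreate events for the tag, the image ID, and the repo digest, then a "returns image reference" line with the timing, with systemd reaping the temporary unpack mounts in between. The same code path can be exercised by hand with crictl, assuming it is installed, using the socket path from the log (a sketch):

    # Pull through containerd's CRI plugin, as the PullImage entries above do.
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
      pull registry.k8s.io/pause:3.10
    # List the images the pulls in this log produced.
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock images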
Feb 13 16:05:12.327848 containerd[2030]: time="2025-02-13T16:05:12.327600932Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:12.330739 containerd[2030]: time="2025-02-13T16:05:12.330670628Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Feb 13 16:05:12.333446 containerd[2030]: time="2025-02-13T16:05:12.333384356Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:12.345157 containerd[2030]: time="2025-02-13T16:05:12.343880457Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:12.346903 containerd[2030]: time="2025-02-13T16:05:12.346156677Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 648.17362ms" Feb 13 16:05:12.346903 containerd[2030]: time="2025-02-13T16:05:12.346219449Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Feb 13 16:05:12.347503 containerd[2030]: time="2025-02-13T16:05:12.347465157Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Feb 13 16:05:12.980488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2669052474.mount: Deactivated successfully. Feb 13 16:05:13.030474 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 13 16:05:17.788698 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Feb 13 16:05:17.797323 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:05:18.167262 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:05:18.180341 (kubelet)[2727]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 16:05:18.264203 kubelet[2727]: E0213 16:05:18.263926 2727 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 16:05:18.269831 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 16:05:18.270162 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 13 16:05:18.321123 containerd[2030]: time="2025-02-13T16:05:18.321054362Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:18.324206 containerd[2030]: time="2025-02-13T16:05:18.324116174Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406425" Feb 13 16:05:18.326531 containerd[2030]: time="2025-02-13T16:05:18.326439938Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:18.333024 containerd[2030]: time="2025-02-13T16:05:18.332943542Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:18.335820 containerd[2030]: time="2025-02-13T16:05:18.335499686Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 5.987881745s" Feb 13 16:05:18.335820 containerd[2030]: time="2025-02-13T16:05:18.335564510Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Feb 13 16:05:26.614647 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:05:26.625301 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:05:26.686484 systemd[1]: Reloading requested from client PID 2758 ('systemctl') (unit session-9.scope)... Feb 13 16:05:26.686700 systemd[1]: Reloading... Feb 13 16:05:26.909859 zram_generator::config[2798]: No configuration found. Feb 13 16:05:27.139594 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 16:05:27.312441 systemd[1]: Reloading finished in 624 ms. Feb 13 16:05:27.415978 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 16:05:27.416193 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 16:05:27.417869 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:05:27.430479 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:05:27.438886 update_engine[2005]: I20250213 16:05:27.438813 2005 update_attempter.cc:509] Updating boot flags... Feb 13 16:05:27.532799 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (2870) Feb 13 16:05:27.901092 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:05:27.904027 (kubelet)[2958]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 16:05:27.977582 kubelet[2958]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 16:05:27.978081 kubelet[2958]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 16:05:27.978190 kubelet[2958]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 16:05:27.978475 kubelet[2958]: I0213 16:05:27.978422 2958 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 16:05:31.733754 kubelet[2958]: I0213 16:05:31.732261 2958 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 16:05:31.733754 kubelet[2958]: I0213 16:05:31.732322 2958 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 16:05:31.733754 kubelet[2958]: I0213 16:05:31.733078 2958 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 16:05:31.777167 kubelet[2958]: E0213 16:05:31.777116 2958 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.19.223:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.19.223:6443: connect: connection refused" logger="UnhandledError" Feb 13 16:05:31.783308 kubelet[2958]: I0213 16:05:31.783050 2958 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 16:05:31.795362 kubelet[2958]: E0213 16:05:31.795212 2958 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 16:05:31.795528 kubelet[2958]: I0213 16:05:31.795389 2958 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 16:05:31.801735 kubelet[2958]: I0213 16:05:31.801693 2958 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 16:05:31.801979 kubelet[2958]: I0213 16:05:31.801949 2958 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 16:05:31.802284 kubelet[2958]: I0213 16:05:31.802235 2958 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 16:05:31.802564 kubelet[2958]: I0213 16:05:31.802291 2958 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-19-223","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 16:05:31.802739 kubelet[2958]: I0213 16:05:31.802615 2958 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 16:05:31.802739 kubelet[2958]: I0213 16:05:31.802637 2958 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 16:05:31.802909 kubelet[2958]: I0213 16:05:31.802881 2958 state_mem.go:36] "Initialized new in-memory state store" Feb 13 16:05:31.809239 kubelet[2958]: I0213 16:05:31.809080 2958 kubelet.go:408] "Attempting to sync node with API server" Feb 13 16:05:31.809239 kubelet[2958]: I0213 16:05:31.809125 2958 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 16:05:31.809239 kubelet[2958]: I0213 16:05:31.809185 2958 kubelet.go:314] "Adding apiserver pod source" Feb 13 16:05:31.809239 kubelet[2958]: I0213 16:05:31.809205 2958 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 16:05:31.811716 kubelet[2958]: W0213 16:05:31.811101 2958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.19.223:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-223&limit=500&resourceVersion=0": dial tcp 172.31.19.223:6443: connect: connection refused Feb 13 16:05:31.811716 kubelet[2958]: E0213 16:05:31.811190 2958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://172.31.19.223:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-223&limit=500&resourceVersion=0\": dial tcp 172.31.19.223:6443: connect: connection refused" logger="UnhandledError" Feb 13 16:05:31.812259 kubelet[2958]: I0213 16:05:31.812218 2958 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 16:05:31.815212 kubelet[2958]: I0213 16:05:31.815152 2958 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 16:05:31.816459 kubelet[2958]: W0213 16:05:31.816412 2958 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 16:05:31.817566 kubelet[2958]: I0213 16:05:31.817505 2958 server.go:1269] "Started kubelet" Feb 13 16:05:31.817836 kubelet[2958]: W0213 16:05:31.817729 2958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.19.223:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.19.223:6443: connect: connection refused Feb 13 16:05:31.817931 kubelet[2958]: E0213 16:05:31.817860 2958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.19.223:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.19.223:6443: connect: connection refused" logger="UnhandledError" Feb 13 16:05:31.822686 kubelet[2958]: I0213 16:05:31.822307 2958 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 16:05:31.824800 kubelet[2958]: I0213 16:05:31.824430 2958 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 16:05:31.824953 kubelet[2958]: I0213 16:05:31.824851 2958 server.go:460] "Adding debug handlers to kubelet server" Feb 13 16:05:31.825347 kubelet[2958]: I0213 16:05:31.825320 2958 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 16:05:31.832066 kubelet[2958]: E0213 16:05:31.826211 2958 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.19.223:6443/api/v1/namespaces/default/events\": dial tcp 172.31.19.223:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-19-223.1823d0237602d4ad default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-19-223,UID:ip-172-31-19-223,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-19-223,},FirstTimestamp:2025-02-13 16:05:31.817473197 +0000 UTC m=+3.906799988,LastTimestamp:2025-02-13 16:05:31.817473197 +0000 UTC m=+3.906799988,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-19-223,}" Feb 13 16:05:31.836562 kubelet[2958]: I0213 16:05:31.835534 2958 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 16:05:31.839791 kubelet[2958]: I0213 16:05:31.839725 2958 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 16:05:31.846191 kubelet[2958]: I0213 16:05:31.846146 2958 volume_manager.go:289] "Starting Kubelet Volume 
Manager" Feb 13 16:05:31.851730 kubelet[2958]: I0213 16:05:31.851686 2958 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 16:05:31.851730 kubelet[2958]: E0213 16:05:31.847312 2958 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-19-223\" not found" Feb 13 16:05:31.851730 kubelet[2958]: E0213 16:05:31.849274 2958 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.223:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-223?timeout=10s\": dial tcp 172.31.19.223:6443: connect: connection refused" interval="200ms" Feb 13 16:05:31.852027 kubelet[2958]: I0213 16:05:31.851848 2958 reconciler.go:26] "Reconciler: start to sync state" Feb 13 16:05:31.852027 kubelet[2958]: W0213 16:05:31.849140 2958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.19.223:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.223:6443: connect: connection refused Feb 13 16:05:31.852027 kubelet[2958]: E0213 16:05:31.851914 2958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.19.223:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.19.223:6443: connect: connection refused" logger="UnhandledError" Feb 13 16:05:31.852217 kubelet[2958]: I0213 16:05:31.852174 2958 factory.go:221] Registration of the containerd container factory successfully Feb 13 16:05:31.852217 kubelet[2958]: I0213 16:05:31.852194 2958 factory.go:221] Registration of the systemd container factory successfully Feb 13 16:05:31.853388 kubelet[2958]: I0213 16:05:31.852342 2958 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 16:05:31.872877 kubelet[2958]: I0213 16:05:31.872622 2958 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 16:05:31.875526 kubelet[2958]: I0213 16:05:31.875023 2958 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 16:05:31.875526 kubelet[2958]: I0213 16:05:31.875064 2958 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 16:05:31.875526 kubelet[2958]: I0213 16:05:31.875095 2958 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 16:05:31.875526 kubelet[2958]: E0213 16:05:31.875164 2958 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 16:05:31.877693 kubelet[2958]: E0213 16:05:31.877640 2958 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 16:05:31.882307 kubelet[2958]: W0213 16:05:31.882206 2958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.19.223:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.223:6443: connect: connection refused Feb 13 16:05:31.882468 kubelet[2958]: E0213 16:05:31.882312 2958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.19.223:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.19.223:6443: connect: connection refused" logger="UnhandledError" Feb 13 16:05:31.900358 kubelet[2958]: I0213 16:05:31.900320 2958 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 16:05:31.900358 kubelet[2958]: I0213 16:05:31.900354 2958 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 16:05:31.900557 kubelet[2958]: I0213 16:05:31.900386 2958 state_mem.go:36] "Initialized new in-memory state store" Feb 13 16:05:31.904723 kubelet[2958]: I0213 16:05:31.904674 2958 policy_none.go:49] "None policy: Start" Feb 13 16:05:31.905849 kubelet[2958]: I0213 16:05:31.905793 2958 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 16:05:31.905849 kubelet[2958]: I0213 16:05:31.905845 2958 state_mem.go:35] "Initializing new in-memory state store" Feb 13 16:05:31.922731 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 16:05:31.943871 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 16:05:31.951945 kubelet[2958]: E0213 16:05:31.951852 2958 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-19-223\" not found" Feb 13 16:05:31.957343 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 16:05:31.961851 kubelet[2958]: I0213 16:05:31.960176 2958 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 16:05:31.961851 kubelet[2958]: I0213 16:05:31.960513 2958 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 16:05:31.961851 kubelet[2958]: I0213 16:05:31.960531 2958 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 16:05:31.961851 kubelet[2958]: I0213 16:05:31.961668 2958 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 16:05:31.965294 kubelet[2958]: E0213 16:05:31.965254 2958 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-19-223\" not found" Feb 13 16:05:31.994434 systemd[1]: Created slice kubepods-burstable-pod216c9d7eaf0949d3ae8dbf347ec305f8.slice - libcontainer container kubepods-burstable-pod216c9d7eaf0949d3ae8dbf347ec305f8.slice. Feb 13 16:05:32.026672 systemd[1]: Created slice kubepods-burstable-poded7edaea573afaad58efb61cd151e606.slice - libcontainer container kubepods-burstable-poded7edaea573afaad58efb61cd151e606.slice. Feb 13 16:05:32.043962 systemd[1]: Created slice kubepods-burstable-pod790058de5bf66108be425bf05040be2a.slice - libcontainer container kubepods-burstable-pod790058de5bf66108be425bf05040be2a.slice. 
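The slice names being created above follow the systemd cgroup driver's naming rule ("CgroupDriver":"systemd" in the nodeConfig dump earlier): a QoS-class parent under kubepods.slice, plus per-pod child slices that embed the pod UID with dashes escaped to underscores. A small sketch of that rule, assuming only what the unit names in the log themselves show:

```go
package main

import (
	"fmt"
	"strings"
)

// podSliceName reproduces the pattern visible in the log, e.g.
// kubepods-burstable-pod216c9d7eaf0949d3ae8dbf347ec305f8.slice.
func podSliceName(qosClass, podUID string) string {
	escaped := strings.ReplaceAll(podUID, "-", "_") // systemd escaping of '-'
	if qosClass == "" {
		// Guaranteed pods sit directly under kubepods (assumed, not in this log).
		return fmt.Sprintf("kubepods-pod%s.slice", escaped)
	}
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, escaped)
}

func main() {
	// Static pods get dashless config-hash UIDs; API pods get dashed UUIDs.
	fmt.Println(podSliceName("burstable", "216c9d7eaf0949d3ae8dbf347ec305f8"))
	fmt.Println(podSliceName("besteffort", "70994d38-caea-4f39-bbe4-8527d5549554"))
}
```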
Feb 13 16:05:32.052922 kubelet[2958]: E0213 16:05:32.052852 2958 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.223:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-223?timeout=10s\": dial tcp 172.31.19.223:6443: connect: connection refused" interval="400ms" Feb 13 16:05:32.063524 kubelet[2958]: I0213 16:05:32.063453 2958 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-19-223" Feb 13 16:05:32.064029 kubelet[2958]: E0213 16:05:32.063980 2958 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.19.223:6443/api/v1/nodes\": dial tcp 172.31.19.223:6443: connect: connection refused" node="ip-172-31-19-223" Feb 13 16:05:32.153675 kubelet[2958]: I0213 16:05:32.153573 2958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/216c9d7eaf0949d3ae8dbf347ec305f8-ca-certs\") pod \"kube-apiserver-ip-172-31-19-223\" (UID: \"216c9d7eaf0949d3ae8dbf347ec305f8\") " pod="kube-system/kube-apiserver-ip-172-31-19-223" Feb 13 16:05:32.153675 kubelet[2958]: I0213 16:05:32.153641 2958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/216c9d7eaf0949d3ae8dbf347ec305f8-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-223\" (UID: \"216c9d7eaf0949d3ae8dbf347ec305f8\") " pod="kube-system/kube-apiserver-ip-172-31-19-223" Feb 13 16:05:32.154095 kubelet[2958]: I0213 16:05:32.153683 2958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/216c9d7eaf0949d3ae8dbf347ec305f8-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-223\" (UID: \"216c9d7eaf0949d3ae8dbf347ec305f8\") " pod="kube-system/kube-apiserver-ip-172-31-19-223" Feb 13 16:05:32.154095 kubelet[2958]: I0213 16:05:32.153721 2958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ed7edaea573afaad58efb61cd151e606-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-223\" (UID: \"ed7edaea573afaad58efb61cd151e606\") " pod="kube-system/kube-controller-manager-ip-172-31-19-223" Feb 13 16:05:32.154095 kubelet[2958]: I0213 16:05:32.153756 2958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/790058de5bf66108be425bf05040be2a-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-223\" (UID: \"790058de5bf66108be425bf05040be2a\") " pod="kube-system/kube-scheduler-ip-172-31-19-223" Feb 13 16:05:32.154095 kubelet[2958]: I0213 16:05:32.153819 2958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ed7edaea573afaad58efb61cd151e606-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-223\" (UID: \"ed7edaea573afaad58efb61cd151e606\") " pod="kube-system/kube-controller-manager-ip-172-31-19-223" Feb 13 16:05:32.154095 kubelet[2958]: I0213 16:05:32.153856 2958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ed7edaea573afaad58efb61cd151e606-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-223\" (UID: \"ed7edaea573afaad58efb61cd151e606\") " 
pod="kube-system/kube-controller-manager-ip-172-31-19-223" Feb 13 16:05:32.154450 kubelet[2958]: I0213 16:05:32.153891 2958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ed7edaea573afaad58efb61cd151e606-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-223\" (UID: \"ed7edaea573afaad58efb61cd151e606\") " pod="kube-system/kube-controller-manager-ip-172-31-19-223" Feb 13 16:05:32.154450 kubelet[2958]: I0213 16:05:32.153929 2958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ed7edaea573afaad58efb61cd151e606-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-223\" (UID: \"ed7edaea573afaad58efb61cd151e606\") " pod="kube-system/kube-controller-manager-ip-172-31-19-223" Feb 13 16:05:32.266373 kubelet[2958]: I0213 16:05:32.266233 2958 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-19-223" Feb 13 16:05:32.267294 kubelet[2958]: E0213 16:05:32.266940 2958 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.19.223:6443/api/v1/nodes\": dial tcp 172.31.19.223:6443: connect: connection refused" node="ip-172-31-19-223" Feb 13 16:05:32.321540 containerd[2030]: time="2025-02-13T16:05:32.321447256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-223,Uid:216c9d7eaf0949d3ae8dbf347ec305f8,Namespace:kube-system,Attempt:0,}" Feb 13 16:05:32.339021 containerd[2030]: time="2025-02-13T16:05:32.338882524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-223,Uid:ed7edaea573afaad58efb61cd151e606,Namespace:kube-system,Attempt:0,}" Feb 13 16:05:32.349316 containerd[2030]: time="2025-02-13T16:05:32.348940876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-223,Uid:790058de5bf66108be425bf05040be2a,Namespace:kube-system,Attempt:0,}" Feb 13 16:05:32.453542 kubelet[2958]: E0213 16:05:32.453473 2958 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.223:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-223?timeout=10s\": dial tcp 172.31.19.223:6443: connect: connection refused" interval="800ms" Feb 13 16:05:32.669440 kubelet[2958]: I0213 16:05:32.669385 2958 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-19-223" Feb 13 16:05:32.669956 kubelet[2958]: E0213 16:05:32.669883 2958 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.19.223:6443/api/v1/nodes\": dial tcp 172.31.19.223:6443: connect: connection refused" node="ip-172-31-19-223" Feb 13 16:05:32.843957 kubelet[2958]: W0213 16:05:32.843795 2958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.19.223:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.19.223:6443: connect: connection refused Feb 13 16:05:32.843957 kubelet[2958]: E0213 16:05:32.843898 2958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.19.223:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.19.223:6443: connect: connection refused" 
logger="UnhandledError" Feb 13 16:05:32.848567 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3002510985.mount: Deactivated successfully. Feb 13 16:05:32.864070 containerd[2030]: time="2025-02-13T16:05:32.863990094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 16:05:32.866161 containerd[2030]: time="2025-02-13T16:05:32.866090178Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 16:05:32.868083 containerd[2030]: time="2025-02-13T16:05:32.867990990Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Feb 13 16:05:32.869921 containerd[2030]: time="2025-02-13T16:05:32.869871138Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 16:05:32.872042 containerd[2030]: time="2025-02-13T16:05:32.871993398Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 16:05:32.875113 containerd[2030]: time="2025-02-13T16:05:32.874831975Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 16:05:32.876620 containerd[2030]: time="2025-02-13T16:05:32.876464779Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 16:05:32.881025 containerd[2030]: time="2025-02-13T16:05:32.880937179Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 16:05:32.885660 containerd[2030]: time="2025-02-13T16:05:32.884989399Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 546.002319ms" Feb 13 16:05:32.889689 containerd[2030]: time="2025-02-13T16:05:32.889618483Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 568.066695ms" Feb 13 16:05:32.894276 containerd[2030]: time="2025-02-13T16:05:32.894012883Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 544.960803ms" Feb 13 16:05:33.092633 kubelet[2958]: W0213 16:05:33.092419 2958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://172.31.19.223:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.223:6443: connect: connection refused Feb 13 16:05:33.092633 kubelet[2958]: E0213 16:05:33.092490 2958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.19.223:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.19.223:6443: connect: connection refused" logger="UnhandledError" Feb 13 16:05:33.118944 containerd[2030]: time="2025-02-13T16:05:33.118083316Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:05:33.118944 containerd[2030]: time="2025-02-13T16:05:33.118192360Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:05:33.118944 containerd[2030]: time="2025-02-13T16:05:33.118231252Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:05:33.118944 containerd[2030]: time="2025-02-13T16:05:33.118526620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:05:33.124010 containerd[2030]: time="2025-02-13T16:05:33.122824576Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:05:33.124010 containerd[2030]: time="2025-02-13T16:05:33.123845716Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:05:33.124010 containerd[2030]: time="2025-02-13T16:05:33.123873148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:05:33.125349 containerd[2030]: time="2025-02-13T16:05:33.124034848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:05:33.126214 containerd[2030]: time="2025-02-13T16:05:33.125825620Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:05:33.130443 containerd[2030]: time="2025-02-13T16:05:33.130009432Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:05:33.130443 containerd[2030]: time="2025-02-13T16:05:33.130071676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:05:33.131028 containerd[2030]: time="2025-02-13T16:05:33.130956928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:05:33.168341 systemd[1]: Started cri-containerd-1aee21db0e92f5986d30039d4e3a77e180eb1913440e5ca498fa40ae0dc4e510.scope - libcontainer container 1aee21db0e92f5986d30039d4e3a77e180eb1913440e5ca498fa40ae0dc4e510. Feb 13 16:05:33.189082 systemd[1]: Started cri-containerd-6d09a7ae8ca615f1bfe4ed4d6571286bedbde085387de6fe0563736c28d4ad59.scope - libcontainer container 6d09a7ae8ca615f1bfe4ed4d6571286bedbde085387de6fe0563736c28d4ad59. 
Feb 13 16:05:33.202959 systemd[1]: Started cri-containerd-e3df1f39f442395d583ed261bfa7e8fc53ba4e3a95efbed4a73b2930c19e904f.scope - libcontainer container e3df1f39f442395d583ed261bfa7e8fc53ba4e3a95efbed4a73b2930c19e904f. Feb 13 16:05:33.254940 kubelet[2958]: E0213 16:05:33.254868 2958 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.223:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-223?timeout=10s\": dial tcp 172.31.19.223:6443: connect: connection refused" interval="1.6s" Feb 13 16:05:33.307830 containerd[2030]: time="2025-02-13T16:05:33.306677597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-223,Uid:790058de5bf66108be425bf05040be2a,Namespace:kube-system,Attempt:0,} returns sandbox id \"1aee21db0e92f5986d30039d4e3a77e180eb1913440e5ca498fa40ae0dc4e510\"" Feb 13 16:05:33.318576 containerd[2030]: time="2025-02-13T16:05:33.318512057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-223,Uid:216c9d7eaf0949d3ae8dbf347ec305f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d09a7ae8ca615f1bfe4ed4d6571286bedbde085387de6fe0563736c28d4ad59\"" Feb 13 16:05:33.332426 containerd[2030]: time="2025-02-13T16:05:33.332325017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-223,Uid:ed7edaea573afaad58efb61cd151e606,Namespace:kube-system,Attempt:0,} returns sandbox id \"e3df1f39f442395d583ed261bfa7e8fc53ba4e3a95efbed4a73b2930c19e904f\"" Feb 13 16:05:33.337641 containerd[2030]: time="2025-02-13T16:05:33.337562885Z" level=info msg="CreateContainer within sandbox \"1aee21db0e92f5986d30039d4e3a77e180eb1913440e5ca498fa40ae0dc4e510\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 16:05:33.339722 containerd[2030]: time="2025-02-13T16:05:33.339515309Z" level=info msg="CreateContainer within sandbox \"6d09a7ae8ca615f1bfe4ed4d6571286bedbde085387de6fe0563736c28d4ad59\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 16:05:33.342423 containerd[2030]: time="2025-02-13T16:05:33.342354365Z" level=info msg="CreateContainer within sandbox \"e3df1f39f442395d583ed261bfa7e8fc53ba4e3a95efbed4a73b2930c19e904f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 16:05:33.384841 containerd[2030]: time="2025-02-13T16:05:33.384530285Z" level=info msg="CreateContainer within sandbox \"1aee21db0e92f5986d30039d4e3a77e180eb1913440e5ca498fa40ae0dc4e510\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"53a549563c03f176b3be9e2be9cf7248122fd6c52a73b7d32da8dd1a45e481fc\"" Feb 13 16:05:33.386289 containerd[2030]: time="2025-02-13T16:05:33.386230481Z" level=info msg="StartContainer for \"53a549563c03f176b3be9e2be9cf7248122fd6c52a73b7d32da8dd1a45e481fc\"" Feb 13 16:05:33.400210 containerd[2030]: time="2025-02-13T16:05:33.400027013Z" level=info msg="CreateContainer within sandbox \"6d09a7ae8ca615f1bfe4ed4d6571286bedbde085387de6fe0563736c28d4ad59\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"10d4da57a2f815bd3485b80343863c47aeb02861e915eb4f50cc67723ca4bd54\"" Feb 13 16:05:33.402279 containerd[2030]: time="2025-02-13T16:05:33.402196757Z" level=info msg="StartContainer for \"10d4da57a2f815bd3485b80343863c47aeb02861e915eb4f50cc67723ca4bd54\"" Feb 13 16:05:33.405799 kubelet[2958]: W0213 16:05:33.405484 2958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://172.31.19.223:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-223&limit=500&resourceVersion=0": dial tcp 172.31.19.223:6443: connect: connection refused Feb 13 16:05:33.405925 kubelet[2958]: E0213 16:05:33.405840 2958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.19.223:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-223&limit=500&resourceVersion=0\": dial tcp 172.31.19.223:6443: connect: connection refused" logger="UnhandledError" Feb 13 16:05:33.413908 containerd[2030]: time="2025-02-13T16:05:33.413700893Z" level=info msg="CreateContainer within sandbox \"e3df1f39f442395d583ed261bfa7e8fc53ba4e3a95efbed4a73b2930c19e904f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5ee34cb74a77545bfc88805bd44ad3261e90b170c39e8b3a189178c7513c8777\"" Feb 13 16:05:33.415430 containerd[2030]: time="2025-02-13T16:05:33.415371869Z" level=info msg="StartContainer for \"5ee34cb74a77545bfc88805bd44ad3261e90b170c39e8b3a189178c7513c8777\"" Feb 13 16:05:33.443098 systemd[1]: Started cri-containerd-53a549563c03f176b3be9e2be9cf7248122fd6c52a73b7d32da8dd1a45e481fc.scope - libcontainer container 53a549563c03f176b3be9e2be9cf7248122fd6c52a73b7d32da8dd1a45e481fc. Feb 13 16:05:33.463246 kubelet[2958]: W0213 16:05:33.463043 2958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.19.223:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.223:6443: connect: connection refused Feb 13 16:05:33.463246 kubelet[2958]: E0213 16:05:33.463168 2958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.19.223:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.19.223:6443: connect: connection refused" logger="UnhandledError" Feb 13 16:05:33.474470 kubelet[2958]: I0213 16:05:33.473948 2958 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-19-223" Feb 13 16:05:33.474470 kubelet[2958]: E0213 16:05:33.474400 2958 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.19.223:6443/api/v1/nodes\": dial tcp 172.31.19.223:6443: connect: connection refused" node="ip-172-31-19-223" Feb 13 16:05:33.484093 systemd[1]: Started cri-containerd-10d4da57a2f815bd3485b80343863c47aeb02861e915eb4f50cc67723ca4bd54.scope - libcontainer container 10d4da57a2f815bd3485b80343863c47aeb02861e915eb4f50cc67723ca4bd54. Feb 13 16:05:33.524105 systemd[1]: Started cri-containerd-5ee34cb74a77545bfc88805bd44ad3261e90b170c39e8b3a189178c7513c8777.scope - libcontainer container 5ee34cb74a77545bfc88805bd44ad3261e90b170c39e8b3a189178c7513c8777. 
Feb 13 16:05:33.578869 containerd[2030]: time="2025-02-13T16:05:33.578225670Z" level=info msg="StartContainer for \"53a549563c03f176b3be9e2be9cf7248122fd6c52a73b7d32da8dd1a45e481fc\" returns successfully" Feb 13 16:05:33.635883 containerd[2030]: time="2025-02-13T16:05:33.635198418Z" level=info msg="StartContainer for \"10d4da57a2f815bd3485b80343863c47aeb02861e915eb4f50cc67723ca4bd54\" returns successfully" Feb 13 16:05:33.654518 containerd[2030]: time="2025-02-13T16:05:33.654353622Z" level=info msg="StartContainer for \"5ee34cb74a77545bfc88805bd44ad3261e90b170c39e8b3a189178c7513c8777\" returns successfully" Feb 13 16:05:35.077166 kubelet[2958]: I0213 16:05:35.076854 2958 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-19-223" Feb 13 16:05:37.358301 kubelet[2958]: E0213 16:05:37.358238 2958 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-19-223\" not found" node="ip-172-31-19-223" Feb 13 16:05:37.490411 kubelet[2958]: I0213 16:05:37.490355 2958 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-19-223" Feb 13 16:05:37.827731 kubelet[2958]: I0213 16:05:37.826019 2958 apiserver.go:52] "Watching apiserver" Feb 13 16:05:37.851911 kubelet[2958]: I0213 16:05:37.851863 2958 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 16:05:39.401268 systemd[1]: Reloading requested from client PID 3239 ('systemctl') (unit session-9.scope)... Feb 13 16:05:39.401820 systemd[1]: Reloading... Feb 13 16:05:39.654842 zram_generator::config[3282]: No configuration found. Feb 13 16:05:39.922078 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 16:05:40.157641 systemd[1]: Reloading finished in 755 ms. Feb 13 16:05:40.255280 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:05:40.270594 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 16:05:40.271280 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:05:40.272889 systemd[1]: kubelet.service: Consumed 4.592s CPU time, 114.3M memory peak, 0B memory swap peak. Feb 13 16:05:40.284169 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:05:40.613633 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:05:40.633943 (kubelet)[3338]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 16:05:40.737240 kubelet[3338]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 16:05:40.737240 kubelet[3338]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 16:05:40.737240 kubelet[3338]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
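Every "connection refused" in this section targets the same endpoint, and node registration finally sticks at 16:05:37 once the kube-apiserver static pod started at 16:05:33 begins answering on 172.31.19.223:6443. The whole failure mode reduces to a TCP dial, which can be reproduced outside the kubelet:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Endpoint taken from the log; before the kube-apiserver static pod is
	// ready, this fails with "connect: connection refused".
	conn, err := net.DialTimeout("tcp", "172.31.19.223:6443", 2*time.Second)
	if err != nil {
		fmt.Println("control plane not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("control plane TCP endpoint is up")
}
```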
Feb 13 16:05:40.737826 kubelet[3338]: I0213 16:05:40.737323 3338 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 16:05:40.750122 kubelet[3338]: I0213 16:05:40.750066 3338 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 16:05:40.750122 kubelet[3338]: I0213 16:05:40.750111 3338 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 16:05:40.750893 kubelet[3338]: I0213 16:05:40.750548 3338 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 16:05:40.753910 kubelet[3338]: I0213 16:05:40.753585 3338 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 16:05:40.758022 kubelet[3338]: I0213 16:05:40.757966 3338 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 16:05:40.765548 kubelet[3338]: E0213 16:05:40.765378 3338 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 16:05:40.765548 kubelet[3338]: I0213 16:05:40.765439 3338 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 16:05:40.778160 kubelet[3338]: I0213 16:05:40.777757 3338 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 16:05:40.778160 kubelet[3338]: I0213 16:05:40.778027 3338 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 16:05:40.778378 kubelet[3338]: I0213 16:05:40.778247 3338 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 16:05:40.780596 kubelet[3338]: I0213 16:05:40.778288 3338 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ip-172-31-19-223","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 16:05:40.780596 kubelet[3338]: I0213 16:05:40.779328 3338 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 16:05:40.780596 kubelet[3338]: I0213 16:05:40.779368 3338 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 16:05:40.780596 kubelet[3338]: I0213 16:05:40.779434 3338 state_mem.go:36] "Initialized new in-memory state store" Feb 13 16:05:40.780596 kubelet[3338]: I0213 16:05:40.779613 3338 kubelet.go:408] "Attempting to sync node with API server" Feb 13 16:05:40.781069 kubelet[3338]: I0213 16:05:40.779635 3338 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 16:05:40.781069 kubelet[3338]: I0213 16:05:40.779673 3338 kubelet.go:314] "Adding apiserver pod source" Feb 13 16:05:40.781069 kubelet[3338]: I0213 16:05:40.779694 3338 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 16:05:40.783048 kubelet[3338]: I0213 16:05:40.782992 3338 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 16:05:40.784129 kubelet[3338]: I0213 16:05:40.783930 3338 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 16:05:40.784881 kubelet[3338]: I0213 16:05:40.784657 3338 server.go:1269] "Started kubelet" Feb 13 16:05:40.792988 kubelet[3338]: I0213 16:05:40.792555 3338 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 16:05:40.807796 kubelet[3338]: I0213 16:05:40.804082 3338 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 16:05:40.807796 kubelet[3338]: I0213 16:05:40.805703 3338 server.go:460] "Adding debug handlers to kubelet server" Feb 13 16:05:40.815111 kubelet[3338]: I0213 16:05:40.814971 3338 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 16:05:40.817817 kubelet[3338]: I0213 16:05:40.817741 3338 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 16:05:40.818742 kubelet[3338]: I0213 16:05:40.818707 3338 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 16:05:40.822261 kubelet[3338]: E0213 16:05:40.822206 3338 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-19-223\" not found" Feb 13 16:05:40.842041 kubelet[3338]: I0213 16:05:40.825425 3338 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 16:05:40.847895 kubelet[3338]: I0213 16:05:40.826203 3338 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 16:05:40.853147 sudo[3357]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 16:05:40.854335 sudo[3357]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 16:05:40.877039 kubelet[3338]: I0213 16:05:40.837571 3338 reconciler.go:26] "Reconciler: start to sync state" Feb 13 16:05:40.877205 kubelet[3338]: I0213 16:05:40.861110 3338 factory.go:221] Registration of the systemd container factory successfully Feb 13 16:05:40.879092 kubelet[3338]: I0213 16:05:40.877414 3338 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 16:05:40.879579 kubelet[3338]: E0213 16:05:40.879530 3338 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 16:05:40.892557 kubelet[3338]: I0213 16:05:40.892480 3338 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 16:05:40.898963 kubelet[3338]: I0213 16:05:40.895899 3338 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 16:05:40.898963 kubelet[3338]: I0213 16:05:40.895948 3338 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 16:05:40.898963 kubelet[3338]: I0213 16:05:40.895979 3338 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 16:05:40.898963 kubelet[3338]: E0213 16:05:40.896072 3338 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 16:05:40.902014 kubelet[3338]: I0213 16:05:40.901331 3338 factory.go:221] Registration of the containerd container factory successfully Feb 13 16:05:41.009796 kubelet[3338]: E0213 16:05:41.009695 3338 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 16:05:41.116084 kubelet[3338]: I0213 16:05:41.116038 3338 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 16:05:41.116084 kubelet[3338]: I0213 16:05:41.116071 3338 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 16:05:41.116251 kubelet[3338]: I0213 16:05:41.116107 3338 state_mem.go:36] "Initialized new in-memory state store" Feb 13 16:05:41.116387 kubelet[3338]: I0213 16:05:41.116352 3338 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 16:05:41.116557 kubelet[3338]: I0213 16:05:41.116382 3338 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 16:05:41.116557 kubelet[3338]: I0213 16:05:41.116425 3338 policy_none.go:49] "None policy: Start" Feb 13 16:05:41.120606 kubelet[3338]: I0213 16:05:41.120217 3338 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 16:05:41.120606 kubelet[3338]: I0213 16:05:41.120262 3338 state_mem.go:35] "Initializing new in-memory state store" Feb 13 16:05:41.121447 kubelet[3338]: I0213 16:05:41.120901 3338 state_mem.go:75] "Updated machine memory state" Feb 13 16:05:41.136326 kubelet[3338]: I0213 16:05:41.136187 3338 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 16:05:41.138136 kubelet[3338]: I0213 16:05:41.137820 3338 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 16:05:41.139010 kubelet[3338]: I0213 16:05:41.137850 3338 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 16:05:41.146303 kubelet[3338]: I0213 16:05:41.145622 3338 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 16:05:41.267346 kubelet[3338]: I0213 16:05:41.267295 3338 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-19-223" Feb 13 16:05:41.282050 kubelet[3338]: I0213 16:05:41.281982 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/216c9d7eaf0949d3ae8dbf347ec305f8-ca-certs\") pod \"kube-apiserver-ip-172-31-19-223\" (UID: \"216c9d7eaf0949d3ae8dbf347ec305f8\") " pod="kube-system/kube-apiserver-ip-172-31-19-223" Feb 13 16:05:41.282207 kubelet[3338]: I0213 16:05:41.282065 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/216c9d7eaf0949d3ae8dbf347ec305f8-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-223\" (UID: \"216c9d7eaf0949d3ae8dbf347ec305f8\") " pod="kube-system/kube-apiserver-ip-172-31-19-223" Feb 13 16:05:41.282294 kubelet[3338]: I0213 16:05:41.282248 3338 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/216c9d7eaf0949d3ae8dbf347ec305f8-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-223\" (UID: \"216c9d7eaf0949d3ae8dbf347ec305f8\") " pod="kube-system/kube-apiserver-ip-172-31-19-223" Feb 13 16:05:41.282375 kubelet[3338]: I0213 16:05:41.282328 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ed7edaea573afaad58efb61cd151e606-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-223\" (UID: \"ed7edaea573afaad58efb61cd151e606\") " pod="kube-system/kube-controller-manager-ip-172-31-19-223" Feb 13 16:05:41.282437 kubelet[3338]: I0213 16:05:41.282405 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ed7edaea573afaad58efb61cd151e606-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-223\" (UID: \"ed7edaea573afaad58efb61cd151e606\") " pod="kube-system/kube-controller-manager-ip-172-31-19-223" Feb 13 16:05:41.282497 kubelet[3338]: I0213 16:05:41.282445 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ed7edaea573afaad58efb61cd151e606-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-223\" (UID: \"ed7edaea573afaad58efb61cd151e606\") " pod="kube-system/kube-controller-manager-ip-172-31-19-223" Feb 13 16:05:41.282609 kubelet[3338]: I0213 16:05:41.282525 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ed7edaea573afaad58efb61cd151e606-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-223\" (UID: \"ed7edaea573afaad58efb61cd151e606\") " pod="kube-system/kube-controller-manager-ip-172-31-19-223" Feb 13 16:05:41.282696 kubelet[3338]: I0213 16:05:41.282638 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ed7edaea573afaad58efb61cd151e606-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-223\" (UID: \"ed7edaea573afaad58efb61cd151e606\") " pod="kube-system/kube-controller-manager-ip-172-31-19-223" Feb 13 16:05:41.282752 kubelet[3338]: I0213 16:05:41.282675 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/790058de5bf66108be425bf05040be2a-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-223\" (UID: \"790058de5bf66108be425bf05040be2a\") " pod="kube-system/kube-scheduler-ip-172-31-19-223" Feb 13 16:05:41.293991 kubelet[3338]: I0213 16:05:41.293933 3338 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-19-223" Feb 13 16:05:41.294177 kubelet[3338]: I0213 16:05:41.294057 3338 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-19-223" Feb 13 16:05:41.795911 kubelet[3338]: I0213 16:05:41.795867 3338 apiserver.go:52] "Watching apiserver" Feb 13 16:05:41.842578 kubelet[3338]: I0213 16:05:41.842471 3338 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 16:05:41.970477 sudo[3357]: pam_unix(sudo:session): session closed for user root Feb 13 16:05:42.099359 kubelet[3338]: I0213 16:05:42.099024 3338 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-19-223" podStartSLOduration=1.099000648 podStartE2EDuration="1.099000648s" podCreationTimestamp="2025-02-13 16:05:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:05:42.074835108 +0000 UTC m=+1.429929824" watchObservedRunningTime="2025-02-13 16:05:42.099000648 +0000 UTC m=+1.454095304" Feb 13 16:05:42.119524 kubelet[3338]: I0213 16:05:42.119407 3338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-19-223" podStartSLOduration=1.119384424 podStartE2EDuration="1.119384424s" podCreationTimestamp="2025-02-13 16:05:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:05:42.102968892 +0000 UTC m=+1.458063548" watchObservedRunningTime="2025-02-13 16:05:42.119384424 +0000 UTC m=+1.474479080" Feb 13 16:05:42.146672 kubelet[3338]: I0213 16:05:42.146586 3338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-19-223" podStartSLOduration=1.146564473 podStartE2EDuration="1.146564473s" podCreationTimestamp="2025-02-13 16:05:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:05:42.121991184 +0000 UTC m=+1.477085864" watchObservedRunningTime="2025-02-13 16:05:42.146564473 +0000 UTC m=+1.501659129" Feb 13 16:05:44.702396 kubelet[3338]: I0213 16:05:44.702334 3338 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 16:05:44.705521 containerd[2030]: time="2025-02-13T16:05:44.704650637Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 16:05:44.706068 kubelet[3338]: I0213 16:05:44.705160 3338 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 16:05:45.059150 sudo[2373]: pam_unix(sudo:session): session closed for user root Feb 13 16:05:45.084123 sshd[2369]: pam_unix(sshd:session): session closed for user core Feb 13 16:05:45.090953 systemd[1]: sshd@8-172.31.19.223:22-139.178.68.195:42746.service: Deactivated successfully. Feb 13 16:05:45.095231 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 16:05:45.095556 systemd[1]: session-9.scope: Consumed 12.001s CPU time, 153.2M memory peak, 0B memory swap peak. Feb 13 16:05:45.097941 systemd-logind[2004]: Session 9 logged out. Waiting for processes to exit. Feb 13 16:05:45.100256 systemd-logind[2004]: Removed session 9. Feb 13 16:05:45.555168 systemd[1]: Created slice kubepods-besteffort-pod70994d38_caea_4f39_bbe4_8527d5549554.slice - libcontainer container kubepods-besteffort-pod70994d38_caea_4f39_bbe4_8527d5549554.slice. Feb 13 16:05:45.583015 systemd[1]: Created slice kubepods-burstable-pode16796d3_d81b_46ee_a5f7_8d13e54c1552.slice - libcontainer container kubepods-burstable-pode16796d3_d81b_46ee_a5f7_8d13e54c1552.slice. 
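Earlier in this span the kubelet handed the node's allocated PodCIDR to containerd ("Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"); containerd then waits for a CNI plugin to drop its config, per the "No cni config template is specified" line. Parsing that range shows what the node has to hand out:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	_, podNet, err := net.ParseCIDR("192.168.0.0/24") // CIDR from the log
	if err != nil {
		panic(err)
	}
	ones, bits := podNet.Mask.Size()
	fmt.Printf("pod network %v: %d addresses for this node\n", podNet, 1<<(bits-ones))
}
```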
Feb 13 16:05:45.610030 kubelet[3338]: I0213 16:05:45.609982 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e16796d3-d81b-46ee-a5f7-8d13e54c1552-host-proc-sys-net\") pod \"cilium-gkff6\" (UID: \"e16796d3-d81b-46ee-a5f7-8d13e54c1552\") " pod="kube-system/cilium-gkff6" Feb 13 16:05:45.610252 kubelet[3338]: I0213 16:05:45.610228 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e16796d3-d81b-46ee-a5f7-8d13e54c1552-clustermesh-secrets\") pod \"cilium-gkff6\" (UID: \"e16796d3-d81b-46ee-a5f7-8d13e54c1552\") " pod="kube-system/cilium-gkff6" Feb 13 16:05:45.610392 kubelet[3338]: I0213 16:05:45.610367 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e16796d3-d81b-46ee-a5f7-8d13e54c1552-cilium-config-path\") pod \"cilium-gkff6\" (UID: \"e16796d3-d81b-46ee-a5f7-8d13e54c1552\") " pod="kube-system/cilium-gkff6" Feb 13 16:05:45.610552 kubelet[3338]: I0213 16:05:45.610498 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/70994d38-caea-4f39-bbe4-8527d5549554-lib-modules\") pod \"kube-proxy-z4tbx\" (UID: \"70994d38-caea-4f39-bbe4-8527d5549554\") " pod="kube-system/kube-proxy-z4tbx" Feb 13 16:05:45.610787 kubelet[3338]: I0213 16:05:45.610640 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e16796d3-d81b-46ee-a5f7-8d13e54c1552-cilium-run\") pod \"cilium-gkff6\" (UID: \"e16796d3-d81b-46ee-a5f7-8d13e54c1552\") " pod="kube-system/cilium-gkff6" Feb 13 16:05:45.610787 kubelet[3338]: I0213 16:05:45.610733 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e16796d3-d81b-46ee-a5f7-8d13e54c1552-bpf-maps\") pod \"cilium-gkff6\" (UID: \"e16796d3-d81b-46ee-a5f7-8d13e54c1552\") " pod="kube-system/cilium-gkff6" Feb 13 16:05:45.611091 kubelet[3338]: I0213 16:05:45.611018 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e16796d3-d81b-46ee-a5f7-8d13e54c1552-etc-cni-netd\") pod \"cilium-gkff6\" (UID: \"e16796d3-d81b-46ee-a5f7-8d13e54c1552\") " pod="kube-system/cilium-gkff6" Feb 13 16:05:45.611363 kubelet[3338]: I0213 16:05:45.611063 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e16796d3-d81b-46ee-a5f7-8d13e54c1552-hubble-tls\") pod \"cilium-gkff6\" (UID: \"e16796d3-d81b-46ee-a5f7-8d13e54c1552\") " pod="kube-system/cilium-gkff6" Feb 13 16:05:45.611363 kubelet[3338]: I0213 16:05:45.611319 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e16796d3-d81b-46ee-a5f7-8d13e54c1552-cni-path\") pod \"cilium-gkff6\" (UID: \"e16796d3-d81b-46ee-a5f7-8d13e54c1552\") " pod="kube-system/cilium-gkff6" Feb 13 16:05:45.611786 kubelet[3338]: I0213 16:05:45.611641 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/e16796d3-d81b-46ee-a5f7-8d13e54c1552-lib-modules\") pod \"cilium-gkff6\" (UID: \"e16796d3-d81b-46ee-a5f7-8d13e54c1552\") " pod="kube-system/cilium-gkff6" Feb 13 16:05:45.613033 kubelet[3338]: I0213 16:05:45.612874 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e16796d3-d81b-46ee-a5f7-8d13e54c1552-xtables-lock\") pod \"cilium-gkff6\" (UID: \"e16796d3-d81b-46ee-a5f7-8d13e54c1552\") " pod="kube-system/cilium-gkff6" Feb 13 16:05:45.613306 kubelet[3338]: I0213 16:05:45.613118 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e16796d3-d81b-46ee-a5f7-8d13e54c1552-hostproc\") pod \"cilium-gkff6\" (UID: \"e16796d3-d81b-46ee-a5f7-8d13e54c1552\") " pod="kube-system/cilium-gkff6" Feb 13 16:05:45.613753 kubelet[3338]: I0213 16:05:45.613520 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e16796d3-d81b-46ee-a5f7-8d13e54c1552-cilium-cgroup\") pod \"cilium-gkff6\" (UID: \"e16796d3-d81b-46ee-a5f7-8d13e54c1552\") " pod="kube-system/cilium-gkff6" Feb 13 16:05:45.613753 kubelet[3338]: I0213 16:05:45.613688 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/70994d38-caea-4f39-bbe4-8527d5549554-xtables-lock\") pod \"kube-proxy-z4tbx\" (UID: \"70994d38-caea-4f39-bbe4-8527d5549554\") " pod="kube-system/kube-proxy-z4tbx" Feb 13 16:05:45.614366 kubelet[3338]: I0213 16:05:45.614065 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e16796d3-d81b-46ee-a5f7-8d13e54c1552-host-proc-sys-kernel\") pod \"cilium-gkff6\" (UID: \"e16796d3-d81b-46ee-a5f7-8d13e54c1552\") " pod="kube-system/cilium-gkff6" Feb 13 16:05:45.614366 kubelet[3338]: I0213 16:05:45.614196 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z56rs\" (UniqueName: \"kubernetes.io/projected/e16796d3-d81b-46ee-a5f7-8d13e54c1552-kube-api-access-z56rs\") pod \"cilium-gkff6\" (UID: \"e16796d3-d81b-46ee-a5f7-8d13e54c1552\") " pod="kube-system/cilium-gkff6" Feb 13 16:05:45.614366 kubelet[3338]: I0213 16:05:45.614254 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/70994d38-caea-4f39-bbe4-8527d5549554-kube-proxy\") pod \"kube-proxy-z4tbx\" (UID: \"70994d38-caea-4f39-bbe4-8527d5549554\") " pod="kube-system/kube-proxy-z4tbx" Feb 13 16:05:45.614366 kubelet[3338]: I0213 16:05:45.614324 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pmbv\" (UniqueName: \"kubernetes.io/projected/70994d38-caea-4f39-bbe4-8527d5549554-kube-api-access-2pmbv\") pod \"kube-proxy-z4tbx\" (UID: \"70994d38-caea-4f39-bbe4-8527d5549554\") " pod="kube-system/kube-proxy-z4tbx" Feb 13 16:05:45.821641 systemd[1]: Created slice kubepods-besteffort-pod6c75a819_2f96_49d1_bd18_cfbc2f0cc6ae.slice - libcontainer container kubepods-besteffort-pod6c75a819_2f96_49d1_bd18_cfbc2f0cc6ae.slice. 
Feb 13 16:05:45.875374 containerd[2030]: time="2025-02-13T16:05:45.875138395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z4tbx,Uid:70994d38-caea-4f39-bbe4-8527d5549554,Namespace:kube-system,Attempt:0,}" Feb 13 16:05:45.892202 containerd[2030]: time="2025-02-13T16:05:45.892129615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gkff6,Uid:e16796d3-d81b-46ee-a5f7-8d13e54c1552,Namespace:kube-system,Attempt:0,}" Feb 13 16:05:45.917192 kubelet[3338]: I0213 16:05:45.917135 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6c75a819-2f96-49d1-bd18-cfbc2f0cc6ae-cilium-config-path\") pod \"cilium-operator-5d85765b45-xcpsg\" (UID: \"6c75a819-2f96-49d1-bd18-cfbc2f0cc6ae\") " pod="kube-system/cilium-operator-5d85765b45-xcpsg" Feb 13 16:05:45.918306 kubelet[3338]: I0213 16:05:45.918164 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6qnj\" (UniqueName: \"kubernetes.io/projected/6c75a819-2f96-49d1-bd18-cfbc2f0cc6ae-kube-api-access-j6qnj\") pod \"cilium-operator-5d85765b45-xcpsg\" (UID: \"6c75a819-2f96-49d1-bd18-cfbc2f0cc6ae\") " pod="kube-system/cilium-operator-5d85765b45-xcpsg" Feb 13 16:05:45.936680 containerd[2030]: time="2025-02-13T16:05:45.932390911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:05:45.936680 containerd[2030]: time="2025-02-13T16:05:45.932503027Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:05:45.936680 containerd[2030]: time="2025-02-13T16:05:45.932532451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:05:45.936680 containerd[2030]: time="2025-02-13T16:05:45.932701807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:05:45.974429 containerd[2030]: time="2025-02-13T16:05:45.974297756Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:05:45.974429 containerd[2030]: time="2025-02-13T16:05:45.974387048Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:05:45.975132 containerd[2030]: time="2025-02-13T16:05:45.975026600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:05:45.975589 containerd[2030]: time="2025-02-13T16:05:45.975509840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:05:45.977397 systemd[1]: Started cri-containerd-676d22f53808d07000f220cf19684af8631f6676d950b5ba102d32f2176f2fe9.scope - libcontainer container 676d22f53808d07000f220cf19684af8631f6676d950b5ba102d32f2176f2fe9. Feb 13 16:05:46.021513 systemd[1]: Started cri-containerd-6f3995ce628d7f99963905d9108f1b2da2ddf3684237a5e8438aeb1d3aa1f354.scope - libcontainer container 6f3995ce628d7f99963905d9108f1b2da2ddf3684237a5e8438aeb1d3aa1f354. 
Feb 13 16:05:46.066469 containerd[2030]: time="2025-02-13T16:05:46.066399364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z4tbx,Uid:70994d38-caea-4f39-bbe4-8527d5549554,Namespace:kube-system,Attempt:0,} returns sandbox id \"676d22f53808d07000f220cf19684af8631f6676d950b5ba102d32f2176f2fe9\"" Feb 13 16:05:46.078497 containerd[2030]: time="2025-02-13T16:05:46.077840932Z" level=info msg="CreateContainer within sandbox \"676d22f53808d07000f220cf19684af8631f6676d950b5ba102d32f2176f2fe9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 16:05:46.112084 containerd[2030]: time="2025-02-13T16:05:46.112010512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gkff6,Uid:e16796d3-d81b-46ee-a5f7-8d13e54c1552,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f3995ce628d7f99963905d9108f1b2da2ddf3684237a5e8438aeb1d3aa1f354\"" Feb 13 16:05:46.118876 containerd[2030]: time="2025-02-13T16:05:46.118733572Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 16:05:46.131228 containerd[2030]: time="2025-02-13T16:05:46.131087068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-xcpsg,Uid:6c75a819-2f96-49d1-bd18-cfbc2f0cc6ae,Namespace:kube-system,Attempt:0,}" Feb 13 16:05:46.134526 containerd[2030]: time="2025-02-13T16:05:46.134475352Z" level=info msg="CreateContainer within sandbox \"676d22f53808d07000f220cf19684af8631f6676d950b5ba102d32f2176f2fe9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e42a982c62d0a193be09c84ab3973dbe1a712ee9c072e2eb757baccf6461b511\"" Feb 13 16:05:46.135851 containerd[2030]: time="2025-02-13T16:05:46.135707692Z" level=info msg="StartContainer for \"e42a982c62d0a193be09c84ab3973dbe1a712ee9c072e2eb757baccf6461b511\"" Feb 13 16:05:46.190297 containerd[2030]: time="2025-02-13T16:05:46.189560777Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:05:46.190297 containerd[2030]: time="2025-02-13T16:05:46.189752477Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:05:46.190297 containerd[2030]: time="2025-02-13T16:05:46.190008881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:05:46.191703 containerd[2030]: time="2025-02-13T16:05:46.190338257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:05:46.194338 systemd[1]: Started cri-containerd-e42a982c62d0a193be09c84ab3973dbe1a712ee9c072e2eb757baccf6461b511.scope - libcontainer container e42a982c62d0a193be09c84ab3973dbe1a712ee9c072e2eb757baccf6461b511. Feb 13 16:05:46.230142 systemd[1]: Started cri-containerd-b44d7447b7505548fabc82ab401c3cde1e93d4035957fdff371fb1d32fd897e3.scope - libcontainer container b44d7447b7505548fabc82ab401c3cde1e93d4035957fdff371fb1d32fd897e3. 
Feb 13 16:05:46.276989 containerd[2030]: time="2025-02-13T16:05:46.276813545Z" level=info msg="StartContainer for \"e42a982c62d0a193be09c84ab3973dbe1a712ee9c072e2eb757baccf6461b511\" returns successfully" Feb 13 16:05:46.320001 containerd[2030]: time="2025-02-13T16:05:46.319476581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-xcpsg,Uid:6c75a819-2f96-49d1-bd18-cfbc2f0cc6ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"b44d7447b7505548fabc82ab401c3cde1e93d4035957fdff371fb1d32fd897e3\"" Feb 13 16:05:47.033141 kubelet[3338]: I0213 16:05:47.032980 3338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-z4tbx" podStartSLOduration=2.0329352529999998 podStartE2EDuration="2.032935253s" podCreationTimestamp="2025-02-13 16:05:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:05:47.031072529 +0000 UTC m=+6.386167197" watchObservedRunningTime="2025-02-13 16:05:47.032935253 +0000 UTC m=+6.388029909" Feb 13 16:05:51.516969 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount845668025.mount: Deactivated successfully. Feb 13 16:05:54.040657 containerd[2030]: time="2025-02-13T16:05:54.040571736Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:54.042426 containerd[2030]: time="2025-02-13T16:05:54.042355344Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Feb 13 16:05:54.044955 containerd[2030]: time="2025-02-13T16:05:54.044880912Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:54.048793 containerd[2030]: time="2025-02-13T16:05:54.048297516Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.92941884s" Feb 13 16:05:54.048793 containerd[2030]: time="2025-02-13T16:05:54.048364548Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 13 16:05:54.052728 containerd[2030]: time="2025-02-13T16:05:54.052656036Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 16:05:54.054482 containerd[2030]: time="2025-02-13T16:05:54.054278700Z" level=info msg="CreateContainer within sandbox \"6f3995ce628d7f99963905d9108f1b2da2ddf3684237a5e8438aeb1d3aa1f354\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 16:05:54.085985 containerd[2030]: time="2025-02-13T16:05:54.085910004Z" level=info msg="CreateContainer within sandbox \"6f3995ce628d7f99963905d9108f1b2da2ddf3684237a5e8438aeb1d3aa1f354\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id 
\"f91cb14daae849811b4381dffbfd01056fc146e38128567f88c13ccd194b9460\"" Feb 13 16:05:54.087925 containerd[2030]: time="2025-02-13T16:05:54.086814228Z" level=info msg="StartContainer for \"f91cb14daae849811b4381dffbfd01056fc146e38128567f88c13ccd194b9460\"" Feb 13 16:05:54.148124 systemd[1]: Started cri-containerd-f91cb14daae849811b4381dffbfd01056fc146e38128567f88c13ccd194b9460.scope - libcontainer container f91cb14daae849811b4381dffbfd01056fc146e38128567f88c13ccd194b9460. Feb 13 16:05:54.202189 containerd[2030]: time="2025-02-13T16:05:54.201964332Z" level=info msg="StartContainer for \"f91cb14daae849811b4381dffbfd01056fc146e38128567f88c13ccd194b9460\" returns successfully" Feb 13 16:05:54.222215 systemd[1]: cri-containerd-f91cb14daae849811b4381dffbfd01056fc146e38128567f88c13ccd194b9460.scope: Deactivated successfully. Feb 13 16:05:55.073292 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f91cb14daae849811b4381dffbfd01056fc146e38128567f88c13ccd194b9460-rootfs.mount: Deactivated successfully. Feb 13 16:05:55.590121 containerd[2030]: time="2025-02-13T16:05:55.590008695Z" level=info msg="shim disconnected" id=f91cb14daae849811b4381dffbfd01056fc146e38128567f88c13ccd194b9460 namespace=k8s.io Feb 13 16:05:55.591365 containerd[2030]: time="2025-02-13T16:05:55.590808771Z" level=warning msg="cleaning up after shim disconnected" id=f91cb14daae849811b4381dffbfd01056fc146e38128567f88c13ccd194b9460 namespace=k8s.io Feb 13 16:05:55.591365 containerd[2030]: time="2025-02-13T16:05:55.590841267Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:05:56.061800 containerd[2030]: time="2025-02-13T16:05:56.061705166Z" level=info msg="CreateContainer within sandbox \"6f3995ce628d7f99963905d9108f1b2da2ddf3684237a5e8438aeb1d3aa1f354\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 16:05:56.104969 containerd[2030]: time="2025-02-13T16:05:56.104603246Z" level=info msg="CreateContainer within sandbox \"6f3995ce628d7f99963905d9108f1b2da2ddf3684237a5e8438aeb1d3aa1f354\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7102eac6b01786b7ef90ce12ce9cdbe7585f025f9deb60e95bb3a3f6e4f3f40e\"" Feb 13 16:05:56.109041 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1232125774.mount: Deactivated successfully. Feb 13 16:05:56.118456 containerd[2030]: time="2025-02-13T16:05:56.118301942Z" level=info msg="StartContainer for \"7102eac6b01786b7ef90ce12ce9cdbe7585f025f9deb60e95bb3a3f6e4f3f40e\"" Feb 13 16:05:56.184105 systemd[1]: Started cri-containerd-7102eac6b01786b7ef90ce12ce9cdbe7585f025f9deb60e95bb3a3f6e4f3f40e.scope - libcontainer container 7102eac6b01786b7ef90ce12ce9cdbe7585f025f9deb60e95bb3a3f6e4f3f40e. Feb 13 16:05:56.233613 containerd[2030]: time="2025-02-13T16:05:56.232522299Z" level=info msg="StartContainer for \"7102eac6b01786b7ef90ce12ce9cdbe7585f025f9deb60e95bb3a3f6e4f3f40e\" returns successfully" Feb 13 16:05:56.259592 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 16:05:56.261160 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 16:05:56.261424 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 16:05:56.270244 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 16:05:56.270664 systemd[1]: cri-containerd-7102eac6b01786b7ef90ce12ce9cdbe7585f025f9deb60e95bb3a3f6e4f3f40e.scope: Deactivated successfully. Feb 13 16:05:56.305264 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Feb 13 16:05:56.324276 containerd[2030]: time="2025-02-13T16:05:56.323741823Z" level=info msg="shim disconnected" id=7102eac6b01786b7ef90ce12ce9cdbe7585f025f9deb60e95bb3a3f6e4f3f40e namespace=k8s.io Feb 13 16:05:56.324276 containerd[2030]: time="2025-02-13T16:05:56.323937543Z" level=warning msg="cleaning up after shim disconnected" id=7102eac6b01786b7ef90ce12ce9cdbe7585f025f9deb60e95bb3a3f6e4f3f40e namespace=k8s.io Feb 13 16:05:56.324276 containerd[2030]: time="2025-02-13T16:05:56.323961087Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:05:57.071743 containerd[2030]: time="2025-02-13T16:05:57.071546859Z" level=info msg="CreateContainer within sandbox \"6f3995ce628d7f99963905d9108f1b2da2ddf3684237a5e8438aeb1d3aa1f354\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 16:05:57.088243 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7102eac6b01786b7ef90ce12ce9cdbe7585f025f9deb60e95bb3a3f6e4f3f40e-rootfs.mount: Deactivated successfully. Feb 13 16:05:57.134083 containerd[2030]: time="2025-02-13T16:05:57.133896723Z" level=info msg="CreateContainer within sandbox \"6f3995ce628d7f99963905d9108f1b2da2ddf3684237a5e8438aeb1d3aa1f354\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"272714bddec76a07f067d0f0471b880da394c750911fcb45dfc5cbecc3bf8401\"" Feb 13 16:05:57.136311 containerd[2030]: time="2025-02-13T16:05:57.136252143Z" level=info msg="StartContainer for \"272714bddec76a07f067d0f0471b880da394c750911fcb45dfc5cbecc3bf8401\"" Feb 13 16:05:57.285120 systemd[1]: Started cri-containerd-272714bddec76a07f067d0f0471b880da394c750911fcb45dfc5cbecc3bf8401.scope - libcontainer container 272714bddec76a07f067d0f0471b880da394c750911fcb45dfc5cbecc3bf8401. Feb 13 16:05:57.356186 containerd[2030]: time="2025-02-13T16:05:57.355344304Z" level=info msg="StartContainer for \"272714bddec76a07f067d0f0471b880da394c750911fcb45dfc5cbecc3bf8401\" returns successfully" Feb 13 16:05:57.372036 systemd[1]: cri-containerd-272714bddec76a07f067d0f0471b880da394c750911fcb45dfc5cbecc3bf8401.scope: Deactivated successfully. 
Feb 13 16:05:57.513712 containerd[2030]: time="2025-02-13T16:05:57.513642845Z" level=info msg="shim disconnected" id=272714bddec76a07f067d0f0471b880da394c750911fcb45dfc5cbecc3bf8401 namespace=k8s.io Feb 13 16:05:57.514813 containerd[2030]: time="2025-02-13T16:05:57.514220645Z" level=warning msg="cleaning up after shim disconnected" id=272714bddec76a07f067d0f0471b880da394c750911fcb45dfc5cbecc3bf8401 namespace=k8s.io Feb 13 16:05:57.514813 containerd[2030]: time="2025-02-13T16:05:57.514259681Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:05:57.654531 containerd[2030]: time="2025-02-13T16:05:57.653905242Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:57.655729 containerd[2030]: time="2025-02-13T16:05:57.655671198Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Feb 13 16:05:57.656528 containerd[2030]: time="2025-02-13T16:05:57.656223654Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:05:57.659603 containerd[2030]: time="2025-02-13T16:05:57.659263722Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.606537606s" Feb 13 16:05:57.659603 containerd[2030]: time="2025-02-13T16:05:57.659361546Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 13 16:05:57.665952 containerd[2030]: time="2025-02-13T16:05:57.665882430Z" level=info msg="CreateContainer within sandbox \"b44d7447b7505548fabc82ab401c3cde1e93d4035957fdff371fb1d32fd897e3\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 16:05:57.690927 containerd[2030]: time="2025-02-13T16:05:57.690846198Z" level=info msg="CreateContainer within sandbox \"b44d7447b7505548fabc82ab401c3cde1e93d4035957fdff371fb1d32fd897e3\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"00ca3b4da1b5d28b46cb35152f0437d9db1eb7a8e90b368443a05e7e26b11d6f\"" Feb 13 16:05:57.691749 containerd[2030]: time="2025-02-13T16:05:57.691683798Z" level=info msg="StartContainer for \"00ca3b4da1b5d28b46cb35152f0437d9db1eb7a8e90b368443a05e7e26b11d6f\"" Feb 13 16:05:57.739102 systemd[1]: Started cri-containerd-00ca3b4da1b5d28b46cb35152f0437d9db1eb7a8e90b368443a05e7e26b11d6f.scope - libcontainer container 00ca3b4da1b5d28b46cb35152f0437d9db1eb7a8e90b368443a05e7e26b11d6f. 
Feb 13 16:05:57.781730 containerd[2030]: time="2025-02-13T16:05:57.781610958Z" level=info msg="StartContainer for \"00ca3b4da1b5d28b46cb35152f0437d9db1eb7a8e90b368443a05e7e26b11d6f\" returns successfully" Feb 13 16:05:58.083705 containerd[2030]: time="2025-02-13T16:05:58.083631604Z" level=info msg="CreateContainer within sandbox \"6f3995ce628d7f99963905d9108f1b2da2ddf3684237a5e8438aeb1d3aa1f354\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 16:05:58.091576 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-272714bddec76a07f067d0f0471b880da394c750911fcb45dfc5cbecc3bf8401-rootfs.mount: Deactivated successfully. Feb 13 16:05:58.114480 containerd[2030]: time="2025-02-13T16:05:58.114424156Z" level=info msg="CreateContainer within sandbox \"6f3995ce628d7f99963905d9108f1b2da2ddf3684237a5e8438aeb1d3aa1f354\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b74e663fe77c0c128d3ff9e8e139d381f28d4337e8f50f631b1608442533de23\"" Feb 13 16:05:58.120157 containerd[2030]: time="2025-02-13T16:05:58.120088408Z" level=info msg="StartContainer for \"b74e663fe77c0c128d3ff9e8e139d381f28d4337e8f50f631b1608442533de23\"" Feb 13 16:05:58.211859 systemd[1]: Started cri-containerd-b74e663fe77c0c128d3ff9e8e139d381f28d4337e8f50f631b1608442533de23.scope - libcontainer container b74e663fe77c0c128d3ff9e8e139d381f28d4337e8f50f631b1608442533de23. Feb 13 16:05:58.248828 kubelet[3338]: I0213 16:05:58.248706 3338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-xcpsg" podStartSLOduration=1.910249868 podStartE2EDuration="13.248657645s" podCreationTimestamp="2025-02-13 16:05:45 +0000 UTC" firstStartedPulling="2025-02-13 16:05:46.322580693 +0000 UTC m=+5.677675337" lastFinishedPulling="2025-02-13 16:05:57.66098847 +0000 UTC m=+17.016083114" observedRunningTime="2025-02-13 16:05:58.136467496 +0000 UTC m=+17.491562140" watchObservedRunningTime="2025-02-13 16:05:58.248657645 +0000 UTC m=+17.603752301" Feb 13 16:05:58.322719 systemd[1]: cri-containerd-b74e663fe77c0c128d3ff9e8e139d381f28d4337e8f50f631b1608442533de23.scope: Deactivated successfully. Feb 13 16:05:58.333872 containerd[2030]: time="2025-02-13T16:05:58.331819169Z" level=info msg="StartContainer for \"b74e663fe77c0c128d3ff9e8e139d381f28d4337e8f50f631b1608442533de23\" returns successfully" Feb 13 16:05:58.415816 containerd[2030]: time="2025-02-13T16:05:58.414902513Z" level=info msg="shim disconnected" id=b74e663fe77c0c128d3ff9e8e139d381f28d4337e8f50f631b1608442533de23 namespace=k8s.io Feb 13 16:05:58.415816 containerd[2030]: time="2025-02-13T16:05:58.414997253Z" level=warning msg="cleaning up after shim disconnected" id=b74e663fe77c0c128d3ff9e8e139d381f28d4337e8f50f631b1608442533de23 namespace=k8s.io Feb 13 16:05:58.415816 containerd[2030]: time="2025-02-13T16:05:58.415019513Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:05:58.447442 containerd[2030]: time="2025-02-13T16:05:58.447182754Z" level=warning msg="cleanup warnings time=\"2025-02-13T16:05:58Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 16:05:59.088602 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b74e663fe77c0c128d3ff9e8e139d381f28d4337e8f50f631b1608442533de23-rootfs.mount: Deactivated successfully. 
Feb 13 16:05:59.093689 containerd[2030]: time="2025-02-13T16:05:59.093623489Z" level=info msg="CreateContainer within sandbox \"6f3995ce628d7f99963905d9108f1b2da2ddf3684237a5e8438aeb1d3aa1f354\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 16:05:59.125286 containerd[2030]: time="2025-02-13T16:05:59.125199185Z" level=info msg="CreateContainer within sandbox \"6f3995ce628d7f99963905d9108f1b2da2ddf3684237a5e8438aeb1d3aa1f354\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a553149b79ba24bd17c5ed4bc818efb33347f73cc61a67ba33b2c18e722230f0\"" Feb 13 16:05:59.127642 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3296012298.mount: Deactivated successfully. Feb 13 16:05:59.130812 containerd[2030]: time="2025-02-13T16:05:59.128462705Z" level=info msg="StartContainer for \"a553149b79ba24bd17c5ed4bc818efb33347f73cc61a67ba33b2c18e722230f0\"" Feb 13 16:05:59.217157 systemd[1]: Started cri-containerd-a553149b79ba24bd17c5ed4bc818efb33347f73cc61a67ba33b2c18e722230f0.scope - libcontainer container a553149b79ba24bd17c5ed4bc818efb33347f73cc61a67ba33b2c18e722230f0. Feb 13 16:05:59.347350 containerd[2030]: time="2025-02-13T16:05:59.346701294Z" level=info msg="StartContainer for \"a553149b79ba24bd17c5ed4bc818efb33347f73cc61a67ba33b2c18e722230f0\" returns successfully" Feb 13 16:05:59.641193 kubelet[3338]: I0213 16:05:59.640673 3338 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Feb 13 16:05:59.717804 systemd[1]: Created slice kubepods-burstable-pode4f1a0b3_3059_42a5_b277_ddfc56080cb1.slice - libcontainer container kubepods-burstable-pode4f1a0b3_3059_42a5_b277_ddfc56080cb1.slice. Feb 13 16:05:59.737709 systemd[1]: Created slice kubepods-burstable-pod0d171998_b358_4843_a9fa_96eac209100b.slice - libcontainer container kubepods-burstable-pod0d171998_b358_4843_a9fa_96eac209100b.slice. 
Feb 13 16:05:59.832309 kubelet[3338]: I0213 16:05:59.832042 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0d171998-b358-4843-a9fa-96eac209100b-config-volume\") pod \"coredns-6f6b679f8f-rbks7\" (UID: \"0d171998-b358-4843-a9fa-96eac209100b\") " pod="kube-system/coredns-6f6b679f8f-rbks7" Feb 13 16:05:59.832309 kubelet[3338]: I0213 16:05:59.832117 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmx2q\" (UniqueName: \"kubernetes.io/projected/e4f1a0b3-3059-42a5-b277-ddfc56080cb1-kube-api-access-lmx2q\") pod \"coredns-6f6b679f8f-l8l7h\" (UID: \"e4f1a0b3-3059-42a5-b277-ddfc56080cb1\") " pod="kube-system/coredns-6f6b679f8f-l8l7h" Feb 13 16:05:59.832309 kubelet[3338]: I0213 16:05:59.832160 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e4f1a0b3-3059-42a5-b277-ddfc56080cb1-config-volume\") pod \"coredns-6f6b679f8f-l8l7h\" (UID: \"e4f1a0b3-3059-42a5-b277-ddfc56080cb1\") " pod="kube-system/coredns-6f6b679f8f-l8l7h" Feb 13 16:05:59.832309 kubelet[3338]: I0213 16:05:59.832201 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbs4w\" (UniqueName: \"kubernetes.io/projected/0d171998-b358-4843-a9fa-96eac209100b-kube-api-access-tbs4w\") pod \"coredns-6f6b679f8f-rbks7\" (UID: \"0d171998-b358-4843-a9fa-96eac209100b\") " pod="kube-system/coredns-6f6b679f8f-rbks7" Feb 13 16:06:00.026793 containerd[2030]: time="2025-02-13T16:06:00.026171957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-l8l7h,Uid:e4f1a0b3-3059-42a5-b277-ddfc56080cb1,Namespace:kube-system,Attempt:0,}" Feb 13 16:06:00.057576 containerd[2030]: time="2025-02-13T16:06:00.057325266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rbks7,Uid:0d171998-b358-4843-a9fa-96eac209100b,Namespace:kube-system,Attempt:0,}" Feb 13 16:06:02.425591 systemd-networkd[1929]: cilium_host: Link UP Feb 13 16:06:02.429054 systemd-networkd[1929]: cilium_net: Link UP Feb 13 16:06:02.429613 systemd-networkd[1929]: cilium_net: Gained carrier Feb 13 16:06:02.430115 (udev-worker)[4131]: Network interface NamePolicy= disabled on kernel command line. Feb 13 16:06:02.431080 systemd-networkd[1929]: cilium_host: Gained carrier Feb 13 16:06:02.432699 (udev-worker)[4167]: Network interface NamePolicy= disabled on kernel command line. Feb 13 16:06:02.606501 systemd-networkd[1929]: cilium_vxlan: Link UP Feb 13 16:06:02.606517 systemd-networkd[1929]: cilium_vxlan: Gained carrier Feb 13 16:06:03.083819 kernel: NET: Registered PF_ALG protocol family Feb 13 16:06:03.135109 systemd-networkd[1929]: cilium_host: Gained IPv6LL Feb 13 16:06:03.263044 systemd-networkd[1929]: cilium_net: Gained IPv6LL Feb 13 16:06:04.402600 systemd-networkd[1929]: lxc_health: Link UP Feb 13 16:06:04.412652 systemd-networkd[1929]: lxc_health: Gained carrier Feb 13 16:06:04.607006 systemd-networkd[1929]: cilium_vxlan: Gained IPv6LL Feb 13 16:06:04.697489 (udev-worker)[4496]: Network interface NamePolicy= disabled on kernel command line. 
Feb 13 16:06:04.699416 systemd-networkd[1929]: lxcffd6f4a97634: Link UP Feb 13 16:06:04.711815 kernel: eth0: renamed from tmpbb5d4 Feb 13 16:06:04.720779 systemd-networkd[1929]: lxcffd6f4a97634: Gained carrier Feb 13 16:06:05.117835 systemd-networkd[1929]: lxc6fff94194314: Link UP Feb 13 16:06:05.127146 kernel: eth0: renamed from tmp79043 Feb 13 16:06:05.133889 systemd-networkd[1929]: lxc6fff94194314: Gained carrier Feb 13 16:06:05.887895 systemd-networkd[1929]: lxcffd6f4a97634: Gained IPv6LL Feb 13 16:06:05.928971 kubelet[3338]: I0213 16:06:05.928876 3338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gkff6" podStartSLOduration=12.993087583 podStartE2EDuration="20.928852371s" podCreationTimestamp="2025-02-13 16:05:45 +0000 UTC" firstStartedPulling="2025-02-13 16:05:46.114659752 +0000 UTC m=+5.469754384" lastFinishedPulling="2025-02-13 16:05:54.050424516 +0000 UTC m=+13.405519172" observedRunningTime="2025-02-13 16:06:00.225404118 +0000 UTC m=+19.580498786" watchObservedRunningTime="2025-02-13 16:06:05.928852371 +0000 UTC m=+25.283947051" Feb 13 16:06:06.143077 systemd-networkd[1929]: lxc_health: Gained IPv6LL Feb 13 16:06:06.462984 systemd-networkd[1929]: lxc6fff94194314: Gained IPv6LL Feb 13 16:06:08.781577 ntpd[1997]: Listen normally on 7 cilium_host 192.168.0.138:123 Feb 13 16:06:08.782178 ntpd[1997]: 13 Feb 16:06:08 ntpd[1997]: Listen normally on 7 cilium_host 192.168.0.138:123 Feb 13 16:06:08.782178 ntpd[1997]: 13 Feb 16:06:08 ntpd[1997]: Listen normally on 8 cilium_net [fe80::d857:86ff:fe73:ecec%4]:123 Feb 13 16:06:08.782178 ntpd[1997]: 13 Feb 16:06:08 ntpd[1997]: Listen normally on 9 cilium_host [fe80::44dc:3fff:fe1a:1383%5]:123 Feb 13 16:06:08.782178 ntpd[1997]: 13 Feb 16:06:08 ntpd[1997]: Listen normally on 10 cilium_vxlan [fe80::cc62:acff:febb:f18f%6]:123 Feb 13 16:06:08.782178 ntpd[1997]: 13 Feb 16:06:08 ntpd[1997]: Listen normally on 11 lxc_health [fe80::1849:c3ff:fe99:f2df%8]:123 Feb 13 16:06:08.782178 ntpd[1997]: 13 Feb 16:06:08 ntpd[1997]: Listen normally on 12 lxcffd6f4a97634 [fe80::d477:d0ff:fed2:22d7%10]:123 Feb 13 16:06:08.782178 ntpd[1997]: 13 Feb 16:06:08 ntpd[1997]: Listen normally on 13 lxc6fff94194314 [fe80::881b:8eff:fecc:2885%12]:123 Feb 13 16:06:08.781714 ntpd[1997]: Listen normally on 8 cilium_net [fe80::d857:86ff:fe73:ecec%4]:123 Feb 13 16:06:08.781830 ntpd[1997]: Listen normally on 9 cilium_host [fe80::44dc:3fff:fe1a:1383%5]:123 Feb 13 16:06:08.781919 ntpd[1997]: Listen normally on 10 cilium_vxlan [fe80::cc62:acff:febb:f18f%6]:123 Feb 13 16:06:08.781989 ntpd[1997]: Listen normally on 11 lxc_health [fe80::1849:c3ff:fe99:f2df%8]:123 Feb 13 16:06:08.782056 ntpd[1997]: Listen normally on 12 lxcffd6f4a97634 [fe80::d477:d0ff:fed2:22d7%10]:123 Feb 13 16:06:08.782125 ntpd[1997]: Listen normally on 13 lxc6fff94194314 [fe80::881b:8eff:fecc:2885%12]:123 Feb 13 16:06:12.993819 containerd[2030]: time="2025-02-13T16:06:12.992639398Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:06:12.993819 containerd[2030]: time="2025-02-13T16:06:12.992758570Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:06:12.993819 containerd[2030]: time="2025-02-13T16:06:12.992898730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:12.997027 containerd[2030]: time="2025-02-13T16:06:12.993186838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:13.013827 containerd[2030]: time="2025-02-13T16:06:13.010356690Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:06:13.013827 containerd[2030]: time="2025-02-13T16:06:13.010466046Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:06:13.013827 containerd[2030]: time="2025-02-13T16:06:13.010504050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:13.013827 containerd[2030]: time="2025-02-13T16:06:13.010653138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:13.080120 systemd[1]: Started cri-containerd-790436dc5abad986c6a89d8c56855f9fd5beeb34935745f00674e346630eed02.scope - libcontainer container 790436dc5abad986c6a89d8c56855f9fd5beeb34935745f00674e346630eed02. Feb 13 16:06:13.084057 systemd[1]: Started cri-containerd-bb5d4bd9a78dc3643b99313098f74c34e1164f70544257008f123d25f23632af.scope - libcontainer container bb5d4bd9a78dc3643b99313098f74c34e1164f70544257008f123d25f23632af. Feb 13 16:06:13.224237 containerd[2030]: time="2025-02-13T16:06:13.224179399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rbks7,Uid:0d171998-b358-4843-a9fa-96eac209100b,Namespace:kube-system,Attempt:0,} returns sandbox id \"bb5d4bd9a78dc3643b99313098f74c34e1164f70544257008f123d25f23632af\"" Feb 13 16:06:13.237364 containerd[2030]: time="2025-02-13T16:06:13.237313171Z" level=info msg="CreateContainer within sandbox \"bb5d4bd9a78dc3643b99313098f74c34e1164f70544257008f123d25f23632af\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 16:06:13.276485 containerd[2030]: time="2025-02-13T16:06:13.276306439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-l8l7h,Uid:e4f1a0b3-3059-42a5-b277-ddfc56080cb1,Namespace:kube-system,Attempt:0,} returns sandbox id \"790436dc5abad986c6a89d8c56855f9fd5beeb34935745f00674e346630eed02\"" Feb 13 16:06:13.284218 containerd[2030]: time="2025-02-13T16:06:13.283925887Z" level=info msg="CreateContainer within sandbox \"790436dc5abad986c6a89d8c56855f9fd5beeb34935745f00674e346630eed02\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 16:06:13.307483 containerd[2030]: time="2025-02-13T16:06:13.307250371Z" level=info msg="CreateContainer within sandbox \"bb5d4bd9a78dc3643b99313098f74c34e1164f70544257008f123d25f23632af\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"42c0b5151ee9a8a77a50534f0d0a838b78cfab4ddb24e84cc13ef2352f61dada\"" Feb 13 16:06:13.309904 containerd[2030]: time="2025-02-13T16:06:13.308836843Z" level=info msg="StartContainer for \"42c0b5151ee9a8a77a50534f0d0a838b78cfab4ddb24e84cc13ef2352f61dada\"" Feb 13 16:06:13.333423 containerd[2030]: time="2025-02-13T16:06:13.333349015Z" level=info msg="CreateContainer within sandbox \"790436dc5abad986c6a89d8c56855f9fd5beeb34935745f00674e346630eed02\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"916f8db31b8e31f8d26bab27de516f6cb9a694db39cbce1cd43d132e5ba722ea\"" Feb 
13 16:06:13.334590 containerd[2030]: time="2025-02-13T16:06:13.334536103Z" level=info msg="StartContainer for \"916f8db31b8e31f8d26bab27de516f6cb9a694db39cbce1cd43d132e5ba722ea\"" Feb 13 16:06:13.401661 systemd[1]: Started cri-containerd-42c0b5151ee9a8a77a50534f0d0a838b78cfab4ddb24e84cc13ef2352f61dada.scope - libcontainer container 42c0b5151ee9a8a77a50534f0d0a838b78cfab4ddb24e84cc13ef2352f61dada. Feb 13 16:06:13.438349 systemd[1]: Started cri-containerd-916f8db31b8e31f8d26bab27de516f6cb9a694db39cbce1cd43d132e5ba722ea.scope - libcontainer container 916f8db31b8e31f8d26bab27de516f6cb9a694db39cbce1cd43d132e5ba722ea. Feb 13 16:06:13.497488 containerd[2030]: time="2025-02-13T16:06:13.497415164Z" level=info msg="StartContainer for \"42c0b5151ee9a8a77a50534f0d0a838b78cfab4ddb24e84cc13ef2352f61dada\" returns successfully" Feb 13 16:06:13.509898 containerd[2030]: time="2025-02-13T16:06:13.509709716Z" level=info msg="StartContainer for \"916f8db31b8e31f8d26bab27de516f6cb9a694db39cbce1cd43d132e5ba722ea\" returns successfully" Feb 13 16:06:13.715522 kubelet[3338]: I0213 16:06:13.715317 3338 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 16:06:14.203490 kubelet[3338]: I0213 16:06:14.203394 3338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-l8l7h" podStartSLOduration=29.203369756 podStartE2EDuration="29.203369756s" podCreationTimestamp="2025-02-13 16:05:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:06:14.199573784 +0000 UTC m=+33.554668524" watchObservedRunningTime="2025-02-13 16:06:14.203369756 +0000 UTC m=+33.558464400" Feb 13 16:06:14.203895 kubelet[3338]: I0213 16:06:14.203552 3338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-rbks7" podStartSLOduration=29.203540684 podStartE2EDuration="29.203540684s" podCreationTimestamp="2025-02-13 16:05:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:06:14.17482478 +0000 UTC m=+33.529919436" watchObservedRunningTime="2025-02-13 16:06:14.203540684 +0000 UTC m=+33.558635352" Feb 13 16:06:26.549349 systemd[1]: Started sshd@9-172.31.19.223:22-139.178.68.195:53834.service - OpenSSH per-connection server daemon (139.178.68.195:53834). Feb 13 16:06:26.739861 sshd[4701]: Accepted publickey for core from 139.178.68.195 port 53834 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:06:26.742674 sshd[4701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:06:26.750173 systemd-logind[2004]: New session 10 of user core. Feb 13 16:06:26.758083 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 16:06:27.024086 sshd[4701]: pam_unix(sshd:session): session closed for user core Feb 13 16:06:27.030675 systemd[1]: sshd@9-172.31.19.223:22-139.178.68.195:53834.service: Deactivated successfully. Feb 13 16:06:27.034057 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 16:06:27.036373 systemd-logind[2004]: Session 10 logged out. Waiting for processes to exit. Feb 13 16:06:27.039279 systemd-logind[2004]: Removed session 10. Feb 13 16:06:32.065312 systemd[1]: Started sshd@10-172.31.19.223:22-139.178.68.195:53838.service - OpenSSH per-connection server daemon (139.178.68.195:53838). 
Feb 13 16:06:32.248075 sshd[4715]: Accepted publickey for core from 139.178.68.195 port 53838 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:06:32.251328 sshd[4715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:06:32.260231 systemd-logind[2004]: New session 11 of user core. Feb 13 16:06:32.266017 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 16:06:32.507058 sshd[4715]: pam_unix(sshd:session): session closed for user core Feb 13 16:06:32.513738 systemd[1]: sshd@10-172.31.19.223:22-139.178.68.195:53838.service: Deactivated successfully. Feb 13 16:06:32.519643 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 16:06:32.521961 systemd-logind[2004]: Session 11 logged out. Waiting for processes to exit. Feb 13 16:06:32.523689 systemd-logind[2004]: Removed session 11. Feb 13 16:06:37.546289 systemd[1]: Started sshd@11-172.31.19.223:22-139.178.68.195:54670.service - OpenSSH per-connection server daemon (139.178.68.195:54670). Feb 13 16:06:37.725994 sshd[4729]: Accepted publickey for core from 139.178.68.195 port 54670 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:06:37.728674 sshd[4729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:06:37.736966 systemd-logind[2004]: New session 12 of user core. Feb 13 16:06:37.744080 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 16:06:37.980559 sshd[4729]: pam_unix(sshd:session): session closed for user core Feb 13 16:06:37.986140 systemd-logind[2004]: Session 12 logged out. Waiting for processes to exit. Feb 13 16:06:37.986960 systemd[1]: sshd@11-172.31.19.223:22-139.178.68.195:54670.service: Deactivated successfully. Feb 13 16:06:37.991312 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 16:06:37.996568 systemd-logind[2004]: Removed session 12. Feb 13 16:06:43.021316 systemd[1]: Started sshd@12-172.31.19.223:22-139.178.68.195:54686.service - OpenSSH per-connection server daemon (139.178.68.195:54686). Feb 13 16:06:43.195208 sshd[4746]: Accepted publickey for core from 139.178.68.195 port 54686 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:06:43.197936 sshd[4746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:06:43.206122 systemd-logind[2004]: New session 13 of user core. Feb 13 16:06:43.218093 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 16:06:43.456259 sshd[4746]: pam_unix(sshd:session): session closed for user core Feb 13 16:06:43.462673 systemd[1]: sshd@12-172.31.19.223:22-139.178.68.195:54686.service: Deactivated successfully. Feb 13 16:06:43.467118 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 16:06:43.468284 systemd-logind[2004]: Session 13 logged out. Waiting for processes to exit. Feb 13 16:06:43.470561 systemd-logind[2004]: Removed session 13. Feb 13 16:06:48.495320 systemd[1]: Started sshd@13-172.31.19.223:22-139.178.68.195:50590.service - OpenSSH per-connection server daemon (139.178.68.195:50590). Feb 13 16:06:48.681578 sshd[4764]: Accepted publickey for core from 139.178.68.195 port 50590 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:06:48.684385 sshd[4764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:06:48.692825 systemd-logind[2004]: New session 14 of user core. Feb 13 16:06:48.698046 systemd[1]: Started session-14.scope - Session 14 of User core. 
Feb 13 16:06:48.943142 sshd[4764]: pam_unix(sshd:session): session closed for user core Feb 13 16:06:48.949568 systemd[1]: sshd@13-172.31.19.223:22-139.178.68.195:50590.service: Deactivated successfully. Feb 13 16:06:48.953940 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 16:06:48.956397 systemd-logind[2004]: Session 14 logged out. Waiting for processes to exit. Feb 13 16:06:48.958199 systemd-logind[2004]: Removed session 14. Feb 13 16:06:48.982267 systemd[1]: Started sshd@14-172.31.19.223:22-139.178.68.195:50596.service - OpenSSH per-connection server daemon (139.178.68.195:50596). Feb 13 16:06:49.162243 sshd[4778]: Accepted publickey for core from 139.178.68.195 port 50596 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:06:49.164897 sshd[4778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:06:49.171924 systemd-logind[2004]: New session 15 of user core. Feb 13 16:06:49.180048 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 16:06:49.498599 sshd[4778]: pam_unix(sshd:session): session closed for user core Feb 13 16:06:49.508751 systemd[1]: sshd@14-172.31.19.223:22-139.178.68.195:50596.service: Deactivated successfully. Feb 13 16:06:49.517058 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 16:06:49.523175 systemd-logind[2004]: Session 15 logged out. Waiting for processes to exit. Feb 13 16:06:49.546686 systemd[1]: Started sshd@15-172.31.19.223:22-139.178.68.195:50606.service - OpenSSH per-connection server daemon (139.178.68.195:50606). Feb 13 16:06:49.548983 systemd-logind[2004]: Removed session 15. Feb 13 16:06:49.730906 sshd[4790]: Accepted publickey for core from 139.178.68.195 port 50606 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:06:49.733480 sshd[4790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:06:49.742246 systemd-logind[2004]: New session 16 of user core. Feb 13 16:06:49.751136 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 16:06:49.987957 sshd[4790]: pam_unix(sshd:session): session closed for user core Feb 13 16:06:49.995701 systemd[1]: sshd@15-172.31.19.223:22-139.178.68.195:50606.service: Deactivated successfully. Feb 13 16:06:50.000720 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 16:06:50.003456 systemd-logind[2004]: Session 16 logged out. Waiting for processes to exit. Feb 13 16:06:50.006053 systemd-logind[2004]: Removed session 16. Feb 13 16:06:55.027336 systemd[1]: Started sshd@16-172.31.19.223:22-139.178.68.195:50616.service - OpenSSH per-connection server daemon (139.178.68.195:50616). Feb 13 16:06:55.203579 sshd[4804]: Accepted publickey for core from 139.178.68.195 port 50616 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:06:55.206253 sshd[4804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:06:55.213595 systemd-logind[2004]: New session 17 of user core. Feb 13 16:06:55.225054 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 16:06:55.457556 sshd[4804]: pam_unix(sshd:session): session closed for user core Feb 13 16:06:55.463854 systemd[1]: sshd@16-172.31.19.223:22-139.178.68.195:50616.service: Deactivated successfully. Feb 13 16:06:55.467305 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 16:06:55.469318 systemd-logind[2004]: Session 17 logged out. Waiting for processes to exit. Feb 13 16:06:55.471677 systemd-logind[2004]: Removed session 17. 
Feb 13 16:07:00.498305 systemd[1]: Started sshd@17-172.31.19.223:22-139.178.68.195:33006.service - OpenSSH per-connection server daemon (139.178.68.195:33006). Feb 13 16:07:00.679394 sshd[4817]: Accepted publickey for core from 139.178.68.195 port 33006 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:07:00.681855 sshd[4817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:07:00.689878 systemd-logind[2004]: New session 18 of user core. Feb 13 16:07:00.696040 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 16:07:00.940018 sshd[4817]: pam_unix(sshd:session): session closed for user core Feb 13 16:07:00.946256 systemd[1]: sshd@17-172.31.19.223:22-139.178.68.195:33006.service: Deactivated successfully. Feb 13 16:07:00.950897 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 16:07:00.953866 systemd-logind[2004]: Session 18 logged out. Waiting for processes to exit. Feb 13 16:07:00.955549 systemd-logind[2004]: Removed session 18. Feb 13 16:07:05.981357 systemd[1]: Started sshd@18-172.31.19.223:22-139.178.68.195:33016.service - OpenSSH per-connection server daemon (139.178.68.195:33016). Feb 13 16:07:06.158014 sshd[4829]: Accepted publickey for core from 139.178.68.195 port 33016 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:07:06.160913 sshd[4829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:07:06.168729 systemd-logind[2004]: New session 19 of user core. Feb 13 16:07:06.173534 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 16:07:06.415561 sshd[4829]: pam_unix(sshd:session): session closed for user core Feb 13 16:07:06.424220 systemd[1]: sshd@18-172.31.19.223:22-139.178.68.195:33016.service: Deactivated successfully. Feb 13 16:07:06.427928 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 16:07:06.430355 systemd-logind[2004]: Session 19 logged out. Waiting for processes to exit. Feb 13 16:07:06.432707 systemd-logind[2004]: Removed session 19. Feb 13 16:07:06.455384 systemd[1]: Started sshd@19-172.31.19.223:22-139.178.68.195:58728.service - OpenSSH per-connection server daemon (139.178.68.195:58728). Feb 13 16:07:06.629443 sshd[4842]: Accepted publickey for core from 139.178.68.195 port 58728 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:07:06.632251 sshd[4842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:07:06.643023 systemd-logind[2004]: New session 20 of user core. Feb 13 16:07:06.651047 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 16:07:06.942124 sshd[4842]: pam_unix(sshd:session): session closed for user core Feb 13 16:07:06.947633 systemd[1]: sshd@19-172.31.19.223:22-139.178.68.195:58728.service: Deactivated successfully. Feb 13 16:07:06.952328 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 16:07:06.953928 systemd-logind[2004]: Session 20 logged out. Waiting for processes to exit. Feb 13 16:07:06.957806 systemd-logind[2004]: Removed session 20. Feb 13 16:07:06.984326 systemd[1]: Started sshd@20-172.31.19.223:22-139.178.68.195:58738.service - OpenSSH per-connection server daemon (139.178.68.195:58738). 
Feb 13 16:07:07.162196 sshd[4853]: Accepted publickey for core from 139.178.68.195 port 58738 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:07:07.164978 sshd[4853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:07:07.172359 systemd-logind[2004]: New session 21 of user core. Feb 13 16:07:07.182009 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 16:07:09.817116 sshd[4853]: pam_unix(sshd:session): session closed for user core Feb 13 16:07:09.827561 systemd[1]: sshd@20-172.31.19.223:22-139.178.68.195:58738.service: Deactivated successfully. Feb 13 16:07:09.837497 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 16:07:09.844059 systemd-logind[2004]: Session 21 logged out. Waiting for processes to exit. Feb 13 16:07:09.861296 systemd[1]: Started sshd@21-172.31.19.223:22-139.178.68.195:58744.service - OpenSSH per-connection server daemon (139.178.68.195:58744). Feb 13 16:07:09.864967 systemd-logind[2004]: Removed session 21. Feb 13 16:07:10.042539 sshd[4870]: Accepted publickey for core from 139.178.68.195 port 58744 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:07:10.046211 sshd[4870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:07:10.058422 systemd-logind[2004]: New session 22 of user core. Feb 13 16:07:10.070043 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 16:07:10.550898 sshd[4870]: pam_unix(sshd:session): session closed for user core Feb 13 16:07:10.558406 systemd-logind[2004]: Session 22 logged out. Waiting for processes to exit. Feb 13 16:07:10.559074 systemd[1]: sshd@21-172.31.19.223:22-139.178.68.195:58744.service: Deactivated successfully. Feb 13 16:07:10.565580 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 16:07:10.569242 systemd-logind[2004]: Removed session 22. Feb 13 16:07:10.589318 systemd[1]: Started sshd@22-172.31.19.223:22-139.178.68.195:58750.service - OpenSSH per-connection server daemon (139.178.68.195:58750). Feb 13 16:07:10.769214 sshd[4881]: Accepted publickey for core from 139.178.68.195 port 58750 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:07:10.771855 sshd[4881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:07:10.779724 systemd-logind[2004]: New session 23 of user core. Feb 13 16:07:10.785045 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 16:07:11.024087 sshd[4881]: pam_unix(sshd:session): session closed for user core Feb 13 16:07:11.029510 systemd-logind[2004]: Session 23 logged out. Waiting for processes to exit. Feb 13 16:07:11.030134 systemd[1]: sshd@22-172.31.19.223:22-139.178.68.195:58750.service: Deactivated successfully. Feb 13 16:07:11.033757 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 16:07:11.039074 systemd-logind[2004]: Removed session 23. Feb 13 16:07:16.063351 systemd[1]: Started sshd@23-172.31.19.223:22-139.178.68.195:58760.service - OpenSSH per-connection server daemon (139.178.68.195:58760). Feb 13 16:07:16.229560 sshd[4895]: Accepted publickey for core from 139.178.68.195 port 58760 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:07:16.232227 sshd[4895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:07:16.239649 systemd-logind[2004]: New session 24 of user core. Feb 13 16:07:16.247087 systemd[1]: Started session-24.scope - Session 24 of User core. 
Feb 13 16:07:16.481282 sshd[4895]: pam_unix(sshd:session): session closed for user core
Feb 13 16:07:16.487845 systemd-logind[2004]: Session 24 logged out. Waiting for processes to exit.
Feb 13 16:07:16.489357 systemd[1]: sshd@23-172.31.19.223:22-139.178.68.195:58760.service: Deactivated successfully.
Feb 13 16:07:16.494080 systemd[1]: session-24.scope: Deactivated successfully.
Feb 13 16:07:16.496912 systemd-logind[2004]: Removed session 24.
Feb 13 16:07:21.519314 systemd[1]: Started sshd@24-172.31.19.223:22-139.178.68.195:35932.service - OpenSSH per-connection server daemon (139.178.68.195:35932).
Feb 13 16:07:21.699075 sshd[4913]: Accepted publickey for core from 139.178.68.195 port 35932 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU
Feb 13 16:07:21.701941 sshd[4913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:07:21.710633 systemd-logind[2004]: New session 25 of user core.
Feb 13 16:07:21.718094 systemd[1]: Started session-25.scope - Session 25 of User core.
Feb 13 16:07:21.948481 sshd[4913]: pam_unix(sshd:session): session closed for user core
Feb 13 16:07:21.955090 systemd[1]: sshd@24-172.31.19.223:22-139.178.68.195:35932.service: Deactivated successfully.
Feb 13 16:07:21.960217 systemd[1]: session-25.scope: Deactivated successfully.
Feb 13 16:07:21.961747 systemd-logind[2004]: Session 25 logged out. Waiting for processes to exit.
Feb 13 16:07:21.963506 systemd-logind[2004]: Removed session 25.
Feb 13 16:07:26.989318 systemd[1]: Started sshd@25-172.31.19.223:22-139.178.68.195:37884.service - OpenSSH per-connection server daemon (139.178.68.195:37884).
Feb 13 16:07:27.179231 sshd[4926]: Accepted publickey for core from 139.178.68.195 port 37884 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU
Feb 13 16:07:27.181893 sshd[4926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:07:27.189870 systemd-logind[2004]: New session 26 of user core.
Feb 13 16:07:27.197081 systemd[1]: Started session-26.scope - Session 26 of User core.
Feb 13 16:07:27.433147 sshd[4926]: pam_unix(sshd:session): session closed for user core
Feb 13 16:07:27.438206 systemd[1]: sshd@25-172.31.19.223:22-139.178.68.195:37884.service: Deactivated successfully.
Feb 13 16:07:27.442128 systemd[1]: session-26.scope: Deactivated successfully.
Feb 13 16:07:27.445409 systemd-logind[2004]: Session 26 logged out. Waiting for processes to exit.
Feb 13 16:07:27.447813 systemd-logind[2004]: Removed session 26.
Feb 13 16:07:32.472420 systemd[1]: Started sshd@26-172.31.19.223:22-139.178.68.195:37900.service - OpenSSH per-connection server daemon (139.178.68.195:37900).
Feb 13 16:07:32.646585 sshd[4939]: Accepted publickey for core from 139.178.68.195 port 37900 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU
Feb 13 16:07:32.649298 sshd[4939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:07:32.657150 systemd-logind[2004]: New session 27 of user core.
Feb 13 16:07:32.666050 systemd[1]: Started session-27.scope - Session 27 of User core.
Feb 13 16:07:32.899089 sshd[4939]: pam_unix(sshd:session): session closed for user core
Feb 13 16:07:32.906845 systemd[1]: sshd@26-172.31.19.223:22-139.178.68.195:37900.service: Deactivated successfully.
Feb 13 16:07:32.911548 systemd[1]: session-27.scope: Deactivated successfully.
Feb 13 16:07:32.912815 systemd-logind[2004]: Session 27 logged out. Waiting for processes to exit.
Feb 13 16:07:32.914398 systemd-logind[2004]: Removed session 27.
Feb 13 16:07:32.935311 systemd[1]: Started sshd@27-172.31.19.223:22-139.178.68.195:37916.service - OpenSSH per-connection server daemon (139.178.68.195:37916).
Feb 13 16:07:33.120359 sshd[4951]: Accepted publickey for core from 139.178.68.195 port 37916 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU
Feb 13 16:07:33.123129 sshd[4951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:07:33.132022 systemd-logind[2004]: New session 28 of user core.
Feb 13 16:07:33.140044 systemd[1]: Started session-28.scope - Session 28 of User core.
Feb 13 16:07:36.049892 containerd[2030]: time="2025-02-13T16:07:36.049673810Z" level=info msg="StopContainer for \"00ca3b4da1b5d28b46cb35152f0437d9db1eb7a8e90b368443a05e7e26b11d6f\" with timeout 30 (s)"
Feb 13 16:07:36.051306 containerd[2030]: time="2025-02-13T16:07:36.050645114Z" level=info msg="Stop container \"00ca3b4da1b5d28b46cb35152f0437d9db1eb7a8e90b368443a05e7e26b11d6f\" with signal terminated"
Feb 13 16:07:36.081626 systemd[1]: run-containerd-runc-k8s.io-a553149b79ba24bd17c5ed4bc818efb33347f73cc61a67ba33b2c18e722230f0-runc.lkC4F7.mount: Deactivated successfully.
Feb 13 16:07:36.097531 systemd[1]: cri-containerd-00ca3b4da1b5d28b46cb35152f0437d9db1eb7a8e90b368443a05e7e26b11d6f.scope: Deactivated successfully.
Feb 13 16:07:36.107402 containerd[2030]: time="2025-02-13T16:07:36.107313591Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 16:07:36.126650 containerd[2030]: time="2025-02-13T16:07:36.126583935Z" level=info msg="StopContainer for \"a553149b79ba24bd17c5ed4bc818efb33347f73cc61a67ba33b2c18e722230f0\" with timeout 2 (s)"
Feb 13 16:07:36.128585 containerd[2030]: time="2025-02-13T16:07:36.128391291Z" level=info msg="Stop container \"a553149b79ba24bd17c5ed4bc818efb33347f73cc61a67ba33b2c18e722230f0\" with signal terminated"
Feb 13 16:07:36.146741 systemd-networkd[1929]: lxc_health: Link DOWN
Feb 13 16:07:36.146761 systemd-networkd[1929]: lxc_health: Lost carrier
Feb 13 16:07:36.185360 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-00ca3b4da1b5d28b46cb35152f0437d9db1eb7a8e90b368443a05e7e26b11d6f-rootfs.mount: Deactivated successfully.
Feb 13 16:07:36.188640 systemd[1]: cri-containerd-a553149b79ba24bd17c5ed4bc818efb33347f73cc61a67ba33b2c18e722230f0.scope: Deactivated successfully.
Feb 13 16:07:36.189530 systemd[1]: cri-containerd-a553149b79ba24bd17c5ed4bc818efb33347f73cc61a67ba33b2c18e722230f0.scope: Consumed 14.205s CPU time.
Feb 13 16:07:36.191721 kubelet[3338]: E0213 16:07:36.191555 3338 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 16:07:36.205581 containerd[2030]: time="2025-02-13T16:07:36.205266999Z" level=info msg="shim disconnected" id=00ca3b4da1b5d28b46cb35152f0437d9db1eb7a8e90b368443a05e7e26b11d6f namespace=k8s.io
Feb 13 16:07:36.205581 containerd[2030]: time="2025-02-13T16:07:36.205385199Z" level=warning msg="cleaning up after shim disconnected" id=00ca3b4da1b5d28b46cb35152f0437d9db1eb7a8e90b368443a05e7e26b11d6f namespace=k8s.io
Feb 13 16:07:36.205581 containerd[2030]: time="2025-02-13T16:07:36.205406211Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 16:07:36.234421 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a553149b79ba24bd17c5ed4bc818efb33347f73cc61a67ba33b2c18e722230f0-rootfs.mount: Deactivated successfully.
Feb 13 16:07:36.244550 containerd[2030]: time="2025-02-13T16:07:36.244408539Z" level=info msg="shim disconnected" id=a553149b79ba24bd17c5ed4bc818efb33347f73cc61a67ba33b2c18e722230f0 namespace=k8s.io
Feb 13 16:07:36.244857 containerd[2030]: time="2025-02-13T16:07:36.244556295Z" level=warning msg="cleaning up after shim disconnected" id=a553149b79ba24bd17c5ed4bc818efb33347f73cc61a67ba33b2c18e722230f0 namespace=k8s.io
Feb 13 16:07:36.244857 containerd[2030]: time="2025-02-13T16:07:36.244579095Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 16:07:36.247215 containerd[2030]: time="2025-02-13T16:07:36.246949887Z" level=info msg="StopContainer for \"00ca3b4da1b5d28b46cb35152f0437d9db1eb7a8e90b368443a05e7e26b11d6f\" returns successfully"
Feb 13 16:07:36.248354 containerd[2030]: time="2025-02-13T16:07:36.248207763Z" level=info msg="StopPodSandbox for \"b44d7447b7505548fabc82ab401c3cde1e93d4035957fdff371fb1d32fd897e3\""
Feb 13 16:07:36.248592 containerd[2030]: time="2025-02-13T16:07:36.248558919Z" level=info msg="Container to stop \"00ca3b4da1b5d28b46cb35152f0437d9db1eb7a8e90b368443a05e7e26b11d6f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 16:07:36.253654 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b44d7447b7505548fabc82ab401c3cde1e93d4035957fdff371fb1d32fd897e3-shm.mount: Deactivated successfully.
Feb 13 16:07:36.270947 systemd[1]: cri-containerd-b44d7447b7505548fabc82ab401c3cde1e93d4035957fdff371fb1d32fd897e3.scope: Deactivated successfully.
Feb 13 16:07:36.280207 containerd[2030]: time="2025-02-13T16:07:36.280040223Z" level=warning msg="cleanup warnings time=\"2025-02-13T16:07:36Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Feb 13 16:07:36.287447 containerd[2030]: time="2025-02-13T16:07:36.287276896Z" level=info msg="StopContainer for \"a553149b79ba24bd17c5ed4bc818efb33347f73cc61a67ba33b2c18e722230f0\" returns successfully"
Feb 13 16:07:36.288651 containerd[2030]: time="2025-02-13T16:07:36.288384292Z" level=info msg="StopPodSandbox for \"6f3995ce628d7f99963905d9108f1b2da2ddf3684237a5e8438aeb1d3aa1f354\""
Feb 13 16:07:36.288651 containerd[2030]: time="2025-02-13T16:07:36.288454960Z" level=info msg="Container to stop \"a553149b79ba24bd17c5ed4bc818efb33347f73cc61a67ba33b2c18e722230f0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 16:07:36.288651 containerd[2030]: time="2025-02-13T16:07:36.288482344Z" level=info msg="Container to stop \"f91cb14daae849811b4381dffbfd01056fc146e38128567f88c13ccd194b9460\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 16:07:36.288651 containerd[2030]: time="2025-02-13T16:07:36.288506248Z" level=info msg="Container to stop \"7102eac6b01786b7ef90ce12ce9cdbe7585f025f9deb60e95bb3a3f6e4f3f40e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 16:07:36.288651 containerd[2030]: time="2025-02-13T16:07:36.288529132Z" level=info msg="Container to stop \"272714bddec76a07f067d0f0471b880da394c750911fcb45dfc5cbecc3bf8401\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 16:07:36.289550 containerd[2030]: time="2025-02-13T16:07:36.288553276Z" level=info msg="Container to stop \"b74e663fe77c0c128d3ff9e8e139d381f28d4337e8f50f631b1608442533de23\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 16:07:36.303197 systemd[1]: cri-containerd-6f3995ce628d7f99963905d9108f1b2da2ddf3684237a5e8438aeb1d3aa1f354.scope: Deactivated successfully.
Feb 13 16:07:36.334946 containerd[2030]: time="2025-02-13T16:07:36.334591696Z" level=info msg="shim disconnected" id=b44d7447b7505548fabc82ab401c3cde1e93d4035957fdff371fb1d32fd897e3 namespace=k8s.io
Feb 13 16:07:36.334946 containerd[2030]: time="2025-02-13T16:07:36.334664872Z" level=warning msg="cleaning up after shim disconnected" id=b44d7447b7505548fabc82ab401c3cde1e93d4035957fdff371fb1d32fd897e3 namespace=k8s.io
Feb 13 16:07:36.334946 containerd[2030]: time="2025-02-13T16:07:36.334687996Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 16:07:36.356828 containerd[2030]: time="2025-02-13T16:07:36.356722084Z" level=info msg="shim disconnected" id=6f3995ce628d7f99963905d9108f1b2da2ddf3684237a5e8438aeb1d3aa1f354 namespace=k8s.io
Feb 13 16:07:36.358277 containerd[2030]: time="2025-02-13T16:07:36.357972592Z" level=warning msg="cleaning up after shim disconnected" id=6f3995ce628d7f99963905d9108f1b2da2ddf3684237a5e8438aeb1d3aa1f354 namespace=k8s.io
Feb 13 16:07:36.358277 containerd[2030]: time="2025-02-13T16:07:36.358022536Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 16:07:36.393440 containerd[2030]: time="2025-02-13T16:07:36.393385984Z" level=info msg="TearDown network for sandbox \"b44d7447b7505548fabc82ab401c3cde1e93d4035957fdff371fb1d32fd897e3\" successfully"
Feb 13 16:07:36.393786 containerd[2030]: time="2025-02-13T16:07:36.393622948Z" level=info msg="StopPodSandbox for \"b44d7447b7505548fabc82ab401c3cde1e93d4035957fdff371fb1d32fd897e3\" returns successfully"
Feb 13 16:07:36.402341 containerd[2030]: time="2025-02-13T16:07:36.402292048Z" level=info msg="TearDown network for sandbox \"6f3995ce628d7f99963905d9108f1b2da2ddf3684237a5e8438aeb1d3aa1f354\" successfully"
Feb 13 16:07:36.402706 containerd[2030]: time="2025-02-13T16:07:36.402526300Z" level=info msg="StopPodSandbox for \"6f3995ce628d7f99963905d9108f1b2da2ddf3684237a5e8438aeb1d3aa1f354\" returns successfully"
Feb 13 16:07:36.462087 kubelet[3338]: I0213 16:07:36.461349 3338 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6qnj\" (UniqueName: \"kubernetes.io/projected/6c75a819-2f96-49d1-bd18-cfbc2f0cc6ae-kube-api-access-j6qnj\") pod \"6c75a819-2f96-49d1-bd18-cfbc2f0cc6ae\" (UID: \"6c75a819-2f96-49d1-bd18-cfbc2f0cc6ae\") "
Feb 13 16:07:36.462087 kubelet[3338]: I0213 16:07:36.461415 3338 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6c75a819-2f96-49d1-bd18-cfbc2f0cc6ae-cilium-config-path\") pod \"6c75a819-2f96-49d1-bd18-cfbc2f0cc6ae\" (UID: \"6c75a819-2f96-49d1-bd18-cfbc2f0cc6ae\") "
Feb 13 16:07:36.466849 kubelet[3338]: I0213 16:07:36.466685 3338 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c75a819-2f96-49d1-bd18-cfbc2f0cc6ae-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6c75a819-2f96-49d1-bd18-cfbc2f0cc6ae" (UID: "6c75a819-2f96-49d1-bd18-cfbc2f0cc6ae"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 16:07:36.468071 kubelet[3338]: I0213 16:07:36.467998 3338 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c75a819-2f96-49d1-bd18-cfbc2f0cc6ae-kube-api-access-j6qnj" (OuterVolumeSpecName: "kube-api-access-j6qnj") pod "6c75a819-2f96-49d1-bd18-cfbc2f0cc6ae" (UID: "6c75a819-2f96-49d1-bd18-cfbc2f0cc6ae"). InnerVolumeSpecName "kube-api-access-j6qnj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 16:07:36.562443 kubelet[3338]: I0213 16:07:36.562288 3338 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e16796d3-d81b-46ee-a5f7-8d13e54c1552-xtables-lock\") pod \"e16796d3-d81b-46ee-a5f7-8d13e54c1552\" (UID: \"e16796d3-d81b-46ee-a5f7-8d13e54c1552\") "
Feb 13 16:07:36.562443 kubelet[3338]: I0213 16:07:36.562356 3338 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e16796d3-d81b-46ee-a5f7-8d13e54c1552-cni-path\") pod \"e16796d3-d81b-46ee-a5f7-8d13e54c1552\" (UID: \"e16796d3-d81b-46ee-a5f7-8d13e54c1552\") "
Feb 13 16:07:36.562443 kubelet[3338]: I0213 16:07:36.562395 3338 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e16796d3-d81b-46ee-a5f7-8d13e54c1552-host-proc-sys-kernel\") pod \"e16796d3-d81b-46ee-a5f7-8d13e54c1552\" (UID: \"e16796d3-d81b-46ee-a5f7-8d13e54c1552\") "
Feb 13 16:07:36.562697 kubelet[3338]: I0213 16:07:36.562462 3338 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z56rs\" (UniqueName: \"kubernetes.io/projected/e16796d3-d81b-46ee-a5f7-8d13e54c1552-kube-api-access-z56rs\") pod \"e16796d3-d81b-46ee-a5f7-8d13e54c1552\" (UID: \"e16796d3-d81b-46ee-a5f7-8d13e54c1552\") "
Feb 13 16:07:36.562697 kubelet[3338]: I0213 16:07:36.562498 3338 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e16796d3-d81b-46ee-a5f7-8d13e54c1552-bpf-maps\") pod \"e16796d3-d81b-46ee-a5f7-8d13e54c1552\" (UID: \"e16796d3-d81b-46ee-a5f7-8d13e54c1552\") "
Feb 13 16:07:36.562697 kubelet[3338]: I0213 16:07:36.562536 3338 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e16796d3-d81b-46ee-a5f7-8d13e54c1552-cilium-config-path\") pod \"e16796d3-d81b-46ee-a5f7-8d13e54c1552\" (UID: \"e16796d3-d81b-46ee-a5f7-8d13e54c1552\") "
Feb 13 16:07:36.562697 kubelet[3338]: I0213 16:07:36.562569 3338 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e16796d3-d81b-46ee-a5f7-8d13e54c1552-lib-modules\") pod \"e16796d3-d81b-46ee-a5f7-8d13e54c1552\" (UID: \"e16796d3-d81b-46ee-a5f7-8d13e54c1552\") "
Feb 13 16:07:36.562697 kubelet[3338]: I0213 16:07:36.562601 3338 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e16796d3-d81b-46ee-a5f7-8d13e54c1552-host-proc-sys-net\") pod \"e16796d3-d81b-46ee-a5f7-8d13e54c1552\" (UID: \"e16796d3-d81b-46ee-a5f7-8d13e54c1552\") "
Feb 13 16:07:36.562697 kubelet[3338]: I0213 16:07:36.562631 3338 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e16796d3-d81b-46ee-a5f7-8d13e54c1552-cilium-run\") pod \"e16796d3-d81b-46ee-a5f7-8d13e54c1552\" (UID: \"e16796d3-d81b-46ee-a5f7-8d13e54c1552\") "
Feb 13 16:07:36.563093 kubelet[3338]: I0213 16:07:36.562665 3338 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e16796d3-d81b-46ee-a5f7-8d13e54c1552-etc-cni-netd\") pod \"e16796d3-d81b-46ee-a5f7-8d13e54c1552\" (UID: \"e16796d3-d81b-46ee-a5f7-8d13e54c1552\") "
Feb 13 16:07:36.563093 kubelet[3338]: I0213 16:07:36.562700 3338 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e16796d3-d81b-46ee-a5f7-8d13e54c1552-hostproc\") pod \"e16796d3-d81b-46ee-a5f7-8d13e54c1552\" (UID: \"e16796d3-d81b-46ee-a5f7-8d13e54c1552\") "
Feb 13 16:07:36.563093 kubelet[3338]: I0213 16:07:36.562755 3338 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e16796d3-d81b-46ee-a5f7-8d13e54c1552-hubble-tls\") pod \"e16796d3-d81b-46ee-a5f7-8d13e54c1552\" (UID: \"e16796d3-d81b-46ee-a5f7-8d13e54c1552\") "
Feb 13 16:07:36.563093 kubelet[3338]: I0213 16:07:36.562811 3338 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e16796d3-d81b-46ee-a5f7-8d13e54c1552-cilium-cgroup\") pod \"e16796d3-d81b-46ee-a5f7-8d13e54c1552\" (UID: \"e16796d3-d81b-46ee-a5f7-8d13e54c1552\") "
Feb 13 16:07:36.563093 kubelet[3338]: I0213 16:07:36.562852 3338 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e16796d3-d81b-46ee-a5f7-8d13e54c1552-clustermesh-secrets\") pod \"e16796d3-d81b-46ee-a5f7-8d13e54c1552\" (UID: \"e16796d3-d81b-46ee-a5f7-8d13e54c1552\") "
Feb 13 16:07:36.563093 kubelet[3338]: I0213 16:07:36.562910 3338 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-j6qnj\" (UniqueName: \"kubernetes.io/projected/6c75a819-2f96-49d1-bd18-cfbc2f0cc6ae-kube-api-access-j6qnj\") on node \"ip-172-31-19-223\" DevicePath \"\""
Feb 13 16:07:36.565171 kubelet[3338]: I0213 16:07:36.562938 3338 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6c75a819-2f96-49d1-bd18-cfbc2f0cc6ae-cilium-config-path\") on node \"ip-172-31-19-223\" DevicePath \"\""
Feb 13 16:07:36.565171 kubelet[3338]: I0213 16:07:36.563269 3338 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e16796d3-d81b-46ee-a5f7-8d13e54c1552-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e16796d3-d81b-46ee-a5f7-8d13e54c1552" (UID: "e16796d3-d81b-46ee-a5f7-8d13e54c1552"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 16:07:36.565171 kubelet[3338]: I0213 16:07:36.563332 3338 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e16796d3-d81b-46ee-a5f7-8d13e54c1552-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e16796d3-d81b-46ee-a5f7-8d13e54c1552" (UID: "e16796d3-d81b-46ee-a5f7-8d13e54c1552"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 16:07:36.565171 kubelet[3338]: I0213 16:07:36.563372 3338 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e16796d3-d81b-46ee-a5f7-8d13e54c1552-cni-path" (OuterVolumeSpecName: "cni-path") pod "e16796d3-d81b-46ee-a5f7-8d13e54c1552" (UID: "e16796d3-d81b-46ee-a5f7-8d13e54c1552"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 16:07:36.565171 kubelet[3338]: I0213 16:07:36.563408 3338 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e16796d3-d81b-46ee-a5f7-8d13e54c1552-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e16796d3-d81b-46ee-a5f7-8d13e54c1552" (UID: "e16796d3-d81b-46ee-a5f7-8d13e54c1552"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 16:07:36.566150 kubelet[3338]: I0213 16:07:36.566067 3338 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e16796d3-d81b-46ee-a5f7-8d13e54c1552-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e16796d3-d81b-46ee-a5f7-8d13e54c1552" (UID: "e16796d3-d81b-46ee-a5f7-8d13e54c1552"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 16:07:36.567116 kubelet[3338]: I0213 16:07:36.567021 3338 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e16796d3-d81b-46ee-a5f7-8d13e54c1552-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e16796d3-d81b-46ee-a5f7-8d13e54c1552" (UID: "e16796d3-d81b-46ee-a5f7-8d13e54c1552"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 16:07:36.567277 kubelet[3338]: I0213 16:07:36.567132 3338 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e16796d3-d81b-46ee-a5f7-8d13e54c1552-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e16796d3-d81b-46ee-a5f7-8d13e54c1552" (UID: "e16796d3-d81b-46ee-a5f7-8d13e54c1552"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 16:07:36.567277 kubelet[3338]: I0213 16:07:36.567211 3338 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e16796d3-d81b-46ee-a5f7-8d13e54c1552-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e16796d3-d81b-46ee-a5f7-8d13e54c1552" (UID: "e16796d3-d81b-46ee-a5f7-8d13e54c1552"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 16:07:36.567390 kubelet[3338]: I0213 16:07:36.567295 3338 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e16796d3-d81b-46ee-a5f7-8d13e54c1552-hostproc" (OuterVolumeSpecName: "hostproc") pod "e16796d3-d81b-46ee-a5f7-8d13e54c1552" (UID: "e16796d3-d81b-46ee-a5f7-8d13e54c1552"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 16:07:36.567869 kubelet[3338]: I0213 16:07:36.567820 3338 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e16796d3-d81b-46ee-a5f7-8d13e54c1552-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e16796d3-d81b-46ee-a5f7-8d13e54c1552" (UID: "e16796d3-d81b-46ee-a5f7-8d13e54c1552"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 16:07:36.572458 kubelet[3338]: I0213 16:07:36.572345 3338 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e16796d3-d81b-46ee-a5f7-8d13e54c1552-kube-api-access-z56rs" (OuterVolumeSpecName: "kube-api-access-z56rs") pod "e16796d3-d81b-46ee-a5f7-8d13e54c1552" (UID: "e16796d3-d81b-46ee-a5f7-8d13e54c1552"). InnerVolumeSpecName "kube-api-access-z56rs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 16:07:36.572896 kubelet[3338]: I0213 16:07:36.572821 3338 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e16796d3-d81b-46ee-a5f7-8d13e54c1552-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e16796d3-d81b-46ee-a5f7-8d13e54c1552" (UID: "e16796d3-d81b-46ee-a5f7-8d13e54c1552"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 13 16:07:36.576603 kubelet[3338]: I0213 16:07:36.576527 3338 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e16796d3-d81b-46ee-a5f7-8d13e54c1552-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e16796d3-d81b-46ee-a5f7-8d13e54c1552" (UID: "e16796d3-d81b-46ee-a5f7-8d13e54c1552"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 16:07:36.578279 kubelet[3338]: I0213 16:07:36.578212 3338 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e16796d3-d81b-46ee-a5f7-8d13e54c1552-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e16796d3-d81b-46ee-a5f7-8d13e54c1552" (UID: "e16796d3-d81b-46ee-a5f7-8d13e54c1552"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 16:07:36.663890 kubelet[3338]: I0213 16:07:36.663830 3338 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e16796d3-d81b-46ee-a5f7-8d13e54c1552-lib-modules\") on node \"ip-172-31-19-223\" DevicePath \"\""
Feb 13 16:07:36.663890 kubelet[3338]: I0213 16:07:36.663881 3338 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e16796d3-d81b-46ee-a5f7-8d13e54c1552-host-proc-sys-net\") on node \"ip-172-31-19-223\" DevicePath \"\""
Feb 13 16:07:36.664096 kubelet[3338]: I0213 16:07:36.663911 3338 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e16796d3-d81b-46ee-a5f7-8d13e54c1552-cilium-run\") on node \"ip-172-31-19-223\" DevicePath \"\""
Feb 13 16:07:36.664096 kubelet[3338]: I0213 16:07:36.663936 3338 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e16796d3-d81b-46ee-a5f7-8d13e54c1552-etc-cni-netd\") on node \"ip-172-31-19-223\" DevicePath \"\""
Feb 13 16:07:36.664096 kubelet[3338]: I0213 16:07:36.663956 3338 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e16796d3-d81b-46ee-a5f7-8d13e54c1552-hostproc\") on node \"ip-172-31-19-223\" DevicePath \"\""
Feb 13 16:07:36.664096 kubelet[3338]: I0213 16:07:36.663975 3338 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e16796d3-d81b-46ee-a5f7-8d13e54c1552-hubble-tls\") on node \"ip-172-31-19-223\" DevicePath \"\""
Feb 13 16:07:36.664096 kubelet[3338]: I0213 16:07:36.663995 3338 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e16796d3-d81b-46ee-a5f7-8d13e54c1552-clustermesh-secrets\") on node \"ip-172-31-19-223\" DevicePath \"\""
Feb 13 16:07:36.664096 kubelet[3338]: I0213 16:07:36.664014 3338 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e16796d3-d81b-46ee-a5f7-8d13e54c1552-cilium-cgroup\") on node \"ip-172-31-19-223\" DevicePath \"\""
Feb 13 16:07:36.664096 kubelet[3338]: I0213 16:07:36.664032 3338 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e16796d3-d81b-46ee-a5f7-8d13e54c1552-cni-path\") on node \"ip-172-31-19-223\" DevicePath \"\""
Feb 13 16:07:36.664096 kubelet[3338]: I0213 16:07:36.664051 3338 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e16796d3-d81b-46ee-a5f7-8d13e54c1552-xtables-lock\") on node \"ip-172-31-19-223\" DevicePath \"\""
Feb 13 16:07:36.664515 kubelet[3338]: I0213 16:07:36.664071 3338 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e16796d3-d81b-46ee-a5f7-8d13e54c1552-host-proc-sys-kernel\") on node \"ip-172-31-19-223\" DevicePath \"\""
Feb 13 16:07:36.664515 kubelet[3338]: I0213 16:07:36.664093 3338 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-z56rs\" (UniqueName: \"kubernetes.io/projected/e16796d3-d81b-46ee-a5f7-8d13e54c1552-kube-api-access-z56rs\") on node \"ip-172-31-19-223\" DevicePath \"\""
Feb 13 16:07:36.664515 kubelet[3338]: I0213 16:07:36.664112 3338 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e16796d3-d81b-46ee-a5f7-8d13e54c1552-bpf-maps\") on node \"ip-172-31-19-223\" DevicePath \"\""
Feb 13 16:07:36.664515 kubelet[3338]: I0213 16:07:36.664133 3338 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e16796d3-d81b-46ee-a5f7-8d13e54c1552-cilium-config-path\") on node \"ip-172-31-19-223\" DevicePath \"\""
Feb 13 16:07:36.910973 systemd[1]: Removed slice kubepods-burstable-pode16796d3_d81b_46ee_a5f7_8d13e54c1552.slice - libcontainer container kubepods-burstable-pode16796d3_d81b_46ee_a5f7_8d13e54c1552.slice.
Feb 13 16:07:36.911202 systemd[1]: kubepods-burstable-pode16796d3_d81b_46ee_a5f7_8d13e54c1552.slice: Consumed 14.354s CPU time.
Feb 13 16:07:36.916818 systemd[1]: Removed slice kubepods-besteffort-pod6c75a819_2f96_49d1_bd18_cfbc2f0cc6ae.slice - libcontainer container kubepods-besteffort-pod6c75a819_2f96_49d1_bd18_cfbc2f0cc6ae.slice.
Feb 13 16:07:37.063704 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b44d7447b7505548fabc82ab401c3cde1e93d4035957fdff371fb1d32fd897e3-rootfs.mount: Deactivated successfully.
Feb 13 16:07:37.063942 systemd[1]: var-lib-kubelet-pods-6c75a819\x2d2f96\x2d49d1\x2dbd18\x2dcfbc2f0cc6ae-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj6qnj.mount: Deactivated successfully.
Feb 13 16:07:37.064084 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f3995ce628d7f99963905d9108f1b2da2ddf3684237a5e8438aeb1d3aa1f354-rootfs.mount: Deactivated successfully.
Feb 13 16:07:37.064215 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6f3995ce628d7f99963905d9108f1b2da2ddf3684237a5e8438aeb1d3aa1f354-shm.mount: Deactivated successfully.
Feb 13 16:07:37.064367 systemd[1]: var-lib-kubelet-pods-e16796d3\x2dd81b\x2d46ee\x2da5f7\x2d8d13e54c1552-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz56rs.mount: Deactivated successfully.
Feb 13 16:07:37.064525 systemd[1]: var-lib-kubelet-pods-e16796d3\x2dd81b\x2d46ee\x2da5f7\x2d8d13e54c1552-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 13 16:07:37.064678 systemd[1]: var-lib-kubelet-pods-e16796d3\x2dd81b\x2d46ee\x2da5f7\x2d8d13e54c1552-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 13 16:07:37.392654 kubelet[3338]: I0213 16:07:37.392531 3338 scope.go:117] "RemoveContainer" containerID="00ca3b4da1b5d28b46cb35152f0437d9db1eb7a8e90b368443a05e7e26b11d6f"
Feb 13 16:07:37.398960 containerd[2030]: time="2025-02-13T16:07:37.397832813Z" level=info msg="RemoveContainer for \"00ca3b4da1b5d28b46cb35152f0437d9db1eb7a8e90b368443a05e7e26b11d6f\""
Feb 13 16:07:37.411815 containerd[2030]: time="2025-02-13T16:07:37.411503333Z" level=info msg="RemoveContainer for \"00ca3b4da1b5d28b46cb35152f0437d9db1eb7a8e90b368443a05e7e26b11d6f\" returns successfully"
Feb 13 16:07:37.412752 kubelet[3338]: I0213 16:07:37.412194 3338 scope.go:117] "RemoveContainer" containerID="a553149b79ba24bd17c5ed4bc818efb33347f73cc61a67ba33b2c18e722230f0"
Feb 13 16:07:37.416382 containerd[2030]: time="2025-02-13T16:07:37.416276849Z" level=info msg="RemoveContainer for \"a553149b79ba24bd17c5ed4bc818efb33347f73cc61a67ba33b2c18e722230f0\""
Feb 13 16:07:37.427295 containerd[2030]: time="2025-02-13T16:07:37.427181717Z" level=info msg="RemoveContainer for \"a553149b79ba24bd17c5ed4bc818efb33347f73cc61a67ba33b2c18e722230f0\" returns successfully"
Feb 13 16:07:37.427560 kubelet[3338]: I0213 16:07:37.427524 3338 scope.go:117] "RemoveContainer" containerID="b74e663fe77c0c128d3ff9e8e139d381f28d4337e8f50f631b1608442533de23"
Feb 13 16:07:37.430039 containerd[2030]: time="2025-02-13T16:07:37.429979541Z" level=info msg="RemoveContainer for \"b74e663fe77c0c128d3ff9e8e139d381f28d4337e8f50f631b1608442533de23\""
Feb 13 16:07:37.436068 containerd[2030]: time="2025-02-13T16:07:37.436014389Z" level=info msg="RemoveContainer for \"b74e663fe77c0c128d3ff9e8e139d381f28d4337e8f50f631b1608442533de23\" returns successfully"
Feb 13 16:07:37.436630 kubelet[3338]: I0213 16:07:37.436484 3338 scope.go:117] "RemoveContainer" containerID="272714bddec76a07f067d0f0471b880da394c750911fcb45dfc5cbecc3bf8401"
Feb 13 16:07:37.441311 containerd[2030]: time="2025-02-13T16:07:37.441179609Z" level=info msg="RemoveContainer for \"272714bddec76a07f067d0f0471b880da394c750911fcb45dfc5cbecc3bf8401\""
Feb 13 16:07:37.448822 containerd[2030]: time="2025-02-13T16:07:37.447707645Z" level=info msg="RemoveContainer for \"272714bddec76a07f067d0f0471b880da394c750911fcb45dfc5cbecc3bf8401\" returns successfully"
Feb 13 16:07:37.449207 kubelet[3338]: I0213 16:07:37.449120 3338 scope.go:117] "RemoveContainer" containerID="7102eac6b01786b7ef90ce12ce9cdbe7585f025f9deb60e95bb3a3f6e4f3f40e"
Feb 13 16:07:37.453482 containerd[2030]: time="2025-02-13T16:07:37.452759129Z" level=info msg="RemoveContainer for \"7102eac6b01786b7ef90ce12ce9cdbe7585f025f9deb60e95bb3a3f6e4f3f40e\""
Feb 13 16:07:37.459360 containerd[2030]: time="2025-02-13T16:07:37.459284225Z" level=info msg="RemoveContainer for \"7102eac6b01786b7ef90ce12ce9cdbe7585f025f9deb60e95bb3a3f6e4f3f40e\" returns successfully"
Feb 13 16:07:37.459941 kubelet[3338]: I0213 16:07:37.459753 3338 scope.go:117] "RemoveContainer" containerID="f91cb14daae849811b4381dffbfd01056fc146e38128567f88c13ccd194b9460"
Feb 13 16:07:37.462302 containerd[2030]: time="2025-02-13T16:07:37.461925785Z" level=info msg="RemoveContainer for \"f91cb14daae849811b4381dffbfd01056fc146e38128567f88c13ccd194b9460\""
Feb 13 16:07:37.467649 containerd[2030]: time="2025-02-13T16:07:37.467596697Z" level=info msg="RemoveContainer for \"f91cb14daae849811b4381dffbfd01056fc146e38128567f88c13ccd194b9460\" returns successfully"
Feb 13 16:07:37.983337 sshd[4951]: pam_unix(sshd:session): session closed for user core
Feb 13 16:07:37.990368 systemd[1]: sshd@27-172.31.19.223:22-139.178.68.195:37916.service: Deactivated successfully.
Feb 13 16:07:37.994592 systemd[1]: session-28.scope: Deactivated successfully.
Feb 13 16:07:37.994994 systemd[1]: session-28.scope: Consumed 2.155s CPU time.
Feb 13 16:07:37.996373 systemd-logind[2004]: Session 28 logged out. Waiting for processes to exit.
Feb 13 16:07:38.000229 systemd-logind[2004]: Removed session 28.
Feb 13 16:07:38.018293 systemd[1]: Started sshd@28-172.31.19.223:22-139.178.68.195:47134.service - OpenSSH per-connection server daemon (139.178.68.195:47134).
Feb 13 16:07:38.192080 sshd[5113]: Accepted publickey for core from 139.178.68.195 port 47134 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU
Feb 13 16:07:38.194798 sshd[5113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:07:38.202356 systemd-logind[2004]: New session 29 of user core.
Feb 13 16:07:38.211050 systemd[1]: Started session-29.scope - Session 29 of User core.
Feb 13 16:07:38.781577 ntpd[1997]: Deleting interface #11 lxc_health, fe80::1849:c3ff:fe99:f2df%8#123, interface stats: received=0, sent=0, dropped=0, active_time=90 secs
Feb 13 16:07:38.782211 ntpd[1997]: 13 Feb 16:07:38 ntpd[1997]: Deleting interface #11 lxc_health, fe80::1849:c3ff:fe99:f2df%8#123, interface stats: received=0, sent=0, dropped=0, active_time=90 secs
Feb 13 16:07:38.906793 kubelet[3338]: I0213 16:07:38.905866 3338 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c75a819-2f96-49d1-bd18-cfbc2f0cc6ae" path="/var/lib/kubelet/pods/6c75a819-2f96-49d1-bd18-cfbc2f0cc6ae/volumes"
Feb 13 16:07:38.907502 kubelet[3338]: I0213 16:07:38.907467 3338 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e16796d3-d81b-46ee-a5f7-8d13e54c1552" path="/var/lib/kubelet/pods/e16796d3-d81b-46ee-a5f7-8d13e54c1552/volumes"
Feb 13 16:07:40.037703 sshd[5113]: pam_unix(sshd:session): session closed for user core
Feb 13 16:07:40.051442 systemd[1]: sshd@28-172.31.19.223:22-139.178.68.195:47134.service: Deactivated successfully.
Feb 13 16:07:40.061706 systemd[1]: session-29.scope: Deactivated successfully.
Feb 13 16:07:40.062109 systemd[1]: session-29.scope: Consumed 1.628s CPU time.
Feb 13 16:07:40.075844 kubelet[3338]: E0213 16:07:40.072586 3338 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e16796d3-d81b-46ee-a5f7-8d13e54c1552" containerName="mount-cgroup"
Feb 13 16:07:40.075844 kubelet[3338]: E0213 16:07:40.072635 3338 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e16796d3-d81b-46ee-a5f7-8d13e54c1552" containerName="apply-sysctl-overwrites"
Feb 13 16:07:40.075844 kubelet[3338]: E0213 16:07:40.072652 3338 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e16796d3-d81b-46ee-a5f7-8d13e54c1552" containerName="mount-bpf-fs"
Feb 13 16:07:40.075844 kubelet[3338]: E0213 16:07:40.072667 3338 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6c75a819-2f96-49d1-bd18-cfbc2f0cc6ae" containerName="cilium-operator"
Feb 13 16:07:40.075844 kubelet[3338]: E0213 16:07:40.072683 3338 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e16796d3-d81b-46ee-a5f7-8d13e54c1552" containerName="cilium-agent"
Feb 13 16:07:40.075844 kubelet[3338]: E0213 16:07:40.072698 3338 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e16796d3-d81b-46ee-a5f7-8d13e54c1552" containerName="clean-cilium-state"
Feb 13 16:07:40.075844 kubelet[3338]: I0213 16:07:40.072744 3338 memory_manager.go:354] "RemoveStaleState removing state" podUID="e16796d3-d81b-46ee-a5f7-8d13e54c1552" containerName="cilium-agent"
Feb 13 16:07:40.075844 kubelet[3338]: I0213 16:07:40.072762 3338 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c75a819-2f96-49d1-bd18-cfbc2f0cc6ae" containerName="cilium-operator"
Feb 13 16:07:40.086405 systemd-logind[2004]: Session 29 logged out. Waiting for processes to exit.
Feb 13 16:07:40.096594 systemd[1]: Started sshd@29-172.31.19.223:22-139.178.68.195:47138.service - OpenSSH per-connection server daemon (139.178.68.195:47138).
Feb 13 16:07:40.100420 systemd-logind[2004]: Removed session 29.
Feb 13 16:07:40.121166 systemd[1]: Created slice kubepods-burstable-poda34b1aa6_9ca8_47b4_be7e_d575c64666c9.slice - libcontainer container kubepods-burstable-poda34b1aa6_9ca8_47b4_be7e_d575c64666c9.slice.
Feb 13 16:07:40.185802 kubelet[3338]: I0213 16:07:40.185713 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a34b1aa6-9ca8-47b4-be7e-d575c64666c9-lib-modules\") pod \"cilium-lsjhb\" (UID: \"a34b1aa6-9ca8-47b4-be7e-d575c64666c9\") " pod="kube-system/cilium-lsjhb"
Feb 13 16:07:40.185976 kubelet[3338]: I0213 16:07:40.185803 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a34b1aa6-9ca8-47b4-be7e-d575c64666c9-xtables-lock\") pod \"cilium-lsjhb\" (UID: \"a34b1aa6-9ca8-47b4-be7e-d575c64666c9\") " pod="kube-system/cilium-lsjhb"
Feb 13 16:07:40.185976 kubelet[3338]: I0213 16:07:40.185851 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a34b1aa6-9ca8-47b4-be7e-d575c64666c9-clustermesh-secrets\") pod \"cilium-lsjhb\" (UID: \"a34b1aa6-9ca8-47b4-be7e-d575c64666c9\") " pod="kube-system/cilium-lsjhb"
Feb 13 16:07:40.185976 kubelet[3338]: I0213 16:07:40.185890 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkbk2\" (UniqueName: \"kubernetes.io/projected/a34b1aa6-9ca8-47b4-be7e-d575c64666c9-kube-api-access-lkbk2\") pod \"cilium-lsjhb\" (UID: \"a34b1aa6-9ca8-47b4-be7e-d575c64666c9\") " pod="kube-system/cilium-lsjhb"
Feb 13 16:07:40.185976 kubelet[3338]: I0213 16:07:40.185931 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a34b1aa6-9ca8-47b4-be7e-d575c64666c9-cni-path\") pod \"cilium-lsjhb\" (UID: \"a34b1aa6-9ca8-47b4-be7e-d575c64666c9\") " pod="kube-system/cilium-lsjhb"
Feb 13 16:07:40.186280 kubelet[3338]: I0213 16:07:40.185981 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a34b1aa6-9ca8-47b4-be7e-d575c64666c9-host-proc-sys-net\") pod \"cilium-lsjhb\" (UID: \"a34b1aa6-9ca8-47b4-be7e-d575c64666c9\") " pod="kube-system/cilium-lsjhb"
Feb 13 16:07:40.186280 kubelet[3338]: I0213 16:07:40.186027 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a34b1aa6-9ca8-47b4-be7e-d575c64666c9-cilium-cgroup\") pod \"cilium-lsjhb\" (UID: \"a34b1aa6-9ca8-47b4-be7e-d575c64666c9\") " pod="kube-system/cilium-lsjhb"
Feb 13 16:07:40.186280 kubelet[3338]: I0213 16:07:40.186066 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a34b1aa6-9ca8-47b4-be7e-d575c64666c9-host-proc-sys-kernel\") pod \"cilium-lsjhb\" (UID: \"a34b1aa6-9ca8-47b4-be7e-d575c64666c9\") " pod="kube-system/cilium-lsjhb"
Feb 13 16:07:40.186280 kubelet[3338]: I0213 16:07:40.186098 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a34b1aa6-9ca8-47b4-be7e-d575c64666c9-hubble-tls\") pod \"cilium-lsjhb\" (UID: \"a34b1aa6-9ca8-47b4-be7e-d575c64666c9\") " pod="kube-system/cilium-lsjhb"
Feb 13 16:07:40.186280 kubelet[3338]: I0213 16:07:40.186132 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a34b1aa6-9ca8-47b4-be7e-d575c64666c9-etc-cni-netd\") pod \"cilium-lsjhb\" (UID: \"a34b1aa6-9ca8-47b4-be7e-d575c64666c9\") " pod="kube-system/cilium-lsjhb"
Feb 13 16:07:40.186280 kubelet[3338]: I0213 16:07:40.186174 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a34b1aa6-9ca8-47b4-be7e-d575c64666c9-cilium-run\") pod \"cilium-lsjhb\" (UID: \"a34b1aa6-9ca8-47b4-be7e-d575c64666c9\") " pod="kube-system/cilium-lsjhb"
Feb 13 16:07:40.186604 kubelet[3338]: I0213 16:07:40.186210 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a34b1aa6-9ca8-47b4-be7e-d575c64666c9-hostproc\") pod \"cilium-lsjhb\" (UID: \"a34b1aa6-9ca8-47b4-be7e-d575c64666c9\") " pod="kube-system/cilium-lsjhb"
Feb 13 16:07:40.186604 kubelet[3338]: I0213 16:07:40.186242 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a34b1aa6-9ca8-47b4-be7e-d575c64666c9-cilium-config-path\") pod \"cilium-lsjhb\" (UID: \"a34b1aa6-9ca8-47b4-be7e-d575c64666c9\") " pod="kube-system/cilium-lsjhb"
Feb 13 16:07:40.186604 kubelet[3338]: I0213 16:07:40.186280 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a34b1aa6-9ca8-47b4-be7e-d575c64666c9-cilium-ipsec-secrets\") pod \"cilium-lsjhb\" (UID: \"a34b1aa6-9ca8-47b4-be7e-d575c64666c9\") " pod="kube-system/cilium-lsjhb"
Feb 13 16:07:40.186604 kubelet[3338]: I0213 16:07:40.186316 3338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a34b1aa6-9ca8-47b4-be7e-d575c64666c9-bpf-maps\") pod \"cilium-lsjhb\" (UID: \"a34b1aa6-9ca8-47b4-be7e-d575c64666c9\") " pod="kube-system/cilium-lsjhb"
Feb 13 16:07:40.319924 sshd[5125]: Accepted publickey for core from 139.178.68.195 port 47138 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU
Feb 13 16:07:40.327645 sshd[5125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:07:40.359860 systemd-logind[2004]: New session 30 of user core.
Feb 13 16:07:40.367052 systemd[1]: Started session-30.scope - Session 30 of User core.
Feb 13 16:07:40.430264 containerd[2030]: time="2025-02-13T16:07:40.430151348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lsjhb,Uid:a34b1aa6-9ca8-47b4-be7e-d575c64666c9,Namespace:kube-system,Attempt:0,}"
Feb 13 16:07:40.494579 sshd[5125]: pam_unix(sshd:session): session closed for user core
Feb 13 16:07:40.506439 systemd[1]: sshd@29-172.31.19.223:22-139.178.68.195:47138.service: Deactivated successfully.
Feb 13 16:07:40.512563 systemd[1]: session-30.scope: Deactivated successfully.
Feb 13 16:07:40.524093 systemd-logind[2004]: Session 30 logged out. Waiting for processes to exit.
Feb 13 16:07:40.545886 containerd[2030]: time="2025-02-13T16:07:40.543032193Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 16:07:40.545886 containerd[2030]: time="2025-02-13T16:07:40.543151137Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 16:07:40.545886 containerd[2030]: time="2025-02-13T16:07:40.543188661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 16:07:40.545886 containerd[2030]: time="2025-02-13T16:07:40.543334917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 16:07:40.557122 systemd[1]: Started sshd@30-172.31.19.223:22-139.178.68.195:47146.service - OpenSSH per-connection server daemon (139.178.68.195:47146).
Feb 13 16:07:40.561972 systemd-logind[2004]: Removed session 30.
Feb 13 16:07:40.599105 systemd[1]: Started cri-containerd-7f29d45994d4ae8f6598e3dcbad260b405bc04383f2c06e27a253bb57f2a9daf.scope - libcontainer container 7f29d45994d4ae8f6598e3dcbad260b405bc04383f2c06e27a253bb57f2a9daf.
Feb 13 16:07:40.642385 containerd[2030]: time="2025-02-13T16:07:40.642306177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lsjhb,Uid:a34b1aa6-9ca8-47b4-be7e-d575c64666c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f29d45994d4ae8f6598e3dcbad260b405bc04383f2c06e27a253bb57f2a9daf\""
Feb 13 16:07:40.648622 containerd[2030]: time="2025-02-13T16:07:40.648436893Z" level=info msg="CreateContainer within sandbox \"7f29d45994d4ae8f6598e3dcbad260b405bc04383f2c06e27a253bb57f2a9daf\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 16:07:40.672901 containerd[2030]: time="2025-02-13T16:07:40.672732729Z" level=info msg="CreateContainer within sandbox \"7f29d45994d4ae8f6598e3dcbad260b405bc04383f2c06e27a253bb57f2a9daf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9f3a149040328fddeb42001dd34510edeb26e1e509cac1751e99a0a26d279c27\""
Feb 13 16:07:40.674843 containerd[2030]: time="2025-02-13T16:07:40.673436673Z" level=info msg="StartContainer for \"9f3a149040328fddeb42001dd34510edeb26e1e509cac1751e99a0a26d279c27\""
Feb 13 16:07:40.719080 systemd[1]: Started cri-containerd-9f3a149040328fddeb42001dd34510edeb26e1e509cac1751e99a0a26d279c27.scope - libcontainer container 9f3a149040328fddeb42001dd34510edeb26e1e509cac1751e99a0a26d279c27.
Feb 13 16:07:40.739955 sshd[5154]: Accepted publickey for core from 139.178.68.195 port 47146 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU
Feb 13 16:07:40.740992 sshd[5154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:07:40.755560 systemd-logind[2004]: New session 31 of user core.
Feb 13 16:07:40.763146 systemd[1]: Started session-31.scope - Session 31 of User core.
Feb 13 16:07:40.782534 containerd[2030]: time="2025-02-13T16:07:40.780941650Z" level=info msg="StartContainer for \"9f3a149040328fddeb42001dd34510edeb26e1e509cac1751e99a0a26d279c27\" returns successfully"
Feb 13 16:07:40.800294 systemd[1]: cri-containerd-9f3a149040328fddeb42001dd34510edeb26e1e509cac1751e99a0a26d279c27.scope: Deactivated successfully.
Feb 13 16:07:40.861540 containerd[2030]: time="2025-02-13T16:07:40.861366658Z" level=info msg="shim disconnected" id=9f3a149040328fddeb42001dd34510edeb26e1e509cac1751e99a0a26d279c27 namespace=k8s.io
Feb 13 16:07:40.862146 containerd[2030]: time="2025-02-13T16:07:40.861932638Z" level=warning msg="cleaning up after shim disconnected" id=9f3a149040328fddeb42001dd34510edeb26e1e509cac1751e99a0a26d279c27 namespace=k8s.io
Feb 13 16:07:40.862146 containerd[2030]: time="2025-02-13T16:07:40.861974242Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 16:07:40.871795 containerd[2030]: time="2025-02-13T16:07:40.871225846Z" level=info msg="StopPodSandbox for \"6f3995ce628d7f99963905d9108f1b2da2ddf3684237a5e8438aeb1d3aa1f354\""
Feb 13 16:07:40.871795 containerd[2030]: time="2025-02-13T16:07:40.871365250Z" level=info msg="TearDown network for sandbox \"6f3995ce628d7f99963905d9108f1b2da2ddf3684237a5e8438aeb1d3aa1f354\" successfully"
Feb 13 16:07:40.871795 containerd[2030]: time="2025-02-13T16:07:40.871389346Z" level=info msg="StopPodSandbox for \"6f3995ce628d7f99963905d9108f1b2da2ddf3684237a5e8438aeb1d3aa1f354\" returns successfully"
Feb 13 16:07:40.873256 containerd[2030]: time="2025-02-13T16:07:40.872951458Z" level=info msg="RemovePodSandbox for \"6f3995ce628d7f99963905d9108f1b2da2ddf3684237a5e8438aeb1d3aa1f354\""
Feb 13 16:07:40.873520 containerd[2030]: time="2025-02-13T16:07:40.873130174Z" level=info msg="Forcibly stopping sandbox \"6f3995ce628d7f99963905d9108f1b2da2ddf3684237a5e8438aeb1d3aa1f354\""
Feb 13 16:07:40.874080 containerd[2030]: time="2025-02-13T16:07:40.873734098Z" level=info msg="TearDown network for sandbox \"6f3995ce628d7f99963905d9108f1b2da2ddf3684237a5e8438aeb1d3aa1f354\" successfully"
Feb 13 16:07:40.884880 containerd[2030]: time="2025-02-13T16:07:40.884554150Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6f3995ce628d7f99963905d9108f1b2da2ddf3684237a5e8438aeb1d3aa1f354\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 16:07:40.884880 containerd[2030]: time="2025-02-13T16:07:40.884647570Z" level=info msg="RemovePodSandbox \"6f3995ce628d7f99963905d9108f1b2da2ddf3684237a5e8438aeb1d3aa1f354\" returns successfully"
Feb 13 16:07:40.887827 containerd[2030]: time="2025-02-13T16:07:40.886570558Z" level=info msg="StopPodSandbox for \"b44d7447b7505548fabc82ab401c3cde1e93d4035957fdff371fb1d32fd897e3\""
Feb 13 16:07:40.887827 containerd[2030]: time="2025-02-13T16:07:40.886742254Z" level=info msg="TearDown network for sandbox \"b44d7447b7505548fabc82ab401c3cde1e93d4035957fdff371fb1d32fd897e3\" successfully"
Feb 13 16:07:40.887827 containerd[2030]: time="2025-02-13T16:07:40.886808878Z" level=info msg="StopPodSandbox for \"b44d7447b7505548fabc82ab401c3cde1e93d4035957fdff371fb1d32fd897e3\" returns successfully"
Feb 13 16:07:40.893614 containerd[2030]: time="2025-02-13T16:07:40.893543986Z" level=info msg="RemovePodSandbox for \"b44d7447b7505548fabc82ab401c3cde1e93d4035957fdff371fb1d32fd897e3\""
Feb 13 16:07:40.893614 containerd[2030]: time="2025-02-13T16:07:40.893613334Z" level=info msg="Forcibly stopping sandbox \"b44d7447b7505548fabc82ab401c3cde1e93d4035957fdff371fb1d32fd897e3\""
Feb 13 16:07:40.893940 containerd[2030]: time="2025-02-13T16:07:40.893722234Z" level=info msg="TearDown network for sandbox \"b44d7447b7505548fabc82ab401c3cde1e93d4035957fdff371fb1d32fd897e3\" successfully"
Feb 13 16:07:40.914854 containerd[2030]: time="2025-02-13T16:07:40.914113666Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b44d7447b7505548fabc82ab401c3cde1e93d4035957fdff371fb1d32fd897e3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 16:07:40.914854 containerd[2030]: time="2025-02-13T16:07:40.914242594Z" level=info msg="RemovePodSandbox \"b44d7447b7505548fabc82ab401c3cde1e93d4035957fdff371fb1d32fd897e3\" returns successfully"
Feb 13 16:07:41.193181 kubelet[3338]: E0213 16:07:41.193008 3338 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 16:07:41.437593 containerd[2030]: time="2025-02-13T16:07:41.436603497Z" level=info msg="CreateContainer within sandbox \"7f29d45994d4ae8f6598e3dcbad260b405bc04383f2c06e27a253bb57f2a9daf\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 16:07:41.473057 containerd[2030]: time="2025-02-13T16:07:41.471530361Z" level=info msg="CreateContainer within sandbox \"7f29d45994d4ae8f6598e3dcbad260b405bc04383f2c06e27a253bb57f2a9daf\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1017a931db2eec86f2c97f274432a162a935868078c999b46bd8bd8d0588bd52\""
Feb 13 16:07:41.475004 containerd[2030]: time="2025-02-13T16:07:41.474502989Z" level=info msg="StartContainer for \"1017a931db2eec86f2c97f274432a162a935868078c999b46bd8bd8d0588bd52\""
Feb 13 16:07:41.541153 systemd[1]: Started cri-containerd-1017a931db2eec86f2c97f274432a162a935868078c999b46bd8bd8d0588bd52.scope - libcontainer container 1017a931db2eec86f2c97f274432a162a935868078c999b46bd8bd8d0588bd52.
Feb 13 16:07:41.586992 containerd[2030]: time="2025-02-13T16:07:41.586922350Z" level=info msg="StartContainer for \"1017a931db2eec86f2c97f274432a162a935868078c999b46bd8bd8d0588bd52\" returns successfully"
Feb 13 16:07:41.600733 systemd[1]: cri-containerd-1017a931db2eec86f2c97f274432a162a935868078c999b46bd8bd8d0588bd52.scope: Deactivated successfully.
Feb 13 16:07:41.646434 containerd[2030]: time="2025-02-13T16:07:41.646001410Z" level=info msg="shim disconnected" id=1017a931db2eec86f2c97f274432a162a935868078c999b46bd8bd8d0588bd52 namespace=k8s.io
Feb 13 16:07:41.646434 containerd[2030]: time="2025-02-13T16:07:41.646145470Z" level=warning msg="cleaning up after shim disconnected" id=1017a931db2eec86f2c97f274432a162a935868078c999b46bd8bd8d0588bd52 namespace=k8s.io
Feb 13 16:07:41.646434 containerd[2030]: time="2025-02-13T16:07:41.646207570Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 16:07:41.648455 containerd[2030]: time="2025-02-13T16:07:41.647986006Z" level=error msg="collecting metrics for 1017a931db2eec86f2c97f274432a162a935868078c999b46bd8bd8d0588bd52" error="ttrpc: closed: unknown"
Feb 13 16:07:42.296640 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1017a931db2eec86f2c97f274432a162a935868078c999b46bd8bd8d0588bd52-rootfs.mount: Deactivated successfully.
Feb 13 16:07:42.444416 containerd[2030]: time="2025-02-13T16:07:42.443837698Z" level=info msg="CreateContainer within sandbox \"7f29d45994d4ae8f6598e3dcbad260b405bc04383f2c06e27a253bb57f2a9daf\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 16:07:42.490601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1207119159.mount: Deactivated successfully.
Feb 13 16:07:42.496982 containerd[2030]: time="2025-02-13T16:07:42.496696810Z" level=info msg="CreateContainer within sandbox \"7f29d45994d4ae8f6598e3dcbad260b405bc04383f2c06e27a253bb57f2a9daf\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f0e305287bd501bd13e0f897eaecacf6b35ecdae83c3e74cf01764623e0a7e27\""
Feb 13 16:07:42.497747 containerd[2030]: time="2025-02-13T16:07:42.497561614Z" level=info msg="StartContainer for \"f0e305287bd501bd13e0f897eaecacf6b35ecdae83c3e74cf01764623e0a7e27\""
Feb 13 16:07:42.555100 systemd[1]: Started cri-containerd-f0e305287bd501bd13e0f897eaecacf6b35ecdae83c3e74cf01764623e0a7e27.scope - libcontainer container f0e305287bd501bd13e0f897eaecacf6b35ecdae83c3e74cf01764623e0a7e27.
Feb 13 16:07:42.612724 containerd[2030]: time="2025-02-13T16:07:42.612658487Z" level=info msg="StartContainer for \"f0e305287bd501bd13e0f897eaecacf6b35ecdae83c3e74cf01764623e0a7e27\" returns successfully"
Feb 13 16:07:42.618158 systemd[1]: cri-containerd-f0e305287bd501bd13e0f897eaecacf6b35ecdae83c3e74cf01764623e0a7e27.scope: Deactivated successfully.
Feb 13 16:07:42.664445 containerd[2030]: time="2025-02-13T16:07:42.664364483Z" level=info msg="shim disconnected" id=f0e305287bd501bd13e0f897eaecacf6b35ecdae83c3e74cf01764623e0a7e27 namespace=k8s.io
Feb 13 16:07:42.664445 containerd[2030]: time="2025-02-13T16:07:42.664441679Z" level=warning msg="cleaning up after shim disconnected" id=f0e305287bd501bd13e0f897eaecacf6b35ecdae83c3e74cf01764623e0a7e27 namespace=k8s.io
Feb 13 16:07:42.664925 containerd[2030]: time="2025-02-13T16:07:42.664464047Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 16:07:43.296676 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f0e305287bd501bd13e0f897eaecacf6b35ecdae83c3e74cf01764623e0a7e27-rootfs.mount: Deactivated successfully.
Feb 13 16:07:43.451515 containerd[2030]: time="2025-02-13T16:07:43.451450559Z" level=info msg="CreateContainer within sandbox \"7f29d45994d4ae8f6598e3dcbad260b405bc04383f2c06e27a253bb57f2a9daf\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 16:07:43.482912 containerd[2030]: time="2025-02-13T16:07:43.482483663Z" level=info msg="CreateContainer within sandbox \"7f29d45994d4ae8f6598e3dcbad260b405bc04383f2c06e27a253bb57f2a9daf\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8b5833d3471c9e39d5b6765f2333c3b1a0c86548eb137fa7cf834be78e289794\""
Feb 13 16:07:43.486045 containerd[2030]: time="2025-02-13T16:07:43.485984255Z" level=info msg="StartContainer for \"8b5833d3471c9e39d5b6765f2333c3b1a0c86548eb137fa7cf834be78e289794\""
Feb 13 16:07:43.548110 systemd[1]: Started cri-containerd-8b5833d3471c9e39d5b6765f2333c3b1a0c86548eb137fa7cf834be78e289794.scope - libcontainer container 8b5833d3471c9e39d5b6765f2333c3b1a0c86548eb137fa7cf834be78e289794.
Feb 13 16:07:43.591129 systemd[1]: cri-containerd-8b5833d3471c9e39d5b6765f2333c3b1a0c86548eb137fa7cf834be78e289794.scope: Deactivated successfully.
Feb 13 16:07:43.597617 containerd[2030]: time="2025-02-13T16:07:43.597550224Z" level=info msg="StartContainer for \"8b5833d3471c9e39d5b6765f2333c3b1a0c86548eb137fa7cf834be78e289794\" returns successfully"
Feb 13 16:07:43.640504 containerd[2030]: time="2025-02-13T16:07:43.640425096Z" level=info msg="shim disconnected" id=8b5833d3471c9e39d5b6765f2333c3b1a0c86548eb137fa7cf834be78e289794 namespace=k8s.io
Feb 13 16:07:43.641080 containerd[2030]: time="2025-02-13T16:07:43.640947348Z" level=warning msg="cleaning up after shim disconnected" id=8b5833d3471c9e39d5b6765f2333c3b1a0c86548eb137fa7cf834be78e289794 namespace=k8s.io
Feb 13 16:07:43.641080 containerd[2030]: time="2025-02-13T16:07:43.641008824Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 16:07:43.643589 kubelet[3338]: I0213 16:07:43.643531 3338 setters.go:600] "Node became not ready" node="ip-172-31-19-223" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T16:07:43Z","lastTransitionTime":"2025-02-13T16:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 13 16:07:44.297039 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b5833d3471c9e39d5b6765f2333c3b1a0c86548eb137fa7cf834be78e289794-rootfs.mount: Deactivated successfully.
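The setters.go line above is the Node condition actually flipping to NotReady while the init chain (apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) is still running; each init container is expected to start, exit 0, and be cleaned up exactly as logged. The same progression is visible in the pod's initContainerStatuses; a sketch, with the pod name cilium-lsjhb taken from a later log line and kubeconfig access assumed:

```python
# Sketch: list the Cilium pod's init containers and their exit codes.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pod = core.read_namespaced_pod("cilium-lsjhb", "kube-system")
for st in pod.status.init_container_statuses or []:
    term = st.state.terminated
    print(st.name, "->", f"exit {term.exit_code}" if term else "running")
```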
Feb 13 16:07:44.460120 containerd[2030]: time="2025-02-13T16:07:44.459346128Z" level=info msg="CreateContainer within sandbox \"7f29d45994d4ae8f6598e3dcbad260b405bc04383f2c06e27a253bb57f2a9daf\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 16:07:44.504811 containerd[2030]: time="2025-02-13T16:07:44.504679464Z" level=info msg="CreateContainer within sandbox \"7f29d45994d4ae8f6598e3dcbad260b405bc04383f2c06e27a253bb57f2a9daf\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"39444ca26b51053f8644d53f5d6349b5406f5cf3031ab1f444bb6fd78bed362a\""
Feb 13 16:07:44.506938 containerd[2030]: time="2025-02-13T16:07:44.505450800Z" level=info msg="StartContainer for \"39444ca26b51053f8644d53f5d6349b5406f5cf3031ab1f444bb6fd78bed362a\""
Feb 13 16:07:44.577118 systemd[1]: Started cri-containerd-39444ca26b51053f8644d53f5d6349b5406f5cf3031ab1f444bb6fd78bed362a.scope - libcontainer container 39444ca26b51053f8644d53f5d6349b5406f5cf3031ab1f444bb6fd78bed362a.
Feb 13 16:07:44.630029 containerd[2030]: time="2025-02-13T16:07:44.629953585Z" level=info msg="StartContainer for \"39444ca26b51053f8644d53f5d6349b5406f5cf3031ab1f444bb6fd78bed362a\" returns successfully"
Feb 13 16:07:45.300574 systemd[1]: run-containerd-runc-k8s.io-39444ca26b51053f8644d53f5d6349b5406f5cf3031ab1f444bb6fd78bed362a-runc.2eAQrM.mount: Deactivated successfully.
Feb 13 16:07:45.436853 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Feb 13 16:07:45.897243 kubelet[3338]: E0213 16:07:45.897160 3338 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-rbks7" podUID="0d171998-b358-4843-a9fa-96eac209100b"
Feb 13 16:07:49.496589 systemd[1]: run-containerd-runc-k8s.io-39444ca26b51053f8644d53f5d6349b5406f5cf3031ab1f444bb6fd78bed362a-runc.tp5nOw.mount: Deactivated successfully.
Feb 13 16:07:49.720163 systemd-networkd[1929]: lxc_health: Link UP
Feb 13 16:07:49.730966 (udev-worker)[5988]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 16:07:49.737326 systemd-networkd[1929]: lxc_health: Gained carrier
Feb 13 16:07:50.467153 kubelet[3338]: I0213 16:07:50.466474 3338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lsjhb" podStartSLOduration=10.466429878 podStartE2EDuration="10.466429878s" podCreationTimestamp="2025-02-13 16:07:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:07:45.501297877 +0000 UTC m=+124.856392545" watchObservedRunningTime="2025-02-13 16:07:50.466429878 +0000 UTC m=+129.821524522"
Feb 13 16:07:51.167155 systemd-networkd[1929]: lxc_health: Gained IPv6LL
Feb 13 16:07:53.781692 ntpd[1997]: Listen normally on 14 lxc_health [fe80::543b:f9ff:fe8a:7b59%14]:123
Feb 13 16:07:53.782224 ntpd[1997]: 13 Feb 16:07:53 ntpd[1997]: Listen normally on 14 lxc_health [fe80::543b:f9ff:fe8a:7b59%14]:123
Feb 13 16:07:54.092741 systemd[1]: run-containerd-runc-k8s.io-39444ca26b51053f8644d53f5d6349b5406f5cf3031ab1f444bb6fd78bed362a-runc.0783Bc.mount: Deactivated successfully.
Feb 13 16:07:56.473281 sshd[5154]: pam_unix(sshd:session): session closed for user core
Feb 13 16:07:56.479529 systemd[1]: session-31.scope: Deactivated successfully.
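podStartSLOduration in the pod_startup_latency_tracker line is the gap between podCreationTimestamp and watchObservedRunningTime; both image-pull timestamps are the Go zero value here (the image was already present), so nothing is subtracted for pulling. The arithmetic, reproduced from the logged timestamps; note Python datetimes carry microseconds, so the log's nanosecond value is truncated:

```python
from datetime import datetime, timezone

created = datetime(2025, 2, 13, 16, 7, 40, tzinfo=timezone.utc)
observed = datetime(2025, 2, 13, 16, 7, 50, 466429, tzinfo=timezone.utc)
# ~10.466429s, matching podStartSLOduration=10.466429878 in the log
print((observed - created).total_seconds())
```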
Feb 13 16:07:56.483141 systemd[1]: sshd@30-172.31.19.223:22-139.178.68.195:47146.service: Deactivated successfully.
Feb 13 16:07:56.493975 systemd-logind[2004]: Session 31 logged out. Waiting for processes to exit.
Feb 13 16:07:56.500801 systemd-logind[2004]: Removed session 31.
Feb 13 16:08:10.614695 systemd[1]: cri-containerd-5ee34cb74a77545bfc88805bd44ad3261e90b170c39e8b3a189178c7513c8777.scope: Deactivated successfully.
Feb 13 16:08:10.615818 systemd[1]: cri-containerd-5ee34cb74a77545bfc88805bd44ad3261e90b170c39e8b3a189178c7513c8777.scope: Consumed 5.079s CPU time, 18.1M memory peak, 0B memory swap peak.
Feb 13 16:08:10.660461 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ee34cb74a77545bfc88805bd44ad3261e90b170c39e8b3a189178c7513c8777-rootfs.mount: Deactivated successfully.
Feb 13 16:08:10.677442 containerd[2030]: time="2025-02-13T16:08:10.677344382Z" level=info msg="shim disconnected" id=5ee34cb74a77545bfc88805bd44ad3261e90b170c39e8b3a189178c7513c8777 namespace=k8s.io
Feb 13 16:08:10.677442 containerd[2030]: time="2025-02-13T16:08:10.677426282Z" level=warning msg="cleaning up after shim disconnected" id=5ee34cb74a77545bfc88805bd44ad3261e90b170c39e8b3a189178c7513c8777 namespace=k8s.io
Feb 13 16:08:10.678613 containerd[2030]: time="2025-02-13T16:08:10.677450306Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 16:08:11.537761 kubelet[3338]: I0213 16:08:11.537694 3338 scope.go:117] "RemoveContainer" containerID="5ee34cb74a77545bfc88805bd44ad3261e90b170c39e8b3a189178c7513c8777"
Feb 13 16:08:11.540955 containerd[2030]: time="2025-02-13T16:08:11.540893343Z" level=info msg="CreateContainer within sandbox \"e3df1f39f442395d583ed261bfa7e8fc53ba4e3a95efbed4a73b2930c19e904f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Feb 13 16:08:11.572324 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2766900835.mount: Deactivated successfully.
Feb 13 16:08:11.573162 containerd[2030]: time="2025-02-13T16:08:11.573104955Z" level=info msg="CreateContainer within sandbox \"e3df1f39f442395d583ed261bfa7e8fc53ba4e3a95efbed4a73b2930c19e904f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"025a7d0eaeed0d4da7e30e29655751e47cb1f8b21bd43dbdbd0a30841a04e426\""
Feb 13 16:08:11.575978 containerd[2030]: time="2025-02-13T16:08:11.575917839Z" level=info msg="StartContainer for \"025a7d0eaeed0d4da7e30e29655751e47cb1f8b21bd43dbdbd0a30841a04e426\""
Feb 13 16:08:11.628078 systemd[1]: Started cri-containerd-025a7d0eaeed0d4da7e30e29655751e47cb1f8b21bd43dbdbd0a30841a04e426.scope - libcontainer container 025a7d0eaeed0d4da7e30e29655751e47cb1f8b21bd43dbdbd0a30841a04e426.
Feb 13 16:08:11.698812 containerd[2030]: time="2025-02-13T16:08:11.698717751Z" level=info msg="StartContainer for \"025a7d0eaeed0d4da7e30e29655751e47cb1f8b21bd43dbdbd0a30841a04e426\" returns successfully"
Feb 13 16:08:13.778676 kubelet[3338]: E0213 16:08:13.777866 3338 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.223:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-223?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 13 16:08:16.569977 systemd[1]: cri-containerd-53a549563c03f176b3be9e2be9cf7248122fd6c52a73b7d32da8dd1a45e481fc.scope: Deactivated successfully.
Feb 13 16:08:16.572401 systemd[1]: cri-containerd-53a549563c03f176b3be9e2be9cf7248122fd6c52a73b7d32da8dd1a45e481fc.scope: Consumed 5.947s CPU time, 15.7M memory peak, 0B memory swap peak.
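The RemoveContainer/CreateContainer pair above is kubelet restarting kube-controller-manager in place as Attempt:1 of the same ContainerMetadata. That restart surfaces as a bumped restartCount on the static pod's mirror pod; a sketch, assuming the mirror-pod name follows the usual kubeadm <component>-<nodeName> convention (an assumption here) and kubeconfig access:

```python
# Sketch: inspect the mirror pod's container status after the restart.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pod = core.read_namespaced_pod(
    "kube-controller-manager-ip-172-31-19-223", "kube-system")
st = pod.status.container_statuses[0]
print("restarts:", st.restart_count)
if st.last_state.terminated:
    term = st.last_state.terminated
    print("previous container exited:", term.exit_code, term.reason)
```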
Feb 13 16:08:16.611608 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-53a549563c03f176b3be9e2be9cf7248122fd6c52a73b7d32da8dd1a45e481fc-rootfs.mount: Deactivated successfully.
Feb 13 16:08:16.626258 containerd[2030]: time="2025-02-13T16:08:16.626169764Z" level=info msg="shim disconnected" id=53a549563c03f176b3be9e2be9cf7248122fd6c52a73b7d32da8dd1a45e481fc namespace=k8s.io
Feb 13 16:08:16.626258 containerd[2030]: time="2025-02-13T16:08:16.626249060Z" level=warning msg="cleaning up after shim disconnected" id=53a549563c03f176b3be9e2be9cf7248122fd6c52a73b7d32da8dd1a45e481fc namespace=k8s.io
Feb 13 16:08:16.627487 containerd[2030]: time="2025-02-13T16:08:16.626271980Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 16:08:17.560332 kubelet[3338]: I0213 16:08:17.560286 3338 scope.go:117] "RemoveContainer" containerID="53a549563c03f176b3be9e2be9cf7248122fd6c52a73b7d32da8dd1a45e481fc"
Feb 13 16:08:17.564161 containerd[2030]: time="2025-02-13T16:08:17.564026625Z" level=info msg="CreateContainer within sandbox \"1aee21db0e92f5986d30039d4e3a77e180eb1913440e5ca498fa40ae0dc4e510\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Feb 13 16:08:17.595587 containerd[2030]: time="2025-02-13T16:08:17.595418397Z" level=info msg="CreateContainer within sandbox \"1aee21db0e92f5986d30039d4e3a77e180eb1913440e5ca498fa40ae0dc4e510\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"be5fbd2b02500cea7029afcb70b6ece5fce76e851b3d097d907ce737b1d9e8d0\""
Feb 13 16:08:17.596352 containerd[2030]: time="2025-02-13T16:08:17.596291061Z" level=info msg="StartContainer for \"be5fbd2b02500cea7029afcb70b6ece5fce76e851b3d097d907ce737b1d9e8d0\""
Feb 13 16:08:17.649106 systemd[1]: Started cri-containerd-be5fbd2b02500cea7029afcb70b6ece5fce76e851b3d097d907ce737b1d9e8d0.scope - libcontainer container be5fbd2b02500cea7029afcb70b6ece5fce76e851b3d097d907ce737b1d9e8d0.
Feb 13 16:08:17.717348 containerd[2030]: time="2025-02-13T16:08:17.717264141Z" level=info msg="StartContainer for \"be5fbd2b02500cea7029afcb70b6ece5fce76e851b3d097d907ce737b1d9e8d0\" returns successfully"
Feb 13 16:08:23.779174 kubelet[3338]: E0213 16:08:23.778747 3338 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.223:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-223?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
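The repeated "Failed to update lease" errors mean kubelet's 10s PUT of its Lease object to the API server on 172.31.19.223:6443 keeps timing out, consistent with the control-plane containers (kube-controller-manager, kube-scheduler) dying and restarting above. If renewals keep failing long enough, the node-lifecycle controller treats the node as unhealthy. A sketch for checking how stale the lease has gone, kubeconfig access assumed:

```python
# Sketch: read the node's Lease and report the age of its last renewal.
from datetime import datetime, timezone
from kubernetes import client, config

config.load_kube_config()
coord = client.CoordinationV1Api()

lease = coord.read_namespaced_lease("ip-172-31-19-223", "kube-node-lease")
age = datetime.now(timezone.utc) - lease.spec.renew_time
print(f"lease last renewed {age.total_seconds():.0f}s ago "
      f"(lease duration {lease.spec.lease_duration_seconds}s)")
```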